A Reason For Optimism For Global Economic Growth

There are necessary factors that have to be in place in order for economies to advance over time. The good news is these factors are present today.

Myriad important political, social, economic, and healthcare issues are plaguing our globe today. But I am still long-term optimistic on the stock market.

This is because I still see so much potential in humanity. There are more than 8.0 billion individuals in the world right now, and the vast majority of people will wake up every morning wanting to improve the world and their own lot in life. This – the desire for progress – is ultimately what fuels the global economy and financial markets. Miscreants and Mother Nature will occasionally wreak havoc, but I have faith that humanity can clean it up. To me, investing in stocks is ultimately the same as having faith in the long-term positivity of humanity. I will remain long-term optimistic on stocks so long as I continue to have this faith.

What also helps me keep the faith is the existence of other factors that provide fertile soil for mankind’s desire for progress to flourish. In his excellent book, The Birth of Plenty, the polymathic William Bernstein (he’s a neurologist as well as a finance researcher) explained why the pace of global economic growth picked up noticeably starting in the early 1800s; Figure 1 below shows the unmistakable growth spurt in global GDP per capita that started, and continued on, from that period.

 Figure 1; Source: The Birth of Plenty

Bernstein wrote in his book that there are four necessary factors for economies to advance over time: 

  • Respect for property rights: Entrepreneurs and business owners must have confidence that the rewards from their efforts will not be unduly confiscated
  • Broad acceptance of the scientific method for investigating how the world works: The foundation for innovative ideas is a useful intellectual framework  
  • Easy access to capital: Without funding, even the best business ideas will be starved of fuel to take off
  • Methods for rapid and efficient transport of ideas and widgets: Great ideas and products will be unable to find their appropriate audience in time without reliable and fast transportation  

If any of these factors is missing, economic growth can’t proceed. From my vantage point, all four factors are firmly in place in large swathes of the world, especially in the USA, the world’s largest economy. This is a strong reason for optimism that global economic growth will continue powering on in the years ahead. So, the only time I will turn pessimistic on the long-term returns of stocks is when they become wildly overpriced – and I don’t think this is the case today.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

How To Lose Money Investing With Warren Buffett

Not even Warren Buffett can prevent market volatility from wreaking havoc.

Warren Buffett is one of my investing heroes. He assumed control of Berkshire Hathaway in 1965 and still remains at the helm. Through astute acquisitions and stock-picking, he has grown Berkshire into one of the most valuable companies in the world today. US$1,000 invested in Berkshire at the time Buffett came into the picture would have grown to US$37.9 million by the end of 2022.
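As a quick sanity check of what that growth implies, the figures above can be turned into a compound annual growth rate. This is a back-of-the-envelope sketch; the 58-year span (end-1964 to end-2022) is my assumption about how the measurement window is defined:

```python
# Implied compound annual growth rate (CAGR) for US$1,000 growing to
# US$37.9 million. The dollar figures come from the text above; the
# 58-year span is an assumption about the measurement window.
start_value = 1_000
end_value = 37_900_000
years = 58

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 20% per year
```

A compounding rate of roughly 20% per year, sustained for nearly six decades, is what turns US$1,000 into tens of millions.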

Despite this tremendous record, it would have still been easy for an investor to lose money while investing with Buffett. It all has to do with our very human emotions.

Table 1 shows the five highest annualised growth rates in Berkshire’s book value per share over rolling 10-year calendar-year periods from 1965 to 2022.

Table 1; Source: Berkshire annual shareholder letters

In the 1974-1983 period, Berkshire produced one of its highest annualised book value per share growth rates at 29.4%. The destination was brilliant, but the journey was anything but smooth. US$1,000 invested in Berkshire shares at the end of 1973 would be worth just US$526 (a decline of 47.4%) by the end of 1975. Over the same years, the S&P 500 was up by 1.0% including dividends. And it wasn’t as though Berkshire’s book value per share had experienced a traumatic decline – in fact, the company’s book value per share increased by a total of 28.6% in that period. Moreover, prior to the decline in Berkshire’s stock price, its book value per share was up by a healthy 16.0% per year from 1965 to 1973.

So in the first two years of one of the best decades of value-building that Buffett has led Berkshire through, and after a long period of excellent business growth, the company’s stock price fell by nearly half and also dramatically underperformed the US stock market. It is at this juncture – the end of 1975 – that it would have been easy for an investor who bought Berkshire shares before or at the end of 1973 to throw in the towel. Seeing your investment cut in half while the market barely budged is painful, even if you know that the underlying business was growing in value. It’s only human to wave the white flag.

But as an apt reflection of Ben Graham’s timeless analogy of the stock market being a voting machine in the short run but a weighing machine in the long run, Berkshire’s book value per share and stock price compounded at highly similar annual rates of 29.4% and 32.6% over the 1974-1983 timeframe (the S&P 500’s annualised return was just 10.5%). This is the unfortunate reality confronting investors who are focused on the long-term business destinations of the companies they’re invested in: The end point has the potential to be incredibly well-rewarding, but the journey can also be blisteringly painful. Bear this in mind when you invest in stocks, for you can easily lose money – even if you’re investing with Buffett – if you’re not focused on the right numbers (the business’s value) and if you do not have the right temperament.
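To make those annualised rates concrete, here is a small sketch that compounds US$1,000 for ten years at each of the three rates quoted above (illustrative arithmetic only; the actual path in between was far bumpier than these smooth rates suggest, which is the article’s point):

```python
# Compound US$1,000 for 10 years at the annualised rates quoted in the text:
# Berkshire's book value per share (29.4%), its stock price (32.6%), and
# the S&P 500's return (10.5%).
def compound(principal, annual_rate, years):
    return principal * (1 + annual_rate) ** years

book_value = compound(1_000, 0.294, 10)   # roughly US$13,200
stock_price = compound(1_000, 0.326, 10)  # roughly US$16,800
sp500 = compound(1_000, 0.105, 10)        # roughly US$2,700

for name, value in [("Book value pace", book_value),
                    ("Stock price pace", stock_price),
                    ("S&P 500 pace", sp500)]:
    print(f"{name}: US${value:,.0f}")
```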


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

Thoughts on Artificial Intelligence

Artificial intelligence has the potential to reshape the world.

The way Jeremy and I see it, artificial intelligence (AI) really leapt into the zeitgeist in late 2022 and early 2023 with the public introduction of DALL-E 2 and ChatGPT. Both are provided by OpenAI and are known as generative AI products – they are software that uses AI to generate art and text, respectively (and often at astounding quality), hence the term “generative”. Since then, developments in AI have progressed at a breathtaking pace. One striking observation I’ve made with AI is the much higher level of enthusiasm that company leaders have for the technology compared to the two other recent “hot things”, namely, blockchain/cryptocurrencies and the metaverse. Put another way, AI could be a real game changer for societies and economies.

I thought it would be useful to write down some of my current thoughts on AI and its potential impact. Putting pen to paper (or fingers to the keyboard) helps me make sense of what’s in my mind. Do note that my thoughts are fragile because the field of AI is developing rapidly and there are many unknowns at the moment. In no order of merit:

  • While companies such as OpenAI and Alphabet have released generative AI products, they have yet to release open-source versions of their foundational AI models that power the products. Meta Platforms, meanwhile, has been open sourcing its foundational AI models in earnest. During Meta’s latest earnings conference call in April this year, management explained that open sourcing allows Meta to benefit from improvements to its foundational models that are made by software developers, outside of Meta, all over the world. Around the same time, there was a purportedly leaked document from an Alphabet employee that discussed the advantages Meta has over both Alphabet and OpenAI in the development of AI by virtue of its open sourcing of its foundational models. There’s a tug-of-war now between what’s better – proprietary or open-sourced foundational AI models – but it remains to be seen which will prevail or if there will even be a clear winner. 
  • During Amazon’s latest earnings conference call (in April 2023), the company’s management team shared their observation that most companies that want to utilise AI have no interest in building their own foundational AI models because it takes tremendous amounts of time and capital. Instead, they merely want to customise foundational models with their own proprietary data. On the other hand, Tencent’s leaders commented in the company’s May 2023 earnings conference call that they see a proliferation of foundational AI models from both established companies as well as startups. I’m watching to find out which point of view is closer to the truth. I also want to point out that the frenzy to develop foundational AI models may be specific to China. Rui Ma, an astute observer of and writer on China’s technology sector, mentioned in a recent tweet that “everyone in China is building their own foundational model.” Meanwhile, the management of online travel platform Airbnb (which is based in the US, works deeply with technology, and is clearly a large company) shared in May 2023 that they have no interest in building foundational AI models – they’re only interested in designing the interface and tuning the models. 
  • A database is a platform to store data. Each piece of software requires a database to store, organise, and process data. The database has a direct impact on the software’s performance, scalability, flexibility, and reliability, so its selection is a highly strategic decision for companies. Relational databases, first developed in the 1970s, use a programming language known as Structured Query Language (SQL). They store and organise data points that are related to one another in table form (picture an Excel spreadsheet) and were useful from the 1980s to the late 1990s. But because they were built to store structured data, they began to lose relevance with the rise of the internet: they were too rigid, and were not built to support the volume, velocity, and variety of data in the internet era. This is where non-relational databases – also known as NoSQL, which stands for either “non SQL” or “not only SQL” – come into play. NoSQL databases are not constrained to relational databases’ tabular format of data storage and can work with unstructured data such as audio, video, and photos. As a result, they are more flexible and better suited for the internet age. AI appears to require different database architectures. The management of MongoDB, a company that specialises in NoSQL databases, talked about the need for a vector database to store the training results of large language models during the company’s June 2023 earnings conference call. Simply put, a vector database stores data in a way that allows users to easily find data, say, an image (or text), that is related to a given image (or text) – this feature is very useful for generative AI products. This said, MongoDB’s management also commented in the same earnings conference call that NoSQL databases will still be very useful in the AI era. I’m aware that MongoDB’s management could be biased, but I do agree with their point of view. 
Vector databases appear to be well-suited (to my untrained technical eye!) for a narrow AI-related use case, whereas NoSQL databases are useful in much broader ways. Moreover, AI is likely to increase the volume of software developed of all kinds – not just AI software – and all of it will need modern databases. MongoDB’s management also explained in a separate June 2023 conference that a typical generative AI workflow will include both vector databases and other kinds of databases (during the conference, management also revealed MongoDB’s own vector database service). I’m keeping a keen eye on how the landscape of database architectures evolves over time as AI technologies develop.
  • Keeping up with the theme of new architectures, the AI age could also usher in a new architecture for data centres. This new architecture is named accelerated computing by Nvidia. In the traditional architecture of data centres, CPUs (central processing units) are the main source of computing power. In accelerated computing, the entire data centre – consisting of GPUs (graphics processing units), CPUs, DPUs (data processing units), data switches, networking hardware, and more – provides the computing power. Put another way, instead of thinking about the chip as the computer, the data centre becomes the computer under the accelerated computing framework. During Nvidia’s May 2023 earnings conference call, management shared that the company had been working on accelerated computing for many years but it was the introduction of generative AI – with its massive computing requirements – that “triggered a killer app” for this new data centre architecture. The economic opportunity could be immense. Nvidia’s management estimated that US$1 trillion of data centre infrastructure was installed over the last four years and nearly all of it was based on the traditional CPU-focused architecture. But as generative AI gains importance in society, data centre infrastructure would need to shift heavily towards the accelerated computing variety, according to Nvidia’s management.
  • And keeping with the theme of something new, AI could also bring about novel and better consumer experiences. Airbnb’s co-founder and CEO, Brian Chesky, laid out a tantalising view on this potential future during the company’s latest May 2023 earnings conference call. Chesky mentioned that search queries in the travel context are matching questions and the answers depend on who the questioner is and what his/her preferences are. With the help of AI, Airbnb could build “the ultimate AI concierge that could understand you,” thereby providing a highly personalised travel experience. Meanwhile, in a recent interview with Wired, Microsoft’s CEO Satya Nadella shared his dream that “every one of Earth’s 8 billion people can have an AI tutor, an AI doctor, a programmer, maybe a consultant!” 
  • Embedded AI is the concept of AI software that is built into a device itself. This device can be a robot. And if robots with embedded AI can be mass-produced, the economic implications could be tremendous, beyond the impact that AI could have as just software. Tesla is perhaps the most high profile company in the world today that is developing robots with embedded AI. The company’s goal for the Tesla Bot (also known as Optimus) is for it to be “a general purpose, bi-pedal, autonomous humanoid robot capable of performing unsafe, repetitive or boring tasks.” There are other important companies that are working on embedded AI. For example, earlier this year, Nvidia acquired OmniML, a startup whose software shrinks AI models, making it easier for the models to be run on devices rather than on the cloud.
  • Currently, humans are behind the content that the foundational AI models underpinning ChatGPT and other generative AI products are trained on. But according to a recently published paper from UK and Canadian researchers titled The Curse of Recursion: Training on Generated Data Makes Models Forget, the quality of foundational AI models degrades significantly as the proportion of content they are trained on shifts toward an AI-generated corpus. This could be a serious problem in the future if there’s an explosion in the volume of generative AI content, which seems likely; for context, Adobe’s management shared in mid-June this year that the company’s generative AI feature, Firefly, had already powered 500 million content-generations since its launch in March 2023. The degradation, termed “model collapse” by the researchers, happens because content created by humans is a more accurate reflection of the world, since it contains improbable data. Even after training on man-made data, AI models tend to generate content that understates the improbable data. If subsequent AI models train primarily on AI-generated content, the end result is that the improbable data become even less represented. The researchers describe model collapse as “a degenerative process whereby, over time, models forget the true underlying data distribution, even in the absence of a shift in the distribution over time.” Model collapse could have serious societal consequences; one of the researchers, Ilia Shumailov, told VentureBeat that “there are many other aspects that will lead to more serious implications, such as discrimination based on gender, ethnicity or other sensitive attributes.” Ross Anderson, another author of the paper, wrote in a blog post that with model collapse, advantages could accrue to companies that “control access to human interfaces at scale” or that have already trained AI models by scraping the web when human-generated content was still overwhelmingly dominant. 
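Circling back to the vector database point above: for intuition on what “finding related data” means, a vector database stores embeddings (lists of numbers representing items) and retrieves the stored items whose vectors are closest to a query vector. Below is a minimal sketch with made-up three-dimensional embeddings and exact cosine similarity; real vector databases use learned embeddings with hundreds or thousands of dimensions and approximate nearest-neighbour indexes, so treat this purely as illustration:

```python
import math

# Toy vector search. The embeddings and labels below are invented for
# illustration; real systems derive embeddings from trained AI models.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

store = {
    "golden retriever": [0.90, 0.80, 0.10],
    "labrador":         [0.85, 0.75, 0.15],
    "sports car":       [0.10, 0.20, 0.95],
}

query = [0.88, 0.77, 0.12]  # imagine this encodes a query like "friendly dog"
best = max(store, key=lambda k: cosine_similarity(store[k], query))
print(best)
```

The query vector lands near the two dog-like embeddings and far from the car-like one, so the search surfaces the most related item without any explicit tagging.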

There’s one other fragile thought I have about AI that I think is more important than what I’ve shared above, and it is related to the concept of emergence. Emergence is a natural phenomenon where sophisticated outcomes spontaneously “emerge” from the interactions of agents in a system, even when these agents were not instructed to produce those outcomes. The following passages from the book Complexity: The Emerging Science at the Edge of Order and Chaos by Mitch Waldrop help shed some light on emergence:

“These agents might be molecules or neurons or species or consumers or even corporations. But whatever their nature, the agents were constantly organizing and reorganizing themselves into larger structures through the clash of mutual accommodation and mutual rivalry. Thus, molecules would form cells, neurons would form brains, species would form ecosystems, consumers and corporations would form economies, and so on. At each level, new emergent structures would form and engage in new emergent behaviors. Complexity, in other words, was really a science of emergence… 

…Cells make tissues, tissues make organs, organs make organisms, organisms make ecosystems – on and on. Indeed, thought Holland, that’s what this business of “emergence” was all about: building blocks at one level combining into new building blocks at a higher level. It seemed to be one of the fundamental organizing principles of the world. It certainly seemed to appear in every complex, adaptive system that you looked at…

…Arthur was fascinated by the thing. Reynolds had billed the program as an attempt to capture the essence of flocking behavior in birds, or herding behavior in sheep, or schooling behavior in fish. And as far as Arthur could tell, he had succeeded beautifully. Reynolds’ basic idea was to place a large collection of autonomous, birdlike agents—“boids”—into an onscreen environment full of walls and obstacles. Each boid followed three simple rules of behavior: 

1. It tried to maintain a minimum distance from other objects in the environment, including other boids.

2. It tried to match velocities with boids in its neighborhood.

3. It tried to move toward the perceived center of mass of boids in its neighborhood.

What was striking about these rules was that none of them said, “Form a flock.” Quite the opposite: the rules were entirely local, referring only to what an individual boid could see and do in its own vicinity. If a flock was going to form at all, it would have to do so from the bottom up, as an emergent phenomenon. And yet flocks did form, every time. Reynolds could start his simulation with boids scattered around the computer screen completely at random, and they would spontaneously collect themselves into a flock that could fly around obstacles in a very fluid and natural manner. Sometimes the flock would even break into subflocks that flowed around both sides of an obstacle, rejoining on the other side as if the boids had planned it all along. In one of the runs, in fact, a boid accidentally hit a pole, fluttered around for a moment as though stunned and lost—then darted forward to rejoin the flock as it moved on.”
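The three rules in the quote translate almost directly into code. Below is my own minimal rendition of the idea (2D, no walls or obstacles, hand-picked weights and starting positions), so it is a sketch rather than a faithful reproduction of Reynolds’ program. No rule says “form a flock”, yet after a few steps the boids have drifted toward one another:

```python
# Minimal boids sketch: each boid follows only the three local rules from
# the quote above (separation, velocity matching, cohesion). All weights,
# radii, and starting positions are arbitrary choices for illustration.
SEPARATION_DIST = 1.0   # rule 1: keep a minimum distance from others
MATCH_WEIGHT = 0.05     # rule 2: match neighbours' velocities
COHESION_WEIGHT = 0.01  # rule 3: steer toward neighbours' centre of mass

def step(boids):
    new = []
    for i, (pos, vel) in enumerate(boids):
        others = [b for j, b in enumerate(boids) if j != i]
        # Rule 3: steer toward the centre of mass of the other boids.
        cx = sum(p[0] for p, _ in others) / len(others)
        cy = sum(p[1] for p, _ in others) / len(others)
        ax = COHESION_WEIGHT * (cx - pos[0])
        ay = COHESION_WEIGHT * (cy - pos[1])
        # Rule 2: match the average velocity of the other boids.
        avx = sum(v[0] for _, v in others) / len(others)
        avy = sum(v[1] for _, v in others) / len(others)
        ax += MATCH_WEIGHT * (avx - vel[0])
        ay += MATCH_WEIGHT * (avy - vel[1])
        # Rule 1: move away from any boid that is too close.
        for p, _ in others:
            dx, dy = pos[0] - p[0], pos[1] - p[1]
            if (dx * dx + dy * dy) ** 0.5 < SEPARATION_DIST:
                ax += 0.05 * dx
                ay += 0.05 * dy
        vel = (vel[0] + ax, vel[1] + ay)
        new.append(((pos[0] + vel[0], pos[1] + vel[1]), vel))
    return new

def spread(boids):
    # Average distance of the boids from their own centroid.
    cx = sum(p[0] for p, _ in boids) / len(boids)
    cy = sum(p[1] for p, _ in boids) / len(boids)
    return sum(((p[0] - cx) ** 2 + (p[1] - cy) ** 2) ** 0.5
               for p, _ in boids) / len(boids)

# Four boids at the corners of a square, all starting at rest.
boids = [((0.0, 0.0), (0.0, 0.0)), ((10.0, 0.0), (0.0, 0.0)),
         ((0.0, 10.0), (0.0, 0.0)), ((10.0, 10.0), (0.0, 0.0))]
before = spread(boids)
for _ in range(20):
    boids = step(boids)
print(spread(boids) < before)  # the group has pulled together
```

Nothing in `step` mentions a flock; grouping behaviour falls out of the three local rules, which is the “bottom up” emergence the quote describes.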

In our view, the concept of emergence is important in AI because at least some of the capabilities of ChatGPT seen today were not explicitly programmed for – they emerged. Satya Nadella said in his aforementioned interview with Wired that “when we went from GPT 2.5 to 3, we all started seeing these emergent capabilities.” Nadella was referring to the foundational AI models built by OpenAI. One of the key differences between GPT 2.5 and GPT 3 is that the former contains 1.5 billion parameters, whereas the latter contains 175 billion – more than 100 times as many. The basic computational unit within an AI model is known as a node, and parameters are a measure of the strength of a connection between two nodes. The number of parameters can thus be loosely associated with the number of nodes, as well as the number of connections between nodes, in an AI model. With GPT 3’s much higher parameter count, the number of nodes and the number of connections (or interactions) between nodes in GPT 3 far outweigh those of GPT 2.5. Nadella’s observation matches those of David Ha, an expert on AI whose most recent role was head of research at Stability AI. During a February 2023 podcast hosted by investor Jim O’Shaughnessy, Ha shared the following (emphasis is mine):

Then the interesting thing is, sure, you can train things on prediction or even things like translation. If you have paired English to French samples, you can do that. But what if you train a model to predict itself without any labels? So that’s really interesting because one of the limitations we have is labeling data is a daunting task and it requires a lot of thought, but self-labeling is free. Like anything on the internet, the label is itself, right? So what you can do is there’s two broad types of models that are popular now. There’s language models that generate sequences of data and there’s things like image models, Stable Diffusion you generate an image. These operate on a very similar principle, but for things like language model, you can have a large corpus of text on the internet. And the interesting thing here is all you need to do is train the model to simply predict what the next character is going to be or what the next word is going to be, predict the probability distribution of the next word.

And such a very simple objective as you scale the model, as you scale the size and the number of neurons, you get interesting emerging capabilities as well. So before, maybe back in 2015, ’16, when I was playing around with language models, you can feed it, auto Shakespeare, and it will blab out something that sounds like Shakespeare.

But in the next few years, once people scaled up the number of parameters from 5 million, to a hundred million, to a billion parameters, to a hundred billion parameters, this simple objective, you can now interact with the model. You can actually feed in, “This is what I’m going to say,” and the model takes that as an input as if it said that and predict the next character and give you some feedback on that. And I think this is very interesting, because this is an emergent phenomenon. We didn’t design the model to have these chat functions. It’s just like this capability has emerged from scale.

And the same for image side as well. I think for images, there are data sets that will map the description of that image to that image itself and text to image models can do things like go from a text input into some representation of that text input and its objective is to generate an image that encapsulates what the text prompt is. And once we have enough images, I remember when I started, everyone was just generating tiny images of 10 classes of cats, dogs, airplanes, cars, digits and so on. And they’re not very general. You can only generate so much.

But once you have a large enough data distribution, you can start generating novel things like for example, a Formula 1 race car that looks like a strawberry and it’ll do that. This understanding of concepts are emergent. So I think that’s what I want to get at. You start off with very simple statistical models, but as you increase the scale of the model and you keep the objectives quite simple, you get these emergent capabilities that were not planned but simply emerge from training on that objective.
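Ha’s “predict the probability distribution of the next word” objective can be made concrete with a toy character-level model. The bigram counter below is vastly simpler than a real language model (no neural network, and crucially none of the scale from which capabilities emerge), but the training objective has the same shape:

```python
from collections import Counter, defaultdict

# Toy character-level "language model": count which character follows which,
# then turn the counts into a probability distribution over the next
# character. Models like GPT-3 learn this kind of mapping with billions of
# parameters instead of a lookup table.
def train_bigram(text):
    counts = defaultdict(Counter)
    for current, nxt in zip(text, text[1:]):
        counts[current][nxt] += 1
    return counts

def next_char_distribution(counts, char):
    total = sum(counts[char].values())
    return {c: n / total for c, n in counts[char].items()}

model = train_bigram("the cat sat on the mat")
print(next_char_distribution(model, "t"))
```

In this tiny corpus, “t” is followed by “h” half the time and a space the other half, so the model predicts each with probability 0.5. Scale the same objective up by many orders of magnitude and, as Ha describes, capabilities appear that nobody designed in.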

Emergence occurred in AI models as their number of parameters (i.e. the number of interactions between nodes) grew. This is a crucial point because emergence requires a certain amount of complexity in the interactions between agents, which can only happen when there are large numbers of agents and of interactions between them. It’s highly likely, in my view, that more emergent phenomena will develop as AI models become even more powerful over time via an increase in their parameters. It’s also difficult – perhaps impossible – to predict what these emergent phenomena could be, as specific emergent phenomena in any complex system are inherently unpredictable. So, any new emergent phenomenon from AI that springs up in the future could land anywhere on the spectrum from wildly positive to destructive for society. Let’s see!


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet, Amazon, Meta Platforms, Microsoft, MongoDB, Tencent, and Tesla. Holdings are subject to change at any time.

A Possible Scientific Explanation For Why Top-Down Control of Economies Is A Bad Idea

Economies are complex systems that exhibit unpredictable emergent behaviours.

Mitch Waldrop’s Complexity: The Emerging Science at the Edge of Order and Chaos, published in 1992, is one of the best books I’ve read in recent times. It describes the science behind complex adaptive systems and the work academics from numerous disciplines have done on the concept of emergence. I also think it contains a kernel of insight – and a possible scientific explanation – on why top-down control of economies is a bad idea.

Complexity and emergence

But first, what are complex adaptive systems? The following passages from Waldrop’s book are a neat summary of what they are:

“For example, every one of these questions refers to a system that is complex, in the sense that a great many independent agents are interacting with each other in a great many ways. Think of the quadrillions of chemically reacting proteins, lipids, and nucleic acids that make up a living cell, or the billions of interconnected neurons that make up the brain, or the millions of mutually interdependent individuals who make up a human society.

In every case, moreover, the very richness of these interactions allows the system as a whole to undergo spontaneous self-organization. Thus, people trying to satisfy their material needs unconsciously organize themselves into an economy through myriad individual acts of buying and selling; it happens without anyone being in charge or consciously planning it. The genes in a developing embryo organize themselves in one way to make a liver cell and in another way to make a muscle cell… In every case, groups of agents seeking mutual accommodation and self-consistency somehow manage to transcend themselves, acquiring collective properties such as life, thought, and purpose that they might never have possessed individually.

Furthermore, these complex, self-organizing systems are adaptive, in that they don’t just passively respond to events the way a rock might roll around in an earthquake. They actively try to turn whatever happens to their advantage. Thus, the human brain constantly organizes and reorganizes its billions of neural connections so as to learn from experience (sometimes, anyway)… the marketplace responds to changing tastes and lifestyles, immigration, technological developments, shifts in the price of raw materials, and a host of other factors. 

Finally, every one of these complex, self-organizing, adaptive systems possesses a kind of dynamism that makes them qualitatively different from static objects such as computer chips or snowflakes, which are merely complicated. Complex systems are more spontaneous, more disorderly, more alive than that. At the same time, however, their peculiar dynamism is also a far cry from the weirdly unpredictable gyrations known as chaos. In the past two decades, chaos theory has shaken science to its foundations with the realization that very simple dynamical rules can give rise to extraordinarily intricate behavior; witness the endlessly detailed beauty of fractals, or the foaming turbulence of a river. And yet chaos by itself doesn’t explain the structure, the coherence, the self-organizing cohesiveness of complex systems.

Instead, all these complex systems have somehow acquired the ability to bring order and chaos into a special kind of balance. This balance point – often called the edge of chaos – is where the components of a system never quite lock into place, and yet never quite dissolve into turbulence, either. The edge of chaos is where life has enough stability to sustain itself and enough creativity to deserve the name of life. The edge of chaos is where new ideas and innovative genotypes are forever nibbling away at the edges of the status quo, and where even the most entrenched old guard will eventually be overthrown.”

Put simply, a complex adaptive system comprises many agents, each of which may be following only simple rules. But through the interactions between the agents, sophisticated outcomes spontaneously “emerge”, even when the agents were not instructed to produce these outcomes. This phenomenon is known as emergence. Waldrop’s book has passages that help shed more light on emergence, and also has an illuminating example of how an emergent behaviour takes shape:

“These agents might be molecules or neurons or species or consumers or even corporations. But whatever their nature, the agents were constantly organizing and reorganizing themselves into larger structures through the clash of mutual accommodation and mutual rivalry. Thus, molecules would form cells, neurons would form brains, species would form ecosystems, consumers and corporations would form economies, and so on. At each level, new emergent structures would form and engage in new emergent behaviors. Complexity, in other words, was really a science of emergence… 

…Cells make tissues, tissues make organs, organs make organisms, organisms make ecosystems – on and on. Indeed, thought Holland, that’s what this business of “emergence” was all about: building blocks at one level combining into new building blocks at a higher level. It seemed to be one of the fundamental organizing principles of the world. It certainly seemed to appear in every complex, adaptive system that you looked at…

…Arthur was fascinated by the thing. Reynolds had billed the program as an attempt to capture the essence of flocking behavior in birds, or herding behavior in sheep, or schooling behavior in fish. And as far as Arthur could tell, he had succeeded beautifully. Reynolds’ basic idea was to place a large collection of autonomous, birdlike agents—“boids”—into an onscreen environment full of walls and obstacles. Each boid followed three simple rules of behavior: 

1. It tried to maintain a minimum distance from other objects in the environment, including other boids.

2. It tried to match velocities with boids in its neighborhood.

3. It tried to move toward the perceived center of mass of boids in its neighborhood.

What was striking about these rules was that none of them said, “Form a flock.” Quite the opposite: the rules were entirely local, referring only to what an individual boid could see and do in its own vicinity. If a flock was going to form at all, it would have to do so from the bottom up, as an emergent phenomenon. And yet flocks did form, every time. Reynolds could start his simulation with boids scattered around the computer screen completely at random, and they would spontaneously collect themselves into a flock that could fly around obstacles in a very fluid and natural manner. Sometimes the flock would even break into subflocks that flowed around both sides of an obstacle, rejoining on the other side as if the boids had planned it all along. In one of the runs, in fact, a boid accidentally hit a pole, fluttered around for a moment as though stunned and lost—then darted forward to rejoin the flock as it moved on.”
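The three local rules above are simple enough to sketch in code. The following is a minimal, illustrative simulation of my own making – the parameter values, data layout, and names are assumptions, not Reynolds’ actual program – but it shows how each boid acts only on what it can “see” in its own neighbourhood, with no rule anywhere that says “form a flock”:

```python
import math

SEPARATION_DIST = 2.0   # rule 1: keep a minimum distance from other boids
NEIGHBOUR_DIST = 10.0   # rules 2 & 3: who counts as "in the neighbourhood"

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def step(boids):
    """Advance every boid one tick. Each boid is a dict with 'pos' and
    'vel' as (x, y) tuples. All three rules are purely local."""
    new_boids = []
    for b in boids:
        neighbours = [o for o in boids
                      if o is not b and dist(b['pos'], o['pos']) < NEIGHBOUR_DIST]
        ax = ay = 0.0
        if neighbours:
            # Rule 1: separation -- steer away from boids that are too close.
            for o in neighbours:
                if dist(b['pos'], o['pos']) < SEPARATION_DIST:
                    ax += b['pos'][0] - o['pos'][0]
                    ay += b['pos'][1] - o['pos'][1]
            # Rule 2: alignment -- nudge towards the neighbourhood's
            # average velocity.
            avg_vx = sum(o['vel'][0] for o in neighbours) / len(neighbours)
            avg_vy = sum(o['vel'][1] for o in neighbours) / len(neighbours)
            ax += (avg_vx - b['vel'][0]) * 0.1
            ay += (avg_vy - b['vel'][1]) * 0.1
            # Rule 3: cohesion -- drift towards the perceived centre of mass.
            cx = sum(o['pos'][0] for o in neighbours) / len(neighbours)
            cy = sum(o['pos'][1] for o in neighbours) / len(neighbours)
            ax += (cx - b['pos'][0]) * 0.01
            ay += (cy - b['pos'][1]) * 0.01
        vx, vy = b['vel'][0] + ax, b['vel'][1] + ay
        new_boids.append({'pos': (b['pos'][0] + vx, b['pos'][1] + vy),
                          'vel': (vx, vy)})
    return new_boids
```

Run `step` repeatedly on a randomly scattered population and the velocities of nearby boids converge while the group coheres – flocking emerges from the bottom up, exactly as the passage describes.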

Emergence in the economy

In the first series of excerpts I shared from Waldrop’s book, it was hinted that an economy is a complex adaptive system. But this is not always true: emergence is unlikely to happen in an economy with a very simple make-up. It is far more likely in an economy whose depth and variety of economic activity have increased over time. Here’s a relevant passage from Waldrop’s book:

“In fact, he argued, once you get beyond a certain threshold of complexity you can expect a kind of phase transition analogous to the ones he had found in his autocatalytic sets. Below that level of complexity you would find countries dependent upon just a few major industries, and their economies would tend to be fragile and stagnant. In that case, it wouldn’t matter how much investment got poured into the country. “If all you do is produce bananas, nothing will happen except that you produce more bananas.” But if a country ever managed to diversify and increase its complexity above the critical point, then you would expect it to undergo an explosive increase in growth and innovation – what some economists have called an “economic takeoff.””

This brings me to the topic behind the title and introduction of this article: Why top-down control of economies is a bad idea. An important aspect of emergence is that specific emergent phenomena in any particular complex adaptive system are inherently unpredictable. This applies to economies too. Given everything above, I think it stands to reason that any government that aims to exert top-down control over an economy that has grown in complexity would likely do a poor job. How can you control something well if you’re unable to predict its behaviour? 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

More Thoughts From American Technology Companies On AI

A vast collection of notable quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies.

Nearly a month ago, I published What American Technology Companies Are Thinking About AI. In it, I shared commentary from the earnings conference calls of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industries and the business world writ large. 

A few more technology companies I’m watching hosted earnings conference calls after the article was published. The leaders of these companies also had insights on AI that I think would be useful to share. Here they are, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe’s management will be building foundation models as well as deploying generative AI as a co-pilot for users

Our generative AI strategy focuses on data, models and interfaces. Our rich datasets across creativity, documents and customer experiences enable us to train models on the highest quality assets. We will build foundation models in the categories where we have deep domain expertise, including imaging, vector, video, documents and marketing. We are bringing generative AI to life as a co-pilot across our incredible array of interfaces to deliver magic and productivity gains for a broader set of customers. 

Adobe’s management thinks its generative AI feature, Firefly, has multiple monetisation opportunities, but will only introduce specific pricing later this year; the current priority is broad adoption

Our generative AI offerings represent additional customer value as well as multiple new monetization opportunities. First, Firefly will be available both as a stand-alone freemium offering for consumers as well as an enterprise offering announced last week. Second, copilot generative AI functionality within our flagship applications will drive higher ARPUs and retention. Third, subscription credit packs will be made available for customers who need to generate greater amounts of content. Fourth, we will offer developer communities access to Firefly APIs and allow enterprises the ability to create exclusive custom models with their proprietary content. And finally, the industry partnerships as well as Firefly represent exciting new top-of-funnel acquisition opportunities for Express, Creative Cloud and Document Cloud. Our priority for now is to get Firefly broadly adopted, and we will introduce specific pricing later this year.

Adobe is seeing outstanding customer demand for generative AI features

We’re really excited, if you can’t tell on the call, about Firefly and what this represents. The early customer and community response has been absolutely exhilarating for all of us. You heard us talk about over 0.5 billion assets that have already been generated. Generations from Photoshop were 80x higher than we had originally projected going into the beta and obviously, feel really good about both the quality of the content being created and also the ability to scale the product to support that

Adobe has built Firefly to be both commercially as well as socially safe for use

Third is that, and perhaps most importantly, we’ve also been able to — because of the way we share and are transparent about where we get our content, we can tell customers that their content generated with Firefly is commercially safe for use. Copyrights are not being violated. Diversity and inclusion is front and center. Harmful imagery is not being generated.

Adobe’s management believes that (1) marketing will become increasingly personalised, (2) the personalisation has to be done at scale, and (3) Adobe can help customers achieve the personalisation with the data that it has

I think if you look at Express and Firefly and also the Sensei GenAI services that we announced for Digital Experience, comes at a time when marketing is going through a big shift from sort of mass marketing to personalized marketing at scale. And for the personalization at scale, everything has to be personalized, whether it’s content or audiences, customer journeys. And that’s the unique advantage we have. We have the data within the audience — the Adobe Experience Platform with the real-time customer profiles. We then have the models that we’re working with like Firefly. And then we have the interfaces through the apps like Adobe Campaign, Adobe Experience Manager and so on.So we can put all of that together in a manner that’s really consistent with the data governance that people — that customers expect so that their data is used only in their context and use that to do personalized marketing at scale. So it really fits very well together.

Adobe’s management believes that content production will increase significantly in the next few years because of AI, and this will lead to higher demand for software seats

And we’re sitting at a moment where companies are telling us that there’s a 5x increase in content production coming out in the next few — next couple of years. And you see a host of new media types coming out. And we see the opportunity here for both seat expansion as a result of this and also because of the value we’re adding into our products themselves, increase in ARPU as well.

DocuSign (NASDAQ: DOCU)

DocuSign’s management believes that generative AI can transform all aspects of the agreement workflow

In brief, we believe AI unlocks the true potential of the intelligent agreement category. We already have a strong track record, leveraging sophisticated AI models, having built and shipped solutions based on earlier generations of AI. Generative AI can transform all aspects of agreement workflow, and we are uniquely positioned to capitalize on this opportunity. As an early example, we recently introduced a new limited availability feature agreement summarization. This new feature, which is enabled by our integration with Microsoft’s Azure Open AI service and tuned with our own proprietary agreement model uses AI to summarize and documents critical components giving signers a clear grasp of the most relevant information within their agreement, while respecting data security and privacy. 

Some possible future launches of generative AI features by DocuSign include search capabilities across agreement libraries and edits of documents based on industry best practices

Future launches will include search across customer agreement libraries, extractions from agreements and proposed language and edits based on customer, industry and universal best practices.

DocuSign has been working with AI for several years, but management sees the introduction of generative AI as a great opportunity to drive significant improvements to the company’s software products

I’d add to that, that I think the biggest change in our road map beyond that clear focus and articulation on agreement workflow is really the advent of generative AI. We’ve been working on AI for several years. As you know, we have products like Insights that leverage earlier generations of AI models. But given the enormous change there, that’s a fantastic opportunity to really unlock the category. And so, we’re investing very heavily there. We released some new products, and we’ll release more next week at Momentum, but I’m sure we’ll talk more about AI during the call. 

DocuSign’s management sees AI technology as the biggest long-term driver of the company’s growth

So, I think we — overall, I would say, product innovation is going to be the biggest driver and unlocker of our medium- to long-term growth. We do believe that we have very credible low-hanging fruit from better execution on our self-serve and product-backed growth motion. And so, that’s a top priority to drive greater efficiency in the near to medium term. I think the AI impact is perhaps the biggest in the long term. And we are starting to ship products, as I alluded to, and we’ll announce more next week. But in terms of its overall impact on the business, I think it’s still behind the other two in the — in the near to medium term. But in terms of the long-term potential of our category of agreement workflow, I think it’s a massive unlock and a fantastic opportunity for DocuSign.

DocuSign’s management is currently monetising AI by bundling AI features with existing features in some cases, and charging for AI features as add-ons in others; management needs to learn more about how customers are using AI features when it comes to monetisation

In terms of monetization, I expect AI features to be both bundled as part of our baseline products, strengthening their functionality and value, as I suggested earlier. And in some cases, packaged as a separately charged add-on. We do both today. So, if you take our Insights product, which is really our AI-driven analytics product for CLM, we both have a stand-alone SKU. It’s sold separately as well as a premium bundle. I think, we’re going to need to learn a little bit more about how customers want to use this and what the key value drivers are before we finalize how we price the different features, but certainly mindful of wanting to capture the — deliver the most value and capture the most value for DocuSign, as we price it.

MongoDB (NASDAQ: MDB)

MongoDB’s management believes that AI will increase software development velocity and will enable more companies to launch more apps, leading to the speed of software development being even more important for companies

We believe AI will be the next frontier of development productivity — developer productivity and will likely lead to a step function increase in software development velocity. We know that most organizations have a huge backlog of projects they would like to take on but they just don’t have the development capacity to pursue. As developer productivity meaningfully improves, companies can dramatically increase their software ambitions and rapidly launch many more applications to transform their business. Consequently, the importance of development velocity to remain competitive will be even more pronounced. Said another way, if you are slow, then you are obsolete.

Companies are increasingly choosing MongoDB’s Atlas database service as the platform to build and run new AI apps

We are observing an emerging trend where customers are increasingly choosing Atlas as the platform to build and run new AI applications. For example, in Q1, more than 200 of the new Atlas customers were AI or ML companies. Well-financed start-ups like Hugging Face, [ Tekion ], One AI and [ Neura ] are examples of companies using MongoDB to help deliver the next wave of AI-powered applications to their customers.

MongoDB’s management believes that apps on legacy platforms will be replatformed to be AI-enabled, and those apps will need to migrate to MongoDB

We also believe that many existing applications will be replatformed to be AI enabled. This will be a compelling reason for customers to migrate from legacy technologies to MongoDB.

MongoDB’s management believes that in an increasingly AI-driven world, (1) AI will lead to more apps and more data storage demand for MongoDB; (2) developers will want to build on modern databases like MongoDB; and (3) MongoDB supports a wide variety of use cases, which makes it even more attractive

First, we expect MongoDB to be a net beneficiary of AI, the reason being is that, as developer productivity increases, the volume of new applications will increase, which by definition will create new apps, which means more data stores, so driving more demand for MongoDB. Second, developers will be attracted to modern platforms like MongoDB because that’s the place where they can build these modern next-generation applications. And third, because of the breadth of our platform and the wide variety of use cases we support, that becomes even more of an impetus to use MongoDB. 

MongoDB’s management knows that AI requires vector databases, but thinks that AI still needs an operational datastore, which is where MongoDB excels

The results that come from training and LLM against content are known as vector embeddings. And so content is assigned vectors and the vectors are stored in a database. These databases then facilitate searches when users query large language model with the appropriate vector embeddings, and it’s essentially how a user search is matched to content from an LLM. The key point, though, is that you still need an operational data store to store the actual data. And there are some adjunct solutions out there that have come out that are bespoke solutions but are not tied to actually where the data resides, so it’s not the best developer experience. And I believe that, over time, people will gravitate to a more seamless and integrated platform that offers a compelling user experience…

..Again, for generating content that’s accurate in a performant way, you do need to use vector embeddings which are stored in a database. And you — but you also need to store the data and you want to be able to offer a very compelling and seamless developer experience and be able to offer that as part of a broader platform. I think what you’ve seen, Brent, is that there’s been other trends, things like graph and time series, where a lot of people are very excited about these kind of bespoke single-function technologies, but over time, they got subsumed into a broader platform because it didn’t make sense for customers to have all these bespoke solutions which added so much complexity to their data architecture. 
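The retrieval pattern described in the quote – content is embedded as vectors, the vectors are stored alongside the data, and a user’s query is matched to the closest content – can be sketched with a toy example. Everything below is purely illustrative: the documents and embedding vectors are hand-made stand-ins, whereas a real system would obtain embeddings from an embedding model and store them in a database:

```python
import math

# Documents live in an operational store together with their vector
# embeddings (the three-dimensional vectors here are made up for
# illustration; real embeddings have hundreds or thousands of dimensions).
store = [
    {"doc": "MongoDB quarterly revenue report", "vec": [0.9, 0.1, 0.0]},
    {"doc": "Guide to flocking simulations",    "vec": [0.0, 0.2, 0.9]},
    {"doc": "Atlas cluster setup notes",        "vec": [0.7, 0.6, 0.1]},
]

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest(query_vec, records):
    """Return the record whose embedding is most similar to the query."""
    return max(records, key=lambda r: cosine(query_vec, r["vec"]))

# A query vector (again hand-made) retrieves the best-matching document,
# which can then be handed to an LLM as grounding context.
hit = nearest([1.0, 0.0, 0.0], store)
```

The point MongoDB’s management is making maps onto this sketch directly: the similarity search is only half the job – the actual documents still have to live somewhere, which is why they argue the vector index belongs inside the operational datastore rather than in a bespoke, separate system.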

Okta (NASDAQ: OKTA)

Okta has been working with AI for a number of years and some of its products contain AI features

So when we look at our own business, one of our huge — we have AI in our products, and we have for a few years, whether it’s ThreatInsight on the workforce side or Security Center on the customer identity side, which look at our billions of authentications and use AI to make sure we defend other customers from like similar types of threats that have been prosecuted against various customers on the platform. 

Okta’s management thinks AI could be really useful for helping users to auto-configure the setup of Okta

One of the ideas that we’re working on that might be a typical use case of how someone like us could use AI is configuring Okta, setting the policy up for Okta across hundreds of applications on the workforce side or 10 or 20 applications on the customer identity side with various access policies and rules about who can access them and how they access them. It’d be pretty complicated to set up, but we’ve actually been prototyping using AI to auto-generate that configuration.

Okta’s management believes that AI will lead to higher demand for identity-use cases for the company

And then the other one we’re excited about is if you zoom out and you think this is a huge platform shift, it’s the next generation of technology. So that means that there’s going to be tons of new applications built with AI. It means that there’s going to be tons of new industries created and industries changed. And there’s going to be a login for all these things. You’re going to need to log on to these experiences. Sometimes it’s going to be machines. Sometimes it’s going to be users. That’s an identity problem, and we can help with that. So in a sense, we’re really going to be selling picks and shovels to the gold miners. 

Salesforce (NYSE: CRM)

Salesforce recently launched EinsteinGPT, a form of generative AI for customer relationship management

Last quarter, I told you of how our AI team is getting ready to launch EinsteinGPT, the world’s first generative AI for CRM. At Trailhead DX in March in front of thousands of trailblazers here in San Francisco, that’s exactly what we did. 

Salesforce announced SlackGPT, an AI assistant for users of the communication software Slack; management also believes that unleashing large language models within Slack can make the software incredibly valuable for users

We saw more of the incredible work of our AI team at our New York City World Tour this month when we demonstrated Slack GPT. Slack is a secure treasure trove of company data that generative AI can use to give every company and every employee their own powerful AI assistant, helping every employee be more productive in transforming the future of work. SlackGPT can leverage the power of generative AI, deliver instant conversation summaries, research tools and writing assistance directly in Slack. And you may never need to leave Slack to get a question answered. Slack is the perfect conversational interface for working with LLMs, which is why so many AI companies are Slack first and why OpenAI’s ChatGPT and Anthropic’s Claude can now use Slack as a native interface…

…I think folks know, I have — my neighbor Sam Altman is the CEO of OpenAI, and I went over to his house for dinner, and it was a great conversation as it always is with him. And he had — he said, “Oh, just hold on one second, Marc, I want to get my laptop.” And he brought his laptop out and gave me some demonstrations of advanced technologies that are not appropriate for the call. But I did notice that there was only one application that he was using on his laptop and that was Slack. And the powerful part about that was I realized that everything from day 1 at OpenAI have been in Slack. And as we kind of brainstorm and talked about — of course, he was paying a Slack user fee and on and on, and he’s a great Slack customer. We’ve done a video about them, it’s on YouTube. But I realize that taking an LLM and embedding it inside Slack, well, maybe Slack will wake up. I mean there is so much data in Slack, I wonder if it could tell him what are the opportunities in OpenAI. What are the conflicts, what are the conversations, what should be his prioritization. What is the big product that got repressed that he never knew about.

And I realized in my own version of Slack at Salesforce, I have over 95 million Slack messages, and these are all open messages. I’m not talking about closed messaging or direct messaging or secure messaging between employees. I’m talking about the open framework that’s going on inside Salesforce and with so many of our customers. And then I realized, wow, I think Slack could wake up, and it could become a tremendous asset with an LLM consuming all that data and driving it. And then, of course, the idea is that is a new version of Slack. Not only do you have the free version of Slack, not only do you have the per user version of Slack, but then you have the additional LLM version of Slack. 

Salesforce is working with luxury brand Gucci to augment its client advisers by building AI chat technology

A great example already deploying this technology is Gucci. We’re working with them to augment their client advisers by building AI chat technology that creates a Gucci-fied tone of service, with an incredible new voice, amplifying brand storytelling and incremental sales as well. It’s an incredibly exciting vision for generative AI to transform what was customer service into customer service, marketing and sales, all through augmenting Gucci employee capabilities using this amazing generative AI.

Salesforce’s management believes that Salesforce’s AI features can (1) help financial services companies improve the capabilities of their employees and (2) provide data-security for highly regulated companies when their data is used in AI models

But yesterday, there were many questions from my friend who I’m not going to give you his name because he’s one of the – the CEO of one of the largest and most important banks in the world. And I’ll just say that, of course, his primary focus is on productivity. He knows that he wants to make his bankers a lot more successful. He wants every banker to be able to rewrite a mortgage, but not every banker can, because writing the mortgage takes a lot of technical expertise. But as we showed him in the meeting through a combination of Tableau, which we demonstrated and Slack, which we demonstrated, and Salesforce’s Financial Services Cloud, which he has tens of thousands of users on, that banker understood that this would be incredible. But I also emphasize to him that LLMs, or large language models, they have a voracious appetite for data. They want every piece of data that they can consume. But through his regulatory standards, he cannot deliver all that data into the LLM because it becomes amalgamated. Today, he runs on Salesforce, and his data is secured down to the row and cell level.

Salesforce’s management believes that the technology sector experienced a “COVID super cycle” in 2020/2021 that made 2022 difficult for companies in the sector, but that tech spending could see an acceleration in growth from a coming “AI super cycle”

I just really think you have to look at 2020, 2021 was just this massive super cycle called the pandemic. I don’t know if you remember, but we had a pandemic a couple of years ago. And during that, we saw tech buying like we never saw. It was incredible and everybody surged on tech buying. So you’re really looking at comparisons against that huge mega cycle… 

…That’s also what gives me tremendous confidence going forward and that what we’re really seeing is that customers are absorbing the huge amounts of technology that they bought. And that is about to come, I believe, to a close. I can’t give you the exact date, and it’s going to be accelerated by this AI super cycle.

Salesforce is doing a lot of work on data security when it comes to developing its AI features

For example, so we are doing a lot of things as the basic security level, like we are really doing tenant level isolation coupled with 0 retention architecture, the LLM level. So the LLM doesn’t remember any of the data. Along with that, they — for them to use these use cases, they want to have — they have a lot of these compliances like GDPR, ISO, SOC, Quadrant, they want to ensure that those compliances are still valid, and we’re going to solve it for that. In addition, the big worry everybody has is people have heard about hallucinations, toxicity, bias, this is what we call model trust. We have a lot of innovation around how to ground the data on 360 data, which is a huge advantage we have. And we are able to do a lot of things at that level. And then the thing, which I think Marc hinted at, which is LLMs are not like a database. These intra-enterprise trust, even once you have an LLM, you can’t open the data to everybody in the company. So you need ability to do this — who can access this data, how is it doing both before the query and after the query, we have to build that. 

Salesforce is importing more than 7 trillion reports a month into its Data Cloud to build AI features, and management believes this is a valuable trove of data

And by the way, the Data Cloud, just in a month, we are importing more than 7 trillion reports into the data layer, so which is a very powerful asset we have. So coupled with all of this is what they are looking for guidance and how we think we can deliver significant value to our customers.

Salesforce’s management sees generative AI as a useful tool to help non-technical users write software

But you can also imagine, for example, even with Salesforce, the ability as we’re going to see in June, that many of our trailblazers are amazing low-code, no-code trailblazers, but soon they’ll have the ability to tap into our LLMs like ProGen and CodeGen that have the ability to code for them automatically. They aren’t coders. They didn’t graduate computer science degrees.

The arc of progress that Salesforce’s management sees with AI: Predictive, then generative, then autonomous

So I think the way I see it is this AI technologies are a continuum that is predictive then they generate, and the real long-term goal is autonomous. The initial version of the generative AI will be more in terms of assistance…

… And then I think the fully autonomous cases, for example, in our own internal use cases with our models, we are able to detect 60% of instance and auto remediate. That requires a little bit more fine-tuning and we’ll have to work with specific customers to get to that level of model performance. So I see this is just the start of this cut. The assistant model is the initial thing to build trust and a human in the loop and validate it. And then as the models get better and better, we’ll keep taking use cases where we can fully automate it.

AI has already improved the productivity of Salesforce’s software developers by at least 20% and management thinks the same productivity-boost can happen for Salesforce’s customers

But the other use cases, which we are going to see, and in fact, I have rolled out our own code elements in our engineering org and we are already seeing minimum 20% productivity…

…In some cases, up to 30%. Now a lot of our customers are asking the same. We are going to roll Einstein GPT for our developers in the ecosystem, which will not only help not only the local developers bridge the gap, where there’s a talent gap but also reduce the cost of implementations for a lot of people. So there’s a lot of value.

Veeva Systems (NYSE: VEEV)

Veeva Systems recently announced an AI chatbot for field sales reps and management is not thinking about the chatbot’s monetisation at the moment

CRM Bot is an AI application for Vault CRM. You can think of it as ChatGPT for field teams…

…Yes, it’s really early right now. We’re focused on ensuring that we have the right product working with our customers. So that’s our focus right now. Let’s get the product right, and then we’ll get into more of the details on kind of the sizing and the opportunity there. But we’re excited overall about the opportunity we have in front of us …CRM bot will — that’s not an included product so that will have a license that will most likely be licensed by the user. So that will be net new. But as Brent mentioned, we’re focused on getting the product right and we don’t have pricing for that or sizing for that yet.

Veeva Systems’ management thinks that AI will not disrupt the company and will instead be a positive

Given the breadth and the nature of our industry cloud software, data, and services, AI will not be a major disruptor for our markets, rather it will be complementary and somewhat positive for Veeva in a few ways. We will develop focused AI applications where the technology is a good fit, such as CRM Bot for Vault CRM. The broadening use of AI will make our proprietary data assets, such as Link and Compass, more valuable over time because the data can be used in new ways. We will also make it easy for customers to connect their own AI applications with their Veeva applications, creating even more value from the Veeva ecosystem…

Veeva Systems’ management thinks that core systems of records will be needed even in the world of AI

I like our position as it relates to AI because we’re a core system of record. So that’s something you’re always going to need. I think that’s 1 thing that people should always understand. Core system of records will be needed even in the world of AI. If I ask Brent, hey, Brent, do you think 10 years from now, you’ll need a financial system to manage your financials. He’s going to tell me, yes, I really need one, you can’t take it away. ChatGPT won’t do it for me, right? I’m making a joke there, but our customers have the same critical operational systems around drug safety, around clinical trials, around regulatory, around their quality processes. So those are always going to be needed.

Veeva Systems is focused on leveraging its proprietary data assets with AI to make them more valuable 

Now we are also building our data assets, and these are proprietary data assets, Link, Compass and we’re building more data assets. Those will also be not affected by AI, but AI will be able to leverage those assets and make those assets more valuable. So I think we’ll develop more — we’ll do basically 3 things. We’ll develop more applications over time. CRM bot the first. We got to get that right. We also will — our proprietary data will get more valuable.

Veeva Systems’ management wants to make it easy for customers to connect their own AI applications with the company’s software products

And the third thing we’ll do is make our applications fit very well when customers have their own proprietary AI applications. So especially the Vault platform, we’ll do a lot of work in there to make it fit really well with the other AI applications they have from other vendors or that they develop themselves, because it’s an open ecosystem, and that’s how that’s part of being Veeva. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, DocuSign, MongoDB, Okta, Salesforce, and Veeva Systems. Holdings are subject to change at any time.

An Investing Paradox: Stability Is Destabilising

The epicenters of past periods of economic stress were sectors that had been strong and robust. Why is that the case?

One of my favourite frameworks for thinking about investing and the economy is the simple but profound concept of stability being destabilising. This comes from the ideas of the late economist Hyman Minsky.

When he was alive, Minsky wasn’t well known. His views on why an economy goes through boom-bust cycles only gained prominence after the 2008-2009 financial crisis. In essence, Minsky theorised that for an economy, stability itself is destabilising. I first learnt about him – and how his ideas can be extended to investing – years ago after coming across a Motley Fool article written by Morgan Housel. Here’s how Housel describes Minsky’s framework:

“Whether it’s stocks not crashing or the economy going a long time without a recession, stability makes people feel safe. And when people feel safe, they take more risk, like going into debt or buying more stocks.

It pretty much has to be this way. If there was no volatility, and we knew stocks went up 8% every year [the long-run average annual return for the U.S. stock market], the only rational response would be to pay more for them, until they were expensive enough to return less than 8%. It would be crazy for this not to happen, because no rational person would hold cash in the bank if they were guaranteed a higher return in stocks. If we had a 100% guarantee that stocks would return 8% a year, people would bid prices up until they returned the same amount as FDIC-insured savings accounts, which is about 0%.

But there are no guarantees—only the perception of guarantees. Bad stuff happens, and when stocks are priced for perfection, a mere sniff of bad news will send them plunging.”

In other words, great fundamentals in business (stability) can cause investors to take risky actions, such as pushing valuations toward the sky or using plenty of leverage. This plants the seeds for a future downturn to come (the creation of instability).

I recently came across a wonderful July 2010 blog post from economist Eric Falkenstein, titled A Batesian Mimicry Explanation of Business Cycles, which shared real-life historical examples of how stability created instability in the economy. Here are the relevant passages from Falkenstein’s blog post (emphases are mine):

“…the housing bubble of 2008 was based on the idea that the borrower’s credit was irrelevant because the underlying collateral, nationwide, had never fallen significantly in nominal terms. This was undoubtedly true. The economics profession, based on what got published in top-tier journals, suggested that uneconomical racial discrimination in mortgage lending was rampant, lending criteria was excessively prudent (underwriting criteria explicitly do not note borrowers race, so presumably lenders were picking up correlated signals). Well-known economists Joe Stiglitz and Peter Orzag wrote a paper for Fannie Mae arguing the expected loss on its $2 trillion in mortgage guarantees of only $2 million dollars, 0.0001%. Moody’s did not consider it important to analyze the collateral within mortgage CDOs, as if the borrower or collateral characteristics were irrelevant. In short, lots of smart people thought housing was an area with extremely low risk.

Each major bust has its peculiar excesses centered on previously prudent and successful sectors. After the Panic of 1837, many American states defaulted quite to the surprise of European investors, who were mistakenly comforted by their strong performance in the Panic of 1819 (perhaps the first world-wide recession). The Panic of 1893 centered on railroads, which had for a half century experienced solid growth, and seemed tested by their performance in the short-lived Panic of 1873.”

It turns out that it was the “prudent and successful sectors” – the stable ones – that were the epicenters of the panics of old. It was their stability that led to investor excesses, exemplifying Minsky’s idea that stability is destabilising.

The world of investing is full of paradoxes. Minsky’s valuable contribution to the world of economic and investment thinking is one such example.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any companies mentioned. Holdings are subject to change at any time.

Takeaways From Silicon Valley Bank’s Collapse

The collapse of Silicon Valley Bank, or SVB, is a great reminder for investors to always be prepared for the unexpected.

March 2023 was a tumultuous month in the world of finance. On 8 March, Silicon Valley Bank, the 16th largest bank in the USA with US$209 billion in assets at the end of 2022, reported that it would incur a US$1.8 billion loss after it sold some of its assets to meet deposit withdrawals. Just two days later, on 10 March, banking regulators seized control of the bank, marking its effective collapse. It turned out that SVB had faced US$42 billion in deposit withdrawals – nearly a quarter of its deposit base at the end of 2022 – in just one day, on 9 March.

SVB had failed because of a classic bank run. At a simplified level, banking involves taking in deposits and distributing the capital as loans to borrowers. A bank’s assets (what it owns) are the loans it has doled out, and its liabilities (what it owes) are deposits from depositors. When depositors withdraw their deposits, a bank has to return cash to them. Often, depositors can withdraw their deposits at short notice, whereas a bank can’t easily convert its loans into ready cash quickly. So when a large group of depositors ask for their money back, it’s difficult for a bank to meet the withdrawals – that’s when a bank run happens.
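The mechanics above can be sketched numerically. The figures below are purely illustrative (they are not SVB’s actual balance sheet); they show how a bank whose total assets exceed its deposits can still be unable to meet a sudden wave of withdrawals:

```python
# A stylised bank balance sheet with made-up numbers.
deposits = 100.0      # liabilities: owed to depositors, withdrawable at short notice
cash = 10.0           # liquid assets available immediately
loans = 95.0          # illiquid assets that cannot be converted to cash quickly

withdrawal_request = 25.0  # a quarter of depositors ask for their money back at once

# The bank can only pay out what it holds in cash; the rest forces hurried
# asset sales, often at a loss.
shortfall = max(0.0, withdrawal_request - cash)

print(f"Total assets: {cash + loans}")          # 105.0 -- the bank is solvent
print(f"Cash on hand: {cash}")                  # 10.0
print(f"Withdrawals requested: {withdrawal_request}")  # 25.0
print(f"Shortfall forcing asset sales: {shortfall}")   # 15.0
```

Even though the stylised bank’s assets (105) exceed its deposits (100), it holds only 10 in cash against 25 of withdrawals – solvency is no defence against illiquidity, which is the essence of a bank run.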

When SVB was initially taken over by regulators, there was no guarantee that the bank’s depositors would be made whole. Official confirmation that the money of SVB’s depositors would be fully protected came only a few days later. In the lead-up to and aftermath of SVB’s fall, there was a palpable fear among stock market participants that a systemic bank run could happen within the US banking sector. The Invesco KBW Regional Banking ETF, an exchange-traded fund tracking the KBW Nasdaq Regional Banking Index, which comprises publicly listed US regional banks and thrifts, fell by 21% in March 2023. The stock price of First Republic Bank, ranked 14th in America with US$212 billion in assets at the end of 2022, cratered by 89% in the same month. For context, the S&P 500 was up by 3.5%.

SVB was not the only US bank that failed in March 2023. Two other US banks, Silvergate Bank and Signature Bank, did too. There was also contagion beyond the USA. On 19 March, Credit Suisse, a Switzerland-based bank with CHF 531 billion in assets (around US$575 billion) at the end of 2022, was forced by its country’s regulators to agree to be acquired by its national peer, UBS, for just over US$3 billion; two days prior, on 17 March, Credit Suisse had a market capitalization of US$8.6 billion. Going back to the start of 2023, I don’t think anyone predicted that banks of significant size in the USA would fail (Signature Bank had US$110 billion in assets at the end of 2022) or that the 167-year-old Credit Suisse would be absorbed by another bank for a relative pittance. These events are a sound reminder of a belief I have about investing: Bad scenarios inevitably happen from time to time, but I just don’t know when. To cope with this uncertainty, I choose to invest in companies that I think have both bright growth prospects in peaceful conditions and a high likelihood of making it through a crisis either relatively unscathed or in even better shape than before.

The SVB bank run also illustrates an important aspect of how I invest: why I shun forecasts. SVB’s run was different from past bank runs. Jerome Powell, chair of the Federal Reserve, said in a 22 March speech (emphasis is mine):

“The speed of the run [on SVB], it’s very different from what we’ve seen in the past and it does kind of suggest that there’s a need for possible regulatory and supervisory changes just because supervision and regulation need to keep up with what’s happening in the world.”

Observers of financial markets have suggested that the run on SVB could happen at such breakneck speed – US$42 billion of deposits, nearly a quarter of the bank’s deposit base, withdrawn in one day – because of the existence of mobile devices and internet banking. I agree. Bank runs of old would have involved people physically waiting in line at bank branches to withdraw their money, so deposit outflows would have taken relatively longer. Now a withdrawal can happen in the time it takes to tap a smartphone. In 2014, author James Surowiecki reviewed Walter Friedman’s book on the folly of economic forecasting, titled Fortune Tellers. In his review, Surowiecki wrote (emphasis is mine):

“The failure of forecasting is also due to the limits of learning from history. The models forecasters use are all built, to one degree or another, on the notion that historical patterns recur, and that the past can be a guide to the future. The problem is that some of the most economically consequential events are precisely those that haven’t happened before. Think of the oil crisis of the 1970s, or the fall of the Soviet Union, or, most important, China’s decision to embrace (in its way) capitalism and open itself to the West. Or think of the housing bubble. Many of the forecasting models that the banks relied on assumed that housing prices could never fall, on a national basis, as steeply as they did, because they had never fallen so steeply before. But of course they had also never risen so steeply before, which made the models effectively useless.”

There is great truth in something writer Kelly Hayes once said: “Everything feels unprecedented when you haven’t engaged with history.” SVB’s failure can easily feel epochal to some investors, since it was one of the largest banks in America when it fell. But it was actually just 15 years ago, in 2008, that the largest bank failure in the USA – a record that still holds – happened. The culprit, Washington Mutual, had US$307 billion in assets at the time. In fact, bank failures are not even a rare occurrence in the USA: from 2001 to the end of March 2023, there were 563 such incidents. But Hayes’ wise quote misses an important fact about life: things that have never happened before do happen. Such was the case with the speed of SVB’s bank run. For context, Washington Mutual crumbled after a total of US$16.7 billion in deposits – less than 10% of its total deposit base – fled over 10 days.

I have also seen that unprecedented things happen with alarming regularity. It was just three years ago, in April 2020, that the price of oil went negative for the first time in history. When investing, I have always kept this in mind – and always will. I also know that I am unable to predict what these unprecedented events could look like, but I am sure that they are bound to happen. To deal with them, I fall back on what I shared earlier:

“To cope with this uncertainty, I choose to invest in companies that I think have both bright growth prospects in peaceful conditions and a high likelihood of making it through a crisis either relatively unscathed or in even better shape than before.”

I think such companies carry the following traits, which I have long looked for in my investing activities:

  1. Revenues that are small in relation to a large and/or growing market, or revenues that are large in a fast-growing market 
  2. Strong balance sheets with minimal or reasonable levels of debt
  3. Management teams with integrity, capability, and an innovative mindset
  4. Revenue streams that are recurring in nature, either through contracts or customer behaviour
  5. A proven ability to grow
  6. A high likelihood of generating a strong and growing stream of free cash flow in the future

These traits interplay with each other to produce companies I believe to be antifragile. I first came across the concept of antifragility – referring to something that strengthens when exposed to non-lethal stress – in Nassim Nicholas Taleb’s book, Antifragile. Antifragility is an important concept for the way I invest. As I mentioned earlier, I operate on the basis that bad things will happen from time to time – to economies, industries, and companies – but I just don’t know how and when. As such, I am keen to own shares in antifragile companies, the ones that can thrive during chaos. This is why the strength of a company’s balance sheet is an important investment criterion for me – a strong balance sheet increases the chance that a company can survive or even thrive in rough seas. But a company’s antifragility goes beyond its financial numbers. It can also be found in how the company is run, which in turn stems from the mindset of its leader.

It’s crucial to learn from history, as Hayes’ quote suggests. But it’s also important to recognise that the future will not fully resemble the past. Forecasts tend to fail because there are limits to learning from history, and this is why I shun them. In a world where unprecedented things can and do happen, I am prepared for the unexpected.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

What American Technology Companies Are Thinking About AI

A vast collection of notable quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies.

The way I see it, artificial intelligence (or AI) really leapt into the zeitgeist in late 2022 and early 2023 with the public introduction of DALL-E 2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

Meanwhile, the latest earnings season for the US stock market is coming to its tail-end. I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. Here they are, in no particular order:

Airbnb (NASDAQ: ABNB)

Airbnb’s management thinks AI is a massive platform shift

Well, why don’t I start, Justin, with AI. This is certainly the biggest revolution in tech since I came to Silicon Valley. It’s certainly as big of a platform shift as the Internet, and many people think it might be even bigger. 

Airbnb’s management thinks of foundational models as highways; what Airbnb is interested in is building the cars on the highways – in other words, tuning the model

And I’ll give you kind of a bit of an overview of how we think about AI. So all of this is going to be built on the base model. The base models, the large language models, think of those as GPT-4. Google has a couple of base models, Microsoft, Anthropic. These are like major infrastructure investments. Some of these models might cost tens of billions of dollars towards the compute power. And so think of that as essentially like building a highway. It’s a major infrastructure project. And we’re not going to do that. We’re not an infrastructure company. But we’re going to build the cars on the highway. In other words, we’re going to design the interface and the tuning of the model on top of AI, on top of the base model. So on top of the base model is the tuning of the model. And the tuning of the model is going to be based on the customer data you have.

Airbnb’s management thinks AI can be used to help the company learn more about its users and build a much better way to match accommodation options with the profile of a user

If you were to ask a question to ChatGPT, and if I were to ask a question to ChatGPT, we’re both going to get pretty much the same answer. And the reason both of us are going to get pretty close the same answer is because ChatGPT doesn’t know that it’s between you and I, doesn’t know anything about us. Now this is totally fine for many questions, like how far is it from this destination to that destination. But it turns out that a lot of questions in travel aren’t really search questions. They’re matching questions. Another is, they’re questions that the answer depends on who you are and what your preferences are. So for example, I think that going forward, Airbnb is going to be pretty different. Instead of asking you questions like where are you going and when are you going, I want us to build a robust profile about you, learn more about you and ask you 2 bigger and more fundamental questions: who are you? And what do you want?

Airbnb’s management wants to use AI to build a global travel community and world-class personalised travel concierge

And ultimately, what I think Airbnb is building is not just a service or a product. But what we are in the largest sense is a global travel community. And the role of Airbnb and that travel community is to be the ultimate host. Think of us with AI as building the ultimate AI concierge that could understand you. And we could build these world-class interfaces, tune our model. Unlike most other travel companies, we know a lot more about our guests and hosts. This is partly why we’re investing in the Host Passport. We want to continue to learn more about people. And then our job is to match you to accommodations, other travel services and eventually things beyond travel. So that’s the big vision of where we’re going to go. I think it’s an incredibly expanding opportunity.

Airbnb’s management thinks that AI can help level the playing field in terms of the service Airbnb provides versus that of hotels

One of the strengths of Airbnb is that Airbnb’s offering is one of a kind. The problem with Airbnb is our service is also one of a kind. And so therefore, historically less consistent than a hotel. I think AI can level the playing field from a service perspective relative to hotels because hotels have front desk, Airbnb doesn’t. But we have literally millions of people staying on Airbnb every night. And imagine they call customer service. We have agents that have to adjudicate between 70 different user policies. Some of these are as many as 100 pages long. What AI is going to do is be able to give us better service, cheaper and faster by augmenting the agents. And I think this is going to be something that is a huge transformation. 

Airbnb’s management thinks that AI can help improve the productivity of its developers

The final thing I’ll say is developer productivity and productivity of our workforce generally. I think our employees could easily be, especially our developers, 30% more productive in the short to medium term, and this will allow significantly greater throughput through tools like GitHub’s Copilot. 

Alphabet (NASDAQ: GOOG)

Alphabet’s management thinks AI will unlock new experiences in Search as it evolves

As it evolves, we’ll unlock entirely new experiences in Search and beyond just as camera, voice and translation technologies have all opened entirely new categories of queries and exploration.

AI has been foundational for Alphabet’s digital advertising business for over a decade

AI has also been foundational to our ads business for over a decade. Products like Performance Max use the full power of Google’s AI to help advertisers find untapped and incremental conversion opportunities. 

Alphabet’s management is focused on making AI safe

And as we continue to bring AI to our products, our AI principles and the highest standards of information integrity remain at the core of all our work. As one example, our Perspective API helps to identify and reduce the amount of toxic text that language models train on, with significant benefits for information quality. This is designed to help ensure the safety of generative AI applications before they are released to the public.

Examples of Alphabet bringing generative AI to customers of its cloud computing service

We are bringing our generative AI advances to our cloud customers across our cloud portfolio. Our PaLM generative AI models and Vertex AI platform are helping Behavox to identify insider threats, Oxbotica to test its autonomous vehicles and Lightricks to quickly develop text-to-image features. In Workspace, our new generative AI features are making content creation and collaboration even easier for customers like Standard Industries and Lyft. This builds on our popular AI Bard Workspace tools, Smart Canvas and Translation Hub used by more than 9 million paying customers. Our product leadership also extends to data analytics, which provides customers the ability to consolidate their data and understand it better using AI. New advances in our data cloud enable Ulta Beauty to scale new digital and omnichannel experiences while focusing on customer loyalty; Shopify to bring better search results and personalization using AI; and Mercedes-Benz to bring new products to market more quickly. We have introduced generative AI to identify and prioritize cyber threats, automate security workflows and response and help scale cybersecurity teams. Our cloud cybersecurity products helped protect over 30,000 companies, including innovative brands like Broadcom and Europe’s Telepass.

The cost of compute when integrating LLMs (large language models) into Google Search is something Alphabet’s management has been thinking about 

On the cost side, we have always — cost of compute has always been a consideration for us. And if anything, I think it’s something we have developed extensive experience over many, many years. And so for us, it’s a nature of habit to constantly drive efficiencies in hardware, software and models across our fleet. And so this is not new. If anything, the sharper the technology curve is, we get excited by it, because I think we have built world-class capabilities in taking that and then driving down cost sequentially and then deploying it at scale across the world. So I think we’ll take all that into account in terms of how we drive innovation here, but I’m comfortable with how we’ll approach it.

Alphabet’s management does not seem concerned about any potential revenue impact from integrating LLMs into Google’s core Search product

So first of all, throughout the years, as we have gone through many, many shifts in Search, and as we’ve evolved Search, I think we’ve always had a strong grounded approach in terms of how we evolve ads as well. And we do that in a way that makes sense and provide value to users. The fundamental drivers here are people are looking for relevant information. And in commercial categories, they find ads to be highly relevant and valuable. And so that’s what drives this virtuous cycle. And I don’t think the underpinnings over the fact that users want relevant commercial information, they want choice in what they look at, even in areas where we are summarizing and answering, et cetera, users want choice. We care about sending traffic. Advertisers want to reach users. And so all those dynamics, I think, which have long served us well, remain. And as I said, we’ll be iterating and testing as we go. And I feel comfortable we’ll be able to drive innovation here like we’ve always done.

Amazon (NASDAQ: AMZN)

Amazon’s management thinks that the AI boom will drive significant growth in data consumption and products in the cloud

And I also think that there are a lot of folks that don’t realize the amount of nonconsumption right now that’s going to happen and be spent in the cloud with the advent of large language models and generative AI. I think so many customer experiences are going to be reinvented and invented that haven’t existed before. And that’s all going to be spent, in my opinion, on the cloud.

Amazon has been investing in machine learning for more than two decades, and has been investing large sums of capital to build its own LLMs for several years

I think when you think about machine learning, it’s useful to remember that we have had a pretty substantial investment in machine learning for 25-plus years in Amazon. It’s deeply ingrained in virtually everything we do. It fuels our personalized e-commerce recommendations. It drives the pick pass in our fulfillment centers. We have it in our Go stores. We have it in our Prime Air, our drones. It’s obviously in Alexa. And then AWS, we have 25-plus machine learning services where we have the broadest machine learning functionality and customer base by a fair bit. And so it is deeply ingrained in our heritage…

…We’ve been investing in building in our own large language models for several years, and we have a very large investment across the company. 

Amazon’s management decided to build chips with great price and performance – Trainium for training and Inferentia for inference – because LLMs are going to run on compute, which depends on chips (particularly GPUs, or graphics processing units), and GPUs are scarce; Amazon’s management also thinks that a lot of machine learning training will take place on AWS

If you think about maybe the bottom layer here, is that all of the large language models are going to run on compute. And the key to that compute is going to be the chips that’s in that compute. And to date, I think a lot of the chips there, particularly GPUs, which are optimized for this type of workload, they’re expensive and they’re scarce. It’s hard to find enough capacity. And so in AWS, we’ve been working for several years on building customized machine learning chips, and we built a chip that’s specialized for training, machine learning training, which we call Trainium, and a chip that’s specialized for inference or the predictions that come from the model called Inferentia. The reality, by the way, is that most people are spending most of their time and money on the training. But as these models graduate to production, where they’re in the apps, all the spend is going to be in inference. So they both matter a lot. And if you look at — we just released our second versions of both Trainium and Inferentia. And the combination of price and performance that you can get from those chips is pretty differentiated and very significant. So we think that a lot of that machine learning training and inference will run on AWS.

Amazon’s management thinks that most companies that want to use AI are not interested in building their own foundational models because it takes a lot of resources; Amazon has the resources to build foundational models, and is providing them to customers who can then customise the models

And if you look at the really significant leading large language models, they take many years to build and many billions of dollars to build. And there will be a small number of companies that want to invest that time and money, and we’ll be one of them in Amazon. But most companies don’t. And so what most companies really want and what they tell AWS is that they’d like to use one of those foundational models and then have the ability to customize it for their own proprietary data and their own needs and customer experience. And they want to do it in a way where they don’t leak their unique IP to the broader generalized model. And that’s what Bedrock is, which we just announced a week ago or so. It’s a managed foundational model service where people can run foundational models from Amazon, which we’re exposing ourselves, which we call Titan. Or they can run it from leading large language model providers like AI 21 and Anthropic and Stability AI. And they can run those models, take the baseline, customize them for their own purposes and then be able to run it with the same security and privacy and all the features they use for the rest of their applications in AWS. That’s very compelling for customers.

Every single one of Amazon’s businesses is building on top of LLMs

Every single one of our businesses inside Amazon are building on top of large language models to reinvent our customer experiences, and you’ll see it in every single one of our businesses, stores, advertising, devices, entertainment and devices, which was your specific question, is a good example of that.

ASML (NASDAQ: ASML)

ASML’s management sees that mature semiconductor technologies are actually needed even in AI systems

So I think this is something people underestimate how significant the demand in the mid-critical and the mature semiconductor space is. And it will just grow double digit, whether it’s automotive, whether it’s the energy transition, whether it’s just the entire industrial products area, where is the — well, those are the sensors that we actually need as an integral component of the AI systems. This is where the mid-critical and the mature semiconductor space is very important and needs to grow.

Block (NYSE: SQ)

Block’s management is focused on three technology trends, one of which is AI

The three trends we’re focused on: Number one is artificial intelligence; number two is open protocols; and number three is the global south. Consider how many times you’ve heard the term AI or GPT in the earnings calls just this quarter versus all quarters in history prior. This trend seems to be moving faster than anyone can comprehend or get a handle on. Everyone feels like they’re on their back foot and struggling to catch up. Machine learning is something we’ve always employed at Block, and the recent acceleration in availability of tools is something we’re eager to implement across all of our products and services. We see this first as a way to create efficiencies, both internally and for our customers. And we see many opportunities to apply these technologies to create entirely new features for our customers. More and more effort in the world will shift to creative endeavors as AI continues to automate mechanical tasks away.

Datadog (NASDAQ: DDOG)

Datadog’s management thinks AI can make software developers more productive at generating code; the resulting increase in complexity will make observability and troubleshooting software products even more important

First, from a market perspective, over the long term, we believe AI will significantly expand our opportunity in observability and beyond. We see that massive improvements in developer productivity will allow individuals to write more applications and to do so faster than ever before. And as with past productivity increases, we think this will further shift value from writing code to observing, managing, fixing and securing live applications…

… Longer term, I think we can all glimpse at the future where productivity for everyone, including software engineers, increases dramatically. And the way we see that as a business is, our job is to help our customers absorb the complexity of the applications they’ve built so they can understand and modify them, run them, secure them. And we think that the more productivity there is, the more people can write in the same amount of time, the less they understand the software they produce, the more they need us, and the more value it sends our way. So that’s what makes us very confident in the long term here…

…And the way this has played out in the past typically is you just end up generating more stuff and more mess. So basically, if one person can produce 10x more, you end up with 10x more stuff, and that person will still not understand everything they’ve produced. So the way we imagine the future is companies are going to deliver a lot more functionality to their users a lot faster. They’re going to solve a lot more problems in software. But they won’t have as tight an understanding from their engineering team as to what it is they’ve built, how they built it, what might break, and what might be the corner cases that don’t work, and things like that. And that’s consistent with what we can see people building with a copilot today and things like that.
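
Datadog’s thesis, that AI-generated code shifts value toward observing and fixing live applications, can be sketched with a toy tracing decorator; the `trace` decorator and in-memory span store below are illustrative stand-ins, not Datadog’s actual agent or API:

```python
import functools
import time

SPANS = []  # in-memory span store; a real agent would ship these to a backend

def trace(func):
    """Record a span (name, duration, error flag) around each call."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        error = False
        try:
            return func(*args, **kwargs)
        except Exception:
            error = True
            raise
        finally:
            SPANS.append({
                "name": func.__name__,
                "duration_s": time.perf_counter() - start,
                "error": error,
            })
    return wrapper

@trace
def checkout(cart):
    if not cart:
        raise ValueError("empty cart")
    return sum(cart.values())

print(checkout({"mug": 12, "tote": 20}))  # 32
try:
    checkout({})
except ValueError:
    pass
print([(s["name"], s["error"]) for s in SPANS])
```

A real observability pipeline would aggregate and alert on these spans; the point is that the instrumentation keeps working no matter who, or what, wrote `checkout`.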

Etsy (NASDAQ: ETSY)

Etsy’s management thinks that AI can greatly improve the search experience for customers who are looking for specific products

We’ve been at the cutting edge of search technology for the past several years, and while we use large language models today, we couldn’t be more excited about the potential of newer large language models and generative AI to further accelerate the transformation of Etsy’s user experience. Even with all our enhancements, Etsy search today is still keyword-driven and text-based, and essentially the result is a grid with many thousands of listings. We’ve gotten better at reading the tea leaves, but it’s still a repetitive cycle of query, result, and reformulation. In the future, we expect search on Etsy to utilize more natural language and multimodal approaches. Rather than manipulating keywords, our search engines will enable us to ask the right question at the right time to show the buyer a curated set of results that can be so much better than it is today. We’re investigating additional search engine technologies to identify attributes of an item, multi-label learning models for instant search, graph neural networks and so much more, which will be used in combination with our other search engine technologies. It’s our belief that Etsy will benefit from generative AI and other advances in search technology as much or perhaps even more so than others…

When you run a search at Etsy, we already use multiple machine learning techniques. So I don’t think generative AI replaces everything we’re doing, but it’s another tool that will be really powerful. And there are times when having a conversation instead of entering a query and then getting a bunch of search results and then going back and reformulating your query and then getting a bunch of search results, that’s not always very satisfying. And being able to say, no, I meant more like this. How about this? I’d like something that has this style and have that feel like more of a conversation, I think that can be a better experience a lot of the time. And I think in particular for Etsy where we don’t have a catalog, it might be particularly powerful.
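
The difference Etsy describes, between keyword-driven search and semantic retrieval, can be sketched in a few lines; the three-dimensional “embeddings” below are hand-made toy values, not output from any real model:

```python
import math

# Toy catalog with hypothetical 3-dim "style embeddings" (real systems
# would use learned embeddings with hundreds of dimensions).
CATALOG = {
    "rustic oak coffee table": [0.9, 0.1, 0.2],
    "mid-century walnut desk": [0.2, 0.9, 0.1],
    "handmade ceramic mug":    [0.1, 0.2, 0.9],
}

def keyword_search(query, catalog):
    """Keyword search: a listing matches only if a query word appears in its title."""
    words = set(query.lower().split())
    return [title for title in catalog if words & set(title.split())]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def semantic_search(query_vec, catalog, k=1):
    """Embedding search: rank listings by similarity, even with no shared words."""
    ranked = sorted(catalog, key=lambda t: cosine(query_vec, catalog[t]), reverse=True)
    return ranked[:k]

print(keyword_search("farmhouse table", CATALOG))    # ['rustic oak coffee table']
# "something cozy for my kitchen" shares no keywords with any title; a
# hypothetical query embedding near the mug's vector still retrieves it.
print(semantic_search([0.05, 0.15, 0.95], CATALOG))  # ['handmade ceramic mug']
```

A keyword engine returns nothing for a query with no overlapping words, while embedding similarity can still surface a relevant listing; production systems typically combine both signals.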

Fiverr (NYSE: FVRR) 

Fiverr’s management thinks that the proliferation of AI services will not diminish the demand for freelancers, but it will lead to a bifurcation in the fates of freelancers between those who embrace AI, and those who don’t

We haven’t seen AI negatively impact our business. On the contrary, the categories we open to address AI-related services are booming. The number of AI-related gigs has increased over tenfold and buyer searches for AI have soared over 1,000% compared to 6 months ago, indicating a strong demand and validating our efforts to stay ahead of the curve in this rapidly evolving technological landscape. We are witnessing the increasing need for human skills to deploy and implement AI technologies, which we believe will enable greater productivity and improved quality of work when human talent is augmented by AI capabilities. In the long run, we don’t anticipate AI development to displace the need for human talent. We believe AI won’t replace our sellers; rather sellers using AI will outcompete those who don’t…

…In terms of your question about AI, you’re right, it’s very hard to understand what categories or how categories might be influenced. I think that there’s one principle that I’ve shared in my opening remarks, which I think is very important, and this is how we view this: AI technology is not going to displace our sellers, but sellers who have a better grasp and better usage of AI are going to outcompete those who don’t. And this is not really different than any meaningful advancement within technology, and we’ve seen that in recent years. Every time when there’s a cool new technology or device or form factor that sellers need to become professional at, those who become professional first are those who are actually winning. And we’re seeing the same here. So I don’t think that this is a different case. It’s just different professions, which, by the way, is super exciting.

Fiverr’s management thinks that AI-produced work will still need a human touch

Furthermore, while AI-generated content can be well constructed, it is all based on existing human-created content. To generate novel and authentic content, human input remains vital. Additionally, verifying and editing the AI-generated content, which often contains inaccuracies, requires human expertise and effort. That’s why we have seen categories such as fact-checking or AI content editing flourish on our marketplace in recent months.

Mastercard (NYSE: MA)

Mastercard’s management thinks AI is a foundational technology for the company

For us we’ve been using AI for the better part of the last decade. So it’s embedded in a whole range of our products…

…So you’ll find it embedded in a range of our products, including generative AI. So we have used generative AI technology, particularly in creating data sets that allow us to compare and find threats in the cybersecurity space. You will find AI in our personalization products. So there’s a whole range of things that set us apart. We use this as foundational technology. And internally, you can see increasingly so, that generative AI might be a good solution for us when it comes to customer service propositions and so forth.

MercadoLibre (NASDAQ: MELI)

MercadoLibre is utilising AI within its products and services, in areas such as customer service and product discovery

In terms of AI, I think, as most companies, we do see some very relevant short- to midterm positive impact in terms of engineering productivity. And we are also increasing the amount of work being done on what elements of the consumer-facing experiences we can deploy AI on. I think the focus right now is on some of the more obvious use cases: improving and streamlining customer service and interactions with reps, improving workflows for reps through AI-assisted workflow tools, and then deploying AI to enable better search and discovery, in terms of better finding products on our website and better understanding specifications of products, where existing LLMs are quite efficient. And then beyond that, I think there’s a lot of work going on, and we hope to come up with other innovative forms of AI that we can place into the consumer-facing experience. But the ones I just mentioned are the ones that we’re currently working on the most.

Meta Platforms (NASDAQ: META)

Meta’s work in AI has driven significant improvements in (a) the quality of content seen by users of its services and (b) the monetisation of its services

Our investment in recommendations and ranking systems has driven a lot of the results that we’re seeing today across our discovery engine, reels and ads. Along with surfacing content from friends and family, now more than 20% of content in your Facebook and Instagram Feeds are recommended by AI from people groups or accounts that you don’t follow. Across all of Instagram, that’s about 40% of the content that you see. Since we launched Reels, AI recommendations have driven a more than 24% increase in time spent on Instagram. Our AI work is also improving monetization. Reels monetization efficiency is up over 30% on Instagram and over 40% on Facebook quarter-over-quarter. Daily revenue from Advantage+ shopping campaigns is up 7x in the last 6 months.

Meta’s management is focused on open-sourcing Meta’s AI models because they think going open-source will benefit the company in terms of it being able to make use of improvements to the models brought on by the open-source-community

Our approach to AI and our infrastructure has always been fairly open. We open source many of our state-of-the-art models, so people can experiment and build with them. This quarter, we released our LLaMA LLM to researchers. It has 65 billion parameters but outperforms larger models and has proven quite popular. We’ve also open sourced 3 other groundbreaking visual models along with their training data and model weights, Segment Anything, DINOv2 and our Animated Drawings tool, and we’ve gotten some positive feedback on all of those as well…

…And the reason why I think why we do this is that unlike some of the other companies in the space, we’re not selling a cloud computing service where we try to keep the different software infrastructure that we’re building proprietary. For us, it’s way better if the industry standardizes on the basic tools that we’re using, and therefore, we can benefit from the improvements that others make and others’ use of those tools can, in some cases, like Open Compute, drive down the costs of those things, which make our business more efficient, too. So I think to some degree, we’re just playing a different game on the infrastructure than companies like Google or Microsoft or Amazon, and that creates different incentives for us. So overall, I think that that’s going to lead us to do more work in terms of open sourcing some of the lower-level models and tools, but of course, a lot of the product work itself is going to be specific and integrated with the things that we do. So it’s not that everything we do is going to be open. Obviously, a bunch of this needs to be developed in a way that creates unique value for our products. But I think in terms of the basic models, I would expect us to be pushing and helping to build out an open ecosystem here, which I think is something that’s going to be important.

Meta’s management thinks the company now has enough computing infrastructure to do leading AI-related work after spending significant sums of money over the past few years to build that out

A couple of years ago, I asked our infra teams to put together ambitious plans to build out enough capacity to support not only our existing products but also enough buffer capacity for major new products as well. And this has been the main driver of our increased CapEx spending over the past couple of years. Now at this point, we are no longer behind in building out our AI infrastructure, and to the contrary, we now have the capacity to do leading work in this space at scale. 

Meta’s management is focused on using AI to improve its advertising services

We remain focused on continuing to improve ads ranking and measurement with our ongoing AI investments while also leveraging AI to power increased automation for advertisers through products like Advantage+ shopping, which continues to gain adoption and receive positive feedback from advertisers. These investments will help us develop and deploy privacy-enhancing technologies and build new innovative tools that make it easier for businesses to not only find the right audience for their ad but also optimize and eventually develop their ad creative.

Meta’s management thinks that generative AI can be a very useful tool for advertisers, but they’re still early in the stage of understanding what generative AI is really capable of

 Although there aren’t that many details that I’m going to share at this point, more of this will come in focus as we start shipping more of these things over the coming months. But I do think that there’s a big opportunity here. You asked specifically about advertisers, but I think it’s going to also help create more engaging experiences, which should create more engagement, and that, by itself, creates more opportunities for advertisers. But then I think that there’s a bunch of opportunities on the visual side to help advertisers create different creative. We don’t have the tools to do that over time, eventually making it. So we’ve always strived to just have an advertiser just be able to tell us what their objective is and then have us be able to do as much of the work as possible for them, and now being able to do more of the creative work there and ourselves for those who want that, I think, could be a very exciting opportunity…

…And then the third bucket is really around CapEx investments now to support gen AI. And this is an emerging opportunity for us. We’re still in the beginning stages of understanding the various applications and possible use cases. And I do think this may represent a significant investment opportunity for us that is earlier on the return curve relative to some of the other AI work that we’ve done. And it’s a little too early to say how this is going to impact our overall capital intensity in the near term.

Meta’s management also thinks that generative AI can be a very useful way for companies to have high-quality chatbots interacting with customers

I also think that there’s going to be a very interesting convergence between some of the AI agents in messaging and business messaging, where, right now, we see a lot of the places where business messaging is most successful are places where a lot of businesses can afford to basically have people answering a lot of questions for people and engaging with them in chat. And obviously, once you light up the ability for tens of millions of small businesses to have AI agents acting on their behalf, you’ll have way more businesses that can afford to have someone engaging in chat with customers.
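
The business-messaging agents Zuckerberg describes would be LLM-powered; as a minimal sketch of the shape of such a bot, the toy below answers with the reply attached to the closest known FAQ question and hands off to a human otherwise. The FAQ entries and `answer` helper are hypothetical, not Meta’s system:

```python
import difflib

# Hypothetical FAQ for a small business; a production agent would sit on
# top of an LLM, but nearest-question retrieval shows the basic shape.
FAQ = {
    "what are your opening hours": "We're open 9am-6pm, Monday to Saturday.",
    "do you ship internationally": "Yes, we ship worldwide; delivery takes 7-14 days.",
    "how do i return an item": "Email us within 30 days for a prepaid return label.",
}

def answer(message, faq=FAQ, cutoff=0.7):
    """Reply with the answer for the closest known question, else hand off."""
    match = difflib.get_close_matches(message.lower(), faq, n=1, cutoff=cutoff)
    if match:
        return faq[match[0]]
    return "Let me connect you with a human."

print(answer("Do you ship internationally?"))
print(answer("What's your favorite color?"))
```

The `cutoff` threshold is the key design choice: too low and the bot confidently answers the wrong question, too high and everything escalates to a human.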

Microsoft (NASDAQ: MSFT)

Microsoft’s management thinks there is a generational shift in online search happening now because of AI

As we look towards a future where chat becomes a new way for people to seek information, consumers have real choice in business model and modalities with Azure-powered chat entry points across Bing, Edge, Windows and OpenAI’s ChatGPT. We look forward to continuing this journey in what is a generational shift in the largest software category, search.

Because of Microsoft’s partnership with OpenAI, Microsoft Azure is now exposed to new AI-related workloads that it previously was not

Because of some of the work we’ve done in AI even in the last couple of quarters, we are now seeing conversations we never had, whether it’s coming through you and just OpenAI’s API, right, if you think about the consumer tech companies that are all spinning, essentially, i.e. the readers, because they have gone to OpenAI and are using their API. These were not customers of Azure at all. Second, even Azure OpenAI API customers are all new, and the workload conversations, whether it’s B2C conversations in financial services or drug discovery on another side, these are all new workloads that we really were not in the game in the past, whereas we now are. 

Microsoft’s management has plans to monetise all the different AI-copilots that it is introducing to its various products

Overall, we do plan to monetize a separate set of meters across all of the tech stack, whether they’re consumption meters or per-user subscriptions. The Copilot that’s priced and out there is GitHub Copilot. That’s a good example of how we monetize incrementally; the others are yet to be priced because they’re in preview mode. But you can expect us to do what we’ve done with GitHub Copilot pretty much across the board.

Microsoft’s management expects the company to lower the cost of compute for AI workloads over time

And so we have many knobs that will continue to drive optimization across it. And you see it even for a given generation of a large model, where we start in the cost footprint to where we end in the cost footprint within a quarter changes. So you can expect us to do what we have done over the decade-plus with the public cloud: bring the benefits of, I would say, continuous optimization of our COGS to a diverse set of workloads.

Microsoft’s management has not been waiting for AI-related regulations to show up; instead, they have treated the unintended consequences of the technology as a first-class concern from day one and built safeguards into the engineering process

So overall, we’ve taken the approach that we are not waiting for regulation to show up. We are taking an approach where the unintended consequences of any new technology is something that from day 1, we think about as first class and build into our engineering process, all the safeguards. So for example, in 2016 is when we put out the AI principles, we translated the AI principles into a set of internal standards that then are further translated into an implementation process that then we hold ourselves to internal audit essentially. So that’s the framework we have. We have a Chief AI Officer who is sort of responsible for both thinking of what the standards are and then the people who even help us internally audit our following of the process. And so we feel very, very good in terms of us being able to create trust in the systems we put out there. And so we will obviously engage with any regulation that comes up in any jurisdiction. But quite honestly, we think that the more there is any form of trust as a differentiated position in AI, I think we stand to gain from that.

Nvidia (NASDAQ: NVDA)

Cloud service providers (CSPs) are racing to deploy Nvidia’s chips for AI-related work

First, CSPs around the world are racing to deploy our flagship Hopper and Ampere architecture GPUs to meet the surge in interest from both enterprise and consumer AI applications for training and inference. Multiple CSPs announced the availability of H100 on their platforms, including private previews at Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure, upcoming offerings at AWS and general availability at emerging GPU-specialized cloud providers like CoreWeave and Lambda. In addition to enterprise AI adoption, these CSPs are serving strong demand for H100 from generative AI pioneers.

Nvidia’s management sees consumer internet companies as being at the forefront of adopting AI

Second, consumer Internet companies are also at the forefront of adopting generative AI and deep-learning-based recommendation systems, driving strong growth. For example, Meta has now deployed its H100-powered Grand Teton AI supercomputer for its AI production and research teams.

Nvidia’s management is seeing companies in industries such as automotive, financial services, healthcare, and telecom adopt AI rapidly

Third, enterprise demand for AI and accelerated computing is strong. We are seeing momentum in verticals such as automotive, financial services, health care and telecom where AI and accelerated computing are quickly becoming integral to customers’ innovation road maps and competitive positioning. For example, Bloomberg announced it has a 50-billion-parameter model, BloombergGPT, to help with financial natural language processing tasks such as sentiment analysis, named entity recognition, news classification and question answering. Auto insurance company CCC Intelligent Solutions is using AI for estimating repairs. And AT&T is working with us on AI to improve fleet dispatches so their field technicians can better serve customers. Among other enterprise customers using NVIDIA AI are Deloitte for logistics and customer service, and Amgen for drug discovery and protein engineering.

Nvidia is making it easy for companies to deploy AI technology

And with the launch of DGX Cloud through our partnership with Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure, we deliver the promise of NVIDIA DGX to customers from the cloud. Whether the customers deploy DGX on-prem or via DGX Cloud, they get access to NVIDIA AI software, including NVIDIA Base Command, end-to-end AI frameworks and pretrained models. We provide them with the blueprint for building and operating AI, spanning our expertise across systems, algorithms, data processing and training methods. We also announced NVIDIA AI Foundations, which are model foundry services available on DGX Cloud that enable businesses to build, refine and operate custom large language models and generative AI models trained with their own proprietary data created for unique domain-specific tasks. They include NVIDIA NeMo for large language models, NVIDIA Picasso for images, video and 3D, and NVIDIA BioNeMo for life sciences. Each service has 6 elements: pretrained models, frameworks for data processing and curation, proprietary knowledge-based vector databases, systems for fine-tuning, aligning and guard railing, optimized inference engines, and support from NVIDIA experts to help enterprises fine-tune models for their custom use cases.

Nvidia’s management thinks that the advent of AI will drive a shift towards accelerated computing in data centers

Now let me talk about the bigger picture and why the entire world’s data centers are moving toward accelerated computing. It’s been known for some time, and you’ve heard me talk about it, that accelerated computing is a full-stack problem, a full-stack challenge. But if you could successfully do it in a large number of application domains (it’s taken us 15 years), almost all of a data center’s major applications could be accelerated. You could reduce the amount of energy consumed and the amount of cost for a data center substantially, by an order of magnitude. It costs a lot of money to do it, because you have to do all the software and everything and you have to build all the systems and so on and so forth, but we’ve been at it for 15 years.

And what happened is, when generative AI came along, it triggered a killer app for this computing platform that’s been in preparation for some time. And so now we see ourselves in 2 simultaneous transitions. The world’s $1 trillion of data centers is populated nearly entirely by CPUs today. At $250 billion a year, it’s growing, of course. But over the last 4 years, call it $1 trillion worth of infrastructure installed, and it’s all completely based on CPUs and dumb NICs. It’s basically unaccelerated.

In the future, it’s fairly clear now, with generative AI becoming the primary workload of most of the world’s data centers generating information, and the fact that accelerated computing is so energy efficient, that the budget of a data center will shift very dramatically towards accelerated computing, and you’re seeing that now. We’re going through that moment right now as we speak, while the world’s data center CapEx budget is limited. But at the same time, we’re seeing incredible orders to retool the world’s data centers. And so I think you’re seeing the beginning of, call it, a 10-year transition to basically recycle or reclaim the world’s data centers and build them out as accelerated computing. You have a pretty dramatic shift in the spend of a data center from traditional computing to accelerated computing with SmartNICs, smart switches, of course, GPUs, and the workload is going to be predominantly generative AI…

…The second part is that generative AI is a large-scale problem, and it’s a data center scale problem. It’s another way of thinking that the computer is the data center or the data center is the computer. It’s not the chip. It’s the data center, and it’s never happened like this before. And in this particular environment, your networking operating system, your distributed computing engines, your understanding of the architecture of the networking gear, the switches and the computing systems, the computing fabric, that entire system is your computer, and that’s what you’re trying to operate. And so in order to get the best performance, you have to understand full stack and understand data center scale. And that’s what accelerated computing is.

Nvidia’s management thinks that the training of AI models will be an always-on process

 You’re never done with training. You’re always — every time you deploy, you’re collecting new data. When you collect new data, you train with the new data. And so you’re never done training. You’re never done producing and processing a vector database that augments the large language model. You’re never done with vectorizing all of the collected structured, unstructured data that you have. And so whether you’re building a recommender system, a large language model, a vector database, these are probably the 3 major applications of — the 3 core engines, if you will, of the future of computing as well as a bunch of other stuff. But obviously, these are very — 3 very important ones. They are always, always running.
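
Huang’s “never done vectorizing” loop can be illustrated with a toy in-memory vector store: new and updated documents are continually re-embedded and upserted, and queries retrieve nearest neighbors to augment a language model. The class below is a sketch under those assumptions, with hand-made 2-dim vectors, not NVIDIA’s software:

```python
import math

class VectorStore:
    """Toy in-memory vector store: upsert embeddings as new data arrives,
    query nearest neighbors to augment a language model (RAG-style)."""

    def __init__(self):
        self.vectors = {}

    def upsert(self, doc_id, vec):
        # Re-vectorizing an updated document simply overwrites its entry.
        self.vectors[doc_id] = vec

    def query(self, vec, k=1):
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb)
        ranked = sorted(self.vectors, key=lambda d: cos(vec, self.vectors[d]), reverse=True)
        return ranked[:k]

store = VectorStore()
# Each "deployment cycle" collects new data and upserts it; the loop never ends.
store.upsert("doc-1", [1.0, 0.0])
store.upsert("doc-2", [0.0, 1.0])
store.upsert("doc-1", [0.9, 0.1])   # doc-1 re-embedded after an update
print(store.query([1.0, 0.1]))       # ['doc-1']
```

In production this runs continuously alongside training, which is exactly why Huang describes these engines as “always, always running.”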

When it comes to inference – or the generation of an output – there’s a lot more that goes into it than just the AI models

The other thing that’s important is these are models, but they’re connected ultimately to applications. And the applications could have image in, video out, video in, text out, image in, proteins out, text in, 3D out, video in, in the future, 3D graphics out. So the input and the output requires a lot of pre and postprocessing. The pre and postprocessing can’t be ignored. And this is one of the things that most of the specialized chip arguments fall apart. And it’s because the length — the model itself is only, call it, 25% of the data — of the overall processing of inference. The rest of it is about preprocessing, postprocessing, security, decoding, all kinds of things like that.
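
Huang’s point, that the model itself is only around a quarter of an inference pipeline, can be made concrete by composing the stages explicitly; the stage bodies below are trivial stand-ins for real tokenization, a real network, and real decoding:

```python
def preprocess(text):
    # Stand-in for tokenization, resizing, feature extraction, etc.
    return text.lower().split()

def model(tokens):
    # Stand-in for the actual neural network forward pass.
    return [t[::-1] for t in tokens]

def postprocess(outputs):
    # Stand-in for decoding, filtering, safety checks, formatting.
    return " ".join(outputs)

def infer(text):
    # The full pipeline: the model is just one stage among several.
    return postprocess(model(preprocess(text)))

print(infer("Hello World"))  # 'olleh dlrow'
```

Specialized chips that accelerate only the middle stage leave the surrounding pre- and postprocessing untouched, which is the weakness Huang is pointing at.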

Paycom Software (NYSE: PAYC)

Paycom’s management thinks AI is definitely going to have a major impact in the payroll and HCM (human capital management) industry

I definitely think it’ll be relevant. You can use AI for multiple things. There are areas that you can use it for that are better than others. And they’re front-end things you can use it for direct to the client. There are back-end things that you can use it for that a client may never see. And so when you’re talking about AI, it has many uses, some of which is front end and some back end. And I don’t want to talk specifically about what exactly we’re using it for already internally and what our opportunities would be into the future. But in answer to your question, yes, I do think that over time, AI is going to be a thing in our industry.

PayPal (NASDAQ: PYPL)

PayPal has been working with AI (in fraud and risk management) for several years, and management thinks generative AI and other forms of AI will be useful in the online payments industry

For several years, we’ve been at the forefront of advanced forms of machine learning and AI to combat fraud and to implement our sophisticated risk management programs. With the new advances of generative AI, we will also be able to accelerate our productivity initiatives. We expect AI will enable us to meaningfully lower our costs for years to come. Furthermore, we believe that AI, combined with our unique scale and sets of data, will drive not only efficiencies, but will also drive a differentiated and unique set of value propositions for our merchants and consumers…

…And we are now beginning to experiment with the first generation of what we call AI-powered checkout, which looks at the full checkout experience, not just the PayPal checkout experience, but the full checkout experience for our merchants…

…There’s no question that AI is going to impact almost every function inside of PayPal, whether it be our front office, back office, marketing, legal, engineering, you name it. AI will have an impact and allow us to not just lower cost, but have higher performance and do things that is not about trade-offs. It’s about doing both in there.

Shopify (NASDAQ: SHOP)

Shopify’s management thinks the advent of AI makes a copilot for entrepreneurship possible

But now we are at the dawn of the AI era and the new capabilities that are unlocked by that are unprecedented. Shopify has the privilege of being amongst the companies with the best chances of using AI to help our customers. A copilot for entrepreneurship is now possible. Our main quest demands from us to build the best thing that is now possible, and that has just changed entirely.

Shopify recently launched an AI-powered shopping assistant that is powered by OpenAI’s ChatGPT

You’re also seeing — we announced a couple of weeks ago — Shop at AI, which I think is the coolest shopping concierge on the planet, whereby you as a consumer can use Shop at AI to browse through hundreds of millions of products and say things like, “I want to have a barbecue and here’s the theme,” and it will suggest great products, and you can buy it right in line, right through the shopping concierge.

Shopify has been using AI to help its merchants write product descriptions so that merchants can better focus on taking care of their customers

For example, the task of writing product descriptions is now made meaningfully easier by injecting AI into that process. The end result of that is merchants spend less time writing product descriptions and more time making beautiful products and communicating and engaging with their customers.
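
An AI-assisted product-description flow like the one Shopify describes reduces, at its simplest, to building a prompt from the merchant’s structured attributes and handing it to a text-generation model. The `build_prompt` helper and `generate` stub below are hypothetical, shown only to illustrate the shape of such an integration:

```python
def build_prompt(product):
    """Turn a merchant's structured attributes into a generation prompt."""
    attrs = ", ".join(f"{k}: {v}" for k, v in product.items())
    return ("Write a warm, two-sentence product description for an "
            f"item with these attributes: {attrs}.")

def generate(prompt):
    # Stand-in; a real integration would call a hosted language model here.
    return "A lovingly made piece for your home."

product = {"name": "ceramic mug", "material": "stoneware", "finish": "matte glaze"}
prompt = build_prompt(product)
print(prompt)
print(generate(prompt))
```

The merchant only ever sees the generated draft to review and edit, which is where the time savings come from.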

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

TSMC’s management sees demand in most end-markets as being mostly soft, but AI-related demand is growing

We observed the PC and smartphone market continue to be soft at the present time, while automotive demand is holding steady for TSMC and it is showing signs of soften into second half of 2023. I’m talking about automotive. On the other hand, we have recently observed incremental upside in AI-related demand

TSMC’s management thinks it’s a little too early to tell how big the semiconductor market can grow into because of AI, but they do see a positive trend

We certainly, we have observed an incremental increase in AI-related demand. It will also help the ongoing inventory digestion. The trend is very positive for TSMC. But today, if you ask me to quantitatively to say that how much of the amount increase or what is the dollar content in the server, it’s too early to say. It still continue to be developed. And ChatGPT right now reinforce the already stronger conviction that we have in HPC and AI as a structurally megatrend for TSMC’s business growth in the future. Whether this one has been included in our previous announcement is said that we have a 15% to 20% CAGR, the answer is probably partly yes, because of — for several, we have accelerated into our consideration. But this ChatGPT is a large language model is a new application. And we haven’t really have a kind of a number that put into our CAGR. But is definitely, as I said, it really reinforced our already strong conviction that HPC and AI will give us a much higher opportunities in the future…

…We did see some positive signs of the people getting much more attention to AI application, especially the ChatGPT’s area. However, as I said, quantitatively, we haven’t have enough data to summing it up to see what is the contribution and what kind of percentage to TSMC’s business. But we remain confident that this trend is definitely positive for TSMC.

TSMC’s management sees most of the AI work performed today as being focused on training, but thinks it will flip to inference in the future – nonetheless, high-performance semiconductors will still be needed for AI-related work

Right now, most of the AI concentrate or focus on training. And in the future, it will be inference. But let me say that, no matter what kind of application, they need to use a very high-performance semiconductor component, and that actually is a TSMC’s advantage. So we expect that semiconductor content starting from a data center for [indiscernible] to device and edge device or those kind of things, put all together, they need a very high-speed computing with a very power-efficient one. And so we expect it will add to TSMC’s business a lot.

Tencent (NASDAQ: TCEHY)

Tencent is using AI to deliver more relevant ads to users of its services

We upgraded our machine learning advertising platform to deliver higher conversions for advertisers. For example, we help our advertisers dynamically feature their most relevant products inside their advertisements by applying our deep learning model to the standard product unit attributes we have aggregated within our SPU database. 

Tencent’s management thinks there will be a proliferation of AI models – both foundational as well as vertical – from both established companies as well as startups

So in terms of going forward, we do believe that number one, there’s going to be many models in the market going forward for the large companies, I think each one of us would have a foundation model. And the model will be supporting our own use cases as well as offer it to the market both on a 2C basis as well as on a 2B basis. And at the same time, there will be many start-ups, which will be creating their own models, some of them may be general foundation model. Some of them may be more industry and vertical models and they will be coming with new applications. I think overall, it’s going to be a very vibrant industry from a model availability perspective.

Tencent’s management thinks AI can help improve the quality of UGC (user-generated content)

In terms of the user-to-user interaction type of services like social network and short video network and games, long lead content, there will be — a lot of usages that helps to increase the quality of content, the efficiency at which the content are created as well as lowering the cost of content creation. And that will be net beneficiary to these applications. 

Tencent’s management thinks China’s government is supportive of innovation in AI

Now in terms of — you asked about regulation. Without the government’s general stance is like it’s supportive of regulation, but the industry has to be regulated. And I think this is not something that’s specific to China, even around the world. And you look at the U.S., there’s a lot of public discussion about having regulation and even the founder of OpenAI has been testifying and asking for regulation in the industry. So I think that is something which is necessary, but we felt under the right regulation and regulatory framework, then the government stance is supportive of innovation and the industry will actually have room for healthy growth.

Tesla (NASDAQ: TSLA)

Tesla’s management thinks data will be incredibly valuable when building out AI services, especially in self-driving

Regarding Autopilot and Full Self-Driving. We’ve now crossed over 150 million miles driven by Full Self-Driving beta, and this number is growing exponentially. We’re — I mean, this is a data advantage that really no one else has. Those who understand AI will understand the importance of data — of training data and how fundamental that is to achieving an incredible outcome. So yes, so we’re also very focused on improving our neural net training capabilities as is one of the main limiting factors of achieving full autonomy. 

Tesla’s management thinks the company’s supercomputer project, Dojo, could significantly improve the cost of training AI models

So we’re continuing to simultaneously make significant purchases of NVIDIA GPUs and also putting a lot of effort into Dojo, which we believe has the potential for an order of magnitude improvement in the cost of training. 

The Trade Desk (NASDAQ: TTD)

Trade Desk’s management thinks that generative AI is only as good as the data that it has been trained on

ChatGPT is an amazing technology, but its usefulness is conditioned on the quality of the dataset it is pointed at. Regurgitating bad data, bad opinions or fake news, where AI generated deep bases, for example, will be a problem that all generative AI will likely be dealing with for decades to come. We believe many of the novel AI use cases in market today will face challenges with monetization and copyright and data integrity or truth and scale.

Trade Desk has very high-quality advertising data at scale (it’s handling 10 million ad requests per second) so management thinks that the company can excel by applying generative AI to its data

By contrast, we are so excited about our position in the advertising ecosystem when it comes to AI. We look at over 10 million ad requests every second. Those requests, in sum, represent a very robust and very unique dataset with incredible integrity. We can point generative AI at that dataset with confidence for years to come. We know that our size, our dataset size and integrity, our profitability and our team will make Koa and generative AI a promising part of our future.

Trade Desk’s management sees AI bringing positive impacts to many areas of the company’s business, such as generating code faster, generating creatives faster, and helping clients learn programmatic advertising faster

In the future, you’ll also hear us talk about other applications of AI in our business. These include generating code faster; changing the way customers understand and interact with their own data; generating new and more targeted creatives, especially for video and CTV; and using virtual assistance to shorten the learning curve that comes with the complicated world of programmatic advertising by optimizing the documentation process and making it more engaging.

Visa (NYSE: V)

Visa, which is in the digital payments industry, has a long history of working with AI and management sees AI as an important component of what the company does

I’ll just mention that we have a long history developing and using predictive AI and deep learning. We were one of the pioneers of applied predictive AI. We have an enormous data set that we’ve architected to be utilized at scale by hundreds of AI and ML, different services that people use all across Visa. We use it — we use it to run our company more effectively. We use it to serve our clients more effectively. And this will continue to be a big part of what we do.

Visa’s management thinks generative AI can take the company’s current AI services to the next level

As you transition to generative AI, this is where — we see this as an opportunity to take our current AI services to the next level. We are kind of as a platform, experimenting with a lot of the new capabilities that are available. We’ve got people all over the company that are tinkering and dreaming and thinking and doing testing and figuring out ways that we could use generative AI to transform how we do what we do, which is deliver simple, safe and easy-to-use payment solutions. And we’re also spending a fair bit of time thinking how generative AI will change the way that sellers sell, and we all buy and all of the shop. So that is — it’s a big area of opportunity that we’re looking at in many different ways across the company.

Wix (NASDAQ: WIX)

Wix’s management thinks AI can reduce a lot of friction for users in creating websites

First, our goal at Wix is to reduce friction. The easier it is for our users to build websites, the better Wix is. We have proven this many times before, through the development of software and products, including AI. As we make it easier for our users to achieve their goals, their satisfaction goes up, conversion goes up, user retention goes up, monetization goes up and the value of Wix grows…

…  Today, new emerging AI technologies create an even bigger opportunity to reduce friction in more areas that were almost impossible to solve a few years ago and further increase the value of our platform. We believe this opportunity will result in an increased addressable market and even more satisfied users. 

Wix’s management thinks that running e-commerce websites requires much more than just AI, and that even if AI could one day automate every layer, that day is still very far into the future

The second important point is that there is a huge amount of complexity in software, even with websites, and it’s growing. Even if AI could code a fully functional e-commerce website, for example — which I believe we are still very far from — there is still a need for the site to be deployed to a server, to run the code, to make sure the code continues to work, to manage and maintain a database for when someone wants to buy something, to manage security, to ship the products, to partner with payment gateways, and many more things. So even if you have something that can build pages and content and code…you still need so much more. This gets to my third and final point, which is that even in the far future, if AI is able to automate all of these layers, it will have to disrupt a lot of the software industry, including database management, server management and cloud computing. I believe we are very far from that and that before then, there will be many more opportunities for Wix to leverage AI and create value for our users.

Zoom Video Communications (NASDAQ: ZM)

Zoom management’s approach to AI is federated, empowering, and responsible

We outlined our approach to AI is to drive forward solutions that are federated empowering and responsible. Federated means flexible and customizable to businesses unique scenarios and nomenclature. Empowering refers to building solutions that improve individual and team productivity as well as enhance the customers’ experience. And responsible means customer control of their data with an emphasis on privacy, security, trust and safety.

Zoom recently made a strategic investment in Anthropic and management will be integrating Anthropic’s AI assistant feature across Zoom’s product portfolio

Last week, we announced our strategic investment in Anthropic, an AI safety and research company working to build reliable, interpretable and steerable AI systems. Our partnership with Anthropic further boosts our federated approach to AI by allowing Anthropic’s AI assistant cloud to be integrated across Zoom’s entire platform. We plan to begin by layering Claude into our Contact Center portfolio, which includes Zoom Contact Center, Zoom Virtual Agent, and now in-beta Zoom Workforce Engagement Management. With Claude guiding agents towards trustworthy resolutions and empowering several service for end users, companies will be able to take customer relationships to the next level.

Zoom’s management thinks that having AI models is important, but it’s even more important to fine-tune them based on proprietary data

Having said that, there are 2 things really important. One is the model, right? So OpenAI has a model, Anthropic and Facebook as well, Google and those companies. But the most important thing is how to leverage these models to fine tune based on your proprietary data, right? That is extremely important when it comes to collaboration, communication, right? Take a zoom employee, for example. We have so many meetings, right, and talk about — every day, like our sales team use the Zoom call with the customers. We accumulated a lot of, let’s say, internal meeting data. How to fine tune the model with those data, it’s very important, right?

Examples of good AI use cases in Zoom’s platform

We also look at our core meeting platform, right, in meeting summary. It is extremely important, right? And it’s also we have our team chat solution and also how to lever that to compose a chat. Remember, last year, we also have email candidate as well. How do we leverage the generative AI to understand the context, right, and kind of bring all the information relative to you and help you also generate the message, right? When you send an e-mail back to customers or prospects, right, either China message or e-mail, right? We can lever to generate is, right? I think a lot of areas, even like you like say, maybe you might be later to the meeting, right? 10 minutes later, you joined the meeting. You really want to stand in what had happened, right? Can you get a quick summary over the positive minutes. Yes, you just also have to generate AI as well. You also can get that as well. 

Zoom’s management thinks there are multiple ways to monetise AI

I think in terms of how to monetize generative I think first of all, take Zoom IQ for Sales for example, that’s a new service to target the sales deportment. That AI technology is based on generative AI, right, so we can monetize. And also seeing some features, even before the generative AI popularity, we have a live transmission feature, right? And also, that’s not a free feature. It is a paid feature, right, behind the pay wall, right? And also a lot of good features, right, take the Zoom meeting summary, for example, for enterprise — and the customers… For to customers, all those SMB customers, they did not deploy Zoom One, they may not get to those features, right? That’s the reason — another reason for us to monetize. I think there’s multiple ways to monetize, yes.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Alphabet, Amazon, ASML, Datadog, Etsy, Fiverr, Mastercard, MercadoLibre, Meta Platforms, Microsoft, Paycom Software, PayPal, Shopify, TSMC, Tencent, Tesla, The Trade Desk, Visa, Wix, Zoom. Holdings are subject to change at any time.

The Split-up Of Alibaba And What It Means

One of China’s largest companies, Alibaba, recently announced an important organisational restructure. Here’s what the reorganisation means.

Last week, on 31 March 2023, I was invited for a short interview on Money FM 89.3, Singapore’s first business and personal finance radio station. My friend Willie Keng, the founder of investor education website Dividend Titan, was hosting a segment for the radio show and we talked about a few topics:

  • Alibaba’s recent announcement that it would be splitting into six business units and what the move could mean for its shareholders
  • What investors should look out for now when it comes to China’s technology sector
  • The risks involved with investing in technology companies

You can check out the recording of our conversation below!


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have no vested interest in any companies mentioned. Holdings are subject to change at any time.

A Lesson From An Old Banking Crisis That’s Important Now

The Savings & Loan crisis in the USA started in the 1980s and holds an important lesson for the banks of today.

This March has been a wild month in the world of finance. So far, three banks in the USA have collapsed, including Silicon Valley Bank, the 16th largest bank in the country with US$209 billion in assets at the end of 2022. Meanwhile, First Republic Bank, ranked 14th in America with US$212 billion in assets, has seen a 90% month-to-date decline in its share price. Over in Europe, the Swiss bank Credit Suisse, with a market capitalization of US$8.6 billion on the 17th, was forced by regulators to agree to be acquired by its national peer, UBS, for just over US$3 billion on the 19th.

The issues plaguing the troubled banks have much to do with the sharp rise in interest rates seen in the USA and Europe that began in earnest in the third quarter of 2022. For context, the Federal Funds Rate – the key interest rate benchmark in the USA – rose from a target range of 1.5%-1.75% at the end of June 2022 to 4.5%-4.75% right now. Over the same period, the key interest rate benchmarks in the European Union rose from a range of -0.5% to 0.25%, to a range of 3.0% to 3.75%. Given the aforementioned banking failures, it’s clear that rising interest rates are already a big problem for banks. But there could be more pain ahead for banks that fail to understand a lesson from an old banking crisis in the USA.

A blast from the past

The Savings & Loan (S&L) crisis started in the early 1980s and stretched into the early 1990s. There were nearly 4,000 S&L institutions in the USA in 1980; by 1989, 563 of them had failed. S&L institutions are also known as thrift banks and savings banks. They provide similar services as commercial banks, such as deposit products, loans, and mortgages. But S&L institutions place a heavier emphasis on mortgages than commercial banks do; the latter tend to also focus on business and personal lending.

In the early 1980s, major changes were made to the regulations governing S&L institutions and these reforms sparked the crisis. One of the key changes was the removal of caps on the interest rates that S&L institutions could offer for deposits. Other important changes included the removal of loan-to-value limits on the loans that S&L institutions could make, and the relaxation of restrictions on the types of assets that they could own.

The regulatory changes were made to ease two major problems that S&L institutions were confronting. First, since the rates they could offer were limited by the government, S&L institutions found it increasingly difficult to compete for deposits after interest rates rose dramatically in the late 1970s. Second, the mortgage loans that S&L institutions made were primarily long-term fixed rate mortgages; the rise in interest rates caused the value of these mortgage loans to fall. The US government thought that S&L institutions could cope better if they were deregulated.
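To see why rising rates hurt the value of long-term fixed-rate mortgages, it helps to discount the loan’s fixed payments at the prevailing market rate. The sketch below uses hypothetical figures of my own (an annual payment of $8,000 over 30 years), purely to illustrate the mechanism:

```python
# Illustrative only: the value of a long-term fixed-rate loan is the
# present value of its fixed payments, discounted at the market rate.
# When market rates rise, that present value falls.

def loan_value(annual_payment: float, years: int, market_rate: float) -> float:
    """Present value of a level-payment loan at a given market rate."""
    return sum(annual_payment / (1 + market_rate) ** t
               for t in range(1, years + 1))

payment = 8_000   # hypothetical fixed annual payment on the mortgage
years = 30        # a long-term fixed-rate loan

value_low = loan_value(payment, years, 0.07)   # value when market rates are 7%
value_high = loan_value(payment, years, 0.12)  # value after rates rise to 12%

print(f"Value at 7% rates:  ${value_low:,.0f}")
print(f"Value at 12% rates: ${value_high:,.0f}")
```

With these made-up numbers, the same stream of mortgage payments is worth roughly a third less once market rates jump from 7% to 12% – exactly the squeeze the S&L institutions faced on their loan books.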

But because of the relaxation in rules, it became much easier for S&L institutions to engage in risky activities. For instance, S&L institutions were now able to pay above-market interest rates on deposits, which attracted a flood of savers. Besides paying high interest on deposits, another risky thing the S&L institutions did was to make questionable loans in areas outside of residential lending. For perspective, the percentage of S&L institutions’ total assets that were in mortgage loans fell from 78% in 1981 to just 56% by 1986.

Ultimately, the risks that the S&L institutions had taken on, as a group, were too high. As a result, many of them collapsed.

Learning from the past

Hingham Institution of Savings is a 189-year-old bank headquartered in Massachusetts, USA. Its current CEO, Robert Gaughen Jr, assumed control in the middle of 1993. Since then, the bank has been profitable every single year. From 1994 (the first full year in which the bank was led by Robert Gaughen Jr) to 2022, Hingham’s average return on equity (ROE) was a respectable 14.2%, and the lowest ROE achieved was 8.4% (in 2007). There are two things worth noting about the 1994-2022 timeframe in relation to Hingham: 

  • The bank had – and still has – heavy exposure to residential-related real estate lending.
  • The period covers the 2008-09 Great Financial Crisis. During the crisis, many American banks suffered financially and US house prices crashed. For perspective, the US banking industry had ROEs of 7.8%, 0.7%, and -0.7% in 2007, 2008, and 2009, while Hingham’s ROEs were higher – at times materially so – at 8.4%, 11.1%, and 12.8%.
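As a back-of-envelope illustration of what a 14.2% average ROE means (this is my own arithmetic, not a claim from Hingham), a bank that retained all of its earnings and compounded book value at that rate over the 29 years from 1994 to 2022 would multiply each dollar of book value many times over:

```python
# Simplified compounding sketch: assumes all earnings are retained and
# reinvested at the average ROE, which real banks (including dividend
# payers like Hingham) do not do exactly. It only shows the power of the rate.

avg_roe = 0.142   # Hingham's 1994-2022 average ROE, per the figures above
years = 29        # 1994 through 2022

growth_multiple = (1 + avg_roe) ** years
print(f"$1 of book value compounds to roughly ${growth_multiple:.0f}")
```

Under this simplified assumption, each dollar of book value grows to roughly $47 over the period.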

Hingham’s most recent annual shareholders’ meeting was held in April 2022. During the meeting, its president and chief operating officer, Patrick Gaughen, shared an important lesson that banks should learn from the S&L crisis (emphasis is mine):

“We spent some time talking in the past about why bankers have misunderstandings about the S&L crisis in the ’80s, with respect to how a lot of those banks failed. And that was in periods when rates were rising, there were a lot of S&Ls that looked for assets that had yields that would offset the rising price of their liabilities, and those assets had risks that the S&Ls did not appreciate. Rather than accepting tighter margins through those periods where there were flatter curves, they resisted. They tried to protect those profits. And in doing so, they put assets on the balance sheet that when your capital’s levered 10x or 11x or 12x or 13x to 1 — they put assets on the balance sheet that went under.”

In other words, the S&L institutions failed because they wanted to protect their profits at a time when their cost of deposits was high as a result of rising interest rates. And they tried to protect their profits by investing in risky assets to chase higher returns, a move which backfired.
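The danger of chasing yield at high leverage can be made concrete with some hypothetical numbers of my own. When assets are levered 10x against equity, even a modest loss on those assets cuts deeply into capital:

```python
# Hypothetical balance sheet: $10m of equity levered 10x into $100m of
# assets (the remaining $90m funded by deposits). Small percentage losses
# on assets translate into large losses of equity.

equity = 10.0            # bank capital, in $ millions (made-up figure)
leverage = 10            # assets are 10x equity
assets = equity * leverage

for loss_pct in (0.02, 0.05, 0.10):
    remaining_equity = equity - assets * loss_pct
    print(f"{loss_pct:.0%} asset loss -> equity falls to ${remaining_equity:g}m")
```

At 10x leverage, a mere 10% loss on assets wipes out the bank’s equity entirely – which is how a reach for yield ends with assets on the balance sheet “that went under.”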

A better way

At Hingham’s aforementioned shareholders’ meeting, Patrick Gaughen also shared what he and his colleagues think is the right way to manage the problem of a high cost of deposits stemming from rising interest rates (emphases are mine):

“And I think it’s important to think and maybe describe the way that I think about this is that we’re not protecting profits in any given period. We’re thinking about how to maximize returns on equity on a per share basis over a long time period, which means that there are probably periods where we earn, as I said earlier, outsized returns relative to that long-term trend. And then periods where we earn what are perfectly satisfactory returns. Looking back over history, with 1 year, 2 years exceptions, double-digit returns on equity. So it’s satisfactory, but it’s not 20% or 21%.

And the choices that we’ve made from a structural perspective about the business mean that we need to live with both sides of that trade as it occurs. But over the long term, there are things we can do to offset that. So the first thing we’re always focused on regardless of the level and the direction of rates is establishing new relationships with strong borrowers, deepening the relationships that we have with customers that we already have in the business. Because over time, those relationships as the shape of curve changes, those relationships are going to be the relationships that give us the opportunity to source an increasing volume of high quality assets. And those are the relationships that are going to form the core of the noninterest-bearing deposits to allow us to earn those spreads. And so that’s true regardless of the direction of rates.”

I find the approach of Hingham’s management team to be sensible. By being willing to accept lower profits when interest rates (and thus deposit rates) are high, management is able to ensure Hingham’s longevity and maximise its long-term returns.

In our current environment, interest rates are elevated, which makes the cost of deposits expensive for banks. If you’re looking to invest in the shares of a bank right now, you may want to determine if the bank’s management team has grasped the crucial lesson from the S&L crisis of old. A bank that fails to pay heed today may well face failure down the road.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Hingham Institution of Savings. Holdings are subject to change at any time.