How Innovation Happens

Innovation can appear from the most unexpected places, take unpredictable paths, or occur when supporting technologies improve over time.

There is a myriad of important political, social, economic, and healthcare issues plaguing our world today. But Jeremy and I are still optimistic on the stock market over the long term.

This is because we still see so much potential in humanity. There are nearly 8.1 billion individuals in the world right now, and the vast majority of people will wake up every morning wanting to improve the world and their own lot in life. This – the desire for progress – is ultimately what fuels the global economy and financial markets. Miscreants and Mother Nature will occasionally wreak havoc, but we have faith that humanity can clean it up. To us, investing in stocks is ultimately the same as having faith in the long-term ingenuity of humanity. We will remain long-term optimistic on stocks so long as we continue to have this faith.

There may be times in the future when it seems that mankind’s collective ability to innovate is faltering (things are booming now with the AI rush). But here are three stories I learnt recently that help me – and I hope you, too – keep the faith.

The first story is from Morgan Housel’s latest book Same As Ever. In it, he wrote: 

“Author Safi Bahcall notes that Polaroid film was discovered when sick dogs that were fed quinine to treat parasites showed an unusual type of crystal in their urine. Those crystals turned out to be the best polarizers ever discovered. Who predicts that? Who sees that coming? Nobody. Absolutely nobody.”

What the quinine-and-polarizers story shows is that the root of an innovative idea can show up completely unexpectedly. This brings me to the second story, which is also from Same As Ever. This time, it is Housel’s recounting of how the invention of the plane took an unpredictable path that led to nuclear power plants (nuclear power is a zero-emission, clean energy source, so it could play a really important role in society’s sustainable-energy efforts), and how ARPANET, a 1960s invention that linked computers to manage Cold War secrets, unpredictably led to the photo-sharing social app Instagram:

“When the airplane came into practical use in the early 1900s, one of the first tasks was trying to foresee what benefits would come from it. A few obvious ones were mail delivery and sky racing.

No one predicted nuclear power plants. But they wouldn’t have been possible without the plane. Without the plane we wouldn’t have had the aerial bomb. Without the aerial bomb we wouldn’t have had the nuclear bomb. And without the nuclear bomb we wouldn’t have discovered the peaceful use of nuclear power. Same thing today. Google Maps, TurboTax, and Instagram wouldn’t be possible without ARPANET, a 1960s Department of Defense project linking computers to manage Cold War secrets, which became the foundation for the internet. That’s how you go from the threat of nuclear war to filing your taxes from your couch—a link that was unthinkable fifty years ago, but there it is.”

This idea of one innovation leading to another brings me to my third story. There was a breakthrough in the healthcare industry in November 2023 when the UK’s health regulator approved a drug named Casgevy – developed by CRISPR Therapeutics and Vertex Pharmaceuticals – for the treatment of the blood disorders known as sickle cell disease and beta thalassaemia. Casgevy’s green light is groundbreaking because it is the first approved drug in the world that is based on the CRISPR (clustered regularly interspaced short palindromic repeats) gene-editing technique. A few weeks after the UK’s decision, Casgevy became the first gene-editing treatment available in the USA for sickle cell disease (the use of Casgevy for beta thalassaemia in the USA is currently still being studied). Casgevy is a huge upgrade for sickle cell patients over the current way the condition is managed. Here’s Sarah Zhang, writing at The Atlantic in November 2023:

“When Victoria Gray was still a baby, she started howling so inconsolably during a bath that she was rushed to the emergency room. The diagnosis was sickle-cell disease, a genetic condition that causes bouts of excruciating pain—“worse than a broken leg, worse than childbirth,” one doctor told me. Like lightning crackling in her body is how Gray, now 38, has described the pain. For most of her life, she lived in fear that it could strike at any moment, forcing her to drop everything to rush, once again, to the hospital.

After a particularly long and debilitating hospitalization in college, Gray was so weak that she had to relearn how to stand, how to use a spoon. She dropped out of school. She gave up on her dream of becoming a nurse.

Four years ago, she joined a groundbreaking clinical trial that would change her life. She became the first sickle-cell patient to be treated with the gene-editing technology CRISPR—and one of the first humans to be treated with CRISPR, period. CRISPR at that point had been hugely hyped, but had largely been used only to tinker with cells in a lab. When Gray got her experimental infusion, scientists did not know whether it would cure her disease or go terribly awry inside her. The therapy worked—better than anyone dared to hope. With her gene-edited cells, Gray now lives virtually symptom-free. Twenty-nine of 30 eligible patients in the trial went from multiple pain crises every year to zero in 12 months following treatment.

The results are so astounding that this therapy, from Vertex Pharmaceuticals and CRISPR Therapeutics, became the first CRISPR medicine ever approved, with U.K. regulators giving the green light earlier this month; the FDA appears prepared to follow suit in the next two weeks.” 

The manufacturing technologies behind Casgevy include electroporation, where an electric field is used to increase the permeability of a cell’s membrane. This enables molecules, such as genetic material and proteins, to be introduced into a cell for the purposes of gene editing. According to an expert call on electroporation that I reviewed, the technology has been around for over four decades, but only started gaining steam in recent years with the decline in genetic sequencing costs; without affordable genetic sequencing, it was expensive to know if a gene editing process done via electroporation was successful. The relentless work of Illumina has played a huge role in lowering genetic sequencing costs over time.

This chain shows how one innovation (cheaper genetic sequencing) supported another in a related field (the viability of electroporation), which then enabled yet another (the creation of gene-editing therapies).

The three stories I just shared highlight the different ways that innovation can happen. It can appear from the most unexpected places (quinine and polarizers); it can take unpredictable paths (from planes to nuclear power plants); and it can occur when supporting technologies improve over time (the development of Casgevy). What they signify is that we shouldn’t lose hope in mankind’s creative prowess when it appears that nothing new of significance has been built for a while. Sometimes, what’s needed is just time.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life.  I currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

An Attempt To Expand Our Circle of Competence

We tried to expand the limits of our investing knowledge.

Jeremy and I have not invested in an oil & gas company for years. The reason can be traced to the very first stocks I bought when I started investing. Back then, in October 2010, I bought six US-listed stocks at one go, two of which were Atwood Oceanics and National Oilwell Varco (or NOV). Atwood was an owner of oil rigs while NOV supplied parts and equipment that kept oil rigs running. 

I invested in them because I wanted to be diversified according to sectors. I thought that oil & gas was a sector that was worth investing in since the demand for oil would likely remain strong for a long time. My view on the demand for oil was right, but the investments still went awry. By the time I sold Atwood and NOV in September 2016 and June 2017, respectively, their stock prices were down by 77% and 31% from my initial investments. 

It turned out that while global demand for oil did indeed grow from 2010 to 2016 – the consumption of oil increased from 86.5 million barrels per day to 94.2 million barrels – oil prices still fell significantly over the same period, from around US$80 per barrel to around US$50. I was not able to predict oil prices, and I had completely missed the important fact that these prices would have an outsized impact on the business fortunes of both Atwood and NOV.

In its fiscal year ended 30 September 2010 (FY2010), Atwood’s revenue and net income were US$650 million and US$257 million, respectively. By FY2016, Atwood’s revenue had increased to US$1.0 billion, but its net income barely budged, coming in at US$265 million. Importantly, its return on equity fell from 21% to 9% in that period while its balance sheet worsened dramatically. For perspective, Atwood’s net debt (total debt minus cash and equivalents) ballooned from US$49 million in FY2010 to US$1.1 billion in FY2016.
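As a side note on the two ratios used above, here is a minimal sketch of the arithmetic. The equity figures are backed out from the stated ROEs purely for illustration – they are approximations, not Atwood’s reported balance-sheet equity:

```python
# Two of the metrics cited above, with the definitions used in the text:
# net debt = total debt minus cash and equivalents; ROE = net income / equity.
# Figures are in US$ millions, from Atwood's FY2010 and FY2016 results.

def net_debt(total_debt, cash_and_equivalents):
    """Net debt as defined in the text."""
    return total_debt - cash_and_equivalents

def return_on_equity(net_income, shareholders_equity):
    """Return on equity as a fraction."""
    return net_income / shareholders_equity

# Backing out the equity base from the stated ROEs shows why the ratio fell:
# profits were flat while the equity base more than doubled.
equity_fy2010 = 257 / 0.21   # roughly US$1.2 billion
equity_fy2016 = 265 / 0.09   # roughly US$2.9 billion

print(round(equity_fy2010), round(equity_fy2016))  # 1224 2944
```

In other words, a near-flat profit spread over a much larger (and more indebted) capital base is what dragged the ROE from 21% down to 9%.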

As for NOV, from 2010 to 2016, its revenue fell from US$12.2 billion to US$7.2 billion and its net income collapsed from US$1.7 billion to a loss of US$2.4 billion. This experience taught me to be wary of companies whose business results have strong links to commodity prices, since I had no ability to foretell their movements. 

Fast forward to July 2020, when Jeremy and I launched the investment fund we run: I was clear that I still had no ability to divine oil prices – and neither did Jeremy. Said another way, we were fully aware that companies related to the oil & gas industry were beyond our circle of competence. Then 2022 rolled around, and in August of that year, we came across a US-listed oil & gas company named Unit Corporation.

At the time, Unit had three segments that spanned the oil & gas industry’s value chain: Oil and Natural Gas, Mid-Stream, and Contract Drilling. In the Oil and Natural Gas segment, Unit owned oil and natural gas fields in the USA – most of which were in the Anadarko Basin in the Oklahoma region – and was producing these natural resources. The Mid-Stream segment consisted of Unit’s 50% ownership of Superior Pipeline Company, which gathers, processes, and treats natural gas, and owns more than 3,800 miles of gas pipelines (a private equity firm, Partners Group, controlled the other 50% stake). The last segment, Contract Drilling, housed Unit’s 21 available-for-use rigs for the drilling of oil and gas.

When we first heard of Unit in August 2022, it had a stock price of around US$60, a market capitalisation of just over US$560 million, and an enterprise value (market capitalisation minus net-cash) of around US$470 million (Unit’s net-cash was US$88 million back then). But the company’s intrinsic value could be a lot higher. 
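As a quick sanity check on those figures, the enterprise value here is simply market capitalisation minus net cash (numbers in US$ millions, approximated from the text):

```python
# Enterprise value for a company with net cash: market cap minus net cash.
market_cap = 560  # just over US$560 million in August 2022
net_cash = 88     # Unit's net cash at the time

enterprise_value = market_cap - net_cash
print(enterprise_value)  # 472, i.e. "around US$470 million"
```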

In January 2022, Unit launched a sales process for its entire Oil and Natural Gas segment, pegging the segment’s proven, developed, and producing reserves at a value of US$765 million. This US$765 million value came from the estimated future cash flows of the segment – based on oil prices we believe were around US$80 per barrel – discounted back to the present at 10% per year. Unit ended the sales process for the Oil and Natural Gas segment in June 2022 after selling only a small portion of its assets for US$45 million. Nonetheless, when we first came across Unit, the Oil and Natural Gas segment probably still had a value that was in the neighbourhood of the company’s estimation during the sales process, since oil prices were over US$80 per barrel in August 2022. Meanwhile, we also saw some estimates in the same month that it would cost at least US$400 million for someone to build the entire fleet of rigs that were in the Contract Drilling segment. As for the Mid-Stream segment, due to Superior Pipeline’s ownership structure and the cash flows it was producing, the value that accrued to Unit was not significant*.
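The US$765 million figure above is a discounted-cash-flow value: estimated future cash flows discounted back at 10% per year, which we understand matches the “PV-10” convention commonly used in oil & gas reserve reporting. The sketch below shows only the mechanics; the cash flows in it are illustrative assumptions, not Unit’s actual reserve-report numbers:

```python
# Present value of a stream of future cash flows discounted at 10% per year -
# the same mechanics behind the US$765m reserve valuation described above.
# The cash flows used here are illustrative, not Unit's actual figures.

def present_value(cash_flows, discount_rate=0.10):
    """Discount each year's cash flow back to today and sum them."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

# Hypothetical example: US$150m a year for 10 years from a producing field.
illustrative_cash_flows = [150] * 10
print(round(present_value(illustrative_cash_flows)))  # 922
```

Note how heavily the output depends on the assumed oil price embedded in the cash flows – which is exactly the catch discussed next.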

So here’s what we saw in Unit in August 2022 after putting everything together: The value of the company’s Oil and Natural Gas and Contract Drilling segments (around US$765 million and US$400 million, respectively) dwarfed its enterprise value of US$470 million.

But there was a catch. The estimated intrinsic values of Unit’s two important segments – Oil and Natural Gas, and Contract Drilling – were based on oil prices in the months leading up to August 2022. This led Jeremy and me to attempt to expand our circle of competence: we wanted to better understand the drivers of oil prices. There were other motivations too. First, Warren Buffett was investing tens of billions of dollars in the shares of oil & gas companies such as Occidental Petroleum and Chevron in the first half of 2022. Second, we came across articles and podcasts from oil & gas investors discussing the supply-and-demand dynamics in the oil market that could lead to sustained high prices for the energy commodity. So, we started digging into the history of oil prices and what influences them.

Here’s a brief history on major declines in the price of WTI Crude over the past four decades:

  • 1980 – 1986: From around US$30 to US$10
  • 1990 – 1994: From around US$40 to less than US$14
  • 2008 – 2009: From around US$140 to around US$40
  • 2014 – 2016: From around US$110 to less than US$33
  • 2020: From around US$60 to -US$37 

Since oil is a commodity, it would be logical to think that the balance between its supply and demand would heavily affect its price – when demand is lower than supply, prices would crash, and vice versa. The UK-headquartered BP, one of the largest oil-producing companies in the world, has a dataset on historical oil production and consumption going back to 1965. BP’s data is plotted in Figure 1 below, and it shows that from 1981 onwards, the demand for oil (consumption) was higher than the supply of oil (production) in every year. What this means is that the price of oil has, surprisingly, experienced at least five major crashes over the past four decades despite its demand being higher than supply over the entire period.

Figure 1; Source: BP
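For readers who want to replicate this comparison, the check itself is simple once the data is in tabular form. The sketch below uses made-up illustrative rows and hypothetical column names – BP’s actual spreadsheet has a different layout and would need cleaning before a check like this can run:

```python
# Flag years in which reported oil consumption exceeded reported production.
# The rows below are illustrative placeholders, not BP's actual figures
# (units: million barrels per day).
rows = [
    {"year": 1981, "production": 56.0, "consumption": 58.0},
    {"year": 2008, "production": 82.0, "consumption": 85.0},
    {"year": 2020, "production": 88.0, "consumption": 91.0},
]

deficit_years = [row["year"] for row in rows
                 if row["consumption"] > row["production"]]
print(deficit_years)  # [1981, 2008, 2020]
```

Running this kind of check across BP’s full 1965-onwards series is what surfaced the puzzle: every year from 1981 showed consumption above production.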

We shared our unexpected findings with our network of investor friends, which included Vision Capital’s Eugene Ng. He was intrigued, and noticed that the U.S. Energy Information Administration (EIA) maintains its own database of long-term global oil consumption and production. After obtaining results from the EIA’s data that were similar to what we got from BP’s, Eugene asked the EIA how it was possible for oil consumption to exceed production for decades. The EIA responded, and Eugene kindly shared the answers with us. It turns out that there could be errors within the EIA’s data. The possible sources of error are incomplete accounting of Transfers and Backflows in oil balances:

  • Transfers include the direct and indirect conversion of coal and natural gas to petroleum.
  • Backflows refer to double-counting of oil-streams in consumption. Backflows can happen if the data collection process does not properly account for recycled streams.

The EIA also gave an example of how a backflow could happen with the fuel additive, MTBE, or methyl tert-butyl ether (quote is lightly edited for clarity):

“The fuel additive MTBE is a useful example of both, as its most common feedstocks are methanol (usually from a non-petroleum fossil source) and Iso-Butylene whose feedstock likely comes from feed that has already been accounted for as butane (or iso-butane) consumption. MTBE adds a further complexity in that it is often exported as a chemical and thus not tracked in the petroleum trade balance.”

Thanks to the EIA, we realised that BP’s historical data on the demand and supply of oil might contain errors, and how those errors could have happened. But despite knowing this, Jeremy and I still could not tell what the actual demand-and-supply dynamics of oil were during the five major price crashes that happened from the 1980s to today**. We tried to expand our circle of competence into the oil & gas industry, but were stopped in our tracks. As a result, we decided to pass on investing in Unit.

I hope that this account of how Jeremy and I attempted to enlarge our circle of competence gives you ideas on how to improve your own investing process.

*In April 2018, Unit sold a 50% stake in Superior Pipeline to entities controlled by Partners Group – that’s how Partners Group’s aforementioned 50% control came about. When we first studied Unit in August 2022, either Unit or Partners Group could initiate a process after April 2023 to liquidate Superior Pipeline or sell it to a third party. If a liquidation or sale of Superior Pipeline were to happen, Partners Group would be entitled to an annualised return of 7% on its initial investment of US$300 million before Unit could receive any proceeds; as of 30 June 2022, a sum of US$354 million was required for Partners Group to achieve its return-goal. In the first half of 2022, the cash flow generated by Superior Pipeline was US$24 million, which meant that Unit’s Mid-Stream segment was on track to generate around US$50 million in cash flow for the whole of 2022. We figured that a sale of Superior Pipeline in April 2023, with around US$50 million in 2022 cash flow, would probably fetch a total amount that was in the neighbourhood of the US$354 million mentioned earlier that Partners Group was entitled to. So if Superior Pipeline were sold, there would not be much in the way of proceeds left for Unit after Partners Group had its piece.

**If you’re reading this and happen to have insight on the actual historical levels of production and consumption of oil during the past crashes, we would deeply appreciate it if you could get in touch with us. Thanks in advance!



What The USA’s Largest Bank Thinks About The State Of The Country’s Economy In Q4 2023

Insights from JPMorgan Chase’s management on the health of American consumers and businesses in the fourth quarter of 2023.

JPMorgan Chase (NYSE: JPM) is currently the largest bank in the USA by total assets. Because of this status, JPMorgan is naturally able to feel the pulse of the country’s economy. The bank’s latest earnings conference call – for the fourth quarter of 2023 – was held two weeks ago and contained useful insights on the state of American consumers and businesses. The bottom line is this: the US economy remains resilient, but there are significant risks that are causing JPMorgan’s management team to be cautious.

What’s shown between the two horizontal lines below are quotes from JPMorgan’s management team that I picked up from the call.


1. The US economy and consumer remain resilient, and management’s base case is that consumer credit remains strong, although loan losses (a.k.a. the net charge-off rate) for credit cards are expected to be “<3.5%” in 2024, compared to around 2.5% for 2023

The U.S. economy continues to be resilient, with consumers still spending, and markets currently expect a soft landing…

…We continue to expect the 2024 card net charge-off rate to be below 3.5%, consistent with Investor Day guidance…

…In terms of consumer resilience, I made some comments about this on the press call. The way we see it, the consumers find all of the relevant metrics are now effectively normalized. And the question really in light of the fact that cash buffers are now also normal, but that, that means that consumers have been spending more than they’re taking in is how that spending behavior adjusts as we go into the new year, in a world where cash buffers are less comfortable than they were. So one can speculate about different trajectories that, that could take, but I do think it’s important to take a step back and remind ourselves that consistent with that soft landing view, just in the central case modeling, obviously, we always worry about the tail scenarios is a very strong labor market. And a very strong labor market means, all else equal, strong consumer credit. So that’s how we see the world.

2.  Management thinks that inflation and interest rates may be higher than markets expect…

It is important to note that the economy is being fueled by large amounts of government deficit spending and past stimulus. There is also an ongoing need for increased spending due to the green economy, the restructuring of global supply chains, higher military spending and rising healthcare costs. This may lead inflation to be stickier and rates to be higher than markets expect.

3. …and they’re also cautious given the multitude of risks they see on the horizon

On top of this, there are a number of downside risks to watch. Quantitative tightening is draining over $900 billion of liquidity from the system annually, and we have never seen a full cycle of tightening. And the ongoing wars in Ukraine and the Middle East have the potential to disrupt energy and food markets, migration, and military and economic relationships, in addition to their dreadful human cost. These significant and somewhat unprecedented forces cause us to remain cautious.

4. Management is seeing a deterioration in the value of commercial real estate

The net reserve build was primarily driven by loan growth in card and the deterioration in the outlook related to commercial real estate valuations in the commercial bank.

5. Auto loan growth was strong

And in auto, originations were $9.9 billion, up 32% as we gained market share, while retaining strong margins.

6. Overall capital markets activity is picking up, but merger & acquisition (M&A) activity still remains weak…

We are starting the year with a healthy pipeline, and we are encouraged by the level of capital markets activity, but announced M&A remains a headwind and the extent as well as the timing of capital markets normalization remains uncertain…

…Gross Investment Banking and Markets revenue of $924 million was up 32% year-on-year primarily reflecting increased capital markets and M&A activity…

…So as you know, all else equal, this more dovish rate environment is, of course, supportive for capital markets. So if you go into the details a little bit, if you start with ECM [Equity Capital Markets], that helps higher — and the recent rally in the equity markets helps. I think there have been some modest challenges with the 2023 IPO vintage in terms of post-launch performance or whatever. So that’s a little bit of a headwind at the margin in terms of converting the pipeline, but I’m not too concerned about that in general. So I would expect to see rebound there. In DCM [Debt Capital Markets], again all else equal, lower rates are clearly supportive. One of the nuances there is the distinction between the absolute level of rates and the rate of change. So sometimes you see corporates seeing and expecting lower rates and, therefore, waiting to refinance in the hope of even lower rates. So that can go both ways. And then M&A, it’s a slightly different dynamic. I think there’s a couple of nuances there. One, as you obviously know, announced volume was lower this year. So that will be a headwind in reported revenues in 2024, all else equal. And of course, we are in an environment of M&A regulatory headwinds, as has been heavily discussed. But having said that, I think we’re seeing a bit of pickup in deal flow, and I would expect the environment to be a bit more supportive. 

7. …and appetite for loans among businesses is muted

C&I loans were down 2%, reflecting lower revolver utilization and muted demand for new loans as clients remain cautious…

…We expect strong loan growth in card to continue but not at the same pace as 2023. Still, this should help offset some of the impact of lower rates. Outside of card, loan growth will likely remain muted. 

8. Management is not seeing any changes to their macro outlook for the US economy

So the weighted average unemployment rate and the number is still 5.5%. We didn’t have any really big revisions in the macro outlook driving the numbers, and our skew remains as it has been, a little bit skewed to the downside. 

9. Management’s outlook for 2024 includes six rate cuts by the Fed, but that outlook comes from financial market data, and not from management’s insights

[Question] Coming back to your outlook and forecast for net interest income for the upcoming year with the 6 Fed fund rate cuts that you guys are assuming. Can you give us a little insight why you’re assuming 6 cuts? 

[Answer] I wish the answer were more interesting, but it’s just our practice. We just always use the forward curve for our outlook, and that’s what’s in there.



The Everlasting Things In Human Affairs

Knowing the things that are stable over time can be incredibly useful in all areas of life.

Morgan Housel is one of my favourite writers in finance. In November 2023, he published his second book, Same as Ever: A Guide to What Never Changes. As the title suggests, the book is about mankind’s behavioural patterns and ways of thinking that do not seem to change over time.

Jeff Bezos, Amazon’s founder, once said: 

“I very frequently get the question: “What’s going to change in the next 10 years?” And that is a very interesting question; it’s a very common one. I almost never get the question: “What’s not going to change in the next 10 years?” And I submit to you that that second question is actually the more important of the two — because you can build a business strategy around the things that are stable in time. … [I]n our retail business, we know that customers want low prices, and I know that’s going to be true 10 years from now. They want fast delivery; they want vast selection.”

Similarly, I believe that knowing the things that are stable over time can be incredibly useful in all areas of life – business, investing, relationships, and more. While reading Same as Ever, I made notes of the striking things I learnt from the book. I thought it would be useful to share this with a wider audience, so here they are:

The USA could have lost the Revolutionary War to Britain were it not for something as capricious as the wind

The Battle of Long Island was a disaster for George Washington’s army. His ten thousand troops were crushed by the British and its four-hundred-ship fleet. But it could have been so much worse. It could have been the end of the Revolutionary War. All the British had to do was sail up the East River and Washington’s cornered troops would have been wiped out. But it never happened, because the wind wasn’t blowing in the right direction and sailing up the river became impossible.

Historian David McCullough once told interviewer Charlie Rose that “if the wind had been in the other direction on the night of August twenty-eighth [1776], I think it would have all been over.”

“No United States of America if that had happened?” Rose asked.

“I don’t think so,” said McCullough.

“Just because of the wind, history was changed?” asked Rose.

“Absolutely,” said McCullough. 

Risk is what you don’t see

As financial advisor Carl Richards says, “Risk is what’s left over after you think you’ve thought of everything.” That’s the real definition of risk—what’s left over after you’ve prepared for the risks you can imagine. Risk is what you don’t see.

When a past event looks inevitable to us today, we may be fooled by hindsight bias

Two things can explain something that looks inevitable but wasn’t predicted by those who experienced it at the time: 

  • Either everyone in the past was blinded by delusion.
  • Or everyone in the present is fooled by hindsight.

We are crazy to think it’s all the former and none of the latter.

The level of uncertainty in the economy rarely fluctuates, just people’s perceptions

There is rarely more or less economic uncertainty; just changes in how ignorant people are to potential risks. Asking what the biggest risks are is like asking what you expect to be surprised about. If you knew what the biggest risk was you would do something about it, and doing something about it would make it less risky. What your imagination can’t fathom is the dangerous stuff, and it’s why risk can never be mastered.

Even when the Great Depression of the 1930s happened, unemployment was not thought to be an issue by people with high posts

The Depression, as we know today, began in 1929. But when the well-informed members of the National Economic League were polled in 1930 as to what they considered the biggest problem of the United States, they listed, in order:

1. Administration of justice

2. Prohibition

3. Disrespect for law

4. Crime

5. Law enforcement

6. World peace

And in eighteenth place . . . unemployment.

A year later, in 1931—a full two years into what we now consider the Great Depression—unemployment had moved to just fourth place, behind prohibition, justice, and law enforcement. That’s what made the Great Depression so awful: No one was prepared for it because no one saw it coming. So people couldn’t deal with it financially (paying their debts) and mentally (the shock and grief of sudden loss).

Having expectations instead of forecasts is important when trying to manage risk

It’s impossible to plan for what you can’t imagine, and the more you think you’ve imagined everything the more shocked you’ll be when something happens that you hadn’t considered. But two things can push you in a more helpful direction.

One, think of risk the way the State of California thinks of earthquakes. It knows a major earthquake will happen. But it has no idea when, where, or of what magnitude. Emergency crews are prepared despite no specific forecast. Buildings are designed to withstand earthquakes that may not occur for a century or more. Nassim Taleb says, “Invest in preparedness, not in prediction.” That gets to the heart of it. Risk is dangerous when you think it requires a specific forecast before you start preparing for it. It’s better to have expectations that risk will arrive, though you don’t know when or where, than to rely exclusively on forecasts— almost all of which are either nonsense or about things that are well-known. Expectations and forecasts are two different things, and in a world where risk is what you don’t see, the former is more valuable than the latter.

Two, realize that if you’re only preparing for the risks you can envision, you’ll be unprepared for the risks you can’t see every single time. So, in personal finance, the right amount of savings is when it feels like it’s a little too much. It should feel excessive; it should make you wince a little. The same goes for how much debt you think you should handle—whatever you think it is, the reality is probably a little less. Your preparation shouldn’t make sense in a world where the biggest historical events all would have sounded absurd before they happened.

Geniuses are unique in BOTH good and bad ways

Something that’s built into the human condition is that people who think about the world in unique ways you like almost certainly also think about the world in unique ways you won’t like…

…John Maynard Keynes once purchased a trove of Isaac Newton’s original papers at auction. Many had never been seen before, as they had been stashed away at Cambridge for centuries. Newton is probably the smartest human to ever live. But Keynes was astonished to find that much of the work was devoted to alchemy, sorcery, and trying to find a potion for eternal life. Keynes wrote:

I have glanced through a great quantity of this, at least 100,000 words, I should say. It is utterly impossible to deny that it is wholly magical and wholly devoid of scientific value; and also impossible not to admit that Newton devoted years of work to it.

I wonder: Was Newton a genius in spite of being addicted to magic, or was being curious about things that seemed impossible part of what made him so successful? I think it’s impossible to know. But the idea that crazy geniuses sometimes just look straight-up crazy is nearly unavoidable…

…Take Elon Musk. What kind of thirty-two-year-old thinks they can take on GM, Ford, and NASA at the same time? An utter maniac. The kind of person who thinks normal constraints don’t apply to them—not in an egotistical way, but in a genuine, believe-it-in-your-bones way. Which is also the kind of person who doesn’t worry about, say, Twitter etiquette.

A mindset that can dump a personal fortune into colonizing Mars is not the kind of mindset that worries about the downsides of hyperbole. And the kind of person who proposes making Mars habitable by constantly dropping nuclear bombs in its atmosphere is not the kind of person worried about overstepping the boundaries of reality.

The kind of person who says there’s a 99.9999 percent chance humanity is a computer simulation is not the kind of person worried about making untenable promises to shareholders. The kind of person who promises to solve the water problems in Flint, Michigan, within days of trying to save a Thai children’s soccer team stuck in a cave, within days of rebuilding the Tesla Model 3 assembly line in a tent, is not the kind of person who views his lawyers signing off as a critical step.

People love the visionary genius side of Musk, but want it to come without the side that operates in his distorted I-don’t-care-about-your-customs version of reality. But I don’t think those two things can be separated. They’re the risk-reward trade-offs of the same personality trait.

What gets you to the top also brings you down

What kind of person makes their way to the top of a successful company, or a big country? Someone who is determined, optimistic, doesn’t take no for an answer, and is relentlessly confident in their own abilities. What kind of person is likely to go overboard, bite off more than they can chew, and discount risks that are blindingly obvious to others? Someone who is determined, optimistic, doesn’t take no for an answer, and is relentlessly confident in their own abilities. Reversion to the mean is one of the most common stories in history. It’s the main character in economies, markets, countries, companies, careers—everything. Part of the reason it happens is because the same personality traits that push people to the top also increase the odds of pushing them over the edge.

Outrageous things can easily happen if the sample size is big enough

Evelyn Marie Adams won $3.9 million in the New Jersey lottery in 1986. Four months later she won again, collecting another $1.4 million. “I’m going to quit playing,” she told The New York Times. “I’m going to give everyone else a chance.” It was a big story at the time, because number crunchers put the odds of her double win at a staggering 1 in 17 trillion.

Three years later two mathematicians, Persi Diaconis and Frederick Mosteller, threw cold water on the excitement. If one person plays the lottery, the odds of picking the winning numbers twice are indeed 1 in 17 trillion. But if one hundred million people play the lottery week after week—which is the case in America—the odds that someone will win twice are actually quite good. Diaconis and Mosteller figured it was 1 in 30. That number didn’t make many headlines. “With a large enough sample, any outrageous thing is apt to happen,” Mosteller said.
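Diaconis and Mosteller’s point is a birthday-problem effect: any one named player’s odds stay astronomical, but across a huge pool, a double winner becomes likely. Here is a rough back-of-the-envelope sketch of that effect. The player count, win probability, and time horizon below are illustrative assumptions of mine, not the numbers from their actual calculation:

```python
import math

# Illustrative (made-up) numbers, NOT Diaconis and Mosteller's model:
players = 1_000_000        # people who play every weekly draw
p_win = 1 / 100_000        # chance a given player wins a given draw
weeks = 100                # horizon

# Each player's win count is approximately Poisson with mean lam:
lam = p_win * weeks                          # 0.001 expected wins per player
p_double = 1 - math.exp(-lam) * (1 + lam)    # P(a given player wins >= 2 times)

# Chance that *somebody* in the whole pool is a double winner:
p_someone = 1 - (1 - p_double) ** players
print(f"One named player wins twice: {p_double:.2e}")   # ~5 in 10 million
print(f"Anyone at all wins twice:    {p_someone:.0%}")  # ~39%
```

For any individual the double win is a five-in-ten-million fluke, yet across a million weekly players it happens roughly two times in five. Scale the pool up to America’s hundred million players and “1 in 30” stops being surprising.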

Why something bad happens nearly every year

If next year there’s a 1 percent chance of a new disastrous pandemic, a 1 percent chance of a crippling depression, a 1 percent chance of a catastrophic flood, a 1 percent chance of political collapse, and on and on, then the odds that something bad will happen next year—or any year—are . . . not bad.
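The arithmetic behind “not bad” odds is worth seeing once: with several independent low-probability risks, the chance that at least one hits is one minus the chance that none do. A minimal sketch, using the passage’s own hypothetical 1 percent figures (the risk list is illustrative):

```python
# Probability that at least one of several independent low-probability
# disasters occurs in a given year: 1 - P(none of them happen).
risks = {
    "pandemic": 0.01,
    "depression": 0.01,
    "catastrophic flood": 0.01,
    "political collapse": 0.01,
    "major war": 0.01,
}

p_none = 1.0
for p in risks.values():
    p_none *= (1 - p)          # all risks assumed independent

p_at_least_one = 1 - p_none
print(f"Chance something bad happens this year: {p_at_least_one:.1%}")  # ~4.9%
# With five independent 1% risks this is about 4.9%; with thirty such
# risks it climbs past 26%.
```

The longer the list of plausible 1 percent disasters, the closer “something bad happens this year” gets to a coin flip.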

The demise of local news, because of the internet, altered our perception of the frequency of bad news

The decline of local news has all kinds of implications. One that doesn’t get much attention is that the wider the news becomes, the more likely it is to be pessimistic. Two things make that so:

  • Bad news gets more attention than good news because pessimism is seductive and feels more urgent than optimism.
  • The odds of a bad news story—a fraud, a corruption, a disaster—occurring in your local town at any given moment are low. When you expand your attention nationally, the odds increase. When you expand it globally, the odds of something terrible happening at any given moment are 100 percent.

To exaggerate only a little: Local news reports on softball tournaments. Global news reports on plane crashes and genocides. 

The internet’s existence means we’re more aware of bad things happening – but bad things are not necessarily happening more today

In modern times our horizons cover every nation, culture, political regime, and economy in the world. There are so many good things that come from that. But we shouldn’t be surprised that the world feels historically broken in recent years and will continue that way going forward. It’s not—we just see more of the bad stuff that’s always happened than we ever saw before.

A contemporary of Ben Graham knew more about investing but was not as good a writer, so today he is much more obscure than Graham

Professor John Burr Williams had more profound insight on the topic of valuing stocks than Benjamin Graham. But Graham knew how to write a good paragraph, so he became the legend and sold millions of books.

US forces suffered setbacks against German forces late in WWII because American leaders failed to account for Hitler going mad

Historian Stephen Ambrose notes that Eisenhower and General Omar Bradley got all the war-planning reasoning and logic right in late 1944, except for one detail—the extent to which Hitler had lost his mind. An aide to Bradley mentioned during the war: “If we were fighting reasonable people they would have surrendered long ago.” But they weren’t, and it—the one thing that was hard to measure with logic—mattered more than anything.

Lehman Brothers actually had strong financial ratios – better than even Goldman Sachs and Bank of America – in 2008 just before it went bankrupt; what went wrong for Lehman was that investors lost faith in the bank

A few examples of how powerful this can be: Lehman Brothers was in great shape on September 10, 2008. Its tier 1 capital ratio—a measure of a bank’s ability to endure loss—was 11.7 percent. That was higher than the previous quarter. Higher than Goldman Sachs. Higher than Bank of America. It was more capital than Lehman had in 2007, when the banking industry was about as strong as it had ever been. 

Seventy-two hours later Lehman was bankrupt. The only thing that changed during those three days was investors’ faith in the company. One day they believed in the company and bought its debt. The next day that belief stopped, and so did its funding. That faith is the only thing that mattered. But it was the one thing that was hard to quantify, hard to model, hard to predict, and didn’t compute in a traditional valuation model.

Hyman Minsky’s economic theory of stability leading to instability can be found in nature too

California was hit with an epic drought in the mid-2010s. Then 2017 came, dropping a preposterous amount of moisture. Parts of Lake Tahoe received—I’m not making this up—more than sixty-five feet of snow in a few months. The six-year drought was declared over.

You’d think that would be great. But it backfired in an unexpected way. Record rain in 2017 led to record vegetation growth that summer. It was called a superbloom, and it caused even desert towns to be covered in green. A dry 2018 meant all that vegetation died and became dry kindling. That led to some of the biggest wildfires California had ever seen.

So record rain directly led to record fires. There’s a long history of this, verified by looking at tree rings, which inscribe both heavy rainfall and subsequent fire scars. The two go hand in hand. “A wet year reduces fires while increasing vegetation growth, but then the increased vegetation dries out in subsequent dry years, thereby increasing the fire fuel,” the National Oceanic and Atmospheric Administration wrote. That’s hardly intuitive, but here again—calm plants the seeds of crazy. 

Why financial markets will always overshoot on both ends of the optimism and pessimism spectrum

The only way to know we’ve exhausted all potential opportunity from markets—the only way to identify the top—is to push them not only past the point where the numbers stop making sense, but beyond the stories people believe about those numbers. When a tire company develops a new tire and wants to know its limitations, the process is simple. They put it on a car and run it until it blows up. Markets, desperate to know the limits of what other investors can endure, do the same thing. Always been the case, always will be.

Markets going beyond the point of crazy is a normal thing 

Accepting that crazy doesn’t mean broken is essential. Crazy is normal; beyond the point of crazy is normal. Every few years there seems to be a declaration that markets don’t work anymore—that they’re all speculation or detached from fundamentals. But it’s always been that way. People haven’t lost their minds; they’re just searching for the boundaries of what other investors are willing to believe.

Many things in life have a “most convenient size”

“For every type of animal there is a most convenient size, and a change in size inevitably carries with it a change of form,” Haldane wrote. A most convenient size. A proper state where things work well but break when you try to scale them to a different size or speed. It applies to so many things in life…

…Starbucks had 425 stores in 1994, its twenty-third year in existence. In 1999 it opened 625 new stores. By 2007 it was opening 2,500 stores per year—a new coffee shop every four hours. One thing led to another. The need to hit growth targets eventually elbowed out rational analysis. Examples of Starbucks saturation became a joke. Same-store sales growth fell by half as the rest of the economy boomed. 

Howard Schultz wrote to senior management in 2007: “In order to go from less than 1,000 stores to 13,000 stores we have had to make a series of decisions that, in retrospect, have led to the watering down of the Starbucks experience.” Starbucks closed six hundred stores in 2008 and laid off twelve thousand employees. Its stock fell 73 percent, which was dreadful even by 2008 standards.

Schultz wrote in his 2011 book Onward: “Growth, we now know all too well, is not a strategy. It is a tactic. And when undisciplined growth became a strategy, we lost our way.” There was a most convenient size for Starbucks—there is for all businesses. Push past it and you realize that revenue might scale but disappointed customers scale faster, in the same way Robert Wadlow became a giant but struggled to walk.

Different management skills are needed as a company changes in size

A management style that works brilliantly at a ten-person company can destroy a thousand-person company, which is a hard lesson to learn when some companies grow that fast in a few short years. Travis Kalanick, the former CEO of Uber, is a great example. No one but him was capable of growing the company early on, and anyone but him was needed as the company matured. I don’t think that’s a flaw, just a reflection that some things don’t scale. 

Militaries are really good at innovating because the problems they deal with are so important

Militaries are engines of innovation because they occasionally deal with problems so important—so urgent, so vital—that money and manpower are removed as obstacles, and those involved collaborate in ways that are hard to emulate during calm times. You cannot compare the incentives of Silicon Valley coders trying to get you to click on ads to Manhattan Project physicists trying to end a war that threatened the country’s existence. You can’t even compare their capabilities. The same people with the same intelligence have wildly different potential under different circumstances.

How the harsh conditions of the 1930s forced the US to innovate

The 1930s were a disaster, one of the darkest periods in American history. Almost a quarter of Americans were out of work in 1932. The stock market fell 89 percent. Those two economic stories dominate the decade’s attention, and they should. But there’s another story about the 1930s that rarely gets mentioned: it was, by far, the most productive and technologically progressive decade in U.S. history.

The number of problems people solved, and the ways they discovered how to build stuff more efficiently, is a forgotten story of the ’30s that helps explain a lot of why the rest of the twentieth century was so prosperous. Here are the numbers: total factor productivity—that’s economic output relative to the number of hours people worked and the amount of money invested in the economy—hit levels not seen before or since. Economist Alex Field wrote that by 1941 the U.S. economy was producing 40 percent more output than it had in 1929, with virtually no increase in the total number of hours worked. Everyone simply became staggeringly more productive.

A couple of things happened during this period that are worth paying attention to, because they explain why this happened when it did. Take cars. The 1920s was the era of the automobile. The number of cars on the road in America jumped from one million in 1912 to twenty-nine million by 1929. But roads were a different story. Cars were sold in the 1920s faster than roads were built. That changed in the 1930s when road construction, driven by the New Deal’s Public Works Administration, took off. Spending on road construction went from 2 percent of GDP in 1920 to over 6 percent in 1933 (versus less than 1 percent today). The Department of Highway Transportation tells a story of how quickly projects began: 

Construction began on August 5, 1933, in Utah on the first highway project under the act. By August 1934, 16,330 miles of new roadway projects were completed.

What this did to productivity is hard to overstate. The Pennsylvania Turnpike, as one example, cut travel times between Pittsburgh and Harrisburg by 70 percent. The Golden Gate Bridge, begun in 1933, opened up Marin County, which had previously been accessible from San Francisco only by ferryboat. Multiply those kinds of leaps across the nation and the 1930s was the decade that transportation blossomed in the United States. It was the last link that made the century-old railroad network truly efficient, creating last-mile service that connected the world.

Electrification also surged in the 1930s, particularly to rural Americans left out of the urban electrification of the 1920s. The New Deal’s Rural Electrification Administration (REA) brought power to farms in what may have been the decade’s only positive development in regions that were economically devastated. The number of rural American homes with electricity rose from less than 10 percent in 1935 to nearly 50 percent by 1945. It is hard to fathom, but it was not long ago—during some of our lifetimes and most of our grandparents’—that a substantial portion of America was literally dark.

Franklin Roosevelt said in a speech on the REA:

Electricity is no longer a luxury. It is a definite necessity. . . . In our homes it serves not only for light, but it can become the willing servant of the family in countless ways. It can relieve the drudgery of the housewife and lift the great burden off the shoulders of the hardworking farmer.

Electricity becoming a “willing servant”—introducing washing machines, vacuum cleaners, and refrigerators—freed up hours of household labor in a way that let female workforce participation rise. It’s a trend that lasted more than half a century and is a key driver of both twentieth-century growth and gender equality.

Another productivity surge of the 1930s came from everyday people forced by necessity to find more bang for their buck. The first supermarket opened in 1930. The traditional way of purchasing food was to walk from your butcher, who served you from behind a counter, to the baker, who served you from behind a counter, to a produce stand, where your order was taken. Combining everything under one roof and making customers pick it from the shelves themselves was a way to make the economics of selling food work during a time when a quarter of the nation was unemployed.

Laundromats were also invented in the 1930s after sales of individual washing machines fell; they marketed themselves as washing machine rentals.

Factories of all kinds looked at bludgeoned sales and said, “What must we do to survive?” The answer often was to build the kind of assembly line Henry Ford introduced to the world in the previous decade. Output per hour in factories had grown 21 percent during the 1920s. “During the Depression decade of 1930–1940— when many plants were shut down or working part time,” Frederick Lewis Allen wrote, “there was intense pressure for efficiency and economy—it had increased by an amazing 41 per cent.”

“The trauma of the Great Depression did not slow down the American invention machine,” economist Robert Gordon wrote. “If anything, the pace of innovation picked up.” Driving knowledge work in the ’30s was the fact that more young people stayed in school because they had nothing else to do. High school graduation surged during the Depression to levels not seen again until the 1960s.

All of this—the better factories, the new ideas, the educated workers— became vital in 1941 when America entered the war and became the Allied manufacturing engine. The big question is whether the technical leap of the 1930s could have happened without the devastation of the Depression. And I think the answer is no—at least not to the extent that it occurred. You could never push through something like the New Deal without an economy so wrecked that people were desperate to try anything to fix it.

Innovation takes time to be recognised, so it’s easy for people to think that innovation is lacking 

A lot of pessimism is fueled by the fact that it often looks like we haven’t innovated in years—but that’s usually because it takes years to notice a new innovation.

Economic progress has been incredible over long periods of time, but is unnoticeable over short periods

Real GDP per capita increased eightfold in the last hundred years. America of the 1920s had the same real per capita GDP as Turkmenistan does today. Our growth over the last century has been unbelievable. But GDP growth averages about 3 percent per year, which is easy to ignore in any given year, decade, or lifetime. Americans over age fifty have seen real GDP per person at least double since they were born. But people don’t remember the world when they were born. They remember the last few months, when progress is always invisible. Same for careers, social progress, brands, companies, and relationships. Progress always takes time, often too much time to even notice it’s happened.
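The invisibility of this kind of compounding is easy to demonstrate. A quick sketch, working backward from the passage’s eightfold-in-a-century figure (which implies roughly 2 percent real per-capita growth per year; the arithmetic here is mine, not the book’s):

```python
# An eightfold rise in 100 years implies ~2.1% per-capita growth per year:
rate = 8 ** (1 / 100) - 1
print(f"Implied annual growth: {rate:.2%}")       # ~2.10%

# Year over year the change is barely noticeable...
print(f"After 1 year:    {(1 + rate) ** 1:.3f}x")
# ...but over 50 years income per person nearly triples...
print(f"After 50 years:  {(1 + rate) ** 50:.2f}x")
# ...and the full century delivers the eightfold gain:
print(f"After 100 years: {(1 + rate) ** 100:.1f}x")
```

A 2 percent year is statistically indistinguishable from stagnation to anyone living through it, which is exactly why the cumulative result feels unbelievable.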

Why progress happens slowly but bad news comes quickly 

Growth always fights against competition that slows its rise. New ideas fight for attention, business models fight incumbents, skyscrapers fight gravity. There’s always a headwind. But everyone gets out of the way of decline. Some might try to step in and slow the fall, but it doesn’t attract masses of outsiders who rush in to push back in the other direction the way progress does…

…The irony is that growth and progress are way more powerful than setbacks. But setbacks will always get more attention because of how fast they occur. So slow progress amid a drumbeat of bad news is the normal state of affairs. It’s not an easy thing to get used to, but it’ll always be with us. 

Good news is what did NOT happen whereas bad news is what did happen

A lot of progress and good news concerns things that didn’t happen, whereas virtually all bad news is about what did occur. Good news is the deaths that didn’t take place, the diseases you didn’t get, the wars that never happened, the tragedies avoided, and the injustices prevented. That’s hard for people to contextualize or even imagine, let alone measure. But bad news is visible. More than visible, it’s in your face. It’s the terrorist attack, the war, the car accident, the pandemic, the stock market crash, and the political battle you can’t look away from.

Why we underestimate big risks

Big risks are easy to overlook because they’re just a chain reaction of small events, each of which is easy to shrug off. So people always underestimate the odds of big risks…

…The Tenerife airport disaster in 1977 is the deadliest aircraft accident in history. The error was stunning. One plane took off while another was still on the runway, and the two Boeing 747s collided, killing 583 people on a runway on the Spanish island. In the aftermath authorities wondered how such an egregious catastrophe could occur. One postmortem study explained exactly how: “Eleven separate coincidences and mistakes, most of them minor . . . had to fall precisely into place” for the crash to occur. Lots of tiny mistakes added up to a huge one. It’s good to always assume the world will break about once per decade, because historically it has. The breakages feel like low-probability events, so it’s common to think they won’t keep happening. But they do, again and again, because they’re actually just smaller high-probability events compounding off one another. That isn’t intuitive, so we’ll discount big risks like we always have.

The fascinating history behind the phrase, “The American Dream”

“The American dream” was a phrase first used by author James Truslow Adams in his 1931 book The Epic of America. The timing is interesting, isn’t it? It’s hard to think of a year when the dream looked more broken than in 1931.

When Adams wrote that “a man by applying himself, by using the talents he has, by acquiring the necessary skills, can rise from lower to higher status, and that his family can rise with him,” the unemployment rate was nearly 25 percent and wealth inequality was near the highest it had been in American history.

When he wrote of “that American dream of a better, richer, and happier life for all our citizens of every rank,” food riots were breaking out across the country as the Great Depression ripped the economy to shreds.

When he wrote of “being able to grow to fullest development as men and women, unhampered by the barriers which had slowly been erected in older civilizations,” schools were segregated and some states required literacy tests to vote.

At few points in American history had the idea of the American dream looked so false, so out of touch with the reality everyone faced. Yet Adams’s book surged in popularity. An optimistic phrase born during a dark period in American history became an overnight household motto.

One quarter of Americans being out of work in 1931 didn’t ruin the idea of the American Dream. The stock market falling 89 percent—and bread lines across the country—didn’t, either. The American Dream actually may have gained popularity because things were so dire. You didn’t have to see the American Dream to believe in it—and thank goodness, because in 1931 there was nothing to see. You just had to believe it was possible and then, boom, you felt a little better.

In nature, species are never perfect in any one trait because perfection involves compromising in other areas

There is no perfect species, one adapted to everything at all times. The best any species can do is to be good at some things until the things it’s not good at suddenly matter more. And then it dies.

A century ago a Russian biologist named Ivan Schmalhausen described how this works. A species that evolves to become very good at one thing tends to become vulnerable at another. A bigger lion can kill more prey, but it’s also a larger target for hunters to shoot at. A taller tree captures more sunlight, but becomes vulnerable to wind damage. There is always some inefficiency. So species rarely evolve to become perfect at anything, because perfecting one skill comes at the expense of another skill that will eventually be critical to survival. The lion could be bigger and catch more prey; the tree could be taller and get more sun. But they’re not, because it would backfire. So they’re all a little imperfect. Nature’s answer is a lot of good enough, below-potential traits across all species.

Biologist Anthony Bradshaw says that evolution’s successes get all the attention, but its failures are equally important. And that’s how it should be: Not maximizing your potential is actually the sweet spot in a world where perfecting one skill compromises another.

The probability of a species going extinct is independent of its age

Leigh Van Valen was a crazy-looking evolutionary biologist who came up with a theory so wild no academic journal would publish it. So he created his own journal and published it, and the idea eventually became accepted wisdom. Those kinds of ideas—counterintuitive, but ultimately true—are the ones worth paying most attention to, because they’re easiest to overlook.

For decades, scientists assumed that the longer a species had been around, the more likely it was to stick around, because age proved a strength that was likely to endure. Longevity was seen as both a trophy and a forecast. In the early 1970s, Van Valen set out to prove that the conventional wisdom was right. But he couldn’t. The data just didn’t fit.

He began to wonder whether evolution was such a relentless and unforgiving force that long-lived species were just lucky. The data fit that theory better. You’d think a new species discovering its niche would be fragile and susceptible to extinction—let’s say a 10 percent chance of extinction in a given period—while an old species had proven its might and had, say, a 0.01 percent chance of extinction.

But when Van Valen plotted extinctions by a species’ age, the trend looked more like a straight line. Some species survived a long time. But among groups of species, the probability of extinction was roughly the same whether it was 10,000 years old or 10 million years old.

In a 1973 paper titled “A New Evolutionary Law,” Van Valen wrote that “the probability of extinction of a taxon is effectively independent of its age.” If you take a thousand marbles and remove 2 percent of them each year, some marbles will remain in the jar after twenty years. But the odds of being picked out are the same every year (2 percent). Marbles don’t get better at staying in the jar. Species are the same. Some happen to live a long time, but the odds of surviving don’t improve over time. Van Valen argued that’s the case mainly because competition isn’t like a football game that ends with a winner who can then take a break. Competition never stops. A species that gains an advantage over a competitor instantly incentivizes the competitor to improve. It’s an arms race.
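Van Valen’s marble jar is a constant-hazard survival model: the removal probability never changes with age, so long-lived marbles are lucky, not skilled. A minimal sketch using the passage’s own numbers (a thousand marbles, 2 percent removed per year):

```python
# Constant 2% annual "extinction" hazard, independent of age.
hazard = 0.02
marbles = 1000

# Expected survivors after n years: marbles * (1 - hazard)^n
for years in (1, 20, 100):
    survivors = marbles * (1 - hazard) ** years
    print(f"after {years:3d} years: ~{survivors:.0f} marbles left")

# The key point: the chance of being removed THIS year is 2% for every
# marble, whether it has sat in the jar for 1 year or 100. Survival to
# old age reflects accumulated luck, not an improving survival skill.
```

After twenty years roughly two-thirds of the marbles remain, yet none of them has become any better at staying in the jar, which is Van Valen’s claim about species.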

Evolution is the study of advantages. Van Valen’s idea is simply that there are no permanent advantages. Everyone is madly scrambling all the time, but no one gets so far ahead that they become extinction-proof.

An example of the unpredictable path of innovations: how planes made nuclear power plants possible

When the airplane came into practical use in the early 1900s, one of the first tasks was trying to foresee what benefits would come from it. A few obvious ones were mail delivery and sky racing. No one predicted nuclear power plants. But they wouldn’t have been possible without the plane. Without the plane we wouldn’t have had the aerial bomb. Without the aerial bomb we wouldn’t have had the nuclear bomb. And without the nuclear bomb we wouldn’t have discovered the peaceful use of nuclear power. Same thing today. Google Maps, TurboTax, and Instagram wouldn’t be possible without ARPANET, a 1960s Department of Defense project linking computers to manage Cold War secrets, which became the foundation for the internet. That’s how you go from the threat of nuclear war to filing your taxes from your couch—a link that was unthinkable fifty years ago, but there it is.

The fascinating backstory behind the invention of Polaroid film

Author Safi Bahcall notes that Polaroid film was discovered when sick dogs that were fed quinine to treat parasites showed an unusual type of crystal in their urine. Those crystals turned out to be the best polarizers ever discovered. Who predicts that? Who sees that coming? Nobody. Absolutely nobody. 

The power of incentives can explain extreme events, unsustainable events occurring for prolonged periods of time, and warped beliefs

When good and honest people can be incentivized into crazy behavior, it’s easy to underestimate the odds of the world going off the rails. Everything from wars to recessions to frauds to business failures to market bubbles happen more often than people think because the moral boundaries of what people are willing to do can be extended with certain incentives. That goes both ways. It’s easy to underestimate how much good people can do, how talented they can become, and what they can accomplish when they operate in a world where their incentives are aligned toward progress.

Extremes are the norm. Unsustainable things can last longer than you anticipate. Incentives can keep crazy, unsustainable trends going longer than seems reasonable because there are social and financial reasons preventing people from accepting reality for as long as they can. A good question to ask is, “Which of my current views would change if my incentives were different?” If you answer “none,” you are likely not only persuaded but blinded by your incentives.

It’s hard to predict our behaviour during downturns because the environment changes so much

In investing, saying “I will be greedy when others are fearful” is easier said than done, because people underestimate how much their views and goals can change when markets break. The reason you may embrace ideas and goals you once thought unthinkable during a downturn is because more changes during downturns than just asset prices.

If I, today, imagine how I’d respond to stocks falling 30 percent, I picture a world where everything is like it is today except stock valuations, which are 30 percent cheaper. But that’s not how the world works. Downturns don’t happen in isolation. The reason stocks might fall 30 percent is because big groups of people, companies, and politicians screwed something up, and their screwups might sap my confidence in our ability to recover. So my investment priorities might shift from growth to preservation. It’s difficult to contextualize this mental shift when the economy is booming. And even though Warren Buffett says to be greedy when others are fearful, far more people agree with that quote than actually act on it. The same idea holds true for companies, careers, and relationships. Hard times make people do and think things they’d never imagine when things are calm.

Why humans prefer complexity over simplicity

The question then is: Why? Why are complexity and length so appealing when simplicity and brevity will do? A few reasons: 

Complexity gives a comforting impression of control, while simplicity is hard to distinguish from cluelessness. 

In most fields a handful of variables dictate the majority of outcomes. But paying attention to only those few variables can feel like you’re leaving too much of the outcome to fate. The more knobs you can fiddle with—the hundred-tab spreadsheet, or the Big Data analysis—the more control you feel you have over the situation, if only because the impression of knowledge increases. The flip side is that paying attention to only a few variables while ignoring the majority of others can make you look ignorant. If a client says, “What about this, what’s happening here?” and you respond, “Oh, I have no idea, I don’t even look at that,” the odds that you’ll sound uninformed are greater than the odds you’ll sound like you’ve mastered simplicity.

Things you don’t understand create a mystique around people who do. 

If you say something I didn’t know but can understand, I might think you’re smart. If you say something I can’t understand, I might think you have an ability to think about a topic in ways I can’t, which is a whole different species of admiration. When you understand things I don’t, I have a hard time judging the limits of your knowledge in that field, which makes me more prone to taking your views at face value.

Length is often the only thing that can signal effort and thoughtfulness. 

A typical nonfiction book covering a single topic is perhaps 250 pages, or something like 65,000 words. The funny thing is the average reader does not come close to finishing most books they buy. Even among bestsellers, average readers quit after a few dozen pages. Length, then, has to serve a purpose other than providing more material.

My theory is that length indicates the author has spent more time thinking about a topic than you have, which can be the only data point signaling they might have insights you don’t. It doesn’t mean their thinking is right. And you may understand their point after two chapters. But the purpose of chapters 3–16 is often to show that the author has done so much work that chapters 1 and 2 might have some insight. Same goes for research reports and white papers.

Simplicity feels like an easy walk. Complexity feels like a mental marathon.

If the reps don’t hurt when you’re exercising, you’re not really exercising. Pain is the sign of progress that tells you you’re paying the unavoidable cost of admission. Short and simple communication is different. Richard Feynman and Stephen Hawking could teach math with simple language that didn’t hurt your head, not because they dumbed down the topics but because they knew how to get from A to Z in as few steps as possible. An effective rule of thumb doesn’t bypass complexity; it wraps things you don’t understand into things you do. Like a baseball player who—by keeping a ball level in his gaze—knows where the ball will land as well as a physicist calculating the ball’s flight with precision.

The problem with simplicity is that the reps don’t hurt, so you don’t feel like you’re getting a mental workout. It can create a preference for laborious learning that students are actually okay with because it feels like a cognitive bench press, with all the assumed benefits.

Why people will always disagree

The question “Why don’t you agree with me?” can have infinite answers. Sometimes one side is selfish, or stupid, or blind, or uninformed. But usually a better question is, “What have you experienced that I haven’t that makes you believe what you do? And would I think about the world like you do if I experienced what you have?”

It’s the question that contains the most answers about why people don’t agree with one another. But it’s such a hard question to ask. It’s uncomfortable to think that what you haven’t experienced might change what you believe, because it’s admitting your own ignorance. It’s much easier to assume that those who disagree with you aren’t thinking as hard as you are.

So people will disagree, even as access to information explodes. They may disagree more than ever because, as Benedict Evans says, “The more the Internet exposes people to new points of view, the angrier people get that different views exist.” Disagreement has less to do with what people know and more to do with what they’ve experienced. And since experiences will always be different, disagreement will be constant. Same as it’s ever been. Same as it will always be. Same as it ever was.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have a vested interest in Amazon. Holdings are subject to change at any time.

Ben Graham’s Q&A

Ben Graham appeared in a news clip in the 1950s, answering questions and assuaging people’s worries about the stock market.

I recently came across an old US TV news clip from the 1950s that featured Ben Graham, the mentor of Warren Buffett, and the author of the highly influential investing texts, The Intelligent Investor and Security Analysis. In the clip, Graham was leading a seminar at Columbia University together with Dean Courtney Brown. The two men gave a short speech and answered questions from the crowd. 

The news clip also featured a short interview of Senator William Fulbright, who, at the time, was commissioning a study on the US stock market after stock prices had advanced near the heights of the 1929 peak just before the Great Depression of the 1930s reared its ugly head. (The study was conducted and published in 1955.)

I was fascinated by the news clip because Fulbright and the people asking Graham and Brown questions had worries about the stock market that are similar to those of today. For example, Fulbright was concerned that stock prices were too high and might collapse drastically yet again, similar to the great crash that happened during the Great Depression. In another example, the questioner at the 21:09 mark was concerned about inflation that was driven by “deficits spending”, “easy money policy”, “increased union wages”, “increased minimum wage”, and a “rogue [spending] programme of US$101 billion which the government has just announced” – these are worries from the 1950s that would absolutely fit in today. And importantly, the Dow Jones Industrial Average (I’m using the Dow because it is the index that is referenced in the news clip) is up from around 400 points in 1955 to over 37,000 currently.

I decided to create a transcript of the news clip for my own future reference, and thought I would share it in case it might be useful for any of you reading this. Enjoy!

Transcript

TV presenter (10:00): There is no shortage of experts on the market. As for us, we’re barely able to tell the difference between a bull and a bear. So we sat in on part of a seminar at the Graduate School of Business at Columbia University. After all, it’s older than the stock exchange, and we thought professors familiar with the language of the street might treat the market with detachment. Dean Courtney Brown and Professor Benjamin Graham were instructing future brokers and customers’ men. Here is See It Now’s short course in the market.

Courtney Brown (10:36): First let me give a caution. I hardly need give it to a group of informed students such as you. No one knows precisely why the market behaves as it behaves, either in retrospect or in prospect. The best we can do, as you well know, is express informed judgments. But it is important that those judgments be informed. We do know that there has been a substantial rise. That rise has been going on for a number of years, particularly since the middle of 1953. And we do know that the rate of that rise has been very rapid, uncomfortably like that of the 1928-29 period. It has resulted in a lot of comparisons being made in the press. Moreover the present level of stock prices, as measured by the Dow Jones Averages, is about equal to, indeed a little above, the peaks of 1929.

A number of explanations have been advanced regarding the stock market’s rise that suggest it may reflect a return to inflationary conditions. This doesn’t seem to me to be very convincing. First because there is no evidence of inflation in the behaviour of commodity prices, either at the wholesale or at the retail level, and there hasn’t been over the past year and a half – extraordinary stability in the behaviour of both indexes. There is so much surplus capacity around in almost every direction that it’s hard to conceive of a strong inflationary trend reasserting itself at this time.

Still another explanation is that the stock market has gone up because there has been a return of that kind of speculative fever that has from time to time in the past gripped the country – the Florida land boom, the 1929 stock boom. They’ve occurred in history as you know, all the way back to the Tulip speculations in Holland. I suspect there’s a certain element of truth in this one. However, it doesn’t seem to me that it gives us too much concern because there has been no feeding of this fever by the injection of credit. I think it is important for us to observe that the amount of brokers’ loans – loans made to brokers for the financing of securities of their customers that have been bought on margin – are less than US$2 billion at present. In 1929, they were in excess of US$8.5 billion and there is now a larger volume of securities on the stock exchange. Now gentlemen, Professor Graham will pick up the story at that point.

Ben Graham (13:37): One of the comparisons that is interesting is one not between 1929, which is so long ago, but 1950, which is only a few years ago. It would be very proper to ask why prices are twice as much as they were then when the earnings of companies both in ‘54 and probably in 1955 are less than they were in 1950. Now that is an extraordinary difference and the explanation cannot be found in any mathematics but it has to be found in investor psychology. 

Ben Graham (14:10): You can have an extraordinary difference in the price level merely because not only speculators but investors themselves are looking at the situation through rose-coloured glasses rather than dark-blue glasses. It may well be true that the underlying psychology of the American people has not changed so much and that what the American people have been waiting for for many years has been an excuse for going back to the speculative attitudes which used to characterize them from time to time. Now if that is so, then the present situation can carry a very large degree of danger to people who are now becoming interested in common stocks for the first time. It would seem, if history counts for anything, that the stock market is much more likely than not to advance to a point of real danger.

Unknown questioner (15:03): You said that stock prices now are not too high but that you fear they will go higher. Well then are you recommending the decline?

Courtney Brown (15:09) Well here I’ll defend you on that [laughs].

Ben Graham (15:10): [Laughs] Yeah, go right ahead.

Courtney Brown (15:17): Those who have watched the security market’s behaviour over the years have become more and more impressed with the fact that stocks always go too high on the upside and tend to go too low on the downside. The swings in other words are always more dramatic and more – the amplitude of change is greater than might normally be justified by an analytical appraisal of the values that are represented there. I think what Professor Graham had to say was that his analysis of a series of underlying values would indicate that the stock prices are just about in line with where they might properly be.

However, from experience that would be the least likely thing to happen that stocks would just stabilise right here. Now if it’s the least likely thing to happen, and you have to select a probability between going up further or down further because of the strong momentum that they have had, I think I would be inclined to agree with him [referring to Graham] that the more probable direction would be towards a somewhat higher level.

Unknown questioner (16:24) When stockholders believed the market was too high, they switched from stocks to cash. Now, many people feel that due to capital gains tax they are not free to act. They are, what you might say, locked in. What effect does this have on the stock market in general?

Courtney Brown (16:41): No question about the fact that it does discourage some sales that might otherwise be made, because one selling stocks and trying to replace them would have to replace them at substantially lower prices in order to come out even after paying the capital gains tax. However, that’s not the only reason people are reluctant to sell stocks and buy bonds. Stocks are still yielding about 4.5% on the basis of current dividend payments whereas bonds of prime quality are closer to 3%. Here again we find a contrast with the situation in 1929, when stocks were yielding about 3.5% and prime bonds closer to 5%.

Unknown questioner (17:24): In addition to raising margin requirements, should the federal government take other measures to check a speculative boom in the stock market, and which method is the better?

Ben Graham (17:34): My own opinion would be that the Federal Reserve should first exhaust the possibilities of raising the margin requirements to 100% and then consider very seriously before they imposed other sanctions if needed.

Unknown questioner (17:47): What is the significance of the broadening public participation in stock purchasing and ownership? 

Courtney Brown (17:58): There are probably two elements there that are important. One, the broadening participation of the public in stock purchases is one measure of the degree of speculative fever that we were talking about before. However, subject to that being controlled – and I believe that it can be controlled, as Professor Graham has indicated. But over and above that, there is a broad social significance to that, it seems to me. What it means in essential terms is that the ownership of American industry is being more widely dispersed among more and more people. This has very favourable repercussions in terms of our political and social life.

Unknown questioner (18:45): This question concerns the so-called Wall Street professional. Are Wall Street professionals usually more accurate in their near-term or long-term forecasts of stock market trends? If not, why not? 

Ben Graham (19:03): Did you say that they are more often wrong than right on their forecasts?

Unknown questioner (19:08): What I mean is are they more accurate in the shorter term than the long-term forecasts?

Ben Graham (19:11): Well we’ve been following that interesting question for a generation or more and I must say frankly that our studies indicate that you have your choice between tossing coins and taking the consensus of expert opinion. And the results are just about the same in each case. Your question as to why they are not more dependable – it’s a very good one and interesting one. My own explanation for that is this: That everybody in Wall Street is so smart, that their brilliance offsets each other, and that whatever they know is already reflected in the level of stock prices pretty much. And consequently what happens in the future represents what they don’t know.

Unknown questioner (19:56): Would you kindly comment on an item appearing in the newspapers to the effect that while 45% of buying today is on margin, the money borrowed is equal to only 1% of the value of listed stock?

Courtney Brown (20:12): The amount of trading on the stock exchange is a very small part of the total value of all the securities that are listed thereon. And when you say that the total amount of borrowing on margins financed by brokerage loans is only 1% of the value, it is a reconcilable figure. You can’t reconcile it unless you have the detailed data with you, but it isn’t incompatible in any way.

Ben Graham (20:34): I might add a point on that, Dean Brown, and that is the slow increase in brokers’ loans as compared with the 45% of trading on margin would indicate that a good deal of the marginal trading is between people who are taking in each other’s washing – that is, the marginal buyers are buying from sellers who were previously on margin. And that’s why the rate of growth of brokers’ loans is so much smaller now than it had been in the 1920s, when I think a good deal of the selling had come from long-term owners and really smart people who were selling out to the suckers.

Unknown questioner (21:09): I want to raise a point of argument here on this question of inflation. Seems to me that you’re correct in stating that there’s been no inflation in ‘54 but there also appears to be several long-term inflationary points in the economy today. These I think are the deficits spending that’s supposed to be continued by the government, the easy money policy which is expected to continue, the question of increased union wages, the talk about increased minimum wage, and the talk about a guaranteed wage. All these and on top of this, the rogue program of US$101 billion which the government has just announced. These seem to me to be long-term inflationary things in the US economy and I wish you’d talk about these.

Courtney Brown (21:57): That’s a question that has a good many angles on it. Perhaps we both better try it. Prof Graham, why don’t you take the first crack?

Ben Graham (22:00): I think there are two answers to that in my mind. The first is that, acknowledging that there are inflationary elements in governmental policy as it’s now being carried out, it may be argued that those are just necessary to keep things on an even keel because without them, we might have some inbuilt deflationary factors in the way business operates through increased productive capacity and so forth.

Courtney Brown (22:27): I’ve been impressed with the possibility of labour costs as an inflationary factor. But a rise in wages does not necessarily mean a rise in labour costs. It depends upon the relationship of the rate of change in wages and the rate of change in output per man-hour, or productivity. Now if wages are related to productivity, as you know they were in the General Motors contract, there is no necessary inflationary consequence to be anticipated. However, apart from that, it’s entirely possible that if wages go ahead faster than changes in productivity there could be a seriously inflationary factor.

Unknown questioner (23:13): On the basis of your recent answer with regard to the psychological impact of the present condition of the market on the small investor, do you discount the entire theory of dollar averaging? 

Ben Graham (23:30): I think there’s no doubt about this: accepting your premise that the man will put the same amount of money in the market year after year for the next 20 years, let’s say, there is a great chance of his coming out ahead regardless of when he begins, and particularly regardless of whether he should begin now. You have to allow for the human nature factor that no man can really say definitely just how he’s going to behave over the next 10 to 20 years. And there is a danger that people who start with the idea of being systematic investors over the next 10 to 20 years may change their attitude as the market fluctuates – in the first instance, put more money into the market because they become speculators, and secondly, get disgusted and scared and don’t buy at all later on when prices get low. It’s a psychological danger – the fault is not in the stars or in the system but in ourselves, I think. 
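(As an aside from me: the mechanics of the dollar averaging Graham describes can be sketched in a few lines of Python. This is my own hypothetical illustration with made-up prices, not anything from the clip – the point is simply that a fixed dollar amount buys more shares when prices are low and fewer when they are high, so the average cost per share ends up at or below the simple average of the prices paid.)

```python
# A minimal sketch of dollar averaging: invest a fixed dollar amount
# each period regardless of price. The prices below are hypothetical.

def dollar_average(prices, amount_per_period=100.0):
    """Return (total shares bought, average cost per share)."""
    shares = sum(amount_per_period / p for p in prices)  # more shares when cheap
    total_invested = amount_per_period * len(prices)
    return shares, total_invested / shares

# Prices fluctuate around a simple average of 10.
prices = [10, 5, 10, 20, 5, 10]
shares, avg_cost = dollar_average(prices)
avg_price = sum(prices) / len(prices)

print(f"average market price: {avg_price:.2f}")   # 10.00
print(f"average cost per share: {avg_cost:.2f}")  # 8.00 - below the average price
```

Note that this only illustrates the arithmetic; Graham's caveat still applies – the method works only if the investor actually keeps buying through the downturns.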

TV presenter (24:27): That was a glimpse of a seminar examining the stock market at Columbia University. We move now to Washington, where Democratic Senator William J Fulbright has announced that his Banking and Currency committee will conduct an investigation of the market.

Unknown questioner (24:40): Senator Fulbright, why is your committee going to investigate the stock market?

William Fulbright (24:43): Well Mr Murrow, there are two principal reasons. One is that my committee has jurisdiction over the subject matter through its control and responsibility for the SEC. The second reason is that the unusual increase during the last 12 to 18 months in the level of prices would seem to warrant a study at this time.

Unknown questioner (25:04): Are you worried about another 1929?

William Fulbright (25:06): But of course there’s certainly a possibility of it. This situation is reminiscent of 1929. We know the Great Depression in the early ‘30s was heralded by the tremendous increase, the great rise in the stock market and then the great drop. That’s unsettling to the whole economy and it frightens people. It causes great harm to people on fixed incomes and so on. And another thing about it is that the greatest criticism of our system and our economy by our enemies – especially the Communists – is the instability of our economy and the wildness of our fluctuations, and we should endeavour to minimise those fluctuations. Now I don’t know all the reasons involved in this. That’s why we’re going to have the study. But the objective is to inform the Congress and inform the people as far as we can about the conditions that now exist and we would then hope to be able to develop some remedy for it, some way to control these wild fluctuations. 

I confess with what limited knowledge I have, it does disturb me because it has gone up for such a long time and to such a great extent – I think far beyond what the conditions in the country itself warrant. I happen to know of my own knowledge that in the agricultural areas in the southwest, we are having a very severe depressed period. There is no boom in the agricultural areas, the rural areas of the West, and the Southwest. So that most of this boom is concentrated in the market and I think it is unhealthy but I’m unwilling to take a dogmatic stand now. That’s why as I say, we’ll have the study. 

Unknown questioner (26:52): Well Senator Fulbright, I think you have referred to this as a friendly investigation. What exactly is a friendly investigation?

William Fulbright (27:00): Well what I meant to convey is that I have no knowledge nor even suspicion of wrongdoing, manipulation, or anything of that kind in this increase. And I approach it in a friendly spirit in the spirit of trying to find out for the information of the country and of our committee and the Congress, what has been taking place. I’m not approaching it with the idea that we’re going to reveal a lot of wrongdoing.

TV presenter (27:27): The stock exchange hasn’t been investigated for 20 years, but it remains the subject of curiosity and concern as to whether what is good for the exchange is good for the country and the people who live here. There have been no official charges that it has been rigged or manipulated but rather the question of whether or not the market is healthy. There is wide disagreement amongst the experts as to why the market behaves as it does. But there is considerable agreement that it behaves the way it does because people behave the way they do. 

Good night and good luck. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

More Of The Latest Thoughts From American Technology Companies On AI (2023 Q3)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2023 Q3 earnings season.

Nearly a month ago, I published The Latest Thoughts From American Technology Companies On AI (2023 Q3). In it, I shared commentary from the earnings conference calls for the third quarter of 2023, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industries and the business world writ large. 

A few more technology companies I’m watching hosted earnings conference calls for 2023’s third quarter after the article was published. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:

With that, here are the latest comments, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe’s management believes that generative AI is a generational opportunity to deliver new products and services

We believe that every massive technology shift offers generational opportunities to deliver new products and solutions to an ever-expanding set of customers. AI and generative AI is one such opportunity, and we have articulated how we intend to invest and differentiate across data, models and interfaces. 

The integration of Adobe’s generative AI Firefly models with the company’s Creative Cloud suite of products has led to more than 4.5 billion generations since their launch in March

The general availability of our generative AI Firefly models and their integrations across Creative Cloud drove tremendous customer excitement with over 4.5 billion generations since launch in March.

Adobe’s management has released three new Firefly models for different functions

The release of 3 new Firefly models, Firefly Image 2 model, Firefly Vector model and Firefly Design model, offering highly differentiated levels of control with effects, photo settings and generative match

Adobe’s Creative Cloud subscription plans now include generative credits; Adobe’s management introduced generative credits to Adobe’s paid plans to drive adoption of the plans and drive usage of the generative AI functions; management does not expect the generative credits (or packs) to have a large impact on Adobe’s financials in the short term beyond driving more customer sign-ups

We also introduced generative credits as part of our Creative Cloud subscription plans…

…Secondly, we priced the generative packs — sorry, we integrated the generative capabilities and credits directly into our paid plans with the express intent of driving adoption of the paid subscription plans and getting broad proliferation of the ability to use those…

… I don’t personally expect generative packs to have a large impact in the short term other than to drive more customers to our paid existing subscription plans.

Photoshop Generative Fill and Generative Expand are now generally available and are seeing record adoption, with them being among the most used features in the Photoshop product

The general availability of Photoshop Generative Fill and Generative Expand, which are seeing record adoption. They’re already among the most used features in the product.

Adobe’s management believes that Adobe Express’s generative AI capabilities are driving adoption of the product

 The family of generative capabilities across Express, including text to image, text effects, text to template and generative fill are driving adoption of Express and making it even faster and more fun for users of all skill levels.

Adobe’s management is seeing high level of excitement among customers for the Firefly integrations across Adobe’s product suite

Customer excitement around Firefly integrations across our applications has been great to see with community engagement, social interactions and creative marketing campaigns driving organic brand search volume, traffic and record demand. 

Adobe’s management expects generative AI features to deliver additional value and attract new customers to Adobe’s Document Cloud suite of products; generative AI capabilities for Document Cloud is now in private beta, with a public beta to come in the next few months and general availability (GA) to arrive later in 2024

Much like the Creative business, we expect generative AI to deliver additional value and attract new customers to Document Cloud. Acrobat’s generative AI capabilities, which will enable new creation, comprehension and collaboration functionality have already been rolled out in a private beta. We expect to release this in a public beta in the coming months…

…What we’re really excited about as we bring the AI assistant to market, which, by the way, as I mentioned, is now in private beta. Expect it to come out in the next few months as a public beta and then GA later in the year.

Adobe’s management is focusing Adobe’s generative AI efforts within its Experience Cloud suite of products in three areas: (1) Building an AI assistant, (2) reimagining Experience Cloud’s existing applications, and (3) creating new generative AI solutions

Generative AI accelerates our pace of innovation across the Experience Cloud portfolio, enabling us to build on our capabilities to deliver personalized digital experiences. Our efforts are focused in 3 areas: one, augmenting our applications with an AI assistant that significantly enhances productivity for current users and provides an intuitive conversational interface to enable more knowledge workers to use our products; two, reimagining existing Experience Cloud applications like we did with Adobe Experience Manager; and three, developing entirely new solutions built for the age of generative AI like Adobe GenStudio.

Adobe’s management recently released Adobe GenStudio, a solution with generative AI capabilities that combines Creative Cloud, Express, and Experience Cloud, to help brands create content; Adobe GenStudio is seeing tremendous customer interest

Release of Adobe GenStudio, an end-to-end solution that brings together best-in-class applications across Creative Cloud, Express and Experience Cloud with Firefly generative AI at the core to help brands meet the rising demand for content. GenStudio provides a comprehensive offering spanning content ideation, creation, production and activation. We are seeing tremendous interest in GenStudio from brands like Henkel, Pepsi and Verizon and agencies like Publicis, Omnicom and Havas as they look to accelerate and optimize their content supply chains.

Adobe now has a pilot program where some customers are able to bring their own assets and content to extend Adobe’s Firefly models in a custom way; Adobe is exposing Firefly through APIs so that customers can build Firefly into their workflows; Adobe is enabling users to integrate Firefly-generated content into a holistic Adobe workflow

So with Firefly and Express, very excited about the momentum that we continue to see. You heard that we crossed 4.5 billion generations now so we continue to see really, really strong adoption and usage of it, partially as a stand-alone business but also integrated into our Photoshop and Illustrator and these existing workflows.

And we’re starting to see a lot of interest not just in the context of using it as part of the existing products but also using it as part of the ecosystem within enterprises. So we’ve been working with a number of customers to not just enable them with Firefly, which is the predominance of the growth that we’re seeing in Q4 for enterprise adoption but also have a number of pilot customers already engaged around custom model extensions so that they can bring their own assets and their own content into what Firefly generates.

Second, we’re also enabling the ability to expose it through APIs so they can build it into their existing workflows. And third, we’re, of course, connecting it and tying it all into Adobe Express, which now also has its own Firefly and additional capabilities, so that you can not just create content using Firefly but then start to assemble it, start to schedule social posts around it, start to do multi-language translations (those are all features that are already in there) and then create a stakeholder workflow from people working in Photoshop to the marketers that are trying to post externally. So that’s where things get very interesting and exciting in terms of the connection we have with GenStudio and everything that Anil is doing.

Adobe’s management intends to improve the generative capabilities over time, which might be more expensive in terms of the generative credits consumed, and management believes this will help drive Adobe’s growth over time

But what will happen over the course of the year and the next few years is that we will be integrating more and more generative capabilities into the existing product workflows. And that will drive — and we’ll be integrating capabilities like video generation, which will cost more than 1 generation, and that will drive a natural inflation in that market and that will become a driver for growth subsequently. 

Adobe’s management believes that Firefly is a great on-ramp for Adobe Express, and a great catalyst for all of Adobe’s products across the spectrum (the same underlying generative AI technology is also a great catalyst for Adobe’s Document Cloud business)

And that sort of brings them as an on-ramp into Express, which would be the other part. Express certainly has the introductory pricing, the ability to get millions more into the fold. It used to be that with Express and other offerings like it, the worry was all about: do I have the right templates? Well, AI is going to completely change that. We have our own models. And so Firefly will allow anybody to take whatever creative idea they have and make that available. So I think Firefly really helps with the Express offering.

On the Creative Cloud, David mentioned this. I mean, if you look at the adoption of that functionality and usage that’s being driven, whether it’s in Photoshop right now, Illustrator, as we add video, both in terms of providing greater value, and we certainly will, therefore, have the uplift in pricing as well as the retentive ability for Firefly, that’s where I think you’re going to see a lot of the really interesting aspects of how Firefly will drive both adoption as well as monetization.

And then if you go at the other end of the spectrum to the enterprise, GenStudio, every single marketer that I know and CFO and CMO are all worried about how much am I spending on data? How do I get agility in my campaigns? And the fact that Firefly is integrated into both Express as well as when we do the custom models for them so they can upload their own models and then have the brand consistency that they want. So Firefly really is the fact that we have our own models, a great catalyst for business all across the spectrum…

… And then you take the same technology that we have in Creative and think about its impact in both Document Cloud when we do that and the ability to have summaries and have conversational interfaces with PDF, thereby making every single PDF, as David again said, both for communication, collaboration and creation far more compelling. I think you’re going to see that same kind of uplift in usage and therefore, monetization on the Acrobat side.

DocuSign (NASDAQ: DOCU)

DocuSign’s management will be introducing generative AI enhancements to its CLM (Contract Lifecycle Management) platform; Veeco was an eSignature customer that has started using CLM, and DocuSign’s AI CLM features will help Veeco with surfacing actionable insights from customer contracts

CLM continues to grow well, particularly with North American enterprise customers. And for the fourth year in a row, our CLM solution was recognized as a leader by Gartner in contract life cycle management, noting our strong market understanding, product strategy and road map vision, including upcoming generative AI enhancements. This quarter, we expanded a relationship that began more than 5 years ago with Veeco USA, the leader in workplace innovation. Veeco began using DocuSign eSignature and has added CLM as part of its transformation into a digital services company. Our AI solution will help Veeco streamline and enhance search and review of executed customer contracts with actionable insights to better serve its customers

MongoDB (NASDAQ: MDB)

MongoDB’s management held a customer feedback session recently and they saw four themes that emerged from the conversations, one of which was that customers of all sizes are interested in AI

This quarter, we held our most recent global Customer Advisory Board meeting where customers across various geographies and industries came together to share feedback and insight about the experience using MongoDB. From these discussions as well as our ongoing C-suite dialogue with our customers, a few themes emerge. First, AI is in nearly every conversation with customers of all sizes.

MongoDB’s management is seeing great early feedback from MongoDB’s partnership with AWS CodeWhisperer; MongoDB’s management also thinks that Microsoft Github Copilot is capable of generating useful code

We’re seeing great early feedback from our partnership with AWS’ CodeWhisperer, the AI-powered coding companion that is now trained on MongoDB data to generate code suggestions based on MongoDB’s best practices from over 15 years of history. Microsoft GitHub Copilot is also proficient at generating code suggestions that reflect best practices, enabling developers to build highly performant applications even faster on MongoDB.

MongoDB’s management is seeing software developers being asked to also build AI functionalities into their applications

And with the recent advances in Gen AI, building applications is no longer the sole domain of AI/ML experts. Increasingly, it’s software developers who are being asked to build powerful AI functionality directly into their applications. We are well positioned to help them do just that.

MongoDB’s Atlas Vector Search – the company’s AI vector search feature – recently received the highest NPS (net promoter score) among vector databases from developers; crucially, the NPS survey was done on the preview version of Vector Search and not even on the generally available version, which is better

In a recent state of AI survey reported by Retool, Atlas Vector Search received by far the highest Net Promoter Score from developers compared to all other vector databases available…

……As I said in the prepared remarks, there was a recent analysis done by a consultancy firm called Retool that really spoke to lots of customers, and we came out on top in terms of NPS. And by the way, our product was a preview product. It wasn’t even the GA product.

MongoDB’s Atlas Vector Search allows developers to combine vector searches with the other query capabilities available in MongoDB, resulting in the ability to run very complex queries

Moreover, developers can combine vector search with any other query capabilities available in MongoDB, namely analytics, text search, geospatial and time series. This provides powerful ways of defining additional filters on vector-based queries that other solutions just cannot provide. For example, you can run complex AI and rich queries such as “find pants and shoes in my size that look like the outfit in this image within a particular price range and have free shipping” or “find real estate listings with houses that look like this image that were built in the last 5 years and are in an area within 7 miles west of downtown Chicago with top-rated schools.”
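To make the idea concrete, here is a minimal sketch of the kind of Atlas aggregation pipeline the real-estate example describes, pairing a `$vectorSearch` stage with ordinary filters. The index name (`listing_image_index`), field names, and thresholds are hypothetical, chosen only for illustration:

```python
# Sketch of an Atlas aggregation pipeline combining a $vectorSearch stage
# with conventional filters. All index/field names here are hypothetical.
def build_listing_query(query_vector, built_after):
    return [
        {
            # Atlas Vector Search stage: find listings whose image
            # embedding is close to the query image's embedding.
            "$vectorSearch": {
                "index": "listing_image_index",   # hypothetical index name
                "path": "image_embedding",        # hypothetical vector field
                "queryVector": query_vector,      # from an image-embedding model
                "numCandidates": 200,
                "limit": 20,
                # Pre-filter on ordinary indexed fields alongside the search.
                "filter": {"year_built": {"$gte": built_after}},
            }
        },
        # Further narrow the results with a standard $match stage,
        # e.g. on a school-rating field stored on the same documents.
        {"$match": {"school_rating": {"$gte": 9}}},
        # Project the fields of interest plus the similarity score.
        {"$project": {"address": 1, "year_built": 1,
                      "score": {"$meta": "vectorSearchScore"}}},
    ]

pipeline = build_listing_query([0.12, -0.45, 0.88], built_after=2019)
```

With a live Atlas cluster and `pymongo`, this would be run via something like `db.listings.aggregate(pipeline)`; the sketch only builds the query document, to show how vector and non-vector criteria compose in one pipeline.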

MongoDB’s Atlas Vector Search allows customers to scale nodes independently, which gives customers the ability to achieve the right level of performance at the most efficient cost, so management thinks this is a very compelling value proposition for customers

One of the announcements we also made was that you can now do workload isolation. So for search or vector search functionality, you can scale those nodes independently of your overall cluster. So what that really does is allow customers to really configure their clusters to have the right level of performance at the most efficient cost. So we’ve been very sensitive on making sure that based on the different use cases, you can scale up and down different nodes based on your application needs. So by definition, that will be a very compelling value proposition for customers…

…[Question] With Vector Search comes quite a bit more data. So how are you making sure that customers don’t receive a surprise bill and end up unhappy?

[Answer] In terms of your question around the amount of data and the data bills, obviously, vectors can be memory-intensive. And the amount of vectors you generate will obviously drive the amount of usage on those nodes. That’s one of the reasons we also introduced dedicated search nodes, so you can asymmetrically scale particular nodes of your application, especially your search nodes, without having to increase the overall size of your cluster. So you’re not, to your point, on the hook for a big bill for underlying non-usage, right? You only scale the nodes that really need that incremental compute and memory versus nodes that don’t, and that becomes a much more cost-effective way for people to do this. And obviously, that’s another differentiator for MongoDB.

MongoDB’s management believes that customers are aware that their legacy data infrastructure is holding them back from embracing AI (legacy data infrastructure does not allow customers to work with real-time data for AI purposes), but the difficulty of modernising the infrastructure is daunting for them; MongoDB’s management thinks that the modernisation of data infrastructure for AI is still a very early trend but will be one of the company’s largest long-term opportunities

They are aware that their legacy platforms are holding them back from building modern applications designed for an AI future. However, customers also tell us that they lack the skills and the capacity to modernize. They all want to become modern but are daunted by the challenges, as they are aware it’s a complex endeavor that involves technology, process and people. Consequently, customers are increasingly looking to MongoDB to help them modernize successfully…

… There is a lot of focus on data because, with AI, data in some way becomes the new code: you can train your models with your proprietary data, which allows you to really drive much more value and build smarter applications. Now the key thing is that it’s operational data, because with applications, this data is constantly being updated. And for many customers, most of those applications are right now running on legacy platforms, so that operational data is trapped in those legacy platforms. And you can’t really do a batch process of ETL-ing all that data into some sort of warehouse and still be able to leverage the real-time use of that data. That’s why customers are now much more interested in potentially modernizing these legacy platforms than they ever have been before…

…I would say it’s still very, very early days, but we definitely believe that this will be one of the largest long-term opportunities for our business.

MongoDB’s management has launched Query Converter, which uses AI to convert a customer’s existing SQL-related workflows to work with MongoDB’s NoSQL database platform, and customers have tried it out successfully

We launched Relational Migrator earlier this year to help customers successfully migrate data from their legacy relational databases to MongoDB. Now we’re looking beyond data migration to the full life cycle of application modernization. At our local London event, we unveiled Query Converter, which uses generative AI to analyze existing SQL queries and stored procedures and convert them to work with MongoDB’s Query API. Customers have already used the tool successfully to convert decades-old procedures and modernize their back ends with minimal need for manual changes.
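Query Converter’s actual output is not shown in the call, but the kind of SQL-to-MongoDB translation involved can be sketched as follows. The `orders` table, its fields, and the queries are entirely hypothetical, for illustration only:

```python
# Illustrative SQL-to-MongoDB translation of the sort a tool like
# Query Converter automates. Hypothetical table/fields; not tool output.
#
# Legacy SQL:
#   SELECT name, total FROM orders
#   WHERE status = 'shipped' AND total > 100
#   ORDER BY total DESC;
#
# Equivalent MongoDB Query API arguments (pymongo-style):
mongo_filter = {"status": "shipped", "total": {"$gt": 100}}   # WHERE clause
mongo_projection = {"name": 1, "total": 1, "_id": 0}          # SELECT list
mongo_sort = [("total", -1)]                                  # ORDER BY (-1 = DESC)

# With a live connection, this would run as:
#   db.orders.find(mongo_filter, mongo_projection).sort(mongo_sort)
```

The translation is mechanical for simple queries like this one; the value of an AI-assisted converter lies in handling decades-old stored procedures whose logic is far less regular.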

MongoDB’s management thinks it’s too early to tell how the usage of MongoDB’s AI features by customers will impact MongoDB’s gross margin at maturity

[Question] And then the follow-up is around AI. If I look at the demos that you have around vector search and how search is getting a lot better, that seems very compelling. And it seems really straightforward for your clients to use it to improve the customer experience in a customer-facing app, for example. What are the implications for gross margins for you, Michael? Do you have to do a lot more compute to be able to handle it?

[Answer] So I think it’s a little too early to tell. There’s obviously plenty of variability in the workloads depending on the nature what the underlying application is. So I think it’s a little early to give a strong direction to that… But I think too early to make a specific call or quantification on the gross margin impacts of AI.

MongoDB’s management thinks that Atlas Vector Search will be a big opportunity for MongoDB, but it’s early days and they find it hard to exactly quantify the revenue opportunity

We’ve seen a lot of demand from customers. And we feel like this is a big, big opportunity. Again, it’s early days. It’s going to take time to materialize, but this is, again, one of the other big growth opportunities for our business. That being said, in terms of the revenue opportunity, it’s really hard to quantify now because the use cases that customers are starting with are still, I would say, early intent, because people are still playing around with the technology. But we are seeing, as I mentioned, UKG using it to essentially provide an AI-powered assistant for its people. One Energy, a European energy company, is using terabytes of geospatial data and is using vectors to basically get better insights from the images that they’re getting from the work they’re doing in terms of drilling for oil. So it’s still very, very early days. So it’s hard to give you exact numbers.

When it comes to copilot tools for software coding, MongoDB’s management is seeing varying levels of productivity improvement for software developers based on the tools they are using; MongoDB’s management also sees the software written with copilots as being mostly for internal use currently

[Question] As customers began to trial some of these copilot code tools will say. What type of feedback have you gotten from them as it relates to the pace with which they’ve been able to reduce net new workload time to market, how much faster or efficient are customers getting using these tools?

[Answer] We get different answers from a lot of different customers. It really depends on which tool they’re using. Without commenting on who’s better or worse, we definitely see a difference in the quality of the output between the different tools. I think it’s going to take some time for these tools to mature. So I think you’re seeing a lot of customers do a lot of testing and prototyping. I would also tell you that they’re doing a lot of this on internal-facing applications, because there are still lots of questions about IP rights and what is potentially copyrightable, and hence licensable, if they offer this as shrink-wrapped software or a service to their end customers. So we’re seeing more of this work on internally facing applications, but the productivity gains really do vary by tool, and they also vary by the sophistication of the app being built. So it’s hard for me to give you a real number. I know there are people out there quoting 30% or 40% improvement, but it really depends on the customer, the use case, and the tool that they’re trying to use.

MongoDB’s CEO, Dev Ittycheria, thinks his views – that (1) vector search would become just another functionality in a more holistic database platform, and (2) the database platform that can integrate vector search functionality well into developers’ workflows will win – have played out

I would say that 6 or 9 months ago, there was a lot of interest in vector databases, and there were some point solutions that got a lot of name recognition, and a lot of people were wondering: is there a risk that we could be disrupted by them? At that point in time, we made it clear that we believed vectors were really another form of an index and that every database platform would ultimately incorporate vectors into its architecture. And the winner really would be the technology that made the vector functionality very integrated and cohesive as part of the developer workflow. I would argue that it’s really played out.

MongoDB’s management saw customers having to work with two databases when performing vector searches for AI purposes; these customers were asking MongoDB to bring vector search capabilities into its database platform because working with one platform helps customers speed up their work and reduce costs

One of the reasons we actually built search is because we got feedback from our customers. In many instances, a lot of our customers were dual-homing data to MongoDB and to some sort of search database. Consequently, they not only had to manage 2 databases and keep that data in sync, but also manage the plumbing that connected those 2 database platforms. And customers told us, in effect, “we don’t understand why you’re not offering a solution, because we would much rather have it all in one platform with one API.” That ultimately drove our desire to build out our search functionality, which is becoming more and more popular. So the point for customers is that if you can remove friction in terms of how they use and leverage the platform, and have one set of semantics to address a broad set of use cases, it really simplifies the data architecture. And the more you simplify the data architecture, the more nimble you can be and the more cost-effective you can be, and that’s what’s really resonating with customers.

Okta (NASDAQ: OKTA)

Okta’s management introduced Okta AI during the company’s Oktane event in October; Okta AI is powered by the data that Okta has collected over the years from its 18,800 customers and 7,000+ integrations, and is infused into several of Okta’s products

The headline of the event was the introduction of Okta AI, the identity solution for the next era of computing. Okta AI is AI for Identity. It’s powered by the massive amounts of data the company has accumulated over the years, including anonymized insights crowdsourced from our 18,800 customers and the 7,000+ integrations in the Okta Integration Network, as well as data on usage, policies, threats, and risk signals. Okta AI uses that data to perform powerful, real-time security, developer, and policy actions. Okta AI is also infused into several of our products. It makes our existing products more valuable and new products possible — all while expanding what it means to be integrated and protected.

An example of Okta AI at work is Identity Threat Protection, which enables companies to automatically log users out of apps during a security issue

Identity Threat Protection with Okta AI, a new product that will enable businesses to prevent and respond to threats faster than ever before. It empowers organizations to automate the detection and remediation of Identity threats across the tech ecosystem. It extends adaptive risk evaluation from the point of authentication to any time a user is logged in and helps you quickly prevent and respond to threats. Identity Threat Protection allows for an array of powerful new actions like Universal Logout. For the first time in our industry, it’s possible to automatically log users out of their apps during a security issue. Threat actors might be getting more sophisticated, but we are using the power of AI and our ecosystem to keep our customers safe and a step ahead.

Salesforce (NYSE: CRM)

Salesforce’s management thinks Data Cloud’s introduction was great timing because it coincided with the boom in generative AI and a company can’t make AI useful without data

And Data Cloud, this hyperscale, this real-time customer data platform that is performing incredibly well for us, it’s the foundation of every AI transaction, but it’s the foundation of every large deal that we did this quarter. That is what is so exciting. And in just our third quarter, Data Cloud has ingested an astonishing 6.4 trillion records, 6.4 trillion records. That’s 140% year-over-year increase. It triggered 1.4 trillion activations, a 220% increase year-over-year. This is a monster product. I could not be more excited. And it’s the perfect time, we didn’t really understand that it was going to line up so well with this generative AI revolution. It’s a product we’ve been working on for a couple of years. Just the timing of it has been incredible because listen, if you don’t have your data together, in a company, you’re not going to deliver AI. It’s not like companies are going to run their AI off of Reddit or off of some kind of big public data set. They have to have their data set together to make AI work for them, and that is why the Data Cloud is so powerful for them

Salesforce’s management believes that Salesforce is the No.1 AI CRM and is leading the industry in the current AI innovation cycle; they also believe that the current cycle is unlike anything they have ever seen and it’s a view that’s shared widely

We are the #1 AI CRM. If that isn’t clear already, we’re leading the industry through the unprecedented AI innovation cycle. It’s unlike anything I’ve seen and most of the people that I talk to all over the world feel the same way. 

Salesforce’s management believes that trust is going to be important in the AI era and Salesforce will be protecting customer data with a trust layer so that the data can’t be easily accessed by 3rd-party foundation models

Now as I’ve said before, this AI revolution is going to be a trust revolution. It’s not just about CRM, data or AI. It’s also about trust. And I think the trust layer and the way that we’ve architected our platform mean that our customers are not getting taken advantage of by these next-generation large language models, these foundation models. They are so hungry for all of this data, and they want our customers’ data so that they can grow. We’re not going to let them have it. We’re going to separate ourselves from those models through a trust layer so customers can be protected. This is going to be so important for the future of how Salesforce architects itself with artificial intelligence.

Salesforce’s management is seeing customers across the world wanting to invest in AI for more productivity; management also travelled the world and noticed that customers are very excited about AI but at the same time, they are confused about AI’s capabilities – this excitement was not in place a year ago because generative AI apps had not surfaced yet

I’ve been on the road pretty much nonstop especially over the last month. I’ve been in — throughout Europe. I’ve been now in Asia. I’ve been throughout the United States. And I just continue to see these same trends, which is customers are investing for the future and they’re investing and inspired by AI to give them more productivity. Look, they realize unemployment is just so low. Where are they going to hire more people? It’s so hard for them to hire, they’re going to have to get more productivity from their employees. They’re going to do that through this great new technology, and we’re going to help them make that happen…

…And on a global basis, and like I said, in some of these customers in the last 30 days, I was in — I can give you my direct experience. I was in San Francisco, Los Angeles, Las Vegas, Stuttgart, Germany, I was in Nice, Monaco. I visited with our customers throughout that area. And also, I went up to Amsterdam, to France. I had a large customer dinner in the U.K. in London. I went to the U.K. Safety Summit. I then came back and went to Japan. I think I see something very consistently, which is customers are extremely excited about AI everywhere we go. It could be government, it could be commercial organizations. It could be technologists. Everyone is excited about AI. At the same time, there is a lot of confusion about what AI can and cannot do…

… And this excitement, this energy, these ideas of innovation of AI were not in place a year ago. Because don’t forget, a year ago, I don’t think any of us have used ChatGPT or Bard or Anthropic or Cohere or Adapt or any of the new AI companies. None of us had really had our hands on or envisioned what it really meant to us or that we would have Copilots, and that those Copilots would give us the ability to do all kinds of next-generation capabilities. But a year later, it’s a technology revolution. 

Salesforce has been deploying its own generative AI tools at a quick pace and management thinks the results have been excellent

I’ve been impressed with how quickly we deployed our own trusted generative AI tools and applications internally. We’ve launched Sales GPT and Slack Sales Elevate internally, and our global support team is live with Service GPT, and we’re seeing incredible results. We’ve streamlined our quoting process with automation, eliminating over 200,000 manual approvals so far this year. And since its introduction in September, our AI-driven chatbot has autonomously resolved thousands of employee-related queries without the need for human involvement.

Salesforce’s management thinks that every customer’s AI transformation is going to begin and end with data 

What I’ll tell you is you’re seeing something that we have been seeing and calling out for the last few quarters, but we probably have not been able to illuminate it to the level that you see now in the numbers, which is that every customer and every customer transformation and every customer AI transformation is going to begin and end with data. And for us to achieve that goal, those customers are going to have to get to another level of excellence with their data. 

Salesforce’s management thinks that there’s still a lot that AI-companies need to do to make AI safe for customers, but it’s getting better over time

We have — we still have a lot of work, as everyone does in our industry, on AI and making it safe for our customers. This is going to be incredibly important. I think for a lot of customers, they realize that they’d like to just let this AI unleashed autonomously but it still hallucinates a huge amount and it also is quite toxic. So we’re not quite ready for that revolution. But every day, it’s getting a little better. 

Salesforce’s management thinks that the movie Minority Report contains a good scene on how AI can be used to automate the personalised customer experience – management also thinks that this is something that many of Salesforce’s customers want to achieve for their own customer experience

And when I was going through the streets of Tokyo, it’s not quite Minority Report, which is a movie that was partly written by our futurist, Peter Schwartz, but it’s getting closer to that idea. And when I walked into some of these stores, there’s definitely a lot more automation based on my customer record, but not quite the level of automation that Tom Cruise experienced when he walked into that Gap store, if you remember that scene, which was so amazing. That scene is very much front of mind for a lot of our customers, because they want to have that capability and they want us to deliver it for them.

Salesforce’s management explained how Data Cloud can be very useful for companies that are deploying AI: Companies can use their own data, via Data Cloud, to augment generative AI models to produce personalised and commercially-useful output that otherwise could not be done

But they’re going to get frustrated when the Copilots that they are given from other companies don’t have any data. They just have data grounded to maybe the application that’s sitting in front of them, but they don’t have a normalized data framework integrated into the Copilot. That said, I think Copilots on productivity applications are exciting because you can tap into the kinds of broad consumer data sets we’ve all been using. So as an example: I’m writing an e-mail, and I’m saying to the Copilot, hey, can you rewrite this e-mail for me, or make this 50% shorter, or put it into the words of William Shakespeare. That’s all possible, and sometimes it’s a cool party trick.

It’s a whole different situation when we say, “I want to write an e-mail to this customer about their contract renewal. And I want this e-mail to really reference the huge value that they receive from our product and their log-in rates. And I also want to emphasize how the success of all the agreements that we have signed with them has impacted them.” We’re able to provide this rich data to the Copilot, and through the prompt and the prompt engineering, it is able to deliver tremendous value back to the customer. And this data, this customer value, will only be provided by companies who have the data. And we are just very fortunate to be a company with a lot of data. And we’re getting a lot more data than we’ve ever had. And a lot of that is coming from the Data Cloud, because it’s amplifying the capabilities of all the other data we have.
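The grounding technique Benioff is describing can be sketched as assembling a prompt from a CRM record before it is sent to a language model. The record fields, function name, and template below are all hypothetical, chosen purely to illustrate the idea:

```python
# Minimal sketch of "grounding" a copilot prompt with proprietary CRM data.
# All field names and the prompt template are hypothetical.
def build_renewal_email_prompt(customer: dict) -> str:
    # Pull company-specific facts into the prompt so the model can write
    # a concrete, commercially useful email rather than generic copy.
    facts = (
        f"Customer: {customer['name']}\n"
        f"Monthly active logins: {customer['logins_per_month']}\n"
        f"Contract renewal date: {customer['renewal_date']}\n"
    )
    instruction = (
        "Write a short contract-renewal email that references the value "
        "this customer has received, citing the usage figures above."
    )
    return facts + "\n" + instruction

prompt = build_renewal_email_prompt({
    "name": "Acme Corp",
    "logins_per_month": 4200,
    "renewal_date": "2024-03-31",
})
```

The prompt string would then be sent to whatever model the copilot uses; the point of the sketch is only that the differentiating ingredient is the customer data injected up front, not the model itself.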

Salesforce’s management thinks that there will be significant improvements to Salesforce’s AI features in the near future

I think the demonstrations at Dreamforce were outstanding. The demonstrations that we’ll deliver in our February release will be mind-boggling for our customers of what they will be able to get done. And I think that by the time we get to Dreamforce ’25 or ’24 in September ’24, what we’ll see is nothing that we could have possibly imagined just 24 months earlier before these breakthroughs in generative AI have really taken hold through the whole industry.

Salesforce’s management thinks that no single company will control the development of AI because they think that open source AI models are now as strong as proprietary models and will lead the way; management also thinks that unlike the development of mobile operating systems which is controlled by 2 companies, there are thousands of companies that are working on open-source AI and this will lead to rapid innovation

No one company has a hold on this. I think it’s pretty clear at this point that because of the way AI is built through open source, that these models are very much commodity models, and these responses are very much commodity responses. So we’ve always felt that way about AI for more than a decade. We said that its growth has really been amplified by open source development. Because these open source models now are as strong as commercial models are or proprietary models, I think that what we really can see is that, that is going to accelerate this through every customer. There’s not going to be any kind of restrictions because of the proprietariness or the cost structures of these models. We’re going to see this go much faster than any other technology.

The reference point, as I’ve been using as I travel around, is really mobile operating systems. Mobile operating systems are very important, and we all have one on our desk or in our pocket right now. But really, the development of mobile operating systems has been quite constrained because they’re really held mostly by 2 companies and 2 sets of engineering teams. That’s not how this technology is being built. This technology is highly federated across thousands of companies and thousands of engineering teams who are sharing this technology. And because of that, you’re ending up with a rate of innovation unlike anything we’ve seen in the history of our industry and is moving us into areas very quickly that could become uncomfortable. So this is an exciting moment.

Veeva Systems (NYSE: VEEV)

Veeva’s management has not seen a big impact on the clinical side of Veeva’s business from generative AI

In terms of the generative AI, honestly, I haven’t seen a big impact in clinical. There was good experimentation and projects around helping to write or evaluate protocols, for example, but not using things like generative AI to do statistical analysis or predict where the patients are. I think there, the more appropriate tool, which people are using and continue to use more and more, is data science. Really having the right data, running the right algorithms, being systematic about it. So yes, I just haven’t seen that impact of generative AI. You see it more in other areas that relate to content creation and asking of questions, writing safety narratives, things like that.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, DocuSign, MongoDB, Okta, Salesforce, and Veeva Systems. Holdings are subject to change at any time.

The Opportunities and Risks In The US Stock Market

Earlier this week, on 12 December 2023, I was invited for a short interview on Money FM 89.3, Singapore’s first business and personal finance radio station. My friend Willie Keng, the founder of investor education website Dividend Titan, was hosting a segment for the radio show and we talked about a few topics concerning the US stock market:

  • Context on the US stock market’s strong performance so far in 2023 (Hint: Investors should not be surprised by the 20%-plus year-to-date gain in the S&P 500 because the index has historically been more likely to produce a gain of 20% or more in a calendar year than to experience a loss)
  • The impact on US stocks from a potential interest rate cut by the Federal Reserve (Hint: US stocks have historically tended to fall over a 1-year period after interest rate cuts, but it’s hard to say if a similar decline will happen again if the Fed does cut rates in 2024, since how stocks react will also depend on the reason for any interest rate cuts)
  • The risks of investing in the US stock market right now (Hint: The world we live in today is no less risky compared to yesterday, or a month ago, or a year ago, or even 10 years ago – the only thing that changes is our perception of the level and the types of risk that the world is facing. Instead of thinking about specific risks, it’s far more important to introduce elements of anti-fragility into our portfolios)
  • The opportunities I see in US stocks (Hint: Meta Platforms has overcome the key problems that were plaguing its business over the past year, and currently has an undemanding valuation) 

You can check out the recording of our conversation below!

Notes (where my data on US market history was sourced from):


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Meta Platforms. Holdings are subject to change at any time.

Lessons From The Immortal Charlie Munger

Neuroscientist David Eagleman once wrote: “There are three deaths: the first is when the body ceases to function. The second is when the body is consigned to the grave. The third is that moment, sometime in the future, when your name is spoken for the last time.”

Along Eagleman’s line of reasoning, Charlie Munger, who passed away peacefully last night, would be immortal since he would never experience the third death – his accomplishments, and the wisdom he has shared throughout his life, would see to it. 

Munger is one of my investing heroes. In remembrance of his life, I would like to share my favourite lessons from him.

On the importance of thinking in reverse, or inverting

“Another idea that I discovered was encapsulated by that story Dean McCaffery recounted earlier about the rustic who wanted to know where he was going to die, so he wouldn’t go there. The rustic who had that ridiculous sounding idea had a profound truth in his possession. The way complex adaptive systems work, and the way mental constructs work, problems frequently become easier to solve through inversion. If you turn problems around into reverse, you often think better. For instance, if you want to help India, the question you should consider asking is not: How can I help India? Instead, you should ask: How can I hurt India? You find what will do the worst damage, and then try to avoid it. Perhaps the two approaches seem logically the same thing. But those who have mastered algebra know that inversion will often and easily solve problems that otherwise resist solution. And in life, just as in algebra, inversion will help you solve problems that you can’t otherwise handle.”

On the importance of being equanimous when investing

“If you’re not willing to react with equanimity to a market price decline of 50% two or three times a century you’re not fit to be a common shareholder and you deserve the mediocre result you’re going to get compared to people who do have the temperament, who can be more philosophical about these market fluctuations.”

On the importance of incentives

“From all business, my favourite case on incentives is Federal Express. The heart and soul of their system – which creates the integrity of the product – is having all their airplanes come to one place in the middle of the night and shift all the packages from plane to plane. If there are delays, the whole operation can’t deliver a product full of integrity to Federal Express customers. And it was always screwed up. They could never get it done on time. They tried everything – moral suasion, threats, you name it. And nothing worked. Finally, somebody got the idea to pay all these people not so much an hour, but so much a shift – and when it’s all done, they can go home. Well, their problems cleared up overnight.”

On great career advice

“Three rules for a career: (1) Don’t sell anything you wouldn’t buy yourself; (2) Don’t work for anyone you don’t respect and admire; and (3) Work only with people you enjoy.”

On the importance of admitting mistakes

“There’s no way that you can live an adequate life without many mistakes. In fact, one trick in life is to get so you can handle mistakes. Failure to handle psychological denial is a common way for people to go broke.”

On the importance of not letting rare events completely shape how you approach life

“Ben Graham had a lot to learn as an investor. His ideas of how to value companies were all shaped by how the Great Crash and the Depression almost destroyed him… It left him with an aftermath of fear for the rest of his life, and all his methods were designed to keep that at bay.”

On the importance of handling problems from many different angles

“Most people are trained in one model – economics, for example – and try to solve all problems in one way. You know the saying: “To the man with a hammer, the world looks like a nail.” This is a dumb way of handling problems.”

On the importance of getting a little wiser each day

“I constantly see people rise in life who are not the smartest, sometimes not even the most diligent, but they are learning machines. They go to bed every night a little wiser than they were when they got up, and boy, does that help, particularly when you have a long run ahead of you.”

On how to invest

“Over the long term, it’s hard for a stock to earn a much better return than the business which underlies it earns. If the business earns 6% on capital over 40 years and you hold it for that 40 years, you’re not going to make much different than a 6% return—even if you originally buy it at a huge discount. Conversely, if a business earns 18% on capital over 20 or 30 years, even if you pay an expensive looking price, you’ll end up with a fine result. So the trick is getting into better businesses. And that involves all of these advantages of scale that you could consider momentum effects.”
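Munger’s claim here is just compounding arithmetic, and it can be sanity-checked with a few lines of Python. This is my own illustration, not Munger’s: it assumes the business compounds intrinsic value at its return on capital and that the stock converges to fair value by the time you sell.

```python
def holding_period_cagr(return_on_capital, price_to_value, years):
    """CAGR from buying at `price_to_value` times intrinsic value,
    holding for `years` while the business compounds intrinsic value
    at `return_on_capital`, and selling at fair value at the end.
    Algebraically: ((1 + r)**n / p)**(1/n) - 1 = (1 + r) / p**(1/n) - 1.
    """
    return (1 + return_on_capital) / price_to_value ** (1 / years) - 1

# A 6% business bought at half of fair value and held 40 years:
print(f"{holding_period_cagr(0.06, 0.5, 40):.1%}")   # ~7.9% -- barely better than 6%

# An 18% business bought at double fair value and held 30 years:
print(f"{holding_period_cagr(0.18, 2.0, 30):.1%}")   # ~15.3% -- still a fine result
```

The purchase price (or discount) gets amortised over the holding period, so the longer you hold, the more the business’s return on capital dominates your result.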

On how to get others to agree with you

“Well, you’ll end up agreeing with me because you’re smart and I’m right.”

On the secret to a happy life

“I always say the same thing: realistic expectations, which is low expectations. If you have unreasonable demands on life, you’re like a bird that’s trying to destroy himself by bashing his wings on the edge of the cage. And you really can’t get out of the cage. It’s stupid. You want to have reasonable expectations and take life’s results good and bad as they happen with a certain amount of stoicism.”

On courage and perseverance

I saved the most poignant lesson I’ve learned from Munger for the last. Not many may know this, but the first decade-plus of Munger’s adulthood was tragic. 

Munger got married when he was 21, but the marriage ended when he was 29. He “lost everything in the divorce”, according to his daughter Molly Munger. Shortly after the divorce, Munger’s son, Teddy Munger, was diagnosed with leukaemia. “In those days, there was no medical insurance – I just paid all the expenses” Munger once said. But more importantly, there was absolutely nothing doctors back then could do for leukaemia. When Munger was 31, Teddy passed on. Munger recounted the heart-wrenching episode: “I can’t imagine any experience in life worse than losing a child inch by inch. By the time he died, my weight was down 10 to 15 pounds from normal.” One of Munger’s friends, Rick Guerin, said that “when his [Munger’s] son was in the bed and slowly dying, he’d go in and hold him for awhile, then go out walking the streets of Pasadena crying.”

So by the time Munger was 31, he had already gone through a divorce, experienced the painful death of his son from an incurable disease, and was broke. 

But when Munger left the world last night, he was a billionaire, and was widely revered around the world for his wit, wisdom, and character. He taught me that with courage and perseverance, we can eventually build a better life for ourselves. “You should never, when faced with one unbelievable tragedy, let one tragedy increase into two or three because of a failure of will,” he admonished. 

See you on the other side, Mr Munger.

The Latest Thoughts From American Technology Companies On AI (2023 Q3)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2023 Q3 earnings season.

The way I see it, artificial intelligence, or AI, really leapt into the zeitgeist in late-2022 or early-2023 with the public introduction of DALL-E 2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

Meanwhile, the latest earnings season for the US stock market – for the third quarter of 2023 – is coming to its tail-end. I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series. For the older commentary:

With that, here is the latest commentary, in no particular order:

Airbnb (NASDAQ: ABNB)

Airbnb’s management sees generative AI as an opportunity to reimagine the company’s product and transform Airbnb into the ultimate travel agent

First, I think that we are thinking about generative AI as an opportunity to reimagine much of our product category and product catalog. So if you think about how you can sell a lot of different types of products and new offerings, generative AI could be really, really powerful.  It can match you in a way you’ve never seen before. So imagine Airbnb being almost like the ultimate travel agent as an app. We think this can unlock opportunities that we’ve never seen. 

Airbnb’s management believes that digital-first travel companies will benefit from AI faster than physical-first travel companies

So Airbnb and OTAs are probably going to benefit more quickly from AI than, say, a hotel will just because Airbnb and OTAs are more digital. And so the transformation will happen at the digital surface sooner.

Airbnb’s management believes that Airbnb’s customer service can improve significantly by placing an AI agent between a traveller and her foreign host

One of the areas that we’re specifically going to benefit is customer service. Right now, customer service in Airbnb is really, really hard, especially compared to hotels. The problem is, imagine you have a Japanese host booking with — hosting a German guest and there’s a problem, and you have these 2 people speaking different languages calling customer service, there’s a myriad of issues, there’s no front desk, we can’t go on-premise. We don’t understand the inventory, and we need to try to adjudicate an issue based on 70 different policies that can be up to 100 pages long. AI can literally start to solve these problems where agents can supervise a model that can — in seconds — come up with a better resolution and provide front desk level support in nearly every community in the world.

Airbnb’s management believes that AI can lead to a fundamentally different search experience for travellers

But probably more importantly, Kevin, is what we can do by reimagining the search experience. Travel search has not really changed much in 25 years since really Expedia, Hotels.com, it’s pretty much the same as it’s been. And Airbnb, we fit that paradigm. There’s a search box, you enter a date location, you refine your results and you book something. And it really hasn’t changed much for a couple of decades. I think now with AI, there can be entirely different booking models. And I think this is like a Cambrian moment for like the Internet or mobile for travel where suddenly an app could actually learn more about you. They could ask you questions and they could offer you a significantly greater personalized service. Before the Internet, there were travel agents, and they actually used to learn about you. And then travel got unbundled, it became self-service and it became all about price. But we do think that there’s a way that travel could change and AI could lead the way with that. 

Airbnb’s management believes that all travel apps will eventually trend towards being an AI travel agent

And I generally think for sure, as Airbnb becomes a little more of a so-called like AI travel agent, which is what I think all travel apps will trend towards to some extent.

Alphabet (NASDAQ: GOOG)

Alphabet’s management has learnt a lot from trials of Search Generative Experience (SGE), and the company has added new capabilities (videos and images); Search Generative Experience has positive user feedback and strong adoption

This includes our work with the Search Generative Experience, which is our experiment to bring generative AI capabilities into Search. We have learned a lot from people trying it, and we have added new capabilities like incorporating videos and images into responses and generating imagery. We have also made it easier to understand and debug generated code. Direct user feedback has been positive with strong growth in adoption.

SGE allows Alphabet to serve a wider range of information needs and provide more links; ads will continue to be relevant in SGE and users actually find ads useful in SGE; Alphabet wants to experiment with SGE-native ad formats

With generative AI applied to Search, we can serve a wider range of information needs and answer new types of questions, including those that benefit from multiple perspectives. We are surfacing more links with SGE and linking to a wider range of sources on the results page, creating new opportunities for content to be discovered. Of course, ads will continue to play an important role in this new Search experience. People are finding ads helpful here as they provide useful options to take action and connect with businesses. We’ll experiment with new formats native to SGE that use generative AI to create relevant, high-quality ads customized to every step of the Search journey.

Alphabet’s management thinks SGE could be a subscription service; it’s still very early days in the roll-out of SGE and management wants to get the user experience correct (Alphabet has gone through similar transitions before, so management is confident about this)

And I do think over time, there will be newer paths, just like we have done on YouTube. I think with the AI work, there are subscription models as a possible path as well. And obviously, all of the AI investments we are doing applies across Cloud, too, and I’m pretty optimistic about what’s ahead there as well…

…On the first part about SGE, we are still in very, very early days in terms of how much we have rolled it out, but we have definitely gotten it out to enough people both geographically across user segments and enough to know that the product is working well, it improves the experience and — but there are areas to improve, which we are fine-tuning. Our true north here is getting at the right user experience we want to, and I’m pretty comfortable seeing the trajectory. And we’ve always worked through these transitions, be it from desktop to mobile or from now mobile to AI and then to experience. And so it’s nothing new. 

Alphabet is making it easier for people to identify AI-generated content through digital watermarks

One area we are focused on is making sure people can more easily identify when they are encountering AI-generated content online. Using new technology powered by Google DeepMind SynthID, images generated by Vertex AI can be watermarked in a way that is invisible to the human eye without reducing the image quality. Underlying all this work is the foundational research done by our teams at Google DeepMind and Google Research. 

Alphabet’s management is committed to changing Alphabet’s cost base to accommodate AI investments; Alphabet has, for a long time, driven its cost curves down spectacularly, and management is confident that it will be the same for the current build-out of AI infrastructure

As we expand access to our new AI services, we continue to make meaningful investments in support of our AI efforts. We remain committed to durably reengineering our cost base in order to help create capacity for these investments in support of long-term sustainable financial value. Across Alphabet, teams are looking at ways to operate as effectively as possible focused on their biggest priorities…

…When I looked at the strength of the work we have done across our infrastructure as a company, our technical infrastructure as a company, and various given stages, at a given moment in time when we adopted new generations of technology, we have looked at the cost of it. But then the curves, the efficiency curves, we have driven on top of it has always been phenomenal to see. And I see the current moment as no different. Already through this year, we are driving significant efficiencies both in our models, in training costs and serving costs and our ability to adapt what’s needed to the right use case. 

Alphabet has new tools (including those powered by AI) that make it easier for (1) creators to produce content for Youtube’s various formats, (2) creators to connect with advertisers, and (3) advertisers drive higher ROI on advertising

At Made On YouTube in September, we announced new tools that make it easier to create engaging content. Dream Screen is an experimental feature that allows creators to add AI-generated video or image backgrounds to Shorts. And YouTube Create is a new mobile app with a suite of production tools for editing Shorts, longer videos or both…

…AI will do wonders for creation and storytelling. From Dream Screen and YouTube Create, which Sundar talked about, to features that audit up content in multiple languages, flip interim existing assets, remix and clip videos and more, we’re just getting started. We’re also helping brands break through its speed and scale across the funnel to drive results. Spotlight Moments launched last week. It uses AI to identify trending content around major cultural moments for brand sponsorship opportunities. There’s video reach campaigns, which are expanding to in-feed and Shorts, and will be generally available in November. AI is helping advertisers find as many people as possible and their ideal audience for the lowest possible price. Early tests are delivering 54% more reach at 42% lower cost. And then with video view campaigns, AI is serving skippable ads across in-stream, in-feed and Shorts and helping advertisers earn the maximum number of views at the lowest possible cost. So far, they’re driving 40% more views on average versus in-stream alone. Then for YouTube and other feed-based services, there’s our new demand-gen campaign, which launched in April, rolled out worldwide last week and was designed for the needs of today’s social marketers to engage people as they stream, scroll and connect. It combines video and image ads in one campaign with access to 3 billion users across YouTube and Google and the ability to optimize and measure across the funnel using Google AI. Demand gen is already driving successful brands like Samsung and Toyota.

Alphabet’s management believes that Google Cloud offers optimised infrastructure for AI training and inference, and more than 50% of all generative AI start-ups are using Google Cloud; Alphabet’s TPUs (tensor processing units) are winning customers; Google Cloud’s Vertex AI platform offers more than 100 AI models and the number of active generative AI projects built on Vertex AI grew by seven times sequentially

We offer advanced AI optimized infrastructure to train and serve models at scale. And today, more than half of all funded generative AI start-ups are Google Cloud customers. This includes AI21 Labs, Contextual, Elemental Cognition, Writer and more. We continue to provide the widest choice of accelerator options. Our A3 VMs [virtual machines] powered by NVIDIA’s H100 GPU are generally available, and we are winning customers with Cloud TPU v5e, our most cost efficient and versatile accelerator to date. On top of our infrastructure, our Vertex AI platform helps customers build, deploy and scale AI-powered applications. We offer more than 100 models, including popular third-party and open source models, as well as tools to quickly build Search in conversation use cases. From Q2 to Q3, the number of active generative AI projects on Vertex AI grew by 7x, including Highmark Health, which is creating more personalized member materials.

Duet AI, Alphabet’s AI assistant, is built on Google’s large foundation models and is used by large companies to boost developer productivity and smaller companies to help with data analytics; more than 1 million testers have used Duet AI in Google Workspace

Duet AI was created using Google’s leading large foundation models and is specially trained to help users to be more productive on Google Cloud. We continue expanding its capabilities and integrating it across a wide range of cloud products and services. With Duet AI, we are helping leading brands like PayPal and Deutsche Bank boost developer productivity, and we are enabling retailers like Aritzia and Gymshark to gain new insights for better and faster business results…

…In Workspace, thousands of companies and more than 1 million trusted testers have used Duet AI. They are writing and refining content in Gmail and Docs, creating original images from text within slides, organizing data and sheets and more.

Alphabet’s new consumer hardware products have an AI chip – Tensor G3 – built in them

Our portfolio of Pixel products are brought to life, thanks to our combination of foundational technologies AI, Android and Google Tensor. Google Tensor G3 is the third generation of our tailor-built chip. It’s designed to power transformative experiences by bringing the latest in Google AI research directly to our newest phones. 

Gemini is the foundation of the next-generation AI models that Google Deepmind will be releasing throughout 2024; Gemini will be multi-modal and will be used internally across all of Alphabet’s products as well as offered externally via Vertex 

On Gemini, obviously, it’s an effort from our combined Google DeepMind team. I’m very excited at the progress there as we’re working through getting the model ready. To me, more importantly, we are just really laying the foundation of what I think of as the next-generation series of models we’ll be launching throughout 2024. The pace of innovation is extraordinarily impressive to see. We are creating it from the ground up to be multimodal, highly efficient at tool and API integrations and, more importantly, laying the platform to enable future innovations as well. And we are developing Gemini in a way that it is going to be available at various sizes and capabilities, and we’ll be using it immediately across all our products internally as well as bringing it out to both developers and cloud customers through Vertex. So I view it as a journey, and each generation is going to be better than the other. And we are definitely investing, and the early results are very promising.

Alphabet’s AI tools are very well received by advertisers and nearly 80% of advertisers use at least one AI-powered search ads product

Our AI tools are very well received, AI, gen AI are top of mind for everybody, really. There’s a ton of excitement, lots of questions about it. Many understand the value. Nearly 80% of our advertisers already use at least one AI-powered search ads product. And yes, we’re hearing a lot of good feedback on, number one, our ads AI Essentials, which are really helping to unlock the power of AI and set up for durable ROI growth on the advertiser side, this is — those are products like the foundation for data and measurement, things like Google Tech, consent mode and so on; and obviously, Search and PMax, we talked about it; and then all the gen AI products, all those different ones. So there’s a whole lot of interest in those products, yes.

Amazon (NASDAQ: AMZN)

Anthropic, a high-profile AI startup, recently chose AWS as its primary cloud provider, and Anthropic will work with Amazon to further develop Amazon’s Trainium (for training AI models) and Inferentia (for AI inference work) chips; Amazon’s management believes the collaboration with Anthropic will help Amazon bring further price performance advantages to Trainium and Inferentia

Recently, we announced the leading LLM maker Anthropic chose AWS as its primary cloud provider. And we’ll use Trainium and Inferentia to build, train and deploy future LLMs. As part of this partnership, AWS and Anthropic will collaborate on the future development of Trainium and Inferentia technology. We believe this collaboration will be helpful in continuing to accelerate the price performance advantages that Trainium and Inferentia deliver for customers.

Perplexity is another AI startup that chose to run their models with Trainium and Inferentia

We are also seeing success with generative AI start-ups like Perplexity AI who chose to go all in with AWS, including running future models on Trainium and Inferentia.

Amazon’s management believes that Amazon’s Trainium and Inferentia chips are very attractive to people in the industry because they offer better price-performance characteristics and they can meet demand; Anthropic and Perplexity’s decisions to go with Trainium and Inferentia are statements to that effect

I would also say our chips, Trainium and Inferentia, as most people know, there’s a real shortage right now in the industry in chips, it’s really hard to get the amount of GPUs that everybody wants. And so it’s just another reason why Trainium and Inferentia are so attractive to people. They have better price performance characteristics than the other options out there, but also the fact that you can get access to them. And we’ve done, I think, a pretty good job providing supply there and ordering meaningfully in advance as well. And so you’re seeing very large LLM providers make big bets on those chips. I think Anthropic deciding to train their future LLM model on Trainium and using Inferentia as well is really a statement. And then you look at the really hot start-up Perplexity AI, who also just made a decision to do all their work on top of Trainium and Inferentia. So those are two examples.

Amazon recently announced the general availability of Amazon Bedrock (AWS’s LLMs-as-a-service), which gives access to a variety of 3rd-party large language models (LLMs) as well as Amazon’s own LLM called Titan; Meta’s Llama-2 LLM will also be on Bedrock, the first time it is available through a fully-managed service

In the middle layer, which we think of as large language models as a service, we recently introduced general availability for Amazon Bedrock, which offers customers access to leading LLMs from third-party providers like Anthropic, Stability AI, Cohere, and AI21, as well as from Amazon’s own LLM called Titan, where customers can take those models, customize them using their own data (but without leaking that data back into the generalized LLM), and have access to the same security, access control and features that they run the rest of their applications with in AWS, all through a managed service. In the last couple of months, we’ve announced the imminent addition of Meta’s Llama 2 model to Bedrock, the first time it’s being made available through a fully managed service.

Amazon’s management believes that Bedrock helps customers experiment rapidly with different LLMs and is the easiest way to build and scale enterprise-ready generative AI applications; customer reaction to Bedrock has been very positive

Also through our expanded collaboration with Anthropic, customers will gain access to future Anthropic models through Bedrock with exclusive early access to unique features, model customization and the ability to fine-tune the models. And Bedrock has added several new compelling features, including the ability to create agents which can be programmed to accomplish tasks like answering questions or automating workflows. In these early days of generative AI, companies are still learning which models they want to use, which models they use for what purposes and which model sizes they should use to get the latency and cost characteristics they desire. In our opinion, the only certainty is that there will continue to be a high rate of change. Bedrock helps customers with this fluidity, allowing them to rapidly experiment with and move between model types and sizes, enabling them to pick the right tool for the right job. The customer reaction to Bedrock has been very positive and the general availability has buoyed that further. Bedrock is the easiest way to build and scale enterprise-ready generative AI applications and a real game changer for developers and companies trying to get value out of this new technology…

Bedrock’s ability to let customers conduct fast experiments is very useful because customers sometimes get surprised at the true costs of running certain AI models

Because what happens is you try a model, you test the model, you like the results of the model, and then you plug it into your application, and what a lot of companies figure out quickly is that using the large models and the large sizes ends up often being more expensive than what they anticipated and what they want to spend on that application. And sometimes there’s too much latency in getting the answers as it shovels through the really large models. And so customers are experimenting with lots of different types of models and then different model sizes to get the cost and latency characteristics that they need for different use cases. It’s one of the things that I think is so useful about Bedrock: customers are trying so many variants right now, and to have a service that not only lets you leverage lots of third-party as well as Amazon large language models, but also lots of different sizes, and then makes the transition of moving those workloads easy between them, is very advantageous.
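Jassy’s point about cost and latency trade-offs can be made concrete with a toy sketch. The model names, per-token costs, and latency figures below are entirely hypothetical (they are not Bedrock’s actual models or pricing); the sketch simply illustrates the kind of selection logic a team might run while experimenting:

```python
# Toy model-selection sketch: pick the most capable model that fits an
# application's latency and cost budgets. All names and figures are
# hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class ModelProfile:
    name: str
    cost_per_1k_tokens: float  # USD per 1,000 tokens (hypothetical)
    p95_latency_ms: int        # 95th-percentile latency (hypothetical)

CANDIDATES = [
    ModelProfile("large-70b", 0.0080, 1800),
    ModelProfile("medium-13b", 0.0020, 600),
    ModelProfile("small-7b", 0.0008, 250),
]

def pick_model(latency_budget_ms, cost_budget_per_1k):
    """Return the priciest (as a proxy for most capable) model that fits
    both budgets, or None if nothing fits."""
    eligible = [m for m in CANDIDATES
                if m.p95_latency_ms <= latency_budget_ms
                and m.cost_per_1k_tokens <= cost_budget_per_1k]
    return max(eligible, key=lambda m: m.cost_per_1k_tokens) if eligible else None
```

With these made-up numbers, a latency-sensitive chatbot would land on the small model, while an offline batch job with a loose latency budget could afford the large one.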

Amazon CodeWhisperer, AWS’s coding companion, has a lot of early traction and has become more powerful recently by having the capability to be customised on a customer’s own code base (a first-of-its-kind feature)

Generative AI coding companion Amazon CodeWhisperer has gotten a lot of early traction and got a lot more powerful recently with the launch of its new customization capability. The #1 enterprise request for coding companions has been wanting these companions to be familiar with customers’ proprietary code bases, not just having code companions trained on open source code. Companies want the equivalent of a long-time senior engineer who knows their code base well. That’s what CodeWhisperer just launched, another first of its kind out there in its current form, and customers are excited about it.

Amazon’s management believes that customers want to bring AI models to their data, not the other way around – and this is an advantage for AWS as customers’ data resides within AWS

It’s also worth remembering that customers want to bring the models to their data, not the other way around. And much of that data resides in AWS as the clear market segment leader in cloud infrastructure. 

There are many companies that are building generative AI apps on AWS and this number is growing fast

The number of companies building generative AI apps on AWS is substantial and growing very quickly, including Adidas, Booking.com, Bridgewater, Clariant, GoDaddy, LexisNexis, Merck, Royal Philips and United Airlines, to name a few.

Generative AI’s growth rate within AWS is very fast – even faster than Amazon’s management expected – and management believes that the absolute amount of generative AI business within AWS compares very favourably with other cloud providers

The growth rate for us in generative AI is very fast. Again, I have seen a lot of different numbers publicly. It’s real hard to measure apples-to-apples. But in our best estimation, the amount of growth we’re seeing in the absolute amount of generative AI business compares very favorably with anything else I’ve seen externally.

Generative AI is already a pretty significant business for AWS, but it’s still early days

What I would tell you is that we have been surprised at the pace of growth in generative AI. Our generative AI business is growing very, very quickly, as I mentioned earlier. And almost by any measure, it’s a pretty significant business for us already. And yet I would also say that companies are still in the relatively early stages.

All of Amazon’s significant businesses are working on generative AI applications, with examples including using generative AI to (1) help consumers discover products, (2) forecast inventory in various locations, (3) help 3rd-party sellers create new product pages, (4) help advertisers with image generation for ads, and (5) improve Alexa

Beyond AWS, all of our significant businesses are working on generative AI applications to transform their customer experiences. There are too many for me to name on this call, but a few examples include: in our stores business, we’re using generative AI to help people better discover products they want and to more easily access the information needed to make decisions. We use generative AI models to forecast the inventory we need in our various locations and to derive optimal last mile transportation routes for drivers to employ. We’re also making it much easier for our third-party sellers to create new product pages by entering much less information and getting the models to do the rest. In advertising, we just launched a generative AI image generation tool, where all brands need to do is upload a product photo and description to quickly create unique lifestyle images that will help customers discover products they love. And in Alexa, we built a much more expansive LLM and previewed the early version of this. Apart from being a more intelligent version of herself, Alexa’s new conversational AI capabilities include the ability to make multiple requests at once as well as more natural and conversational requests without having to use specific phrases.

Amazon’s management still believes in the importance of building the world’s best personal assistant and they think Alexa could be one of these assistants

We continue to be convicted that the vision of being the world’s best personal assistant is a compelling and viable one and that Alexa has a good chance to be one of the long-term winners in this arena. 

While Amazon’s management is pulling back Amazon’s capital expenditure on other areas, they are increasing capital expenditure for AI-related infrastructure

For the full year 2023, we expect capital investments to be approximately $50 billion compared to $59 billion in 2022. We expect fulfillment and transportation CapEx to be down year-over-year, partially offset by increased infrastructure CapEx to support growth of our AWS business, including additional investments related to generative AI and large language model efforts.

Apple (NASDAQ: AAPL)

Apple’s management sees AI and machine learning as fundamental technologies to the company and they’re integrated in virtually every product that Apple ships

If you kind of zoom out and look at what we’ve done on AI and machine learning and how we’ve used it, we view AI and machine learning as fundamental technologies, and they’re integral to virtually every product that we ship. 

Apple’s AI-powered features include Personal Voice and Live Voicemail in iOS 17, and fall detection, crash detection, and ECG on the Apple Watch; Apple’s management does not want to label Apple’s AI-powered features with “AI” – instead the features are labelled by their consumer benefits

And so just recently, when we shipped iOS 17, it had features like Personal Voice and Live Voicemail. AI is at the heart of these features. And then you can go all the way to then life-saving features on the Watch and the phone like fall detection, crash detection, ECG on the watch. These would not be possible without AI. And so we don’t label them as such, if you will. We label them as to what their consumer benefit is, but the fundamental technology behind it is AI and machine learning.

Apple is investing in generative AI but management has no details to share yet

In terms of generative AI, we have — obviously, we have work going on. I’m not going to get into details about what it is because as you know, we really don’t do that. But you can bet that we’re investing, we’re investing quite a bit. We are going to do it responsibly. And it will — you will see product advancements over time where those technologies are at the heart of them.

Arista Networks (NYSE: ANET)

From the vantage point of Arista Networks’ management, Oracle has become an important AI data centre company

Our historic classification of our Cloud Titan customers has been based on the industry definition of customers with, or likely to attain, greater than 1 million installed compute servers. Looking ahead, we will combine Cloud and AI customer spend into one category called the Cloud and AI Titan sector. As a result of this combination, Oracle OCI becomes a new member of the sector, while Apple shifts to cloud specialty providers…

…So I think OCI has become a meaningful top-tier cloud customer and they belong in the Cloud Titan category, in addition to their AI investments as well. So for reasons of classification and definition, the change is very warranted. And yes, they happen to be a good customer of Arista; that’s nice as well.

Arista Networks’ management has observed that its large customers have different needs when it comes to AI and non-AI networking technologies 

During the past year, our Cloud Titan customers have been planning a different mix of AI networking and classic cloud networking for their compute and storage clusters.

Arista Networks’ management believes that the company’s recent deal with a public sector organisation to provide Ethernet networking technology for the organisation’s AI initiative is an example of why Ethernet is important in AI

Our next [ one ] showcases our expansion of Arista in the public sector with their AI initiative. This grant-funded project utilizes Arista’s simplified operational models with CloudVision. New AI workloads require high scale, high radix, high bandwidth and low latency, as well as a need for granular visibility. This build-out of a single EVPN-VXLAN based 400-gig fabric is based on deep buffer spines and underscores the importance of a lossless architecture for AI networking.

Arista Networks’ management is seeing its customers prioritise AI in their data centre spending right now, but demand for other forms of data centre-related spending will follow

We’ve always looked at the cloud network as a front end and a back end. And as we said last year, many of our cloud customers are favoring spending more on the back end with AI, which doesn’t mean they stop spending on the front end, but they’ve clearly prioritized and doubled down on AI this year. My guess is, as we look at the next few years, they’ll continue to double down on AI. But you cannot build an AI back-end cluster without thinking of the front end. So we’ll see a full cycle here; while today the focus is greatly on AI and the back end of the network, in the future, we expect to see more investments in the front end as well.

Arista Networks’ management sees AI networking as being dominated by InfiniBand today – with some room for a combination of InfiniBand and Ethernet – but they still believe that AI networking will trend toward Ethernet over time, with 2025 being a potential inflection point

Today, if I look at the 5 major designs for AI networking, one of them is still very InfiniBand-dominated; all the others we’re looking at are adopting a dual strategy of both Ethernet and InfiniBand. So I think AI networking is going to become more and more favorable to Ethernet, particularly with the Ultra Ethernet Consortium and the work they’re doing to define a spec; you’re going to see more products based on UEC. You’re going to see more of a connection between the back end and the front end using IP as a singular protocol. And so we’re feeling very encouraged that, especially in 2025, there will be a lot of production rollout of back end and, of course, front end based on Ethernet.

Arista Networks’ management sees networking spend as contributing to 10%-15% of the total cost of an AI data centre 

Coming back to this networking spend versus the rest of the GPUs, et cetera: I would say it started to get higher and higher with 100-gig, 400-gig and 800-gig, where the optics and the switches are more than 10%, perhaps even 15% – in some cases, 20% – and a lot of it is governed by the cables and optics too. But the percentage hasn’t changed a lot in high-speed networking. In other words, it’s not too different between 10, 100, 200, 400 and 800. So you’ll continue to see that 10% to 15% range.
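That 10%–15% range translates directly into a back-of-envelope sizing of the networking opportunity. The $100 million cluster cost below is a made-up figure for illustration:

```python
# Back-of-envelope: networking (switches, optics, cables) as a share of
# total AI cluster capex, using the 10%-15% range cited above.

def networking_spend(total_capex, share):
    """Networking dollars implied by a given share of total capex."""
    if not 0.0 <= share <= 1.0:
        raise ValueError("share must be a fraction between 0 and 1")
    return total_capex * share

cluster_capex = 100_000_000  # hypothetical $100M AI cluster
low = networking_spend(cluster_capex, 0.10)   # roughly $10M
high = networking_spend(cluster_capex, 0.15)  # roughly $15M
```

So every hypothetical $100 million AI cluster would imply roughly $10–$15 million of networking spend.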

Arista Networks’ management sees diversified activity when it comes to the development of AI data centres

[Question]  And just what you’re seeing in terms of other people kind of building out some of these AI clusters, if you classify some of those customers as largely focused on back end today, and those represent opportunities going forward? Or just kind of what the discussion is outside of the Cloud Titans amongst some of these other guys that are building very large networks?

[Answer]  The Tier 2 cloud providers are doing exactly what the Tier 1s are doing, just at a smaller scale. So the activity is out there. Many companies are trying to build these clusters – maybe not hundreds of thousands of GPUs, but thousands of GPUs together in their real estate, if they can get them. But the designs that we’re working on with them, the type of features, fine-tuning, is actually very, very similar to the cloud, just at a smaller scale. So we’re very happy with that activity, and this is across the board. It’s very positive to see this in the ecosystem, that it’s not limited to just 4 or 5 customers.

Arista Networks’ management is observing that data centre companies are facing a shortage of GPUs (graphics processing units) and they are trying to develop AI with smaller GPU clusters

I think they’re also waiting for GPUs like everyone else is. So there’s that common problem that we’re not the only one with lead-time issues. But to clarify the comment on scale, Anshul and I are also seeing some very interesting enterprise projects at a smaller scale. So a lot of customers are trying AI with small clusters, not too different from what we saw with HPC clusters back in the day.

Arista Networks’ management believes that good networking technology for AI requires not just good silicon, but the right software, so they are not concerned about Arista Networks’ suppliers moving up the stack

It’s not just the merchant silicon but how you can enable the merchant silicon with the right software and drivers, and this is an area where Arista really excels. If you just have chips, you can’t build the system. Our system-wide features – whether it’s dynamic load balancing, or a latency analyzer to really improve the job completion time and deal with the frequent communication in generative AI – are also fundamentally important…

… [Question] So I think there was a mention on merchant silicon earlier in the Q&A. And one of your merchant silicon partners has actually moved up the stack towards the service provider routing. I’m just curious if there’s any intention on going after that piece if that chip is made available to you?

[Answer] I believe you are referring to the latest announcement from Broadcom on their 25.6T Jericho chip that was announced recently.

[Question] Yes, the Qumran3D.

[Answer] Qumran3D, exactly. So it’s the same family, same features. And as you know, we’ve been a great partner of Broadcom for a long time, and we will continue to build new products. This is not a new entry, so to speak. We’ve been building these products that can be used as switches or routers for a while, and that bandwidth just doubled, going now to 25.6. So you can expect some products from us in the future with those variants as well. But really – nothing really changed…

…And the investment we have made in our routing stack over the last 10 years, I want to say, has just gotten better and stronger. Routing in the Internet, routing in the cloud, routing in AI – these are hard problems. And they require thousands of engineers of investment to build the right VXLAN, BGP routing, EVPN, et cetera. So it’s not just a chip. It’s how we enable the chip to do these complicated routing algorithms.

AI is becoming a really important component of Arista Networks’ customers

We’re simply seeing that AI is going to become such an important component of all our cloud titans that it is now a combined vertical.

Datadog (NASDAQ: DDOG)

Datadog’s management is excited about generative AI and large language models and they believe that the adoption of AI will lead to additional growth in cloud workloads

Finally, we continue to be excited about the opportunity in generative AI and Large Language Models. First, we believe adopting NextGen AI will require the use of cloud and other modern technologies and drive additional growth in cloud workloads.

Datadog is building LLM observability products

So we are continuing to invest by integrating with more components at every layer of the new AI stack and by developing our own LLM observability products. 

Datadog’s management is seeing adoption of AI across many of its customers, but the activity is concentrated in AI-native customers

And while we see signs of AI adoption across large parts of our customer base, in the near term, we continue to see AI-related usage manifest itself most acutely with next-gen AI-native customers, who contributed about 2.5% of our ARR this quarter.

Datadog is adding value to its own platform using AI, with examples including the Bits AI assistant, AI-generated synthetic tests, and AI-led error analysis and resolution

Besides observing the AI stack, we also expect to keep adding value to our own platform using AI. Datadog’s unified platform and purely SaaS model, combined with strong multiproduct adoption by our customers, generates a large amount of deep and precise observability data. We believe combining AI capabilities with this broad data set will allow us to deliver differentiated value to customers. And we are working to productise differentiated value through recently announced capabilities such as our Bits AI assistant, AI-generated synthetic tests, and AI-led error analysis and resolution, and we expect to deliver many more related innovations to customers over time.

Datadog’s management is seeing that AI-native customers are using Amazon’s AWS whereas the larger enterprises that are using AI are using Microsoft’s Azure

Interestingly enough, when we look at our cohort of customers that we consider to be AI-native and built largely on AI, they tend to be on different clouds. What we see is that the majority of those companies actually have a lot of their usage on AWS, while today the larger of these customers are on Azure. So we see really several different adoption trends there that I think are interesting to the broader market.

Datadog’s management is seeing broad usage of AI across Datadog’s customers, but the customers are adopting AI only at low volumes

We see broad usage of AI functionality across the customer base, but at low volumes, and it corresponds to the fact that most customers, or most enterprises really, are still in the early stages of developing and shipping applications. So for now, the usage is concentrated among the model providers.

Datadog’s management sees a lot of opportunity for Datadog as AI usage proliferates – for example, management believes that the widespread use of AI will result in the creation of a lot of code, and this code will need to be monitored

So on the DevSecOps side, I think it’s too early to tell how much revenue opportunity there is in the tooling specifically there. When you think of the whole spectrum of tools, the closer you get to the developer side, the harder it is to monetize, and the further you get towards operations and infrastructure, the easier it is to monetize. You can ship things that are very useful and very accretive to our platform, because they get you a lot of users, a lot of attention and a lot of stickiness, that are harder to monetize. So we’ll see where on the spectrum that is. What we know, though, is that broader generative AI, up and down the stack – from the components themselves, the GPUs, all the way up to the models and the various things that are used to orchestrate them and store the data and move the data around – all of that is going to generate a lot of opportunity for us. We said right now it’s concentrated among the AI natives, largely model providers. But we see that it’s going to broaden and concern a lot more of our customers down the road…

…So in general, the more complexity there is, the more useful observability is, and the more the value shifts from writing code to actually understanding it and observing it. To caricature: if you spend a whole year writing 5 lines of code that are really very deep, you actually know those 5 lines pretty well; maybe you don’t need observability because you understand exactly how they work and what’s going on with them. On the other hand, if, thanks to all the major advances of technology and generative AI, you can just very quickly generate thousands of lines of code, ship them and start operating them, you actually have no idea how these work and what they do. And you need a lot of tooling and observability to actually understand that, and keep driving that, and secure it, and do everything you need to do with it over time. So we think that overall, these increases in productivity are going to favor observability.

Datadog’s management is also trying to guess how transformative AI will be, but there are signs that AI’s impact will be truly huge

In terms of the future growth of AI, look, I think, like everyone, we’re trying to guess how transformative it’s going to be. It looks like it’s going to be pretty big, if you judge from just internally how much of that technology we are adopting and how much of a productivity impact it seems to be having.

AI-related use cases are still just a small fraction of the overall usage of Datadog’s products, but Datadog’s management thinks that AI will drive a lot of the company’s growth in the future 

So again, today, we only see a tiny bit of it, which is early adoption by model providers and a lot of companies that are trying to scale up and experiment and figure out who it applies to their businesses and what they can ship to use the technology. But we think it’s going to drive a lot of growth in the years to come.

Datadog’s management can’t tell when Datadog’s broader customer base will start ramping up AI workloads but they are experimenting; most of the innovation happening right now is concentrated among the model providers

[Question] Olivier, you called out the 2.5 points from AI native customers a few times, but you’ve also said that the broader customer base should start adding AI workloads to our platform over time. When do you think that actually takes place and the broader customer base starts to impact that AI growth in more earnest?

[Answer] We don’t know. And I think it’s too early to tell. For one part, there’s some uncertainty in terms of – these customers are trying to figure out what it is they are going to ship to their own customers. I think everybody is trying to learn that right now and experiment with it. But the other part is also that right now, the innovation is largely concentrated among the model providers. And so it’s rational right now for most customers to rely on those instead of deploying their own infrastructure. Again, we think that’s likely going to change. We see a lot of demand and interest in other ways to host models and run models and all those things like that. But today, these are the trends of the market basically.

Etsy (NASDAQ: ETSY)

Etsy’s management is improving the company’s search function by combining humans and machine learning technology to better identify the quality of each product listing on the Etsy platform

We’re moving beyond relevance to the next frontier of search focused on better identifying the quality of each Etsy listing, utilizing humans and ML technology so that from a highly relevant result set, we bring the very best of Etsy to the top, personalized to what we understand of your tastes and preferences. For example, from the start of the year, we’re tracking to a ninefold increase in the number of human-curated listings on Etsy to over 1.5 million listings by year-end. We’re also utilizing ML models designed to determine the visual appeal of items and incorporating that information into our search algorithms. 
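A minimal sketch of this two-stage idea – start from relevant results, then re-rank with a quality signal – might look like the following. The field names, weights, and curation boost are my own illustrative assumptions, not Etsy’s actual algorithm:

```python
# Sketch: blend a relevance score with an ML quality score (e.g. visual
# appeal), with a small extra boost for human-curated listings.
# The 0.7/0.3 weights and 0.05 boost are illustrative assumptions.

def rerank(listings, relevance_weight=0.7, quality_weight=0.3):
    """Sort listings by a blended relevance/quality score, best first."""
    def blended(listing):
        score = (relevance_weight * listing["relevance"]
                 + quality_weight * listing["quality"])
        if listing.get("human_curated"):
            score += 0.05  # curated items get a small extra boost
        return score
    return sorted(listings, key=blended, reverse=True)

results = rerank([
    {"id": "a", "relevance": 0.90, "quality": 0.20},
    {"id": "b", "relevance": 0.85, "quality": 0.80, "human_curated": True},
    {"id": "c", "relevance": 0.95, "quality": 0.10},
])
# listing "b" rises to the top despite having the lowest raw relevance
```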

Etsy’s management is using generative AI to improve the Etsy search-experience when buyers enter open-ended queries, which helps build purchase-frequency

There’s also a huge opportunity to evolve the Etsy experience so that we show buyers a more diverse set of options when they search for open-ended head query items such as back-to-school. On the left of this slide, you can see an example of how a search for back-to-school items looks on Etsy. We generally show multiple very similar versions of customized pencils, stickers, lawn signs and so on, all mixed together. This is suboptimal, as it offers buyers only a few main ideas on the first page of search and requires a ton of cognitive load to distinguish between virtually identical items. We’ve recently launched a variety of experiments with the help of Gen AI to evolve these types of head query searches. As we move into 2024, when a buyer searches for broad queries, we expect to be able to show a far more diverse and compelling set of ideas, all beautifully curated. By organizing search results into a number of ideas for you that are truly different, and helping to elevate the very best items within each of these ideas, we can take a lot of the hard work out of finding exactly the perfect item, and help build frequency as we highlight the wide range of merchandise available on Etsy.
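One simple way to picture the diversification Etsy describes is a round-robin interleave across “ideas”: instead of a page of near-identical pencils, take the best item from each idea group in turn. The idea labels below are hypothetical stand-ins for Etsy’s Gen-AI-derived groupings:

```python
# Sketch: diversify a head-query results page by interleaving items
# round-robin across their "idea" groups, preserving each group's
# internal (assumed best-first) order.
from itertools import zip_longest

def diversify(listings):
    """Interleave listings so consecutive slots come from different ideas."""
    groups = {}
    for item in listings:  # group by idea, keeping encounter order
        groups.setdefault(item["idea"], []).append(item)
    page = []
    for tier in zip_longest(*groups.values()):
        page.extend(item for item in tier if item is not None)
    return page

page = diversify([
    {"id": 1, "idea": "pencils"},
    {"id": 2, "idea": "pencils"},
    {"id": 3, "idea": "stickers"},
    {"id": 4, "idea": "lawn signs"},
])
# the first three slots now show three different ideas
# instead of two near-identical pencils
```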

Etsy’s management is using machine learning to identify product-listings that are not conforming to the company’s product policies, and listing-takedowns are already up 140% year-on-year 

We’ve hired a lot of people, and we also have been investing a lot in machine learning and machine learning is really helping us to be able to identify among the 120 million listings on Etsy, those that may not conform with our policy. Takedowns are up 140% year-over-year. 

Fiverr (NYSE: FVRR)

Fiverr’s management has developed Fiverr Neo, a generative AI tool that helps customers scope their projects better and match them with suitable freelance talent, just like a human recruiter would, just better; management believes that Fiverr Neo will help save customers time when they are looking for freelance talent

The vision for Fiverr Neo is quite wild – we imagine Neo will serve as a personalized recruiting expert that can help our customers more accurately scope their projects and get matched with freelance talent, just like a human recruiter, only with more data and more brain power. What we have done so far is leverage the existing LLM engines to allow customers to express their project needs in natural language, which Neo will synthesize and define the scope before matching the client with a short list of choices pulled from the entire Fiverr freelancer database. It’s a substantial step forward from the existing experience and shortens the time the customer needs to make an informed decision.
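The scope-then-match flow can be sketched roughly as below. In the real product an LLM turns the natural-language brief into a structured scope; here that step is assumed to have already happened, and the overlap-plus-rating scoring is purely my illustration, not Fiverr’s actual matching logic:

```python
# Sketch: match a structured project scope against a freelancer pool by
# skill overlap, breaking ties by rating. All names and the scoring
# scheme are hypothetical illustrations.

def match_freelancers(scope, pool, shortlist=3):
    """Return a shortlist of freelancer names ranked by fit."""
    required = set(scope["skills"])
    def fit(freelancer):
        overlap = len(required & set(freelancer["skills"])) / len(required)
        return (overlap, freelancer["rating"])
    ranked = sorted(pool, key=fit, reverse=True)
    return [f["name"] for f in ranked[:shortlist]]

scope = {"skills": ["logo design", "branding"]}  # assumed LLM output
pool = [
    {"name": "ana", "skills": ["logo design", "branding"], "rating": 4.9},
    {"name": "bo", "skills": ["logo design"], "rating": 4.7},
    {"name": "cy", "skills": ["copywriting"], "rating": 5.0},
]
# the full-overlap freelancer "ana" leads the shortlist
```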

Fiverr’s management used a combination of Fiverr’s own software and LLMs from other companies to build Fiverr Neo

So there’s a lot of learning as we build this product. And what we’re doing is really a hybrid of technologies. Some of them are being developed by us. Some are off the shelf, most of the leading companies that are developing LLM, which have partnered with us. And we’re putting this to the maximum. I think a lot of these systems are not yet optimized for large scale and high performance but we find our own ways of developing a lot of this technology to provide a very smooth experience to our customers. 

Fiverr Neo is still new, but users are already experiencing more accurate matches

In terms of Fiverr Neo, we’re very pleased with the rollout. Obviously a very, very young product, but we’re seeing over 100,000 users that are trying the product. And what we’re seeing from their experience is that we’re able to provide more accurate matches, which is basically what we wanted to do, and higher engagement and satisfaction levels, which we’re very happy with, and the beginnings of repeat usage of the product.

Fiverr’s management thinks that AI has a positive impact on the product categories that Fiverr can introduce to its marketplace and management is ensuring that Fiverr’s catalog will contain any new skills that the AI-age will require; management thinks that a lot of AI hype at the beginning of the year has died down and the world is looking for killer AI applications

So I did address this also in how we think about next year, and the fact that AI both impacts the efficiency of how we work and allows us to do pretty incredible things in our product. It also has a positive impact on the categories that we can introduce. So again, we’re not getting into specific category breakdown. But what we’re seeing on the buyer side – I think we’ve introduced these categories, and these categories continue growing. I think that a lot of the hype that surrounded AI in the beginning of the year subsided, and right now, it’s really about looking for the killer applications that could be developed with AI, and we’re developing some of them and our customers are as well. So these are definitely areas where we continue seeing growth, but not just that – we continue investing in the catalog side to ensure that the new types of skills that pop up are going to be addressed on the Fiverr marketplace.

Mastercard (NYSE: MA)

Mastercard’s management is using AI to improve the company’s fraud-related solutions and has signed agreements in Argentina, Saudi Arabia, and Nigeria in this area

AI also continues to play a critical role powering our products and fueling our network intelligence. We’re scaling our AI-powered transaction fraud monitoring solution, which delivers real-time predictive scores based on a unique blend of customer and network level insights. This powerful solution gives our customers the ability to take preventive action before the transaction is authorized. This quarter alone, we signed agreements in Argentina, Saudi Arabia and Nigeria with financial institutions and fintechs who will benefit from early fraud detection and with merchants who will experience less friction and higher approval rates.

MercadoLibre (NASDAQ: MELI)

MercadoLibre’s management is very excited about AI and how it can help MercadoLibre improve the user experience and its business operations

As you know, we don’t guide, but there are many exciting things going on, particularly, obviously, AI. That hopefully will enable us to provide our users a better experience, enable us to launch innovative ideas, and also scale and gain efficiencies, whether it is in customer service, or whether it is in fraud prevention or whether it is in the way our developers, 15,000 developers, go about developing and performing quality control, et cetera. So obviously, looking forward for the next 3 years, I think that’s a key thing to look into.

MercadoLibre’s management is working on using AI to improve the company’s product-search function and they are happy with the progress so far 

Last question in terms of AI and search, we are working on that. I mean, we are putting a lot of effort into building solutions around AI. I think we don’t have much to disclose as of now, but search, reviews, questions and answers, buy box and products, and, as Marcos was saying, a copilot for our developers. We’re looking at the broad range of AI uses for MercadoLibre to boost consumer demand and efficiency. And we’re happy with the progress that we have so far, but not much to be said yet.

MercadoLibre’s management has been using AI for many years in fraud prevention and credit scoring for the company’s services

We have been using AI for a long time now – for many, many years – both in terms of fraud prevention and credit scoring. Both instances are pretty much use cases which are ideal for AI, because we have, in the case of fraud prevention, millions of transactions every day with a clear outcome: either fraud or not fraud. So with the right variables, we can build a very strong model that is highly predictive and gives us really best-in-class fraud prevention. And with that knowledge, and given the experience we have been building on credits, we have also built our credit scoring models leveraging AI.
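The fraud-prevention use case MercadoLibre describes – millions of labelled transactions feeding a predictive model – can be caricatured with a tiny logistic scorer. The features, weights, and threshold below are made up for illustration; a production system would learn them from those millions of labelled transactions:

```python
# Toy real-time fraud scorer: weighted transaction features pushed
# through a logistic function, with a threshold driving the decision.
# Weights, bias, and threshold are illustrative, not learned.
import math

WEIGHTS = {"amount_zscore": 1.2, "new_device": 1.5, "foreign_ip": 0.9}
BIAS = -3.0
BLOCK_THRESHOLD = 0.5

def fraud_score(txn):
    """Fraud probability estimate in [0, 1]."""
    z = BIAS + sum(w * txn.get(k, 0.0) for k, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def should_block(txn):
    return fraud_score(txn) >= BLOCK_THRESHOLD

routine = {"amount_zscore": 0.1, "new_device": 0, "foreign_ip": 0}
risky = {"amount_zscore": 2.5, "new_device": 1, "foreign_ip": 1}
# the routine transaction scores well under the threshold;
# the anomalous one crosses it and would be blocked
```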

Meta Platforms (NASDAQ: META)

The next-generation Ray-Ban Meta smart glasses have embedded AI

The next generation of Ray-Ban Meta smart glasses, which are the first smart glasses with our Meta AI built in.

Meta Platforms’ management thinks glasses are an ideal form-factor for an AI device as it can see exactly what you see and hear what you hear

And in many ways, glasses are the ideal form factor for an AI device because they enable your AI assistant to see what you see and hear what you hear. 

Llama 2 is now the leading open source AI model with >30 million downloads last month

We’re also building foundation models like Llama 2, which we believe is now the leading open source model with more than 30 million Llama downloads last month.

Beyond generative AI, Meta Platforms’ management is using recommendation AI systems for the company’s Feeds, Reels, ads, and integrity systems and these AI systems are very important to the company; AI feed recommendations led to increases in time spent on Facebook (7%) and Instagram (6%)

Beyond that, there was also a different set of sophisticated recommendation AI systems that powers our Feeds, Reels, ads and integrity systems. And this technology has less hype right now than generative AI but it is also very important and improving very quickly. AI-driven feed recommendations continue to grow their impact on incremental engagement. This year alone, we’ve seen a 7% increase in time spent on Facebook and a 6% increase on Instagram as a result of recommendation improvements. 

Meta Platforms’ AI tools for advertisers have helped drive its Advantage+ advertising product to reach a US$10 billion revenue run-rate, with more than 50% of the company’s advertisers using Advantage+ creative tools

Our AI tools for advertisers are also driving results with Advantage+ shopping campaigns reaching a $10 billion run rate and more than half of our advertisers using our Advantage+ creative tools to optimize images and text and their ads creative.

AI-recommended content has become increasingly incremental to engagement on Meta Platforms’ properties

AI-recommended content from unconnected accounts and feed continues to become increasingly incremental to engagement, including in the U.S. and Canada. These gains are being driven by improvements to our recommendation systems, and we see additional opportunities to advance our systems even further in the future as we deploy more advanced models.

Meta Platforms’ management believes that the company’s Business AIs can easily help businesses set up AIs to communicate with consumers at very low cost, which is important in developed economies where cost of labour is high (businesses in developing economies tend to hire humans to communicate with consumers)

Now I think that this is going to be a really big opportunity for our new Business AIs that I talked about earlier that we hope will enable any business to easily set up an AI that people can message to help with commerce and support. Today, most commerce and messaging is in countries where the cost of labor is low enough that it makes sense for businesses to have people corresponding with customers over text. And in those countries like Thailand or Vietnam, there’s a huge amount of commerce that happens in this way. But in lots of parts of the world, the cost of labor is too expensive for this to be viable. But with business AIs, we have the opportunity to bring down that cost and expand commerce and messaging into larger economies across the world. So making business AIs work for more businesses is going to be an important focus for us into 2024.

Meta Platforms’ management has started testing the company’s AI capabilities with a few partners in business messaging

We’ve recently started testing AI capabilities with a few partners and we’ll take our time to get the experience right, but we believe this will be a big unlock for business messaging in the future.

Meta Platforms’ management still believes in the benefits of open-sourcing Meta’s AI models: It increases adoption (which benefits the company as the security features and cost-efficiency of the models improve) and talent is more attracted to Meta Platforms

We have a pretty long history of open sourcing parts of our infrastructure that are not kind of the direct product code. And a lot of the reason why we do this is because it increases adoption and creates a standard around the industry, which often drives forward innovation faster so we benefit and our products benefit as well as there’s more scrutiny on kind of security and safety-related things so we think that there’s a benefit there.

And sometimes, more companies running models or infrastructure can make it run more efficiently, which helps reduce our costs as well, which is something that we’ve seen with open compute. So I think that there’s a good chance that, that happens here over time. And obviously, our CapEx expenses are a big driver of our costs, so any aid in innovating on efficiency is sort of a big thing there.

The other piece is just that over time with our AI efforts, we’ve tried to distinguish ourselves as being a place that does work that will be shared with the industry and that attracts a lot of the best people to come work here. So a lot of people want to go to the place to work where their work is going to touch most people. One way to do that is by building products that billions of people use. But if you’re really a focused engineer or researcher in this area, you also want to build the thing that’s going to be the standard for the industry. So that’s pretty exciting and it helps us do leading work.

Meta Platforms’ management thinks the AI characters that the company introduced recently could lead to a new kind of medium and art form and ultimately drive increasing engagement for users of the company’s social apps

We’re designing these to make it so that they can help facilitate and encourage interactions between people and make things more fun by making it so you can drop in some of these AIs into group chats and things like that just to make the experiences more engaging. So this should be incremental and create additional engagement. The AIs also have profiles in Instagram and Facebook and can produce content, and over time, going to be able to interact with each other. And I think that’s going to be an interesting dynamic and an interesting, almost a new kind of medium and art form. So I think that will be an interesting vector for increasing engagement and entertainment as well.

Meta Platforms’ management thinks that generative AI is a really exciting technology that changes everything; although it’s hard to predict what generative AI’s impact will be on how individuals use Meta’s services, management still thinks it’s worth investing in

In terms of how big this is going to be, it’s hard to predict because I don’t think that anyone has built what we’re building here. I mean, there are some analogies, like what OpenAI is doing with ChatGPT, but that’s pretty different from what we’re trying to do. Maybe the Meta AI part of what we’re doing overlaps with the type of work that they’re doing, but for the AI characters piece, there’s a consumer part of that, there’s a business part, there’s a creators part. I’m just not sure that anyone else is doing this. And when we were working on things like Stories and Reels, there were some market precedents before that. Here, there’s technology which is extremely exciting. But I think part of what leading in an area and developing a new thing means is you don’t quite know how big it’s going to be. But what I predict is that I do think that the fundamental technology around generative AI is going to transform meaningfully how people use each of the different apps that we build…

…So I think you’re basically seeing that there are going to be — this is a very broad and exciting technology. And frankly, I think that this is partially why working in the technology industry is so awesome, right, is that every once in a while, something comes along like this, that like changes everything and just makes everything a lot better and your ability to just be creative and kind of rethink the things that you’re doing to be better for all the people you serve…

…But yes, it’s hard sitting here now to predict what the metrics are going to be, like what’s the balance of messaging between AIs and people, or the balance in Feeds between AI content and people content, or anything like that. But I mean, I’m highly confident that this is going to be a thing and I think it’s worth investing in.

Meta Platforms’ management believes that generative AI will have a big impact on the digital advertising industry

It’s going to change advertising in a big way. It’s going to make it so much easier to run ads. Businesses that basically before would have had to create their own creative or images now won’t have to do that. They’ll be able to test more versions of creative, whether it’s images or eventually video or text. That’s really exciting, especially when paired with the recommendation AI.

Microsoft (NASDAQ: MSFT)

Microsoft’s management is making AI real for everyone through the introduction of Copilots

With Copilots, we are making the age of AI real for people and businesses everywhere. We are rapidly infusing AI across every layer of the tech stack and for every role and business process to drive productivity gains for our customers.

Microsoft’s management believes that Azure has the best AI infrastructure for both training and inference

We have the most comprehensive cloud footprint with more than 60 data center regions worldwide as well as the best AI infrastructure for both training and inference. And we also have our AI services deployed in more regions than any other cloud provider.

Azure AI provides access to models from OpenAI and open-sourced models (including Meta’s) and 18,000 organisations now use Azure OpenAI

Azure AI provides access to best-in-class frontier models from OpenAI and open-source models, including our own as well as from Meta and Hugging Face, which customers can use to build their own AI apps while meeting specific cost, latency and performance needs. Because of our overall differentiation, more than 18,000 organizations now use Azure OpenAI service, including new to Azure customers.

GitHub Copilot increases developer productivity by up to 55%; there are more than 1 million paid Copilot users and more than 37,000 organisations that subscribe to Copilot for business (up 40% sequentially)

With GitHub Copilot, we are increasing developer productivity by up to 55% while helping them stay in the flow and bringing the joy back to coding. We have over 1 million paid Copilot users and more than 37,000 organizations that subscribe to Copilot for business, up 40% quarter-over-quarter, with significant traction outside the United States.

Microsoft’s management is using AI to improve the healthcare industry: Dragon Ambient Experience (from the Nuance acquisition) has been used in more than 10 million patient interactions to-date to automatically document the interactions, and DAX Copilot can draft clinical notes in seconds, saving physicians up to 40 minutes of documentation time daily

In health care, our Dragon Ambient Experience solution helps clinicians automatically document patient interactions at the point of care. It’s been used across more than 10 million interactions to date. And with DAX Copilot, we are applying generative models to draft high-quality clinical notes in seconds, increasing physician productivity and reducing burnout. For example, Atrium Health, a leading provider in Southeast United States, credits DAX Copilot with helping its physicians each save up to 40 minutes per day in documentation time.

Microsoft’s management has infused Copilot across Microsoft’s work-productivity products and tens of thousands of users are already using Copilot in early access

Copilot is your everyday AI assistant, helping you be more creative in Word, more analytical in Excel, more expressive in PowerPoint, more productive in Outlook and more collaborative in Teams. Tens of thousands of employees at customers like Bayer, KPMG, Mayo Clinic, Suncorp and Visa, including 40% of the Fortune 100, are using Copilot as part of our early access program.

Users find Copilot amazing and have enjoyed productivity gains similar to those developers saw with GitHub Copilot

Customers tell us that once they use Copilot, they can’t imagine work without it, and we are excited to make it generally available for enterprise customers next week. This quarter, we also introduced a new hero experience in Copilot, helping employees tap into their entire universe of work, data and knowledge using chat. And the new Copilot Lab helps employees build their own work habits for this era of AI by helping them turn good prompts into great ones…

…And in fact, the interesting thing is it’s not any one tool, right, which is the feedback even sort of is very clear that it’s the all up. You just keep hitting the Copilot button across every surface, right, whether it’s in Word to create documents, in Excel to do analysis or PowerPoint or Outlook or Teams. Like clearly, the Teams Meeting, which is an intelligent recap, right? It’s not just a dumb transcript. It’s like having a knowledge base of all your meetings that you can query and add to essentially the knowledge terms of your enterprise. And so we are seeing broad usage across and the interesting thing is by different functions, whether it’s in finance or in sales by roles. We have seen productivity gains like we saw with developers in GitHub Copilot.

At the end of the day, Microsoft management is still grounded about the rate of adoption of Copilot in Office, since it is an enterprise product

And of course, this is an enterprise product. I mean at the end of the day, we are grounded on enterprise cycle times in terms of adoption and ramp. And it’s incrementally priced. So therefore, that all will apply still. But at least for something completely new, to have this level of usage already and this level of excitement is something we’re very, very pleased with.

Microsoft’s management recently introduced Security Copilot, the world’s first generative AI cybersecurity product, and it is seeing high demand

 We see high demand for Security Copilot, the industry’s first and most advanced generative AI product, which is now seamlessly integrated with Microsoft 365 Defender. Dozens of organizations, including Bridgewater, Fidelity National Financial and Government of Alberta, have been using Copilot in preview and early feedback has been positive.

Bing users have engaged in over 1.9 billion chats, and Bing has a new personalised answers feature and support for DALL-E 3 (more than 1.8 billion images have been created to-date)

Bing users have engaged in more than 1.9 billion chats, and Microsoft Edge has now gained share for 10 consecutive quarters. This quarter, we introduced new personalized answers as well as support for DALL-E 3, helping people get more relevant answers and to create incredibly realistic images. More than 1.8 billion images have been created to date.

Bing is now incorporated into Meta’s AI chat experience

We’re also expanding to new end points, bringing Bing to Meta’s AI chat experience in order to provide more up-to-date answers as well as access to real-time search information. 

Azure saw higher-than-expected AI consumption

In Azure, as expected, the optimization trends were similar to Q4. Higher-than-expected AI consumption contributed to revenue growth in Azure.

Microsoft’s management is seeing new AI project starts in Azure, and these bring other cloud projects

Given our leadership position, we are seeing complete new project starts, which are AI projects. And as you know, AI projects are not just about AI meters. They have lots of other cloud meters as well. So that sort of gives you one side of what’s happening in terms of enterprise.

Microsoft’s management believes the company has very high operating leverage with AI, since the company is using one model across its entire stack of products, and this operating leverage goes down to the silicon level

Yes, it is true that we have — the approach we have taken is a full-stack approach all the way from whether it’s ChatGPT or Bing Chat or all our Copilots, all share the same model. So in some sense, one of the things that we do have is very, very high leverage of the one model that we used, which we trained, and then the one model that we are doing inferencing at scale. And that advantage sort of trickles down all the way to both utilization internally, utilization of third parties. And also over time, you can see that sort of stack optimization all the way to the silicon because the abstraction layer to which the developers are writing is much higher up than low-level kernels, if you will. So therefore, I think there is a fundamental approach we took, which was a technical approach of saying we’ll have Copilots and Copilot stack all available. That doesn’t mean we don’t have people doing training for open-source models or proprietary models. We also have a bunch of open-source models. We have a bunch of fine-tuning happening, a bunch of RLHF happening. So there’s all kinds of ways people use it, but the thing is we have scale leverage of one large model that was trained and one large model that’s been used for inference across all our first-party SaaS apps as well as our API in our Azure AI service…

…In addition, what Satya mentioned earlier in a question, and I just want to take every chance to reiterate it, if you have a consistent infrastructure from the platform all the way up through its layers that every capital dollar we spend, if we optimize revenue against it, we will have great leverage because wherever demand shows up in the layers, whether it’s at the SaaS layer, whether it’s at the infrastructure layer, whether it’s for training workloads, we’re able to quickly put our infrastructure to work generating revenue on our BEAM workloads. I mean I should have mentioned all the consumer workloads use the same frame.
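
The leverage management describes can be made concrete with a toy amortization: one fixed training cost for a shared model, spread over inference volume across several products. All numbers and volumes below are hypothetical assumptions, not Microsoft figures:

```python
# Toy illustration of the operating leverage from one shared model: a fixed
# training cost amortized over inference volume across several products.
# All numbers and product volumes here are hypothetical, not Microsoft figures.
training_cost = 100_000_000          # one-off cost to train the shared model ($)
products = ["Bing Chat", "GitHub Copilot", "Microsoft 365 Copilot"]
queries_per_product = 1_000_000_000  # assumed annual inference volume, per product

total_queries = queries_per_product * len(products)
training_cost_per_query = training_cost / total_queries
print(f"Amortized training cost: ${training_cost_per_query:.4f} per query")
```

The point of the sketch: every additional product surface that reuses the same trained model drives the per-query share of the fixed training cost down, which is why concentrating demand on one model and one infrastructure improves capital efficiency.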

Microsoft’s management believes that having the discipline to concentrate Microsoft’s tech stack and capital spend is important because the costs of developing and using AI can run up really quickly

I think it is very important for us to be very disciplined on both, I’ll call it, our tech stack as well as our capital spend, all to be concentrated. The lesson learned from the cloud side is this: we’re not running a conglomerate of different businesses. It’s all one tech stack up and down Microsoft’s portfolio. And that I think is going to be very important because of that discipline: given what the spend will look like for this AI transition, any business that’s not disciplined about their capital spend accruing across all their businesses could run into trouble.

Nvidia (NASDAQ: NVDA)

Nvidia’s management believes that its chips, together with the Infiniband networking technology, are the reference architecture for AI

NVIDIA HGX with InfiniBand together are essentially the reference architecture for AI supercomputers and data center infrastructures.

Inferencing is now a major workload for Nvidia chips

Inferencing is now a major workload for NVIDIA AI compute.

Nvidia’s management is seeing major consumer internet companies ramping up generative AI deployment, and enterprise software companies starting to

Most major consumer Internet companies are racing to ramp up generative AI deployment. The enterprise wave of AI adoption is now beginning. Enterprise software companies such as Adobe, Databricks, Snowflake and ServiceNow are adding AI copilots and assistance with their pipelines.

Recent US export controls have affected Nvidia’s chip exports to China, Vietnam, and parts of the Middle East

Toward the end of the quarter, the U.S. government announced a new set of export control regulations for China and other markets, including Vietnam and certain countries in the Middle East. These regulations require licenses for the export of a number of our products, including our Hopper and Ampere 100 and 800 series and several others. Our sales to China and other affected destinations derived from products that are now subject to licensing requirements have consistently contributed approximately 20% to 25% of data center revenue over the past few quarters. We expect that our sales to these destinations will decline significantly in the fourth quarter, though we believe it will be more than offset by strong growth in other regions.

Many countries are keen to invest in sovereign AI infrastructure, and Nvidia’s management is helping them do so, as it represents a multibillion-dollar economic opportunity

Many countries are awakening to the need to invest in sovereign AI infrastructure to support economic growth and industrial innovation. With investments in domestic compute capacity, nations can use their own data to train LLMs and support their local generative AI ecosystem. For example, we are working with the India government and its largest tech companies, including Infosys, Reliance and Tata, to boost their sovereign AI infrastructure. And French private cloud provider Scaleway is building a regional AI cloud based on NVIDIA H100, InfiniBand and NVIDIA AI Enterprise software to fuel advancement across France and Europe. National investment in compute capacity is a new economic imperative and serving the sovereign AI infrastructure market represents a multibillion-dollar opportunity over the next few years…

…The U.K. government announced it will build one of the world’s fastest AI supercomputers, called Isambard-AI, with almost 5,500 Grace Hopper Superchips. The German supercomputing center Jülich also announced that it will build its next-generation AI supercomputer with close to 24,000 Grace Hopper Superchips and Quantum-2 InfiniBand, making it the world’s most powerful AI supercomputer with over 90 exaflops of AI performance…

…You’re seeing sovereign AI infrastructures: countries that now recognize that they have to utilize their own data, keep their own data, keep their own culture, process that data and develop their own AI.

Nvidia has a new chip with inference speeds that are 2x faster than the company’s flagship H100 GPUs (graphics processing units)

We also announced the latest member of the Hopper family, the H200, which will be the first GPU to offer HBM3e, faster, larger memory to further accelerate generative AI and LLMs. It boosts inference speed up to 2x compared to H100 GPUs for running LLMs like [indiscernible].

Major cloud computing services providers will soon begin to offer instances for Nvidia’s next-generation GPU, the H200  

Compared to the H100, H200 delivers an 18x performance increase for inference models like GPT-3, allowing customers to move to larger models with no increase in latency. Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud will be among the first CSPs to offer H200-based instances starting next year.

Nvidia’s management is seeing very strong demand for Infiniband; management believes that Infiniband is critical in the deployment of LLMs (large language models); management believes that the vast majority of large-scale AI factories have standardised on Infiniband because of Infiniband’s vastly superior value proposition compared to Ethernet (data-traffic patterns are very different for AI and for typical hyperscale cloud environments)

Networking now exceeds a $10 billion annualized revenue run rate. Strong growth was driven by exceptional demand for InfiniBand, which grew fivefold year-on-year. InfiniBand is critical to gain the scale and performance needed for training LLMs. Microsoft made this very point last week highlighting that Azure uses over 29,000 miles of InfiniBand cabling, enough to circle the globe…

…The vast majority of the dedicated large-scale AI factories standardized on InfiniBand. And the reason for that is really because of its data rate and not only just the latency, but the way that it moves traffic around the network is really important. The way that you process AI in a multi-tenant hyperscale Ethernet environment, the traffic pattern is just radically different. And with InfiniBand and with software-defined networks, we could do congestion control, adaptive routing, performance isolation and noise isolation, not to mention, of course, the data rate and the low latency and the very low overhead that are a natural part of InfiniBand.

And so InfiniBand is not so much just a network. It’s also a computing fabric. We put a lot of software-defined capabilities into the fabric, including computation. We do floating point calculations and computation right on the switch and right in the fabric itself. And so that’s the reason why the difference between InfiniBand and Ethernet for AI factories is so dramatic. And the difference is profound, and the reason for that is because you’ve just invested in a $2 billion infrastructure for AI factories; a 20%, 25%, 30% difference in overall effectiveness, especially as you scale up, is measured in hundreds of millions of dollars of value. And if you were renting that infrastructure over the course of 4 or 5 years, it really adds up. And so InfiniBand’s value proposition is undeniable for AI factories.
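
The arithmetic in the quote is easy to make concrete. Taking the $2 billion infrastructure figure and the 20%-30% effectiveness range at face value:

```python
# Making the quote's arithmetic concrete: a 20-30% effectiveness gap on a
# $2 billion AI factory, the figures cited in the quote above.
infrastructure_cost = 2_000_000_000  # $2B
values = {gap: infrastructure_cost * gap for gap in (0.20, 0.25, 0.30)}
for gap, value in values.items():
    print(f"{gap:.0%} effectiveness gap -> ${value / 1e6:,.0f}M of value at stake")
```

That is $400M-$600M, which is the "hundreds of millions of dollars" management refers to; over a 4-5 year rental term the stakes per year remain very large relative to the networking cost difference.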

Nvidia’s management is expanding the company into Ethernet and Nvidia’s Ethernet technology performs better than traditional offerings; management’s go-to-market strategy for Nvidia’s new Ethernet technology is to collaborate with the company’s large enterprise partners

We are expanding NVIDIA networking into the Ethernet space. Our new Spectrum-X end-to-end Ethernet offering, with technologies purpose-built for AI, will be available in Q1 next year. We have support from leading OEMs, including Dell, HP and Lenovo. Spectrum-X can achieve 1.6x higher networking performance for AI communication compared to traditional Ethernet offerings…

…And our company’s AI, for all of our employees, doesn’t have to be as high performance as the AI factories we use to train the models. And so we would like the AI to be able to run in an Ethernet environment. And so what we’ve done is we invented this new platform that extends Ethernet. It doesn’t replace Ethernet; it’s 100% compliant with Ethernet, and it’s optimized for east-west traffic, which is where the computing fabric is. It adds to Ethernet an end-to-end solution with BlueField as well as our Spectrum switch that allows us to perform some of the capabilities that we have in InfiniBand, not all but some, and we achieved excellent results.

And the way we go to market is we go to market with our large enterprise partners who already offer our computing solution. And so HP and Lenovo have the NVIDIA AI stack, the NVIDIA enterprise software stack. And now they integrate with BlueField as well as bundle and take to market the Spectrum switch, and they’ll be able to offer enterprise customers all over the world, with their vast sales force and vast network of resellers, a fully integrated, if you will, fully optimized, end-to-end AI solution. And so that’s basically bringing AI to Ethernet for the world’s enterprises.

Nvidia’s management believes that a new class of data centres is emerging, which they have named “AI factories”; these AI factories are being built all across the world

This is the traditional data centers that you were just talking about, where we represent about 1/3 of that. But there’s a new class of data centers. And this new class of data centers is unlike the data centers of the past, where you have a lot of applications running, used by a great many different tenants sharing the same infrastructure, and where the data center stores a lot of files. These new data centers run very few applications, if not one application, used by basically one tenant. And it processes data. It trains models and it generates tokens, it generates AI. And we call these new data centers AI factories. We’re seeing AI factories being built out everywhere, in just about every country.

Nvidia’s management is seeing the appearance of CSPs (cloud services providers) that specialise only in GPUs and processing AI

You’re seeing GPU-specialized CSPs cropping up all over the world, and they’re dedicated to doing really one thing, which is processing AI.

Nvidia’s management is seeing an AI adoption-wave moving from startups and CSPs to consumer internet companies, and then to enterprise software companies, and then to industrial companies

And so we’re seeing the waves of generative AI starting from the start-ups and CSPs, moving to consumer Internet companies, moving to enterprise software platforms, moving to enterprise companies. And then, ultimately, one of the areas that you guys have seen us spend a lot of energy on has to do with industrial generative AI. This is where NVIDIA AI and NVIDIA Omniverse come together. And that is really, really exciting work. And so I think we’re at the beginning of a basically across-the-board industrial transition to generative AI, to accelerated computing. This is going to affect every company, every industry, every country.

Nvidia’s management believes that Nvidia’s AI Enterprise service – where the company helps its customers develop custom AI models that the customers are then free to monetise in whatever manner they deem fit – will become a very large business for Nvidia

Our monetization model is that with each one of our partners, they rent a sandbox on DGX Cloud where we work together. They bring their data. They bring their domain expertise. We’ve got our researchers and engineers. We help them build their custom AI. We help them make that custom AI incredible. Then that custom AI becomes theirs, and they deploy it on a runtime that is enterprise-grade, performance-optimized, and runs across everything NVIDIA. We have a giant installed base in the cloud, on-prem, anywhere. And it’s secure, constantly patched, optimized and supported. And we call that NVIDIA AI Enterprise.

NVIDIA AI Enterprise is $4,500 per GPU per year. That’s our business model. Our business model is basically a license. Our customers then, with that basic license, can build their monetization model on top of it. In a lot of ways we’re wholesale; they become retail. They could have a per-subscription license base, they could charge per instance, or they could do per usage. There are a lot of different ways that they could take to create their own business model, but ours is basically like a software license, like an operating system. And so our business model is: help you create your custom models, you run those custom models on NVIDIA AI Enterprise. And it’s off to a great start. NVIDIA AI Enterprise is going to be a very large business for us.
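
The wholesale/retail structure scales linearly with fleet size. A quick illustration, where only the $4,500 per GPU per year figure comes from the quote and the fleet size, subscriber count, and retail price are hypothetical assumptions:

```python
# Illustrative math on the $4,500 per GPU per year figure from the quote.
# Fleet size, subscriber count and retail price are hypothetical assumptions.
license_per_gpu_per_year = 4_500
fleet_gpus = 10_000  # hypothetical customer deployment

nvidia_revenue = license_per_gpu_per_year * fleet_gpus  # NVIDIA's wholesale take
print(f"NVIDIA license revenue: ${nvidia_revenue:,} per year")

# The partner ("retail") layer prices however it likes, e.g. per subscription:
subscribers = 100_000            # hypothetical
retail_price_per_year = 600      # hypothetical
partner_revenue = subscribers * retail_price_per_year
print(f"Partner retail revenue: ${partner_revenue:,} per year")
```

Under these made-up numbers the partner grosses more than it pays NVIDIA, which is the "we're wholesale, they become retail" dynamic management describes.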

PayPal (NASDAQ: PYPL)

PayPal’s management wants to use AI and the data collected from the company’s Rewards program to drive a shopping recommendation engine

For example, our PayPal Cashback Mastercard provides 3% cash back on PayPal purchases as well as cash back on all other purchases. Customers with this card make, on average, 56 more purchases with PayPal in the year after they adopt the product than they did the year before. Over 25 million consumers have used PayPal Rewards in the past 12 months, and we’ve put more than $200 million back in our customers’ pockets with cashback and savings during that time. But even more interesting, through our Rewards product, we have an active database of over 300 million SKUs of inventory from our merchant partners. These data points can help us use AI to power a robust shopping recommendation engine, to provide more relevant rewards and savings back to our customers.
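
A recommendation engine over SKU-level purchase data can be sketched very simply with item co-occurrence ("customers who bought X also bought Y"). The baskets and SKU names below are invented; this is a toy stand-in for whatever PayPal actually builds on its 300-million-SKU database:

```python
from collections import Counter
from itertools import combinations

# Invented purchase baskets; each set is one customer's SKUs, not PayPal data.
baskets = [
    {"running_shoes", "socks", "water_bottle"},
    {"running_shoes", "socks"},
    {"yoga_mat", "water_bottle"},
    {"running_shoes", "water_bottle"},
]

# Count how often each pair of SKUs appears in the same basket.
co_counts: dict[str, Counter] = {}
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts.setdefault(a, Counter())[b] += 1
        co_counts.setdefault(b, Counter())[a] += 1

def recommend(sku: str, k: int = 2) -> list[str]:
    """Return the k SKUs most often bought alongside `sku`."""
    return [s for s, _ in co_counts.get(sku, Counter()).most_common(k)]

print(recommend("running_shoes"))
```

Co-occurrence counting is the simplest baseline; production systems typically layer learned embeddings and personalization on top, but the input (SKU-level purchase history) is exactly what the Rewards database provides.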

PayPal’s management believes that machine learning and generative AI can be applied to the company’s data to improve fraud protection and better connect merchants and consumers

 Our machine learning capabilities combine hundreds of risk and fraud models with dozens of real-time analytics engines and petabytes of payments data to generate insights by learning users’ behaviors, relationships, interests and spending habits. This scale gives us a very unique advantage in the market. Our ability to create meaningful profiles with the help of AI is exceptionally promising. You will see us using our data and the advances in generative AI in responsible ways to further connect our merchants and consumers together in a tight flywheel.

Shopify (NASDAQ: SHOP)

Shopify’s management has integrated Shopify Magic – the company’s suite of free AI features – across its products

At Shopify, we believe AI is for everyone, and its capabilities should be captured and embedded across the entirety of a business. We’ve integrated Shopify Magic, our suite of free AI-enabled features, across our products and workflows.

Shopify Magic can help merchants craft personalised pages and content, and is designed specifically for commerce

Shopify Magic can take the power of Shopify and a merchant’s own data and make it work better for them, whether it’s enabling unique personalized page and content generation like instantly crafting an About Us page in your brand voice and tone or building a custom page to showcase all the sizes available in your latest product collection…

…Now unlike other AI products, the difference with Shopify Magic is it’s designed specifically for commerce. And it’s not necessarily just 1 feature or 1 product. It’s really embedded across Shopify to make these workflows in our products just easier to use. It makes it easier for merchants to run and scale their businesses. And of course, we think it’s going to unlock a ton of possibilities for not just small merchants, but merchants of all sizes. And we’re going to continue to work on that over time. It’s just going to get better and better.

Shopify’s management is using AI internally so that the company can make better decisions and improve its customer support

We ourselves are using AI inside of Shopify to make better decisions, but also for things like our support team using it so that questions like domain reconfiguration, or a new password, or I don’t know what my password is, should not necessarily require high-touch communication. What that does is it means that our support team are able to have much higher-quality conversations and act as business coaches for the merchants on Shopify. 

Shopify’s management believes that Shopify is uniquely positioned to harness the power of AI because commerce and the company represent the intersection of humans and technology, and that is the domain of AI

If you kind of think about commerce and Shopify, we kind of interact at the intersection of humans and technology, and that’s exactly what AI is really, really good at. So we think we’re uniquely positioned to harness the power of AI, and the ultimate result of it will be these capabilities for our merchants to grow their businesses.

Shopify has AI-powered language translations for merchants within its software products

This includes things like launching shipping guidance for merchants, navigating them through streamlined privacy guidance, initiating localization experiments across various marketing channels and bringing localization tools and AI-backed language translations to the Shopify App Store.

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

TSMC’s management sees strong AI-related demand for its chips, but it’s not enough to offset cyclicality in its business 

Moving into fourth quarter 2023. While AI-related demand continues to be strong, it is not enough to offset the overall cyclicality of our business. We expect our business in the fourth quarter to be supported by the continued strong ramp of our 3-nanometer technology, partially offset by customers’ continued inventory adjustment.

TSMC’s management is seeing strong customer interest in its N2 technology node because the surge in AI-related demand leads to demand for energy-efficient computing, and TSMC’s technology platform goes beyond geometry shrink (making transistors smaller), helping with power efficiency

The recent surge in AI-related demand supports our already strong conviction that demand for energy-efficient computing will accelerate in an intelligent and connected world. The value of our technology platform is expanding beyond the scope of geometry shrink alone and increasing toward greater power efficiency. In addition, as process technology complexity increases, the lead time and engagement with customers also start much earlier. As a result, we are observing a strong level of customer interest and engagement at our N2 similar to or higher than N3 at a similar stage from both HPC and smartphone applications.

TSMC’s management is seeing its customers add AI capabilities into smartphones and PCs and expects more of this phenomenon over time

 We do see some activities from customers who add AI capability in end devices such as smartphones and PCs, [ so new growth ] engine and AI and PC, whatever. And we certainly hope that this will help TSMC further strengthen our AI business…

…It has started right now, and we expect that more and more customers will put AI capability into their end devices, into their products.

TSMC’s management is seeing AI-related demand growing stronger and stronger and TSMC has to grow its manufacturing capacity to support this

The AI demand continues to grow stronger and stronger. So from TSMC’s point of view, now we have about — we have a capacity limitation to support them — to support the demand. We are working hard to increase the capacity to meet their demand, that’s for one thing.

TSMC’s management believes that any kind of AI-related chip will require leading edge chip technology and this is where TSMC excels

Whether customers develop a CPU, GPU, AI accelerator or ASIC for all types of AI applications, the commonality is that they all require usage of leading-edge technology with stable yield delivery to support larger die sizes and a strong foundry design ecosystem. All of those are TSMC’s strengths. So we are able to address and capture a major portion of the market in terms of the semiconductor component in AI.

Tencent (NASDAQ: TCEHY)

Tencent’s management is increasing the company’s investments in its AI models and management wants to use AI for the company’s own benefit as well as that of society and its customers

We are increasing investment in our AI models, providing new features to our products and enhancing our targeting capabilities for both content and advertising. We aspire to position our leading AI capability, not only as a growth multiplier for ourselves, but also as a value provider to our enterprise customers and the society at large.

Tencent’s management recently upgraded the size and capabilities of the company’s foundational model – Tencent Hunyuan – which is now available to customers on a limited basis and deployed in some of Tencent’s cloud services

For cloud, we upgrade the size and capabilities of our proprietary foundation model, Tencent Hunyuan. We are making Hunyuan available on a limited basis to the public and to customers and deploying QiDian in Tencent Meeting and Tencent Docs…

…We have upgraded our proprietary foundation model, Tencent Hunyuan. We have made Tencent Hunyuan bot initially available to a smaller but expanding number of users via a mini program. Hunyuan is also now powering meeting summarization in Tencent Meeting and content generation in Tencent Docs. And externally, we’re enabling enterprise customers to utilize our large language model via APIs or model as a Service solutions in our cloud in functions such as coding, data analysis and customer service automation.

Tencent’s management believes that Tencent is one of China’s AI leaders with the development of Hunyuan

In terms of Hunyuan and the overall AI strategy, I would say we have been pretty far along in terms of building up Hunyuan, and we feel that we are one of the leaders within China, and we are also continuously increasing the size of the model and preparing for the next generation of our Hunyuan model, which is going to be a mixture-of-experts architecture, which we believe will further improve the performance of our Hunyuan model. And by building up Hunyuan, we actually have really built up our capability in general AI across the board. Because Hunyuan, the transformer-based model, involves the handling of a large amount of training data, a large computing cluster and a very dedicated fine-tuning process in terms of improving the AI performance.

Tencent’s management is using AI to improve the company’s advertising offerings, in areas such as ad targeting, attribution accuracy, and the generation of advertising visuals – management sees this as evidence that Tencent’s AI investments are already generating tangible results

We have expanded our AI models with more parameters to increase their ad targeting and attribution accuracy, contributing to our ad revenue growth. We’re also starting to provide generative AI tools to advertiser partners, which enable them to dynamically generate ad visuals based on text prompts and to optimize ad sizes for different inventories, which should help advertisers create more appealing advertisements with higher click-through rates, boosting their transactions and our revenue…

…And the general AI capability is actually helping us quite a bit in terms of the targeting technology related to advertising and our content provisioning service. So in short video, by improving our AI capability, we can actually ramp up our video accounts at a faster clip. And in terms of the advertising business, by increasing the targeting capability, we are actually increasing our ad revenue by delivering better results to our customers. So our AI capabilities are generating tangible results at this point in time. 

Tencent’s management wants to build an AI-powered consumer-facing smart agent down the road, but they are wary about the costs of inference

And we also feel that further in the future, when there’s actually a consumer-facing product that is more like a smart agent for people right now, that is further down the road, but it actually carries quite a bit of room for imagination…

…Now in terms of the Hunyuan and in the future, the potential of an AI assistant, I think it’s fair to say it’s still in a very, very early stage of concept design. So definitely not at the stage of product design yet and definitely not at the stage of thinking about monetization yet. But of course, right, if you look at any of these generative AI technology at this point in time, inference cost is a real variable cost, which needs to be considered in the entire equation. And that, to some extent, add to the challenge of the product design, too. So I would say, at this point in time, it’s actually very early stage. There is a promise and imaginary room for opportunity for the future. 

Tencent’s management believes that the company has sufficient amount of chips for the company’s AI-related development work for a couple more generations; the US’s recent semiconductor bans will not affect the development of Tencent’s AI models, but it could affect Tencent’s ability to rent out these chips through Tencent Cloud

Now in terms of the chip situation, right now, we actually have one of the largest inventories of AI chips in China among all the players. And one of the key things that we have done was actually we were the first to put in an order for H800, and that allowed us to have a pretty good inventory of H800 chips. So we have enough chips to continue our development of Hunyuan for at least a couple more generations. And the ban does not really affect the development of Hunyuan and our AI capability in the near future. Going forward, we feel that the ban does actually affect our ability to resell these AI chips through our cloud services. So that’s one area that may be impacted. 

Tencent’s management wants to explore the use of lower-performance chips for AI inference purposes and they are also exploring domestic suppliers of chips

Going forward, we feel that the ban does actually affect our ability to resell these AI chips through our cloud services. So that’s one area that may be impacted. And going forward, we will have to figure out ways to make the usage of our AI chips more efficient. We’ll try to see whether we can offload a lot of the inference workload to lower-performance chips so that we can retain the majority of our high-performance AI chips for training purposes. And we will also try to look for domestic sources for these training chips.

Tencent’s management believes that AI can bring significant improvement to a digital ad’s current average click-through rate of 1%

Today, a typical click-through rate might be around 1%. As you deploy large language models, then you can make more use of the thousands of discrete data points that we have potentially for targeting and bring them to bear and turn them into reality. And you can get pretty substantial uplifts in click-through rate and therefore, in revenue, which is what the big U.S. social networks are now starting to see.
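Under per-click pricing, ad revenue scales one-for-one with click-through rate, which is why a targeting-driven CTR uplift matters so much. A minimal sketch of that relationship follows; the 1% baseline CTR is from the quote, while the impression count, the uplift, and the price per click are hypothetical:

```python
# Illustrative sketch: how a click-through-rate (CTR) uplift flows
# through to ad revenue under cost-per-click pricing. The 1% baseline
# CTR is from the quote; all other numbers are hypothetical.
def ad_revenue(impressions: int, ctr: float, price_per_click: float) -> float:
    """Expected revenue = impressions x CTR x price per click."""
    return impressions * ctr * price_per_click

baseline = ad_revenue(1_000_000, 0.01, 0.50)   # 1% CTR today
improved = ad_revenue(1_000_000, 0.015, 0.50)  # 1.5% CTR after better targeting
print(baseline, improved)  # revenue scales roughly 1:1 with CTR
```

A hypothetical 50% uplift in CTR produces a 50% uplift in revenue with no extra inventory, which is the mechanism the quote attributes to the large U.S. social networks.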

Tesla (NASDAQ: TSLA)

Tesla vehicles have now driven over 0.5 billion miles with FSD (Full Self Driving) Beta and the mileage is growing

Regarding Autopilot and AI, our vehicle has now driven over 0.5 billion miles with FSD Beta, full self-driving beta, and that number is growing rapidly.

Tesla’s management sees significant promise with FSD v.12

We’re also seeing significant promise with FSD version 12. This is the end-to-end AI where it’s a photon count in, controls out or really you can think of it as there’s just a large bit stream coming in and a tiny bit stream going out, compressing reality into a very small set of outputs, which is actually kind of how humans work. The vast majority of human data input is optics, from our eyes. And so we are, like the car, photons in, controls out with neural nets, just neural nets, in the middle. It’s really interesting to think about that.

Tesla recently completed building a 10,000-GPU cluster of Nvidia’s H100 chips and has brought the cluster into operation faster than anyone has before (the H100s will help with the development of Tesla’s full self-driving efforts)

We recently completed a 10,000-GPU cluster of H100s. We think we’re probably bringing it into operation faster than anyone’s ever brought that much compute per unit time into production, since training is the fundamental limiting factor on progress with full self-driving and vehicle autonomy.

Tesla’s management believes that AI is a game changer and wants the company to continue to invest in AI 

We will continue to invest significantly in AI development as this is really the massive game changer, and I mean, success in this regard in the long term, I think has the potential to make Tesla the most valuable company in the world by far.

Tesla’s management believes that the company’s AI team is the best in the world

The Tesla AI team is, I think, one of the world’s best, and I think it is actually by far the world’s best when it comes to real-world AI. But I’ll say that again: Tesla has the best real-world AI team on earth, period, and it’s getting better.

Tesla’s management is very excited about the company’s progress with autonomous driving, and the software already drives them around with no human intervention

I guess, I am very excited about our progress with autonomy. The end-to-end, nothing but net, self-driving software is amazing. I — drives me around Austin with no interventions. So it’s clearly the right move. So it’s really pretty amazing. 

Tesla’s management believes that the company’s work in developing autonomous driving can also be applied to Optimus (the company’s autonomous robots)

And obviously, that same software and approach will enable Optimus to do useful things and enable Optimus to learn how to do things simply by looking. So extremely exciting in the long term.

Tesla’s management believes that Optimus will have a huge positive economic impact on the world and that Tesla is at the forefront of developing autonomous robots; Tesla’s management is aware of the potential dangers to humankind that an autonomous robot such as Optimus can pose, so they are designing the robot carefully

As I’ve mentioned before, given that the economic output is the number of people times productivity, if you no longer have a constraint on people, effectively, you’ve got a humanoid robot that can do as much as you’d like, your economy is twice the infinite or infinite for all intents and purposes. So I don’t think anyone is going to do it better than Tesla, not by a long shot. Boston Dynamics is impressive, but their robot lacks the brain. They’re like the Wizard of Oz or whatever. Yes, lacks the brain. And then you also need to be able to design the humanoid robot in such a way that it can be mass manufactured. And then at some point, the robots will manufacture the robots.

And obviously, we need to make sure that it’s a good place for humans in that future. We do not create some variance of the Terminator outcome. So we’re going to put a lot of effort into localized control of the humanoid robot. So basically, anyone will be able to shut it off locally, and you can’t change that even if you put — like a software update, you can’t change that. It has to be hard-coded.

Tesla’s management believes that Mercedes can easily accept legal liability for any FSD failures because Mercedes’ system is very limited, whereas Tesla’s FSD has far fewer limitations 

[Question] Mercedes is accepting legal liability for when its Level 3 autonomous driving system Drive Pilot is active. Is Tesla planning to accept legal liability for FSD? And if so, when?

[Answer] I mean I think it’s important to remember for everyone that Mercedes’ system is limited to roads in Nevada and some certain cities in California, doesn’t work in the snow or the fog. It must have a [indiscernible] car in plains, only 40 miles per hour. Our system is meant to be holistic and drive in any conditions, so we obviously have a much more capable approach. But with those kind of limitations, it’s really not very useful.

Tesla’s management believes that technological progress building on technological progress is what will eventually lead to full self driving

I would characterize our progress in real world AI as a series of stacked log curves. I think that’s also true in other parts of AI, like [ LOMs ] and whatnot, a series of stacked log curves. Each log curve gets higher than the last one. So if we keep stacking them, we keep stacking logs, eventually, we get to FSD.

The Trade Desk (NASDAQ: TTD)

The Trade Desk’s management believes that AI will change the world, but not everyone working on AI is delivering meaningful impact

AI has immense promise. It will change the world again. But not everyone talking about AI is delivering something real or impactful.

The Trade Desk’s management is not focusing the company’s AI-related investments on LLMs (large language models) – instead, they are investing in deep-learning models to improve bidding, pricing, value, and ad relevance for Trade Desk’s services

Large Language Models (the basis of ChatGPT) aren’t the highest priority places for us to make our investments in AI right now. Deep learning models pointed at bidding, pricing, value, and ad relevance are perfect places for us to concentrate our investments in AI—all four categories have private betas and some of the best engineers in the world pointed at these opportunities.

The Trade Desk’s management believes that there are many areas to infuse AI into the digital advertising dataset that the company holds

Second is the innovation coming from AI and the many, many opportunities we have ahead of us to find places to inject AI into what may be the most rich and underappreciated data asset on the Internet, which we have here at The Trade Desk.

The Trade Desk’s management believes that traders in the digital advertising industry will not lose their jobs to AI, but they might lose their jobs to traders who know how to work with AI

Traders know that their jobs are not going to be taken away by AI. But instead, they have to compete with each other. So their job could be taken away by a trader who knows how to use AI really well, and all of them are looking at ways to use the tools that are fueled by AI that we’ve provided, where AI is essentially doing one of two things. It’s either doing the math for them, if you will, of course, with very advanced learning models or, in other cases, it’s actually their copilot.

Old Navy achieved a 70% reduction in cost to reach each unique household using The Trade Desk’s AI, Koa

A great example of an advertiser pioneering new approaches to TV advertising with a focus on live sports is Old Navy…  But as Old Navy quickly found out, programmatic guaranteed has limitations. Programmatic guaranteed, or PG, does not allow Old Navy to get the full value of programmatic such as frequency management, audience targeting and the ability to layer on their first-party data. So they took the next step in the form of decision biddable buying within the private marketplace and focused on live sports inventory. CTV live sports advertising was appealing because it offered an opportunity to expose their brand against very high premium content that might be more restrictive and expensive in a traditional linear environment. They were able to use Koa, The Trade Desk’s AI, to optimize pacing and frequency management across the highest-performing inventory. As a result, they saw a 70% reduction in the cost to reach each unique household versus their programmatic guaranteed performance. 
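The headline metric in the Old Navy example is cost per unique household reached. A minimal sketch of how a 70% reduction plays out follows; only the 70% figure comes from the passage, and the spend and reach numbers are hypothetical:

```python
# Illustrative sketch of the "cost to reach each unique household" metric.
# Only the 70% reduction is from the passage; the spend and reach
# figures below are hypothetical.
def cost_per_unique_household(spend: float, unique_households: int) -> float:
    """Campaign spend divided by the number of distinct households reached."""
    return spend / unique_households

# Hypothetical programmatic guaranteed (PG) baseline:
pg_cost = cost_per_unique_household(100_000, 200_000)
# A 70% reduction means reaching each household at 30% of the PG cost:
biddable_cost = pg_cost * (1 - 0.70)
print(pg_cost, biddable_cost)  # roughly 0.50 -> 0.15 per household
```

Equivalently, for the same spend, AI-optimized frequency management lets the campaign reach a little over three times as many unique households, which is the efficiency gain the passage describes.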

Wix (NASDAQ: WIX)

Users of Wix’s Wix Studio product are enjoying its AI features

 Users particularly [indiscernible] Studio’s responsive AI technology that simplifies high-touch and time-sensitive tasks such as ensuring consistent design across web pages on different screen sizes. They are also enjoying the AI code assistant inside the new Wix IDE [integrated development environment], which allows them to write cleaner code and detect errors easily.

Wix recently released new AI products: (1) AI Meta Tags Creator, an SEO tool powered by AI, and (2) AI Chat Experience for Business, which allows new users to chat with an AI that walks them through the Wix onboarding process; AI Chat Experience for Business is in its early days, but it has already driven a positive impact on Wix’s conversion and revenue

Earlier this week, we released our latest AI products. The first was AI Meta Tags Creator, a groundbreaking SEO tool powered by AI and our first AI-powered feature within our collection of SEO tools. Both self creators looking to generate SEO-friendly tags for each of their pages and professionals looking to enhance their efficiency and make real-time adjustments will benefit from this product. The second was our Conversational AI Chat Experience for Business. This feature, which is now live, paves the way to accelerate onboarding using AI in order to get businesses online more quickly and efficiently. These new tools continue to demonstrate our leadership in utilizing AI to help users of all types to succeed online… 

…Avishai spoke about the AI chat experience for business, and in its early weeks, we have already seen its positive impact on conversion and revenue.

Wix’s management expects Wix’s AI products to drive higher conversion, monetisation, and retention in the company’s Self Creators business

Compounding Partners growth is complemented by re-accelerating growth in our stable and profitable Self Creators business, which we saw once again this quarter. We expect our market-leading product innovation as well as our powerful AI products and technology to drive higher conversion, monetization and retention as we maintain our leadership position in the website building space.

Wix’s management believes that Wix’s AI products are helping to improve conversion because the new AI tools help to generate content for users, which reduces the inertia to create a website

I believe your second question was in regards to what kind of effect we are seeing from the different AI products that we are launching, mostly in regards to improvement in conversion. And we do actually see an improvement in conversion, which is probably the most important KPI by which we measure our success in deploying new products. The reason for that is that with AI, we are able to ask the user better questions and to understand in a smarter way what it is that the user is trying to achieve. From that, we are able to generate a better starting point for their business on top of Wix. And that is not just the skeleton; we are also able to fill in a lot of the information and content that the user would normally have to fill in manually. The result is that the amount of effort and knowledge that you need to create a website for your business on Wix is dramatically reduced. And from that, we are able to see very good results in terms of improvement in conversion.

The use of AI tools internally has helped to improve Wix’s margins

So we saw this year a tremendous improvement in margins — in gross margin. And it came mostly from 2 places. The first one is a lot of improvements and savings that we have with our infrastructure, most of you know the hosting activity. So we had a lot of savings over there, but also about our core organization, for example, benefiting from all kind of AI tools that enable us to be more efficient.

Wix’s management believes that the company’s AI features help users with website-creation when it would normally take specialists to do so

And then because of the power of the AI tools, you can create very strong, very professional websites, because the AI will continue and finish for you the things that would normally require specialists in different variations of web design.

Zoom Video Communications (NASDAQ: ZM)

Zoom AI Companion, which helps create call summaries, is included in Zoom’s paid plans at no additional cost to customers, and more than 220,000 accounts have enabled it, with 2.8 million meeting summaries created to-date

We also showcased newly-released innovations like Zoom AI Companion, as well as Zoom AI Expert Assist and a Quality Management for the Contact Center. Zoom AI Companion is especially noteworthy for being included at no additional cost to our paid plans, and has fared tremendously well with over 220,000 accounts enabling it and 2.8 million meeting summaries created as of today.

Zoom’s management believes that Zoom AI Companion’s meeting-summary feature is really accurate and really fast; management attributes the good performance to the company’s use of multiple AI models within Zoom AI Companion

I think we are very, very proud of our team’s progress since it launched the Zoom AI Companion. As I mentioned earlier, a lot of accounts enabled that. Remember, this is at no additional cost to our paid customers. A lot of features. One feature of that is the meeting summary, for example. Amazingly, it’s very accurate and it really saves the meeting host a lot of time. And also, our federated AI approach really contributed to that success, because we do not count on a single AI model, and in terms of latency, accuracy, and also the response speed and so on and so forth, I think it really helped our AI Companion.

Free users of Zoom are unable to access Zoom AI Companion

For sure, for free users, they do not — they cannot enjoy this AI Companion, for sure, it’s a [ data health ] for those who free to approve for online upgrade. So anyway, so we keep innovating on AI Companion. We have high confidence. That’s a true differentiation compared to any other AI features, functionalities offered by some of our competitors.

Zoom’s management thinks that Zoom’s AI features for customers will be a key differentiator and a retention tool

But I think what Eric was just mentioning about AI is probably really going to be a key differentiator and a retention — retention tool in the future, because as a reminder, all of the AI Companion features come included for our free — sorry, for our paid users. So we’re seeing it not only help with conversion, but we really believe that for the long term, it will help with retention as well.

Zoom’s management believes that Zoom’s AI features will help to reaccelerate Zoom’s net dollar expansion rate for enterprise customers

[Question] You’re showing stabilization here on some of the major metrics, the Enterprise expansion metric took a step down to 105%. And so just wondering what it takes for that metric to similarly show stabilization as given like in Q1 renewal cohort and kind of walking through that. Anything on the product side for us to consider or just any other commentary there is helpful.

[Answer] Well, as a reminder, it’s a trailing 12-month metric. So as we’ve obviously seen our growth rates come down this year, that’s following behind it. But absolutely, we believe that AI Companion in general, as well as the success that we are seeing in Zoom Phone, Zoom Contact Center, and Zoom Virtual Agent, will be key contributors to seeing that metric start to reaccelerate again as we see our growth rate starting to reaccelerate as well.

Zoom’s management thinks that Zoom’s gross margin could decline – but only slightly – due to the AI features in Zoom’s products being given away for free at the moment

[Question] As I look at gross margins, how sustainable is it keeping at these levels? I know AI Companion is being given away from as part of the package, I guess, prepaid users. But if you think about the cost to run these models, the margin profile of Contact Center and Phone. How durable is it to kind of sustain these levels?

[Answer] But we do expect there’s going to be some impact on gross margins. I mean we — I don’t think it’s going to be significant because the team will continue to operate in the very efficient manner that they do and run our co-los [co-locateds] that way, but we do expect there’s going to be some impact to our gross margin as we move forward.

Zoom’s management wants to leverage AI Companion across the entire Zoom platform

So again, there are a lot of other features as well. And like for me, I also use the client, the [indiscernible] client, connect and other services, right? You can have it compose e-mail as well, right? It’s a lot of features, right? And down the road, the Whiteboard with AI Companion as well. Across almost every service on the entire platform, we’re going to leverage the AI Companion.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Alphabet, Amazon, Apple, Datadog, Etsy, Fiverr, Mastercard, MercadoLibre, Meta Platforms, Microsoft, PayPal, Shopify, TSMC, Tencent, Tesla, The Trade Desk, Wix, and Zoom. Holdings are subject to change at any time.

More Thoughts on Artificial Intelligence

How is artificial intelligence reshaping the world?

I published Thoughts on Artificial Intelligence on 19 July 2023. Since then, developments in AI have continued at a breathtaking speed. Here, I want to share new thoughts I have on AI, as well as provide updates on some of the initial discussions.

Let’s start with the new thoughts, in no particular order (note that the caution from Thoughts on Artificial Intelligence – that my thinking on AI is fragile – still applies):

  • AI could be a long-term tailwind for the development of biotechnology drugs. AlphaFold is an AI model from Alphabet’s subsidiary, Google DeepMind, that is capable of predicting the structure of nearly every protein discovered by scientists thus far – this amounts to more than 200 million structures. And Alphabet is providing this data for free. Proteins are molecules that direct all cellular function in a living organism, including, of course, humans. A protein’s structure matters because it is what allows the protein to perform its job within an organism. In fact, diseases in humans can be caused by misfolded proteins. Understanding the structure of a protein thus means knowing how it could affect the human body. Biotechnology drugs can be composed of proteins, and they tend to manipulate proteins, or the production of proteins, within the human body. According to an Economist article published in September this year, AlphaFold has been used by over 1.2 million researchers to date. Elsewhere, researchers from biotechnology giant Amgen noted in a recent paper that with the help of AI, the company has reduced the time it needs to develop a candidate drug up to the clinical-trial stage by 60% compared to five years ago. But the researchers also shared that AI could do more to help biotechnology companies make the development process for protein-based drugs faster and cheaper. An issue confronting biotechnology companies today is a lack of sufficient in-house data to build reliable models to predict the effects of protein-based drugs. The researchers proposed methods for biotechnology companies to share data to build more powerful predictive AI models in a way that protects their intellectual property. As AI technology improves over time, I’m excited to observe the advances in the protein-drug creation process that are likely to occur alongside.
  • It now looks even more possible to us that generative AI will have a substantial positive impact on the productivity of technology companies. For example, during Oracle’s earnings conference call held in September, management shared that the company is using generative AI to produce the code needed to improve all the features in Cerner’s system (Oracle acquired Cerner, a healthcare technology company, in June 2022), instead of writing the code by hand in the Java programming language as it usually would. Oracle’s management also said that even if AI code generators make mistakes, “once you fix the mistake, you fix it everywhere.” In another instance, MongoDB announced in late September this year that it is introducing generative AI into its MongoDB Relational Migrator service, which helps reduce friction for companies that are migrating from SQL to NoSQL databases. When companies embark on such a migration, software code needs to be written. With generative AI, MongoDB is able to help users automatically generate the necessary code during the migration process.
  • The use of AI requires massive amounts of data to be transferred within a data centre. There are currently two competing data-switching technologies for doing so – Ethernet and InfiniBand – and each has its supporters. Arista Networks builds high-speed Ethernet data switches. During the company’s July 2023 earnings conference call, management shared their view that Ethernet is the right long-term technology for data centres where AI models are run. In the other camp is Nvidia, which in 2020 acquired Mellanox, a company that manufactures InfiniBand data switches. Nvidia’s leaders commented in the company’s latest earnings conference call (held in late August this year) that “Infiniband delivers more than double the performance of traditional Ethernet for AI.” It’s also possible that better ways to move data around a data centre for AI workloads could be developed. In Arista Networks’ aforementioned earnings conference call, management also said that “neither technology… were perfectly designed for AI; Infiniband was more focused on HPC [high-performance computing] and Ethernet was more focused on general purpose networking.” We’re watching to see which technology (existing or new) will eventually have the edge here, as the market opportunity for AI-related data switches is likely to be huge. For perspective, Arista Networks estimates the total data centre Ethernet switch market will exceed US$30 billion in 2027, up from around US$20 billion in 2022.
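To make the SQL-to-NoSQL migration mentioned above more concrete, here is a minimal, hypothetical sketch of the kind of transformation code such a migration requires: rows from two related relational tables are denormalised into a single nested document, the shape a document database like MongoDB stores natively. All table names, fields, and data below are invented for illustration; this is not MongoDB Relational Migrator’s actual output.

```python
# Hypothetical sketch of SQL-to-NoSQL migration code: two relational
# tables (customers, orders) become one embedded document per customer.
# All names and data are invented for illustration.

customers = [
    {"customer_id": 1, "name": "Alice"},
]
orders = [
    {"order_id": 101, "customer_id": 1, "total": 25.0},
    {"order_id": 102, "customer_id": 1, "total": 40.0},
]

def migrate(customers, orders):
    """Embed each customer's orders inside the customer document."""
    docs = []
    for c in customers:
        doc = {
            "_id": c["customer_id"],
            "name": c["name"],
            # The JOIN that SQL performs at query time is materialised
            # into the document at migration time.
            "orders": [
                {"order_id": o["order_id"], "total": o["total"]}
                for o in orders
                if o["customer_id"] == c["customer_id"]
            ],
        }
        docs.append(doc)
    return docs

docs = migrate(customers, orders)
print(docs[0])
```

Writing this sort of glue code by hand for every table and relationship is the friction that generative AI can help remove.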

Coming to the updates, in Thoughts on Artificial Intelligence, I discussed how AI software, especially generative AI, requires vector databases, but that NoSQL databases will remain relevant. During MongoDB’s latest earnings conference call, held in August this year, management shared their view that the ability to perform vector searches (which is what vector databases do) will ultimately be just a feature built into all databases. This is because standalone vector databases are point-products that still need to be used with other types of databases in order for developers to build applications. I am on the same side as MongoDB’s management because of two things they shared during the company’s aforementioned earnings conference call. Firstly, they see developers preferring to work with multi-functional databases rather than bolting a separate vector solution onto other databases. Secondly, Atlas Vector Search – MongoDB’s vector search feature within its database service – is already being used by customers in production even though it is currently just a preview product; to us, this signifies high customer demand for MongoDB’s database services within the AI community. 

I also touched upon the phenomenon of emergence in AI in Thoughts on Artificial Intelligence. I am even more confident now that emergence is present in AI systems. Sam Altman, the CEO of OpenAI, the company behind ChatGPT, was recently interviewed by Salesforce co-founder and CEO Marc Benioff. During their conversation, Altman said (emphases are mine):

“I think the current GPT paradigm, we know how to keep improving and we can make some predictions about – we can predict with confidence it’s gonna get more capable. But exactly how is a little bit hard. Like when, you know, why a new capability emerges at this scale and not that one. We don’t yet understand that as scientifically as we do about saying it’s gonna perform like this on this benchmark.”

In other words, even OpenAI cannot predict what new capabilities will spring forth from the AI models it has developed as their number of parameters and the amount of data they are trained on increase. The unpredictable formation of sophisticated outcomes is an important feature of emergence. It is also why I continue to approach the future of AI with incredible excitement, as well as some fear. As AI models train on an ever-increasing corpus of data, they are highly likely to develop new abilities. But it’s unknown whether these abilities will be a boon or a bane for society. We’ll see!


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have a vested interest in Alphabet and MongoDB. Holdings are subject to change at any time.