How Innovation Happens

Innovation can appear from the most unexpected places, take unpredictable paths, or occur when supporting technologies improve over time.

A myriad of important political, social, economic, and healthcare issues plague our globe today. But Jeremy and I remain optimistic on the stock market over the long term.

This is because we still see so much potential in humanity. There are nearly 8.1 billion individuals in the world right now, and the vast majority of them will wake up every morning wanting to improve the world and their own lot in life. This – the desire for progress – is ultimately what fuels the global economy and financial markets. Miscreants and Mother Nature will occasionally wreak havoc, but we have faith that humanity can clean it up. To us, investing in stocks is ultimately the same as having faith in the long-term ingenuity of humanity. We will remain long-term optimistic on stocks so long as we continue to have this faith.

There may be times in the future when it seems that mankind’s collective ability to innovate is faltering (things are booming now with the AI rush). But here are three stories I learnt recently that help me – and, I hope, you too – keep the faith.

The first story is from Morgan Housel’s latest book Same As Ever. In it, he wrote: 

“Author Safi Bahcall notes that Polaroid film was discovered when sick dogs that were fed quinine to treat parasites showed an unusual type of crystal in their urine. Those crystals turned out to be the best polarizers ever discovered. Who predicts that? Who sees that coming? Nobody. Absolutely nobody.”

What the quinine-and-polarizers story shows is that innovative ideas can take root in completely unexpected places. This brings me to the second story, which is also from Same As Ever. This time, it is Housel’s recounting of how the invention of the plane moved along an unpredictable path that led to the invention of nuclear power plants (nuclear power is a zero-emission, clean energy source, so it could play a really important role in society’s sustainable-energy efforts), and how a 1960s invention linking computers to manage Cold War secrets unpredictably led to the photo-sharing social app Instagram:

“When the airplane came into practical use in the early 1900s, one of the first tasks was trying to foresee what benefits would come from it. A few obvious ones were mail delivery and sky racing.

No one predicted nuclear power plants. But they wouldn’t have been possible without the plane. Without the plane we wouldn’t have had the aerial bomb. Without the aerial bomb we wouldn’t have had the nuclear bomb. And without the nuclear bomb we wouldn’t have discovered the peaceful use of nuclear power. Same thing today. Google Maps, TurboTax, and Instagram wouldn’t be possible without ARPANET, a 1960s Department of Defense project linking computers to manage Cold War secrets, which became the foundation for the internet. That’s how you go from the threat of nuclear war to filing your taxes from your couch—a link that was unthinkable fifty years ago, but there it is.”

This idea of one innovation leading to another brings me to my third story. There was a breakthrough in the healthcare industry in November 2023 when the UK’s health regulator approved a drug named Casgevy – developed by CRISPR Therapeutics and Vertex Pharmaceuticals – for the treatment of the blood disorders sickle cell disease and beta thalassaemia. Casgevy’s green light is groundbreaking because it is the first approved drug in the world that is based on the CRISPR (clustered regularly interspaced short palindromic repeats) gene-editing technique. A few weeks after the UK’s decision, Casgevy became the first gene-editing treatment available in the USA for sickle cell disease (the use of Casgevy for beta thalassaemia in the USA is currently still being studied). Casgevy is a huge upgrade for sickle cell patients over the current way the condition is managed. Here’s Sarah Zhang, writing at The Atlantic in November 2023:

“When Victoria Gray was still a baby, she started howling so inconsolably during a bath that she was rushed to the emergency room. The diagnosis was sickle-cell disease, a genetic condition that causes bouts of excruciating pain—“worse than a broken leg, worse than childbirth,” one doctor told me. Like lightning crackling in her body is how Gray, now 38, has described the pain. For most of her life, she lived in fear that it could strike at any moment, forcing her to drop everything to rush, once again, to the hospital.

After a particularly long and debilitating hospitalization in college, Gray was so weak that she had to relearn how to stand, how to use a spoon. She dropped out of school. She gave up on her dream of becoming a nurse.

Four years ago, she joined a groundbreaking clinical trial that would change her life. She became the first sickle-cell patient to be treated with the gene-editing technology CRISPR—and one of the first humans to be treated with CRISPR, period. CRISPR at that point had been hugely hyped, but had largely been used only to tinker with cells in a lab. When Gray got her experimental infusion, scientists did not know whether it would cure her disease or go terribly awry inside her. The therapy worked—better than anyone dared to hope. With her gene-edited cells, Gray now lives virtually symptom-free. Twenty-nine of 30 eligible patients in the trial went from multiple pain crises every year to zero in 12 months following treatment.

The results are so astounding that this therapy, from Vertex Pharmaceuticals and CRISPR Therapeutics, became the first CRISPR medicine ever approved, with U.K. regulators giving the green light earlier this month; the FDA appears prepared to follow suit in the next two weeks.” 

The manufacturing technologies behind Casgevy include electroporation, in which an electric field is used to increase the permeability of a cell’s membrane. This enables molecules, such as genetic material and proteins, to be introduced into a cell for the purposes of gene editing. According to an expert call on electroporation that I reviewed, the technology has been around for over four decades, but only started gaining steam in recent years with the decline in genetic sequencing costs; without affordable genetic sequencing, it was expensive to know if a gene-editing process done via electroporation was successful. The relentless work of Illumina has played a huge role in lowering genetic sequencing costs over time.

These developments show how one innovation (cheaper genetic sequencing) supported another in a related field (the viability of electroporation), which then enabled yet another (the creation of gene-editing therapies).

The three stories I just shared highlight the different ways that innovation can happen. It can appear from the most unexpected places (quinine and polarizers); it can take unpredictable paths (from planes to nuclear power plants); and it can occur when supporting technologies improve over time (the development of Casgevy). What they signify is that we shouldn’t lose hope in mankind’s creative prowess when it appears that nothing new of significance has been built for a while. Sometimes, what’s needed is just time.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

Is Bitcoin a Speculative or Productive Asset?

There are productive income-generating assets, and then there are speculative assets.

The Bitcoin hype train is back! Bitcoin halving, Bitcoin ETF approval and the prospect of lower interest rates have put Bitcoin back at the center of attention.

But before jumping on the bandwagon, it’s worth asking – is Bitcoin a productive or a speculative asset?

Productive assets are able to generate income for the owner such that we don’t mind holding the asset forever. Speculative assets can’t.

To profit from speculative assets, investors need to find a buyer who will purchase the asset at a higher price, which is known as the “greater fool theory”. 

The greater fool theory suggests that we can make money as long as someone else comes along and buys the asset for a higher price despite the asset producing no income to the owner.

This may be profitable for a while, but relying on this method of making money is pure speculation and the party will end when the world runs out of “greater fools”.

With this in mind, let’s see which assets are productive and which are merely speculative.

Bitcoin

Bitcoin does not produce income for its owner, and hence the owner can only make a profit by selling it to someone else at a higher price.

By definition, this is relying on the greater fool theory and is speculation. 

I judge an asset by whether I would be willing to hold on to it forever. In the case of Bitcoin, holding on to it does you no good; you can only profit if you sell it.

Bitcoin is a clear case of a speculative asset.

Art

I’ve heard people comment that Bitcoin holds value because of its scarcity and hence is akin to rare art, which can also appreciate in price.

But the fact is that art is a speculative asset too. Art yields no income for its owner, and the owner relies on selling the art piece at a higher price to make money.

Similar to Bitcoin, art does not generate income, so holding a piece of art forever does not generate any returns. Most artworks are speculative assets.

However, occasionally, rare art may bring some form of cash flow to the owner if the art piece can be rented to a display centre or museum. If that’s the case, then rare art pieces can be considered an investment that generates income.

At least with art, the owner holds a beautiful object that some people appreciate and may pay to see, or may buy as a decorative ornament.

Real estate

Real estate generates income for the owner in the form of rental income. Rental provides real estate owners with income that eventually offsets the amount paid for the asset.

Real estate investors don’t need to sell the property to realise an investment gain. Rent out the asset long enough and they’ve made enough rental income to offset the property price.
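To make this concrete, here is a toy payback calculation – a minimal sketch in Python, with every number purely hypothetical – showing how long net rental income takes to recoup a purchase price:

```python
# Toy payback calculation with hypothetical numbers: how many years of net
# rental income it takes to recoup a property's purchase price.
property_price = 500_000   # hypothetical purchase price
annual_rent = 30_000       # hypothetical gross rent per year
annual_expenses = 6_000    # hypothetical taxes, maintenance, and fees

net_rental_income = annual_rent - annual_expenses
payback_years = property_price / net_rental_income

print(f"Net rental yield: {net_rental_income / property_price:.1%}")  # 4.8%
print(f"Years of rent to recoup the price: {payback_years:.0f}")      # 21
# After roughly 21 years, rent alone has covered the purchase price --
# everything after that (plus the property itself) is the owner's gain.
```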

Real estate is clearly a productive asset.

Stocks

Owning a company’s stock means having part ownership of the business. It entitles you to a share of the profits through dividends.

As such, stock investors do not need to rely on price performance but can earn a good return simply by collecting dividends paid from profits of the company.

However, we cannot paint all stocks with the same brush. 

There are occasionally stocks that trade at such high valuations that people who buy in at that price will never make back their money from dividends. The only way to profit is by selling it to a “greater fool” at a higher price. 

Stocks that fall into this category hence move into the “speculative asset” category.

Bonds

Bonds are a “loan” that you make to a company or government body. In exchange, the “borrower” will pay you interest plus return the full loan amount at the end of the “loan period”.

Bonds provide the investor with a regular income stream, and the investor can also get the principal back at the maturity date, assuming no default.

Given the predictable income stream, bonds are a productive asset that produces cash flows for the investor.
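As a simple illustration, here is the cash-flow schedule of a hypothetical 5-year bond with a US$1,000 face value and a 3% annual coupon (all numbers made up), assuming no default:

```python
# Cash-flow schedule of a hypothetical 5-year bond: a fixed coupon each year,
# plus the principal returned at maturity (assuming no default).
face_value = 1_000   # hypothetical principal
coupon_rate = 0.03   # hypothetical 3% annual coupon
years = 5

for year in range(1, years + 1):
    coupon = face_value * coupon_rate
    line = f"Year {year}: coupon ${coupon:.0f}"
    if year == years:
        line += f" + principal ${face_value}"  # principal back at maturity
    print(line)
# The investor knows every cash flow in advance -- which is exactly what
# makes a bond a productive asset.
```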

Stock derivatives

Stock derivatives are financial assets that derive their value from stock prices. These can be options, futures, warrants, and so on.

Derivatives such as options can give the investor the right to purchase a stock at a particular price before a given date.

However, as stock derivatives have a predetermined expiry date, they are highly dependent on relatively short-term stock prices and hence are speculative assets.

The difference between stocks and stock derivatives is that a stock pays you dividends whereas a derivative does not. On top of that, a derivative has an expiry date, which means owners of derivatives rely on short-term price movements of the stock to make a profit.

The Bottom line

Don’t get me wrong. I’m not saying investing in Bitcoin, art or derivatives cannot be profitable. In fact, investing in speculative assets has made some people very wealthy. That’s because speculative assets can keep appreciating due to the sheer number of people who believe in them.

For instance, the narrative around Bitcoin and the amount of money flowing into cryptocurrencies at the moment have caused Bitcoin’s price to rise substantially in the last decade or so, minting billionaires in the process.

But while it can be profitable, speculation is a difficult game to play and depends on the narrative surrounding the asset. In addition, since the asset is not backed by cash flows, the price can come crashing down and owners are left holding a “non-productive” asset that produces no cash flows.

Personally, this is a game I’d rather not play. I prefer to invest in productive assets that can produce cash flows for the owner so that I don’t have to rely on narratives or a “greater fool” to profit.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not currently have a vested interest in any stocks mentioned. Holdings are subject to change at any time.

Don’t Judge Your Investments By Their Stock Prices

If you find yourself celebrating (or crying) just because of short-term price movements, read this…

Back in December 2020, I wrote an article on Moderna and BioNTech. The two companies were the front runners in the COVID vaccine race and their vaccines were on the brink of FDA approval.

In the article, I concluded that their stock prices had already priced in potential profits from their COVID vaccines. When the article was published, Moderna’s stock price was around US$152 and BioNTech’s was at US$120.

Subsequently, both Moderna and BioNTech’s stock prices continued to rise, reaching a peak of around US$449 and US$389, respectively, by mid-2021. At this point, my conclusion in the article seemed wildly inaccurate. But fast forward to today and Moderna and BioNTech’s stock prices have fallen to just US$107 and US$104, respectively. Both companies’ shares now trade around their respective prices back when I wrote my December 2020 article.

Stock prices fluctuate too much

The point of this article you’re reading now is not to say that I was “right”. On the contrary, the fact that the stock prices of both companies are back around where they were does not make my December 2020 article right.

As Moderna’s and BioNTech’s stock prices have shown, stock prices fluctuate wildly and often do not accurately reflect companies’ intrinsic values. As a long-term stock investor, I don’t want to fool myself into thinking that I was right simply because a stock’s price went up or down. What really matters to a long-term investor is whether a company can return dividends over the lifetime of its business and whether that return is more than what the investor paid for the stock.

Judging an investor’s long-term performance therefore requires patience. It takes decades – not months or years – to judge investment performance. We can only truly judge the investment performance of a stock once the company’s entire lifecycle is complete, which may even stretch for hundreds of years.

Even if you sold for a profit

I’ll go a step further and say that even if we have sold a stock for a profit, it does not mean we were right. Yes, we may have made a profit, but it could be due to the buyer on the other end of the deal overpaying for the stock – we were just lucky that they mispriced the stock. 

You don’t have to look much further than Moderna and BioNTech’s stock prices in 2021. An investor could have bought in December 2020 and sold in mid-2021 for a huge gain. This does not mean that the investor had bought at a good price. It could simply mean that the mid-2021 price was overvalued.

Ultimately, I don’t judge a stock’s investment performance based on the price at the point of sale. What matters is the profit/cash flow that the company generates and dividends paid to shareholders. 

To me, the share price is too volatile and is just short term noise that fluctuates daily.

This reminds me of a quote from the movie The Wolf of Wall Street. Matthew McConaughey’s character said something funny yet somewhat true about stock prices: “It’s a.. Fugazi, Fogazi. It’s a wazi, it’s a woozy. It’s fairy dust. It doesn’t exist, it’s never landed, it is not matter, It’s not on the elemental chart. It’s not f*ing real”.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any companies mentioned. Holdings are subject to change at any time.

Companies Need to Stop Doing These Stupid Things

Stock-based compensation, EBITDA targets, and buybacks are often handled poorly by companies.

We see companies do stupid things all the time that erode shareholder value. Here are three of them that really irk me.

Targeting stock-based compensation as a percent of revenue

Many companies don’t seem to understand stock-based compensation. 

Twilio is one such example. In an investor presentation last year, Twilio said that it was targeting a reduction in stock-based compensation as a percent of revenue.

Stock-based compensation on the income statement is recorded based on the share price at the time of grant. Using a percent of revenue as a stock-based compensation measure just shows how little management understands it.

Stock-based compensation on the income statement can drop simply because share prices have fallen. So lower stock-based compensation on the income statement does not necessarily correlate with a lower number of shares issued. 

In fact, if share prices drop drastically – as was seen with tech stocks in 2022 – stock-based compensation recorded on the income statement may end up being lower, but the absolute number of shares vested could be even more than before. This can lead to even larger dilution for shareholders.
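Here is a stylised sketch of that dynamic, with every number hypothetical: the reported SBC “dollars” fall, yet the share count granted – and hence the dilution – rises:

```python
# Hypothetical illustration: reported SBC dollars fall, yet dilution rises.
# A company "succeeds" in cutting SBC as a percent of revenue, but because
# the share price fell even harder, it ends up granting MORE shares.
scenarios = [
    # (year, SBC expense in $, grant-date share price in $) -- all made up
    ("Year 1", 100_000_000, 400.0),
    ("Year 2",  80_000_000, 100.0),  # SBC dollars down 20%...
]

for year, sbc_dollars, price in scenarios:
    shares_granted = sbc_dollars / price
    print(f"{year}: SBC ${sbc_dollars / 1e6:.0f}M at ${price:.0f}/share "
          f"-> {shares_granted:,.0f} shares granted")
# Year 1 -> 250,000 shares; Year 2 -> 800,000 shares.
# The accounting expense fell, but shareholders were diluted 3.2x as much --
# which is why the share count, not the dollars, is what matters.
```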

Twilio is not the only company that does not understand stock-based compensation. More recently, DocuSign also suggested that it is targeting stock-based compensation based on a percent of revenue, which shows a lack of understanding of the potential dilutive effects of this form of expense.

Instead of focusing on the accounting “dollars” of stock-based compensation, companies should focus on the actual number of shares that they issue.

Focusing on EBITDA

Too many companies make financial targets based on EBITDA.

EBITDA stands for earnings before interest, taxes, depreciation and amortisation. Although I appreciate the use of EBITDA in certain cases, it is usually not the right metric for companies to focus on. 

In particular, EBITDA ignores depreciation expenses, which often need to be accounted for, especially when a business requires maintenance capital expenditures. Capital expenditure is cash spent this year that is not yet recorded as an expense on the income statement. Instead, it is recorded as an asset that is depreciated over time. Ignoring this depreciation is akin to completely ignoring the cash outlay made in prior years.
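A quick example with hypothetical numbers shows how EBITDA can flatter a capital-intensive business:

```python
# Hypothetical capital-intensive business (all numbers made up): EBITDA looks
# healthy, but once maintenance capex -- echoed on the income statement as
# depreciation -- is counted, far less cash is actually left over.
revenue = 1_000
cash_operating_costs = 700
maintenance_capex = 250   # cash that must be spent just to stand still
depreciation = 250        # accounting charge spreading past capex over time

ebitda = revenue - cash_operating_costs
operating_profit = ebitda - depreciation
fcf_proxy = ebitda - maintenance_capex

print(f"EBITDA: {ebitda}")                      # 300 -- looks great
print(f"Operating profit: {operating_profit}")  # 50
print(f"EBITDA less capex: {fcf_proxy}")        # 50 -- the truer picture
# Maximising EBITDA alone ignores the 250 of cash the business must keep
# burning on maintenance capex every single year.
```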

By focusing on EBITDA, management teams are either being dishonest or truly do not appreciate the pitfalls of maximising EBITDA instead of actual cash flow per share. In other words, they’re either incompetent or dishonest. Either way, it’s bad.

Framing stock buybacks as returning cash to shareholders

Too many companies frame buybacks as a way to return cash to shareholders. However, if we are long-term shareholders who do not plan to sell our shares, we don’t get any cash when a company buys back stock.

Don’t get me wrong.

I think buying back stock when shares are relatively cheap is a great use of capital. However, saying that buybacks return cash to shareholders is not entirely correct. Only a small group of shareholders – the shareholders who are selling – receive that cash.

Instead, companies should call buybacks what they really are: a form of investment. Buybacks reduce a company’s shares outstanding, which results in future profits and dividend payouts being split among fewer shares – hopefully leading to a higher dividend per share for long-term shareholders.
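A toy example with hypothetical numbers shows the mechanics:

```python
# Hypothetical illustration: a buyback pays cash only to SELLING shareholders;
# continuing holders benefit through a smaller share count instead.
profit = 100_000_000            # annual profit, assumed fully paid out
shares_before = 100_000_000
shares_bought_back = 10_000_000
shares_after = shares_before - shares_bought_back

print(f"Dividend per share before: ${profit / shares_before:.2f}")  # $1.00
print(f"Dividend per share after:  ${profit / shares_after:.3f}")   # ~$1.111
# Total profit is unchanged, but each remaining share's claim is ~11% larger.
# Whether that was a GOOD investment depends on the price paid for the 10M
# shares -- which is why buybacks should be judged like any other investment.
```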

Framing buybacks as a form of returning cash to shareholders undermines the truly long-term shareholders, who in reality have not seen any cash returned to them.

If a company mistakenly thinks that buybacks are a form of returning cash to shareholders, it may be misled into buying back stock periodically without consideration of the share price. Doing so can be harmful to shareholders.

On the other hand, if the company correctly recognises that buybacks are a form of investment, then the share price will matter to it, and it will be more careful about buying back shares at a good price.

Bottom line

Companies do stupid things all the time.

Although I can give them the benefit of the doubt for many of the stupid things they do, I draw the line when a company cannot grasp simple accounting concepts or makes silly statements.

It may seem trivial, but making silly statements shows a lack of understanding of key concepts that mould a company’s capital allocation decisions.

Executives are paid good money to make good decisions and I expect a basic level of understanding from the people who make key decisions on shareholders’ behalf.

Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Docusign. Holdings are subject to change at any time.

Ben Graham’s Q&A

Ben Graham appeared in a news clip in the 1950s, answering questions and assuaging people’s worries about the stock market.

I recently came across an old US TV news clip from the 1950s that featured Ben Graham, the mentor of Warren Buffett, and the author of the highly influential investing texts, The Intelligent Investor and Security Analysis. In the clip, Graham was leading a seminar at Columbia University together with Dean Courtney Brown. The two men gave a short speech and answered questions from the crowd. 

The news clip also featured a short interview with Senator William Fulbright, who, at the time, was commissioning a study on the US stock market after stock prices had advanced near the heights of the 1929 peak, just before the Great Depression of the 1930s reared its ugly head. (The study was conducted and published in 1955.)

I was fascinated by the news clip because Fulbright and the people asking questions of Graham and Brown had worries about the stock market that are similar to today’s. For example, Fulbright was concerned that stock prices were too high and might collapse drastically yet again, similar to the great crash that happened during the Great Depression. In another example, the questioner at the 21:09 mark was concerned about inflation that was driven by “deficit spending”, “easy money policy”, “increased union wages”, “increased minimum wage”, and a “rogue [spending] programme of US$101 billion which the government has just announced” – these are worries from the 1950s that would absolutely fit in today. And importantly, the Dow Jones Industrial Average (I’m using the Dow because it is the index referenced in the news clip) is up from around 400 points in 1955 to over 37,000 currently.

I decided to create a transcript of the news clip for my own reference in the future, and thought of sharing it with the possibility that it might be useful for any of you reading this. Enjoy!

Transcript

TV presenter (10:00): There is no shortage of experts on the market. As for us, we’re barely able to tell the difference between a bull and a bear. So we sat in on part of a seminar at The Graduate School of Business at Columbia University. After all, it’s older than the stock exchange, and we thought professors familiar with the language of the street might treat the market with detachment. Dean Courtney Brown and Professor Benjamin Graham were instructing future brokers and customers’ men. Here is See It Now’s short course in the market.

Courtney Brown (10:36): First let me give a caution. I hardly need give it to a group of informed students such as you. No one knows precisely why the market behaves as it behaves, either in retrospect, or in prospect. The best we can do as you well know is express informed judgments. But it is important that those judgments be informed. We do know that there has been a substantial rise. That rise has been going on for a number of years, particularly since the middle of 1953. And we do know that the rate of that rise has been very rapid, uncomfortably like that of the 1928-29 period. It has resulted in a lot of comparisons being made in the press. Moreover the present level of stock prices, as measured by the Dow Jones Averages, is about equal to, indeed a little above the peaks of 1929.

A number of explanations have been advanced regarding the stock market’s rise that suggest it may reflect a return to inflationary conditions. This doesn’t seem to me to be very convincing. First because there is no evidence of inflation in the behaviour of commodity prices, either at the wholesale or at the retail level, and there hasn’t been over the past year and a half – extraordinary stability in the behaviour of both indexes. There is so much surplus capacity around in almost every direction that it’s hard to conceive of a strong inflationary trend reasserting itself at this time.

Still another explanation is that the stock market has gone up because there has been a return of that kind of speculative fever that has from time to time in the past gripped the country – the Florida land boom, the 1929 stock boom. They’ve occurred in history as you know, all the way back from the Tulip speculations in Holland. I suspect there’s a certain element of truth in this one. However, it doesn’t seem to me that it gives us too much concern because there has been no feeding of this fever by the injection of credit. I think it is important for us to observe that the amount of brokers’ loans – loans made to brokers for the financing of securities of their customers that have been bought on margin – is less than US$2 billion at present. In 1929, they were in excess of US$8.5 billion and there is now a larger volume of securities on the stock exchange. Now gentlemen, Professor Graham will pick up the story at that point.

Ben Graham (13:37): One of the comparisons that is interesting is one not with 1929, which is so long ago, but with 1950, which is only a few years ago. It would be very proper to ask why prices are twice as much as they were then when the earnings of companies, both in ‘54 and probably in 1955, are less than they were in 1950. Now that is an extraordinary difference and the explanation cannot be found in any mathematics but it has to be found in investor psychology.

Ben Graham (14:10): You can have an extraordinary difference in the price level merely because not only speculators but investors themselves are looking at the situation through rose-coloured glasses rather than dark-blue glasses. It may well be true that the underlying psychology of the American people has not changed so much and that what the American people have been waiting for for many years has been an excuse for going back to the speculative attitudes which used to characterize them from time to time. Now if that is so, then the present situation can carry a very large degree of danger to people who are now becoming interested in common stocks for the first time. It would seem, if history counts for anything, that the stock market is much more likely than not to advance to a point of real danger.

Unknown questioner (15:03): You said that stock prices now are not too high but that you fear they will go higher. Well then are you recommending the decline?

Courtney Brown (15:09) Well here I’ll defend you on that [laughs].

Ben Graham (15:10): [Laughs] Yeah, go right ahead.

Courtney Brown (15:17): Those who have watched the security market’s behaviour over the years have become more and more impressed with the fact that stocks always go too high on the upside and tend to go too low on the downside. The swings in other words are always more dramatic and more – the amplitude of change is greater than might normally be justified by an analytical appraisal of the values that are represented there. I think what Professor Graham had to say was that his analysis of a series of underlying values would indicate that the stock prices are just about in line with where they might properly be.

However, from experience that would be the least likely thing to happen that stocks would just stabilise right here. Now if it’s the least likely thing to happen, and you have to select a probability between going up further or down further because of the strong momentum that they have had, I think I would be inclined to agree with him [referring to Graham] that the more probable direction would be towards a somewhat higher level.

Unknown questioner (16:24) When stockholders believed the market was too high, they switched from stocks to cash. Now, many people feel that due to capital gains tax they are not free to act. They are, what you might say, locked in. What effect does this have on the stock market in general?

Courtney Brown (16:41): No question about the fact that it does discourage some sales that might otherwise be made, because one selling stocks and trying to replace them would have to replace them at substantially lower prices to come out even after paying the capital gains tax. However, that’s not the only reason people are reluctant to sell stocks and buy bonds. Stocks are still yielding about 4.5% on the basis of current dividend payments whereas bonds of prime quality are closer to 3%. Here again we find a contrast with the situation in 1929, when stocks were yielding about 3.5% and prime bonds closer to 5%.

Unknown questioner (17:24): In addition to raising margin requirements, should the federal government take other measures to check a speculative boom in the stock market, and which method is the better?

Ben Graham (17:34): My own opinion would be that the Federal Reserve should first exhaust the possibilities of raising the margin requirements to 100% and then consider very seriously before they imposed other sanctions, if needed.

Unknown questioner (17:47): What is the significance of the broadening public participation in stock purchasing and ownership? 

Courtney Brown (17:58): There are probably two elements there that are important. One, the broadening participation of the public in stock purchases is one measure of the degree of speculative fever that we were talking about before. However, subject to that being controlled – and I believe that it can be controlled as Professor Graham has indicated. But over and above that, there is a broad social significance to that, it seems to me. What it in essential terms means is that the ownership of American industry is being more widely dispersed among more and more people. This has very favourable repercussions in terms of our political and social life.

Unknown questioner (18:45): This question concerns the so-called Wall Street professional. Are Wall Street professionals usually more accurate in their near-term or long-term market trends – forecasts of stock market trends? If not, why not?

Ben Graham (19:03): Did you say that they are more often wrong than right on their forecasts?

Unknown questioner (19:08): What I mean is are they more accurate in the shorter term than the long-term forecasts?

Ben Graham (19:11): Well, we’ve been following that interesting question for a generation or more and I must say frankly that our studies indicate that you have your choice between tossing coins and taking the consensus of expert opinion. And the results are just about the same in each case. Your question as to why they are not more dependable – it’s a very good and interesting one. My own explanation for that is this: that everybody in Wall Street is so smart, that their brilliance offsets each other, and that whatever they know is already reflected in the level of stock prices pretty much. And consequently what happens in the future represents what they don’t know.

Unknown questioner (19:56): Would you kindly comment on an item appearing in the newspapers to the effect that while 45% of buying today is on margin, the money borrowed is equal to only 1% of the value of listed stock?

Courtney Brown (20:12): The amount of trading on the stock exchange is a very small part of the total value of all the securities that are listed thereon. And when you say that the total amount of borrowing on margins financed by brokerage loans is only 1% of the value, it is a reconcilable figure. You can’t reconcile it unless you have the detailed data with you, but it isn’t incompatible in any way.

Ben Graham (20:34): I might add a point on that, Dean Brown, and that is the slow increase in brokers’ loans as compared with the 45% of trading on margin would indicate that a good deal of the margin trading is between people who are taking in each other’s washing – that is, the margin buyers are buying from sellers who were previously on margin. And that’s why the rate of growth of brokers’ loans is so much smaller now than it had been in the 1920s, when I think a good deal of the selling had come from long-term owners and really smart people who were selling out to the suckers.

Unknown questioner (21:09): I want to raise a point of argument here on this question of inflation. It seems to me that you’re correct in stating that there’s been no inflation in ‘54 but there also appear to be several long-term inflationary points in the economy today. These, I think, are the deficit spending that’s supposed to be continued by the government, the easy money policy which is expected to continue, the question of increased union wages, the talk about an increased minimum wage, and the talk about a guaranteed wage. All these, and on top of this, the rogue program of US$101 billion which the government has just announced. These seem to me to be long-term inflationary things in the US economy and I wish you’d talk about these.

Courtney Brown (21:57): That’s a question that has a good many angles on it. Perhaps we both better try it. Professor Graham, why don’t you take the first crack?

Ben Graham (22:00): I think there are two answers to that in my mind. The first is that, acknowledging that there are inflationary elements in governmental policy as it’s now being carried out, it may be argued that those are just necessary to keep things on an even keel, because without them we might have some inbuilt deflationary factors in the way business operates, through increased productive capacity and so forth.

Courtney Brown (22:27): I’ve been impressed with the possibility of labour costs as an inflationary factor. But a rise in wages does not necessarily mean a rise in labour costs. It depends upon the relationship of the rate of change in wages and the rate of change in output per man-hour, or productivity. Now if wages are related to productivity, as you know they were in the General Motors contract, there is no necessary inflationary consequence to be anticipated. However, apart from that, it’s entirely possible that if wages go ahead faster than changes in productivity there could be a seriously inflationary factor.

Unknown questioner (23:13): On the basis of your recent answer with regard to the psychological impact of the present condition of the market on the small investor, do you discount the entire theory of dollar averaging? 

Ben Graham (23:30): I think there’s no doubt of this: accepting your premise that the man will put the same amount of money in the market year after year for the next 20 years, let’s say, there is a great chance of coming out ahead regardless of when he begins, and particularly regardless of whether he begins now. You have to allow for the human nature factor – that no man can really say definitely just how he’s going to behave over the next 10 to 20 years. And there is a danger that people who start with the idea of being systematic investors over the next 10 to 20 years may change their attitude as the market fluctuates – in the first instance, put more money into the market because they become speculators, and secondly, get disgusted and scared and don’t buy at all later on when prices get low. It’s a psychological danger – the fault is not in the stars or in the system but in ourselves, I think.

TV presenter (24:27): That was a glimpse of a seminar examining the stock market at Columbia University. We move now to Washington, where Democratic Senator William J Fulbright has announced that his Banking and Currency committee will conduct an investigation of the market.

Unknown questioner (24:40): Senator Fulbright, why is your committee going to investigate the stock market?

William Fulbright (24:43): Well Mr Murrow, there are two principal reasons. One is that my committee has jurisdiction over the subject matter through its control and responsibility for the SEC. The second reason is that the unusual increase during the last 12 to 18 months in the level of prices would seem to warrant a study at this time.

Unknown questioner (25:04): Are you worried about another 1929?

William Fulbright (25:06): But of course there’s certainly a possibility of it. This situation is reminiscent of 1929. We know the Great Depression in the early ‘30s was heralded by the tremendous increase, the great rise in the stock market, and then the great drop. That’s unsettling to the whole economy and it frightens people. It causes great harm to people on fixed incomes and so on. And another thing about it is that the greatest criticism of our system and our economy by our enemies – especially the Communists – is the instability of our economy and the wildness of our fluctuations, and we should endeavour to minimise those fluctuations. Now I don’t know all the reasons involved in this. That’s why we’re going to have the study. But the objective is to inform the Congress and inform the people as far as we can about the conditions that now exist, and we would then hope to be able to develop some remedy for it, some way to control these wild fluctuations.

I confess with what limited knowledge I have, it does disturb me because it has gone up for such a long time and to such a great extent – I think far beyond what the conditions in the country itself warrant. I happen to know of my own knowledge that in the agricultural areas in the southwest, we are having a very severe depressed period. There is no boom in the agricultural areas, the rural areas of the West, and the Southwest. So that most of this boom is concentrated in the market and I think it is unhealthy but I’m unwilling to take a dogmatic stand now. That’s why as I say, we’ll have the study. 

Unknown questioner (26:52): Well Senator Fulbright, I think you have referred to this as a friendly investigation. What exactly is a friendly investigation?

William Fulbright (27:00): Well what I meant to convey is that I have no knowledge nor even suspicion of wrongdoing, manipulation, or anything of that kind in this increase. And I approach it in a friendly spirit in the spirit of trying to find out for the information of the country and of our committee and the Congress, what has been taking place. I’m not approaching it with the idea that we’re going to reveal a lot of wrongdoing.

TV presenter (27:27): The stock exchange hasn’t been investigated for 20 years, but it remains the subject of curiosity and concern as to whether what is good for the exchange is good for the country and the people who live here. There have been no official charges that it has been rigged or manipulated but rather the question of whether or not the market is healthy. There is wide disagreement amongst the experts as to why the market behaves as it does. But there is considerable agreement that it behaves the way it does because people behave the way they do. 

Good night and good luck. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

More Of The Latest Thoughts From American Technology Companies On AI (2023 Q3)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2023 Q3 earnings season.

Nearly a month ago, I published The Latest Thoughts From American Technology Companies On AI (2023 Q3). In it, I shared commentary in earnings conference calls for the third quarter of 2023, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. 

A few more technology companies I’m watching hosted earnings conference calls for 2023’s third quarter after the article was published. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series – for the older commentary, see the earlier articles in the series.

With that, here are the latest comments, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe’s management believes that generative AI is a generational opportunity to deliver new products and services

We believe that every massive technology shift offers generational opportunities to deliver new products and solutions to an ever-expanding set of customers. AI and generative AI is one such opportunity, and we have articulated how we intend to invest and differentiate across data, models and interfaces. 

The integration of Adobe’s generative AI Firefly models with the company’s Creative Cloud suite of products has led to more than 4.5 billion generations since the models’ launch in March

The general availability of our generative AI Firefly models and their integrations across Creative Cloud drove tremendous customer excitement with over 4.5 billion generations since launch in March.

Adobe’s management has released three new Firefly models for different functions

The release of 3 new Firefly models, Firefly Image 2 model, Firefly Vector model and Firefly Design model, offering highly differentiated levels of control with effects, photo settings and generative match

Adobe’s Creative Cloud subscription plans now include generative credits; Adobe’s management introduced generative credits to Adobe’s paid plans to drive adoption of the plans and drive usage of the generative AI functions; management does not expect the generative credits (or packs) to have a large impact on Adobe’s financials in the short term beyond driving more customer sign-ups

We also introduced generative credits as part of our Creative Cloud subscription plans…

…Secondly, we priced the generative packs — sorry, we integrated the generative capabilities and credits directly into our paid plans with the express intent of driving adoption of the paid subscription plans and getting broad proliferation of the ability to use those…

… I don’t personally expect generative packs to have a large impact in the short term other than to drive more customers to our paid existing subscription plans.

Photoshop Generative Fill and Generative Expand are now generally available and are seeing record adoption, with them being among the most used features in the Photoshop product

The general availability of Photoshop Generative Fill and Generative Expand, which are seeing record adoption. They’re already among the most used features in the product.

Adobe’s management believes that Adobe Express’s generative AI capabilities are driving adoption of the product

 The family of generative capabilities across Express, including text to image, text effects, text to template and generative fill are driving adoption of Express and making it even faster and more fun for users of all skill levels.

Adobe’s management is seeing high level of excitement among customers for the Firefly integrations across Adobe’s product suite

Customer excitement around Firefly integrations across our applications has been great to see with community engagement, social interactions and creative marketing campaigns driving organic brand search volume, traffic and record demand. 

Adobe’s management expects generative AI features to deliver additional value and attract new customers to Adobe’s Document Cloud suite of products; generative AI capabilities for Document Cloud is now in private beta, with a public beta to come in the next few months and general availability (GA) to arrive later in 2024

Much like the Creative business, we expect generative AI to deliver additional value and attract new customers to Document Cloud. Acrobat’s generative AI capabilities, which will enable new creation, comprehension and collaboration functionality have already been rolled out in a private beta. We expect to release this in a public beta in the coming months…

…What we’re really excited about as we bring the AI assistant to market, which, by the way, as I mentioned, is now in private beta. Expect it to come out in the next few months as a public beta and then GA later in the year.

Adobe’s management is focusing Adobe’s generative AI efforts within its Experience Cloud suite of products in three areas: (1) Building an AI assistant, (2) reimagining Experience Cloud’s existing applications, and (3) creating new generative AI solutions

Generative AI accelerates our pace of innovation across the Experience Cloud portfolio, enabling us to build on our capabilities to deliver personalized digital experiences. Our efforts are focused in 3 areas: one, augmenting our applications with an AI assistant that significantly enhances productivity for current users and provides an intuitive conversational interface to enable more knowledge workers to use our products; two, reimagining existing Experience Cloud applications like we did with Adobe Experience Manager; and three, developing entirely new solutions built for the age of generative AI like Adobe GenStudio.

Adobe’s management recently released Adobe GenStudio, a solution with generative AI capabilities that combines Creative Cloud, Express, and Experience Cloud, to help brands create content; Adobe GenStudio is seeing tremendous customer interest

Release of Adobe GenStudio, an end-to-end solution that brings together best-in-class applications across Creative Cloud, Express and Experience Cloud with Firefly generative AI at the core to help brands meet the rising demand for content. GenStudio provides a comprehensive offering spanning content ideation, creation, production and activation. We are seeing tremendous interest in GenStudio from brands like Henkel, Pepsi and Verizon and agencies like Publicis, Omnicom and Havas as they look to accelerate and optimize their content supply chains.

Adobe now has a pilot program where some customers are able to bring their own assets and content to extend Adobe’s Firefly models in a custom way; Adobe is exposing Firefly through APIs so that customers can build Firefly into their workflows; Adobe is enabling users to integrate Firefly-generated content into a holistic Adobe workflow

So with Firefly and Express, very excited about the momentum that we continue to see. You heard that we crossed 4.5 billion generations now so we continue to see really, really strong adoption and usage of it, partially as a stand-alone business but also integrated into our Photoshop and Illustrator and these existing workflows.

And we’re starting to see a lot of interest not just in the context of using it as part of the existing products but also using it as part of the ecosystem within enterprises. So we’ve been working with a number of customers to not just enable them with Firefly, which is the predominance of the growth that we’re seeing in Q4 for enterprise adoption but also have a number of pilot customers already engaged around custom model extensions so that they can bring their own assets and their own content into what Firefly generates.

Second, we’re also enabling the ability to expose it through APIs so they can build it into their existing workflows. And third, we’re, of course, connecting it and tying it all into Adobe Express, which now also has its own Firefly and additional capabilities like things so that you can not just sort of create content using Firefly but then start to assemble it, start to schedule social posts around it, start to do multi-language translations, that those are all features that are already in there and then create a stakeholder workflow from people working in Photoshop to the marketers that are trying to post externally. So that’s where things get very interesting and exciting in terms of the connection we have with GenStudio and everything that Anil is doing.

Adobe’s management intends to improve the generative capabilities over time, which might be more expensive in terms of the generative credits consumed, and management believes this will help drive Adobe’s growth over time

But what will happen over the course of the year and the next few years is that we will be integrating more and more generative capabilities into the existing product workflows. And that will drive — and we’ll be integrating capabilities like video generation, which will cost more than 1 generation, and that will drive a natural inflation in that market and that will become a driver for growth subsequently. 

Adobe’s management believes that Firefly is a great on-ramp for Adobe Express, and a great catalyst for all of Adobe’s products across the spectrum (the same underlying generative AI technology is also a great catalyst for Adobe’s Document Cloud business)

And that sort of brings them as an on-ramp into Express, which would be the other part. Express is certainly the introductory pricing, the ability to get millions more into the fold. And the ability right now, it used to be that Express and other offerings in that is to all worry about do I have the right templates? Well, AI is going to completely change that. We have our own models. And so Firefly will allow anybody to take whatever creative idea that they have and make that available. So I think Firefly really helps with the Express offering.

On the Creative Cloud, David mentioned this. I mean, if you look at the adoption of that functionality and usage that’s being driven, whether it’s in Photoshop right now, Illustrator, as we add video, both in terms of providing greater value, and we certainly will, therefore, have the uplift in pricing as well as the retentive ability for Firefly, that’s where I think you’re going to see a lot of the really interesting aspects of how Firefly will drive both adoption as well as monetization.

And then if you go at the other end of the spectrum to the enterprise, GenStudio, every single marketer that I know and CFO and CMO are all worried about how much am I spending on data? How do I get agility in my campaigns? And the fact that Firefly is integrated into both Express as well as when we do the custom models for them so they can upload their own models and then have the brand consistency that they want. So Firefly really is the fact that we have our own models, a great catalyst for business all across the spectrum…

… And then you take the same technology that we have in Creative and think about its impact in both Document Cloud when we do that and the ability to have summaries and have conversational interfaces with PDF, thereby making every single PDF, as David again said, both for communication, collaboration and creation far more compelling. I think you’re going to see that same kind of uplift in usage and therefore, monetization on the Acrobat side.

DocuSign (NASDAQ: DOCU)

DocuSign’s management will be introducing generative AI enhancements to its CLM (Contract Lifecycle Management) platform; Veeco was an eSignature customer that has started using CLM, and DocuSign’s AI CLM features will help Veeco with surfacing actionable insights from customer contracts

CLM continues to grow well, particularly with North American enterprise customers. And for the fourth year in a row, our CLM solution was recognized as a leader by Gartner in contract life cycle management, noting our strong market understanding, product strategy and road map vision, including upcoming Generative AI enhancements. This quarter, we expanded a relationship that began more than 5 years ago with Veeco USA, who’s the leader in workplace innovation. Veeco began using DocuSign eSignature and has added CLM as part of this transformation into a digital services company. Our AI solution will help Veeco streamline and enhance search and review of executed customer contracts with actionable insights to better serve its customers.

MongoDB (NASDAQ: MDB)

MongoDB’s management held a customer feedback session recently and they saw four themes that emerged from the conversations, one of which was that customers of all sizes are interested in AI

This quarter, we held our most recent global Customer Advisory Board meeting where customers across various geographies and industries came together to share feedback and insight about the experience using MongoDB. From these discussions as well as our ongoing C-suite dialogue with our customers, a few themes emerge. First, AI is in nearly every conversation with customers of all sizes.

MongoDB’s management is seeing great early feedback from MongoDB’s partnership with AWS CodeWhisperer; MongoDB’s management also thinks that Microsoft GitHub Copilot is capable of generating useful code

We’re seeing great early feedback from our partnership with AWS’ CodeWhisperer, the AI-powered coding companion that is now trained on MongoDB data to generate code suggestions based on MongoDB’s best practices from over 15 years of history. Microsoft GitHub Copilot is also proficient at generating code suggestions that reflect best practices and help developers to build highly performant applications even faster on MongoDB.

MongoDB’s management is seeing software developers being asked to also build AI functionalities into their applications

And with the recent advances in Gen AI, building applications is no longer the sole domain of AI/ML experts. Increasingly, it’s software developers who are being asked to build powerful AI functionality directly into their applications. We are well positioned to help them do just that.

MongoDB’s Atlas Vector Search – the company’s AI vector search feature – recently received the highest NPS (net promoter score) among vector databases from developers; crucially, the NPS survey was done on the preview version of Vector Search and not even on the generally available version, which is better

In a recent state of AI survey reported by Retool, Atlas Vector Search received by far the highest Net Promoter Score from developers compared to all other vector databases available…

……As I said in the prepared remarks, there was a recent analysis done by a consultancy firm called Retool that really spoke to lots of customers, and we came out on top in terms of NPS. And by the way, our product was a preview product. It wasn’t even the GA product.

MongoDB’s Atlas Vector Search allows developers to combine vector searches with the other query capabilities available in MongoDB, resulting in the ability to run very complex queries

Moreover, developers can combine vector search with any other query capabilities available in MongoDB, namely analytics, text search, geospatial and time series. This provides powerful ways of defining additional filters on vector-based queries that other solutions just cannot provide. For example, you can run complex and rich AI queries such as “find pants and shoes in my size that look like the outfit in this image within a particular price range and have free shipping” or “find real estate listings with houses that look like this image that were built in the last 5 years and are in an area within 7 miles west of downtown Chicago with top-rated schools.”
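
To make the idea concrete, here is a minimal sketch of what such a combined query could look like, using MongoDB’s Python driver and the `$vectorSearch` aggregation stage. This is not MongoDB’s own example: the connection string, database, collection, index name, field names, and embedding vector below are all hypothetical stand-ins.

```python
# Hypothetical sketch: vector similarity plus ordinary filters in one pipeline.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<cluster-uri>")  # placeholder connection string
listings = client["realestate"]["listings"]          # hypothetical database/collection

# Stand-in embedding for the reference image; in practice this would come
# from an image-embedding model.
query_embedding = [0.12, -0.45, 0.08, 0.33] * 192  # dummy 768-dimension vector

pipeline = [
    {
        "$vectorSearch": {
            "index": "listing_image_index",  # hypothetical vector search index
            "path": "image_embedding",       # hypothetical field holding embeddings
            "queryVector": query_embedding,
            "numCandidates": 200,
            "limit": 10,
            # Regular query filters applied alongside the similarity search
            # (these fields would need to be indexed as filter fields):
            "filter": {
                "year_built": {"$gte": 2019},
                "school_rating": {"$gte": 9},
            },
        }
    },
    {"$project": {"address": 1, "price": 1, "score": {"$meta": "vectorSearchScore"}}},
]

for doc in listings.aggregate(pipeline):
    print(doc)
```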

MongoDB’s Atlas Vector Search allows customers to scale nodes independently, which gives customers the ability to achieve the right level of performance at the most efficient cost, so management thinks this is a very compelling value proposition for customers

One of the announcements we also made was that you can now do workload isolation. So for search or vector search functionality, you can scale those nodes independently of your overall cluster. What that really does is allow customers to configure their clusters to have the right level of performance at the most efficient cost. We’ve been very sensitive about making sure that, for different use cases, you can scale different nodes up and down based on your application needs. So by definition, that will be a very compelling value proposition for customers…

…[Question] With Vector Search comes quite a bit more data. So how are you making sure that customers don’t receive a surprise bill and end up unhappy?

[Answer] In terms of your question around the amount of data and the data bills: obviously, vectors can be memory-intensive, and the amount of vectors you generate will obviously drive the amount of usage on those nodes. That’s one of the reasons we also introduced dedicated search nodes, so you can asymmetrically scale particular nodes of your application, especially your search nodes, without having to increase the overall size of your cluster. So you’re not, to your point, on the hook for a big bill for underlying usage – for non-usage, right? You only scale the nodes that really need that incremental compute and memory versus nodes that don’t, and that becomes a much more cost-effective way for people to do this. And obviously, that’s another differentiator for MongoDB.

MongoDB’s management believes that customers are aware that their legacy data infrastructure is holding them back from embracing AI (legacy data infrastructure does not allow customers to work with real-time data for AI purposes) but the difficulty of modernising the infrastructure is daunting for them; MongoDB’s management thinks that the modernisation of data infrastructure for AI is still a very early trend but it will be one of the company’s largest long-term opportunities

They are aware that their legacy platforms are holding them back from building modern applications designed for an AI future. However, customers also tell us that they lack the skills and the capacity to modernize. They all want to become modern, but are daunted by the challenge, as they are aware it’s a complex endeavor that involves technology, process and people. Consequently, customers are increasingly looking to MongoDB to help them modernize successfully…

… There is a lot of focus on data because, with AI, data in some ways becomes the new code: you can train your models with your proprietary data, which allows you to drive much more value and build smarter applications. Now the key thing is that it’s operational data, because with applications, this data is constantly being updated. And for many customers, most of those applications are right now running on legacy platforms, so that operational data is trapped in those legacy platforms. And you can’t really do a batch process of ETL-ing all that data into some sort of warehouse and still be able to leverage the real-time use of that data. That’s why customers are now much more interested in potentially modernizing these legacy platforms than they ever have been before…

…I would say it’s still very, very early days, but we definitely believe that this will be one of the largest long-term opportunities for our business.

MongoDB’s management has launched Query Converter, which uses AI to convert a customer’s existing SQL-related workflows to work with MongoDB’s NoSQL database platform, and customers have tried it out successfully

We launched Relational Migrator earlier this year to help customers successfully migrate data from their legacy relational databases to MongoDB. Now we’re looking beyond data migration to the full life cycle of application modernization. At our .local London event, we unveiled Query Converter, which uses generative AI to analyze existing SQL queries and stored procedures and convert them to work with MongoDB’s Query API. Customers have already used the tool successfully to convert decades-old stored procedures, modernizing their back ends with minimal need for manual changes.
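
To illustrate the kind of translation involved, here is a trivial hand-written example (not output from Query Converter itself): a SQL query and an equivalent call against MongoDB’s Query API via the Python driver. The collection and field names are hypothetical.

```python
# SQL:  SELECT * FROM orders WHERE status = 'A' AND amount > 100;
# Equivalent MongoDB Query API call via pymongo.
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["shop"]  # placeholder connection

for order in db["orders"].find({"status": "A", "amount": {"$gt": 100}}):
    print(order)
```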

MongoDB’s management thinks it’s too early to tell how the usage of MongoDB’s AI features by customers will impact MongoDB’s gross margin at maturity

[Question] And then the follow-up is around AI. If I look at the demos that you have around vector search and how search is getting a lot better, that seems very compelling, and it seems really straightforward for clients to use it to improve the customer experience in a customer-facing app, for example. What are the implications for gross margins for you, Michael? Do you have to do a lot more compute to be able to handle it?

[Answer] So I think it’s a little too early to tell. There’s obviously plenty of variability in the workloads depending on the nature of the underlying application. So I think it’s a little early to give a strong direction on that… But I think it’s too early to make a specific call or quantification on the gross margin impact of AI.

MongoDB’s management thinks that Atlas Vector Search will be a big opportunity for MongoDB, but it’s early days and they find it hard to exactly quantify the revenue opportunity

We’ve seen a lot of demand from customers and we feel like this is a big, big opportunity. Again, it’s early days. It’s going to take time to materialize, but this is, again, one of the other big growth opportunities for our business. That being said, in terms of the revenue opportunity, it’s really hard to quantify now because the use cases that customers are starting with are still, I would say, early intent, because people are still playing around with the technology. But we are seeing, as I mentioned, UKG using it to essentially provide an AI-powered assistant for its people, and a European energy company using terabytes of geospatial data and vectors to get better insights from the images it is getting from its oil-drilling work. So it’s still very, very early days, and hard to give you exact numbers.

When it comes to copilot tools for software coding, MongoDB’s management is seeing varying levels of productivity improvement for software developers based on the tools they are using; MongoDB’s management also sees the software written with copilots as being mostly for internal use currently

[Question] As customers begin to trial some of these copilot code tools, what type of feedback have you gotten from them as it relates to the pace at which they’ve been able to reduce the time to market of net new workloads? How much faster or more efficient are customers getting using these tools?

[Answer] We get different answers from a lot of different customers. It really depends on which tool they’re using. Without commenting on who’s better or worse, we definitely see a difference in the quality of the output between the different tools. I think it’s going to take some time for these tools to mature, so you’re seeing a lot of customers do a lot of testing and prototyping. I would also tell you that they’re doing a lot of this on internal-facing applications, because there are still lots of questions about IP rights – what is potentially copyrightable, and hence licensable, if they offer this as shrink-wrapped software or a service to their end customers. So we’re seeing more of this work on internally facing applications. The productivity gains really do vary by tool, and they also vary by the sophistication of the app being built. So it’s hard for me to give you a real number. I know there are people out there quoting 30% or 40% improvements, but it really depends on the customer, the use case and the tool that they’re trying to use.

MongoDB’s CEO, Dev Ittycheria, thinks his views – that (1) vector search would become just another functionality in a more holistic database platform, and (2) the database platform that can integrate vector search functionality well into developers’ workflows will win – have played out

I would say that 6, 9 months ago, there was a lot of interest in vector databases; there were some point solutions that got a lot of name recognition, and a lot of people were wondering whether there was a risk that we could be disrupted by them. And at that point in time, we made it clear that we believe vectors are really just another form of index, and that every database platform would ultimately incorporate vectors into its architecture. The winner really would be the technology that made the vector functionality very integrated and cohesive as part of the developer workflow. I would argue that it’s really played out.

MongoDB’s management saw customers having to work with two databases when performing vector searches for AI purposes; these customers were asking MongoDB to bring vector search capabilities into its database platform because working with one platform helps customers speed up their work and reduce costs

One of the reasons we actually built search is because we got feedback from our customers. In many instances, a lot of our customers were dual-homing data to MongoDB and to some sort of search database. Consequently, they not only had to manage 2 databases and keep that data in sync, but also manage the plumbing that connected those 2 database platforms. And customers told us they didn’t understand why we weren’t offering a solution, because they would much rather have it all in one platform with one API. That ultimately drove our desire to build out our search functionality, which is becoming more and more popular. So the point for customers is that if you can remove friction in terms of how they can use and leverage the platform, and have one set of semantics to address a broad set of use cases, it really simplifies the data architecture. And the more you simplify the data architecture, the more nimble and the more cost-effective you can be, and that’s what’s really resonating with customers.

Okta (NASDAQ: OKTA)

Okta’s management introduced Okta AI during the company’s Oktane event in October; Okta AI is powered by the data that Okta has collected over the years from its 18,800 customers and 7,000+ integrations, and is infused into several of Okta’s products

The headline of the event was the introduction of Okta AI, the identity solution for the next era of computing. Okta AI is AI for Identity. It’s powered by the massive amounts of data the company has accumulated over the years, including anonymized insights crowdsourced from our 18,800 customers and the 7,000+ integrations in the Okta Integration Network, as well as data on usage, policies, threats, and risk signals. Okta AI uses that data to perform powerful, real-time security, developer, and policy actions. Okta AI is also infused into several of our products. It makes our existing products more valuable and new products possible — all while expanding what it means to be integrated and protected.

An example of Okta AI at work is Identity Threat Protection, which enables companies to automatically log users out of apps during a security issue

Identity Threat Protection with Okta AI, a new product that will enable businesses to prevent and respond to threats faster than ever before. It empowers organizations to automate the detection and remediation of Identity threats across the tech ecosystem. It extends adaptive risk evaluation from the point of authentication to any time a user is logged in and helps you quickly prevent and respond to threats. Identity Threat Protection allows for an array of powerful new actions like Universal Logout. For the first time in our industry, it’s possible to automatically log users out of their apps during a security issue. Threat actors might be getting more sophisticated, but we are using the power of AI and our ecosystem to keep our customers safe and a step ahead.

Salesforce (NYSE: CRM)

Salesforce’s management thinks Data Cloud’s introduction was great timing because it coincided with the boom in generative AI and a company can’t make AI useful without data

And Data Cloud, this hyperscale, real-time customer data platform that is performing incredibly well for us – it’s the foundation of every AI transaction, and it’s the foundation of every large deal that we did this quarter. That is what is so exciting. And in just our third quarter, Data Cloud has ingested an astonishing 6.4 trillion records – 6.4 trillion records. That’s a 140% year-over-year increase. It triggered 1.4 trillion activations, a 220% increase year-over-year. This is a monster product. I could not be more excited. And it’s the perfect time – we didn’t really understand that it was going to line up so well with this generative AI revolution. It’s a product we’ve been working on for a couple of years, and the timing of it has been incredible, because listen: if you don’t have your data together in a company, you’re not going to deliver AI. It’s not like companies are going to run their AI off of Reddit or off of some kind of big public data set. They have to have their data set together to make AI work for them, and that is why the Data Cloud is so powerful for them.

Salesforce’s management believes that Salesforce is the No.1 AI CRM and is leading the industry in the current AI innovation cycle; they also believe that the current cycle is unlike anything they have ever seen and it’s a view that’s shared widely

We are the #1 AI CRM. If that isn’t clear already, we’re leading the industry through the unprecedented AI innovation cycle. It’s unlike anything I’ve seen and most of the people that I talk to all over the world feel the same way. 

Salesforce’s management believes that trust is going to be important in the AI era and Salesforce will be protecting customer data with a trust layer so that the data can’t be easily accessed by 3rd-party foundation models

Now as I’ve said before, this AI revolution is going to be a trust revolution. It’s not just about CRM, data or AI – it’s also about trust. And I think the trust layer, and the way that we’ve architected our platform, means that our customers are not getting taken advantage of by these next-generation large language models. These foundation models are so hungry for all of this data; they want our customers’ data so that they can grow. We’re not going to let them have it. We’re going to separate ourselves from those models through a trust layer so customers can be protected. This is going to be so important for the future of how Salesforce architects itself with artificial intelligence.

Salesforce’s management is seeing customers across the world wanting to invest in AI for more productivity; management also travelled the world and noticed that customers are very excited about AI but at the same time, they are confused about AI’s capabilities – this excitement was not in place a year ago because generative AI apps had not surfaced yet

I’ve been on the road pretty much nonstop especially over the last month. I’ve been in — throughout Europe. I’ve been now in Asia. I’ve been throughout the United States. And I just continue to see these same trends, which is customers are investing for the future and they’re investing and inspired by AI to give them more productivity. Look, they realize unemployment is just so low. Where are they going to hire more people? It’s so hard for them to hire, they’re going to have to get more productivity from their employees. They’re going to do that through this great new technology, and we’re going to help them make that happen…

…And on a global basis, and like I said, in some of these customers in the last 30 days, I was in — I can give you my direct experience. I was in San Francisco, Los Angeles, Las Vegas, Stuttgart, Germany, I was in Nice, Monaco. I visited with our customers throughout that area. And also, I went up to Amsterdam, to France. I had a large customer dinner in the U.K. in London. I went to the U.K. Safety Summit. I then came back and went to Japan. I think I see something very consistently, which is customers are extremely excited about AI everywhere we go. It could be government, it could be commercial organizations. It could be technologists. Everyone is excited about AI. At the same time, there is a lot of confusion about what AI can and cannot do…

… And this excitement, this energy, these ideas of innovation in AI were not in place a year ago. Because don’t forget – a year ago, I don’t think any of us had used ChatGPT or Bard or Anthropic or Cohere or Adept or any of the new AI companies’ products. None of us had really had our hands on them, or envisioned what it really meant for us, or that we would have Copilots, and that those Copilots would give us the ability to do all kinds of next-generation capabilities. But a year later, it’s a technology revolution.

Salesforce has been deploying its own generative AI tools at a quick pace and management thinks the results have been excellent

I’ve been impressed with how quickly we deployed our own trusted generative AI tools and applications internally. We’ve launched Sales GPT and Slack Sales Elevate internally, our global support team is live with Service GPT, and we’re seeing incredible results. We’ve streamlined our quoting process with automation, eliminating over 200,000 manual approvals so far this year. And since its introduction in September, our AI-driven chatbot has autonomously resolved thousands of employee-related queries without the need for human involvement.

Salesforce’s management thinks that every customer’s AI transformation is going to begin and end with data 

What I’ll tell you is you’re seeing something that we have been seeing and calling out for the last few quarters, but we probably have not been able to illuminate it to the level that you see now in the numbers, which is that every customer and every customer transformation and every customer AI transformation is going to begin and end with data. And for us to achieve that goal, those customers are going to have to get to another level of excellence with their data. 

Salesforce’s management thinks that there’s still a lot that AI-companies need to do to make AI safe for customers, but it’s getting better over time

We still have a lot of work to do, as everyone in our industry does, on AI and on making it safe for our customers. This is going to be incredibly important. I think a lot of customers would like to just let this AI loose autonomously, but it still hallucinates a huge amount and it is also quite toxic. So we’re not quite ready for that revolution. But every day, it’s getting a little better.

Salesforce’s management thinks that the movie Minority Report contains a good scene on how AI can be used to automate the personalised customer experience – management also thinks that this is something that many of Salesforce’s customers want to achieve for their own customer experience

And when I was going through the streets of Tokyo, it was not quite Minority Report – a movie that was partly written by our futurist, Peter Schwartz – but it’s getting closer to that idea. When I walked into some of these stores, there was definitely a lot more automation based on my customer record, but not quite the level of automation that Tom Cruise experienced when he walked into that Gap store, if you remember that scene, which was so amazing. That scene is very much front of mind for a lot of our customers, because they want to have that capability, and they want us to deliver it for them.

Salesforce’s management explained how Data Cloud can be very useful for companies that are deploying AI: Companies can use their own data, via Data Cloud, to augment generative AI models to produce personalised and commercially-useful output that otherwise could not be done

But they’re going to get frustrated when the Copilots that they are given by other companies don’t have any data. Those Copilots just have data grounded to maybe the application that’s sitting in front of them, without a normalized data framework integrated into the Copilot. Copilots on productivity applications are exciting because you can tap into the kind of broad consumer databases that we’ve all been using. As an example: I’m writing an e-mail, and I’m saying to the Copilot, “hey, can you rewrite this e-mail for me, or make it 50% shorter, or put it into the words of William Shakespeare.” That’s all possible, and sometimes it’s a cool party trick.

It’s a whole different situation when we say, “I want to write an e-mail to this customer about their contract renewal. And I want this e-mail to really reference the huge value that they receive from our product and their log-in rates. I also want to emphasize how the success of all the agreements that we have signed with them has impacted them.” We’re able to provide this rich data to the Copilot, and through the prompt and the prompt engineering, it is able to deliver tremendous value back to the customer. And this data, this customer value, will only be provided by companies who have the data. We are just very fortunate to be a company with a lot of data, and we’re getting a lot more data than we’ve ever had. A lot of that is coming from the Data Cloud, because it’s amplifying the capabilities of all the other data we have.

Salesforce’s management thinks that there will be significant improvements to Salesforce’s AI features in the near future

I think the demonstrations at Dreamforce were outstanding. The demonstrations that we’ll deliver in our February release will be mind-boggling for our customers of what they will be able to get done. And I think that by the time we get to Dreamforce ’25 or ’24 in September ’24, what we’ll see is nothing that we could have possibly imagined just 24 months earlier before these breakthroughs in generative AI have really taken hold through the whole industry.

Salesforce’s management thinks that no single company will control the development of AI because they think that open source AI models are now as strong as proprietary models and will lead the way; management also thinks that unlike the development of mobile operating systems which is controlled by 2 companies, there are thousands of companies that are working on open-source AI and this will lead to rapid innovation

No one company has a hold on this. I think it’s pretty clear at this point that because of the way AI is built through open source, that these models are very much commodity models, and these responses are very much commodity responses. So we’ve always felt that way about AI for more than a decade. We said that its growth has really been amplified by open source development. Because these open source models now are as strong as commercial models are or proprietary models, I think that what we really can see is that, that is going to accelerate this through every customer. There’s not going to be any kind of restrictions because of the proprietariness or the cost structures of these models. We’re going to see this go much faster than any other technology.

The reference point, as I’ve been using as I travel around, is really mobile operating systems. Mobile operating systems are very important, and we all have one on our desk or in our pocket right now. But really, the development of mobile operating systems has been quite constrained because they’re really held mostly by 2 companies and 2 sets of engineering teams. That’s not how this technology is being built. This technology is highly federated across thousands of companies and thousands of engineering teams who are sharing this technology. And because of that, you’re ending up with a rate of innovation unlike anything we’ve seen in the history of our industry and is moving us into areas very quickly that could become uncomfortable. So this is an exciting moment.

Veeva Systems (NYSE: VEEV)

Veeva’s management has not seen a big impact on the clinical side of Veeva’s business from generative AI

In terms of generative AI, honestly, I haven’t seen a big impact in clinical. There was good experimentation – projects around helping to write or evaluate protocols, for example – but not using things like generative AI to do statistical analysis or predict where the patients are. There, the more appropriate tool, which people are using and will continue to use more and more, is data science: really having the right data, running the right algorithms, being systematic about it. So I just haven’t seen that impact from generative AI. You see it more in areas that relate to content creation and the asking of questions – writing safety narratives, things like that.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, DocuSign, MongoDB, Okta, Salesforce, and Veeva Systems. Holdings are subject to change at any time.

Investing Like a Business Owner

We often forget that investing in stocks is investing in businesses. As such, we need to think like business owners to succeed.

Rob Vinall is one of the top performing fund managers in the past decade and a half.

Vinall manages a fund named the Business Owner Fund. Since inception 15 years ago, the Business Owner Fund has returned 589%, or an annualised rate of 13.7%, in euro terms. One thing about Vinall that stands out to me is that as his fund’s name suggests, he strives to invest like a business owner.

Too often, investors look at stocks as just prices that move up and down, and make investment decisions based on these prices. They forget that there are businesses and cash flows behind these stock prices and stock tickers.

Step into the shoes of a business owner

Imagine you are starting a restaurant business. There are two big financial numbers you need to consider before you start. They are: (1) how much do you need to put into the business and (2) how much can you get out of it over time?

For instance, let’s say the initial start-up cost is $1m, but you can take out $200k in dividends every year for the next 20 years. Knowing these projections, you can decide if it is worthwhile to start your restaurant business. In this case, you can calculate that over twenty years, you would have quadrupled your money.
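
As a quick sanity check, here is that arithmetic as a minimal Python sketch, using the hypothetical figures above:

```python
# Hypothetical restaurant example: $1m invested upfront, $200k in dividends
# per year for 20 years.
initial_cost = 1_000_000
annual_dividend = 200_000
years = 20

total_cash_out = annual_dividend * years    # $4,000,000 taken out in total
multiple = total_cash_out / initial_cost    # 4.0x -- money quadrupled

print(f"Total dividends received: ${total_cash_out:,}")
print(f"Multiple on initial investment: {multiple:.1f}x")
```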

Investing in stocks should involve the same thinking: how much can we get out of the stock over the lifespan of the business? That is, how much in dividends per share can we collect over the lifespan of the business, and will that cover the cost of buying the shares?

But what about selling the stock?

A business owner who owns her own restaurant may not have an opportunity to sell the restaurant. As such, the only way to receive any returns is from the profits of the business. This means that the business owner naturally places emphasis on ensuring the profits that the business can generate exceeds how much she puts in.

On the other hand, when we invest in stocks, we can sell the stock. This is both a blessing and a curse in my opinion. It’s good because it provides us with liquidity if we need the cash. But it’s bad because investors then tend to focus on the stock price and not the business fundamentals.

Like a business owner, stock investors should be focused on the cash flow of the business rather than its share price. This means looking at how much future cash flow per share, and ultimately how much in dividends, they can receive over the lifespan of the business.

A company may not be paying dividends yet, but in the long term, its earnings and cash flows allow it to eventually dish out dividends, which should more than offset the amount you paid for your investment.

Final words

Investing in the stock market should be similar to being a business owner. We should focus on how much profit a company can return to us instead of how much we can sell the stock for at a future date.

The quoted price on the stock market can fluctuate wildly and will depend greatly on external factors such as the risk-free rate or how Wall Street views the company. This can distract us from what is truly important and why we really invested in the company.

By focusing on the cash flows of the business, we can more safely predict our returns instead of being beholden to the externalities of the environment that may impact our sale price.

Ultimately, just like a business owner, we should focus on the returns from our dividends instead of wasting energy hoping that the share price goes up. The share price is often outside our control; if it rises, great, but if it doesn’t, it shouldn’t matter, as the overall returns from the business’s cash flow should be enough to give us a positive return.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

Having a Margin of Safety

How do we buy stocks with a margin of safety and how wide of a margin do we need?

Warren Buffett once said that we should invest only when the price provides us with a margin of safety. But what does a margin of safety really mean? Let’s break it down.

Accounting for shortfalls in forecasts

Investing is a game of probability. 

It is impossible to forecast the exact cash flows and dividends that a company will pay in the future. This is where the concept of a margin of safety comes in. Morgan Housel once wrote:

“Margin of safety is simply the distance between your predictions coming true and needing those predictions to come true. You can still try to predict the future, but a margin of safety gives you room for error to be wrong.”

For instance, we may forecast a company to pay us $1 per share in dividends for 10 years and then close down at the end of those 10 years.

Using a dividend discount model and a 10% required rate of return, we can calculate that the value of the shares should be $6.14 each. In other words, if we pay $6.14, it will give us a 10% annual return based on the expected dividends we can receive over time.
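
In symbols, that price is just the present value of the ten expected $1 payouts, discounted at the 10% required rate of return:

$$P = \sum_{t=1}^{10} \frac{\$1}{(1.10)^t} \approx \$6.14$$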

But what if our forecast falls short? Say the company ends up paying a dividend of only $0.80 per share each year. In this case, paying $6.14 for the company’s shares will not get us our desired return of 10% per year.

To account for this potential 20% shortfall in dividends per share, we should have a margin of safety: we can calculate that we should only buy the stock at $4.92 or below, so that we still earn our required return even if our forecast falls short.
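
Here is a minimal Python sketch of both calculations – the base case and the 20% shortfall – using the numbers above:

```python
# Dividend discount model: a level dividend stream, discounted at the
# required rate of return.
def present_value(dividend: float, rate: float, years: int) -> float:
    """Present value of a level dividend stream."""
    return sum(dividend / (1 + rate) ** t for t in range(1, years + 1))

fair_value = present_value(1.00, 0.10, 10)
print(f"Fair value: ${fair_value:.2f}")              # ~$6.14

# With a 20% dividend shortfall ($0.80 per share), the price that still
# delivers a 10% annual return:
safe_price = present_value(0.80, 0.10, 10)
print(f"Margin-of-safety price: ${safe_price:.2f}")  # ~$4.92
```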

Accounting for different discount rates

But a margin of safety does not only mean that we should account for the company’s actual results deviating from our forecasts. There is another crucial factor that comes into play.

If we intend to sell the stock, we need to factor in our sale price, which will depend on the buyer’s required rate of return, or discount rate.

For instance, suppose we want to buy the same company as above, but instead of buying and holding for the full 10 years, we intend to sell the shares after just 5 years.

If we are buying the stock for the full 10 years, we can pay $6.14 per share, knowing that we will get a 10% return simply by collecting the dividend and reinvesting the dividend at a 10% rate.

But if we intend to sell the shares after 5 years, another factor comes into play – the sale price of the shares at the 5-year mark. Obviously, if we can’t get a good price during the sale, our returns will be subpar.

If the person buying the stock from us at the 5-year mark also requires a 10% rate of return, we can sell the stock at “his price” ($3.79) and still receive a 10% annualised return.

However, if the person that we are selling the stock to requires a 12% rate of return, he will only be willing to pay us $3.60 for the shares. In this case, we will receive less than a 10% annual return over our 5-year holding period.

So instead of paying $6.14 per share, we should only pay $5.82 per share to provide us with a margin of safety in case the required rate of return of the buyer goes up to 12% at our point of sale.
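
The buyer’s price at the 5-year mark is just the present value, to him, of the remaining 5 years of dividends. A short sketch, using the same helper as in the earlier snippet:

```python
def present_value(dividend: float, rate: float, years: int) -> float:
    """Present value of a level dividend stream (as defined earlier)."""
    return sum(dividend / (1 + rate) ** t for t in range(1, years + 1))

# Price a buyer would pay at the 5-year mark for the remaining 5 years
# of $1 dividends, at his own required rate of return:
print(f"Buyer requiring 10%: ${present_value(1.00, 0.10, 5):.2f}")  # ~$3.79
print(f"Buyer requiring 12%: ${present_value(1.00, 0.12, 5):.2f}")  # ~$3.60
```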

Margin for upside

Factoring in a margin of safety provides us comfort that we can achieve our desired rate of return. In addition, if things go smoothly, there is the potential to earn even more than our required rate of return.

But while the concept seems straightforward, its application is a bit more challenging. It requires a keen understanding of business and a valuation that provides sufficient margin of safety. 

It also requires some judgement on our part. How much of a margin of safety is enough? For companies with very stable and predictable dividend streams, our margin of safety can be narrower. But for companies with less predictable dividend streams, we may want to factor in a larger margin of safety.

I also prefer to demand a relatively high rate of return so that it is unlikely that the required rate of return by the buyer at the point of sale will negatively impact my return.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

The Latest Thoughts From American Technology Companies On AI (2023 Q3)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2023 Q3 earnings season.

The way I see it, artificial intelligence (or AI) really leapt into the zeitgeist in late-2022 or early-2023 with the public introduction of DALL-E 2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

Meanwhile, the latest earnings season for the US stock market – for the third quarter of 2023 – is coming to its tail-end. I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series. For the older commentary:

With that, here is the latest commentary, in no particular order:

Airbnb (NASDAQ: ABNB)

Airbnb’s management sees generative AI as an opportunity to reimagine the company’s product and transform Airbnb into the ultimate travel agent

First, I think that we are thinking about generative AI as an opportunity to reimagine much of our product category and product catalog. So if you think about how you can sell a lot of different types of products and new offerings, generative AI could be really, really powerful.  It can match you in a way you’ve never seen before. So imagine Airbnb being almost like the ultimate travel agent as an app. We think this can unlock opportunities that we’ve never seen. 

Airbnb’s management believes that digital-first travel companies will benefit from AI faster than physical-first travel companies

So Airbnb and OTAs are probably going to benefit more quickly from AI than, say, a hotel will just because Airbnb and OTAs are more digital. And so the transformation will happen at the digital surface sooner.

Airbnb’s management believes that Airbnb’s customer service can improve significantly by placing an AI agent between a traveller and her foreign host

One of the areas where we’re specifically going to benefit is customer service. Right now, customer service in Airbnb is really, really hard, especially compared to hotels. Imagine you have a Japanese host hosting a German guest, and there’s a problem, and you have these 2 people speaking different languages calling customer service. There’s a myriad of issues: there’s no front desk, we can’t go on-premise, we don’t understand the inventory, and we need to try to adjudicate an issue based on 70 different policies that can be up to 100 pages long. AI can literally start to solve these problems, where agents can supervise a model that can, in seconds, come up with a better resolution and provide front-desk-level support in nearly every community in the world.

Airbnb’s management believes that AI can lead to a fundamentally different search experience for travellers

But probably more importantly, Kevin, is what we can do by reimagining the search experience. Travel search has not really changed much in 25 years – since Expedia and Hotels.com, it’s pretty much the same as it’s been. And Airbnb, we fit that paradigm: there’s a search box, you enter a date and location, you refine your results and you book something. It really hasn’t changed much for a couple of decades. I think now with AI, there can be entirely different booking models. And I think this is like a Cambrian moment for travel – like the Internet or mobile was – where suddenly an app could actually learn more about you. It could ask you questions, and it could offer you a significantly more personalized service. Before the Internet, there were travel agents, and they actually used to learn about you. Then travel got unbundled; it became self-service and it became all about price. But we do think that there’s a way travel could change, and AI could lead the way with that.

Airbnb’s management believes that all travel apps will eventually trend towards being an AI travel agent

And I generally think, for sure, Airbnb will become a little more of a so-called AI travel agent, which is what I think all travel apps will trend towards to some extent.

Alphabet (NASDAQ: GOOG)

Alphabet’s management has learnt a lot from trials of Search Generative Experience (SGE), and the company has added new capabilities (videos and images); Search Generative Experience has positive user feedback and strong adoption

This includes our work with the Search Generative Experience, which is our experiment to bring generative AI capabilities into Search. We have learned a lot from people trying it, and we have added new capabilities like incorporating videos and images into responses and generating imagery. We have also made it easier to understand and debug generated code. Direct user feedback has been positive with strong growth in adoption.

SGE allows Alphabet to serve a wider range of information needs and provide more links; ads will continue to be relevant in SGE and users actually find ads useful in SGE; Alphabet wants to experiment with SGE-native ad formats

With generative AI applied to Search, we can serve a wider range of information needs and answer new types of questions, including those that benefit from multiple perspectives. We are surfacing more links with SGE and linking to a wider range of sources on the results page, creating new opportunities for content to be discovered. Of course, ads will continue to play an important role in this new Search experience. People are finding ads helpful here as they provide useful options to take action and connect with businesses. We’ll experiment with new formats native to SGE that use generative AI to create relevant, high-quality ads customized to every step of the Search journey.

Alphabet’s management thinks SGE could be a subscription service; it’s still very early days in the roll-out of SGE and management wants to get the user experience correct (Alphabet has gone through similar transitions before, so management is confident about this)

And I do think over time, there will be newer paths, just like we have done on YouTube. I think with the AI work, there are subscription models as a possible path as well. And obviously, all of the AI investments we are doing applies across Cloud, too, and I’m pretty optimistic about what’s ahead there as well…

…On the first part about SGE, we are still in very, very early days in terms of how much we have rolled it out, but we have definitely gotten it out to enough people – both geographically and across user segments – to know that the product is working well and improves the experience, though there are areas to improve, which we are fine-tuning. Our true north here is getting to the right user experience we want, and I’m pretty comfortable with the trajectory. We’ve always worked through these transitions, be it from desktop to mobile, or now from mobile to AI experiences. And so it’s nothing new.

Alphabet is making it easier for people to identify AI-generated content through digital watermarks

One area we are focused on is making sure people can more easily identify when they are encountering AI-generated content online. Using new technology powered by Google DeepMind SynthID, images generated by Vertex AI can be watermarked in a way that is invisible to the human eye without reducing the image quality. Underlying all this work is the foundational research done by our teams at Google DeepMind and Google Research. 

Alphabet’s management is committed to changing Alphabet’s cost base to accommodate AI investments; Alphabet has, for a long time, driven its cost curves down spectacularly, and management is confident that it will be the same for the current build-out of AI infrastructure

As we expand access to our new AI services, we continue to make meaningful investments in support of our AI efforts. We remain committed to durably reengineering our cost base in order to help create capacity for these investments in support of long-term sustainable financial value. Across Alphabet, teams are looking at ways to operate as effectively as possible focused on their biggest priorities…

…When I look at the strength of the work we have done across our technical infrastructure as a company, at the various stages when we adopted new generations of technology, we have looked at the cost of it – but the efficiency curves we have driven on top of it have always been phenomenal to see. And I see the current moment as no different. Already through this year, we are driving significant efficiencies in our models, in training costs and serving costs, and in our ability to adapt what’s needed to the right use case.

Alphabet has new tools (including those powered by AI) that make it easier for (1) creators to produce content for Youtube’s various formats, (2) creators to connect with advertisers, and (3) advertisers drive higher ROI on advertising

At Made On YouTube in September, we announced new tools that make it easier to create engaging content. Dream Screen is an experimental feature that allows creators to add AI-generated video or image backgrounds to Shorts. And YouTube Create is a new mobile app with a suite of production tools for editing Shorts, longer videos or both…

…AI will do wonders for creation and storytelling. From Dream Screen and YouTube Create, which Sundar talked about, to features that auto-dub content in multiple languages, flip and trim existing assets, remix and clip videos and more, we’re just getting started. We’re also helping brands break through with speed and scale across the funnel to drive results. Spotlight Moments launched last week. It uses AI to identify trending content around major cultural moments for brand sponsorship opportunities. There’s video reach campaigns, which are expanding to in-feed and Shorts, and will be generally available in November. AI is helping advertisers find as many people as possible in their ideal audience for the lowest possible price. Early tests are delivering 54% more reach at 42% lower cost. And then with video view campaigns, AI is serving skippable ads across in-stream, in-feed and Shorts, helping advertisers earn the maximum number of views at the lowest possible cost. So far, they’re driving 40% more views on average versus in-stream alone. Then for YouTube and other feed-based services, there’s our new demand gen campaign, which launched in April, rolled out worldwide last week, and was designed for the needs of today’s social marketers to engage people as they stream, scroll and connect. It combines video and image ads in one campaign, with access to 3 billion users across YouTube and Google, and the ability to optimize and measure across the funnel using Google AI. Demand gen is already driving success for brands like Samsung and Toyota.

Alphabet’s management believes that Google Cloud offers optimised infrastructure for AI training and inference, and more than 50% of all generative AI start-ups are using Google Cloud; Alphabet’s TPUs (tensor processing units) are winning customers; Google Cloud’s Vertex AI platform offers more than 100 AI models and the number of active generative AI projects built on Vertex AI grew by seven times sequentially

We offer advanced AI-optimized infrastructure to train and serve models at scale, and today, more than half of all funded generative AI start-ups are Google Cloud customers. This includes AI21 Labs, Contextual, Elemental Cognition, Writer and more. We continue to provide the widest choice of accelerator options. Our A3 VMs [virtual machines] powered by NVIDIA’s H100 GPU are generally available, and we are winning customers with Cloud TPU v5e, our most cost-efficient and versatile accelerator to date. On top of our infrastructure, our Vertex AI platform helps customers build, deploy and scale AI-powered applications. We offer more than 100 models, including popular third-party and open-source models, as well as tools to quickly build search and conversation use cases. From Q2 to Q3, the number of active generative AI projects on Vertex AI grew by 7x, including Highmark Health, which is creating more personalized member materials.

Duet AI, Alphabet’s AI assistant, is built on Google’s large foundation models and is used by large companies to boost developer productivity and smaller companies to help with data analytics; more than 1 million testers have used Duet AI in Google Workspace

Duet AI was created using Google’s leading large foundation models and is specially trained to help users to be more productive on Google Cloud. We continue expanding its capabilities and integrating it across a wide range of cloud products and services. With Duet AI, we are helping leading brands like PayPal and Deutsche Bank boost developer productivity, and we are enabling retailers like Aritzia and Gymshark to gain new insights for better and faster business results…

…In Workspace, thousands of companies and more than 1 million trusted testers have used Duet AI. They are writing and refining content in Gmail and Docs, creating original images from text within slides, organizing data and sheets and more.

Alphabet’s new consumer hardware products have an AI chip – Tensor G3 – built in them

Our portfolio of Pixel products are brought to life, thanks to our combination of foundational technologies AI, Android and Google Tensor. Google Tensor G3 is the third generation of our tailor-built chip. It’s designed to power transformative experiences by bringing the latest in Google AI research directly to our newest phones. 

Gemini is the foundation of the next-generation AI models that Google Deepmind will be releasing throughout 2024; Gemini will be multi-modal and will be used internally across all of Alphabet’s products as well as offered externally via Vertex 

On Gemini – obviously, it’s an effort from our combined Google DeepMind team. I’m very excited at the progress there as we’re working through getting the model ready. More importantly, we are laying the foundation of what I think of as the next-generation series of models we’ll be launching throughout 2024. The pace of innovation is extraordinarily impressive to see. We are creating it from the ground up to be multimodal and highly efficient at tool and API integrations, and, more importantly, laying the platform to enable future innovations as well. And we are developing Gemini in a way that makes it available at various sizes and capabilities, and we’ll be using it immediately across all our products internally, as well as bringing it out to both developers and cloud customers through Vertex. So I view it as a journey, and each generation is going to be better than the last. And we are definitely investing, and the early results are very promising.

Alphabet’s AI tools are very well received by advertisers and nearly 80% of advertisers use at least one AI-powered search ads product

Our AI tools are very well received – AI and gen AI are top of mind for everybody, really. There’s a ton of excitement, lots of questions about it, and many understand the value. Nearly 80% of our advertisers already use at least one AI-powered search ads product. And yes, we’re hearing a lot of good feedback on, number one, our ads AI essentials, which are really helping to unlock the power of AI and set up durable ROI growth on the advertiser side – products that form the foundation for data and measurement, things like the Google tag, consent mode and so on; and obviously Search and PMax, which we talked about; and then all the gen AI products. So there’s a whole lot of interest in those products, yes.

Amazon (NASDAQ: AMZN)

Anthropic, a high-profile AI startup, recently chose AWS as its primary cloud provider, and Anthropic will work with Amazon to further develop Amazon’s Trainium (for training AI models) and Inferentia (for AI inference work) chips; Amazon’s management believes the collaboration with Anthropic will help Amazon bring further price performance advantages to Trainium and Inferentia

Recently, we announced that the leading LLM maker Anthropic chose AWS as its primary cloud provider, and will use Trainium and Inferentia to build, train and deploy its future LLMs. As part of this partnership, AWS and Anthropic will collaborate on the future development of Trainium and Inferentia technology. We believe this collaboration will be helpful in continuing to accelerate the price-performance advantages that Trainium and Inferentia deliver for customers.

Perplexity is another AI startup that chose to run their models with Trainium and Inferentia

We are also seeing success with generative AI start-ups like Perplexity AI, who chose to go all in with AWS, including running their future models on Trainium and Inferentia.

Amazon’s management believes that Amazon’s Trainium and Inferentia chips are very attractive to people in the industry because they offer better price-performance characteristics and they can meet demand; Anthropic and Perplexity’s decisions to go with Trainium and Inferentia are statements to that effect

I would also say our chips, Trainium and Inferentia – as most people know, there’s a real shortage right now in the industry in chips; it’s really hard to get the amount of GPUs that everybody wants. And so that’s just another reason why Trainium and Inferentia are so attractive to people. They have better price-performance characteristics than the other options out there, and there’s also the fact that you can get access to them. We’ve done, I think, a pretty good job providing supply there and ordering meaningfully in advance as well. And so you’re seeing very large LLM providers make big bets on those chips. I think Anthropic deciding to train its future LLM models on Trainium, and using Inferentia as well, is really a statement. And then you look at the really hot start-up Perplexity AI, who also just made a decision to do all their training and inference on top of Trainium and Inferentia. So those are two examples.

Amazon recently announced the general availability of Amazon Bedrock (AWS’s LLMs-as-a-service), which gives access to a variety of 3rd-party large language models (LLMs) as well as Amazon’s own LLM called Titan; Meta’s Llama-2 LLM will also be on Bedrock, the first time it is available through a fully-managed service

In the middle layer, which we think of as large language models as a service, we recently introduced general availability for Amazon Bedrock, which offers customers access to leading LLMs from third-party providers like Anthropic, Stability AI, Cohere and AI21, as well as from Amazon’s own LLM called Titan. Customers can take those models, customize them using their own data – without leaking that data back into the generalized LLM – and have access to the same security, access control and features that they run the rest of their applications with in AWS, all through a managed service. In the last couple of months, we’ve announced the imminent addition of Meta’s Llama 2 model to Bedrock, the first time it’s being made available through a fully managed service.

Amazon’s management believes that Bedrock helps customers experiment rapidly with different LLMs and is the easiest way to build and scale enterprise-ready generative AI applications; customer reaction to Bedrock has been very positive

Also, through our expanded collaboration with Anthropic, customers will gain access to future Anthropic models through Bedrock, with exclusive early access to unique features, model customization and the ability to fine-tune the models. And Bedrock has added several new compelling features, including the ability to create agents which can be programmed to accomplish tasks like answering questions or automating workflows. In these early days of generative AI, companies are still learning which models they want to use, which models they use for what purposes and which model sizes they should use to get the latency and cost characteristics they desire. In our opinion, the only certainty is that there will continue to be a high rate of change. Bedrock helps customers with this fluidity, allowing them to rapidly experiment with and move between model types and sizes, and enabling them to pick the right tool for the right job. The customer reaction to Bedrock has been very positive, and its general availability has buoyed that further. Bedrock is the easiest way to build and scale enterprise-ready generative AI applications, and a real game changer for developers and companies trying to get value out of this new technology…

Bedrock’s ability to let customers conduct fast experiments is very useful because customers sometimes get surprised at the true costs of running certain AI models

Because what happens is you try a model, you test the model, you like the results of the model and then you plug it into your application, and what a lot of companies figure out quickly is that using the really large models and the large sizes ends up often being more expensive than what they anticipated and what they want to spend on that application. And sometimes there’s too much latency in getting the answers as it shovels through the really large models. And so customers are experimenting with lots of different types of models and then different model sizes to get the cost and latency characteristics that they need for different use cases. It’s one of the things that I think is so useful about Bedrock: customers are trying so many variants right now, and to have a service that not only lets you leverage lots of third-party as well as Amazon large language models, but also lots of different sizes, and then makes the transition of moving those workloads easy between them, is very advantageous.
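
As an aside, here’s a minimal sketch (mine, not Amazon’s) of what that experimentation looks like in code, using the AWS SDK for Python (boto3). It assumes AWS credentials are already configured; the model ID is just one example, and each model family expects its own request-body format.

```python
# A minimal sketch of invoking a text model on Amazon Bedrock via boto3.
# Assumes AWS credentials are configured; the model ID is illustrative,
# and each model family expects a slightly different request-body schema.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask(model_id: str, body: dict) -> dict:
    """Invoke a Bedrock model and return the parsed JSON response."""
    response = bedrock.invoke_model(
        modelId=model_id,
        contentType="application/json",
        accept="application/json",
        body=json.dumps(body),
    )
    return json.loads(response["body"].read())

# Comparing model families or sizes for cost and latency is a small change:
print(ask("amazon.titan-text-express-v1", {"inputText": "Summarise this contract."}))
```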

Amazon CodeWhisperer, AWS’s coding companion, has a lot of early traction and has become more powerful recently by having the capability to be customised on a customer’s own code base (a first-of-its-kind feature)

Generative AI coding companion Amazon CodeWhisperer has gotten a lot of early traction and got a lot more powerful recently with the launch of its new customization capability. The #1 enterprise request for coding companions has been wanting these companions to be familiar with customers’ proprietary code bases, not just having code companions trained on open-source code. Companies want the equivalent of a long-time senior engineer who knows their code base well. That’s what CodeWhisperer just launched, another first of its kind out there in its current form, and customers are excited about it.

Amazon’s management believes that customers want to bring AI models to their data, not the other way around – and this is an advantage for AWS as customers’ data resides within AWS

It’s also worth remembering that customers want to bring the models to their data, not the other way around. And much of that data resides in AWS as the clear market segment leader in cloud infrastructure. 

There are many companies that are building generative AI apps on AWS and this number is growing fast

The number of companies building generative AI apps on AWS is substantial and growing very quickly, including Adidas, Booking.com, Bridgewater, Clariant, GoDaddy, LexisNexis, Merck, Royal Philips and United Airlines, to name a few.

Generative AI’s growth rate within AWS is very fast – even faster than Amazon’s management expected – and management believes that the absolute amount of generative AI business within AWS compares very favourably with other cloud providers

You can see it also in just the growth rate for us in generative AI, which is very fast. Again, I have seen a lot of different numbers publicly. It’s real hard to measure an apples-to-apples. But in our best estimation, the amount of growth we’re seeing in the absolute amount of generative AI business compares very favorably with anything else I’ve seen externally.

Generative AI is already a pretty significant business for AWS, but it’s still early days

What I would tell you is that we have been surprised at the pace of growth in generative AI. Our generative AI business is growing very, very quickly, as I mentioned earlier. And almost by any measure, it’s a pretty significant business for us already. And yet I would also say that companies are still in the relatively early stages.

All of Amazon’s significant businesses are working on generative AI applications, with examples including using generative AI to (1) help consumers discover products, (2) forecast inventory in various locations, (3) help 3rd-party sellers create new product pages, (4) help advertisers with image generation for ads, and (5) improve Alexa

Beyond AWS, all of our significant businesses are working on generative AI applications to transform their customer experiences. There are too many for me to name on this call, but a few examples include: in our stores business, we’re using generative AI to help people better discover products they want and to more easily access the information needed to make decisions. We use generative AI models to forecast inventory we need in our various locations and to derive optimal last-mile transportation routes for drivers to employ. We’re also making it much easier for our third-party sellers to create new product pages by entering much less information and getting the models to do the rest. In advertising, we just launched a generative AI image generation tool, where all brands need to do is upload a product photo and description to quickly create unique lifestyle images that will help customers discover products they love. And in Alexa, we built a much more expansive LLM and previewed the early version of this. Apart from being a more intelligent version of herself, Alexa’s new conversational AI capabilities include the ability to make multiple requests at once as well as more natural and conversational requests without having to use specific phrases.

Amazon’s management still believes in the importance of building the world’s best personal assistant and they think Alexa could be one of these assistants

We continue to be convicted that the vision of being the world’s best personal assistant is a compelling and viable one and that Alexa has a good chance to be one of the long-term winners in this arena. 

While Amazon’s management is pulling back Amazon’s capital expenditure on other areas, they are increasing capital expenditure for AI-related infrastructure

For the full year 2023, we expect capital investments to be approximately $50 billion compared to $59 billion in 2022. We expect fulfillment and transportation CapEx to be down year-over-year, partially offset by increased infrastructure CapEx to support growth of our AWS business, including additional investments related to generative AI and large language model efforts.

Apple (NASDAQ: AAPL)

Apple’s management sees AI and machine learning as fundamental technologies to the company and they’re integrated in virtually every product that Apple ships

If you kind of zoom out and look at what we’ve done on AI and machine learning and how we’ve used it, we view AI and machine learning as fundamental technologies, and they’re integral to virtually every product that we ship. 

Apple’s AI-powered features include Personal Voice and Live Voicemail in iOS 17, and fall detection, crash detection, and ECG on the Apple Watch; Apple’s management does not want to label Apple’s AI-powered features with “AI” – instead the features are labelled by their consumer benefits

And so just recently, when we shipped iOS 17, it had features like Personal Voice and Live Voicemail. AI is at the heart of these features. And then you can go all the way to then life-saving features on the Watch and the phone like fall detection, crash detection, ECG on the watch. These would not be possible without AI. And so we don’t label them as such, if you will. We label them as to what their consumer benefit is, but the fundamental technology behind it is AI and machine learning.

Apple is investing in generative AI but management has no details to share yet

In terms of generative AI, we have — obviously, we have work going on. I’m not going to get into details about what it is because as you know, we really don’t do that. But you can bet that we’re investing, we’re investing quite a bit. We are going to do it responsibly. And it will — you will see product advancements over time where those technologies are at the heart of them.

Arista Networks (NYSE: ANET)

From the vantage point of Arista Networks’ management, Oracle has become an important AI data centre company

Our historic classification of our Cloud Titan customers has been based on the industry definition of customers with, or likely to attain, greater than 1 million installed compute servers. Looking ahead, we will combine cloud and AI customer spend into one category called the Cloud and AI Titan sector. And as a result of this combination, Oracle OCI becomes a new member of the sector, while Apple shifts to cloud specialty providers…

…So I think OCI has become a meaningful top-tier cloud customer and they belong in the cloud titan category, in addition to their AI investments as well. So for reasons of classification and definition, the change is very warranted. And yes, they happen to be a good customer of Arista, which is nice as well.

Arista Networks’ management has observed that its large customers have different needs when it comes to AI and non-AI networking technologies 

During the past year, our Cloud Titan customers have been planning a different mix of AI networking and classic cloud networking for their compute and storage clusters.

Arista Networks’ management believes that the company’s recent deal with a public sector organisation to provide Ethernet networking technology for the organisation’s AI initiative is an example of why Ethernet is important in AI

Our next [ one ] showcases our expansion of Arista in the public sector with their AI initiative. This grant-funded project utilizes Arista’s simplified operational models with CloudVision. New AI workloads require high scale, high radix, high bandwidth and low latency, as well as a need for granular visibility. This build-out of a single EVPN-VXLAN based 400-gig fabric is based on deep-buffer spines and underscores the importance of a lossless architecture for AI networking.

Arista Networks’ management is seeing its customers prioritise AI in their data centre spending right now, but demand for other forms of data centre-related spending will follow

We’ve always looked at the cloud network as a front end and a back end. And as we said last year, many of our cloud customers are favoring spending more on the back end with AI, which doesn’t mean they stop spending on the front end, but they’ve clearly prioritized and doubled down on AI this year. My guess is, as we look at the next few years, they’ll continue to double down on AI. But you cannot build an AI back-end cluster without thinking of the front end. So we’ll see a full cycle here; while today the focus is greatly on AI and the back end of the network, in the future, we expect to see more investments in the front end as well.

Arista Networks’ management sees AI networking as being dominated by InfiniBand today (with some room for a combination of InfiniBand and Ethernet), but they still believe that AI networking will trend toward Ethernet over time, with 2025 being a potential inflection point

Today, if I look at the 5 major designs for AI networking, one of them is still very InfiniBand-dominated; all the others we’re looking at are adopting a dual strategy of both Ethernet and InfiniBand. So I think AI networking is going to become more and more favorable to Ethernet, particularly with the Ultra Ethernet Consortium and the work they’re doing to define a spec. You’re going to see more products based on UEC. You’re going to see more of a connection between the back end and the front end using IP as a singular protocol. And so we’re feeling very encouraged that, especially in 2025, there will be a lot of production rollout of the back end and, of course, the front end based on Ethernet.

Arista Networks’ management sees networking spend as contributing to 10%-15% of the total cost of an AI data centre 

Coming back to this networking spend versus the rest of the GPUs, et cetera: I would say it started to get higher and higher with 100-gig, 400-gig and 800-gig, where the optics and the switches are more than 10%, perhaps even 15% in some cases 20%; a lot of it is governed by the cables and optics too. But the percentage hasn’t changed a lot in high-speed networking. In other words, it’s not too different between 10, 100, 200, 400 and 800. So you’ll continue to see that 10% to 15% range.

Arista Networks’ management sees diversified activity when it comes to the development of AI data centres

[Question]  And just what you’re seeing in terms of other people kind of building out some of these AI clusters, if you classify some of those customers as largely focused on back end today, and those represent opportunities going forward? Or just kind of what the discussion is outside of the Cloud Titans amongst some of these other guys that are building very large networks?

[Answer] The Tier 2 cloud providers are doing exactly what the Tier 1 is doing, just at a smaller scale. So the activity is out there. Many companies are trying to build these clusters, maybe not hundreds of thousands of GPUs, but thousands of GPUs together in their real estate if they can get them. But the designs that we’re working on with them, the type of features, the fine-tuning, is actually very, very similar to the cloud, just at a smaller scale. So we’re very happy with that activity and this is across the board. It’s very positive to see this in the ecosystem, that it’s not limited to just 4 or 5 customers.

Arista Networks’ management is observing that data centre companies are facing a shortage of GPUs (graphics processing units) and they are trying to develop AI with smaller GPU clusters

I think they’re also waiting for GPUs like everyone else is. So there’s that common problem; we’re not the only one with lead-time issues. But to clarify the comment on scale, Anshul and I are also seeing some very interesting enterprise projects at smaller scale. So a lot of customers are trying AI with small clusters, not too different from what we saw with HPC clusters back in the day.

Arista Networks’ management believes that good networking technology for AI requires not just good silicon, but the right software, so they are not concerned about Arista Networks’ suppliers moving up the stack

It’s not just the merchant silicon but how you can enable the merchant silicon with the right software and drivers, and this is an area where Arista really excels. If you just have chips, you can’t build the system. Our system-wide features, whether it’s dynamic load balancing or the latency analyzer, to really improve job completion time and deal with the frequent communication in generative AI, are also fundamentally important…

… [Question] So I think there was a mention on merchant silicon earlier in the Q&A. And one of your merchant silicon partners has actually moved up the stack towards the service provider routing. I’m just curious if there’s any intention on going after that piece if that chip is made available to you?

[Answer] I believe you are referring to the latest announcement from Broadcom on their 25.6T Jericho chip that was announced recently.

[Question] Yes, the Qumran3D.

[Answer] Qumran3D, exactly. So it’s the same family, same features. And as you know, we’ve been a great partner of Broadcom for a long time, and we will continue to build new products. This is not a new entry, so to speak. We’ve been building these products that can be used on switches or routers for a while, and that bandwidth just doubled, going now to 25.6T. So you can expect some products from us in the future with those variants as well. But really, nothing really changed…

…And the investment we have made in our routing stack over the last 10 years, I want to say, has just gotten better and stronger. Power in the Internet, power in the cloud, power in AI: these are hard problems. And they require thousands of engineers of investment to build the right VXLAN, BGP routing, EVPN, et cetera. So it’s not just a chip. It’s how we enable the chip to do these complicated routing algorithms.

AI is becoming a really important component of the spending of Arista Networks’ cloud titan customers

We’re simply seeing AI is going to become such an important component of all our cloud titans that it is now a combined vertical.

Datadog (NASDAQ: DDOG)

Datadog’s management is excited about generative AI and large language models and they believe that the adoption of AI will lead to additional growth in cloud workloads

Finally, we continue to be excited about the opportunity in generative AI and Large Language Models. First, we believe adopting NextGen AI will require the use of cloud and other modern technologies and drive additional growth in cloud workloads.

Datadog is building LLM observability products

So we are continuing to invest by integrating with more components at every layer of the new AI stack and by developing our own LLM observability products. 
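
Datadog did not detail the product’s API on the call, so as a rough illustration only, here’s a sketch of wrapping an LLM call in a custom span with Datadog’s open-source ddtrace library; the span and tag names are my own, and llm_client stands in for whatever SDK an application would use.

```python
# A hedged sketch of wrapping an LLM request in a custom Datadog span using
# the open-source ddtrace library. The span and tag names are illustrative,
# not Datadog's LLM observability product API; llm_client is a hypothetical
# stand-in for the application's model SDK.
from ddtrace import tracer

def traced_completion(llm_client, prompt: str) -> str:
    with tracer.trace("llm.request", service="chat-backend", resource="completion") as span:
        span.set_tag("llm.prompt_chars", len(prompt))
        completion = llm_client.complete(prompt)  # hypothetical client call
        span.set_tag("llm.completion_chars", len(completion))
        return completion
```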

Datadog’s management is seeing adoption of AI across many of its customers, but the activity is concentrated in AI-native customers

And while we see signs of AI adoption across large parts of our customer base, in the near term, we continue to see AI-related usage manifest itself most acutely with next-gen AI-native customers, who contributed about 2.5% of our ARR this quarter.

Datadog is adding value to its own platform using AI, with one example being Bits AI, Datadog’s AI assistant

Besides observing the AI stack, we also expect to keep adding value to our own platform using AI. Datadog’s unified platform and purely SaaS model, combined with strong multiproduct adoption by our customers, generates a large amount of deep and precise observability data. We believe combining AI capabilities with this broad data set will allow us to deliver differentiated value to customers. And we are working to productise that differentiated value through recently announced capabilities such as our Bits AI assistant, AI-generated synthetic tests and AI-led error analysis and resolution, and we expect to deliver many more related innovations to customers over time.

Datadog’s management is seeing that AI-native customers are using Amazon’s AWS whereas the larger enterprises that are using AI are using Microsoft’s Azure

Interestingly enough, when we look at our cohort of customers that we consider to be AI-native and built largely on AI (including the model providers), they tend to be on different clouds. What we see is that the majority of those companies actually have a lot of their usage on AWS. Today, the larger part of the usage, or the larger of these customers, are on Azure. So we see really several different adoption trends there that I think are interesting to the broader market.

Datadog’s management is seeing broad usage of AI across Datadog’s customers, but the customers are adopting AI only at low volumes

We see broad usage of AI functionality across the customer base, but at low volumes, and it corresponds to the fact that most customers, most enterprises really, are still in the early stages of developing and shipping applications. So for now, the usage is concentrated among the model providers.

Datadog’s management sees a lot of opportunity for Datadog as AI usage proliferates – for example, management believes that the widespread use of AI will result in the creation of a lot of code, and this code will need to be monitored

So on the DevSecOps side, I think it’s too early to tell how much revenue opportunity there is in the tooling specifically. When you think of the whole spectrum of tools, the closer you get to the developer side, the harder it is to monetize; and the further you get towards operations and infrastructure, the easier it is to monetize. You can ship things that are very useful and very accretive to our platform, because they get you a lot of users, a lot of attention and a lot of stickiness, that are harder to monetize. So we’ll see where on the spectrum that is. What we know, though, is that broader generative AI up and down the stack, from the components themselves, the GPUs, all the way up to the models and the various things that are used to orchestrate them and store the data and move the data around: all of that is going to generate a lot of opportunity for us. We said right now, it’s concentrated among the AI natives, largely model providers. But we see that it’s going to broaden and concern a lot more of our customers down the road…

…So in general, the more complexity there is, the more useful observability is, and the more value shifts from writing code to actually understanding it and observing it. So, to caricature: if you spend a whole year writing 5 lines of code that are really very deep, you actually know those 5 lines pretty well; maybe you don’t need observability because you understand exactly how they work and what’s going on with them. On the other hand, if, thanks to all the major advances of technology and generative AI, you can just very quickly generate thousands of lines of code, ship them and start operating them, you actually have no idea how they work and what they do. And you need a lot of tooling and observability to actually understand that and keep driving that and secure it and do everything you need to do with it over time. So we think that, overall, these increases in productivity are going to favor observability.

Datadog’s management is also trying to guess how transformative AI will be, but there are signs that AI’s impact will be truly huge

In terms of the future growth of AI, look, I think, like everyone, we’re trying to guess how transformative it’s going to be. It looks like it’s going to be pretty big, if you judge from, just internally, how much of that technology we are adopting and how much productivity impact it seems to be having.

AI-related use cases are still just a small fraction of the overall usage of Datadog’s products, but Datadog’s management thinks that AI will drive a lot of the company’s growth in the future 

So again, today, we only see a tiny bit of it, which is early adoption by model providers and a lot of companies that are trying to scale up and experiment and figure out how it applies to their businesses and what they can ship to use the technology. But we think it’s going to drive a lot of growth in the years to come.

Datadog’s management can’t tell when Datadog’s broader customer base will start ramping up AI workloads but they are experimenting; most of the innovation happening right now is concentrated among the model providers

[Question] Olivier, you called out the 2.5 points from AI native customers a few times, but you’ve also said that the broader customer base should start adding AI workloads to our platform over time. When do you think that actually takes place and the broader customer base starts to impact that AI growth in more earnest?

[Answer] We don’t know. And I think it’s too early to tell. For one part, there’s some uncertainty in terms of these customers beginning to figure out what it is they are going to ship to their own customers. I think everybody is trying to learn that right now and experiment with it. But the other part is also that right now, the innovation is largely concentrated among the model providers. And so it’s rational right now for most customers to rely on those instead of deploying their own infrastructure. Again, we think it’s likely going to change. We see a lot of demand and interest in other ways to host models and run models and all those things like that. But today, these are the trends of the market, basically.

Etsy (NASDAQ: ETSY)

Etsy’s management is improving the company’s search function by combining humans and machine learning technology to better identify the quality of each product listing on the Etsy platform

We’re moving beyond relevance to the next frontier of search focused on better identifying the quality of each Etsy listing, utilizing humans and ML technology so that from a highly relevant result set, we bring the very best of Etsy to the top, personalized to what we understand of your tastes and preferences. For example, from the start of the year, we’re tracking to a ninefold increase in the number of human-curated listings on Etsy to over 1.5 million listings by year-end. We’re also utilizing ML models designed to determine the visual appeal of items and incorporating that information into our search algorithms. 
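
Etsy has not published its ranking formula, but conceptually, blending a model-predicted “quality” score into a relevance-ranked result set could look something like this toy sketch (the scoring callables and the weight are purely illustrative, not Etsy’s actual ranker):

```python
# A toy sketch of blending a model-predicted quality/visual-appeal score
# into a relevance-ranked result set. The callables and the 0.3 weight are
# purely illustrative, not Etsy's actual ranking formula.
from typing import Callable

def blended_rank(
    listings: list[str],
    relevance: Callable[[str], float],      # score from the existing search ranker
    visual_appeal: Callable[[str], float],  # score from the ML appeal model
    w_quality: float = 0.3,
) -> list[str]:
    def score(listing: str) -> float:
        return (1 - w_quality) * relevance(listing) + w_quality * visual_appeal(listing)
    return sorted(listings, key=score, reverse=True)
```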

Etsy’s management is using generative AI to improve the Etsy search-experience when buyers enter open-ended queries, which helps build purchase-frequency

There’s also a huge opportunity to evolve the Etsy experience so that we show buyers a more diverse set of options when they search for open-ended head-query items such as back-to-school. On the left of this slide, you can see an example of how a search for back-to-school items looks on Etsy. We generally show multiple very similar versions of customized pencils, stickers, lawn signs and so on, all mixed together. This is suboptimal as it offers buyers only a few main ideas on the first page of search and requires a ton of cognitive load to distinguish between virtually identical items. We’ve recently launched a variety of experiments with the help of Gen AI to evolve these types of head-query searches. As we move into 2024, when a buyer searches for broad queries, we expect to be able to show a far more diverse and compelling set of ideas, all beautifully curated. By organizing search results into a number of ideas for you that are truly different, and helping to elevate the very best items within each of these ideas, we can take a lot of the hard work out of finding exactly the perfect item, and help build frequency as we highlight the wide range of merchandise available on Etsy.

Etsy’s management is using machine learning to identify product-listings that are not conforming to the company’s product policies, and listing-takedowns are already up 140% year-on-year 

We’ve hired a lot of people, and we have also been investing a lot in machine learning, and machine learning is really helping us to identify, among the 120 million listings on Etsy, those that may not conform with our policies. Takedowns are up 140% year-over-year.

Fiverr (NYSE: FVRR)

Fiverr’s management has developed Fiverr Neo, a generative AI tool that helps customers scope their projects better and match them with suitable freelance talent, just like a human recruiter would, just better; management believes that Fiverr Neo will help save customers time when they are looking for freelance talent

The vision for Fiverr Neo is quite wild – we imagine Neo will serve as a personalized recruiting expert that can help our customers more accurately scope their projects and get matched with freelance talent, just like a human recruiter, only with more data and more brain power. What we have done so far is leverage the existing LLM engines to allow customers to express their project needs in natural language, which Neo will synthesize and define the scope before matching the client with a short list of choices pulled from the entire Fiverr freelancer database. It’s a substantial step forward from the existing experience and streamlines the time the customer needs to make an informed decision.

Fiverr’s management used a combination of Fiverr’s own software and LLMs from other companies to build Fiverr Neo

So there’s a lot of learning as we build this product. And what we’re doing is really a hybrid of technologies. Some of them are being developed by us. Some are off the shelf from most of the leading companies that are developing LLMs, which have partnered with us. And we’re putting this to the maximum. I think a lot of these systems are not yet optimized for large scale and high performance, but we find our own ways of developing a lot of this technology to provide a very smooth experience to our customers.

Fiverr Neo is still new, but users are already experiencing more accurate matches

In terms of Fiverr Neo, we’re very pleased with the rollout. Obviously, it’s a very, very young product, but we’re seeing over 100,000 users trying the product. And what we’re seeing from their experience is that we’re able to provide more accurate matches, which is basically what we wanted to do, and higher engagement and satisfaction levels, which we’re very happy with, and the beginnings of repeat usage of the product.

Fiverr’s management thinks that AI has a positive impact on the product categories that Fiverr can introduce to its marketplace and management is ensuring that Fiverr’s catalog will contain any new skills that the AI-age will require; management thinks that a lot of AI hype at the beginning of the year has died down and the world is looking for killer AI applications

So I did address this also in how we think about next year, and the fact that AI both impacts the efficiency of how we work and allows us to do pretty incredible things in our product. It also has a positive impact on the categories that we can introduce. So again, we’re not getting into a specific category breakdown. But from what we’re seeing on the buyer side, since we’ve introduced these categories, these categories continue growing. I think that a lot of the hype that surrounded AI in the beginning of the year has subsided, and right now it’s really about looking for the killer applications that could be developed with AI; we’re developing some of them and our customers are as well. So these are definitely areas where we continue seeing growth, but not just that: we continue investing in the catalog side to ensure that the new types of skills that pop up are going to be addressed on the Fiverr marketplace.

Mastercard (NYSE: MA)

Mastercard’s management is using AI to improve the company’s fraud-related solutions and has signed agreements in Argentina, Saudi Arabia, and Nigeria in this area

AI also continues to play a critical role powering our products and fueling our network intelligence. We’re scaling our AI-powered transaction fraud monitoring solution, which delivers real-time predictive scores based on a unique blend of customer and network level insights. This powerful solution gives our customers the ability to take preventive action before the transaction is authorized. This quarter alone, we signed agreements in Argentina, Saudi Arabia and Nigeria with financial institutions and fintechs who will benefit from early fraud detection and with merchants who will experience less friction and higher approval rates.

MercadoLibre (NASDAQ: MELI)

MercadoLibre’s management is very excited about AI and how it can help MercadoLibre improve the user experience and its business operations

As you know, we don’t guide, but there are many exciting things going on, particularly, obviously, AI. That hopefully will enable us to provide our users a better experience, enable us to launch innovative ideas, and also scale and gain efficiencies, whether it is in customer service, or whether it is in fraud prevention or whether it is in the way our developers, 15,000 developers, go about developing and performing quality control, et cetera. So obviously, looking forward for the next 3 years, I think that’s a key thing to look into.

MercadoLibre’s management is working on using AI to improve the company’s product-search function and they are happy with the progress so far 

Last question, in terms of AI and search: we are working on that. I mean, we are putting a lot of effort into building solutions around AI. I think we don’t have much to disclose as of now, but search, reviews, questions and answers, buy box and products, and, as Marcos was saying, a copilot for our developers. We’re looking at the broad range of AI uses for MercadoLibre to boost consumer demand and efficiency. And we’re happy with the progress that we have so far, but there’s not much to be said yet.

MercadoLibre’s management has been using AI for many years in fraud prevention and credit scoring for the company’s services

We have been using AI for a long time now, for many, many years, both in terms of fraud prevention and credit scoring. Both instances are pretty much use cases which are ideal for AI, because, in the case of fraud prevention, we have millions of transactions every day with a clear outcome: either fraud or not fraud. So with the right variables, we can build a very strong model that is predictive and have really best-in-class fraud prevention. And with that knowledge, and given the experience we have been building on credits, we have also built our credit scoring models leveraging AI.
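
What management describes (millions of labelled historical transactions in, a fraud probability out) is classic supervised learning. Here’s a toy illustration with synthetic data, not MercadoLibre’s actual model:

```python
# A toy illustration of a supervised fraud-scoring model: labelled historical
# transactions in, a fraud probability out. Features and labels here are
# synthetic placeholders, not MercadoLibre's data or model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))  # e.g. amount, velocity, device and location signals
y = (X[:, 0] + 0.5 * rng.normal(size=10_000) > 2).astype(int)  # synthetic fraud label

model = GradientBoostingClassifier().fit(X, y)
new_txn = X[:1]  # score an incoming transaction before authorization
print(f"fraud score: {model.predict_proba(new_txn)[0, 1]:.3f}")
```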

Meta Platforms (NASDAQ: META)

The next-generation Ray-Ban Meta smart glasses have embedded AI

The next generation of Ray-Ban Meta smart glasses, which are the first smart glasses with our Meta AI built in.

Meta Platforms’ management thinks glasses are an ideal form factor for an AI device as they can see exactly what you see and hear what you hear

And in many ways, glasses are the ideal form factor for an AI device because they enable your AI assistant to see what you see and hear what you hear. 

Llama 2 is now the leading open source AI model with >30 million downloads last month

We’re also building foundation models like Llama 2, which we believe is now the leading open source model with more than 30 million Llama downloads last month.
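
For context, those downloads translate into code like the following minimal sketch of running the Llama 2 chat model with Hugging Face transformers; the model is gated, so it assumes Meta’s licence has been accepted on the Hub and an access token is configured:

```python
# A minimal sketch of running Meta's open-source Llama 2 chat model with
# Hugging Face transformers. The model is gated on the Hub (licence
# acceptance and an access token required); device_map="auto" also needs
# the accelerate package installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Why are smart glasses a good AI form factor?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```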

Beyond generative AI, Meta Platforms’ management is using recommendation AI systems for the company’s Feeds, Reels, ads, and integrity systems and these AI systems are very important to the company; AI feed recommendations led to increases in time spent on Facebook (7%) and Instagram (6%)

Beyond that, there was also a different set of sophisticated recommendation AI systems that powers our Feeds, Reels, ads and integrity systems. And this technology has less hype right now than generative AI but it is also very important and improving very quickly. AI-driven feed recommendations continue to grow their impact on incremental engagement. This year alone, we’ve seen a 7% increase in time spent on Facebook and a 6% increase on Instagram as a result of recommendation improvements. 

Meta Platforms’ AI tools for advertisers have helped drive its Advantage+ advertising product to reach a US$10 billion revenue run-rate, with more than 50% of the company’s advertisers using Advantage+ creative tools

Our AI tools for advertisers are also driving results with Advantage+ shopping campaigns reaching a $10 billion run rate and more than half of our advertisers using our Advantage+ creative tools to optimize images and text and their ads creative.

AI-recommended content has become increasingly incremental to engagement on Meta Platforms’ properties

AI-recommended content from unconnected accounts and feed continues to become increasingly incremental to engagement, including in the U.S. and Canada. These gains are being driven by improvements to our recommendation systems, and we see additional opportunities to advance our systems even further in the future as we deploy more advanced models.

Meta Platforms’ management believes that the company’s Business AIs can easily help businesses set up AIs to communicate with consumers at very low cost, which is important in developed economies where cost of labour is high (businesses in developing economies tend to hire humans to communicate with consumers)

Now I think that this is going to be a really big opportunity for our new Business AIs that I talked about earlier that we hope will enable any business to easily set up an AI that people can message to help with commerce and support. Today, most commerce and messaging is in countries where the cost of labor is low enough that it makes sense for businesses to have people corresponding with customers over text. And in those countries like Thailand or Vietnam, there’s a huge amount of commerce that happens in this way. But in lots of parts of the world, the cost of labor is too expensive for this to be viable. But with business AIs, we have the opportunity to bring down that cost and expand commerce and messaging into larger economies across the world. So making business AIs work for more businesses is going to be an important focus for us into 2024.

Meta Platforms’ management has started testing the company’s AI capabilities with a few partners in business messaging

We’ve recently started testing AI capabilities with a few partners and we’ll take our time to get the experience right, but we believe this will be a big unlock for business messaging in the future.

Meta Platforms’ management still believes in the benefits of open-sourcing Meta’s AI models: It increases adoption (which benefits the company as the security features and cost-efficiency of the models improves) and talent is more attracted to Meta Platforms

We have a pretty long history of open sourcing parts of our infrastructure that are not kind of the direct product code. And a lot of the reason why we do this is because it increases adoption and creates a standard around the industry, which often drives forward innovation faster so we benefit and our products benefit as well as there’s more scrutiny on kind of security and safety-related things so we think that there’s a benefit there.

And sometimes, more companies running models or infrastructure can make it run more efficiently, which helps reduce our costs as well, which is something that we’ve seen with open compute. So I think that there’s a good chance that, that happens here over time. And obviously, our CapEx expenses are a big driver of our costs, so any aid in innovating on efficiency is sort of a big thing there.

The other piece is just that over time with our AI efforts, we’ve tried to distinguish ourselves as being a place that does work that will be shared with the industry and that attracts a lot of the best people to come work here. So a lot of people want to go to the place to work where their work is going to touch most people. One way to do that is by building products that billions of people use. But if you’re really a focused engineer or researcher in this area, you also want to build the thing that’s going to be the standard for the industry. So that’s pretty exciting and it helps us do leading work.

Meta Platforms’ management thinks the AI characters that the company introduced recently could lead to a new kind of medium and art form and ultimately drive increasing engagement for users of the company’s social apps

We’re designing these to make it so that they can help facilitate and encourage interactions between people and make things more fun by making it so you can drop in some of these AIs into group chats and things like that just to make the experiences more engaging. So this should be incremental and create additional engagement. The AIs also have profiles in Instagram and Facebook and can produce content, and over time, going to be able to interact with each other. And I think that’s going to be an interesting dynamic and an interesting, almost a new kind of medium and art form. So I think that will be an interesting vector for increasing engagement and entertainment as well.

Meta Platforms’ management thinks that generative AI is a really exciting technology that changes everything; although it’s hard to predict generative AI’s impact on how individuals use Meta’s services, they still think it’s worth investing in

In terms of how big this is going to be, it’s hard to predict because I don’t think that anyone has built what we’re building here. I mean, there’s some analogy to what OpenAI is doing with ChatGPT, but that’s pretty different from what we’re trying to do. Maybe the Meta AI part of what we’re doing overlaps with the type of work that they’re doing, but the AI characters piece, there’s a consumer part of that, there’s a business part, there’s a creators part. I’m just not sure that anyone else is doing this. And when we’re working on things like Stories and Reels, there were some market precedents before that. Here, there’s technology which is extremely exciting. But I think part of what leading in an area and developing a new thing means is you don’t quite know how big it’s going to be. But what I predict is that I do think that the fundamental technology around generative AI is going to transform meaningfully how people use each of the different apps that we build…

…So I think you’re basically seeing that there are going to be — this is a very broad and exciting technology. And frankly, I think that this is partially why working in the technology industry is so awesome, right, is that every once in a while, something comes along like this, that like changes everything and just makes everything a lot better and your ability to just be creative and kind of rethink the things that you’re doing to be better for all the people you serve…

…But yes, it’s hard sitting here now to be able to predict what the metrics are going to be around, like, what’s the balance of messaging between AIs and people, or what the balance in Feeds will be between AI content and people content, or anything like that. But I mean, I’m highly confident that this is going to be a thing and I think it’s worth investing in.

Meta Platforms’ management believes that generative AI will have a big impact on the digital advertising industry

It’s going to change advertising in a big way. It’s going to make it so much easier to run ads. Businesses that basically before would have had to create their own creative or images now won’t have to do that. They’ll be able to test more versions of creative, whether it’s images or eventually video or text. That’s really exciting, especially when paired with the recommendation AI.

Microsoft (NASDAQ: MSFT)

Microsoft’s management is making AI real for everyone through the introduction of Copilots

With Copilots, we are making the age of AI real for people and businesses everywhere. We are rapidly infusing AI across every layer of the tech stack and for every role and business process to drive productivity gains for our customers.

Microsoft’s management believes that Azure has the best AI infrastructure for both training and inference

We have the most comprehensive cloud footprint with more than 60 data center regions worldwide as well as the best AI infrastructure for both training and inference. And we also have our AI services deployed in more regions than any other cloud provider.

Azure AI provides access to models from OpenAI and open-sourced models (including Meta’s) and 18,000 organisations now use Azure OpenAI

Azure AI provides access to best-in-class frontier models from OpenAI and open-source models, including our own as well as from Meta and Hugging Face, which customers can use to build their own AI apps while meeting specific cost, latency and performance needs. Because of our overall differentiation, more than 18,000 organizations now use Azure OpenAI service, including new to Azure customers.
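
For a sense of what those 18,000 organisations are doing, calling a model deployed through Azure OpenAI looks roughly like this with the openai Python SDK (v1+); the endpoint, key, API version, and deployment name below are placeholders:

```python
# A minimal sketch of calling a model deployed through Azure OpenAI with the
# openai Python SDK (v1+). The endpoint, key, API version and deployment
# name are placeholders, not real values.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",
    api_key="YOUR_KEY",
    api_version="2023-05-15",
)

response = client.chat.completions.create(
    model="my-gpt-deployment",  # the Azure deployment name, not a raw model id
    messages=[{"role": "user", "content": "Draft a status update for the team."}],
)
print(response.choices[0].message.content)
```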

GitHub Copilot increases developer productivity by up to 55%; there are more than 1 million paid Copilot users and more than 37,000 organisations that subscribe to Copilot for business (up 40% sequentially)

With GitHub Copilot, we are increasing developer productivity by up to 55% while helping them stay in the flow and bringing the joy back to coding. We have over 1 million paid Copilot users and more than 37,000 organizations that subscribe to Copilot for business, up 40% quarter-over-quarter, with significant traction outside the United States.

Microsoft’s management is using AI to improve the healthcare industry: Dragon Ambient Experience (from the Nuance acquisition) has been used in more than 10 million patient interactions to-date to automatically document the interactions, and DAX Copilot can draft clinical notes in seconds, saving physicians up to 40 minutes of documentation time daily

In health care, our Dragon Ambient Experience solution helps clinicians automatically document patient interactions at the point of care. It’s been used across more than 10 million interactions to date. And with DAX Copilot, we are applying generative models to draft high-quality clinical notes in seconds, increasing physician productivity and reducing burnout. For example, Atrium Health, a leading provider in Southeast United States, credits DAX Copilot with helping its physicians each save up to 40 minutes per day in documentation time.

Microsoft’s management has infused Copilot across Microsoft’s work-productivity products and tens of thousands of users are already using Copilot in early access

Copilot is your everyday AI assistant, helping you be more creative in Word, more analytical in Excel, more expressive in PowerPoint, more productive in Outlook and more collaborative in Teams. Tens of thousands of employees at customers like Bayer, KPMG, Mayo Clinic, Suncorp and Visa, including 40% of the Fortune 100, are using Copilot as part of our early access program.

Users find Copilot amazing and have enjoyed similar productivity gains as developers did with Github Copilot

Customers tell us that once they use Copilot, they can’t imagine work without it, and we are excited to make it generally available for enterprise customers next week. This quarter, we also introduced a new hero experience in Copilot, helping employees tap into their entire universe of work, data and knowledge using chat. And the new Copilot Lab helps employees build their own work habits for this era of AI by helping them turn good prompts into great ones…

…And in fact, the interesting thing is it’s not any one tool, right? The feedback is very clear that it’s the all-up: you just keep hitting the Copilot button across every surface, right, whether it’s in Word to create documents, in Excel to do analysis or PowerPoint or Outlook or Teams. Clearly, the Teams Meeting intelligent recap, right? It’s not just a dumb transcript. It’s like having a knowledge base of all your meetings that you can query and add to, essentially, the knowledge base of your enterprise. And so we are seeing broad usage across, and the interesting thing is by different functions, whether it’s in finance or in sales, by roles. We have seen productivity gains like we saw with developers in GitHub Copilot.

At the end of the day, Microsoft management is still grounded about the rate of adoption of Copilot in Office, since it is an enterprise product

And of course, this is an enterprise product. I mean at the end of the day, we are grounded on enterprise cycle times in terms of adoption and ramp. And it’s incrementally priced. So therefore, that all will apply still. But at least for something completely new, to have this level of usage already and this level of excitement is something we’re very, very pleased with.

Microsoft’s management recently introduced Security Copilot, the world’s first generative AI cybersecurity product, and it is seeing high demand

 We see high demand for Security Copilot, the industry’s first and most advanced generative AI product, which is now seamlessly integrated with Microsoft 365 Defender. Dozens of organizations, including Bridgewater, Fidelity National Financial and Government of Alberta, have been using Copilot in preview and early feedback has been positive.

Bing users have engaged in over 1.9 billion chats, and Bing has new personalised answers and support for DALL-E 3 (more than 1.8 billion images have been created to-date)

Bing users have engaged in more than 1.9 billion chats, and Microsoft Edge has now gained share for 10 consecutive quarters. This quarter, we introduced new personalized answers as well as support for DALL-E 3, helping people get more relevant answers and to create incredibly realistic images. More than 1.8 billion images have been created to date.

Bing is now incorporated into Meta’s AI chat experience

We’re also expanding to new end points, bringing Bing to Meta’s AI chat experience in order to provide more up-to-date answers as well as access to real-time search information. 

Azure saw higher-than-expected AI consumption

In Azure, as expected, the optimization trends were similar to Q4. Higher-than-expected AI consumption contributed to revenue growth in Azure.

Microsoft’s management is seeing new AI project starts in Azure, and these bring other cloud projects

Given our leadership position, we are seeing complete new project starts, which are AI projects. And as you know, AI projects are not just about AI meters. They have lots of other cloud meters as well. So that sort of gives you one side of what’s happening in terms of enterprise.

Microsoft’s management believes the company has very high operating leverage with AI, since the company is using one model across its entire stack of products, and this operating leverage goes down to the silicon level

Yes, it is true that the approach we have taken is a full-stack approach: all the way from whether it’s ChatGPT or Bing Chat or all our Copilots, all share the same model. So in some sense, one of the things that we do have is very, very high leverage of the one model that we trained, and the one model that we are doing inferencing on at scale. And that advantage sort of trickles down all the way to both utilization internally and utilization by third parties. And also over time, you can see that sort of stack optimization all the way to the silicon, because the abstraction layer to which the developers are writing is much higher up than low-level kernels, if you will. So therefore, I think there is a fundamental approach we took, which was a technical approach of saying we’ll have Copilots and the Copilot stack all available. That doesn’t mean we don’t have people doing training for open-source models or proprietary models. We also have a bunch of open-source models. We have a bunch of fine-tuning happening, a bunch of RLHF happening. So there are all kinds of ways people use it, but the thing is we have scale leverage of one large model that was trained and one large model that’s being used for inference across all our first-party SaaS apps as well as our API in our Azure AI service…

…In addition, what Satya mentioned earlier in a question, and I just want to take every chance to reiterate it, if you have a consistent infrastructure from the platform all the way up through its layers that every capital dollar we spend, if we optimize revenue against it, we will have great leverage because wherever demand shows up in the layers, whether it’s at the SaaS layer, whether it’s at the infrastructure layer, whether it’s for training workloads, we’re able to quickly put our infrastructure to work generating revenue on our BEAM workloads. I mean I should have mentioned all the consumer workloads use the same frame.

Microsoft’s management believes that having the discipline to concentrate Microsoft’s tech stack and capital spend is important because the costs of developing and using AI can run up really quickly

I think it is very important for us to be very disciplined on both, I’ll call it, our tech stack as well as our capital spend, all to be concentrated. The lesson learned from the cloud side is this: we’re not running a conglomerate of different businesses. It’s all one tech stack up and down Microsoft’s portfolio. And that, I think, is going to be very important, because given what the spend will look like for this AI transition, any business that’s not disciplined about its capital spend accruing across all its businesses could run into trouble.

Nvidia (NASDAQ: NVDA)

Nvidia’s management believes that its chips, together with its InfiniBand networking technology, are the reference architecture for AI

NVIDIA HGX with InfiniBand together are essentially the reference architecture for AI supercomputers and data center infrastructures.

Inferencing is now a major workload for Nvidia chips

Inferencing is now a major workload for NVIDIA AI compute.

Nvidia’s management is seeing major consumer internet companies ramping up generative AI deployment, and enterprise software companies starting to do the same

Most major consumer Internet companies are racing to ramp up generative AI deployment. The enterprise wave of AI adoption is now beginning. Enterprise software companies such as Adobe, Databricks, Snowflake and ServiceNow are adding AI copilots and assistants to their platforms.

Recent US export controls have affected Nvidia’s chip exports to China, Vietnam, and parts of the Middle East

Toward the end of the quarter, the U.S. government announced a new set of export control regulations for China and other markets, including Vietnam and certain countries in the Middle East. These regulations require licenses for the export of a number of our products, including our Hopper and Ampere 100 and 800 series and several others. Our sales to China and other affected destinations, derived from products that are now subject to licensing requirements, have consistently contributed approximately 20% to 25% of data center revenue over the past few quarters. We expect that our sales to these destinations will decline significantly in the fourth quarter, though we believe they will be more than offset by strong growth in other regions.

Many countries are keen to invest in sovereign AI infrastructure, and Nvidia’s management is helping them do so as it is a multi-billion dollar economic opportunity

Many countries are awakening to the need to invest in sovereign AI infrastructure to support economic growth and industrial innovation. With investments in domestic compute capacity, nations can use their own data to train LLMs and support their local generative AI ecosystems. For example, we are working with the Indian government and its largest tech companies, including Infosys, Reliance and Tata, to boost their sovereign AI infrastructure. And French private cloud provider Scaleway is building a regional AI cloud based on NVIDIA H100, InfiniBand and NVIDIA AI Enterprise software to fuel advancement across France and Europe. National investment in compute capacity is a new economic imperative, and serving the sovereign AI infrastructure market represents a multibillion-dollar opportunity over the next few years…

…The U.K. government announced it will build one of the world’s fastest AI supercomputers, called Isambard-AI, with almost 5,500 Grace Hopper Superchips. The Jülich Supercomputing Centre in Germany also announced that it will build its next-generation AI supercomputer with close to 24,000 Grace Hopper Superchips and Quantum-2 InfiniBand, making it the world’s most powerful AI supercomputer with over 90 exaflops of AI performance…

…You’re seeing sovereign AI infrastructures: countries that now recognize that they have to utilize their own data, keep their own data, keep their own culture, process that data and develop their own AI.

Nvidia has a new chip, the H200, with inference speeds up to 2x faster than the company’s flagship H100 GPUs (graphics processing units)

We also announced the latest member of the Hopper family, the H200, which will be the first GPU to offer HBM3E, faster, larger memory to further accelerate generative AI and LLMs. It moves inference speed up to 2x compared to H100 GPUs for running LLMs like [indiscernible].

Major cloud computing services providers will soon begin to offer instances for Nvidia’s next-generation GPU, the H200  

Compared to the H100, H200 delivers an 18x performance increase for inferencing models like GPT-3, allowing customers to move to larger models with no increase in latency. Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud will be among the first CSPs to offer H200-based instances starting next year.

Nvidia’s management is seeing very strong demand for InfiniBand; management believes that InfiniBand is critical to the deployment of LLMs (large language models); management believes that the vast majority of large-scale AI factories have standardised on InfiniBand because of InfiniBand’s vastly superior value proposition compared to Ethernet (data-traffic patterns are very different for AI than for typical hyperscale cloud environments)

Networking now exceeds a $10 billion annualized revenue run rate. Strong growth was driven by exceptional demand for InfiniBand, which grew fivefold year-on-year. InfiniBand is critical to gain the scale and performance needed for training LLMs. Microsoft made this very point last week highlighting that Azure uses over 29,000 miles of InfiniBand cabling, enough to circle the globe…

…The vast majority of the dedicated large-scale AI factories standardized on InfiniBand. And the reason for that is really because of its data rate and not just the latency, but the way that it moves traffic around the network is really important. The way that you process AI in a multi-tenant hyperscale Ethernet environment, the traffic pattern is just radically different. And with InfiniBand and with software-defined networks, we could do congestion control, adaptive routing, performance isolation and noise isolation, not to mention, of course, the data rate and the low latency, and the very low overhead that’s a natural part of InfiniBand.

And so InfiniBand is not so much just a network. It’s also a computing fabric. We put a lot of software-defined capabilities into the fabric, including computation. We do floating point calculations and computation right on the switch and right in the fabric itself. And so that’s the reason why the difference between InfiniBand and Ethernet for AI factories is so dramatic. And the difference is profound, and the reason for that is because you’ve just invested in a $2 billion infrastructure for AI factories, and a 20%, 25%, 30% difference in overall effectiveness, especially as you scale up, is measured in hundreds of millions of dollars of value. And if you were renting that infrastructure over the course of 4 or 5 years, it really adds up. And so InfiniBand’s value proposition is undeniable for AI factories.
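
Huang’s back-of-the-envelope math here checks out. Below is a minimal sketch of the arithmetic in Python; the $2 billion figure and the 20%-30% range are from his quote, while the assumption that an effectiveness gain converts one-for-one into dollar value is my simplification:

```python
# Back-of-the-envelope version of Huang's arithmetic. The $2B figure and the
# 20%-30% effectiveness range are from the quote; mapping effectiveness
# one-for-one into dollar value is a simplifying assumption.
infrastructure_cost = 2_000_000_000  # $2B AI factory, per the quote

for gain in (0.20, 0.25, 0.30):
    value = infrastructure_cost * gain
    print(f"{gain:.0%} effectiveness difference is worth ~${value / 1e6:,.0f}M")

# 20% -> ~$400M, 25% -> ~$500M, 30% -> ~$600M: "hundreds of millions of
# dollars of value", exactly as the quote says.
```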

Nvidia’s management is expanding the company into Ethernet and Nvidia’s Ethernet technology performs better than traditional offerings; management’s go-to-market strategy for Nvidia’s new Ethernet technology is to collaborate with the company’s large enterprise partners

We are expanding NVIDIA networking into the Ethernet space. Our new Spectrum-X end-to-end Ethernet offering, with technologies purpose-built for AI, will be available in Q1 next year. We have support from leading OEMs, including Dell, HP and Lenovo. Spectrum-X can achieve 1.6x higher networking performance for AI communication compared to traditional Ethernet offerings…

…And a company’s own AI, for all of its employees, doesn’t have to be as high performance as the AI factories we use to train the models. And so we would like the AI to be able to run in an Ethernet environment. And so what we’ve done is we invented this new platform that extends Ethernet. It doesn’t replace Ethernet, it’s 100% compliant with Ethernet, and it’s optimized for east-west traffic, which is where the computing fabric is. It adds to Ethernet an end-to-end solution with BlueField as well as our Spectrum switch that allows us to perform some of the capabilities that we have in InfiniBand, not all but some, and we achieved excellent results.

And the way we go to market is we go to market with our large enterprise partners who already offer our computing solution. And so HP and Lenovo have the NVIDIA AI stack, the NVIDIA enterprise software stack. And now they integrate with BlueField, bundle it, and take to market their Spectrum switch, and they’ll be able to offer enterprise customers all over the world, with their vast sales forces and vast networks of resellers, a fully integrated, fully optimized, end-to-end AI solution. And so that’s basically bringing AI to Ethernet for the world’s enterprises.

Nvidia’s management believes that there’s a new class of data centres emerging, which they’ve named “AI factories”; these AI factories are being built all across the world

These are the traditional data centers that you were just talking about, where we represent about 1/3 of that. But there’s a new class of data centers. And this new class of data centers, unlike the data centers of the past, where you have a lot of applications running used by a great many people, different tenants that are using the same infrastructure, and the data center stores a lot of files. These new data centers run very few applications, if not one application, used by basically one tenant. And it processes data. It trains models and it generates tokens, it generates AI. And we call these new data centers AI factories. We’re seeing AI factories being built out everywhere, in just about every country.

Nvidia’s management is seeing the appearance of CSPs (cloud services providers) that specialise only in GPUs and processing AI

You’re seeing GPU-specialized CSPs cropping up all over the world, and they’re dedicated to doing really one thing, which is processing AI.

Nvidia’s management is seeing an AI adoption-wave moving from startups and CSPs to consumer internet companies, and then to enterprise software companies, and then to industrial companies

And so we’re seeing the waves of generative AI starting from the start-ups and CSPs, moving to consumer Internet companies, moving to enterprise software platforms, moving to enterprise companies. And ultimately, one of the areas that you guys have seen us spend a lot of energy on has to do with industrial generative AI. This is where NVIDIA AI and NVIDIA Omniverse come together. And that is really, really exciting work. And so I think we’re at the beginning of a basically across-the-board industrial transition to generative AI, to accelerated computing. This is going to affect every company, every industry, every country.

Nvidia’s management believes that Nvidia’s AI Enterprise service – where the company helps its customers develop custom AI models that the customers are then free to monetise in whatever manner they deem fit – will become a very large business for Nvidia

Our monetization model is that with each one of our partners, they rent a sandbox on DGX Cloud, where we work together. They bring their data. They bring their domain expertise. We’ve got our researchers and engineers. We help them build their custom AI. We help them make that custom AI incredible. Then that custom AI becomes theirs, and they deploy it on a runtime that is enterprise grade, enterprise optimized or performance optimized, and runs across everything NVIDIA. We have a giant installed base in the cloud, on-prem, anywhere. And it’s secure, securely patched, constantly patched and optimized and supported. And we call that NVIDIA AI Enterprise.

NVIDIA AI Enterprise is $4,500 per GPU per year. That’s our business model. Our business model is basically a license. Our customers then, with that basic license, can build their monetization model on top. In a lot of ways we’re wholesale, they become retail. They could have a per-subscription license base, they could charge per instance, or they could do per usage. There are a lot of different ways that they could take to create their own business model, but ours is basically like a software license, like an operating system. And so our business model is: help you create your custom models, you run those custom models on NVIDIA AI Enterprise. And it’s off to a great start. NVIDIA AI Enterprise is going to be a very large business for us.
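
To make the “we’re wholesale, they become retail” idea concrete, here’s a toy sketch; the US$4,500-per-GPU-per-year license fee is from the quote, while the fleet size and the customer’s per-usage pricing are purely hypothetical numbers for illustration:

```python
# Toy sketch of the wholesale/retail economics described above. Only the
# $4,500-per-GPU-per-year license fee comes from the quote; the fleet size
# and the per-usage retail price are hypothetical.
LICENSE_PER_GPU_PER_YEAR = 4_500           # NVIDIA AI Enterprise, per the quote

gpus = 1_000                               # hypothetical customer deployment
wholesale = gpus * LICENSE_PER_GPU_PER_YEAR
print(f"Customer's annual license bill: ${wholesale:,}")         # $4,500,000

# On top of that flat license, the customer is free to charge their own
# users per subscription, per instance, or per usage, e.g.:
price_per_1k_calls = 0.50                  # hypothetical per-usage retail price
calls_per_year = 30_000_000_000            # hypothetical usage volume
retail = calls_per_year / 1_000 * price_per_1k_calls
print(f"Hypothetical per-usage retail revenue: ${retail:,.0f}")  # $15,000,000
```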

PayPal (NASDAQ: PYPL)

PayPal’s management wants to use AI and the data collected from the company’s Rewards program to drive a shopping recommendation engine

For example, our PayPal Cashback Mastercard provides 3% cash back on PayPal purchases as well as cash back on all other purchases. Customers with this card make, on average, 56 more purchases with PayPal in the year after they adopt the product than they did the year before. Over 25 million consumers have used PayPal Rewards in the past 12 months, and we’ve put more than $200 million back in our customers’ pockets with cashback and savings during that time. But even more interesting, through our Rewards product, we have an active database of over 300 million SKUs of inventory from our merchant partners. These data points can help us use AI to power a robust shopping recommendation engine, to provide more relevant rewards and savings back to our customers.

PayPal’s management believes that machine learning and generative AI can be applied to the company’s data to improve fraud protection and better connect merchants and consumers

 Our machine learning capabilities combine hundreds of risk and fraud models with dozens of real-time analytics engines and petabytes of payments data to generate insights by learning users’ behaviors, relationships, interests and spending habits. This scale gives us a very unique advantage in the market. Our ability to create meaningful profiles with the help of AI is exceptionally promising. You will see us using our data and the advances in generative AI in responsible ways to further connect our merchants and consumers together in a tight flywheel.

Shopify (NASDAQ: SHOP)

Shopify’s management has integrated Shopify Magic – the company’s suite of free AI features – across its products

At Shopify, we believe AI is for everyone, and its capabilities should be captured and embedded across the entirety of a business. We’ve integrated Shopify Magic, our suite of free AI-enabled features, across our products and workflows.

Shopify Magic can help merchants craft personalised pages and content, and is designed specifically for commerce

Shopify Magic can take the power of Shopify and a merchant’s own data and make it work better for them, whether it’s enabling unique personalized page and content generation, like instantly crafting an About Us page in your brand voice and tone, or building a custom page to showcase all the sizes available in your latest product collection…

…Now unlike other AI products, the difference with Shopify Magic is it’s designed specifically for commerce. And it’s not necessarily just one feature or one product. It’s really embedded across Shopify to make these workflows in our products just easier to use. It makes it easier for merchants to run and scale their businesses. And of course, we think it’s going to unlock a ton of possibilities for not just small merchants, but merchants of all sizes. And we’re going to continue to work on that over time. It’s just going to get better and better.

Shopify’s management is using AI internally so that the company can make better decisions and improve its customer support

We ourselves are using AI inside of Shopify to make better decisions, but also for things like our support team using it, so that questions like domain reconfiguration, or a new password, or “I don’t know what my password is”, should not necessarily require high-touch communication. What that does is it means that our support team are able to have much higher-quality conversations and act as business coaches for the merchants on Shopify.

Shopify’s management believes that Shopify is uniquely positioned to harness the power of AI because commerce and the company represent the intersection of humans and technology, and that is the domain of AI

If you kind of think about commerce and Shopify, we kind of interact at the intersection of humans and technology, and that’s exactly what AI is really, really good at. So we think we’re uniquely positioned to harness the power of AI, and the ultimate result of it will be these capabilities for our merchants to grow their businesses.

Shopify has AI-powered language translations for merchants within its software products

This includes things like launching shipping guidance for merchants, navigating them through streamlined privacy guidance, initiating localization experiments across various marketing channels and bringing localization tools and AI-backed language translations to the Shopify App Store.

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

TSMC’s management sees strong AI-related demand for its chips, but it’s not enough to offset cyclicality in its business 

Moving into fourth quarter 2023. While AI-related demand continues to be strong, it is not enough to offset the overall cyclicality of our business. We expect our business in the fourth quarter to be supported by the continued strong ramp of our 3-nanometer technology, partially offset by customers’ continued inventory adjustment.

TSMC’s management is seeing strong customer interest in its N2 technology node because the surge in AI-related demand leads to demand for energy-efficient computing, and TSMC’s technology platform goes beyond geometry-shrink (making transistors smaller), helping with power efficiency

The recent surge in AI-related demand supports our already strong conviction that demand for energy-efficient computing will accelerate in an intelligent and connected world. The value of our technology platform is expanding beyond the scope of geometry shrink alone and increasing toward greater power efficiency. In addition, as process technology complexity increases, the lead time and engagement with customers also start much earlier. As a result, we are observing a strong level of customer interest and engagement at our N2 similar to or higher than N3 at a similar stage from both HPC and smartphone applications.

TSMC’s management is seeing its customers add AI capabilities into smartphones and PCs and expects more of this phenomenon over time

We do see some activities from customers who add AI capability in end devices such as smartphones and PCs, [ so new growth ] engine in AI and PC, whatever. And we certainly hope that this will, of course, help TSMC further strengthen our AI business…

…It is starting right now, and we expect that more and more customers will put AI capability into their end devices, into their products.

TSMC’s management is seeing AI-related demand growing stronger and stronger and TSMC has to grow its manufacturing capacity to support this

The AI demand continues to grow stronger and stronger. So from TSMC’s point of view, we now have a capacity limitation in supporting the demand. We are working hard to increase the capacity to meet their demand, that’s for one thing.

TSMC’s management believes that any kind of AI-related chip will require leading edge chip technology and this is where TSMC excels

Whether customers develop CPUs, GPUs, AI accelerators or ASICs for all the types of AI applications, the commonality is that they all require usage of leading-edge technology with stable yield delivery to support larger die sizes, and a strong foundry design ecosystem. All of those are TSMC’s strengths. So we are able to address and capture a major portion of the market in terms of semiconductor components in AI.

Tencent (NASDAQ: TCEHY)

Tencent’s management is increasing the company’s investments in its AI models and management wants to use AI for the company’s own benefit as well as that of society and its customers

We are increasing investment in our AI models, providing new features to our products and enhancing our targeting capabilities for both content and advertising. We aspire to position our leading AI capability, not only as a growth multiplier for ourselves, but also as a value provider to our enterprise customers and the society at large.

Tencent’s management recently upgraded the size and capabilities of the company’s foundational model – Tencent Hunyuan – which is now available to customers on a limited basis and deployed in some of Tencent’s cloud services

For cloud, we upgraded the size and capabilities of our proprietary foundation model, Tencent Hunyuan. We are making Hunyuan available on a limited basis to the public and to customers, and deploying it in Tencent Meeting and Tencent Docs…

…We have upgraded our proprietary foundation model, Tencent Hunyuan. We have made the Tencent Hunyuan bot initially available to a smaller but expanding number of users via a mini program. Hunyuan is also now powering meeting summarization in Tencent Meeting and content generation in Tencent Docs. And externally, we’re enabling enterprise customers to utilize our large language model via APIs or Model-as-a-Service solutions in our cloud, in functions such as coding, data analysis and customer service automation.

Tencent’s management believes that Tencent is one of China’s AI leaders with the development of Hunyuan

In terms of Hunyuan and the overall AI strategy, I would say we have been pretty far along in terms of building up Hunyuan, and we feel that we are one of the leaders within China. We are also continuously increasing the size of the model and preparing for the next generation of our Hunyuan model, which is going to be a mixture-of-experts architecture, which we believe will further improve the performance of our Hunyuan model. And by building up Hunyuan, we have actually really built up our capability in general AI across the board. Because Hunyuan, the transformer-based model, involves the handling of a large amount of training data, a large computing cluster, and a very dedicated fine-tuning process for improving the AI performance.

Tencent’s management is using AI to improve the company’s advertising offerings, in areas such as ad targeting, attribution accuracy, and the generation of advertising visuals – management sees this as evidence that Tencent’s AI investments are already generating tangible results

We have expanded our AI models with more parameters to increase their ad targeting and attribution accuracy, contributing to our ad revenue growth. We’re also starting to provide generative AI tools to advertiser partners, which enable them to dynamically generate ad visuals based on text prompts and to optimize ad sizes for different inventories, which should help advertisers create more appealing advertisements with higher click-through rates, boosting their transactions and our revenue…

…And the general AI capability is actually helping us quite a bit in terms of the targeting technology related to advertising and our content provisioning service. So in short video, by improving our AI capability, we can actually ramp up our video accounts at a faster clip. And in terms of the advertising business, by increasing the targeting capability, we are actually increasing our ad revenue by delivering better results to our customers. So our AI capabilities are generating tangible results at this point in time.

Tencent’s management wants to build an AI-powered consumer-facing smart agent down the road, but they are wary about the costs of inference

And we also feel that further in the future, when there’s actually a consumer-facing product that is more like a smart agent for people right now, that is further down the road, but it actually carries quite a bit of room for imagination…

…Now in terms of the Hunyuan and, in the future, the potential of an AI assistant, I think it’s fair to say it’s still in a very, very early stage of concept design. So definitely not at the stage of product design yet, and definitely not at the stage of thinking about monetization yet. But of course, if you look at any of these generative AI technologies at this point in time, inference cost is a real variable cost, which needs to be considered in the entire equation. And that, to some extent, adds to the challenge of the product design, too. So I would say, at this point in time, it’s actually at a very early stage. There is promise and room for imagination for the future.
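
To see why inference cost weighs so heavily on the design of a consumer-facing smart agent, here’s a toy illustration; every number in it is hypothetical, and the only point is that GPU time is paid on every single interaction:

```python
# Toy illustration of "inference cost is a real variable cost". All numbers
# are hypothetical; the point is only that a consumer-scale smart agent pays
# for GPU time on every interaction, unlike traditional software.
daily_users = 100_000_000        # hypothetical consumer-scale product
queries_per_user_per_day = 10    # hypothetical
cost_per_query = 0.001           # hypothetical GPU inference cost, in dollars

daily_bill = daily_users * queries_per_user_per_day * cost_per_query
print(f"Daily inference bill: ${daily_bill:,.0f}")   # $1,000,000 per day
print(f"Annualized: ${daily_bill * 365:,.0f}")       # ~$365M per year

# Traditional software has near-zero marginal cost per user; a generative AI
# agent does not, which is why the cost enters the product design itself.
```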

Tencent’s management believes that the company has sufficient amount of chips for the company’s AI-related development work for a couple more generations; the US’s recent semiconductor bans will not affect the development of Tencent’s AI models, but it could affect Tencent’s ability to rent out these chips through Tencent Cloud

Now in terms of the chip situation, right now, we actually have one of the largest inventories of AI chips in China among all the players. And one of the key things that we have done was actually we were the first to put in an order for H800, and that allowed us to have a pretty good inventory of H800 chips. So we have enough chips to continue our development of Hunyuan for at least a couple more generations. And the ban does not really affect the development of Hunyuan and our AI capability in the near future. Going forward, we feel that the shipment ban does actually affect our ability to resell these AI chips through our cloud services. So that’s one area that may be impacted.

Tencent’s management wants to explore the use of lower-performance chips for AI inference purposes and they are also exploring domestic suppliers of chips

Going forward, we feel that the shipment ban does actually affect our ability to resell these AI chips through our cloud services. So that’s one area that may be impacted. And going forward, we will have to figure out ways to make the usage of our AI chips more efficient. We’ll try to see whether we can offload a lot of the inference capability to lower-performance chips, so that we can retain the majority of our high-performance AI chips for training purposes. And we will also try to look for domestic sources for these training chips.

Tencent’s management believes that AI can bring significant improvement to a digital ad’s current average click-through rate of 1%

Today, a typical click-through rate might be around 1%. As you deploy large language models, then you can make more use of the thousands of discrete data points that we have potentially for targeting and bring them to bear and turn them into reality. And you can get pretty substantial uplifts in click-through rate and therefore, in revenue, which is what the big U.S. social networks are now starting to see.
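
A rough sketch of how a click-through-rate uplift flows into revenue follows; only the roughly 1% baseline CTR comes from the quote, and the impression volume, uplift levels, and price per click are hypothetical:

```python
# Minimal sketch of why a CTR uplift flows almost directly into revenue.
# The ~1% baseline CTR is from the quote; the impression volume, the uplift
# levels, and the price per click are hypothetical.
impressions = 10_000_000          # hypothetical campaign size
revenue_per_click = 0.80          # hypothetical cost-per-click price, dollars

for ctr in (0.010, 0.012, 0.015): # 1% baseline, then hypothetical AI uplifts
    clicks = impressions * ctr
    print(f"CTR {ctr:.1%}: {clicks:>9,.0f} clicks -> ${clicks * revenue_per_click:,.0f}")
```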

Tesla (NASDAQ: TSLA)

Tesla vehicles have now driven over 0.5 billion miles with FSD (Full Self Driving) Beta and the mileage is growing

Regarding Autopilot and AI, our vehicle has now driven over 0.5 billion miles with FSD Beta, full self-driving beta, and that number is growing rapidly.

Tesla’s management sees significant promise with FSD v.12

We’re also seeing significant promise with FSD version 12. This is the end-to-end AI where it’s a photon count in, controls out or really you can think of it as there’s just a large bit stream coming in and a tiny bit stream going out, compressing reality into a very small set of outputs, which is actually kind of how humans work. The vast majority of human data input is optics, from our eyes. And so we are, like the car, photons in, controls out with neural nets, just neural nets, in the middle. It’s really interesting to think about that.
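
Purely as an illustration of the “photons in, controls out” idea, here’s a minimal sketch of a single network mapping camera frames to a tiny control vector. This is emphatically not Tesla’s actual architecture; it only shows the shape of the idea of compressing a large input bit stream into a small output one:

```python
# Purely illustrative sketch of "photons in, controls out": one neural
# network mapping raw camera frames directly to a tiny control vector.
# This is NOT Tesla's architecture -- just the shape of the idea.
import torch
import torch.nn as nn

class PhotonsToControls(nn.Module):
    def __init__(self):
        super().__init__()
        # "Large bit stream coming in": raw RGB frames.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # "Tiny bit stream going out": e.g. steering plus accel/brake.
        self.head = nn.Linear(32, 2)

    def forward(self, frames):
        return self.head(self.encoder(frames))

net = PhotonsToControls()
frame_batch = torch.randn(1, 3, 240, 320)  # one synthetic camera frame
controls = net(frame_batch)                # shape (1, 2): reality, compressed
print(controls.shape)
```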

Tesla recently completed building a 10,000 GPU cluster of Nvidia’s H100 chips and has brought the cluster into operation faster than anyone has done (the H100s will help with the development of Tesla’s full self driving efforts)

We recently completed a 10,000 GPU cluster of H100s. We think we probably brought it into operation faster than anyone’s ever brought that much compute per unit time into production, since training is the fundamental limiting factor on progress with full self-driving and vehicle autonomy.

Tesla’s management believes that AI is a game changer and wants the company to continue to invest in AI 

We will continue to invest significantly in AI development as this is really the massive game changer, and I mean, success in this regard in the long term, I think has the potential to make Tesla the most valuable company in the world by far.

Tesla’s management believes that the company’s AI team is the best in the world

The Tesla AI team is, I think, one of the world’s best, and I think it is actually by far the world’s best when it comes to real-world AI. But I’ll say that again: Tesla has the best real-world AI team on earth, period, and it’s getting better.

Tesla’s management is very excited about the company’s progress with autonomous driving and it is already driving them around with no human intervention

I guess, I am very excited about our progress with autonomy. The end-to-end, nothing but net, self-driving software is amazing. It drives me around Austin with no interventions. So it’s clearly the right move. It’s really pretty amazing.

Tesla’s management believes that the company’s work in developing autonomous driving can also be applied to Optimus (the company’s autonomous robots)

And obviously, that same software and approach will enable Optimus to do useful things and enable Optimus to learn how to do things simply by looking. So extremely exciting in the long term.

Tesla’s management believes that Optimus will have a huge positive economic impact on the world and that Tesla is at the forefront of developing autonomous robots; Tesla’s management is aware of the potential dangers to humankind that an autonomous robot such as Optimus can pose, so they are designing the robot carefully

As I’ve mentioned before, given that the economic output is the number of people times productivity, if you no longer have a constraint on people, effectively, you’ve got a humanoid robot that can do as much as you’d like, your economy is twice the infinite or infinite for all intents and purposes. So I don’t think anyone is going to do it better than Tesla, not by a long shot. Boston Dynamics is impressive, but their robot lacks the brain. They’re like the Wizard of Oz or whatever. Yes, lacks the brain. And then you also need to be able to design the humanoid robot in such a way that it can be mass manufactured. And then at some point, the robots will manufacture the robots.

And obviously, we need to make sure that it’s a good place for humans in that future. We do not create some variance of the Terminator outcome. So we’re going to put a lot of effort into localized control of the humanoid robot. So basically, anyone will be able to shut it off locally, and you can’t change that even if you put — like a software update, you can’t change that. It has to be hard-coded.

Tesla’s management believes that Mercedes can easily accept legal liability for any FSD failures because Mercedes’ FSD is very limited whereas Tesla’s FSD has far fewer limitations

[Question] Mercedes is accepting legal liability for when it’s Level 3 autonomous driving system drive pilot is active. Is Tesla planning to accept legal liability for FSD? And if so, when?

[Answer] I mean I think it’s important to remember for everyone that Mercedes’ system is limited to roads in Nevada and some certain cities in California, doesn’t work in the snow or the fog. It must have a [indiscernible] car in plains, only 40 miles per hour. Our system is meant to be holistic and drive in any conditions, so we obviously have a much more capable approach. But with those kind of limitations, it’s really not very useful.

Tesla’s management believes that technological progress building on technological progress is what will eventually lead to full self driving

I would characterize our progress in real world AI as a series of stacked log curves. I think that’s also true in other parts of AI, like LLMs and whatnot, a series of stacked log curves. Each log curve gets higher than the last one. So if we keep stacking them, we keep stacking logs, eventually, we get to FSD.
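
Here’s a small numeric sketch of the stacked-curves idea, with invented wave timings and ceilings: each curve saturates on its own, but because the next one starts higher, the sum keeps climbing:

```python
# Small numeric sketch of "stacked S-curves": each technology wave saturates,
# but the next wave starts higher, so the combined sum keeps climbing.
# The wave timings and ceilings below are invented for illustration.
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

# (start_time, ceiling) of three hypothetical waves of capability
waves = [(0, 1.0), (4, 2.0), (8, 4.0)]

for t in range(0, 13, 2):
    progress = sum(c * sigmoid(t - s - 2) for s, c in waves)
    print(f"t={t:2d}  combined progress = {progress:5.2f}")
```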

The Trade Desk (NASDAQ: TTD)

The Trade Desk’s management believes that AI will change the world, but not everyone working on AI is delivering meaningful impact

AI has immense promise. It will change the world again. But not everyone talking about AI is delivering something real or impactful.

The Trade Desk’s management is not focusing the company’s AI-related investments on LLMs (large language models) – instead, they are investing in deep-learning models to improve bidding, pricing, value, and ad relevance for Trade Desk’s services

Large Language Models (the basis of ChatGPT) aren’t the highest priority places for us to make our investments in AI right now. Deep learning models pointed at bidding, pricing, value, and ad relevance are perfect places for us to concentrate our investments in AI—all four categories have private betas and some of the best engineers in the world pointed at these opportunities.

The Trade Desk’s management believes that there are many areas in which to infuse AI into the digital advertising dataset that the company holds

Second is the innovation coming from AI and the many, many opportunities we have ahead of us to find places to inject AI into what may be the most rich and underappreciated data asset on the Internet, which we have here at The Trade Desk.

The Trade Desk’s management believes that traders in the digital advertising industry will not lose their jobs to AI, but they might lose their jobs to traders who know how to work with AI

Traders know that their jobs are not going to be taken away by AI. But instead, they have to compete with each other. So their job could be taken away by a trader who knows how to use AI really well, until all of them are looking at ways to use the tools that are fueled by AI, where AI is essentially doing one of two things. It’s either doing the math for them, if you will, of course with very advanced learning models, or, in other cases, it’s actually their copilot.

Old Navy achieved a 70% reduction in cost to reach each unique household using The Trade Desk’s AI, Koa

A great example of an advertiser pioneering new approaches to TV advertising with a focus on live sports is Old Navy…  But as Old Navy quickly found out, programmatic guaranteed has limitations. Programmatic guaranteed, or PG, does not allow Old Navy to get the full value of programmatic such as frequency management, audience targeting and the ability to layer on their first-party data. So they took the next step in the form of decision biddable buying within the private marketplace and focused on live sports inventory. CTV live sports advertising was appealing because it offered an opportunity to expose their brand against very high premium content that might be more restrictive and expensive in a traditional linear environment. They were able to use Koa, The Trade Desk’s AI, to optimize pacing and frequency management across the highest-performing inventory. As a result, they saw a 70% reduction in the cost to reach each unique household versus their programmatic guaranteed performance. 

Wix (NASDAQ: WIX)

Users of Wix’s Wix Studio product are enjoying its AI features

Users particularly [indiscernible] Studio’s responsive AI technology that simplifies high-touch and time-sensitive tasks, such as ensuring consistent design across web pages on different screen sizes. They are also enjoying the AI code assistant inside the new Wix IDE [integrated development environment], which allows them to write clean code and detect errors easily.

Wix recently released new AI products: (1) an SEO tool powered by AI called AI Meta Tags Creator, and (2) AI Chat Experience for Business, which allows new users to chat with an AI who will walk them through the Wix onboarding process; AI Chat Experience for Business is in its early days, but it has already driven a positive impact on Wix’s conversion and revenue

Earlier this week, we released our latest AI products. The first was AI Meta Tags Creator, a groundbreaking SEO tool powered by AI and our first AI-powered feature within our collection of SEO tools. Both self creators looking to generate SEO-friendly tags for each of their pages and professionals looking to enhance their efficiency and make real-time adjustments will benefit from this product. The second was our Conversational AI Chat Experience for Business. This feature, which is now live, paves the way to accelerate onboarding using AI in order to get businesses online more quickly and efficiently. These new tools continue to demonstrate our leadership in utilizing AI to help users of all types to succeed online… 

…Avishai spoke about the AI chat experience for business, and in its early weeks, we have already seen its positive impact on conversion and revenue.

Wix’s management expects Wix’s AI products to drive higher conversion, monetisation, and retention in the company’s Self Creators business

Compounding Partners growth is complemented by re-accelerating growth in our stable and profitable Self Creators business, which we saw once again this quarter. We expect our market-leading product innovation as well as our powerful AI products and technology to drive higher conversion, monetization and retention as we maintain our leadership position in the website building space.

Wix’s management believes that Wix’s AI products are helping to improve conversion because the new AI tools help to generate content for users, which reduces the inertia to create a website

I believe your second question was in regards to what kind of effect we are seeing from the different AI products that we are launching, mostly in regards to improvement in conversion. And we do actually see an improvement in conversion, which is probably the most important KPI by which we measure our success in deploying new products. The reason for that is that with AI, we are able to ask the user better questions and to understand in a smarter way what it is that the user is trying to achieve. From that, we are able to generate a better starting point for their business on top of Wix. And that is not just the skeleton; we are also able to fill in a lot of information, a lot of the content that the user would normally have to fill in manually. The result is that the amount of effort and knowledge that you need to create a website for your business on Wix is dramatically reduced. And from that, we are able to see very good results in terms of improvement of conversion.

The use of AI tools internally has helped to improve Wix’s margins

So we saw this year a tremendous improvement in margins — in gross margin. And it came mostly from 2 places. The first one is a lot of improvements and savings that we have with our infrastructure, most of you know the hosting activity. So we had a lot of savings over there, but also from our core organization, for example, benefiting from all kinds of AI tools that enable us to be more efficient.

Wix’s management believes that the company’s AI features help users with website-creation when it would normally take specialists to do so

And then because of the power of the AI tools, you can create very strong, very professional websites, because the AI will continue and finish for you the things that would normally require specialists in different variations of web design.

Zoom Video Communications (NASDAQ: ZM)

Zoom AI Companion, which helps create call summaries, is included in Zoom’s paid plans at no additional cost to customers, and more than 220,000 accounts have enabled it, with 2.8 million meeting summaries created to-date

We also showcased newly-released innovations like Zoom AI Companion, as well as Zoom AI Expert Assist and a Quality Management for the Contact Center. Zoom AI Companion is especially noteworthy for being included at no additional cost to our paid plans, and has fared tremendously well with over 220,000 accounts enabling it and 2.8 million meeting summaries created as of today.

Zoom’s management believes that Zoom AI Companion’s meeting-summary feature is really accurate and really fast; management attributes the good performance to the company’s use of multiple AI models within Zoom AI Companion

I think we are very, very proud of our team’s progress since we launched the Zoom AI Companion; as I mentioned earlier, a lot of accounts enabled that. Remember, this is no additional cost to [ outpay ] the customer. A lot of features. One feature of that is, take a meeting summary, for example. Amazingly, it’s very accurate and it really saves the meeting host a lot of time. And also, our federated AI approach really contributed to that success, because we do not count on a single AI model; in terms of latency, accuracy, and also the response speed and so on and so forth, I think it really helped our AI Companion.
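
Zoom hasn’t detailed how its federated approach works under the hood, but as a sketch of the general idea, here’s a toy router that picks among several models based on a latency budget; the model names, latencies, and quality scores are all invented:

```python
# Toy sketch of a "federated" multi-model approach: route each request to
# whichever model best trades off quality and latency for that task.
# Model names, latencies, qualities, and the routing rule are all invented.
MODELS = {
    "small-fast": {"latency_ms": 120, "quality": 0.80},
    "large-slow": {"latency_ms": 900, "quality": 0.95},
}

def route(task: str, latency_budget_ms: int) -> str:
    """Pick the highest-quality model that fits the latency budget."""
    ok = [(m, v) for m, v in MODELS.items() if v["latency_ms"] <= latency_budget_ms]
    if not ok:  # nothing fits: degrade to the fastest model available
        return min(MODELS, key=lambda m: MODELS[m]["latency_ms"])
    return max(ok, key=lambda kv: kv[1]["quality"])[0]

print(route("meeting summary", latency_budget_ms=2000))  # -> large-slow
print(route("live caption",    latency_budget_ms=200))   # -> small-fast
```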

Free users of Zoom are unable to access Zoom AI Companion

For sure, free users cannot enjoy this AI Companion; it’s a [ data health ] for those who are free to upgrade online. So anyway, we keep innovating on AI Companion. We have high confidence. That’s a true differentiation compared to any other AI features and functionalities offered by some of our competitors.

Zoom’s management thinks that Zoom’s AI features for customers will be a key differentiator and a retention tool

But I think what Eric was just mentioning about AI is probably really going to be a key differentiator and a retention tool in the future, because as a reminder, all of the AI Companion features come included for our free — sorry, for our paid users. So we’re seeing it not only help with conversion, but we really believe that for the long term, it will help with retention as well.

Zoom’s management believes that Zoom’s AI features will help to reaccelerate Zoom’s net dollar expansion rate for enterprise customers

[Question] You’re showing stabilization here on some of the major metrics, but the Enterprise expansion metric took a step down to 105%. And so just wondering what it takes for that metric to similarly show stabilization, given things like the Q1 renewal cohort, and kind of walking through that. Anything on the product side for us to consider, or just any other commentary, would be helpful.

[Answer] Well, as a reminder, it’s a trailing 12-month metric. So as we’ve seen our growth rates come down this year, that’s following behind it. But absolutely, we believe that AI Companion in general, as well as the success that we are seeing in Zoom Phone, Zoom Contact Center, and Zoom Virtual Agent, will be key contributors to seeing that metric start to reaccelerate again as we see our growth rate starting to reaccelerate as well.

Zoom’s management thinks that Zoom’s gross margin could decline – but only slightly – due to the AI features in Zoom’s products being given away for free at the moment

[Question] As I look at gross margins, how sustainable is it keeping at these levels? I know AI Companion is being given away as part of the package for, I guess, paid users. But if you think about the cost to run these models, the margin profile of Contact Center and Phone. How durable is it to kind of sustain these levels?

[Answer] But we do expect there’s going to be some impact on gross margins. I mean, I don’t think it’s going to be significant, because the team will continue to operate in the very efficient manner that they do and run our co-los [co-located data centers] that way, but we do expect there’s going to be some impact to our gross margin as we move forward.

Zoom’s management wants to leverage AI Companion across the entire Zoom platform

So again, there are a lot of other features as well. And like for me, I also use our — the client, [indiscernible] client, and other services, right? You can have it compose e-mail as well, right? It’s a lot of features, right? And down the road, the Whiteboard with AI Companion as well. Almost every service across the entire platform, we’re going to leverage the AI Companion. So, a lot of features in the AI Companion.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Alphabet, Amazon, Apple, Datadog, Etsy, Fiverr, Mastercard, MercadoLibre, Meta Platforms, Microsoft, PayPal, Shopify, TSMC, Tencent, Tesla, The Trade Desk, Wix, and Zoom. Holdings are subject to change at any time.

Jensen Huang’s Wisdom

Nvidia’s co-founder and CEO was interviewed recently and there was plenty to learn from his sharing.

I listen to, or read the transcripts of, podcasts regularly. One of my favourite podcast episodes this year was Jensen Huang’s appearance earlier this month in an episode of the Acquired FM podcast hosted by Ben Gilbert and David Rosenthal. Huang is the co-founder and CEO of Nvidia, a chip designer with US$32.7 billion in trailing revenue that’s in the epicenter of the AI revolution today. During his 1.5 hour interview with Gilbert and Rosenthal, Huang shared many pieces of wisdom – the passages below in italics are my favourites. 

On how he sped up Nvidia’s chip development process by simulating the future

Jensen: We also made the decision to use this technology called emulation. There was a company called ICOS. On the day that I called them, they were just shutting the company down because they had no customers. I said, hey, look. I’ll buy what you have in inventory. No promises are necessary.

The reason why we needed that emulator is because, given how much money we had, if we taped out a chip, got it back from the fab, started working on our software, found all the bugs through the software, and then taped out the chip again, we would’ve been out of business already.

David: And your competitors would’ve caught up.

Jensen: Well, not to mention we would’ve been out of business.

David: Who cares?

Jensen: Exactly. If you’re going to be out of business anyway, that plan obviously wasn’t the plan. The plan that companies normally go through—build a chip, write the software, fix the bugs, tape out a new chip, so on and so forth—that method wasn’t going to work. The question is, if we only had six months and you get to tape out just one time, then obviously you’re going to tape out a perfect chip.

I remember having a conversation with our leaders and they said, but Jensen, how do you know it’s going to be perfect? I said, I know it’s going to be perfect, because if it’s not, we’ll be out of business. So let’s make it perfect. We get one shot.

We essentially virtually prototyped the chip by buying this emulator. Dwight and the software team wrote our software, the entire stack, ran it on this emulator, and just sat in the lab waiting for Windows to paint.

David: It was like 60 seconds for a frame or something like that.

Jensen: Oh, easily. I actually think that it was an hour per frame, something like that. We would just sit there and watch it paint. On the day that we decided to tape out, I assumed that the chip was perfect. Everything that we could have tested, we tested in advance, and told everybody this is it. We’re going to tape out the chip. It’s going to be perfect.

Well, if you’re going to tape out a chip and you know it’s perfect, then what else would you do? That’s actually a good question. If you knew that you hit enter, you tape out a chip, and you knew it was going to be perfect, then what else would you do? Well, the answer, obviously, go to production.

Ben: And marketing blitz. And developer relations.

Jensen: Kick everything off because you got a perfect chip. We got in our head that we have a perfect chip.

David: How much of this was you and how much of this was your co-founders, the rest of the company, the board? Was everybody telling you you were crazy?

Jensen: No. Everybody was clear we had no shot. Not doing it would be crazy.

David: Otherwise, you might as well go home.

Jensen: Yeah, you’re going to be out of business anyway, so anything aside from that is crazy. It seemed like a fairly logical thing. Quite frankly, right now as I’m describing it, you’re probably thinking yeah, it’s pretty sensible.

David: Well, it worked.

Jensen: Yeah, so we taped that out and went directly to production.

Ben: So is the lesson for founders out there when you have conviction on something like the RIVA 128 or CUDA, go bet the company on it. This keeps working for you. It seems like your lesson learned from this is yes, keep pushing all the chips in because so far it’s worked every time. How do you think about that?

Jensen: No, no. When you push your chips in I know it’s going to work. Notice we assumed that we taped out a perfect chip. The reason why we taped out a perfect chip is because we emulated the whole chip before we taped it out. We developed the entire software stack. We ran QA on all the drivers and all the software. We ran all the games we had. We ran every VGA application we had.

When you push your chips in, what you’re really doing is, when you bet the farm you’re saying, I’m going to take everything in the future, all the risky things, and pull them in in advance. That is probably the lesson. To this day, everything that we can prefetch, everything in the future that we can simulate today, we prefetch it.

On Nvidia’s corporate culture and architecture and why it works

Ben: We have some questions we want to ask you. Some are cultural about Nvidia, but others are generalizable to company-building broadly. The first one that we wanted to ask is that we’ve heard that you have 40+ direct reports, and that this org chart works a lot differently than a traditional company org chart.

Do you think there’s something special about Nvidia that makes you able to have so many direct reports, not worry about coddling or focusing on career growth of your executives, and you’re like, no, you’re just here to do your fricking best work and the most important thing in the world. Now go. (a) Is that correct? and (b) is there something special about Nvidia that enables that?

Jensen: I don’t think it’s something special in Nvidia. I think that we had the courage to build a system like this. Nvidia’s not built like a military. It’s not built like the armed forces, where you have generals and colonels. We’re not set up like that. We’re not set up in a command and control and information distribution system from the top down.

We’re really built much more like a computing stack. The lowest layer is our architecture, then there’s our chip, then there’s our software, and on top of it there are all these different modules. Each one of these layers of modules are people.

The architecture of the company (to me) is a computer with a computing stack, with people managing different parts of the system. Who reports to whom and your title are not related to where you are in the stack. It just happens that whoever is the best at running that module, that function, on that layer, is in charge. That person is the pilot in command. That’s one characteristic.

David: Have you always thought about the company this way, even from the earliest days?

Jensen: Yeah, pretty much. The reason for that is because your organization should be the architecture of the machinery of building the product. That’s what a company is. And yet, everybody’s company looks exactly the same, but they all build different things. How does that make any sense? Do you see what I’m saying?

How you make fried chicken versus how you flip burgers versus how you make Chinese fried rice is different. Why would the machinery, why would the process be exactly the same?

It’s not sensible to me that if you look at the org charts of most companies, it all looks like this. Then you have one group that’s for a business, and you have another for another business, you have another for another business, and they’re all supposedly autonomous.

None of that stuff makes any sense to me. It just depends on what is it that we’re trying to build and what is the architecture of the company that best suits to go build it? That’s number one.

In terms of information systems and how you enable collaboration, we’re wired up like a neural network. The way that we say this is that there’s a phrase in the company called ‘mission is the boss.’ We figure out what the mission is, and we go wire up the best skills, the best teams, and the best resources to achieve that mission. It cuts across the entire organization in a way that doesn’t make any sense, but it looks a little bit like a neural network.

David: And when you say mission, do you mean Nvidia’s mission is…

Jensen: Build Hopper.

David: Okay, so it’s not like further accelerated computing? It’s like we’re shipping DGX Cloud.

Jensen: No. Build Hopper, or somebody else’s mission is to build a system for Hopper. Somebody’s mission is to build CUDA for Hopper. Somebody’s job is to build cuDNN for CUDA for Hopper. Somebody’s job is the mission. Your mission is to do something.

Ben: What are the trade-offs associated with that versus the traditional structure?

Jensen: The downside is the pressure on the leaders is fairly high. The reason for that is because in a command and control system, the person who you report to has more power than you. The reason why they have more power than you is because they’re closer to the source of information than you are.

In our company, the information is disseminated fairly quickly to a lot of different people. It’s usually at a team level. For example, just now I was in our robotics meeting. We’re talking about certain things and we’re making some decisions.

There are new college grads in the room. There are three vice-presidents in the room, there are two e-staff in the room. At the moment that we decided together, we reasoned through some stuff, we made a decision, everybody heard it exactly the same time. Nobody has more power than anybody else. Does that make sense? The new college grad learned at exactly the same time as the e-staff.

The executive staff, the leaders that work for me, and myself, you earned the right to have your job based on your ability to reason through problems and help other people succeed. It’s not because you have some privileged information that I knew the answer was 3.7, and only I knew. Everybody knew.

On the right way to learn from business books

Jensen: In the last 30 years I’ve read my fair share of business books. As in everything you read, you’re supposed to first of all enjoy it, be inspired by it, but not to adopt it. That’s not the whole point of these books. The whole point of these books is to share their experiences.

You’re supposed to ask, what does it mean to me in my world, and what does it mean to me in the context of what I’m going through? What does this mean to me and the environment that I’m in? What does this mean to me in what I’m trying to achieve? What does this mean to Nvidia and the age of our company and the capability of our company?

You’re supposed to ask yourself, what does it mean to you? From that point, being informed by all these different things that we’re learning, we’re supposed to come up with our own strategies.

What I just described is how I go about everything. You’re supposed to be inspired and learn from everybody else. The education’s free. When somebody talks about a new product, you’re supposed to go listen to it. You’re not supposed to ignore it. You’re supposed to go learn from it.

It could be a competitor, it could be an adjacent industry, it could be nothing to do with us. The more we learn from what’s happening out in the world, the better. But then, you’re supposed to come back and ask yourself, what does this mean to us?

David: You don’t just want to imitate them.

Jensen: That’s right.

On the job of the CEO in a company

Jensen: That’s right. You want to pave the way to future opportunities. You can’t wait until the opportunity is sitting in front of you for you to reach out for it, so you have to anticipate.

Our job as CEO is to look around corners and to anticipate where opportunities will be someday. Even if I’m not exactly sure what and when, how do I position the company to be near it, to be just standing near the tree, so we can do a diving catch when the apple falls? You guys know what I’m saying? But you’ve got to be close enough to do the diving catch.

On seeing the future of computing and AI before others did

Ben: Speaking of the speed of light—David’s begging me to go here—you totally saw that InfiniBand would be way more useful way sooner than anyone else realized. Acquiring Mellanox, I think you uniquely saw that this was required to train large language models, and you were super aggressive in acquiring that company. Why did you see that when no one else saw that?

Jensen: There were several reasons for that. First, if you want to be a data center company, building the processing chip isn’t the way to do it. A data center is distinguished from a desktop computer or a cell phone not by the processor in it.

A desktop computer and a data center use the same CPUs, use the same GPUs, apparently. Very close. It’s not the processing chip that describes it, but it’s the networking of it, it’s the infrastructure of it. It’s how the computing is distributed, how security is provided, how networking is done, and so on and so forth. Those characteristics are associated with Mellanox, not Nvidia.

The day that I concluded that really Nvidia wants to build computers of the future, and computers of the future are going to be data centers, embodied in data centers, then if we want to be a data center–oriented company, then we really need to get into networking. That was one.

The second thing is the observation that, whereas cloud computing started in hyperscale, which is about taking commodity components, a lot of users, and virtualizing many users on top of one computer, AI is really about distributed computing, where one training job is orchestrated across millions of processors.

It’s the inverse of hyperscale, almost. The way that you design a hyperscale computer with off-the-shelf commodity ethernet, which is just fine for Hadoop, it’s just fine for search queries, it’s just fine for all of those things—

Ben: But not when you’re sharding a model across.

Jensen: Not when you’re sharding a model across, right. That observation says that the type of networking you want to do is not exactly ethernet. The way that we do networking for supercomputing is really quite ideal.

The combination of those two ideas convinced me that Mellanox is absolutely the right company, because they’re the world’s leading high-performance networking company. We worked with them in so many different areas in high performance computing already. Plus, I really like the people. The Israel team is world class. We have some 3200 people there now, and it was one of the best strategic decisions I’ve ever made.
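
To make Huang’s hyperscale-versus-AI contrast concrete, here’s a toy sketch of the all-to-all gradient exchange (the “east-west” traffic) that makes a sharded training job so different from serving many independent queries; the worker count and tensor shapes are arbitrary:

```python
# Toy sketch of why one training job's traffic is "the inverse of
# hyperscale": gradients are sharded across workers that must all exchange
# partial results every step (a naive all-reduce, below), instead of many
# tenants sending small, independent requests.
import numpy as np

n_workers = 4
local_grads = [np.random.randn(1024) for _ in range(n_workers)]  # per-worker gradients

# "East-west" traffic: every worker needs the average of all gradients.
global_grad = np.mean(local_grads, axis=0)
synced = [global_grad.copy() for _ in range(n_workers)]          # all-reduce result

# Every step touches every worker simultaneously, so the network fabric --
# latency, congestion control, adaptive routing -- dominates performance.
print(synced[0][:3])
```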

David: When we were researching, particularly part three of our Nvidia series, we talked to a lot of people. Many people told us the Mellanox acquisition is one of, if not the best of all time by any technology company.

Jensen: I think so, too. It’s so disconnected from the work that we normally do, it was surprising to everybody.

Ben: But framed this way, you were standing near where the action was, so you could figure out as soon as that apple becomes available to purchase, like, oh, LLMs are about to blow up, I’m going to need that. Everyone’s going to need that. I think I know that before anyone else does.

Jensen: You want to position yourself near opportunities. You don’t have to be that perfect. You want to position yourself near the tree. Even if you don’t catch the apple before it hits the ground, so long as you’re the first one to pick it up. You want to position yourself close to the opportunities.

That’s kind of a lot of my work, is positioning the company near opportunities, and the company having the skills to monetize each one of the steps along the way so that we can be sustainable.

On why zero-billion dollar markets are better than $10 billion markets

David: I’ve heard you or others in Nvidia (I think) used the phrase zero billion dollar—

Jensen: That’s exactly right. It’s our way of saying there’s no market yet, but we believe there will be one. Usually when you’re positioned there, everybody’s trying to figure out why are you here. When we first got into automotive, because we believe that in the future, the car is going to be largely software. If it’s going to be largely software, a really incredible computer is necessary.

When we positioned ourselves there, I still remember one of the CTOs told me, you know what? Cars cannot tolerate the blue screen of death. I said, I don’t think anybody can tolerate that, but that doesn’t change the fact that someday every car will be a software-defined car. I think 15 years later we’re largely right.

Oftentimes there’s non-consumption, and we like to navigate our company there. By doing that, by the time that the market emerges, it’s very likely there aren’t that many competitors shaped that way.

We were early in PC gaming, and today Nvidia’s very large in PC gaming. We reimagined what a design workstation would be like. Today, just about every workstation on the planet uses Nvidia’s technology. We reimagined how supercomputing ought to be done and who should benefit from supercomputing, that we would democratize it. And look today, Nvidia in accelerated computing is quite large.

We reimagined how software would be done, and today it’s called machine learning, and how computing would be done, we call it AI. We reimagined these things, trying to do that about a decade in advance. We spent about a decade in zero billion dollar markets, and today I spend a lot of time on Omniverse. Omniverse is a classic example of a zero billion dollar business.

Ben: There are like 40 customers now? Something like that?

David: Amazon, BMW.

Jensen: Yeah, I know. It’s cool.

On protecting a company’s moat (or competitive advantage)

Jensen: Oftentimes, if you created the market, you ended up having what people describe as moats, because if you build your product right and it enables an entire ecosystem around you to help serve that end market, you’ve essentially created a platform.

Sometimes it’s a product-based platform. Sometimes it’s a service-based platform. Sometimes it’s a technology-based platform. But if you were early there and you were mindful about helping the ecosystem succeed with you, you ended up having this network of networks, and all these developers and customers who are built around you. That network is essentially your moat.

I don’t love thinking about it in the context of a moat, because you then become focused on building stuff around your castle. I tend to like thinking about things in the context of building a network. That network is about enabling other people to enjoy the success of the final market, so that you’re not the only company that enjoys it; you’re enjoying it with a whole bunch of other people.

On the importance of luck in a company’s success

David: Is it fair to say, though, maybe on the luck side of the equation, thinking back to 1997, that that was the moment where consumers tipped to really, really valuing 3D graphical performance in games?

Jensen: Oh yeah. For example, luck. Let’s talk about luck. What if Carmack had never decided to use acceleration? Because remember, Doom was completely software-rendered.

The Nvidia philosophy was that although general-purpose computing is a fabulous thing that’s going to enable software and IT and everything, we felt there were applications that wouldn’t be possible, or would be too costly, if they weren’t accelerated. They should be accelerated. 3D graphics was one of them, but it wasn’t the only one. It just happens to be the first one and a really great one.

I still remember the first time we met John. He was quite emphatic about using CPUs, and his software renderer was really good. Quite frankly, if you look at Doom, the performance of Doom was really hard to achieve even with accelerators at the time. If you didn’t have to do bilinear filtering, it did a pretty good job.

David: The problem with Doom, though, was that you needed Carmack to program it.

Jensen: Exactly. It was a genius piece of code, but nonetheless, software renderers did a really good job. If he hadn’t decided to go to OpenGL and accelerate for Quake, frankly, what would be the killer app that put us here? Between Quake and Unreal, Carmack and Sweeney created the first two killer applications for consumer 3D, so I owe them a great deal.

On the importance of having an ecosystem of 3rd-party developers surrounding your company

David: I want to come back real quick to where you told these stories and said, well, I don’t know what founders can take from that. I actually do think that if you look at all the big tech companies today, perhaps with the exception of Google, they all started (and I understand this now about you) by addressing developers, planning to build a platform, and building tools for developers.

All of them—Apple, not Amazon. […] That’s how AWS started. I think that actually is a lesson, to your point: it won’t guarantee success by any means, but it’ll get you hanging around the tree when the apple falls.

Jensen: As many good ideas as we have, we don’t have all the world’s good ideas. The benefit of having developers is that you get to see a lot of good ideas.

On keeping AI safe, and how AI can change the world for the better

Ben: I want to think about the future a little bit. I’m sure you spend a lot of time on this being on the cutting edge of AI.

We’re moving into an era where software, when a person is using it, can massively amplify the impact and the value that they’re creating, which has to be amazing for humanity in the long run. In the short term, it’s going to be inevitably bumpy as we figure out what that means.

What do you think some of the solutions are, as AI gets more and more powerful and better at accelerating productivity, for all the jobs that are going to be displaced by it?

Jensen: First of all, we have to keep AI safe. There are a couple of different areas of AI safety that are really important. Obviously, in robotics and self-driving cars, there’s a whole field of AI safety. We’ve dedicated ourselves to functional and active safety, and all kinds of different areas of safety. When do you apply a human in the loop? When is it okay for a human not to be in the loop? How do you get to a point where, increasingly, a human doesn’t have to be in the loop, but a human is still largely in the loop?

In the case of information safety, there’s obviously bias, false information, and respecting the rights of artists and creators; that whole area deserves a lot of attention.

You’ve seen some of the work that we’ve done: instead of scraping the Internet, we partnered with Getty and Shutterstock to create a commercially fair way of applying generative AI.

In the area of large language models, and in the future of AI with increasingly greater agency, clearly the answer, for as long as it’s sensible (and I think it’s going to be sensible for a long time), is human in the loop. The ability for an AI to self-learn, improve, and change out in the wild in a digital form should be avoided. We should collect the data. We should curate the data. We should train the model. We should test and validate the model before we release it into the wild again. So a human is in the loop.

There are a lot of different industries that have already demonstrated how to build systems that are safe and good for humanity. Obviously, the way autopilot works for a plane (a two-pilot system, then air traffic control, redundancy and diversity) and all of the basic philosophies of designing safe systems apply as well to self-driving cars, and so on and so forth. I think there are a lot of models for creating safe AI, and I think we need to apply them.

With respect to automation, my feeling is (and we’ll see) that it is more likely that AI is going to create more jobs in the near term. The question is, what’s the definition of near term? The reason is that the first thing that happens with productivity is prosperity. When companies get more successful, they hire more people because they want to expand into more areas.

So the question is, if you think about a company and say, okay, if we improve productivity, then we need fewer people. Well, that’s only true if the company has no more ideas, and that’s not the case for most companies. If you become more productive and the company becomes more profitable, usually they hire more people to expand into new areas.

So long as we believe that there are more areas to expand into, that there are more ideas in drug discovery, more ideas in transportation, more ideas in retail, more ideas in entertainment, more ideas in technology, then the prosperity of the industry that comes from improved productivity results in hiring more people to pursue more ideas.

Now, go back in history. We can fairly say that today’s industry is larger than the world’s industry a thousand years ago. The reason is, obviously, that humans have a lot of ideas. I think there are plenty of ideas yet for prosperity, and plenty of ideas that can be begotten from productivity improvements, so my sense is that AI is likely to generate jobs.

Now obviously, net generation of jobs doesn’t guarantee that no individual gets fired. That’s obviously true. It’s more likely that someone will lose a job to some other human who uses an AI. Not likely to an AI, but to some other human who uses an AI.

I think the first thing that everybody should do is learn how to use AI, so that they can augment their own productivity. Every company should augment its own productivity to be more productive, so that it can enjoy more prosperity and hire more people.

I think jobs will change. My guess is that we’ll actually have higher employment, we’ll create more jobs. I think industries will be more productive. Many of the industries that are currently suffering from a lack of labor are likely to use AI to get themselves back on their feet and return to growth and prosperity. I see it a little bit differently, but I do think that jobs will be affected, and I’d encourage everybody just to learn AI.

David: This is appropriate. There’s a version of something we talked about a lot on Acquired, we call it the Moritz corollary to Moore’s law, after Mike Moritz from Sequoia.

Jensen: Sequoia was the first investor in our company.

David: Of course, yeah. The great story behind it is that when Mike was taking over for Don Valentine with Doug, he was sitting and looking at Sequoia’s returns. He was looking at fund three or four (I think it was four) that had Cisco in it. He was like, how are we ever going to top that? Don’s going to have us beat. We’re never going to beat that.

He thought about it and he realized that, as compute gets cheaper, it can access more areas of the economy and be adopted more widely, so the markets that we can address should get bigger. Your argument is basically that AI will do the same thing. The cycle will continue.

Jensen: Exactly. I just gave you exactly the same example: in fact, productivity doesn’t result in us doing less. Productivity usually results in us doing more. Everything we do will be easier, but we’ll end up doing more, because we have infinite ambition. The world has infinite ambition. If a company is more profitable, it tends to hire more people to do more.

On the importance of prioritising your daily activities

David: What is something that you believe today that 40-year-old Jensen would’ve pushed back on and said, no, I disagree?

Jensen: There’s plenty of time. If you prioritize yourself properly and you make sure that you don’t let Outlook be the controller of your time, there’s plenty of time.

David: Plenty of time in the day? Plenty of time to achieve this thing?

Jensen: To do anything. Just don’t do everything. Prioritize your life. Make sacrifices. Don’t let Outlook control what you do every day.

Notice I was late to our meeting. The reason is that, by the time I looked up, it was, oh my gosh, Ben and David are waiting.

David: We have time.

Jensen: Exactly.

David: Didn’t stop this from being your day job.

Jensen: No, but you have to prioritize your time really carefully, and don’t let Outlook determine that.

On what is the really important thing in a business plan: The problem you want to solve

Jensen: I didn’t know how to write a business plan.

Ben: Which it turns out is not actually important.

Jensen: No. It turns out that making a financial forecast that nobody knows will be right or wrong is not that important. But there are important things that a business plan probably could have teased out, and I think the art of writing a business plan ought to be much, much shorter.

It forces you to condense: what is the true problem you’re trying to solve? What is the unmet need that you believe will emerge? And what is it that you’re going to do that is sufficiently hard that, when everybody else finds out it’s a good idea, they won’t swarm it and make you obsolete? It has to be sufficiently hard to do.

There are a whole bunch of other skills involved, like product positioning, pricing, go-to-market, and all that stuff. But those are skills, and you can learn those things easily. The stuff that is really, really hard is the essence of what I described.

I did that okay, but I had no idea how to write the business plan. I was fortunate that Wilf Corrigan was so pleased with the work I did at LSI Logic that he called up Don Valentine and told him, invest in this kid. He’s going to come your way. I was set up for success from that moment, and it got us off the ground.

On entrepreneurs’ superpower

David: Well, and this being our final question for you: it’s 2023, the 30th anniversary of the founding of Nvidia. If you were magically 30 years old again today in 2023, and you were going to Denny’s with your two best friends, the two smartest people you know, and you’re talking about starting a company, what are you talking about starting?

Jensen: I wouldn’t do it. I know. The reason is really quite simple. Ignoring the question of what company we would start (first of all, I’m not exactly sure), the reason why I wouldn’t do it goes back to why it’s so hard: building a company, and building Nvidia, turned out to have been a million times harder than I expected it to be, than any of us expected it to be.

At that time, if we had realized the pain and suffering, just how vulnerable you’re going to feel, the challenges that you’re going to endure, the embarrassment and the shame, and the list of all the things that go wrong, I don’t think anybody would start a company. Nobody in their right mind would do it.

I think that that’s the superpower of an entrepreneur. They don’t know how hard it is, and they only ask themselves how hard can it be? To this day, I trick my brain into thinking, how hard can it be? Because you have to.

On the importance of self-belief

David: I know how meaningful that is in any company, but for you, I feel like the Nvidia journey is particularly amplified along these dimensions. You went through two, if not three, 80%-plus drawdowns in the public markets, and to have investors who’ve stuck with you from day one through all of that must be just so much support.

Jensen: It is incredible. You hate that any of that stuff happened. Most of it is out of your control, but an 80% fall is an extraordinary thing no matter how you look at it.

I forget exactly, but we traded down to about $2–$3 billion in market value for a while because of the decision we made to go into CUDA and all that work. Your belief system has to be really, really strong. You have to really, really believe it and really, really want it.

Otherwise, it’s just too much to endure because everybody’s questioning you. Employees aren’t questioning you, but employees have questions. People outside are questioning you, and it’s a little embarrassing.

When your stock price gets hit, it’s embarrassing no matter how you think about it, and it’s hard to explain. There are no good answers to any of that stuff. CEOs are humans, and companies are built of humans. These challenges are hard to endure.

On how technology transforms and grows economic opportunities

Jensen: This is the extraordinary thing about technology right now. Technology is a tool, and the market for tools is only so large. What’s unique about our current circumstance is that we’re in the manufacturing-of-intelligence world, the manufacturing-of-work world. That’s AI. The world of tasks, of doing work (productive, generative AI work, generative intelligent work): that market size is enormous. It’s measured in trillions.

One way to think about it is: if you built a chip for a car, how many cars are there, and how many chips would they consume? However, if you build a system that, whenever needed, assists in the driving of the car, what’s the value of an autonomous chauffeur every now and then?

Obviously, the problem becomes much larger, the opportunity becomes larger. What would it be like if we were to magically conjure up a chauffeur for everybody who has a car, and how big is that market? Obviously, that’s a much, much larger market.

What we discovered, what Nvidia has discovered, and what some others have discovered, is that by separating yourself from being a chip company and building on top of the chip, you’re now an AI company, and the market opportunity has grown by probably a thousand times.

Don’t be surprised if technology companies become much larger in the future, because what you produce is something very different. That’s the way to think about how large your opportunity can be and how large you can be. It has everything to do with the size of the opportunity.
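Jensen’s chip-versus-chauffeur comparison is, at heart, back-of-envelope arithmetic, so here is a minimal sketch of it in Python. Every number below is a hypothetical assumption of mine for illustration, not a figure from the interview:

```python
# Back-of-envelope comparison of selling a chip per car versus selling
# occasional autonomous-chauffeur capability. All numbers are hypothetical.

cars_per_year = 80_000_000   # assumed annual new-car volume
chip_price = 500             # assumed price of one driving computer, in USD

chip_market = cars_per_year * chip_price  # value of the silicon alone

chauffeur_value = 2_000      # assumed annual value of part-time autonomy, in USD
chauffeur_market = cars_per_year * chauffeur_value  # value of the work done

print(f"Chip market:      ${chip_market / 1e9:.0f}B per year")
print(f"Chauffeur market: ${chauffeur_market / 1e9:.0f}B per year")
print(f"Multiple:         {chauffeur_market / chip_market:.0f}x")
```

Even with these made-up numbers, pricing against the work being done rather than the part being shipped multiplies the addressable market, which is the shape of the argument Jensen is making.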


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.