
Stocks and Interest Rate Cuts

How has the US stock market historically performed when the Federal Reserve cut interest rates?

A topic that has been buzzing among financial market participants lately is what would happen to the US stock market if and when the Federal Reserve, the US’s central bank, cuts interest rates later this year.

There is a high likelihood of a rate cut coming, although there is more uncertainty around its timing and extent. In a speech last week, the central bank’s chair, Jerome Powell, said:

“The time has come for policy to adjust. The direction of travel is clear, and the timing and pace of rate cuts will depend on incoming data, the evolving outlook, and the balance of risks.”

I have no crystal ball, but I do have historical context. Josh Brown, CEO of Ritholtz Wealth Management, a US-based investment firm, recently shared fantastic data on how US stocks have performed in the past when the Federal Reserve lowered rates. His data, in the form of a chart, goes back to 1957, and I have reproduced it in tabular format in Table 1; the table shows how US stocks did in the 12 months following a rate cut, as well as whether a recession occurred in the same window:

Table 1; Source: Josh Brown

I also split the data in Table 1 according to whether a recession occurred shortly after a rate cut, since eight of the Federal Reserve’s 21 rate-cut cycles since 1957 took place without an impending recession. Table 2 shows the same data as Table 1 but for rate cuts followed by a recession; Table 3 is for rate cuts without a recession.

Table 2; Source: Josh Brown
Table 3; Source: Josh Brown

With all the data found in Tables 1, 2, and 3, here are my takeaways:

  • US stocks have historically done well, on average, in the 12 months following a rate cut. The overall record, seen in Table 1, is an average 12-month forward return of 9%. When a recession happened shortly after a rate cut, the average 12-month forward return was 8%; when a recession did not happen shortly after a rate cut, the average 12-month forward return was 12%.
  • Drawdowns – the maximum peak-to-trough decline in stocks over a given time period – have occurred nearly all the time following a rate cut (see the sketch after this list for how a drawdown is computed). This is not surprising. It’s a feature of the stock market that you often have to endure a sharp shorter-term fall in stock prices in order to earn a positive longer-term return.
  • A recession is not necessarily bad for stocks. As Table 2 shows, US stocks have historically delivered an average return of 8% over the next 12 months after rate cuts that came with impending recessions.
  • It’s not a guarantee that stocks will produce good returns in the 12 months after a rate cut even if a recession does not occur, as can be seen from the August 1976 episode in Table 3.
  • My most important takeaway is that a rate cut is not guaranteed to be a good or bad event for stocks. One-factor analysis in the financial markets – “if A happens, then B will occur” – should be largely avoided because clear-cut relationships are rarely seen.
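
To make the drawdown arithmetic concrete, here is a minimal Python sketch using a made-up price path (not Josh Brown’s data); it shows how a respectable 12-month return can coexist with a painful interim decline:

```python
# A minimal sketch, with hypothetical prices, of how a maximum drawdown --
# the largest peak-to-trough decline over a period -- is computed.

def max_drawdown(prices):
    """Return the worst peak-to-trough decline as a (negative) fraction."""
    peak = prices[0]
    worst = 0.0
    for p in prices:
        peak = max(peak, p)               # highest price seen so far
        worst = min(worst, p / peak - 1)  # decline from that peak
    return worst

# Hypothetical prices over the 12 months after a rate cut:
prices = [100, 104, 97, 88, 95, 109]
print(f"12-month return: {prices[-1] / prices[0] - 1:.0%}")  # 9%
print(f"Maximum drawdown: {max_drawdown(prices):.0%}")       # -15%
```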

It’s worth bearing in mind that it’s not a certainty that the Federal Reserve will be cutting rates in the near future. Anything can happen in the financial markets. And even if a rate cut does happen, no one knows for sure how the US stock market would perform. History is not a perfect indicator of the future and the best it can do is to give us context for the upcoming possibilities. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have no vested interest in any companies mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 25 August 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 25 August 2024:

1. Eric Schmidt talk on AI at Stanford (Transcript here) – Eric Schmidt and Erik Brynjolfsson

Schmidt: One more technical question. Why is NVIDIA worth $2 trillion and the other companies are struggling? Technical answer.

Attendee: I mean, I think it just boils down to like most of the code needs to run with CUDA optimizations that currently only NVIDIA GPU supports. Other companies can make whatever they want to, but unless they have the 10 years of software there, you don’t have the machine learning optimization.

Schmidt: I like to think of CUDA as the C-programming language for GPUs. That’s the way I like to think of it. It was founded in 2008. I always thought it was a terrible language and yet it’s become dominant.

There’s another insight. There’s a set of open source libraries which are highly optimized to CUDA and not anything else and everybody who builds all these stacks – this is completely missed in any of the discussions. It’s technically called VLLM and a whole bunch of libraries like that. Highly optimized CUDA, very hard to replicate that if you’re a competitor. So what does all this mean?

In the next year, you’re going to see very large context windows, agents, and text-to-action. When they are delivered at scale, it’s going to have an impact on the world at a scale that no one understands yet. Much bigger than the horrific impact we’ve had by social media in my view. So here’s why.

In a context window, you can basically use that as short-term memory and I was shocked that context windows get this long. The technical reasons have to do with the fact that it’s hard to serve, hard to calculate, and so forth. The interesting thing about short-term memory is when you feed, you’re asking a question – read 20 books, you give it the text of the books as the query and you say, “Tell me what they say.” It forgets the middle, which is exactly how human brains work too. That’s where we are.

With respect to agents, there are people who are now building essentially LLM agents and the way they do it is they read something like chemistry, they discover the principles of chemistry, and then they test it, and then they add that back into their understanding. That’s extremely powerful.

And then the third thing, as I mentioned is text to action. So I’ll give you an example. The government is in the process of trying to ban TikTok. We’ll see if that actually happens. If TikTok is banned, here’s what I propose each and every one of you do. Say to your LLM the following: “Make me a copy of TikTok, steal all the users, steal all the music, put my preferences in it, produce this program in the next 30 seconds, release it and in one hour, if it’s not viral, do something different along the same lines.” That’s the command. Boom, boom, boom, boom. You understand how powerful that is?

If you can go from arbitrary language to arbitrary digital command, which is essentially what Python in this scenario is, imagine that each and every human on the planet has their own programmer that actually does what they want, as opposed to the programmers that work for me who don’t do what I ask, right? The programmers here know what I’m talking about. So imagine a non-arrogant programmer that actually does what you want, and you don’t have to pay all that money to, and there’s infinite supply of these programs.

Interviewer: And this is all within the next year or two?

Schmidt: Very soon. Those three things – and I’m quite convinced it’s the union of those three things – that will happen in the next wave. So you asked about what else is going to happen. Every six months I oscillate. It’s an even-odd oscillation.

So at the moment, the gap between the frontier models – there are now only three, and I’ll reveal who they are – and everybody else appears to me to be getting larger. Six months ago, I was convinced that the gap was getting smaller. So I invested lots of money in the little companies. Now I’m not so sure. And I’m talking to the big companies and the big companies are telling me that they need $10 billion, $20 billion, $50 billion, $100 billion.

Interviewer: Stargate is $100 billion, right?

Schmidt: That’s very, very hard. I talked to Sam Altman – he’s a close friend. He believes that it’s going to take about $300 billion, maybe more. I pointed out to him that I’d done the calculation on the amount of energy required. And I then, in the spirit of full disclosure, went to the White House on Friday and told them that we need to become best friends with Canada, because Canada has really nice people, helped invent AI, and lots of hydropower. Because we as a country do not have enough power to do this. The alternative is to have the Arabs fund it. And I like the Arabs personally. I spent lots of time there, right? But they’re not going to adhere to our national security rules. Whereas Canada and the U.S. are part of a triumvirate where we all agree…

…Attendee: In terms of national security or geopolitical interests, how do you think AI is going to play a role in competition with China as well?

Schmidt: So I was the chairman of an AI commission that sort of looked at this very carefully and you can read it. It’s about 752 pages and I’ll just summarize it by saying we’re ahead, we need to stay ahead, and we need lots of money to do so. Our customers were the Senate and the House. And out of that came the Chips Act and a lot of other stuff like that. A rough scenario is that if you assume the frontier models drive forward and a few of the open source models, it’s likely that a very small number of companies can play this game – countries, excuse me.

What are those countries or who are they? Countries with a lot of money and a lot of talent, strong educational systems, and a willingness to win. The US is one of them. China is another one. How many others are there?

Interviewer: Are there any others?

Schmidt: I don’t know. Maybe. But certainly in your lifetimes, the battle between the US and China for knowledge supremacy is going to be the big fight. So the US government essentially banned the NVIDIA chips – although they weren’t allowed to say that was what they were doing – but they actually did that to China. We have a roughly 10-year chip advantage in terms of sub-DUV, that is, sub-five nanometer chips.

So an example would be today we’re a couple of years ahead of China. My guess is we’ll get a few more years ahead of China, and the Chinese are hopping mad about this. It’s like hugely upset about it. So that’s a big deal. That was a decision made by the Trump administration and driven by the Biden administration…

…Interviewer: I want to switch to a little bit of a philosophical question. So there was an article that you and Henry Kissinger and Dan Huttenlocher wrote last year about the nature of knowledge and how it’s evolving. I had a discussion the other night about this as well. So for most of history, humans sort of had a mystical understanding of the universe and then there’s the scientific revolution and the enlightenment. And in your article, you argue that now these models are becoming so complicated and difficult to understand that we don’t really know what’s going on in them.

I’ll take a quote from Richard Feynman. He says, “What I cannot create, I do not understand.” I saw this quote the other day. But now people are creating things that they can create, but they don’t really understand what’s inside of them. Is the nature of knowledge changing in a way? Are we going to have to start just taking the word for these models without them being able to explain it to us?

Schmidt: The analogy I would offer is to teenagers. If you have a teenager, you know they’re human, but you can’t quite figure out what they’re thinking. But somehow we’ve managed in society to adapt to the presence of teenagers and they eventually grow out of it.

I’m serious. So it’s probably the case that we’re going to have knowledge systems that we cannot fully characterize, but we understand their boundaries. We understand the limits of what they can do. And that’s probably the best outcome we can get.

Interviewer: Do you think we’ll understand the limits?

Schmidt: We’ll get pretty good at it. The consensus of my group that meets every week is that eventually the way you’ll do this so-called adversarial AI is that there will actually be companies that you will hire and pay money to break your AI system.

Interviewer: Like Red Team.

Schmidt: So instead of Human Red Teams, which is what they do today, you’ll have whole companies and a whole industry of AI systems whose jobs are to break the existing AI systems and find their vulnerabilities, especially the knowledge that they have that we can’t figure out. That makes sense to me…

…Attendee: In general, you seem super positive about the potential of AI. I’m curious, what do you think is going to drive that? Is it just more compute? Is it more data? Is it fundamental architectural shifts? Do you agree?

Schmidt: The amounts of money being thrown around are mind-boggling. And I’ve chosen – I essentially invest in everything because I can’t figure out who’s going to win. And the amounts of money that are following me are so large, I think some of it is because the early money has been made and the big money people who don’t know what they’re doing have to have an AI component. And everything is now an AI investment, so they can’t tell the difference. I define AI as learning systems, systems that actually learn. So I think that’s one of them.

The second is that there are very sophisticated new algorithms that are sort of post-transformers. My friend, my collaborator for a long time, has invented a new non-transformer architecture. There’s a group that I’m funding in Paris that claims to have done the same thing. There’s enormous invention there, a lot of things at Stanford.

And the final thing is that there is a belief in the market that the invention of intelligence has infinite return. So let’s say you put $50 billion of capital into a company, you have to make an awful lot of money from intelligence to pay that back. So it’s probably the case that we’ll go through some huge investment bubble, and then it’ll sort itself out. That’s always been true in the past, and it’s likely to be true here…

…Attendee: You mentioned in your paper on national security that you have China and the U.S. [indecipherable]… The next cluster down are all other U.S. allies or teed up nicely through the U.S. allies. I’m curious what your take is on those 10 in the middle that aren’t formally allies. How likely are they to get on board with our security guidelines, and what would hold them back from wanting to get on board?

Schmidt: The most interesting country is India because the top AI people come from India to the U.S. and we should let India keep some of its top talent. Not all of them, but some of them. And they don’t have the kind of training facilities and programs that we so richly have here. To me, India is the big swing state in that regard. China’s lost. It’s not going to come back. They’re not going to change the regime as much as people wish them to do. Japan and Korea are clearly in our camp. Taiwan is a fantastic country whose software is terrible, so that’s not going to work – amazing hardware. And in the rest of the world, there are not a lot of other good choices that are big. Europe is screwed up because of Brussels. It’s not a new fact. I spent 10 years fighting them. And I worked really hard to get them to fix the EU Act and they still have all the restrictions that make it very difficult to do our kind of research in Europe. My French friends have spent all their time battling Brussels and Macron, who’s a personal friend, is fighting hard for this. And so France, I think, has a chance. I don’t see Germany coming and the rest is not big enough.

2. Activism at Scale in Japan – Daniel Rasmussen, Lionel Smoler Schatz, and Yuto Kida

Last year, the Tokyo Stock Exchange issued a directive asking all companies with price-to-book ratios below 1x to issue a plan to get to 1x book. The reforms aimed to help Japan shake off its reputation as a “value trap.” At the time of the announcement (March 2023), around 50% of companies in the Prime Section and 60% of firms in the Standard Section had a PBR <1x, reflecting a shocking degree of pessimism and inattention by investors. Over the past year, companies issued plans and posted them to the TSE’s website.
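
As a quick refresher, a price-to-book ratio (PBR) is simply a company’s market capitalisation divided by its book value (shareholders’ equity). Below is a minimal Python sketch, using made-up figures rather than the authors’ data, of the kind of screen the TSE’s directive implies:

```python
# A minimal sketch of a price-to-book screen with made-up figures
# (in billions of yen); not the authors' actual methodology.

companies = {
    # name: (market_cap, book_value)
    "Company A": (120, 200),
    "Company B": (450, 300),
    "Company C": (80, 160),
}

for name, (market_cap, book_value) in companies.items():
    pbr = market_cap / book_value
    status = "below 1x, plan required" if pbr < 1 else "at or above 1x"
    print(f"{name}: PBR = {pbr:.2f} ({status})")
```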

We did a systematic review (methodology described below) of every plan issued by companies in the TSE’s Prime and Standard Sections (3,247 firms) to assess the impact of these reforms. And the answer, we believe, is that dramatic change is afoot, with widespread dividend and buyback increases…

…As of the end of June, based on the TSE’s monthly list of disclosed companies, 50.9% of firms have disclosed plans and 9.8% are considering…

…The majority of companies issuing plans are increasing dividends, almost a quarter are repurchasing shares, and over 10% are selling cross-share and strategic holdings…

…Firms that have made an effort to lay out a specific and tangible action plan to reach 1x book have experienced a significant rise in their stock prices since the TSE announcement, more than double compared to companies that haven’t disclosed or are still considering doing so. We can see that the market has generally reacted positively to the companies’ disclosed plans and that the TSE’s “name and shame” tactic is working so far. It seems like whether the Japanese stock market continues to build on its momentum depends on the willingness of companies to be transparent about and responsive to the TSE’s request to reach 1x book.

3. The CEO Who Made a Fortune While His Hospital Chain Collapsed – Jonathan Weil

Steward Health Care System was in such dire straits before its bankruptcy that its hospital administrators scrounged each week to find cash and supplies to keep their facilities running.

While it was losing hundreds of millions of dollars a year, Steward paid at least $250 million to its chief executive officer, Dr. Ralph de la Torre, and to his other companies during the four years he was the hospital chain’s majority owner.

Steward filed for bankruptcy in May, becoming one of the biggest hospital failures in decades. Conditions at some of its hospitals have grown dire. In one Florida hospital, a pest-control company last year found 3,000 bats.

This month in Phoenix, where temperatures topped 100 degrees, the air conditioning failed at a Steward hospital, forcing patients to be transferred elsewhere, according to a court filing. Also, the kitchen was closed because of health-code violations. The state last week ordered the hospital to cease operations…

…The former cardiac surgeon owns a 190-foot, $40 million yacht called Amaral and a 90-foot, $15 million sportfishing boat called Jaruco, according to the Senate committee. He owns an 11,108-square-foot Dallas mansion, valued at $7.2 million by the county. Other residents of his exclusive Preston Hollow neighborhood include George W. Bush and Mark Cuban.

He paid at least $7.2 million in 2022 for a 500-acre ranch 45 miles south in Waxahachie, according to the property deed. Two private jets that the same Senate committee valued at $95 million were owned by a Steward affiliate that is majority-owned by de la Torre…

…Once a renowned surgeon, de la Torre became CEO of Steward’s predecessor in 2008 and took over majority ownership of Steward from its private-equity owner in 2020…

…The $250 million in payments from Steward to de la Torre and to his businesses are based on public disclosures from Steward or companies it dealt with. The total likely understates the full tally because Steward’s bankruptcy-court disclosures in most cases have covered only the 12 months before it filed for chapter 11. Some of the $250 million was paid to de la Torre directly. Other payments were to companies that did business with Steward where he had big ownership stakes.

De la Torre got his majority stake in Steward in 2020 when the company’s private-equity owner, Cerberus Capital Management, transferred its 90% stake to a physician group he led in exchange for a $350 million promissory note…

…Steward also made payments to two of de la Torre’s other companies. It was paying a management-consulting firm majority-owned by him at a rate of $30 million a year, a bankruptcy-court filing shows.

Steward said the firm, Management Health Services, employed 16 people, including Steward executives. Steward said they “provide executive oversight and overall strategic directive.” Steward effectively paid its CEO’s firm, which employed Steward executives, for executive-management services for Steward.

De la Torre’s spokeswoman said the only payments he received from MHS were for salary. She called MHS a payroll vendor. But it also owned hard assets including the two private jets, according to RZJets, which tracks aircraft history. One, a Bombardier Global 6000, was valued at $62 million, according to the Senate panel, while the other, a Dassault Falcon 2000LX, was worth $33 million. The pilots were on MHS’s payroll, according to people familiar with the matter. Both jets were sold this year.

Steward also paid $37 million to a company called CREF from May 2023 to May 2024, according to a bankruptcy-court filing. CREF is 40%-owned by de la Torre, according to people familiar with the matter, and provides real-estate and facility-management services. The other 60% is owned by CREF’s founder and CEO, Robert Gendron, who was a Steward executive vice president from 2018 to 2022 in charge of real estate and facilities.

4. The Lessons of a Lousy Business – Kingswell

The very thing that honed Buffett’s ability to spot wonderful companies and identify undervalued investment opportunities was his hard-won experience dealing with the dregs of the business world.

At the Berkshire Hathaway AGM in 2017, he admitted that it was his firsthand experiences with “lousy” businesses that made him the investor he is today.

“If you want to be a good evaluator of businesses,” said Buffett, “you really ought to figure out a way — without too much personal damage — to run a lousy business for a while. You’ll learn a whole lot more about business by actually struggling with a terrible business for a couple of years than you learn by getting into a very good one where the business itself is so good that you can’t mess it up.”…

…Not only is Buffett’s time at Dempster Mill Manufacturing Co. one of the most interesting chapters of his long career, but it also imprinted several lessons on the young investor that he would apply to Berkshire Hathaway a few years later…

…What appeared to be an outrageously low price is exactly what led Buffett to Dempster Mill Manufacturing Co., a windmill and farm implement maker based in Beatrice, Nebraska.

Buffett started buying shares for his partnership at $18 apiece — just 25% of the company’s book value, which implies a book value of roughly $72 per share. Eventually, he snapped up enough of them — at an overall cost basis of $28 per share — to take majority control of Dempster.

His prize? A front row seat to the dysfunction that caused Dempster to trade at such a low valuation in the first place. The quantitative metrics might have screamed BUY!, but the sharks were circling right beneath the surface. Sales had flatlined, unsold inventory piled up, and cash was in dangerously short supply.

Buffett tried to enact positive change without upsetting the apple cart — helpfully making suggestions as a member of the board — but that went nowhere. Dempster management paid lip service to the new owner’s ideas, but basically ignored them…

…Staring disaster in the face, Buffett turned to Charlie Munger for help. And, thankfully, Charlie knew just the man for the job. “A good friend, whose inclination is not toward enthusiastic descriptions, highly recommended Harry Bottle for our type of program,” Buffett wrote to his partners in 1962…

…Buffett and Bottle connected in Los Angeles in April of 1962 and, less than a week later, Bottle was in place in Beatrice — with a $50,000 signing bonus and Dempster stock options for his trouble. From Buffett’s perspective, no money has ever been better spent…

…Harry Bottle played hardball. His was not a Kumbaya style of management. Some people don’t like that. But drastic times call for drastic measures.

(In a Christmas letter to employees, Bottle admitted that some of the things done to right the ship “were distasteful to all of us”.)…

…In only one year, Bottle completely transformed Dempster into a profitable operation.

  • 1961: $166,000 cash vs. $2.3 million liabilities
  • 1962: $1 million cash and stock vs. $250,000 liabilities

In 1963, Buffett decided to cash in and sell Dempster at a hefty profit. But, as Alice Schroeder details in The Snowball, it was not exactly a smooth process. When Buffett posted notice that the company would be sold, “Beatrice went berserk at the thought of another new owner that might impose layoffs or a plant closing on its biggest and virtually only employer.”

“The people of Beatrice pulled out the pitchforks,” wrote Schroeder. “Buffett was shocked. He had saved a dying company. Didn’t they understand that? Without him, Dempster would have gone under. He had not expected the ferocity, the personal vitriol. He had no idea that they would hate him.”

It all ended happily enough — with the town raising enough money to purchase Dempster and Buffett’s partnership nearly tripling its money on an investment that had one foot in the grave just a year earlier.

On paper, it looked like a walk-off home run for Buffett. But pulling Dempster out of the fire left scars on the young investor that, while painful, nevertheless prepared him to paint his masterpiece with Berkshire Hathaway.

5. A Number From Today and A Story About Tomorrow – Morgan Housel

Every forecast takes a number from today and multiplies it by a story about tomorrow.

Investment valuations, economic outlooks, political forecasts – they all follow that formula. Something we know multiplied by a story we like.

The trick when forecasting is realizing that’s what you’re doing…

… A fact multiplied by a story always equals something less than a fact. So almost all predictions have less than a 100% chance of coming true. That’s not a bold statement, but if you embrace it, it always pushes you towards room for error and the ability to endure surprise…

…If you’re trying to figure out where something is going next, you have to understand more than its technical possibilities. You have to understand the stories everyone tells themselves about those possibilities, because it’s such a big part of the forecasting equation.

When interest rates are low, the story side of the equation becomes more powerful. When short-term results aren’t competing for attention with interest rates, most of a company’s valuation comes from what it might be able to achieve in the future. That, of course, is just a story. And people can come up with some wild stories. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have no vested interest in any companies mentioned. Holdings are subject to change at any time.

The Latest Thoughts From American Technology Companies On AI (2024 Q2)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2024 Q2 earnings season.

The way I see it, artificial intelligence, or AI, really leapt into the zeitgeist in late 2022 or early 2023 with the public introduction of DALL-E 2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

With the latest earnings season for the US stock market – for the second quarter of 2024 – coming to its tail-end, I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series; the older commentary can be found in the series’ earlier editions.

With that, here is the latest commentary, in no particular order:

Airbnb (NASDAQ: ABNB)

Airbnb’s management is still really excited about AI, but they’ve also realised that it’s going to take a lot longer for applications to change; management sees three layers to AI, namely, the chip, the model, and the application, and while there’s been a lot of innovation on the chip and the model, not much has changed with the applications, especially in e-commerce and travel

ChatGPT launched late November 2022. When it launched, I think we all got like incredibly excited. It was kind of like the moment probably some of us first discovered the Internet or maybe when iPhone was launched. And when it was launched, you had the feeling that everything was going to change. But I think that’s still true. But I think one of the things we’ve learned over the last, say, 18 months or nearly 2 years — 22 months since ChatGPT launched is that it’s going to take a lot longer than people think for applications to change.

If I were to think of AI, I’d probably think about it in 3 layers. You have the chip. You have the model. And you have the applications. There’s been a lot of innovation on the chip. There’s been a lot of innovation on the model. We have a lot of new models, and there’s a prolific rate of improvement in these models. But if you look at your home screen, which of your apps are fundamentally different because of the AI, like fundamentally different because of generative AI? Very little, especially even less in e-commerce or travel. And the reason why is I think it’s just going to take time to develop a new AI paradigm.

Airbnb’s management sees ChatGPT, even though it’s an AI chat software, as an application that could have existed before AI; management thinks what needs to be done is to develop AI applications that are native to the AI models, with unique interfaces, and no one has done this yet; Airbnb is working on an application that will be native to AI models and this will change how users interact with Airbnb, where it becomes much more than a search box; this change in Airbnb will take a few years to develop

ChatGPT [is an AI model interface that could] have existed before AI. And so all of our paradigms are pre-AI paradigms. And so what we need to do is we need to actually develop AI applications that are native to the model. No one has done this yet. There’s not been one app that I’m aware of that’s the top 50 app in the app store in the United States that is a fundamentally new paradigm as fundamentally different as a multitouch was to the iPhone in 2008, and we need that interface change. So that’s one of the things that we’re working on. And I do think Airbnb will eventually be much more than a search box where you type a destination, add dates and find a listing. It’s going to be much more of a travel concierge. It’s having a conversation, learning, adapting to you. It’s going to take a number of years to develop this. And so it won’t be in the next year that this will happen. And I think this is probably what most of my tech friends are also saying, is it’s going to just take a bit more time.

Airbnb’s management thinks that having a new AI-driven interface will allow Airbnb to expand into new businesses

But to answer your question on what’s possible, a new interface paradigm would allow us to attach new businesses. So the question is, what permission do we have to go into a business like hotels? Well, today, we have permission because we have a lot of traffic. But if we had a breakthrough interface, we have even more permission because suddenly, we could move top of funnel and not just ask where are you going, but we can point to — we can inspire where you travel. Imagine if we had an index of the world’s communities. We told you we had information about every community, and we can provide the end-to-end trip for you. So there’s a lot of opportunities as we develop new interfaces to cross-sell more inventory.

Alphabet (NASDAQ: GOOG)

Google Cloud’s year-to-date AI-related revenue is already in the billions, and its AI infrastructure and solutions are already used by >2 million developers; more than 1.5 million developers are using Gemini, Alphabet’s foundational AI model, across the company’s developers tools

Year-to-date, our AI infrastructure and generative AI solutions for cloud customers have already generated billions in revenues and are being used by more than 2 million developers…

…More than 1.5 million developers are now using Gemini across our developer tools.

Alphabet’s management thinks Alphabet is well-positioned for AI; Alphabet is innovating at every layer of the AI stack, from chips at the bottom to agents at the top

As I spoke about last quarter, we are uniquely well positioned for the AI opportunity ahead. Our research and infrastructure leadership means we can pursue an in-house strategy that enables our product teams to move quickly. Combined with our model building expertise, we are in a strong position to control our destiny as the technology continues to evolve. Importantly, we are innovating at every layer of the AI stack, from chips to agents and beyond, a huge strength.

Alphabet’s management thinks Alphabet is using AI to deliver better responses on Search queries; tests for AI Overviews have shown increases in Search usage and higher user satisfaction; Search users with complex searches keep coming back for AI Overviews; users aged 18-24 have higher engagement when using Search with AI Overviews; Alphabet is prioritising AI-approaches that send traffic to websites; ads that are above or below AI Overviews continue to be valuable; in 2024 Q2, management doubled the core model size for AI Overviews while improving latency and keeping the cost per AI Overview served flat; management is working on matching the right AI model size to the query’s complexity to improve cost and latency; AI Overviews has been rolled out in the USA and will be rolled out to more countries throughout 2024; Alphabet will soon put Search and Shopping ads within the AI Overviews for USA users

With AI, we are delivering better responses on more types of search queries and introducing new ways to search. We are pleased to see the positive trends from our testing continue as we roll out AI Overviews, including increases in Search usage and increased user satisfaction with the results. People who are looking for help with complex topics are engaging more and keep coming back for AI Overviews. And we see even higher engagement from younger users aged 18 to 24 when they use Search with AI Overviews. As we have said, we are continuing to prioritize approaches that send traffic to sites across the web. And we are seeing that ads appearing either above or below AI Overviews continue to provide valuable options for people to take action and connect with businesses…

…Over the past quarter, we have made quality improvements that include doubling the core model size for AI Overviews while at the same time improving latency and keeping cost per AI Overviews served flat. And we are focused on matching the right model size to the complexity of the query in order to minimize impact on cost and latency…

…On the AI Overviews, we are — we have rolled it out in the U.S. And we are — will be, through the course of the year, definitely scaling it up, both to more countries…

…And as you have probably noticed at GML, we announced that soon we’ll actually start testing search and shopping ads in AI Overviews for users in the U.S., and they will have the opportunity to actually appear within the AI Overview in a section clearly labeled as sponsored when they’re relevant to both the query and the information in the AI Overview, really giving us the ability to innovate here and take this to the next level.

AI opens up new ways to use Search, such as asking questions by taking a video with Lens; AI Overviews in Lens has led to higher overall visual search usage; Circle to Search is another new way to search, and is available on >100 million Android devices

AI expands the types of queries we are able to address and opens powerful new ways to Search. Visual search via Lens is one. Soon, you’ll be able to ask questions by taking a video with Lens. And already, we have seen that AI Overviews in Lens leads to an increase in overall visual search usage. Another example is Circle to Search, which is available today on more than 100 million Android devices.

Gemini now comes in 4 sizes, each with its own use cases; Gemini has a context window of 2 million tokens, the longest of any foundation model to date; all of Alphabet’s 6 products with more than 2 billion monthly users are using Gemini; through Gemini, users of Google Photos can soon ask questions of their photos and receive answers

Gemini now comes in 4 sizes with each model designed for its own set of use cases. It’s a versatile model family that runs efficiently on everything from data centers to devices. At 2 million tokens, we offer the longest context window of any large-scale foundation model to date, which powers developer use cases that no other model can handle. Gemini is making Google’s own products better. All 6 of our products with more than 2 billion monthly users now use Gemini…

…At I/O, we showed new features coming soon to Gmail and to Google Photos. Soon, you’ll be able to ask Photos questions like, what did I eat at that restaurant in Paris last year?

During Alphabet’s recent developer conference, I/O, management showed their vision of what a universal AI agent could look like

For a glimpse of the future, I hope you saw Project Astra at I/O. It shows multimodal understanding and natural conversational capabilities. We’ve always wanted to build a universal agent, and it’s an early look at how they can be helpful in daily life.

Alphabet has launched Trillium, the sixth-generation of its custom TPU AI accelerator; Trillium has a 5x increase in peak compute performance per chip and a 67% improvement in energy efficiency over TPU v5e

Trillium is the sixth generation of our custom AI accelerator, and it’s our best-performing and most energy-efficient TPU to date. It achieves a near 5x increase in peak compute performance per chip and is 67% more energy efficient compared to TPU v5e.

Google Cloud’s enterprise AI platform, Vertex, is used by Deutsche Bank, Kingfisher, and the US Air Force to build AI agents; Uber and WPP are using Gemini Pro 1.5 and Gemini Flash 1.5 in Vertex for customer experience and marketing; Vertex has broadened support for 3rd-party AI models, including Anthropic’s Claude 3.5 Sonnet, Meta’s Llama, and Mistral’s models

Our enterprise AI platform, Vertex, helps customers such as Deutsche Bank, Kingfisher and the U.S. Air Force build powerful AI agents. Last month, we announced a number of new advances. Uber and WPP are using Gemini Pro 1.5 and Gemini Flash 1.5 in areas like customer experience and marketing. We broadened support for third-party models including Anthropic’s Claude 3.5 Sonnet and open-source models like Gemma 2, Llama and Mistral. 

Google Cloud is the only cloud provider to provide grounding with Google Search; large enterprises such as Moody’s, MSCI, and ZoomInfo are using Google Cloud’s grounding capabilities

We are the only cloud provider to offer grounding with Google Search, and we are expanding grounding capabilities with Moody’s, MSCI, ZoomInfo and more.

Google Cloud’s AI-powered applications are helping it to drive upsells and win new customers; Best Buy and Gordon Food Service are using Google Cloud’s conversational AI platform; Click Therapeutics is using Gemini for Workspace; Wipro is using Gemini Code Assist to speed up software development; MercadoLibre is using BigQuery and Looker for capacity planning and speeding up shipments.

Our AI-powered applications portfolio is helping us win new customers and drive upsell. For example, our conversational AI platform is helping customers like Best Buy and Gordon Food Service. Gemini for Workspace helps Click Therapeutics analyze patient feedback as they build targeted digital treatments. Our AI-powered agents are also helping customers develop better-quality software, find insights from their data and protect their organization against cybersecurity threats using Gemini. Software engineers at Wipro are using Gemini Code Assist to develop, test and document software faster. And data analysts at Mercado Libre are using BigQuery and Looker to optimize capacity planning and fulfill shipments faster.

In 2024 Q2, Alphabet announced more than 30 new ads features and products to help advertisers leverage AI; Alphabet is applying AI across its advertising products to streamline workflows, enhance asset creation, and improve engagement with consumers; in asset creation, any business using Product Studio can upload an image and enhance it with AI; AI features for consumers such as virtual try-ons in shopping ads are in beta-testing, and feedback shows that virtual try-on gets 60% more high-quality views; advertisers using Alphabet’s AI-powered profit maximisation tools along with Smart Bidding see a 15% increase in profit; Demand Gen, to be rolled out in the coming months, creates high-quality image assets for social marketers and delivers 14% more conversions when paired with Search or Performance Max; Tiffany used Demand Gen and achieved a 2.5% lift in consideration and important customer-actions, and a 5.6x improvement in cost per click compared to social media benchmarks; Alphabet used Demand Gen to create 4,500 ad variations for Pixel 8’s advertising campaigns and delivered twice the click-through rate at nearly 1/4 of the cost

This quarter, we announced over 30 new ads features and products to help advertisers leverage AI and keep pace with the evolving expectations of customers and users. Across Search, PMax, Demand Gen and retail, we’re applying AI to streamline workflows, enhance creative asset production and provide more engaging experiences for consumers.

Listening to our customers, retailers in particular have welcomed AI-powered features to help scale the depth and breadth of their assets. For example, as part of the new and easier-to-use Merchant Center, we’ve expanded Product Studio with tools that bring the power of Google AI to every business owner. You can upload a product image, prompt the AI with something like “feature this product with the Paris skyline in the background”, and Product Studio will generate campaign-ready assets.

I also hear great feedback from our customers on many of our other new AI-powered features. We’re beta testing virtual try-on in shopping ads and plan to roll it out widely later this year. Feedback shows this feature gets 60% more high-quality views than other images and higher click out to retailer sites. Retailers love it because it drives purchasing decisions and fewer returns.

Our AI-driven profit optimization tools have been expanded to Performance Max and standard shopping campaigns. Advertisers who use profit optimization and Smart Bidding see a 15% uplift in profit on average compared to revenue-only bidding.

Lastly, Demand Gen is rolling out to Display & Video 360 and Search Ads 360 in the coming months with new generative image tools that create stunning, high-quality image assets for social marketers. As we said at GML, when paired with Search or PMax, Demand Gen delivers an average of 14% more conversions…

…Luxury jewelry retailer Tiffany leveraged Demand Gen during the holiday season and saw a 2.5% brand lift in consideration and actions such as adding items to carts and booking appointments. The campaign drove a 5.6x more efficient cost per click compared to social media benchmarks. Our own Google marketing team used Demand Gen to create nearly 4,500 ad variations for the Pixel 8 campaign, shown across YouTube, Discover and Gmail, delivering twice the click-through rate at nearly 1/4 of the cost.

Alphabet has used AI to (1) improve broad match performance by 10% in 6 months for advertisers using Smart Bidding, and (2) increase conversions by 25% at similar cost for advertisers who adopt PMax to broad match and Smart Bidding in their Search campaigns

In just 6 months, AI-driven improvements to quality, relevance and language understanding have improved broad match performance by 10% for advertisers using Smart Bidding. Also, advertisers who adopt PMax to broad match and Smart Bidding in their Search campaigns see an average increase of over 25% more conversions or value at a similar cost.

Google Cloud had 29% revenue growth in 2024 Q2 (was 28% in 2024 Q1); operating margin was 11% (was 9% in 2024 Q1 and was 4.9% in 2023 Q2); Google Cloud’s accelerating revenue growth in 2024 Q2 was partly the result of AI demand; GCP’s growth rate is above the growth rate for the overall Google Cloud business

Turning to the Google Cloud segment. Revenues were $10.3 billion for the quarter, up 29%, reflecting, first, significant growth in GCP, which was above growth for Cloud overall and includes an increasing contribution from AI; and second, strong Google Workspace growth, primarily driven by increases in average revenue per seat. Google Cloud delivered operating income of $1.2 billion and an operating margin of 11%…

…[Question] On the cloud acceleration, would you characterize that as new AI demand helping drive that year-to-date? Or is that more of a rebound in just general compute and other demand?

[Answer] There is clearly a benefit as the Cloud team is engaging broadly with customers around the globe with AI-related solutions, AI infrastructure solutions and generative AI solutions. I think we noted that we’re particularly encouraged that the majority of our top 100 customers are already using our generative AI solution. So it is clearly adding to the strength of the business on top of all that they’re doing. And just to be really clear, the results for GCP, the growth rate for GCP is above the growth for Cloud overall.

Alphabet’s big jump in capex in 2024 Q2 (capex was $7.2 billion in 2023 Q2) was mostly for technical infrastructure, in the form of servers and data centers; management continues to expect Alphabet’s quarterly capex for the rest of 2024 to be roughly at or above the $12 billion seen in 2024 Q1

With respect to CapEx, our reported CapEx in the second quarter was $13 billion, once again, driven overwhelmingly by investment in our technical infrastructure with the largest component for servers followed by data centers. Looking ahead, we continue to expect quarterly CapEx throughout the year to be roughly at or above the Q1 CapEx of $12 billion, keeping in mind that the timing of cash payments can cause variability in quarterly reported CapEx.

Alphabet’s management is seeing more tangible use cases for AI in the consumer space compared to the enterprise space; in the consumer space, consumers are engaging with Alphabet’s AI features, but there’s still the question of monetisation; in the enterprise space, a lot of AI models are currently being built and they are converging towards a set of base capabilities; the next wave for the enterprise space will be building applications on top of the models, and there is some traction in some areas, but it’s not widespread yet; management believes value will eventually be unlocked, but it may take time

 I think there is a time curve in terms of taking the underlying technology and translating it into meaningful solutions across the board, both on the consumer and the enterprise side. Definitely, on the consumer side, I’m pleased, as I said in my comments earlier, in terms of how for a product like Search, which is used at that scale over many decades, how we’ve been able to introduce it in a way that it’s additive and enhances overall experience and this positively contributing there. I think across our consumer products, we’ve been able — I think we are seeing progress on the organic side. Obviously, monetization is something that we would have to earn on top of it. The enterprise side, I think we are at a stage where definitely there are a lot of models. I think roughly, the models are all kind of converging towards a set of base capabilities. But I think where the next wave is working to build solutions on top of it. And I think there are pockets, be it coding, be it in customer service, et cetera, where we are seeing some of those use cases are seeing traction, but I still think there is hard work there to completely unlock those…

…But I think we are in this phase where we have to deeply work and make sure on these use cases, on these workflows, we are driving deeper progress on unlocking value, which I’m very bullish will happen. But these things take time. So — but if I were to take a longer-term outlook, I definitely see a big opportunity here. And I think particularly for us, given the extent to which we are investing in AI, our research infrastructure leadership, all of that translates directly. And so I’m pretty excited about the opportunity space ahead.

Alphabet’s management thinks that the risk of underinvesting in AI infrastructure for the cloud business is currently greater than the risk of overinvesting; management thinks that even if Alphabet ends up overinvesting, the infrastructure is still widely useful for internal use cases

[Question] So it looks like from the outside at least, the hyperscaler industry is going from kind of an underbuilt situation this time last year to better meeting the demand with capacity right now to potentially being overbuilt next year if these CapEx growth rates keep up. So do you think that’s a fair characterization? And how are we thinking about the return on invested capital with this AI CapEx cycle?

[Answer] I think the one way I think about it is when we go through a curve like this, the risk of under-investing is dramatically greater than the risk of over-investing for us here, even in scenarios where if it turns out that we are over-investing, we clearly — these are infrastructure which are widely useful for us. They have long useful lives, and we can apply it across, and we can work through that. But I think not investing to be at the front here, I think, definitely has much more significant downside. Having said that, we obsess around every dollar we put in. Our teams are — work super hard. I’m proud of the efficiency work, be it optimization of hardware, software, model deployment across our fleet. All of that is something we spend a lot of time on, and that’s how we think about it.

Amazon (NASDAQ: AMZN)

AWS’s AI business continues to grow dramatically with a multi-billion revenue run rate; management sees AWS’s AI services resonating with customers, who want choice in the AI models and AI chips they use, and AWS is providing them with choices; over the past 18 months, AWS has launched twice as many AI features into general availability than all other major cloud providers combined

Our AI business continues to grow dramatically with a multibillion-dollar revenue run rate despite it being such early days, but we can see in our results and conversations with customers that our unique approach and offerings are resonating with customers. At the heart of this strategy is a firmly held belief, which we’ve had since the beginning of AWS that there is not one tool to rule the world. People don’t want just one database option or one analytics choice or one container type. Developers and companies not only reject it but are suspicious of it. They want multiple options for flexibility and to use the best tool for each job to be done. The same is true in AI. You saw this several years ago when some companies tried to argue that TensorFlow will be the only machine learning framework that mattered and then PyTorch and others overtook it. The same one model or one chip approach dominated the earliest moments of the generative AI boom, but we have a lot of data that suggests this is not what customers want here either, and our AWS team is determined to deliver choice and options for customers…

…During the past 18 months, AWS has launched more than twice as many machine learning and generative AI features into general availability than all the other major cloud providers combined. 

AWS provides NVIDIA chips for AI model builders, but management also hears from customers that they want better price performance, and hence AWS developed the Trainium and Inferentia chips for training and inference, respectively; the second version of Trainium is coming later this year and has very compelling price performance; management is seeing significant demand for Trainium and Inferentia; management started building Trainium and Inferentia 5 years ago partly because they had seen customers wanting better price performance from CPUs; management believes Trainium and Inferentia will generate similarly high ROI as Graviton, Amazon’s custom CPU, does

For those building generative AI models themselves, the cost of compute for training and inference is critical, especially as models get to scale. We have a deep partnership with NVIDIA and the broader selection of NVIDIA instances available, but we’ve heard loud and clear from customers that they relish better price performance. It’s why we’ve invested in our own custom silicon in Trainium for training and Inferentia for inference. And the second versions of those chips, with Trainium coming later this year, are very compelling on price performance. We are seeing significant demand for these chips…

…When we started AWS, we had and still have a very deep partnership with Intel on the generalized CPU space. But what we found from customers is that they — when you find a — an offering that is really high value for you and high return, you don’t actually spend less, even though you’re spending less per unit. You spend less per unit, but it enables you, it frees you up to do so much more inventing and building for your customers. And then when you’re spending more, you actually want better price performance than what you’re getting.

And a lot of times, it’s hard to get that price performance from existing players unless you decide to optimize yourself for what you’re learning from your customers and you push that envelope yourself. And so we built custom silicon in the generalized CPU space with Graviton, which we’re on our fourth model right now. And that has been very successful for customers and for our AWS business, is it saves customers about — up to about 30% to 40% price performance versus the other leading x86 processors that they could use.

And we saw the same trend happening about 5 years ago in the accelerator space in the GPU space, where the products are good, but there was really primarily 1 provider and supply was more scarce than what people wanted. And people — our customers really want improved price performance all the time. And so that’s why we went about building Trainium, which is our training chip, and Inferentia, which is our inference chip, which we’re on second versions of both of those. They will have very compelling relative price performance.

And in a world where it's hard to get GPUs today, where supply is scarce and all the schedules continue to move over time, customers are quite excited about, and demanding at a high clip, our custom silicon, and we're producing it as fast as we can. I think that's going to have a very good return profile, just like Graviton has, and I think it will be another differentiating feature of AWS relative to others.

SageMaker, AWS’s fully-managed AI service, helps customers save time and money while they build their AI models; management is seeing model builders standardise on SageMaker

Model builders also desire services that make it much easier to manage the data, construct the models, experiment, deploy to production and achieve high-quality performance, all while saving considerable time and money. That's what Amazon SageMaker does so well, including its most recently launched feature called HyperPod, which changes the game in networking performance for large models, and we're increasingly seeing model builders standardize on SageMaker.

Amazon Bedrock, AWS’s AI-models-as-a-service offering, caters to companies that want to leverage 3rd-party models and customise with their own data; Bedrock already has tens of thousands of companies using it; Bedrock has the largest selection of models and the best generative AI capabilities in a number of critical areas; Bedrock recently added Anthropic’s Claude 3.5 models, Meta’s new Llama 3.1 models, and Mistral’s new models

While many teams will build their own models, lots of others will leverage somebody else's frontier model, customize it with their own data, and seek a service that provides broad model selection and great generative AI capabilities. This is what we think of as the middle layer, what Amazon Bedrock does and why Bedrock has tens of thousands of companies using it already. Bedrock has the largest selection of models, the best generative AI capabilities in critical areas like model evaluation, guardrails, RAG and agents, and then makes it easy to switch between different model types and model sizes. Bedrock has recently added Anthropic's Claude 3.5 models, which are the best performing models on the planet; Meta's new Llama 3.1 models; and Mistral's new Large 2 models. And Llama's and Mistral's impressive performance benchmarks and open nature are quite compelling to our customers as well.

Amazon’s management is seeing strong adoption of Amazon Q, Amazon’s generative AI assistant for software development; Amazon Q has the highest score and acceptance rate for code suggestions; Amazon Q tests code and outperforms competitors on catching security vulnerabilities; with Amazon Q’s code transformation capabilities, Amazon saved $260 million and 4,500 developer years when performing a large Java Development Kit migration; management thinks Amazon Q can continue to improve and address more use cases  

We’re continuing to see strong adoption of Amazon Q, the most capable generative AI-powered assistant for software development and to leverage your own data. Q has the highest known score and acceptance rate for code suggestions, but it does a lot more than provide code suggestions. It tests code, outperforms all other publicly benchmarkable competitors on catching security vulnerabilities and leads all software development assistance on connecting multiple steps together and applying automatic action.

It also saves development teams time and money on the muck nobody likes to talk about. For instance, when companies decide to upgrade from one version of a framework to another, it takes development teams many months, sometimes years, burning valuable opportunity cost and churning developers who hate this tedious though important work. With Q's code transformation capabilities, Amazon has migrated over 30,000 Java JDK applications in a few months, saving the company $260 million and 4,500 developer years compared to what it would have otherwise cost. That's a game changer.

And think about how this Q transformation capability might evolve to address other elusive but highly desired migrations. 
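To make the code-transformation idea concrete, here is a minimal sketch (my own illustration, not Amazon Q's actual implementation) of how an automated migration loop can work: propose a rewrite for each file, and keep the change only if the test suite still passes. `propose_upgrade` is a stand-in for a real model call, and the single string replacement inside it is purely illustrative.

```python
# A minimal, hypothetical sketch of an automated code-migration loop
# (not Amazon Q's actual implementation): rewrite each file, keep the
# change only if the project's tests still pass.
import pathlib
import subprocess

def propose_upgrade(source: str) -> str:
    """Stand-in for an LLM call that rewrites code for a newer JDK.
    The single replacement rule below is purely illustrative."""
    return source.replace("javax.annotation", "jakarta.annotation")

def tests_pass(project_dir: str) -> bool:
    """Run the project's test suite; only green changes are kept."""
    result = subprocess.run(["mvn", "-q", "test"], cwd=project_dir)
    return result.returncode == 0

def migrate(project_dir: str) -> None:
    for path in pathlib.Path(project_dir).rglob("*.java"):
        original = path.read_text()
        upgraded = propose_upgrade(original)
        if upgraded == original:
            continue
        path.write_text(upgraded)
        if not tests_pass(project_dir):  # revert anything that breaks the build
            path.write_text(original)
```

The economics described in the quote fall out of exactly this kind of loop running at scale: the model does the tedious rewriting, and the test suite acts as the quality gate.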

Amazon’s management is still very bullish on the medium to long-term impacts of AI, but the progress may not be a straight line; management sees a lot of promise in generative AI being able to improve customer experiences and this is informed by their own experience of using generative AI within Amazon, such as: (1) Rufus, a shopping assistant, improves customers’ shopping decisions, (2) customers can virtually try on apparel, (3) sellers can create new selections with a line or two of text, and (4) better detection of product defects before the products reach customers

We remain very bullish on the medium to long-term impact of AI in every business we know and can imagine. The progress may not be one straight line for companies.

Generative AI especially is quite iterative, and companies have to build muscle around the best way to solve actual customer problems. But we see so much potential to change customer experiences. We see it in how our generative-AI-powered shopping assistant, Rufus, is helping customers make better shopping decisions. We see it in our AI features that allow customers to simulate trying apparel items or changing the buying experience. We see it in our generative AI listing tools enabling sellers to create new selection with a line or 2 of text versus the many forms previously required. We see it in our fulfillment centers across North America, where we’re rolling out Project Private Investigator, which uses a combination of generative AI and computer vision to uncover defects before products reach customers. We see it in how our generative AI is helping our customers discover new music and video. We see it in how it’s making Alexa smart, and we see it in how our custom silicon and services like SageMaker and Bedrock are helping both our internal teams and many thousands of external companies reinvent their customer experiences and businesses. We are investing a lot across the board in AI, and we’ll keep doing so as we like what we’re seeing and what we see ahead of us.

Amazon’s management expects capital expenditures to be higher in 2024 H2 compared to 2024 H1; most of the capex will be for AWS infrastructure in both generative AI and non-generative AI workloads; management has a lot of experience, accumulated over the years, in predicting just the right amount of compute capacity to provide for AWS before the generative AI era, and they believe they can do so again for generative AI; management is investing heavily in AI-related capex because they see a lot of demand and in fact, they would like AWS to have more compute capacity than what it has today

For the first half of the year, CapEx was $30.5 billion. Looking ahead to the rest of 2024, we expect capital investments to be higher in the second half of the year. The majority of the spend will be to support the growing need for AWS infrastructure as we continue to see strong demand in both generative AI and our non-generative AI workloads…

…If you think about the fact that we have about 35 regions (think of a region as a cluster of multiple data centers) and about 110 availability zones, each roughly equivalent to a data center (sometimes it includes multiple), and then if you think about having to land thousands and thousands of SKUs across the 200 AWS services in each of those availability zones at the right quantities, it's quite difficult. And if you end up with too little capacity, then you have service disruptions, which nobody wants, because it means companies can't scale their applications.

So most companies deliver more capacity than they need. However, if you actually deliver too much capacity, the economics are pretty woeful, and you don't like the returns on the operating income. And I think you can tell, from the fact that we disclose both our revenue and our operating income in AWS, that we've learned over time to manage this reasonably well. We have built models over a long period of time that are algorithmic and sophisticated and that land the right amount of capacity. And we've done the same thing on the AI side.

Now AI is newer. And it's true that people sometimes take down clumps of capacity in AI that are different. But it's also true that it's not like a company shows up for a training cluster asking for a few hundred thousand chips the same day. You have a very significant advance signal when you have customers that want to take down a lot of capacity.

So while the models are more fluid, it's also true that we've built, I think, a lot of muscle and skill over time in building these capacity signals and models, and we also are getting a lot of signal from customers on what they need. The reality right now is that while we're investing a significant amount in the AI space and in infrastructure, we would like to have more capacity than we already have today. I mean, we have a lot of demand right now, and I think it's going to be a very, very large business for us.
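The capacity trade-off management describes has a classic textbook form: if running short is far costlier than sitting on idle capacity, you provision at a high quantile of forecast demand. Here's a back-of-the-envelope sketch (my own illustration with made-up numbers, not AWS's actual models):

```python
# Newsvendor-style capacity sketch (illustrative numbers, not AWS's
# models): provision at the demand quantile implied by the relative
# cost of shortage versus idle capacity.
import numpy as np

rng = np.random.default_rng(0)
demand = rng.lognormal(mean=10.0, sigma=0.3, size=100_000)  # simulated demand draws

cost_short = 9.0  # relative cost of a unit of unmet demand (service disruption)
cost_idle = 1.0   # relative cost of a unit of unused capacity
quantile = cost_short / (cost_short + cost_idle)  # = 0.9 here

capacity = np.quantile(demand, quantile)
print(f"provision at the {quantile:.0%} quantile of forecast demand: {capacity:,.0f} units")
```

The "advance signal" point matters because better demand visibility tightens the forecast distribution, which lets the same service level be met with less spare capacity.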

Companies need to organise their data in specific ways before they can use AI effectively; it’s difficult for companies with on-premise data centers to use AI effectively

It’s quite difficult to be able to do AI effectively if your data is not organized in such a way that you can access that data and run the models on top of them and then build the application. So when we work with customers, and this is true both when we work directly with customers as well as when we work with systems integrator partners, everyone is in a hurry to get going on doing generative AI. And one of the first questions that we ask is show us where your data is, show us what your data lake looks like, show us how you’re going to access that data. And there’s very often work associated with getting your data in the right shape and in the right spot to be able to do generative AI. There — fortunately, because so many companies have done the work to move to the cloud, there’s a number of companies who are ready to take advantage of AI, and that’s where we’ve seen a lot of the growth. But also it’s worth remembering that, again, remember the 90% of the global IT spend being on-premises. There are a lot of companies who have yet to move to the cloud, who will, and the ability to use AI more effectively is going to be one of the many drivers in doing so for them.

Apple (NASDAQ: AAPL)

Apple Intelligence, Apple’s AI technologies embedded in its devices, improves Siri; Apple Intelligence is built on a foundation of privacy and has a ground-breaking approaching to using the cloud, known as Private Cloud Compute, that protects user information; Apple Intelligence is powered by Apple’s custom chips; Apple Intelligence will involve integration with ChatGPT in iPhones, Macs, and iPads; management will continue to invest in AI; because of management’s stance on privacy, Apple Intelligence will maximise the amount of data that is processed directly on people’s devices; Apple Intelligence’s roll out will be staggered; Apple Intelligence’s monetisation appears to involve both the Services business of Apple, and payments from partners

At our Worldwide Developers Conference, we were thrilled to unveil game-changing updates across our platforms, including Apple Intelligence. Apple Intelligence builds on years of innovation and investment in AI and machine learning. It will transform how users interact with technology from Writing Tools to help you express yourself to Image Playground, which gives you the ability to create fun images and communicate in new ways, to powerful tools for summarizing and prioritizing notifications. Siri also becomes more natural, more useful, and more personal than ever. Apple Intelligence is built on a foundation of privacy, both through on-device processing that does not collect users’ data and through Private Cloud Compute, a groundbreaking new approach to using the cloud while protecting users’ information powered by Apple Silicon. We are also integrating ChatGPT into experiences within iPhone, Mac, and iPad, enabling users to draw on a broad base of world knowledge.

We are very excited about Apple Intelligence, and we remain incredibly optimistic about the extraordinary possibilities of AI and its ability to enrich customers’ lives. We will continue to make significant investments in this technology and dedicate ourselves to the innovation that will unlock its full potential…

…We are committed as ever to shipping products that offer the highest standards of privacy for our users. With everything we do, whether it’s offering a browser like Safari that prevents third-parties from tracking you across the Internet, or providing new features like the ability to lock and hide apps, we are determined to keep our users in control of their own data. And we are just as dedicated to ensuring the security of our users’ data. That’s why we work to minimize the amount of data we collect and work to maximize how much is processed directly on people’s devices, a foundational principle that is at the core of all we build, including Apple Intelligence…

…The rollout, as we mentioned in June, sort of we’ve actually started with developers this week. We started with some features of Apple Intelligence, not the complete suite. There are other features like languages beyond U.S. English that will happen over the course of the year, and there are other features that will happen over the course of the year. And ChatGPT is integrated by the end of the calendar year. And so yes, so it is a staggered launch…

…[Question] How should investors think about the monetization models…  in the long term, do you see the Apple Intelligence part, the Services growth from Apple Intelligence being the larger contributor over time? Or do you see these partnerships becoming a larger contributor over time? 

[Answer] The monetization model, I don’t want to get into the terms of the commercial agreements because they’re confidential between the parties, but I see both aspects as being very important. People want both.

Apple is getting its partners to foot the bill for some of its capex needs for AI cloud compute, so even though its capex will increase over time, the increase may not be that steep

[Question] Do you see the rollout of these features requiring further increases in R&D or increases in OpEx or CapEx for cloud compute capacity?

[Answer] On the CapEx part, it’s important to remember that we employ a hybrid kind of approach where we do things internally and we have certain partners that we do business with externally where the CapEx would appear in their respective businesses. But yes, I mean, you can expect that we will continue to invest and increase it year-on-year…

…On the CapEx front, as Tim said, we employ a hybrid model. Some of the investments show up on our balance sheet and some other investments show up somewhere else and we pay as we go. But in general, we try to run the company efficiently.

Arista Networks (NYSE: ANET)

Arista Networks recently launched its Etherlink AI platforms, which are Ultra Ethernet Consortium-compatible and can lead the migration from InfiniBand to Ethernet; the Etherlink AI platforms consist of a portfolio of 800-gig switches and can work with all kinds of GPUs; there are new products in the platform that work well even for very large AI clusters; the Etherlink portfolio is being trialled by customers and can support up to 100,000 XPUs

In June 2024, we launched Arista's Etherlink AI platforms that are Ultra Ethernet Consortium-compatible, validating the migration from InfiniBand to Ethernet. This is a rich portfolio of 800-gig products, not just a point product, but in fact a complete portfolio that is both NIC and GPU agnostic. The AI portfolio consists of the 7060 [indiscernible] switch that supports 64 800-gig or 128 400-gig Ethernet ports with a capacity of 51 terabits per second. The 7800 R4 AI Spine is our fourth generation of Arista's flagship 7800, offering 100% non-blocking throughput with a proven virtual output queuing architecture. The 7800 R4 supports up to 460 terabits in a single chassis, corresponding to 576 800-gigabit Ethernet ports or a 1,152 400-gigabit port density. The 7700 R4 AI Distributed Etherlink Switch is a unique product offering with a massively parallel distributed scheduling and congestion-free traffic spraying fabric. The 7700 represents the first in a new series of ultra-scalable intelligent distributed systems that can deliver the highest consistent throughput for very large AI clusters…

…Our Etherlink portfolio is in the midst of trials and can support up to 100,000 XPUs in a 2-tier design built on our proven and differentiated extensible OS.
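The quoted specifications are internally consistent, which is a quick way to sanity-check the transcript's numbers: port count multiplied by port speed should equal the stated switch capacity.

```python
# Sanity-checking the quoted Etherlink specs: ports x speed = capacity.
def tbps(n_ports: int, gbps: int) -> float:
    return n_ports * gbps / 1000  # terabits per second

assert tbps(64, 800) == tbps(128, 400) == 51.2  # 7060-series switch: 51.2 Tbps
print(tbps(576, 800))    # 7800 R4 chassis: 460.8 Tbps as 576 x 800G ports
print(tbps(1152, 400))   # the same chassis expressed as 1,152 x 400G ports
```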

Arista Networks had a recent AI enterprise win with a Tier 2 cloud provider to provide Ethernet fabrics for its fleet of NVIDIA H100 GPUs; the cloud provider was using a legacy networking vendor that could not scale

The first example is an AI enterprise win with a large Tier 2 cloud provider which has been heavily investing in GPUs to increase their revenue and penetrate new markets. Their senior leadership wanted to be less reliant on traditional core services and work with Arista on new, reliable and scalable Ethernet fabrics. Their environment consisted of new NVIDIA H100s. However, it was being connected to their legacy networking vendor, which resulted in them having significant performance and scale issues with their AI applications. The goal of our customer engagement was to refresh the front-end network to alleviate these issues. Our technical partnership resulted in deploying a 2-step migration path to alleviate the current issues using 400-gig 7080s, eventually migrating them to an 800-gig AI Ethernet link in the future. 

Arista Networks’ management is once again seeing the network becoming the computer as AI training models require a lossless network to connect every AI accelerator in a cluster to one another; Arista Networks’ AI networking solutions also connect trained AI models to end users and other systems

I am reminded of the 1980s, when Sun [Microsystems] was declaring "the network is the computer." Well, 40 years later, we're seeing the same cycle come true again, with the collective nature of AI training models mandating a lossless, highly available network to seamlessly connect every AI accelerator in the cluster to one another for peak job completion times. Our AI networks also connect trained models to end users and other multi-tenant systems in the front-end data center, such as storage, enabling the AI system to become more than the sum of its parts.
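The "peak job completion times" phrase is the crux: a synchronous training step finishes only when the slowest participant finishes, so even a small per-link loss rate inflates every step for the whole cluster. A toy simulation (my own illustration, not Arista's analysis) shows the effect:

```python
# Toy illustration of why training wants a lossless network: a
# synchronous step completes only when the slowest worker's transfer
# lands, so tail retransmits set the pace for the whole cluster.
import random

random.seed(1)

def transfer_ms(loss: float, base_ms: float = 10.0) -> float:
    retries = 0
    while random.random() < loss:  # each lost packet costs a retransmit
        retries += 1
    return base_ms * (1 + retries)

def sync_step_ms(n_workers: int, loss: float) -> float:
    # the step completes only when the slowest worker finishes
    return max(transfer_ms(loss) for _ in range(n_workers))

for loss in (0.0, 0.01, 0.05):
    mean = sum(sync_step_ms(1024, loss) for _ in range(200)) / 200
    print(f"link loss {loss:.0%}: mean step time {mean:.1f} ms")
```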

Arista Networks’ management think that data centers will evolve to be holistic AI centers, where the network will be the epicenter; management thinks that AI centers will need a foundational data architecture; Arista Networks has an AI agent within its EOS (Extensible Operating System) that can connect to NVIDIA’s Bluefield NICs (network interface cards), with more NICs to be added in the future

We believe data centers are evolving to holistic AI centers, where the network is the epicenter of AI management for acceleration of applications, compute, storage and the wide area network. AI centers need a foundational data architecture to deal with the multimodal AI data sets that run on our differentiated EOS network data systems. Arista showcased a technology demonstration of our EOS-based AI agent that can run directly on the NIC itself or, alternatively, inside the host. By connecting into adjacent Arista switches to continuously keep up with the current state, send telemetry or receive configuration updates, we have demonstrated the network working holistically with network interface cards such as NVIDIA BlueField, and we expect to add more NICs in the future.

Arista Networks’ management thinks that as GPUs increase in speed, the dependency on the network for higher throughput increases

I think as the GPUs get faster and faster, obviously, the dependency on the network for higher throughput is clearly related.

The 4 major AI trials Arista Networks discussed in the 2024 Q1 earnings call are all going well and are moving into pilots this year

[Question] Last quarter, you had mentioned kind of 4 major AI trials that you guys were a part of…  any update on where those 4 AI trials stand or what the current count of AI trials is currently?

[Answer] All 4 trials are largely in what I call Cloud and AI Titans. A couple of them could be classified as specialty providers as well, depending on how they end up. But those 4 are going very well. They started out as largely trials. They’re now moving into pilots this year, most of them. 

Arista Networks has tens of smaller customers who are starting to do AI pilots with the company that typically involve a few hundred GPUs; these customers go to Arista Networks for AI trials because they want best-of-breed reliability and performance

We have tens of smaller customers who are starting to do AI pilots…

…They’re about to build an AI cluster. It’s a reasonably small size, not classified in thousands or 10 thousands. But you’ve got to start somewhere. So they started about a few hundred GPUs, would you say?…

…The AI cloud we talked about, they tend to be smaller, but it’s a representation of the confidence the customer has. They may be using other GPUs, servers, et cetera. But when it comes to the mission critical networks, they’ve recognized the importance of best-of-breed reliability, availability, performance, no loss and the familiarity with the data center is naturally leading to pilots and trials on the AI side with us.

Arista Networks’ management classifies its TAM (total addressable market) within AI as how much of Infiniband will move to Ethernet and it’s far larger than the AI-related revenue of $750 million that management has guided for in 2025

The TAM is far greater than the $750 million we've signed up for. And remember, that's early years. But that consists of our data center TAM and our AI TAM, which we count in a more narrow fashion as how much of InfiniBand will move to Ethernet on the back end. We don't count the AI TAM that's already in the front end, which is part and parcel of our data center.

Arista Networks’ management continues to see its large customers preferring to spend on AI, but is also seeing classic cloud continue to be an important part of its business and they believe the demand for classic cloud infrastructure will eventually rebound once the AI models are more established

We saw that last year. We saw that there was a lot of pivot going on from the classic cloud, as I like to call it, to the AI in terms of spend. And we continue to see favorable preferences to AI spend in many of our large cloud customers. Having said that, at the same time, simultaneously, we are going through a refresh cycle where many of these customers are moving from 100 to 200 or 200 to 400 gig. So while we think AI will grow faster than cloud, we’re betting on classic cloud continuing to be an important aspect of our contributions…

… I would say there’s such a heavy bias towards — in the Cloud Titans towards training and super training and the bigger and better the GPUs, the billion parameters, the OpenAI, ChatGPT and [indiscernible] that you’re absolutely right that at some level, the classic cloud, what you call traditional, I’m still calling classic, is a little bit neglected last year and this year. Having said that, I think once the training models are established, I believe this will come back, and it will sort of be a vicious cycle that feeds on each other. But at the moment, we’re seeing more activity on the AI and more moderate activity on the cloud.

Arista Networks’ management thinks that as AI networking moves towards Ethernet, it will be difficult to distinguish between front-end and back-end networks

It’s going to become difficult to distinguish the back end from the front end when they all move to Ethernet. For this AI center, as we call it, is going to be a conglomeration of both the front and the back. So if I were to fast forward 3, 4 years from now, I think the AI center is a supercenter of both the front end and the back end. So we’ll be able to track it as long as there’s GPUs and strictly training use cases. But if I were to fast forward, I think there may be many more edge use cases, many more inference use cases and many more small-scale training use cases which will make that distinction difficult to make.

Arista Networks’ management sees NVIDIA more as a friend than a competitor despite NVIDIA trying to compete with the company with the Spectrum-X switches; management rarely sees Spectrum-X as a competing technology in the deals Arista Networks is working on; management feels good about Arista Networks’ win rate

[Question] If you’re seeing Spectrum-X from NVIDIA? And if so, how you’re doing against it?

[Answer] When you say competitive environment, it's complicated with NVIDIA because we really consider them a friend on the GPUs as well as the NICs, so not quite a competitor. But absolutely, we will compete with them on the Spectrum switch. We have not seen the Spectrum except in one customer where it was bundled. But otherwise, we feel pretty good about our win rate and our success for a number of reasons: great software, portfolio of products and an architecture that has proven performance, visibility features, management capabilities, high availability. And so I think it's fair to say that if a customer were bundling with their GPUs, then we wouldn't see it. If a customer were looking for best of breed, we absolutely see it and win it.

When designing GPU clusters for AI, a network design-centric approach has to be taken

If you look at an AI network design, you can look at it through 2 lenses: just through the compute, in which case you look at scale-up and you look at it strictly through how many processors there are. But when we look at an AI network design, it's the number of GPUs or XPUs per workload. Distribution and location of these GPUs are important, and whether the cluster has multiple tenants and how it's divvied up between the host, the memory, the storage and the wide area plays a role, as do the optimizations to make on the applications for the collective communication libraries for specific workloads, levels of resilience, how much redundancy you want to put in, active, link-based load balancing, types of visibility. So the metrics are just getting more and more. There are many more permutations and combinations. But it all starts with number of GPUs, performance and billions of parameters, because the training models are definitely centered around job completion time. But then there are multiple concentric circles of additional things we have to add to that network design. All this to say, a network design-centric approach has to be taken for these GPU clusters. Otherwise, you end up being very siloed.

Arista Networks’ management is seeing huge clusters of GPUs – in the tens of thousands to hundreds of thousands – being deployed in 2025

Let me just remind you of how we are approaching 2024, including Q4. Last year, trials; so small, it was not material. This year, we're definitely going into pilots. Some of the GPUs, and you've seen this in public blogs published by some of our customers, have already gone from tens of thousands to 24,000 and are heading towards 50,000 GPUs. Next year, I think there will be many of them heading into tens of thousands, aiming for 100,000 GPUs. So I see next year as more promising.

ASML (NASDAQ: ASML)

ASML’s management sees no change to the company’s outlook for 2024 from what was mentioned in the 2023 Q4 earnings call and 2024 Q1 earnings call, with AI-related applications still driving demand

Our outlook for the full year 2024 has not changed. We expect revenue similar to last year. As indicated before, and based on our current guidance, the second half of the year is expected to be significantly higher than the first half. This is in line with the industry's continued recovery from the downturn. Our guidance on market segments is similar to what we've stated in previous quarters…

…We currently see strong developments in AI driving most of the industry recovery and growth, ahead of other end market segments.

ASML’s management sees AI driving the majority of recovery in the semiconductor industry in both Logic and Memory chips; AI’s positive effects on semiconductor industry demand will start showing up in 2025 and management expects that to continue into 2026; Memory chips used in AI require high-bandwidth memory and so have higher density of DRAM; ASML’s management sees other non-AI segments as being behind in terms of recovery, but they do expect recovery eventually

We currently see strong developments in AI driving most of the industry recovery and growth, ahead of other end market segments…

… I think AI is driving, I would say, right now, the biggest part of the recovery. This is true for Logic. This is true for Memory. Roger just commented on Logic. I think you know that for high-bandwidth memory, those products drive more demand, more wafer demand, because we are looking basically at a higher density of DRAM on those products. And we are looking at something that, of course, will take its course over several months. So we start to see the positive effect of that for 2025. We expect that to continue into 2026, both for Memory and for Logic. And at some point of time, I also mentioned that maybe the other segments are a bit behind in terms of recovery.

So a lot of the capacity today, either Logic or DRAM capacity, will be [indiscernible] those AI products. As the other segments recover, we also expect potentially some capacity to be needed there.

ASML’s management thinks DRAM for AI memory chips will continue to see an increasing use of EUV lithography at each technology node; management also see opportunity for DRAM to use High-NA EUV lithography systems in 2025 or 2026

On DRAM, I think there also, I'll be very consistent with the information we have shared with you previously. So there, we see an increase of EUV use on every node. I think this is a trend that will continue, at least in the foreseeable future. Of course, it's always more difficult to make forecasts on nodes or technology that are still being defined by a customer. But that logic is still in place. I think you have seen also in DRAM that at this point of time, all customers are using EUV in production. I think the last customer was very public about that recently.

ASML’s management is not seeing much revenue made on AI at the moment, but it’s still seeing a lot of investment made for AI and these investments require a lot of semiconductor manufacturing capacity

I think what we have seen with AI is a major investment from many companies in supercomputers and the ability basically to train models. What we still miss in AI, I think, is the emergence of end products. So I think today, there's not much revenue made on AI; there's just a lot of investment. What we see is that this investment still requires a lot of capacity. I think you have seen some of our customers announcing also more capacity to be built before 2028.

Coupang (NYSE: CPNG)

Coupang’s Product Commerce segment had sequential and year-on-year improvement in gross profit in 2024 Q2, driven partly by the use of AI technologies

Product Commerce gross profit increased 26% year-over-year to over $1.9 billion, and a record gross profit margin of 30.3%. This represents a 310 basis points improvement over last year and 200 basis points over last quarter. Our margin improvement this quarter was driven by strong growth rates in categories with higher margin composition, as well as higher efficiencies across operations, including benefits from greater utilization of automation and technology, including AI. We also continue to benefit from further optimization in our supply chain, and the scaling of margin accretive offerings.

Datadog (NASDAQ: DDOG)

Datadog’s management classifies digital natives as SMBs and mid-market companies, and within digital natives, the AI natives are inflecting in usage growth that others are not

I would add that the digital natives are largely SMB and mid-market, they’re not enterprise. And even when you look at the digital native, there’s two stories, depending on whether you talk about the AI natives or the others. The AI natives are inflecting in a way that the others are not at this point. So today, we see this higher growth from AI natives and from traditional enterprises. And stable growth, but not accelerating, from the rest of the pack.  

Datadog’s management has announced general availability of LLM Observability for generative AI for companies to monitor, troubleshoot, and secure LLM (large language model) applications; WHOOP and AppFolio are two early adopters of LLM Observability; it’s still very early days for the LLM Observability product; management thinks a good proxy for the future demand for LLM Observability is the growth of the model providers and the AI-native companies; management expects the LLM market to change a lot over time because it’s still nascent; in order of LLMs to work, they need to be connected to other applications and it’s at that point where management thinks the LLMs need observability; customers that are currently using LLM Observability also use Datadog for the rest of their technology stack and it does not make sense for the customers to operate their LLM applications in isolation

In the next-gen AI space, we announced the general availability of LLM Observability, which allows application developers and machine learning engineers to efficiently monitor, troubleshoot and secure LLM applications. With LLM Observability, companies can accelerate the deployment of AI applications into production environments and reliably operate and scale them…

… It’s still early. We do see customers that are going increasingly into production, and we have a few of those. I mean, we named a couple as early customers of LLM Observability. I think the two we named were WHOOP, the fitness band; and AppFolio. And we see many more that are lining up and then are going to do that. But in the grand scheme of things, looking at the whole market, it’s still very early. I would say the best proxy you can get from the future demand there is the growth of the model providers and the AI natives because they tend to be the ones that currently are being used to provide AI functionality into other applications and largely in production environment. And so I always said they are the harbinger of what’s to come…

… [Question] When people are thinking about bringing on LLMs into their organization, do they want the observability product in place already? Or are they testing out LLMs and then bringing you on after the fact?

[Answer] We expect this market to change a lot over time because it is far from being mature. And so a lot of the things that might happen today in a certain way might happen 2 years in a very, very different form. That being said, the way it works typically is customers build applications using developer tools, and there’s a whole industry that has emerged around developer tools for — and playgrounds and things like that for LLM. And so they use not one, but 100 different things to do that, which is fairly similar to what you might find on the IDE side or code editor side for the more traditional development, which is lots of different, very fragmented environment on that side. When they start connecting the LLM to the rest of the application, then they start to need like visibility that includes the other components because the LLM doesn’t work in a vacuum, it’s plugged into a front end. It works with authentication and security. It works with — connects to other system databases in other services to get the data. And at that point, they need it to be integrated with the rest of the observability. For the customers that use our LLM Observability product, they use us for the rest — all the rest of their stack. And it would make absolutely no sense for them to operate their LLM in isolation completely separately and not have the visibility across the whole applications. So it’s — at that point, it’s a no-brainer that they need everything to be integrated in production.    

Datadog’s management has expanded Bits AI, Datadog’s AI copilot, with new capabilities, such as the ability to perform autonomous investigations

We also expanded Bits AI with new capabilities. As a reminder, Bits AI is Datadog's built-in AI copilot. In addition to being able to summarize incidents and answer questions, we previewed at DASH the ability for Bits AI to operate as an agent and perform autonomous investigations. With this capability, the AI proactively surfaces key information and performs complex tasks such as investigating alerts and coordinating a response.

Datadog’s management is hearing from all of Datadog’s customers that they are ramping experiments with AI with the goal of delivering business value with the technology; currently, 2,500 Datadog customers are using one or more of Datadog’s AI integrations for visibility into their use of AI; AI-native customers accounted for 4% of Datadog’s ARR in June 2024 (was 3.5% 2024 Q1); management thinks the percentage of ARR from AI-native customers will lose its relevance over time as AI usage becomes more widespread

Taking a step back and looking at our customer base, we continue to see a lot of excitement around AI technologies. All customers are telling us that they are leveling up on AI and ramping experimentation with the goal of delivering additional business value with AI. And we can see them doing this. Today, about 2,500 customers use one or more of our AI integrations to get visibility into their increasing use of AI. We also continue to grow our business with AI-native customers, which increased to over 4% of our ARR in June. We see this as a sign of the continued expansion of this ecosystem and of the value of using Datadog to monitor the production environment. I will note that over time, we think this metric will become less relevant as AI usage in production broadens beyond this group of customers.

Datadog’s management recently announced Toto, Datadog’s first foundational model for time-series forecasting; Toto delivered state-of-the-art performance on all 11 benchmarks; Toto’s capabilities come from the quality of Datadog’s training dataset; management sees Toto’s existence as evidence of the company’s ability to train, build, and incorporate AI models into its platform

We announced Toto, our first foundational model for time-series forecasting, which delivered state-of-the-art performance on all 11 benchmarks. In addition to the technical innovations devised by our research team, Toto derives its record performance from the quality of our training dataset and points to our unique ability to train, build and incorporate AI models into a platform that will meaningfully improve operations for our customers.
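For readers unfamiliar with how time-series forecasting benchmarks keep score: a model's point forecasts are compared against held-out actuals (and against naive baselines) using error metrics such as sMAPE. A toy sketch of the scoring step, which says nothing about Toto's internals:

```python
# How forecasting benchmarks keep score (toy illustration; not Toto's
# internals): compare forecasts against held-out actuals with sMAPE.
import numpy as np

def smape(actual: np.ndarray, forecast: np.ndarray) -> float:
    return 200 * np.mean(np.abs(forecast - actual) /
                         (np.abs(actual) + np.abs(forecast)))

t = np.arange(48)
series = 100 + 10 * np.sin(2 * np.pi * t / 24)  # toy metric with daily seasonality
history, actual = series[:24], series[24:]

naive = np.full(24, history[-1])  # baseline: repeat the last observation
seasonal = history                # baseline: repeat the last season

print(f"naive sMAPE:    {smape(actual, naive):.2f}")
print(f"seasonal sMAPE: {smape(actual, seasonal):.2f} (lower is better)")
```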

Datadog’s management continues to believe that digital transformation, cloud migration, and AI adoption are long-term growth drivers of Datadog’s business

Overall, we continue to see no change to the multiyear trend towards digital transformation and cloud migration. We are seeing continued experimentation with new technologies, including next-gen AI, and we believe this is just one of the many factors that will drive greater use of the cloud and next-gen infrastructure.

Datadog’s management thinks the emergence of AI has led to large enterprises realising they need to be on the cloud sooner rather later; management sees a lot of growth in the cloud migration of enterprises as it’s really early in their transition

Some of the strength we see today has to do with the fact that the emergence of AI has reaffirmed for them the need to go to the cloud sooner rather than later, so they can build the right kind of applications and have the right kind of data available to give those applications…

…I’d point you to the numbers we shared, I think, 2 quarters ago in terms of our enterprise penetration and the average size of our contracts with enterprises, which are still fairly small. Like there’s a lot of runway there. And the growth of those accounts is not predicated on the growth of the enterprise themselves. They’re still early in their transformation.

Fiverr (NYSE: FVRR)

Fiverr’s management is deepening the integration of Neo, the company’s AI assistant, into its marketplace experience; management realised that not everyone wants the outright chatbot experience on its marketplace, so Neo only pops up when friction arises to provide guidance for buyers who are navigating Fiverr’s catalogue of talent; management wants Neo to be a personal assistant throughout the Fiverr purchasing experience and also answer buyers’ questions

The second theme of our Summer Product Release is deepening the integration of Neo, Fiverr's AI tool, throughout the marketplace experience. As Gen-AI applications quickly shift consumers' Internet behavior and expectations, we want to stay ahead of the curve to build a more personable experience on Fiverr. At the same time, tests and data in the past 6 months have shown that not everyone prefers the outright chatbot experience when it comes to shopping. So, our strategy for Neo is to incorporate it as an assistant throughout the funnel to help customers when friction arises. For search, Neo provides the guidance you need to navigate Fiverr's massive catalog of services and talent. And it is trained to understand customers' past transactions and preferences to provide the most relevant recommendations. When it comes to project briefing, having Neo is like having a strategist by your side. It transforms customers' ideas into a structured brief document that not only looks good, but also delivers better business results. Neo can also help customers write more detailed reviews faster by generating content based on transactions and providing language assistance…

The experimentation that we’ve done with Neo as a personal assistant within the inbox, which is the — which was the first version of doing it, taught us a lot about how our customers are actually using it and how it improves the conversion in briefing. It allows buyers to complete, and it leads to higher conversion as a result. And so, the idea here is that we’re graduating Neo to get out of the inbox and essentially being integrated in all of our experience. Right now, it’s being rolled out gradually because we want to test its accuracy and performance. But essentially, you can fund it as a personal assistant throughout the experience. So, it allows customers to search better, to be more accurate about their needs, and as a result get much higher quality match.

But it also has awareness about where it exists. So, if you’re looking at a specific page, you can ask questions about that page. So, it helps people make decisions and get to what they’re looking for better. The same goes with the integration in briefing. If customers have a brief premade then they can just upload it, and we help make that brief even better. But if they don’t, then the technology that is behind Neo actually helps them write a better, more accurate brief and again, as a result of that, get matched with a much more specific cohort of potential talent that can do the job.

Fiverr’s management continues to believe that AI will be a multiyear tailwind for the company and that AI will have a net positive impact on the company’s business; the deterioration seen in the simple services categories has improved, for whatever reasons (unsure if it’s a one-off event from low base, as management also spoke about the low-base effect); around 20% of Fiverr’s GMV comes from simple jobs 

We are in the early innings of unleashing the full potential of AI in our marketplace, and we believe it will be a multiyear tailwind for us to drive product innovation and growth…

…We also see AI continuing to have a net positive impact on our business. It is important to note that we are starting to see stabilizing and improving trends in simple services…

Now several quarters in, we are actually seeing this in our data. So, for example, writing and translation is the vertical with the biggest exposure to AI impact. In Q2, we're actually seeing traffic in that vertical improve 10 percentage points in terms of year-over-year growth rate compared to Q1…

That said, with us now opening the professions catalog and hourly contracts, this will open up new funnels and create growth opportunities, especially for complex services categories. And remember that we have over 700 categories. So, our exposure to specific categories is relatively low, and seasonal trends in category spend are a regular thing in our line of business…

…When we think about the overall mix complex is in the mid-30s of GMV and simple is about 20%.

Mastercard (NYSE: MA)

Mastercard’s management intends to further embed AI into Mastercard’s value-added services, particularly in data analytics, fraud, and cybersecurity, because they are seeing companies asking for these solutions; the embedding of AI into the value-added services portfolio does not involve changing the existing portfolio, but augmenting them with a higher weightage to AI

We will also enhance and expand our value-added services, such as in data analytics, fraud and cybersecurity, particularly as we further embed AI into our products and services…

…It’s pretty clear that on the services side, as far as the areas of focus are concerned, we continue to be guided by underlying strong secular trends, and one of that is for really any of our corporate partners and B2B partners that they want to make sense of their enterprise data and make better decisions. And how do we do that? We do that by leveraging our artificial intelligence solutions, our set of assistants, a set of fine-tuning, how they could have more personalized suggestions to their end consumers, et cetera, et cetera. That’s one part, help our customers make better decisions, not changing, but very specific solutions with a higher weightage to AI.

And then on the security side and the cybersecurity side, all of this data has to be kept safe. We kept saying that for years. That’s a strong secular trend in itself and making sure that we fine-tune our solutions here. We’ve got to move faster because the bad guys are also moving faster, and they have the similar technology tools at their hand now. So leveraging artificial intelligence, an example I gave last quarter around Decision Intelligence Pro, that’s predicting what is the next card that might be frauded, before it actually happens. Those kind of solutions provide significant lift to our customers in terms of preventing fraud, obviously giving peace of mind to their consumers and overall helping our business, and it’s a close link to our payments — underlying payments business.

Mastercard has been using AI technology successfully for the better part of a decade, in areas such as fraud prevention; management thinks generative AI gives the opportunity for Mastercard to understand more data faster; management has used generative AI to create artificial data sets to train Mastercard’s discriminative AI models; management has also used generative AI to build a new product, such as Decision Intelligence Pro; Decision Intelligence Pro brings a 20% improvement in fraud prediction; management believes that generative AI will increase in penetration within Mastercard’s fraud and cybersecurity products 

 AI isn’t actually anything new for us. So we’ve — for the better part of a decade, we’ve been using AI. This is a discrete machine learning technology to really predict where is the next problem, and analyze data of — that we have and the data that our customers have to prevent fraud. So that’s been very successful.

As far as generative AI is concerned, an evolving technology, there's obviously an opportunity for us to understand more data in a quicker way. And we have used that initially to train our AI models, using generative AI to create artificial data sets for our discriminative AI models. So that was the first step. And then we went into putting out a new set of products. I mentioned Decision Intelligence Pro. Decision Intelligence is a product that we've had for a long time, machine learning-driven, that was predicting fraud outcomes, and now we're using more data sets that are externally available, stolen card data and so forth, to understand where fraud vulnerabilities might be. The lift is tremendous: 20%, we see, in terms of effectiveness out of that product. So we are starting to see demand, for the whole reason of the vulnerabilities that I talked about…

…I believe that the penetration of generative AI in our fraud and cybersecurity product set will only expand.
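The two-step pattern described here, a generative model manufacturing artificial examples that then train a discriminative classifier, is standard practice for rare-event problems like fraud, where real positive labels are scarce. A miniature sketch with made-up data (my own illustration, not Mastercard's stack):

```python
# Miniature sketch of synthetic-data augmentation for fraud detection
# (made-up data; not Mastercard's stack): fraud rows are rare, so fit a
# simple generative model to them and sample extra rows before training.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
legit = rng.normal(0.0, 1.0, size=(5000, 4))  # plentiful legitimate transactions
fraud = rng.normal(2.0, 1.0, size=(25, 4))    # scarce labelled fraud

# "Generative" step: model fraud features as a Gaussian, sample new rows.
mu, sigma = fraud.mean(axis=0), fraud.std(axis=0)
synthetic = rng.normal(mu, sigma, size=(975, 4))

X = np.vstack([legit, fraud, synthetic])
y = np.array([0] * 5000 + [1] * 1000)  # 25 real + 975 synthetic positives

clf = LogisticRegression(max_iter=1000).fit(X, y)  # discriminative model
probe = rng.normal(2.0, 1.0, size=(1, 4))          # a fraud-like transaction
print(f"predicted fraud probability: {clf.predict_proba(probe)[0, 1]:.2f}")
```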

Mercado Libre (NASDAQ: MELI)

MercadoLibre has been putting a lot of resources into AI and generative AI; management sees many ways AI can help the commerce business, such as producing better ways for consumers to look at product reviews, enhancing product pictures, generating responses on sellers' behalf when sellers are unavailable, and improving the product search experience for consumers; MercadoLibre has 16,000 developers and they are using AI to improve productivity; MercadoLibre is using AI in customer support to respond more cost-effectively and more accurately

We have been putting a lot of resources into AI and GenAI throughout the company, really. We don't have a centralized department of AI, but all of our different business units…

… On the commerce side, obviously, we are using AI to help us with recommendations, as you mentioned, but more important than that, with reviews, for instance: in the past, if you wanted to assess a product, you had to go through many different reviews; now we can consolidate those into a more efficient way of communicating the qualities and the prospects of a particular product. Pictures are another example: as you know, the pictures that get published might not be of the quality that we are expecting from our merchants, and we can improve those. Answers from sellers are another good example: in the past, if you were to buy something at 2 AM in the morning, you would have to wait until the next day to get an answer, which obviously affected significantly the conversion of the product. Now we can respond right away using GenAI models…

…On the developer side, we have 16,000 developers, who are also using AI tools to improve productivity, and that is also generating improvements and efficiencies in the way we deploy products throughout the company. And I think one of the most important projects that we have is on CX, customer experience and customer support, in which we are also applying AI tools that will help us to not only respond more efficiently in terms of cost, but also be more accurate in the way we manage those issues. These are some examples, but there are many others…

… You asked about search, and there we're using embeddings to power search and turn it into something more semantic, so it's easier to send users to what they're looking for.
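Semantic search of the kind described usually means embedding both the query and the catalogue items as vectors and ranking by similarity, so a query can match an item without sharing a keyword with it. A toy sketch with made-up vectors (a real system would produce them with a trained embedding model):

```python
# Toy sketch of embedding-based semantic search: rank catalogue items
# by cosine similarity to the query vector. The vectors here are made
# up; a production system would come from an embedding model.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

catalog = {
    "running shoes":  np.array([0.9, 0.1, 0.0]),
    "trail sneakers": np.array([0.8, 0.3, 0.1]),
    "espresso maker": np.array([0.0, 0.2, 0.9]),
}
query_vec = np.array([0.85, 0.2, 0.05])  # e.g. the query "sneakers for jogging"

for item in sorted(catalog, key=lambda k: cosine(query_vec, catalog[k]), reverse=True):
    print(f"{item}: {cosine(query_vec, catalog[item]):.3f}")
```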

Meta Platforms (NASDAQ: META)

Meta’s AI work continues to improve quality of recommendations on Facebook and Instagram, and drives engagement; the more general recommendation models Meta develops, the better the content recommendations get; Meta rolled out a unified video recommendation service across Facebook in 2024 Q2 for Reels, longer videos, and Live; Meta’s unified AI systems had already increased engagement on Facebook Reels more than Meta’s shift from using CPUs to GPUs; management wants to eventually have a single, unified AI recommendation system for all kinds of content across Meta’s social apps; the unified video recommendation service has encouraging early results, and management expects the relevance of video recommendations to increase

Across Facebook and Instagram, advances in AI continue to improve the quality of recommendations and drive engagement. And we keep finding that as we develop more general recommendation models, content recommendations get better. This quarter we rolled out our full-screen video player and unified video recommendation service across Facebook — bringing Reels, longer videos, and Live into a single experience. This has allowed us to extend our unified AI systems, which had already increased engagement on Facebook Reels more than our initial move from CPUs to GPUs did. Over time, I’d like to see us move towards a single, unified recommendation system that powers all of the content including things like People You May Know across all of our surfaces. We’re not there, so there’s still upside — and we’re making good progress here…

…On Facebook, we are seeing encouraging early results from the global roll-out of our unified video player and ranking systems in June. This initiative allows us to bring all video types on Facebook into one viewing experience, which we expect will unlock additional growth opportunities for short-form video as we increasingly mix shorter videos into the overall base of Facebook video engagement. We expect the relevance of video recommendations will continue to increase as we benefit from unifying video ranking across Facebook and integrating our next generation recommendation systems. These have already shown promising gains since we began using the new systems to support Facebook Reels recommendations last year. We expect to expand these new systems to support more surfaces beyond Facebook video over the course of this year and next year
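"Unified" here means one ranking model scoring every content format in a shared feature space, instead of a separate ranker per format whose outputs then have to be stitched together. A toy sketch of the idea (my own illustration, not Meta's architecture):

```python
# Toy sketch of a unified ranker (not Meta's system): one scoring
# function over shared features, whatever the content format.
from dataclasses import dataclass

@dataclass
class Candidate:
    item_id: str
    kind: str       # "reel" | "long_video" | "live"
    p_watch: float  # model-predicted probability of a watch
    p_share: float  # model-predicted probability of a share

def score(c: Candidate) -> float:
    # One value model for every format; the weights would be learned.
    return 0.7 * c.p_watch + 0.3 * c.p_share

feed = [
    Candidate("r1", "reel", 0.62, 0.10),
    Candidate("v9", "long_video", 0.55, 0.25),
    Candidate("l4", "live", 0.48, 0.05),
]
for c in sorted(feed, key=score, reverse=True):
    print(c.kind, c.item_id, f"{score(c):.3f}")
```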

In the past, advertisers would tell Meta the specific audience they wanted to reach, but over time, Meta could predict the interested audience better than the advertisers could, even though the advertisers still needed to come up with collateral; management thinks that AI will generate personalised collateral for advertisers in the coming years and all the advertiser needs to do is to tell Meta a business objective and a budget, and Meta will handle everything else; Meta's first generative AI ad features, such as image expansion and text generation, were used by more than 1 million advertisers in June 2024; Meta rolled out full image generation capabilities in Advantage+ in May 2024

It used to be that advertisers came to us with a specific audience they wanted to reach — like a certain age group, geography, or interests. Eventually we got to the point where our ads system could better predict who would be interested than the advertisers could themselves. But today advertisers still need to develop creative themselves. In the coming years, AI will be able to generate creative for advertisers as well — and will also be able to personalize it as people see it. Over the long term, advertisers will basically just be able to tell us a business objective and a budget, and we’re going to go do the rest for them. We’re going to get there incrementally over time, but I think this is going to be a very big deal…

…We’ve seen promising early results since introducing our first generative AI ad features – image expansion, background generation, and text generation – with more than one million advertisers using at least one of these solutions in the past month. In May, we began rolling out full image generation capabilities into Advantage+ creative, and we’re already seeing improved performance from advertisers using the tool. 

Meta’s management thinks that Meta AI, the company’s AI assistant feature, will be the most used AI assistant by end-2024; Meta AI is improving in intelligence and features quickly, and seems on track to be an important service; Meta AI’s current use cases include searching for information, role-playing difficult conversations, and creating images, but new use cases are likely to emerge; Meta AI has been used for billions of queries thus far; Meta AI has helped with WhatsApp retention and engagement; India has become the largest market for Meta AI; Meta AI is now available in 20 countries and 8 languages; management thinks that people who bet on the early indicators of Meta tend to do pretty well, and Meta AI is one of those early indicators that are signalling well; management wants to build a lot more functionality into Meta AI, but that will take a few years

Last quarter we started broadly rolling out our assistant, Meta AI, and it is on track to achieve our goal of becoming the most used AI assistant by the end of the year. We have an exciting roadmap ahead of things that we want to add, but the bottom line here is that Meta AI feels on track to be an important service and it’s improving quickly both in intelligence and features. Some of the use cases are utilitarian, like searching for information or role-playing difficult conversations before you have them with another person, and other uses are more creative, like the new Imagine Yourself feature that lets you create images of yourself doing whatever you want in whatever style you want. And part of the beauty of AI is that it’s general, so we’re still uncovering the wide range of use cases that it’s valuable for…

…People have used Meta AI for billions of queries since we first introduced it. We’re seeing particularly promising signs on WhatsApp in terms of retention and engagement, which has coincided with India becoming our largest market for Meta AI usage. You can now use Meta AI in over 20 countries and eight languages, and in the US we’re rolling out new features like Imagine edit, which allows people to edit images they generate with Meta AI…

… I think that the people who bet on those early indicators tend to do pretty well, which is why I wanted to share in my comments the early indicator that we had on Meta AI, which is, I mean, look, it’s early…

…I was talking before about we have the initial usage trends around Meta AI but there’s a lot more that we want to add, things like commerce and you can just go vertical by vertical and build out specific functionality to make it useful in all these different areas are eventually, I think, what we’re going to need to do to make this just as — to fulfill the potential around just being the ideal AI assistant for people. So it’s a long road map. I don’t think that this stuff is going to get finished in the next couple of quarters or anything like that. But this is part of what’s going to happen over the next few years as we build something that will, I think, just be a very widely used service. So I’m quite excited about that.

Meta’s management recently launched AI Studio, which allows anyone to create AIs that people can interact with; AI Studio is useful for creators who want to engage more with their communities, but can also be useful for anyone who wants to build their own AI agents, including businesses; management thinks every business in the future will have its own AI agent for customer interactions that drives sales and reduces costs; management expects Business AI agents to dramatically accelerate Meta’s business messaging revenue when the feature reaches scale

This week we launched AI Studio, which lets anyone create AIs to interact with across our apps. I think that creators are especially going to find this quite valuable. There are millions of creators across our apps — and these are people who want to engage more with their communities and their communities want to engage more with them — but there are only so many hours in the day. So now they’re going to be able to use AI Studio to create AI agents that can channel them to chat with their community, answer people’s questions, create content, and more. So I’m quite excited about this. But this goes beyond creators too. Anyone is going to be able to build their own AIs based on their interests or different topics that they are going to be able to engage with or share with their friends.

Business AIs are the other big piece here. We’re still in alpha testing with more and more businesses. The feedback we’re getting is positive so far. Over time I think that just like every business has a website, social media presence, and an email address, in the future I think that every business is also going to have an AI agent that their customers can interact with. Our goal is to make it easy for every small business, and eventually every business, to pull all their content and catalog into an AI agent that drives sales and saves them money. When this is working at scale, I expect it to dramatically accelerate our business messaging revenue.

The Llama family of foundation models is the engine that powers all of Meta’s AI-related work; in 2024 Q2, Meta released Llama 3.1, the first frontier-level open source model, and other new and industry-leading small and medium models; the Llama 3.1 405B model has better cost performance compared to leading closed models; management thinks Llama 3.1 will mark an inflection point for open source AI becoming the industry standard; Meta is already working on Llama 4 and management is aiming for it to be the most advanced foundation AI model when released in 2025; the Llama models are well-supported by the entire cloud computing ecosystem

The engine that powers all these new experiences is the Llama family of foundation models. This quarter we released Llama 3.1, which includes the first frontier-level open source model, as well as new and industry-leading small and medium-sized models. The 405B model has better cost performance relative to the leading closed models, and because it’s open, it is immediately the best choice for fine-tuning and distilling your own custom models of whatever size you need. I think we’re going to look back at Llama 3.1 as an inflection point in the industry where open source AI started to become the industry standard, just like Linux is…

…We’re already starting to work on Llama 4, which we’re aiming to be the most advanced in the industry next year…

… Part of what we’re doing is working closely with AWS, I think, especially did great work for this release. Other companies like Databricks, NVIDIA, of course, other big players like Microsoft with Azure, and Google Cloud, they’re all supporting this. And we want developers to be able to get it anywhere. I think that’s one of the advantages of an open source model like Llama is — it’s not like you’re locked into 1 cloud that offers that model, whether it’s Microsoft with OpenAI or Google with Gemini or whatever it is, you can take this and use it everywhere and we want to encourage that. So I’m quite excited about that.

Meta’s management is planning for the AI compute needs of the company for the next several years; management thinks the compute requirements for training Llama 4 will likely be 10x that of Llama 3, and future models will require even more; given long lead times to build compute capacity, management would rather risk overbuilding than being too late in realising there’s a shortfall; even as Meta builds compute capacity, management still remains focused on cost efficiency

We’re planning for the compute clusters and data we’ll need for the next several years. The amount of compute needed to train Llama 4 will likely be almost 10x more than what we used to train Llama 3 — and future models will continue to grow beyond that. It’s hard to predict how this will trend multiple generations out into the future, but at this point I’d rather risk building capacity before it is needed, rather than too late, given the long lead times for spinning up new infra projects. And as we scale these investments, we’re of course going to remain committed to operational efficiency across the company…

A few years ago, management thought holographic AR (augmented reality) technology would be ready before smart AI, but the reverse has happened; regardless, Meta is still well positioned for this reverse order; because of AI, Meta's smart glasses continue to be a bigger hit than management expected and supply cannot keep up with demand; Meta will continue to partner with EssilorLuxottica for the long term to build its smart glasses

A few years ago I would have predicted that holographic AR would be possible before smart AI, but now it looks like those technologies will actually be ready in the opposite order. We’re well-positioned for that because of the Reality Labs investments that we’ve already made. Ray-Ban Meta glasses continue to be a bigger hit sooner than we expected — thanks in part to AI. Demand is still outpacing our ability to build them, but I’m hopeful we’ll be able to meet demand soon. EssilorLuxottica has been a great partner to work with on this, and we’re excited to team up with them to build future generations of AI glasses as we continue to build our long term partnership.

AI is playing an increasingly important role in improving Meta’s marketing performance; the AI-powered Meta Lattice ad ranking architecture continued to drive ad performance and efficiency gains in 2024 Q2; Advantage+ Shopping campaigns are driving 22% higher return on ad spend for US advertisers; advertiser adoption of Meta’s advertising automation tools continue to expand; Meta has continued to increase the capabilities of Advantage+, such as expanding conversion types, and helping advertisers automatically select which ad format to serve after they upload multiple images and videos; Meta rolled out full image generation capabilities in Advantage+ in May 2024

The second part of improving monetization efficiency is enhancing marketing performance. We continue to be pleased with progress here, with AI playing an increasingly central role. We’re improving ad delivery by adopting more sophisticated modeling techniques made possible by AI advancements, including our Meta Lattice ad ranking architecture, which continued to provide ad performance and efficiency gains in the second quarter. We’re also making it easier for advertisers to maximize ad performance and automate more of their campaign set up with our Advantage+ suite of solutions. We’re seeing these tools continue to unlock performance gains, with a study conducted this year demonstrating 22% higher return on ad spend for US advertisers after they adopted Advantage+ Shopping campaigns. Advertiser adoption of these tools continues to expand, and we’re adding new capabilities to make them even more useful. For example, this quarter we introduced Flexible Format to Advantage+ Shopping, which allows advertisers to upload multiple images and videos in a single ad that we can select from and automatically determine which format to serve, in order to yield the best performance. We have also now expanded the list of conversions that businesses can optimize for using Advantage+ Shopping to include an additional 10 conversion types, including objectives like “add to cart”…

…In May, we began rolling out full image generation capabilities into Advantage+ creative, and we’re already seeing improved performance from advertisers using the tool. 

Monetisation for Meta’s AI products such as Meta AI or AI Studio will take years because management is following the same playbook they have had for years, which is to start a product, then take time to scale the product to a billion users before monetising; Meta’s management is a little different from other companies in terms of how they think about the time needed to monetise products

We have a relatively long business cycle of starting a new product, scaling it to something that reaches 1 billion people or more and only then really focusing on monetizing at scale. So realistically, for things like Meta AI or AI Studio, I mean, these are things that I think will increase engagement in our products and have other benefits that will improve the business and engagement in the near term. But before we’re really talking about monetization of any of those things by themselves, I mean, I don’t think that anyone should be surprised that I would expect that, that will be years, right?…

…And I think that, that’s something that is a little bit different about Meta in the way we build consumer products and the business around them than a lot of other companies that ship something and start selling it and making revenue from it immediately. So I think that’s something that our investors and folks thinking about analyzing the business, if needed, to always grapple with is all these new products, we ship them and then there’s a multiyear time horizon between scaling them and then scaling them into not just consumer experiences but very large businesses.

Meta’s ongoing capex investments in AI infrastructure is informed by the strong returns management has seen and expect to achieve in the future; management expects the returns from generative AI to take some time to appear, but they see signification monetisation opportunities that could be unlocked through the AI investments; Meta’s capital expenditures for AI infrastructure are done with flexibility in mind so that AI training capacity can also be redirected to generative AI inference and its ranking and recommendation systems, if needed; management is focused on improving cost efficiency of its AI workloads over time; Meta’s AI capex come in 2 buckets, core AI and generative AI (genAI), which are built to be fungible if needed; the core AI bucket is much more mature in driving revenue for Meta and management takes an ROI (return on investment) approach; the gen AI bucket is much earlier in revenue-generation-maturity but is expected to open up new revenue opportunities over time to deliver that ROI; it’s difficult for management to plan for Meta’s long-term capex trajectory

Our ongoing investment in core AI capacity is informed by the strong returns we’ve seen, and expect to deliver in the future, as we advance the relevance of recommended content and ads on our platforms. While we expect the returns from generative AI to come in over a longer period of time, we are mapping these investments against the significant monetization opportunities that we expect to be unlocked across customized ad creative, business messaging, a leading AI assistant, and organic content generation. As we scale generative AI training capacity to advance our foundation models, we will continue to build our infrastructure in a way that provides us with flexibility in how we use it over time. This will allow us to direct training capacity to gen AI inference, or to our core ranking and recommendation work when we expect that doing so would be more valuable. We will also continue our focus on improving the cost efficiency of our workloads over time…

… I would broadly characterize our AI investments into 2 buckets: core AI and gen AI. And the 2 are really at different stages as it relates to driving revenue for our businesses and our ability to measure returns. On our core AI work, we continue to take a very ROI-based approach to our investment here. We’re still seeing strong returns as improvements to both engagement and ad performance have translated into revenue gains, and it makes sense for us to continue investing here. Gen AI is where we’re much earlier, as Mark just mentioned in his comments. We don’t expect our gen AI products to be a meaningful driver of revenue in ’24. But we do expect that they’re going to open up new revenue opportunities over time that will enable us to generate a solid return off of our investment while we’re also open sourcing subsequent generations of Llama. And we’ve talked about the 4 primary areas that we’re focused here on the gen AI opportunities to enhance the core ads business, to help us grow in business messaging, the opportunities around Meta AI, and the opportunities to grow core engagement over time.

The other thing I would say is, we’re continuing to build our AI infrastructure with fungibility in mind so that we can flex capacity where we think it will be put to best use. The infrastructure that we build for gen AI training can also be used for gen AI inference. We can also use it for ranking and recommendations by making certain modifications like adding general compute and storage. And we’re also employing a strategy of staging our data center sites at various phases of development, which allows us to flex up to meet more demand and less lead time if needed while limiting how much spend we’re committing to in the outer years…

…We haven’t really shared an outlook sort of on the longer-term CapEx trajectory. In part, infrastructure is an extraordinarily dynamic planning area for us right now. We’re continuing to work through what the scope of the gen AI road maps will look like over that time. Our expectation, obviously again, is that we are going to significantly increase our investments in AI infrastructure next year, and we’ll give further guidance as appropriate. But we are building all of that CapEx, again with the factors in mind that I talked about previously, thinking about both how to build it flexibly so we can deploy to core AI and gen AI use cases as needed…

… There’s sort of a whole host of use cases for the life of any individual data center ranging from gen AI training at its outset to potentially supporting gen AI inference to being used for core ads and content ranking and recommendation and also thinking through the implications, too, of what kinds of servers we might use to support those different types of use cases.

Microsoft (NASDAQ: MSFT)

Microsoft’s management sees the AI platform shift as involving both knowledge and capital-intensive investments, similar to the Cloud platform shift; as Microsoft goes through the AI platform shift, management is focused on product innovation, and using customer demand signals and time to value to manage the cost structure dynamically

 I want to offer some broader perspective on the AI platform shift. Similar to the Cloud, this transition involves both knowledge and capital-intensive investments. And as we go through this shift, we are focused on 2 fundamental things. First, driving innovation across a product portfolio that spans infrastructure and applications, so as to ensure that we are maximizing our opportunity while in parallel, continuing to scale our cloud business and prioritizing fundamentals, starting with security. Second, using customer demand signal and time to value to manage our cost structure dynamically and generate durable long-term operating leverage.

Azure’s share gains accelerated in FY2024 (fiscal year ended 30 June 2024), driven by AI; Azure grew revenue by 29% in 2024 Q2 (was 31% in 2024 Q1), with 8 points of growth from AI services (was 7 points in 2024 Q1); Azure’s AI business has higher demand than available capacity; 50% of Azure AI users are also using a data meter within Azure, which is excellent for Azure

Starting with Azure. Our share gains accelerated this year driven by AI…

…Azure and other cloud services revenue grew 29% and 30% in constant currency, in line with expectations and consistent with Q3 when adjusting for leap year. Azure growth included 8 points from AI services, where demand remained higher than our available capacity…

…AI doesn’t sit on its own, right? So it’s just for — we have a concept of design wins in Azure. So in fact, 50% of the folks who are using Azure AI are also using a data meter. That’s very exciting to us because the most important thing in Azure is to win workloads in the enterprise. And that is starting to happen. And these are generational things once they get going with you. So that’s, I think, how we think about it at least when I look at what’s happening on our demand side. 

Azure added new AI accelerators from both AMD and NVIDIA, and its own in-house Azure Maia chips; Azure also introduced its own Cobalt 100 CPUs

We added new AI accelerators from AMD and NVIDIA as well as our own first-party silicon Azure Maia and we introduced new Cobalt 100, which provides best-in-class performance for customers like Elastic, MongoDB, Siemens, Snowflake and Teradata.

Azure AI offers the most diverse selection of models for customers; Azure AI now has 60,000 customers and average spend per customer continues to grow; Azure OpenAI started to provide access to GPT-4o and GPT-4o Mini in 2024 Q2; Azure OpenAI is being used by companies from diverse industries; Phi-3 within Azure AI offers small language models that are already being used by a wide range of companies; Models as a Service within Azure AI offers access to third-party models including open-sourced models and it is being used by a diverse range of large companies; paid Models as a Service customers doubled sequentially

With Azure AI, we are building out the app server for the AI wave providing access to the most diverse selection of models to meet customers’ unique cost, latency and design considerations. All up, we now have over 60,000 Azure AI customers up nearly 60% year-over-year and average spend per customer continues to grow.  Azure OpenAI service provides access to best-in-class frontier models, including as of this quarter GPT-4o and GPT-4o mini. It’s being used by leading companies in every industry, including H&R Block, Suzuki, Swiss Re, Telstra as well as digital natives like Freshworks, Meesho and Zomato. With Phi-3, we offer a family of powerful small language models, which are being used by companies like BlackRock, Emirates, Epic, ITC, Navy Federal Credit Union and others. And with Models as a Service, we provide API access to third-party models, including as of last week, the latest from Cohere, Meta and Mistral. The number of paid Models as a Service customers more than doubled quarter-over-quarter, and we are seeing increased usage by leaders in every industry from Adobe and Bridgestone to Novo Nordisk and Palantir.

Microsoft Fabric, an AI-powered data platform, now has more than 14,000 customers (was more than 11,000 in 2024 Q1)

Microsoft Fabric, our AI-powered next-generation data platform, now has over 14,000 paid customers, including leaders in every industry from Accenture and Kroger to Rockwell Automation and Zeiss, up 20% quarter-over-quarter. And this quarter, we introduced new first of their kind, real-time intelligence capabilities in Fabric, so customers can unlock insights on high-volume, time-sensitive data.

GitHub Copilot is the most widely adopted AI-powered developer tool; 77,000 organisations have adopted GitHub Copilot in just over 2 years since its general availability and the number of organisations is up 180% from a year ago; GitHub Copilot is driving GitHub’s overall growth; GitHub’s annual revenue run rate is $2 billion and Copilot accounted for more than 40% of GitHub’s revenue growth in FY2024; GitHub Copilot alone is already a larger business than the entire GitHub when Microsoft acquired it in 2018

GitHub Copilot is by far the most widely adopted AI-powered developer tool. Just over 2 years since its general availability, more than 77,000 organizations from BBVA, FedEx and H&M to Infosys and Paytm have adopted Copilot, up 180% year-over-year…

…Copilot is driving GitHub growth all up. GitHub annual revenue run rate is now $2 billion. Copilot accounted for over 40% of GitHub revenue growth this year and is already a larger business than all of GitHub was when we acquired it.

More than 480,000 organisations have used AI-features within Microsoft’s Power Platform (was more than 330,000 in 2024 Q1), and Power Platform has 48 million monthly active users (was 25 million in 2024 Q1)

We are also integrating generative AI across Power Platform, enabling anyone to use natural language to create apps, automate workflows or build a website. To date, over 480,000 organizations have used AI-powered capabilities in Power Platform, up 45% quarter-over-quarter. In total, we now have 48 million monthly active users of Power Platform, up 40% year-over-year.

The number of people using Copilot for Microsoft 365 daily at work nearly doubled sequentially; Copilot for Microsoft 365 customers increased more than 60% sequentially; the number of Copilot for Microsoft 365 customers with more than 10,000 seats more than doubled sequentially; Copilot Studio lets customers build custom Copilots for agentic work; 50,000 organisations have used Copilot Studio

Copilot for Microsoft 365 is becoming a daily habit for knowledge workers as it transforms work, workflow and work artifacts. The number of people who use Copilot daily at work nearly doubled quarter-over-quarter as they use it to complete tasks faster, hold more effective meetings and automate business workflows and processes. Copilot customers increased more than 60% quarter-over-quarter. Feedback has been positive with majority of enterprise customers coming back to purchase more seats, all up the number of customers with more than 10,000 seats more than doubled quarter-over-quarter, including Capital Group, Disney, Dow, Kyndryl, Novartis, and EY alone will deploy Copilot to 150,000 of its employees and we are going further adding agent capabilities to Copilot. New Team Copilot can facilitate meetings and create an assigned task. And with Copilot Studio customers can extend Copilot for Microsoft 365 and build custom Copilots that proactively respond to data and events using their own first and third-party business data. To date, 50,000 organizations from Carnival Corporation, Cognizant and Eaton to KPMG, Majesco and McKinsey have used Copilot Studio, up over 70% quarter-over-quarter.

DAX Copilot has been purchased by more than 400 healthcare organisations to date, up 40% sequentially; the number of AI-generated clinical reports has more than tripled

With DAX Copilot, more than 400 health care organizations, including Community Health Network, Intermountain, Northwestern Memorial Healthcare and Ohio State University Wexner Medical Center have purchased DAX Copilot to date, up 40% quarter-over-quarter and the number of AI-generated clinical reports more than tripled.

Microsoft introduced a new category of Copilot+ PCs in 2024 Q2; the Copilot+ PCs have a new system architecture designed to deliver breakthrough AI experiences; early reviews are promising

When it comes to devices, we introduced our new category of Copilot+ PCs this quarter. They are the fastest, most intelligent Windows PCs ever. They include a new system architecture designed to deliver best-in-class performance and breakthrough AI experiences. We are delighted by early reviews, and we are looking forward to the introduction of more Copilot+ PCs powered by all of our silicon and OEM partners in the coming months.

More than 1,000 paid customers used Copilot for security; Microsoft now has 1.2 million security customers and over 800,000 of them use 4 or more workloads, up 25% from a year ago

Over 1,000 paid customers used Copilot for security, including Alaska Airlines, Oregon State University, Petrofac, Wipro, WTW, and we are also securing customers' AI deployments with updates to Defender and Purview. All up, we now have 1.2 million security customers; over 800,000, including Dell Technologies, Deutsche Telekom and TomTom, use 4 or more workloads, up 25% year-over-year.

Combined revenue of Bing, Edge, and Copilot, excluding traffic acquisition costs, was up 19% year-on-year and management said Bing and Edge took share; management is applying generative AI to Bing to test a new generative search experience, whose aim is to create dynamic responses while still driving clicks to publishers

We are ensuring that Bing, Edge and Copilot collectively are driving more engagement and value to end users, publishers and advertisers. Our overall revenue ex-TAC increased 19% year-over-year and we again took share across Bing and Edge. We continue to apply Generative AI to pioneer new approaches to how people search and browse. Just last week, we announced we are testing a new generative search experience, which creates a dynamic response to users’ query while maintaining click share to publishers. 

Consumers have used Copilot for the web to create more than 12 billion images and conduct more than 13 billion chats to date, up 150% since the start of 2024

We continue to drive record engagement with Copilot for the web, consumers have used Copilot to create over 12 billion images and conduct 13 billion chats to date, up 150% since the start of the calendar year.

Microsoft is using AI in its Performance Max advertising tool to create and optimise ads for advertisers, increasing their advertising ROI (return on investment)

We are helping advertisers increase their ROI, too. We have seen positive response to Performance Max, which uses AI to dynamically create and optimize ads and Copilot and Microsoft ad platform helps marketers create campaigns and troubleshoot using natural language.

Microsoft’s capex in 2024 Q2 (FY2024 Q4) and the whole of FY2024 are basically for AI and cloud, and it can be split roughly 50-50 into (1) data centers and (2) servers consisting of GPU/CPUs; management sees the capex for the data centers as providing support for monetisation over the next 15-plus years; the capex for GPUs and CPUs are driven by demand signals; the demand signals that management is seeing include Microsoft 365 Copilot demand, GitHub Copilot demand, and Azure AI growth; Microsoft can be spending on the data centres first, because they have long lead times, without spending on the GPUs and CPUs if the demand signals no longer persist, moreover, revenue growth will not be affected by the throttling of GPU/CPU spending; part of the capex is for AI training, but management will be scaling training only if they see demand; the capex on the data centres itself is really flexible because Microsoft has built a consistent architecture for its technological infrastructure

Capital expenditures, including finance leases, were $19 billion, in line with expectations and cash paid for PP&E was $13.9 billion. Cloud and AI-related spend represents nearly all of our total capital expenditures. Within that, roughly half is for infrastructure needs where we continue to build and lease data centers that will support monetization over the next 15 years and beyond. The remaining Cloud and AI-related spend is primarily for servers, both CPUs and GPUs to serve customers based on demand signals. For the full fiscal year, the mix of our Cloud and AI-related spend was similar to Q4…

…So when I think about what’s happening with M365 Copilot as perhaps the best Office 365 or M365 suite we have had, the fact that we’re getting recurring customers, so our customers coming back buying more seats. So GitHub Copilot now being bigger than even GitHub when we bought it. What’s happening in the contact center with Dynamics. So I would say — and obviously, the Azure AI growth, that’s the first place we look at. That then drives bulk of the CapEx spend, basically, that’s the demand signal because you got to remember, even in the capital spend, there is land and there is data center build, but 60-plus percent is the kit, that only will be bought for inferencing and everything else if there is demand signal, right? So that’s, I think, the key way to think about capital cycle even. The asset, as Amy said, is a long-term asset, which is land and the data center, which, by the way, we don’t even construct things fully, we can even have things which are semi-constructive, we call Kohl’s shelves and so on. So we know how to manage our CapEx spend to build out a long-term asset and a lot of the hydration of the kit happens when we have the demand signal. 

There is definitely spend for training. Even there, of course, we will only be scaling training as we see the demand accrue in any given period in time…

…Being able to maybe share a little more about that when we talked about roughly half of FY '24's total capital expense as well as half of Q4's expense, it's really on land and build and finance leases, and those things really will be monetized over 15 years and beyond. And they're incredibly flexible because we've built a consistent architecture, first with the Commercial Cloud and second with the Azure Stack for AI, regardless of whether the demand is at the platform layer or at the app layer or through third parties and partners or, frankly, our first-party SaaS, it uses the same infrastructure. So it's a long-lived flexible asset…

…Could we see sort of consistent revenue growth without maybe what you would say is more of this sort of elevated capital expense number or something that continues to accelerate. And the answer to that is yes because there’s 2 different pieces, right? You’re seeing half of this go toward long-term builds that Satya mentioned, the pace at which we fill those builds with CPUs or GPUs will be demand-driven. And so if we see differences in demand signal, we can throttle that investment on the CPU side, which we’ve done for I guess, a long time at this point, as I reflect, and we’ll use all that same learning and demand signal understanding to do the same thing on the GPU side. And so you’re right that you could see relatively consistent revenue patterns and yet see these inconsistencies and capital spend quarter-to-quarter…

…We think about it in terms of what’s the total percentage of cost that goes into each line item, land which obviously has a very different duration and a very different lead time. So those are the other 2 considerations. We think about lead time and duration of the asset. Land, network, construction, the system or the kit and then the ongoing cost. And so if you think about it that way, then you know how to even adjust, if you will, the capital spend based on demand signal.
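
To put rough numbers on the 50-50 split described above (my own arithmetic based on the disclosed figures, not a breakdown Microsoft provided): of the US$19 billion in quarterly capital expenditures, roughly half, or around US$9.5 billion, went to land, data center builds, and finance leases that management expects to monetise over 15-plus years, while the remaining roughly US$9.5 billion went to the CPU and GPU servers that are bought against demand signals. Note that Satya Nadella separately described "60-plus percent" of capital spend as the server "kit", so these halves should be treated as approximations.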

For Azure’s expected growth of 28%-29% in 2024 Q3 (FY2025 Q1), management expects consumption trends from 2024 Q2 (FY2024 Q4) to continue through FY2025 H1 and the consumption trends include capacity-constrained AI-demand as well as non-AI growth; management expects Azure’s growth to accelerate in FY2025 H2, driven by increase in AI capacity to meet growing demand

 In Azure, we expect Q1 revenue growth to be 28% to 29% in constant currency. Growth will continue to be driven by our consumption business, inclusive of AI, which is growing faster than total Azure. We expect the consumption trends from Q4 to continue through the first half of the year. This includes both AI demand impacted by capacity constraints and non-AI growth trends similar to June. Growth in our per user business will continue to moderate. And in H2, we expect Azure growth to accelerate as our capital investments create an increase in available AI capacity to serve more of the growing demand…

… Capacity constraints, particularly on AI and Azure will remain in Q4 and will remain in H1. 

When Microsoft transitioned to the cloud (in the late 2000s and early 2010s), it was rolled out geography by geography, whereas this current AI platform shift is done globally straight away; Microsoft’s consistent technological infrastructure helps its current AI platform shift achieve faster margin improvement compared to the shift to cloud

You can see what we’re doing and focused on is building out this network in parallel across the globe. Because when we did this last transition, the first transition to the Cloud, which seems a long time ago sometimes. It rolled out quite differently. We rolled out more geo by geo and this one because we have demand on a global basis, we are doing it on a global basis, which is important. We have large customers in every geo… 

…[Question] With Cloud, it took time for margins to improve. It looks like with AI, it’s happening quicker. Can you give us a sense of how you think about the margin impact near term and long term from all the investment on AI?

[Answer] To answer the second half of your question on margin improvement, looking different than it did through the last cloud cycle. That’s primarily for a reason I’ve mentioned a couple of times. We have a consistent platform. So — because we’re building to on Azure AI stack, we don’t have to have multiple infrastructure investments. We’re making one. We’re using that internally first party, and that’s what we’re using with customers to build on as well as ISVs. So it does, in fact, make margins start off better and obviously scale consistently.

Management sees generative AI as fundamentally just being software, and it is translating into growth for Microsoft’s SaaS (software-as-a-service) products; management sees the growth in the usage of Microsoft’s software products as a healthy sign of AI adoption

[Question] How should we think about what it’s going to take for GenAI to become more real across the industry and for it to become more visible within your SaaS offerings?

[Answer] At the end of the day, GenAI is just software. So it is really translating into fundamentally growth on what has been our M365 SaaS offering with a newer offering that is the Copilot SaaS offering, which today is on a growth rate that’s faster than any other previous generation of software we launched as a suite in M365. That’s, I think, the best way to describe it. I mean the numbers I think we shared even this quarter are indicative of this, Mark. So if you look at it, we have both the landing of the seats itself quarter-over-quarter that is growing 60%, right? That’s a pretty good healthy sign. The most healthy sign for me is the fact that customers are coming back there. That is the same customers with whom we landed the seats coming back and buying more seats. And then the number of customers with 10,000-plus seats doubled, right? It’s 2x quarter-over-quarter. That, to me, is a healthy SaaS core business.

Microsoft has dealt with AI capacity constraints by working with third parties who are happy to help Microsoft extend the Azure platform

We’ve talked about now for quite a few quarters, we are constrained on AI capacity. And because of that, actually, we’ve, to your point, have signed up with third parties to help us as we are behind with some leases on AI capacity. We’ve done that with partners who are happy to help us extend the Azure platform, to be able to serve this Azure AI demand. 

Netflix (NASDAQ: NFLX)

Netflix has been using AI (artificial intelligence) and ML (machine learning) for many years to improve the content discovery experience and drive more engagement, and management thinks GenAI (generative AI) has great potential to improve these efforts; but it’s also important ultimately for Netflix to have great content

 We’ve been using similar technologies, AI and ML, for many years to improve the discovery experience and drive more engagement through those improvements. We think that generative AI has tremendous potential to improve our recommendations and discovery systems even further. We want to make it even easier for people to find an amazing story that’s just perfect for them in that moment. But I think it’s also worth noting that the key to our success stacks, right, it’s quality at all levels. So it’s great movies, it’s great TV shows, it’s great games, it’s great live events, and a great and constantly improving recommendation system that helps unlock all of that value for all of those stories.

Management is unsure how AI will specifically impact content creation, but they think AI will result in a great set of creator tools, as there has been a long history of technology improving the content creation process; management thinks that when it comes to content creation, great story-telling is still the most important thing, even as content creators experiment with AI

But I think it’s also worth noting that the key to our success stacks, right, it’s quality at all levels. So it’s great movies, it’s great TV shows, it’s great games, it’s great live events, and a great and constantly improving recommendation system that helps unlock all of that value for all of those stories. nd one thing that’s sure, if you look back over 100 years of entertainment, you can see how great technology and great entertainment work hand in hand to build great, big businesses. You can look no further than animation. Animation didn’t get cheaper, it got better in the move from hand-drawn to CG animation. And more people work in animation today than ever in history. So I’m pretty sure that there’s a better business and a bigger business in making content 10% better than it is making it 50% cheaper…

…I think that shows and movies, they win with the audience when they connect. It’s in the beauty of the writing. It’s in the chemistry of the actors. It’s in the plot, the surprise and the plot twist, all those things…

…So my point is they're looking to connect. So we have to focus on the quality of the storytelling. There's a lot of filmmakers and a lot of producers experimenting with AI today. They're super excited about how useful a tool it can be. And we got to see how that develops before we can make any meaningful predictions on what it means for anybody. But our goal remains unchanged, which is telling great stories.

Nu Holdings (NYSE: NU)

Nu Holdings recently acquired Hyperplane, a provider of AI solutions in the financial services space; Hyperplane's AI platform improved the performance of even Nu Holdings' most advanced machine learning models when utilising a foundation model focused on financial services that incorporated Nu Holdings' own unstructured data

I wanted to highlight our recently announced acquisition of Hyperplane. Hyperplane is a Silicon Valley-based leader in AI-powered solutions for the financial services space. As we tested Hyperplane's platform on our vast amount of data, we were impressed by the opportunity to meaningfully improve performance of even our most advanced machine learning models by using a financial services focused foundation model that included our own unstructured data. We're very excited to welcome the Hyperplane team on board and see them as a key part of our AI strategy in the foreseeable future.

Shopify (NASDAQ: SHOP)

Shopify’s management believes the company can continue to post operating leverage, partly through the internal use of AI to drive productivity

We believe that we can continue to drive operating leverage through 4 key things: disciplined growth in headcount, which we have kept essentially flat for 5 quarters and where we expect we can keep head count growth well below revenue growth; strategic returns-based marketing to support and sustain our long-term revenue growth; internal use of AI and automation to drive productivity; and leveraging and continuing to enhance our internally-built GSD and Shopify OS systems, which allow us to smartly aim the product development work and size the team for maximum impact and efficiency.

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

TSMC’s capital expenditure is always in anticipation of growth in future years; capex for 2024 is now expected to be US$30 billion to US$32 billion (2023’s capex was US$30.4 billion), up at the low-end from commentary given in the 2024 Q1 earnings call; most of TSMC’s capex are for advanced process technologies; management sees strong structural AI-related demand and is willing to invest to support its customers

Every year, our CapEx is spent in anticipation of the growth that will follow in the future years, and our CapEx and capacity planning is always based on the long-term market demand profile. As the strong structural AI-related demand continues, we continue to invest to support our customers' growth. We are narrowing the range of our 2024 capital budget to be between USD 30 billion and USD 32 billion as compared to USD 28 billion to USD 32 billion previously. Between 70% and 80% of the capital budget will be allocated for advanced process technologies. About 10% to 20% will be spent for specialty technologies, and about 10% will be spent for advanced packaging, testing, mask-making and others. At TSMC, a higher level of capital expenditures is always correlated with the higher growth opportunities in the following years.
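
For context, applying those percentages to the roughly US$31 billion midpoint of the narrowed budget gives a rough dollar breakdown (this is my own back-of-the-envelope arithmetic, not TSMC's disclosure): around US$21.7 billion to US$24.8 billion for advanced process technologies (70% to 80% of US$31 billion), around US$3.1 billion to US$6.2 billion for specialty technologies (10% to 20%), and around US$3.1 billion for advanced packaging, testing, mask-making and others (about 10%).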

TSMC’s management is seeing a continuation of a strong surge in AI-related demand, which supports structural demand for energy-efficient computing

The continued surge in AI-related demand supports a strong structural demand for energy-efficient computing.

TSMC’s management sees TSMC as a key enabler of AI; management has a disciplined framework, consisting of both a top-down and bottoms-up approach, to plan its capacity buildout; management is not going to make the same kind of mistake it made in 2021 and 2022 when planning its capacity; management has spent a lot of effort studying AI-demand for its capacity-planning and has also asked its customer (likely referring to Nvidia) to be more realistic; management has been testing out AI within TSMC and have found it to be very useful, so management thinks AI demand is real; TSMC has been buying chips from its customer (likely referring to Nvidia)

 As a key enabler of AI applications, the value of our technology position is increasing as customers rely on TSMC to provide the most advanced process and packaging technology at scale in the most efficient and cost-effective manner. As such, TSMC employs a disciplined framework to address the structural increase in the long-term market demand profile underpinned by the industry megatrend of AI, HPC and 5G. We work closely with our customers to plan our capacity. We also have a rigorous and robust system that evaluates and judges market demand from both a top-down and bottom-up approach to determine the appropriate capacity to build…

… [Question] Now looking at GenAI, obviously, the technology has lots of great potential, but a new technology also have lots of volatilities where you start to ramp. And so how are we managing the volatilities of the demand? Why do you think this time around it is different versus COVID period?

[Answer] I thought I explained that our capacity premium process, right, and the investment, we have — I put a wording of discipline. That means we are not going to repeat the same kind of mistake that we have in 2021, 2022. Now this time, again, we look at the overall very big demand forecast for my customer. And so I look at it into actually the whole company with many people now examining and study that really is AI is so used for will be used by a lot of people or not. And we test ourself first inside TSMC, we are using AI, we are using machine learning skill to improve our productivity, and we found out it’s very useful. And so I also in the line to buy my customer’s product, and we have to form in the line, like I cannot privilege here, I’m sorry, but it’s useful.

And so I believe that this time, AI’s demand is more real than 2 or 3 years ago. At that timing it is because people were afraid of a shortage, and so automotive, everything, you name it, they are all in shortage. This time, AI alone only AI alone, it will be a very useful tool for the human being to improve all the productivity in our daily life, be it in medical industry or in any product, manufacturing industry or autonomous driving, everything you need AI. And so I believe it’s more real. But even with that, we also have a top-down bottom-up approach and discuss with our customers and ask them to be more realistic. I don’t want to repeat the same kind of mistake 2 or 3 years ago, and that’s what we are doing right now.

TSMC’s management sees N2, N2P, and A16 as the technologies that will enable TSMC to capture growth opportunities in the years ahead; TSMC’s AI customers are migrating aggressively from N-1 to leading edge nodes, and management is seeing a lot of customers wanting to move into N2, N2P, and A16 quickly, but capacity is very tight and will only loosen in the next year or two years

We believe N2, N2P, A16 and its derivatives will further extend our technology leadership position and enable TSMC to capture the growth opportunities well into the future…

…[Question] We’re hearing that AI chipmakers are looking to migrate more aggressively from N-1 to the leading edge, particularly due to backside power because they’re trying to lower their power budgets going forward. So my question, can you support this move?

[Answer] You are right. All the people want to move into kind of a power-efficient mode. And so they are looking for the more advanced technology so that they can save power consumption. And so a lot of my customers want to move into N2, N2P, A16 quickly. We are working very hard to build the capacity to support them. Today, it’s a little bit tight, not a little bit, actually, today is very tight. I hope in next year or the next 2 years, we can build enough capacity to support this kind of demand. 

TSMC’s management is seeing such high demand for AI-accelerator and CoWoS packaging that supply is so tight; management is hopeful that a balance between demand and supply can be met in 2025 or 2026; it appears that TSMC will be doubling CoWoS capacity again in 2025; CoWoS (or advanced packaging) used to have much lower gross margin than the corporate average, but it is now approaching the corporate average; TSMC is working with its OSAT (outsourced semiconductor assembly and test) partners to expand its CoWoS capacity

[Question] How do you think about supply/demand balance for AI accelerator and CoWoS advanced packaging capacity?

[Answer]  I also tried to reach the supply and demand balance, but I cannot today. The demand is so high. I had to work very hard to meet my customers' demand. We continue to increase. I hope sometime in 2025 or 2026, I can reach the balance… The supply continues to be very tight all the way to probably 2025 and hope it can be eased in 2026. That's today's situation…

…[Question] Are you going to double your capacity again next year for CoWoS?

[Answer] The last time I say that this year, I doubled it, right, more than double, okay? So next year, if I say double it, probably, I will answer your question again next year, and say more than double, okay? We’re working very hard, as I said, wherever we can, whenever we can…

…For advanced packaging, the gross margin used to be much lower than the corporate average. Now it’s approaching corporate average. We are improving it that’s because of scale of the economics, and we put a lot of effort to reduce our cost. So gross margin is greatly improving in these 2 years…

… I just answered the question whether the CoWoS capacity is enough or not? Is not enough. And in great shortage, and that limited my customers’ growth. So we are working with our OSAT partner and trying to give more capacity to my customer so that they can grow here.

TSMC’s smartphone customers have been using InFO (Integrated Fan-Out) technologies but as they start building edge-AI devices, they are starting to use 3DIC (Three Dimensional Integrated Circuit) and SoIC (System on Integrated Chip) technologies

[Question] In regards to advanced packaging with more and more customers working on edge AI devices without — well, being overly specific, but what does it mean or the implication for advanced packaging solutions that we expect in the next 2 years to see these edge AI customers start to use SoIC or 3DIC particularly smartphone? Will they still be using InFO? Or will they also consider these solutions as well.

[Answer] As my customer moving into 2-nanometer or A16, they all need to probably take in the approach of chiplets. So once you use your chiplets, you have to use advanced packaging technologies. On the edge AI, for those kind of smartphone customer, as compared with the HPC customers, HPC is moving faster because of bandwidth concerns, latency of footprint or all those kind of thing. For smartphone customer, they need to pay more attention to the footprint as well as the functionality increase. So you observe my big customers taking the InFO first and then for a few years, nobody catch it up. They are catching up okay?

TSMC’s management is seeing a lot of customers wanting to put AI functionality into edge devices; this will increase dye sizes by 5% to 10%, but so far there’s no spike in unit growth of the devices; management thinks the unit growth will happen a few years later as the AI functionalities start to stimulate demand for replacement of older devices

[Question] For silicon content, recall a few years back when 5G just started to ramp you used to provide the silicon content expectations of 5G high-end and mid-end and low-end smartphones, so I wonder at this point of time, if you have any estimates for AI for smartphone going to next 2, 3 years?

[Answer] AI is so hard. So that's right now everybody — all my customers want to put the AI functionality into the edge devices and so the die size will be increased, okay? How much? I mean it's different from my customer-to-customers product. But basically, probably 5% to 10% die size increase will be a general rule. Unit growth, not yet, okay? Because we did not see kind of unit growth suddenly increased, but we expect this AI functionality was stimulated some of the demand to stimulate the replacement to be shorter. So in terms of unit growth that in a few years later, probably 2 years later, you will start to see a big increase in the edge device that's a smartphone and the PC.
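
To make the die-size arithmetic concrete with a hypothetical example of my own (the 100 mm² figure is an illustration, not a TSMC number): a smartphone chip measuring 100 mm² today that grows by 5% to 10% becomes 105 mm² to 110 mm², so the silicon consumed per device rises by roughly the same proportion. That means TSMC's wafer demand can grow even if device unit sales stay flat, and any AI-driven shortening of the replacement cycle would add unit growth on top.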

AI chips have larger die sizes, so TSMC’s management thinks there’s a need to adopt fan-out panel-level packaging eventually, but the technology is currently not mature enough and will need 2-3 years to attain that maturity

[Question] We also see the bigger footprint of the AI chips. So while there are quite some activities about fan-out panel-level packaging. So do you think that, that solution will be mentioned in the mid- to long run? Or does TSMC have any plan to do the related investment?

[Answer] We are looking at this as kind of a panel level fan-out technology. But the maturity today is not yet, so I — personally, I will think it's about at least 3 years later, okay? In this, within these 3 years, we don't have any very solid solution for a die size bigger than 10x of the reticle size. Today, we support our customer all the way to 5x, 6x chip size. I'm talking about the [ fuel ] size, the big [indiscernible] size. 2 years later, I believe the panel fan-out will be — start to be introduced and we are working on it.

Tencent (NASDAQ: TCEHY)

Tencent’s advertising business is benefitting from better click through rates driven by AI; management sees AI technology increasing advertising conversion rates by 10%

We are benefiting from deployment of neural network artificial intelligence on a GPU infrastructure to boost the click-through rate on our advertising inventory…

…And at the same time, on the ad recommendation end, if we can actually increase conversion by 10%, right, that’s sort of pretty modest improvement. The revenue actually grows quite a bit, right? So I think that’s areas in which we are leveraging AI to deliver material and tangible commercial results.
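
A simple hypothetical of my own shows why a 10% lift in conversion matters (the numbers are illustrative, not Tencent's): if 1,000 ad impressions convert at 2%, advertisers get 20 conversions; lift the conversion rate by 10% to 2.2% and the same 1,000 impressions yield 22 conversions. For advertising that is priced on conversions or outcomes, that is roughly 10% more revenue from exactly the same inventory, with no increase in ad load.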

Tencent’s AI-related external revenue is growing, and the company recently launched 3 AI-powered solutions for enterprises, namely image generation engine, video generation engine, and knowledge engine

Tencent Meeting deepened its adoption and monetization, especially in the pharmaceutical manufacturing and retail sectors. We're generating increasing AI-related external revenue from customers utilizing our high-performance computing infrastructure, such as GPUs and our model library services. We recently launched 3 AI-powered platform solutions for enterprises: image generation engine and video generation engine, which are pretty useful for advertisers creating ad content; as well as knowledge engine, which is particularly useful for finance, education and retail-related services deploying customer service chat bots.

Tencent’s operating capex in 2024 Q2 was up 144% year-on-year because of investments in GPUs and CPUs; non-operating capex was up 53% year-on-year, driven by construction, but down 80% sequentially

Operating CapEx was RMB 7.2 billion, up 144% year-on-year driven by investment in GPU and CPU servers. Non-operating CapEx was RMB 1.5 billion, up 53% year-on-year, driven by construction in progress. On a quarter-on-quarter basis, non-operating CapEx was down 80% from the high base in the prior quarter. As a result, total CapEx was RMB 8.7 billion, up 121% year-on-year.

Tencent’s management thinks of AI as more than just large language models

We look at AI as a more complete suite than just large language model. There are the neural networks, machine learning-based recommendation engines, which we use for content recommendation, video recommendation as well as the targeting in the ads and content use case, which is already delivering very good result.

Tencent has delivered better content to users through the use of AI

If you take Video Accounts as an example, by using AI, we actually are able to deliver better content and that generates more use time — a pretty big part of the growth in terms of the Video Accounts user time. It’s actually driven by better targeting, better recommendation and that’s in turn driven by AI.

Tencent’s management thinks AI can improve PVE (player vs environment) games by making the computer smarter

In the area of games, we’re actually using AI to bridge the gap between PVE and PVP, right? So when you have games, which allow people to play against other players, but at the same time, sometimes you actually want to create a game mode in which a player actually play against the machine, right? Then — in the past, the machine is actually quite dumb, right? And with AI, we can actually make the machine play like a real player. And we can actually sort of have it to play a varying levels of skills and make the user experience and the gameplay very fun.

Tencent’s management’s focus with LLMs is to improve the technology; Tencent has already built a MOE (mixture of experts) architecture model, which is one of the top AI models in the Chinese language; Tencent is deploying its LLM in Yuanbao, an app launched to allow users to interact with its LLM; Tencent’s LLM is improving search results and Yuanbao is getting positive feedback; when Yuanbao improves, management will increase promotional resources to increase the user base; management also wants to incorporate Yuanbao into different parts of its ecosystem

Now in terms of LLM, the key thing for us is actually improving the technology. And as we shared before, we have already built an MOE architecture model, which is performing as one of the top models in China. And when compared with international models on Chinese language, I think we are at the top of the pack. And we are deploying our LLM in Yuanbao, which is an app that we have launched which allows users to interact with our large language model in multiple ways. And one way is enhanced search functionality so that users can actually ask a question. And based on search results, we can actually provide a very direct answer to the questions that our users pose and we have rolled it out to a large enough sample size to get user feedback and the feedback so far has been quite positive…

…Over time, Yuanbao, when it gets to a certain level of quality, then we’re going to increase our promotional resources and try to get more users into the app. And at the same time, when it gets to an even better level of expertise, then we can actually start incorporating it into different parts of our ecosystem. We have a lot of apps which actually has got interaction use cases, which we can leverage our generative AI technology.

Renting out GPUs for AI workloads is a big business in China too, but it’s to a smaller extent when compared to what’s happening in the USA; Tencent’s management is seeing very fast growth in demand for GPU-rentals for AI needs partly because the growth is happening off a low base; the demand for GPU-rentals is partially cannibalising the demand for CPUs

Clearly, for the U.S. hyperscale Cloud providers, renting out GPUs to other companies with AI requirements has become a very big business. In China, the same trend is evident, but to a lesser extent because you don’t have the same multitude of extremely well-funded start-ups trying to build large language models on their own in China. There are many small companies, but they’re capitalized for $1 billion, $2 billion. They’re not capitalized at $10 billion or $90 billion, the way that some of the giant U.S. VC-funded start-ups are now capitalized in the space. And it’s also a somewhat challenging economic environment. Now that said, we have seen that within our Cloud, the demand from customers for renting GPUs for their own AI needs has been growing very swiftly. The percentage growth rates are very fast, but they’re very fast partly because it’s a low base. And also partly because, while some of that demand for renting GPUs in the Cloud is incremental, some of it is replacing demand that would otherwise have existed anyway for renting CPUs in the Cloud. And so while the business of GPU provision is doing very well, the business of CPU processing is more flat because the incremental demand is for GPU, not CPU.

Tesla (NASDAQ: TSLA)

Tesla has made a lot of progress with full self-driving in Q2; a new version, version 12.5, of the autonomous software has just started to be rolled out; version 12.5 of the FSD (full self-driving) software is a step-change improvement in supervised full self-driving; management thinks that most people still do not know how good version 12.5 is; as Tesla increases the miles between intervention, the system can transition from supervised full self-driving to unsupervised full self-driving; management would be shocked if Tesla cannot achieve unsupervised full self-driving next year, but they also note that they have been overly optimistic on the timeline for self-driving; management believes that Tesla will be able to get regulatory approval for unsupervised full self-driving once it shows the rate of accidents is less than human driving; self-driving capabilities of Tesla vehicles outside of North America are far behind those of Tesla vehicles in North America; management is asking for regulatory approval of Tesla supervised full self-driving in Europe, China, and other countries, and the approvals, which are expected before end-2024, will be a driver of demand for Tesla vehicles; FSD uptake is still low despite some increase after a recent price reduction

Regarding full self-driving and Robotaxi, we’ve made a lot of progress with full self-driving in Q2. And with version 12.5 beginning rollout, we think customers will experience a step change improvement in how well supervised full self-driving works. Version 12.5 has 5x the parameters of 12.4 and finally merged the highway and city stacks. So the highway stack at this point is pretty old. So often the issues people encounter are on the highway. But with 12.5, we finally merged the 2 stacks. I still find that most people actually don’t know how good the system is. And I would encourage anyone to understand the system better to simply try it out and let the car drive you around…

…And as we increase the miles between intervention, it will transition from supervised full self-driving to unsupervised full self-driving, and we can unlock massive potential [ in the fleet ]…

…I guess that, that’s really just a question of when can we expect the first — or when can we do unsupervised full self-driving. It’s difficult, obviously, my predictions on this have been overly optimistic in the past. So I mean, based on the current trend, it seems as though we should get miles between interventions to be high enough that — to be far enough in excess of humans that you could do unsupervised possibly by the end of this year. I would be shocked if we cannot do it next year. So next year seems highly probable to me based quite simply on plotting the points of the curve of miles between intervention. That trend exceeds humans for sure next year, so yes…

So it’s this capability. I think in our experience, once we demonstrate that something is safe enough or significantly safer than human, we find that regulators are supportive of deployment of that capability. It’s difficult to argue with — if you have got a large number of — if you’ve got billions of miles that show that in the future unsupervised FSD is safer than human, what regulator could really stand in the way of that. They’re morally obligated to approve. So I don’t think regulatory approval will be a limiting factor. I should also say that the self-driving capabilities that are deployed outside of North America are far behind that in North America. So with Version 12.5, and maybe 12.6, but pretty soon, we will ask for regulatory approval of the Tesla supervised FSD in Europe, China and other countries. And I think we’re likely to receive that before the end of the year. That will be a helpful demand driver in those regions…

[Question] You mentioned that FSD take rates were up materially after you reduced the price. Is there any way you can help us quantify what that means exactly?

[Answer] We’ve shared that how — that we’ve seen a meaningful increase. I don’t want to get into specifics because we started from a low base, but we are seeing encouraging results. 

Tesla will unveil its robotaxi product on 10th of October, after postponing it for a few months; the current plan is for robotaxis to be produced in Tesla’s headquarters at Giga Texas; management’s aim is to have a robotaxi fleet that’s made up of both Tesla-owned vehicles and consumer-owned vehicles, and consumers can rent out their cars, just like renting out their apartments for Airbnb; Tesla has a clause with every vehicle purchase that Tesla vehicles can only be used in the Tesla fleet and not in any 3rd-party autonomy fleet; management believes that once unsupervised full self-driving is available, most people will rent out their Tesla vehicles, so the Tesla robotaxi service will achieve instant scale given the existing number of Teslas on the road

We postponed the sort of robotaxi product unveil by a couple of months where it’s shifted to 10/10, to the 10th of October. And this is because I wanted to make some important changes that I think would improve the vehicle — the sort of — the Robotaxi — the thing — the main thing that we’re going to show…

…And I should say that the Cybertaxi or Robotaxi will be locally produced here at our headquarters at Giga Texas… 

This would just be the Tesla network. You just literally open the Tesla app and summon a car and we send a car to pick you up and take you somewhere. And our — we will have a fleet that’s on the order of 7 million [ vehicle autonomy ] soon. In the U.S. it will be over 10 million and then over 20 million. This is in that scale. And the car is able to operate 24/7, unlike the human drivers. So the capability to — like this is basically instant scale with a software update. And now this is for a customer-owned fleet. So you can think of that as being a bit like Airbnb, like you can choose to allow your car to be used by the fleet or cancel that and bring it back. It can be used by the fleet all the time, can be used by the fleet some of the time, and then Tesla will share the revenue with the customer…

…And there’s an important clause we’ve put in every Tesla purchase, which is that the Tesla vehicles can only be used in the Tesla fleet. They cannot be used by a third party for autonomy…

…[Question] Do you think that scales like progressively, so you can start in a city with just a handful of cars. Then you grow the number of cars over time? Or do you think there is like a critical mass you need to get to, to be able to offer like a service that is of competitive quality compared to what like Uber would be typically delivering already?

[Answer] I guess I’m not — I’m not conveying this correctly. The entire Tesla fleet basically becomes active. This is obviously — maybe there’s some number of people who don’t want their car to earn money. But I think most people will. It’s instant scale.

Tesla is nearing completion of the South expansion of Giga Texas, which is Tesla’s largest training cluster of GPUs to-date; there was a story earlier this year that Tesla sent its new H100 AI chip deliveries to Elon Musk’s other entities but this happened only because Tesla had no place to house the chips at that point in time; Tesla now has a place for the chips because of the South expansion of Giga Texas

We’re also nearing completion of the South expansion of Giga Texas, which will house our largest training cluster to date. So it will be an incremental 50,000 H100s, plus 20,000 of our hardware for AI5, Tesla AI computer…

…I mean I think you’re referring to a very — like an old article regarding GPUs. I think that’s like 6 or 7 months old. Tesla simply had no place to turn them on. So it would have been a waste of Tesla capital because we would just have to order H100s and have no place to turn them on. So I was just – this wasn’t a let’s pick xAI over Tesla. There was no — the Tesla data centers were full. There was no place to actually put them. The — we’ve been working 24/7 to complete the south extension on the Tesla [indiscernible] Texas. That south extension is what will house the 50,000 H100s, and we’re beginning to move H100 server racks in place there. But we really needed — we needed that to complete basically. You can’t just order compute — order GPUs and turn them on, you need a data center. So I want to be clear, that was in Tesla’s interest, not contrary to Tesla’s interest. It does Tesla no good to have GPUs that it can’t turn on. That south extension is able to take GPUs, which is really just this week. We are moving the GPUs in there and we’ll bring them online.

The Optimus robot is already performing tasks in Tesla’s factory; management expects to start limited production of Optimus in early 2025; early production is for Tesla’s consumption, and management expects a few thousand robots in Tesla’s factories by end-2025; management expects Optimus to enter high-volume production in 2026 and to release Optimus to external customers by then; management believes that Optimus will be the biggest revenue contributor to Tesla in the future, with an estimated total addressable market of 20 billion units of Optimus robots; management thinks Tesla has all the ingredients to build large scale, generalised humanoid robots 

With Optimus, Optimus is already performing tasks in our factory. And we expect to have Optimus production Version 1 in limited production starting early next year. This will be for Tesla consumption. It’s just better for us to iron out the issues ourselves. But we expect to have several thousand Optimus robots produced and doing useful things by the end of next year in the Tesla factories. And then in 2026, ramping up production quite a bit. And at that point, we’ll be providing Optimus robots to outside customers. There will be a production Version 2 of Optimus…

I mean, as I said a few times, I think the long-term value of Optimus will exceed that of everything else at Tesla combined. Simply consider the usefulness, the utility of a humanoid robot that can do pretty much anything you ask of it. I think everyone on earth is going to want one. There are 8 billion people on earth. So it’s 8 billion right there. Then you’ve got all of the industrial uses, which is probably at least as much, if not, way more. So I suspect that the long-term demand for general purpose humanoid robots is in excess of 20 billion units. And Tesla has the most advanced humanoid robot in the world and is also very good at manufacturing, which these other companies are not. And we’ve got a lot of experience with — the most experience — we’re the world leaders in [ Real World AI ]. So we have all of the ingredients. I think we’re unique in having all of the ingredients necessary for large scale, high utility, generalized humanoid robots.

Management expects capex to be over US$10 billion in 2024 (was US$8.9 billion in 2023) because of spending on the AI GPU cluster

On the CapEx front, while we saw a sequential decline in Q2, we still expect the year to be over $10 billion in CapEx as we increase our spend to bring a 50,000 GPU cluster online. This new center will immensely increase our capabilities to scale FSD and other AI initiatives.

Tesla will continue working on its own AI GPU called Dojo to reduce reliance on NVIDIA, and also because NVIDIA’s supply for GPUs is so tight; management sees a path where Dojo’s chips can be competitive with NVIDIA’s

So Dojo, I should preface this by saying I’m incredibly impressed by NVIDIA’s execution and the capability of their hardware. And what we are seeing is that the demand for NVIDIA hardware is so high that it’s often difficult to get the GPUs. And there just seems this — I guess I’m quite concerned about actually being able to get steady deliveries of NVIDIA GPUs when we want them. And I think this therefore requires that we put a lot more effort on Dojo in order to have — in order to ensure that we’ve got the training capability that we need. So we are going to double down on Dojo and we do see a path to being competitive with NVIDIA with Dojo. And I think we kind of have no choice because the demand for NVIDIA is so high and it’s obviously their obligation essentially to raise the price of GPUs to whatever the market will bear, which is very high. So I think we’ve really got to make Dojo work and we will.

Tesla is learning from Elon Musk’s AI startup, xAI; Musk is aware that Tesla needs shareholder approval before the company can invest in xAI, but he thinks it’s a good idea; Musk sees opportunities to integrate xAI’s foundation model, Grok, into Tesla’s software; Musk found that some engineers are only interested in working on AGI (artificial general intelligence) and they would have gone to other AI startups if Musk was not working on xAI since they would not have chosen Tesla anyway

Tesla is learning quite a bit from xAI. It’s been actually helpful in advancing full self-driving and in building up the new Tesla data center. With — regarding investing in xAI, I think, we need to have a shareholder approval of any such investment. But I’m certainly supportive of that if shareholders are, the group — probably, I think we need a vote on that. And I think there are opportunities to integrate Grok into Tesla’s software, yes…

…With regard to xAI, there are a few who only want to work on AGI. So what I was finding was that when trying to recruit people to Tesla, they were only interested in working on AGI and not on Tesla’s specific problems and they wanted to do a start-up. So it was a case of either they do a start-up and I am involved, or they do a start-up and I am not involved. Those are the 2 choices. This wasn’t they would come to Tesla. They were not going to come to Tesla under any circumstances…

…I tried to recruit them to Tesla, including to say, like, you can work on AGI if you want and they refused. Only then was xAI created.

Management still thinks Tesla can rent out latent AI inferencing compute for general computing purposes from its fleet of vehicles (and perhaps humanoid robots) in the future

Just distributed compute. It seems like a pretty obvious thing to do. I think where the distributed compute becomes interesting is with the next-generation Tesla AI chip, which is Hardware 5, what we’re calling AI5, which is from the standpoint of inference capability comparable to B200 and [ a bit of ] B200. And we’re aiming to have that in production at the end of next year and scale production in ’26. So it just seems like if you’ve got autonomous vehicles that are operating for 50 or 60 hours a week, there’s 168 hours in a week. So we have somewhere above, I think, 100 hours of neural net computing. I think we need a better word than GPU because GPU means graphics processing unit. So there’s a 100 hours plus per week of AI compute, AI [ first ] compute from the fleet in the vehicles and probably some percentage from humanoid robots. It would make sense to do distributed inference. And if there’s a fleet of at some point, 100 million vehicles with AI5 and beyond, AI6 and 7 and what not and there are maybe billions of humanoid robots, that is just a staggering amount of inference compute that could be used for general purpose computing. It doesn’t have to be used for the humanoid robot or for the car.
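
A quick sketch of the arithmetic behind this idea, using only the figures cited in the quote above (the 50-to-60 driving hours per week, the 168 hours in a week, and the hypothetical 100-million-vehicle fleet); no per-vehicle throughput figure is given, so the sketch counts idle compute-hours only:

```python
# Rough estimate of latent fleet compute, using the figures from the quote.
HOURS_PER_WEEK = 168
DRIVING_HOURS_PER_WEEK = 60       # upper end of the 50-60 hours cited
FLEET_SIZE = 100_000_000          # the hypothetical future fleet in the quote

idle_hours = HOURS_PER_WEEK - DRIVING_HOURS_PER_WEEK   # ~108, i.e. "100 hours plus"
fleet_idle_hours = idle_hours * FLEET_SIZE

print(f"Idle hours per vehicle per week: {idle_hours}")
print(f"Fleet-wide idle compute-hours per week: {fleet_idle_hours:,}")
```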

Management believes that Waymo’s approach to autonomous vehicles is a localised solution that requires high-density mapping and is thus quite fragile compared to Tesla’s approach

I mean our solution is a generalized solution, unlike what everybody else has. You could see with Waymo [ one of it ], they have a very localized solution that requires high-density mapping. It’s not — it’s quite fragile. So their ability to expand, I believe, is limited. Our solution is a general solution that works anywhere. It would even work on a different earth. So if you [ branded ] a new earth, it would work on the new earth…

…in terms of regulatory approval, the vehicles are governed by FMVSS in U.S., which is the same across all 50 states. The road rules are the same across all 50 states. So creating a generalized solution gives us the best opportunity to deploy in all 50 states reasonably. Of course, there are state and even local municipal level regulations that may apply to being a transportation company or deploying taxis. But as far as getting the vehicle on the road, that’s all federal and that’s very much in line with what Elon was suggesting of the data and the vehicle itself…

…To add to the technology point, the end-to-end network basically makes no assumption about the location. Like you could add data from different countries and it just performs equally well there. There’s almost close to zero U.S.-specific code in there. It’s all just the data that comes from the U.S.

Visa (NYSE: V)

Visa’s management is investing in AI, particularly generative AI (genAI), because the company has use-cases for the technology in areas such as fraud reduction and productivity improvement; management is very optimistic about the positive impact that generative AI can have 

First of all, to frame it is we are all in on GenAI at Visa as we’ve been all in on predictive AI for more than a decade. We’re applying it in 2 broad-based different ways. One is sort of adopting across the company to drive productivity and we’re seeing real results there. We’re seeing great results, great adoption, great productivity increases from technology to accounting to sales all across the company. The second is applying generative AI to enhance the entire payment ecosystem. And to the latter part of your question, absolutely. I guess I’d give you one set of examples or some of the risk tools and capabilities that we’ve been deploying in the market. I mentioned the risk products that we’re using on RTP and account-to-account payments. That is an opportunity to reduce fraud, both for merchants and for issuers. I think I mentioned on a previous call, we have our Visa Provisioning Intelligence Service, which is using artificial intelligence to help predict token provisioning fraud before it happens. That also is a benefit to both issuers and merchants. And the list goes on. So we are very optimistic about the positive impact that generative AI can have, not just on our own productivity but on our ability to help drive increased sales and lower fraud across the ecosystem.

Wix (NASDAQ: WIX)

Wix’s management continues to improve the company’s AI capabilities; Wix has released 17 AI business assistants to-date; the AI business assistants support a wide range of use cases and Wix has already received positive feedback on them; Wix will be releasing dozens more AI assistants later in 2024; the 17 business assistants are all customer-facing but the assistants can play one of two roles, (1) be a question-and-answer AI assistant, and (2) be an assistant that executes actions; the AI business assistants rarely hallucinate; management wants to add these AI assistants everywhere in the Wix product suite

We continue to build up our suite of AI capabilities as a result of the numerous AI initiatives and work streams across Wix. Last quarter, we introduced our plan to embed AI assistance across our platform and products. I’m excited to share that we have released 17 AI business assistants so far to date. These assistants span a wide range of use cases to support users with minimal hands-on support, thus streamlining their experience. These conversational AI assistants act as a right-hand aid for users to guide them through the entire life cycle of ideating, creating and managing their online presence. Our offering includes an analytics assistant that can help Wix users find the data they need without having to search through dozens of reports, and an assistant that helps users create events through a conversational chat. We have already received positive feedback on this first set of AI assistants with dozens more set to launch later this year…

…how many of the 17 are customer-facing? And the answer is all of them. The concept is that we are currently — we build a platform in which it is easier for us to build an AI assistant. And that enables us to develop 2 different kinds of assistants. The first one would be a question-and-answer AI assistant, so if you have a product like booking, how do I add a staff member to my yoga studio, right? And so you can actually talk to the AI and ask a question, get an answer, ask a question, get an answer, as you would with a normal human being. And then we see a great result in that in terms of how customers quickly find the answers. Hallucinations are a very small percentage, probably similar to what a human would do, if not even better…

…The other thing that we are doing is that you can ask questions and you can have the AI do things for you. So this is the second kind. And for example, if you go to our analytics, you see that you can actually start asking questions and get the reports done for you automatically by the AI. So this is an AI that activates other agents in order to give you answers or do actions for you. How do I make an event that is a wedding event? What not? And then it will do — analyze [ VP ]. But if you want to create an event which is selling tickets for a concert, it will define that and work with you on that. So those kinds of things streamline and reduce a lot of friction for the customer…

…We’re going to add those kind of assistants in pretty much everywhere that we can on Wix. 

Wix’s management launched AI creation capabilities for its mobile app builder in June 2024, which enables users to create and edit iOS and Android apps through a chat

We launched AI creation capabilities for our mobile app builder in June. This new solution enables users to create and edit iOS or Android apps through an AI chat experience. Once AI understands the user’s goals, intent and desired aesthetic, our technology generates a branded app that can be customized and managed from the App Editor.

Wix’s management recently released new AI features to help users with content-generation

We also recently released a suite of new AI features designed to help users identify relevant topics for blogs as well as generate outlined content and images for their target audience. With this new experience, users can swiftly turn ideas into new ready articles, significantly reducing the time and effort required to create engaging content, and ultimately, changing the blog creation experience.

Wix’s management sees both Self Creators and Partners having excellent engagement with Wix’s AI tools; management expects Wix’s AI tools to be a competitive advantage and significant driver of future growth; Wix’s AI tools continue to drive user conversion; Wix released its first AI product all the way back in 2016 and management saw that the AI functionality had very high adoption and drove dramatic improvement in user conversion; the latest version of the AI product, released earlier this year, had the same effect; Wix’s AI agents are having measurable positive impact on engagement; management thinks that their 7-8 years of experience with releasing AI technology is helping them integrate AI into Wix’s product suite in a highly intuitive way

Both Self Creators and Partners continue to show excellent engagement with our AI tools. As we expand the breadth of our AI technology, we expect it to continue to be a competitive advantage for us as well as a significant driver of growth going forward…

… Our AI tools continue to drive user conversion…

…We released ADI, the first AI product — GenAI product that actually created websites, right at the end of 2016. And since then, we’ve seen that by exposing users to AI functionality as part of the natural progression in the product life cycle, we get very high adoption, obviously using those kinds of tools, and results that can improve. And for ADI, we showed that we improved the conversion dramatically. The new version that came earlier this year did it again. And we are seeing that a lot of the agents that we have now, AI agents, when they start to pick up more user interactions and more user conversations, again, create a measurable effect. So I’m very optimistic. I think that our experience in releasing AI technology, right, which is almost, what, 8 years now — 7 years now, is helping us understand how to integrate them into the product in a way that actually makes users interact with them and feel natural and not feel like you’re stepping out of what you’re doing to do something else and then coming back. And I think that creates a big difference. So yes, I’m very optimistic on the potential that we’re going to see a continuation of the improvement.

There is a big difference between what an agency and a Self Creator need from AI. So for me, if I want to design a website, and I’m not a designer, I want AI to help me design it because English is not my first language and I’m not writing so well in Hebrew as well, right? So I would love AI to also help me write great text and generate images.

When you’re an agency, you probably know how to design and you have your system of design and how things should look. So you don’t need that. You probably need a little bit of help with the text, but other things, like the image editing, right, and the content recomposition, create tremendous value. And then the other things that — in addition to that, for example, a great designer doesn’t necessarily know how to configure things to work in a responsive way on different screen resolutions, and we have an AI to do that. So we are utilizing those kinds of technologies to streamline the agency’s experience and work and efficiency in a way that is significant to them. I think we have some ideas on how to make it even more significant going forward.

Wix’s management thinks there’s a long way to go before AI technology can make agencies obsolete by having the computer automatically know what website you want to build and get it fully functioning, so agencies will still be an important business for Wix for many years

In theory, if you can one day just talk to a computer and get a fully functioning website that knows exactly what should be there and is easy to update, then maybe some of the agencies’ business will disappear. But there is a long way until we get to something similar to that. And I think the majority of businesses, in the case that they need a website, want somebody to be responsible for it, somebody that knows how to activate the tools and use them and utilize them, and that’s why they go to agencies, because they have a professional that understands how to take care of all of their business needs. And there are a lot of those, right, from SEO to how to write things correctly in order to get the right shipping rules; there’s a ton of things. So I think there’s a long way for AI to go before it can successfully replace good agencies.

Unless, of course, you are a self-creator by nature, which a lot of our customers are, and you want to create your website, you can control it and you can do those things and you can change it. So I think the difference is in the user type and user intent and not necessarily in technology, which I believe means that both will continue to grow, agencies and Self Creators.

Wix’s management is seeing that the newer users who join Wix are those who use more AI tools to automate website creation as compared to earlier users; the presence of Wix’s AI tools opens up new types of customers for Wix

One of the qualifications that you needed to have in order to be able to use Wix in the past was to know how to design to some level, to know how to write text to some level and to trust yourself that you’re good enough to do it, right? And then — so most of our users feel that they know how to do those things. And naturally, they will use less AI because they think they can just do it. And I think we are now opening to users that don’t feel that, right? They don’t expect themselves to know how to do those things and expect us to have the tools to — AI tools to automate it for them. So we are already seeing some of this gap, and I believe that this will continue to grow. And essentially, we are opening Wix to be more useful to more new types of customers.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Alphabet, Amazon, Apple, ASML, Coupang, Datadog, Fiverr, Mastercard, Meta Platforms, Microsoft, Netflix, Nu Holdings, Shopify, TSMC, Tesla, Visa, and Wix. Holdings are subject to change at any time.

What We’re Reading (Week Ending 18 August 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 18 August 2024:

1. This Is How Treasury Really Funds Trillions of Dollars of US Debt –  Joe Weisenthal, Tracy Alloway, and Amar Reganti

Tracy (08:40):

So when I think about debt issuance, I used to cover corporate credit, and so I think about, you know, being a treasurer at a large multinational like an Apple or a Microsoft or whatever, and the decision making process there where, you know, if I decide there are favorable market conditions, I might go out and work with my bankers and decide to issue some debt. What is the difference between being a treasurer at a big company versus being US Treasury?

Amar (09:19):

Oh, a vast difference, right? And I too started on the other side, as a corporate portfolio manager in the bond market. You’d look at companies coming to the market, they either needed cash or were being opportunistic. For the US government and for the debt management office, it’s very different. It’s that, you are always going to be at various points on the curve, whether or not at that point it’s, what I would call, tactically a good thing. And you know, this goes into that regular and predictable issuance cycle. And the point there, and this is how we get to cost, which is again different from how corporates measure cost, is that by being consistent, by helping this ecosystem thrive, you’re going to create a liquidity premium, right? That, because there is this regular and predictable nature to your issuance cycle, people understand they’re not going to be surprised that the availability of securities is going to be well calibrated to what the environment needs.

And when I meant environment or ecosystem, I meant the entire ecosystem. You want to service as broad and diversified a group of investors as possible. And that includes people who will actively short your securities, right? Because that provides a supply outside of auction cycles for people to buy and also helps stimulate repo markets and so on. So you want to be sure that you aren’t attempting to use pure price on what’s on the yield curve as a point on why or how you should issue.

Now, I want to be a little careful. There is a quantitative framework that Treasury has and it’s a model that, you know, a number of people collaborated on. Credit goes to people like Brian Sack, Srini Ramaswamy, Terry Belton, Kris Dawsey, a number of others who built this model. And it sort of gives a sense of, okay, historically, based on a number of inputs, where has Treasury benefited the most by issuing. But that’s like an important guidepost, but the more important part is the qualitative feedback that Treasury hears from its dealers, from investors, from central bank reserve managers who hold vast amounts of Treasuries. And that all also feeds in, along with the [Treasury] Borrowing Advisory Committee (TBAC), into making issuance decisions…

…Joe (16:05):

Also, Tracy, just to add onto that, we have an inverted yield curve. So, theoretically, if you wanted to borrow at the low, you know, one could say ‘Oh look, it’s cheaper to borrow at the long end, why are you selling all these bills when actually the cheapness is at the long end?’

Tracy (16:18):

So this is the very essence of the current controversy. What is happening — and I know you’re not a Treasury now — but what is happening when the Treasury comes out with that kind of decision?

Amar (16:28):

Okay. So the first kind of framework you want to think about is, and you had asked this initially, is how do they make these directional issuance decisions? Well, the first thing is that Treasury does look at long-term averages of where it is in its weighted average maturity, right? Like when you add all these securities together, what’s sort of the average maturity? And historically, it’s been around 60 [or] 61 months. Treasury is well above that right now. It’s around 71 months. So it’s actually pretty, pretty high up.

Tracy (16:57):

Which, just to be clear, most people would say that’s a good thing, right? You want to term out your debt?

Amar (17:02):

Maybe if you’re a corporate treasurer you might want to do that, but there’s a lot of arguments that you actually don’t want to term out your debt.

Tracy (17:10):

Oh, interesting.

Amar (17:10):

So, the first is that yes, the curve is inverted. If you decided to move issuance that way, chances are you could uninvert the curve. I’m not saying that’s definitive; it depends on how much or how likely, you know, what else is happening in markets. The second thing is that, as in a previous episode, I thought Josh Younger explained it really well: you could roll these three-month bills, you know, all the way out to 10 years or you could issue a 10-year.

And if you’re sort of risk neutral, there’s no savings, right? Or there’s no gain or savings. It just means that forwards get realized and it’s effectively the same thing. So when Treasury does that, you’re saying that, over time, you’re effectively making a tactical rates call, that somehow you think that 10-year rates or 30-year rates won’t go substantially lower. That’s the first thing. The second thing is that the sheer amount that you can put on the 10- and 30-year is going to be less than what you can put in the bills market. Now that’s just absent anything that the Federal Reserve is doing. That’s just generally true, right? It just tends to be a broader and bigger market.

Joe (18:19):

The shorter end.

Tracy (18:20):

Yeah, there’s more demand for shorter-dated securities.

Amar (18:22):

Yeah. But the third thing is that what Treasury really is trying to do is look around across the ecosystem and say, ‘Hey, where should we be feeding securities to over time if we are kind of taking a risk neutral sort of approach to this? That we’re not extrapolating what forward curves are going to be. We don’t know any more than a typical rate strategist or someone. We know what we don’t know about how market rates evolve over time. So because of that, our job is to help issue securities to where the biggest pools of capital are, because that’s how you issue risk-free securities and keep up the health and demand for, and liquidity of, your asset class.’ So the biggest pool of money now, in particular, is still at the front end, right? The amount of reserves that have been created is really dramatic.
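
One point from the exchange above is worth making concrete: Amar's remark that rolling three-month bills for ten years is "effectively the same thing" as issuing a 10-year if you're risk neutral. The toy sketch below uses an illustrative path of short rates (not market data) and assumes the expectations-hypothesis view, in which the 10-year rate equals the compounded path of expected short rates; under that assumption the two funding strategies cost exactly the same.

```python
# Toy illustration: if forwards get realized, rolling 3-month bills for 10
# years costs the same as issuing a single 10-year bond whose rate equals the
# compounded path of expected short rates. Rates are illustrative, not data.
quarters = 40  # 10 years of 3-month bills
short_rates = [0.05 - 0.01 * q / quarters for q in range(quarters)]  # 5% drifting to 4%

# Cost of rolling bills: compound each realized 3-month rate.
roll_cost = 1.0
for r in short_rates:
    roll_cost *= 1 + r / 4

# The 10-year rate implied by that same path (the expectations hypothesis).
implied_10y = 4 * (roll_cost ** (1 / quarters) - 1)
term_cost = (1 + implied_10y / 4) ** quarters

print(f"Rolling bills grows $1 to {roll_cost:.6f}")
print(f"Issuing 10y at {implied_10y:.3%} grows $1 to {term_cost:.6f}")  # identical
```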

2. Investing success comes down to one word: focus – Chin Hui Leong

Buffett does the same thing. On his table, he keeps a tray labelled, in capital letters, “TOO HARD”, a strategically placed reminder that most of the opportunities which cross his desk belong in that tray.

Now pause and think about that for a moment. Buffett is widely lauded for his investment smarts and long investing experience. In other words, it would be ridiculous to suggest that he has trouble understanding any company.

But Buffett knows better than that. Despite his ability, he is smart enough to know that there are many companies out there that he does not understand and should not touch. We would be wise to do the same…

…There’s an unfortunate adage in news broadcasting: If it bleeds, it leads. Said another way, negative headlines tend to get almost all of the attention while positive news gets buried in the process.

It’s true in investing as well. When Facebook reported a loss of a million daily active users (DAUs) in early 2022, the reaction from news outlets and analysts was deafening, with some even suggesting Facebook is on its last legs as a social network.

But since reporting the loss, the social network has gained over 180 million DAUs by 2023. Do you hear about these positive gains in the media? No, you don’t.

This example tells you one thing: You have to be proactive in searching for positive trends within the company.

And that means looking past its current problems and homing in on the parts which are not said out loud. For instance, at the end of 2021, Meta was far from a dying business. In fact, the social media company had nearly US$48 billion on its balance sheet after generating US$39 billion in free cash flow during the year.

3. The Seven Virtues of Great Investors – Jason Zweig

Curiosity is the first investing virtue. It’s what enables you to find and develop all the others…. Ordinary investors are afraid of what they don’t know, as if they are navigating the world with those antique maps that labeled uncharted waters with the warning “here be dragons.” Great investors are afraid of what they do know, because they realize it might be biased, incomplete or wrong. So they never deviate from their lifelong, relentless quest to learn more…

…without independence, investors are doomed to mediocrity. What’s your single most valuable asset as an investor? Your mind! If you let other people do your thinking for you, you’ve traded away your greatest asset — and made your results and your emotions hostage to the whims of millions of strangers. And those strangers can do the strangest things…

…Making a courageous investment “gives you that awful feeling you get in the pit of the stomach when you’re afraid you’re throwing good money after bad,” says investor and financial historian William Bernstein of Efficient Frontier Advisors in Eastford, Conn.

4. Integration and Android – Ben Thompson

Yesterday Google announced its ninth iteration of Pixel phones, and as you might expect, the focus was on AI. It is also unsurprising that the foundation of Osterloh’s pitch at the beginning of the keynote was about integration. What was notable is that the integration he focused on actually didn’t have anything to do with Pixel at all, but rather Android and Google:

We’re re-imagining the entire OS layer, putting Gemini right at the core of Android, the world’s most popular OS. You can see how we’re innovating with AI at every layer of the tech stack: from the infrastructure and the foundation models, to the OS and devices, and the apps and services you use every day. It’s a complete end-to-end experience that only Google can deliver. And I want to talk about the work we’re going to integrate it all together, with an integrated, helpful AI assistant for everyone. It changes how people interact with their mobile devices, and we’re building it right into Android.

For years, we’ve been pursuing our vision of a mobile AI assistant that you can work with like you work with a real life personal assistant, but we’ve been limited by the bounds of what existing technologies can do. So we’ve completely rebuilt the personal assistant experience around our Gemini models, creating a novel kind of computing help for the Gemini era.

The new Gemini assistant can go beyond understanding your words, to understanding your intent, so you can communicate more naturally. It can synthesize large amounts of information within seconds, and tackle complex tasks. It can draft messages for you, brainstorm with you, and give you ideas on how you can improve your work. With your permission, it can offer unparalleled personalized help, accessing relevant information across your Gmail Inbox, your Google calendar, and more. And it can reason across personal information and Google’s world knowledge, to provide just the right help and insight you need, and it’s only possible through advances we made in Gemini models over the last six months. It’s the biggest leap forward since we launched Google Assistant. Now we’re going to keep building responsibly, and pushing to make sure Gemini is available to everyone on every phone, and of course this starts with Android.

This may seem obvious, and in many respects it is: Google is a services company, which means it is incentivized to serve the entire world, maximizing the leverage on its costs, and the best way to reach the entire world is via Android. Of course that excludes the iPhone, but the new Gemini assistant isn’t displacing Siri anytime soon!

That, though, gets why the focus on Android is notable: one possible strategy for Google would have been to make its AI assistant efforts exclusive to Pixel, which The Information reported might happen late last year; the rumored name for the Pixel-exclusive-assistant was “Pixie”. I wrote in Google’s True Moonshot:

What, though, if the mission statement were the moonshot all along? What if “I’m Feeling Lucky” were not a whimsical button on a spartan home page, but the default way of interacting with all of the world’s information? What if an AI Assistant were so good, and so natural, that anyone with seamless access to it simply used it all the time, without thought?

That, needless to say, is probably the only thing that truly scares Apple. Yes, Android has its advantages to iOS, but they aren’t particularly meaningful to most people, and even for those that care — like me — they are not large enough to give up on iOS’s overall superior user experience. The only thing that drives meaningful shifts in platform marketshare are paradigm shifts, and while I doubt the v1 version of Pixie would be good enough to drive switching from iPhone users, there is at least a path to where it does exactly that.

Of course Pixel would need to win in the Android space first, and that would mean massively more investment by Google in go-to-market activities in particular, from opening stores to subsidizing carriers to ramping up production capacity. It would not be cheap, which is why it’s no surprise that Google hasn’t truly invested to make Pixel a meaningful player in the smartphone space.

The potential payoff, though, is astronomical: a world with Pixie everywhere means a world where Google makes real money from selling hardware, in addition to services for enterprises and schools, and cloud services that leverage Google’s infrastructure to provide the same capabilities to businesses. Moreover, it’s a world where Google is truly integrated: the company already makes the chips, in both its phones and its data centers, it makes the models, and it does it all with the largest collection of data in the world.

This path does away with the messiness of complicated relationships with OEMs and developers and the like, which I think suits the company: Google, at its core, has always been much more like Apple than Microsoft. It wants to control everything, it just needs to do it legally; that the best manifestation of AI is almost certainly dependent on a fully integrated (and thus fully seamless) experience means that the company can both control everything and, if it pulls this gambit off, serve everyone.

The problem is that the risks are massive: Google would not only be risking search revenue, it would also estrange its OEM partners, all while spending astronomical amounts of money. The attempt to be the one AI Assistant that everyone uses — and pays for — is the polar opposite of the conservative approach the company has taken to the Google Aggregator Paradox. Paying for defaults and buying off competitors is the strategy of a company seeking to protect what it has; spending on a bold assault on the most dominant company in tech is to risk it all.

I’ve referenced this piece a few times over the last year, including when Osterloh, the founding father of Pixel, took over Android as well. I said in an Update at the time:

Google has a very long ways to go to make [Google’s True Moonshot] a reality, or, frankly, to even make it a corporate goal. It will cost a lot of money, risk partnerships, and lower margins. It is, though, a massive opportunity — the maximal application of AI to Google’s business prospects — and it strikes me as a pretty big deal that, at least when it comes to the org chart, the Pixel has been elevated above Android.

In fact, though, my takeaway from yesterday’s event is the opposite: Android still matters most, and the integration Google is truly betting on is with the cloud.

5. Signature Bank – why the 36,000% rise in 7 months? – Swen Lorenz

In case you don’t remember, Signature Bank had gotten shipwrecked in March 2023, alongside the other infamous “crypto-deposit banks”, Silvergate Bank and First Republic Bank. Its stock had to be considered worthless, at least by conventional wisdom.

However, between October and December 2023, the share price suddenly rose from 1 cent to USD 1.60. Buyers were hoovering up shares, sometimes several million in a single day.

The stock then doubled again and reached USD 3.60, with heavy trading…

…On 12 March 2023, New York authorities closed the bank. Because of its size, the US government considered a collapse a systemic risk, which enabled the FDIC to step in and guarantee all deposits after all. Whereas deposit holders were going to be made whole, those investors who held equity or bonds issued by Signature Bank were going to lose their entire investment. Within one week, the majority of the bank’s deposits and loans were taken over by New York Community Bancorp (ISIN US6494451031, NYCB), which is the usual way to dispose of a failed banking operation…

…Not all of Signature Bank’s assets were transferred to New York Community Bancorp. When the bank closed its doors, it had USD 107bn of assets. Of that, only USD 47bn were transferred to New York Community Bancorp – basically, the part of the bank’s portfolio that was deemed a worthwhile business. A portfolio with a remaining USD 60bn of loans would remain in receivership, and it was earmarked for a gradual unwinding.

In September 2023, the FDIC sold another USD 28bn of the bank’s assets to Flagstar Bank.

The remaining USD 32bn of loans comprised mortgages made against commercial real estate and rent-regulated apartment buildings in New York – asset classes that are not exactly in favour with investors.

However, the FDIC knew that it was going to release more value from these remaining loans if it allowed them to continue to maturity. The government entity needed help, though, to get the job done, and it had to deliver some evidence that letting this portfolio run off over time was indeed the best way to minimise losses and maximise proceeds.

To this end, the FDIC put these remaining loans into joint venture entities. Minority stakes in these entities were then offered to private equity companies and other financial investors…

…These financial investors paid the equivalent of 59-72 cents on the dollar…

…For the FDIC to be made whole on the remaining USD 32bn portfolio of loans, it needs to recover 85% of the outstanding amounts. If the recovery rate of these remaining USD 32bn of loans comes out higher than 85%, there will be money left over to go towards holders of the bank’s bonds, preference shares, and ordinary shares.

How could any external investor come up with an estimate for the likely recovery rate?…

…It’s all down to the default rate and the so-called severity.

The default rate is the percentage of loans where the debtor won’t be able to make a repayment in full.

Severity is the percentage loss suffered when a debtor is not able to make a repayment in full. E.g., a debtor may not be able to pay back the entire mortgage but just 75%. In that case, the severity is 25%…

…The resulting estimate of an 8% loss on the loan portfolio means that 92% of the loan book will be recovered. Given that the FDIC’s claims only make up 85% of the loan book, this means there will be money left over to go towards the holders of Signature Bank’s bonds, preference shares, and ordinary shares.
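
The recovery arithmetic reduces to a couple of multiplications. In the sketch below, the 25% severity comes from the article's example, while the 32% default rate is an illustrative back-out (the article states only the resulting 8% loss), and the waterfall is simplified to a single comparison against the FDIC's 85% claim:

```python
# Expected loss = default rate x severity; recovery = 1 - expected loss.
portfolio_bn = 32      # remaining loan book, USD bn
default_rate = 0.32    # illustrative: chosen to back out the article's 8% loss
severity = 0.25        # loss given default, from the article's example

expected_loss = default_rate * severity   # 0.08 -> 8% of the book
recovery_rate = 1 - expected_loss         # 0.92 -> 92% recovered

fdic_threshold = 0.85  # FDIC is made whole at an 85% recovery
leftover_bn = (recovery_rate - fdic_threshold) * portfolio_bn

print(f"Expected loss: {expected_loss:.0%}, recovery: {recovery_rate:.0%}")
print(f"Left for bonds, prefs and equity: ~USD {leftover_bn:.1f}bn (before waterfall detail)")
```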

This money is not going to be available immediately since most loans run out in 5-7 years. This gives the managers of these loan portfolios time to work towards maximising how much debtors can repay…

…The FDIC is first in line to receive the money that comes in. According to Goodwin’s estimate, the FDIC’s claims will be paid off in full at the end of 2027.

From that point on, the bonds, preference shares, and ordinary shares will have a value again, as they entitle the holder to a share in the remaining leftover proceeds.

For the ordinary shares, Goodwin estimates USD 600m to be left over, which will become available in about five years’ time. When discounting this sum by 20% p.a., Signature Bank has a fair market cap of USD 223m.
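
The discounting step is standard present-value arithmetic, PV = FV / (1 + r)^n. A quick check: exactly five years at 20% gives roughly USD 241m, while the article's USD 223m corresponds to a horizon of about 5.4 years, so the estimate presumably assumes a slightly longer wait:

```python
def present_value(fv_m, rate, years):
    """Discount a future sum (in USD millions) back to today."""
    return fv_m / (1 + rate) ** years

print(f"5.0 years: USD {present_value(600, 0.20, 5.0):.0f}m")  # ~241m
print(f"5.4 years: USD {present_value(600, 0.20, 5.4):.0f}m")  # ~224m, near the article's 223m
```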


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Apple, Meta Platforms (parent of Facebook), and Microsoft. Holdings are subject to change at any time.

Market View: Markets Movements Post Global Rout

Last week, on 06 August 2024, I was invited for a short interview on Money FM 89.3, Singapore’s first business and personal finance radio station, by Chua Tian Tian, the co-host of the station’s The Evening Runway show. We discussed a number of topics, including:

  • The Singapore stock market’s recovery after a big fall in the Nikkei on 05 August 2024 that sparked a rout in global financial markets (Hints: What will ultimately matter for the Straits Times Index’s long-term recovery will be the underlying business-health of its three major constituents – the banks DBS, OCBC, and UOB – which collectively account for around half of the index; based on their latest results, “steady as it goes” sounds like an apt description of what’s going on with the banks)
  • How sustainable is the optimism surrounding pure-play US office REITs in Singapore’s stock market on expectations that the US Federal Reserve would cut interest rates (Hint: Singapore-listed US office REITs are facing two problems – low occupancies and high borrowing costs – and the Federal Reserve’s actions may at best alleviate only one of the problems, that of high borrowing costs)
  • My read on the Bank of Japan’s recent monetary policy tightening that triggered a historic plunge in Japanese stocks and contributed to global market turmoil (Hint: Big declines in stocks are bound to happen so it’s important to be investing in a way that allows us to stay in the game; meanwhile, the really good days in stocks tend to cluster with the really bad days in stocks, and if we miss just a small handful of the really good days, our long-term returns will be dramatically affected)
  • The impact of NVIDIA’s reported delays in the development of its latest chips on the company’s competitive edge (Hint: It’s unlikely for the delay to result in the loss of any competitive edge because NVIDIA’s real competitive edge lies in the familiarity that most of the AI community has with the company’s CUDA software platform)

You can check out the recording of our conversation below!


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Meta Platforms. Holdings are subject to change at any time.

What We’re Reading (Week Ending 11 August 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 11 August 2024:

1. Ted Weschler Case Study – DirtCheapStocks

To set the stage – Weschler’s Valassis purchases started in 2008 and ended in 2010.

Markets were in free fall in the back half of 2008. The S&P 500 traded down 12% in the first six months of the year. This was already a blow to investors. But things were about to get much worse. In the second half of the year, the S&P would trade down another 26%. 2008 was the worst year for the S&P since the 1930s. Investors were scared. The country was frozen…

…There was blood in the streets, no doubt, but market participants were getting the investment opportunity of a lifetime. Weschler bought the bulk of his Valassis shares in the 4th quarter of 2008.

Valassis was a direct mail marketing company. It made the coupons that come in the daily paper along with the other marketing material sent directly to your mailbox. Junk mail, basically.

But this junk mail has a reasonably high conversion rate. There’s a reason it shows up in our mailbox daily.

In early 2007, Valassis had purchased ADVO, the direct mail business. The purchase of ADVO doubled the size of the company, taking revenues from $1 billion to $2 billion. ADVO was acquired for $1.2B, financed almost entirely by debt. Prior to the ADVO acquisition, Valassis operated with only ~$115MM of net debt. Debt grew 10x overnight. The company levered up – big time…

…Valassis stock was destroyed in late 2008. Shares traded as high as $16.80 in the second quarter. At the lows of the fourth quarter, shares dipped to $1.05. A 94% drop…

…Weschler began buying in the fourth quarter of 2008. The stock price at that time ranged from $1.05 to $8.73. I don’t know exactly what he paid, but the stock fell hard on volume. Weschler was able to purchase 6.24% (or 3,000,000 shares) of the business in the quarter. We’ll assume he paid ~$3/share…

…Valassis was trading at a ridiculously cheap price. This underscores how afraid investors were in the moment. At some point in the fourth quarter, shares dropped as low as $1.05 – meaning someone paid less than one times free cash flow for this business.

Shares were cheap on a market cap basis, but when considering the heavy debt burden, they looked a lot more expensive…

…The 8.25% Senior Notes weren’t due until 2015. So at the time Weschler was buying, he would’ve known the company had ~7 years before that debt was to be repaid/refinanced. The 2015 notes required no scheduled principal repayment prior to maturity…

…Term loan B matured in 7 years, and required minimal principal payments…

…Long story short, the business had 7 years of cash flow generation before it would need to reconsider its debt situation. EBIT, even in the depths of the recession, was enough to cover interest expense. At the end of 2008, Valassis was in compliance with all of its covenants…

…Here’s the cash flow statement from 2009 – 2011:…

  • …Operating cash flow is consistently positive.
  • There is minor capex, leaving loads of excess cash.
  • All free cash flow was used for debt repayment and stock repurchases…

…In February 2014, Harland Clarke Holdings acquired Valassis for $34.05/share.

Weschler’s 2008 purchases would’ve compounded at a rate of 52.5% for a little less than 6 years…
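As a quick sanity check of that compounding figure, here is a minimal sketch in Python using the article’s assumed ~US$3 entry price and a holding period of roughly 5.75 years (“a little less than 6 years”); these are the author’s assumptions, not Weschler’s actual trade data:

```python
# Back-of-envelope CAGR check. The entry price and holding period are the
# article's assumptions, not known facts about Weschler's actual trades.
entry_price = 3.00    # assumed average purchase price in Q4 2008 (US$)
exit_price = 34.05    # Harland Clarke's February 2014 acquisition price (US$)
years = 5.75          # "a little less than 6 years"

cagr = (exit_price / entry_price) ** (1 / years) - 1
print(f"Implied annual compounding: {cagr:.1%}")  # ~52.6%, in line with the quoted 52.5%
```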

…We don’t know exactly what Weschler was thinking when he bought his shares. But I’d guess the combination of an extremely cheap price, favorable debt repayment schedule and consistent cash flow were the deciding factors.

2. What Bill Ackman Got Wrong With His Bungled IPO – Jason Zweig

This week, Bill Ackman, the hedge-fund billionaire who has 1.4 million followers on X, had to pull the plug on his new fund before it could launch its initial public offering.

That’s because he’d organized his proposed Pershing Square USA, or PSUS, as a closed-end fund…

…Ackman, who has styled himself as a crusader for the investing public, could have tried using his new vehicle to shatter the status quo on fees. Instead, it would have cemented the status quo.

The fund’s 2% annual management fee, which Ackman was going to waive for the first year, would have been competitive at a hedge fund—but far more costly than at market-tracking ETFs.

Then there was the load, or sales charge, of 1.5% for individual investors and somewhat lower for institutions—an irksome cost of admission that people no longer have to pay on most other assets…

…If demand is high, closed-end shares can trade at a premium, or more than the sum of their parts known as net asset value. Usually, they trade at a discount, or less than what the portfolio is worth. The lower a fund’s return and the higher its expenses, the deeper the discount will tend to go.

According to the Investment Company Institute, more than 80% of closed-end funds recently traded at discounts. Stock funds were trading at almost 10% less than their net asset value; bond funds, about 9% below their NAV.
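To make the premium/discount arithmetic concrete, here is a minimal illustration with hypothetical numbers chosen to match the roughly 10% discount cited above:

```python
# A closed-end fund's premium or discount: (market price - NAV) / NAV.
def premium_discount(market_price: float, nav_per_share: float) -> float:
    return (market_price - nav_per_share) / nav_per_share

# Hypothetical fund: NAV of $10.00 per share, trading at $9.00.
print(f"{premium_discount(9.00, 10.00):+.0%}")  # -10%, i.e. a 10% discount to NAV
```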

Typically, a closed-end fund doesn’t issue new shares after its IPO; nor does it redeem, or buy your shares back. Instead, you have to buy from, or sell to, another investor. That means new buyers don’t increase the fund’s capital, and sellers don’t decrease it…

…That’s why the firms that run them call closed-end funds “evergreen assets,” or permanent capital.

Over the decades, a few great investors have used that structure to enrich their shareholders rather than to fill their own pockets…

…Those examples suggest to me that Ackman missed an opportunity to innovate.

It was institutions, not individual investors, that balked at the potential discount on his fund.

What if Ackman instead had bypassed the investment bankers and their 1.5% sales load, offering the fund directly to individuals only, commission-free? And what if he’d set a reasonable management fee of, say, 0.5%?

Such an innovative, self-underwritten deal is likely feasible, several securities lawyers say, but would have been more expensive for Ackman than a conventional IPO…

…In the past few weeks, the New York Stock Exchange and Cboe Global Markets’ BZX Exchange separately proposed rule changes that would eliminate the requirement for closed-end funds to hold annual meetings for shareholders.

Good luck trying to get a lousy fund to hire a new manager if you can’t even vote your disapproval without somehow convening a special meeting.

Boaz Weinstein, founder of Saba Capital Management, an activist hedge-fund manager that seeks to narrow the discounts on closed-end funds, calls the exchanges’ rule proposals “some of the most shocking disenfranchisement efforts against closed-end fund shareholders in over 100 years.”

3. How to Build the Ultimate Semiconductor for LLMs – Joe Weisenthal, Tracy Alloway, Reiner Pope, and Mike Gunter

Joe (17:30):

I know there’s always this sort of cliché when talking about tech, they’re like, oh, Google and Facebook, they can just build this and they’ll destroy your little startup. They have infinite amount of money, except that doesn’t actually seem to happen in the real world as much as people on Twitter expect it to happen.

But can you just sort of give a sense of maybe the business and organizational incentives for why a company like Google doesn’t say, “oh, this is a hundred billion market NVIDIA’s worth three and a half trillion or $3 trillion. Let’s build our own LLM specific chips.” Why doesn’t that happen at these large hyperscaler companies that presumably have all the talent and money to do it?

Mike (18:13):

So Google’s TPUs are primarily built to serve their internal customers, and Google’s revenue for the most part comes from Google search, that Google search, and in particular from Google search ads, Google search ads is a customer of the TPUs. It’s a relatively difficult thing to say that hundreds of billions of dollars of revenue that we’re making, we’re going to make a chip that doesn’t really support that particularly well and focuses on this at this point, unproven in terms of revenue market.

And it’s not just ads, but there are a variety of other customers. For instance, you may have noticed how Google is pretty good at identifying good photos and doing a whole variety of other things that are supported in many cases by the TPUs.

Reiner (19:06):

I think one of the other things too that we see in all chip companies in general or companies producing chips is because producing chips is so expensive, you end up in this place where you really want to put all your resources behind one chip effort. And so just because the thinking is that there’s a huge amount of return on investment in making this one thing better rather than fragmenting your efforts, really what you’d like to do in this situation where there’s a new emerging field that might be huge or might not, but it’s hard to say yet. What you’d like to do is maybe spin up a second effort on the side and have a skunk works, see how it works.

Joe (19:37):

Yeah that’s right. That would be amazing just to let Reiner, or just let the two of you go have your own little office somewhere else.

Reiner (19:44):

Yeah. Organizationally, it’s often challenging to do, and we see this across all companies. Every chip company really has essentially only one mainstream chip product that they’re iterating on and making better and better over time…

…Joe (21:49):

Let’s get to MatX. Tell us the product that you’re designing and how it fundamentally will differ from the offerings on the market, most notably from Nvidia.

Reiner (22:01):

So we make chips and in fact, racks and clusters for large language models. So when you look at NVIDIA’s, GPUs, you already talked about all of this, the original background in gaming, this brief movement in Ethereum, and then even within AI, they’re doing small models of large models. So what that translates to, and you can think of it as the rooms of the house or something. They have a different room for each of those different use cases, so different circuitry in the chip for all of these use cases. And the fundamental bet is that if you say, look, I don’t care about that. I’m going to do a lousy job if you try and run a game on me, or I’m going to do a lousy job if you want to run a convolutional network on me, but if you give me a large model with very large matrices, I’m going to crush it. That’s the bet that we’re making at MatX. So we spend as much of our silicon as we can on making this work. There’s a lot of detail in making all of this work out because you need not just the matrix multiplication, but all of the memory bandwidths and communication bandwidths and the actual engineering things to make it pan out. But that’s the core bet.

Tracy (23:05):

And why can’t Nvidia do this? So Nvidia has a lot of resources. It has that big moat as we were discussing in the intro, and it has the GPUs that are already in production and it’s working on new ones. But why couldn’t it start designing an LLM focused chip from scratch?

Mike (23:23):

Right? So you talked about NVIDIA’s moat, and that moat has two components. One component is that they build the very best hardware, and I think that is the result of having a very large team that executes extremely well and making good choices about how to serve their market. They also have a tremendous software moat, and both of these moats are important to different sets of customers. So they’re a tremendous software moat. They have a very broad, deep software ecosystem based on CUDA that allows it…

Tracy (23:59):

Oh yeah, I remember this came up in our discussion with Coreweave.

Mike (24:03):

Yeah. And so that allows customers who are not very sophisticated, who don’t have gigantic engineering budgets themselves to use those chips and use NVIDIA’s chips and be efficient at that. So the thing about a moat is not only does it in some sense keep other people out, it also keeps you in. So insofar as they want to keep their software moat, their CUDA moat, they have to remain compatible with CUDA and compatibility with that software. Compatibility with CUDA requires certain hardware structures. So Nvidia has lots and lots of threads. They have a very flexible memory system. These things are great for being able to flexibly address a whole bunch of different types of neural net problems, but they all cost in terms of hardware, and they’re not necessarily the choices to have those sorts of things. They’re not necessarily the choices, in fact, not the choices that you would want to make if you were aiming specifically at an LLM. So in order to be fully competitive with a chip that’s specialized for LLMs, they would have to give up all of that. And Jensen himself has said that the one non-negotiable rule in our company is that we have to be compatible with CUDA.

Joe (25:23):

This is interesting. So the challenge for them of spinning out something totally different is that it would be outside the family. So it’s outside the CUDA family, so to speak. And

Tracy (25:35):

Meanwhile, you already have PyTorch and Triton waiting in the wings, I guess…

…Joe (39:00):

Tell us about what customers, because I’ve heard this, we’re all trying to find some alternative to Nvidia, whether it’s to reduce energy costs or just reduce costs in general or being able to even access chips at all since not everyone can get them. There are only so many chips getting made. But when you talk to theoretical customers, A, who do you imagine as your customers? Is it the OpenAIs of the world? Is it the Metas of the world? Is it labs that we haven’t heard of yet that could only get into this if there were sort of more focused lower cost options? And then B, what are they asking for? What do they say? You know what, we’re using NVIDIA right now, but we would really like X or Y in the ideal world.

Reiner (39:48):

So there’s a range of possible customers in the world. The way that we see or a way you divide them up and how we choose to do that is what is the ratio of engineering time they’re putting into their work versus the amount of compute spent that they’re putting in. So the ideal customer in general for a hardware vendor who’s trying to make the absolute best but not necessarily easiest to use hardware, is a company that is spending a lot more on their computing power than they are spending on the engineering time, because then that makes a really good trade off of, maybe I can spend a bit more engineering time to make your hardware work, but I get a big saving on my computing costs. So companies like OpenAI would be obviously a slam dunk.

There’s many more companies as well. So the companies that meet this criteria of spending many times more on compute than on engineering, there’s actually a set of maybe 10, 15 large language model labs that are not as well known as OpenAI, but you might think Character.AI, Cohere and many other companies like that and Mistral.

So the common thing that we hear from those companies, all of those are spending hundreds of millions of dollars on compute, is I just want better FLOPS per dollar. That’s actually the single deciding factor. And that’s primarily the reason they’re deciding on today, deciding on NVIDIA’s products rather than some of the other products in the market is because the FLOPS per dollar of those products is the best you can buy. But when you give them a spec sheet and the first thing they’re going to look at is just what’s the most floating point operations I can run on my chip? And then you can rule out 90% of products there on the basis of, okay, just doesn’t meet that bar. But then after that, you then go through the more detailed analysis of saying, okay, well I’ve got these floating point operations, but is the rest going to work out? Do I have the bandwidths and the interconnect? But for sure the number one criteria is that top line FLOPS.

Joe (41:38):

When we talk about delivering more flops per dollar, what are you aiming for? What is current benchmark flops per dollar? And then are we talking like, can it be done like 90% cheaper? What do you think is realistic in terms of coming to market with something meaningfully better on that metric?

Reiner (41:56):

So NVIDIA’s Blackwell in their FP4 format offers 10 petaFLOPS in that chip, and that chip sells for ballpark 30 to 50,000, depends on many factors. That is about a factor of two to four better than the previous generation NVIDIA chip, which was the Hopper chip. So part of that factor is coming from going to lower precision, going from 8-bit precision to 4-bit precision. In general, precision has been one of the best ways to improve the FLOPS you can pack into a certain amount of silicon. And then some of it is also coming from other factors such as cost reductions that NVIDIA has been deploying. So that’s a benchmark for where NVIDIA is at now. You need to be at least integer multiples better than that in order to compete with the incumbent. So at least two or three times better on that metric we would say. But then of course, if you’re designing for the future, you have to compete against the next generation after that too. So you want to be many times better than the future chip, which isn’t out yet. So that’s the thing you aim for.
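To put Reiner’s numbers together, here is a rough illustration using the quoted 10 petaFLOPS (FP4) and the “ballpark” US$30,000 to US$50,000 price range; the target at the end simply applies his “two or three times better” remark and is an assumption, not a MatX spec:

```python
# FLOPS-per-dollar arithmetic from the quoted figures (all approximate).
blackwell_pflops = 10          # FP4 petaFLOPS per chip, as quoted
prices = (30_000, 50_000)      # "ballpark" US$ per chip, as quoted

for price in prices:
    tflops_per_dollar = blackwell_pflops * 1_000 / price  # 1 petaFLOPS = 1,000 teraFLOPS
    print(f"${price:,}: {tflops_per_dollar:.2f} teraFLOPS per dollar")
# -> 0.33 and 0.20 teraFLOPS per dollar; a challenger aiming to be "two or
#    three times better" would need roughly 0.4-1.0 teraFLOPS per dollar.
```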

Joe (42:56):

Is there anything else that we should sort of understand about this business that we haven’t touched on that you think is important?

Mike (43:03):

One thing, given that this is Odd Lots that I think the reason that Sam Altman is going around the world talking about trillions of dollars of spend is that he wants to move the expectations of all of the suppliers up. So as we’ve observed in the semiconductor shortage, if the suppliers are preparing for a certain amount of demand and demand, in the case of famously of the auto manufacturers as a result of COVID canceled their orders and then they found that demand was much, much, much larger than they expected. It took a very long time to catch up. A similar thing happened with NVIDIA’s H100. So TSMC was actually perfectly capable of keeping up with demand for the chips themselves, but the chips for these AI products use a very special kind of packaging, which puts the compute chips very close to the memory chips and hence allows them to communicate very quickly called CoWoS.

And the capacity for CoWoS was limited because TSMC built with a particular expectation of demand, and when H100 became such a monster product, their CoWoS capacity wasn’t able to keep pace with demand. So supply chain tends to be really good if you predict accurately and if you predict badly on the low side, then you end up with these shortages. But on the other hand, these companies, because the manufacturing companies have very high CapEx, they’re fairly loath to predict badly on the high side because that leads them to having spend a bunch of money on capital CapEx that they’re unable to recover.

4. The Impact of Fed Rate Cuts on Stocks, Bonds & Cash – Ben Carlson

It can be helpful to understand what can happen to the financial markets when the Fed raises or lowers short-term rates.

The reason for the Fed rate cut probably matters more than the rate cut itself.

If the Fed is cutting rates in an emergency fashion, like they did during the Great Financial Crisis, that’s a different story than the Fed cutting because the economy and inflation are cooling off…

…Most of the time stocks were up. The only times the S&P 500 was down substantially a year later occurred during the 1973-74 bear market, the bursting of the dot-com bubble and the 2008 financial crisis.

It’s been rare for stocks to be down three years later and the market has never been down five years after the initial rate cut.

Sometimes the Fed cuts because we are in or fast approaching a recession, but that’s not always the case…

…Average returns have been better when no recession occurs but the disparity isn’t as large as you would assume.

“Most of the time the stock market goes up but sometimes it goes down” applies to Fed rate cuts just like it does to every other point in time.

Obviously, every rate cut cycle is different. This time it’s going to happen with stocks at or near all-time highs, big gains from the bottom of a bear market, a presidential election, and the sequel to Gladiator coming out this fall.

5. Enough! This Is How the Sahm Rule Predicts Recessions (Transcript Here) – Joshua Brown and Claudia Sahm

Brown (02:11): I’ve been around for a long time and I had not heard about the Sahm Rule but apparently it’s something that you created in 2019. The first person to mention it to me was Nick Koulos which he did on the show. And I guess it had a lot of relevance to start talking about now because we’re trying to figure out if the Fed is staying too tight and if the good economy we’ve had is going to start slipping away before the Fed can start easing and that’s why everyone’s talking about the Sahm Rule.

I want to try to explain it very succinctly and you tell me if I’m missing anything about how the Sahm Rule works. That’s important to the discussion. The Sahm Rule is a recession indicator you came up with about five years ago. Basically what you’re doing is calculating the three-month moving average of the national unemployment rate, so not just last month’s print, but you’ll take the last three, you’ll average those and you’re comparing them to the lowest three-month moving average for the unemployment rate that we’ve had over the last 12 months. Do I have that? Okay you’re nodding.

Sahm (03:28): That’s the formula. We’re there.

Brown (03:29): Okay. If the current three-month average is 0.5 percentage points or more above the lowest three-month average from the last 12 months, that would signal the early stages of a recession – and we could talk about how early – but that would be the “trigger”. And I’m so excited to have you on today because as of the last employment report we got, the three-month average is now more than, just barely, 0.5% above the lowest three-month average that we’ve had, therefore the Sahm Rule is in effect…
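For readers who want to see the arithmetic, here is a minimal sketch of the rule as Brown describes it; this is an illustration of the formula, not Claudia Sahm’s own code, and the sample series is hypothetical:

```python
# Sahm Rule: current 3-month average unemployment rate, minus the lowest
# 3-month average over the prior 12 months; a gap of 0.5pp or more triggers.
def sahm_rule_triggered(unemployment: list[float]) -> bool:
    if len(unemployment) < 15:
        raise ValueError("need at least 15 monthly observations")
    three_mo = [sum(unemployment[i - 2:i + 1]) / 3
                for i in range(2, len(unemployment))]
    current = three_mo[-1]
    low_prior_12 = min(three_mo[-13:-1])  # 3-month averages of the prior 12 months
    return current - low_prior_12 >= 0.5

# Hypothetical series drifting up from 3.5% to 4.3%:
rates = [3.5] * 7 + [3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3]
print(sahm_rule_triggered(rates))  # True
```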

…Brown (06:30): So according to your work the Sahm Rule, I guess on a back test, would have accurately signalled every actual recession we’ve had since the 1970s, without the false positives that can occur outside of recessions. This is in some ways similar to my friend Professor Cam Harvey who was trying to figure out why the inverted yield curve has been so accurate in predicting recessions and so far has not had a false positive either. Some would say recent history has been the false positive but he would argue “I’m still on the clock.” But it’s interesting that you created this for fiscal policy while working at the Fed.

Sahm (07:20): So as one of the analysts who covered consumer spending in 2008, understanding what consumers were doing with their, say, rebate checks or later tax credits, the Fed works around the edges. In the staff’s forecast, there are estimates of what fiscal policy does to the economy and the Fed can take that into consideration when they do their monetary policy. It may seem a little counterintuitive but that’s a very important piece of the health of the economy, understanding consumers. But I will say having watched that episode made me want to help improve the policy for next time. The Sahm Rule was part of a policy volume in early 2019 on how to – all kinds of automatic stabilizers, it was just a piece of it. It comes from the back test, I’m looking at history. Before that, it did pass the 2020, calling that recession with flying colours, but anyone could have done that. Yet there are some very unusual circumstances in this cycle that the Sahm Rule – in my opinion, I do not think the US economy is in a recession despite what the Sahm Rule is stating right now…

…Sahm (13:23): There are two basic reasons the unemployment rate goes up. One, there’s a weakening demand for workers, unemployment rate goes up. That’s very consistent with recessionary dynamics. That’s bad and it builds, there’s momentum. That’s where the Sahm Rule gets its accuracy from historically. The other reason that you can have the unemployment rate increase is if you have an increase in the supply of workers. In general, the unemployment rate can get pushed around. It’s even worse right now for the Sahm Rule because early in the pandemic we had millions of workers drop out of the labour force, just walk away. Then we ended up, because they didn’t all come back as quickly as, say, customers did, so we had labour shortages. The unemployment rate got pushed down, probably unsustainably, because we just didn’t have enough workers. Then in recent years, we’ve had a surge in immigration, as well as we had a good labour market, so people were coming in from the sidelines. So we’ve had two rather notable changes in the labour supply.

I think as we’ve learned – and this is a broad lesson from this – is anytime we have really abrupt, dramatic changes, the adjustments can take a long time. So now as we have these immigrants coming in, this is solving the labour shortage. That is a very good thing, having a larger labour force particularly as we have many people ageing out. That helps keep us growing. That’s a good thing. But in the interim where they’re still searching for jobs, things have slowed down some in terms of adding jobs. That causes the unemployment rate to drift up. Now if it’s just about that supply adjustment, it’s temporary. And at the end of it it’s a good thing, because we’ve got more workers. And we’ve had recessions when there were expansions in the labour force like in the 1970s, so I don’t want to act like just because we have more workers now, everything is okay. It’s just the Sahm Rule – and again as you point out, it’s right at the cusp of its historical trigger. It’s got a lot going on under the hood…

…Sahm (19:52): The Sahm Rule itself, even the real time, has false positives. And then just this bigger conversation of history might not repeat. The one thing on Barry’s is there are cases, you have to go further back in history, there are times where we go into a recession with a low or lower unemployment rate than now. It is not recent. And we have a mix – I talked a lot about the labour supply that’s definitely in the mix. I spent some time looking at that 0.5. When we get across that threshold, what do the contributions from different types of unemployed – you can be because you were laid off, which Barry mentioned, you could be because you’re a new entrant to the workforce, you left a job. We see quite a bit of variation, the contributions. It is true right now we’re much more, there’s more of the entrants, the new job seekers, the coming back to the labour force. They’re a bigger contributor to getting across that 0.5 threshold than most recessions. But you go back to the ‘70s when the labour force is not that different. So it’s hard to pull it out. I’m not in the ironclad, recession is not a given, nor I think what I read – the history – that tightly. And yet I think there are real risks and as with Barry, I was, say in 2022, “A recession is coming,” or “We need a recession.” I was adamantly, I’ve never had a recession call in this whole time. I was kind of close when we got to Silicon Valley Bank but I have not had a recession call in and part of what I could say in 2022 was look at the labour market, look at consumers. We are still in a position of strength, but much less. And the momentum is not good.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Meta Platforms (parent of Facebook), and TSMC. Holdings are subject to change at any time.

Assessing Different Share Buyback Strategies

Buying back stock is a great way to drive shareholder value, but only if it is done at the right price.

Over the past few years, I have observed the different ways that companies conduct their share buybacks. This made me realise that the way a company conducts its share buybacks can have a profound impact on the long-term returns of its stock.

Here are some share buyback strategies and what they mean for shareholders.

Opportunistic

The best way to conduct share buybacks is what I term opportunistic buybacks: buying back shares aggressively when they are undervalued, and holding back when they are not.

An example of a company that does this very well is the US-listed company Medpace, which helps drugmakers run drug trials. 

In 2022, when markets and its own stock price were down, Medpace took the opportunity to buy back its shares aggressively. The company tapped the debt markets to procure more capital to buy back shares, to the extent that its net-cash position of US$314 million at the end of 2021 flipped to a net-debt position of US$361 million as of 30 June 2022.

But as its stock price went up, Medpace became less enthusiastic about buying back shares and instead started to pay off the debt it had incurred; the company ended 2022 with a lower net-debt position of US$180 million.

This opportunistic approach is, in my opinion, the most efficient buyback strategy.
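To illustrate what such a policy could look like, here is a minimal sketch with hypothetical numbers; no company, Medpace included, discloses a mechanical rule like this:

```python
# Opportunistic buybacks: spend more the further the stock trades below an
# internal fair-value estimate, and spend nothing when it trades above it.
def quarterly_buyback_spend(price: float, fair_value_estimate: float,
                            max_budget: float) -> float:
    discount = max(0.0, 1 - price / fair_value_estimate)
    return min(max_budget, max_budget * discount * 2)  # scale spend with the discount

print(quarterly_buyback_spend(120, 200, 100e6))  # deep discount -> US$80m spent
print(quarterly_buyback_spend(220, 200, 100e6))  # above fair value -> US$0 spent
```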

The plot below shows the amount Medpace spent on buybacks over the last three years.

Source: TIKR.com

With its stock price now at a much higher level, Medpace has not conducted buybacks for the last four quarters. Medpace’s management team is likely waiting for its shares to fall to a lower valuation before they conduct buybacks again.

Regular buybacks

Another way to conduct buybacks is to do it on a regular basis. The parent of Google, Alphabet, is one such company that has conducted very regular buybacks. In the past 10 quarters, Alphabet has consistently spent close to US$15 billion a quarter on buybacks. This includes quarters when the company’s free cash flow was less than US$15 billion.

Although I prefer opportunistic buybacks, regular buybacks may be best suited for a company such as Alphabet, which has to deploy large amounts of capital. Alphabet’s shares have also consistently traded at a reasonable valuation over the last few years, making regular buybacks a decent strategy.

The chart below shows the amount that Alphabet spent on buybacks in each quarter for the last 10 quarters. 

Source: TIKR.com

Poor timing

At the other end of the spectrum, some companies try to time their buybacks but end up being aggressive with buybacks at the wrong time.

Take Adobe, the owner of Photoshop, for example.

Source: TIKR.com

Adobe seems to change the level of aggressiveness in its share buybacks from quarter to quarter.

In the first quarter of 2022, Adobe’s stock price was close to all-time highs, but the company was very aggressive with buybacks and spent more than US$2 billion – or 143% of its free cash flow in the quarter – to repurchase its shares.

When its stock price started falling later that year, instead of taking advantage of the lower price, Adobe surprisingly cut its buybacks to slightly over US$1 billion a quarter, less than what it generated in free cash flow during those periods. So far in 2024, Adobe has ramped up its buybacks again, after its stock price had already risen.

The optimal strategy would have been to do more buybacks when its stock price was low and fewer when it was high.

Bottom line

Buybacks can be a great way to add value for shareholders. However, it is vital that companies conduct buybacks at low valuations to maximise the use of their capital to generate long-term returns for shareholders.

Medpace is an excellent example of great capital allocation, even going so far as to tap the debt markets to be even more aggressive with buybacks when its stock price was low. In the middle, we have companies such as Alphabet that consistently buy back shares. At the other end of the spectrum is Adobe, which seems to become more aggressive with buybacks at the wrong times.

Hopefully, more companies can follow in the footsteps of Medpace and make sure they put their capital to use only when the time is right.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Alphabet, and Medpace. Holdings are subject to change at any time.

What We’re Reading (Week Ending 04 August 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve consistently been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 04 August 2024:

1. No More EU Fines for Big Tech – John Loeber

The EU takes an aggressive stance toward American Big Tech. Citing concerns about privacy and monopolization, it has enacted countless regulations, and fined Google and Meta for billions of dollars. In the last six months, EU regulators have kicked this motion into overdrive:

  • They adopted the Digital Markets Act (DMA), which they used to immediately open investigations into Apple, Google, and Meta.
  • They adopted the AI Act to constrain AI applications.
  • They slapped Apple with a $2B fine.
  • In July alone, they opened antitrust proceedings against Nvidia, antitrust investigations into Google, and threatened to fine Twitter over seemingly-trivial Blue Checks.

The posture is clear: the EU is not satisfied with the bloodletting-to-date and is raising its demands from Big Tech. The AI Act and DMA both may assess penalties as a percentage of global turnover, and are so broad in scope that European regulators are emboldened to pursue tech giants for practically limitless amounts of money…

…The EU’s framework goes so far as to assess fines on a percentage of global turnover:

  • GDPR: up to 4% of global turnover (top-line revenue);
  • AI Act: up to 7% of global turnover;
  • DMA: up to 20% of global turnover;

These just keep getting more expensive! The idea of issuing fines based on global revenue for local violations of law is a brazen stretch of legal convention:

  1. Penalties must be commensurate with damages;
  2. Courts may assert their authority only over subjects in their jurisdiction.

The legal convention would be for the EU to assess fines based on EU revenues, not global revenues. Permitting fines based on global revenue would set disastrous precedent: if the EU can set fines based on global revenue, why can’t any other country? Any other big market with a little bit of leverage could try to extract a slice of the pie. Why shouldn’t India, which has ~500M Meta users, start fining Meta for 10% of its global revenue? Why shouldn’t Brazil do the same? Or Nigeria? And why should they keep their fines to Big Tech? Why don’t they fine Exxon Mobil for a percentage of global revenue?

Permitting this scope would set terrible precedent, and it has no legal legitimacy. Not only must Big Tech refuse to comply, but the US must reject it as a matter of national interest and international order…

…The EU might account only for 7% of Apple’s global revenue. 7% is still a big market, but Apple is by no means dependent on it. Especially considering the exceptionally high level of operational headache in complying with European requirements, if it comes to be Apple’s view that the fines-as-percentage-of-global-revenue cannot be avoided, then it may be rational to pull out…
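A small worked example shows why a global-turnover fine bites so hard for a company with a small EU footprint; it combines the DMA’s 20% cap with the ~7% EU revenue share cited above, and the revenue figure is a round hypothetical:

```python
# Global-turnover vs. local-turnover fines, illustrative numbers only.
global_revenue_bn = 100.0    # hypothetical global turnover, US$ billions
eu_share = 0.07              # EU at ~7% of revenue, as cited above
dma_cap = 0.20               # DMA: fines of up to 20% of global turnover

fine_bn = dma_cap * global_revenue_bn
eu_revenue_bn = eu_share * global_revenue_bn
print(f"Maximum fine: ${fine_bn:.0f}B vs. EU revenue: ${eu_revenue_bn:.0f}B")
print(f"Fine as a multiple of EU revenue: {fine_bn / eu_revenue_bn:.1f}x")  # ~2.9x
```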

…The EU doesn’t have true local alternatives. If it pursues Nvidia on Antitrust grounds: does it really want Nvidia GPUs to be replaced by, say, Huawei GPUs? Does it want Facebook to be replaced by VK? If EU regulators are motivated by concerns over unaccountable, outside influences, I might suggest that American Big Tech is still their best option…

…Never forget: these Big Tech products are, for the most part, cloud services. They can simply be turned off remotely, from one minute to the next. Hypothetically, if Big Tech were to coordinate, play true hardball, and shut off EU-facing products, the EU economy would grind to a halt overnight. Imagine the fallout from hundreds of millions of people suddenly not having email anymore. Without AWS, GCP, Azure, etc. things simply wouldn’t work. We live in a digital world; the dependencies are everywhere. It’d be like when OPEC constrained oil supply in the 70s, except percolating much more deeply and instantaneously throughout economies.

Of course, it’s very unlikely for Big Tech to withdraw from the EU entirely. That would be drastic. The reality is subtler, and we’re seeing it play out right now: Meta is not making its multimodal Llama models available in the EU. Apple isn’t going to bring Apple Intelligence to the EU. These are important, state-of-the-art products. If you believe at all that AI is promising or important, then EU businesses and consumers will suffer from not having access to them…

…Maybe multimodal Llama AI is not important for EU consumers today. But what if the best radiology AI assistant gets built on Llama AI — and EU patients can’t have access? Or an EU business needs the best-in-class AI to remain globally competitive? What if Apple Intelligence can automatically call an ambulance for you if you have a heart attack — but not in the EU?…

…The EU must compete or cooperate. Either one is fine. But it would be ill-advised to continue the current regime of low-grade economic harassment of its nominal allies by syphoning off fines and imposing obnoxious requirements.

2. 4 Key Lessons Learnt in Legacy Planning – Christopher Tan

In the plans that clients want us to put in place for them, one of the common requests is to put in place structures to prevent their children from squandering their inheritance. This is not just limited to young beneficiaries but beneficiaries who can be as old as in their 50s!

The lack of trust is largely due to many of these children not needing to work for the good life that they have been enjoying from a young age…

…But it is not that parents do not know this. No sensible parent starts off their parenting journey with the intention of spoiling their children to such an extreme. It usually begins in a small way, unintentionally, incrementally, and by the time they realise what they might have done, it is too late.

When we give our children too many good things in life, especially when they are still young, we deny them the opportunity to learn the importance of delayed gratification and we do not allow them to foster resilience and independence, which can cause them to have a self-entitlement mentality…

…When I first started my firm in 2001, this new “baby” began to consume me and took time away from my wife and two young children.

Well-meaning friends warned me not to chase wealth at the expense of my family. “But I am not even trying to be richer. I am just trying to survive!” I retorted. Finally, it came to a point in my life where I did not have a relationship with my family.

Thankfully, I realised it early enough to turn around. Otherwise, I would have lost my family…

…In all my work with my clients, I have realised that behind every legacy and estate plan, there is a message of love. Unfortunately, this is lost in the legal documents and structures that are put in place.

I have always encouraged my clients to share their gifting plans with their beneficiaries. Share not just the “what and how” of the plan but also share the “why”.

But as Asians, some of us may not be so willing to communicate our emotions so openly, especially before our passing. In this case, one can consider using the Letter of Wishes (LOW).

The LOW is a non-legally binding document by the settlor to guide the protectors and trustees on how they wish their assets to be managed. But instead of writing it like an instruction manual, write it like a love letter to your loved ones.

3. Nike: An Epic Saga of Value Destruction – Massimo Giunco

A month ago. June 28th, 2024. Nike Q2 24 financial results. 25bn of market cap lost in a day (70 in 9 months). 130 million shares exchanged in the stock market (13 times the avg number of daily transactions). The lowest share price since 2018, – 32% since the beginning of 2024.

It wasn’t a Wall Street session. It was the judgement day for Nike.

The story started on January 13th, 2020, when John Donahoe became CEO of Nike, replacing Mark Parker. Together with Heidi O’Neill, who became President of Consumer, Product and Brand, he began immediately to plan the transformation of the company.

A few months later, after his first tour around the Nike world, the CEO announced – via email – his decisions (using the formula “dear Nike colleagues, this is what you asked for…”):

1)    Nike will eliminate categories from the organization (brand, product development and sales)

2)    Nike will become a DTC-led company, ending the wholesale leadership.

3)    Nike will change its marketing model, centralizing it and making it data driven and digitally led…

Clearly, one important support came from the brand investments. The marketing org. dramatically changed its demand creation model and pumped – over the years – billions of dollars into performance marketing/programmatic adv to buy (and the word “buy” is the proper one, otherwise I would have used “earn”) a fast-growing traffic to the ecommerce platform (we will talk about that later).

After a few quarters of good results (as I said, inflated by the long tail of the pandemic and the slow resurrection of the B&M business), things started to take unexpected directions. Among them:

a) Nike – that had been a wholesale business company since ever, working on a well-established “futures” system – did not have a clear knowledge and discipline to manage the shift operationally. Magically (well, not so magically), inventory started to blow up, as all the data driven predictions (the “flywheel” …) were simply inconclusive and the supply chain broke up. As announced by the quarterly earnings releases, the inventory level on May 31st, 2021, was 6.5bn $. On May 31st, 2022, it was 8.5bn $. On November 30th, 2022, it reached 10bn $. Nike didn’t know anymore what to produce, when to produce, where to ship. Action plans to solve the over-inventory issues planted the seed of margin erosion, as Nike started to discount more and more on its own channels – especially Nike.com (we will talk later about it)…

…The CEO of Nike doesn’t come from the industry. So, probably he underestimated consumer behavior and the logic behind the marketplace mechanisms of the sport sneakers and apparel distribution. Or wasn’t aware of them. At the end, he is a poorly advised “data driven guy”, whatever it means. It is more difficult to understand why the President of the Consumer, Product and Brand, a veteran of the industry, one of the creators of the Women’s category in Nike, a professional with an immense knowledge of the company and the business, approved and endorsed all of this. Maybe, excess of confidence. Or pure and simple miscalculations… hard to know…

What happened in 2020? Well, the brand team shifted from brand marketing to digital marketing and from brand enhancing to sales activation. All in. Because of that, the CMO of that time made a few epic moves:

a) shift from CREATE DEMAND to SERVE AND RETAIN DEMAND, that meant that most of the investment were directed to those who were already Nike consumers (or “members”).

b) massive growth of programmatic adv investment (as of 2021, to drive traffic to Nike.com, Nike started investing in programmatic adv and performance marketing the double or more of the share of resources usually invested in the other brand activities). For sure, the former CMO was ignoring the growing academic literature around the inefficiencies of investment in performance marketing/programmatic advertising, due to frauds, rising costs of mediators and declining consumer response to those activities. Things that were suggesting other large B2C companies – like Unilever and P&G – to reduce those kind of DC investments in the same exact period… Because of that, Nike invested a material amount of dollars (billions) into something that was less effective but easier to be measured vs something that was more effective but less easy to be measured. In conclusion: an impressive waste of money.

c) elevation of Brand Design and demotion of Brand Communication. Basically, style over breakthrough creativity. To feed the digital marketing ecosystem, one of the historic functions of the marketing team (brand communications) was “de facto” absorbed and marginalized by the brand design team, which took the leadership in marketing content production (together with the mar-tech “scientists”). Nike didn’t need brand creativity anymore, just a polished and never stopping supply chain of branded stuff…

Obviously, the former CMO had decided to ignore “How Brands Grow” by Byron Sharp, Professor of Marketing Science, Director of the Ehrenberg-Bass Institute, University of South Australia. Otherwise, he would have known that: 1) if you focus on existing consumers, you won’t grow. Eventually, your business will shrink (as it is “surprisingly” happening right now). 2) Loyalty is not a growth driver. 3) Loyalty is a function of penetration. If you grow market penetration and market share, you grow loyalty (and usually revenues). 4) If you try to grow only loyalty (and LTV) of existing consumers (spending an enormous amount of money and time to get something that is very difficult and expensive to achieve), you don’t grow penetration and market share (and therefore revenues). As simple as that…

He made “Nike.com” the center of everything and diverted focus and dollars to it. Due to all of that, Nike hasn’t made a history making brand campaign since 2018, as the Brand organization had to become a huge sales activation machine. An example? The infamous “editorial strategy” – you can see the effects of it if you visit its archive, the Nike channel on YouTube or any Nike account on Instagram – generated a regurgitation of thousands of micro-useless-insignificant contents, costly and mostly ineffective, all produced to feed the bulimic digital ecosystem, aimed to drive traffic to a platform that converts a tiny (and when I say tiny, I mean really tiny…) fraction of consumers who arrive there and disappoints (or ignores) all the others.

4. Getting bubbly – Owen A. Lamont

Is the U.S. stock market currently in an AI-fueled bubble? That’s the question I asked back in March, and my answer was “No, not even close.” Since then, new data has come in, and my answer has changed. As of July 2024, I still think we’re not in a bubble, but now we are getting close.

Here are my previously discussed Four Horsemen:

  • First Horseman, Overvaluation: Are current prices at unreasonably high levels according to historical norms and expert opinion?
  • Second Horseman, Bubble beliefs: Do an unusually large number of market participants say that prices are too high, but likely to rise further?
  • Third Horseman, Issuance: Over the past year, have we seen an unusually high level of equity issuance by existing firms and new firms (IPOs), and unusually low levels of repurchases?
  • Fourth Horseman, Inflows: Do we see an unusually large number of new participants entering the market?

What I said before was, “As of March 2024, we may perhaps hear the distant hoofbeats of the First Horseman (overvaluation), who has not traveled far since he last visited us, but there is no sign yet of the other three.”

What’s changed is the Second Horseman, who is now trotting into view. But there’s still no sign of the other two horsemen; for the aggregate U.S. stock market, we see neither issuance nor inflows…

… The table shows that, as has been widely reported, CAPE is very high today and has only been higher around prior bubbles in 2021 and 1999. The market ain’t cheap.

The only point I want to make is that the 2021 bubble was different from 1999/2000 in one key respect: interest rates. In 1999, both nominal and real rates were high and the excess CAPE yield was negative, implying that there was an obvious alternative to investing in overpriced stocks. In 2021, in contrast, both nominal and real rates were very low and the excess CAPE yield was positive, so that one could argue that stocks were fairly priced relative to bonds.

Today looks closer to 1999 than to 2021: a stock market that looks high relative to bond markets. So in that sense, today’s market looks more bubbly than 2021, though less bubbly than 1999…
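For readers unfamiliar with the metric, Shiller’s excess CAPE yield is commonly computed as the inverse of CAPE minus the real 10-year Treasury yield. Here is a minimal sketch with rough, hypothetical inputs rather than the article’s data:

```python
# Excess CAPE yield (ECY): the CAPE earnings yield minus the real 10-year
# bond yield. A positive ECY suggests stocks look cheap relative to bonds.
def excess_cape_yield(cape: float, real_10y_yield: float) -> float:
    return 1 / cape - real_10y_yield

# Hypothetical: the same high CAPE looks very different under different rates.
print(f"{excess_cape_yield(44, 0.04):+.2%}")   # high CAPE, high real rates -> -1.73% (1999-ish)
print(f"{excess_cape_yield(38, -0.01):+.2%}")  # high CAPE, negative real rates -> +3.63% (2021-ish)
```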

…Talking to academic economists in mid-July 2024, I got a 1998ish vibe. When I asked them if they thought the market is overvalued, they almost all said yes, sometimes adding “of course” or “definitely” and mentioning megacap tech stocks. I don’t think the overvaluation sentiment among finance professors is as strong and uniform as it was in 1999, but it is far stronger than it was in 2021.

I’m guessing the gap between public and private utterances mostly reflects the slow pace of academic research. There were many economists studying stock market overvaluation in 1999 because the market had been overvalued for years. In contrast, today we see mostly visceral reactions to high prices as opposed to formal analysis…

I previously showed a table with survey data from Yale’s U.S. Stock Market Confidence Indices,[5] and I said that in order for the Second Horseman to be present:

“I need 65% or more respondents agreeing that “Stock prices in the United States, when compared with measures of true fundamental value or sensible investment value, are too high.”

Below, I show an updated table where I have just added a new row for July 2024. We are not quite at my proposed threshold of 65%, but we’ve reached 61%, mighty close. With 61% of individual investors saying the market is overvalued but 75% saying that the market is going up, it appears that bubble beliefs are emerging…

…Other evidence suggests bubble beliefs emerging within specific segments of the market. For example, a recent survey found that 84% of retail investors expected the tech sector to outperform in the second half of 2024, but 61% said AI-related stocks were overvalued.

5. Does the Stock Market Care Who the President Is? – Ben Carlson

I took a look back at every president since Herbert Hoover to see how bad stock market losses have been for each four-year term in office…

…Every president saw severe corrections or bear markets on their watch. The average loss over all four-year terms was 30 percent. The average loss under a Republican administration was 37 percent while the average loss under the Democrats was 24 percent. But these differences don’t really tell you much about the two parties. The stock market does not care about Republicans or Democrats.

For example, if you look at the stock market performance under both Republicans and Democrats going back to 1853, two full presidential terms before Lincoln took office, the performance is fairly similar. Total returns under Democrats were 1,340 percent, the total returns under Republicans were 1,270 percent.

Presidents have far less control over the markets than most people would have you believe. There are no magical levers they can pull to force stocks to rise or fall. Policy decisions often affect the economy with a lag. And the economy and stock market are rarely operating in lock-step. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Apple, and Meta Platforms. Holdings are subject to change at any time.

What The USA’s Largest Bank Thinks About The State Of The Country’s Economy In Q2 2024

Insights from JPMorgan Chase’s management on the health of American consumers and businesses in the second quarter of 2024.

JPMorgan Chase (NYSE: JPM) is currently the largest bank in the USA by total assets. Because of this status, JPMorgan is naturally able to feel the pulse of the country’s economy. The bank’s latest earnings conference call – for the second quarter of 2024 – was held three weeks ago and contained useful insights on the state of American consumers and businesses. The bottom line is this: The US economy is stronger than what many would have thought a few years ago given the current monetary conditions, but there are signs of weakness such as slightly higher unemployment and slower GDP growth; at the same time, inflation and interest rates may stay higher than the market expects, and the Fed’s quantitative tightening may have unpredictable consequences.

What’s shown between the two horizontal lines below are quotes from JPMorgan’s management team that I picked up from the call.


1. Broader financial market conditions suggest a benign economic outlook, but JPMorgan’s management continue to be vigilant about potential tail risks; management is concerned about inflation and interest rates staying higher than the market expects, and the effects of the Federal Reserve’s quantitative tightening

While market valuations and credit spreads seem to reflect a rather benign economic outlook, we continue to be vigilant about potential tail risks. These tail risks are the same ones that we have mentioned before. The geopolitical situation remains complex and potentially the most dangerous since World War II — though its outcome and effect on the global economy remain unknown. Next, there has been some progress bringing inflation down, but there are still multiple inflationary forces in front of us: large fiscal deficits, infrastructure needs, restructuring of trade and remilitarization of the world. Therefore, inflation and interest rates may stay higher than the market expects. And finally, we still do not know the full effects of quantitative tightening on this scale.

2. Net charge-offs (effectively bad loans that JPMorgan can’t recover) rose from US$1.4 billion a year ago to US$2.2 billion, mostly because of card-related credit losses that are normalising to historical norms

Credit costs were $3.1 billion, reflecting net charge-offs of $2.2 billion and a net reserve build of $821 million. Net charge-offs were up $820 million year-on-year, predominantly driven by Card…

…I still feel like when it comes to Card charge-offs and delinquencies, there’s just not much to see there. It’s still — it’s normalization, not deterioration. It’s in line with expectations. 

3. JPMorgan’s credit card outstanding loans was up double-digits

Card outstandings were up 12% due to strong account acquisition and the continued normalization of revolve.

4. Auto originations are down

In auto, originations were $10.8 billion, down 10%, coming off strong originations from a year ago while continuing to maintain healthy margins. 

5. JPMorgan’s investment banking fees had strong growth in 2024 Q2, partly because of favourable market conditions; management is cautiously optimistic about the level of appetite that companies have for capital markets activity, but headwinds persist 

This quarter, IB fees were up 50% year-on-year, and we ranked #1, with year-to-date wallet share of 9.5%. In advisory, fees were up 45% primarily driven by the closing of a few large deals and a weak prior year quarter. Underwriting fees were up meaningfully, with equity up 56% and debt up 51%, benefiting from favorable market conditions. In terms of the outlook, we’re pleased with both the year-on-year and sequential improvement in the quarter. We remain cautiously optimistic about the pipeline, although many of the same headwinds are still in effect. It’s also worth noting that pull-forward refinancing activity was a meaningful contributor to the strong performance in the first half of the year…

…In terms of dialogue and engagement, it's definitely elevated. So I would say the dialogue on ECM [Equity Capital Markets] is elevated and the dialogue on M&A is quite robust as well. So all of those are good things that encourage us and make us hopeful that we could be seeing sort of a better trend in this space. But there are some important caveats.

So on the DCM [Debt Capital Markets] side, yes, we made pull-forward comments in the first quarter, but we still feel that this second quarter still reflects a bunch of pull-forward, and therefore, we’re reasonably cautious about the second half of the year. Importantly, a lot of the activity is refinancing activity as opposed to, for example, acquisition finance. So the fact that M&A remains still relatively muted in terms of actual deals has knock-on effects on DCM as well. And when a higher percentage of the wallet is refi-ed, then the pull-forward risk becomes a little bit higher.

On ECM, if you look at it kind of [at a remove], you might ask the question, given the performance of the overall indices, you would think it would be a really booming environment for IPOs, for example. And while it's improving, it's not quite as good as you would otherwise expect. And that's driven by a variety of factors, including the fact that, as has been widely discussed, the extent to which the performance of the large indices is driven by a few stocks; the sort of mid-cap tech growth space and other spaces that would typically be driving IPOs have had much more muted performance. Also, a lot of the private capital that was raised a couple of years ago was raised at pretty high valuations. And so in some cases, people looking at IPOs could be looking at down rounds, that's an issue. And while secondary market performance of IPOs has improved meaningfully, in some cases, people still have concerns about that. So those are a little bit of overhang on that space. I think we can hope that over time that fades away and the trend gets a bit more robust.

And yes, on the advisory side, the regulatory overhang is there, remains there. And so we’ll just have to see how that plays out.

6. Management is seeing muted demand for new loans from companies as current economic conditions make them cautious

Demand for new loans remains muted as middle market and large corporate clients remain somewhat cautious due to the economic environment and revolver utilization continues to be below pre-pandemic levels. 

7. Demand for loans in the commercial real estate (CRE) market is muted

In CRE, higher rates continue to suppress both loan origination and payoff activity.

8. Lower-income cohorts are facing a little more pressure than higher-income cohorts: even though the US economy is stronger than what many would have thought a few years ago given the current monetary conditions, unemployment is slightly higher and GDP growth is moderating

As I say, we always look quite closely inside the cohort, inside the income cohorts. And when you look in there, specifically, for example, on spend patterns, you can see a little bit of evidence of behavior that’s consistent with a little bit of weakness in the lower-income segments, where you see a little bit of rotation of the spend out of discretionary into nondiscretionary. But the effects are really quite subtle, and in my mind, definitely entirely consistent with the type of economic environment that we’re seeing, which, while very strong and certainly a lot stronger than anyone would have thought given the tightness of monetary conditions, say, like they’ve been predicting it a couple of years ago or whatever, you are seeing slightly higher unemployment, you are seeing moderating GDP growth. And so it’s not entirely surprising that you’re seeing a tiny bit of weakness in some pockets of spend. 

9. The increase in nonaccrual loans in the Corporate & Investment Bank business is not a broader sign of cracks happening in the business

[Question] I know your numbers are still quite low, but in the Corporate & Investment Bank, you had about a $500 million pickup in nonaccrual loans. Can you share with us what are you seeing in C&I? Are there any early signs of cracks or anything?

[Answer] I think the short answer is no, we’re not really seeing early signs of cracks in C&I. I mean, yes, I agree with you like the C&I charge-off rate has been very, very low for a long time. I think we emphasized that at last year’s Investor Day. If I remember correctly, I think the C&I charge-off rate [ over the preceding ] 10 years was something like literally 0. So that is clearly very low by historical standards. And while we take a lot of pride in that number and I think it reflects the discipline in our underwriting process and the strength of our credit culture across bankers and the risk team, that’s not — we don’t actually run that franchise to like a 0 loss expectation. So you have to assume there will be some upward pressure on that. But in any given quarter, the C&I numbers tend to be quite lumpy and quite idiosyncratic. So I don’t think that anything in the current quarter’s results is indicative of anything broader and I haven’t heard anyone internally talk that way, I would say.

10. Management is unwilling to lower its standards for risk-taking just because the bank has excess capital; given its current assessment of economic risk, management thinks it makes sense to be patient

And of course, for the rest of the loan space, the last thing that we’re going to do is have the excess capital mean that we lean in to lending that is not inside our risk appetite or inside our credit box, especially in a world where spreads are quite compressed and terms are under pressure. So there’s always a balance between capital deployment and assessing economic risk rationally. And frankly, that is, in some sense, a microcosm of the larger challenge that we have right now. When I talked about if there was ever a moment where the opportunity cost of not deploying the capital relative to how attractive the opportunities outside the walls of the company are, now would be it in terms of being patient. That’s a little bit one example of what I was referring to.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 28 July 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve been regularly sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the readership of The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 28 July 2024:

1. Open Source AI Is the Path Forward – Mark Zuckerberg

In the early days of high-performance computing, the major tech companies of the day each invested heavily in developing their own closed source versions of Unix. It was hard to imagine at the time that any other approach could develop such advanced software. Eventually though, open source Linux gained popularity – initially because it allowed developers to modify its code however they wanted and was more affordable, and over time because it became more advanced, more secure, and had a broader ecosystem supporting more capabilities than any closed Unix. Today, Linux is the industry standard foundation for both cloud computing and the operating systems that run most mobile devices – and we all benefit from superior products because of it.

I believe that AI will develop in a similar way. Today, several tech companies are developing leading closed models. But open source is quickly closing the gap. Last year, Llama 2 was only comparable to an older generation of models behind the frontier. This year, Llama 3 is competitive with the most advanced models and leading in some areas. Starting next year, we expect future Llama models to become the most advanced in the industry. But even before that, Llama is already leading on openness, modifiability, and cost efficiency.

Today we’re taking the next steps towards open source AI becoming the industry standard. We’re releasing Llama 3.1 405B, the first frontier-level open source AI model, as well as new and improved Llama 3.1 70B and 8B models. In addition to having significantly better cost/performance relative to closed models, the fact that the 405B model is open will make it the best choice for fine-tuning and distilling smaller models…

…Many organizations don’t want to depend on models they cannot run and control themselves. They don’t want closed model providers to be able to change their model, alter their terms of use, or even stop serving them entirely. They also don’t want to get locked into a single cloud that has exclusive rights to a model. Open source enables a broad ecosystem of companies with compatible toolchains that you can move between easily…

…Developers can run inference on Llama 3.1 405B on their own infra at roughly 50% the cost of using closed models like GPT-4o, for both user-facing and offline inference tasks…
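
As a quick aside from us: for readers who want a feel for what "running Llama on your own infra" looks like in practice, here's a minimal sketch using Hugging Face's transformers library. It is our illustration, not Meta's; we use the small 8B model (the 405B model needs far more serious hardware), and you would first need to be granted access to the gated meta-llama weights on Hugging Face.

```python
# A minimal sketch from us, not from the article: local inference with the
# small Llama 3.1 8B Instruct model via Hugging Face transformers. Assumes
# transformers >= 4.43 plus accelerate, and access to the gated weights.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.1-8B-Instruct",  # 405B needs far more hardware
    torch_dtype=torch.bfloat16,
    device_map="auto",  # spreads the model across available GPUs/CPU
)

messages = [
    {"role": "user", "content": "In one sentence, why might a company prefer open-weight models?"},
]
result = pipe(messages, max_new_tokens=96)
# For chat-style input, generated_text holds the whole conversation,
# with the model's reply as the last message.
print(result[0]["generated_text"][-1]["content"])
```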

…One of my formative experiences has been building our services constrained by what Apple will let us build on their platforms. Between the way they tax developers, the arbitrary rules they apply, and all the product innovations they block from shipping, it’s clear that Meta and many other companies would be freed up to build much better services for people if we could build the best versions of our products and competitors were not able to constrain what we could build. On a philosophical level, this is a major reason why I believe so strongly in building open ecosystems in AI and AR/VR for the next generation of computing…

… I expect AI development will continue to be very competitive, which means that open sourcing any given model isn’t giving away a massive advantage over the next best models at that point in time…

…The next question is how the US and democratic nations should handle the threat of states with massive resources like China. The United States’ advantage is decentralized and open innovation. Some people argue that we must close our models to prevent China from gaining access to them, but my view is that this will not work and will only disadvantage the US and its allies. Our adversaries are great at espionage, stealing models that fit on a thumb drive is relatively easy, and most tech companies are far from operating in a way that would make this more difficult. It seems most likely that a world of only closed models results in a small number of big companies plus our geopolitical adversaries having access to leading models, while startups, universities, and small businesses miss out on opportunities. Plus, constraining American innovation to closed development increases the chance that we don’t lead at all. Instead, I think our best strategy is to build a robust open ecosystem and have our leading companies work closely with our government and allies to ensure they can best take advantage of the latest advances and achieve a sustainable first-mover advantage over the long term.

2. How a long-term approach to stock investments pays off in spades – Chin Hui Leong

Let’s look at the S&P 500’s performance between May 2004 and May 2024, a 20-year period that produced an average annual return of 10.2 per cent.

Here’s the shocker: If you missed the market’s 10 best days, your double-digit gains would shrink to only 6 per cent per year. If you missed the top 20 days, your returns would plummet to a mere 3.3 per cent, barely keeping up with inflation.

But don’t bet on timing your entry either. During this period, seven of the 10 best days occurred within 15 days of the 10 worst days. In other words, unless you can day trade with precision multiple times in a row, you are better off just holding your stocks through the volatility…
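
As a quick aside from us: the "missing the best days" arithmetic is easy to reproduce. The sketch below is ours and uses simulated daily returns rather than actual S&P 500 data, so the printed numbers are illustrative only; the point is the mechanics of dropping the N best days.

```python
# A minimal sketch from us, not from the article: the "missing the N best
# days" calculation, using simulated daily returns instead of actual S&P 500
# data, so the printed numbers are illustrative only.
import numpy as np

TRADING_DAYS_PER_YEAR = 252
rng = np.random.default_rng(42)
daily_returns = rng.normal(0.0004, 0.012, size=20 * TRADING_DAYS_PER_YEAR)  # ~20 years

def annualised(returns: np.ndarray) -> float:
    growth = np.prod(1.0 + returns)
    years = len(returns) / TRADING_DAYS_PER_YEAR
    return growth ** (1.0 / years) - 1.0

def annualised_missing_best(returns: np.ndarray, n_best: int) -> float:
    # Sitting in cash on the N best days means replacing those returns with 0%.
    kept = np.sort(returns)[:-n_best]  # drop the N largest daily gains
    return annualised(np.concatenate([kept, np.zeros(n_best)]))

print(f"Fully invested:   {annualised(daily_returns):6.1%}")
print(f"Missing 10 best:  {annualised_missing_best(daily_returns, 10):6.1%}")
print(f"Missing 20 best:  {annualised_missing_best(daily_returns, 20):6.1%}")
```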

…Here’s another thing. History has shown that the longer you hold, the better your chances of reaping a positive return. From 1980 to 2023, the S&P 500 delivered positive returns in 33 out of the 43 years.

For the math geeks, that’s a win rate of over 76 per cent, far better than a coin flip. To top it off, there hasn’t been a single 20-year period since 1950 where the stock market has seen negative returns…
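
Similarly, the win-rate and rolling-period statistics are simple to compute once you have a series of annual returns. Again, this sketch is ours and uses hypothetical returns in place of the actual S&P 500 series.

```python
# A minimal sketch from us, not from the article: yearly win rate and rolling
# 20-year total returns from a series of hypothetical annual returns.
import numpy as np

rng = np.random.default_rng(0)
annual_returns = rng.normal(0.08, 0.17, size=74)  # stand-in for 1950-2023

print(f"Share of positive years: {np.mean(annual_returns > 0):.0%}")

WINDOW = 20
rolling_totals = [
    np.prod(1.0 + annual_returns[i:i + WINDOW]) - 1.0
    for i in range(len(annual_returns) - WINDOW + 1)
]
print(f"Worst rolling {WINDOW}-year total return: {min(rolling_totals):.0%}")
```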

…While compounding is powerful, blindly buying any stock isn’t the answer. Many are not worthy to be held over long periods. Quality is the key. For a stock to compound, you need its underlying business to be built to last…

…What if you are wrong in your assessment of a business?…

…I submit to you that the lessons you learn holding a stock for the long term will far outweigh any other lessons you pick up from the stock market. Each stock, whether it turns out to be a winner or loser, will provide invaluable lessons you can apply in the future.

As you learn more over time, you’ll get better at picking the right stocks to hold. After all, as the late Nelson Mandela once said: “I never lose, I either win or I learn.”

3. What We Can Learn From The Oil Market – 1980 – Gene Hoots

Autumn 1980, the energy sector was 33% of the S&P 500 Index. Two personal incidents illustrate the mindset about energy that we now know was unjustified mania…

…One investment advisor visited me in the fall of 1980. He had recently been an Assistant Secretary in the Department of Energy in Washington. Clearly, he was better informed than most about the world oil market. His company was overweight in oil stocks, and he laid out their case.

Oil had hit a new high, $39 a barrel in June. A few weeks before, he had met with the Saudi Oil Minister, Sheik Zaki Yamani. Everyone in the world was listening to Yamani who was setting Saudi oil prices; Yamani seemed to be the most powerful man in the world. My advisor said that in his meeting, Yamani “had personally assured that by April 1981 oil would hit $100 a barrel” – 2 ½ times the current price – a frightening thought…

… I gave my annual pension fund report to the RJR board finance committee. This year, taking my cue from the very conservative Capital Guardian Trust advisors, I (cautiously) stated my concern that oil stocks were becoming too big a part of the market. I did NOT say that oil stocks would decline, rather, that they might not be a bargain relative to other stocks. No sooner had I made the comment than one of the directors interrupted and asked, “Did you say oil stocks are going down?” His tone made it clear that he strongly disagreed with what I had said. I clarified and moved on with my talk, but the board clearly thought that I was completely wrong about oil…

…Spring 1981, the price of crude was far below $100 a barrel, even a bit below $39. Oil would not reach $100 until February 2008, 27 years later. When it comes to major economic and market inflection points, there are no experts!…

…Over the next two years, oil stocks dropped on average 35-50% and many of the smaller companies went bankrupt. 43 years later, the Energy Sector is 3.6% of the S&P 500. $100 invested in the energy stocks at the end of 1980 would have returned $493 and $100 in everything else would have returned $5,787 – 3.5% vs. 9.8% annually (without dividends).
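
As a quick aside from us: the annualised figures can be backed out from the dollar amounts with the standard compound annual growth rate (CAGR) formula. The sketch below is ours; it yields roughly 3.8% and 9.9%, slightly different from the article's quoted 3.5% and 9.8%, which likely reflects rounding or how dividends and period endpoints are treated in the source.

```python
# A minimal sketch from us: backing out a compound annual growth rate (CAGR)
# from the article's start and end dollar values over 43 years.
def cagr(start: float, end: float, years: float) -> float:
    return (end / start) ** (1.0 / years) - 1.0

print(f"Energy stocks:   {cagr(100, 493, 43):.1%}")    # roughly 3.8% p.a.
print(f"Everything else: {cagr(100, 5787, 43):.1%}")   # roughly 9.9% p.a.
```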

4. Sometimes a cut is just a cut – Josh Brown

When is a rate cut not an emergency rate cut? When it’s a “celebratory rate cut” – a term coined by Callie Cox, whom you should be subscribed to immediately by the way.

Callie’s making the point that sometimes the Federal Reserve cuts because they can and they should – policy is overly restrictive relative to current conditions. And sometimes they cut because they have to – an emergency cut with even more emergency cuts to come later…

…The rate cutting cycles that stand out in our memories are the emergency ones. So there is a reflex in market psychology where we automatically equate cutting cycles with oncoming recessions. We need to stop that nonsense…

…Interest rate cuts have not historically meant a “slam dunk” recession call. Sometimes a cut is just a cut. The Y axis is S&P 500 performance rebased to 100 on the left scale and on the right scale it’s the date of the first interest rate cut of the cycle. The X axis is days after the first cut. You can plainly see that in many cases after the first cut we did not have a recession (the blue lines). There are even some instances where we did have a recession (red lines) but stock market performance did not go negative from the time of the first cut.

Which means the range of outcomes after the initial cut is all over the place. Crafting a narrative for what will happen to either the stock market or the economy (or both) as a result of the initial interest rate cut is an exercise in telling fairy tales.
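
As a quick aside from us: "rebased to 100" simply means each post-cut price series is indexed to its level on the day of the first cut, so different cycles can be compared on one chart. Here's a minimal sketch of the operation, with made-up prices.

```python
# A minimal sketch from us, not from the article: rebasing a price series to
# 100 on the day of the first rate cut, with made-up prices.
import numpy as np

def rebase_to_100(prices: np.ndarray, cut_index: int) -> np.ndarray:
    # Index every price from the cut date onwards to the cut-date price.
    return 100.0 * prices[cut_index:] / prices[cut_index]

prices = np.array([4900.0, 4950.0, 4875.0, 5010.0, 5100.0])
print(rebase_to_100(prices, cut_index=1))  # [100.0, ~98.5, ~101.2, ~103.0]
```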

5. AFC on the Road – Turkmenistan – Asia Frontier Capital

We decided to visit Turkmenistan in May 2024 after the third AFC Uzbekistan Fund Tour. Turkmenistan borders Uzbekistan to the west and happens to be one of the least visited countries in the world, with what is purported to be one of the ten hardest visas in the world to obtain…

…Upon receiving the invitation letter for our visa from the tour agency we used in Turkmenistan, we went to the Turkmen embassy in Tashkent. Warned of how chaotic the embassy is and how long it could take, along with a customary light interrogation, we were prepared to be patient. However, our interaction at the embassy was the polar opposite.

We provided our invitation letter and visa form along with our passports and the gentleman on the other side of the glass said to wait five minutes. Not being our first time dealing with a government agency in this part of the world, “5 minutes” often means 30 minutes or one hour. However, after approximately 5 minutes we were called and given our passports with our shiny green Turkmen visas pasted inside…

…The day after our May 2024 AFC Uzbekistan Fund Tour, we took the evening Afrosiyob (fast train) which takes four hours from Tashkent to Bukhara, arriving around 23:00. We took in the sights of the ancient city around midnight. For anyone going to Uzbekistan, Bukhara is a must see, much more so than Samarkand, especially as the old city is lit up at night.

The following morning at 06:30 we were picked up by a taxi for the two-hour drive to the Uzbek-Turkmen border, where we exited the taxi and continued on foot. The border was easy to cross on the Uzbek side, taking five minutes as there was only us and a group of four Chinese tourists. We crossed no-man's land in a minivan to the Turkmen side, where we took a Covid-19 PCR test (just a money-making opportunity) which cost USD 33 each. Then we proceeded to the Turkmen immigration building via another, this time Soviet, minivan (nicknamed a "bukhanka" as it is shaped like the Soviet loaf of bread of the same name), where we met our Turkmen tour guide for the next 4 days (foreigners cannot freely travel in Turkmenistan, save for a 72-hour transit visa) and completed our customs declaration forms (which were not in English). The officers then took our fingerprints and checked each luggage item thoroughly, and we finally proceeded on another bukhanka to the border exit. There, after a final confirmation from a border guard that we had our visas stamped, we entered the parking lot, surrounded by the sprawling Karakum desert (which covers 80% of Turkmenistan).

We then took a twenty-minute drive to the nearby city of Türkmenabat, formerly Novy Chardzhou, the second largest city in the country, hosting a population of ~250,000, for a quick lunch before a back-breaking four-hour drive in our modern Japanese 4-wheel-drive SUV to the ancient city of Merv, on one of several roads that resembled the surface of the moon (probably a similar experience to riding in the back of a dump truck full of rocks). On the drive, we passed a handful of wandering camels, some large petrochemical facilities (Turkmenistan hosts the world's fourth largest natural gas reserves behind Russia, Iran, and Qatar), and hundreds of trucks with either Iranian, Turkish, or local number plates. We suspected that all the Iranian and Turkish trucks were in transit to Uzbekistan.

About 2 hours into the journey, a brand new, nicely paved 4-lane highway (resembling a German Autobahn) appeared parallel to our "tank track" road, with a few trucks on it from time to time. After a short while, we innocently asked our tour guide why we couldn't use it too, and his answer was "it costs money". To our surprise, after a few minutes our driver drove off the "tank tracks" and followed another SUV which led us to the Autobahn. For about 30 minutes we were able to drive at about 120 km/h (instead of the maximum 50 km/h on the "tank tracks") and realized that this road was actually still closed, as construction work was taking place from time to time. Finally, we had to exit the Autobahn since a bridge was still under construction, and a dirt track led us back to the normal road. However, before entering the normal road we had to pass by a guard (obviously a construction worker), and our driver handed him the equivalent of 50 US cents as an "informal toll"…

…The former President of Turkmenistan, Gurbanguly Berdimuhamedov, is famous for his obsession with Guinness World Records. So it is only natural that at Ashgabat International Airport we encountered our first such world record, that of the world’s largest bird-shaped (seagull) building (according to Guinness World Records) with a wingspan of 364 meters.

The passenger terminal is also host to the world’s largest carpet, at 705 square meters. Opened in 2016, the airport is as modern as anything you see in Istanbul or Hong Kong. As we departed the airport, we passed by the world’s biggest fountain complex and thereafter we stopped to take a photo; our first glimpse of the ostentatious capital. We then drove to the Sports Hotel which is part of a massive complex built for the “2017 Asian Indoor and Martial Arts Games”, where the stadium, clearly visible from our hotel rooms, showcased the world’s largest statue of a horse…

…Only a few days before travelling to Turkmenistan, our broker in Uzbekistan casually told us during a dinner that the country "seems to have had" a stock exchange, but its website (https://www.agb.com.tm/en/) had not worked for the last 2 years and emails he sent to them were never answered, so he was not sure if the stock exchange was still operating. Of course, we were very surprised when we found the exchange's website on Google and saw that it was operating again and had been updated (even in English) with new information and price quotations. The next day we wrote an official email to the CEO of the Ashgabat Stock Exchange, but as of the day of publishing this travel report we have never received a reply – what do you expect? Naturally, we asked our tour guide if we could visit the stock exchange and try to arrange a meeting, which of course was denied since, we were told, "you are travelling on a tourist visa and not with a business visa"…

…One of the most fascinating things about Turkmenistan is the country's exchange rate.

The official exchange rate is 3.5 manats to 1 USD. However, the black-market rate is 19.5 manats to the USD. If you order something in your hotel and charge it to your room, say a coffee for 40 manats, you will be billed at the official rate leading it to cost USD 11.42. However, if you pay cash, that coffee’s price collapses all the way down to a more normal USD 2.05…

…What is typical in many countries is a difference in pricing for hotels between locals and foreigners. Our hotel, the Sports Hotel, costs approximately USD 85 per person per night. However, for a local, a suite costs 170 manats, or USD 8.71 at the black-market rate. And no, that is not a typo!
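
As a quick aside from us: the dual-rate arithmetic is straightforward to check (the article appears to truncate rather than round its cents). Here's a quick sketch using the rates quoted above.

```python
# A quick check from us of the dual exchange-rate arithmetic quoted above.
OFFICIAL_RATE = 3.5       # manats per USD
BLACK_MARKET_RATE = 19.5  # manats per USD

def to_usd(manats: float, rate: float) -> float:
    return manats / rate

print(f"40-manat coffee at official rate:  USD {to_usd(40, OFFICIAL_RATE):.2f}")      # 11.43
print(f"40-manat coffee at black market:   USD {to_usd(40, BLACK_MARKET_RATE):.2f}")  # 2.05
print(f"170-manat suite at black market:   USD {to_usd(170, BLACK_MARKET_RATE):.2f}") # 8.72
```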

Before returning to the hotel, we visited the modern shopping mall opposite our hotel in order to stock up on food and alcohol in an upscale supermarket. The shopping mall was full of local shops – and no international brands with the exception of LC Waikiki.

In the supermarket most of the goods were from either local, Iranian or Turkish companies. There were only a few international brands, but the big U.S. brands and European brands were almost all missing – just a few infamous German brands (no Ricola or Lindt chocolate for Thomas)…

…As we drove out of the ghost town that is Ashgabat, we crossed a bridge into a neighborhood with traditional homes that look similar to what you see in the rest of Central Asia, where it appeared the majority of Ashgabat's population (about 1 million) actually lives. There was traffic, the bus stops and buses were full, and some of the houses were very beautiful, while none of the construction was white marble!

As we drove further along the highway, it became increasingly obvious that we were moving further from the stage the President had set, for the infrastructure grew worse and worse until we were again driving on roads that resembled the moon (little did we know how much worse the road would get).


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Meta Platforms and Microsoft. Holdings are subject to change at any time.