The way I see it, artificial intelligence (or AI) really leapt into the zeitgeist in late 2022 or early 2023 with the public introduction of DALL-E 2 and ChatGPT. Both are software products from OpenAI that use AI to generate art and writing, respectively, often at astounding quality. Since then, developments in AI have progressed at a breathtaking pace.
Meanwhile, the latest earnings season for the US stock market – for the fourth quarter of 2023 – is coming to its tail-end. I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series. For the older commentary:
With that, here is the latest commentary, in no particular order:
Airbnb (NASDAQ: ABNB)
Airbnb’s management believes that AI will allow the company to develop the most innovative and personalised AI interfaces in the world, and the company recently acquired GamePlanner AI to do so; Airbnb’s management thinks that popular AI services today, such as ChatGPT, are underutilising the foundational models that power them; GamePlanner AI was led by the co-founder and original developer of Apple’s Siri assistant
There is a new platform shift with AI, and it will allow us to do things we never could have imagined. While we’ve been using AI across our service for years, we believe we can become a leader in developing some of the most innovative and personalized AI interfaces in the world. In November, we accelerated our efforts with the acquisition of GamePlanner AI, a stealth AI company led by the co-founder and original developer of Siri. With these critical pieces in place, we’re now ready to expand beyond our core business. Now this will be a multiyear journey, and we will share more with you towards the end of this year…
…If you were to open, say, ChatGPT or Google, though the models are very powerful, the interface is really not an AI interface. It’s the same interface as the 2000s, in a sense, the 2010s. It’s a typical classical web interface. So we feel like the models, in a sense, are probably underutilized…
Airbnb’s management does not want to build foundational large language models – instead, they want to focus on the application layer
One way to think about AI is, let’s use a real-world metaphor. I mentioned we’re building a city. And in that city, we have infrastructure, like roads and bridges. And then on top of those roads and bridges, we have applications like cars. So Airbnb is not an infrastructure company. Infrastructure would be a large language model or, obviously, GPUs. So we’re not going to be investing in infrastructure. So we’re not going to be building a large language model. We’ll be relying on, obviously, OpenAI. Google makes — or create a model, Meta creates models. So those are really infrastructure. They’re really developing infrastructure. But where we can excel is on the application layer. And I believe that we can build one of the leading and most innovative AI interfaces ever created.
Airbnb’s management believes that the advent of generative AI represents a platform shift and it opens the probability of Airbnb becoming a cross-vertical company
Here’s another way of saying it. Take your phone and look at all the icons on your phone. Most of those apps have not fundamentally changed since the advent of Generative AI. So what I think AI represents is the ultimate platform shift. We had the internet. We had mobile. Airbnb really rose during the rise of mobile. And the thing about a platform shift, as you know, there is also a shift in power. There’s a shift of behavior. And so I think this is a 0-0 ball game, where Airbnb, we have a platform that was built for 1 vertical short-term space. And I think with AI — Generative AI and developing a leading AI interface to provide an experience that’s so much more personalized than anything you’ve ever seen before.
Imagine an app that you feel like it knows you, it’s like the ultimate Concierge, an interface that is adaptive and evolving and changing in real-time, unlike no interface you’ve ever seen before. That would allow us to go from a single vertical company to a cross-vertical company. Because one of the things that we’ve noticed is the largest tech companies aren’t a single vertical. And we studied Amazon in the late ’90s, early 2000s, when they went from books to everything, or Apple when they launched the App Store. And these really large technology companies are horizontal platforms. And I think with AI and the work we’re doing around AI interfaces, I think that’s what you should expect of us.
Alphabet (NASDAQ: GOOG)
Alphabet’s Google Cloud segment saw accelerated growth in 2023 Q4 from generative AI
Cloud, which crossed $9 billion in revenues this quarter and saw accelerated growth driven by our GenAI and product leadership.
Alphabet closed 2023 by launching Gemini, a foundational AI model, which has state-of-the-art capabilities; Gemini Ultra is coming soon
We closed the year by launching the Gemini era, a new industry-leading series of models that will fuel the next generation of advances. Gemini is the first realization of the vision we had when we formed Google DeepMind, bringing together our 2 world-class research teams. It’s engineered to understand and combine text, images, audio, video and code in a natively multimodal way, and it can run on everything from mobile devices to data centers. Gemini gives us a great foundation. It’s already demonstrating state-of-the-art capabilities, and it’s only going to get better. Gemini Ultra is coming soon. The team is already working on the next versions and bringing it to our products.
Alphabet is already experimenting with Gemini in Google Search; Search Generative Experience (SGE) saw its latency drop by 40% with Gemini
We are already experimenting with Gemini in Search, where it’s making our Search Generative Experience, or SGE, faster for users. We have seen a 40% reduction in latency in English in the U.S.
Alphabet’s management thinks that SGE helps Google Search (1) answer new types of questions, (2) answer complex questions, and (3) surface more links; management believes that digital advertising will continue to play an important role in SGE; management has found that users find the ads placed above or below an AI overview of searches to be helpful; management knows what needs to be done to incorporate AI into the future experience of Google Search and they see AI assistants or agents as being an important component of Search in the future
By applying generative AI to Search, we are able to serve a wider range of information needs and answer new types of questions, including those that benefit from multiple perspectives. People are finding it particularly useful for more complex questions like comparisons or longer queries. It’s also helpful in areas where people are looking for deeper understanding, such as education or even gift ideas. We are improving satisfaction, including answers for more conversational and intricate queries. As I mentioned earlier, we are surfacing more links with SGE and linking to a wider range of sources on the results page, and we’ll continue to prioritize approaches that add value for our users and send valuable traffic to publishers…
…As we shared last quarter, Ads will continue to play an important role in the new search experience, and we’ll continue to experiment with new formats native to SGE. SGE is creating new opportunities for us to improve commercial journeys for people by showing relevant ads alongside search results. We’ve also found that people are finding ads either above or below the AI-powered overview helpful as they provide useful options for people to take action and connect with businesses…
…Overall, one of the things I think people underestimate about Search is the breadth of Search, the amount of queries we see constantly on a new day, which we haven’t seen before. And so the trick here is to deliver that high-quality experience across the breadth of what we see in Search. And over time, we think Assistant will be very complementary. And we will again use generative AI there, particularly with our most advanced models in Bard and allows us to act more like an agent over time, if I were to think about the future and maybe go beyond answers and follow through for users even more. So that is the — directionally, what the opportunity set is. Obviously, a lot of execution ahead. But it’s an area where I think we have a deep sense of what to do.
Alphabet’s latest Pixel 8 phones have an AI-powered feature that lets users search what they see on their phones without switching apps; the Pixel 8 uses Gemini Nano for AI features
Circle to Search lets you search what you see on Android phones with a simple gesture without switching apps. It’s available starting this week on Pixel 8 and Pixel 8 Pro and the new Samsung Galaxy S24 Series…
…Pixel 8, our AI-first phone, was awarded Phone of the Year by numerous outlets. It now uses Gemini Nano with features like Magic Compose for Google Messages and more to come.
Alphabet’s management is seeing a lot of interest from advertisers in Alphabet’s AI advertising solutions; the solutions include (1) the Automatically Created Assets (ACA) feature that helps businesses build better ads and (2) conversational experiences – currently in beta testing – that have helped SMBs be 42% more likely to publish ads with good ad-strength
We are also seeing a lot of interest in our AI-powered solutions for advertisers. That includes our new conversational experience that uses Gemini to accelerate the creation of Search campaigns…
…As we look ahead, we’re also starting to put generative AI in the hands of more and more businesses to help them build better campaigns and even better performing ads. Automatically created assets help advertisers show more relevant search ads by creating tailored headlines and descriptions based on each ad’s context. Adoption was up with strong feedback in Q4. In addition to now being available in 8 languages, more advanced GenAI-powered capabilities are coming to ACA…
…And then last week’s big news was that Gemini will power new conversational experience in Google Ads. This is open and beta to U.S. and U.K. advertisers. Early tests show advertisers are building higher-quality search campaigns with less effort, especially SMBs who are 42% more likely to publish a campaign with good or excellent ad strength.
Alphabet’s Google Cloud offers AI Hypercomputer (a supercomputing architecture for AI), which is used by high-profile AI startups such as Anthropic and Mistral AI
Google Cloud offers our AI Hypercomputer, a groundbreaking supercomputing architecture that combines our powerful TPUs and GPUs, AI software and multi-slice and multi-host technology to provide performance and cost advantages for training and serving models. Customers like Anthropic, Character.AI, Essential AI and Mistral AI are building and serving models on it.
Vertex AI, which is within Google Cloud, enables users to customise and deploy more than 130 generative AI models; Vertex AI’s API (application programming interface) requests jumped nearly six times from the first half of 2023 to the second half; Samsung is using Vertex AI to provide GenAI features in its Galaxy S24 smartphones, while companies such as Shutterstock and Victoria’s Secret are also using Vertex AI
For developers building GenAI applications, we offer Vertex AI, a comprehensive enterprise AI platform. It helps customers like Deutsche Telekom and Moody’s discover, customize, augment and deploy over 130 GenAI models, including PaLM, MedPaLM, Sec-PaLM and Gemini as well as popular open source and partner models. Vertex AI has seen strong adoption with the API request increasing nearly 6x from H1 to H2 last year. Using Vertex AI, Samsung recently announced its Galaxy S24 Series smartphone with Gemini and Imagen 2, our advanced text-to-image model. Shutterstock has added Imagen 2 to their AI image generator, enabling users to turn simple text prompts into unique visuals. And Victoria’s Secret & Co. will look to personalize and improve the customer experience with Gemini, Vertex AI, Search and Conversations.
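For a sense of what this application-layer developer experience looks like, here is a minimal sketch using the Vertex AI Python SDK; the project ID and prompt are hypothetical placeholders, and the exact module namespace has shifted across SDK releases:

```python
# Minimal sketch: calling one of Vertex AI's hosted foundation models
# via the google-cloud-aiplatform Python SDK. Project, location and
# prompt below are hypothetical placeholders.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-gcp-project", location="us-central1")

model = GenerativeModel("gemini-pro")  # one of the 130+ models Vertex AI exposes
response = model.generate_content("Suggest a gift for a crossword genius.")
print(response.text)
```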
Duet AI, Alphabet’s AI agents for its Google Workspace and Google Cloud Platform (GCP) services, now has more than 1 million testers, and will incorporate Gemini soon; Duet AI for Developers is the only generative AI offering that supports the entire development and operations lifecycle for software development; large companies such as Wayfair, GE Appliances, and Commerzbank are already using Duet AI for Developers
Customers are increasingly choosing Duet AI, our packaged AI agents for Google Workspace and Google Cloud Platform, to boost productivity and improve their operations. Since its launch, thousands of companies and more than 1 million trusted testers have used Duet AI. It will incorporate Gemini soon. In Workspace, Duet AI is helping employees benefit from improved productivity and creativity at thousands of paying customers around the world, including Singapore Post, Uber and Woolworths. In Google Cloud Platform, Duet AI assists software developers and cybersecurity analysts. Duet AI for Developers is the only GenAI offering to support the complete development and operations life cycle, fine-tuned with the customer’s own core purpose and policies. It’s helping Wayfair, GE Appliances and Commerzbank write better software, faster with AI code completion, code generation and chat support. With Duet AI and Security Operations, we are helping cybersecurity teams at Fiserv, Spotify and Pfizer.
Alphabet’s management believes that the company has state-of-the-art compute infrastructure and that it will be a major differentiator in the company’s AI-related work; management wants Alphabet to continue investing in its infrastructure
Search, YouTube and Cloud are supported by our state-of-the-art compute infrastructure. This infrastructure is also key to realizing our big AI ambitions. It’s a major differentiator for us. We continue to invest responsibly in our data centers and compute to support this new wave of growth in AI-powered services for us and for our customers.
Alphabet’s AI-powered ad solutions are helping retailers with their omnichannel growth; a large big-box retailer saw a 60%+ increase in omnichannel ROAS (return on advertising spend) and a 22%+ increase in store traffic
Our proven AI-powered ad solutions were also a win for retailers looking to accelerate omni growth and capture holiday demand. Quick examples include a large U.S. big-box retailer who drove a 60%-plus increase in omni ROAS and a 22%-plus increase in store traffic using Performance Max during Cyber Five; and a well-known global fashion brand, who drove a 15%-plus higher omnichannel conversion rate versus regular shopping traffic by showcasing its store pickup offering across top markets through pickup later on shopping ads.
Alphabet’s management is using AI to make it easier for content creators to create content for YouTube (for example, creators can easily create backgrounds or translate their videos); management also believes the AI tools built for creators can be ported over to the advertising business to help advertisers
First, creation, which increasingly takes place on mobile devices. We’ve invested in a full suite of tools, including our new YouTube Create app for Shorts, to help people make everything from 15-second Shorts to 15-minute videos to 15-hour live streams with a production studio in the palm of their hands. GenAI is supercharging these capabilities. Anyone with a phone can swap in a new backdrop, remove background extras, translate their video into dozens of languages, all without a big studio budget. We’re excited about our first products in this area from Dream Screen for AI-generated backgrounds to Aloud for AI-powered dubbing…
…You are obviously aware of the Made On YouTube announcement where we introduced a whole lot of new complementary creativity features on YouTube, including Dream Screen, for example, and a lot of other really interesting tools and thoughts. You can obviously imagine that we can take this more actively to the advertising world already. As you know, AI already continues to power a lot of our video ad solutions and measurement capabilities. It’s part of video-rich campaigns. Multi-format ads are — actually, there is a generative creator music that actually makes it easier for creators to design the perfect soundtrack already. And as I said earlier, AI will unlock a new world of creativity. And you can see how this will — if you just look at where models are heading, where multimodal models are heading, where the generation capabilities of those models are heading, you can absolutely see how this will impact and positively impact and simplify the flow for creators, similar to what you see already emerging in some of our core products like ACA on the Search side.
Alphabet’s management expects the company’s capital expenditure in 2024 to be notably higher than in 2023 (reported CapEx in 2023 Q4 alone was US$11 billion), driven by investments in AI infrastructure
With respect to CapEx, our reported CapEx in the fourth quarter was $11 billion, driven overwhelmingly by investment in our technical infrastructure with the largest component for servers followed by data centers. The step-up in CapEx in Q4 reflects our outlook for the extraordinary applications of AI to deliver for users, advertisers, developers, cloud enterprise customers and governments globally and the long-term growth opportunities that offers. In 2024, we expect investment in CapEx will be notably larger than in 2023.
Alphabet’s management is restructuring the company’s workforce not because AI is taking away jobs, but because management believes that AI solutions can deliver significant ROI (return on investment) and it’s important for Alphabet to have an organisational structure that can better build these solutions
But I also want to be clear, when we restructure, there’s always an opportunity to be more efficient and smarter in how we service and grow our customers. We’re not restructuring because AI is taking away roles that’s important here. But we see significant opportunities here with our AI-powered solution to actually deliver incredible ROI at scale, and that’s why we’re doing some of those adjustments.
Alphabet’s management thinks that Search is not just about generative AI
Obviously, generative AI is a new tool in the arsenal. But there’s a lot more that goes into Search: the breadth, the depth, the diversity across verticals, the ability to follow through, getting actually access to rich, diverse sources of content on the web and putting it all together in a compelling way.
Alphabet’s management believes that AI features can help level the playing field for SMBs in the creation of effective advertising (when competing with large companies) and they will continue to invest in that area
Our focus has always been here on investing in solutions that really help level the playing field, and you mentioned several of those. So actually, SMBs can compete with bigger brands and more sophisticated advertisers. And so the feedback we’re always getting is they need easy solutions that could drive value quickly, and several of the AI-powered solutions that you’re mentioning are actually making the workflow and the whole on-ramp and the bidded targeting creative and so on, you mentioned that is so much easier for SMBs. So we’re very satisfied with what we’re seeing here. We will continue to invest.
Amazon (NASDAQ: AMZN)
Amazon’s cloud computing service, AWS, saw an acceleration in revenue growth in 2023 Q4 and management believes this was driven partly by AI
If you look back at the revenue growth, it accelerated to 13.2% in Q4, as we just mentioned. That was an acceleration. We expect accelerating trends to continue into 2024. We’re excited about the resumption, I guess, of migrations that companies may have put on hold during 2023 in some cases and interest in our generative AI products, like Bedrock and Q, as Andy was describing
Amazon’s management reminded the audience that their framework for thinking about generative AI consists of three layers – the first is the compute layer, the second is LLMs as a service, the third is the applications that run on top of LLMs – and Amazon is investing heavily in all three
You may remember that we’ve explained our vision of three distinct layers in the gen AI stack, each of which is gigantic and in each of which we’re deeply investing.
At the bottom layer where customers who are building their own models run training and inference on compute where the chip is the key component in that compute…
…In the middle layer where companies seek to leverage an existing large language model, customize it with their own data and leverage AWS’ security and other features, all as a managed service…
…At the top layer of the stack is the application layer.
Amazon’s management is seeing revenues accelerate rapidly for AWS across all three layers of the generative AI stack and AWS is receiving significant interest from customers wanting to run AI workloads
Still relatively early days, but the revenues are accelerating rapidly across all three layers, and our approach to democratizing AI is resonating well with our customers. We have seen significant interest from our customers wanting to run generative AI applications and build large language models and foundation models, all with the privacy, reliability and security they have grown accustomed to with AWS
Amazon’s management is seeing that enterprises are still figuring out which layer of the generative AI stack they want to operate in; management thinks that most enterprises will operate in at least two layers, with the technically capable ones operating in all three
When we talk to customers, particularly at enterprises as they’re thinking about generative AI, many are still thinking through at which layers of those three layers of the stack I laid out that they want to operate in. And we predict that most companies will operate in at least two of them. But I also think, even though it may not be the case early on, I think many of the technically capable companies will operate at all three. They will build their own models, they will leverage existing models from us, and then they’re going to build the apps.
At the first layer of the generative AI stack, AWS is offering the most expansive collection of compute instances with NVIDIA chips; AWS has built its own Trainium chips for training and Inferentia chips for inference; a new version of Trainium – Trainium 2 – was recently announced and it is 4x faster, and has 3x more memory, than the first generation of Trainium; large companies and prominent AI startups are using AWS’s AI chips
At the bottom layer where customers who are building their own models run training and inference on compute where the chip is the key component in that compute, we offer the most expansive collection of compute instances with NVIDIA chips. We also have customers who like us to push the price performance envelope on AI chips just as we have with Graviton for generalized CPU chips, which are 40% more price-performant than other x86 alternatives. And as a result, we’ve built custom AI training chips named Trainium and inference chips named Inferentia. In re:Invent, we announced Trainium2, which offers 4x faster training performance and 3x more memory capacity versus the first generation of Trainium, enabling advantageous price performance versus alternatives. We already have several customers using our AI chips, including Anthropic, Airbnb, Hugging Face, Qualtrics, Ricoh and Snap.
At the middle layer of the generative AI stack, AWS has launched Bedrock, which offers LLMs-as-a-service; Bedrock is off to a very strong start with thousands of customers already using it just a few months after launch; Bedrock has added new models, including those from prominent AI startups, Meta’s Llama2, and Amazon’s own Titan family; customers are excited over Bedrock because building production-quality generative AI applications requires multiple iterations of models, and the use of many different models, and this is where Bedrock excels
In the middle layer where companies seek to leverage an existing large language model, customize it with their own data and leverage AWS’ security and other features, all as a managed service, we’ve launched Bedrock, which is off to a very strong start with many thousands of customers using the service after just a few months… We also added new models from Anthropic, Cohere, Meta with Llama 2, Stability AI and our own Amazon Titan family of LLMs. What customers have learned at this early stage of gen AI is that there’s meaningful iteration required in building a production gen AI application with the requisite enterprise quality at the cost and latency needed. Customers don’t want only one model. They want different models for different types of applications and different-sized models for different applications. Customers want a service that makes this experimenting and iterating simple. And this is what Bedrock does, which is why so many customers are excited about it.
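Bedrock’s pitch that customers can iterate across models shows up in the API shape: with the boto3 bedrock-runtime client, swapping models is largely a matter of changing the modelId, though each model family expects its own request schema. A minimal sketch, using the Claude v2 text-completion schema as an illustrative assumption:

```python
# Minimal sketch: invoking a model on Amazon Bedrock via boto3.
# The modelId and request body follow the Anthropic Claude v2 text-
# completion schema; other model families (Titan, Llama 2, etc.)
# expect their own schemas, but the invoke_model call is the same.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": "\n\nHuman: Summarize our Q4 results in one line.\n\nAssistant:",
    "max_tokens_to_sample": 200,
})
response = client.invoke_model(modelId="anthropic.claude-v2", body=body)
print(json.loads(response["body"].read())["completion"])
```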
At the top layer of the generative AI stack, AWS recently launched Amazon Q, a coding companion; management believes that a coding companion is one of the very best early generative AI applications; Amazon Q is linked with more than 40 popular data-connectors so that customers can easily query their data repositories; Amazon Q has generated strong interest from developers
At the top layer of the stack is the application layer. One of the very best early gen AI applications is a coding companion. At re:Invent, we launched Amazon Q, which is an expert on AWS, writes code, debugs code, tests code, does translations like moving from an old version of Java to a new one and can also query customers’ various data repositories like intranets, wikis or, from over 40 different popular connectors, data in Salesforce, Amazon S3, ServiceNow, Slack, Atlassian or Zendesk, among others. And it answers questions, summarizes data, carries on a coherent conversation and takes action. It was designed with security and privacy in mind from the start, making it easier for organizations to use generative AI safely. Q is the most capable work assistant and another service that customers are very excited about…
…When enterprises are looking at how they might best make their developers more productive, they’re looking at what’s the array of capabilities in these different coding companion options they have. And so we’re spending a lot of time. Our enterprises are quite excited about it. It created a meaningful stir in re:Invent. And what you see typically is that these companies experiment with different options they have and they make decisions for their employee base, and we’re seeing very good momentum there.
Amazon’s management is seeing that security over data is very important to customers when they are using AI and this is an important differentiator for AWS because its AI services inherit the same security features as AWS – and AWS’s capabilities and track record in security are good
By the way, don’t underestimate the point about Bedrock and Q inheriting the same security and access control as customers get with AWS. Security is a big deal, an important differentiator between cloud providers. The data in these models is some of the company’s most sensitive and critical assets. With AWS’ advantaged security capabilities and track record relative to other providers, we continue to see momentum around customers wanting to do their long-term gen AI work with AWS.
Amazon has launched some generative AI applications across its businesses and is building more; one of the applications launched is Rufus, a shopping assistant, which allows consumers to receive thoughtful responses to detailed shopping questions; other generative AI applications being built and launched by Amazon include a customer-review-summary app, an app for customers to predict how apparel will fit them, an app for inventory forecasts for each fulfilment centre, and an app to generate copy for ads based on a picture, or generate pictures based on copy; Rufus is seamlessly integrated into Amazon and management thinks Rufus could meaningfully change what discovery looks like for shoppers using Amazon
We’re building dozens of gen AI apps across Amazon’s businesses, several of which have launched and others of which are in development. This morning, we launched Rufus, an expert shopping assistant trained on our product and customer data that represents a significant customer experience improvement for discovery. Rufus lets customers ask shopping journey questions, like what is the best golf ball to use for better spin control or which are the best cold weather rain jackets, and get thoughtful explanations for what matters and recommendations on products. You can carry on a conversation with Rufus on other related or unrelated questions and retains context coherently. You can sift through our rich product pages by asking Rufus questions on any product features and it will return answers quickly…
…. So if you just look at some of our consumer businesses, on the retail side, we built a generative AI application that allowed customers to look at summary of customer review, so that they didn’t have to read hundreds and sometimes thousands of reviews to get a sense for what people like or dislike about a product. We launched a generative AI application that allows customers to quickly be able to predict what kind of fit they’d have for different apparel items. We built a generative AI application in our fulfillment centers that forecasts how much inventory we need in each particular fulfillment center…Our advertising business is building capabilities where people can submit a picture and an ad copy is written and the other way around.
… All those questions you can plug in and get really good answers. And then it’s seamlessly integrated in the Amazon experience that customers are used to and love to be able to take action. So I think that that’s just the next iteration. I think it’s going to meaningfully change what discovery looks like for our shopping experience and for our customers.
Amazon’s management believes generative AI will drive tens of billions in revenue for the company over the next few years
Gen AI is and will continue to be an area of pervasive focus and investment across Amazon primarily because there are a few initiatives, if any, that give us the chance to reinvent so many of our customer experiences and processes, and we believe it will ultimately drive tens of billions of dollars of revenue for Amazon over the next several years.
Amazon’s management expects the company’s full-year capital expenditure for 2024 to be higher than in 2023, driven by increased investments in infrastructure for AWS and AI
We define our capital investments as a combination of CapEx plus equipment finance leases. In 2023, full year CapEx was $48.4 billion, which was down $10.2 billion year-over-year, primarily driven by lower spend on fulfillment and transportation. As we look forward to 2024, we anticipate CapEx to increase year-over-year primarily driven by increased infrastructure CapEx to support growth of our AWS business, including additional investments in generative AI and large language models.
AWS’s generative AI revenue is pretty big in absolute numbers, but small in the context of AWS already being a $100 billion annual-revenue-run-rate business
If you look at the gen AI revenue we have, in absolute numbers, it’s a pretty big number. But in the scheme of a $100 billion annual revenue run rate business, it’s still relatively small, much smaller than what it will be in the future, where we really believe we’re going to drive tens of billions of dollars of revenue over the next several years.
Apple (NASDAQ: AAPL)
Many of the features in Apple’s latest product, the Vision Pro virtual reality headset, are powered by AI
There’s an incredible amount of technology that’s packed into the product. There’s 5,000 patents in the product. And it’s, of course, built on many innovations that Apple has spent multiple years on, from silicon to displays and significant AI and machine learning, all the hand tracking, the room mapping, all of this stuff is driven by AI.
Apple has been spending a lot of time and effort on AI and management will share details later in 2024
As we look ahead, we will continue to invest in these and other technologies that will shape the future. That includes artificial intelligence where we continue to spend a tremendous amount of time and effort, and we’re excited to share the details of our ongoing work in that space later this year…
…In terms of generative AI, which I would guess is your focus, we have a lot of work going on internally as I’ve alluded to before. Our MO, if you will, has always been to do work and then talk about work and not to get out in front of ourselves. And so we’re going to hold that to this as well. But we’ve got some things that we’re incredibly excited about that we’ll be talking about later this year.
Apple’s management thinks there is a huge opportunity for Apple with generative AI but will only share more details in the future
Let me just say that I think there is a huge opportunity for Apple with gen AI and AI and without getting into more details and getting out in front of myself.
Arista Networks (NYSE: ANET)
Arista Networks’ management believes that AI at scale needs Ethernet at scale because AI workloads cannot tolerate delays; management thinks that 400 and 800-gigabit Ethernet will become important for AI back-end GPU clusters
AI workloads are placing greater demands on Ethernet as they are both data- and compute-intensive across thousands of processors today. Basically, AI at scale needs Ethernet at scale. AI workloads cannot tolerate delays in the network because the job can only be completed after all flows are successfully delivered to the GPU clusters. All it takes is one culprit or worst-case link to throttle an entire AI workload…
…. We expect both 400 and 800-gigabit Ethernet will emerge as important pilots for AI back-end GPU clusters.
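The reasoning behind "the job can only be completed after all flows are successfully delivered" is that a synchronous training step is gated by its slowest flow, so job completion time is a maximum, not an average. A toy illustration, with made-up numbers:

```python
# Toy illustration: job completion time (JCT) for a synchronous AI
# training step is the max over all flow completion times, so a single
# slow or congested link throttles the whole job. Numbers are made up.
flow_times_ms = [10.2, 10.5, 9.8, 10.1, 48.0]  # one congested flow

average_ms = sum(flow_times_ms) / len(flow_times_ms)
jct_ms = max(flow_times_ms)  # the GPUs wait for the last flow to finish

print(f"average flow time: {average_ms:.1f} ms, but JCT: {jct_ms:.1f} ms")
```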
Arista Networks’ management is pushing the company and the Ultra Ethernet Consortium to improve Ethernet technology for AI workloads in three key ways; management believes that Ethernet is superior to InfiniBand for AI-related data networking because Ethernet provides flexible ordering of data transfer whereas InfiniBand is rigid
Three improvements are being pioneered by Arista and the founding members of the Ultra Ethernet Consortium to improve job completion time. Number one, packet spraying. AI network topology meets packet spraying to allow every flow to simultaneously access all paths to the destination. Arista is developing multiple forms of load balancing dynamically with our customers. Two is flexible ordering. Key to an AI job completion is the rapid and reliable bulk transfer with flexible ordering using Ethernet links to optimally balance AI-intensive operations, unlike the rigid ordering of InfiniBand. Arista is working closely with its leading vendors to achieve this. Finally, network congestion. In AI networks, there’s a common incast congestion problem whereby multiple uncoordinated senders can send traffic to the receiver simultaneously. Arista’s platforms are purpose-built and designed to avoid these kinds of hotspots, evenly spreading the load across multiple paths in a virtual output queuing (VoQ) lossless fabric.
Arista Networks’ management thinks the company can achieve AI revenue of at least $750 million in 2025
We are cautiously optimistic about achieving our AI revenue goal of at least $750 million in AI networking in 2025…
…. So our AI performance continues to track well for the $750 million revenue goal that we set last November at Analyst Day.
Arista Networks’ management sees the company becoming the gold-standard for AI data-networking
We have more than doubled our enterprise revenue in the last 3 years and we are becoming the gold standard for client-to-cloud-to-AI networking with 1 EOS and 1 CloudVision Foundation.
In the last 12 months, Arista Networks has participated in a large number of AI project bids, and in the last five projects that pitted Ethernet against InfiniBand, Arista Networks has won four of them; over the last 12 months, a lot has changed in terms of how InfiniBand was initially bundled into AI data centres; management believes that Ethernet will become the default standard for AI networking going forward
To give you some color on the last 3 months, I would say difficult to project anything in 3 months. But if I look at the last year, which maybe last 12 months is a better indication, we have participated in a large number of AI bids and when I say large, I should say they are large AI bids, but there are a small number of customers actually to be more clear. And in the last 4 out of 5, AI networking clusters we have participated on Ethernet versus InfiniBand, Arista has won all 4 of them for Ethernet, one of them still stays on InfiniBand. So these are very high-profile customers. We are pleased with this progress…
…The first real consultative approach from Arista is to provide our expertise on how to build a robust back-end AI network. And so the whole discussion of Ethernet versus InfiniBand becomes really important because as you may recall, a year ago, I told you we were outside looking in, everybody had an InfiniBand HPC cluster that was kind of getting bundled into AI. But a lot has changed in a year. And the popular product we are seeing right now in the back-end cluster for AI is the Arista 7800 AI spine, which in a single chassis with north of 500 terabit of capacity can give you a substantial number of ports, 400 or 800. So you can connect up to 1,000 GPUs just doing that. And that kind of data parallel scale-out can improve the training time dimensions, large LLMs, massive integration of training data. And of course, as we shared with you at the Analyst Day, we can expand that to a 2-tier AI leaf and spine with a 16-way ECMP to support close to 10,000 GPUs nonblocking. This is a lossless architecture for Ethernet. And then the overlay we will have on that with the Ultra Ethernet Consortium in terms of congestion controls, packet spraying and working with a suite of [ UC ] mix is what I think will make Ethernet the default standard for AI networking going forward.
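As a rough sanity check on the figures quoted above (a chassis with north of 500 terabits of capacity, 400G or 800G ports, roughly 1,000 GPUs per chassis and close to 10,000 in a two-tier design), the back-of-envelope arithmetic looks like this, assuming one back-end network port per GPU, which is a simplification:

```python
# Back-of-envelope port math using only the figures quoted above.
# Assumes one back-end network port per GPU (a simplification).
chassis_capacity_gbps = 500_000   # "north of 500 terabit" per chassis
for port_speed_gbps in (400, 800):
    ports = chassis_capacity_gbps // port_speed_gbps
    print(f"{port_speed_gbps}G ports per chassis: ~{ports}")
# At 400G that is ~1,250 ports, consistent with "up to 1,000 GPUs" once
# some capacity is held back; a 2-tier leaf-spine fabric with a 16-way
# spine then scales the design toward the ~10,000-GPU figure quoted.
```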
Arista Networks’ management believes that owners and operators of AI data centres would not want to work with white box data switches (non-branded and commoditised data switches) because data switches are mission critical in AI data centres, so users would prefer reliable and higher-quality data switches
I think white box is here to stay for a very long time if somebody just wants a throwaway commodity product, but how many people want throwaway commodity in the data center? They’re still mission-critical, and they’re even more mission-critical for AI. If I’m going to spend multimillion dollars on a GPU cluster, and then the last thing I’m going to do is put a toy network in, right? So to put this sort of in perspective, that we will continue to coexist with a white box. There will be use cases where Arista’s blue box or a stand-alone white box can run either SONiC or FBOSS but many times, the EOS software stack is really, really something they depend on for availability, analytics, automation, and there’s — you can get your network for 0 cost, but the cost of downtime is millions and millions of dollars.
Arista Networks is connecting more and more GPUs and management believes that the picture of how a standard AI data centre Ethernet network will look is starting to form; AI is still a small part of Arista Networks’ business but one that should grow over time
On the AI side, we continue to track well. I think we’re moving from what I call trials, which is connecting hundreds of GPUs to pilots, which is connecting thousands of GPUs this year, and then we expect larger production clusters. I think one of the questions that we will be asking ourselves and our customers is how these production clusters evolve. Is it going to be 400, 800 or a combination thereof? The role of Ultra Ethernet Consortium and standards and the ecosystem all coming together, very similar to how we had these discussions in 400 gig will also play a large part. But we’re feeling pretty good about the activity. And I think moving from trials to pilots this year will give us considerable confidence on next year’s number…
…AI is going to come. It is yet to come — certainly in 2023, as I’ve said to you many, many times, it was a very small part of our number, but it will gradually increase.
Arista Networks’ management is in close contact with the leading GPU vendors when designing networking solutions for AI data centres
Specific to our partnership, you can be assured that we’ll be working with the leading GPU vendors. And as you know, NVIDIA has 90% or 95% of the market. So Jensen and I are going to partner closely. It is vital to get a complete AI network design going. We will also be working with our partners in AMD and Intel so we will be the Switzerland of XPUs, whatever the GPU might be, and we look to supply the best network ever.
Arista Networks’ management believes that the company is very well-positioned for the initial growth spurts in AI networking
Today’s models are moving very rapidly, relying on a high bandwidth, predictable latency, the focus on application performance requires you to be sole sourced initially. And over time, I’m sure it’ll move to multiple sources, but I think Arista is very well positioned for the first innings of AI networking, just like we were for the cloud networking decade.
ASML (NASDAQ: ASML)
ASML’s management believes that 2025 will be a strong year for the company because of long-term trends in its favour (these include AI and digitalisation, improving customer inventory levels, and the scheduled opening of many semiconductor fabrication plants)
So essentially unchanged I would say in comparison to what we said last quarter. So if we start looking at 2025. As I mentioned before, we are looking at a year of significant growth and that is for a couple of reasons. First off, we think the secular trends in our industry are still very much intact. If you look at the developments around AI, if you look at the developments around electrification, around energy transition etcetera, they will need many, many semiconductors. So we believe the secular trends in the industry are still very, very strong. Secondly I think clearly by 2025 we should see our customers go through the up cycle. I mean the upward trend in the cycle. So that should be a positive. Thirdly, as we also mentioned last time it’s clear that many fab openings are scheduled that will require the intake of quite some tools in the 2025 time frame.
ASML’s management is seeing AI-related demand drive a positive inflection in the company’s order intake
And I think AI is now particularly something which could be on top of that because that’s clearly a technology transition. But we’ve already seen a very positive effect of that in our Q4 order intake…
…After a few soft quarters, the order intake for the quarter was very, very strong. Actually a record order intake at €9.2 billion. If you look at the composition of that, it was about 50/50 for Memory versus Logic. Around €5.6 billion out of the €9.2 was related to EUV, both Low NA and High NA.
ASML’s management is confident that AI will help to drive demand for the company’s EUV (extreme ultraviolet) lithography systems from the Memory-chips market in the near future
In ’23, our Memory shipments were lower than the 30% that you mentioned. But if you look at ’25, and we also take into account what I just said about AI and the need for EUV in the DDR5 and in the HBM era, then the 30% is a very safe path and could be on the conservative side.
ASML’s management thinks that the performance of memory chips is a bottleneck for AI-related workloads, and this is where EUV lithography is needed; management was also positively surprised at how important EUV was for the development of leading-edge memory chips for AI
I think there’s a bottleneck in the AI and making use of the full AI potential, DRAM is a bottleneck. The performance memory is a bottleneck. And there are solutions, but they need a heck of a lot more HBM and that’s EUV…
… And were we surprised? I must be — I say, yes, to some extent, we were surprised in the meetings we’ve had with customers and especially the Memory because we’re leading-edge Memory customers. We were surprised about the technology requirements of — for litho, EUV specifically and how it impacts how important it is for the rollout and the ramp of the memory solutions for AI. This is why we received more EUV orders than we anticipated because it was obvious in the detailed discussions and the reviews with our customers, that EUV is critical in that sense. And that was a bit of a surprise, that’s a positive surprise.
[Question] Sorry, was that a function of EUV layer count or perhaps where they’re repurposing equipment? And so now they’re realizing they need more footprint for EUV.
[Answer] No, it is layer count and imaging performance. And that’s what led to the surprise, the positive surprise, which indeed led to more orders.
ASML’s management sees the early shoots of recovery observed in the Memory chip market as being driven by both higher utilisation across the board, and by the AI-specific technology transition
I think it’s — what we’re seeing is, of course, the information coming off our tools that we see the utilization rates going up. That’s one. Clearly, there’s also an element of technology transition. That’s also clear. I think there’s a bottleneck in the AI and making use of the full AI potential, DRAM is a bottleneck. The performance memory is a bottleneck. And there are solutions, but they need a heck of a lot more HBM and that’s EUV. So it’s a bit of a mix. I mean, yes, you’ve gone through, I think, the bottom of this memory cycle with prices going up, utilizations increasing, and that combined with the technology transition driven by AI. That’s a bit what we see today. So it’s a combination of both, and I think that will continue.
ASML’s management is considering whether its planned capacity buildout for EUV lithography systems is too low, partly because of AI-driven demand for leading-edge chips
We have said our capacity buildout will be 90 EUV Low-NA systems, 20 High-NA whereby internally, we are looking at that number as a kind of a base number where we’re investigating whether that number should be higher. The question is whether that 90 is going to be enough. Now we have to realize, we are selling wafer capacity, which is not only a function of the number of units, but also a function of the productivity of those tools. Now we have a pretty aggressive road map for the productivity in terms of wafers per hour. So it’s a complex question that you’re asking. But actually, we need to look at this especially against the math that we’re seeing for litho requirements in the area of AI, whether it’s HBM or whether it is Logic, whether the number of units and the road map on productivity, which gives wafers because the combination is wafer capacity, whether that is sufficient.
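The point that ASML sells wafer capacity rather than just units reduces to a simple product of system count, throughput, and uptime. A sketch with purely hypothetical numbers to show why the productivity roadmap matters as much as the 90-unit build plan:

```python
# Wafer capacity = systems x throughput x uptime.
# All numbers below are hypothetical, purely to illustrate the trade-off.
def annual_wafer_capacity(systems, wafers_per_hour, uptime_hours=8_000):
    return systems * wafers_per_hour * uptime_hours

base = annual_wafer_capacity(systems=90, wafers_per_hour=160)
faster = annual_wafer_capacity(systems=90, wafers_per_hour=200)
print(f"base: {base/1e6:.0f}M wafers/yr, with productivity gain: {faster/1e6:.0f}M")
# A 25% throughput gain adds as much capacity as ~22 extra systems would,
# which is why the unit count alone does not settle the capacity question.
```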
Datadog (NASDAQ: DDOG)
Datadog’s management is seeing growing engagement in AI with a 75% sequential jump in the use of next-gen AI integrations
In observability, we now have more than 700 integrations allowing our customers to benefit from the latest AWS, Azure and GCP abilities as well as from the newly emerging AI stack. We continued to see increasing engagement there with the use of our next-gen AI integrations growing 75% sequentially in Q4.
Datadog’s management continues to add capabilities to Bits AI, the company’s natural language incident management copilot, and is improving the company’s LLM (large language model) observability capabilities
In the generative AI and LLM space, we continued to add capability to Bits AI, our natural language incident management copilot. And we are advancing LLM observability to help customers investigate where they can safely deploy and manage their models in production.
Currently, 3% of Datadog’s annualised recurring revenue (ARR) comes from next-gen AI-native customers (was 2.5% in 2023 Q3); management believes the AI opportunity will be far larger in the future as customers of every industry and size start incorporating AI in production; the AI-native customers are companies that Datadog’s management knows are substantially all based on AI
Today, about 3% of our ARR comes from next-gen AI native customers, but we believe the opportunity is far larger in the future as customers of every industry and every size start doing AI functionality in production…
…It’s hard for us to wrap our arms exactly around what is GenAI, what is not among our customer base and their workload. So the way we chose to do it is we looked at a smaller number of companies that we know are substantially all based on AI so these are companies like the modal providers and things like that. So 3% of ARR, which is up from what we had disclosed last time.
Microsoft disclosed that AI accounts for six percentage points of Azure’s growth; Datadog’s management sees AI-native companies accounting for substantially more than six percentage points of Datadog’s own Azure-related business
I know one number that everyone has been thinking about is one cloud, in particular, Microsoft, disclosed that 6% of their growth was attributable to AI. And we definitely see the benefits of that on our end, too. If I look at our Azure business in particular, there is substantially more than 6% that is attributable to AI native as part of our Azure business. So we see completely this trend is very true for us as well. It’s harder to tell with the other cloud providers because they don’t break those numbers up.
Datadog’s management continues to believe that digital transformation, cloud migration, and AI adoption are long-term growth drivers of Datadog’s business, and that Datadog is ideally positioned for these
We continue to believe digital transformation and cloud migration are long-term secular growth drivers of our business and critical motion for every company to deliver value and competitive advantage. We see AI adoption as an additional driver of investment and accelerator of technical innovation and cloud migration. And more than ever, we feel ideally positioned to achieve our goals and help customers of every size in every industry to transform, innovate and drive value through technology adoption.
Datadog experienced a big slowdown from its digitally native customers in the recent past, but management thinks that these customers could also be the first ones to fully leverage AI and thus reaccelerate earlier
We suddenly saw a big slowdown from the digital native over the past year. On the other hand, they might be the first ones to fully leverage AI and deploy it in production. So you might see some reacceleration earlier from some of them at least.
Datadog’s management sees the attach rates for observability going up for AI workloads versus traditional workloads
[Question] If you think about the very long term, would you think attach rates of observability will end up being higher or lower for these AI workloads versus traditional workloads?
[Answer] We see the attach rate going up. The reason for that is our framework for that is actually in terms of complexity. AI just adds more complexity. You create more things faster without understanding what they do. Meaning you need — you shift a lot of the value from building to running, managing, understanding, securing all of the other things that need to keep happening after that. So the shape of some of the products might change a little bit because the shape of the software that runs it changes a little bit, which is no different from what happened over the past 10, 15 years. But we think it’s going to drive more need for observability, more need for security products around that.
Datadog’s management is seeing AI-native companies using largely the same kind of Datadog products as everyone else, but the AI-native companies are building the models, so the tooling for understanding the models are not applicable for them
[Question] Are the product SKUs, these kind of GenAI companies are adopting, are they similar or are they different to the kind of other customer cohorts?
[Answer] Today, this is largely the same SKUs as everybody else. These are infrastructure, APM logs, profiling these kind of things that they are — or really the monitoring, these kind of things that these customers are using. It’s worth noting that they’re in a bit of a separate world because they’re largely the builders of the models. So all the tooling required to understand the models and — that’s less applicable to them. That’s more applicable to their own customers, which is also the rest of our customer base. And we see also where we see the bulk of the opportunity in the longer term, not in the handful of model providers that [ anybody ] is going to use.
Datadog has a much larger presence in inference AI workloads as compared to training AI workloads; Datadog’s management sees that the AI companies that are scaling the most on Azure are scaling on inference
There’s 2 parts to the AI workloads today. There’s training and there’s inference. The vast majority of the players are still training. There’s only a few that are scaling with inference. The ones that are scaling with inference are the ones that are driving our ARR because we are — we don’t — we’re not really present on the training side, but we’re very present on the inference side. And I think that also lines up with what you might see from some of the cloud providers, where a lot of the players or some of the players that are scaling the most are on Azure today on the inference side, whereas a lot of the other players still largely training on some of the other clouds.
Etsy (NASDAQ: ETSY)
Etsy’s management recently launched Gift Mode, a feature where a buyer types in a few details about a person and occasion, and AI technology matches the buyer with a gift; Gift Mode has more than 200 recipient personas, and has good early traction with nearly 6 million visits in its first 2 weeks
So what’s Gift Mode? It’s a whole new shopping experience where gifters simply enter a few quick details about the person they’re shopping for, and we use the power of artificial intelligence and machine learning to match them with unique gifts from Etsy sellers. Creating a separate experience helps us know immediately if you’re shopping for yourself or someone else, hugely beneficial information to help our search engines solve for your needs. Within Gift Mode, we’ve identified more than 200 recipient personas, everything from rock climber to the crossword genius to the sandwich specialist. I’ve already told my family that when shopping for me, go straight to the music lover, the adventurer or the pet parent…
…Early indications are that Gift Mode is off to a good start, including positive sentiment from buyers and sellers in our social channels, very strong earned media coverage and nearly 6 million visits in the first 2 weeks. As you test and shop in Gift Mode, keep in mind that this is just the beginning.
Etsy’s management is using AI to understand the return on investment of the company’s marketing spend
We’ve got pretty sophisticated algorithms that work on is this bid — is this click worth this much right now and how much should we bid. And so to the extent that CPCs rise, we naturally pull back. Or to the extent that CPC is lower, we naturally lean in. The other thing, by the way, it’s not just CPCs, it’s also conversion rates. So in times when people are really budget constrained, we see them actually — we see conversion rate across the industry go down. We see people comparison shop a lot more. And so we are looking at all of that and not humans, but machines using AI are looking at a very sophisticated way at what’s happening with conversion rate right now, what’s happening with CPCs right now. And therefore, how much is each visit worth and how much should we be bidding.
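What management describes is, in essence, expected-value bidding: a click is worth the conversion rate times the value of a conversion, so bids fall automatically when either CPCs rise or conversion rates drop. A minimal sketch, with every parameter a hypothetical illustration rather than Etsy’s actual model:

```python
# Minimal sketch of expected-value bidding as described above.
# All parameters are hypothetical illustrations, not Etsy's actual model.
def max_bid(conversion_rate, avg_order_value, take_rate, target_roas=1.0):
    """Highest CPC worth paying: expected revenue per click / target ROAS."""
    expected_value_per_click = conversion_rate * avg_order_value * take_rate
    return expected_value_per_click / target_roas

# When conversion rates fall (e.g. budget-constrained, comparison-shopping
# buyers), bids fall too, so spend pulls back without human intervention.
print(max_bid(conversion_rate=0.03, avg_order_value=35.0, take_rate=0.20))  # ~0.21
print(max_bid(conversion_rate=0.02, avg_order_value=35.0, take_rate=0.20))  # ~0.14
```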
Fiverr (NYSE: FVRR)
Fiverr’s management is seeing strong demand for the AI services vertical, with AI-related keyword searches growing sevenfold in 2023
Early in January last year, we were the first in the market to launch a dedicated AI services vertical, creating a hub for businesses to hire AI talent. Throughout the year, we continued to see tremendous demand for those services, with searches that contain AI-related keywords on our marketplace growing sevenfold in 2023 compared to 2022.
Fiverr’s management has seen AI create a net-positive 4% impact on Fiverr’s business by driving a mix-shift from simple services – such as translation and voice-over – to complex services; complex services now represent 1/3 of Fiverr’s marketplace and are typically larger and longer-duration projects; complex categories are where a human touch is needed and adds value, while simple categories are where technology can do a good job without humans; Fiverr’s management thinks that simple categories will be automated away by AI while complex categories will become more important
Overall, we estimate AI created a net positive impact of 4% to our business in 2023 as we see a category mix shift from simple services, such as translation and voice-over, to more complex services, such as mobile app development, e-commerce management or financial consulting. In 2023, complex services represented nearly 1/3 of our marketplace, a significant step-up from 2022. Moreover, they are typically larger projects of longer duration, with an average transaction size 30% higher than those of simple services…
…What we’ve identified is that there is a difference between what we call simple categories or tasks and more complex ones. In the complex group, it’s really those categories that require human intervention and human inputs in order to produce a satisfactory result for the customer. And in these categories, we’re seeing growth that goes well beyond the overall growth that we’re seeing. The simple ones are those where technology can actually do pretty much the entire work, and in those cases, they’re usually associated with lower prices and shorter-term engagements…
…So our assumption is that some of the simple tasks are going to continue to be automated, which, by the way, is nothing new. It happened even before AI; automation has been a part of our lives. And definitely, the more complex services are where I think the growth potential lies. This is why we called out the fact that we’re going to double down on these categories and services.
Fiverr’s management believes that the opportunities created by AI will outweigh the jobs that are displaced
We believe that the opportunities created by emerging technologies far outweigh the jobs they replace. Human talent continues to be an essential part of unlocking the potential of new technologies.
Fiverr’s management believes that AI will be a multiyear tailwind for the company
We are also seeing a shift into more sophisticated, highly skilled and longer-duration categories with bigger addressable markets. Data shows our marketplace is built to benefit from these technologies and labor market changes. Unlike single-vertical solutions with higher exposure to disruptive technologies and trend changes, Fiverr has developed a proprietary horizontal platform with hundreds of verticals, quickly leaning into ever-changing industry demands and trends. All in all, we believe AI will be a multiyear tailwind for us to drive growth and innovation. In 2023, we also made significant investments in AI that drove improvements in our overall platform.
A strategic priority for Fiverr’s management in 2024 is to develop AI tools to enhance the overall customer experience of the company’s marketplace
Our recent winter product release in January was the culmination of these efforts in the second half of 2023, revamping almost every part of our platform with an AI-first approach, from search to personalization, from supply quality to seller engagement…
…Our third strategic priority is to continue developing proprietary AI applications unique to our marketplace to enhance the overall customer experience. The winter product release we discussed just now gives you a flavor of that, but there is so much more to do.
Mastercard (NYSE: MA)
Mastercard’s management is leveraging the company’s work on generative AI to build new services and solutions as well as to increase internal productivity
We also continue to develop new services and solutions, many of which leverage the work we are doing with generative AI. Generative AI brings more opportunity to drive better experiences for our customers and makes it easier to extract insights from our data. It can also help us increase internal productivity. We are working on many Gen AI use cases today to do just that. For example, we recently announced Shopping Muse. Shopping Muse uses generative AI to offer a conversational shopping tool that recreates the in-store human experience online and can translate consumers’ colloquial language into tailored recommendations. Another example is Mastercard Small Business AI. The tool will draw on our existing small business resources, along with content from a newly formed global media coalition, to help business owners navigate a range of business challenges. The platform, which is scheduled for pilot launch later this year, will leverage AI to provide personalized real-time assistance delivered in a conversational tone.
MercadoLibre (NASDAQ: MELI)
MercadoLibre’s management launched a number of AI features – including a summary of customer reviews, a summary of product functions, push notifications about items left unpurchased in shopping carts, and capabilities for sellers to create coupons and answer buyer questions quickly – in 2023 for the ecommerce business
In 2023, we launched capabilities that enable sellers to create their own promotional coupons and answer buyer questions more quickly with the assistance of artificial intelligence…
…AI based features are already an integral part of the MELI experience, with many innovations launched in 2023, including:
- A summary of customer reviews on the product pages that concentrates the main feedback from buyers of that product.
- On beauty product pages a summary of product functions and characteristics is automatically created to facilitate buyers choices.
- Push notifications about items left unpurchased in shopping carts are now highly personalized and remind users why they may have chosen to buy a particular product.
- We have also added an AI feature that helps sellers to respond to questions by preparing answers that sellers can send immediately, or edit quickly.
Meta Platforms (NASDAQ: META)
The major goal of Meta’s management is for the company to have (1) a world-class AI assistant for all users, (2) an AI for each creator that their community can engage with, (3) an AI agent for every business, and (4) state-of-the-art open source models for developers
Now moving forward, a major goal, we’ll be building the most popular and most advanced AI products and services. And if we succeed, everyone who uses our services will have a world-class AI assistant to help get things done, every creator will have an AI that their community can engage with, every business will have an AI that their customers can interact with to buy goods and get support, and every developer will have a state-of-the-art open-source model to build with.
Meta’s management thinks consumers will want a new AI-powered computing device that can see and hear what we are seeing and hearing, and this new computing device will be smart glasses, and will require full general intelligence; Meta has been conducting research on general intelligence for more than a decade, but it will now also incorporate general intelligence into product work – management thinks having product-targets when developing general intelligence helps to focus the work
I also think that everyone will want a new category of computing devices that let you frictionlessly interact with AIs that can see what you see and hear what you hear, like smart glasses. And one thing that became clear to me in the last year is that this next generation of services requires building full general intelligence. Previously, I thought that because many of the tools were social-, commerce- or maybe media-oriented, it might be possible to deliver these products by solving only a subset of AI’s challenges. But now it’s clear that we’re going to need our models to be able to reason, plan, code, remember and many other cognitive abilities in order to provide the best versions of the services that we envision. We’ve been working on general intelligence research at FAIR for more than a decade. But now general intelligence will be the theme of our product work as well…
…We’ve worked on general intelligence in our lab, FAIR, for more than a decade, as I mentioned, and we produced a lot of valuable work. But having clear product targets for delivering general intelligence really focuses this work and helps us build the leading research program.
Meta’s management believes the company has world-class compute infrastructure; Meta will end 2024 with 600,000 H100 (NVIDIA’s state-of-the-art AI chip) equivalents of compute; Meta is coming up with new data centre and chip designs customised for its own needs
The first is world-class compute infrastructure. I recently shared that, by the end of this year, we’ll have about 350,000 H100s, and including other GPUs, that will be around 600,000 H100 equivalents of compute…
…In order to build the most advanced clusters, we’re also designing novel data centers and designing our own custom silicon specialized for our workloads.
Meta’s management thinks that future AI models will be even more compute-intensive to train and run inference on; management does not know exactly how much compute this will be, but recognises that the trend has been for state-of-the-art AI models to require roughly 10x more compute each year, so management expects Meta to require growing infrastructure investments in the years ahead for its AI work
Now going forward, we think that training and operating future models will be even more compute-intensive. We don’t have a clear expectation for exactly how much this will be yet, but the trend has been that state-of-the-art large language models have been trained on roughly 10x the amount of compute each year…
…While we are not providing guidance for years beyond 2024, we expect our ambitious long-term AI research and product development efforts will require growing infrastructure investments beyond this year.
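To make the cited trend concrete, here is a toy projection under the rough 10x-per-year assumption. The baseline figure is an illustrative assumption, not a Meta disclosure.

```python
# Toy projection of the "roughly 10x the amount of compute each year" trend
# cited above. The baseline is an illustrative assumption, not a Meta figure.
baseline_flops = 1e25   # assumed training compute of a current frontier model
for years_out in range(1, 4):
    projected = baseline_flops * 10 ** years_out
    print(f"Year +{years_out}: ~{projected:.0e} training FLOPs")
```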
Meta’s approach with AI is to open-source its foundation models while keeping product-implementations proprietary; Meta’s management thinks open-sourcing brings a few key benefits, in that open source software (1) is safer and more compute-efficient, (2) can become the industry standard, and (3) attracts talented people; management intends to continue open-sourcing Meta’s AI models
Our long-standing strategy has been to build an open-source general infrastructure while keeping our specific product implementations proprietary. In the case of AI, the general infrastructure includes our Llama models, including Llama 3, which is training now, and it’s looking great so far, as well as industry standard tools like PyTorch that we’ve developed…
…The short version is that open sourcing improves our models. And because there’s still significant work to turn our models into products, and because there will be other open-source models available anyway, we find that there are mostly advantages to being the open-source leader, and it doesn’t remove differentiation for our products much anyway. And more specifically, there are several strategic benefits.
First, open-source software is typically safer and more secure as well as more compute-efficient to operate, due to all the ongoing feedback, scrutiny and development from the community. Now this is a big deal because safety is one of the most important issues in AI. Efficiency improvements and lowering the compute costs also benefit everyone, including us. Second, open-source software often becomes an industry standard. And when companies standardize on building with our stack, it then becomes easier to integrate new innovations into our products. That’s subtle, but the ability to learn and improve quickly is a huge advantage. And being an industry standard enables that. Third, open source is hugely popular with developers and researchers. And we know that people want to work on open systems that will be widely adopted. So this helps us recruit the best people at Meta, which is a very big deal for leading in any new technology area…
…This is why our long-standing strategy has been to open source general infrastructure and why I expect it to continue to be the right approach for us going forward.
Meta is already training the next generation of its foundational Llama model, Llama 3, and progress is good; Meta is also working on research for the next generations of Llama models with an eye on developing full general intelligence; Meta’s management thinks that the company’s next few generations of foundational AI models could be in a totally different direction from other AI companies
In the case of AI, the general infrastructure includes our Llama models, including Llama 3, which is training now, and it’s looking great so far…
…While we’re working on today’s products and models, we’re also working on the research that we need to advance for Llama 5, 6 and 7 in the coming years and beyond to develop full general intelligence…
…A lot of last year and the work that we’re doing with Llama 3 is basically making sure that we can scale our efforts to really produce state-of-the-art models. But once we get past that, there’s a lot more different kinds of research that I think we’re going to be doing that’s going to take our foundation models in potentially different directions than other players in the industry are going to go in, because we’re focused on a specific vision for what we’re building. So it’s really important as we think about what’s going to be in Llama 5 or 6 or 7, what cognitive abilities we want in there and what modalities we want to build into future multimodal versions of the models.
Meta’s management sees unique feedback loops for the company’s AI work that involve both data and usage of its products; the feedback loops have been important in how Meta improved its AI systems for Reels and ads
When people think about data, they typically think about the corpus that you might use to train a model upfront. And on Facebook and Instagram, there are hundreds of billions of publicly shared images and tens of billions of public videos, which we estimate is greater than the common crawl data set. And people share large numbers of public text posts and comments across our services as well. But even more important in the upfront training corpus is the ability to establish the right feedback loops with hundreds of millions of people interacting with AI services across our products. And this feedback is a big part of how we’ve improved our AI systems so quickly with Reels and Ads, especially over the last couple of years when we had to re-architect it around new rules.
Meta’s management wants hiring-growth in AI-related roles for 2024
AI is a growing area of investment for us in 2024 as we hire to support our road map…
…Second, we anticipate growth in payroll expenses as we work down our current hiring underrun and add incremental talent to support priority areas in 2024, which we expect will further shift our workforce composition toward higher-cost technical roles.
Meta’s management fully rolled out Meta AI Assistant and other AI chat experiences in the US at the end of 2023 and has begun testing generative AI features in the company’s Family of Apps; Meta’s focus in 2024 regarding generative AI is on launching Llama 3, making the Meta AI assistant more useful, and improving AI Studio
With generative AI, we fully rolled out our Meta AI assistant and other AI chat experiences in the U.S. at the end of the year and began testing more than 20 GenAI features across our Family of Apps. Our big areas of focus in 2024 will be working towards the launch of Llama 3, expanding the usefulness of our Meta AI assistant and progressing on our AI Studio road map to make it easier for anyone to create an AI.
Meta has been using AI to improve its marketing performance; Advantage+ is helping advertisers partially or fully automate the creation of ad campaigns; Meta has rolled out generative AI features to help advertisers with changing text and images in their ad campaigns – adoption of the features is strong and tests show promising performance gains, and Meta has a big focus in this area in 2024
We continue to leverage AI across our ad systems and product suite. We’re delivering continued performance gains from ranking improvements as we adopt larger and more advanced models, and this will remain an ongoing area of investment in 2024. We’re also building out our Advantage+ portfolio of solutions to help advertisers leverage AI to automate their advertising campaigns. Advertisers can choose to automate part of the campaign creation setup process, such as who to show their ad to with Advantage+ audience, or they can automate their campaign completely using Advantage+ shopping, which continues to see strong growth. We’re also now exploring ways to apply this end-to-end automation to new objectives. On the ads creative side, we completed the global rollout of 2 of our generative AI features in Q4, Text Variations and Image Expansion, and plan to broaden availability of our background generation feature later in Q1. Initial adoption of these features has been strong, and tests are showing promising early performance gains. This will remain a big area of focus for us in 2024…
…So we’re really scaling our Advantage+ suite across all of the different offerings there, which really helped to automate the ads creation process for different types of advertisers. And we’re getting very strong feedback on all of those different features, Advantage+ Shopping, obviously, being the first, but Advantage+ Catalog, Advantage+ Creative, Advantage+ Audiences, et cetera. So we feel like these are all really important parts of what has continued to grow improvements in our Ads business and will continue to going forward.
Meta’s management’s guidance for capital expenditure in 2024 has increased slightly from prior guidance (for perspective, 2023’s capex was $27.27 billion), driven by increased investments in servers and data centers for AI-related work
Turning now to the CapEx outlook. We anticipate our full year 2024 capital expenditures will be in the range of $30 billion to $37 billion, a $2 billion increase of the high end of our prior range. We expect growth will be driven by investments in servers, including both AI and non-AI hardware, and data centers as we ramp up construction on sites with our previously announced new data center architecture.
Meta’s management thinks AI will make all of the company’s products and services better, but is unsure how the details will play out
I do think that AI is going to make all of the products and services that we use and make better. So it’s hard to know exactly how that will play out.
Meta’s management does not expect the company’s generative AI products to be a meaningful revenue-driver in the short term, but they expect the products to be huge drivers in the long term
We don’t expect our GenAI products to be a meaningful 2024 driver of revenue. But we certainly expect that they will have the potential to be meaningful contributors over time.
Microsoft (NASDAQ: MSFT)
Microsoft is now applying AI at scale, across its entire tech stack, and this is helping the company win customers
We have moved from talking about AI to applying AI at scale. By infusing AI across every layer of our tech stack, we are winning new customers and helping drive new benefits and productivity gains.
Microsoft’s management thinks that Azure offers (1) the best AI training and inference performance, (2) the widest range of AI chips, including those from AMD, NVIDIA, and Microsoft, and (3) the best selection of foundational models, including LLMs and SLMs (small language models); Azure AI now has 53,000 customers and more than 33% are new to Azure; Azure allows developers to deploy LLMs without managing underlying infrastructure
Azure offers the top performance for AI training and inference and the most diverse selection of AI accelerators, including the latest from AMD and NVIDIA as well as our own first-party silicon, Azure Maia. And with Azure AI, we provide access to the best selection of foundation and open source models, including both LLMs and SLMs, all integrated deeply with infrastructure, data and tools on Azure. We now have 53,000 Azure AI customers. Over 1/3 are new to Azure over the past 12 months. Our new models-as-a-service offering makes it easy for developers to use LLMs from our partners like Cohere, Meta and Mistral on Azure without having to manage underlying infrastructure.
Azure grew revenue by 30% in 2023 Q4, with six points of growth from AI services; most of the six points of growth from AI services was driven by Azure OpenAI
Azure and other cloud services revenue grew 30% and 28% in constant currency, including 6 points of growth from AI services. Both AI and non-AI Azure services drove our outperformance…
…Yes, Azure OpenAI and then OpenAI’s own APIs on top of Azure would be the sort of the major drivers. But there’s a lot of the small batch training that goes on, whether it’s out of [indiscernible] or fine-tuning. And then a lot of people who are starting to use models as a service with all the other new models. But it’s predominantly Azure OpenAI today.
Microsoft’s management believes the company has built the world’s most popular SLMs; the SLMs have similar performance to larger models, but can run on laptops and mobile devices; both startups and established companies are exploring the use of Microsoft’s Phi SLM for applications
We have also built the world’s most popular SLMs, which offer performance comparable to larger models but are small enough to run on a laptop or mobile device. Anchor, Ashley, AT&T, EY and Thomson Reuters, for example, are all already exploring how to use our SLM, Phi, for their applications.
Microsoft has added OpenAI’s latest models to the Azure OpenAI Service; Azure OpenAI is seeing increased usage from AI-first startups, and more than 50% of Fortune 500 companies are using it
And we have great momentum with Azure OpenAI Service. This quarter, we added support for OpenAI’s latest models, including GPT-4 Turbo, GPT-4 with Vision, DALL-E 3 as well as fine-tuning. We are seeing increased usage from AI-first start-ups like Moveworks, Perplexity, Symphony AI as well as some of the world’s largest companies. Over half of the Fortune 500 use Azure OpenAI today, including Ally Financial, Coca-Cola and Rockwell Automation. For example, at CES this month, Walmart shared how it’s using Azure OpenAI Service along with its own proprietary data and models to streamline how more than 50,000 associates work and transform how its millions of customers shop.
Microsoft’s management is integrating AI across the company’s entire data stack; Cosmos DB, which has vector search capabilities, is used by companies as a database for AI apps; KPMG, with the help of Cosmos DB, has seen up to a 50% increase in productivity for its consultants; Azure AI Search provides hybrid search that goes beyond vector search, and OpenAI is using it for ChatGPT
We are integrating the power of AI across the entire data stack. Our Microsoft Intelligent Data Platform brings together operational databases, analytics, governance and AI to help organizations simplify and consolidate their data estates. Cosmos DB is the go-to database to build AI-powered apps at any scale, powering workloads for companies in every industry from AXA and Kohl’s to Mitsubishi and TomTom. KPMG, for example, has used Cosmos DB, including its built-in native vector search capabilities, along with Azure OpenAI Service to power an AI assistant, which it credits with driving an up to 50% increase in productivity for its consultants… And for those organizations who want to go beyond in-database vector search, Azure AI Search offers the best hybrid search solution. OpenAI is using it for retrieval augmented generation as part of ChatGPT.
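As an illustration of the retrieval-augmented generation pattern referenced above, here is a minimal sketch. The embed, search and complete helpers are hypothetical placeholders standing in for an embedding model, a hybrid (vector plus keyword) index like the one described, and an LLM; none of them are actual Azure or OpenAI APIs.

```python
# Minimal sketch of retrieval-augmented generation (RAG), the pattern described
# above. embed, search and complete are hypothetical placeholders for an
# embedding model, a hybrid (vector + keyword) index, and an LLM respectively;
# they are not actual Azure or OpenAI APIs.
from typing import Callable

def rag_answer(question: str,
               embed: Callable[[str], list[float]],
               search: Callable[[list[float], str, int], list[str]],
               complete: Callable[[str], str]) -> str:
    query_vector = embed(question)                # 1. embed the question
    passages = search(query_vector, question, 5)  # 2. hybrid retrieval: vector
                                                  #    similarity plus keywords
    context = "\n\n".join(passages)               # 3. ground the prompt in the
                                                  #    retrieved passages
    prompt = (f"Answer using only this context:\n{context}\n\n"
              f"Question: {question}")
    return complete(prompt)                       # 4. generate a grounded answer
```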
There are now more than 1.3 million GitHub Copilot subscribers, up 30% sequentially; more than 50,000 organisations use GitHub Copilot Business and Accenture alone will roll out GitHub Copilot to 50,000 of its developers in 2024; Microsoft’s management thinks GitHub Copilot is a core product for anybody who is working in software development
GitHub revenue accelerated to over 40% year-over-year, driven by all our platform growth and adoption of GitHub Copilot, the world’s most widely deployed AI developer tool. We now have over 1.3 million paid GitHub Copilot subscribers, up 30% quarter-over-quarter. And more than 50,000 organizations use GitHub Copilot Business to supercharge the productivity of their developers from digital natives like Etsy and HelloFresh to leading enterprises like Autodesk, Dell Technologies and Goldman Sachs. Accenture alone will roll out GitHub Copilot to 50,000 of its developers this year…
…It is becoming standard issue for any developer. It’s like, if you take away spellcheck from Word, I’ll be unemployable. And similarly, I think GitHub Copilot becomes core to anybody who is doing software development…
To increase GitHub Copilot’s ARPU (average revenue per user), and ARPUs for other Copilots for that matter, Microsoft’s management will lean on the improvement that the Copilots bring to a company’s operating leverage and ask for a greater share of the value created
Our ARPUs have been great but they’re pretty low. But frankly, even though we’ve had a lot of success, it’s not like we are a high-priced ARPU company. I think what you’re going to start finding is, whether it’s Sales Copilot or Service Copilot or GitHub Copilot or Security Copilot, they are going to fundamentally capture some of the value they drive in terms of the productivity of the OpEx, right? So it’s like 2 points, 3 points of OpEx leverage would go to some software spend. I think that’s a pretty straightforward value equation. And so that’s the first time. I mean, this is not something we’ve been able to make the case for before, whereas now I think we have that case.
Then even the horizontal Copilot is what Amy was talking about, which is at the Office 365 or Microsoft 365 level. Even there, you can make the same argument. Whatever ARPU we may have with E5, now you can say incrementally as a percentage of the OpEx, how much would you pay for a Copilot to give you more time savings, for example. And so yes, I think all up, I do see this as a new vector for us in what I’ll call the next phase of knowledge work and frontline work even and their productivity and how we participate.
And I think GitHub Copilot, I never thought of the tools business as fundamentally participating in the operating expenses of a company’s spend on, let’s say, development activity. And now you’re seeing that transition. It’s just not tools. It’s about productivity of your dev team.
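To see how the “2 points, 3 points of OpEx leverage” argument translates into dollars, here is a hypothetical back-of-the-envelope calculation. All figures are illustrative assumptions, except the $19-per-user-per-month list price of GitHub Copilot Business at the time of the call.

```python
# Hypothetical back-of-the-envelope on the "2 points, 3 points of OpEx
# leverage" argument above. Every figure here is an illustrative assumption,
# except the $19/user/month list price of GitHub Copilot Business at the time.
dev_opex = 50_000_000        # assumed annual OpEx of a 250-developer team
productivity_gain = 0.03     # assume Copilot recovers ~3 points of that OpEx
value_created = dev_opex * productivity_gain

developers = 250
copilot_spend = developers * 19 * 12   # $19 per user per month

print(f"Value created: ${value_created:,.0f}")  # $1,500,000
print(f"Copilot spend: ${copilot_spend:,.0f}")  # $57,000
```

On these assumptions the software spend is a small fraction of the productivity value captured, which is the shape of the value equation management is describing.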
Microsoft’s own research and external studies show that companies can see up to a 70% increase in productivity by using generative AI for specific tasks; early users of Copilot for Microsoft 365 became 29% faster in a number of tasks
Our own research as well as external studies show as much as 70% improvement in productivity using generative AI for specific work tasks. And overall, early Copilot for Microsoft 365 users were 29% faster in a series of tasks like searching, writing and summarizing.
Microsoft’s management believes that AI will become a first-class part of every personal computer (PC) in 2024
In 2024, AI will become a first-class part of every PC. Windows PCs with built-in neural processing units were front and center at CES, unlocking new AI experiences to make what you do on your PC easier and faster, from searching for answers and summarizing e-mails to optimizing performance and battery efficiency. Copilot in Windows is already available on more than 75 million Windows 10 and Windows 11 PCs, and our new Copilot Key, the first significant change to the Windows keyboard in 30 years, provides one-click access.
Microsoft’s management thinks that AI is transforming Microsoft’s search and browser experience; Microsoft’s users have created more than 5 billion images and conducted more than 5 billion chats to date, with both doubling sequentially; Bing and Edge both took share in 2023 Q4
And more broadly, AI is transforming our search and browser experience. We are encouraged by the momentum. Earlier this month, we achieved a new milestone with 5 billion images created and 5 billion chats conducted to date, both doubling quarter-over-quarter and both Bing and Edge took share this quarter.
Microsoft’s management expects the company’s capital expenditure to increase materially in the next quarter because of cloud and AI infrastructure investments; management’s commitment to increase infrastructure investments is guided by customer demand and what they see as a substantial market opportunity; management feels good about where Microsoft is in terms of adding infrastructure capacity to meet AI computing demand
We expect capital expenditures to increase materially on a sequential basis, driven by investments in our cloud and AI infrastructure and the slip of a delivery date from Q2 to Q3 from a third-party provider noted earlier. As a reminder, there can be normal quarterly spend variability in the timing of our cloud infrastructure build-out…
…Our commitment to scaling our cloud and AI investment is guided by customer demand and a substantial market opportunity. As we scale these investments, we remain focused on driving efficiencies across every layer of our tech stack and disciplined cost management across every team…
…I think we feel really good about where we have been in terms of adding capacity. You started to see the acceleration in our capital expense starting almost a year ago, and you’ve seen it scale through that process.
Microsoft’s management is seeing that most of the AI activity taking place on Azure is for inference
[Question] On AI, where are we in the journey from training driving most of the Azure AI usage to inferencing?
[Answer] What you’ve seen for the most part is all inferencing. So none of the large model training stuff is in any of our numbers at all. Small-batch training, where somebody is doing fine-tuning or what have you, will be there, but that’s sort of a minor part. So most of what you see in the Azure number is broadly inferencing.
New AI workloads on Azure start with selecting a frontier model, fine-tuning that model, and then running inference
The new workload in AI obviously, in our case, it starts with 1 of the frontier — I mean, starts with the frontier model, Azure OpenAI. But it’s not just about 1 model, right? So first, you take that model, you do all that jazz, you may do some fine-tuning. You do retrieval, which means you’re either hitting some storage meter or eating some compute meters. And by the way, you may also distill a large model into a small model, and that would be training perhaps, but it’s small-batch training that uses essentially inference infrastructure. So I think that’s what’s happening.
Microsoft’s management believes that generative AI will change the entire tech stack, down to the core computer architecture; one such change is to separate data storage from compute, as in the case of one of Microsoft’s newer services, Fabric
[Question] Cloud computing changed the tech stack in ways that we could not imagine 10 years back. The nature of the database layer, the operating system, every layer just changed dramatically. How do you foresee generative AI changing the tech stack as we know it?
[Answer] I think it’s going to have a very, very foundational impact. In fact, you could say the core compute architecture itself changes: everything from power density to the data center design, to what used to be the accelerator now being, so to speak, the main CPU or the main compute unit. And the network, the memory architecture, all of it. So the core computer architecture changes, and I think every workload changes. And so yes, take our data layer as an example.
The most exciting thing for me in the last year has been to see how our data layer has evolved to be built for AI, right? If you think about Fabric, part of the genius of Fabric is to be able to say, let’s separate out storage from the compute layer. In compute, we’ll have traditional SQL, we’ll have Spark. And by the way, you can have Azure AI drop on top of the same data lake, so to speak, or the lakehouse pattern. And then, in the business model, you can combine all of those different computes. So that’s the type of compute architecture change. That’s just 1 example…
… I do believe being in the cloud has been very helpful to build AI. But now AI is just redefining what it means to have — what the cloud looks like, both at the infrastructure level and the app model.
Microsoft’s management is seeing a few big use cases emerging within Microsoft 365 Copilot: Summarisation of meetings and documents; “chatting” with documents and texts of past communications; and creation and completion of documents
In terms of what we’re seeing, it’s actually interesting if you look at the data we have: summarization is number one. I’m doing summarizations of Teams meetings, inside of Teams during the meeting and after the meeting; Word documents, summarization; I get something in e-mail, I’m summarizing. So summarization has become a big deal. Drafts, right? You’re drafting e-mails, drafting documents. So anytime you want to start something, the blank-page problem goes away and you start by prompting and drafting.
Chat. To me, the most powerful feature is that the most important database in your company, which happens to be the database of your documents and communications, is now queryable by natural language in a powerful way, right? I can go and say, what are all the things Amy said I should be watching out for next quarter? And it will come out with great detail. And so: chat, summarization, drafts.
Also, by the way, actions. One of the most used things is: here’s a Word document, go create a PowerPoint for me. So that’s the stuff that’s also beginning.
Microsoft’s management is seeing strong engagement growth with Microsoft 365 Copilot that gives them optimism
And the other thing I would add, we always talk about in enterprise software, you sell software, then you wait and then it gets deployed. And then after deployment, you want to see usage. And in particular, what we’ve seen and you would expect this in some ways with Copilot, even in the early stages, obviously, deployment happens very quickly. But really what we’re seeing is engagement growth. To Satya’s point on how you learn and your behavior changes, you see engagement grow with time. And so I think those are — just to put a pin on that because it’s an important dynamic when we think about the optimism you hear from us.
Nvidia (NASDAQ: NVDA)
Nvidia’s management believes that companies are starting to build the next generation of AI data centres; this next generation of AI data centres takes in raw data and transforms it into tokens, which are the output of AI models
At the same time, companies have started to build the next generation of modern Data Centers, what we refer to as AI factories, purpose-built to refine raw data and produce valuable intelligence in the era of generative AI…
…A whole new industry in the sense that, for the very first time, a Data Center is not just about computing data and storing data and serving the employees of the company. We now have a new type of Data Center that is about AI generation, an AI-generation factory, and you’ve heard me describe it as AI factories. But basically, it takes raw material, which is data. It transforms it with these AI supercomputers that NVIDIA builds, and it turns it into incredibly valuable tokens. These tokens are what people experience on the amazing ChatGPT or Midjourney, or the searches these days that are augmented by that. All of your recommender systems are now augmented by that, the hyper-personalization that goes along with it. All of these incredible start-ups in digital biology generating proteins and generating chemicals, and the list goes on. And so all of these tokens are generated in a very specialized type of Data Center. And this Data Center, we call it AI supercomputers and AI-generation factories.
Nvidia’s management is seeing very strong demand for the company’s Hopper AI chips and expects demand to far outstrip supply
Demand for Hopper remains very strong. We expect our next generation products to be supply constrained as demand far exceeds supply…
…However, whenever we have new products, as you know, it ramps from 0 to a very large number, and you can’t do that overnight. Everything is ramped up. It doesn’t step up. And so whenever we have a new generation of products and right now, we are ramping H200s, there’s no way we can reasonably keep up on demand in the short term as we ramp.
Nvidia’s outstanding 2023 Q4 growth in Data Center revenue was driven by both training and inference of AI models; management estimates that 40% of Nvidia’s Data Center revenue in 2023 was for AI inference; the 40% estimate might even be understated, because recommendation systems that were driven by CPU approaches are now being driven by GPUs
In the fourth quarter, Data Center revenue of $18.4 billion was a record, up 27% sequentially and up 409% year-on-year…
…Fourth quarter Data Center growth was driven by both training and inference of generative AI and large language models across a broad set of industries, use cases and regions. The versatility and leading performance of our Data Center platform enables a high return on investment for many use cases, including AI training and inference, data processing and a broad range of CUDA accelerated workloads. We estimate in the past year, approximately 40% of Data Center revenue was for AI inference…
…The estimate is probably understated — but we estimated it, and let me tell you why. A year ago, the recommender systems that people use — when you run the Internet, the news, the videos, the music, the products being recommended to you — because, as you know, the Internet has trillions — I don’t know how many trillions, but trillions of things out there, and your phone is 3 inches squared. And so the ability for them to fit all of that information down to such a small piece of real estate is through a system, an amazing system, called recommender systems.
These recommender systems used to be all based on CPU approaches. But the recent migration to deep learning and now generative AI has really put these recommender systems now directly into the path of GPU acceleration. It needs GPU acceleration for the embeddings. It needs GPU acceleration for the nearest neighbor search. It needs GPU acceleration for reranking. And it needs GPU acceleration to generate the augmented information for you. So GPUs are in every single step of a recommender system now. And as you know, a recommender system is the single largest software engine on the planet. Almost every major company in the world has to run these large recommender systems.
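As a concrete (and heavily simplified) picture of the pipeline Huang describes, here is a sketch of the embedding, nearest-neighbour-search and reranking stages in plain NumPy. The data, shapes and scoring are illustrative assumptions; production systems run each of these stages on GPUs at vastly larger scale.

```python
# Heavily simplified sketch of the stages described above: embeddings,
# nearest-neighbour search, then reranking. Data, shapes and scores are
# illustrative; production systems run each stage on GPUs at far larger scale.
import numpy as np

rng = np.random.default_rng(0)
item_embeddings = rng.standard_normal((100_000, 64)).astype(np.float32)

def recommend(user_embedding: np.ndarray, k: int = 100, top_n: int = 10):
    # 1. Nearest-neighbour search: score every item against the user embedding.
    scores = item_embeddings @ user_embedding      # dot-product similarity
    candidates = np.argpartition(scores, -k)[-k:]  # cheap top-k shortlist
    # 2. Reranking: a heavier model would re-score the shortlist here;
    #    as a stand-in we simply sort the shortlist by the retrieval score.
    order = np.argsort(scores[candidates])[::-1]
    return candidates[order][:top_n]

user_embedding = rng.standard_normal(64).astype(np.float32)
print(recommend(user_embedding))
```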
Nvidia’s management is seeing that all industries are deploying AI solutions
Building and deploying AI solutions has reached virtually every industry. Many companies across industries are training and operating their AI models and services at scale…
…One of the most notable trends over the past year is the significant adoption of AI by enterprises across the industry verticals such as Automotive, health care, and financial services.
Large cloud providers accounted for more than half of Nvidia’s Data Center revenue in 2023 Q4
In the fourth quarter, large cloud providers represented more than half of our Data Center revenue, supporting both internal workloads and external public cloud customers.
Nvidia’s management is finding that consumer internet companies have been early adopters of AI and they are one of Nvidia’s largest customer categories; consumer internet companies are using AI (1) in content recommendation systems to boost user engagement and (2) to generate content for advertising and to help content creators
The consumer Internet companies have been early adopters of AI and represent one of our largest customer categories. Companies from search to e-commerce, social media, news and video services and entertainment are using AI for deep learning-based recommendation systems. These AI investments are generating a strong return by improving customer engagement, ad conversion and click-through rates…
… In addition, consumer Internet companies are investing in generative AI to support content creators, advertisers and customers through automation tools for content and ad creation, online product descriptions and AI shopping assistance.
Nvidia’s management is observing that enterprise software companies are using generative AI to help their customers with productivity and they are already seeing commercial success
Enterprise software companies are applying generative AI to help customers realize productivity gains. All the customers we’ve partnered with for both training and inference of generative AI are already seeing notable commercial success. ServiceNow’s generative AI products in their latest quarter drove their largest ever net new annual contract value contribution of any new product family release. We are working with many other leading AI and enterprise software platforms as well, including Adobe, Databricks, Getty Images, SAP, and Snowflake.
There are both enterprises and startups that are building foundational large language models; these models are serving specific cultures, regions, and also industries
The field of foundation large language models is thriving. Anthropic, Google, Inflection, Microsoft, OpenAI and xAI are leading with continued amazing breakthroughs in generative AI. Exciting companies like Adept, AI21, Character.AI, Cohere, Mistral, Perplexity and Runway are building platforms to serve enterprises and creators. New startups are creating LLMs to serve the specific languages, cultures and customs of the world’s many regions. And others are creating foundation models to address entirely different industries, like Recursion Pharmaceuticals and Generate Biomedicines for biology. These companies are driving demand for NVIDIA AI infrastructure through hyperscale or GPU-specialized cloud providers.
Nvidia’s AI infrastructure is used for autonomous driving; the automotive vertical accounted for more than $1 billion of Nvidia’s Data Center revenue in 2023, and Nvidia’s management thinks the automotive vertical is a big growth opportunity for the company
We estimate that Data Center revenue contribution of the Automotive vertical through the cloud or on-prem exceeded $1 billion last year. NVIDIA DRIVE infrastructure solutions include systems and software for the development of autonomous driving, including data ingestion, curation, labeling, and AI training, plus validation through simulation. Almost 80 vehicle manufacturers across global OEMs, new energy vehicles, trucking, robotaxi and Tier 1 suppliers are using NVIDIA’s AI infrastructure to train LLMs and other AI models for automated driving and AI cockpit applications. In effect, nearly every Automotive company working on AI is working with NVIDIA. As AV algorithms move to video transformers and more cars are equipped with cameras, we expect NVIDIA’s automotive Data Center processing demand to grow significantly…
…NVIDIA DRIVE Orin is the AI car computer of choice for software-defined AV fleet. Its successor, NVIDIA DRIVE Thor, designed for vision transformers offers more AI performance and integrates a wide range of intelligent capabilities into a single AI compute platform, including autonomous driving and parking, driver and passenger monitoring, and AI cockpit functionality and will be available next year. There were several automotive customer announcements this quarter. Li Auto, Great Wall Motor, ZEEKR, the premium EV subsidiary of Geely and Xiaomi EV all announced new vehicles built on NVIDIA.
Nvidia is developing AI solutions in the realm of healthcare
In health care, digital biology and generative AI are helping to reinvent drug discovery, surgery, medical imaging, and wearable devices. We have built deep domain expertise in health care over the past decade, creating the NVIDIA Clara health care platform and NVIDIA BioNeMo, a generative AI service to develop, customize and deploy AI foundation models for computer-aided drug discovery. BioNeMo features a growing collection of pre-trained biomolecular AI models that can be applied to the end-to-end drug discovery process. We announced that Recursion is making its proprietary AI model available through BioNeMo for the drug discovery ecosystem.
Nvidia’s business in China is affected by the US government’s export restrictions concerning advanced AI chips; Nvidia has been building workarounds and have started shipping alternatives to China; Nvidia’s management expects China to remain a single-digit percentage of Data Center revenue in 2024 Q1; management thinks that while the US government wants to limit China’s access to leading-edge AI technology, it still wants to see Nvidia succeed in China
Growth was strong across all regions except for China, where our Data Center revenue declined significantly following the U.S. government export control regulations imposed in October. Although we have not received licenses from the U.S. government to ship restricted products to China, we have started shipping alternatives that don’t require a license for the China market. China represented a mid-single-digit percentage of our Data Center revenue in Q4, and we expect it to stay in a similar range in the first quarter…
…At the core, remember, the U.S. government wants to limit the latest capabilities of NVIDIA’s accelerated computing and AI to the Chinese market. And the U.S. government would like to see us be as successful in China as possible. Within those two constraints, within those two pillars, if you will, are the restrictions.
Nvidia’s management is seeing demand for AI infrastructure from countries become an additional growth-driver for the company
In regions outside of the U.S. and China, sovereign AI has become an additional demand driver. Countries around the world are investing in AI infrastructure to support the building of large language models in their own language on domestic data and in support of their local research and enterprise ecosystems…
…So we’re seeing sovereign AI infrastructure is being built in Japan, in Canada, in France, so many other regions. And so my expectation is that what is being experienced here in the United States, in the West will surely be replicated around the world.
Nvidia is shipping its Hopper AI chips with InfiniBand networking; management believes that a combination of the company’s Hopper AI chips with InfiniBand is becoming a de facto standard for AI infrastructure
The vast majority of revenue was driven by our Hopper architecture along with InfiniBand networking. Together, they have emerged as the de facto standard for accelerated computing and AI infrastructure.
Nvidia is on track to ramp shipments of the latest generation of its most advanced AI chips – the H200 – in 2024 Q2; the H200 chip has nearly double the inference performance of its predecessor
We are on track to ramp H200 with initial shipments in the second quarter. Demand is strong as H200 nearly doubled the inference performance of H100.
Nvidia’s networking solutions have a revenue run-rate of more than $13 billion, and the company’s Quantum InfiniBand solutions grew by more than five times in 2023 Q4 – but Nvidia is also working on its own Ethernet AI networking solution, called Spectrum-X, which is purpose-built for AI and performs better than traditional Ethernet for AI workloads; Spectrum-X has attracted leading OEMs as partners, and Nvidia is on track to ship the solution in 2024 Q1; management still sees InfiniBand as the standard for AI-dedicated systems
Networking exceeded a $13 billion annualized revenue run rate. Our end-to-end networking solutions define modern AI data centers. Our Quantum InfiniBand solutions grew more than 5x year-on-year. NVIDIA Quantum InfiniBand is the standard for the highest-performance AI-dedicated infrastructures. We are now entering the Ethernet networking space with the launch of our new Spectrum-X end-to-end offering, designed for AI-optimized networking for the Data Center. Spectrum-X introduces new technologies over Ethernet that are purpose-built for AI. Technologies incorporated in our Spectrum switch, BlueField DPU and software stack deliver 1.6x higher networking performance for AI processing compared with traditional Ethernet. Leading OEMs, including Dell, HPE, Lenovo and Supermicro with their global sales channels, are partnering with us to expand our AI solution to enterprises worldwide. We are on track to ship Spectrum-X this quarter…
…InfiniBand is the standard for AI-dedicated systems. Ethernet by itself is just not a very good scale-out system. But with Spectrum-X, we’ve augmented Ethernet, layered on top of it fundamental new capabilities like adaptive routing, congestion control, and noise isolation or traffic isolation, so that we could optimize Ethernet for AI. And so InfiniBand will be our AI-dedicated infrastructure, and Spectrum-X will be our AI-optimized networking
Nvidia’s software and services offerings – which include its AI-training-as-a-service platform, DGX Cloud – have reached a $1 billion annualised revenue run rate; DGX Cloud is now available on all the major cloud service providers; Nvidia’s management believes that the company’s software business will become very significant over time, because of the importance of software when running AI-related hardware
We also made great progress with our software and services offerings, which reached an annualized revenue run rate of $1 billion in Q4. NVIDIA DGX Cloud will expand its list of partners to include Amazon’s AWS, joining Microsoft Azure, Google Cloud, and Oracle Cloud. DGX Cloud is used for NVIDIA’s own AI R&D and custom model development as well as NVIDIA developers. It brings the CUDA ecosystem to NVIDIA CSP partners…
…And the way that we work with CSPs, that’s really easy. We have large teams that are working with their large teams. However, now that generative AI is enabling every enterprise and every enterprise software company to embrace accelerated computing, and when it is now essential to embrace accelerated computing because it is no longer possible, no longer likely anyhow, to sustain improved throughput through just general-purpose computing, all of these enterprise software companies and enterprise companies don’t have large engineering teams to be able to maintain and optimize their software stack to run across all of the world’s clouds and private clouds and on-prem.
So we are going to do the management, the optimization, the patching, the tuning, the installed-base optimization for all of their software stacks. And we containerize them into our stack called NVIDIA AI Enterprise. And the way we go to market with it is: think of NVIDIA AI Enterprise as a runtime, like an operating system. It’s an operating system for artificial intelligence. And we charge $4,500 per GPU per year. And my guess is that every enterprise in the world, every software enterprise company that is deploying software in all the clouds and private clouds and on-prem will run on NVIDIA AI Enterprise, especially, obviously, for our GPUs. And so this is going to likely be a very significant business over time.
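The quoted $4,500 per GPU per year makes the economics easy to sketch; the fleet size below is an illustrative assumption, not an Nvidia figure.

```python
# Back-of-the-envelope on the NVIDIA AI Enterprise pricing quoted above:
# $4,500 per GPU per year. The fleet size is an illustrative assumption.
PRICE_PER_GPU_PER_YEAR = 4_500

def annual_license_cost(num_gpus: int) -> int:
    """Annual NVIDIA AI Enterprise licensing cost for a GPU fleet."""
    return num_gpus * PRICE_PER_GPU_PER_YEAR

# An enterprise running 1,000 GPUs across clouds and on-prem:
print(f"${annual_license_cost(1_000):,} per year")  # $4,500,000 per year
```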
Nvidia’s gaming chips also have strong generative AI capabilities, leading to better gaming performance
At CES, we announced our GeForce RTX 40 Super Series family of GPUs. Starting at $599, they deliver incredible gaming performance and generative AI capabilities. Sales are off to a great start. NVIDIA AI Tensor Cores in the GPUs deliver up to 836 AI TOPS, perfect for powering AI for gaming, creating and everyday productivity. The rich software stack we offer with our RTX GPUs further accelerates AI. With our DLSS technologies, 7 out of 8 pixels can be AI-generated, resulting in up to 4x faster ray tracing and better image quality. And with TensorRT-LLM for Windows, our open-source library that accelerates inference performance for the latest large language models, generative AI can run up to 5x faster on RTX AI PCs.
Nvidia has announced new gaming AI laptops from every major laptop manufacturer; Nvidia has more than 100 million RTX PCs in its installed base, and management thinks the company is in a good position to lead the next wave of generative AI applications that are coming to the personal computer
At CES, we also announced a wave of new RTX 40 Series AI laptops from every major OEM. These bring high-performance gaming and AI capabilities to a wide range of form factors, including 14-inch and thin and light laptops. With up to 686 TOPS of AI performance, these next-generation AI PCs increase generative AI performance by up to 60x, making them the best performing AI PC platforms…
…NVIDIA is fueling the next wave of generative AI applications coming to the PC. With over 100 million RTX PCs in the installed base and over 500 AI-enabled PC applications and games, we are on our way.
Nvidia has a service that allows software developers to build state-of-the-art generative AI avatars
At CES, we announced NVIDIA Avatar Cloud Engine microservices, which allow developers to integrate state-of-the-art generative AI models into digital avatars. ACE won several Best of CES 2024 awards. NVIDIA has an end-to-end platform for building and deploying generative AI applications for RTX PCs and workstations. This includes libraries, SDKs, tools and services developers can incorporate into their generative AI workloads.
Nvidia’s management believes that generative AI cannot be done on traditional general-purpose computing – it has to be done on an accelerated computing framework
With accelerated computing, you can dramatically improve your energy efficiency. You can dramatically improve your cost in data processing by 20:1, huge numbers. And of course, the speed. That speed is so incredible that we enabled a second industry-wide transition called generative AI. In generative AI, I’m sure we’re going to talk plenty about it during the call. But remember, generative AI is a new application. It is enabling a new way of doing software, new types of software being created. It is a new way of computing. You can’t do generative AI on traditional general-purpose computing. You have to accelerate it.
The hardware supply chain of an Nvidia GPU is improving; the components that go into an Nvidia GPU are really complex
Our supply is improving. Overall, our supply chain is just doing an incredible job for us. Everything from, of course, the wafers, the packaging, the memories, all of the power regulators to transceivers and networking and cables, and you name it, the list of components that we ship. As you know, people think that NVIDIA GPUs is like a chip, but the NVIDIA Hopper GPU has 35,000 parts. It weighs 70 pounds. These things are really complicated things we’ve built. People call it an AI supercomputer for good reason. If you ever look at the back of the Data Center, the systems, the cabling system is mind-boggling. It is the most dense, complex cabling system for networking the world has ever seen. Our InfiniBand business grew 5x year-over-year. The supply chain is really doing fantastic supporting us. And so overall, the supply is improving.
Nvidia’s management is allocating chips fairly to all of the company’s customers
CSPs have a very clear view of our product road map and transitions. And that transparency with our CSPs gives them the confidence of which products to place and where and when. And so they know the timing to the best of our ability, and they know quantities and, of course, allocation. We allocate fairly, doing the best we can to allocate fairly and to avoid allocating unnecessarily.
Nvidia’s management is seeing a lot of activity emerging from robotics companies
There’s just a giant suite of robotics companies that are emerging. There are warehouse robotics to surgical robotics to humanoid robotics, all kinds of really interesting robotics companies, agriculture robotics companies.
Nvidia’s installed base of hardware has been able to support every single innovation in AI technology because it is programmable
NVIDIA is the only architecture that has gone from the very, very beginning, literally at the very beginning when CNNs and Alex Krizhevsky and Ilya Sutskever and Geoff Hinton first revealed AlexNet, all the way through RNNs to LSTMs to every RLs to deep RLs to transformers to every single version and every species that have come along, vision transformers, multi-modality transformers that every single — and now time sequence stuff. And every single variation, every single species of AI that has come along, we’ve been able to support it, optimize our stack for it and deploy it into our installed base…
… We simultaneously have this ability to bring software to the installed base and keep making it better and better and better. So our customers’ installed base is enriched over time with our new software…
…Don’t be surprised if in our future generation, all of a sudden, amazing breakthroughs in large language models were made possible. And those breakthroughs, some of which will be in software because they run CUDA, will be made available to the installed base. And so we carry everybody with us on the one hand, we make giant breakthroughs on the other hand.
A big difference between accelerated computing and general-purpose computing is the importance of software in the former
As you know, accelerated computing is very different than general-purpose computing. You’re not starting from a program like C++. You compile it and things run on all your CPUs. The stacks of software necessary for every domain from data processing, SQL versus SQL structured data versus all the images and text and PDF, which is unstructured, to classical machine learning to computer vision to speech to large language models, all — recommender systems. All of these things require different software stacks. That’s the reason why NVIDIA has hundreds of libraries. If you don’t have software, you can’t open new markets. If you don’t have software, you can’t open and enable new applications. Software is fundamentally necessary for accelerated computing. This is the fundamental difference between accelerated computing and general-purpose computing that most people took a long time to understand. And now people understand that software is really key.
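For the more technically-inclined, here is a minimal Python sketch of my own (not from the call) of why the software stack matters in accelerated computing: the same matrix computation runs on a CPU through NumPy, but reaching the GPU requires a CUDA-backed library such as CuPy; without that domain software, the accelerator sits unused. The sketch assumes a CuPy install matching your CUDA version and an Nvidia GPU, and falls back to the CPU otherwise.

```python
# A toy illustration of the point above: GPU acceleration is not automatic.
# It needs a domain-specific software stack (here CuPy, which wraps NVIDIA's
# CUDA libraries). Assumes a CuPy install and an NVIDIA GPU; falls back to
# plain NumPy on the CPU otherwise.
import numpy as np

try:
    import cupy as cp
    xp = cp  # GPU path: NumPy-style API, CUDA libraries underneath
except ImportError:
    xp = np  # CPU fallback: identical high-level code, no acceleration

a = xp.random.rand(4096, 4096).astype(xp.float32)
b = xp.random.rand(4096, 4096).astype(xp.float32)
c = a @ b  # dispatches to cuBLAS on the GPU path, CPU BLAS otherwise

print("Backend in use:", xp.__name__)
```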
Nvidia’s management believes that generative AI has kicked off a massive new investment cycle for AI infrastructure
Generative AI has kicked off a whole new investment cycle to build the next trillion dollars of infrastructure of AI generation factories. We believe these two trends will drive a doubling of the world data center infrastructure installed base in the next 5 years and will represent an annual market opportunity in the hundreds of billions.
PayPal (NASDAQ: PYPL)
PayPal’s management will soon launch a new PayPal app that will utilise AI to personalise the shopping experience for consumers; management hopes to drive engagement with the app
This year, we’re launching and evolving a new PayPal app to create a situation. We will also leverage our merchant relationships and the power of AI to make the entire shopping experience personalized for consumers while giving them control over their data…
…The new checkout and app experiences we are rolling out this year will also create an engagement loop that will drive higher awareness of the various products we offer and drive higher adoption of our portfolio over time.
Shopify (NYSE: SHOP)
Shopify’s management launched nearly a dozen AI-powered tools through the Shopify Magic product suite in 2023, including tools for AI-generated product descriptions and an AI commerce assistant; in recent weeks, management launched AI-powered product-image creation and editing tools within Shopify Magic; management will be introducing new modalities and text-to-image capabilities later this year
In 2023, we brought nearly a dozen AI-enabled tools through our Shopify Magic product suite. We’re one of the first platforms to bring AI-generated product descriptions to market and made solid progress towards building Sidekick, a first of its kind AI-enabled commerce assistant. As part of our winter edition a few weeks ago, we introduced new features to our Shopify Magic suite of AI tools. These new generative AI tools simplify and enhance product image editing directly within the product image editor in the Shopify admin. With Shopify Magic, merchants can now leverage AI to create stunning images and professional edits with just a few clicks or keywords, saving on cost and time. And given the significant advancements in AI in 2023, we plan to seize this enormous opportunity ahead of us and are excited to introduce new modalities and text to image capabilities to Shopify in 2024.
Shopify’s marketing paybacks have improved by over 30% with the help of AI and automation
In terms of marketing, the 2 areas, in particular, where we are leaning in this quarter are performance marketing and point-of-sale. Within performance marketing, our team has unlocked some opportunities to reach potential customers at highly attractive LTV to CAC and paybacks. In fact, tactics that we’ve implemented on some channels earlier this year including through the enhanced use of AI and automation have improved paybacks by over 30%, enabling us to invest more into these channels while still maintaining our operating discipline on the underlying unit economics.
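To make the payback claim concrete, here is a back-of-envelope sketch; the dollar figures are hypothetical placeholders of mine (Shopify disclosed only the 30% improvement), but the arithmetic shows how a payback improvement translates into a shorter recovery window.

```python
# Hypothetical illustration of "paybacks improved by over 30%". Shopify
# disclosed only the percentage; the dollar figures below are assumptions.
cac = 120.0                  # assumed customer acquisition cost, $
monthly_gross_profit = 8.0   # assumed gross profit per customer per month, $

payback_before = cac / monthly_gross_profit      # months to recoup CAC
payback_after = payback_before * (1 - 0.30)      # a 30% improvement

print(f"Payback before: {payback_before:.1f} months")  # 15.0
print(f"Payback after:  {payback_after:.1f} months")   # 10.5
```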
Taiwan Semiconductor Manufacturing Company (NYSE: TSM)
TSMC’s management has increased the company’s capital expenditure materially over the last few years to capture the growth opportunities associated with AI
At TSMC, a higher level of capital expenditures is always correlated with higher growth opportunities in the following years. In the past few years, we have sharply increased our CapEx spending in preparation to capture or harvest the growth opportunities from HPC, AI and 5G megatrends.
TSMC’s management expects 2024 to be a healthy growth year for the company, with revenue growth in the low-to-mid 20s percentage range, driven by its 3nm technologies, 5nm technologies, and AI
Entering 2024, we forecast fabless semiconductor inventory to have returned to a [ handsome ] level exiting 2023. However, macroeconomic weakness and geopolitical uncertainties persist, potentially further weighing on consumer sentiment and the market demand. Having said that, our business has bottomed out on a year-over-year basis, and we expect 2024 to be a healthy growth year for TSMC, supported by continued strong ramp of our industry-leading 3-nanometer technologies, strong demand for the 5-nanometer technologies and robust AI-related demand.
TSMC’s management sees 2023 as the year that generative AI became important for the semiconductor industry, with TSMC as a key enabler; management thinks that the surge in AI-related demand in 2023 will drive an acceleration in structural demand for energy-efficient computing, and that AI will need to be supported by more powerful semiconductors – these are TSMC’s strengths
2023 was a challenging year for the global semiconductor industry, but we also witnessed the rising emergence of generative AI-related applications with TSMC as a key enabler…
…Despite the near-term challenges, our technology leadership enable TSMC to outperform the foundry industry in 2023, while we are positioning us to capture the future AI and high-performance computing-related growth opportunities…
…The surge in AI-related demand in 2023 supports our already strong conviction that the structural demand for energy-efficient computing will accelerate in an intelligent and connected world. TSMC is a key enabler of AI applications. No matter which approach is taken, AI technology is evolving to use more complex AI models as the amount of computation required for training and inference is increasing. As a result, AI models need to be supported by more powerful semiconductor hardware, which requires use of the most advanced semiconductor process technologies. Thus, the value of TSMC technology position is increasing, and we are all well positioned to capture the major portion of the market in terms of semiconductor component in AI. To address insatiable AI-related demand for energy-efficient computing power, customers rely on TSMC to provide the most leading edge processing technology at scale with a dependable and predictable cadence of technology offering.
Almost everyone important in AI is working with TSMC on its 2nm technologies
As process technology complexity increase, the engagement lead time with customers also started much earlier. Thus, almost all the AI innovators are working with TSMC, and we are observing a much higher level of customer interest and engagement at N2 as compared with N3 at a similar stage from both HPC and the smartphone applications.
TSMC’s management believes that the world has seen only the tip of the iceberg with AI
But on the other hand, AI is only in its nascent stage. Only last November, the first large language model, ChatGPT, was announced. We only see the tip of the iceberg.
TSMC’s management believes that the use of AI could accelerate scientific innovation in the field of semiconductor manufacturing
So I want to give the industry an optimistic note that even though 1 nanometer or sub 1 nanometer could be challenging, but we have a new technology capability using AI to accelerate the innovation in science.
TSMC’s management still believes that its narrowly-defined AI business will grow at 50% annually; management also sees AI application process chips making up a high-teens weightage of TSMC’s revenue by 2027, up from a low-teens weightage mentioned in the 2023 second-quarter earnings call, because of a sudden increase in demand
But for TSMC, we look at ours here, the AI’s a CAGR, that’s the growth rate every year, it’s about 50%. And we are confident that we can capture more opportunities in the future. So that’s what we said that up to 2027, we are going to have high teens of the revenue from a very narrow, we defined the AI application process, not to mention about the networking, not to mention about all others, okay?…
…[Question] You mentioned that we have a very narrow definition, we call server AI processor contribution and that you said it can be high teens in 5 years’ time because the last time, we said low teens.
[Answer] The demand suddenly being increased since last — I think, last year, the first quarter up to March or April, when ChatGPT become popular, so customers respond quickly and asked TSMC to prepare the capacity, both in front end and the back end. And that’s why we have confidence that this AI’s revenue will increase. We only narrowed down to the AI application process, by the way. So we look at ours here, that we prepare the technology and the capacities in both our front end and also back end. And so we — it’s in the early stage so far today. We already see the increase, the momentum. And we expect — if you guys continue to track this one, the number will increase. I have confidence to say that, although I don’t know how much.
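For a sense of how a 50% annual growth rate squares with the “high teens” revenue weightage by 2027, here is a rough compounding sketch; the starting AI share and the growth rate for the rest of TSMC’s business are my assumptions, not company guidance.

```python
# Rough compounding check of TSMC's numbers: a ~50% CAGR for AI processor
# revenue (stated on the call) against assumed figures for everything else.
ai_share_2023 = 0.06   # assumed AI share of 2023 revenue
ai_cagr = 0.50         # stated on the call
rest_cagr = 0.10       # assumed growth of non-AI revenue

ai, rest = ai_share_2023, 1 - ai_share_2023
for year in range(2024, 2028):
    ai *= 1 + ai_cagr
    rest *= 1 + rest_cagr
    print(f"{year}: AI share of revenue = {ai / (ai + rest):.0%}")
# Under these assumptions the AI share lands around 18% in 2027, i.e. "high teens".
```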
TSMC’s management is seeing AI chips being placed in edge-devices such as smartphones and PCs
And to further extend our value, actually, all the edge device, including smartphone, including the PC, they start to put the AI’s application inside. They have some kind of a neural process, for example, so the silicon content will be greatly increased.
Tesla (NASDAQ: TSLA)
Tesla has released version 12 of its FSD (Full Self Driving) software, which is powered end-to-end by AI (artificial intelligence); Tesla will soon release it to over 400,000 vehicles in North America; FSD v12 is the first time AI has been used for pathfinding and vehicle controls, and within it, neural nets replaced 330,000 lines of C++ code
For full self-driving, we’ve released version 12, which is a complete architectural rewrite compared to prior versions. This is end-to-end artificial intelligence. So [ nothing but ] nets basically, photons in and controls out. And it really is quite a profound difference. This is currently just with employees and a few customers, but we will be rolling out to all who — all those customers in the U.S. who request full self-driving in the weeks to come. That’s over 400,000 vehicles in North America. So this is the first time AI has been used not just for object perception but for pathfinding and vehicle controls. We replaced 330,000 lines of C++ code with neural nets. It’s really quite remarkable.
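For readers unfamiliar with what “photons in, controls out” means in practice, here is a toy sketch of mine (not Tesla’s code) contrasting the two architectures: hand-written rules that map perception outputs to controls, versus a single learned function that maps raw camera pixels straight to steering and acceleration.

```python
import numpy as np

# Old architecture: perception feeds hundreds of thousands of lines of
# hand-written control rules (the 330,000 lines of C++ in the quote above).
def rule_based_controller(obstacle_ahead: bool, lane_offset: float):
    steer = -0.5 * lane_offset             # one of many hand-tuned rules
    accel = 0.0 if obstacle_ahead else 0.3
    return steer, accel

# End-to-end architecture: one learned function, "photons in, controls out".
def end_to_end_policy(camera_frames: np.ndarray) -> np.ndarray:
    # Stand-in for a trained neural network; in FSD v12 the mapping from
    # pixels to controls is learned from data rather than written by hand.
    features = camera_frames.mean(axis=(1, 2, 3))  # toy "network"
    return np.tanh(features[:2])                   # [steer, accel]

frames = np.random.rand(8, 3, 240, 320)  # 8 toy camera frames (C, H, W)
print(end_to_end_policy(frames))
```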
Tesla’s management believes that Tesla is the world’s most efficient company at AI inference because the company, out of necessity, has had to wring the most performance out of 3-year-old hardware
I think Tesla is probably the most efficient company in the world for AI inference. Out of necessity, we’ve actually had to be extremely good at getting the most out of hardware because hardware 3 at this point is several years old. So I don’t — I think we’re quite far ahead of any other company in the world in terms of AI inference efficiency, which is going to be a very important metric in the future in many arenas.
Tesla’s management thinks that the AI technologies the company has developed for vehicles translates well into a humanoid robot (Optimus); Tesla’s vehicles and Optimus both have the same inference computers
And the technologies that we — the AI technologies we’ve developed for the car translate quite well to a humanoid robot because the car is just a robot on 4 wheels. Tesla is arguably already the biggest robot maker in the world. It’s just a 4-wheeled robot. So Optimus is a robot with — a humanoid robot with arms and legs, just by far the most sophisticated humanoid robot that’s being developed anywhere in the world…
…As we improve the technology in the car, we improve the technology in Optimus at the same time. It runs the same AI inference computer that’s on the car, same training technology. I mean we’re really building the future. I mean the Optimus lab looks like the set of Westworld, but admittedly, that was not a super utopian situation.
Tesla’s management is hedging its bets on AI training compute with significant orders of Nvidia GPUs while also pursuing Dojo (Tesla’s own AI training chip)
[Question] As a follow-up, your release does not mention Dojo, so if you could just provide us an update on where Dojo stands and at what point do you expect Dojo to be a resource in improving FSD. Or do you think that you now have sufficient supply of NVIDIA GPUs needed for the training of the system?
[Answer] I mean the AI part of your question is — that is a deep one. So we’re obviously hedging our bets here with significant orders of NVIDIA GPUs…
…And we’re pursuing the dual path of NVIDIA and Dojo.
Tesla’s management believes that Tesla’s progress in self-driving is limited by training, and that in AI, the more training is done on a model, the fewer resources are required for inference
A lot of our progress in self-driving is training limited. Something that’s important with training, it’s much like a human. The more effort you put into training, the less effort you need in inference. So just like a person, if you train in a subject, sort of class, 10,000 hours, the less mental effort it takes to do something. If you remember when you first started to drive how much of your mental capacity it took to drive, it was — you had to be focused completely on driving. And after you’ve been driving for many years, it only takes a little bit of your mind to drive, and you can think about other things and still drive safely. So the more training you do, the more efficient it is at the inference level. So we do need a lot of training. And we’re pursuing the dual path of NVIDIA and Dojo…
Tesla’s management thinks that Dojo is a long shot – it has potential, but may not work out
But I would think of Dojo as a long shot. It’s a long shot worth taking because the payoff is potentially very high but it’s not something that is a high probability. It’s not like a sure thing at all. It’s a high risk, high payoff program. Dojo is working, and it is doing training jobs, so — and we are scaling it up. And we have plans for Dojo 1.5, Dojo 2, Dojo 3 and whatnot. So I think it’s got potential. I can’t emphasize enough, high risk, high payoff.
Tesla’s management thinks that the AI-inference hardware in Tesla’s vehicles could, at some point in the future, give the company perhaps the largest amount of compute resources for AI tasks in the world
There’s also our inference hardware in the car, so we’re now on what’s called Hardware 4, but it’s actually version 2 of the Tesla-designed AI inference chip. And we’re about to complete design of — the terminology is a bit confusing. About to complete design of Hardware 5, which is actually version 3 of the Tesla-designed chip because the version 1 was Mobileye. Version 2 was NVIDIA, and then version 3 was Tesla. So — and we’re making gigantic improvements from 1 — from Hardware 3 to 4 to 5. I mean there’s a potentially interesting play where when cars are not in use in the future, that the in-car computer can do generalized AI tasks, can run a sort of GPT4 or 3 or something like that. If you’ve got tens of millions of vehicles out there, even in a robotaxi scenario, whether in heavy use, maybe they’re used 50 out of 168 hours, that still leaves well over 100 hours of time available — of compute hours. Like it’s possible with the right architectural decisions that Tesla may, in the future, have more compute than everyone else combined.
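Musk’s “well over 100 hours” is simple arithmetic, and the fleet-scale implication is easy to sketch; the fleet size and per-vehicle throughput below are my illustrative assumptions (only the 50-of-168-hours utilisation comes from the call).

```python
# Back-of-envelope version of the idle-fleet-compute idea in the quote above.
hours_per_week = 168
hours_in_use = 50                            # heavy robotaxi usage, per the call
idle_hours = hours_per_week - hours_in_use   # 118 idle hours per week

fleet_size = 10_000_000    # assumed: "tens of millions of vehicles"
tops_per_vehicle = 100     # assumed inference throughput per car, in TOPS

print(f"Idle compute: {fleet_size * idle_hours:,} vehicle-hours per week")
# 1 exa-op/s = 1e6 TOPS, so the aggregate idle throughput in exa-ops/s is:
print(f"Aggregate throughput: {fleet_size * tops_per_vehicle / 1e6:,.0f} exa-ops/s")
```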
The Trade Desk (NASDAQ: TTD)
Trade Desk’s management believes that in a post-cookie world, advertisers will have to depend on authentication, new approaches to identity, first-party data, and AI-driven relevance tools – Trade Desk’s tools help create the best outcome in this world
The post-cookie world is one that will combine authentication, new approaches to identity, first-party data activation and advanced AI-driven relevance tools, all to create a new identity fabric for the Internet that is so much more effective than cookies ever were. The Internet is being replumbed and our product offerings create the best outcome for all of the open Internet.
AI optimisations are distributed across Kokai, which is Trade Desk’s new platform that recently went live; Kokai helps advertisers understand and score every ad impression, and allows advertisers to use an audience-first approach in campaigns
In particular, Kokai represents a completely new way to understand and score the relevance of every ad impression across all channels. It allows advertisers to use an audience-first approach to their campaigns, targeting their audiences wherever they are on the open Internet. Our AI optimizations, which are now distributed across the platform, help optimize every element of the ad purchase process. Kokai is now live, and similar to Next Wave and Solimar, it will scale over the next year.
Based on Trade Desk’s management’s interactions with customers, a part of Kokai that customers love is its use of AI to forecast the effects of advertisers’ decisions on their ad spending before the changes are made
A big part of what they love, to answer your question about what are they most excited about, is we have streamlined our reporting. We’ve made it way faster. There are some reports that you just have to wait multiple minutes for it because they’re just so robust, and we found ways to accelerate that. We’ve also added AI throughout the platform, especially in forecasting. So it’s a little bit like if you were to make a hypothetical trade in a trading platform for equity and then us tell you what we think is going to happen to the price action in the next 10 minutes. So we’re showing them what the effects of their changes are going to be before they even make them so that they don’t make mistakes. Because sometimes what happens is people put out a campaign. They’ll put tight restrictions on it. They’ll hope that it spends, then they come back a day or 2 or even 3 later and then realize they made it so difficult with their combination of targeting and pricing for us to buy anything that they didn’t spend much money. Or the opposite because they spent more and it wasn’t as effective as they wanted. So helping them see all of that before they do anything helped.
Trade Desk’s management believes that the company is reinforcing itself as the adtech AI leader; Trade Desk has been using AI in its platform since 2016
We are reinforcing our position as the adtech AI leader. We’ve been embedding AI into our platform since 2016, so it’s nothing new to us. But now it’s being distributed across our platform so our clients can make even better choices among the 15 million ad impression opportunities a second and understand which of those ads are most relevant to their audience segments at any given time.
Wix (NASDAQ: WIX)
Wix’s management added new AI features in 2023 to help users create content more easily; the key AI features introduced include a chat bot, code assistant, and text and image creators
This year, we meaningfully extended an already impressive toolkit of AI capabilities to include new AI-powered features that will help Wix users create visual and written web content more easily, optimize design and content layout, write code and manage their website and businesses more efficiently. The key AI products introduced in the last year include an AI chat experience for businesses, responsive AI design, AI code assistant, AI Meta Tag Creators and AI text and image creators among several other AI design tools.
Wix’s management recently released an AI site generator that can create a full-blown, tailored, ready-to-publish website based on user prompts; management believes that Wix is the first to launch such an AI site generator; the site generator has received fantastic feedback so far, and is a good starting point for creating a new website, but it is only at Version 1
We also recently released our AI site generator and have heard fantastic feedback so far. I believe this will be the first AI tool on the market that creates a full-blown, tailored and ready-to-publish website integrated with relevant business application based on user prompt…
… So we released what I would call version 1. It’s a great way for people to start with the website, meaning that you come in and you say, I’m a Spa in New York City and I specialize in some specific things. And we’ll — and AI will interview you on the — what makes your business unique, where are you located? How many people? Tell us about those people and the staff members. And as a result, we generate a website for you that is — has all the great content, right? And the content will be text and images. The other thing that then will actually get you to this experience where you can choose how you want to have the design look like. And the AI will generate different designs for you. So you can tell why I like this thing, I want a variation on that, I don’t like the colors, please change the colors or I want colors that are more professionals or I want color that are blue and yellow. And there I will do it for you.
On the other hand, you can also say, well, I don’t really like the design, can you generate something very different or generate a small variation of that, in many ways, a bit similar to Midjourney, what Midjourney is doing with the images, we are doing with a full-blown website. The result of that is something that is probably 70% of the website that you need to have on average, right, sometime it’s 95%, but sometimes it’s less than that. So it gives you an amazing way to start your website and shorten the amount of work that you need to do by about 70% to 80%. I think it’s fantastic and very exciting.
Wix’s management is seeing that the majority of the company’s new users today have adopted at least one AI tool and this has been a positive for Wix’s business
In fact, the majority of new users today are using at least 1 AI tool on the web creation journey. This has resulted in reduced friction and enhanced the creation experience for our users as well as increased conversion and improve monetization.
Wix’s management expects AI to be a driver of Wix’s growth in 2024 and beyond
We expect our AI technology to be a significant driver of growth in 2024 and beyond…
…Third, as Avishai mentioned, uptake of the milestone AI initiatives of 2023 has been incredible, and we expect to see ramping conversion and monetization benefits from our entire AI toolkit for both self-creators and partners this year…
…But then again, also 2025 will be much better than 2024. I think that the first reason is definitely the launching new products. At the end of the day, we are a technology, a product company, and this is how we drive our growth, mostly from new features, some new products. And this is what we did in the past, and we will continue also to do in the future. So definitely, it’s coming from the partners business with launching Studio. It was a great launch for us. We see the traction in the market. We see the demand. We see how our agencies use it. I think, as you know, we mentioned a few times about the number of new accounts with more than 50% are new. I think that it’s — for us, it’s a great proxy to the fact that we are going to see much more that it would be significantly the major growth driver for us in the next few years. The second one is everything that we’ve done with AI, we see a tremendous results out of it, which we believe that we will continue into the next year. And as you know, as always, the third one is about trying to optimize our pricing strategy. And this is what we’ve done in the past, we’ll continue to do in the future. [indiscernible] both mentioned like a fourth reason, which is the overall demand that we see on a macro basis.
Wix’s management has been driving the company to use AI for internal processes; the internal AI tools include an open internal AI development platform that everyone at Wix can contribute to, and a generative AI platform that lets product teams in Wix build their own conversational assistants; the internal AI tools have also helped Wix to save costs and improve its gross margin
We also leverage AI to improve many of our internal processes at Wix, especially research and development velocity. This include an open internal AI deployment platform that allow for everyone at Wix to contribute to building AI-driven user features in tandem. We also have a Gen AI-based platform dedicated to conversational assistants, which allow any product team at Wix to develop their own assistant tailored to specific user needs without having to start from scratch. With this platforms, we are able to develop and release high-quality AI-based features and tools efficiently and at scale…
…We ended 2023 with a total gross margin of 68%, an improvement of nearly 500 basis points compared to 2022. Throughout the year, we benefited from improved efficiencies in housing and infrastructure costs and optimization of support cost, partially aided by integrating AI into our workflows. Creative Subscriptions gross margin expanded to 82% in 2023. And Business Solutions gross margin grew to 29% for the full year as we continue to benefit from improving margin and new [indiscernible].
Wix’s management believes that there can be double-digit growth for the company’s self-creators business in the long run, partly because of AI products
And we mentioned that for self-creators in the long run, we believe that it will be a double-digit growth just because of that because it has the most effect of the macro environment which already started to see that it’s improving. But then again, also the new product and AI is 1 of the examples how we can bring increased conversion and also increase the growth of self-creators.
Zoom Video Communications (NASDAQ: ZM)
Zoom’s management launched Zoom AI Companion, a generative AI assistant, five months ago, and it has been expanded to six Zoom products, all included at no extra cost to licensed users; Zoom AI Companion now has over 510,000 accounts enabled and has created 7.2 million meeting summaries
Zoom AI Companion, our generative AI assistant, empowers customers and employees with enhanced productivity, team effectiveness and skills. Since its launch only five months ago, we expanded AI Companion to six Zoom products, all included at no additional cost to licensed users…
…Zoom AI companion have grown tremendously in just 5 months with over 510,000 accounts enabled and 7.2 million meeting summaries created as of the close of FY ’24.
Zoom’s future roadmap for AI is guided by driving customer value
Our future roadmap for AI is 100% guided by driving customer value. We are hard at work developing new AI capabilities to help customers achieve their unique business objectives and we’ll have more to share in a month at Enterprise Connect.
Zoom’s Contact Center suite is a unified, AI-first solution that includes AI Companion; the suite is beginning to win head-to-head competition against legacy incumbents
Our expanding Contact Center suite is a unified, AI-first solution that offers tremendous value to companies of all sizes seeking to strengthen customer relationships and deliver better outcomes. The base product includes AI Companion and our newly launched tiered pricing allows customers to add specialized CX capabilities such as AI Expert Assist, workforce management, quality management, virtual agent, and omnichannel support. Boosted by its expanding features, our contact center suite is beginning to win in head-to-head competition with the legacy incumbents.
Zoom Revenue Accelerator gained recognition from Forrester as an AI-powered tool for sales teams
Zoom Revenue Accelerator was recognized as a “Strong Performer” in The Forrester Wave™ in its first year of being covered – an amazing testament to its value as a powerful AI-enabled tool driving value for sales teams.
A financial services company, Convera, was attracted to Zoom’s products because of AI Companion
Finally, let me thank Convera, the World’s FX payments leader. Zoom Phone was the foundation of their Zoom engagement and from there they adopted the wider Zoom One platform in less than two years. Seeing the benefits of the tight integration of our products underpinned by AI Companion, they recently began to deeply leverage Zoom Team Chat in order to streamline their pre, during and post meeting communication all within the Zoom Platform.
Zoom is monetising AI on many fronts
We are monetizing AI on many fronts. You look at our Zoom AI Companion, right? So first of all, for our existing customers, because they all like the value we created, right, to generate meeting summary, meeting [indiscernible] and so on and so forth, because of that, we really do not — because customers, they’re also trying to reduce the cost. That’s why we do not charge the customers for those features. However, a lot of areas we can monetize. Take our AI Companion, for example. Enterprise customers, how to lever enterprise customer directionally, source data and also to build a tailored — the Zoom AI Companion for those customers, sort of like a customized Zoom AI Companion, we can monetize. And also look at all the services. Maybe I’ll just take Contact Center, for example. We are offering Zoom Virtual Agent, that’s one we can monetize. And recently, we announced 3 tiers of Zoom Contact Center product. The last one is per agent per month, we charge $149. The reason why, there are a few features. One of the feature is Zoom Expert Assist, right? All those features are empowered by AI features.
Zoom’s AI-powered Virtual Agent was deployed internally, saving Zoom 400,000 agent hours per month and handling more than 90% of inbound inquiries; Zoom’s management believes that Zoom’s AI features help improve agent efficiency in companies’ contact centers
Zoom, we — internally, we deployed our Virtual Agent. Guess what? Every month, we saved 400,000 agent hours. And more than 90% inbound inquiries can be done by our Virtual Agent driven by the AI technology…
…If you look at our Zoom Meeting product, right, customer discovered that Zoom AI Companion to help you with the meeting summary. And after they discovered that feature and they would like to adopt that, right? Contact Center, exact same thing. And like Virtual Agent, Zoom Expert Assist, right, leverage those AI features. Manager kind of knows what’s going on in real time and also — and the agent while can have the AI, to get a real-time in order base and any update about these customers. All those AI features can dramatically improve the agent efficiency, right? That’s the reason why it’s kind of — will not take a much longer time for those agents to realize the value of the AI features because it’s kind of very easy to use. And I think that in terms of adoption rate, I feel like Contact Center AI adoption rate even probably faster than the other — the core features, so — core services.
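To put the 400,000 saved agent hours per month in perspective, here is a quick hedged calculation; the working-hours-per-agent figure is my assumption, not Zoom’s.

```python
# Rough full-time-equivalent (FTE) reading of "400,000 agent hours saved
# per month" from the quote above; 160 working hours/month is an assumption.
hours_saved_per_month = 400_000
working_hours_per_agent = 160  # assumed ~40 hours/week * 4 weeks

fte_equivalent = hours_saved_per_month / working_hours_per_agent
print(f"Roughly {fte_equivalent:,.0f} full-time agents' worth of work saved")  # ~2,500
```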
Zoom’s management is seeing that having AI features at no additional cost to customers helps the company to attract users to Zoom Team Chat
[Question] And for Eric, what’s causing customers to move over to the Zoom chat function and off your main competitor like Teams? Just further consolidation onto one platform? Or is it AI Companion playing a larger role here, especially as you guys are including it as opposed to $30, $35 a month?
[Answer] Customers, they see — using their chat solution, they want to use AI, right? I send you — James, I send you a message. I want to leverage AI, send a long message. However, if you use other solutions, sometimes, other solutions itself, even without AI, it’s not free, right? And in our case, not only do we have core functionalities, but also AI Companion built in also at no additional cost. I can use — for any users, customers, you already have a Meeting license, Zoom Team Chat already built in, right? All the core features, you can use the Zoom AI Companion in order to leverage AI — write a chat message and so on and so forth. It works so well at no additional cost. The total cost of ownership of the Zoom Team Chat is much better than any other team chat solutions.
Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Alphabet, Amazon, Apple, Datadog, Etsy, Fiverr, Mastercard, MercadoLibre, Meta Platforms, Microsoft, PayPal, Shopify, TSMC, Tesla, The Trade Desk, Wix, and Zoom. Holdings are subject to change at any time.