We’ve been regularly sharing a list of our recent reads in our weekly emails for The Good Investors.
Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!
But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.
Here are the articles for the week ending 02 March 2025:
1. Satya Nadella – Microsoft’s AGI Plan & Quantum Breakthrough – Dwarkesh Patel and Satya Nadella
Dwarkesh Patel
Where is the value going to be created in AI?
Satya Nadella
That’s a great one. So I think there are two places where I can say with some confidence. One is the hyperscalers that do well, because the fundamental thing is if you sort of go back to even how Sam and others describe it, if intelligence is log of compute, whoever can do lots of compute is a big winner.
The other interesting thing is, if you look at underneath even any AI workload, like take ChatGPT, it’s not like everybody’s excited about what’s happening on the GPU side, it’s great. In fact, I think of my fleet even as a ratio of the AI accelerator to storage, to compute. And at scale, you’ve got to grow it…
…Satya Nadella
So in fact it’s manna from heaven to have these AI workloads because guess what? They’re more hungry for more compute, not just for training, but we now know, for test time. When you think of an AI agent, it turns out the AI agent is going to exponentially increase compute usage because you’re not even bound by just one human invoking a program. It’s one human invoking programs that invoke lots more programs. That’s going to create massive, massive demand and scale for compute infrastructure. So our hyperscale business, Azure business, and other hyperscalers, I think that’s a big thing.
Then after that, it becomes a little fuzzy. You could say, hey, there is a winner-take-all model- I just don’t see it. This, by the way, is the other thing I’ve learned: being very good at understanding what are winner-take-all markets and what are not winner-take-all markets is, in some sense, everything. I remember even in the early days when I was getting into Azure, Amazon had a very significant lead and people would come to me, and investors would come to me, and say, “Oh, it’s game over. You’ll never make it. Amazon, it’s winner-take-all.”
Having competed against Oracle and IBM in client-server, I knew that the buyers will not tolerate winner-take-all. Structurally, hyperscale will never be a winner-take-all because buyers are smart.
Consumer markets sometimes can be winner-take-all, but anything where the buyer is a corporation, an enterprise, an IT department, they will want multiple suppliers. And so you got to be one of the multiple suppliers.
That, I think, is what will happen even on the model side. There will be open-source. There will be a governor. Just like on Windows, one of the big lessons learned for me was, if you have a closed-source operating system, there will be a complement to it, which will be open source.
And so to some degree that’s a real check on what happens. I think in models there is one dimension of, maybe there will be a few closed source, but there will definitely be an open source alternative, and the open-source alternative will actually make sure that the closed-source, winner-take-all is mitigated.
That’s my feeling on the model side. And by the way, let’s not discount if this thing is really as powerful as people make it out to be, the state is not going to sit around and wait for private companies to go around and… all over the world. So, I don’t see it as a winner-take-all.
Then above that, I think it’s going to be the same old stuff, which is in consumer, in some categories, there may be some winner-take-all network effect. After all, ChatGPT is a great example.
It’s an at-scale consumer property that has already got real escape velocity. I go to the App Store, and I see it’s always there in the top five, and I say “wow, that’s pretty unbelievable”.
So they were able to use that early advantage and parlay that into an app advantage. In consumer, that could happen. In the enterprise again, I think there will be, by category, different winners. That’s sort of at least how I analyze it…
…Satya Nadella
The way I come at it, Dwarkesh, it’s a great question because at some level, if you’re going to have this explosion, abundance, whatever, commodity of intelligence available, the first thing we have to observe is GDP growth.
Before I get to what Microsoft’s revenue will look like, there’s only one governor in all of this. This is where we get a little bit ahead of ourselves with all this AGI hype. Remember the developed world, which is what? 2% growth and if you adjust for inflation it’s zero?
So in 2025, as we sit here, I’m not an economist, at least I look at it and say we have a real growth challenge. So, the first thing that we all have to do is, when we say this is like the Industrial Revolution, let’s have that Industrial Revolution type of growth.
That means to me, 10%, 7%, developed world, inflation-adjusted, growing at 5%. That’s the real marker. It can’t just be supply-side.
In fact that’s the thing, a lot of people are writing about it, and I’m glad they are, which is the big winners here are not going to be tech companies. The winners are going to be the broader industry that uses this commodity that, by the way, is abundant. Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we’ll be fine as an industry.
But that’s to me the moment. Us self-claiming some AGI milestone, that’s just nonsensical benchmark hacking to me. The real benchmark is: the world growing at 10%.
Dwarkesh Patel
Okay, so if the world grew at 10%, the world economy is $100 trillion or something, that’s like an extra $10 trillion in value produced every single year. If that is the case, you as a hyperscaler… It seems like $80 billion is a lot of money. Shouldn’t you be doing like $800 billion?
If you really think in a couple of years, we could be really growing the world economy at this rate, and the key bottleneck would be: do you have the compute necessary to deploy these AIs to do all this work?
Satya Nadella
That is correct. But by the way, the classic supply side is, “Hey, let me build it and they’ll come.” That’s an argument, and after all we’ve done that, we’ve taken enough risk to go do it.
But at some point, the supply and demand have to map. That’s why I’m tracking both sides of it. You can go off the rails completely when you are hyping yourself with the supply-side, versus really understanding how to translate that into real value to customers.
That’s why I look at my inference revenue. That’s one of the reasons why even the disclosure on the inference revenue… It’s interesting that not many people are talking about their real revenue, but to me, that is important as a governor for how you think about it.
You’re not going to say they have to symmetrically meet at any given point in time, but you need to have existence proof that you are able to parlay yesterday’s, let’s call it capital, into today’s demand, so that then you can again invest, maybe exponentially even, knowing that you’re not going to be completely rate mismatched.
Dwarkesh Patel
I wonder if there’s a contradiction in these two different viewpoints, because one of the things you’ve done wonderfully is make these early bets. You invested in OpenAI in 2019, even before there was Copilot and any applications.
If you look at the Industrial Revolution, these 6%, 10% build-outs of railways and whatever things, many of those were not like, “We’ve got revenue from the tickets, and now we’re going to…”
Satya Nadella
There was a lot of money lost.
Dwarkesh Patel
That’s true. So, if you really think there’s some potential here to 10x or 5x the growth rate of the world, and then you’re like, “Well, what is the revenue from GPT-4?”
If you really think that’s the possibility from the next level up, shouldn’t you just, “Let’s go crazy, let’s do the hundreds of billions of dollars of compute?” I mean, there’s some chance, right?
Satya Nadella
Here’s the interesting thing, right? That’s why even that balanced approach to the fleet, at least, is very important to me. It’s not about building compute. It’s about building compute that can actually help me not only train the next big model but also serve the next big model. Until you do those two things, you’re not going to be able to really be in a position to take advantage of even your investment.
So, that’s kind of where it’s not a race to just building a model, it’s a race to creating a commodity that is getting used in the world to drive… You have to have a complete thought, not just one thing that you’re thinking about.
And by the way, one of the things is that there will be overbuild. To your point about what happened in the dotcom era, the memo has gone out that, hey, you know, you need more energy, and you need more compute. Thank God for it. So, everybody’s going to race.
In fact, it’s not just companies deploying, countries are going to deploy capital, and there will be clearly… I’m so excited to be a leaser, because, by the way, I build a lot, I lease a lot. I am thrilled that I’m going to be leasing a lot of capacity in ’27, ’28 because I look at the builds, and I’m saying, “This is fantastic.” The only thing that’s going to happen with all the compute builds is the prices are going to come down…
…Satya Nadella
This has been another 30-year journey for us. It’s unbelievable. I’m the third CEO of Microsoft who’s been excited about quantum.
The fundamental breakthrough here, or the vision that we’ve always had, is that you need a physics breakthrough in order to build a utility-scale quantum computer that works. We took the path of saying that one way to get a less noisy, more reliable qubit is to bet on a physical property that by definition is more reliable, and that’s what led us to the Majorana zero modes, which were theorized in the 1930s. The question was, can we actually physically fabricate these things? Can we actually build them?
So the big breakthrough effectively, and I know you talked to Chetan, was that we now finally have existence proof and a physics breakthrough of Majorana zero modes in a new phase of matter effectively. This is why we like the analogy of thinking of this as the transistor moment of quantum computing, where we effectively have a new phase, which is the topological phase, which means we can even now reliably hide the quantum information, measure it, and we can fabricate it. And so now that we have it, we feel like with that core foundational fabrication technique out of the way, we can start building a Majorana chip.
That Majorana One which I think is going to basically be the first chip that will be capable of a million qubits, physical. And then on that, thousands of logical qubits, error-corrected. And then it’s game on. You suddenly have the ability to build a real utility-scale quantum computer, and that to me is now so much more feasible. Without something like this, you will still be able to achieve milestones, but you’ll never be able to build a utility-scale computer. That’s why we’re excited about it…
…Satya Nadella
It’s a great question. One thing that I’ve been excited about is, even in today’s world… we had this quantum program, and we added some APIs to it. The breakthrough we had maybe two years ago was to think of this HPC stack, and AI stack, and quantum together.
In fact, if you think about it, AI is like an emulator of the simulator. Quantum is like a simulator of nature. What is quantum going to do? By the way, quantum is not going to replace classical. Quantum is great at what quantum can do, and classical will also…
Quantum is going to be fantastic for anything that is not data-heavy but is exploration-heavy in terms of the state space. It should be data-light but exponential states that you want to explore. Simulation is a great one: chemical physics, what have you, biology.
One of the things that we’ve started doing is really using AI as the emulation engine. But you can then train. So the way I think of it is, if you have AI plus quantum, maybe you’ll use quantum to generate synthetic data that then gets used by AI to train better models that know how to model something like chemistry or physics or what have you. These two things will get used together.
So even today, that’s effectively what we’re doing with the combination of HPC and AI. I hope to replace some of the HPC pieces with quantum computers.
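A side note on Nadella’s earlier remark that “intelligence is log of compute”: if that framing holds, linear gains in capability require exponential growth in compute, which is exactly why he expects hyperscaler demand to keep compounding. Here’s a toy sketch of that relationship (the base-10 logarithm is an arbitrary illustrative assumption on our part, not anything Microsoft has published):

```python
import math

def toy_capability(compute: float) -> float:
    """Illustrative stand-in for "intelligence is log of compute".

    The base-10 log is an arbitrary modelling choice; only the shape
    of the curve matters here, not the units.
    """
    return math.log10(compute)

# Each 10x jump in compute buys the same fixed capability increment,
# so constant capability gains demand exponentially more compute.
gains = [toy_capability(10 ** (n + 1)) - toy_capability(10 ** n)
         for n in range(1, 5)]
print(gains)  # every increment is 1.0
```

The flat increments are the whole point: to keep capability improving at a steady clip, compute spend has to keep multiplying, not merely growing.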
2. Microsoft’s Majorana 1 chip carves new path for quantum computing – Catherine Bolgar
Microsoft today introduced Majorana 1, the world’s first quantum chip powered by a new Topological Core architecture that it expects will realize quantum computers capable of solving meaningful, industrial-scale problems in years, not decades.
It leverages the world’s first topoconductor, a breakthrough type of material which can observe and control Majorana particles to produce more reliable and scalable qubits, which are the building blocks for quantum computers.
In the same way that the invention of semiconductors made today’s smartphones, computers and electronics possible, topoconductors and the new type of chip they enable offer a path to developing quantum systems that can scale to a million qubits and are capable of tackling the most complex industrial and societal problems, Microsoft said…
…This new architecture used to develop the Majorana 1 processor offers a clear path to fit a million qubits on a single chip that can fit in the palm of one’s hand, Microsoft said. This is a needed threshold for quantum computers to deliver transformative, real-world solutions – such as breaking down microplastics into harmless byproducts or inventing self-healing materials for construction, manufacturing or healthcare. All the world’s current computers operating together can’t do what a one-million-qubit quantum computer will be able to do…
…The topoconductor, or topological superconductor, is a special category of material that can create an entirely new state of matter – not a solid, liquid or gas but a topological state. This is harnessed to produce a more stable qubit that is fast, small and can be digitally controlled, without the tradeoffs required by current alternatives…
…This breakthrough required developing an entirely new materials stack made of indium arsenide and aluminum, much of which Microsoft designed and fabricated atom by atom…
…Commercially important applications will also require trillions of operations on a million qubits, which would be prohibitive with current approaches that rely on fine-tuned analog control of each qubit. The Microsoft team’s new measurement approach enables qubits to be controlled digitally, redefining and vastly simplifying how quantum computing works.
This progress validates Microsoft’s choice years ago to pursue a topological qubit design – a high risk, high reward scientific and engineering challenge that is now paying off. Today, the company has placed eight topological qubits on a chip designed to scale to one million…
…But reaching the next horizon of quantum computing will require a quantum architecture that can provide a million qubits or more and reach trillions of fast and reliable operations. Today’s announcement puts that horizon within years, not decades, Microsoft said.
Because they can use quantum mechanics to mathematically map how nature behaves with incredible precision – from chemical reactions to molecular interactions and enzyme energies – million-qubit machines should be able to solve certain types of problems in chemistry, materials science and other industries that are impossible for today’s classical computers to accurately calculate…
…Most of all, quantum computing could allow engineers, scientists, companies and others to simply design things right the first time – which would be transformative for everything from healthcare to product development. The power of quantum computing, combined with AI tools, would allow someone to describe what kind of new material or molecule they want to create in plain language and get an answer that works straightaway – no guesswork or years of trial and error.
“Any company that makes anything could just design it perfectly the first time out. It would just give you the answer,” Troyer said. “The quantum computer teaches the AI the language of nature so the AI can just tell you the recipe for what you want to make.”…
…Qubits can be created in different ways, each with advantages and disadvantages. Nearly 20 years ago, Microsoft decided to pursue a unique approach: developing topological qubits, which it believed would offer more stable qubits requiring less error correction, thereby unlocking speed, size and controllability advantages. The approach posed a steep learning curve, requiring uncharted scientific and engineering breakthroughs, but also the most promising path to creating scalable and controllable qubits capable of doing commercially valuable work.
The disadvantage is – or was – that until recently the exotic particles Microsoft sought to use, called Majoranas, had never been seen or made. They don’t exist in nature and can only be coaxed into existence with magnetic fields and superconductors. The difficulty of developing the right materials to create the exotic particles and their associated topological state of matter is why most quantum efforts have focused on other kinds of qubits…
…Majoranas hide quantum information, making it more robust, but also harder to measure. The Microsoft team’s new measurement approach is so precise it can detect the difference between one billion and one billion and one electrons in a superconducting wire – which tells the computer what state the qubit is in and forms the basis for quantum computation.
The measurements can be turned on and off with voltage pulses, like flicking a light switch, rather than finetuning dials for each individual qubit. This simpler measurement approach that enables digital control simplifies the quantum computing process and the physical requirements to build a scalable machine…
…Majorana 1, Microsoft’s quantum chip that contains both qubits as well as surrounding control electronics, can be held in the palm of one’s hand and fits neatly into a quantum computer that can be easily deployed inside Azure datacenters.
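The arithmetic behind Nadella’s “a million qubits, physical… then thousands of logical qubits, error-corrected” line in the interview above is worth making explicit: error correction consumes many physical qubits per usable logical qubit. A back-of-the-envelope sketch, where the 500:1 overhead is our illustrative assumption rather than Microsoft’s published figure:

```python
# Rough check of the physical-to-logical qubit ratio implied above.
# The 500:1 overhead is an assumption for illustration; real overheads
# depend on the error-correcting code, the physical error rate, and
# the target logical error rate.
physical_qubits = 1_000_000
overhead_per_logical = 500  # assumed physical qubits per logical qubit

logical_qubits = physical_qubits // overhead_per_logical
print(logical_qubits)  # 2000 logical qubits from a million physical
```

At overheads anywhere in the hundreds-to-low-thousands range, a million physical qubits yields “thousands of logical qubits”, consistent with the claim, and it is also why the promised lower error rates of topological qubits matter: a smaller overhead ratio means more usable logical qubits from the same chip.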
3. The most underreported and important story in AI right now is that pure scaling has failed to produce AGI – Gary Marcus
On the order of half a trillion dollars has been invested on a premise that I have long argued was unlikely to succeed: the idea (sometimes informally referred to as the scaling hypothesis) that we could get to “artificial general intelligence” simply by adding more and more data and GPUs…
…Virtually all of the generative AI industry has been built on this presumption; projects like the OpenAI/Oracle/Softbank joint venture Stargate, allegedly another half trillion dollars, are also largely based on this premise…
…But I always knew it couldn’t last forever. When I said as much, the field was absolutely furious at me…
…The first signs that the pure scaling of data and compute might in fact be hitting a wall came from industry leaks from people like famed investor Marc Andreessen, who said in early November 2024 that current models are “sort of hitting the same ceiling on capabilities.” Then, in December, Microsoft CEO Satya Nadella echoed many of my 2022 themes, saying at a Microsoft Ignite event, “in the last multiple weeks there is a lot of debate on have we hit the wall with scaling laws. Is it gonna continue? Again, the thing to remember at the end of the day these are not physical laws. There are just empirical observations that hold true just like Moore’s Law did for a long period of time and so therefore it’s actually good to have some skepticism some debate.”…
…Finally, and perhaps most significantly: Elon Musk said over that weekend that Grok 3, with 15x the compute of Grok 2, and immense energy (and construction and chip) bills, would be “the smartest AI on the earth.” Yet the world quickly saw that Grok 3 is still afflicted by the kind of unreliability that has hobbled earlier models. The famous ML expert Andrej Karpathy reported that Grok 3 occasionally stumbles on basics like math and spelling. In my own experiments, I quickly found a wide array of errors, such as hallucinations (e.g., it told me with certainty that there was a significant magnitude-5.6 earthquake on Feb. 10 in Billings, Montana, when no such thing had happened) and extremely poor visual comprehension (e.g., it could not properly label the basic parts of a bicycle).
Nadella, in his December speech, pointed to test-time compute, in which systems are allowed extra time for “reasoning” as the next big thing, and to some degree he is right; it is the next big thing, a new thing to try to scale, since merely scaling compute and data is no longer bringing the massive returns it once did. At least for a while, adding more and more computing time will help, at least on some kinds of problems…
…although DeepSeek lowered the costs of training these new systems, they are still expensive to operate, which is why companies like OpenAI are limiting their usage. When customers begin to realize that even with the greater expenses, errors still seep in, they are likely to be disappointed. One irate customer cc’d me yesterday on a several-page demand for a refund, writing in part that “GPT-4o Pro [which includes access to test time compute] has consistently underperformed,” and enumerated problems such as “Severely degraded memory” and “Hallucinations and Unreliable Answers.”…
…the illustrious Stanford Natural Language Processing group reached a similar conclusion, reading between the lines of OpenAI’s recent announcement in the same way I did. In their words, Altman’s recent OpenAI roadmap was “the final admission that the 2023 strategy of OpenAI, Anthropic, etc. (‘simply scaling up model size, data, compute, and dollars spent will get us to AGI/ASI’) is no longer working!”
In short, half a trillion dollars have been invested in a misguided premise; a great deal more funding seems to be headed in the same direction for now.
4. Is Microsoft’s Copilot push the biggest bait-and-switch in AI? – Tien Tzuo
Over a year ago, Microsoft launched Copilot Pro, an AI assistant embedded in its Office suite, with a $20/month price. The uptake apparently was abysmal. By October, they admitted that the way they were selling Copilot was not working out.
So what did they do? They forced it on Microsoft users by including Copilot in Office, and hiking up subscription fees. Microsoft first made this change in Asia, then fully pulled the plug across the globe last month, impacting 84 million subscribers. To add insult to injury, Microsoft renamed the product to Microsoft 365 Copilot. You didn’t want to pay for Copilot? Well, now you are…
…Not only is Microsoft’s Copilot rollout deceptive, it’s also embarrassingly disastrous.
This goes to show that even tech giants, including a major backer of AI pioneer OpenAI, can fall prey to the hype and competitive pressure surrounding AI. And it’s a stark reminder that what businesses should really be focused on instead is value — communicating it clearly and delivering it tangibly to customers…
…Well, there’s a good contrast to Microsoft’s approach — from Adobe.
Adobe took a different approach with its AI rollout last fall, resisting the temptation to immediately monetize its new video generator, instead using it to boost adoption and engagement. By positioning AI as a value-add rather than a paid extra, Adobe was playing the long game, building a loyal user base that would be ripe for future upselling once they experienced AI’s benefits for themselves.
5. Broken Markets!? – The Brooklyn Investor
So, I keep hearing that markets are broken, or that the market is as expensive as ever. I know I keep saying this and I am sounding like a broken record, but I am not so sure…
…But if you look at individual stocks, the market is clearly differentiating between them. Look at Nvidia vs. Intel. If nobody was really evaluating them and the market was ignoring fundamentals, you would think both stocks would be performing similarly. But they are clearly not. People are clearly differentiating between winners and losers. Whether they are over-discounting their views is a separate question, but the differentiation itself runs contrary to the view that passive investing is killing fundamental analysis.
Another example: JPM, the better bank, is trading at 2.4x book, versus C, which is a crappier one, selling at 0.8x book. You can’t complain that the market is broken just because you don’t agree with it. On the other hand, Buffett in the 50s loved the fact that institutional investors of the time completely ignored company analysis / valuation…
…Look at all the rich folks at the Berkshire Hathaway annual meeting. Look at BRK stock since 1970. How often did it look ‘overpriced’? What about the market? What if people sold out when they thought BRK was overpriced? Or the market? Would they be as rich as they are now? Probably not. Maybe there are some smart folks that got in and out of BRK over the years, but I would bet that the richest of them are the ones that just held it and did nothing.
I keep telling people this, but if you look at all the richest people in the world, a lot of them are people who held a single asset, and held it for decades. Look at Bill Gates. What if he was hip to value investing and knew more about stocks and values? He may have sold out of MSFT when it looked really overvalued. What about Buffett? What about Bezos?
A lot of the rich used to be real estate moguls, and I thought a lot of them were wealthy because real estate was not very liquid, so they had no choice but to hold on even in bad times. Stocks have what you may call the “curse of liquidity”. It’s so easy to say, holy sh*t, something bad is going to happen, and *click*, you can get out of the market. Back in the 90s, we used to fear a “1-800 crash”: people would call their brokers’ automated trade execution lines, 1-800-Sell-Everything, and go “Get me out of the market!!! Sell everything NOW!!!”, and the market would not be able to open the next morning. Some of us truly feared that, and hedge funds talked about that sort of thing all the time. But you can’t do that with your house.
Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Amazon and Microsoft. Holdings are subject to change at any time.