We’ve been sharing a list of our recent reads in our weekly emails for The Good Investors.
Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!
But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.
Here are the articles for the week ending 09 March 2025:
1. The Troubled Energy Transition – Daniel Yergin, Peter Orszag, and Atul Arya
The fundamental objective of the energy transition is to replace most of today’s energy system with a completely different system. Yet throughout history, no energy source, including traditional biomass of wood and waste, has declined globally in absolute terms over an extended period.
The first energy transition began in 1709, when a metalworker named Abraham Darby figured out that coal provided “a more effective means of iron production” than wood. And the ensuing “transition” took place over at least a century. Although the nineteenth century has been called “the century of coal,” the energy scholar Vaclav Smil has observed that coal did not overtake traditional biomass energy sources (such as wood and crop residues) until the beginning of the twentieth century. Oil, discovered in western Pennsylvania in 1859, would overtake coal as the world’s top energy source in the 1960s. Yet that did not mean that the absolute amount of coal used globally was falling—in 2024, it was three times what it had been in the 1960s.
The same pattern is playing out today. About 30 percent of the world’s population still depends on traditional biomass for cooking, and demand for hydrocarbons has yet to peak or even plateau. The portion of total energy usage represented by hydrocarbons has changed little since 1990, even with the massive growth in renewables. (In the same period, overall energy use has increased by 70 percent.) And the global population is expected to grow by approximately two billion in the coming decades, with much of that growth taking place in the global South. In Africa—a demographically young continent whose population has been projected to increase from 18 percent of the global population today to 25 percent by 2050—almost 600 million people live without electricity, and roughly one billion lack access to clean cooking fuel. Traditional biomass energy still fuels almost half the continent’s total energy consumption…
…Technological, policy, and geopolitical uncertainty makes it challenging to estimate the costs associated with achieving net zero by 2050. But one thing is certain: the costs will be substantial.
The most recent estimate comes from the Independent High-Level Expert Group on Climate Finance, whose numbers provided a framework for the COP29 meeting—the UN’s annual forum on climate change—in Azerbaijan. It projected that the investment requirement globally for climate action will be $6.3 to $6.7 trillion per year by 2030, rising to as much as $8 trillion by 2035. It further estimated that the global South countries will account for almost 45 percent of the average incremental investment needs from now to 2030, and they have already been falling behind in meeting their financing needs, especially in sub-Saharan Africa.
Based on such estimates, the magnitude of energy-transition costs would average about five percent a year of global GDP between now and 2050. If global South countries are largely exempted from these financial burdens, global North countries would have to spend roughly ten percent of annual GDP—for the United States, over three times the share of GDP represented by defense spending and roughly equal to what the U.S. government spends on Medicare, Medicaid, and Social Security combined…
…In other words, achieving net zero will also require an unprecedented reorganization of capital flows from the global North to the global South, which will necessitate substantial investments in renewable-energy infrastructure at a time when, according to the International Monetary Fund, 56 percent of low-income countries are “at high levels of debt distress.” While innovative financing mechanisms (such as debt-for-climate and debt-for-nature swaps) will help, low sovereign-debt ratings throughout the developing world present a major obstacle to outside investment and raise capital costs. As a result, the bulk of the financial burden will be borne by advanced economies. But even there, debt has risen considerably—average public debt today is over 100 percent of GDP, a level not seen since World War II and a major constraint on governments’ ability to finance the transition through public spending…
…At the moment, almost half the population of the developing world—three billion people—annually uses less electricity per capita than the average American refrigerator does. As energy use grows, “carbonizing” will precede “decarbonizing.” Natural gas is a readily available option, and it’s a better alternative to coal, as well as to traditional biomass fuels that produce harmful indoor air pollution. Although global oil demand seems slated to plateau in the early 2030s, natural gas consumption is expected to continue to increase well into the 2040s. Production of liquefied natural gas is on track to increase by 65 percent by 2040, meeting energy security needs in Europe, replacing coal in Asia, and driving economic growth in the global South…
…The clash of priorities between the North and the South is especially striking when it comes to carbon tariffs. Many global North governments have, as part of their efforts to reduce emissions, put up barriers preventing other countries from taking the same carbon-based economic development path that they took to achieve prosperity. The European Union has launched the first phase of its Carbon Border Adjustment Mechanism. The CBAM is intended to support European climate objectives globally by initially imposing import tariffs on products such as steel, cement, aluminum, and fertilizer based on the carbon emissions embedded in their production and then expanding to more imports. Critics in the global North have argued that such measures would be ineffective because of the enormous complexity of supply chains and the associated difficulty of tracking embedded carbon in imports. Critics in the global South see the CBAM as a barrier to their economic growth. Ajay Seth, India’s economic affairs secretary, has argued that CBAM would force higher costs on the Indian economy: “With income levels which are one-twentieth of the income levels in Europe, can we afford a higher price? No, we can’t.”…
…The International Energy Agency has projected that global demand for the minerals needed for “clean energy technologies” will quadruple by 2040. At the top of the list are such critical minerals as lithium, cobalt, nickel, and graphite, as well as copper. Between 2017 and 2023 alone, demand for lithium increased by 266 percent; demand for cobalt rose by 83 percent; and demand for nickel jumped by 46 percent. Between 2023 and 2035, S&P expects the demand for lithium to increase by another 286 percent; cobalt, by 96 percent; and nickel, by 91 percent. Electric vehicles require two and a half to three times more copper than an internal combustion engine car; battery storage, offshore and onshore wind systems, solar panels, and data centers all require significant amounts of copper. S&P’s analysis of future copper demand found that global copper supply will have to double by the middle of the 2030s to meet current policy ambitions for net-zero emissions by 2050. This is extremely unlikely, considering that, based on S&P data that tracked 127 mines that have come online globally since 2002, it takes more than 20 years to develop a major new mine; in the United States, it takes an average of 29 years…
…China already has a dominant position in mining and a predominant position in the processing of minerals into metals essential for renewable energy infrastructure. It accounts for over 60 percent of the world’s rare-earth mining production (compared with nine percent for the United States) and more than 90 percent of the processing and refining of rare earths. It produces 77 percent of the world’s graphite, processes 98 percent of it, and processes over 70 percent of the world’s lithium and cobalt and almost half the copper.
Beijing aims to extend this dominance to what it calls the “global new energy industrial chain,” with its commanding position in batteries, solar panels, and electric vehicles, as well as in deploying massive amounts of capital toward energy infrastructure in the developing world. With China’s huge scale and low costs, Beijing describes this effort as an extensive and integrated approach to developing and dominating the renewable energy sector. From 2000 to 2022, it issued $225 billion in loans for energy projects in 65 strategically significant nations, with about 75 percent of that directed toward coal, oil, and gas development. Between 2016 and 2022, China provided more energy project financing around the world than any major Western-backed multilateral development bank, including the World Bank…
…Electrification trends suggest that power demand in the United States will double between now and 2050. Electricity consumption is already outpacing recent demand forecasts. PJM, which manages electricity transmission from Illinois to New Jersey, almost doubled its growth projection between 2022 and 2023 and is warning of the danger of shortfalls in electricity before the end of the decade…
…Today’s energy transition is meant to be fundamentally distinct from every previous energy transition: it is meant to be transformative rather than additive. But so far it is “addition,” not replacement. The scale and variety of the challenges associated with the transition mean that it will not proceed as many expect or in a linear way: it will be multidimensional, proceeding at different rates with a different mix of technologies and different priorities in different regions. That reflects the complexities of the energy system at the foundation of today’s global economy. It also makes clear that the process will unfold over a long period and that continuing investment in conventional energy will be a necessary part of the energy transition.
2. An Interview with Benedict Evans About AI Unknowns – Ben Thompson and Benedict Evans
Well, you wrote about Deep Research a couple of weeks ago and you were pretty disappointed in the output. They used a smartphone report as the demo and it’s interesting, because the Deep Research case that convinced me was actually interview prep, and the key thing about it was, it was a lot of qualitative information that was helpful, and I wasn’t looking for quantitative information. Does that ring true of your experience?
BE: It does, yes. There’s a lot of different things one can say about this, and most of what I said was, it’s kind of interesting and puzzling rather than just, “It’s crap”. It’s very easy to say, “This is amazing and it changes the world, and it’s the fifth industrial revolution”, and it’s very easy to say, “This is all a bunch of crap and it’s the biggest waste of time and money since NFTs, please subscribe to my Substack”, and leave it at that. But what I struggle with is, it’s actually much more interesting and more complicated.
There’s a simple statement which is, “These things are good at things that don’t have wrong answers”. The quote someone used a couple of years ago was, “They tend to be good at things that computers are bad at and bad at things that computers are good at”, they’re very bad at precise specific information retrieval, which is what computers begin with. But on the other hand, you can ask them a question like, “What would be a good thing to take on a picnic?”, and that’s a question that a computer just couldn’t answer, that’s not a SQL query, and an LLM can answer that.
I think a lot of the product challenge and use case challenge around these things is trying to work out how you translate that into something that you’re actually trying to do. You gave the example of interview prep, which is — actually I don’t do interviews, but that would be something where, yeah, I can see that would be useful. The funny thing here is that OpenAI, I wasn’t testing it myself, I went and looked at OpenAI’s own product page, this is their test where they’re telling me this is what it’s for and this is what it’s good at, and it proceeds to show it doing precise information retrieval, which of course, it can’t do. So just for the people who haven’t looked into OpenAI’s product page, it suggests some use cases, and one of them is, “Make a table of a bunch of countries with smartphone adoption by operating system”, and also stuff like, “Who wants to learn languages”.
The wrong report for the wrong guy.
BE: Yeah. The problem is, as many people may know, I used to be a telecoms analyst, so I looked at this and thought, “Okay, well let me have a look”. Problem one is, it used Statista, which is basically an SEO spam house that aggregates other people’s data. Saying “Source: Statista” at best is kind of saying “Source: a Google Search”, they’re not actually telling me what the source is. And secondly, it used StatCounter, which tells you traffic. And I looked at this and I thought — I won’t monologue too long, I promise — I looked at this and I thought-
No, this is great. I completely agree with where you’re going.
BE: -there’s two questions here. The first is, is this model accurately working out what sources it should use? And then, is it getting the correct data from those sources?
And the question that OpenAI have posed is smartphone adoption. Well, what does that mean exactly? Are you asking me about unit sales? Are you asking about the install base? Are you asking me about usage? Are you asking me about outdoor usage? Because a use case that they propose actually was something like a translation app. Adoption isn’t any of those, it might be any of those, depending on context.
ChatGPT has come up with, number one, StatCounter, which is a metric of usage, not the install base and then, it’s come up with Statista, which is actually going to Kantar, which I think is doing install base, but I’m not sure, and those two come up with two different numbers, and the number that’s in ChatGPT, the Deep Research report, is a third number.
The thing that I thought about this was, you’ve asked it a probabilistic question, not a deterministic question. You’ve asked it a “What should I take on a picnic?”-type question, you haven’t asked it a precise database-y question where a computer would know the answer. You’ve asked it to work out what you want, and then you’ve asked it to work out where to get it from, and then you’ve asked it to do the database retrieval and actually report what’s on those pages. It’s kind of done okay at the first two, or it’s done what I would expect an intern to do on the first two, and as I wrote, if I had an intern, I would’ve said, “This is why you wouldn’t use either of those two”.
Yeah.
BE: But an intern wouldn’t know, and then it’s copied the number down wrong, which is where you smack the intern on the back of the head and tell them to go back and do it again. There’s a lot of different ways you can talk about this. Are you using these things for precise information retrieval? Are you using them for things that don’t really have wrong answers? Are you using them for qual or for quant? Are you using them for brainstorming? How do you work out what your problem is and whether it would be good at that, and how it would map against that?
But at the end of the day, I went and asked it to do a thing and it told me it had done the thing, and it’s wrong. In fact, it’s worse than that. OpenAI asked it to do the thing and it did the thing, and it was wrong.
And then put it up as a demo!
BE: There’s this really profoundly important thing here, which I had this feeling looking at the new Claude model today as well, or yesterday, is people talk about these models getting better a lot, but if you’ve given me a table with 20 entries and some of them are wrong, what do you mean when you say the model’s better? Do you mean that all of the entries are now right? Or do you mean that the entries are more likely to be right? Those are very different things. The idea that we would get these models to the point that they always would be right and you would know that in a way that you would know a database would be right, we have no idea if that’s possible. That’s a profound scientific question in the field. What do you do with something that can do amazing things but you don’t know if it’s right?…
…You made this point, this tool’s the most helpful if you are already an expert because it saves you time and you can identify the errors. But if you’re not an expert, it’s incredibly dangerous. I wrote about this a couple of weeks ago, I asked for this report on an industry I happen to know a lot about, and Deep Research completely missed a big player because it was a private company, there were no listings about it even though anyone in the field would have known about it. You now have an unknown known, there’s something that you think you know, but actually, you don’t know. You’ve been convinced to be more ignorant.
BE: Yeah, listening to you talk, I’m reminded, actually, of one of these very old fallacies from engineers, which is to say, “The users have to learn how it works”, which is the thing you see with open source over and over again or, “The users will have to learn”. Of course you can say that, but the users won’t learn and it’s not the user’s job to learn how it works. The more that you force people to have to understand how it works, the more you limit the potential adoption of it.
Zooming back a little bit, something that I have in draft at the moment is that, if we think about where we’ve come in the last whatever it is, two and a bit years since GPT 3.5 launched, at the beginning of 2023, say, you could think there was a cluster of questions that would determine what was going to happen. How much will these scale? How big will the models get? How expensive will it be? What will happen to the error rates? What will reasoning be? Are there barriers to entry? Are there winner-takes-all effects? Is there enough data? You can make a list of a dozen questions, they all kind of link together.
Out of that there’s one possible output which is there’s one computer that runs the whole world, and the other extreme is it ends up being like databases or indeed like machine learning in which if you were to say today, “How many databases are there?”, that’s just a meaningless question. What are you talking about?
Since then, none of those questions have really been answered, except for the extent that it seems clear right now that these things are going to be commodities, although still quite expensive commodities. But anyone who’s got a billion dollars can have one, you don’t need a hundred billion dollars, and there’s not a lot of winner-takes-all effect the way there was with smartphones. Anyone who’s got $500 million or $100 million, or something, pick a number, can have an outlet, can have a frontier model. But all the other questions, we don’t know. We don’t know what will happen to the error rate. We don’t know how big the models will get or how long the scaling works.
One of the things that kind of came out of that was, there’s a path that says, you can just go to ChatGPT and say, “I want to move to Taiwan, how do I do that?”, “I need to file my taxes in New York, London, and California, do it for me”. And the model can go and read the right websites and ask you for a photograph of your bank statement and just make you the PDF and do it for you…
…BE: And suddenly, there’s this explosion of complexity in the data centers, and we have to know about it. You have to know chips, and there are all these papers. But I did this deck, I do this annual presentation, and I ended my presentation, the section that talked about scaling, with a quote from the Godfather that says, “If history teaches you anything, it’s that you can kill anybody”, and I crossed out, “Kill anybody” and said, “Commodity computing tends to get cheap”. You’ve got all of this complexity in the creation of the models and the data center and everything else, and yet, I don’t know, I look at Grok and I think, okay, in less than two years, you managed to produce a state-of-the-art model. Is that really, really good or really, really bad?
That’s bearish for model creation.
BE: That is not as positive as you think. Yes, they’re a great team, and well done for building a 100,000 GPU cluster, but what this tells us is it’s a commodity…
…BE: There’s no difference, that’s what that means. But yeah, I think you can extend this to the doomers, where it was clear that the people who were thinking this stuff is going to take over the world in the next three months just had no conception of how the world worked outside of their shared group house in the Berkeley Hills. The puzzle and the analogy I always used to give, looking at, going back to talking about use cases, is imagine the first people seeing VisiCalc, the first spreadsheet in the late ’70s.
Yep.
BE: So if you saw this and you were an accountant, it blew your mind because the thing is, you change the interest rate here and all the other numbers on the spreadsheet change.
Yep. [John] Gruber and I were just talking about this the other day, and you could watch it change!
BE: You say that now and people now are like, “Yes…?”, but back then you did spreadsheets on paper with a pencil and so if you’re an accountant, you have to have this. Certainly, you can look up the pricing of the Apple II that you needed to run VisiCalc, the full setup with a floppy drive and a printer and a monitor was like 15 grand adjusted for inflation. But if you were a lawyer and you see it, you think, “Well, that’s great, my accountant should see this, but that’s not what I do all day”.
Yep.
BE: Now, show me a word processor that can do word counts and footnotes and line numbers. That, I will pay for, that solves a problem that I have. And the challenge of the text box and the blinking cursor is either you really know you’ve got a problem that it solves, which is coding and marketing, or you’re the kind of person that’s instinctively looking for tools to solve things in their company, which is the bottom-up IT adoption and it’s no-code and everything else, but that’s a very small portion of the population.
And then it’s everybody else who didn’t see that they had that problem until an enterprise SaaS company came and showed them that they were spending three hours a week on it and sold them something for 10 grand a seat to fix it. Otherwise, you’ve got this prompt. What do I do with it?
I completely agree with you and this is where one of the analogies I’ve been thinking about is going back to the first — arguably the greatest direct job displacement in IT was actually the first mainframes, where it’s just like, “Okay, we don’t need an entire backroom of bookkeepers, we don’t need an entire backroom of ERP trackers, we can just do it on a computer”, and it was like a one-to-one replacement. What’s interesting about this is right now, AI is a bottoms-up phenomenon because you need so much agency to go out and find a way to use this, and because the model doesn’t learn, you have to learn how to use the model. It’s like people wanting to bring PCs to work in the late ’70s, and it’s like, “What are you trying to do here?”.
BE: And if you look at what these people were doing, all the books and magazines at the time were, “You should learn to code. Or at a minimum, you need to buy a database software program”. So it wasn’t you buy Quicken, it’s you buy a database software program and you make your own Quicken by yourself.
That’s such a great point, it’s the same thing it was with code. That was the thing at the beginning and you can do something yourself, it’s an excellent point. But it does make me think that if you’re a top-down decision maker, you can, number one, decide that the cost-benefit is worth actually removing an entire category or entire department. And number two, you can tolerate the error rate because you’ll do a cost-benefit analysis that says, “Okay, I’m at X percent error rate, this is going to cost me Y amount of money. How does that balance versus this collection of humans who are also going to make X amount of errors and cost me Y amount of money?” And you go in and you just shift people out and that is actually going to be a more meaningful change or where it’s going to start. It’s going to be less about getting people to fundamentally change how they work and more about, “You know what? We can do it good enough with AI for this whole category of work. Sorry, you guys are laid off”.
BE: Yeah, I think it depends. And if you go back and think about how the last waves of automated enterprise software worked, whether it’s SaaS or on-prem or whatever, there’s two or three things here. So one of them is you don’t just replace a whole department with one piece of software.
No, not now. I’m talking about back in the ’70s.
BE: Yeah. But the other part of that is that didn’t result in fewer white-collar workers or fewer accountants. Excel didn’t result in fewer accountants. I mean, my joke was always that, young people won’t believe this, but before Excel, investment bankers worked really long hours. Now they can get their work finished at lunchtime on Fridays and go home to The Hamptons. And of course, that’s not what happened.
Well, why is it that that isn’t what happened? I think, and it comes back to what I was saying earlier, the base case here is that you work in invoice processing and now you have a new piece of software that’s better at resolving failed invoice payments, and that is worth X-hundred thousand dollars a year to your company, and so they come out and they buy this piece of software. Or it’s a feature that gets added to their existing software or it plugs into SAP or Salesforce or Oracle or whatever it is, and it’s that piece of software and today, the typical big company has 400 or 500 SaaS applications, maybe more. And the HR team or accounts department has 45 different applications, there’s the German tax planning thing, and there’s the thing that manages stock options, and there’s the thing that manages training for new recruits, and the thing that makes sure that everybody’s taking their compliance exam, and the other thing that makes sure everyone’s done their compliance exam, and the training thing, and you just keep adding these all up. And they all find value and they all find budget and they all move costs from one place to another and the big company has 400 or 500 other new things.
The base case is that generative AI will be another 400 or 500 of these, and it will replace half of them, and it will double the value of the other half. 250 of them will get killed and 250 of them will get a bunch of new features and there’ll be another 250 of them on top, and now you’ll have 750 or 1,000 new applications in this company and there will be bits of LLM scattered all the way through them just as there’s bits of machine learning scattered all the way through them, just as there’s databases scattered all the way through them. It will just be kind of a building block in making software. Does that mean that there’s less net employment or does that just mean that a bunch of jobs go away and get replaced by new jobs?…
…BE: If you’re in the accounting industry, this changes the whole nature of your industry, so that’s one observation.
The second observation would be — yes, you don’t know what those changes will be, you don’t know what all the new things will be, you start by doing what you already know you need to do. And then over time, you realize there’s new things you can do with this, which is your point about feeds, and you could say the same thing about the emergence of Instagram and online dating, all the stuff that happened that wasn’t obvious at the time.
However, I think there’s a completely opposite point, which is equally interesting, about how new this is or how different this is, which is that if you’re looking at the Internet in 1995, you kind of knew how fast computers were and how fast they’d be in like five years, and you knew how many people had broadband, and you had a pretty good sense — you could plot a line on a chart of how many people are going to have broadband and how fast, on a three to five-year view and you kind of knew how much PCs cost, and you knew what annual PC sales were, and you could make a guess that, okay, this is going to mean that PC sales quadruple and everyone will go out and buy a PC, which is more or less what happened, and you knew how many middle-class households there were in America, so you kind of knew what the PC market could, in principle, do. The same thing with smartphones, you knew how fast 3G was, you knew how fast 4G was, you knew how fast the chips were.
What I was getting at is, with LLMs, we don’t know that. We don’t know what that roadmap is for the fundamental technical capabilities of the thing, which is different to anything from the web to flight or cars. We weren’t looking at a smartphone and saying, “Well, this is where we are today, but maybe in two or three years, it will be able to unroll and fill the whole wall, or maybe it’ll have a year’s battery life”. You kind of knew what the really basic core physics constraints were, and we don’t know that for this stuff.
Well, especially with this accuracy point. Daniel Gross made this point a few weeks ago too, I think it’s really profound, that there’s just a really stark fundamental difference between 100% accuracy and 99% accuracy.
BE: Well, this is a problem with saying “better” models. What do you mean “better”? Do you mean it was 82 and now it’s 83? Or do you mean it was 80 and now it’s 100 and it will always be 100? That’s a completely different thing.
3. Reality has a surprising amount of detail – John Salvatier
It’s tempting to think ‘So what?’ and dismiss these details as incidental or specific to stair carpentry. And they are specific to stair carpentry; that’s what makes them details. But the existence of a surprising number of meaningful details is not specific to stairs. Surprising detail is a near universal property of getting up close and personal with reality.
You can see this everywhere if you look. For example, you’ve probably had the experience of doing something for the first time, maybe growing vegetables or using a Haskell package for the first time, and being frustrated by how many annoying snags there were. Then you got more practice and then you told yourself ‘man, it was so simple all along, I don’t know why I had so much trouble’. We run into a fundamental property of the universe and mistake it for a personal failing.
If you’re a programmer, you might think that the fiddliness of programming is a special feature of programming, but really it’s that everything is fiddly, but you only notice the fiddliness when you’re new, and in programming you do new things more often.
You might think the fiddly detailiness of things is limited to human centric domains, and that physics itself is simple and elegant. That’s true in some sense – the physical laws themselves tend to be quite simple – but the manifestation of those laws is often complex and counterintuitive…
…The more difficult your mission, the more details there will be that are critical to understand for success.
You might hope that these surprising details are irrelevant to your mission, but not so. Some of them will end up being key. Wood’s tendency to warp means it’s more accurate to trace a cut than to calculate its length and angle. The possibility of superheating liquids means it’s important to use a packed bed when boiling liquids in industrial processes lest your process be highly inefficient and unpredictable. The massive difference in weight between a rocket full of fuel and an empty one means that a reusable rocket can’t hover if it can’t throttle down to a very small fraction of its original thrust, which in turn means it must plan its trajectory very precisely to achieve 0 velocity at exactly the moment it reaches the ground.
You might also hope that the important details will be obvious when you run into them, but not so. Such details aren’t automatically visible, even when you’re directly running up against them. Things can just seem messy and noisy instead. ‘Spirit’ thermometers, made using brandy and other liquors, were in common use in the early days of thermometry. They were even considered as a potential standard fluid for thermometers. It wasn’t until the careful work of Swiss physicist Jean-André De Luc in the 18th century that physicists realized that alcohol thermometers are highly nonlinear and highly variable depending on concentration, which is in turn hard to measure.
You’ve probably also had experiences where you were trying to do something and growing increasingly frustrated because it wasn’t working, and then finally, after some time you realize that your solution method can’t possibly work.
Another way to see that noticing the right details is hard is that different people end up noticing different details…
…Before you’ve noticed important details they are, of course, basically invisible. It’s hard to put your attention on them because you don’t even know what you’re looking for. But after you see them they quickly become so integrated into your intuitive models of the world that they become essentially transparent. Do you remember the insights that were crucial in learning to ride a bike or drive? How about the details and insights you have that led you to be good at the things you’re good at?
This means it’s really easy to get stuck. Stuck in your current way of seeing and thinking about things. Frames are made out of the details that seem important to you. The important details you haven’t noticed are invisible to you, and the details you have noticed seem completely obvious and you see right through them. This all makes it difficult to imagine how you could be missing something important…
…If you’re trying to do impossible things, this effect should chill you to your bones. It means you could be intellectually stuck right at this very moment, with the evidence right in front of your face and you just can’t see it.
This problem is not easy to fix, but it’s not impossible either. I’ve mostly fixed it for myself. The direction for improvement is clear: seek detail you would not normally notice about the world. When you go for a walk, notice the unexpected detail in a flower or what the seams in the road imply about how the road was built. When you talk to someone who is smart but just seems so wrong, figure out what details seem important to them and why. In your work, notice how that meeting actually wouldn’t have accomplished much if Sarah hadn’t pointed out that one thing. As you learn, notice which details actually change how you think.
If you wish to not get stuck, seek to perceive what you have not yet perceived.
4. America’s Growing Trade Deficit Is Selling the Nation Out From Under Us. Here’s a Way to Fix the Problem – And We Need to Do It Now – Warren Buffett
Take a fanciful trip with me to two isolated, side-by-side islands of equal size, Squanderville and Thriftville. Land is the only capital asset on these islands, and their communities are primitive, needing only food and producing only food. Working eight hours a day, in fact, each inhabitant can produce enough food to sustain himself or herself. And for a long time that’s how things go along…
…Eventually, though, the industrious citizens of Thriftville decide to do some serious saving and investing, and they start to work 16 hours a day. In this mode, they continue to live off the food they produce in the eight hours of work but begin exporting an equal amount to their one and only trading outlet, Squanderville.
The citizens of Squanderville are ecstatic about this turn of events, since they can now live their lives free from toil but eat as well as ever. Oh, yes, there’s a quid pro quo – but to the Squanders, it seems harmless: All that the Thrifts want in exchange for their food is Squanderbonds (which are denominated, naturally, in Squanderbucks).
Over time Thriftville accumulates an enormous amount of these bonds, which at their core represent claim checks on the future output of Squanderville. A few pundits in Squanderville smell trouble coming. They foresee that for the Squanders both to eat and pay off – or simply service – the debt they’re piling up will eventually require them to work more than eight hours a day…
…Meanwhile, the citizens of Thriftville begin to get nervous. Just how good, they ask, are the IOUs of a shiftless island? So the Thrifts change strategy: Though they continue to hold some bonds, they sell most of them to Squanderville residents for Squanderbucks and use the proceeds to buy Squanderville land. And eventually the Thrifts own all of Squanderville.
At that point, the Squanders are forced to deal with an ugly equation: They must now not only return to working eight hours a day in order to eat—they have nothing left to trade—but must also work additional hours to service their debt and pay Thriftville rent on the land so imprudently sold. In effect, Squanderville has been colonized by purchase rather than conquest.
It can be argued, of course, that the present value of the future production that Squanderville must forever ship to Thriftville only equates to the production Thriftville initially gave up and that therefore both have received a fair deal. But since one generation of Squanders gets the free ride and future generations pay in perpetuity for it, there are—in economist talk—some pretty dramatic “intergenerational inequities.”…
…Sooner or later the Squanderville government, facing ever greater payments to service debt, would decide to embrace highly inflationary policies—that is, issue more Squanderbucks to dilute the value of each. After all, the government would reason, those irritating Squanderbonds are simply claims on specific numbers of Squanderbucks, not on bucks of specific value. In short, making Squanderbucks less valuable would ease the island’s fiscal pain.
That prospect is why I, were I a resident of Thriftville, would opt for direct ownership of Squanderville land rather than bonds of the island’s government. Most governments find it much harder morally to seize foreign-owned property than they do to dilute the purchasing power of claim checks foreigners hold. Theft by stealth is preferred to theft by force…
…The time to halt this trading of assets for consumables is now, and I have a plan to suggest for getting it done. My remedy may sound gimmicky, and in truth it is a tariff called by another name. But this is a tariff that retains most free-market virtues, neither protecting specific industries nor punishing specific countries nor encouraging trade wars. This plan would increase our exports and might well lead to increased overall world trade. And it would balance our books without there being a significant decline in the value of the dollar, which I believe is otherwise almost certain to occur.
We would achieve this balance by issuing what I will call Import Certificates (ICs) to all U.S. exporters in an amount equal to the dollar value of their exports. Each exporter would, in turn, sell the ICs to parties—either exporters abroad or importers here—wanting to get goods into the U.S. To import $1 million of goods, for example, an importer would need ICs that were the byproduct of $1 million of exports. The inevitable result: trade balance.
Because our exports total about $80 billion a month, ICs would be issued in huge, equivalent quantities—that is, 80 billion certificates a month—and would surely trade in an exceptionally liquid market. Competition would then determine who among those parties wanting to sell to us would buy the certificates and how much they would pay. (I visualize that the certificates would be issued with a short life, possibly of six months, so that speculators would be discouraged from accumulating them.)
For illustrative purposes, let’s postulate that each IC would sell for 10 cents—that is, 10 cents per dollar of exports behind them. Other things being equal, this amount would mean a U.S. producer could realize 10% more by selling his goods in the export market than by selling them domestically, with the extra 10% coming from his sales of ICs.
In my opinion, many exporters would view this as a reduction in cost, one that would let them cut the prices of their products in international markets. Commodity-type products would particularly encourage this kind of behavior. If aluminum, for example, was selling for 66 cents per pound domestically and ICs were worth 10%, domestic aluminum producers could sell for about 60 cents per pound (plus transportation costs) in foreign markets and still earn normal margins. In this scenario, the output of the U.S. would become significantly more competitive and exports would expand. Along the way, the number of jobs would grow…
…To see what would happen to imports, let’s look at a car now entering the U.S. at a cost to the importer of $20,000. Under the new plan and the assumption that ICs sell for 10%, the importer’s cost would rise to $22,000. If demand for the car was exceptionally strong, the importer might manage to pass all of this on to the American consumer. In the usual case, however, competitive forces would take hold, requiring the foreign manufacturer to absorb some, if not all, of the $2,000 IC cost.
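The Import Certificate arithmetic in the passages above can be sketched in a few lines of code. This is only an illustration of the worked examples Buffett gives, assuming his hypothetical IC price of 10 cents per dollar of exports; the function names are our own.

```python
# Sketch of Buffett's Import Certificate (IC) arithmetic, assuming a
# hypothetical IC market price of 10 cents per dollar of exports.

IC_RATE = 0.10  # assumed price: 10 cents per $1 of export value


def exporter_effective_revenue(export_sales: float) -> float:
    """An exporter earns its sales plus the proceeds of selling the ICs
    it receives (one IC per dollar exported)."""
    return export_sales * (1 + IC_RATE)


def importer_landed_cost(import_value: float) -> float:
    """An importer must buy ICs covering the full dollar value of the
    goods it brings in, which raises its cost."""
    return import_value * (1 + IC_RATE)


# Aluminum example: quoting roughly $0.60/lb abroad still yields about
# the $0.66/lb domestic price once IC proceeds are included.
print(exporter_effective_revenue(0.60))  # about 0.66 per pound

# Car example: a $20,000 import now costs the importer about $22,000.
print(importer_landed_cost(20_000))
```

The same logic explains why the plan self-limits: if exports grow, more ICs are issued, their price falls toward zero, and both adjustments above shrink to nothing.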
There is no free lunch in the IC plan: It would have certain serious negative consequences for U.S. citizens. Prices of most imported products would increase, and so would the prices of certain competitive products manufactured domestically. The cost of the ICs, either in whole or in part, would therefore typically act as a tax on consumers.
That is a serious drawback. But there would be drawbacks also to the dollar continuing to lose value or to our increasing tariffs on specific products or instituting quotas on them—courses of action that in my opinion offer a smaller chance of success. Above all, the pain of higher prices on goods imported today dims beside the pain we will eventually suffer if we drift along and trade away ever larger portions of our country’s net worth.
I believe that ICs would produce, rather promptly, a U.S. trade equilibrium well above present export levels but below present import levels. The certificates would moderately aid all our industries in world competition, even as the free market determined which of them ultimately met the test of “comparative advantage.”
This plan would not be copied by nations that are net exporters, because their ICs would be valueless. Would major exporting countries retaliate in other ways? Would this start another Smoot-Hawley tariff war? Hardly. At the time of Smoot-Hawley we ran an unreasonable trade surplus that we wished to maintain. We now run a damaging deficit that the whole world knows we must correct…
…The likely outcome of an IC plan is that the exporting nations—after some initial posturing—will turn their ingenuity to encouraging imports from us. Take the position of China, which today sells us about $140 billion of goods and services annually while purchasing only $25 billion. Were ICs to exist, one course for China would be simply to fill the gap by buying 115 billion certificates annually. But it could alternatively reduce its need for ICs by cutting its exports to the U.S. or by increasing its purchases from us. This last choice would probably be the most palatable for China, and we should wish it to be so.
If our exports were to increase and the supply of ICs were therefore to be enlarged, their market price would be driven down. Indeed, if our exports expanded sufficiently, ICs would be rendered valueless and the entire plan made moot. Presented with the power to make this happen, important exporting countries might quickly eliminate the mechanisms they now use to inhibit exports from us.
5. The hidden cost of AI: Trading long-term resilience for short-term efficiency – Eric Markowitz
AI is the latest in a long lineage of efficiency-maximizing tools. It promises to make research instantaneous, strip away uncertainty, and optimize everything from hiring to investment analysis. But for all the gains in speed and precision, we rarely stop to ask: What are we losing in return? Because there is no such thing as a free lunch…
…AI makes the world feel more scientific than ever. It can generate business strategies, write persuasive emails, and surface patterns invisible to human analysis. But the most important decisions — the ones that lead to breakthroughs, revolutions, and paradigm shifts — are rarely the result of pure data analysis.
Some of the best ideas in history looked irrational at first. They required deep research, yes — but more importantly, they required taste. (And perhaps a bit of luck.)
Taste is an underrated concept in a world obsessed with efficiency. It’s the ability to recognize something valuable before the numbers prove it. The ability to see beyond spreadsheets and sentiment analysis and understand how an idea actually fits into the world. If everyone has access to the same AI-generated insights, the only thing that remains scarce is independent thinking. And that is precisely where the edge lies.
None of this is an argument against AI. It’s an argument for knowing what not to outsource to our robot overlords. AI is a tool. A powerful one. But it is not a substitute for intuition, nor a replacement for deep thinking. The institutions and ideas that endure the longest are those that understand what to hold onto even as the world around them changes.
Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google). Holdings are subject to change at any time.