We’ve been regularly sharing a list of our recent reads in our weekly emails for The Good Investors.
Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!
But since the readership of The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.
Here are the articles for the week ending 23 July 2023:
[00:48:03] William Green: Some of what you were just saying gets to this whole question of how to design a life that suits ourselves. And I thought about this a lot after, I guess it was 2008, 2009, and I’d been [Inaudible] by Time and then I went to work at another company for a while and I hated it. And I was working with my friend Guy Spier on his autobiography, his memoir. He’s a hedge fund manager and I was helping him write that, and part of what he had done was he had moved to Zurich, having been caught up in this kind of vortex of selling and greed and all of that, in competition in the hedge fund world in New York, and he really rebooted his entire life by moving to a slightly bland but very pleasant suburb of Zurich. And this really got me thinking a lot about how to design a life, and then when I moved from London back to New York, I really thought very carefully about, “Well, so I’m going to live in a more modest home than I lived in in London, but I’m not going to be surrounded by people with their Maseratis and their Ferraris and stuff.” Because I was living in Belgravia in London on Time Magazine’s dime, and once that was no longer available to me, I really had to think about how to structure a life. And it feels to me like part of the thing that got you to think about how to structure your own life was this seminal event that happened back, I guess, in about 1990, right? Where there was a fire that burned your family home in Santa Barbara to the ground, and I wanted to talk about that in some depth because I think it gets at a lot of these issues that we want to discuss about how to construct a life that’s truly valuable, that’s truly abundant. But if you could start by just telling us what actually happened and how this became a really defining, formative event in the way you view your life.
[00:49:47] Pico Iyer: Well, again and again, William, you’ve asked exactly the question that’s been coming up in my mind. It’s as if we’re absolutely working in sync or telepathically. And just before I address that, two things: designing a life is such a beautiful phrase and it reminds me, we put so much attention into how we’ll furnish a house and how we’ll make a house, which we need to do, but even more essential is how we’ll furnish and make our lives. And when Guy Spier hosted you on his first podcast, it was one of the most lovely, humane conversations I’ve ever heard. I learned so much about investing from it.
[00:50:19] William Green: Thank you.
[00:50:20] Pico Iyer: I learned even more about friendship and generosity, so to anyone who’s listening who hasn’t heard you be a guest on his podcast –
[00:50:28] William Green: Ah, well, it’s kind of you to listen because I know how little interest you must have in the world of investing, so I take that as a great honor that you listened. Thank you.
[00:50:37] Pico Iyer: I don’t have a huge interest in the world of investment, but I have a huge interest in the world of investors because they’re wise people.
[00:50:42] William Green: Yeah.
[00:50:43] Pico Iyer: They figured out how to live not just in a monetary sense, but they’ve got to where they are not by chance and not by foolishness, and I think they have a lot to offer, and that’s what your book is about, so yeah. In terms of the fire, I was sitting in my family house in the hills of California, and I saw this distant knife of orange cutting through a hillside, so I went downstairs to call the fire department. And then when I came upstairs again, five minutes later, literally our house was encircled by 70-foot flames, five stories high on all sides. So I grabbed my mother’s cat, jumped into a car to try to escape, and then I was stuck on the mountain road for three hours underneath our house, saved only by a good Samaritan who had driven up with a water truck to be of assistance, and then found himself stuck and saved us all by pointing a little hose of water at every roar of fire that approached us. It was the worst fire in California history at the time, and it had broken out just up the road from us. So of course, it was a shock. We lost every last thing in the world. In my case, all my handwritten notes for my next eight years of writing, probably my next three books. In my parents’ case, all the photos and mementos, our keepsakes from 60 years.
[00:51:54] Pico Iyer: But the interesting thing, looking back on it, was that months later, after adjusting to circumstances, when the insurance company came along and said, “Well, we have some money and you can replace your goods,” of course, that really did make me understand I didn’t need 90% of the books and clothes and furniture I’d accumulated. I could live much more lightly, which is really the way I’d always wanted to live. I called up my editor in New York – or in London actually at the time, and I said, “All those books I was promising you, I can’t offer them to you because all my notes have gone,” and because he’s a kind man, he commiserated for a while, but because he’s a wise man, he said, “Actually, not having notes may liberate you to write much more deeply from your heart and from your memory, from imagination.” And then lacking a physical home in California, I suddenly began to think, “Well, maybe I should spend more time in the place that really feels like my true home,” which is Japan, and now I’m pretty much here all the time. And so in so many ways, that seeming catastrophe opened doors and windows that might otherwise have been closed for a long time, perhaps forever.
[00:52:59] Pico Iyer: And I was thinking about it a lot during the pandemic because the pandemic was closing so many doors in so many lives, but at the same time, it was opening little windows of possibility, at least for me, that otherwise I might never have glimpsed, and moving me to live in better ways than I had been beforehand. I suppose the one other interesting thing about the fire, especially given our connection, is that as soon as – I was stuck there for three hours and the smoke was so intense that no firetruck could come up and make contact with me, and I could hear helicopters above, but they couldn’t see me and I couldn’t see them. Finally, after three hours, a fire truck came up and told me it was safe to drive down. So I drove down through what looked like scenes I associated with the Vietnam War: houses exploding all over the place, cars smoldering, fires on every side of me. I went downtown and I bought a toothbrush, which was the only thing I had in the world at that point.
[00:53:53] Pico Iyer: And then I went to sleep on a friend’s floor, but before I went to sleep, because my job then was partly working for Time Magazine, I asked my friend if I could use his computer, and I filed a report. So three hours after escaping the fire, I filed a report on this major news event for which I had a front-seat view. And I ended my little piece with a poem that I picked up in Japan, because I had begun spending time there – a 17th-century haiku, which just said, “My house burnt down. I can now see better the rising moon.” So the very night when I lost everything in the world, something in me, probably wiser than I am, realized not everything was lost. Certain things would be gained, and actually, the main thing I would gain was a sense of priorities. So, literally that night, I thought about that poem, “I lost everything. I can now really see what’s important.”
[00:54:46] William Green: Yeah, I read that article yesterday. It was beautiful and still incredibly vivid, and it was striking to me that in probably all six of the books of yours that I’ve read in recent weeks, you mentioned the fire. You come back to it again and again. It’s such a profound formative episode for you. One thing you wrote in Autumn Light, you said, “As I climbed all the way up to our house the day after everything in our lives was reduced to rubble, I saw that everything that could be replaced – furniture, clothes, books – was by definition worthless. The only things that mattered were the things that were gone forever.” And I think that’s such an interesting question, this whole issue of what you discover has value after it’s gone. And this is something we talked about in Vancouver, where you led a fascinating session where you asked people various questions, one of which was, “If you had, I think, 10 minutes to save anything from your home, what would you save?” And I wonder if you could talk a bit more about that sense of what has value and what doesn’t. What does have value? When you had a very near escape a few years later after you rebuilt the house, what did you take out, for example?
[00:56:03] Pico Iyer: The only way I live differently since the fire than before – and this is a bit embarrassing – is that I keep all my notes in a safety deposit box in the bank, because they’re still handwritten and they seem to me the one indispensable thing, not because I make my living by being a writer, but more because I feel that’s my life. My life is contained in these otherwise illegible scrolls. Other people – I think my mother might have kept her photographs as well as her jewelry in the bank, which makes absolute sense to me. So again, I don’t think there’s a right answer, but I think it’s a really useful question to ask, which is why I shared it with that little circle at TED, and just again, that sense that we know things intuitively, but unless we actually stop to ask ourselves that, we get caught up in the rush and then life catches us by surprise.
[00:56:49] Pico Iyer: Because it always will. You’ve read my books more closely than anyone I can imagine, and I’m so touched because that’s the ultimate compliment and act of generosity. And you’re the first person who’s noticed that they all keep on coming back to that fire, which is partly a metaphor for a world on fire, where a lot of our certainties are being burnt up, but also a way of saying that whoever you are, you’re going to face some of these challenges in life. It could be a typhoon or a flood or an earthquake, or it could just be a car coming at high speed towards you on the wrong side of the road, or a bad diagnosis, but one way or another, and maybe this is my age speaking a little, I think it’s a useful exercise to think if suddenly I only had a little time, what would I want to do with it? Or if suddenly my life were upended, what is it that I would cherish? I can’t really answer your question so much as applaud it and say maybe I feel that’s the question we should all be asking ourselves…
…[01:18:49] William Green: I loved this story. I think it was in Autumn Light, where you talked about all of these very rich donors rolling up in their fancy suits and their expensive silk dresses, and they show him [referring to the Dalai Lama] this wonderful, elaborate architectural model of this beautiful Buddhist center with treasure rooms and meditation halls that they’re going to build.
[01:19:08] William Green: And the way you described it, I think he slaps the thigh of this monk who’s sitting beside him and he says, no, no need. This is your treasure, and I thought that was really beautiful. There’s a sense of humanity to him and a sense of pragmatism where it’s like, don’t spend all the money.
[01:19:24] William Green: He’s like, just be kinder to people. Help people. And you said also, I think there was another lovely story in one of the books where he said these very rich people would come to him and ask for a blessing and he’d say, you are the only one who can give yourself a blessing.
[01:19:38] William Green: You have money, freedom, opportunity to do some good for someone else. Why? Ask me for what’s in your hands?
[01:19:46] Pico Iyer: Yes, and then I think he said, start a school, or give money to a hospital. Do something very concrete that’s going to help you and everybody else much more. So I really feel that, like monks in every tradition, he’s pretty much given his whole life to the subject of your podcast.
[01:20:00] Pico Iyer: What is richness? What is wisdom, and what is happiness? And again, the other thing that I’ve sometimes witnessed is when he’ll show up in Los Angeles, traditionally, he’d be surrounded by billionaires and movie stars and movers and shakers, and people would often say, it must be so hard to live amidst the poverty of India.
[01:20:17] Pico Iyer: He’d look across this room where many people are on their fifth marriages and going to see a therapist every day in their pain, and he’d say, well, there’s poverty, and there’s poverty, and of course the material poverty of India is really serious and one wants to do everything one can to help it.
[01:20:31] Pico Iyer: That’s what he did, in fact, partly with his Nobel Prize money. But there’s an inner poverty that is just as debilitating, and you guys have, in the terms of the world, done everything that could be expected and much more, and you’re still suffering terribly, so that’s the poverty that you really need to address.
[01:20:48] William Green: I think there was another message that came through very powerfully from your books about the fact that if we live in this extremely uncertain world where anything can happen, basically, one of the things you point out is there’s an urgency that comes from that. If nothing lasts forever, you’ve got to relish the moment in the knowledge that it may not come again.
[01:21:10] William Green: Can you talk about that? Because that seems to me just a hugely important if obvious insight. Like most great insights, they’re obvious but you’ve got to internalize them somehow.
[01:21:23] Pico Iyer: Yeah, and I think that’s the main thing I’ve got from the pandemic. I realized I’m living with much more decisiveness and clarity, because I know time isn’t infinite – and I always knew it.
[01:21:33] Pico Iyer: As you say, we’ve been told it a thousand times and we grew up studying it at school and being reminded of it by the tolling bells in Kyoto, but I think it really came home to us during the pandemic. I was living with my 88-year-old mother and it was a great blessing. I could spend a lot of time with her.
[01:21:47] Pico Iyer: She died in the course of the pandemic, unrelated to Covid, which was just another reminder that, as you say – I think the central line in my most recent book is that the fact that nothing lasts is the reason that everything matters, because we can’t take anything for granted. Let’s make the most of this moment, just as you said so perfectly, William.
[01:22:06] Pico Iyer: I don’t know what’s going to happen this afternoon. All I know is I’ve got this chance to talk to you and I never have that chance otherwise, so let me make the most of it and bring all of myself to it. And I think, to go back to the Dalai Lama and so much that we’ve been talking about, and really where we began the conversation, none of this means ignoring the material stuff of the world.
[01:22:26] Pico Iyer: I think unless you’ve got that in place, it’s very hard to have the luxury of thinking about other things. Nobody is counseling poverty, where if you are in a desperate state, you can’t think of anything other than relieving your immediate circumstances. I have a friend who’s been a very serious Zen practitioner for many years, and actually a very accomplished and successful guy these days because of his writing.
[01:22:48] Pico Iyer: And he told me that at one point in his life when he was young, he decided to live on $8,000 a year, as simply as he could, and I think he probably managed that until somebody, maybe a wise Buddhist teacher, told him that trying to live on $8,000 a year is as crazy as trying to make $8 billion a year.
[01:23:09] Pico Iyer: The Buddha himself, Thomas Merton, everybody has seen the silliness of extremes, and twisting your life into a bonsai in order to live with almost nothing is as crazy as turning yourself into a madman to try to get everything. It’s a matter of balance, and I think that’s why, as you said – I mean, really, we began by talking about my leaving Time magazine, but as I said earlier on, I couldn’t have left it if I hadn’t got there.
[01:23:34] Pico Iyer: And I couldn’t have seen through it. As you said about investors, they have to earn millions to realize, oh, actually maybe that’s not enough. I had to exhaust my boyhood ambitions to realize they were insufficient, and that actually it’s something more that I need to fulfill me entirely, which is why if this podcast were called just wisdom and happiness, I’d be a bit skeptical about it, because I would think, well, that’s wonderful stuff up in the air and abstract.
[01:24:02] Pico Iyer: But most of us are living in the world, and so the fact that we begin with the richness part is what gives legitimacy, I think, to the other two parts, because all of us in our lives have to take care of those fundamentals, as you said probably an hour ago, as a way of addressing the other things…
…[01:39:40] Pico Iyer: But as you say, I think just in the most commonplace ways, there are mysteries everywhere, and thank heavens for that. I remember when my mother turned 80, we threw a party for her and one of her friends said, oh, Pico, why don’t you interview your mother? And I rolled my eyes and thought, oh, what a terrible idea.
[01:39:56] Pico Iyer: But her friend was eager to do this, so I said, okay, I will. So I asked my mother a few questions and I think the last question was, well, now you’re 80 years old, what’s the main thing that you’ve learned? And she said that you can never know another person, and I love that, A, because it was the last thing I expected my mother ever to say.
[01:40:13] Pico Iyer: I never knew if she believed that, and so by saying it, she actually bore it out. I didn’t know my own mother. I was really taken aback by that answer, and also, I was haunted by her answer, because she was saying maybe her husband, my father, was as much a mystery to her as to me, and maybe she was saying that
[01:40:31] Pico Iyer: I’m a mystery to her. But whatever she meant by it, it was a wonderful answer. I’m so glad I asked it, and maybe when you and I are both 80, if we’re lucky enough to attain that, we’ll have even more of this sense of how little we know about the people who are closest to us, and as you said, about circumstances, which is wonderful.
[01:40:50] Pico Iyer: I’m so glad to be freed of that sense I had as a kid that I knew exactly how my life was going and that I would plan it. The day before that fire burnt down my house, as you can tell, I had my next eight years mapped out. I knew exactly which books I was going to write, I’d accumulated all my notes, and suddenly life had a different plan for me.
[01:41:10] Pico Iyer: And I can’t say it’s a worse plan than the one I would’ve come up with.
But, I do have an issue with valuation models in general. Because, today, basically all the valuation metrics tell the same story—U.S. stocks are overvalued, therefore, we should expect a major crash as these metrics return to their long-term historical averages. Whether you use Hussman’s measure, the Buffett indicator, or Shiller’s CAPE (cyclically-adjusted price-to-earnings) ratio, the logic is always the same.
But, there’s a huge problem with this logic—there is nothing that says that these metrics have to return to their long-term averages. In fact, I believe the opposite. Valuation multiples are likely to stay above their historical norms for the foreseeable future. Why?
Because investing is much simpler today than it used to be. With the rise of cheap diversification over the last half century, investors today are willing to accept lower future returns (i.e. higher valuations) than their predecessors. This has fundamentally changed valuation metrics and made historical comparisons less useful than they once were. This might seem far-fetched, but let me explain.
Imagine it’s 1940 and you want to build your wealth by owning a diversified portfolio of U.S. stocks. How would you do it?
You first might try to go the mutual fund route to have a professional pick stocks for you. Though the first mutual fund was created in 1924 and the industry grew in the 1930s, many of these early funds had high load fees. These were fees that you had to pay anytime you bought (or sold) shares in the fund. Load fees varied, but could be up to 9% of your total investment… If you wanted to avoid such fees, your next best option would have been to create a diversified portfolio by hand. Unfortunately, this would have meant doing research to figure out which stocks would do well over time and which ones wouldn’t. This task would have been even more difficult and time consuming than it is today given the lack of access to information.
More importantly, you would be picking stocks during a time when it wasn’t obvious that owning stocks was the right thing to do. After all, it’s 1940 and America just came out of the worst economic crisis in its history. Are you sure that stocks aren’t just a gamble? We can answer this question with a definitive “no” today because we have historical evidence that shows otherwise. But this historical evidence wouldn’t have been readily available in 1940.
This is what I call the privilege of knowledge, or the idea that we are able to make certain decisions today that our ancestors couldn’t make because we have more information than they had. For example, it’s easy to say “Just Keep Buying” in 2023 because we have so much data to back it up. But, in 1940 this wasn’t true…
…Investing today is far simpler and cheaper than it was nearly a century ago. This raises a question: how much annual return would you be willing to give up in 1940 to have all the investment innovations that we have today? I bet it’s at least a few percentage points. And, if this is true across investors in general, then we would expect stock prices to be bid up accordingly over time.
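As an aside from us: here is a minimal sketch of why accepting a lower return mechanically means paying a higher valuation multiple, using the Gordon growth model, P = D / (r - g). The payout ratio and growth rate are assumptions chosen purely for illustration, not figures from the article.

```python
# A toy sketch (ours, not the author's) of the Gordon growth model,
# which links the price investors pay to the return r they demand.

def fair_pe(required_return: float, payout_ratio: float = 0.5,
            growth: float = 0.02) -> float:
    """Fair price-to-earnings multiple implied by P = D / (r - g)."""
    return payout_ratio / (required_return - growth)

# An investor demanding 7% a year implies roughly a 10x P/E...
print(f"P/E at 7% required return: {fair_pe(0.07):.1f}")  # 10.0
# ...while one content with 5% implies roughly a 16.7x P/E.
print(f"P/E at 5% required return: {fair_pe(0.05):.1f}")  # 16.7
```

In this toy model, giving up two percentage points of annual return lifts the fair multiple by about two-thirds, directionally the same shift as the pre- and post-2000 averages discussed below.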
And this is exactly what we’ve seen over the past few decades. If you look at Shiller’s P/E (price-to-earnings) ratio going back to 1920, you can see that this ratio has been mostly increasing over the last century.
In fact, before 2000, the average Shiller P/E ratio was 15.5 and since then it has been around 27. This is evidence that investors are willing to bid up prices (and, thus, accept lower returns than their predecessors). Even in March 2009, when things looked the bleakest during The Great Recession, the P/E ratio only briefly dipped below its pre-2000 average (~15) before immediately shooting back upward…
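For anyone who wants to verify those averages themselves, a short sketch like the one below would do it. It assumes you’ve saved Shiller’s publicly available CAPE series to a local CSV; the file name and column layout here are hypothetical stand-ins.

```python
import pandas as pd

# Assumes a local CSV with columns "date" (e.g. "1995-01-01") and "cape",
# derived from Robert Shiller's publicly available dataset. The file name
# and column layout are hypothetical.
df = pd.read_csv("shiller_cape.csv", parse_dates=["date"])

pre_2000 = df[df["date"] < "2000-01-01"]["cape"].mean()
post_2000 = df[df["date"] >= "2000-01-01"]["cape"].mean()
print(f"Average CAPE before 2000: {pre_2000:.1f}")  # the article cites ~15.5
print(f"Average CAPE since 2000:  {post_2000:.1f}")  # the article cites ~27
```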
…Nevertheless, this simple valuation model has the same flaws that all the others do—it assumes that underlying conditions are the same in every time period. It assumes that a P/E ratio of 15 in 1940 is identical to a P/E ratio of 15 in 2009. But, as I’ve just demonstrated, they aren’t. Yes, they may look the same, but they definitely don’t feel the same.
A few months after graduating from college in Nairobi, a 30-year-old I’ll call Joe got a job as an annotator — the tedious work of processing the raw information used to train artificial intelligence. AI learns by finding patterns in enormous quantities of data, but first that data has to be sorted and tagged by people, a vast workforce mostly hidden behind the machines. In Joe’s case, he was labeling footage for self-driving cars — identifying every vehicle, pedestrian, cyclist, anything a driver needs to be aware of — frame by frame and from every possible camera angle. It’s difficult and repetitive work. A several-second blip of footage took eight hours to annotate, for which Joe was paid about $10.
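As an aside from us: to make concrete what one unit of this labeling work produces, here is a rough sketch of a per-frame annotation record. The schema is our own illustration, not any vendor’s actual format.

```python
from dataclasses import dataclass, field

@dataclass
class BoundingBox:
    """One labeled object in one video frame (illustrative schema only)."""
    label: str    # e.g. "pedestrian", "cyclist", "vehicle"
    x: float      # top-left corner, in pixels
    y: float
    width: float
    height: float

@dataclass
class FrameAnnotation:
    """Everything an annotator has boxed in a single frame."""
    frame_index: int
    camera: str   # e.g. "front-left"
    boxes: list[BoundingBox] = field(default_factory=list)

# A few seconds of multi-camera footage at 30 frames per second means
# hundreds of records like this, each needing every relevant object boxed.
frame = FrameAnnotation(
    frame_index=1042,
    camera="front-left",
    boxes=[BoundingBox("pedestrian", 311.0, 185.5, 42.0, 120.0)],
)
```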
Then, in 2019, an opportunity arose: Joe could make four times as much running an annotation boot camp for a new company that was hungry for labelers. Every two weeks, 50 new recruits would file into an office building in Nairobi to begin their apprenticeships. There seemed to be limitless demand for the work. They would be asked to categorize clothing seen in mirror selfies, look through the eyes of robot vacuum cleaners to determine which rooms they were in, and draw squares around lidar scans of motorcycles. Over half of Joe’s students usually dropped out before the boot camp was finished. “Some people don’t know how to stay in one place for long,” he explained with gracious understatement. Also, he acknowledged, “it is very boring.”…
…The current AI boom — the convincingly human-sounding chatbots, the artwork that can be generated from simple prompts, and the multibillion-dollar valuations of the companies behind these technologies — began with an unprecedented feat of tedious and repetitive labor.
In 2007, the AI researcher Fei-Fei Li, then a professor at Princeton, suspected the key to improving image-recognition neural networks, a method of machine learning that had been languishing for years, was training on more data — millions of labeled images rather than tens of thousands. The problem was that it would take decades and millions of dollars for her team of undergrads to label that many photos.
Li found thousands of workers on Mechanical Turk, Amazon’s crowdsourcing platform where people around the world complete small tasks for cheap. The resulting annotated dataset, called ImageNet, enabled breakthroughs in machine learning that revitalized the field and ushered in a decade of progress.
Annotation remains a foundational part of making AI, but there is often a sense among engineers that it’s a passing, inconvenient prerequisite to the more glamorous work of building models. You collect as much labeled data as you can get as cheaply as possible to train your model, and if it works, at least in theory, you no longer need the annotators. But annotation is never really finished. Machine-learning systems are what researchers call “brittle,” prone to fail when encountering something that isn’t well represented in their training data. These failures, called “edge cases,” can have serious consequences. In 2018, an Uber self-driving test car killed a woman because, though it was programmed to avoid cyclists and pedestrians, it didn’t know what to make of someone walking a bike across the street. The more AI systems are put out into the world to dispense legal advice and medical help, the more edge cases they will encounter and the more humans will be needed to sort them. Already, this has given rise to a global industry staffed by people like Joe who use their uniquely human faculties to help the machines.
Over the past six months, I spoke with more than two dozen annotators from around the world, and while many of them were training cutting-edge chatbots, just as many were doing the mundane manual labor required to keep AI running. There are people classifying the emotional content of TikTok videos, new variants of email spam, and the precise sexual provocativeness of online ads. Others are looking at credit-card transactions and figuring out what sort of purchase they relate to or checking e-commerce recommendations and deciding whether that shirt is really something you might like after buying that other shirt. Humans are correcting customer-service chatbots, listening to Alexa requests, and categorizing the emotions of people on video calls. They are labeling food so that smart refrigerators don’t get confused by new packaging, checking automated security cameras before sounding alarms, and identifying corn for baffled autonomous tractors.
“There’s an entire supply chain,” said Sonam Jindal, the program and research lead of the nonprofit Partnership on AI. “The general perception in the industry is that this work isn’t a critical part of development and isn’t going to be needed for long. All the excitement is around building artificial intelligence, and once we build that, it won’t be needed anymore, so why think about it? But it’s infrastructure for AI. Human intelligence is the basis of artificial intelligence, and we need to be valuing these as real jobs in the AI economy that are going to be here for a while.”
The data vendors behind familiar names like OpenAI, Google, and Microsoft come in different forms. There are private outsourcing companies with call-center-like offices, such as the Kenya- and Nepal-based CloudFactory, where Joe annotated for $1.20 an hour before switching to Remotasks. There are also “crowdworking” sites like Mechanical Turk and Clickworker where anyone can sign up to perform tasks. In the middle are services like Scale AI. Anyone can sign up, but everyone has to pass qualification exams and training courses and undergo performance monitoring. Annotation is big business. Scale, founded in 2016 by then-19-year-old Alexandr Wang, was valued in 2021 at $7.3 billion, making him what Forbes called “the youngest self-made billionaire,” though the magazine noted in a recent profile that his stake has fallen on secondary markets since then.
This tangled supply chain is deliberately hard to map. According to people in the industry, the companies buying the data demand strict confidentiality. (This is the reason Scale cited to explain why Remotasks has a different name.) Annotation reveals too much about the systems being developed, and the huge number of workers required makes leaks difficult to prevent. Annotators are warned repeatedly not to tell anyone about their jobs, not even their friends and co-workers, but corporate aliases, project code names, and, crucially, the extreme division of labor ensure they don’t have enough information about them to talk even if they wanted to. (Most workers requested pseudonyms for fear of being booted from the platforms.) Consequently, there are no granular estimates of the number of people who work in annotation, but it is a lot, and it is growing. A recent Google Research paper gave an order-of-magnitude figure of “millions” with the potential to become “billions.”
Automation often unfolds in unexpected ways. Erik Duhaime, CEO of medical-data-annotation company Centaur Labs, recalled how, several years ago, prominent machine-learning engineers were predicting AI would make the job of radiologist obsolete. When that didn’t happen, conventional wisdom shifted to radiologists using AI as a tool. Neither of those is quite what he sees occurring. AI is very good at specific tasks, Duhaime said, and that leads work to be broken up and distributed across a system of specialized algorithms and to equally specialized humans. An AI system might be capable of spotting cancer, he said, giving a hypothetical example, but only in a certain type of imagery from a certain type of machine; so now, you need a human to check that the AI is being fed the right type of data and maybe another human who checks its work before passing it to another AI that writes a report, which goes to another human, and so on. “AI doesn’t replace work,” he said. “But it does change how work is organized.”…
…Worries about AI-driven disruption are often countered with the argument that AI automates tasks, not jobs, and that these tasks will be the dull ones, leaving people to pursue more fulfilling and human work. But just as likely, the rise of AI will look like past labor-saving technologies, maybe like the telephone or typewriter, which vanquished the drudgery of message delivering and handwriting but generated so much new correspondence, commerce, and paperwork that new offices staffed by new types of workers — clerks, accountants, typists — were required to manage it. When AI comes for your job, you may not lose it, but it might become more alien, more isolating, more tedious…
…The act of simplifying reality for a machine results in a great deal of complexity for the human. Instruction writers must come up with rules that will get humans to categorize the world with perfect consistency. To do so, they often create categories no human would use. A human asked to tag all the shirts in a photo probably wouldn’t tag the reflection of a shirt in a mirror because they would know it is a reflection and not real. But to the AI, which has no understanding of the world, it’s all just pixels and the two are perfectly identical. Fed a dataset with some shirts labeled and other (reflected) shirts unlabeled, the model won’t work. So the engineer goes back to the vendor with an update: DO label reflections of shirts. Soon, you have a 43-page guide descending into red all-caps.
“When you start off, the rules are relatively simple,” said a former Scale employee who requested anonymity because of an NDA. “Then they get back a thousand images and then they’re like, Wait a second, and then you have multiple engineers and they start to argue with each other. It’s very much a human thing.”
The job of the annotator often involves putting human understanding aside and following instructions very, very literally — to think, as one annotator said, like a robot. It’s a strange mental space to inhabit, doing your best to follow nonsensical but rigorous rules, like taking a standardized test while on hallucinogens. Annotators invariably end up confronted with confounding questions like, Is that a red shirt with white stripes or a white shirt with red stripes? Is a wicker bowl a “decorative bowl” if it’s full of apples? What color is leopard print? When instructors said to label traffic-control directors, did they also mean to label traffic-control directors eating lunch on the sidewalk? Every question must be answered, and a wrong guess could get you banned and booted to a new, totally different task with its own baffling rules…
…According to workers I spoke with and job listings, U.S.-based Remotasks annotators generally earn between $10 and $25 per hour, though some subject-matter experts can make more. By the beginning of this year, pay for the Kenyan annotators I spoke with had dropped to between $1 and $3 per hour.
That is, when they were making any money at all. The most common complaint about Remotasks work is its variability; it’s steady enough to be a full-time job for long stretches but too unpredictable to rely on. Annotators spend hours reading instructions and completing unpaid trainings only to do a dozen tasks and then have the project end. There might be nothing new for days, then, without warning, a totally different task appears and could last anywhere from a few hours to weeks. Any task could be their last, and they never know when the next one will come.
This boom-and-bust cycle results from the cadence of AI development, according to engineers and data vendors. Training a large model requires an enormous amount of annotation followed by more iterative updates, and engineers want it all as fast as possible so they can hit their target launch date. There may be monthslong demand for thousands of annotators, then for only a few hundred, then for a dozen specialists of a certain type, and then thousands again. “The question is, Who bears the cost for these fluctuations?” said Jindal of Partnership on AI. “Because right now, it’s the workers.”…
…A woman I’ll call Anna was searching for a job in Texas when she stumbled across a generic listing for online work and applied. It was Remotasks, and after passing an introductory exam, she was brought into a Slack room of 1,500 people who were training a project code-named Dolphin, which she later discovered to be Google DeepMind’s chatbot, Sparrow, one of the many bots competing with ChatGPT. Her job is to talk with it all day. At about $14 an hour, plus bonuses for high productivity, “it definitely beats getting paid $10 an hour at the local Dollar General store,” she said.
Also, she enjoys it. She has discussed science-fiction novels, mathematical paradoxes, children’s riddles, and TV shows. Sometimes the bot’s responses make her laugh; other times, she runs out of things to talk about. “Some days, my brain is just like, I literally have no idea what on earth to ask it now,” she said. “So I have a little notebook, and I’ve written about two pages of things — I just Google interesting topics — so I think I’ll be good for seven hours today, but that’s not always the case.”
Each time Anna prompts Sparrow, it delivers two responses and she picks the best one, thereby creating something called “human-feedback data.” When ChatGPT debuted late last year, its impressively natural-seeming conversational style was credited to its having been trained on troves of internet data. But the language that fuels ChatGPT and its competitors is filtered through several rounds of human annotation. One group of contractors writes examples of how the engineers want the bot to behave, creating questions followed by correct answers, descriptions of computer programs followed by functional code, and requests for tips on committing crimes followed by polite refusals. After the model is trained on these examples, yet more contractors are brought in to prompt it and rank its responses. This is what Anna is doing with Sparrow. Exactly which criteria the raters are told to use varies — honesty, or helpfulness, or just personal preference. The point is that they are creating data on human taste, and once there’s enough of it, engineers can train a second model to mimic their preferences at scale, automating the ranking process and training their AI to act in ways humans approve of. The result is a remarkably human-seeming bot that mostly declines harmful requests and explains its AI nature with seeming self-awareness.
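As a rough illustration from us of what one unit of this “human-feedback data” might look like (the field names are our own invention, not any lab’s actual schema):

```python
from dataclasses import dataclass

@dataclass
class PreferencePair:
    """One unit of human-feedback data: a prompt, two model responses,
    and the annotator's choice. Field names are illustrative only."""
    prompt: str
    response_a: str
    response_b: str
    preferred: str   # "a" or "b"
    rationale: str = ""  # optional free-text explanation the rater writes

pair = PreferencePair(
    prompt="Explain why the sky is blue.",
    response_a="Sunlight scatters off air molecules, and shorter blue wavelengths scatter most...",
    response_b="Because blue is the sky's favorite color.",
    preferred="a",
    rationale="Response A is accurate; B presents a joke as fact.",
)
```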
Put another way, ChatGPT seems so human because it was trained by an AI that was mimicking humans who were rating an AI that was mimicking humans who were pretending to be a better version of an AI that was trained on human writing.
This circuitous technique is called “reinforcement learning from human feedback,” or RLHF, and it’s so effective that it’s worth pausing to fully register what it doesn’t do. When annotators teach a model to be accurate, for example, the model isn’t learning to check answers against logic or external sources or about what accuracy as a concept even is. The model is still a text-prediction machine mimicking patterns in human writing, but now its training corpus has been supplemented with bespoke examples, and the model has been weighted to favor them. Maybe this results in the model extracting patterns from the part of its linguistic map labeled as accurate and producing text that happens to align with the truth, but it can also result in it mimicking the confident style and expert jargon of the accurate text while writing things that are totally wrong. There is no guarantee that the text the labelers marked as accurate is in fact accurate, and when it is, there is no guarantee that the model learns the right patterns from it.
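For the curious, the “second model” mentioned above is typically a reward model trained on those preference pairs with a pairwise, Bradley-Terry-style objective. Below is a minimal, self-contained sketch from us, with a toy linear scorer standing in for the real neural network.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def score(features: list[float], weights: list[float]) -> float:
    """Toy reward model: a linear scorer standing in for a neural net."""
    return sum(f * w for f, w in zip(features, weights))

def pairwise_loss(chosen: list[float], rejected: list[float],
                  weights: list[float]) -> float:
    """Bradley-Terry-style loss: small when the chosen response
    outscores the rejected one, large otherwise."""
    margin = score(chosen, weights) - score(rejected, weights)
    return -math.log(sigmoid(margin))

# Feature vectors and weights here are made up purely for illustration.
weights = [0.1, -0.2, 0.3]
print(pairwise_loss([1.0, 0.0, 1.0], [0.0, 1.0, 0.0], weights))  # ~0.44
```

Minimizing this loss over many labeled pairs nudges the scorer toward the annotators’ preferences; the trained scorer can then rank responses at scale, which is the automation step described above.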
This dynamic makes chatbot annotation a delicate process. It has to be rigorous and consistent because sloppy feedback, like marking material that merely sounds correct as accurate, risks training models to be even more convincing bullshitters. An early OpenAI and DeepMind joint project using RLHF, in this case to train a virtual robot hand to grab an item, resulted in also training the robot to position its hand between the object and its raters and wiggle around such that it only appeared to its human overseers to grab the item. Ranking a language model’s responses is always going to be somewhat subjective because it’s language. A text of any length will have multiple elements that could be right or wrong or, taken together, misleading. OpenAI researchers ran into this obstacle in another early RLHF paper. Trying to get their model to summarize text, the researchers found they agreed only 60 percent of the time that a summary was good. “Unlike many tasks in [machine learning] our queries do not have unambiguous ground truth,” they lamented.
When Anna rates Sparrow’s responses, she’s supposed to be looking at their accuracy, helpfulness, and harmlessness while also checking that the model isn’t giving medical or financial advice or anthropomorphizing itself or running afoul of other criteria. To be useful training data, the model’s responses have to be quantifiably ranked against one another: Is a bot that helpfully tells you how to make a bomb “better” than a bot that’s so harmless it refuses to answer any questions? In one DeepMind paper, when Sparrow’s makers took a turn annotating, four researchers wound up debating whether their bot had assumed the gender of a user who asked it for relationship advice. According to Geoffrey Irving, one of DeepMind’s research scientists, the company’s researchers hold weekly annotation meetings in which they rerate data themselves and discuss ambiguous cases, consulting with ethical or subject-matter experts when a case is particularly tricky.
Anna often finds herself having to choose between two bad options. “Even if they’re both absolutely, ridiculously wrong, you still have to figure out which one is better and then write words explaining why,” she said. Sometimes, when both responses are bad, she’s encouraged to write a better response herself, which she does about half the time…
…The new models are so impressive they’ve inspired another round of predictions that annotation is about to be automated. Given the costs involved, there is significant financial pressure to do so. Anthropic, Meta, and other companies have recently made strides in using AI to drastically reduce the amount of human annotation needed to guide models, and other developers have started using GPT-4 to generate training data. However, a recent paper found that GPT-4-trained models may be learning to mimic GPT’s authoritative style with even less accuracy, and so far, when improvements in AI have made one form of annotation obsolete, demand for other, more sophisticated types of labeling has gone up. This debate spilled into the open earlier this year, when Scale’s CEO, Wang, tweeted that he predicted AI labs will soon be spending as many billions of dollars on human data as they do on computing power; OpenAI’s CEO, Sam Altman, responded that data needs will decrease as AI improves.
Chen [Edwin Chen, CEO of the data-labeling company Surge AI] is skeptical AI will reach a point where human feedback is no longer needed, but he does see annotation becoming more difficult as models improve. Like many researchers, he believes the path forward will involve AI systems helping humans oversee other AI. Surge recently collaborated with Anthropic on a proof of concept, having human labelers answer questions about a lengthy text with the help of an unreliable AI assistant, on the theory that the humans would have to feel out the weaknesses of their AI assistant and collaborate to reason their way to the correct answer. Another possibility has two AIs debating each other and a human rendering the final verdict on which is correct. “We still have yet to see really good practical implementations of this stuff, but it’s starting to become necessary because it’s getting really hard for labelers to keep up with the models,” said OpenAI research scientist John Schulman in a recent talk at Berkeley.
“I think you always need a human to monitor what AIs are doing just because they are this kind of alien entity,” Chen said. Machine-learning systems are just too strange ever to fully trust. The most impressive models today have what, to a human, seems like bizarre weaknesses, he added, pointing out that though GPT-4 can generate complex and convincing prose, it can’t pick out which words are adjectives: “Either that or models get so good that they’re better than humans at all things, in which case, you reach your utopia and who cares?”…
…Another Kenyan annotator said that after his account got suspended for mysterious reasons, he decided to stop playing by the rules. Now, he runs multiple accounts in multiple countries, tasking wherever the pay is best. He works fast and gets high marks for quality, he said, thanks to ChatGPT. The bot is wonderful, he said, letting him speed through $10 tasks in a matter of minutes. When we spoke, he was having it rate another chatbot’s responses according to seven different criteria, one AI training the other.
N.S.: That all makes sense. How much impact will the export controls have on China’s military capabilities over the next 10 years? I’ve heard it said that military tech generally uses trailing-edge chips; if so, that would mean that in the short term, China’s military would only need chips that China can already make, using tools they already have. How true is that?
C.M.: Autos provide a good analogy for understanding how militaries use chips. A typical new car might have a thousand chips inside. Most are very simple, like the ones that make your window move up or down. But the high-value features – the entertainment system, the lidar or radar sensors, and the semi-autonomous-driving features – all require more sophisticated and specialized semiconductors. What’s more, a lot of the high-value features in cars don’t only require chips on cars – they also require sophisticated chips in cell towers and datacenters too. This is why Tesla builds its own high-end Dojo chips.
Military systems are pretty similar. Most of the chips in tanks and missiles are low-end, but the chips that provide differentiated capabilities are not. Just like autos, some of the most sophisticated chips aren’t actually in the missiles and tanks, but in the networks and datacenters that guide and train them. We know that autonomous driving efforts require huge volumes of advanced chips in cutting-edge datacenters. We know less about the U.S. military’s drone programs, but there’s no doubt they use a lot of sensors, a lot of communications, and a lot of compute. The Himars missiles used in Ukraine don’t require ultra-advanced chips themselves, but they rely on targeting information provided by a vast array of sensors and processors to sort signals from noise or to differentiate tanks from trucks. It’s now easy to put GPS guidance in a missile, since every smartphone has GPS guidance too. But can your missile maneuver itself to avoid countermeasures while operating in an area where GPS is jammed? If so, it’s going to need more sophisticated semiconductors.
There’s not a single type of chip for which you can say “without this chip, China’s military modernization will grind to a halt.” It’s always possible to design around a certain component. But the more you have to design around subpar semiconductors, the more tradeoffs you have to make between performance, power consumption, reliability, and other characteristics. I think the recent tightening of export controls will exacerbate these tradeoffs.
N.S.: So the goal is really just to slow China down, keep them half a step behind us. That brings me to probably the most important argument against export controls. A lot of people argue that had the U.S. not enacted export controls, China would have remained dependent on U.S. chips for longer, but now that we cut them off, China will simply learn how to make everything itself, thus cutting U.S. companies out of the market and ultimately raising China’s own technological capabilities. What do you think of this argument?
C.M.: I think it’s hard to sustain the argument that the controls will make China pursue a strategy of reducing dependence on the U.S…because that was already China’s strategy. Chinese leaders, including Xi personally, have articulated this repeatedly since at least 2014. They launched a major industrial policy program focused on the aim of ending reliance on the U.S., spending billions of dollars annually. So to say the export controls caused this goal gets the chronology backward: this goal existed for years before the export controls.
Now, one could argue “China’s prior policies weren’t working at reducing dependence on the U.S., but now China will pursue more effective policies.” But I haven’t seen anyone articulate why this would be the case. It doesn’t seem like semiconductor funding in China has increased (and the sums involved were already vast). Nor have the export controls introduced new information into the Chinese policymaking apparatus that will make it smarter. Beijing was pursuing this self-sufficiency strategy before the controls precisely because it knew it was so dependent.
Perhaps you could argue that the imposition of controls has reshaped the political economy or the relationships between Chinese firms and government in a way that will lead to smarter Chinese policy. I haven’t seen anyone spell out how this might work. So I’m skeptical, and I think loss of access to chipmaking tools and the broader chilling effects on expertise transfer will make China’s catch up efforts harder.
N.S.: How difficult will it be for China to make chipmaking tools similar to those made by ASML? I know they’re trying very hard to steal ASML’s tech, and I’ve seen one report indicating they may have had some success there. Also I’d expect them to try to purchase ASML machines through third countries, as well as accelerating their own indigenous R&D efforts. Will any of those workarounds work, and if so, how long until they catch up?
C.M.: The likelihood of purchasing these machines through third countries is close to zero. The number of advanced tools produced each year measures in the dozens, and there are only a handful of customers. A single advanced lithography machine requires multiple airplanes to transport. And there are ASML staff on site at all times who are critical to its operation. So it’s difficult to imagine a set of tools that would be more difficult to smuggle.
Replicating them is easier, but still a monumentally challenging task. It took ASML three decades to develop EUV lithography tools, and it was only possible in close collaboration with users like TSMC and Intel. Of course, it will be easier to replicate the tools than it was for ASML to first produce them. But these are the most complex and precise pieces of equipment humans have ever made. The challenge isn’t only to replicate the unique components inside the tools – such as the smoothest mirrors humans have ever made – though this will be hard. The really challenging part will be to get the hundreds of thousands of components to work often enough so that the tools can actually function in high-volume manufacturing. If each of the hundreds of thousands of components in your tool breaks down once a year, the tool basically never works. So reliability is a potentially fatal challenge.
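Miller’s reliability point checks out with simple back-of-the-envelope arithmetic. The sketch below uses his component count and failure rate; the assumption that failures are independent and spread evenly through the year is ours.

```python
# If each of ~100,000 components fails on average once a year, the tool
# as a whole suffers ~100,000 failures a year, so its mean time between
# failures is one year divided by 100,000.
components = 100_000
failures_per_year = components * 1  # one failure per component per year
minutes_per_year = 365 * 24 * 60    # 525,600 minutes
mtbf_minutes = minutes_per_year / failures_per_year
print(f"Mean time between failures: {mtbf_minutes:.1f} minutes")  # ~5.3
```

At one failure roughly every five minutes, the tool effectively never runs, which is why reliability is described as a potentially fatal challenge.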
And remember – lithography tools are probably the hardest challenge, but they’re not the only one. There are also deposition tools, etching tools, metrology tools, and others. China is behind to varying degrees – often significantly – in all of them. All these tools require tens of thousands of precision components and need to be accurate at the nanometer scale.
The final point here is that all the Western toolmakers have new chipmaking equipment rolling out on a regular basis. ASML will soon release its next-generation lithography tool, called high-numerical-aperture EUV. The industry continues to race forward. So if China manages to produce its own suite of EUV lithography and related etch, deposition, and metrology tools within five years, it will still be substantially behind the cutting edge…
…N.S.: If you were advising the Biden administration, what would you list as the top action items or priorities to improve the U.S.’ position in the semiconductor industry, beyond what has already been done? Also, by the way, are you advising the Biden administration on this?
C.M.: In the short run, there’s more work to be done on making the U.S. cost competitive. I mentioned permitting reform. We should recognize Korea’s and Taiwan’s safety and construction regulations for fabs as equivalent, so that firms from those countries don’t need to redesign their facilities when they want to build in the U.S. The more they can copy and paste from what works in those countries, the less money they have to spend redesigning facilities to suit the needs of America’s fire and plumbing inspectors, who have much less experience with fab safety than the biggest firms. (Moreover, with billions of dollars of equipment in their fabs, chipmakers have plenty of incentive to avoid accidents.) Second, there should be strict time limits in which permits are either rejected or approved, so that the NEPA burden can be limited. At the very least we should be able to make our regulations only as burdensome as Europe’s. Today they’re worse.
The second short-run change is to extend the investment tax credit, which currently expires at the end of 2026. It should be made permanent to ensure that manufacturing in other countries isn’t cheaper simply for tax reasons.
In the long run, whichever country innovates most rapidly will succeed. The CHIPS Act puts more money into R&D, and there’s discussion of focusing CHIPS funding toward prototyping rather than basic science (which is great, but which we already have plenty of). In the chip industry, prototyping is very expensive, so we have fewer startups and new products than, say, in software, simply due to the high upfront cost. Making it cheaper and easier to turn new ideas into working prototypes like test chips would help boost the rate of innovation…
…N.S.: What are some quantitative metrics we should be keeping an eye on in the semiconductor industry, in order to know how the international competition is going?
C.M.: In terms of technological leadership in the chip industry, a key question will be at what rate leading Chinese firms advance their manufacturing processes and how this compares with non-Chinese firms.
But I think the more pressing short-term metric is market share in China’s domestic chip market. Today China’s domestic chip market is dominated by foreign firms. China’s leaders have repeatedly stated they want to change this by importing fewer chips. That’s the point of Made in China 2025 and other industrial policy plans. I wonder whether they might finally take steps in this direction — not by overtaking competitors technologically but by pressuring Chinese buyers to use less capable domestically produced chips.
The electronics industry is the only major sector of the Chinese economy that has not thus far been subject to substantial “buy Chinese” pressure. (In contrast to autos, aviation, high-speed rail, etc.) In most other sectors, “buy Chinese” has been an acceptable policy because Chinese firms learned to produce products at or close to the technological frontier. Could we be at a point where China’s leaders are so committed to self-sufficiency, they decide to pressure domestic firms to buy domestic chips, even if they’re worse? The implications for global trade would be dramatic, because China spends as much money importing chips as it does on anything else.
The projections are reliable, and stark: By 2050, people age 65 and older will make up nearly 40 percent of the population in some parts of East Asia and Europe. That’s almost twice the share of older adults in Florida, America’s retirement capital. Extraordinary numbers of retirees will be dependent on a shrinking number of working-age people to support them.
In all of recorded history, no country has ever been as old as these nations are expected to get.
As a result, experts predict, things many wealthier countries take for granted — like pensions, retirement ages and strict immigration policies — will need overhauls to be sustainable. And today’s wealthier countries will almost inevitably make up a smaller share of global G.D.P., economists say.
This is a sea change for Europe, the United States, China and other top economies, which have had some of the most working-age people in the world, adjusted for their populations. Their large work forces have helped to drive their economic growth.
Those countries are already aging off the list. Soon, the best-balanced work forces will mostly be in South and Southeast Asia, Africa and the Middle East, according to U.N. projections. The shift could reshape economic growth and geopolitical power balances, experts say…
…The opportunity for many poorer countries is enormous. When birth rates fall, countries can reap a “demographic dividend,” when a growing share of workers and few dependents fuel economic growth. Adults with smaller families have more free time for education and investing in their children. More women tend to enter the work force, compounding the economic boost.
Demography isn’t destiny, and the dividend isn’t automatic. Without jobs, having a lot of working-age people can drive instability rather than growth. And even as they age, rich countries will enjoy economic advantages and a high standard of living for a long time…
…But without the right policies, a huge working-age population can backfire rather than lead to economic growth. If large numbers of young adults don’t have access to jobs or education, widespread youth unemployment can even threaten stability as frustrated young people turn to criminal or armed groups for better opportunities…
…East Asian countries that hit the demographic sweet spot in the last few decades had particularly good institutions and policies in place to take advantage of that potential, said Philip O’Keefe, who directs the Aging Asia Research Hub at the ARC Center of Excellence in Population Aging Research and previously led reports on aging in East Asia and the Pacific at the World Bank.
Other parts of the world – some of Latin America, for example – had age structures similar to those East Asian countries’ but haven’t seen anywhere near the same growth, according to Mr. O’Keefe. “Demography is the raw material,” he said. “The dividend is the interaction of the raw material and good policies.”…
…Today’s young countries aren’t the only ones at a critical juncture. The transformation of rich countries has only just begun. If these countries fail to prepare for a shrinking number of workers, they will face a gradual decline in well-being and economic power.
The number of working-age people in South Korea and Italy, two countries that will be among the world’s oldest, is projected to decrease by 13 million and 10 million by 2050, according to U.N. population projections. China is projected to have 200 million fewer residents of working age, a decrease larger than the entire population of most countries.
To cope, experts say, aging rich countries will need to rethink pensions, immigration policies and what life in old age looks like…
…Not only are Asian countries aging much faster, but some are also becoming old before they become rich. While Japan, South Korea and Singapore have relatively high income levels, China reached its peak working-age population at 20 percent of the income level that the United States had at the same point. Vietnam reached the same peak at 14 percent of that level.
Pension systems in lower-income countries are less equipped to handle aging populations than those in richer countries…
…And some rich countries won’t face as profound a change — including the United States.
Slightly higher fertility rates and more immigration mean the United States and Australia, for example, will be younger than most other rich countries in 2050. In both the United States and Australia, just under 24 percent of the population is projected to be 65 or older in 2050, according to U.N. projections — far higher than today, but lower than in most of Europe and East Asia, which will top 30 percent…
…People aren’t just living longer; they are also living healthier, more active lives. And aging countries’ high level of development means they will continue to enjoy prosperity for a long time.
But behavioral and governmental policy choices loom large.
Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), ASML, Meta Platforms, and Microsoft. Holdings are subject to change at any time.