
Karen Hao, author of “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI.” (Photo: Shoko Takayasu)
Since ChatGPT’s debut in 2022, the artificial intelligence boom has been cast as a story of boundless innovation. The truth, according to investigative journalist Karen Hao, is starkly different. Hao has been covering OpenAI, the company behind ChatGPT, since 2019, when she was on the artificial intelligence beat as a reporter at MIT Technology Review, and asserts that we have entered a new and ominous age. In “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI,” she weaves the behind-the-scenes story of Altman’s sudden firing and triumphant return into a larger narrative about the impact of artificial intelligence all over the world. Hao reported well beyond Silicon Valley, from Kenyan data laborers to Chilean water activists, to present the fullest picture yet of artificial intelligence and its impact. The book came out May 20, and Hao speaks Wednesday at the Cambridge Public Library. We interviewed her Wednesday; her words have been edited for length and clarity.
Talk me through how you first came to know OpenAI.
In 2018, I started as an AI reporter at MIT Technology Review, which specializes in emerging fundamental research on different technologies. OpenAI, at the time, was a purely fundamental research lab focused on AI, so it came on my radar as one of the main players within that early stage research focus I was covering. OpenAI was founded as a nonprofit at the end of 2015, but in 2018 and 2019, they started making a series of announcements that suggested there were a lot of changes happening. One of the main changes was that they restructured, nesting a for-profit company within the nonprofit to start raising capital, and they brought Sam Altman in. I pitched a profile of the organization to my editor – I ended up being the first journalist to ever profile OpenAI – and I embedded within the company for three days in August 2019. I came in with a very open mind, thinking they had a super interesting vision and wanting to learn more about how they operationalized it. I wanted to know what they meant when they said they were working to ensure artificial general intelligence benefits all of humanity, and I very quickly realized that they could not articulate what they actually meant. They could not articulate what it meant to ensure that this happened. They could not articulate what AGI was, and they could not articulate what it meant to benefit all of humanity. Different executives and employees I spoke to had totally different interpretations of what they were doing. So that was alarming, and there were a couple of other things that made me start to become really skeptical. They professed that they were really transparent, but I found that their culture was secretive, and they professed that they were really collaborative, but the executives were hammering home this idea that they had to be No. 1 in AI research, which is inherently competitive. 
It seemed to me that even though they positioned themselves as a nonprofit that was distinctive from Silicon Valley’s cutthroat, for-profit culture, they were the same thing. I wrote a profile that said that what OpenAI says publicly to gain goodwill, to accumulate resources and capital, is not what’s actually happening behind closed doors. They were really unhappy with the piece and didn’t speak to me for three years, but I continued to watch the company as it became more and more prominent with time.
When ChatGPT was released, I realized that to most people, it felt like OpenAI came out of nowhere, and that this technology came out of nowhere. A lot of the conversations happening around AI and around the ChatGPT moment adopted the narratives that OpenAI was laying out about the technology and about who they were, and there wasn’t a lot of context. I felt people didn’t have the context that this technology was made with very specific decisions within the organization, that it wasn’t inevitable, that there were all of these choices that the technology reflected based on OpenAI’s specific worldview and ideologies. I felt like I needed to write this book to provide the history behind OpenAI and where ChatGPT comes from, as well as a broader history of where AI comes from, so that people can understand what this is, where it’s going and how they can participate in its development. I think the vast majority of people feel like AI is coming for them and all they can do is lie down and have it wash over them, but I believe that everyone has an active role to play in shaping this technology. And it’s only if people take that active role that we can put AI development on a trajectory that is broadly beneficial to everyone.
What does “empire of AI” mean to you?
That we need to start thinking of these companies as new forms of empire. If you think of empires of old and empires of AI today, they have all the same characteristics. First, they lay claim to resources that are not their own while redesigning rules to suggest that those resources were always their own. The empires of AI do that with the data that people put online, for instance, because people have not consented to their data being scraped and used to train AI models that could ultimately limit their economic opportunities. Similarly, empires exploit labor. Back in the day, labor was not paid or was paid abysmally, and today, these companies contract workers around the world to do data annotation, data cleaning and content moderation for their services in exploitative working conditions. The irony is that they’re also exploiting labor by building technologies that automate work away. Another feature of both kinds of empire is the monopolization of knowledge production. In the last 10 years, AI companies have become so resource-rich that they’re able to provide extremely sizable compensation packages, easily a million dollars, to top talent in AI research, so we’ve seen a shift from AI researchers largely working in academia to AI researchers almost exclusively working within these companies. Therefore, most of the AI research being produced today is being filtered through the lens of what is good or not good for these companies; the bedrock of public understanding of how these technologies work and the limitations of these technologies is being filtered through the empire. The last feature I see in both kinds of empires is this idea that there are good empires and there are bad empires. The British Empire would always conceive of itself as morally superior to the Dutch Empire, while the French Empire would conceive of itself as morally superior to the British Empire.
We see that same rhetoric with AI companies: They’re always positioning themselves as morally superior to the other actor, though the actor changes over time. I write in the book that OpenAI has sort of always had an enemy, but the enemy changes based on what’s most convenient. Conceiving of yourself as a good empire means that you are also doing everything under a civilizing mission that supposedly benefits all of humanity. Historically, empires would try to bring religion, culture and so on to other places that they considered to be culturally inferior to them, and that is essentially what we’re seeing with empires of AI as well. They have this civilizing mission, like they are bringing modernity, progress, all of these superior things to the entire world. But ultimately, when you look at the actual track record of people impacted, as well as both the costs and the benefits of the technology, we’re seeing a lot of dispossession around the world and a lot of power to the empire.
You did an incredible amount of on-the-ground reporting beyond Silicon Valley, from Kenya to Chile. Can you tell me a bit about the process and how you wove the strands together?
It was really important to me not to write a company book or stay in Silicon Valley, because I didn’t think that would be of service to readers wanting to understand the global impact of this technology and the full expanse of the empire. I spent a lot of time traveling to Kenya, Chile and Uruguay, and I drew on reporting I had done previously in South Africa and Colombia. In Kenya, I met with data laborers contracted by OpenAI. During that era of the company, when it was shifting from a research organization to a more commercial one, as its leaders were thinking about what would happen when they put a text generation tool into the hands of millions of people, they obviously didn’t want it to start spewing racist, toxic, abusive speech, because that would be bad for the user experience and therefore it wouldn’t be a commercial success. To create a content moderation filter that would wrap around all of the models they developed, including ChatGPT, Kenyan workers had to label reams of the worst text on the Internet, as well as the AI-generated text that came up when the company prompted its models to imagine the worst text on the Internet. These workers would read through that text and put it into a detailed taxonomy: Is it violent content? Is it sexual content? Is it graphic violent content? Is it extremely graphic violent content? Is it sexual abuse content? Is it sexual abuse content that involves children? Ultimately, they ended up in an extremely psychologically traumatized state that was very similar to what content moderators from the early social media era experienced. One of the stories that I highlight in the book is that of a Kenyan man whose personality completely changed from this work, resulting in his wife leaving him; then, once ChatGPT came out, it automated away his brother’s work opportunities. It came full circle in a horrible way, this thing that hurt him and then hurt his family both directly and indirectly.
I really wanted to center these under-told stories, these voices that are rarely ever elevated within the AI discourse, and as I considered how to weave them into this story of the company, I spent a lot of time thinking about “The Crown.” Every episode is about the Crown and the Empire, but the series does an amazing job of introducing you to the power and the reach of the Empire and the function of the Crown through different characters. Sometimes you’re with the royal family, sometimes you’re with the people around the royal family and sometimes you’re in the far reaches of the British empire. I took inspiration from that, and I basically alternated between being in OpenAI and telling the story of the company through other characters’ eyes at the edge of the empire.
How did your own understanding of AI, power and ethics evolve? Did anything surprise you after years of covering this company and this space as a journalist?
The public conversation around AI is so extreme – people think either AI will bring utopia or AI will kill us all – which is very much perpetuated by these companies, but I originally thought it was just rhetoric, and the thing that surprised me the most while writing the book was that there are people who genuinely believe it to be true. I spoke to people who were awestruck with wonder about AI and believe it will bring utopia, and I talked to others whose voices were quivering as they talked about the prospect of everyone dying within a couple of years. I realized that, to draw another analogy to cinema, the AI world has become “Dune.” In the film, Paul Atreides’ mother creates this mythology around her son to make him powerful and control the people, but the people who encounter the myth don’t know it was a creation, and it gets to the point where even Paul loses himself in the myth. In other words, he steps into the character that the myth created, believing suddenly that he is the character. In our case, maybe someone created these myths at some point as rhetorical devices, but there are actual movements of people who believe them fervently and act with religious fervor. Early in his career, Sam Altman came across this quote, “Successful people create companies. More successful people create countries. The most successful people create religions,” and reflecting on it, he said, “The most successful founders do not set out to create companies. They are on a mission to create something closer to a religion, and at some point it turns out that forming a company is the easiest way to do so.” So I think he really understood this and had a sense of how it could and would impact people.
There’s a sense of inevitability in this tech arms race. Do you believe it’s possible to slow it down or reshape it meaningfully?
I very much do think there’s a way to do that, and it all comes from us. These companies need talent, they need labor and they need support in government and in economic spaces. All of the resources, all of the ingredients that go into the making of this product, are collectively owned by people. That data is our data, that land is owned by communities, that water is a public resource. I think Silicon Valley has done a really good job of convincing people that things that are collectively owned actually belong to it, and when we remember they’re ours, we can start shaping what’s happening in this world. If you don’t like the way these companies are operating, don’t use their products, don’t give them more data. Look at real-world examples of people making change. There are artists and writers who are suing these companies over their data, asserting their ownership of the original intellectual property they produced. There are activists all around the world who are pushing back against data center development, saying they don’t want these data centers because they’re not going to give them any kind of benefit in return. They have forced these companies to come to the table and listen to community residents. Now there’s a huge conversation happening around how AI should be integrated into education. I would encourage every school to have community discussions about how to introduce AI into the classroom. Parents and teachers should form coalitions, and college students should be raising these issues and discussing how AI is affecting their education. When we take action like this, these companies have to respond. If everyone does it, companies will have to change.
Karen Hao reads from “Empire of AI” at 6:30 p.m. Wednesday at the Cambridge Public Library, 449 Broadway, Cambridge. Free, RSVP required.