OpenAI’s Early Days and Shifting Tides
In 2019, Karen Hao, a seasoned reporter at MIT Technology Review, pitched a story to her editor about OpenAI, a company then operating largely under the radar. What unfolded was a journey filled with unexpected turns, revealing the extent to which OpenAI’s ambitions had diverged from its initial goals.
I first set foot in OpenAI’s offices on August 7, 2019. Greg Brockman, the company’s CTO at the time, greeted me with a hesitant smile, acknowledging that granting such extensive access was unprecedented for them.
While OpenAI may have been a relative unknown to the general public, I had been closely tracking its developments as a reporter covering artificial intelligence.
Prior to 2019, OpenAI was considered something of an outlier in the AI research community. Its bold claim that it would achieve Artificial General Intelligence (AGI) within a decade was met with skepticism by many. Despite significant funding, the company lacked clear direction, and its marketing efforts were often perceived as overhyping research that other experts deemed unoriginal. Nonetheless, OpenAI also attracted envy. As a nonprofit, it declared no interest in commercialization, creating a rare environment for intellectual exploration free from the constraints of financial pressure.
However, in the six months leading up to my visit, a series of rapid changes hinted at a significant shift in OpenAI’s direction. The first sign was the controversial decision to withhold GPT-2 even while publicizing its capabilities. Next came the announcement that Sam Altman, following his departure from Y Combinator (YC), would become CEO, alongside the creation of a “capped-profit” structure. Amid these developments, OpenAI revealed a partnership with Microsoft, granting the tech giant priority in commercializing OpenAI’s technologies and making Microsoft Azure its exclusive cloud provider.
Each of these announcements generated controversy, speculation, and increasing attention, reaching beyond the tech industry’s confines. As the changes unfolded, it was difficult to fully grasp their significance. It was evident, however, that OpenAI was beginning to exert considerable influence over AI research and the way policymakers understood the technology. The decision to transition into a partially for-profit business was sure to have widespread repercussions across industry and government.
One evening, encouraged by my editor, I reached out to Jack Clark, OpenAI’s policy director, whom I had previously spoken with. I proposed a profile on OpenAI, sensing that it was a pivotal moment in the company’s history. Clark connected me with the communications head, who extended an invitation to interview leadership and embed within the company for three days.
Inside OpenAI: Mission and Ambition
Brockman and I were joined by chief scientist Ilya Sutskever in a glass meeting room. Seated side by side, the two played complementary roles: Brockman, the coder and implementer, appeared eager to make a positive impression, while Sutskever, the researcher and philosopher, seemed more relaxed and detached.
I began by asking about OpenAI’s mission: to ensure beneficial AGI. Why invest billions in this problem over others?
Brockman, well-versed in defending OpenAI’s position, stated that AGI was crucial for solving complex problems beyond human capabilities. He cited climate change and medicine as examples, illustrating the potential of AGI to analyze vast amounts of data and accelerate advancements in these critical areas.
He recounted a friend’s experience with a rare disorder, highlighting how AGI could streamline diagnostics and treatment by connecting specialists efficiently.
I then asked about the distinction between AGI and AI.
AGI, once a niche concept, had gained traction, largely due to OpenAI’s influence. AGI refers to a hypothetical AI that matches or exceeds human intelligence in most economically valuable tasks. While researchers had made progress, debates persisted regarding the possibility of simulating human consciousness.
AI, on the other hand, referred to the technology that existed today and its near-future extensions, which were already demonstrating applications in climate change mitigation and healthcare.
Sutskever added that AGI could solve global challenges by enabling intelligent computers to communicate and work together more efficiently than humans, bypassing incentive problems.
This statement led me to question whether AGI was intended to replace humans. Brockman responded that technology should serve people and ensure “economic freedom” while maintaining their quality of life.
Brockman argued that OpenAI’s role was not to determine if AGI would be built, but rather to influence the circumstances under which it was created. He emphasized that their mission was to ensure that AGI benefits all of humanity by building it and distributing its economic benefits.
Our conversation continued in circles, with limited success in obtaining concrete details. I attempted a different approach, asking about the potential downsides of the technology.
Brockman cited deepfakes as a possible negative application.
I raised the environmental impact of AI itself.
Sutskever acknowledged the issue but argued that AGI could counteract the environmental cost. He emphasized the need for green data centers.
“Data centers are the biggest consumer of energy, of electricity,” Sutskever continued.
“It’s 2 percent globally,” I offered.
“Isn’t Bitcoin like 1 percent?” Brockman said.
Sutskever would later say, “I think that it’s fairly likely that it will not take too long of a time for the entire surface of the Earth to become covered with data centers and power stations.” There would be “a tsunami of computing . . . almost like a natural phenomenon.”
I challenged them on this point: OpenAI was gambling that it would achieve beneficial AGI in time to counteract global warming, before the pursuit itself could exacerbate the problem.
Brockman hastily said, “The way we think about it is the following: We’re on a ramp of AI progress. This is bigger than OpenAI, right? It’s the field. And I think society is actually getting benefit from it.”
“The day we announced the deal,” he said, referring to Microsoft’s new $1 billion investment, “Microsoft’s market cap went up by $10 billion. People believe there is a positive ROI even just on short‑term technology.”
OpenAI’s strategy was thus quite simple, he explained: to keep up with that progress.
Later that day, Brockman reiterated that no one really knew what AGI would look like, adding that their task was to keep pushing forward, unearthing the shape of the technology step by step.
Behind the Scenes: Transparency and Control
I was originally scheduled to have lunch with employees in the cafeteria, but I was told that lunch would need to happen outside the office, with Brockman as my chaperone.
This pattern repeated throughout my visit: restricted access to certain areas, meetings I couldn’t attend, and researchers glancing at the communications head to ensure they weren’t violating any disclosure policies. Following my visit, Jack Clark sent a stern warning to employees on Slack not to speak with me beyond sanctioned conversations. Security guards were given my photograph as well, so they could watch for me if I appeared on the premises unapproved. These behaviors contrasted with OpenAI’s stated commitment to transparency, raising questions about what was being concealed.
At lunch and in the days that followed, I questioned Brockman about his motives for cofounding OpenAI. He said he had become obsessed with the idea of replicating human intelligence after reading a paper by Alan Turing. Inspired, he coded a Turing test game and put it online, garnering some 1,500 hits. The experience was exhilarating. “I just realized that was the kind of thing I wanted to pursue,” he said.
He cofounded OpenAI in 2015, telling me he would do anything to bring AGI to fruition, even if it meant being a janitor. When he got married four years later, he held a civil ceremony at OpenAI’s office in front of a custom flower wall emblazoned with the shape of the lab’s hexagonal logo. Sutskever officiated.
“Fundamentally, I want to work on AGI for the rest of my life,” Brockman told me.
I inquired about what motivated him.
Brockman mentioned the chances of working on a transformative technology during his lifetime. He believed he was in a unique position to bring about that transformation. “What I’m really drawn to are problems that will not play out in the same way if I don’t participate,” he said.
He wanted to lead the development of AGI and craved recognition for his accomplishments. In 2022, he became OpenAI’s president.
Profit, Mission, and Competition
During our conversations, Brockman asserted that OpenAI’s structural changes did not alter its core mission. The capped-profit structure and new investors enhanced it. “We managed to get these mission‑aligned investors who are willing to prioritize mission over returns. That’s a crazy thing,” he said.
OpenAI now had the resources to scale its models and stay ahead of the competition. Failing to do so could undermine its mission. It was this assumption that set in motion all of OpenAI’s actions and their far‑reaching consequences. It put a ticking clock on each of OpenAI’s research advancements, based not on the timescale of careful deliberation but on the relentless pace required to cross the finish line before anyone else. It justified OpenAI’s consumption of an unfathomable amount of resources.
Brockman emphasized the importance of redistributing the benefits of AGI.
I asked about historical examples of technologies successfully distributing the benefits to the public.
“Well, I actually think that—it’s actually interesting to look even at the internet as an example,” he said. “There’s problems, too, right?” he said as a caveat. “Anytime you have something super transformative, it’s not going to be easy to figure out how to maximize positive, minimize negative.
“Fire is another example,” he added. “It’s also got some real drawbacks to it. So we have to figure out how to keep it under control and have shared standards.
“Cars are a good example,” he followed. “Lots of people have cars, benefit a lot of people. They have some drawbacks to them as well. They have some externalities that are not necessarily good for the world,” he finished hesitantly.
“I guess I just view—the thing we want for AGI is not that different from the positive sides of the internet, positive sides of cars, positive sides of fire. The implementation is very different, though, because it’s a very different type of technology.”
His eyes lit up with a new idea. “Just look at utilities. Power companies, electric companies are very centralized entities that provide low‑cost, high‑quality things that meaningfully improve people’s lives.”
Once again, Brockman seemed unclear about how OpenAI would actually turn itself into a utility.
He returned to the one thing he knew for certain. OpenAI was committed to redistributing AGI’s benefits and giving everyone economic freedom. “We actually really mean that,” he said.
“The way that we think about it is: Technology so far has been something that does rise all the boats, but it has this real concentrating effect,” he said. “AGI could be more extreme. What if all value gets locked up in one place? That is the trajectory we’re on as a society. And we’ve never seen that extreme of it. I don’t think that’s a good world. That’s not a world that I want to help build.”
Fallout and Reaction
In February 2020, I published my profile in MIT Technology Review, revealing a misalignment between OpenAI’s public image and its internal practices. I wrote that, “over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration.”
Elon Musk responded with three tweets:
“OpenAI should be more open imo”
“I have no control & only very limited insight into OpenAI. Confidence in Dario for safety is not high,” he said, referring to Dario Amodei, the director of research.
“All orgs developing advanced AI should be regulated, including Tesla”
Altman sent an email to OpenAI’s employees.
“While definitely not catastrophic, it was clearly bad,” he wrote, of the MIT Technology Review article.
He wrote that it was “a fair criticism”: the piece had identified a disconnect between the perception of OpenAI and its reality. He would suggest that Amodei and Musk meet to work through Musk’s criticism. And, for the avoidance of any doubt, Amodei’s work and AI safety were critical to the mission, he wrote. “I think we should at some point in the future find a way to publicly defend our team (but not give the press the public fight they’d love right now).”
After the article, OpenAI wouldn’t speak to me again for three years.