They compared AI with nuclear energy, another technology carrying the “possibility of existential risk,” and argued for an authority similar in nature to the International Atomic Energy Agency (IAEA), the world’s nuclear watchdog.
Over the next decade, “it’s conceivable that … AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations,” the OpenAI team wrote. “In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there.”
The statement echoed Altman’s comments to Congress last week, where the U.S.-based company’s CEO also testified to the need for a separate regulatory body.
Critics have warned against trusting calls for regulation from leaders in the tech industry who stand to profit off continuing development without restraints. Some say OpenAI’s business decisions contrast with these safety warnings, as its rapid rollout has fueled an AI arms race, pressuring companies such as Google parent Alphabet to release products while policymakers are still grappling with the risks.
Few Washington lawmakers have a deep understanding of emerging technology or AI, and AI companies have lobbied them extensively, The Washington Post previously reported, as supporters and critics hope to influence discussions on tech policy.
Some have also warned that regulation risks hampering the United States’ ability to compete on the technology with rivals, particularly China.
The OpenAI leaders warn in their note against pausing development, adding that “it would be unintuitively risky and difficult to stop the creation of superintelligence. Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing.”
In his first congressional testimony last week, Altman issued warnings on how AI could “cause significant harm to the world,” while asserting that his company would continue to roll out the technology.
Altman’s message of willingness to work with lawmakers received a relatively warm reception in Congress, as countries including the United States acknowledge they must balance supporting innovation with managing a technology that is raising concerns about privacy, safety, job cuts and misinformation.
A witness at the hearing, New York University professor emeritus Gary Marcus, highlighted the “mind-boggling” sums of money at stake and described OpenAI as “beholden” to its investor Microsoft. He criticized what he described as the company’s divergence from its mission of advancing AI to “benefit humanity as a whole” without the constraints of financial pressure.
The popularization of ChatGPT and generative AI tools, which create text, images or sounds, has dazzled users and also added urgency to the debate on regulation.
At a Group of Seven summit on Saturday, leaders of the world’s largest economies made clear that international standards for AI advancements were a priority but did not produce substantive conclusions on how to address the risks.
The United States has so far moved slower than others, particularly in Europe, although the Biden administration says it has made AI a key priority. Washington policymakers have not passed comprehensive tech laws for years, raising questions over how quickly and effectively they can develop regulations for the AI industry.
The ChatGPT makers called in the immediate term for “some degree of coordination” among companies working on AI research “to ensure that the development of superintelligence” allows for safe and “smooth integration of these systems with society.” The companies could, for example, “collectively agree … that the rate of growth in AI capability at the frontier is limited to a certain rate per year,” they said.
“We believe people around the world should democratically decide on the bounds and defaults for AI systems,” they added — while admitting that “we don’t yet know how to design such a mechanism.”
Cat Zakrzewski, Cristiano Lima and Will Oremus contributed to this report.