OpenAI CEO Sam Altman, President Greg Brockman and Chief Scientist Ilya Sutskever recently penned a blog post detailing OpenAI’s stance on the development and governance of “superintelligence.”
Initial ideas for governance of superintelligence, including forming an international oversight organization for future AI systems much more capable than any today: https://t.co/9hJ9n2BZo7
— OpenAI (@OpenAI) May 22, 2023
Perhaps unsurprisingly, the company — widely accepted as the current industry leader in generative artificial intelligence (AI) technologies — believes that it would be riskier not to develop superhuman AI than it would be to press forward with its endeavors:
“Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.”
The potential for AI systems to reach human level (a paradigm often referred to as “AGI,” or artificial general intelligence) or, as OpenAI warns, to exceed even expert-level human capabilities, remains widely debated. Many experts claim it’s far from inevitable that machines will ever meet or exceed our own cognitive abilities.
It appears that OpenAI leaders Altman, Brockman and Sutskever would rather err on the side of caution. Their version of a cautious approach, however, doesn’t call for restraint.
The blog post suggests increased government oversight, public involvement in the decision-making process and stronger collaboration among developers and companies in the space. These points echo the answers Altman gave to Senate subcommittee members in a recent congressional hearing.
Related: OpenAI CEO Sam Altman testifies in ‘historic’ Senate hearing on AI safety
The blog post also asserts that it would be “unintuitively risky and difficult to stop the creation of superintelligence.” It concludes: “[W]e have to get it right.”
In explaining the apparent conundrum, the authors suggest that stopping the purportedly inevitable creation of a superintelligent AI would require a global surveillance regime. “And even that,” they write, “isn’t guaranteed to work.”
Ultimately, the authors appear to conclude that, to develop the controls and governance mechanisms needed to protect humanity from a superintelligent AI, OpenAI must continue working toward creating one.
As the global debate over exactly how these technologies and their development should be governed and regulated continues, the cryptocurrency, blockchain and Web3 communities remain stuck in a familiar kind of regulatory limbo.
AI has permeated every tech sector, and fintech is no exception. With cryptocurrency trading bots built on the back of ChatGPT and the GPT API, and countless exchanges integrating AI into their analytics and customer service platforms, any regulatory effort affecting consumer-facing AI products such as ChatGPT could have a disruptive impact on both industries.
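To illustrate the kind of GPT-backed trading bot the article alludes to, here is a minimal, hypothetical sketch in Python: a helper that asks the chat API to classify a crypto headline's sentiment and maps the free-form reply onto a fixed signal. It assumes the `openai` Python package and an `OPENAI_API_KEY` environment variable; the function names are illustrative, not from any real product.

```python
# Hypothetical sketch of a ChatGPT-based sentiment signal for a trading bot.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# `classify_headline` and `parse_signal` are illustrative names.
import os

PROMPT = (
    "Classify the market sentiment of this crypto headline as exactly one "
    "word: bullish, bearish, or neutral.\n\nHeadline: {headline}"
)

def parse_signal(reply: str) -> str:
    """Map a free-form model reply onto a fixed signal set."""
    text = reply.strip().lower()
    for signal in ("bullish", "bearish", "neutral"):
        if signal in text:
            return signal
    return "neutral"  # fall back conservatively on unexpected output

def classify_headline(headline: str) -> str:
    """Ask the chat API for a one-word sentiment signal (requires an API key)."""
    # Imported lazily so parse_signal stays usable without the package installed.
    from openai import OpenAI
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT.format(headline=headline)}],
    )
    return parse_signal(resp.choices[0].message.content)
```

A real bot would wrap this in risk controls and rate limiting; the point here is only how thin the layer between a consumer-facing chat API and a trading signal can be — which is why regulation aimed at products like ChatGPT could ripple into crypto tooling.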