Former OpenAI researcher foresees AGI becoming reality by 2027

OpenAI researcher Leopold Aschenbrenner, who was fired over an alleged leak from the firm, is bullish on the intellectual capabilities of AI in the next decade.

Leopold Aschenbrenner, a former safety researcher at ChatGPT creator OpenAI, has doubled down on his predictions about artificial general intelligence (AGI) in his new essay series on artificial intelligence.

Dubbed “Situational Awareness,” the series surveys the state of AI systems and their potential over the next decade. The full set of essays is collected in a 165-page PDF file, updated on June 4.

In the essays, the researcher paid particular attention to AGI, a type of AI that matches or surpasses human capabilities across a wide range of cognitive tasks. AGI is one of several broad categories of artificial intelligence, alongside artificial narrow intelligence (ANI) and artificial superintelligence (ASI).

Types of artificial intelligence. Source: Innovate Forge

“AGI by 2027 is strikingly plausible,” Aschenbrenner declared, predicting that AI models will be smarter than college graduates by 2025 or 2026. He wrote:

“By the end of the decade, they [AGI machines] will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed.”

According to Aschenbrenner, AI systems could eventually possess intellectual capabilities comparable to those of a professional computer scientist. He also predicted that AI labs will be able to train general-purpose language models in minutes, stating:

“To put this in perspective, suppose GPT-4 training took 3 months. In 2027, a leading AI lab will be able to train a GPT-4-level model in a minute.”
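For scale, here is a minimal back-of-the-envelope sketch of the speedup that quote implies, assuming 30-day months (an assumption, not a figure from the essays):

```python
# Back-of-the-envelope check on the quoted claim: a 3-month training
# run compressed to 1 minute. Assumes 30-day months.
MINUTES_PER_DAY = 24 * 60  # 1,440 minutes in a day

gpt4_training_minutes = 3 * 30 * MINUTES_PER_DAY  # ~3 months of training
target_minutes = 1  # the hypothetical 2027 training time

speedup = gpt4_training_minutes / target_minutes
print(f"Implied effective-compute speedup: ~{speedup:,.0f}x")
# Implied effective-compute speedup: ~129,600x
```

In other words, the claim implies a gain of roughly five orders of magnitude in effective training speed within about three years.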

Predicting AGI’s arrival, Aschenbrenner called on the community to confront its reality. According to the researcher, the “smartest people” in the AI industry have converged on a perspective he calls “AGI realism,” which rests on three foundational principles tied to United States national security and AI development.

Related: Former OpenAI, Anthropic employees call for ‘right to warn’ on AI risks

Aschenbrenner’s AGI series comes a while after he was reportedly fired for allegedly “leaking” information from OpenAI. He was also said to be an ally of OpenAI chief scientist Ilya Sutskever, who reportedly took part in a failed effort to oust OpenAI CEO Sam Altman in 2023. Aschenbrenner dedicated the new series to Sutskever.

Aschenbrenner also recently founded an investment firm focused on AGI, with anchor investments from figures such as Stripe CEO Patrick Collison, according to his blog.

Magazine: Crypto voters are already disrupting the 2024 election — and it’s set to continue