The ultimate 2023 AI guide: Here’s what happened and why

Cointelegraph has synthesized the biggest AI news of 2023, from developers and regulations to culture and scandals — this is your ultimate guide.

There is little question that 2023 was the year of artificial intelligence (AI). The Collins dictionary named AI its word of the year, describing AI-powered language models as “bursting into the public consciousness” and “much talked about” in 2023.

Wikipedia, the encyclopedia of the internet, said ChatGPT — one of the top AI chatbots on the market — was its most-viewed English-language page of the year, with 49.5 million views.

AI was on everyone’s radar this year, but it wasn’t easy to keep up with the light-speed pace of the technology’s developments.

Cointelegraph decided to look back at 2023 through the lens of AI, focusing on the key developers, new regulations on the scene, the impacted culture and, of course, the clashes and scandals.

Sit back, grab some Christmas cookies, and read how the year unfolded.

Development and developers

The genesis of the modern age of AI can be traced back to Nov. 30, 2022, when OpenAI released its AI chatbot ChatGPT for free public use. 

AI existed long before the release of ChatGPT, but this was a defining moment, with the general public now being able to access a powerful AI chatbot. ChatGPT soon became a household name, with around 100 million weekly users in 2023.

The company released its most powerful model — GPT-4 — to the public in March, rumored to have 1.76 trillion parameters.

OpenAI is governed by a nonprofit parent, though its revenue has soared in parallel with the explosion of ChatGPT. In 2022, the company reported revenue of $28 million, which rocketed to a whopping $1 billion in 2023.

This was the catalyst that set in motion a race among the industry’s tech giants to develop and deploy the most powerful models and dominate the market.

In March 2023, Google released Bard, its rival to ChatGPT. Originally based on Google’s LaMDA family of large language models (LLMs), it was then upgraded to PaLM and finally Gemini — its most powerful model to date — released in December.

In July, Meta, the parent company of Facebook and Instagram, released its own high-level LLM called Llama 2. Microsoft released its Bing AI chat in February 2023, a built-in feature of the Bing search engine and Edge browser, which was rebranded as Copilot in September.

The startup Anthropic released its Claude AI model in March, which was quickly succeeded by Claude 2 in July. Anthropic received major investments in 2023, including $2 billion from Google.

According to data from Statista, the AI market was projected to hit $241.8 billion in 2023, with an expected annual growth rate of 17.3% through 2030.

Related: Google’s Gemini, OpenAI’s ChatGPT go head-to-head in Cointelegraph test

Regulations

With every major breakthrough that impacts society on a global scale comes a parade of regulators with their eye on it. Governments worldwide have spent the past year discussing regulating AI, though few have implemented any laws. 

European Union

The European Union became one of the first jurisdictions to pass legislation regulating the deployment and development of high-level AI models. Its landmark EU AI Act was initially proposed by the European Commission in April 2021; the European Parliament adopted its negotiating position in June 2023, and Parliament and Council negotiators reached a provisional agreement on the bill on Dec. 8.

The EU AI Act regulates the governmental use of AI in biometric surveillance, regulates large AI systems like ChatGPT, and sets transparency rules that developers must follow before entering the market.

While the EU rushed to be the first supranational authority to regulate AI, it has faced pushback from local EU tech coalitions, and in June the initial regulations sparked an open letter from 160 tech executives. In both instances, EU regulators were asked to reconsider the stringent rules in favor of more flexibility for innovation’s sake.

Lothar Determann, a partner at Baker McKenzie in Palo Alto and author of Determann’s Field Guide to Artificial Intelligence Law, told Cointelegraph:

“The EU is trying hard to be first to regulate AI, but France and other member states lament the fact that they wish instead Europe would be first to innovate.”

“After the highly publicized announcements on Dec. 9 that the EU AI Act is final,” he said, “various concerns and disagreements have been raised. For now, the EU AI Act remains ‘vaporware.’”

United States

The United States has yet to officially put any AI regulations in place. However, on Oct. 30, the Biden administration issued an executive order establishing six new standards for AI safety and security, along with its intentions for ethical AI use within the government.

Industry insiders have called Biden’s executive order “certainly challenging for companies and developers,” particularly for the open-source community, as the order was less direct for developers in this space.

Determann added:

“While many policymakers and commentators focus on the possibility of new AI laws, organizations should focus on compliance requirements and risk mitigation needs under existing laws right now.”

China

China wasted no time in establishing regulations. The Chinese government originally released guidelines in April, which were later loosened and then came into effect on Aug. 15.

Later, in October, China released draft security regulations for companies offering generative AI services, which included restrictions on data sources used for AI model training.

These regulations paved the way for an uptick in local development. After the initial rules were issued, the CEO of Chinese tech giant Baidu said more than 70 AI models had already been released in the country.

On a global scale, the United Nations also introduced an international initiative to tackle challenges in AI governance; the United Kingdom hosted the world’s first-ever AI Safety Summit in November with high-profile guests in attendance; and the Group of Seven, or G7, countries released an official AI code of conduct.

Arts and culture

Throughout 2023, AI has managed to seep into every aspect of modern life, including arts and culture. 

The year saw the rise and upgrade of some of the most powerful AI generation tools on the market. Midjourney released its latest version, v6, while OpenAI’s DALL-E also got a refresh with the latest ChatGPT upgrade.

In November, Meta introduced new AI-powered tools for video generation and image editing for users of its social platforms. 

However, no corner of the art world was rocked quite like the music industry in 2023. The year saw the rise of numerous AI music tools from major developers like Meta and Google, as well as from independent projects.

In April, the musician Grimes became the first major music star to offer to split royalties 50% with creators generating AI music using her vocals. She then launched elf.tech, an open-source software program dedicated to legally replicating her voice for AI music creation.

However, not everyone in the industry took Grimes’ side. Global labels like Universal Music Group have been on the hunt for creators violating artists’ rights through the illegal use of AI voice replication. 

The Grammys also clarified rules for AI-generated music that would be eligible for award nomination. In an interview with Cointelegraph, Recording Academy CEO Harvey Mason Jr. reiterated that “the role of the Academy is always to protect the creative and music communities.”

However, one of the most memorable moments came in November, when Universal Music released the Beatles’ final song, “Now and Then,” which used AI to isolate John Lennon’s vocals from an old demo recording.

Clashes and scandals

The emergence of accessible and powerful AI also brought clashes and scandals in the form of lawsuits, firings and hirings, misleading marketing and more.

Lawsuits

Many copyright-related lawsuits were filed in 2023 by artists and creatives alleging that their works had been illegally fed to AI models for training purposes.

These copyright lawsuits have involved almost all the leading developers, including Google, Meta, Microsoft, Anthropic and OpenAI.

In July, Google was also named in a class-action lawsuit alleging it had violated the privacy and property rights of millions of internet users after updating its privacy policy to allow data scraping for AI training purposes.

Meanwhile, the Screen Actors Guild-American Federation of Television and Radio Artists ended its strike after negotiations produced a deal governing AI usage in productions. The 118-day strike was one of the longest in the union’s history, and despite its conclusion, much of Hollywood remained divided on the terms.

OpenAI management shuffle 

In late November, OpenAI shook up the industry, but this time it wasn’t due to a new product release. OpenAI co-founder and CEO Sam Altman was suddenly ousted from the company in a surprise move by the board of directors that stunned fans and investors alike. 

The community, including investors, users and company staff members, was blindsided and furious at the move, with more than half of OpenAI’s employees saying they were prepared to quit.

However, days later, after Microsoft responded by hiring Altman, OpenAI reinstated him as CEO and replaced the board. The initial cause of the firing remains unclear.

Gemini’s misleading promo video and deepfakes

In December, Google released a major upgrade to its AI model, dubbed Gemini, which comes in three sizes: Nano, Pro and Ultra. It was a long-awaited moment for the AI community, as Gemini was rumored to surpass OpenAI’s GPT-4 and was billed as a “GPT killer.”

The release came with flashy demo videos showcasing Gemini’s abilities. However, internet sleuths swiftly noticed that the model fell short of its hype and called Google out. The company responded that it had edited the footage for “brevity.”

A more serious type of fakery that surged in 2023 was AI-generated deepfakes. According to data from Sumsub, there was a tenfold increase in deepfakes across all industries globally from 2022 to 2023.

Regionally, that meant a 1,740% deepfake surge in North America, 1,530% in Asia-Pacific, 780% in Europe (including the United Kingdom), 450% in the Middle East and Africa, and 410% in Latin America.

Related: AI deepfake nude services skyrocket in popularity: Report

And with all the ups and downs of AI in 2023, so much happened that even the Pope had something to say about it. 

As this technology continues to evolve rapidly, 2024 will surely be equally jam-packed and exciting. Stay tuned for our 2024 AI predictions from industry insiders on what to expect in the coming year. 

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change