Big Tech giants Google and Microsoft, along with ChatGPT maker OpenAI and AI startup Anthropic, announced on July 26 that they would form a new industry watchdog group to help regulate artificial intelligence (AI) development.
In a joint statement released on the Google blog, the companies revealed their new Frontier Model Forum aimed at monitoring the “safe and responsible” development of frontier AI models.
It pointed out that while governments across the world have already begun putting efforts toward regulating AI development and deployment, “further work is needed on safety standards and evaluations.”
“The Forum will be one vehicle for cross-organizational discussions and actions on AI safety and responsibility.”
The current core goals of the initiative are to advance research on AI safety, identify best practices for the responsible development and deployment of frontier models, collaborate with governments and civil leaders, and support efforts to develop applications.
Membership in the forum is open to organizations that fit the predefined criteria, which include developing and deploying frontier models.
“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” said Brad Smith, the vice chair and president of Microsoft.
“This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”
According to the announcement, the Frontier Model Forum will establish an advisory board in the coming months to direct the group’s priorities and strategy. It also says the founding companies plan to consult “civil society and governments” regarding the design of the forum.
Anna Makanju, the vice president of global affairs at OpenAI, said advanced AI systems offer a “profound benefit” to society, but achieving that potential requires oversight and governance.
“It is vital that AI companies — especially those working on the most powerful models — align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible.”
On July 21, prominent AI companies, including OpenAI, Google, Microsoft and Anthropic, met with White House officials and committed to the safe, secure and transparent development of AI.
In June, United States lawmakers unveiled a bill to create an AI commission to address concerns within the industry.