The United Kingdom’s AI Safety Institute is set to expand internationally with a new location in the United States.
On May 20, Michelle Donelan, the U.K. Technology Secretary, announced that the institute will open its first overseas office in San Francisco in the summer.
The announcement said the strategic choice of a San Francisco office would allow the U.K. to “tap into the wealth of tech talent available in the Bay Area” and to engage with one of the world’s largest clusters of artificial intelligence (AI) labs, which spans London and San Francisco.
Additionally, it said this move will help it “cement” relationships with key players in the U.S. to push for global AI safety “for the public interest.”
The London branch of the AI Safety Institute already has a team of 30 and plans to scale up and acquire further expertise, particularly in risk assessment for frontier AI models.
Donelan said the expansion represents the U.K.’s leadership and vision for AI safety in action:
“It is a pivotal moment in the UK’s ability to study both the risks and potential of AI from a global lens, strengthening our partnership with the US and paving the way for other countries to tap into our expertise as we continue to lead the world on AI safety.”
This follows the U.K.’s landmark AI Safety Summit, which took place in London in November 2023. The summit was the first of its kind to focus on AI safety on a global scale.
The event drew leaders from around the world, including from the U.S. and China, alongside leading voices in the AI space such as Microsoft president Brad Smith, OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis and Elon Musk.
In this latest announcement, the U.K. also said it is releasing a selection of the institute’s recent results from safety testing it conducted on five publicly available advanced AI models.
It anonymized the models and said the results provide a “snapshot” of their capabilities rather than designating them as “safe” or “unsafe.”
Among the findings: several models could complete basic cybersecurity challenges, though some struggled with more advanced ones, and several models were found to have PhD-level knowledge of chemistry and biology.
It concluded that all tested models were “highly vulnerable” to basic jailbreaks and that the tested models were not able to complete more “complex, time-consuming tasks” without human supervision.
Ian Hogarth, the chair of the institute, said these assessments would help contribute to an empirical evaluation of model capabilities:
“AI safety is still a very young and emerging field. These results represent only a small portion of the evaluation approach AISI is developing.”