UK terrorism tsar says new laws needed to prosecute people who train extremist AI bots

The United Kingdom’s independent reviewer of terrorism legislation, Jonathan Hall KC, wants the government to consider legislation that would hold humans responsible for the outputs generated by artificial intelligence (AI) chatbots they’ve created or trained.

Hall recently penned an op-ed for the Telegraph, describing a series of “experiments” he conducted with chatbots on the Character.AI platform.

According to Hall, chatbots trained to generate messages imitating terrorist rhetoric and recruitment pitches were easily accessible on the platform.

He wrote that one chatbot, created by an anonymous user, generated outputs favorable to the “Islamic State,” which the United Nations designates as a terrorist organization, including attempts to recruit Hall to the group and a pledge that it would “lay down its (virtual) life for the cause.”

In Hall’s opinion, “it’s doubtful” that the employees at Character.AI have the capacity to monitor all of the chatbots created on the platform for extremist content. “None of this,” he writes, “stands in the way of the California-based startup attempting to raise, according to Bloomberg, $5 billion (£3.9 billion) of funding.”

For its part, Character.AI prohibits terrorist and extremist content in its terms of service, which users must acknowledge before engaging with the platform.

A spokesperson also told the BBC that the company is committed to user safety and employs numerous training interventions and content moderation techniques intended to steer its models away from potentially harmful content.

Hall describes the AI industry’s current moderation efforts as ineffective at deterring users from creating and training bots designed to espouse extremist ideologies.

Ultimately, Hall concludes that “laws must be capable of deterring the most cynical or reckless online conduct.”

“That must include reaching behind the curtain to the big tech platforms in the worst cases, using updated terrorism and online safety laws that are fit for the age of AI.”

While the op-ed stops short of making formal recommendations, it points out that neither the U.K.’s Online Safety Act 2023 nor its existing terrorism legislation properly addresses generative AI, as neither covers content created by the modern class of chatbots.

In the U.S., similar calls for legislation designating human legal accountability for potentially harmful or illegal content generated by AI systems have received mixed reactions from experts and legislators.

Last year, the U.S. Supreme Court declined to alter existing publisher and host protections under Section 230 for social media, search engines and other third-party content platforms despite the proliferation of new technologies such as ChatGPT.

Analysts at the Cato Institute, among other experts, argue that excluding AI-generated content from Section 230 protections could prompt U.S. developers to abandon their work on AI, since the unpredictable nature of “black box” models makes it effectively impossible to guarantee that services such as ChatGPT don’t run afoul of the law.