Council of Europe adopts new guidelines for responsible AI use in journalism

The Council said the AI guidelines are an “important contribution” to promoting a rule of law-based and human rights-compliant public communication sector.

The Council of Europe announced on Dec. 29 that it will put into effect guidelines for the “responsible implementation” of artificial intelligence (AI) in journalistic practices.

The Council’s Intergovernmental Steering Committee on Media and the Information Society adopted the guidelines originally released on Nov. 30, saying they are an “important contribution” to promoting a rule of law-based and human rights-compliant public communication sector.

“They provide practical guidance to the relevant actors, in particular news media organizations, but also states, technology providers and digital platforms that disseminate news, detailing how AI systems should be used to support the production of journalism.”

The guidelines cover AI systems in various stages of journalistic production, such as the initial decision to use AI and media organizations acquiring AI tools and incorporating them into the newsroom.

AI’s effect on audiences and society is a significant aspect of the guidelines. Therefore, they also propose responsibilities to be taken on by technology providers, platforms and member states.

The Council of Europe is based in Strasbourg, France, and includes 46 European countries. Its purpose is to promote democracy, human rights and the rule of law.

Related: Blockchain media authentication app eyes news journalism as primary use case

Over the last year, as AI has entered mainstream public use, journalism has seen mixed reactions to the technology.

On the one hand, Channel 1 AI has announced that it will launch an entire newsroom operated completely by AI journalists in 2024 to present personalized news to viewers.

The German media giant Axel Springer announced in mid-December that it would partner with OpenAI to integrate ChatGPT into its journalism.

Meanwhile, traditional newsrooms have been struggling with copyright issues, and several have alleged that AI models are being illegally trained on media companies’ content. The most recent example is The New York Times’ Dec. 27 lawsuit against OpenAI and Microsoft for misuse of its content in model training.

To catch up on all the AI happenings of 2023, don’t forget to check out our “ultimate 2023 AI guide.”

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change