Researchers at Sony have created an artificial intelligence (AI) system capable of turning snippets of dialogue into fantasy “personas” that could be adapted for use as non-player characters in video games and other media.
AI personas are characters or personalities adopted by AI models, such as large language models. Typically, creating an AI persona involves training a language model and then fine-tuning it through a combination of parameter tweaking and labor-intensive human feedback.
Sony’s latest experiments involve automating this process by training an AI model to extract salient details from dialogue.
According to the team’s paper, the ultimate goal of the work is to make personas less dull:
“The primary issues with dialogue agents are their boring and generic responses and their inability to maintain a consistent persona, often contradicting themselves in conversations.”
Non-player characters
Instead of building a persona from the ground up, the Sony researchers approached the problem from the other end. They created a process called “persona extraction” that develops a persona based on existing information.
A pirate persona, for example, could be developed from dialogue where a character might discuss various aspects of pirate life.
One of the major challenges with this method is that dialogue often contains extraneous information: a character who talks about pirate life may also bring up topics that have nothing to do with being a pirate.
The team trained the AI to separate persona-relevant details from irrelevant ones, and the result was full-fledged personas.
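To make the filtering step concrete, here is a minimal sketch of how persona-relevant lines might be separated from small talk. It uses an off-the-shelf zero-shot classifier from the Hugging Face transformers library as a stand-in; the model, labels and example dialogue are illustrative assumptions, not the system Sony trained.

```python
# Minimal sketch: score each line of dialogue for persona relevance and keep
# only the useful ones. The zero-shot classifier and label set are stand-ins,
# not Sony's trained persona-extraction model.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

dialogue = [
    "I've sailed these waters for twenty years and buried gold on three islands.",
    "Anyway, what time does the tavern open on Sundays?",
    "My crew fears no navy, and my ship is the fastest on the sea.",
]

labels = ["reveals the speaker's persona", "small talk unrelated to persona"]

persona_facts = []
for line in dialogue:
    result = classifier(line, candidate_labels=labels)
    # Keep the line only if the persona-relevant label scores highest.
    if result["labels"][0] == labels[0]:
        persona_facts.append(line)

print(persona_facts)  # lines judged to describe the pirate persona
```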
While the scope of the experiment didn’t cover creating artificial agents such as non-player characters (NPCs) in video games, this early work on personas appears highly adaptable.
The extracted personas can generate dialogue. Combined with other systems, they could let NPCs chat in real time while staying in character, and the same approach could support automated scripting and routines for those characters.
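One plausible way to wire an extracted persona into an NPC is to prepend its persona facts to the player’s line and ask a generic language model for a reply. The sketch below does exactly that; the model choice, prompt format and helper function are assumptions for illustration, not the method described in the paper.

```python
# Minimal sketch of persona-conditioned NPC dialogue: build a prompt from the
# extracted persona facts plus the player's line, then generate a reply with a
# generic language model. Model and prompt format are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

persona_facts = [
    "I've sailed these waters for twenty years and buried gold on three islands.",
    "My crew fears no navy, and my ship is the fastest on the sea.",
]

def npc_reply(player_line: str) -> str:
    prompt = (
        "Persona:\n"
        + "\n".join(f"- {fact}" for fact in persona_facts)
        + f"\nPlayer: {player_line}\nPirate:"
    )
    output = generator(prompt, max_new_tokens=40, do_sample=True)[0]["generated_text"]
    # The pipeline returns prompt + continuation; keep only the new text.
    return output[len(prompt):].strip()

print(npc_reply("Have you ever found treasure?"))
```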
Imitation and advertising personas
As the team notes in its paper, the development of these automated systems raises ethical concerns.
“The ethical concerns of this work center on the possibility of automatically impersonating an existing person, rather than the intended use case of fictional characters.”
By the same token, the researchers’ experimental system could theoretically be used to extract personas from real people in order to sell them products and services: the more accurately a persona reflects an individual consumer, the more precisely advertisements could be targeted at them.
Related: Legal experts weigh in on landmark NYT vs. OpenAI and Microsoft lawsuit