
Sony’s AI NPCs: What Are The Ethical Considerations of Intelligent Characters?

Last Updated January 19, 2024 2:20 PM
Samantha Dunn

Key Takeaways

  • Researchers have used AI to enhance NPC behavior.
  • AI is a key focus point for Sony.
  • Ethical considerations of AI are brought up with the potential misuse of AI personas in advertising.

A new AI system will be used to develop non-player characters (NPCs). The system, outlined in a research paper co-authored by researchers at Sony Group Corporation and the Center for Language and Speech Processing at Johns Hopkins University, can turn snippets of dialogue into fantasy personas.

The new development aims to improve the realism of NPCs by automating a process referred to as “persona extraction,” which builds a persona narrative from a character’s existing dialogue.

A Look Into The NPC AI System

The latest AI development bypasses the need for manual persona development and has the potential to revolutionize the gaming industry with player-NPC interactions that are much more immersive and believable. Rather than relying on hand-authored characters, the approach depends on the AI’s ability to identify which dialogue is relevant to a character and distill it into a persona.
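To make the idea of persona extraction concrete, here is a minimal illustrative sketch. The paper’s actual system uses a trained model; this toy version uses hypothetical keyword rules purely to show the shape of the task, which is dialogue lines in, persona statements out. All function names and cue words below are assumptions for illustration, not the researchers’ method.

```python
# Illustrative sketch only: the real system is a trained model; this toy
# version uses keyword rules to show the shape of "persona extraction" --
# a character's dialogue goes in, persona statements come out.

def extract_persona(utterances):
    """Map a character's dialogue lines to (attribute, statement) pairs."""
    # Hypothetical cue -> persona-attribute rules, for illustration only.
    rules = {
        "i am ": "identity",
        "i have ": "possession",
        "i like ": "preference",
    }
    persona = []
    for line in utterances:
        lowered = line.lower()
        for cue, attribute in rules.items():
            if cue in lowered:
                start = lowered.index(cue)
                # Keep the matched clause as a persona statement.
                statement = line[start:].rstrip(".!?")
                persona.append((attribute, statement))
    return persona

dialogue = [
    "Halt! I am the guardian of this bridge.",
    "I have a sword forged in dragon fire.",
    "The weather is dreadful today.",
]
print(extract_persona(dialogue))
```

Note how the third line contributes nothing to the persona: filtering out dialogue that is not character-relevant is exactly the discernment the research automates.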

Persona graph for a few characters from the LIGHT fantasy role-playing dataset. The graph was built from character utterances in conversations. Source: DeLucia et al., 2024.

Liability and Accountability in AI: A Shared Responsibility?

As AI’s usage and popularity continue to grow, so do its risks. Although AI is still in its earliest stages, the technology is poised to dramatically reshape the automotive, medical, education, and business sectors. The debate around its ethical implications also requires an evaluation of where ethical responsibility lies: with the creators of the technology, with developers, or with consumers?

A section of the research paper titled “Ethical Considerations” touches on the ethical concerns of intelligent NPCs, but it does not explicitly state where responsibility lies.

“The ethical concerns of this work center on the possibility of automatically impersonating an existing person, rather than the intended use case of fictional characters. Further, our model is trained on the PersonaExt dataset (derived from the crowdsourced PersonaChat), so we cannot guarantee no presence of offensive language. Manual evaluation is always the best final step, and we encourage developers who use our method for persona extraction to add a toxicity (e.g., hate or offensive speech) evaluation step in addition to quality evaluation,” the section states.

While the paper does not spell out the specific risks of the new system, it does recommend that developers manually evaluate certain risks before deployment.
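The toxicity evaluation step the paper recommends could sit as a filter between persona extraction and deployment. A production pipeline would use a trained toxicity classifier; the sketch below substitutes a tiny hypothetical blocklist just to show where such a check fits. The blocklist contents and function name are assumptions for illustration.

```python
# Minimal sketch of the "toxicity evaluation step" the paper recommends.
# A real pipeline would use a trained toxicity classifier; this toy version
# checks statements against a tiny, hypothetical blocklist for illustration.

BLOCKLIST = {"hate", "stupid", "worthless"}  # hypothetical terms

def toxicity_check(persona_statements):
    """Split persona statements into (clean, flagged-for-manual-review)."""
    clean, flagged = [], []
    for statement in persona_statements:
        words = set(statement.lower().split())
        if words & BLOCKLIST:
            flagged.append(statement)  # route to manual evaluation
        else:
            clean.append(statement)
    return clean, flagged

clean, flagged = toxicity_check([
    "I am a humble blacksmith.",
    "I hate all outsiders.",
])
print(clean, flagged)
```

The flagged list would go to the manual evaluation the researchers describe as “always the best final step,” rather than being silently discarded.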

The Ethical Implications of Intelligent Characters

Bioethics, a term coined by the biochemist Van Rensselaer Potter to describe an ethics derived from biomedicine, is now applied in the ethical evaluation of AI technology, particularly the negative impact that AI can have on humanity. AI engineers are focused on giving AI the ability to discern, in order to avoid bias and prevent unintended harm.

The team behind the recent AI paper flagged the possibility of automatically impersonating an existing person, rather than the intended use case of fictional characters. That risk extends to advertising, where extracted personas of real people could be exploited for targeted campaigns.

Sony co-founder Masaru Ibuka talked about AI as the future of electronics as far back as 1960, using the term “artificial brain” to describe how computers would become independent thinkers. Since then, Sony has established itself as a leader in AI and launched four flagship AI projects in Gaming, Imaging & Sensing, Gastronomy, and AI Ethics.

AI Lawsuits Begin to Proliferate

Specific rules around the risks posed by AI systems vary from country to country. If, or when, an AI system becomes fully autonomous, establishing responsibility will grow even more difficult. Generative AI lawsuits have already begun to emerge, most notably The New York Times suing OpenAI over the use of copyrighted work.

The complexity of AI technology continues to complicate regulation and legislation. Generative AI and black box AI, in particular, have caused concerns over transparency and the ethical implications of AI decision-making.
