AI Deployed To Police Video Game Chats For ‘Toxicity’

Voice chat in video games has long given young people a place to unwind and trade friendly banter, but the large firms behind popular franchises now view it with disapproval. They aim to use AI to police in-game conversations for inappropriate language and to promote politically correct values.

Activision-Blizzard plans to deploy an artificial intelligence tool that monitors in-game voice conversations for “toxic” speech, which could lead to real-time censorship or bans.

Call of Duty is now using AI to monitor in-game voice chat in an effort to curb toxic behavior in its online games. For Modern Warfare 2, Warzone 2, and the forthcoming Modern Warfare 3, Activision has partnered with AI company Modulate to integrate ToxMod, Modulate’s proprietary speech-moderation engine.

ToxMod is available for beta testing on Call of Duty’s North American servers, and Modulate claims the tool can detect instances of hate speech, discriminatory language, and harassment in real time.

Modulate bills ToxMod as the only proactive voice-chat moderation solution built specifically for video games. Its official website lists only a handful of titles currently using the tool, mostly smaller virtual-reality games such as Rec Room. Call of Duty’s massive daily player base is expected to be the tool’s most significant deployment yet.
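Modulate has not published ToxMod’s internals, but proactive voice moderation generally follows a transcribe-then-classify shape: speech is converted to text, scored against a policy, and flagged for review or enforcement. The sketch below is purely illustrative; the blocklist, function names, and threshold are invented and bear no relation to Modulate’s actual system.

```python
# Toy illustration of a transcribe-then-classify moderation step.
# ToxMod's real pipeline is proprietary; everything here is a placeholder.

BLOCKLIST = {"badword1", "badword2"}  # invented placeholder terms


def moderate(transcript: str, threshold: int = 1) -> bool:
    """Return True if an utterance should be flagged for human review.

    `transcript` stands in for the output of a speech-to-text stage;
    `threshold` is how many blocklisted words trigger a flag.
    """
    hits = sum(1 for word in transcript.lower().split() if word in BLOCKLIST)
    return hits >= threshold


# Example: a flagged utterance versus a clean one.
print(moderate("gg badword1"))        # flagged
print(moderate("good game everyone"))  # not flagged
```

A production system would replace the keyword check with a trained classifier that also weighs tone and context, which is what lets such tools distinguish trash talk from harassment.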

A ban on Facebook or Twitter costs you nothing, but being kicked off Xbox Live or another gaming service can render an expensive purchase nearly useless. The next Call of Duty game sells for $69.99 on the series’ official website, and playing it also requires a high-end gaming PC, a PlayStation 4, or an Xbox One.

With the release of Call of Duty: Modern Warfare 3 in November, Activision-Blizzard intends to make the tool publicly available.