Call of Duty Eavesdrops on In-Game Voice Chat With ADL-Trained AI to Help Ban Players for ‘Toxic Speech’

by Chris Menahan, Information Liberation:

Call of Duty has begun eavesdropping on in-game voice chat using AI trained by the Anti-Defamation League to help ban gamers for using “toxic speech,” “hate speech,” “discriminatory language, harassment and more.”

From PC Gamer, “Call of Duty enlists AI to eavesdrop on voice chat and help ban toxic players starting today”:

Activision announced a partnership with AI outfit Modulate to integrate its proprietary voice moderation tool—ToxMod—into Modern Warfare 2, Warzone 2, and the upcoming Modern Warfare 3.

Activision says ToxMod, which begins beta testing in North American servers today, is able to “identify in real-time and enforce against toxic speech—including hate speech, discriminatory language, harassment and more.”

[…] Call of Duty’s ToxMod AI will not have free rein to issue player bans. A voice chat moderation Q&A published today specifies that the AI’s only job is to observe and report, not punish.

“Call of Duty’s Voice Chat Moderation system only submits reports about toxic behavior, categorized by its type of behavior and a rated level of severity based on an evolving model,” the answer reads. “Activision determines how it will enforce voice chat moderation violations.”

So while voice chat complaints against you will, in theory, be judged by a human before any action is taken, ToxMod looks at more than just keywords when flagging potential offenses. Modulate says its tool is unique for its ability to analyze tone and intent in speech to determine what is and isn’t toxic. If you’re naturally curious how that’s achieved, you won’t find a crystal-clear answer but you will find a lot of impressive-sounding claims (as we’re used to from AI companies).

The company says its language model has put in the hours listening to speech from people with a variety of backgrounds and can accurately distinguish between malice and friendly riffing. Interestingly, Modulate’s ethics policy states ToxMod “does not detect or identify the ethnicity of individual speakers,” but it does “listen to conversational cues to determine how others in the conversation are reacting to the use of [certain] terms.”

Terms like the n-word: “While the n-word is typically considered a vile slur, many players who identify as black or brown have reclaimed it and use it positively within their communities… If someone says the n-word and clearly offends others in the chat, that will be rated much more severely than what appears to be reclaimed usage that is incorporated naturally into a conversation.”

[…]

In recent months, ToxMod’s flagging categories have gotten even more granular. In June, Modulate introduced a “violent radicalization” category to its voice chat moderation that can flag “terms and phrases relating to white supremacist groups, radicalization, and extremism—in real-time.”

Read More @ InformationLiberation.com