Voice moderation is a sensitive issue. Players expect privacy, but long gone are the halcyon days of early, friendly online gaming. Today, interactions with strangers in online games all too often turn toxic. Striking the balance between player privacy and the safety of online communities is the challenge facing games studios today.
Boston-based start-up Modulate wants to help game companies clean up toxic behaviour in their games with machine learning-based tools that promise to empower moderators and protect players.
Modulate CEO Mike Pappas told GamesIndustry.biz why its voice-native moderation tool ToxMod is more effective than older forms of grief reporting, why studios should build robust codes of conduct amid changing online safety regulations, and how the company's technology and guidance are helping to make online communities safer.
ToxMod: machine-assisted proactive reporting vs user-generated reporting
User-initiated reports of bad behaviour have been the standard for many years – just this summer, Xbox rolled out new voice reporting features for its platforms. But Pappas says game studios cannot rely on this method alone.
“User reports are a really important part of the overall trust and safety strategy. You have to give your users ownership,” he says.
“But you can’t just rely on that channel. A lot of users have had bad experiences with user reporting systems in the past, where they feel like their reports go into a black box, that they don’t have the tools to submit, and they don’t know if anything they reported is actually getting addressed.”
Submitting a report often takes players out of the action when playing a game or chatting in a social group, and that friction plays a part in incidents going unreported.