Unity announced today that it’s launching Safe Voice, a new toxicity detection solution that uses machine learning to analyze voice-based player interactions. According to Unity, it can identify toxic behavior and speech at scale, allowing developers to take action faster and helping foster safer, healthier player communities. Safe Voice is currently available in closed beta.
Safe Voice offers both proactive and player-driven toxicity detection: it’s activated when players report toxic behavior and also monitors for potential problems on its own. According to Unity, it uses signals such as the “tone, loudness, pitch, intonation, emotion, and context of player interactions” to identify potential toxic events, and it gives moderators a dashboard with an overview of trends and problematic behaviors. Safe Voice joins several other tech-based toxicity solutions, as game developers increasingly turn to automated tools to curb friction in large communities.
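To make the idea concrete, here is a minimal, purely illustrative sketch of how prosodic signals like loudness and pitch might be pulled from a voice clip and used to flag it for human review. It is not Unity’s Safe Voice model or SDK; the librosa-based feature extraction, the file name, the thresholds, and the scoring heuristic are all assumptions for illustration.

```python
# Illustrative only: a toy sketch of extracting the kinds of voice features
# the article mentions (loudness, pitch) and turning them into a crude
# "review this clip" flag. This is NOT Unity's Safe Voice model or SDK;
# the file name, thresholds, and heuristic are made up for illustration.
import numpy as np
import librosa


def voice_features(path: str) -> dict:
    """Extract simple prosodic features from an audio clip."""
    y, sr = librosa.load(path, sr=None, mono=True)

    # Loudness proxy: root-mean-square energy per frame.
    rms = librosa.feature.rms(y=y)[0]

    # Pitch (fundamental frequency) per frame via the pYIN estimator.
    f0, voiced_flag, _ = librosa.pyin(
        y,
        fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C7"),
        sr=sr,
    )
    f0 = f0[~np.isnan(f0)]  # keep voiced frames only

    return {
        "rms_mean": float(np.mean(rms)),
        "rms_peak": float(np.max(rms)),
        "pitch_mean_hz": float(np.mean(f0)) if f0.size else 0.0,
        "pitch_range_hz": float(np.ptp(f0)) if f0.size else 0.0,
    }


def flag_for_review(features: dict,
                    rms_peak_thresh: float = 0.3,
                    pitch_range_thresh: float = 250.0) -> bool:
    """Toy heuristic: loudness spikes combined with a wide pitch range
    (shouting, agitation) mark a clip for human review. A real system
    would combine many more signals, including transcribed context."""
    return (
        features["rms_peak"] > rms_peak_thresh
        and features["pitch_range_hz"] > pitch_range_thresh
    )


if __name__ == "__main__":
    feats = voice_features("clip.wav")  # hypothetical recorded voice snippet
    print(feats, "flagged:", flag_for_review(feats))
```

In practice, a flag like this would feed a moderation dashboard rather than trigger automatic punishment, which mirrors the moderator-facing overview the article describes.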
Jeff Collins, SVP and GM of Unity Gaming Services, told GamesBeat that Unity wants to give each developer a chance to customize their definitions of toxicity. “Our goal with Safe Voice is to make toxicity management, a process that has traditionally been resource-intensive and highly manual, much more automated, efficient and scalable for studios. This lets more studios feel confident in enabling exciting multiplayer experiences in their games.”
Collins said that toxicity is a widespread issue for which the games industry still doesn’t have a perfect solution. “By identifying and mitigating instances of toxicity, Safe Voice can help …
Read more on venturebeat.com