Riot Games will soon begin evaluating reports of abusive Valorant voice chat so it can build an automated system for analyzing those problematic comms.
The company recently updated its Privacy Policy and Terms of Service to "allow us to record and evaluate in-game voice communications when a report for that type of behavior is submitted." Now it's planning to take advantage of those capabilities starting on July 13.
"Voice evaluation would provide a way to collect clear evidence that could verify any violations of behavioral policies before we can take any action," which should limit abuse, and "help us share back to players why a particular action resulted in a penalty," Riot Games says.
I've played a lot of Valorant, and I know from experience that some players constantly use racial slurs, scream obscenities, and otherwise abuse the members of their teams. (Which shouldn't come as a surprise to anyone who's used literally any other game's voice chat.)
Valorant allows players to mute each other, but unless someone is willing to accept the competitive disadvantage of playing a tactical shooter without voice comms, odds are that they'll have to endure at least some amount of vile behavior before they mute a particular player.
Reporting the player in question offers a better solution because Riot Games can simply ban them from voice chat—or from Valorant entirely. Those are drastic measures, though, which explains why the company wants to exercise some caution before it swings the banhammer.
So now it's planning to start testing its new system on English-speaking players in North America. The data collected during that period will be used to "help train our language models."
Read more on pcmag.com