Riot Games released a report today describing the current state of anti-toxicity measures in Valorant, as well as the future of such efforts, including a beta program to record and analyze the voice comms of players reported for abusive chat.
Riot's terms of service were updated to accommodate this change last year, and a beta rollout of the voice moderation program will begin sometime in 2022. The report was light on details, but the recording and analysis of voice comms is supposedly to be used only when a player has already been reported for toxic behavior.
I hope to hear more specifics on the program ahead of its implementation: how many times players must be reported before being surveilled, and whether the punishment and appeal process will differ from the anti-toxicity programs Riot already has in place.
The main body of the report outlines Valorant's current system for muting offensive words in text chat, the essential role of player reporting in improving the game, and the results of these systems as reflected in the rates of bans and comms restrictions.
Interestingly, even though punishments are on the rise, Riot's player surveys show that the perception of harassment in Valorant remains steady. In Riot's own words: "...we noticed that the frequency with which players encounter harassment in our game hasn’t meaningfully gone down. Long story short, we know that the work we’ve done up to now is, at best, foundational, and there’s a ton more to build on top of it in 2022 and beyond." I was impressed that Riot would admit to this discrepancy instead of just citing the increased rate of moderation as a win.
The report went on to describe some
Read more on pcgamer.com