Microsoft Corp.'s Xbox released its first transparency report on Monday, detailing how the gaming giant moderates its platform for 3 billion global players.
Xbox took action against more than 4.3 million inauthentic accounts between January and June, according to the report, after increasing its proactive moderation ninefold compared with the same period a year earlier. These inauthentic accounts are typically automated or bot accounts that can be used to trick or harass players through spam, facilitate cheating, inflate friend or follower counts, or launch distributed denial-of-service, or DDoS, attacks.
Dave McCarthy, Xbox corporate vice president of player services, said the increase in proactive moderation is an effort to “weed out” the accounts before they hit the system. Proactive enforcement, which made up 65% of total enforcement actions, refers to artificial intelligence or human moderators identifying, analyzing and taking action on behavior that violates Xbox's community standards. Microsoft also relies on players to report inappropriate content through reactive moderation.
Inauthentic accounts aren't just from bots advertising cheat codes for games, McCarthy said. “There's regular activity by nation-state actors and other funded groups attempting to distribute content that has no place on our services,” he added.
Xbox joins a growing number of gaming and gaming-service providers that plan to release transparency reports on a regular basis in an effort to crack down on toxicity and abuse and to create a safe experience for players. Twitch, the live-stream gaming site owned by Amazon.com Inc., released its first report in early 2021, and Discord released its first in 2019.