Oracle has begun auditing TikTok’s algorithms and content moderation models, according to a new report from Axios out this morning. Those reviews began last week and follow TikTok’s June announcement that it had moved its U.S. traffic to Oracle servers amid claims that its U.S. user data had been accessed by TikTok employees in China.
The new arrangement is meant to give Oracle the ability to monitor TikTok’s systems, helping the company in its efforts to assure U.S. lawmakers that its app is not being manipulated by Chinese government authorities. Oracle will audit how TikTok’s algorithm surfaces content to “ensure outcomes are in line with expectations,” and that those models have not been manipulated, the report said. In addition, Oracle will regularly audit TikTok’s content moderation practices, including both its automated systems and the decisions made by human moderators enforcing TikTok policy.
TikTok’s moderation policies have been controversial in years past. In 2019, The Washington Post reported that TikTok’s U.S. employees had often been ordered to restrict some videos on the platform at the behest of Beijing-based teams, and that teams in China would sometimes block or penalize certain videos out of caution about Chinese government restrictions. That same year, The Guardian also reported, citing a set of leaked documents, that TikTok had been telling its moderators to censor videos that mentioned things like Tiananmen Square, Tibetan independence, or the banned religious group Falun Gong. In 2020, The Intercept reported that TikTok moderators were told to censor political speech in livestreams and to suppress posts from “undesirable users,” described in its documents as the unattractive, poor, or disabled.
All the while, TikTok