Bluesky, the decentralized social media platform and rival to X (formerly Twitter), has introduced new safety tools to better moderate content on the platform. It has also added new features that let users control who can see their posts and interact with them. The new tools arrive after Bluesky faced criticism for not banning a user who made death threats and for allowing members to create usernames containing racial slurs. Notably, the platform is still in private beta testing and has not yet opened to the general public.
According to a report by TechCrunch, Bluesky has introduced what it describes as “more advanced automated tooling,” which will flag any content that violates its Community Guidelines. Flagged content will not be automatically deleted, nor will the poster be automatically banned; instead, it will be reviewed by the platform's content moderation team, which will then decide what action to take.
The company said in a post, “We'll iterate on this so that mods can review offensive content, spam, etc. without any user seeing it first”.
These safety tools will not be available to users; they are part of the console tools that the social network's admins will use to manage the platform. It is not clear what the moderation process looked like earlier, but it appears there was no mechanism to scan the platform for harmful or offensive content.
Users are also getting a couple of new features along the same theme. The first lets them report their own posts for mislabeled content, helping the moderation team fix incorrect labels; the option will be available directly on the platform's compose screen. Earlier, users had to depend on others to report mislabeled posts on their behalf.
The second