When Mark Zuckerberg described the metaverse last year, he conjured an image of harmonious social connections in an immersive virtual world. But his company’s first iterations of the space have not been very harmonious.
Several women have reported incidents of harassment, including a beta tester who was virtually groped by a stranger and another who was virtually gang-raped within 60 seconds of entering Facebook’s Horizon Venues social platform. I had several uncomfortable moments with male strangers on social apps run by both Meta Platforms Inc. (formerly known as Facebook Inc.) and Microsoft Corp. when I visited them in December.
These are early days for the metaverse, but that’s precisely the problem. If safety isn’t baked into the design of a new social platform early on, it is much harder to secure down the line.
Gaming firms such as Riot Games Inc., the maker of League of Legends, have faced an uphill battle trying to rescue a virtual community from toxic behavior. Facebook knows this problem well, too: it struggled to put the proverbial toothpaste back in the tube with Covid vaccine misinformation, as a whistleblower highlighted last year. Had the company been faster to design a prompt asking users to stop and read certain links before sharing them, it might have kept anti-vax posts from going viral and costing lives.
It turns out Facebook has grappled internally with building safety features into its new metaverse services. In 2016, it released Oculus Rooms, an app where anyone with an Oculus headset could hang out in a virtual apartment with friends and family. Despite the cartoonish-looking avatars,