AI is everywhere now. From search engines to customer-facing chatbots, it is being deployed in many forms for many different tasks, and it has also begun appearing in mobile apps on both Android and iOS. Recognizing the potential risks of generative AI, Google has updated its Play Store policy to push for moderation of AI-generated content in apps. Under the new policy, developers will have to give users a way to report offensive AI-generated content, and they must use these reports to build content filters and moderation tools that protect users.
Google announced the change on its Android Developers Blog: “Early next year, we'll be requiring developers to provide the ability to report or flag offensive AI-generated content without needing to exit the app. You should utilize these reports to inform content filtering and moderation in your apps – similar to the in-app reporting system required today under our User Generated Content policies.”
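Google's post mandates the capability but does not prescribe an API or schema, so the following is only a minimal sketch of what compliance might look like: a plain-Kotlin function that an in-app report button could call to send a user's flag to the developer's own moderation backend. The `ContentReport` fields and the `https://example.com/api/reports` endpoint are illustrative assumptions, not anything Google specifies.

```kotlin
import java.net.HttpURLConnection
import java.net.URL

// Illustrative payload for a user flag on a piece of AI-generated content.
// Field names are assumptions; the policy requires the reporting ability,
// not a particular schema.
data class ContentReport(
    val contentId: String,  // the app's own ID for the offending AI output
    val reason: String,     // e.g. "offensive", "harmful"
    val reporterId: String  // pseudonymous ID of the reporting user
)

// Posts the report to the developer's own moderation backend (hypothetical
// endpoint) so it can feed the app's content filtering, as the policy asks.
fun submitReport(
    report: ContentReport,
    endpoint: String = "https://example.com/api/reports" // assumed URL
): Boolean {
    val body =
        """{"contentId":"${report.contentId}","reason":"${report.reason}","reporterId":"${report.reporterId}"}"""
    val conn = URL(endpoint).openConnection() as HttpURLConnection
    return try {
        conn.requestMethod = "POST"
        conn.doOutput = true
        conn.setRequestProperty("Content-Type", "application/json")
        conn.outputStream.use { it.write(body.toByteArray()) }
        conn.responseCode in 200..299 // any 2xx means the flag was accepted
    } finally {
        conn.disconnect()
    }
}
```

In a real app this call would run off the main thread (for example, inside a coroutine), and the collected reports would be aggregated server-side to tune the content filters the policy refers to.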
Google said moderation is essential because the Android community expects safe, high-quality experiences, and that trust directly influences an app's or game's long-term success in installs, user ratings, and reviews. Safety is also a priority for Google itself, which moderates the Play Store and must ensure that users, especially younger ones, are not exposed to harmful AI-generated content.
Alongside the content rules, Google also expanded privacy protections on the Play Store. It noted that certain app permissions requested by developers will require additional review by the Google Play team to ensure they do not violate the company's privacy standards.
“Under our new policy, apps will only…”
Read more on tech.hindustantimes.com