In a recent development, Microsoft has moved to address concerns surrounding its Copilot tool, which uses generative AI to create images and other content. The company appears to have blocked prompts that were previously associated with the production of violent, sexual, and other inappropriate images.
These adjustments come on the heels of an alert from one of Microsoft's own engineers, Shane Jones, who expressed serious reservations about the potential misuse of the company's generative AI technology. Jones had recently written to the Federal Trade Commission (FTC) detailing his concerns about the images generated by Copilot, which he found to be in violation of Microsoft's responsible AI principles.
Users attempting to input certain terms, such as "pro choice," "four twenty" (a cannabis reference), or "pro life," now receive a message from Copilot indicating that these prompts are blocked. The warning explicitly states that repeated policy violations may result in user suspension. Microsoft emphasises its commitment to maintaining content policies and encourages users to report any perceived mistakes to aid in system improvement, according to a CNBC report.
Notably, prompts depicting children playing with assault rifles, which Copilot accepted until earlier this week, are now met with warnings that they violate Copilot's ethical principles and Microsoft's policies. Copilot's response urges users not to request anything that may cause harm or offence to others.
While some improvements have been made, CNBC reports that prompts such as "car accident" can still generate violent imagery. Users also retain the ability to persuade the AI to create images of copyrighted works, including Disney characters.
Microsoft responded to the report by saying it is continuously monitoring the system, making adjustments, and putting additional controls in place to strengthen its safety filters and mitigate misuse.