Apple is lagging behind rivals such as OpenAI's ChatGPT and Google's Gemini in several respects. However, the company has invested heavily in AI as it aims to bring the technology to the iPhone 16 lineup later this year with the release of iOS 18. It is now being reported that Apple researchers have released a new AI model that can edit images based on natural language commands from the user. The technology will possibly be showcased at the company's WWDC 2024 event in June.
Apple's new AI model, called "MGIE," or MLLM-Guided Image Editing, uses a multimodal large language model to interpret user commands and execute edits at the pixel level (via VentureBeat). The tool can adjust a wide range of image properties, including brightness, sharpness, and contrast, and it can also manipulate an image to add artistic effects.
Beyond this, local editing can alter a subject's shape, color, size, and texture within a photo. Photoshop-style editing covers cropping, resizing, rotating, and applying filters, and users can also change the background of an image. Apple's new AI model understands context and common-sense reasoning. For instance, given an image of a pizza and a prompt to make it healthier, the model automatically adds vegetable toppings, inferring that "healthier" food implies vegetables.
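To get a feel for how this kind of prompt-driven editing works in practice, here is a minimal sketch using the openly available InstructPix2Pix pipeline from Hugging Face's diffusers library. This is not Apple's MGIE code, and the file name and parameter values are placeholders, but it illustrates the same basic workflow the article describes: a plain-English instruction goes in, an edited image comes out.

```python
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

# Load a publicly available instruction-based editing model
# (InstructPix2Pix). This is NOT Apple's MGIE, only an
# illustration of the same prompt-driven editing workflow.
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

# Any local or remote photo works here; "pizza.jpg" is a placeholder.
image = load_image("pizza.jpg").convert("RGB")

# A natural-language instruction, mirroring the article's pizza example.
edited = pipe(
    "make it healthier by adding vegetable toppings",
    image=image,
    num_inference_steps=30,
    image_guidance_scale=1.5,  # how closely to stick to the input photo
    guidance_scale=7.5,        # how strongly to follow the instruction
).images[0]

edited.save("pizza_healthier.png")
```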
Through global optimization requests, the tool can adjust an image's lighting and contrast. Photoshop-style editing can also remove objects from the background when the user asks for it. You can see Apple's AI model in action in the image below. The company partnered with University of California researchers to create MGIE, and once the technology is ready, the company will create various applications for it.