Meta will make its generative artificial intelligence (AI) models available to the United States government, the tech giant has announced, in a controversial move that raises a moral dilemma for everyone who uses the software.
Meta last week revealed it would make the models, known as Llama, available to government agencies, “including those that are working on defence and national security applications, and private sector partners supporting their work”.
The decision appears to contravene Meta's own policy, which lists a range of prohibited uses for Llama, including “[m]ilitary, warfare, nuclear industries or applications” as well as espionage, terrorism, human trafficking and exploitation or harm to children.
Meta's exception also reportedly applies to similar national security agencies in the United Kingdom, Canada, Australia and New Zealand.
The situation highlights the increasing fragility of open source AI software. It also means users of Facebook, Instagram, WhatsApp and Messenger – some versions of which use Llama – may inadvertently be contributing to military programs around the world.
Llama is a family of large language models – similar to ChatGPT – and large multimodal models, which handle data other than text, such as audio and images.
Meta, the parent company of Facebook, released Llama in response to OpenAI's ChatGPT. The key difference between the two is that all Llama models are marketed as open source and free to use. This means anyone can download the source code of a Llama model, and run and modify it themselves (if they have the right hardware). On the other hand, ChatGPT can only be accessed via OpenAI.
The Open Source Initiative, an authority that defines open source software, recently released a standard setting out what open source AI should entail. The standard outlines “four freedoms” an AI model must grant in order to be classified as open source: use the system for any purpose without having to ask for permission; study how the system works and inspect its components; modify the system for any purpose; and share the system for others to use, with or without modifications, for any purpose.