For good or ill, artificial intelligence is here to stay. That's actually underselling it: the AI industry is booming. Just ask Nvidia. Industries across the world are spending billions on ways to integrate AI models into their businesses. But while AI has the potential to boost productivity, accelerate innovation and become a useful tool for end users, global policy has yet to catch up with growing concerns.
The UK Competition and Markets Authority (CMA) has released a report following a review of Foundation Models (FMs). It outlines a series of principles that it hopes will lead to measures that protect consumers, and foster healthy competition and responsible AI development.
The seven key guiding principles outlined in the report are accountability, access, diversity, choice, flexibility, fair dealing and transparency. They're all laudable goals, but it's that last one that needs highlighting. Can morals and ethics be broken down into ones and zeros?
Sarah Cardell, CEO of the CMA, said: “The CMA’s role is to help shape these markets in ways that foster strong competition and effective consumer protection, delivering the best outcomes for people and businesses across the UK.”
These are important goals, and the CMA deserves credit for its proactive approach. But the report acknowledges it is limited in scope. Key questions, including the protection of intellectual property, the spread of misinformation, the potential for fraud, and the need for data protection and security, have not yet been addressed. And that highlights just how far there is to go to ensure AI models don't become tools for nefarious purposes.
The AIs the public are most familiar with are Large Language Models (LLMs) like