The UK government is using deep learning algorithms, under the catch-all umbrella of AI, to help its various departments make decisions on welfare benefit claims, detect cases of fraud, and even scan passports. None of that will come as much of a surprise, but as one investigation suggests, it's opening a massive can of worms for all concerned.
If you're wondering what kind of AI is being talked about here, then think about upscaling. The systems employed by the government aren't too dissimilar from those developed by Nvidia for its DLSS Super Resolution technology.
The data model for that is trained by feeding it millions of very high-resolution frames from hundreds of games. So when the algorithm is then fed a low-resolution image, it can work out how the frame is most likely to appear once it's been upscaled.
DLSS upscaling uses a fairly standard routine to make the jump from 1080p to 4K, for example. It then runs the AI algorithm to correct any errors in the image. But like all such systems, the quality of the end result depends massively on what you feed into the algorithm and what the dataset was trained on.
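The two-stage idea can be sketched in a few lines. This is a minimal illustration only: the nearest-neighbour upscale and the 2x factor here are assumptions for the sake of the example, and Nvidia's actual DLSS pipeline is proprietary and far more sophisticated.

```python
def naive_upscale(frame, factor=2):
    """Stage one: a 'fairly standard routine' upscale.

    Nearest-neighbour scaling, where each pixel in the low-resolution
    frame is duplicated factor x factor times. In a DLSS-style pipeline,
    a trained network would then correct the errors this crude step
    introduces, based on what it learned from high-resolution frames.
    """
    return [
        [row[x // factor] for x in range(len(row) * factor)]
        for row in frame
        for _ in range(factor)
    ]

# A 2x2 'frame' of pixel values becomes a 4x4 one:
low_res = [[1, 2],
           [3, 4]]
high_res = naive_upscale(low_res)
# [[1, 1, 2, 2],
#  [1, 1, 2, 2],
#  [3, 3, 4, 4],
#  [3, 3, 4, 4]]
```

The point is that stage one is purely mechanical; all the "intelligence", and therefore all the dependence on the training dataset, lives in the correction stage.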
An investigation by the Guardian into the use of AI by the UK government highlights what happens when there are problems with both of those aspects. For example, the publication reports that the Home Office was using AI to read passports at airports, to help flag up potential fake marriages for further investigation.
The Guardian says an internal Home Office evaluation shows the algorithm highlighting a disproportionate number of people from Albania, Greece, Romania, and Bulgaria. If the dataset it was trained on already over-emphasises particular nationalities, then the algorithm will simply reproduce that bias in the cases it flags, regardless of how representative the data actually was.
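That feedback loop is easy to demonstrate with a toy example. Everything below is hypothetical, not the Home Office's system: two groups with an identical true fraud rate, where one group's records come disproportionately from past investigations, so fraud is over-represented in its labels.

```python
def flag_rate(training_records, group):
    """Estimated P(fraud | group) from labelled records of (group, is_fraud).

    A naive model flags groups in proportion to this estimate, so any
    sampling bias in the labels flows straight through to the output.
    """
    matching = [fraud for g, fraud in training_records if g == group]
    return sum(matching) / len(matching)

# Skewed training sample: both groups really have a 2% fraud rate, but
# group B's records were mostly gathered during past investigations, so
# its 20 fraud cases sit in a pool of only 100 records.
records = ([("A", i < 20) for i in range(1000)]
           + [("B", i < 20) for i in range(100)])

flag_rate(records, "A")  # 0.02
flag_rate(records, "B")  # 0.2 -- flagged ten times as often
```

The model isn't detecting more fraud in group B; it's detecting that group B was investigated more in the first place.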
Read more on pcgamer.com