As AI continues to dominate the conversation in just about every space you can think of, a repeated question has emerged: How do we go about controlling this new technology? According to a paper from the University of Cambridge, the answer may lie in a number of methods, including built-in kill switches and remote lockouts baked into the hardware that runs it.
The paper features contributions from several academic institutions, including the University of Cambridge's Leverhulme Centre, the Oxford Internet Institute and Georgetown University, alongside voices from ChatGPT creator OpenAI (via The Register). Among its proposals, which include stricter government regulation of the sale of AI processing hardware, is the suggestion that modified AI chips could "remotely attest to a regulator that they are operating legitimately, and cease to operate if not."
This would be achieved by onboard co-processors acting as a safeguard over the hardware: they would check a digital certificate that needs to be periodically renewed, and deactivate or reduce the performance of the hardware if the license were found to be illegitimate or expired.
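The paper doesn't prescribe an implementation, but the general shape of such a check is simple enough to sketch. The Python below is a minimal illustration under assumed details: the license format, the shared signing key, and names like `sign_license` and `enforce_license` are all illustrative, not taken from the paper, and a real co-processor would use proper public-key attestation rather than an HMAC.

```python
import hashlib
import hmac
import json
import time

SECRET_KEY = b"regulator-issued-key"  # stand-in for the regulator's signing key


def sign_license(payload: dict) -> dict:
    """Regulator side: attach a signature to the license payload."""
    body = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload,
            "signature": hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()}


def enforce_license(license_blob: dict, now: float | None = None) -> str:
    """Co-processor side: decide the operating mode the chip is allowed to run in."""
    now = time.time() if now is None else now
    body = json.dumps(license_blob["payload"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()

    # Illegitimate certificate: refuse to operate at all.
    if not hmac.compare_digest(expected, license_blob["signature"]):
        return "DISABLED"
    # Expired certificate: degrade rather than kill, matching the
    # "deactivate or reduce performance" framing described above.
    if now > license_blob["payload"]["expires_at"]:
        return "THROTTLED"
    return "FULL_PERFORMANCE"


# Example: a license valid for 30 days from issuance.
license_blob = sign_license({"chip_id": "GPU-0001",
                             "expires_at": time.time() + 30 * 24 * 3600})
print(enforce_license(license_blob))  # FULL_PERFORMANCE
```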
This would effectively make the hardware used for AI workloads accountable, to some degree, for the legitimacy of its usage, and would provide a method of "killing" or subduing the process if certain qualifications were found to be lacking.
Later in the paper, the authors also propose requiring sign-off from several outside regulators before certain AI training tasks could be performed, noting that "Nuclear weapons use similar mechanisms called permissive action links."
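In rough terms, that multi-party sign-off works like a quorum check: a training run only proceeds once enough independent regulators have approved it. The sketch below assumes a simple threshold scheme of my own construction; names like `RegulatorApproval` and `authorize_training` are hypothetical, and a real system would rely on cryptographic signatures rather than a plain list of approvals.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class RegulatorApproval:
    regulator_id: str   # which outside regulator signed off
    task_id: str        # which training task they approved


def authorize_training(task_id: str,
                       approvals: list[RegulatorApproval],
                       recognised_regulators: set[str],
                       quorum: int) -> bool:
    """Allow the task only if enough distinct, recognised regulators approve it."""
    signers = {a.regulator_id for a in approvals
               if a.task_id == task_id and a.regulator_id in recognised_regulators}
    return len(signers) >= quorum


# Example: a large training run needing 2 of 3 regulators to agree.
regulators = {"regulator_a", "regulator_b", "regulator_c"}
approvals = [RegulatorApproval("regulator_a", "run-42"),
             RegulatorApproval("regulator_b", "run-42")]
print(authorize_training("run-42", approvals, regulators, quorum=2))  # True
```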
While many of the proposals already have real-world equivalents that seem to be working effectively, like the strict US trade sanctions restricting the export of AI chips to countries like China, the suggestion that AI should at some level be regulated and restricted by remote systems in case of an emergency goes a step further.