Predicting the AI crimes of the future, UK academics reveal the biggest threat

A group of London academics has ranked the most serious AI crime threats to come, and future public enemy number one is a familiar name.

A prevailing thought of our time is that an intelligent robot uprising will be our undoing. However, a recent study published in Crime Science warns that the major artificial intelligence threats of the future will likely have more to do with us than with AI itself.

Rating threats by their potential harm, profitability, achievability, and defeatability, a team of academics, policy experts, and private sector stakeholders found that deep fakes (a technology already in use and spreading fast) posed the highest level of threat.

Something akin to a robot siege might damage property, but the harm caused by deep fakes could erode trust in people and society itself.

While the threat of AI may seem like a problem forever stuck in the future, Shane Johnson, Director of the Dawes Centre for Future Crime at UCL, which funded the study, says these threats will only grow in sophistication as time goes on.

“We live in an ever-changing world which creates new opportunities – good and bad,” said Johnson. “As such, it is imperative that we anticipate future crime threats so that policymakers and other stakeholders with the competency to act can do so before new ‘crime harvests’ occur.”

The researchers gathered a team of 14 academics in related fields, 7 experts from the private sector, and 10 experts from the public sector. The experts were then divided into groups of four to six people and given a list of potential AI crimes, ranging from physical threats (e.g. an autonomous drone attack) to digital threats such as phishing schemes.

The team considered four main facets of an attack: harm (physical, mental, or social damage), profitability, achievability, and defeatability.

The factors aren’t entirely divorced from one another; an attack’s harm, for example, may be a by-product of its achievability. Nevertheless, the experts were asked to score each criterion separately for every threat. The scores were then sorted to rank the most harmful AI-enabled attacks expected over the coming 15 years.
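To make that ranking step concrete, here is a minimal sketch of how per-criterion scores might be aggregated and sorted. The threat names, the 1-to-5 scores, and the equal-weight averaging are all illustrative assumptions; the study had experts score each criterion separately and does not prescribe a single aggregation formula.

```python
from statistics import mean

# Hypothetical per-criterion scores (1 = low concern, 5 = high concern).
# The threat names and numbers are invented for illustration; they are
# not the study's data.
scores = {
    "audio/video deep fakes": {"harm": 5, "profitability": 4, "achievability": 5, "defeatability": 5},
    "autonomous drone attack": {"harm": 4, "profitability": 2, "achievability": 3, "defeatability": 3},
    "AI-assisted phishing": {"harm": 3, "profitability": 4, "achievability": 5, "defeatability": 2},
}

def overall(facets):
    # Equal-weight average across the four facets, an assumption made
    # here purely so the threats can be compared on a single scale.
    return mean(facets.values())

# Sort from highest to lowest overall concern and print the ranking.
for name, facets in sorted(scores.items(), key=lambda kv: overall(kv[1]), reverse=True):
    print(f"{name}: {overall(facets):.2f}")
```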

Of the 18 types of AI threat considered, the group determined that audio and video manipulations in the form of deep fakes posed the greatest threat.

“Humans have a strong tendency to believe their own eyes and ears, so audio and video evidence has traditionally been given a great deal of credence (and often legal force), despite the long history of photographic trickery,” explain the authors. “But recent developments in deep learning have significantly increased the scope for the generation of fake content.”

The potential impacts of deep fakes range from petty scams that impersonate a family member to videos designed to sow distrust and spread misinformation to the public at large. On top of this wide range of potential uses, deep fake attacks are difficult for individuals (and even experts) to detect.

“Changes in citizen behaviour might, therefore, be the only effective defence,” write the authors.

The authors themselves concede that the judgements made in the study are inherently speculative and shaped by current political and technological trends. Even so, there is work to do now.

Understanding technology’s potential for harm, and staying ahead of it through information literacy and community building, will go a long way toward preparing us for a more realistic robot apocalypse.