Over the last two decades, the world has seen technology grow at an exuberant rate. Tasks that once required a room full of computers and specialists can now be accomplished in seconds with a few taps on the screen of a smartphone. We only have to take a stroll along the high street to see people talking into smartwatches, paying for shopping or getting regular health updates in Star Trek fashion.
The boundary between the real world and its virtual counterpart is blurring by the day.
The rise of AI
The ‘Internet of Things’ (IoT), a term used to describe physical objects that now provide Joe Public with a bridge into the online biosphere, has arrived. Recently, there has been debate concerning yet another emerging technology: Artificial Intelligence, or AI.
Put simply, in non-geek speak, AI is an umbrella term for a multitude of efforts to give devices the ability to think, learn and act for themselves; that is, to synthesise, perceive and infer or understand data (information).
Although AI is seen as something new, it has in fact been around since 1956, starting life as an academic discipline and experiencing many years of trial-and-error theory and practice. Today, mainly because of a successful resurgence fuelled by mathematical and statistical Machine Learning (ML, or building methods that allow machines to learn), we are already seeing AI features seep into our everyday routine.
At the basic level, there are Alexa and Siri carrying out instructions by understanding human speech; a further level takes us to cars with the ability to drive themselves. There is no doubt that this is game-changing technology that promises to dramatically alter the way we live and work.
But with such breakthroughs has come concern and fear, recently expressed, surprisingly, by those who are set to make the greatest profit. Power players such as billionaire Elon Musk and Sam Altman are warning that we should perhaps start to slow this technological evolution down through regulation, given the dangers it will inevitably pose to the world as we know it.
Put simply, combined with designs that include automation, the prospect of machines, in effect robots, becoming a very real threat to humankind may not be far off.
AI’s role in crime?
As someone who studies crime for a living, I find such predictions food for thought. Not so much in terms of robots taking over the globe (what we are talking about in that context is what’s called general AI, or machines capable of doing everything better than humans), but certainly in terms of the threat of AI and its darker criminal application in the very near future. Already, academic research has noted AI-enabled crime existing both in the cyber world and in the wider real world.
For example, deepfakes are one powerful avenue potentially already being exploited. A deepfake involves superimposing one person’s features onto another in video; it can also involve voice cloning, that is, impersonating people by mimicking their speech.
While such applications are already providing legitimate benefits in areas such as corporate presentations and entertainment, cyber security experts are warning of the increasing threat of Voice Cloning-as-a-Service (VCaaS) being offered on the dark web for purposes such as blackmail.
“Voice cloning technology is currently being abused by threat actors in the wild. It has been shown to be capable of defeating voice-based multi-factor authentication (MFA), enabling the spread of misinformation and disinformation, and increasing the effectiveness of social engineering.” – Insikt Group
Even house burglary, one of the more low-level traditional crimes, has not been ignored. A report put together by University College London lists the emergence of burglar bots, which can be put through letter boxes to relay information to potential burglars about the contents and layout of properties.
On a bigger scale, such technology can be applied to a political platform, and in so doing take disinformation to a whole new level. There is already evidence of the convincing power of deepfake technology: during the 2019 general election, footage emerged showing Boris Johnson and Jeremy Corbyn appearing to endorse one another for the premiership.
As one AI specialist commented, 50% of online views happen in the first few minutes after content is posted, so even once the content is found to be fake, hours or even days have gone by, with millions of people having already viewed it and acted on it.
Add to this automated snooping on personal online content, with deep learning algorithms analysing social media “likes” of products or services, and you have the basis for everything from more advanced and harder-to-defeat phishing to large-scale extortion attempts. In layman’s terms, what this means is that an algorithm will pick up what you like on social media and bombard you with similar content. Used in an illegal way, it will tell scammers what you are likely to go for in a scam.
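To make that concrete, here is a minimal sketch, in Python, of how like-based targeting works in principle. Everything in it is invented for illustration: the tags, the candidate content and the scoring rule are assumptions, not any real platform’s system. The idea is simply that content is ranked by how closely its tags overlap with what a user has already “liked”.

# A hypothetical sketch of like-based targeting; all data here is invented.

def jaccard(a: set, b: set) -> float:
    """Overlap between two tag sets: 0 means none, 1 means identical."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Tags inferred from posts a user has "liked".
user_profile = {"crypto", "investing", "gadgets"}

# Candidate content: legitimate adverts, or, misused, scam lures.
candidates = {
    "new-phone-deal":        {"gadgets", "shopping"},
    "guaranteed-coin-offer": {"crypto", "investing", "get-rich"},
    "holiday-package":       {"travel", "family"},
}

# Rank candidates by overlap with the profile; the closest match is what
# the user is shown, or, in the criminal case, the scam they are judged
# most likely to fall for.
for name, tags in sorted(candidates.items(),
                         key=lambda kv: jaccard(user_profile, kv[1]),
                         reverse=True):
    print(f"{name}: {jaccard(user_profile, tags):.2f}")

Run as written, the “guaranteed-coin-offer” scores highest, which is precisely the point: the same matching logic that serves a relevant advert can just as easily serve the most tempting scam.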
From a political violence perspective, AI could be used to disrupt the computer systems responsible for vital energy supplies to the UK’s houses and offices, while driverless cars could be used as delivery systems for Improvised Explosive Devices (IEDs).
Can AI be used for ‘good’?
Of course, on the flip side, AI can also be used by those responsible for fighting crime.
Presently, we are seeing Machine Learning being developed for forensic applications in the form of evidence recognition: the ability to visually scan crime scenes for valuable clues.
On the same theme, advanced facial recognition software capable of accurately identifying the age and gender of both suspects and victims is emerging and is already in use in some parts of the UK. This latter aspect, however, does bring with it controversy. Already there is talk of police being prohibited from using facial recognition technology in public spaces because of violations of ethical standards and human rights law.
Beyond the UK, China has been using facial recognition technology for a few years now, and there have been claims of it being used by the Chinese Government to racially profile Uyghur Muslims, a minority group in China being singled out for mistreatment and detention. In a further Orwellian twist, China plans to assign its entire population – that is, 1.4 billion citizens – a personal score based on their behaviour. This is known as social credit, and such scores will then determine access to travel or simply who can buy which products.
AI and the future of jobs?
There is also, of course, the concerning deskilling effect: the advent of future AI will require most occupational roles to be redefined, which in turn could lead to increased levels of criminality as many thousands of skilled people are made redundant.
We are already seeing this process in its early stages in the form of self-service scanners in major retail outlets such as Tesco. Moreover, in one part of the country, computers are now carrying out the work of 250 people, responding to emails reportedly better than humans.
From a criminal justice perspective, what does the impact of AI technology mean for the recruitment and training of the very people responsible for the day-to-day fighting of crime, the next generation of police officers? We are already underway with providing a pathway for police officers to hold degrees, but will we need to go even further, since AI will change the face of criminality at all levels and will therefore demand far more specialised knowledge, even from the lowest-ranking ‘beat bobby’?
In Mary Shelley’s famous horror story Frankenstein, a scientist sets out to give life to his own creation but only succeeds in creating a monster that wreaks havoc all around him. It is a very early work of complete fiction, but might it just be a warning from the past not to meddle too much in shaping the future?
