Artificial Intelligence is dominating the headlines, from despicable deepfakes of Taylor Swift raising concerns in US politics to UK Government announcements of massive investment in AI research and regulation. AI will likely change every facet of our lives. So it would be naive, even dangerous, not to expect criminals to pounce on the opportunities AI brings to their activities.
In January, I raised my concerns about a new era of AI-assisted crime in a Parliamentary debate, focusing specifically on AI scams. It was the first time the topic had been debated in the House.
We are addicted to technology; mobile devices live in all our pockets, tracking our every movement. Many televisions now connect to the internet, and social media has connected communities in ways we never thought possible. For all the positives, as I saw first-hand as a member of the Online Safety Public Bill Committee, the online world is also full of risk and harm. The scale of use is almost unfathomable: Facebook alone has over 3 billion monthly active users worldwide, each sharing thoughts, connections and, most notably, data. I am confident that if any Government asked citizens to share the same level of personal data that many give away for free to social media platforms, there would be uproar; we would probably see protests on the streets.
We have, ultimately, become data sources. The fear I raise is that this personal data could be harvested with increasing sophistication for AI-assisted criminality. By "data", I mean not just your name, your date of birth or the names of friends, family and colleagues, but your face, your voice, your fears and hopes: your very identity.
Criminals will have no qualms about using personal data to scam, fool, threaten or even blackmail. Many victims may not even see it coming.
It is hard to find someone who has not been targeted by technology-driven crime. Sadly, most of us will have received at least one scam text message. They are usually pretty unconvincing, but that is because they are dumb messages, in the sense that they carry no context. But imagine that the message is not just a text but a phone call, or even a video call, and that you can see a loved one's face or hear their voice. They can start by asking how you are, even mentioning something you recently did together: information only a friend or family member would know. On the call, they might say they are in trouble and ask you to send £100 or more because they have 'lost their bank card'. I am sure most of us would not think twice about helping a loved one, only to find out that the person we spoke to was not real but a scam built on an AI-cloned voice or deepfake.
My concern is that such a precise scam could be replicated thousands of times in parallel, creating 'Flash Scams' that allow AI-assisted criminals to make a fortune.
In 2020, the Dawes Centre for Future Crime at UCL produced a report on AI-enabled future crime. It placed audio and visual impersonation at the top of the list of "high concern" crimes, alongside tailored phishing and large-scale blackmail. More recently, in May 2023, a McAfee report on artificial intelligence and cybersecurity entitled "Beware the Artificial Impostor" set out the risks of voice clones and deepfakes and revealed how common AI voice scams have become, reaching ever more people in their lives and homes.
I argue that a multifaceted approach is necessary, from anti-virus-style alerts when AI could be in use, to agreements with insurance companies and banks to protect consumers financially from scams. The list is endless.
Scams can ruin people's lives: mentally, financially and in other ways. I know the Government is investing both time and energy in Artificial Intelligence, and I welcome the world-leading work being done in the UK. But time is not on our side. We all need to be aware of the immense opportunities and risks AI brings, and mindful that the bad guys will do whatever they can to take advantage of it.