Deepfake attacks are prompting drastic security changes at enterprises


UK companies are waking up to the threat of deepfake attacks, with most having developed policies to counter the threats posed by this emerging attack vector.

According to GetApp’s 2024 Executive Cybersecurity Survey, 68% of UK respondents said their organization has developed a deepfake ‘response plan’ amidst a surge in AI-powered social engineering techniques.

“AI-generated attacks and deepfakes are provoking anxiety among companies but also action. UK cybersecurity professionals are taking proactive steps to plan for the risks of deepfake attacks, with dedicated measures in place for when an incident occurs,” said David Jani, analyst at GetApp.

“The UK is, however, slightly behind the global averages on some of the more preventative measures, so more work can be done in this regard.”

More than two-thirds of respondents are required to use biometric authentication in the workplace, with fingerprint scanning the most common technique, used by three-quarters. Meanwhile, six in ten use facial recognition and four in ten use voice recognition technology.

However, while 92% report success with the security measures they're using, trust in these systems is falling: close to 30% of UK respondents expressed significant concern about the potential for AI to be used for biometric identity fraud.

Around half the UK IT professionals surveyed said they had privacy concerns, with 42% fearing potential identity theft from using biometric authentication.

Deepfake attacks are becoming more common

As AI improves, deepfakes are becoming more of a problem. Earlier this summer, for example, research from Medius revealed that nearly two-thirds of finance professionals have been targeted by deepfake fraud, with 44% of those targeted falling for the scam.

Similarly, research by ISMS.online in May found that nearly a third of UK businesses have had deepfake security issues, mainly through business email compromise.

In one recent high-profile example, British engineering company Arup fell victim to a deepfake attack when an employee was fooled into sending £20 million to criminals posing as a senior staff member.

Alongside standard security measures, the GetApp report recommends looking out for signs that a video may be a deepfake.

These include ‘jerky’ body movements, blurring around facial features, unnatural eye movements, unusual coloration, or inconsistent audio.

“If you are in doubt about the person you are speaking to, you can make it easier to spot deepfakes by asking them to turn their head 90° to the side to see a profile view of their face,” the report advised.

“This can disrupt the software algorithm that projects another face onto the speaker, as it has to adapt to a shape it is not as used to working with.”
