
DeepFakes Facilitate Fraud & Cyber Attacks

Last year, cybercriminals used Artificial Intelligence (AI) to imitate the voice of a CEO and deceive a UK-based energy company into making a fraudulent transfer of almost $250,000. AI technology can be used to create deepfakes — synthetic voices and videos that can appear to be real. Research suggests that, given the rapid development of this technology, it could soon become impossible to distinguish a real person from a fake one.


The recent advancement of this technology has significant consequences and increases the potential danger of misinformation and fake news. It is now possible to purchase a “unique, worry-free” fake person online. Various websites offer fake people, for example ‘Generated.Photos’ or ‘ThisPersonDoesNotExist.com’. The website Rosebud.AI can even provide an animated, talking fake person.


While most of us could previously identify a fraudulent email, misinformation originating from a hacker may now be very convincing and appear legitimate. This poses a threat to the privacy and security of companies: an employee is unlikely to question the identity of someone who sounds and looks like a senior member of their company. AI technology can thus influence people to make misinformed decisions based on a falsehood. The possibility of creating believable fake images and false data is frightening, because it makes cybercrime and fraud easier to commit. Over the last six years this technology has improved dramatically and continues to improve, putting companies at ever greater risk.


As a result of COVID-19, workplaces had to adapt to remote working, making companies dependent on technology and online networks. This adjustment further increases the risk of companies becoming victims of fraud: with more company information accessible online, companies are more exposed and more vulnerable.


While AI technology is problematic and creates new safety risks, this technological progress is also beneficial. Improved facial recognition programs are valuable for identifying and arresting criminal suspects. Moreover, the technology used to detect and expose fake information is the same AI technology used to create it. Detection technology for identifying deepfakes is still under development, so it is important for companies to remain particularly vigilant and aware of potential cyber-attacks.


Check out our new report on the Counter Misinformation (DeepFake and Fake News) Solutions Market 2020–2026 for further analysis and insights into this growing market.

Gil Siegel

gil@hsrc.biz
