
Deepfakes!

The word “deepfakes” is so new that most spellcheckers still flag it as an error. It is the term used to describe the latest form of cybercrime, in which voice recordings are hacked and used to convey fraudulent verbal instructions to, say, transfer funds, cancel a contract or agree a deal. In simple terms, it is a worrying example of how artificial intelligence can be flagrantly abused.

But on another level it provides further evidence of the lengths to which criminals will go in order to circumvent rules and regulations. You’d imagine regulators would have noticed this by now – but no, they keep coming up with more barriers to entry, thereby providing a honey-pot for fraudsters. It reminds us of the young schoolchild being told not to eat the cake. The restriction merely provides unbearable motivation to eat the cake.

EU regulators have introduced a monster rulebook by the name of MiFID II, and one of its most prominent requirements is that all telephone conversations between regulated firms and their clients be recorded. The regulatory intent is obviously to protect all parties in the case of dispute. But the unintended consequence is to create a treasure trove of voice recordings that are just waiting to be hacked. Of course, such data should be suitably protected with appropriate firewalls, but we all know that fraudsters have a habit of circumventing even the most dependable installations.

The potential for scheming deception is obvious. If subordinates receive an instruction from the boss by e-mail to transfer funds to a third party, they may become suspicious about any number of items in the message: the request is unusual and not normal practice, some of the words are misspelt, the amount seems large, the third party is unknown, and so on. Similar suspicions will also be aroused where the instruction is received by text message. But what about voicemail? Verbal instructions that are clearly in the manager’s voice, with the same intonation and accent, seem much more authentic and are more likely to be executed. These are manipulative deepfakes!

Sceptics may justifiably argue that company procedures should catch this type of cybercrime – for instance, the boss should never be allowed to transfer funds in this fashion without two counter-signatures. But deepfakes are being used for much, much more than simple money-transfer con tricks. Consider an important company announcement. Company CEOs and Financial Controllers regularly provide information on financial performance via investor relations conference calls. If the company announces unexpectedly good news then the share price will spike. Anybody with such inside information could easily benefit.
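As a rough illustration of the counter-signature control described above (all names, roles and thresholds here are hypothetical, not drawn from any real payment system), the rule can be sketched in a few lines:

```python
# Minimal sketch of a dual-authorisation ("two counter-signature") control
# for fund transfers. All names and roles are hypothetical.

def authorise_transfer(amount, requested_by, signatories):
    """Approve a transfer only if at least two distinct counter-signatories,
    neither of whom is the requester, have signed off."""
    independent = {s for s in signatories if s != requested_by}
    return len(independent) >= 2

# A lone voicemail that sounds like "the boss" is never sufficient:
print(authorise_transfer(250_000, "ceo", ["ceo"]))             # False
print(authorise_transfer(250_000, "ceo", ["cfo", "auditor"]))  # True
```

The point of such a control is that a convincing voice alone cannot move money: at least two independent humans must still act.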

So what if scammers could impersonate the voice of the CEO using snippets of audio hacked from previous investor calls? By deploying artificial intelligence programmes to sift through hours of voice recordings, hackers can now generate an extremely plausible likeness of any voice. This has to be concerning to many industries at various levels. Audio telephone conversations will require expensive encryption. Maybe the most appropriate solution is old-fashioned face-to-face conversation. Now there’s a thought!
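Short of encrypting every call, one comparatively cheap defence for the archives themselves is an integrity tag on each stored recording, so that a substituted or synthesised file fails verification. The sketch below is purely illustrative (key management is assumed to be handled securely elsewhere and is not shown):

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch: tag each archived call recording with an HMAC so a
# swapped or tampered recording can be detected on retrieval.

key = secrets.token_bytes(32)  # per-archive secret key (illustrative only)

def tag_recording(audio_bytes: bytes) -> str:
    """Compute an integrity tag over the raw audio."""
    return hmac.new(key, audio_bytes, hashlib.sha256).hexdigest()

def verify_recording(audio_bytes: bytes, tag: str) -> bool:
    """Check a recording against its stored tag in constant time."""
    return hmac.compare_digest(tag_recording(audio_bytes), tag)

original = b"raw audio of the investor call"
tag = tag_recording(original)
print(verify_recording(original, tag))            # True
print(verify_recording(b"synthesised audio", tag))  # False
```

Note that this protects the integrity of the archive, not the confidentiality of live conversations; it would not stop a fraudster who obtains the audio itself, only one who tries to slip an altered recording back into the store.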
