Fakes are nothing new; what is new is the quality of the fakes. In the old days, letters could be forged, but it took great skill to do it well and people were very alert to the danger. Then cameras came along and, for a while, they were perceived as the ultimate in reliability, hence the saying “the camera never lies”.
However, humans do lie and modern technology is helping to take the ancient art and science of lying to a whole new level.
Deep fakes are what you get when high-quality images and/or audio meet superpowered editing capabilities driven by artificial intelligence. Let’s break that down. These days, even budget-level smartphones can record HD videos and upload them straight to YouTube. In fact, they can upload them to any of the other main social-media platforms, all of which are eager to steal YouTube’s crown as the internet’s video hub. This means that there’s plenty of raw material for deep-fake creators to use.
Similarly, there’s no shortage of decent video-editing software, some of which is available for free. Admittedly, for true deep fakes, you’re going to want something much better than the basic programmes that let you tidy up videos before posting them to YouTube.
Throw in artificial intelligence, which is now everywhere, and you can add the capability to predict and model someone’s behaviour, for example, their speech patterns and habitual body language. In other words, given enough genuine video footage of a person, you can create believable fake footage of them doing just about anything.
As is often the case with technology, deep fakes appear to have started off in the world of porn. Celebrities, generally women, found their heads pasted onto other people’s bodies. Again, there is nothing new in this. What was new, however, was just how well it was done.
It was done so well, in fact, that major alarm bells started to ring. If this could be done in porn, then why not in politics, or policing, especially now that facial-recognition software is becoming increasingly mainstream? Deep-fake creators could make your favourite politician, or your favourite actor, do or say practically anything, in a way that was virtually impossible to recognise as fakery.
Initial efforts to counter deep fakes largely focused on analysing existing videos. The problem with this approach is that, by definition, it can only be used after a video is released. Some companies, therefore, decided to turn the situation around and focus on developing ways for people to mark their own genuine videos, so that anything without that mark could be assumed to be a forgery.
While various approaches are being trialled, the general consensus appears to be that the best approach would be to combine blockchain with some kind of secondary verification, such as a tag or graphic. In other words, to create the 21st-century digital equivalent of a fingerprint pressed into the wax seal on a document.
Authentication technology such as “Amber” can store a hash for every 30 seconds of a video using blockchain; if a stored hash no longer matches the footage, that mismatch points to possible deep-fake interference. Similarly, to verify police body-camera videos, the company Axiom is using blockchain to fingerprint videos in the physical device where the recording was made (i.e. the body cam).
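The idea of fingerprinting a video in 30-second segments can be sketched in a few lines. This is a simplified illustration, not Amber’s or Axiom’s actual implementation: it assumes the video is available as raw bytes at a known data rate, hashes each 30-second chunk, and chains the hashes together so that altering any one segment invalidates every hash that follows it, much as tampering with one block breaks a blockchain.

```python
import hashlib

SEGMENT_SECONDS = 30  # hash one fingerprint per 30 seconds of footage

def segment_hashes(video_bytes: bytes, bytes_per_second: int) -> list:
    """Split the raw video into 30-second chunks and hash each one."""
    chunk_size = SEGMENT_SECONDS * bytes_per_second
    return [
        hashlib.sha256(video_bytes[i:i + chunk_size]).hexdigest()
        for i in range(0, len(video_bytes), chunk_size)
    ]

def chain_hashes(hashes: list) -> list:
    """Link the segment hashes: each entry also covers everything before it."""
    chained, prev = [], ""
    for h in hashes:
        prev = hashlib.sha256((prev + h).encode()).hexdigest()
        chained.append(prev)
    return chained

def verify(video_bytes: bytes, bytes_per_second: int, recorded_chain: list) -> bool:
    """Recompute the chain and compare it with the one stored at recording time."""
    return chain_hashes(segment_hashes(video_bytes, bytes_per_second)) == recorded_chain
```

In a real system, the chained hashes would be written to a blockchain at recording time; a viewer later recomputes them from the footage and checks for a mismatch, which reveals both that tampering occurred and roughly where in the video it happened.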