Deep fakes are increasing - how can we identify them?


Deep fakes are becoming increasingly realistic, and increasingly easy to create. Can this be combatted, or will we be living in an ever more 'synthetic' world?

Deep fakes are a form of synthetic media in which a person in an existing image or video is replaced with someone else's likeness. A great example of this is the recent Tom Cruise deep fake account, in which AI is used to simulate an artificial version of Tom Cruise. The likeness is uncanny, and quite creepy!


Additionally, whilst the example above was the work of a single creator, a growing number of apps and software platforms now let anyone produce similar deep fakes. Although not yet at the same level of quality, you can only begin to imagine how this will grow over the next 2-5 years!

Is there any way we can increase our security against these deep fakes?

Currently, there is no reliable way for an everyday viewer to identify whether something is a deep fake or not. This could result in people being accused of crimes they didn't commit, among many other detrimental (or even beneficial) outcomes. It is a huge threat to our security, and it opens up the philosophical question of whether anything retains its realism when everything could, in fact, be a deep fake.

Because of this, policy and security frameworks need to be developed for deep fakes, to ensure that: 1. anything published by the media is verifiable as coming from its claimed source, and 2. anything that has been deep faked can be easily identified.

This is going to be incredibly hard to do, and it will require a full-scale effort to ensure that anything attempted at this scale operates within the correct parameters and legal framework.

My two best suggestions currently are: 

  1. Requesting that all software platforms embed verification data underneath their videos, so that users can decode a video and reveal that it was in fact a deep fake.

This would be beneficial in preventing wide-scale AI platforms from being used by everyday users to create harmful deep fakes (a rough sketch of the idea follows below). However, it still leaves the threat of 'super-users' or 'hackers' who may be able to build their own software that wouldn't be picked up by regulatory bodies.
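To make that concrete, here is a minimal sketch of how embedded content verification could work, assuming the publishing platform holds a signing key and ships a signature alongside each video's metadata. The key, function names and byte strings are all illustrative, and a real system would likely use public-key signatures rather than a shared secret:

```python
import hashlib
import hmac

# Hypothetical signing key; in practice this would be a private key held by
# the publishing platform, not a shared secret baked into code.
PLATFORM_KEY = b"example-signing-key"

def sign_video(video_bytes: bytes) -> str:
    """Produce the signature the platform would embed in the video's metadata."""
    return hmac.new(PLATFORM_KEY, video_bytes, hashlib.sha256).hexdigest()

def verify_video(video_bytes: bytes, embedded_signature: str) -> bool:
    """Check whether the video still matches the signature it shipped with.

    A mismatch means the file was altered after signing, e.g. a face swap
    applied on top of the original footage.
    """
    expected = sign_video(video_bytes)
    return hmac.compare_digest(expected, embedded_signature)

# Usage: the platform signs at upload time; a viewer's client re-checks later.
original = b"...raw video bytes..."
signature = sign_video(original)
tampered = original + b"frames injected by a face-swap tool"
print(verify_video(original, signature))   # True
print(verify_video(tampered, signature))   # False
```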

  2. Utilising non-fungible tokens and blockchain for any video content that is uploaded

As you have seen in my previous posts, I'm a huge advocate of blockchain, cryptocurrency and NFTs. This approach has a wide-spread benefit that could extend to the deep fake and synthetic media realm. For example, whenever a video is uploaded to the internet, or published anywhere, a transaction ID would accompany it. That ID could be searched for in a blockchain explorer, which would identify who uploaded the video and, potentially, where the video came from.

This would allow for a greater level of transparency across videos - but it would of course rely on users being savvy enough to use an explorer to identify a video's origin.
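As a rough illustration of the idea, the sketch below uses a toy append-only ledger as a stand-in for a real blockchain: each upload is recorded with a hash of the video and chained to the previous record, and an 'explorer' lookup traces any copy of the file back to its uploader. All names and data here are made up for illustration:

```python
import hashlib
import json
import time

# Toy in-memory ledger; a production system would anchor these records on an
# actual blockchain and expose them through a public explorer.
ledger = []

def register_upload(uploader: str, video_bytes: bytes) -> str:
    """Record who uploaded a video and return its 'transaction ID'."""
    record = {
        "uploader": uploader,
        "video_hash": hashlib.sha256(video_bytes).hexdigest(),
        "timestamp": time.time(),
        "prev": ledger[-1]["tx_id"] if ledger else "genesis",
    }
    # Each transaction ID is derived from the record, including the previous
    # ID, so history cannot be rewritten without invalidating later records.
    record["tx_id"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(record)
    return record["tx_id"]

def explorer_lookup(video_bytes: bytes) -> list:
    """What an explorer search would do: find who registered this exact file."""
    video_hash = hashlib.sha256(video_bytes).hexdigest()
    return [r for r in ledger if r["video_hash"] == video_hash]

# Usage: register a clip, then trace any copy of it back to its uploader.
register_upload("@newsroom", b"...original clip bytes...")
print(explorer_lookup(b"...original clip bytes..."))  # the @newsroom record
print(explorer_lookup(b"...re-encoded fake..."))      # [] - no provenance
```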

In my view, this is something that could be looked into further. Potentially, AI could be used to identify common patterns in deep fake videos and flag a warning (similar to what Twitter does with some content) that a video could be deep faked and should be checked on the explorer.
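As a sketch of what that flagging could look like: combine the scores of several detectors and attach a warning label when the combined score crosses a threshold. The detector names and scores below are invented for illustration; real detectors would be trained models inspecting things like blending artefacts or blink rates:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DetectorResult:
    """Output of one (hypothetical) deep fake detector."""
    name: str
    score: float  # 0.0 = looks authentic, 1.0 = looks synthetic

def flag_video(results: List[DetectorResult], threshold: float = 0.7) -> str:
    """Average the detector scores and decide which label to show the viewer."""
    combined = sum(r.score for r in results) / len(results)
    if combined >= threshold:
        return f"Warning: possibly synthetic (score {combined:.2f}) - check the explorer record"
    return f"No flag (score {combined:.2f})"

# Usage with made-up detector outputs:
results = [
    DetectorResult("face_blending_artefacts", 0.85),
    DetectorResult("blink_rate_anomaly", 0.75),
    DetectorResult("audio_visual_sync", 0.60),
]
print(flag_video(results))  # combined score ~0.73 -> warning label
```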

Where are we headed?

Most people see deep fakes taking us in one of two directions: down a very dark road, or towards a place where they improve our ability to render VR and media and enhance our communication across several platforms.

I hope for the latter, but to get there, serious research needs to be carried out into the world of deep fakes and how we can identify them going forward. Technology progresses at a scary speed, much faster than regulation, and things could get extremely creepy and damaging in the years to come.