Disrupting Disinformation

An analysis of ‘deepfakes’.

The disinformation campaigns enabled by the emergence of synthetic media, or ‘deepfakes’, could intensify the era of ‘fake news’. With deepfake technology, false or altered video and audio clips could be used to smear people, denigrate the media, deny people fair representation or spread misleading information.

Without effective checks and balances, society will not be able to tell, for example, whether a given video clip has been doctored or is legitimate. And because deepfake technology is still in its infancy, no one seems to have a clear idea of how to regulate deepfakes in order to deter their proliferation.

The problem of the fake news era — an inductive argument

The era of fake news is becoming more prominent, as seen with misinformation bots that divert accurate information away from its intended recipients and then sow doubt about a particular topic in the minds of their victims.

Augmented fake news could arise, for example, through the aggregation and dissemination of manipulated videos and images that cannot be distinguished from genuine ones; the resultant media are known as deepfakes. The proliferation of these fake media and their use to target specific groups of individuals represent a new phenomenon of the digital age.

The consequences of fake news promulgated by synthetic media or ‘deepfakes’ will unfortunately be as problematic as the social forces that spread it, especially considering how social media and fake news are already instrumentalised to shape people’s views and behavior and to disseminate political propaganda.

Here are three attributes of fake news and deepfakes that, as I have come to understand, pose significant threats to public discourse:

  1. Fake and altered videos and images.

Here, the fundamental tasks are to identify how fake and altered media are shared and spread, to assess the risk of their proliferation, and to determine how they can be effectively countered through critical reporting that identifies and confronts the propaganda techniques seen with deepfakes.

To uproot fake news techniques, I believe policymakers need to support the media in challenging disinformation and disseminating accurate information, and to provide a clear framework of measures that journalists and ‘fact-checkers’ can follow to debunk and defeat fake and altered media, or deepfakes.

Ideally, policymakers should work towards rules that ensure fake and altered media are unambiguously labelled, so that people understand when the media they consume are fake or altered. Such rules should make it clear that widespread disinformation ushered in by altered media is not to be conflated with genuine news.

There are, however, several aspects of the digital age that foster fake and altered media.

One is the global, borderless and interconnected nature of the web, which makes it easier for political actors, contracted ‘state’ players and their supporters to subvert democratic institutions, bypass traditional media and put their deceptive message in front of people without using conventional, regulated media.

Another is the proliferation of facilitative technologies that permit the creation of fake and altered media, a new phenomenon referred to by the Global Disinformation Index (GDI) as the Digital Revolution 2.0.

Finally, there is the phenomenon referred to as the Point-of-False Reads, a major cause of the spread of fake and altered media. This deception manifests when people do not recognise that the media they are seeing are fake or altered, or when they confuse the quality of the underlying video with the simulated media itself.

The Point-of-False Reads is therefore based on the idea that people rarely consider that what they are seeing online is fake or altered.

This contrasts with another kind of false-reading phenomenon: viewers who have come to expect that any media could be fake or altered become less likely to evaluate content for authenticity at all, even when it is presented as coming from a reputable source.

A potential consequence of the Point-of-False Reads is that fake and altered media can reach a much wider audience: misleading information is easier to spread across social networks because the formats, policies, rules and scale of such open public forums are well suited to going viral.

Social media is also typically cheaper than mainstream media, owing to lower overheads, and so it can easily be used to propagate fake and altered media. It is also a significantly more democratic medium, in that it allows political actors and their supporters, among others, to be far more active than traditional media (TV, radio, print) permit, since the latter are bound by broadcast licensing agreements and/or notions of impartiality.

Essentially, fake and altered media are a preferred means by which political action groups exercise influence over people, and their spread is often used to achieve political goals or to counter accurate political messages.

When society lacks a robust understanding of how to distinguish fake and altered media from real media, we are more likely to be influenced by them and to become trapped in an ‘echo chamber’ that reinforces what we already subjectively believe or sways us away from factual reality.

Future Human