
Disinformation is the network

“Disinformation” may now be the most misused and most poorly understood term in the English language. Even the dictionaries have yet to catch up with its real-world meaning.

We all understand that when someone says they were “misinformed” on a particular topic, they mean that they were given incorrect information. What we do not know is why or how they were misinformed. Perhaps the information they received was simply out of date, or the result of a miscommunication, a typo, or the like. Or perhaps the information in question was what is often referred to as “fake news”: information that, whatever the motivation behind it, is largely or entirely made up.

However, no one says that they were “disinformed.” This is because real disinformation is not any single fact or story. As practiced by serious actors, disinformation is a coordinated, sustained campaign composed of many different pieces of information, designed to influence a target audience over a period of time, such as the course of a war or an election cycle.

Let’s talk about Freedom of Speech

Freedom of Speech is a basic right for all. Censorship has no place in a democracy. But of course free speech is often rather free from reality, and even more often free of any actual objective value. The Internet has not changed this. What it has done is make the free speech of many millions of people publicly and easily available. Human nature being what it is, a good chunk of this now highly visible free speech can reasonably be considered “fake news.”

Individual pieces of “fake news” can arise in a number of different ways. The majority of it is not the work of sinister forces, and in most instances it is fairly harmless to society. A delusional person posting that he saw three Martians strolling down his street is likely to have very limited impact. “Fake news” centered on a specific person is often the result of a grudge, a personal rivalry, or the like; if the target is a celebrity, there is an obvious financial motivation. Furthermore, reasonable people may disagree in many cases as to whether or not a story is fake. Many things are matters of guesswork or interpretation, for example whether someone “stole” a friend’s husband, or whether a given policy has proved good or bad. Apart from very clear-cut cases such as the Martians, arbitrating truth is impossible; reality is unfortunately messy, ambiguous, and subjective. Attempts to arbitrate it anyway amount to censorship.

Such censorship is a wholly unjustifiable evil. In most cases, individual nuggets of information only start to become harmful when they are both amplified and supported in a meaningful way by a broad network. (Amplification is simply various forms of repetition, often across different social media platforms. Support involves different online identities verifying and/or adding further details to the original information nugget; support is what transforms an individual piece of information into a narrative.) At the point at which this occurs, the individual piece of information starts to become something different, and more dangerous.

At Chenope, we thus focus on detecting collusion: the unnatural, inauthentic amplification or support of particular information. This approach has the merit of being mathematically objective and evidence-based, and it avoids the temptation to try to assess ground truth.

Chenope Disinformation Technology

Our technology analyzes the mathematical characteristics of the transmission networks of different stories in order to detect evidence of collusion. These transmission networks have detectable properties for a simple, common-sense reason: they are the direct result either of workers following orders or of the rules embedded in a computer program (a bot).
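To make the idea concrete, here is a minimal sketch, in Python, of one way such a network signal might be computed. The data, names, and threshold are all illustrative assumptions for this example, not Chenope's actual data model or algorithms. The intuition: two accounts that repeatedly share the same stories within seconds of each other are behaving in a way organic users rarely do.

from itertools import combinations
from collections import defaultdict

# (account, story_id, unix_timestamp) share events; toy data for illustration
shares = [
    ("acct_a", "story_1", 1000), ("acct_b", "story_1", 1004),
    ("acct_a", "story_2", 2000), ("acct_b", "story_2", 2003),
    ("acct_a", "story_3", 3000), ("acct_b", "story_3", 3006),
    ("acct_c", "story_1", 1500), ("acct_c", "story_9", 9000),
]

def share_profiles(events):
    """Map each account to {story: first time it shared that story}."""
    profiles = defaultdict(dict)
    for acct, story, ts in events:
        profiles[acct].setdefault(story, ts)
    return profiles

def coordination_score(p1, p2, max_lag=30):
    """Fraction of all stories either account touched that both accounts
    shared near-synchronously (within max_lag seconds)."""
    common = set(p1) & set(p2)
    synced = sum(1 for s in common if abs(p1[s] - p2[s]) <= max_lag)
    union = len(set(p1) | set(p2))
    return synced / union if union else 0.0

profiles = share_profiles(shares)
for a, b in combinations(sorted(profiles), 2):
    score = coordination_score(profiles[a], profiles[b])
    if score > 0.5:  # illustrative threshold, not a real tuned value
        print(f"possible coordination: {a} <-> {b} (score={score:.2f})")

In practice, a signal like this would be only one feature among many, aggregated across thousands of accounts and weighed against organic baselines.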

However, because disinformation is often achieved through the suppression of specific pieces of information that do not serve the desired narrative, our technology also looks for evidence that particular information nuggets are being suppressed in content transmitted through a suspiciously behaving network.
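As an illustration of the underlying idea, the toy sketch below compares how often each information nugget appears in a suspect network's output versus a baseline corpus, and flags nuggets that are conspicuously missing. The counts, corpus sizes, and cutoff are invented for this example and do not describe Chenope's actual pipeline.

# How often each "nugget" is mentioned in each corpus (toy numbers)
baseline_counts = {"ceasefire_report": 120, "troop_numbers": 95, "aid_convoy": 80}
suspect_counts  = {"ceasefire_report": 110, "troop_numbers": 2,  "aid_convoy": 75}
baseline_total, suspect_total = 5000, 4800  # documents in each corpus

def suppression_ratio(nugget):
    """Mention rate in the suspect network relative to the baseline;
    a ratio near 0 suggests the nugget is being suppressed."""
    base_rate = baseline_counts[nugget] / baseline_total
    susp_rate = suspect_counts.get(nugget, 0) / suspect_total
    return susp_rate / base_rate if base_rate else float("inf")

for nugget in baseline_counts:
    ratio = suppression_ratio(nugget)
    if ratio < 0.2:  # illustrative cutoff
        print(f"possible suppression of '{nugget}' (ratio={ratio:.2f})")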

Our components similarly look for other types of evidence of inauthentic behavior, for example the use of automated translation technology, or content in a local forum that fails to correspond to regional linguistic patterns.
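As a deliberately simplified illustration of the linguistic-pattern idea, the sketch below scores a post against spelling and vocabulary markers expected for a region, using British versus American English as a stand-in for finer-grained regional variation. The marker lists are assumptions for this example only; a real system would rely on far richer regional language models.

import re

# Illustrative marker lists, not a production lexicon
REGIONAL_MARKERS = {
    "en-GB": ["colour", "organise", "lorry", "whilst"],
    "en-US": ["color", "organize", "truck", "while"],
}

def region_scores(text):
    """Count marker hits per region (prefix match covers simple inflections)."""
    words = re.findall(r"[a-z]+", text.lower())
    return {region: sum(1 for w in words for m in markers if w.startswith(m))
            for region, markers in REGIONAL_MARKERS.items()}

post = "The color of the truck surprised everyone while we organized the rally."
print(region_scores(post))  # {'en-GB': 0, 'en-US': 4}: suspicious on a UK forum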

Our approach is derived from the study of sophisticated disinformation campaigns around the globe.