Bots (computer programs of varying degrees of sophistication that post content online under a phony human identity) and trolls (generally low-paid or volunteer humans whose behavior follows carefully defined scripts) are expected to be used with an unprecedented level of boldness in the 2024 U.S. federal election.
Why? Because our brains are wired by millennia of evolution to believe things that many (apparently unrelated) observers tell us are true. Bots in particular are extraordinarily cost-effective: the incremental cost of each additional bot is essentially zero. Even with limited or no use of humans, a large number of bots can be used to create public impressions about things such as the inevitability of a particular candidate's victory – or to invent and then validate a "fact."
"Disinformation" may now be the most misused and poorly understood concept in the English language. Even the dictionaries have yet to catch up to the real-world meaning of the term.
We all understand that when someone says they had been "misinformed" on a particular topic, they mean that they were given incorrect information. What we do not know is why or how they were misinformed. Perhaps the information they received was simply out of date, for example, or the result of a miscommunication, typo, or the like. Or perhaps the information in question was what is often referred to as "fake news" – specific information that, whatever the motivation, is largely or entirely made up.
However, no one says that they were "disinformed." This is because real disinformation is not any single fact or story. As practiced by serious actors, it is a coordinated, sustained campaign composed of many different pieces of information, designed to influence a target audience over a period of time – such as during the course of a war or an election cycle.