Political Campaigns

Detecting Collusion & Inauthentic Behavior

DOES YOUR CAMPAIGN HAVE AN ANTI-BOT STRATEGY?

Bots (computer programs of varying sophistication that post content online under a phony human identity) and trolls (generally low-paid or volunteer humans whose behavior follows carefully defined scripts) are expected to be used with unprecedented boldness, and to unprecedented effect, in the 2024 U.S. federal election. With the proliferation of advanced tools like ChatGPT accessible to the public, the ease of creating and disseminating disinformation has reached alarming heights. Furthermore, bots are extraordinarily cost-effective: the incremental cost of each additional bot is essentially zero. Even with little or no human involvement, a large number of bots can be used to create public impressions, such as the inevitability of a particular candidate’s victory, or to invent and then validate a “fact.”

Genuine political discourse is in danger of being drowned out by the overwhelming noise generated by bots and trolls. Legitimate candidates may find it exceedingly challenging to counter the barrage of disinformation, and voters will struggle to distinguish fact from fiction. The public’s trust in the electoral system could be severely eroded, leading to widespread disillusionment and apathy. In a worst-case scenario, bots could sway the election outcome in favor of a candidate who does not genuinely represent the will of the people.

In light of this, an effective defense against these tactics is crucial to safeguarding the integrity of the democratic process and to fostering public trust in the electoral system.

What Chenope Technology Can Do for Campaigns

Much like writing a hit song, coming up with good “soundbites” or content that “goes viral” is far harder than it looks. This is one reason the really catchy messages are so often created by the same small circle of people, those known to be good at it, and then broadcast widely over a cooperating network for maximum reach. For example, using our algorithms, we were able to determine that in a prior U.S. presidential election, the vast majority of widely repeated messages against the Republican candidate originated from only four individual accounts.
While the methods for creating and spreading disinformation have become more sophisticated and are continuously evolving, there are discernible patterns that persist. Understanding these patterns provides a critical advantage in identifying the sources and mitigating the impact of false or misleading information.
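Chenope has not published the algorithms behind findings like the one above, but the general idea of origin attribution can be sketched simply: group near-identical messages, then credit each widely repeated one to the account that posted it first. The following is a minimal illustration of that idea only; the toy data, the normalization rule, and the repetition threshold are all hypothetical.

```python
from collections import defaultdict
import re

# Toy records: (account, unix_timestamp, text). Hypothetical data for illustration.
posts = [
    ("acct_a", 100, "Candidate X will RUIN this country!!!"),
    ("acct_b", 140, "candidate x will ruin this country"),
    ("acct_c", 150, "Candidate X will ruin this country."),
    ("acct_a", 200, "Vote early, vote often"),
]

def normalize(text: str) -> str:
    """Collapse case and strip punctuation so near-duplicates match."""
    return re.sub(r"[^a-z0-9 ]+", "", text.lower()).strip()

# Group posts by normalized text, then attribute each widely repeated
# message to the account that posted it first.
groups = defaultdict(list)
for account, ts, text in posts:
    groups[normalize(text)].append((ts, account))

MIN_REPEATS = 3  # threshold for "widely repeated"; tune per platform
origin_counts = defaultdict(int)
for occurrences in groups.values():
    if len(occurrences) >= MIN_REPEATS:
        _, origin = min(occurrences)   # earliest timestamp wins
        origin_counts[origin] += 1

print(dict(origin_counts))  # -> {'acct_a': 1}
```

A real system would need fuzzier matching (paraphrases, images, links) and far more data, but the shape of the analysis, cluster then attribute, is the same.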

Detect

Detect instances of inauthentic coordinated activity against your candidate that is not properly labeled as being associated with a political organization (a simplified detection sketch follows this list).

Detect individual accounts that are not who or what they claim to be.
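As a rough illustration of the first kind of detection: one common signal of coordination is pairs of accounts that repeatedly post the same message within a short time window. The sketch below implements only that heuristic; it is not Chenope’s method, and the data, window, and thresholds are hypothetical.

```python
from collections import defaultdict
from itertools import combinations

# Toy posts: (account, unix_timestamp, normalized_text). Hypothetical data.
posts = [
    ("p1", 1000, "x is unelectable"),
    ("p2", 1030, "x is unelectable"),
    ("p3", 1045, "x is unelectable"),
    ("p1", 5000, "new poll shows x losing"),
    ("p2", 5020, "new poll shows x losing"),
]

WINDOW = 120     # seconds: same message within this window counts as co-posting
MIN_EVENTS = 2   # pairs co-posting this many distinct messages get flagged

by_text = defaultdict(list)
for account, ts, text in posts:
    by_text[text].append((ts, account))

# For each message, record every pair of accounts that posted it close together.
copost = defaultdict(set)  # (acct_a, acct_b) -> set of co-posted messages
for text, occ in by_text.items():
    occ.sort()
    for (t1, a1), (t2, a2) in combinations(occ, 2):
        if a1 != a2 and abs(t2 - t1) <= WINDOW:
            copost[tuple(sorted((a1, a2)))].add(text)

suspicious = {pair for pair, msgs in copost.items() if len(msgs) >= MIN_EVENTS}
print(suspicious)  # -> {('p1', 'p2')}
```

One-off coincidences are expected at scale, which is why the sketch flags only pairs that co-post repeatedly; production systems layer many more such signals before labeling anything coordinated.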

Visualize

Visualize trends and individual instances of collusion and inauthenticity on an interactive digital platform in a clear and accessible manner for the voting public.
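For instance, once co-posting pairs have been detected, one simple way to surface coordination visually is as a graph whose nodes are accounts and whose edges are repeated co-postings; densely connected clusters then stand out as candidate coordination rings. Below is a minimal sketch using the networkx and matplotlib libraries, with hypothetical pairs carried over from the detection sketch above; it is not Chenope’s platform.

```python
import networkx as nx
import matplotlib.pyplot as plt

# Hypothetical co-posting pairs, e.g. output of the detection sketch above.
suspicious_pairs = [("p1", "p2"), ("p2", "p3"), ("p1", "p3"), ("p4", "p5")]

G = nx.Graph()
G.add_edges_from(suspicious_pairs)

# Connected clusters of three or more accounts are candidate coordination rings.
for cluster in nx.connected_components(G):
    if len(cluster) >= 3:
        print("possible coordination ring:", sorted(cluster))

nx.draw_networkx(G, with_labels=True, node_color="lightcoral")
plt.savefig("collusion_graph.png")
```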

Combat

Actively combat instances of coordination in the forums where they occur through the use of a personalized avatar that calls out suspect accounts and educates the public. Trusted voters can play an active role by reporting suspicious behavior to the avatar.