We are currently living in a time of social engineering, unmatched in human history, thanks to social media. Manipulators and malign influence peddlers have been quick to move in and hack this global shift in interactions. They are relentless and constantly evolving to evade attempts to combat them.

Adversarial actors are effectively weaponizing social media platforms in order to rig the human mind. They interfere in elections, financial institutions, public health—in fact, there is almost no corner left untouched.

The ultimate goals vary: sowing discord, using inauthentic manipulation to sway elections, economic or financial sabotage, or PR campaigns to shift public opinion about a person, company or country. In every case, the purpose is to get people to believe a particular narrative, regardless of how deceptive or destructive that narrative may be.

For the purposes of this article, the focus will be primarily on Twitter, although much of what is discussed here may apply to other platforms as well.

First, let’s define a few terms:

Bots: Automated accounts where the frequency and content of tweets are controlled by software. Bots often act as accelerants of messaging. This can be malicious, in the form of manipulation (e.g., spreading disinformation or amplifying divisive propaganda); useful, in the form of alerts about natural disasters; or benign, in the form of routine marketing tweets for everyday products.

Trolls: Online false identities, assumed by individuals acting alone, or as part of an organized group (for example, a troll factory, where individuals are paid to be deception agents online). Their intent is to spread malicious content, attack individuals and be disruptive. The human handler of troll accounts can manage many accounts at once. (A distinction can be drawn between individuals acting on their own and paid trolls).

Cyborgs: Accounts that combine partial automation with human handlers. For example, a factory troll can manage more accounts when those accounts are partially automated: bots take over and continue tweeting while the human handler is offline or engaged with other accounts in the roster. Software runs the automated portion of these accounts.

Sock puppet account: An anonymous account with a default profile image. The term has also come to mean the misleading use of an online identity, often employed to evade removal by a social media company like Twitter. Sock puppet accounts often post praise of their other account(s) while posing as an entirely separate third party.

Troll factory, aka troll farm: Large numbers of inauthentic accounts handled by people hired to pose as authentic users for malign purposes. These can be state-backed operations of adversaries, such as the infamous Russian Internet Research Agency, or troll factories for hire that exist in multiple regions and countries around the world, for example, in India, Moldova, Gambia, the Philippines, even Arizona. PR companies can act as brokers for for-hire troll factories, disguising who the real client is and what influence campaign they are mounting.

Backing: When large numbers of inauthentic accounts “back” the account of an authentic user, making them appear more influential and increasing the size of their megaphone. This, in turn, encourages legitimate users to follow the account and pushes it higher in Twitter’s rankings.

Boost: When bots and troll factory accounts are used to artificially amplify tweets, whether to help get a hashtag to trend or to promote a particular desired narrative. The boosted tweets are then prioritized by Twitter’s algorithms and receive a higher ranking, becoming more visible to people exploring a hashtag regardless of whether they follow the account that put out the tweet. (Genuine users can also encourage followers to “boost” a message, but because this is authentic, it has different connotations than the adversarial use of boosting, which employs thousands of bots.)

Brigading: Coordinated abusive engagement. For example, coordinated mass reporting of a particular account with the goal of having the account silenced, sidelined or removed. It is similar to swarming, where large numbers of coordinated malicious accounts flood a target account with abusive replies and quote tweets in an effort to attract others to join in the swarming and overwhelm the user of the target account. Swarming has a few meanings, including “bot swarming” when large numbers of bots all tweet versions of the same content in a coordinated fashion to give the impression that many people are agreeing and saying similar things.

CIB (Coordinated Inauthentic Behaviour): Facebook describes the influence operations behind CIB as “coordinated efforts to manipulate public debate for a strategic goal where fake accounts are central to the operation.”

Misinformation: False information, accidentally spread, where the person spreading the false information is unaware of its falsehood.

Disinformation: False information, deliberately spread with intent to deceive.

Malinformation: Factual information spread with malicious context, meant to mislead or inflict harm. Hacked emails or photos, “revenge porn”, doxxing and phishing are all examples of malinformation.

Propaganda: Can be true or false. It can contain elements of disinformation and malinformation. But its ultimate purpose is to influence and manipulate.

Marc Owen Jones, an associate professor at Hamad Bin Khalifa University and author of Digital Authoritarianism in the Middle East & Political Repression in Bahrain, prefers to use the term deception as a catchall to include elements of disinformation, malinformation and propaganda.

“I think [the term] ‘deception’ is important, because ‘deception’ carries the mode of delivery, as well as the content itself….Deception is more about the means of delivery, which can tell us a lot about the intent as well. If someone is spreading a message with a thousand fake accounts, their intention is not good. Or their intention is to do something that is not transparent.”

Troll networks

In the seven years that I have independently studied inauthentic activity on Twitter, I have seen a tremendous increase in complexity. I have paid particular attention to what I call troll networks, which I define as follows: complex networks of authentic users, troll factory accounts, bots, cyborgs and trolls that amplify and spread disinformation, malinformation, misinformation and propaganda.

The authentic users (real people) in these networks are essential to the network’s success. Because the authentic users know they’re not bots, they are more inclined to believe that others in their network must also be real. That belief ignores how the messaging within these networks can be inauthentically shaped, amplified and weaponized.

Bots can act as brokers between groups, amplifying strategic content in order to connect different groups, thereby creating increasingly large and complex echo chambers. The high engagement “star” accounts in these networks can be artificially raised up as opinion leaders whose views are then carried forward by other users as well as amplified by bots and troll accounts.
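This brokering dynamic is, at its core, a question of network structure. As a purely illustrative sketch (not drawn from any of the research cited in this article), an analyst might flag potential broker accounts by computing betweenness centrality on a retweet graph; the account names and edges below are hypothetical.

```python
# Illustrative sketch only: flagging potential "broker" accounts that sit
# between otherwise separate communities in a retweet graph.
# Account names and edges are hypothetical; real analyses work with
# millions of retweet/mention edges collected from platform data.
import networkx as nx

# Hypothetical edges: (account_that_retweeted, account_that_was_retweeted)
retweet_edges = [
    ("suspected_bot_A", "antivax_star"), ("suspected_bot_A", "gun_rights_star"),
    ("suspected_bot_B", "antivax_star"), ("suspected_bot_B", "antimask_star"),
    ("user_1", "antivax_star"), ("user_2", "antivax_star"),
    ("user_3", "antimask_star"), ("user_4", "gun_rights_star"),
]

G = nx.Graph()
G.add_edges_from(retweet_edges)

# Betweenness centrality is highest for nodes that lie on many shortest
# paths between other nodes -- a rough structural signal of brokerage.
centrality = nx.betweenness_centrality(G)
for account, score in sorted(centrality.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{account}: {score:.3f}")
```

In practice, researchers combine structural signals like this with account-level features (posting frequency, account age, content similarity) before drawing any conclusions about coordination or automation.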

This gives the impression to other users, as well as to influential media outlets, that many people hold these same views, and thus becomes a way to legitimize and normalize increasingly extreme viewpoints. It moves the Overton window, if you will. A movement that is, in reality, relatively small can appear large, which in turn encourages others to join in.

In 2016, the high engagement accounts in these networks could be entirely fake, such as those run by the Internet Research Agency in St. Petersburg. Paid trolls in Russia posed as both Black activists and Trump supporters, creating division and polarization in order to dissuade voters on the left from voting for Hillary Clinton while mobilizing voters on the right to vote for Trump. In other words, both left and right are targets of these operations.

However, in the past few years there has been a shift away from purely fake high engagement accounts. What is most prevalent now is boosting legitimate users (real people) and weaponizing their content. In other words, for the purposes of malign influence, it doesn’t really matter that the account is real—the content can be weaponized just as effectively. Even more so, in fact.

Adversaries will become followers of someone they wish to promote as an influencer and back them. This amplifies their content and makes Twitter’s algorithms see the account as more important. It also encourages the user to tweet more of the desired deception content, as those are the tweets that will (artificially, at first) receive the most engagement. Social media companies monetize engagement, making the problem an extremely complex one.

Weaponizing the content of actual people disguises a malign influence operation more effectively, making it more difficult to shut down or counter. Adversaries can weaponize these networks to respond to any news cycle within hours.

Adversaries can also repurpose tens of thousands of accounts to respond to current events in real time. The bot and troll factory accounts act as amplifiers shifting a narrative, which then encourages authentic users to engage and follow suit.

Bots play a significant role in recommending groups to each other, bridging these groups and creating cohesive messaging amongst them to form a larger and more complex network, all while feeding links to deception media into the groups. For example, bots can link anti-vax groups with anti-mask groups and gun rights groups while feeding in the idea that all of these causes are really about individual rights. These groups, in turn, share links to propaganda media like Rebel News, Breitbart, True North Centre, InfoWars and so on, amplifying disinformation media amongst different groups. This becomes a feedback loop and increases the echo-chamber characteristic of these networks.

Bots can be used for the opposite purpose as well. They can be used to sow confusion and make groups less cohesive. For example, in the early days of COVID-19, during lockdown, the work of Dr. Kathleen M. Carley and her team at Carnegie Mellon University’s Center for Informed Democracy and Social Cybersecurity showed that many thousands of new bot accounts were created to engage in a “reopen America” campaign. There were bots tweeting on the pro-lockdown side as well as the anti-lockdown side. But their purpose was to diffuse the pro side while making the anti-lockdown side much more cohesive.

Then the messaging took a rhetorical shift. It moved from anti-lockdown propaganda to being about convincing the bridged anti-lockdown groups that it was really about their rights—their right to not wear a mask or get a vaccine. The groups could then be steered towards marrying this with political ideology, politicizing public health measures meant to curb the spread of COVID-19. In essence, these groups could now be weaponized and aimed at real-world political action, such as what we witnessed in Canada with the so-called freedom convoy.


Adversarial actors both exploit and hack the social cognition that results from the messaging and feedback loops in these echo chambers.

Dr. Carley and her team have done extensive research on social cybersecurity and the effects of malign inauthentic online behaviour. They found a “nexus of harm” in which bots are used to search keywords and phrases to “collect” haters and steer them towards hate groups in order to form networks and echo chambers. Disinformation is fed into these groups to provide extremists with a story (i.e. conspiracy theories).

The more entrenched an echo chamber a hate group or network becomes, the more impact bot-fed extremist disinformation can have. This increases the risk that the group will “topple” into actions in the real world, including acts of ideologically, politically or religiously motivated violent extremism.

A hashtag like #TrudeauMustGo, for example, has been routinely amplified by adversarial actors and is meant to serve as an identity marker for an “in” group. Its use is also meant to act as a subconscious cue in both the “us” group and the “them” groups, thus exploiting social cognition.

While bots may have been used to make hashtags like this trend, the goal is to get authentic users to do the same. As the echo chambers around drivers of hashtags like #TrudeauMustGo become more entrenched, so, too, do the real-world hate and, ultimately, the real-world death threats.

The perpetuation of conspiracy theories is also a key aspect of how malicious actors cause harm. Belief in conspiracies such as the Great Replacement, WEF conspiracies, 5G tracking people through the mRNA COVID-19 vaccines, and so on, goes hand in hand with online extremism.

When purposeful malign narratives are being pushed online, research by Dr. Carley shows that 60 per cent of the accounts initially pushing these narratives are bots.

Fake news and propaganda websites also play an important and dangerous role in spreading deception and in hacking our brains (the amygdala). These sites are a primary source for the types of disinformation spread amongst networks, which is significant in terms of the real-world implications. All of these elements interact with and reinforce one another.

Robert Pape and the Chicago Project on Security and Threats (CPOST) at the University of Chicago examined participants in the January 6 insurrection at the U.S. Capitol. They found that the participants who held the most radicalized beliefs relied on OANN and NewsMax as their primary news sources, along with right-wing online sites like 8Chan, InfoWars and others. That fuelled alarming beliefs that the 2020 election was stolen from Donald Trump, that Joe Biden is not the legitimate president—and that it was acceptable, even necessary, to remove Biden by force. Pape’s findings at CPOST translate into 21 million American adults who believe that using violence to remove the government and attack fellow Americans is valid.

But access to disinformation alone doesn’t accomplish what adversarial actors are looking to achieve. Rather, it is the shaping and weaponizing of behaviour and beliefs—forming an echo chamber, using bots to amplify and bridge groups together, using trolls, troll factories and cyborgs to feed deception into the groups, and relying on propaganda media to create feedback loops—that furthers adversarial goals.

Giving people the impression that many others feel the same way and believe the same things has a powerful effect on human psychology. In other words, it’s a system designed to hack human psychology and hijack the emotional response system of the brain, giving people a sense that they are right, that they are supported and that they belong.

None of us is immune.