Weaponized Health Communication: Twitter Bots and Russian Trolls Amplify the Vaccine Debate, by David A. Broniatowski, Amelia M. Jamison, SiHua Qi, Lulwah AlKulaib, Tao Chen, Adrian Benton, Sandra C. Quinn, and Mark Dredze, was published in the American Journal of Public Health (Research and Practice section), online before print, August 23, 2018. This is the study mentioned by Jim Wright here: Stonekettle Station - Critical Path.
I recommend Mr. Wright’s perspective on this because it gives a reader a good idea of what intelligence work entails, even the less savory side of it. Ethics and ethical arguments aside, like it or not, this sort of thing is a reality; it has been going on far longer than I have been alive, so whatever I might think about the practice doesn’t matter. No use crying over spilled milk. We reap what we sow, eh?
Not to get all cliché-happy, but I think it’s things like this that really explain that cliché about the road to hell being paved with good intentions. But I digress: do give Mr. Wright’s blog a look if you are not familiar with it; he comes by his well-deserved accolades honestly. It’s there you can also find the link to this particular study by Dr. Broniatowski et al. You can also retrieve a copy here: American Journal of Public Health.
Before we delve into the study, here is a fairly comprehensive explanation of the anti-vaccine or “anti-vaxxer” narrative and worldview: RationalWiki: Anti-vaccination movement. It has a rather extensive list of resources at the end. For a good overview of the rest of what this is about, there is this: The Cyberwar Debate: Perception and Politics in U.S. Critical Infrastructure Protection.
The section on Military Rivals is particularly relevant to this discussion. Much of the rest focuses on how U.S. experts define cyberwar and cybersecurity, and the context in which they most often place them, which is not wrong, just not complete. Unlike other pieces, this one considers the context and operational definition of the term cyberwar among non-U.S. nation-states, notably China and Russia.
While the article was published in 2001, it’s extremely relevant today precisely because of that explanation of how our rival nations understand and operationally define cyberwar. It mentions “infowar units” within China’s People’s Liberation Army, notes that little is known about them or their capabilities, and goes on to observe that Russia’s concept of information warfare involves psychological manipulation.
The author notes that Russia’s concept differs significantly from our own in that it is not just about computer network attacks; there is an emphasis on psychological manipulation, whereas we strongly emphasize the network infiltration and attack side of things. What is information warfare (infowar) exactly? It’s the use of misinformation, disinformation, and propaganda to manipulate target populations.
Sometimes the information is created and used against a target population to manipulate it cognitively, emotionally, and psychologically; sometimes it involves opportunistic exploitation of existing information: rumors, hoaxes, fraud, and conspiracy theories are particularly useful. Basically, if it’s deceptive and can be used to instigate mistrust or agitate a target population against itself or its own government, you get the idea.
Information warfare is part of what fuels the cohesion of cults and terrorist organizations; it shapes the ideologies, worldviews, and narratives of these kinds of groups. It’s the reason why democratic nation-states are particularly susceptible to internal instability, social breakdown, and truth decay. Information warfare is the “fake news” and the “alternative facts.” It is the cancer that undermines free speech.
It contributes to government coups, civil wars, and failed states.
Which brings us to this particular study. The researchers define “bots” as “accounts that automate content promotion,” “trolls” as “individuals who misrepresent their identities with the intention of promoting discord,” and “content polluters” as “accounts that disseminate malware and unsolicited content.”
The researchers also define a common disinformation strategy called “amplification” as one that “seeks to create impressions of false equivalence or consensus through the use of bots and trolls“.
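To make that amplification idea concrete, here is a toy simulation. All the numbers are invented for illustration, and the study proposes no such model; the point is only to show how a small pool of automated accounts posting both sides of an argument in equal volume can make a fringe position look far more common in the visible stream than it is among actual people.

```python
import random

random.seed(42)

# Toy model (all figures invented): 1,000 genuine users, of whom ~10%
# hold the fringe ("anti") view. 50 bot accounts each post 20 messages,
# split evenly between the two sides to manufacture false equivalence.
genuine = ["anti" if random.random() < 0.10 else "pro" for _ in range(1000)]
bot_posts = ["anti"] * 500 + ["pro"] * 500  # 50 bots x 20 posts, evenly split

# The "stream" an onlooker sees mixes genuine posts with bot posts.
stream = genuine + bot_posts
anti_share = stream.count("anti") / len(stream)

print("True share of fringe view among people: ~10%")
print(f"Apparent share in the amplified stream: {anti_share:.0%}")
```

The fringe view is held by roughly one person in ten, yet in the combined stream it appears several times more prevalent, which is exactly the “impression of false equivalence or consensus” the researchers describe.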
They also discuss some findings of previous research: the U.S. Defense Advanced Research Projects Agency (DARPA) conducted research into identifying “influence bots” on Twitter, also focusing on anti-vaccine content, and it went largely ignored by the public health community. The study conducted here is a mixed-methods study, using both qualitative and quantitative methods.
Unfortunately, despite Ralf Bendrath’s essay (The Cyberwar Debate…) from 2001, nobody seems to have taken the use of information warfare against the U.S. public seriously, at least not until now: not until after Russia decided to “help” Americans pick the last president through paid advertising and other influence tactics on social media networks like Facebook, Twitter, and YouTube.
So now, 17 years later, we are playing catch-up with exploratory research into this “new phenomenon,” and the entire American constituency is trying to wrap its collective mind around the whole concept of information warfare (hello, mind control). We have an ongoing investigation into Russian meddling in a U.S. election, and a public split among blank-stare incomprehension, disbelief and denial, and horrified confusion.
This study is about the exploitation of an American conspiracy theory about vaccines (one based on a fraudulent study) and its use as weaponized information against the American public. The fraudulent study is considered one of the most damaging hoaxes to public health in the last one hundred years, and the researchers determined that Russia had a big hand in its spread through social networks.
The researchers were looking to determine how anti-vaccine content was disseminated and promoted online by bots and trolls. They analyzed a data set of tweets containing vaccine-related content from a Twitter hashtag associated with known Russian troll activity, then compared it to the vaccine-related content of ordinary Twitter users collected between July 2014 and September 2017.
They found differences in the type and frequency of content, enough to draw connections among ordinary Twitter users, Russian trolls and bots, and “content polluters.” The researchers determined that of all tweets containing information about vaccines, 50% of the content expresses anti-vaccine beliefs. This poses a serious threat to public health, because health information is being “weaponized.”
Because of this propaganda and its spread through social media, we now face a much higher risk of the general public becoming susceptible to biological warfare, especially since diagnosis gets more complicated when you have to test for illnesses and diseases that had been all but eradicated from our population because of vaccines. Now we have to worry about mumps, measles, and something called pertussis again.
The authors of the study explain that the proliferation of this conspiracy-theory content has consequences: it confuses well-meaning but unaware parents who don’t realize that the internet is not regulated and that internet content is not required to be truthful. They stop trusting trained, licensed, and regulated medical professionals and the health community, so they stop listening to them and put their faith in con artists.
People expose themselves to the conspiracy narratives after encountering trolls, bots, and the like. Exposure makes it look like there is no scientific consensus on this topic; that is the amplification strategy at work. Parents decide to delay getting their children vaccinated, and a child gets sick. Perhaps the child recovers; perhaps not. Undoubtedly the child passes it on, and perhaps someone else’s unvaccinated child dies.
There is a scientific consensus on this topic; bots, trolls, and internet con artists ensure that you don’t see that information, and if you do, that you don’t trust it. Where do these trolls and bots come from? According to the researchers, the hashtag they studied involved Russian troll accounts linked to the Internet Research Agency, a company that “specializes in online influence operations” and is “backed by the Russian Government.”
The researchers further discuss their methods and data collection (the paper contains a link to a data supplement) and then move into the analysis portion. The research questions are: “Are bots and trolls more likely to tweet about vaccines?” and “Are bots and trolls more likely to tweet polarizing and antivaccine content?”
They found that bots, Russian trolls, and content polluters are significantly more likely to tweet about vaccination than ordinary users. They also determined that Russian trolls were significantly more likely to tweet about illnesses that are preventable through vaccination, while spambots and content polluters are less likely to do so, compared with the average Twitter user.
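The kind of rate comparison described above can be sketched as a simple tally over a labeled sample of accounts. The category names below mirror the study’s, but the sample and every count in it are made up purely for illustration; the study’s actual figures and statistical tests are in the paper.

```python
from collections import defaultdict

# Hypothetical labeled sample: (account_type, tweeted_about_vaccines).
# Account types mirror the study's categories; all counts are invented.
sample = (
    [("russian_troll", True)] * 18 + [("russian_troll", False)] * 82 +
    [("content_polluter", True)] * 15 + [("content_polluter", False)] * 85 +
    [("average_user", True)] * 5 + [("average_user", False)] * 95
)

# Tally total tweets and vaccine-related tweets per account type.
totals = defaultdict(int)
vaccine = defaultdict(int)
for account_type, tweeted in sample:
    totals[account_type] += 1
    vaccine[account_type] += tweeted  # True counts as 1, False as 0

for account_type in totals:
    rate = vaccine[account_type] / totals[account_type]
    print(f"{account_type:>17}: {rate:.0%} of sampled tweets mention vaccines")
```

With invented numbers like these, the troll and polluter categories show several times the vaccine-tweet rate of the average user, which is the shape of the result the researchers report.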
The researchers then discuss the second part of their research, the qualitative analysis of #VaccinateUS. They found that this hashtag was unique in that it contains very polarized messages from both the provaccine and antivaccine sides, along with other unusual or distinctive elements not seen in other tags, such as no use of images, no mentions of other users, and no links to outside content.
The researchers determined that the tweet authors behind #VaccinateUS comprehensively understand both arguments (provaccine and antivaccine), but unlike other tags, this one draws associations between the arguments and U.S. politics and uses loaded emotional language, specifically “freedom,” “democracy,” and “constitutional rights,” while other tags focus on “parental choice” and specific vaccine-related legislation.
They also noticed that most antivaccine tweets tend to reference conspiracy theories: ordinary antivaccine tweets implicate a number of individuals, “specific government agencies,” and “secret organizations,” whereas #VaccinateUS focused exclusively on the U.S. Government. It also invoked divisive contexts like race, socioeconomic status, religion and God, and animal welfare, where other antivaccine tweets were more inclusive.
In the Discussion, Russian troll accounts, commercial and malware distributors (“content polluters”), and “unidentified accounts” (accounts that the software used could not classify as either bot or human) are described in more detail. Russian troll accounts and sophisticated bots post content about vaccines at “significantly higher rates than does the average user”; they give equal attention to pro- and anti-vaccine arguments, sowing “discord.”
Content polluters “post antivaccine messages 75% more often than the average nonbot Twitter user,” suggesting that these bot networks, designed for marketing purposes, are being used by “vaccine opponents.” They also tend to focus on message content about vaccines rather than on the illnesses vaccines are designed to prevent, which suggested to the researchers that some of this could be “click-bait” related.
As for the unidentified accounts, the researchers were unable to positively identify 93% of their random sample from the vaccine-related Twitter stream as specifically bot or human. They note that this group likely contains a higher proportion of trolls and “cyborgs,” and much of its content was more polarized and more focused on anti-vaccination arguments than that of the average human Twitter account.
The study ends with a discussion of public health implications, essentially advising health providers to continue providing accurate information and not to “feed the trolls” by arguing with them. The researchers note that trolls and bots promote both sides of the vaccination argument, with trolls especially seeking to polarize and politicize it along U.S. political party lines.
It was also noted that anti-vaccine content from those with anti-vaccination agendas tends to share dissemination channels with “click-bait” style content and malware. The researchers note that this could increase infection risk on two fronts: malware and computer viruses for computers, and biological viruses for the general public.
In short, this is what cyberwar looks like.
Further reading: The Atlantic: How Misinfodemics Spread Disease