Recent media reports have shown that WhatsApp is being used extensively to spread misinformation and fake news. In India, this has culminated in mob violence, lynchings and murder. In response, WhatsApp, now owned by Facebook, acted swiftly and visibly. They ran mass media campaigns (on radio and billboards in India) urging people not to forward information they do not know to be true. The campaign asks users to “spread happiness, not rumors” in various local languages, acknowledging that the popular messaging app has begun to have some unintended consequences. They also specifically called on researchers and NGOs to work with them to help curb the spread of misinformation and fake news.
Following all this, they took two concrete steps that changed the way users interact with the app. First, a message can now be forwarded to no more than five contacts at a time. Second, they introduced a small label on each forwarded message to indicate that it came from elsewhere.
There are, of course, underlying behavioural patterns that WhatsApp is trying to change with these measures. One is that forwarding messages is now more difficult (or at least more tedious). Another is that users are now forewarned that forwarded messages are somehow different from original content. The latter could be important in priming individuals to think about the nature and source of a message before consuming the information within it. However, much of this is predicated on assumptions about user behaviour, which could differ markedly by country or culture, and about which WhatsApp may know little. For the purposes of this article, consider just the case of WhatsApp users in India.
First, it is important to know how a recipient processes a message before acting on it: does she read it? Does she know who it came from? What is her relationship with, and perception of, the sender? These are characteristics we know little about, but they could be significant in understanding user behaviour, especially when dealing with contentious messages whose source is unclear. In fact, research has shown that whether we believe a message to be true is influenced by how closely it aligns with our own political beliefs. Text, images, videos, or a combination of these may also be processed differently: does the user read the entire message and think about its contents? And all of this happens before the message is even forwarded!
User behaviour around forwarding may not be a linear or coherent process. It might therefore be important to learn whether the recipient reads the message at all (and if so, how much of it), and whether that varies by the topic or theme of the message. For example, if the message has anything to do with politics, it is reasonable to assume that the recipient's own political ideology interacts with it. It has also been suggested that the decision to consume or share information is often independent of the objective truth of its content. To understand this better, we need to intercept the user at each decision point to learn what matters at each stage, and how it influences decisions at the next. In this WhatsApp forwarding behaviour model, decisions at earlier stages (such as not verifying the truth of the information because doing so is effortful) can lead us down a slippery slope of misinformation.
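The staged model described above can be made concrete with a small sketch. Everything here is a hypothetical illustration: the stage names, the predicates, and the `Message` fields are assumptions invented for this example, not an empirically grounded model of WhatsApp users.

```python
# A hypothetical sketch of the staged forwarding decision discussed above.
# All stage rules below are illustrative assumptions, not research findings.
from dataclasses import dataclass


@dataclass
class Message:
    topic: str            # e.g. "politics", "health", "humour"
    sender_trusted: bool  # does the recipient trust the sender?
    verified: bool        # has the recipient checked the content?


def reads_message(msg: Message) -> bool:
    # Stage 1: does the recipient read the message at all?
    # Assumed rule: messages from trusted senders get read.
    return msg.sender_trusted


def believes_message(msg: Message, user_ideology: str) -> bool:
    # Stage 2: belief, influenced by ideological proximity (assumed rule
    # echoing the research that belief tracks political alignment).
    if msg.topic == "politics":
        return user_ideology == "aligned" or msg.verified
    return True  # non-political content assumed accepted at face value


def forwards_message(msg: Message, user_ideology: str) -> bool:
    # Stage 3: forwarding chains on the earlier stages; note that
    # verification never has to happen for a forward to occur, mirroring
    # the point that sharing is often decoupled from objective truth.
    return reads_message(msg) and believes_message(msg, user_ideology)


# An unverified political message from a trusted sender is still
# forwarded when it matches the recipient's ideology.
msg = Message(topic="politics", sender_trusted=True, verified=False)
print(forwards_message(msg, "aligned"))  # True under these assumptions
```

The point of the sketch is structural: an early shortcut (skipping verification at stage 2 because it is effortful) silently propagates into the forwarding decision at stage 3, which is the slippery slope the model describes.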
There are many factors that could matter at each stage, and we have discussed only a few here. Political ideology, cultural beliefs, social norms, and the mental bandwidth needed to verify a message could all play a role. There is also the weight assigned to the truth or importance of a WhatsApp forward. Indeed, whether one believes a WhatsApp message to be true could be down to one's individual characteristics (age, location, gender) as much as to the characteristics of the person from whom it was received, or the setting in which it was received (e.g. in a group). The nature of the content (a joke, a “fact,” a patriotic message) can also interact with these individual differences to influence how information is exchanged over WhatsApp. Given how little we know about these factors, it would be remiss to prescribe precisely what WhatsApp should do to tackle misinformation.
That said, one avenue WhatsApp could explore is notifying users, when they open the app, about the perils of sharing unverified information. This could add a layer of caution (even if easily dismissed) to user behaviour. We know from the case of Facebook and other social media that prior exposure to fake news increases our likelihood of believing it to be true. But without knowing the why and how of forwarding on WhatsApp specifically, which can only be determined through rigorous experimentation and research, capping user actions may be just a shot in the dark. While efforts to learn more are underway, it is perhaps best to look (and look hard) before you forward that message.
(This was first published on Pragati on 30 January, 2019.)