Why Does Disinformation Spread So Easily?
The disinformation that is circulating widely via online platforms is a social phenomenon that must be closely watched.

An anti-hoax wall mural is seen on Tuesday (5/5/2020) in Cipondoh, Tangerang municipality, Banten. The sharing of hoaxes and fake news on social media and chat groups is a growing concern as the number of smartphone users in the country increases.
The disinformation that is circulating widely via online platforms is a social phenomenon that must be closely watched. In his latest study, Tom Buchanan presented the factors behind the distribution of disinformation.
The development of digital technology has not only supported human interaction, but has also given rise to the social phenomenon of false information. In the digital age, anyone can create and share content easily. As a result, false news (disinformation), fake news and hoaxes have spread and continue to spread widely.
Disinformation is one of three types of false information as defined in the UNESCO publication (2018) Journalism, ‘Fake News’ & Disinformation: Handbook for Journalism Education and Training. In addition to disinformation, the two other types are misinformation and malinformation.
The UNESCO handbook defines disinformation as: “Information that is false and deliberately created to harm a person, social group, organization or country.” Creating and sharing this type of false information is done intentionally for a specific purpose.

An infographic based on the article “Why do people spread false information online?” by Tom Buchanan and the American Psychological Association (September 2020) shows the distribution pattern of hoaxes.
One incident that emerged as a result of disinformation was the 27 Aug. 2020 attack on the Ciracas Police station in East Jakarta. The incident started when erroneous information circulated and triggered an emotional reaction (Kompas, 29/8/2020).
This incident is a clear example of the chaos that disinformation can cause. It raises the question: what makes people want to spread false information, especially on social media networks?
Tom Buchanan asks this very question in his report, “Why Do People Share Disinformation On Social Media?”, published in September 2020. The report explores the characteristics of the messages as well as the kinds of people who use the opportunity presented by social media to spread a variety of disinformation.

Jakarta Metropolitan Police public relations head Sr. Comr. Yusri Yunus speaks to the media about coronavirus-related disinformation at an online press briefing in Jakarta on Monday (4/5/2020). The Jakarta Police has recorded 443 cases of hoaxes and hate speech related to Covid-19. Courtesy of the Jakarta Metropolitan Police
Buchanan’s research project covered the social media platforms Facebook, Twitter and Instagram for respondents in the United Kingdom, along with Facebook’s platform in the United States for respondents who live in the US. The study involved a total of 2,634 people, with 638 respondents in the US and the rest in the UK.
Characteristics of messages
Buchanan, a psychology professor at the University of Westminster in London, England, mapped three characteristics of disinformation that give it the potential to be shared. The first characteristic is consistency. When someone receives the same message over and over, there is a chance that the person will come to believe the false message.
In fact, before the consistent barrage of the same message begins, the person may well be aware that the information they are receiving is wrong. But when someone receives false information constantly, they gradually come to perceive it as truth.
The Nazis once used this tactic. Nazi Propaganda Minister Joseph Goebbels called it "the big lie". He reasoned that if big lies were told over and over again, people would end up believing them.
A person can be repeatedly exposed to disinformation as a result of the algorithm that a particular social media platform uses. In the digital ecosystem, including social media, this is what is called a “filter bubble”.
This term, popularized by Eli Pariser, refers to the tendency of an online platform to recommend similar or related information to the user, which is another factor that can create consistent exposure to disinformation.
The second characteristic is consensus, or agreement. A user can believe that a message or content posted to social media is true if it garners a lot of responses from netizens. For example, the number of “retweets”, “likes” and “shares” that appear below the uploaded content can encourage someone to participate in the thread.
In marketing, this phenomenon is known as "social proof". A product will gain more attention when a crowd of people is drawn to it. In the current era of digital commerce, assessments look at how many people buy, respond to or review a particular product.
The same principle applies to disinformation when it is treated as a product to be traded. The problem is that the number of likes, shares and other indicators can be manipulated and inflated.

Services that “sell” follows and likes are widespread in cyberspace. This is commonly known among netizens, but an individual cannot always determine which accounts use these services and which have gained their followers organically, through actual users, known as “organic reach”.
Finally, the third characteristic is the authority accorded to the source of the message or content. A message can be deemed valid and trustworthy if it comes from a credible person or organization. However, malicious parties can pose as a trusted individual or organization with the aim of disrupting public order.
For example, an Instagram search for “Jokowi” will return many accounts that use “Jokowi” in their names. The official account belonging to the Indonesian President carries a blue checkmark, which indicates that the account has been authenticated as belonging to the individual whose name is associated with it.
Disinformation can also be spread through fake accounts that look authentic. On Twitter, for example, an account named @TVRINasional exists alongside @TVRIJakarta, the official account of the television broadcaster, whose authenticity is indicated by the blue checkmark to the right of the account name.
The @TVRIRiau account, however, does not yet have a blue checkmark. This leaves open the possibility that an unknown individual has created a “stealth account”, an account that is entirely unrelated to any official account. One example of such a stealth account is @TVRIKotaManaSaja (literally, “TVRI of any city”). Under the guise of a Twitter handle that contains “TVRI”, individuals can spread disinformation and harm society.
State agencies are able to monitor messages that share disinformation. In Indonesia, this function is fulfilled by the Communications and Informatics Ministry (Kominfo) as well as the cybersecurity division of the National Police.
Next, what about the users who voluntarily and consciously contribute to the spread of disinformation?
User characteristics
Buchanan found a paradox among users. People know about disinformation and understand its potential impacts, yet they remain likely to share it. This is what Buchanan tries to uncover in his study.
The first characteristic concerns a person’s beliefs and their views about the world. During his research, Buchanan discovered a link between an individual’s political affiliation and their tendency to believe certain types of information. In the context of the Indonesian public, the political polarization during the 2019 presidential election offers a lesson.
The Indonesian public became divided into Jokowi supporters and Prabowo supporters. These individual choices can influence how a person responds to disinformation. A person tends to believe those things that are in alignment with their convictions.
For example, people who supported Jokowi eagerly swallowed disparaging information about Prabowo as truth, and vice versa. The truth value of information was disregarded when the content of a message appealed to an individual’s convictions and emotions.

An infographic based on the article “Why do people spread false information online?” by Tom Buchanan and the American Psychological Association (September 2020) outlines the kinds of people who spread hoaxes.
The second characteristic is spontaneity, which is common among social media users. Spontaneous responses become more frequent when users digest information at a mere glance and react to it carelessly.
This is explored in an article by Kirsten Weir for the American Psychological Association, “Why we fall for fake news: Hijacked thinking or laziness?”, which says that users tended to judge information simply by looking at “screenshots that showed the headline, the source and the first few sentences of a news story”.
The article refers to this behavior as “lazy thinking”. Users receive, digest and then judge information in just one step, without thinking carefully or checking the information against published facts in other sources.
Based on the results of his research, Buchanan proposes a way to counter the sharing of disinformation: share accurate information in the same format in which false information is shared. These correct messages must then outperform messages that contain disinformation in terms of reshares, likes and other indicators.

Two students ride past an anti-hoax mural on 19 Sept. 2018 in Joho, Manahan, Solo, Central Java.
The method is intended to “train” the social media algorithm to push correct and accurate information to users and build up the “organic reach” of accurate content, without using paid services to increase likes and follows. The more users who participate in sharing the accurate content, the more likely it is to become a hot or trending topic.
Finally, public figures, government agencies, leading media outlets and nongovernmental organizations that specialize in certain issues should share quality, verified content. Obviously, the organic reach of this quality content must be greater than that of shared disinformation.
Buchanan also sees that weak digital literacy plays a role in how disinformation is shared. Improvements must be made to social media services and the internet ecosystem, and the strategy for doing so is to reproduce quality content on various platforms to combat disinformation.
Digital literacy still needs improvement, particularly in how people consume digital media as products. If the quality products (accurate information) are less competitive than disinformation products, digital literacy will be worthless. (KOMPAS R&D)