Propaganda, misinformation, deepfakes and cheapfakes, and other attempts at manipulation have become a pressing issue online.
Whether it is a misleading headline, a distorted image, or a completely fabricated story, the rapid spread of false information on social and traditional media can have significant impacts, from damaging reputations and eroding public trust to triggering legal consequences.
Herein, we will explore the anatomy of viral falsehoods—how they are created, amplified, and spread across digital platforms—and discuss how early detection of false information can play a key role in mitigating the impact.
What is false or misleading information?
Before diving into how false information spreads, it is important to understand what it is. False information, also referred to as “information disorder,” can take various forms of misleading content.
Here are some key types:
- Misinformation:
False or inaccurate information that is shared without harmful intent. This can happen when people unknowingly share incorrect facts or news. An example might be a well-meaning individual sharing an unverified piece of news that turns out to be false.
- Disinformation:
Deliberately misleading or biased information, manipulated narratives, or propaganda intended to deceive. Disinformation is spread with the intention of manipulating opinions, sowing discord, or achieving political, financial, or social gain.
- Malinformation:
Information that is based on reality but used maliciously to inflict harm on a person, organisation, or country. This can include leaks of private information or distorted facts meant to cause damage.
- Propaganda:
Systematic and deliberate spreading of biased or misleading information to promote a political cause or point of view. Propaganda often uses emotionally charged language and imagery to influence public perception and behaviour.
- Fake news:
Fabricated stories that are presented as legitimate news but lack factual basis. Fake news is often sensational and designed to mislead for various purposes, including financial gain or political influence.
- Hate speech:
Any form of communication that belittles a person or group on the basis of characteristics such as race, religion, ethnic origin, sexual orientation, disability, or gender. Hate speech can incite violence, discrimination, or prejudice and is often spread to polarise communities.
The distinctions between these are crucial because each type of false information requires different strategies to combat effectively. For example, while misinformation might be mitigated through education and fact-checking, disinformation and propaganda often require more sophisticated detection and intervention techniques.
For a more detailed exploration of how to combat misinformation and disinformation, check out our previous post on the dangers of misinformation and how to prevent them.
How false information spreads online in 4 key stages
False information does not spread by accident; there are distinct stages that a piece of misleading content goes through before it becomes viral.
Typically, false information spreads in 4 key stages:
1. Creation and initial sharing
False information often starts with a deliberate act of creation. This could be a fabricated story, a doctored image, or a manipulated video. The creators of such content usually have a motive, be it political influence, financial gain, or simply causing disruption.
Once created, this false information needs to be shared. The initial sharing often occurs on smaller, less regulated networks such as fringe blogs, forums, niche social media groups, and messaging apps like Telegram. These platforms serve as testing grounds, where the content can circulate among a small group to gauge how it is received.
2. Amplification through social media and online platforms
After initial sharing, the next stage is amplification. This is where social media platforms play a pivotal role. Algorithms on these platforms are designed to promote content that generates engagement—shares, likes, comments—regardless of whether the content is true or false. This algorithmic bias can result in false information being spread widely and quickly.
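The engagement-driven ranking described above can be illustrated with a minimal sketch. The weights and post data below are entirely hypothetical; real platform ranking systems are proprietary and vastly more complex. The point is simply that truthfulness is not an input to the score:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    is_true: bool  # known to us here, but invisible to the ranking function

def engagement_score(post: Post) -> float:
    # Hypothetical weights: shares drive virality hardest, then
    # comments, then likes. Note that is_true plays no role at all.
    return 1.0 * post.likes + 3.0 * post.shares + 2.0 * post.comments

feed = [
    Post("Sensational fabricated claim", likes=120, shares=90, comments=60, is_true=False),
    Post("Careful factual report", likes=200, shares=20, comments=15, is_true=True),
]

# The feed is ordered purely by engagement, so the false but
# heavily shared post is ranked first.
ranked = sorted(feed, key=engagement_score, reverse=True)
print([p.text for p in ranked])
```

Because sensational falsehoods tend to attract more shares and comments, a score like this systematically favours them over sober reporting, which is the algorithmic bias the paragraph above describes.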
“False news is more novel, and people are more likely to share novel information.”
Researchers at MIT have found that false news spreads significantly faster and farther than true news on social media, with true stories taking about six times as long to reach the same number of people. This is because false stories often contain sensational or emotionally charged content that grabs attention and compels users to share.
“People who share novel information are seen as being in the know.”
3. Escalation through influencers and media outlets
Once a piece of false information starts gaining traction, it often gets picked up by influencers or even mainstream media outlets. This is sometimes done unintentionally, as influencers or journalists may not realise the content is fake before they share or report it. We have seen it happen.
However, once shared by a figure with a large following or a trusted news outlet, the false information is seen by a much larger number of people. At this stage, the spread escalates significantly as people tend to trust content shared by individuals and organisations they respect.
4. The echo chamber effect
The final stage in the spread of misleading information is when the echo chamber effect sets in. This occurs when false information is repeatedly shared within a group of like-minded people who are inclined to believe and further spread it.
In these echo chambers, cognitive biases such as confirmation bias play a key role. People are more likely to believe information that aligns with their existing beliefs and are less likely to challenge or verify it. As a result, the false information continues to circulate, gaining even more attention and credibility within that community.
In other words, when like-minded people gather and share information in only one direction, with no counter-arguments coming in, the echo chamber gradually cuts those members off from the wider community.
The role of technology in accelerating the spread
For all the benefits that recent technological advancements have brought, they have also accelerated the spread of false information in several ways.
On one hand, we have social media platforms using algorithms to prioritise content that is likely to engage users. Unfortunately, this often means sensational content is given precedence over factual reporting.
Combined with the filter bubble effect, this algorithmic amplification makes it easier for false information to reach a wide audience quickly.
On the other hand, generative AI and automated accounts, or bots, have allowed bad actors to easily create new fake content and mimic the effect of widespread support.
Bots can mass generate new content, likes, shares, and comments within minutes, creating the illusion of consensus for a false narrative.
This makes the content appear more credible and encourages more real users to share it on their own.
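A toy simulation can make the illusion-of-consensus effect concrete. All the numbers here are invented for illustration: a small bot contingent that shares on command can dominate the apparent engagement generated by a much larger pool of genuine users:

```python
import random

random.seed(0)  # fixed seed so the simulation is repeatable

N_REAL_USERS = 1000
N_BOTS = 50                 # only 5% of the audience size
REAL_SHARE_PROB = 0.02      # genuine users rarely share the false story
BOT_SHARE_PROB = 1.0        # bots share whenever instructed

real_shares = sum(random.random() < REAL_SHARE_PROB for _ in range(N_REAL_USERS))
bot_shares = sum(random.random() < BOT_SHARE_PROB for _ in range(N_BOTS))

total = real_shares + bot_shares
print(f"Organic shares: {real_shares}, bot shares: {bot_shares}")
print(f"Bots produced {bot_shares / total:.0%} of the apparent engagement")
```

Even though bots make up a small fraction of the audience, they typically account for the majority of shares in this setup, which is exactly the manufactured consensus that misleads real users into trusting and resharing the content.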
Why is the spread of false information online harmful to society?
Now that you know how quickly false information picks up speed, you might be able to envision how that can have serious consequences for individuals, organisations, and society at large:
- False information can damage the reputation of individuals or organisations, leading to loss of trust and credibility. This is particularly true for fields like journalism, where trust is paramount.
Spreading false information can also lead to legal consequences, especially if it results in defamation or other legal violations. Both the original creator and those who share it can be held liable in some jurisdictions.
Fake information can also have real-world impacts on public health and safety. For example, misinformation about medical treatments can lead to harmful practices or undermine public health campaigns, as happened with the misinformation about COVID-19 and ibuprofen.
Preventing the spread of false information with early detection
Given the speed and reach of false information online, early detection is the key to preventing its harmful impacts. This is where advanced technology comes in.
Leveraging AI and ML, we have developed solutions that address the challenges of disinformation and trolling online. Our systems continuously gather data from a wide range of sources, analysing content for signs of manipulative communication. By using adaptive learning algorithms, our technology can recognise emerging patterns and tactics used in disinformation campaigns, staying ahead of new threats as they evolve.
Our approach is rooted in the latest technological advancements, ensuring that organisations have the most effective defence against complex digital threats. Identrics’ solutions are designed to integrate into existing digital infrastructures and enhance your ability to detect and prevent harmful content.
Journalists, educators, tech-savvy individuals, and organisations alike have a role to play in preventing the spread of false information. By utilising tools like those offered by Identrics, we can all become more responsible digital citizens.
If you are interested in learning more about how our tools can help protect your organisation from false information, we offer demos and free consultations to demonstrate the capabilities of our solutions.
Safeguarding our online environment together
Identifying the anatomy of falsehoods is the first step in combatting their spread. By being aware of how false information spreads and the technological factors that accelerate it, we can take proactive steps to protect ourselves and our communities.
Part of promoting digital literacy and critical thinking is also understanding why false information is being spread online.
If you would like to learn more about that, we encourage you to check out our other posts on why and how misinformation affects us online.