False information spreads faster and wider than ever before, thanks to social media, blogs, and even mainstream media outlets. But the real question is: why do we, as individuals, share this information?
To truly understand how to combat the spread of false information, we need to explore the psychology behind sharing. What makes people feel compelled to spread misleading information, knowingly or unknowingly?
In this article, we will dive into the cognitive, emotional, and social factors that drive the sharing of false content and how this behaviour impacts society.
Why do we spread false information on social media and traditional media?
False information does not spread on its own; it relies on human behaviour to travel across the digital landscape. While technology and algorithms are to blame for amplifying some falsehoods, key psychological factors drive the sharing of others.
The three most common reasons are:
Reason #1: Cognitive biases
Our brains are wired with cognitive biases that can lead us to share false information without even realising it. How does this happen? Two of the most common biases involved in this behaviour are confirmation bias and anchoring bias:
- Confirmation bias
This occurs when a person favours information that aligns with their pre-existing beliefs or attitudes. If someone comes across a piece of false information that confirms what they already believe, they are more likely to share it without verifying its accuracy.
- Anchoring bias
This pervasive cognitive bias causes us to rely too heavily on the first piece of information we encounter (known as the “anchor”) when making decisions. In our case, if someone’s initial exposure to a topic involves false information, they may anchor their understanding to that falsehood, making it harder to accept later corrections.
Reason #2: Emotional triggers
Emotions play a powerful role in what we choose to share. They influence nearly every daily decision, from purchase choices to the major resolutions we make at home or at work.
Studies show that emotionally charged content is more likely to go viral, and false information often plays on strong emotions like fear, anger, or excitement.
Here are some key types:
- Fear and anger:
Content that induces fear or anger often compels individuals to share as a form of self-preservation or out of outrage. For example, during health crises or political unrest, false information can incite panic, leading to widespread sharing.
- Excitement:
Sensational headlines or stories that evoke excitement, even if they are not true, can trigger impulsive sharing. The desire to be the first to share something “interesting” can outweigh the need for accuracy.
- Sadness and compassion:
Information that evokes feelings of sadness or compassion can lead to impulsive sharing, especially if it involves a vulnerable or disadvantaged group. The desire to help or raise awareness can override critical thinking.
- Hope:
Stories that offer hope or inspiration can be highly contagious, as they provide a sense of comfort or motivation.
Reason #3: Social validation and echo chambers
In an age of social media, the need for validation and belonging often drives what we post or share.
Social validation is the innate human tendency to crave attention and recognition from others. In this case, sharing new or interesting content can be seen as an easy way to generate likes, comments, and shares, giving users a sense of belonging and worthiness.

Echo chambers, on the other hand, often become the norm for closed online communities. Within these groups, false information that aligns with the community’s shared beliefs is repeatedly circulated without any challenge. This reinforces groupthink and makes it harder for factual corrections to break through.
Discover how our AI-driven tools can help you identify and combat false information within online communities. Work with us to foster a more informed and fact-driven environment.
Who is liable when false information is posted online?
When discussing why false information spreads online, a common question arises: how is it allowed in the first place, and who can be held liable for the falsehoods?
This discussion usually leads to a very specific question: can you sue someone for spreading false information? The answer is yes, but the process is often complex and varies depending on the type of false information, the intent, and the jurisdiction.
Lawsuits related to false information often fall under defamation, libel, or slander laws. Defamation refers to false statements presented as facts that harm an individual’s or organisation’s reputation. Libel specifically covers written or published false information, while slander addresses spoken falsehoods.
To hold someone legally liable for spreading false information, a lawsuit generally requires proof that:
- The information is false and not merely an opinion.
- The false information caused harm, such as reputational damage, emotional distress, or financial loss.
- The spreader acted with intent or negligence, depending on the legal standard of the jurisdiction.
All in all, real-world examples of celebrities suing tabloids for false headlines or companies defending their reputations against false claims show that legal actions are not just theoretical. However, these cases also highlight the complexities and challenges inherent in holding someone accountable.
Additionally, legal protections like freedom of speech often play a significant role in how cases are judged, especially in contexts involving public figures or matters of public interest.
How can we combat the spread of false information and its impact?
The ripple effects of false information extend far beyond individual beliefs, impacting society as a whole.
When misinformation spreads unchecked, it can erode public trust and lead people to make harmful decisions. However, we can adopt certain proactive strategies—leveraging technology, education, and collaboration—to combat misinformation and mitigate its effects.
Strategies to combat disinformation and misinformation
| Strategy | Description | Tools or platforms to use |
| --- | --- | --- |
| Leveraging AI for early detection | Advanced technology, such as AI, can monitor and detect false information in real time, identifying patterns of disinformation before they go viral. | Identrics’ AI-driven disinformation detection tools |
| Promoting digital literacy | Educating individuals to recognise and question false information is key to building a more informed society. By fostering critical thinking and equipping people with the skills to verify content, we can create a community less susceptible to manipulation. | Sofia Information Integrity Forum |
| Collaboration across sectors | Fact-checking organisations, social media platforms, and media outlets are already starting to collaborate to debunk misinformation promptly. | Fact-checking platforms |
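To make the early-detection idea above more concrete, here is a minimal, hypothetical sketch of pattern-based text screening. It is not Identrics’ actual pipeline; it only illustrates the general principle that AI detection tools build on at far larger scale, using a tiny invented dataset and standard scikit-learn components.

```python
# Minimal illustrative sketch of automated text screening (NOT a production
# disinformation detector). The headlines and labels below are invented
# purely for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data (hypothetical labels: 1 = misleading, 0 = reliable).
headlines = [
    "Miracle cure eliminates all known diseases overnight",
    "Local council approves new budget for road repairs",
    "Secret document proves election was decided in advance",
    "University publishes peer-reviewed study on sleep habits",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a simple pattern-based classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

# Score a new headline; a real system would run this continuously over a feed
# and flag high-scoring items for human review before they go viral.
new_headline = ["Shocking revelation doctors do not want you to know"]
probability_misleading = model.predict_proba(new_headline)[0][1]
print(f"Estimated probability of being misleading: {probability_misleading:.2f}")
```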
Without these preventive measures, organisations and societies risk facing the full impact of unchecked misinformation, which can lead to:
- Erosion of trust, which happens when false information circulates widely, diminishing confidence in media, public institutions, and personal relationships. Once people realise they have been misled, they can become cynical and less likely to believe accurate information in the future.
- Harmful shifts in public behaviour, as misled people and organisations make detrimental decisions about critical issues such as health or politics.
By adopting proactive strategies, we can together avoid or mitigate these risks and work towards a more informed, resilient society that is less vulnerable to the damaging effects of misinformation.
Adopting proactive measures against false information
The psychology behind sharing false information reveals how biases, emotions, and social dynamics play a major role in its spread. While these forces are powerful, individuals and organisations can still take steps to prevent the amplification of false content.
By fostering critical thinking, promoting digital literacy, and leveraging technological tools, we can all contribute to promoting the integrity and accountability of the information we share and consume.
Interested in learning more about how you can combat disinformation? Reach out to us for a presentation of our solutions that can help you detect and prevent the various forms of false information.