WASPer by Identrics
WASPer is a solution tackling the dual challenge of moderating user-generated content at scale and detecting AI-driven synthetic propaganda.
By combining advanced AI models and a unique taxonomy, it ensures more reliable moderation and safeguards the integrity of online discourse.
Why WASPer Is Needed
Social media platforms and comment sections, once spaces for open dialogue, now often serve as tools for manipulation. As disinformation campaigns evolve, every comment, post, and tweet becomes a medium to influence opinions and shape decisions.
The scale of the problem is staggering:
- Recent studies report a 457% increase in AI-generated misinformation on online platforms known for spreading disinformation.
- Sophisticated troll networks and bots exploit these tools to infiltrate comment sections, generating inflammatory content designed to mislead audiences and polarise discussions.
- Traditional moderation methods—heavily reliant on manual oversight—are unable to keep pace with the speed, scale, and subtlety of modern AI-driven tactics.
These challenges demand a new, scalable approach to ensure the integrity of online discourse and protect audiences from disinformation’s harmful effects. WASPer was designed to address this critical need, combining AI detection with a sophisticated framework for analysing and mitigating propaganda.
How WASPer Works
WASPer operates through an advanced multi-step pipeline, designed to detect, analyse, and categorise AI-generated propaganda with precision and scalability. By leveraging state-of-the-art AI models and a robust hierarchical taxonomy, WASPer ensures that even the most subtle manipulative techniques are identified and addressed.
Here is a breakdown of the process; a brief code sketch of the full pipeline follows the list:
1. Human vs. AI-Generated Detection
- The first step identifies whether a given piece of content is organic (human-generated) or synthetic (AI-generated).
- This is achieved using advanced classifiers fine-tuned on diverse datasets, ensuring high accuracy for both English and Bulgarian texts.
2. Binary Propaganda Detection
- If the content is synthetic, WASPer determines whether it contains propaganda.
- This step utilises a binary classification model to distinguish between neutral and propagandistic content.
3. Multi-Label Propaganda Classification
- For synthetic content flagged as propaganda, WASPer applies a multi-label classification model.
- This model assigns specific propaganda techniques based on a four-level hierarchical taxonomy, offering nuanced insights into the manipulation tactics used.
4. Multilingual Processing
- WASPer supports both English and Bulgarian, with models tailored to the linguistic nuances of each language.
- The multilingual framework ensures that the solution can address global challenges in AI-generated propaganda.
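The sketch below illustrates this multi-step flow in Python. It is a minimal illustration only: the classifier callables, label values, and return types are assumptions standing in for WASPer's actual fine-tuned models and API.

```python
from dataclasses import dataclass, field

@dataclass
class WasperResult:
    origin: str                          # "human" or "ai" (assumed label names)
    is_propaganda: bool | None = None    # only set for AI-generated content
    techniques: list[str] = field(default_factory=list)

def analyse(text, origin_clf, propaganda_clf, technique_clf):
    """Chain the three classification stages for a single comment."""
    # Step 1: human- vs AI-generated detection
    origin = origin_clf(text)
    result = WasperResult(origin=origin)
    if origin != "ai":
        return result                    # organic content: no further checks

    # Step 2: binary propaganda detection on synthetic content
    result.is_propaganda = propaganda_clf(text)
    if not result.is_propaganda:
        return result

    # Step 3: multi-label classification of the propaganda techniques used
    result.techniques = technique_clf(text)
    return result

if __name__ == "__main__":
    # Trivial stubs stand in for the fine-tuned models, for demonstration only.
    demo = analyse(
        "Everyone already agrees with this, so you should too.",
        origin_clf=lambda text: "ai",
        propaganda_clf=lambda text: True,
        technique_clf=lambda text: ["Bandwagon"],
    )
    print(demo)
```

Gating each stage on the previous one, as described above, keeps the multi-label technique classifier reserved for content that is both synthetic and propagandistic.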
Technological Highlights
Hierarchical taxonomy
A four-level taxonomy systematically categorises propaganda, from general detection to specific techniques.
- Level 1: Determines whether text is human-generated or AI-generated.
- Level 2: Identifies the presence of propaganda.
- Level 3: Groups propaganda into broader categories, such as self-identification or defamation techniques.
- Level 4: Pinpoints specific propaganda techniques, such as “Bandwagon,” “Fear Appeal,” or “Whataboutism.”
This taxonomy provides granular insights, allowing users to understand not just the presence of propaganda but the tactics employed.
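To make the hierarchy concrete, here is one way the four levels could be encoded as a simple structure. Only the labels mentioned above are included; the published taxonomy contains further categories and techniques, and the exact grouping of techniques under Level 3 categories is not reproduced here.

```python
# Illustrative, partial encoding of the four-level taxonomy.
# Only labels named in the text are listed; "..." marks omissions.
TAXONOMY = {
    "level_1": {
        "question": "Is the text human-generated or AI-generated?",
        "labels": ["human-generated", "ai-generated"],
    },
    "level_2": {
        "question": "Does the text contain propaganda?",
        "labels": ["non-propaganda", "propaganda"],
    },
    "level_3": {
        "question": "Which broad category of techniques is used?",
        "labels": ["self-identification techniques", "defamation techniques", "..."],
    },
    "level_4": {
        "question": "Which specific techniques are used?",
        "labels": ["Bandwagon", "Fear Appeal", "Whataboutism", "..."],
    },
}
```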
Fine-tuned language models
- Models like BgGPT (for Bulgarian) and Mistral (for English) were fine-tuned on domain-specific datasets.
- These models excel in tasks such as distinguishing human vs. AI-generated content, detecting propaganda, and classifying techniques.
- Fine-tuning ensures linguistic and contextual accuracy tailored to Bulgarian and English audiences.
Scalable architecture
- Built to handle large datasets efficiently, WASPer can process vast volumes of content from social media platforms, comment sections, and more.
- Scalability ensures WASPer remains effective for platforms of any size, from niche forums to major social networks.
Open resources
Several components of WASPer, including the taxonomy and trained models, are publicly available for collaboration and adaptation.
- Accessible Taxonomy: A publicly available hierarchical taxonomy of propaganda techniques for researchers and developers.
  🔸 Propaganda Techniques Taxonomy
- Pre-Trained Models: Hugging Face-hosted models for detecting propaganda content and analysing specific techniques in English and Bulgarian (a brief loading sketch follows this list).
  🔸 English Propaganda Detection
  🔸 Bulgarian Propaganda Detection
  🔸 English Propaganda Techniques Classification
  🔸 Bulgarian Propaganda Techniques Classification
- Collaboration Opportunities: These open resources invite collaboration from researchers, developers, and media professionals worldwide.
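As a starting point, the published models can presumably be loaded with the Hugging Face transformers library as sketched below. The repository IDs are placeholders rather than the real checkpoint names, and the label format shown in the comments is an assumption; consult the model cards on Identrics' Hugging Face page for the actual identifiers and outputs.

```python
# Sketch of loading the published classifiers with Hugging Face `transformers`.
# The model repository IDs below are placeholders, not the actual checkpoint names.
from transformers import pipeline

detector_en = pipeline(
    "text-classification",
    model="identrics/english-propaganda-detection",    # placeholder ID
)
techniques_en = pipeline(
    "text-classification",
    model="identrics/english-propaganda-techniques",   # placeholder ID
)

comment = "Everyone already agrees with this, so you should too."
print(detector_en(comment))                # e.g. [{'label': 'propaganda', 'score': 0.97}]
print(techniques_en(comment, top_k=None))  # one score per technique label
```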
Use Cases and Real-World Impact
Disinformation campaigns often leverage AI to produce persuasive, targeted content at scale. WASPer identifies these operations by detecting synthetic propaganda and analysing the techniques used. This information disorder intelligence allows platforms and organisations to respond promptly, mitigating the influence of coordinated manipulation.
With user-generated content (UGC) increasing exponentially, traditional moderation methods are no longer sufficient. WASPer automates the identification of harmful content, ensuring faster and more consistent moderation. Its ability to process vast volumes of data enhances operational efficiency and frees human moderators to focus on high-priority tasks.
Journalistic integrity is increasingly under threat from manipulated narratives and disinformation. WASPer safeguards editorial workflows by detecting synthetic and propagandistic content, enabling publishers to maintain the credibility of their reporting. By providing tools to verify and protect content, it ensures that audiences receive accurate, reliable information.
Propaganda undermines trust in digital spaces and exacerbates polarisation. WASPer’s advanced analysis identifies not just manipulative content but also the strategies behind it, equipping platforms to build safer, more respectful online environments. This fosters community engagement and trust in platform integrity.
For platforms monitoring media sentiment and engagement, WASPer offers valuable insights by analysing sentiment, detecting propaganda, and categorising content at scale. This enables smarter reporting and improved decision-making for brands, publishers, and organisations alike.
Team Behind WASPer
- Yolina Petrova, PhD: VP of AI & Data Solutions
- Boryana Kostadinova: Data Scientist & Linguist
- Bogomil Katanov: Junior AI Engineer
- Nikola Blajev: Junior ML Engineer
Collaboration Opportunities
Beyond being an advanced solution for combating AI-generated propaganda, WASPer also fosters a growing community of researchers, developers, and organisations dedicated to protecting the integrity of online spaces.
Here’s how you can get involved:
For researchers and innovators
Our open-source resources provide a foundation for innovation in AI-driven defences against synthetic propaganda. You can collaborate with us to:
- Expand the capabilities of WASPer’s models.
- Develop new features and applications tailored to your research goals.
- Contribute to the ongoing refinement of the hierarchical taxonomy.
For media platforms
WASPer integrates easily into the workflows of media outlets, online publishers, and content moderation teams. By embedding its AI-powered tools, platforms can:
- Automate the detection of synthetic and propagandistic content.
- Enhance trust with their audience by delivering verified, reliable information.
- Scale moderation efforts without compromising quality.
For businesses and organisations
WASPer is customisable and can be tailored to meet the unique challenges of your industry. Whether you are looking to combat disinformation, monitor public sentiment, or facilitate content analysis, adopting WASPer by Identrics ensures access to up-to-date AI technology and expert support.
Ready to collaborate?
© 2022-2025 NGI SEARCH
Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or European Commission. Neither the European Union, nor the granting authority can be held responsible for them. Funded within the framework of the NGI SEARCH Project under grant agreement No.101069364.