Identrics. Customisable AI & NLP solutions that help you understand data.


WASPer by Identrics

WASPer is a solution tackling the dual challenge of moderating user-generated content and countering AI-driven synthetic propaganda. By combining advanced AI models and a unique taxonomy, it ensures more reliable moderation and safeguards the integrity of online discourse.

Fighting AI propaganda at its source

WASPer is a powerful ally in the fight against misinformation, disinformation, and synthetic propaganda. It provides the tools necessary to identify, analyse, and mitigate AI-generated disinformation in real time.

Here is how WASPer solves the problem:

  1. Detects synthetic content: Identifies whether content is human-generated or AI-generated with very high accuracy.
  2. Filters propaganda: Uses an LLM-based model to determine if synthetic content contains propaganda.
  3. Analyses techniques: Breaks down detected propaganda into distinct techniques using a hierarchical taxonomy.
  4. Adapts across languages: Excels in processing multilingual content, starting with English and Bulgarian, making it a globally adaptable solution.

By automating and enhancing online moderation efforts of user-generated content, WASPer empowers platforms to protect their audiences and rebuild trust in digital media.

Social media platforms and comment sections, once spaces for open dialogue, now often serve as tools for manipulation. As disinformation campaigns evolve, every comment, post, and tweet becomes a medium to influence opinions and shape decisions.

The scale of the problem is staggering:

  • Recent studies report a 457% increase in AI-generated misinformation on online platforms known for spreading disinformation.
  • Sophisticated troll networks and bots exploit these tools to infiltrate comment sections, generating inflammatory content designed to mislead audiences and polarise discussions.
  • Traditional moderation methods—heavily reliant on manual oversight—are unable to keep pace with the speed, scale, and subtlety of modern AI-driven tactics.

These challenges demand a new, scalable approach to ensure the integrity of online discourse and protect audiences from disinformation’s harmful effects. WASPer was designed to address this critical need, combining AI detection with a sophisticated framework for analysing and mitigating propaganda.

WASPer operates through an advanced multi-step pipeline, designed to detect, analyse, and categorise AI-generated propaganda with precision and scalability. By leveraging state-of-the-art AI models and a robust hierarchical taxonomy, WASPer ensures that even the most subtle manipulative techniques are identified and addressed.

Here is a breakdown of the process:

1. Human vs. AI-Generated Detection

  • The first step identifies whether a given piece of content is organic (human-generated) or synthetic (AI-generated).
  • This is achieved using advanced classifiers fine-tuned on diverse datasets, ensuring high accuracy for both English and Bulgarian texts.

2. Binary Propaganda Detection

  • If the content is synthetic, WASPer determines whether it contains propaganda.
  • This step utilises a binary classification model to distinguish between neutral and propagandistic content.

3. Multi-Label Propaganda Classification

  • For synthetic content flagged as propaganda, WASPer applies a multi-label classification model.
  • This model assigns specific propaganda techniques based on a four-level hierarchical taxonomy, offering nuanced insights into the manipulation tactics used.

4. Multilingual Processing

  • WASPer supports both English and Bulgarian, with models tailored to the linguistic nuances of each language.
  • The multilingual framework ensures that the solution can address global challenges in AI-generated propaganda.
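The cascade in steps 1 to 3 can be pictured as a small decision pipeline: content flows to the next stage only if the previous one flags it. This is a minimal sketch; the classifier functions are toy placeholders standing in for WASPer's fine-tuned models, not the actual implementations.

```python
from dataclasses import dataclass, field

# Toy placeholder classifiers. A real deployment would call WASPer's
# language-specific models here instead of keyword checks.
def detect_synthetic(text: str, lang: str) -> bool:
    return "[synthetic]" in text          # illustration only

def detect_propaganda(text: str, lang: str) -> bool:
    return "[propaganda]" in text         # illustration only

def classify_techniques(text: str, lang: str) -> list[str]:
    return ["Fear Appeal"]                # illustration only

@dataclass
class WasperResult:
    is_synthetic: bool
    is_propaganda: bool = False
    techniques: list[str] = field(default_factory=list)

def analyse(text: str, lang: str = "en") -> WasperResult:
    """Cascade: 1) human vs AI check, 2) binary propaganda check,
    3) multi-label technique classification."""
    if not detect_synthetic(text, lang):
        return WasperResult(is_synthetic=False)
    if not detect_propaganda(text, lang):
        return WasperResult(is_synthetic=True)
    return WasperResult(True, True, classify_techniques(text, lang))
```

The point of the cascade is efficiency: the cheaper binary checks filter out most content, so the multi-label classifier only runs on the small fraction flagged as synthetic propaganda.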

A four-level taxonomy systematically categorises propaganda, from general detection to specific techniques.

  1. Level 1: Determines whether text is human-generated or AI-generated.
  2. Level 2: Identifies the presence of propaganda.
  3. Level 3: Groups propaganda into broader categories, such as self-identification or defamation techniques.
  4. Level 4: Pinpoints specific propaganda techniques, such as “Bandwagon,” “Fear Appeal,” or “Whataboutism.”

This taxonomy provides granular insights, allowing users to understand not just the presence of propaganda but the tactics employed.
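As an illustration, the four levels can be represented as a small data structure: levels 1 and 2 are decisions, while levels 3 and 4 form a category-to-technique tree. Only the names quoted above ("Bandwagon", "Fear Appeal", "Whataboutism", and the self-identification and defamation categories) come from the text; which technique sits under which category here is a hypothetical placeholder, not the published taxonomy.

```python
# Hypothetical sketch of the four-level hierarchy. The technique-to-category
# assignments below are placeholders, not WASPer's published taxonomy.
TAXONOMY = {
    "level_1": ["human-generated", "AI-generated"],
    "level_2": ["non-propaganda", "propaganda"],
    "level_3_to_4": {
        "self-identification techniques": ["Bandwagon"],
        "defamation techniques": ["Whataboutism"],
        "emotional appeals": ["Fear Appeal"],   # placeholder category
    },
}

def techniques_in(category: str) -> list[str]:
    """Return the level 4 techniques grouped under a level 3 category."""
    return TAXONOMY["level_3_to_4"].get(category, [])
```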

  • Models like BgGPT (for Bulgarian) and Mistral (for English) were fine-tuned on domain-specific datasets.
  • These models excel in tasks such as distinguishing human vs. AI-generated content, detecting propaganda, and classifying techniques.
  • Fine-tuning ensures linguistic and contextual accuracy tailored to Bulgarian and English audiences.


  • Built to handle large datasets efficiently, WASPer can process vast volumes of content from social media platforms, comment sections, and more.
  • Scalability ensures WASPer remains effective for platforms of any size, from niche forums to major social networks.
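A common way to achieve that kind of throughput (assumed here, since the text does not describe WASPer's internals) is to score content in fixed-size batches rather than one item at a time, so each model call amortises its overhead over many texts.

```python
from typing import Callable, Iterable, Iterator

def batched(items: Iterable[str], size: int) -> Iterator[list[str]]:
    """Yield fixed-size batches so a classifier can score many texts per call."""
    batch: list[str] = []
    for item in items:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

def moderate_stream(comments: Iterable[str],
                    score_batch: Callable[[list[str]], list[float]],
                    size: int = 32):
    """Pair each comment with the score from a batch-scoring function
    (e.g. a propaganda classifier applied to a comment stream)."""
    for batch in batched(comments, size):
        yield from zip(batch, score_batch(batch))
```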

Several components of WASPer, including the taxonomy and trained models, are publicly available for collaboration and adaptation.

  1. Accessible Taxonomy: A publicly available hierarchical taxonomy of propaganda techniques for researchers and developers.

    🔸 Propaganda Techniques Taxonomy
  2. Pre-Trained Models: Hugging Face-hosted models for detecting propaganda content and analysing specific techniques in English and Bulgarian.

    🔸 English Propaganda Detection

    🔸 Bulgarian Propaganda Detection

    🔸 English Propaganda Techniques Classification

    🔸 Bulgarian Propaganda Techniques Classification
  3. Collaboration Opportunities: These open resources invite collaboration from researchers, developers, and media professionals worldwide.
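Because the pre-trained models are hosted on Hugging Face, loading one would follow the standard `transformers` text-classification pattern. The repository name below is a placeholder, not the actual model id; substitute the real WASPer model name from Hugging Face.

```python
# Placeholder repository id. Replace with the actual WASPer model name
# published on Hugging Face before running inference.
MODEL_ID = "identrics/placeholder-propaganda-detector"

def flag_propaganda(texts: list[str], model_id: str = MODEL_ID):
    """Score texts with a Hugging Face text-classification model."""
    # Lazy import so this sketch can be read without transformers installed.
    from transformers import pipeline
    clf = pipeline("text-classification", model=model_id)
    return clf(texts)
```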

Identifying disinformation campaigns

Disinformation campaigns often leverage AI to produce persuasive, targeted content at scale. WASPer identifies these operations by detecting synthetic propaganda and analysing the techniques used. The resulting intelligence on information disorder allows platforms and organisations to respond promptly, mitigating the influence of coordinated manipulation.

Enhancing UGC moderation

With user-generated content (UGC) increasing exponentially, traditional moderation methods are no longer sufficient. WASPer automates the identification of harmful content, ensuring faster and more consistent moderation. Its ability to process vast volumes of data enhances operational efficiency and frees human moderators to focus on high-priority tasks.

Building trust in journalism

Journalistic integrity is increasingly under threat from manipulated narratives and disinformation. WASPer safeguards editorial workflows by detecting synthetic and propagandistic content, enabling publishers to maintain the credibility of their reporting. By providing tools to verify and protect content, it ensures that audiences receive accurate, reliable information.

Protecting digital spaces

Propaganda undermines trust in digital spaces and exacerbates polarisation. WASPer’s advanced analysis identifies not just manipulative content but also the strategies behind it, equipping platforms to build safer, more respectful online environments. This fosters community engagement and trust in platform integrity.

Optimising media listening platforms

For platforms monitoring media sentiment and engagement, WASPer offers valuable insights by analysing sentiment, detecting propaganda, and categorising content at scale. This enables smarter reporting and improved decision-making for brands, publishers, and organisations alike.

Get to know the key members of our dedicated team, including:
Yolina Petrova, PhD
VP of AI & Data Solutions

Boryana Kostadinova
Data Scientist & Linguist

Bogomil Katanov
Junior AI Engineer

Nikola Blajev
Junior ML Engineer

While it is an advanced solution for combating AI-generated propaganda, WASPer also fosters a growing community of researchers, developers, and organisations dedicated to protecting the integrity of online spaces.

Here’s how you can get involved:

For researchers and innovators

Our open-source resources provide a foundation for innovation in applying AI against synthetic propaganda. You can collaborate with us to:

  • Expand the capabilities of WASPer’s models.
  • Develop new features and applications tailored to your research goals.
  • Contribute to the ongoing refinement of the hierarchical taxonomy.

For media platforms

WASPer integrates easily into the workflows of media outlets, online publishers, and content moderators. By embedding its AI-powered tools, platforms can:

  • Automate the detection of synthetic and propagandistic content.
  • Enhance audience trust by delivering verified, reliable information.
  • Scale moderation efforts without compromising quality.

For businesses and organisations

WASPer is customisable and can be tailored to meet the unique challenges of your industry. Whether you are looking to combat disinformation, monitor public sentiment, or facilitate content analysis, adopting WASPer by Identrics ensures access to up-to-date AI technology and expert support.

Ready to collaborate?

© 2022-2025 NGI SEARCH


Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or European Commission. Neither the European Union, nor the granting authority can be held responsible for them. Funded within the framework of the NGI SEARCH Project under grant agreement No.101069364.