


Revolutionising Online Moderation Against AI-Generated Trolling

WASPer by Identrics

Welcome to an open-source initiative at the forefront of transforming online moderation. Join us in the battle against AI-generated trolling to reduce misinformation and elevate the online experience.

Confronting AI-Assisted Disinformation

As social platforms become battlegrounds for misinformation, trolls strategically deploy AI to erode trust in the media. The evolving landscape, especially in article comment sections, is ripe for Influence Operations (IOs), in which trolls manipulate perceptions through inflammatory messages. Conventional moderation methods, heavily reliant on human supervision, lack scalability. We address the pressing issues of AI-assisted trolling and the need for efficient, scalable solutions.

Project Overview and Societal Impact

Our project seeks to revolutionise online moderation by crafting a tool that empowers both online publishers and users. By developing a groundbreaking taxonomy and leveraging cutting-edge generative architectures, WASPer addresses the dual challenge of detecting and mitigating AI-generated trolling.

Trolls, utilising advanced AI, spread misinformation and propaganda, eroding the trustworthiness of online content.

WASPer’s ambitious goal is to create a tool that not only detects and mitigates this AI-assisted trolling but also restores the essence of ‘social’ to social media.

The impact:

  • Countering hidden agendas;
  • Elevating the quality of online discussions;
  • Ensuring that readers can trust the authenticity of the content they engage with.

Project Statement

WASPer is a visionary project committed to transforming online moderation against the rising tide of AI-generated trolling. In a landscape where trolls exploit advanced AI techniques to spread misinformation and propaganda, we aim to create an open-source tool that automates, improves, and scales content moderation across online platforms.

Our focus is two-fold:

  • The development of a novel taxonomy that intricately captures the diverse types of trolling content;
  • The application of state-of-the-art generative architectures with multilingual capabilities to detect and combat AI-generated trolling.
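To make the detection-and-moderation flow concrete, here is a minimal sketch of how a classifier could be wired into a comment pipeline. This is an illustration only, not WASPer's actual implementation: the label set, the confidence threshold, and the `keyword_stub` classifier are placeholder assumptions standing in for the project's real taxonomy and multilingual transformer model.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical two-label scheme; WASPer's taxonomy is more fine-grained.
LABELS = ("clean", "troll")

@dataclass
class ModerationResult:
    label: str   # one of LABELS
    score: float # classifier confidence in [0, 1]

def moderate_comment(text: str,
                     classify: Callable[[str], ModerationResult],
                     threshold: float = 0.8) -> str:
    """Route a comment by classifier confidence: auto-hide confident
    troll detections, queue borderline cases for human review,
    and publish everything else."""
    result = classify(text)
    if result.label == "troll" and result.score >= threshold:
        return "hidden"
    if result.label == "troll":
        return "review"
    return "published"

def keyword_stub(text: str) -> ModerationResult:
    """Toy placeholder classifier; a deployment would instead call a
    fine-tuned multilingual transformer trained on the taxonomy."""
    inflammatory = ("traitor", "sheeple", "wake up")
    hits = sum(word in text.lower() for word in inflammatory)
    if hits == 0:
        return ModerationResult("clean", 0.95)
    return ModerationResult("troll", min(0.5 + 0.2 * hits, 0.99))
```

The design keeps the classifier pluggable behind a plain callable, so the human-review path remains in place even as the underlying model improves: for example, `moderate_comment("Thanks for the article!", keyword_stub)` publishes the comment, while a single weak signal routes it to review rather than hiding it outright.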

With a team of experienced AI researchers and developers, we are determined to produce tools that empower media outlets and readers alike to swiftly identify and mitigate the impact of AI-assisted trolling, a primary challenge addressed by the NGI Search Programme.

Intended Platforms:

  • Online media outlets.
  • Comment sections.


Key Benefits:

  • Limits the negative impact of bots and trolls on readers’ experiences.
  • Enhances the trustworthiness of online content.

Use Cases

  • Identifying and mitigating misinformation campaigns.
  • Improving the quality of online discussions by reducing trolling noise.
  • Restoring trust in high-quality media and journalism by countering AI-based content manipulation.

The team

Get to know the key members of our dedicated team: Iva Marinova, Chief Data Scientist; Yolina Petrova, Ph.D., Senior Data Scientist; Dimitar Todorov, Manager Software Architecture Engineering; and Deyan Peychev, Chief Technology Officer.




© 2022-2025 NGI SEARCH


Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or European Commission. Neither the European Union nor the granting authority can be held responsible for them. Funded within the framework of the NGI SEARCH Project under grant agreement No 101069364.
