The Sofia Information Integrity Forum (SIIF) 2025 brought together over 500 participants across three days, from November 5th to 7th. It marked a significant milestone for what began three years ago as a collaborative initiative between Identrics, Graphwise, and GATE Institute, and has since grown to include Sensika Technologies, the Center for the Study of Democracy, and the Foundation for Humanities and Social Studies – Sofia.
Rundown of SIIF 2025
This year’s edition featured a packed agenda with keynote speakers including cognitive psychologist Stephan Lewandowsky, Alliance4Europe’s Saman Nazari, NATO StratCom Centre of Excellence’s Janis Karlsbergs, and European Commission legal advisor Inês de Matos Pinto.

The forum drew attendance from key public figures, including Bozhidar Bozhanov, former minister of e-governance, alongside researchers from institutions such as the Hybrid Warfare Research Institute, CERTH (Centre for Research and Technology Hellas), and NATO’s Defence Strategic Communications journal.
The event opened with addresses from key figures in the local information integrity ecosystem, including our CEO Nesin Veli, setting the tone for three days of intensive discussion spanning two parallel tracks – main sessions focusing on strategic and cognitive dimensions, and a technical track examining frontier tools and methodologies.

But the event’s growth was not just about the numbers. The real significance lay in the evidence that this event is helping to form a maturing ecosystem of researchers, technologists, policymakers, and practitioners grappling with the accelerating challenges to information integrity.
Furthermore, we are now witnessing how information integrity is evolving from a niche concern into a conversation that brings together people from all walks of life.
Here is what we took away from SIIF 2025, drawn from our own participation, our conversations, and the dozen or so presentations and panel discussions we attended.
The Omnichannel Reality of Information Threats
What emerged clearly across the forum is that information integrity is not a problem you solve with a single tool, a training programme, or a policy document. It is a complex challenge demanding an omnichannel approach spanning three interconnected domains, each necessary but insufficient on its own.
- Organisations need to build internal capacity, not just incident response teams that activate during crises, but strategic functions that anticipate threats, prepare responses, and maintain persistent communication. This means communication teams working alongside risk management, legal collaborating with public affairs, and leadership understanding that information threats are business threats. When a disinformation campaign targets your organisation, it affects stakeholder trust, market position, and decision-making capacity. You cannot address that with a press release and a fact-check.
- Academia needs to push the boundaries of what we understand about information manipulation, cognitive vulnerabilities, and technical countermeasures. The research presented at SIIF 2025 – from synthetic image detection to propaganda pattern analysis – represents years of work that bridges computer science, psychology, linguistics, and political science. But academic research cannot stay in journals and conference proceedings. It needs pathways into operational deployment, which means researchers working with practitioners to translate findings into usable frameworks.
- Technology needs to evolve from detection to prediction, and this is where we at Identrics are focusing our development. Post-crisis analysis tools are necessary but insufficient. We need systems that simulate disinformation scenarios before they occur, test how narratives might spread across different demographics, and help organisations prepare responses proactively rather than reactively. The technical challenge is not just building sophisticated AI models. It is building systems that integrate into organisational decision-making processes, provide actionable intelligence in compressed timeframes, and help teams move from “what happened” to “what is likely to happen next.”
None of these domains works in isolation. This interconnection was visible in every session, every discussion, every challenge articulated from the floor.
Organisations can build all the internal capacity they want, but without academic research to inform their strategies and technical tools to operationalise their responses, they are fighting with incomplete information.
Academia can produce brilliant research, but without organisational adoption and technical implementation, the insights remain theoretical.
Technology can create powerful tools to analyse coordinated disinformation campaigns, but without organisational structures to use them and academic frameworks to guide their application, the tools address symptoms rather than root causes.
Therefore, this year we decided to keep the conversation going beyond the event itself by creating a dedicated Discord channel for all participants and anyone else who would like to stay connected.
What Identrics Brought to SIIF 2025
Our participation in the forum went beyond our role as co-organisers. Our CEO, Nesin Veli; our Chief Operations & AI Officer, Yolina Petrova; and our Information Integrity Specialists, Todor Kiriakov and Devora Kotseva, all took part in the programme, reflecting our commitment to the omnichannel approach outlined above.
From predictive simulation demonstrations to technical track moderation to cognitive vulnerability discussions, our team contributed perspectives that bridge technology development, academic research, and operational deployment.
Shifting From Reaction to Prediction
Our CEO, Nesin Veli, opened his presentation with a provocation:
“We are excellent at reacting, but terrible at predicting.”
He demonstrated what becomes possible when organisations shift from post-crisis analysis to pre-crisis preparation, using predictive AI models to simulate disinformation campaigns spreading online.

For the presentation, Nesin used a hypothetical drone incident near Varna as a test case. Drawing on real events – recent drone sightings across European capitals and the naval drone detected off Bulgaria’s coast – a sequence of AI models generated data-driven predictions of how bad actors might exploit such incidents to spread disinformation, based on patterns from actual disinformation campaigns.
The system produced exactly what you would expect: claims of government incompetence, false flag theories, NATO provocation narratives. Plausible stories that trigger emotional responses, spread quickly, and shape how people interpret subsequent facts.
The technical architecture of this simulation combined narrative generation, synthetic audiences representing different demographics, and our WASPer propaganda detection model – all working together to simulate how information environments develop under pressure.
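To make the three-stage architecture concrete, here is a minimal toy sketch of how such a pipeline can be wired together. Every name, template, persona, and scoring rule below is an illustrative assumption for explanation only; it is not Identrics’ actual system, and the keyword heuristic merely stands in for a trained propaganda-detection model such as WASPer.

```python
# Toy sketch of a narrative-simulation pipeline. All components are
# hypothetical stand-ins, NOT the real WASPer model or Identrics code.

# Stage 1: narrative generation - candidate disinformation claims.
NARRATIVE_TEMPLATES = [
    "Officials hid the true origin of the {event}.",
    "The {event} was staged to justify a NATO build-up.",
    "Authorities were too incompetent to prevent the {event}.",
]

def generate_narratives(event: str) -> list[str]:
    """Produce candidate narratives for a given incident."""
    return [t.format(event=event) for t in NARRATIVE_TEMPLATES]

# Stage 2: synthetic audiences - toy personas with assumed susceptibility.
AUDIENCE = [
    {"segment": "low-trust", "susceptibility": 0.8},
    {"segment": "mainstream", "susceptibility": 0.4},
    {"segment": "high-media-literacy", "susceptibility": 0.15},
]

def detect_propaganda(narrative: str) -> float:
    """Stand-in for a propaganda classifier; a keyword heuristic only."""
    markers = ["hid", "staged", "incompetent"]
    hits = sum(m in narrative.lower() for m in markers)
    return min(1.0, 0.3 + 0.35 * hits)

def simulate(event: str) -> list[dict]:
    """Combine the stages: projected spread per narrative per segment."""
    results = []
    for narrative in generate_narratives(event):
        risk = detect_propaganda(narrative)
        for persona in AUDIENCE:
            results.append({
                "narrative": narrative,
                "segment": persona["segment"],
                "projected_spread": round(risk * persona["susceptibility"], 2),
            })
    return results

report = simulate("drone incident near Varna")
# Rank narrative/segment pairs by projected spread to prioritise prebunking.
for row in sorted(report, key=lambda r: -r["projected_spread"])[:3]:
    print(row["segment"], "|", row["narrative"], "|", row["projected_spread"])
```

The point of the sketch is the shape of the loop, not the numbers: a real system would replace the templates with a generative model, the personas with demographically grounded synthetic audiences, and the heuristic with a trained classifier.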

The demonstration showed the difference it makes when organisations can prepare responses to incidents in advance, establishing accurate narratives first and maintaining them persistently with the public. Because the first truth wins, especially when it is persistent.
By working with predictive AI systems like this, decision-makers learn through hands-on simulation how generative AI, neural networks, and machine learning actually work together. Most importantly, it offers a practical response to the massive output of coordinated manipulative narratives that AI technologies now facilitate online.
Contact Identrics to learn more about our technology and capabilities.
Moderating the Technical Frontier
Our Information Integrity Specialists, Todor Kiriakov and Devora Kotseva, moderated the technical track – “Faster >> Better >> Stronger: Defending and Reinforcing Information Integrity with Frontier Technology.”
The lineup brought together researchers working on AI-powered fact-checking, synthetic image detection, OSINT propaganda analysis, and strategic communication lessons from hundreds of campaigns.

What emerged was an honest acknowledgement of gaps between laboratory performance and operational reality. Detection systems achieving impressive controlled-environment accuracy struggle when adversaries adapt.
The track kept focus on practical deployment:
- How do tools perform when attackers know they exist?
- What happens moving from proof-of-concept to scale?
- How do you balance accuracy with speed when minutes matter?
Understanding Cognitive Vulnerabilities
On the forum’s third day, our Chief Operations & AI Officer, Yolina Petrova, PhD, joined a panel exploring why disinformation works at a fundamental level, alongside Andy Stoycheff from NTCenter and Assoc. Prof. Dr Ralitsa Kovacheva.
The discussion examined the biological and cognitive foundations that make humans vulnerable to manipulation. Andy Stoycheff demonstrated how stimuli trigger predictable emotional responses within milliseconds, inhibiting rational processing.

This explains why, when bad actors deliberately trigger biological responses that bypass critical thinking, the recommendation to “check your sources” becomes insufficient.
Yolina Petrova then picked up the technological side, explaining how large language models can be seen as “stochastic parrots” – systems optimised for plausibility rather than factuality.

As AI adoption grows, AI-generated content is growing too, by 200–300% year over year, while trust in online media drops by over 70% when audiences learn that content is AI-generated. This creates a paradox: organisations need AI to produce content at scale, yet disclosing its use erodes trust.
The solution discussed was not avoiding AI but educating audiences about how these systems work – what they are good at, where they fail, and how to interpret AI-generated content appropriately.
Assoc. Prof. Dr Ralitsa Kovacheva provided valuable context on Bulgaria’s structural vulnerabilities – collapsed institutional trust, high social media dependence, and a conspiracy mindset in the majority of the population.

In the end, what the panel reinforced is that there is no single antidote. Solutions require exposing manipulation mechanics, establishing a common language across sectors, and building agility to respond quickly when traditional approaches break down.
Building the Ecosystem Forward
The information environment will keep evolving. Threats will keep adapting. Our response needs to evolve faster.
The forum confirmed what we have observed in our work: progress requires ecosystem thinking. Organisations adopting best practices need those practices informed by current research and enabled by appropriate technology. Academic research needs pathways to practitioners who can implement findings. Technology needs to account for organisational realities and human psychology.
For us at Identrics, the forum confirmed our direction. We are building predictive AI models because organisations need to shift from post-crisis analysis to pre-crisis preparation. We are combining technical capabilities with psychological insight because tools that ignore human cognition fail operationally. We are working with communication teams, risk managers, and decision-makers because information integrity is foundational to organisational resilience.
This is the omnichannel approach in practice: technology serving organisational needs, guided by academic understanding, producing capabilities that advance collective knowledge.
We are grateful to everyone who made SIIF 2025 possible.
