We all say we are “AI-driven” these days. Yet, according to a sobering MIT report, nearly 90% of AI proof-of-concepts never make it to production. It seems that history keeps repeating itself.
Remember when the metaverse and blockchain, each promising to reinvent the world, attracted billions in investment and delivered little? Now, AI risks walking the same path. Too many leaders green-light AI projects not because they solve a real business problem, but because they do not want to be left behind by the AI hype.
Despite over $40 billion invested in enterprise AI, studies show that most organisations still see zero measurable value. And it is not because the AI models do not work – they usually do. It is because they are built in isolation, without the context of domain expertise, the real processes in the organisation, or the bigger-picture purpose.
When we chase the hype instead of the strategy, we end up automating false certainty – numbers that look impressive, but explain nothing. That is not a failure of technology. It is a failure of alignment between what AI can do and what the business actually needs. And when that connection is missing, AI only amplifies the misalignment.
And that brings us to our own field, media intelligence, where we recently delivered a major AI-driven improvement.
The Misalignment in Media Intelligence

What does misalignment look like in media intelligence? Part of it hides in plain sight.
One metric in particular has refused to die and keeps the industry misaligned: the Advertising Value Equivalent, or AVE. That is the idea that one article equals one ad (one value point). It is simple, seductive… and utterly meaningless nowadays.
Once upon a time, AVE was a workable yardstick. But it can no longer capture what is happening with audiences, because they do not consume media the way they used to. Today, they scroll, they swipe, they share, and every one of those actions carries a different weight. That is why we call AVE a zombie metric – it refuses to die, no matter how irrelevant it becomes.
In a way, AVE is the media industry’s version of the AI hype cycle: it measures what is easy, not what is meaningful in this context. It gives the illusion of progress without real alignment to purpose.
And AVE is not alone.
Take the classic static reach – the number everyone quotes: “your story reached two million people!” Sounds impressive. Except… most of those two million never even noticed. They scrolled past it between a cat video and a meme.
Looked at in isolation, static reach does not make sense either. It is a surface-level number, a count of potential eyeballs, not real attention. It does not reflect engagement, it does not account for credibility, and it certainly does not measure trust. Disconnected from those factors, static reach ends up as just another vanity metric.
“A front-page story that nobody reads is worth less than a tweet that changes policy.”
How We Approached This Case
Together with our partners from Medianet, we saw this gap growing. Brands were demanding proof of influence, not impressions. Public institutions wanted evidence of trust, not just traffic. And instead of starting with code, we started with a question:
What would valid scores look like?
Our goal in this journey was simple: to create meaningful, AI-informed, automated yet transparent scoring that makes sense in today’s diverse media world – one that unites print, online, and broadcast impact under a common language.
- We wanted a measurement system anyone could understand, not just data scientists.
- A score that anyone can click through and inspect to understand how it was calculated.
- We did not want to replace human judgment; we wanted to inform it.
Because AI should not be a black box that spits out a score, but a lens that helps humans see clearly. So, we went to the scientific literature.
We spoke to domain experts – analysts, journalists, communication scientists – and we saw that every story carries three fundamental signals:
- where it appeared,
- who created it, and
- how people reacted to it (or source, author and audience).
We supported these observations with empirical evidence – historical data from millions of articles, posts and broadcasts. We then built predictive models and AI-driven simulations and analysed the long-term trends for each of the three pillars. We also tested their relative importance and quantified how each of them actually contributes to real-world impact.
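To make that last step concrete, here is a minimal sketch in Python – purely illustrative, not our production pipeline – of how the relative importance of the three pillars could be estimated from labelled historical data. The features, numbers and target values are all made up for the example.

```python
# Hypothetical sketch: estimating pillar weights from historical data.
import numpy as np
from sklearn.linear_model import LinearRegression

# One row per historical story: [outlet reach, engagement volume, author authority].
X = np.array([
    [2_000_000, 15_000, 0.82],
    [  500_000, 40_000, 0.35],
    [  120_000,  2_000, 0.91],
    [3_500_000,    800, 0.12],
])
# Observed real-world impact for each story (illustrative values).
y = np.array([71.0, 64.0, 58.0, 22.0])

model = LinearRegression().fit(X, y)
for pillar, weight in zip(["reach", "engagement", "authority"], model.coef_):
    print(f"{pillar}: {weight:.4g}")
```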
Our goal was to make this multi-layered influence of source, author and audience measurable and explainable. We did not need a multi-billion-parameter model. We needed a relevant one. A metric that, like ethical AI itself, is built not to impress, but to inform. A metric that reflects reality, not vanity. We wanted a framework, a metric, that is R.E.A.L.
Introducing the R.E.A.L. Impact Score

That is what led us to create the R.E.A.L. Impact Score: a score from 1 to 100 that is calculated automatically for every published story.
“R” for Reach
Reach is indeed where it starts. This is the traditional outlet reach that looks backwards – it is a snapshot of the past. It tells us how large the audience of a specific outlet or source used to be and helps us estimate how large it might be next time a story gets published in the same source.
Each month, we refresh that snapshot for every outlet we track, so that our predictions stay current for the stories being published right now.
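As a rough illustration of the idea – with a made-up outlet name and an assumed smoothing factor, not our actual method – a forward-looking reach estimate can be derived from those monthly snapshots like this:

```python
# Hypothetical sketch: turning backward-looking snapshots into an estimate.
snapshots = {  # outlet -> monthly audience figures, oldest first
    "funny-magazine.example": [1_800_000, 1_900_000, 2_000_000],
}

def estimated_reach(outlet: str, alpha: float = 0.5) -> float:
    """Exponentially weighted average: recent months count more."""
    history = snapshots.get(outlet, [])
    if not history:
        return 0.0
    estimate = float(history[0])
    for audience in history[1:]:  # oldest to newest
        estimate = alpha * audience + (1 - alpha) * estimate
    return estimate

print(f"{estimated_reach('funny-magazine.example'):,.0f}")  # ~1,925,000
```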
“E” for Engagement
But reach alone is not reality – it is only the surface. The life of a story today is fast. That is where engagement comes in – it goes deeper. It shows what people are actually doing with a story.
Are they sharing and commenting on it? Debating it, defending it?
Engagement reveals attention, and attention is the real currency of modern communication.
Our predictive models showed that most engagement happens within the first thirty hours after publication. That is when audiences decide whether a story matters or disappears. That is why we made sure to measure attention dynamically.
For each newly published story, our system tracks engagement signals in near real time within that crucial window, constantly updating the overall score. You can see whether a story is gaining traction or going viral, and act while it matters – not weeks later, when the story is already dead.
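Here is a simplified sketch of that mechanism. Only the thirty-hour window comes from our findings above; the signal weights and decay half-life are illustrative assumptions.

```python
# Hypothetical sketch: time-decayed engagement inside the 30-hour window.
WINDOW_HOURS = 30.0       # the high-signal window described above
HALF_LIFE_HOURS = 10.0    # assumed decay half-life, not a real parameter
SIGNAL_WEIGHTS = {"share": 3.0, "comment": 2.0, "like": 1.0}  # assumed

def engagement_score(events: list[tuple[str, float]]) -> float:
    """events: (signal type, hours since publication) pairs."""
    score = 0.0
    for signal, age_hours in events:
        if age_hours > WINDOW_HOURS:
            continue  # outside the window, ignored
        decay = 0.5 ** (age_hours / HALF_LIFE_HOURS)
        score += SIGNAL_WEIGHTS.get(signal, 0.0) * decay
    return score

print(engagement_score([("share", 1.0), ("comment", 5.0), ("like", 40.0)]))
```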
“A” for Authority
And just as timing matters, so does origin. Where a story comes from matters. Who creates it matters. That is where authority comes in – the credibility layer of our scoring framework.
The scientific world gave us a valuable analogy here. In science, influence is not about how often someone speaks; it is about how often their work is cited by peers. That is the essence of credibility. We applied the same principle to media intelligence: mapping who genuinely shapes conversations, not just who shouts the loudest.
Using network analysis, we built domain-specific AI models trained on media and communication data to map who genuinely influences conversations. And just like in research, where citations evolve as new studies emerge, our authority layer evolves too. It is refreshed every month, tracking who is most referenced, on which topics, across which networks.
The result is a living credibility map – a dynamic view of who truly shapes public dialogue.
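For intuition, a citation-style authority layer can be approximated with PageRank over a “who references whom” graph. This toy example uses the networkx library and invented names; our actual models and data are, of course, richer.

```python
# Hypothetical sketch: authority as PageRank over a reference graph.
import networkx as nx

references = [  # (citing party, cited party)
    ("analyst_a", "jane_doe"),
    ("outlet_b", "jane_doe"),
    ("jane_doe", "prof_c"),
    ("outlet_b", "prof_c"),
]

graph = nx.DiGraph(references)
authority = nx.pagerank(graph)  # higher = more referenced by peers
for author, score in sorted(authority.items(), key=lambda kv: -kv[1]):
    print(f"{author}: {score:.3f}")
```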
“L” for Leverage
When the three pillars above (reach, engagement and authority) are seen together, they give us real leverage: a way to measure the communication impact of any published story.
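Conceptually, that combination can be as simple as a weighted blend of the three normalised pillar scores. The weights and inputs below are illustrative assumptions, not our published formula:

```python
# Hypothetical sketch: folding the three pillars into one 1-100 score.
def real_impact(reach_n: float, engagement_n: float, authority_n: float,
                weights: tuple = (0.3, 0.4, 0.3)) -> int:
    """Each input is assumed to be already normalised to [0, 1]."""
    w_r, w_e, w_a = weights
    combined = w_r * reach_n + w_e * engagement_n + w_a * authority_n
    return max(1, min(100, round(100 * combined)))

print(real_impact(reach_n=0.85, engagement_n=0.70, authority_n=0.62))  # 72
```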
How It Works in Practice
Take this story, for example:

It seems that Jane Doe from the Funny Magazine published an online news article about our event.
The impact of this story at this very moment is estimated at 73 – not bad at all. And because we made each piece of the R.E.A.L. Impact Score transparent – no black boxes, no mystery weights – you can expand the score and see what stands behind it. In this case, the story was published in a source with a large audience, by an author who is relatively well recognised by her peers.
Then you can go a layer deeper and see the real reactions: expand the engagement layer, for example, or click on authority to see exactly how this author’s influence was calculated, and so on.
The Right Method, The Right Team

Now, there is a new AI buzzword every day – agents, synthetic data, RAG, you name it. Most will not matter in a few months. They are newborn ideas – exciting, but still learning to walk.
What matters is not the next acronym.
It is the method: domain knowledge, collaboration, transparency. AI is not a one-person POC revolution with fancy dashboards; it is a team sport. And the teams that win are not the ones chasing hype. The right team is not made of “AI gurus”; it is cross-functional:
- A domain expert who understands the real problem.
- An AI engineer who knows the fundamentals and listens to the domain experts.
- And a leader who keeps everyone aligned with the bigger picture.
That is exactly how we built our Impact Score. Not with a giant, billion-parameter model, but with small, domain-specific ones – each designed for a specific task in media and communication. One classifies story types, another tracks engagement, and a third measures credibility. Each model is built not to sound intelligent, but to deliver intelligence.
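In code terms, that architecture looks less like one monolithic model and more like a pipeline of small, replaceable components. Here is a sketch with trivial stand-ins for the real models:

```python
# Hypothetical sketch: composing small, task-specific models.
class StoryTypeClassifier:
    def predict(self, text: str) -> str:
        # placeholder rule; the real component is a trained classifier
        return "news" if "announce" in text.lower() else "opinion"

class EngagementTracker:
    def score(self, events: list) -> float:
        return float(len(events))  # placeholder for the dynamic tracker

class CredibilityModel:
    def score(self, author: str) -> float:
        return 0.8  # placeholder for the network-based authority lookup

def analyse(story_text: str, events: list, author: str) -> dict:
    """Each specialised model does one job; together they deliver the analysis."""
    return {
        "type": StoryTypeClassifier().predict(story_text),
        "engagement": EngagementTracker().score(events),
        "authority": CredibilityModel().score(author),
    }

print(analyse("We announce our new event", ["share", "comment"], "jane_doe"))
```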
“You don’t need the biggest model. You just need the right one.”
From Vanity Metrics to Real Measurement
When we began this journey, our goal was simple – to make media evaluation meaningful again. Not louder. Not faster. Not flashier. But smarter.
We started small – one type of content. We tested, learned, and refined. And that is how real transformation happens – not with headline hype, but with a method.

If you want added value, think big, but start small. Build something useful before you try to make it universal. Measure. Iterate. Scale intelligently. That is how AI earns trust. That is how one makes an impact R.E.A.L.
