The world is a digital place, and humanity is living in it. Over the last decade, artificial intelligence (AI) has become an integral part of our lives, and today it seems there is very little that cannot be done with its help.
Our CEO, Vladimir Petkov, shared more about ethical AI and how technology can help fight disinformation in an interview for the Annual AI Report for Bulgaria, created by SeeNews and AI Cluster Bulgaria.
The AI report for Bulgaria
Bulgarian AI developers generated EUR 25.7 million in revenues in 2021 – an increase of 17% from the previous year, and a jump of over 35% compared to 2019.
The analysis shows that Bulgarian entrepreneurs have a strong presence on the local AI scene – as many as 70% of the companies in the Bulgarian IT sector are owned solely by Bulgarian individuals or companies.
Ethical AI
As we introduce new technologies into our lives, we must always consider the impact on society. AI is one of the main drivers of the digital revolution, but we need to think about how to embed ethical principles across its lifecycle.
AI should be developed and used responsibly, with concern for human rights and dignity. Great effort should be made to build strong security measures into it from conception onward, not just as an afterthought or a response to negative publicity.
Identrics creates AI and machine learning models and, at the same time, understands the huge responsibility that comes with developing ethical technology. Therefore, every AI model should be fine-tuned with additional safeguards against bias, factual inconsistencies, hallucinations, and other potential issues.
“We can prevent unethical behaviour of models by ensuring that the data used to train them is unbiased. We then monitor and control the models, fact-checking and storing data appropriately. Learning from past mistakes is also a must: we work with model improvement processes such as human-in-the-loop (HITL) cycle monitoring, detect anomalies in production environments, and reassess training data regularly,” said Vladimir Petkov, CEO of Identrics.
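To make one of those steps a little more concrete, here is a minimal Python sketch (our own illustration, not Identrics’ actual tooling) of how anomalies in a model’s production behaviour might be surfaced: it compares the distribution of live prediction scores against a reference window using a population stability index and flags drift for human-in-the-loop review. The score distributions, variable names, and the 0.2 threshold are all assumptions made for the example.

```python
# Illustrative sketch only: monitor a model in production and flag anomalies in
# its prediction distribution. All names and numbers here are assumptions for
# the example, not Identrics' pipeline.
import numpy as np

def population_stability_index(reference, live, bins=10):
    """Compare two score distributions; larger values mean a bigger shift."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Floor the proportions to avoid division by zero / log of zero.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

# Stand-ins: scores collected at validation time vs. a recent production window.
reference_scores = np.random.beta(2, 5, size=5_000)
live_scores = np.random.beta(3, 4, size=1_000)

psi = population_stability_index(reference_scores, live_scores)
if psi > 0.2:  # common rule-of-thumb threshold for a significant shift
    print(f"PSI={psi:.3f}: drift detected, route samples to HITL review")
else:
    print(f"PSI={psi:.3f}: no significant drift")
```

In practice the threshold and reference window would be tuned per model, and a drift alert would typically trigger the fact-checking and training-data reassessment steps Vladimir describes above.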
AI against disinformation
Fake news, propaganda, and disinformation are problems across the globe. Social networks and new technologies have accelerated their spread and continue to contribute to their prevalence.
“We provide solutions to help you detect hate speech and disinformation in comments on blogs, news sites, forums, or other online communities. The latest projects we are working on with partners include machine learning models for detecting and alerting on hate speech and propaganda,” shares Vladimir.
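As a rough illustration of what a comment-moderation model does (the projects Vladimir mentions are not described in detail here and will differ), the toy Python sketch below trains a TF-IDF plus logistic regression classifier on a handful of made-up comments and flags new ones for review. Every comment, label, and threshold in it is invented for the example.

```python
# Toy baseline only: a TF-IDF + logistic regression comment classifier showing
# the general shape of "flag hate speech in comments". Training data is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_comments = [
    "Great article, thanks for sharing",
    "I completely disagree, but that's a fair point",
    "People like you should be silenced",
    "Get out of our country, you are not wanted here",
]
train_labels = [0, 0, 1, 1]  # 0 = acceptable, 1 = flag for moderator review

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_comments, train_labels)

new_comment = "Nobody wants your kind here"
score = model.predict_proba([new_comment])[0][1]  # probability of class 1
if score > 0.5:
    print(f"Flagged for human review (score={score:.2f})")
else:
    print(f"No flag (score={score:.2f})")
```

A production system would replace this toy classifier with properly trained language models and route flagged comments to human moderators rather than acting on them automatically.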
Conclusion
At the end of the day, AI is like any other technology: it can be used for good or ill, depending on how we apply it. That is why we need to focus on employing it responsibly and ethically, so we can avoid the negative consequences that could otherwise arise.
Write us a message to ask for bespoke recommendations on how ethical AI solutions can make a positive impact on your business.