In an increasingly connected world, AI is becoming ever more prevalent. Its benefits are clear to see: it can help us with everything from making medical diagnoses to creating new business processes. As we grow more reliant on AI systems, however, we must address the ethical issues they raise. We should be more aware of the risks of AI, because many people still rely on the technology without fully understanding it.

What is ethical AI?

While definitions of ethical AI vary across companies and industries, one good way to describe it is as responsible AI. Using AI responsibly means, in general terms, using it in ways that do not cause harm to stakeholders. Under this definition, ethical AI can mean many different things, from avoiding liability issues to ensuring safety. Some companies are developing AI guidelines that encode rules matching the organization’s core values, and these guidelines can differ from company to company.

Vladimir Petkov, CEO at Identrics, during DEV.BG All In One

Technological innovation is important, but before we implement it, we must be aware of the ethical consequences.

What ethical issues do we face?

Ethical AI is essential because of the impact AI can have on society. AI software is used to make decisions and provide insights, which means it affects our lives. Several ethical issues therefore need to be addressed:

  • Prejudice based on race, political orientation, and gender;
  • Hallucinations;
  • Factual inconsistencies;
  • Hate speech, including speech targeting queer people.

How data is collected for machine learning models also significantly affects how well they work, and poor collection practices can lead to bias and other problems such as discrimination and racism. With enough training data, it is possible to build very accurate systems that discriminate against people purely on their appearance or sexual orientation without anyone noticing until it is too late. By then, it may be impossible for those affected to get back into, for example, a job market dominated by automated systems of precisely that kind.
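One practical way to surface the kind of hidden bias described above is to compare a system’s decision rates across demographic groups. The sketch below is a minimal, hypothetical illustration: the group labels, decisions, and 20% threshold are invented for the example, not taken from any real system.

```python
# Minimal sketch: checking whether an automated system's acceptance rates
# differ sharply across groups. All data and group names are hypothetical.

from collections import defaultdict

def positive_rate_by_group(decisions):
    """decisions: list of (group_label, accepted: bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [accepted, total]
    for group, accepted in decisions:
        counts[group][0] += int(accepted)
        counts[group][1] += 1
    return {g: acc / total for g, (acc, total) in counts.items()}

# Hypothetical screening decisions from an automated system.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

rates = positive_rate_by_group(decisions)
gap = max(rates.values()) - min(rates.values())
if gap > 0.2:  # arbitrary tolerance, chosen only for illustration
    print(f"Warning: acceptance rates differ by {gap:.0%} across groups")
```

A check like this only reveals a symptom; deciding whether a rate gap is justified still requires human judgment about the data and the decision being made.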

How does knowledge management contribute to ethical AI?

Knowledge management is a systematic approach to creating, capturing, storing, retrieving, using, and sharing knowledge. It helps us become more efficient and effective by learning from our mistakes and avoiding them in the future. Knowledge management can help us make better decisions because we have more information at our disposal.

Knowledge management also allows us to build on what we already know, and to come up with new solutions when faced with problems or issues related to ethical AI.

Vasil Shivachev, COO at Identrics, during Tech against Disinformation event

We have experience in extracting information from open sources and turning it into knowledge. We monitor more than 100K sources daily (traditional, online, social media, Telegram, etc.), and more than 1.5 million items reach us per day.

Ethical AI Principles

As a society, we’re still figuring out how to develop and implement ethical AI properly. Many people have strong opinions on what should or shouldn’t be done in the development of AI, but there are also some broad principles that many agree on. We’ll briefly go over these below:

  • Encourage transparency and accountability. People should be accountable for their actions, whether they are designing an algorithm or testing it with real users.
  • Promote fairness and inclusiveness. The goal of an ethical system is to help everyone. We should not ignore certain groups when designing algorithms; they should work well for all types of people.
  • Use data responsibly. Data collection is a huge part of building algorithms today; companies should know where their data comes from and what privacy policies exist around it (or don’t).
  • Design for trust, privacy, and security. These are the three big concepts behind ethical systems. Trust means a system can be relied on to perform well, without risk of failure, each time it is used. Privacy means personal information cannot be accessed without permission. Security means keeping attackers out, and ensuring that if someone does gain access to an account, measures are taken immediately so they cannot do serious damage before being locked out again.

How to build Ethical AI

Prevention in training data. Your training dataset must reflect only ethical interactions. If you are building a self-driving car, for example, the data should reflect only safe driving practices, never dangerous or unethical behavior such as speeding or running red lights.
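As a rough illustration of screening a training set before use, here is a minimal sketch; the tag names, the dataset, and the idea of tagging examples with observed behaviors are all hypothetical assumptions for the example.

```python
# Minimal sketch: keeping only training examples free of disallowed
# behaviors. Tags and records are hypothetical illustrations.

UNSAFE_TAGS = {"speeding", "ran_red_light", "illegal_turn"}

def filter_training_examples(examples):
    """Keep only examples whose tags contain no unsafe behavior."""
    return [ex for ex in examples if not (set(ex["tags"]) & UNSAFE_TAGS)]

raw = [
    {"id": 1, "tags": ["lane_keeping"]},
    {"id": 2, "tags": ["speeding"]},           # excluded: unsafe behavior
    {"id": 3, "tags": ["full_stop", "yield"]},
]

clean = filter_training_examples(raw)
print([ex["id"] for ex in clean])  # -> [1, 3]
```

In practice the hard part is labeling which behaviors are unsafe in the first place; a filter like this only enforces a policy someone has already defined.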

Constant monitoring and control by people. Ethical AI systems must be monitored by humans who can intervene if they detect anomalies in the system’s behavior or outputs. This could mean directly observing how your AI interacts with users, or simply running tests on its output data to see whether it conforms to expected outcomes (and comparing this against previous results).
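The comparison against previous results can be as simple as tracking whether a key outcome rate drifts away from its historical baseline and, if so, routing decisions to a human reviewer. The sketch below assumes hypothetical rates and an arbitrary tolerance.

```python
# Minimal sketch: flagging model-output drift for human review by comparing
# the current outcome rate against a historical baseline. The numbers and
# the 10% tolerance are hypothetical illustrations.

def needs_human_review(baseline_rate, current_rate, tolerance=0.10):
    """Flag when the current positive-outcome rate drifts beyond tolerance."""
    return abs(current_rate - baseline_rate) > tolerance

baseline = 0.42  # e.g., historical approval rate
current = 0.61   # today's approval rate from the live system

if needs_human_review(baseline, current):
    print("Drift detected: route recent decisions to a human reviewer")
```

The point of the check is not to judge the system automatically, but to decide when a person should look at it.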

Fact-checking and storage procedures. If there is a question about whether an AI has made a fair decision (e.g., in hiring someone for a job), that decision should always be checked against facts stored elsewhere before it is used as evidence in any discrimination claim raised by the people affected by such automated decisions.
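Cross-checking an automated decision against a system of record might look like the following minimal sketch; the candidate record, its fields, and the qualification rule are hypothetical assumptions for illustration.

```python
# Minimal sketch: auditing an automated hiring decision against stored
# facts. Record fields and the qualification rule are hypothetical.

stored_facts = {  # what the system of record says about the candidate
    "candidate_42": {"meets_experience": True, "passed_assessment": True},
}

def audit_decision(candidate_id, rejected):
    """Return a note when a rejection contradicts the stored facts."""
    facts = stored_facts.get(candidate_id, {})
    qualified = facts.get("meets_experience") and facts.get("passed_assessment")
    if rejected and qualified:
        return f"{candidate_id}: rejection conflicts with stored facts; review"
    return f"{candidate_id}: decision consistent with stored facts"

print(audit_decision("candidate_42", rejected=True))
```

Keeping the facts outside the model is the key design choice here: the audit trail stays meaningful even if the model itself changes.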

Vasil Shivachev, COO at Identrics, during Tech against Disinformation event

Technologies come not to replace people but to help them do their jobs better, because the enormous flow of information is impossible for humans to process alone.


Ethical AI is a topic that has been gaining much traction recently, but it is still not widely understood, in our view because most people don’t actually know how ethical AI works or what it means for society. We hope this text has given some insight into ethical AI and how it can be implemented in businesses. Still, many questions about the future of AI remain to be answered.