When we read or hear about Artificial Intelligence, the term “AI Ethics” comes up very often. Simply put, AI Ethics focuses on developing Artificial Intelligence (AI) ethically. Recently, MKAI, an organization that hosts conversations on various themes in Artificial Intelligence and whose community includes members from different professions, countries, and backgrounds (many of whom do not have a technical background), started a dialogue around the theme “Business Rationale for Ethical AI.” This was an interesting choice of subtopic within the broader discussion of AI Ethics, and as an AI enthusiast, it got me thinking: what could be the business rationale for developing AI systems in an ethical manner?
One business rationale for developing Ethical AI (though not the only one) is to build trust. Because AI and what it means (covered below) may not be known to every single person, this trust ought to be built with all stakeholders of the business. This means focusing not only on shareholders but also on employees, customers (current or potential), regulators, and everyone else who may be impacted by a business that is using or seeking to use AI. Trust creates confidence amongst stakeholders and benefits the business in both the short and long term. Without it, an organization working in any field, including AI, may not have the desired impact in the areas where it seeks to create an effect.
The next question that came up naturally was how to build that trust. Coming from a legal background, I found this question breaking into a subquestion: would legal compliance by a business involved in AI demonstrate trust?
Generally speaking, legal compliance by an organization is not the only component of building trust, but it is a critical one. Applied in the AI context, this means that an organization compliant with existing AI regulations and frameworks may demonstrate that it is following ethical practices.
However, as far as AI legal compliance is concerned, there is currently no finalized formal legal framework globally to guide us. The European Union has taken the lead: it published its proposed Artificial Intelligence Act as recently as April 21, 2021. Legal compliance with AI is thus still being figured out globally, and various entities are involved in demystifying and understanding these regulations so that this essential legal framework can be developed effectively. For example, ForHumanity (an organization known for developing AI Audits to build trust in AI) is working on this, see here. In the absence of fundamental laws governing AI (known as “hard law”), legal compliance as a means of building trust in AI, unfortunately, does not take us very far at this point.
Even assuming that hard law or AI regulations did exist, a philosophical question sometimes arises in the context of ethics: an organization may be legally compliant but not ethical. Such a discussion is beyond the scope of this post. Moreover, from a practical standpoint it is impossible to anticipate all future developments, and therefore unrealistic to expect organizations to foresee every legal consequence and regulatory framework.
Cognizant of the absence of “hard law” and aware of the value of building trust in AI, many leading organizations working on AI have published self-regulations to demonstrate that they are building Ethical AI. This is known as soft law. These self-regulations share common themes, such as:
- Avoiding bias in the AI system
- Ensuring justice and fairness in the design of the AI
- Focusing on user data rights and respect for human autonomy and human rights
For a person curious to understand and demystify what Ethical AI means, reading these self-regulations and pursuing online courses on AI and ethics is an excellent way to start.
Another way to understand the broad concept of “Ethical AI” in a simple manner is to focus on what AI can (currently) do and what AI (currently) cannot do. This is critical. It means we must first understand what AI is, because only when we know what AI is can we see what it does.
What is AI? – AI is intelligence simulated by machines. The word “intelligence” in the term “Artificial Intelligence” seems to create an impression that machines can reach human-level intelligence, giving rise to the fear that machines may replace humans one day. But herein lies the nuance, which is also relevant for understanding Ethical AI. Artificial Intelligence is of two types: (a) Artificial Narrow Intelligence (“ANI”) and (b) Artificial General Intelligence (“AGI”). The AI that exists today is Artificial Narrow Intelligence. Many experts agree that Artificial General Intelligence – a point where machines are as intelligent as humans in every sense – is technologically far away. How far, and when we will reach it, is a different debate. The critical point is that what we have today is Artificial Narrow Intelligence.
To simplify the explanation even further and help you understand Artificial Narrow Intelligence, think of AI as machines that can learn, reason, and act for themselves. Let us take a few examples:
- Can the machine “hear” you and respond sensibly? If the answer is yes, then it is most likely AI. Think of Alexa. You speak to the device – it hears you and responds.
- Can the machine “read” what you type and respond sensibly? If the answer is yes, then it is most likely AI. Think of any recommendation engine – the search bar on Netflix, for instance. You type in there, and before you finish, it gives you recommendations.
If you want to understand AI correctly, a great way to do this is to see this wonderful diagram here.
The use of AI has grown in the last couple of years predominantly due to a technique called machine learning. In one common form of it, an algorithm is trained on large amounts of labeled data, identifies patterns in that data, and then uses those patterns to make decisions about new inputs. This is called supervised learning – one machine learning technique for training AI and developing AI models. There are other ways, but to keep things simple, let us think of supervised learning as one technique for building an AI system.
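To make this concrete, here is a minimal sketch of supervised learning in Python using the scikit-learn library. The dataset and model choice are illustrative assumptions for this post, not anything specific to the products mentioned above:

```python
# A minimal supervised-learning sketch: the model learns patterns
# from labeled examples, then makes decisions about data it has not seen.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Labeled data: X holds the measurements, y holds the known answers.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Training: the algorithm identifies patterns linking inputs to labels.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Decision-making: the trained model predicts labels for new inputs.
predictions = model.predict(X_test)
print(f"Accuracy on held-out data: {accuracy_score(y_test, predictions):.2f}")
```

The specific model does not matter here; the point is the loop itself – train on data, find patterns, then decide – which is the pattern behind the recommendation engines discussed above.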
Understanding AI this way, two patterns (pun intended) clearly emerge. The first: AI is everywhere – think of Alexa, Netflix, YouTube, Spotify, or any other recommendation engine you may be using. It is likely a part of your daily life. The second: AI starts with data. Data is used to train the AI, and it is in that data that the AI finds its patterns.
Therefore, if we are looking at ethical AI, or at developing ethical AI that builds trust, it is essential to know the data on which it is built and the context of that data. From an ethics perspective, the things to think about are whether the data is biased and whether it is representative. If your data is biased or not representative, your AI cannot give accurate results, and that may have adverse impacts on society. The debate on facial recognition technology highlights the harm of biased data sets, which is why a federal law to regulate facial recognition was recently proposed in the US.
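As a small illustration of what “knowing your data” can look like in practice, here is a sketch that inspects how groups are represented in a training set before any model is built. The column name, group values, and threshold are hypothetical assumptions, chosen only to mirror the facial-recognition example:

```python
import pandas as pd

# Hypothetical training data; in practice this would be loaded from your
# own data source rather than constructed inline.
df = pd.DataFrame({
    "skin_tone": ["light"] * 800 + ["dark"] * 200,
    "label": [1, 0] * 500,
})

# Compare each group's share of the data against a chosen floor.
MIN_SHARE = 0.3  # hypothetical threshold for "representative enough"
shares = df["skin_tone"].value_counts(normalize=True)
print(shares)

for group, share in shares.items():
    if share < MIN_SHARE:
        print(f"Warning: group '{group}' is only {share:.0%} of the data - "
              "a model trained on it may perform worse for this group.")
```

A check like this does not make a system ethical by itself, but it surfaces the representativeness question before the patterns in a skewed data set are baked into a model.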
In conclusion, if you are someone thinking about AI and wondering what it is all about, or thinking about AI ethics, the key takeaways from this post are: (a) any debate on Ethical AI must first begin by understanding what AI can do today; (b) once you know and understand what AI can do today, you may, based on your unique experiences, be able to anticipate or identify sub-topics in the AI ethics debate that require discussion to create awareness; and (c) building trust with stakeholders is key for any business involved in AI, and developing ethical AI will only bolster that trust.