
Developments in AI are raising problems of real ethical and societal relevance: job displacement, privacy, security, data protection, surveillance, physical safety, and more. With regard to jobs, AI may take over a significant share of the tasks currently performed by humans, with consequences for workflows, societies, economies, and labour markets. A broad debate about the future of work is underway, raising interesting philosophical questions about the value and meaning of work, and about the meaning of human lives. Vulnerable users, such as children, can also be affected through attachment and deception. Moreover, algorithmic systems carry a potential for bias in data collection and data processing throughout the life of the system; a simple illustration follows below.
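As a concrete, hypothetical illustration of the bias concern just mentioned, the sketch below compares positive-outcome rates across two groups in a toy dataset, a simple "demographic parity" check. The data, column names, and the idea that a large gap warrants scrutiny are all assumptions made for illustration, not a prescription.

```python
# A minimal sketch of surfacing possible bias in collected data by
# comparing positive-outcome rates across groups. The records and
# column names here are hypothetical placeholders.
import pandas as pd

# Toy records standing in for a real dataset; "hired" is the outcome.
df = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "hired": [1,   1,   0,   1,   0,   0,   1,   0],
})

# Positive-outcome rate per group. A large gap is a signal (not proof)
# that the data, or the process that generated it, may be biased.
rates = df.groupby("group")["hired"].mean()
print(rates)
print(f"parity gap: {rates.max() - rates.min():.2f}")
```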
The grand narratives about a future in which AI surpasses humans in general intelligence are currently not a priority in the conversations MKAI hosts: many problems in AI ethics are near-term and require immediate attention, and there is no evidence that AGI will be possible in our lifetimes. While AI is currently applied narrowly to serve specific purposes, its implications still raise a barrage of ethical questions. People are often unclear about how systems reach their decisions, and users, and even developers, do not understand these systems completely. AI carries endless implications that require deep ethical analysis to ensure the safety of its systems.
Responsibility and Transparency for AI Systems
As mentioned, these ethical problems are not merely philosophical; they are practical issues that must be addressed. As AI gains more agency and shapes the future, someone needs to take responsibility for these many considerations. The technology itself cannot currently be held responsible, because it lacks the capacities required for moral agency, such as free will, and lacks any form of consciousness. Humans are therefore held responsible, especially legally, but the question is: who exactly? Philosophers of technology call this the "many hands problem".
Here we can draw on Lessig's distinction between "code as law" and "code is law". Whether we are in physical spaces or on the web, design plays a crucial role in making people follow rules: design influences what people do, enabling "control by design". Code as law refers to traditional law, where legislation determines what producers and consumers may do. Code is law implies that the coder becomes the regulator, constraining users' behaviour (Bano). But algorithmic regulation may threaten the right to privacy, and such processing will impact individual autonomy and self-determination (Bano).
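To make "code is law" concrete, here is a minimal hypothetical sketch in which a rule is enforced by the code itself rather than by a posted policy. The platform, function name, and age threshold are all invented for illustration; the point is that the constraint is absolute at the point of interaction.

```python
# A minimal sketch of "code is law": a rule enforced by design rather
# than by posted policy. The threshold and function are hypothetical.
MINIMUM_AGE = 13  # hypothetical regulatory threshold

def create_account(name: str, age: int) -> dict:
    """Create an account only if the age rule is satisfied.

    The user cannot negotiate with this check or interpret it loosely:
    the coder, not a court, decides what behaviour is possible.
    """
    if age < MINIMUM_AGE:
        raise PermissionError("Sign-up blocked by design: under minimum age.")
    return {"name": name, "age": age}

# Usage: the design admits no exceptions.
account = create_account("Ada", 21)   # allowed
# create_account("Kim", 11)           # raises PermissionError
```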
Another approach is to treat responsibility as relevant at all stages of the process: responsibility for data collection, data selection, the combining of datasets, and so on. Under this approach, intended consequences known to programmers and users can be addressed, but what about unintended consequences such as bias? A further reason for demanding transparency and explainability in AI systems is the "black box problem": even technical experts may not understand every step of how an algorithm arrived at its decision. Experts may also fail to understand each other, e.g. data scientists versus the engineers who apply the technology, or operators who may not know the technologies behind a machine. These are important questions because transparency and explainability are valuable in themselves, and also instrumental in making responsibility as answerability possible.
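As one illustration of probing a black-box model, the sketch below uses permutation importance from scikit-learn on a synthetic dataset. The data and model are placeholders, and this technique reveals only coarse feature reliance, a first window into the model's behaviour rather than a full explanation of its decisions.

```python
# A minimal sketch of probing a "black box" model with permutation
# importance, assuming scikit-learn is available. The dataset and model
# are illustrative placeholders, not any specific production system.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for real application data.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque ensemble model: accurate, but hard to inspect directly.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much the test score
# drops: a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {importance:.3f}")
```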

Organisations can therefore anticipate and prevent potential future harms by building a culture of responsible innovation that develops and implements ethical, fair, and safe AI systems. To that end, everyone involved in the design, production, and deployment of AI projects, from data scientists, data engineers, and domain experts to delivery managers and departmental leads, should treat AI ethics and safety as a priority. Within just a few years, AI systems will process and use data not only faster but also more accurately. With great power comes great responsibility, so the development of AI systems should always be responsible, grounded in ethics, and directed toward public benefit.
Responsible AI Ethics
According to Dr David Leslie, humans are held responsible for the accuracy, reliability, and soundness of their judgements when they do things that require intelligence. We also hold them accountable for the fairness, equity, and reasonableness with which they treat others, and we demand that their actions and decisions be supported by good reasons. The need for principles tailored to the design and use of AI systems arises because their emergence, and their expanding power to do things that once required human intelligence, has shifted a wide array of cognitive functions into algorithmic processes, which can never themselves be held directly responsible, or even immediately accountable, for the consequences of their behaviour. This reality leaves an ethical gap in the applied science of AI, a gap that frameworks for AI ethics are currently trying to fill. Fairness, accountability, sustainability, and transparency are principles meant to bridge the gap between increasingly capable AI systems and their fundamental lack of moral responsibility. At the current level at which AI operates, humans alone are responsible for their program-based creations; it is the design and implementation of AI systems that must be held accountable.