Can we trust any algorithm with independent decision making & execution?

Responsible Artificial Intelligence for the good of humans and society – A Perspective Series, article 2; permission to publish provided by WOPLLI

To date, it is not common for the recipients of an automated decision inferred by an algorithm to be made aware of that fact upfront, as it is typically not disclosed, let alone explained. There is also a lack of adequate provisions for engaging with a human within the organisation that created or implemented the AI. In many cases, people are not even made aware that their data was collected and used to train algorithms. As we saw in the previous section, real-life harm is being caused.

In some situations, the recipients of decisions will not care to ask, because the outcomes are not perceived as adverse to them.

But if an algorithm provides a non-adverse outcome for one target subject, will it also provide a non-adverse outcome for another recipient, under similar conditions?

Algorithms have been found to be inaccurate and unreliable, and in many cases they have limitations that translate into downside risks that are unlikely to be managed by the organisations deploying them. In summary, AI algorithms are at present black boxes that can have adverse effects on humans and their lives, so can we trust them?

We would lean towards no, under present conditions.

Need for Balance

In most cases, innovation with Artificial Intelligence has progressed with a focus entirely on the desired outcomes. This needs to be balanced by mitigating the downside risks, which have in many cases caused unintended consequences and adverse impacts on humans and society.

The contributing factors are many and varied. However, there are common drivers across all organisations innovating with Artificial Intelligence that have led to adverse outcomes.

It is the responsibility of the Artificial Intelligence algorithm providers and implementors to safeguard humans from any adverse outcomes, while delivering the intended benefits.

Having a ‘human-centric’ approach is of utmost importance. We review a few areas below to discuss this approach.

Augmentation vs replacement

Artificial Intelligence technologies must be designed to work with humans, not replace them. There have been promises to improve job quality and productivity, with AI providing the needed assistance and experience augmentation. Augmentation is not easy, but when done right it enables genuine human progress, whereas replacement can lead to fear of losing jobs, work, skills and even humanity. AI providers and implementors should therefore decide where to focus and how to implement.

Depending on the need, the immediate goals of augmentation might include making a process more efficient by automating highly repetitive tasks so that a human worker can concentrate on aspects that are more unpredictable, require reasoning or that machines cannot yet handle. Or the objective might be to make a process safer, more ergonomic, or more interesting for the human team working alongside AI and/or robot teammates.

However, instead of such experience augmentation, there is a concern that AI will replace jobs in the future. Workers in sectors ranging from healthcare to agriculture and industry may see disruptions in hiring due to AI. It is argued that while AI will take some jobs, better jobs will appear, arguably in more analytical and artistic fields that do not involve repetition and hence cannot be reproduced by AI.

Questions are being asked: how many people will be able to qualify for, or switch to, these newly created jobs? Will we see a situation where some people cannot make the switch? To what extent will humans, rather than AI, decide the shape of how we ‘work’ and what that ‘work’ is? We, and in particular the providers and implementors of AI, should ask these questions in order to maintain a balance and not lean towards replacement.

Judging humans vs enabling humans

As noted earlier, automated decisions made by AI technologies have in many instances adversely impacted humans rather than benefiting them. Why has this happened? Part of the answer lies in whether AI algorithms are deployed to judge humans and perform some form of social scoring on them, or to enable and augment them.

These functions are not mutually exclusive. While one person may be the recipient of judgement and/or social scoring, the same algorithm may be augmenting and easing the job of a person within the organisation that deployed it.

When AI-powered systems have bias and/or unethical considerations baked into them, and they are used for social scoring and/or mass surveillance, humans can be subject to challenges, issues, or harm.

Similarly, Artificial Intelligence systems can be deployed to have a positive impact on people’s lives. Why does an AI system become an automated ‘judgement’ system for some and an enabler for others? The difference can be attributed to the intent and culture of the deploying organisation: building with empathy, or not, makes the difference.

If the AI system (or the people creating it) does not consider the recipients with empathy, then it is possible to create systems with unmitigated bias.

Do we need to develop more empathy in the organisations that drive the development and implementation processes so that the producers and implementors of AI algorithms can relate to the recipients of their automated decisions? Will such empathetic mindsets reduce the chances of building systems that are created to form judgement or score other humans?

Need for Responsible Artificial Intelligence

Since AI algorithms can produce automated results through a non-deterministic approach, organisations deploying them should act responsibly and install guardrails that safeguard humans from unintended consequences and harms arising from those outcomes. These organisations should also be ready to learn from such instances when they arise, and to improve.

The goal for any organisation deploying AI must be to ensure that the outcomes only assist or augment human experiences by opening up growth potential that would otherwise not be achievable. The organisation must also manage the downside risks associated with the deployment of AI; this requires a robust AI ethics and governance framework, operationalised throughout the organisation, that provides oversight and accountability while embodying diverse inputs and multi-stakeholder feedback throughout the lifecycle of AI systems.

Since the growth of AI relies on gaining information from a wide variety of sources, the most important aspect is the ‘learning’ that the AI must go through. It is here that we see the birth of machine learning algorithms, which aim to empower machines to teach themselves and learn from a wide variety of inputs. Safeguards should be introduced to enable human oversight.

It is the responsibility of the people, and the organisations, that create these algorithms to install guardrails based on their risk appetite, and to incorporate human oversight that continuously reviews outputs throughout the lifecycle of AI systems.
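As a purely illustrative sketch of one such guardrail, the Python snippet below routes adverse or low-confidence automated outcomes to a human reviewer rather than letting them execute automatically. The names (`Decision`, `route_decision`) and the 0.85 threshold are assumptions made for this example, not part of any specific framework or of the approach described in this article.

```python
from dataclasses import dataclass

# Hypothetical, illustrative names only: "Decision", "route_decision" and the
# 0.85 threshold are assumptions for this sketch, not part of any framework.

@dataclass
class Decision:
    subject_id: str
    outcome: str        # e.g. "approve" or "decline"
    confidence: float   # the model's own confidence estimate, 0.0 to 1.0

REVIEW_THRESHOLD = 0.85  # assumed risk-appetite setting; each organisation tunes its own

def route_decision(decision: Decision) -> str:
    """Escalate adverse or low-confidence automated outcomes to a human reviewer."""
    if decision.outcome == "decline" or decision.confidence < REVIEW_THRESHOLD:
        return "human_review"  # a person makes, or confirms, the final call
    return "auto"              # high-confidence, non-adverse outcome may proceed

if __name__ == "__main__":
    print(route_decision(Decision("applicant-001", "decline", 0.93)))  # human_review
    print(route_decision(Decision("applicant-002", "approve", 0.72)))  # human_review
    print(route_decision(Decision("applicant-003", "approve", 0.96)))  # auto
```

The design choice here is simply that the system never finalises an adverse outcome on its own; where the escalation line is drawn would depend on the organisation's risk appetite.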

Guardrails can also be created by improving ethics, mitigating bias and injecting empathy.

Importance of ethics, bias & empathy in creating Responsible AI

One of the reasons why AI algorithms struggle to incorporate ethics into their inferences, and therefore into their automated decision-making, is their inability to possess human-like attributes such as empathy or emotion. While they can learn quickly by analysing copious amounts of data and information, their inferences are a reflection of the raw data consumed, without the ability to discern based on the beliefs that humans possess.

Today, we often discover that these algorithms simply regurgitate and amplify what they have ‘learnt’ from others, based on the information fed to them during training. Issues typically arise when the sources of training data do not reflect the values and ethics of the people to whom these algorithms are applied.

Mitigate bias issues, inject empathy

When building algorithms and selecting datasets to train them, it is important that issues related to bias are mitigated. Consideration for proper ethical values and empathy has to be provided by the humans creating and implementing these algorithms.

There are many types of bias, and the paper referenced below discusses in depth the biases that can occur in datasets, as well as how they can be mitigated. Fundamentally, an understanding of the diverse types of bias that could be inherent in training datasets, as well as in machine learning models, can help organisations decide how to mitigate them. Some biases may remain, and it will then be the responsibility of the organisation deploying these algorithms to manage the residual risks and to disclose them for transparency. (A minimal illustrative check of this kind is sketched after the two points below.)

1) Understanding inherent biases is important so that mitigating actions can be undertaken to prevent adverse outcomes that disadvantage, discriminate against, or harm humans.

2) Developing algorithms empathetically, with the recipients of the automated decisions in mind, is even more important.
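To make point 1 concrete, here is a minimal, hypothetical Python sketch of one very basic dataset check: comparing positive-outcome (selection) rates across groups before training. The field names ("group", "label"), the toy records, and the four-fifths (0.8) rule of thumb are illustrative assumptions, not a substitute for the fuller mitigation approaches discussed in the referenced paper.

```python
from collections import defaultdict

# Toy, hand-made records; the field names ("group", "label") and the 0.8
# "four-fifths" rule of thumb are illustrative assumptions for this sketch.
records = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "A", "label": 1},
    {"group": "B", "label": 0}, {"group": "B", "label": 1},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]

def selection_rates(rows):
    """Share of positive outcomes (label == 1) per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row["group"]] += 1
        positives[row["group"]] += row["label"]
    return {g: positives[g] / totals[g] for g in totals}

rates = selection_rates(records)
disparate_impact = min(rates.values()) / max(rates.values())

print(rates)                      # {'A': 0.75, 'B': 0.25}
print(round(disparate_impact, 2)) # 0.33 -- well below the 0.8 rule of thumb,
                                  # flagging the dataset for closer review
```

In practice, a check like this would be only one small input into the broader assessment and mitigation approaches discussed in the referenced paper, covering models as well as data.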

Given the societal implications of machine learning, it is important for us as humans to ensure that what we are creating does not have a negative impact on society and the world. Just as our legal system requires juries in a court to reach a verdict, data scientists should use discretion when making algorithmic decisions. As citizens and consumers of technology, we also need to be cognisant of the biases we might be letting in by assuming that technology is impartial. The issues that arise due to technological bias will only be amplified as such technologies make increasingly critical decisions in our lives.

References

Bias Mitigation in Datasets – Shea Brown, Ryan Carrier, Merve Hickok, Adam Leon Smith


Vikas Malhotra

Vikas Malhotra is Founder & CEO of WOPLLI Technologies. He is a technologist at heart and has steadily progressed global digitalization for the past 25+ years, while working at many enterprises and verticals. He has been an innovator and has pioneered technology, experiences, features and frameworks as an early adopter and implementer of technologies, most recently for Microsoft Cloud. Vikas has vast experience in the areas of technology architecture, cyber security, privacy, laws, regulations, compliance and trust. Vikas has co-founded WOPLLI™ with the vision of making our experiences (as we work, play, learn, live) safe, fair and trusted. He is a board member and contributor to many standards and frameworks including ForHumanity, Trust over IP Foundation, IEEE P2145 (Blockchain Governance) and the NIST privacy working group. WOPLLI™ has created and adopted architecture principles of human centricity, decentralization, distribution, heterogeneity and self-healing.


Chris Leong

Chris Leong is a member of the Board of Advisors at WOPLLI Technologies. He is also Director at Leong Solutions Ltd and a fellow at ForHumanity. Chris is very keen on helping organisations that are adopting and deploying Artificial Intelligence to innovate responsibly. His experience over the past 30 years of working on change and transformation projects within the financial services, software, data, and Governance, Risk and Compliance industries has provided him with a unique set of perspectives to guide leaders on their digital transformation journey and ensure that humans are always at the heart of all outcomes.
