Responsible Artificial Intelligence for the good of humans & society: A perspective

Responsible Artificial Intelligence for the good of humans and society – A perspective series, article 1; permission to publish provided by WOPLLI

Although the application of Machine Learning techniques has existed for more than 50 years, the technological advances of the past decade (faster and smaller processors, coupled with new big data and communication technologies) have accelerated the collection and processing of an ever-increasing amount of data. This in turn has enabled the application and evolution of those Machine Learning techniques to the extent that they have become synonymous with what society relates to as ‘Artificial Intelligence.’ In practice, this applies to AI, algorithmic, and autonomous systems.

The burgeoning of start-ups in this field has seen new projects and algorithms created every day with promises of innovation. These Artificial Intelligence projects are certainly innovative: AI technology has promised to slow climate change, help find solutions for cancer and other health issues including COVID, and deliver fully automated vehicles. A majority of organisations (71%) feel that AI is a game changer.

There have been varying degrees of promise, progress, and success in many of these areas, owing to the ability of the underlying technologies to process vast amounts of data and infer decisions within acceptable tolerances of accuracy. Although these promises have not been completely fulfilled, we humans remain fascinated with what these technological progressions could deliver. AI technologies have great potential to advance humanity.

There has been an increasing number of instances, however, where inferences by AI algorithms have not been accurate, reliable, safe, or relevant, consequently disadvantaging, discriminating against, or harming the recipients of the automated decisions made from those inferences. Cathy O’Neil’s book Weapons of Math Destruction outlines various scenarios in which deployed Artificial Intelligence technology may become a weapon of math destruction, specifically when bias and unethical considerations are baked into self-learning systems. The Montreal AI Ethics Institute tracks adverse outcome incidents attributed to the use of Artificial Intelligence technologies.

Transformative technologies such as Artificial Intelligence need to be deployed to serve and benefit humans. This is a journey, and the technology has not yet reached its promised potential. As a result, automated decisions made from inferences in complex environments without humans in the loop cannot be trusted at the outset, and they can lead to unsafe and unfair situations for a human. Whether decisions made entirely by Artificial Intelligence technologies can ever be trusted is up for debate: the technology lacks, and some will argue it will never possess, human attributes such as empathy and emotion that are needed to make ethical decisions. We will approach this question again in a few years.

Humans need to be in the loop and ultimately own the decisions, whether for simple tasks such as sending an email suggested by an algorithm or booking free time in one’s schedule based on an algorithm’s suggestion, or for more complex situations such as following a clue in a crime investigation suggested by an algorithm, deciding to grant or deny a loan as suggested by an algorithm, or connecting people to certain types of news or information on social media based on algorithmic recommendations. Some companies may tout their AI algorithms as superior to others, but people should take such claims with a grain of salt.
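As one illustration, here is a minimal, hypothetical Python sketch of such a human-in-the-loop gate (our example, not WOPLLI’s design): the algorithm only suggests an action, and a person reviews and owns the final decision. The `Suggestion` fields and the `human_review` callback are illustrative assumptions:

```python
# Hypothetical human-in-the-loop gate: the algorithm suggests, a human decides.
from dataclasses import dataclass

@dataclass
class Suggestion:
    action: str        # e.g. "approve_loan" or "send_email"
    confidence: float  # the model's own confidence estimate
    rationale: str     # shown to the reviewer so the decision is informed

def decide(suggestion: Suggestion, human_review) -> str:
    """Never act autonomously: every suggestion passes through a human reviewer."""
    if human_review(suggestion):
        return suggestion.action          # the human accepted ownership
    return "escalated_for_manual_review"  # the human declined; a person follows up

# Stand-in for an interactive review step: a real deployment would surface the
# action and rationale to a person instead of applying a rule like this.
reviewer = lambda s: s.confidence >= 0.9 and bool(s.rationale)

print(decide(Suggestion("approve_loan", 0.92, "meets repayment policy"), reviewer))
print(decide(Suggestion("deny_loan", 0.55, ""), reviewer))
```

The design choice is that the model never acts on its own: it returns a suggestion with a rationale, and accountability for the outcome stays with the human reviewer.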

Organisations have to treat ethics and bias in AI algorithms as something that needs mitigation, by forming proper governance and undergoing independent audits prior to deployment. Along with better ethics and bias mitigation, organisations must also consider implementing ‘build with empathy’ practices. Taken together, these practices could lead to responsible AI creation and implementation. To gain the trust of their target subjects, organisations have to build transparency into their models.

Organisations have been slow to adopt responsible practices and implement governance. Meanwhile, many jurisdictions have been considering or creating regulations, but these have been inconsistent and slow. All 193 member countries of UNESCO have recently agreed to the Recommendation on the Ethics of Artificial Intelligence, which will help progress steady adoption of responsible AI across the globe. This shows that global, human-centric agreements to use technology in the right way are possible.

The WOPLLI™ perspective is to make experiences safe, fair, and trusted. Six points have been laid out as recommendations within this article. A human who feels safe and is treated fairly can be happier and enriched. Responsible AI implementation can assist and augment human experiences, which in turn can lead to a better society.

AI, Algorithms & Autonomous Systems

In 1956, researchers in computer science from across the United States met at Dartmouth College in New Hampshire to discuss seminal ideas on an emerging branch of computing called artificial intelligence, or AI. They imagined a world in which “machines use language, form abstractions and concepts, solve the kinds of problems now reserved for humans, and improve themselves.” This historic meeting set the stage for decades of government and industry research in AI. These investments have led to transformative advances now impacting our everyday lives, including mapping technologies, voice-assisted smart phones, handwriting recognition for mail delivery, financial trading, smart logistics, spam filtering, language translation, and more. AI advances are also providing great benefits to our social wellbeing in areas such as precision medicine, environmental sustainability, education, and public welfare.

The dictionary definition of an algorithm states that it is a simple “step-by-step procedure for calculations.” Algorithms are mathematical instructions for calculation, data processing, and automated reasoning. Traditional algorithms are deterministic: for a given input, the algorithm will always produce the same output, going through the same states and logic. Machine learning algorithms are non-deterministic in this sense; their outputs depend on how the models are trained, and future inputs can affect their behaviours.
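To make the distinction concrete, here is a minimal Python sketch (our illustration, not drawn from the article’s sources): a deterministic routine whose output is fixed entirely by its code, alongside a toy “learned” classifier whose behaviour depends on the data it was trained on. The class and function names are hypothetical:

```python
def median(values):
    """Deterministic: the same input always yields the same output."""
    ordered = sorted(values)
    return ordered[len(ordered) // 2]

class ThresholdClassifier:
    """A toy 'learned' model: its behaviour depends on its training data."""
    def __init__(self):
        self.threshold = 0.0

    def fit(self, positives, negatives):
        # Place the decision boundary midway between the two class means.
        self.threshold = (sum(positives) / len(positives) +
                          sum(negatives) / len(negatives)) / 2
        return self

    def predict(self, x):
        return x > self.threshold

# The deterministic algorithm is fully specified by its code...
assert median([3, 1, 2]) == 2

# ...whereas two models running identical code, trained on different samples,
# can disagree on the very same input.
model_a = ThresholdClassifier().fit(positives=[8, 9], negatives=[1, 2])  # threshold 5.0
model_b = ThresholdClassifier().fit(positives=[8, 9], negatives=[5, 6])  # threshold 7.0
print(model_a.predict(6.0))  # True
print(model_b.predict(6.0))  # False
```

Two copies of the same learned code, trained differently, can disagree on the same input; this is the sense in which such algorithms are described as non-deterministic.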

Artificial Intelligence (AI) technologies are, at their base, software algorithms that enable powerful machines to execute tasks by processing huge amounts of data, tasks that humans would otherwise take a significant amount of effort and time to complete. Such software can be used to enhance robotics to create “intelligent” machines. AI experience augmentation may include automation and/or collaboration between technology and humans on the various tasks that make up work processes.

Innovation with Artificial Intelligence

Society is constantly trying to keep up with the pace of technology evolution. One area of innovation is the implementation of Artificial Intelligence and autonomous systems, and there are many scenarios where these technologies are making progress.

Artificial Intelligence is advancing rapidly and setting new trends every day. We are seeing this happen in many areas, as listed below:

  • Intelligent Process Automation
  • Healthcare industry
  • Internet of Things
  • Smart Money
  • Automobiles
  • Virtual Assistants and Chatbots
  • Processors
  • Quantum Computing
  • Cyber Security
  • Robotic Process Automation

A FinancesOnline report provides key statistics for Artificial Intelligence for 2020/21. The figure below shows the areas and use cases where Artificial Intelligence is playing a role, and that it is seen as a game changer by 71% of organisations.

Figure is courtesy of FinancesOnline

Our future is envisioned with more digitalisation, where ambient processing of data will happen all around us, leading to more innovative AI and automation techniques that impact our lives as we go about our daily business.

Harm to Humans

As we embark on new innovations with Artificial Intelligence and automation, we have to be cognisant of their downside risks as well. One of the questions that has come up in the last few years is whether artificial intelligence systems can cause harm to humans.

While it is anticipated that harm is being caused, it has not been recorded systematically until now. If we can learn about what harms are caused, we can help more people understand the issues and work towards fixing them.

As noted earlier, there has been an increasing number of instances where inferences by AI algorithms have not been accurate, reliable, safe, or relevant. The Montreal AI Ethics Institute launched an AI incidents database in November 2020. As per their latest report:

  1. AI can cause varied types of harm to humans.
    • Harm to physical health/safety and harm to social and political systems are tied at 23.8% each.
    • Psychological harm and harm to civil liberties are in second place at 14.3% each.
  2. Harms are unevenly distributed.
    • Approximately 30% of harms are distributed according to race, while 19% are distributed according to gender.

Figures courtesy of the Montreal AI Ethics Institute

This report is one of the early indicators of harms caused by AI technologies. One of the key takeaways from its findings is that “while an autonomous car (which has been possible due to AI) poses obvious (physical) safety challenges, the harms to social and political systems, psychology, and civil liberties represent more than 50% of the incidents recorded to date”. These incidents represent a failure to implement technology responsibly.

Technologies by themselves do not cause harm; it is their lack of maturity, and their application by organisations in certain scenarios, that may cause harm.

One area where lack of maturity and/or its application is a problem is facial recognition (and, by extension, biometric information), which has become very popular in recent years. Researchers have found that leading facial recognition algorithms have different accuracy rates for different demographic groups. The first study to demonstrate this result was a 2003 report by the National Institute of Standards and Technology (NIST). More studies have followed, including one by researchers from MIT and Microsoft in 2018 showing that gender classification algorithms (related to, though distinct from, face identification algorithms) had error rates of just 1% for white men but almost 35% for dark-skinned women. Through thorough testing in 2019, NIST also confirmed that a majority of algorithms exhibit demographic differences in both false negative rates (rejecting a correct match) and false positive rates (matching to the wrong person).
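To show what those two error rates mean in practice, here is a minimal Python sketch of how they are tallied per demographic group from verification trials. The numbers, group labels, and trial outcomes are made up purely for illustration:

```python
# Hypothetical face-verification trials, tallied into the two error rates NIST
# measures: false negatives (rejecting a correct match) and false positives
# (matching to the wrong person).
from collections import defaultdict

# Each trial: (group, is_genuine_pair, system_said_match) -- illustrative only.
trials = [
    ("group_a", True, True), ("group_a", True, False), ("group_a", False, False),
    ("group_b", True, True), ("group_b", False, True), ("group_b", False, False),
]

counts = defaultdict(lambda: {"fn": 0, "genuine": 0, "fp": 0, "impostor": 0})
for group, genuine, matched in trials:
    c = counts[group]
    if genuine:
        c["genuine"] += 1
        c["fn"] += not matched  # rejected a correct match
    else:
        c["impostor"] += 1
        c["fp"] += matched      # matched the wrong person

for group, c in counts.items():
    fnr = c["fn"] / c["genuine"] if c["genuine"] else 0.0
    fpr = c["fp"] / c["impostor"] if c["impostor"] else 0.0
    print(f"{group}: FNR={fnr:.0%}  FPR={fpr:.0%}")
```

Reporting the two rates per group, rather than a single overall accuracy, is what makes demographic differences like those NIST observed visible at all.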

This lack of maturity, combined with the fact that the technology is used by organisations (such as law enforcement authorities) to identify people and take actions that can have a profound impact on people’s lives, can cause, and has been found to cause, harm where none should have existed. Moreover, facial recognition requires the processing of people’s personal information, often in places where the person did not provide their information directly.

This is a privacy problem.

References

Weapons of Math Destruction – Cathy O’Neil

Representation and Imagination for Preventing AI Harms – Sean McGregor

AI Statistics 2020/21 – FinancesOnline

Face Recognition Vendor Test 2002 – NIST (2003)

Face Recognition Vendor Test: Demographic Effects – NIST (2019)

Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification – Joy Buolamwini, Timnit Gebru


Vikas Malhotra

Vikas Malhotra is Founder & CEO of WOPLLI Technologies. He is a technologist at heart and has steadily progressed global digitalization for the past 25+ years while working at many enterprises and across verticals. He has been an innovator and has pioneered technologies, experiences, features, and frameworks as an early adopter and implementer, most recently for Microsoft Cloud. Vikas has vast experience in technology architecture, cyber security, privacy, laws, regulations, compliance, and trust. He co-founded WOPLLI™ with the vision of making our experiences (as we work, play, learn, live) safe, fair, and trusted. He is a board member of and contributor to many standards and frameworks, including ForHumanity, the Trust over IP Foundation, IEEE P2145 (Blockchain Governance), and the NIST privacy working group. WOPLLI™ has created and adopted architecture principles of human centricity, decentralization, distribution, heterogeneity, and self-healing.

Chris Leong

Chris Leong is a member of the Board of Advisors at WOPLLI Technologies. He is also a Director at Leong Solutions Ltd and a fellow at ForHumanity. Chris is keen on helping organizations that are adopting and deploying Artificial Intelligence to innovate responsibly. His experience over the past 30 years working on change and transformation projects within the financial services, software, data, and Governance, Risk and Compliance industries has provided him with a unique set of perspectives to guide leaders on their digital transformation journey and ensure that humans are always at the heart of all outcomes.