Responsible Artificial Intelligence for the good of humans and society – A perspective Series, article 3; permission to publish provided by WOPLLI

One of the critical elements for forming trust is transparency. So, it is imperative for organisations creating and implementing AI to disclose information on data use and other relevant aspects upfront to the subjects of the decisions generated by their AI algorithms.

For the organisation that is creating and implementing AI, it is important to understand the universe of data used to train its AI algorithms and whether that data is representative of the population the corresponding AI system is deployed towards. Privacy considerations related to the sourcing of all data used, along with the respective consents for its use, need to be accounted for, together with bias and ethical considerations.
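To illustrate the representativeness point, one simple check (a sketch only; the groups, counts and reference shares below are hypothetical) is to compare the share of each group in the training data against that group's share of the population the system will serve:

```python
from collections import Counter

def representation_gap(training_labels, population_share):
    """Compare each group's share of the training data against a
    reference population share, returning the gap per group."""
    counts = Counter(training_labels)
    total = sum(counts.values())
    gaps = {}
    for group, expected in population_share.items():
        observed = counts.get(group, 0) / total
        gaps[group] = observed - expected
    return gaps

# Hypothetical example: age bands in a decision system's training set
# versus each band's share of the adult population being served.
training = ["18-34"] * 700 + ["35-54"] * 250 + ["55+"] * 50
population = {"18-34": 0.35, "35-54": 0.40, "55+": 0.25}

gaps = representation_gap(training, population)
# A strongly negative gap flags an under-represented group:
# here "55+" is 5% of the training data but 25% of the population.
```

A check like this is only a starting point; genuine bias assessment also requires examining outcomes per group, not just input representation.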

Organisations then need to be open about all of this information with the recipients of the automated decisions.

The lack of awareness and accountability shown by the leaders of organisations regarding their responsibilities around the use of such powerful and impactful technologies is astounding, and reflects poorly on their organisations' culture and social standing within the wider community.

Additionally, there is no mechanism or provision for the recipients of such decisions to immediately engage with, or appeal to, the organisation that has deployed these AI-powered automated decision-making capabilities.

This lack of transparency is currently a barrier to accountability and to humans establishing trust in AI algorithms, and hence in digitalisation.

Importance of Regulations, Frameworks and Agreements

Since organisations have been slow or unable to implement appropriate disclosure measures, regulations demanding better transparency have been created in many jurisdictions around the world. The primary reason is that organisations cannot be trusted, by themselves, to produce AI algorithms that ensure safety and fairness for humans.

Hence, there is a need for:

  1. Regulatory authorities to enforce regulations related to the deployment of automated decision-making AI algorithms that can affect humans,
  2. New laws to be introduced for systems that are classified as high risk, and
  3. Existing algorithms to be independently audited using crowdsourced audit criteria, such as those provided by ForHumanity.

While there has been interest in new regulations, regulatory bodies and lawmakers around the world have been slow and inconsistent in putting forth regulations and frameworks globally. Enforcement of existing regulations is slower still. As an example, on Nov 27th, 2021, Clearview AI – a global facial recognition company – was fined by the UK for failing to comply with the UK's privacy laws. At the same time, these privacy issues are not seen the same way in other jurisdictions around the world, due to inconsistencies in laws and regulations.

Global human-centric agreements are possible.

As of Nov 25th, 2021, all 193 member states of the United Nations Educational, Scientific and Cultural Organization (UNESCO) have reached a global agreement on the Ethics of Artificial Intelligence. Recognising both the positive benefits and potential harms that AI can bring, the UNESCO recommendation seeks to ensure that digital transformations promote human rights and contribute to the achievement of the Sustainable Development Goals, addressing issues around transparency, accountability and privacy through:

  1. Protecting data
  2. Banning social scoring and mass surveillance
  3. Helping to monitor and evaluate
  4. Protecting the environment

It will be interesting to see how this agreement actually takes shape on the ground in 2022 and beyond.

WOPLLI™ perspective

WOPLLI™ represents our (human) experiences – work, play, learn, live. One of the key principles of WOPLLI™ is Human Centricity: human experiences should be safe, fair and trusted. With the ambient processing of data around us enabling pervasive Artificial Intelligence, among other innovations, we are experiencing many positive effects. However, as discussed in this article, we must also introduce guardrails to thwart any potential negative effects on the human experience.

We have to usher in a future with innovative and responsible Artificial Intelligence and autonomous systems that are trusted.

WOPLLI™ proposes the following:

  1. Minimise personal data collection to mitigate privacy issues. A person must have rights to their data and be able to control it.
  2. Set up governance to build transparency. This calls for better, explainable AI models.
  3. Do not make (unaware) humans the subjects of Artificial Intelligence in order to catalogue, score or identify them. Do not implement social scoring and/or mass surveillance. Instead, implement AI innovations to augment and aid human experiences.
  4. Strategise and create responsible AI systems from the ground up. Consider good ethics, bias removal and empathy as key ingredients for building responsible AI.
  5. Think through architectures where datasets are distributed and obtained through diverse methods, to mitigate privacy and bias issues.
  6. Build AI together. Develop empathy and create AI for the human at the other end. Put yourself in the shoes of the person who will be the target of your AI.
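The data-minimisation principle above can be made concrete in code. The sketch below (the field names and record are hypothetical) keeps only the attributes a model actually needs, discarding all other personal data at the point of collection:

```python
# Hypothetical set of inputs the model genuinely requires.
REQUIRED_FIELDS = {"age_band", "income_band", "postcode_area"}

def minimise(record: dict) -> dict:
    """Return a copy of the record containing only the fields the
    model needs; everything else is dropped at collection time."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

applicant = {
    "name": "Jane Doe",           # not needed by the model: dropped
    "email": "jane@example.com",  # not needed: dropped
    "age_band": "35-54",
    "income_band": "medium",
    "postcode_area": "SW1",
}

minimal = minimise(applicant)
# minimal retains only age_band, income_band and postcode_area.
```

Designing the pipeline so that unneeded fields never enter storage, rather than filtering them later, keeps the data a person must control (and an organisation must disclose) as small as possible.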

Progressive Humans and Society

We have seen many instances where humans impacted by automated decisions inferred from AI suffer adverse outcomes. As a society, we are still learning that such harms are possible, since nobody had experienced the new frontiers this technology presents until now. These situations can cause a ripple effect in a person's life: the adverse outcomes impact not only the person, but also their family and potentially society. With no upfront information due to a lack of transparency, and often no recourse against automated decisions because the AI was implemented as a one-way street, the result can be deeply unhappy situations for people.

If the downside risks of AI, algorithmic and autonomous systems are mitigated using better ethical standards, bias removal and empathy, so that the resultant Responsible AI assists and augments humans instead of negatively impacting them, then we can dream of a great future.

A future where powerful and transformative innovations powered by Responsible Artificial Intelligence help people achieve bigger and better things in their lives, making them more productive and benefiting society as a whole.

References

About AI – National Artificial Intelligence Initiative

Augmentation: The Promise and Possibility of Human Machine Collaboration – Boston Fed

UNESCO member states adopt the first ever global agreement on the Ethics of Artificial Intelligence – UNESCO

Clearview AI, a facial recognition company, is fined for breach of Britain’s privacy laws – NY Times

Vikas Malhotra

Vikas Malhotra is Founder & CEO of WOPLLI Technologies. He is a technologist at heart and has steadily progressed global digitalisation for the past 25+ years while working across many enterprises and verticals. He has been an innovator and has pioneered technology, experiences, features and frameworks as an early adopter and implementer of technologies, most recently for Microsoft Cloud. Vikas has vast experience in the areas of technology architecture, cyber security, privacy, laws, regulations & compliance, and trust. He co-founded WOPLLI™ with the vision of making our experiences (as we work, play, learn, live) safe, fair and trusted. He is a board member of and contributor to many standards and frameworks, including ForHumanity, the Trust over IP Foundation, IEEE P2145 (Blockchain Governance) and the NIST privacy working group. WOPLLI™ has created and adopted architecture principles of human centricity, decentralisation, distribution, heterogeneity and self-healing.

Chris Leong

Chris Leong is a member of the Board of Advisors at WOPLLI Technologies. He is also Director at Leong Solutions Ltd and a fellow at ForHumanity. Chris is keen on helping organisations that adopt and deploy Artificial Intelligence to innovate responsibly. His experience over the past 30 years of working on change and transformation projects within the financial services, software, data, and Governance, Risk and Compliance industries has provided him with a unique set of perspectives to guide leaders on their digital transformation journey and ensure that humans are always at the heart of all outcomes.