Licensing Artificial Intelligence: Is it practical, critical or political?

Forum Objective:

At this month’s Inclusive Artificial Intelligence (AI) Forum, MKAI examines top-down governance for safe, responsible, and ethical artificial intelligence. Specifically, we ask three questions: is the licensing of artificial intelligence practical? Critical? Political?

By licensing, MKAI refers to the concept of a ‘social license’, where an organization works openly with communities impacted by its actions to obtain their trust and acceptance.

Our objective is to establish the drivers and opportunities, and to note the consequences of creating ‘social licenses’ for artificial intelligence.

Forum Abstract:

Why does artificial intelligence need a social license?

Responsible AI is not enough. Responsible AI can help companies develop AI technologies that keep in mind both the greater good and long-term consequences; it can prompt businesses to go beyond algorithmic fairness and bias to identify potential effects on safety and privacy.

Responsible AI can too often be a purely technology-based approach, focused on promoting algorithmic fairness and spotting biases. In the absence of formal regulation, each organization creates its own principles, which naturally vary from company to company and even within a single organization. These variances in the responsible and ethical AI guidelines proposed by different organizations have further compounded the issue. MKAI holds that being responsible and being trustworthy are distinct qualities: responsibility may foster trust, but it can never substitute for it.

So, before a business deploys AI applications at scale, we ask whether it must first adopt a human-focused approach that nurtures trust across all stakeholders: employees, executives, suppliers, shareholders, communities, civil society, and government. This process would form the basis of a ‘social license’ for any organization deploying AI.

Forum Outcomes:

Our intended outcome is to give attendees a more complete understanding of what it will take for organizations to gain stakeholders’ trust in AI. We will explain concepts such as:

  • Transparency: Companies must be open about the assumptions that underpin their algorithms and about AI’s impact on their workforce. They must be transparent and truthful about the implications of upskilling and reskilling existing employees to fill new positions as needed, noting that not all jobs created by AI will pay better than those it replaces.
  • Managing risks: Businesses must carefully evaluate and map the risks associated with AI, including regulatory risk, and address them to a level that suits the chief risk officer’s (CRO’s) risk appetite.
  • Human oversight: Human oversight must be institutionalized, regardless of how companies assess the systemic risks posed by AI. Human-in-the-loop interventions are recommended when AI systems cannot handle certain circumstances on their own.
  • Communication and education: The potential benefits and drawbacks of AI applications must be fully disclosed, and awareness must be raised about the risks and how to address them. Once a company has committed to the safe, fair, and unbiased use of AI, it must regularly update the methodology and fundamental principles that guide its development process.
  • Guidelines: All sectors of government should help businesses understand the ramifications of the technology they deploy, including AI. Highly trained regulators are needed to help both governments and companies develop AI technology that complies with laws and regulations.

Forum Attendance:

MKAI events are inclusive. Our expert speakers are carefully selected for their ability to make the subject approachable and understandable. MKAI aims to help all people improve their AI fluency and understanding of this domain. This forum is especially relevant for policymakers, government leaders, and corporate decision-makers.

Forum Speakers and Contributors:

  • Matthew James Bailey, Author of “Inventing World 3.0”; Pioneer and Authority on AI & Global AI Ethics, Smart Cities, IoT, Innovation, Ecosystems
  • Jacquie Hughes, Specialist Adviser at UK government
  • Jibu Elias, Research and Content Head at INDIAai
  • Tania Peitzker, PhD, CEO at AI Bots as a Service
  • Danielle Adrianna Davis, Esq., Tech and Telecom Policy Counsel, Multicultural Media, Telecom and Internet Council
  • Nicole DuPuis, PhD, Vice President, Innovative Mobility and Emerging Technology, Intelligent Transportation Society of America
  • Oriana Medlicott, Senior Researcher – Technology and Innovation Strategy, Fujitsu


November 23 @ 5:00 pm - 7:00 pm


Online with Zoom