MKAI INSIGHTS NETWORK OF DIVERSE STAKEHOLDERS (MINDS)


In our algorithm-driven world, companies that can demonstrate their trustworthiness will win.

Unlike other high-risk industries, artificial intelligence presents particular challenges for regulation and governance. Relying solely on experts, research, and conventional testing is therefore not enough to create artificial intelligence (AI) that is safe from harm. Developing AI responsibly requires true diversity: people with varied life experiences and differing points of view must be included in the conversation, and their insights, concerns, and observations heard and factored into development.

Multi-Stakeholder Feedback is essential for identifying the potential harms of an artificial intelligence application. MKAI has the largest and most diverse AI de-risking community in the world.

Learn how to gain vital perspectives on the potential risks, harms, exclusions, biases and prejudices arising from your AI.

Start a conversation with us

Let’s talk about reducing your AI risk by accessing diverse perspectives.
 
 
Richard Foster-Fletcher, Founder of MKAI.org
"It is impossible to understand what it is like to experience the world as a person of a different gender, race or age from your own, or as someone with a disability or who thinks differently. The only way to learn is to ask."
Algorithmic bias and injustice are prominent topics in AI ethics and machine learning conversations, and for good reason. From racist chatbots to facial recognition systems that misidentify women, there have been many cases of bias and injustice infiltrating artificial intelligence models. High-profile failures in AI and autonomous systems have exposed the risk implicit in technologies we are supposed to trust.
MKAI provides a Multi-Stakeholder Feedback service known as the MKAI Insights Network of Diverse Stakeholders (MINDS). It is a large and inclusive collective of diverse individuals. We provide foundation education and peer-to-peer mentorship so that, together, the collective can help companies and organisations discover what is out of sight as they develop and deploy artificial intelligence. When external, open dialogue between organisations and stakeholder communities like this becomes routine, we will begin to mitigate the problems created by AI.

Why use MINDS?

Access Diverse Thinking

We provide access to our 1,000+ AI ethics stakeholders, who will engage with you to uncover the 'blind spots' in your AI plans.

Gain Greater Perspective

A wide range of individuals will work together to review your AI processes, helping you spot mistakes before they materialise into embarrassing or expensive errors.

Be Challenged

MKAI stakeholders will ask the difficult questions that might not get raised otherwise. They will challenge you to think more widely and deeply about the impact of your AI.

Unlock Unique Lived Experiences

We enable you to speak to people who don't think like you do, and help you and your AI systems to see the world through their eyes.

Who is this for?

Anyone deploying AI is exposed to risk, but the pressure to get it right is particularly acute for companies working with:

HR Automation

Social Media Algorithms

Recommendation Engines

Education Technologies

Insurance Algorithms

Health and Medicines

Security and Surveillance

Government Related Systems

What MINDS offers:

Data

The data used in AI models is often restricted, unrepresentative, or biased. Individuals in this collective can contribute information and data to make the models more diverse and useful.

Discourse

Teams working on AI projects often lack diversity in, for example, gender, age, region, neurodiversity, and disability. This restricts the scope of their ideas as well as their understanding of the people who will use the systems. MKAI members will examine the issues that organisations are attempting to address in the countries they are targeting, and identify the flaws, errors, and omissions.

Testing

Testing solutions with, or on, a diverse group is challenging. With over 95 nations represented, and diversity across many other attributes, the MKAI collective can provide comments, feedback and suggestions.

Dr. Odilia Coi, Director of Innovation at MKAI.org
"Good processes and governance are not enough to address the serious issues that can arise from AI. A broader perspective is needed, and now is the time to mandate that multi-stakeholder feedback is consulted for every high risk AI system. There is simply too much to lose."