MKAI August Inclusive AI Forum – Autonomous Intelligence: What will it take to solve the bias problem in Artificial Intelligence (AI)?
August 19 @ 5:00 pm - 7:00 pm BST
In this Inclusive Artificial Intelligence (AI) Forum we will explore the concept of AI bias, also known as algorithmic bias: what risks it presents and to what degree it can be eliminated.
AI bias, also known as algorithmic bias, is a phenomenon that occurs when an algorithm produces systematically prejudiced results, either because of flawed assumptions in the machine learning process or because incomplete, faulty, or prejudicial data sets were used to train or validate the system. AI bias often stems from problems introduced by the individuals who design or train machine learning systems, and it frequently leads to algorithms that reflect unintended cognitive or social biases or prejudices. As AI systems make their way into the military, banking, and biomedical sectors and assist humans ever more continuously, this Inclusive AI Forum will examine in what ways bias in an algorithm threatens people, and what can be done about it.
To frame this discussion, we can classify the sources of bias in AI systems in three ways: bias in the data, bias in the humans, and bias in the process.
Bias in the data
When the data sample does not represent all the dimensions of the real-world data, there is a high chance that the algorithm will produce biased output based on the data it was trained on.
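The effect of an unrepresentative sample can be illustrated with a minimal sketch. Here the two groups, their sizes, and the Gaussian feature values are all invented for illustration: a summary statistic computed from a sample dominated by one group drifts away from the true population value, and any model fit to that sample inherits the skew.

```python
import random

random.seed(0)

# Hypothetical population made of two groups with different
# feature distributions (all numbers are invented for illustration).
group_a = [random.gauss(170, 7) for _ in range(5000)]
group_b = [random.gauss(158, 7) for _ in range(5000)]

population = group_a + group_b
pop_mean = sum(population) / len(population)

# Biased training sample: 95% drawn from group A, only 5% from group B.
sample = random.sample(group_a, 950) + random.sample(group_b, 50)
sample_mean = sum(sample) / len(sample)

# The sample statistic skews toward the over-represented group A.
print(f"population mean:    {pop_mean:.1f}")
print(f"biased sample mean: {sample_mean:.1f}")
```

A model trained on such a sample would effectively "learn" group A's characteristics as the norm, which is exactly the failure mode this section describes.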
Bias in the humans
The individuals who train the algorithms have their own biases, closely tied to their ethnic, cultural, and linguistic values. Many of these biases enter AI training involuntarily, so the resulting algorithms can reflect unintended cognitive or social biases or prejudices.
Bias in the process
When the AI training process does not meet certain requirements or criteria, there is a significant chance that the algorithm will produce biased output. For example, an algorithm predicting weather conditions in the United Kingdom cannot be trained on weather data collected from India. The AI training process should therefore be continuously monitored and should follow established protocols.
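One simple process safeguard suggested by the weather example is to compare the training data against the deployment domain before training begins. This is a toy sketch only: the temperature values and the 5-degree threshold are made-up illustrations, not a real validation protocol.

```python
# Toy domain check: does candidate training data resemble the
# deployment domain? All values and the threshold are invented.

def mean(xs):
    return sum(xs) / len(xs)

uk_temps = [4.0, 7.0, 12.0, 15.0, 18.0, 14.0, 9.0]        # target domain (UK)
india_temps = [22.0, 26.0, 31.0, 34.0, 33.0, 29.0, 25.0]  # candidate data (India)

domain_gap = abs(mean(india_temps) - mean(uk_temps))
suitable = domain_gap <= 5.0  # assumed tolerance, purely illustrative

print(f"domain gap: {domain_gap:.1f} degrees; suitable for training: {suitable}")
```

In practice such checks would compare full distributions rather than means, but even a crude gate like this catches the gross mismatch in the example above.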
Forum Learning Outcomes:
Complete elimination of bias in AI may never be possible, but that doesn’t mean huge strides cannot be made to make AI fairer, more representative of the people it is intended to serve, and more inclusive. This MKAI Inclusive Artificial Intelligence (AI) Forum explores what steps we can take to reduce bias in AI.
MKAI events are inclusive. Our expert speakers are carefully chosen for their ability to make the subject approachable and comprehensible. MKAI aims to help all people improve their AI-fluency and understanding of this domain. Everyone is welcome!
This event was inspired by the work of Aditya Paturi.
Forum Speakers and Contributors: