MKAI Inclusive AI Forum August 2021: What will it take to solve the bias problem in Artificial Intelligence (AI)?

Speaker 1: Simon Swan

Title: Machine Learning Lead at Synthesized

Simon Swan is the Machine Learning Lead at Synthesized, where he contributes to the company's core technology. He is passionate about algorithmic and AI fairness and is the product lead for FairLens, Synthesized's open-source bias measurement and detection toolkit. Prior to joining Synthesized in 2019, he worked in the legal and medical industries as an NLP & Machine Learning engineer. He has an academic background in Statistical Thermodynamics and Computational Linguistics from the University of Cambridge.
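As context for the bias-measurement theme of the event, here is a minimal, hypothetical sketch of the kind of group-level disparity check a toolkit like FairLens automates. The function names and data are illustrative assumptions, not FairLens's actual API.

```python
# Hypothetical sketch of a group-level bias check (NOT FairLens's actual API).
from collections import defaultdict

def selection_rates(records, group_key, outcome_key):
    """Positive-outcome rate per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        g = r[group_key]
        totals[g] += 1
        positives[g] += r[outcome_key]
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of lowest to highest group rate (four-fifths rule: flag if < 0.8)."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

# Illustrative loan-approval records, two demographic groups.
data = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]
rates = selection_rates(data, "group", "approved")
print(disparate_impact(rates))  # → 0.5, well below the 0.8 flag threshold
```

A real toolkit adds statistical significance testing and many more fairness metrics on top of this basic idea.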

Speaker 2: Atenkosi Ngubevana

Title: Group Executive at Vodacom

Atenkosi Ngubevana is a reputable, energetic evangelist of digital transformation, with strategic and operational experience in business and IT partnerships.

She is experienced in leading strategic digitisation programmes for multi-national organisations across the EMEA, LATAM and APAC regions, including managing digitisation strategies in South Africa, Mozambique, Lesotho, Tanzania, and the DRC.

Speaker 3: Ryan Carrier

Title: Executive Director at ForHumanity

ForHumanity is led by Ryan Carrier, the Founder, and volunteer Executive Director.

He founded ForHumanity after a 25-year career in finance. His global business experience, risk management expertise, and unique perspective on how to manage risk led him to launch and fund the non-profit entity.

Ryan focused on Independent Audit of AI Systems as one means to mitigate the risk associated with artificial intelligence, and began to build the trust model associated with this first-of-its-kind process for auditing AIs, using a global, crowdsourced process to determine “best practices”.

He serves as ForHumanity’s Executive Director and Chairman of the Board of Directors. He is responsible for the day-to-day function of ForHumanity and the overall curation of Independent Audit.

Chat Transcript

17:01:22 From Vibhav Mithal : Welcome Mithil ! Welcome to the MKAI – Inclusive Forum!
17:01:55 From Ryan Carrier : background on my talk
17:10:02 From Sabrina Sidl : Starting a CH room on Equitable AI Global AI collective. We need all voices to ensure a responsible and transparent application of technologies
17:10:31 From Ben Horlick : I’d definitely join a CH room 🙂
17:10:50 From JAMI K.K : very good example Simon, about the “Alpha situation” consideration at the time of the design phase, very well said
17:11:11 From Lavina Ramkissoon 🙂 : @sabrina count me in on CH 🙂
17:11:22 From Alex Monaghan : Brave man!
17:11:29 From Pamela Gupta : I agree – I gave a talk on LinkedIn Live on why its critical we build with principles of Transparency, Ethics, Security and Privacy.
17:12:38 From Sabrina Sidl : Great – just find me on CH and I will invite you or ask to join Global AI Collective. Love to hear from all of you
17:12:42 From Jaisal Surana : Hello Everyone. Love the opening address @Simon.
17:12:45 From Simon Swan : Thank you all! I appreciate the kind comments!
17:12:47 From Divya Dwivedi : Awesome looking forward to attend CH rooms.
17:12:49 From amanda h : Hello everyone!
17:12:51 From Alex Monaghan : I’d be interested if I knew what a CH room is – I’m guessing it’s not Centrally Heated …
17:12:59 From Matthew Emerick : Hello, everyone! This is Matthew Emerick from the Cross Trained Mind. I am happy to connect with anyone who is interested in AI.
17:13:00 From V : Hi everyone! I’m a game developer and product manager, helping my team to create a machine learning toolkit for games. I’m here for the interesting topic and for the networking. Nice to meet you! 🙂
17:13:33 From Ana Montes : Who is the model excluding, and what are the unintended consequences of this? A question to ask to reduce the problem?
17:14:53 From Alex Monaghan : Very good point – why should AI be less biased than humans?
17:15:21 From Sabrina Sidl : The bias is inherently built in ..
17:15:25 From Arvind : +1 to Alex’s point above
17:15:41 From JAMI K.K : very true, because at the end we create needful logic & intelligence for AI
17:15:57 From Divya Dwivedi : ClubHouse CH
17:16:05 From Divya Dwivedi : I guess
17:16:41 From Alex Monaghan : OK – ClubHouse – I’ll google it 🙂
17:16:42 From Maz Shar : @Alex for the same reason we try to build AI that makes better decisions than humans. As with other things, AI can cause bias to scale super quickly vs other systems
17:16:43 From Arvind : Bias is represented in all data we have. Training data-sets are subsets of this. To remove bias is to make a judgement call and curate such data. But does that mean you’re introducing a reciprocal bias?
17:16:43 From Markus : Maybe we should start with the following base case: Human = embodied bias. If so, the only way to eliminate bias = eliminate humans?
17:17:05 From Markus : (obviously not going to work, but need to be mindful of it)
17:17:12 From Paula Kilgarriff : Subject matter experts and diverse communities, and lots of unlearning and humans in the loop.
17:17:55 From Maribel Quezada : Identify bias and create a tool that helps to identify where in the algorithm it initiates and helps correct it
17:17:58 From amanda h : You cannot solve for human bias at a faster rate than bias in AI. Human bias has been a problem for all of time, and its rate of progress is slow. But you can implement checks and balances to account for and prevent bias in AI.
17:18:18 From Alexandra : I think a useful point made in the LinkedIn talk was on the focus of dealing with problems arising from bias in ai, rather than removing bias from ai itself, which I imagine is inevitable?
17:18:21 From Arvind : We’re making a broad statement if we associate bias with just human beings
17:18:21 From Matthew Emerick : We need to bring more people to the table. Not just based on gender and skin color, but from other fields and ways of thinking.
17:18:23 From Sabrina Sidl : It is my mission to get ALL voices heard – the creation of algos is done by a limited view but that should not prevent an all-inclusive input of datasets representing all kinds of diversity
17:18:32 From Sabrina Sidl : *limited few
17:18:38 From Ben Horlick : agree amanda
17:18:51 From Neeraj Satpall : It is all about awareness. Kudos to forums such as this one that are helping people realise that diversity, equity, and inclusion are more important than ever!
17:18:56 From Mithil Bhimani : Hello
17:19:19 From Karen Beraldo : See you all on LinkedIn, comment here:
17:19:21 From Maribel Quezada : Harvard has bias tests for areas – not just skin tone. That is a good starting point
17:19:32 From Markus : Maybe we need some “algorithmic bias circuit breakers”, similar to what is done in algorithmic trading in financial markets?
17:19:34 From Arvind : If you have 2 X-Ray machines in 2 hospitals. And the hospitals have different rates of TB detection. Any model using training data from those machines will associate ANY attributes with the differing rates of TB detection – no humans need be involved
17:19:48 From Sridhar Sola : What about in countries like India, where differentiation is mostly caste-based and economic?
17:19:52 From Markus : (basically, they would stop the Ai system if they are triggered)
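Markus's “circuit breaker” idea from the chat could be sketched as follows. This is a hypothetical illustration only; the class name, thresholds, and fallback behaviour are all assumptions, not an existing library.

```python
# Hypothetical "bias circuit breaker": tracks approval rates per group and
# trips (halting automated decisions) when the gap exceeds a threshold,
# analogous to trading halts in financial markets.
from collections import defaultdict

class BiasCircuitBreaker:
    def __init__(self, max_gap=0.2, min_samples=50):
        self.max_gap = max_gap          # largest tolerated rate gap between groups
        self.min_samples = min_samples  # don't trip on tiny samples
        self.totals = defaultdict(int)
        self.positives = defaultdict(int)
        self.tripped = False

    def record(self, group, approved):
        """Log one automated decision; return True once the breaker has tripped."""
        self.totals[group] += 1
        self.positives[group] += int(approved)
        if all(n >= self.min_samples for n in self.totals.values()):
            rates = [self.positives[g] / self.totals[g] for g in self.totals]
            if max(rates) - min(rates) > self.max_gap:
                # Downstream system should fall back to human review here.
                self.tripped = True
        return self.tripped
```

In use, every automated decision would pass through `record(...)`, and a tripped breaker would route subsequent cases to human reviewers until the disparity is investigated.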
17:20:54 From Arvind : Bias is not just based on power and social standing.
17:21:27 From Paula Kilgarriff : Fashion and beauty sector data sets annoy me! Zzz , I like my nose! I make my own beauty filters now
17:21:35 From Ben Horlick : Not to always be beating the David Graeber drum, but the model of formal/informal systems of community-based credit is pretty applicable to Ati’s comments
17:22:07 From Arvind : Theoretically a machine can learn that mostly rich people get cancer – not because that’s true – but because financially comfortable people are more likely to seek diagnosis and treatment – leading to an inherent skew in available data
17:22:12 From Matthew James Bailey : The diversity of cultural traditions and values is imperative to the future of AI
17:23:16 From Maribel Quezada : Prejudice is not part of human nature. This is a nature vs nurture argument
17:23:32 From Neeraj Satpall : To address bias issues, companies need to have the right interdisciplinary teams, including AI ethicists and experts in ethics and compliance, law, policy, and corporate responsibility.
17:23:39 From Markus : As part of looking at bias, we also need to consider Big Data / data methodologies / systemic risk measures and statistical analysis. Some of those thematics overlap with classical data science, others don’t. Hence, we/Ai community needs to widen what goes into data science & decision-making.
17:23:44 From Ana Montes : The group purchase is true for other countries. Groups get together to buy, for example, a car, so in the end everyone that participated ends up with a car. This happens in Mexico.
17:23:51 From Alexandra : Predjudice and bias are different no?
17:23:52 From Hanno Eigenbrod : How can you teach them Zen mindfulness? There is no bias there. One lets go of mental formations, which is bias, dropping the labelling
17:23:54 From Sally Macdonald : @arvind – The two hospitals’ detecting AI will have dramatically different rates of TB detection if one is based in King’s College Hospital, London, which sits in one of the country’s highest TB hotspots, and the other somewhere where the incidence is virtually nil. The incidence in the wider population has to be an input as well, otherwise the model won’t work.
17:24:19 From Neeraj Satpall : @Ati thanks for an impactful talk!
17:24:26 From Pamela Gupta : My biggest fear is we are rolling out these massively impactful systems without guidance and regulations, see my talk
17:24:27 From Markus : Ai-Cop Jaisal!
17:24:37 From Vibhav Mithal : Great points! Diversity is important to even detect bias. Fantastic talk!
17:24:53 From Maribel Quezada : Psychiatrists need to be brought into this conversation to help “train” people who are responsible for the algorithms
17:25:01 From Ben Horlick : 100%
17:25:05 From Simon Swan : Super super interesting Ati! Couldn’t agree more.
17:25:05 From Arvind : @Sally – how do we qualify “Wider population” in this example though?
17:25:27 From Markus : Well, talk about “AI” is ubiquitous, but do we have “AI” or ML and/or deep learning at this point mostly?
17:25:28 From Jaisal Surana : Thanks @Markus
17:25:42 From Nayyara Rahman : did someone say Clubhouse
17:25:44 From Markus : Thank you Ati, great talk
17:25:59 From Sabrina Sidl : The market leaders making the most money in these initiatives have to be held accountable for ensuring equity in AI
17:26:11 From Matthew James Bailey : Once we understand that ethics and cultural diversity can be encoded in the next generation of AI, then our societies will take a huge leap forward – World 3.0.
17:26:17 From Sally Macdonald : @ Maribel – and philosophers.
17:26:25 From Arvind : Taking up a “bias problem” without talking about a whole boatload of things is attributing a LOT to just social prejudice
17:26:38 From Maribel Quezada : @ Sally – 100%
17:26:39 From Alex Monaghan : Zoom is biased against men, obviously – it’s time something was! 🙂
17:26:48 From Matthew James Bailey : Here you go –
17:27:49 From Mithil Bhimani : Hello My name is Mithil Bhimani From India
17:28:46 From Alex Monaghan : AI doesn’t HAVE to be a data-based product – that’s a paradigm choice. There is also rule-based or knowledge-based AI – it’s just harder than data-based ML!
17:28:49 From Shannon Fischer Menlo Park, CA : Hi everyone,
Interested in the topic. Here to learn! Happy to connect.
17:30:17 From Mithil Bhimani : I have one question regarding AI jobs: can anyone tell me how AI & ML will create new jobs in the future, given that this technology is an automation revolution?
17:30:19 From Varshaa : Hi All, Happy to connect on linkedin. Please send me an invite with message “MKAI” in the subject and will be glad to connect
17:31:11 From Neeraj Satpall : This is such a pertinent discussion! very happy to connect with people here
17:31:13 From Ana Montes : Can you do a double blind test in AI as you use in psychology?
17:32:20 From Ben Horlick : @Ana – what do you mean by double blind test here?
17:32:48 From Richard Foster-Fletcher : Join our ongoing conversation on Telegram:
17:33:11 From Paula Kilgarriff : for Fashion AI & Blockchain for Sustainability (Metaverse, 3D Commerce & AR/VR);)
17:33:30 From Richard Foster-Fletcher : Follow MKAI updates on LinkedIn:
17:34:34 From Richard Foster-Fletcher : Again: Join our ongoing conversation on Telegram:
17:34:36 From Edel Sanchez : I think the way to solve bias in Artificial Intelligence is to include the poor, because AI costs them more
17:34:46 From Neeraj Satpall : MKAI is leading by example by having more women in leadership positions!
17:34:49 From Sabrina Sidl : We are sharing this mission
17:35:05 From Mike Nash : Thanks Vibhav brilliant.
17:35:25 From Ana Montes : In the double blind test, not only does the person participating in the experiment not know whether they are in the placebo or experimental group, but the persons that give out the protocol also do not know which group the participants are in. I don’t know if I explained myself.
17:35:28 From Vibhav Mithal : All the MKAI Links Join and contribute to the MKAI Conversation on Telegram: – Follow MKAI for info and updates at: – LinkedIn: – Telegram: – Instagram: Learn more at Have questions? – Please reach out –
17:36:48 From Pamela Gupta : Ryan Carrier we are very much aligned, I am in CT and would love to meet and collaborate
17:36:53 From Vibhav Mithal : If you have questions – please connect – Please reach out –
17:37:03 From Vibhav Mithal : Excited to host you Ryan!!
17:37:20 From Paula Kilgarriff : Machine learning for non-tech or subject matter experts is a good idea for participating in unbiased AI use cases; that’s something MKAI should consider
17:37:21 From Mike Nash : What an interesting job you have Ryan.
17:38:19 From Vibhav Mithal : That is a great starting point – the conversation is not about taking Bias to 0. The conversation is ‘bias mitigation’!
17:38:27 From Alexandra : agree!
17:38:43 From Rika Eichner : that is a great Approach!
17:39:05 From Hongdan Han : well said Ryan, totally agree
17:39:30 From Markus : More about ForHumanity here:
17:40:43 From Alex Monaghan : Reminds me of Tom Lehrer’s 1970s claim that the US military had gone further than anyone else in eliminating bias – removing discrimination on grounds of gender, colour, sexuality, and even ability! Other branches of the US establishment may have followed this lead since …
17:40:50 From Mike Nash : Thanks Markus for link,
17:41:43 From Alex Monaghan : Fairness is a key concept = great to know that someone has defined it!
17:43:50 From KS : British Standards Institute is devising global standards on AI
17:44:16 From Pamela Gupta : I agree, 3rd party objective review is important.
17:44:17 From Ben Horlick : lol similar to financial audits…
17:44:32 From Mike Nash : Auditable AI would this include transparency for all?
17:46:15 From Gerry Copitch : Ryan you rock!
17:46:36 From Alex Monaghan : @mike transparency is a problem – we can audit input and output but there is no obvious way to audit the process in between. Same with humans – you can’t audit whether people are behaving in a biased way, except by looking at inputs and outputs.
17:47:04 From Markus : Told you, *R*O*C*K*S*T*A*R* 😉
17:47:14 From Dean Svahnalam – AiBC : Totally agreed with u Gerry ! Ryan ROCK <3
17:47:16 From Ben Horlick : Don’t disagree that ultimately a compliant vs non distinction needs to be made but why does that need to be a 1/0 binary vs a “score” similar to a lot of ESG audit firms?
17:47:29 From Vibhav Mithal : (a) Independent auditor; (b) Binary rules – that is a great way for clarity and certainty on how to build trust in AI.
17:47:37 From Jennifer : Just finishing my data science certification to pivot out of education, aiming to get involved in ethical AI in either nonprofit or sustainability. Would love to connect!
17:48:07 From KS : you can’t get to transparency for all. Example is financial algorithms as they can potentially be gamed and used for fraud. Transparency should sit above ethical standards. But all AI systems need transparent disclosure to those it is making decisions on.
17:49:17 From Markus : Most ESG “Scores” are just another way to greenwash. Binary rules are auditable because the audit test for each individual audit rule is based on established & factual evidence
17:49:24 From Pamela Gupta : @Alex I agree, the model for Governance I created includes Transparency, Privacy and Security.
17:49:30 From Markus : (similar to financial audits)
17:49:36 From Mike Nash : @alex so possibly a standardised framework is key first.
17:50:00 From Ben Horlick : Financial audits aren’t binary really though, accountants issue a final “opinion”
17:50:27 From Ben Horlick : Its unclear to me why binary is auditable but non-binary is not
17:51:24 From Alexandra : Because there’s more room for vagueness perhaps?
17:51:27 From Ben Horlick : My issues with most ESG scoring systems are methodological not output-oriented
17:51:29 From JAMI K.K : Ryan I totally agree with most of it and I love it , good one buddy
17:51:31 From Lavina Ramkissoon 🙂 : Q: @Ryan has forhumanity done a self audit around policies, people, products, services, risk etc …
17:51:48 From Ben Horlick : Fair – all depends on the “scoring” methodology
17:51:48 From Susana Molinolo : Buuut what about the lobbyists that infiltrate everything?
17:52:26 From Pamela Gupta : I have to drop, please reach out Pamela Gupta on LinkedIn if you are interested in Trust in AI. We will have a security and risk summit for AI in end of October if you want to present or attend please reach out.
17:53:26 From Alexandra : I think doing both is a good idea, as if something is compliant it could probably achieve a higher or lower score, so it would be nice to know that score as well
17:53:38 From Markus : FH has a code of conduct (mindful of infiltration etc.) They also use democratic voting by the FH Fellows on issues ensuring everyone’s views can be heard
17:53:43 From Alexandra : For standards / best practice etc
17:54:26 From Ben Horlick : Yeah – that was my thought. I’m unfamiliar with EU AI ethics compliance regulations, so the output may just be a function of the regulations
17:54:37 From Sally Macdonald : er… the audit profession currently earns enormous fees for non-audit work from their audit customers globally. That is what the new audit whitepaper, Restoring Trust in Audit, is all about. Audit is going to be fully split from other fees, building on the 2019 Kingman review, by 2024 in the UK, but that is not the same globally. There will be a new regulator (ARGA), but the decision to split audit and consulting was only announced in July 2020, following the Carillion scandal. Auditors are not sufficiently independent yet.
17:54:57 From Ben Horlick : fascinating, thanks
17:55:09 From Alexandra : I’m just going on experience of building standards
17:55:45 From Markus : Yes, in the financial space, it’s a Big 4 + Tier2 cottage industry, not sufficiently independent (yet, may change, but remains to be seen how well implemented)
17:56:08 From Mike Nash : I think the greatest problem will be benchmark as different regions may have different levels of benchmarking. Maybe need an international standard to be established?
17:56:18 From michele abraham : I specialize in AI within recruiting, and the pipeline within tech is very flawed in its men-to-women ratio, leading to 82/18
17:56:35 From Ben Horlick : as long as the auditee is paying for the audit (which is always the case in the US afaik), its real hard to be independent haha
17:56:45 From Ben Horlick : for financial audits anyway
17:56:54 From Markus : ISO, WEF, OECD and others are working on standards. I am doing my own (high-level) bit with the UN. The problem is there is no global alignment at present
17:57:04 From Ben Horlick : yup yup yup
17:57:11 From Vibhav Mithal : I like how this is a practical way to try and find a solution to bias in AI.
17:57:55 From amanda h : Nor should the leaders who have a stake in the outcome.
17:57:56 From Mike Nash : @Markus Interesting Markus. Will they all come to a unified agreement?
17:58:08 From Deborah Power : Mitigating bias and measuring binary risks and accountability by independent AI auditors assessing risks sounds great. But similar to financial auditing which is not always a transparent independent function.
17:58:13 From Alex Monaghan : Why is a 100% female shortlist/longlist a step too far? We see this all the time in the UK.
17:58:20 From Markus : Not dissimilar from the credit rating agencies, where the bond issuer pays for the credit rating (with very few exceptions). Been criticising (and improving) that old-fashioned model with the SEC since 2010 and the ECB/ESMA since 2012/13.
17:58:59 From Richard Foster-Fletcher to Vibhav Mithal(Direct Message) : Would you like me to add you to the panel discussion?
17:59:09 From Vibhav Mithal : There can be global alignment – if an AI Audit framework is approved by a regulator in one place. Once a framework is approved, it might act as a catalyst for others to adopt. It is one step to mitigate bias.
17:59:40 From Vibhav Mithal to Richard Foster-Fletcher(Direct Message) : Thanks ! I can come in at a point when I have a contribution. We can keep it impromptu. This is very kind. Thank you.
17:59:50 From Markus : We HAVE an opportunity to build this Ai model transparently (independence is hard-coded in by FH/Ryan in the operating model already)
18:00:11 From Vibhav Mithal : Agree Markus. The opportunity!
18:00:22 From Richard Foster-Fletcher to Vibhav Mithal(Direct Message) : Ok, just add yourself if you have something to say
18:00:25 From Sally Macdonald : How do men feel about jobs they would like to compete for being subject to women-only candidate lists? Surely what is important is that the best person for the job gets the position. Surely what we should be doing is developing systems where candidates can be totally anonymised, perhaps even doing interviews with cameras off and voice distortion on.
18:00:35 From Paula Kilgarriff : Infomercial
18:00:37 From Vibhav Mithal to Richard Foster-Fletcher(Direct Message) : Sure. Thanks a lot. I will prompt here and will come in.
18:00:56 From Paula Kilgarriff : Great discussion!
18:00:58 From Markus : It feels at times that “men” are not allowed to “feel” anymore…
18:00:59 From Aleah Shuren : You were great, Ryan!!
18:01:04 From Jennifer : Very enlightening! Thank you!
18:01:10 From Susana Molinolo : Standing ovation Ryan Carrier!
18:01:14 From Vibhav Mithal : Exceptional talk Ryan!
18:01:25 From Neeraj Satpall : Thanks for an enlightening talk @Ryan
18:01:26 From Gerry Copitch : Huge THANKS Ryan! Brilliant as always!!
18:01:31 From Debby Kruzic : Very interesting and engaging. Thank you.
18:01:48 From Simon Swan : Thank you Ryan! Just awesome and clearly presented!!
18:01:48 From Chanelle : thank you!
18:01:50 From Alexandra : Thank you!! Ryan
18:01:52 From Ana Montes : How can it be taken to ALL countries?
18:01:54 From Sally Macdonald : Thank you, Ryan. That was a super introduction to some of the different areas which need to be addressed.
18:01:59 From Markus : Great talk, thank you Ryan for spreading the word and keeping up this VERY important work by ForHumanity – for humanity!
18:02:03 From Hongdan Han : Brilliant talk Ryan, thank you
18:02:05 From michele abraham : Great organization!! Thanks so much for this information Ryan!
18:02:05 From Mike Nash : I think a unified model will come; once one state sets the level (i.e. EU AI legislation), it may initiate change rippling across the world.
18:02:35 From Yeffe : Appreciate Team MK Ai-4 Humanity
18:02:37 From Mike Nash : Great points Ryan.
18:02:43 From Vibhav Mithal : ForHumanity is a great place – here – Come and build an #infrastructureoftrust.
18:02:48 From Karen Rea : Thank you Ryan. Fab.u.lous!! And yes I can attest to the crowd sourcing. I am part of it and honoured to be there. 👏👌
18:03:26 From Vibhav Mithal : Join the MKAI conversation here –
18:03:32 From Deborah Power : Thank you Ryan – its important to hear all views. Setting the foundations for independent feedback. 👍
18:03:59 From Vibhav Mithal : The LinkedIn post to say Hi to everyone here –
18:04:16 From Markus : Yes, it’s a great place!
18:05:33 From Susana Molinolo : S-U-C-H a hopeful discussion. Sorry I have to drop.
18:05:54 From Neeraj Satpall : AI ethics needs to be addressed with regulation, and AI ethics needs to be embedded in research and development, around responsible AI, fairness, accountability and explainability.
18:05:57 From Alex Monaghan : @sally as a middle-aged middle-class overweight white male (at least I’m not going bald!), I think men need to experience bias in order to be motivated to address it. Regarding all-female lists, we see this in UK politics, in public service jobs, and in several niche professions – and it is redressing a balance. If we looked at balancing the total number of employees, rather than balancing an individual hire, it is obvious that we should generally employ females. If you consider the extra obstacles which minorities – and women – have to overcome in order to get to the same place as white males in most societies, it is obvious that the women and the minorities are usually more talented. We need to look at the wider picture, not just an individual case.
18:06:30 From Richard Balele : Thanks Ryan!
18:06:36 From Mike Nash : @Markus interested to hear how you get on with international standards. Sounds good.
18:06:37 From Markus : Agree with @Alex – nothing beats experiencing bias
18:06:47 From michele abraham : agreed!
18:06:57 From Vibhav Mithal : If you want to learn more about ForHumanity – here is a link – . The entire framework explained in a conference organized by ForHumanity on September 8 and September 9, 2021.
18:07:29 From Ben Horlick : signed up!
18:07:42 From Markus : @Mike, I share the occasional soundbite (if I can and semi-confidentially) at the MKAI WhatsApp group. Suggest you join…
18:07:45 From Luiz Fernando Contatori Romano Meneses Botelho : What a wonderblasting meeting, thanks! Sharing my URL:
18:07:59 From Richard Foster-Fletcher : Shout if you want to be in the ‘hot seat’. We can always pull up another chair 🙂
18:08:00 From Vibhav Mithal : Great to hear Ben. And below is the MKAI Link to continue the conversation – 🙂
18:08:28 From michele abraham : Do we think that radical pipeline recruiting shift may mitigate majority of standardized bias within data sets and certain AI use cases?
18:09:19 From Mike Nash : @markus sounds good. Happy to connect and add value.
18:09:33 From Sally Macdonald : Bias isn’t in itself, per se, a bad thing. Failing to recognise the nature and extent of it is. We are all human and therefore all biased in some degree. The danger lies in not being able to recognise the extent of it. If recognised, steps can be taken to tweak the AI to mitigate it. One has to take some kind of stand, even if it runs counter to the tastes of some participants. That is not a problem, as long as it is known.
18:09:42 From Ben Horlick : is the WA or telegram group more active?
18:09:43 From Neeraj Satpall : Well said @Hema, companies need to ask if they have the right interdisciplinary teams in place
18:09:45 From Luiz Fernando Contatori Romano Meneses Botelho : Thanks Richard – I may not have the right clothes for the “hot seat” today… 🙂
18:12:14 From Vibhav Mithal : Hi Ben – The link is for the Telegram group. All groups are active. :-). There are many sub-groups as well. The telegram link is here – You will need to have Telegram on your phone/device to join.
18:12:18 From michele abraham : I agree Ati!
18:12:19 From Markus : It’s not in the interest of the dev/coder to challenge / scrutinise their own code…
18:12:33 From Ana Montes : I think the order should be people, then product and profit. I still think that one of the big problems is that the law in the US says the organization has to satisfy the stockholders; changing it to the stakeholders would make a big difference.
18:12:36 From Markus : I.e. they don’t mark their own homework
18:12:50 From JAMI K.K to Richard Foster-Fletcher(Direct Message) : Thanks for this wonderful & thoughtful session, love it, I have to step out for another meeting, nice to see all these wonderful peps, been blessed
18:12:52 From Chanelle : agreed!
18:12:52 From Ben Horlick : Thanks Vibhav!
18:13:23 From Richard Foster-Fletcher to JAMI K.K(Direct Message) : Thank you so much Jami!
18:13:33 From michele abraham : We need people within accessibility, people from various economic backgrounds, veterans, mental health
18:13:33 From Richard Foster-Fletcher to JAMI K.K(Direct Message) : Congratulations again for today
18:13:35 From Vibhav Mithal : You are welcome Ben!
18:13:36 From Neeraj Satpall : Question to panel – what would be your recommendations for organisations to have a measurable, actionable de-biasing strategy? Should the approach contain a portfolio of technical, operational, and organisational actions?
18:13:53 From michele abraham : There are so many barriers to entry within technology
18:14:04 From JAMI K.K to Richard Foster-Fletcher(Direct Message) : Bye for now
18:14:22 From Richard Foster-Fletcher to JAMI K.K(Direct Message) : Bye 🙂
18:14:48 From Markus : We need to train devs/coders not only in the hard Python (or whatever) skills but also in the ability to question whether they SHOULD do what they CAN technically do. These are hard questions – and somebody needs to ask them!
18:15:15 From Alex Monaghan : Data sets will always be biased. They will never be complete, and their interpretation by machines cannot be predicted, so mitigation will not work a priori. The problems may be mitigated by a posteriori manipulation, or by rules/knowledge/understanding which is not currently applied to most AI products. 2 recent examples I have studied this week: Teslas and their habit of crashing into emergency vehicles at night, because the flashing lights and illumination at RTAs are not represented in most Tesla datasets. Horses and zebras – you would think that they are distinguished by their looks, but AI systems learn that only horse images contain saddles and/or stables, so any photo of a zebra with a stable and a saddle is 100% identified as a horse. These examples are both cases of bias, and neither was predictable until the product was in the “wild”
18:15:17 From Jennifer : YES! I am a midwestern white woman, but I have lived overseas in four countries for 7 years total. I am not of those cultural backgrounds with my heritage, but I see things differently than people with those backgrounds when bridging the gap in communication and understanding too
18:15:37 From Vibhav Mithal to Richard Foster-Fletcher(Direct Message) : Ryan’s work is far ahead of many others. I can join in at any point of time. Happy to contribute in any way. :-).
18:16:07 From Richard Foster-Fletcher to Ean Mikale(Direct Message) : I see your hand Ean and I’ve asked Jaisal to come to you 🙂
18:16:37 From Markus : Agree with @alex These algos need back/stress-testing and not in a simulated environment but the real world
18:16:58 From Mike Nash : I think that compliance should be applied to AI projects now; as Ryan said with GDPR, there were many that were caught out changing legacy systems.
18:17:14 From Ben Horlick : agree – need something like the Fed’s stress testing on banks imo. out of sample backtesting
18:17:55 From Markus : Stress-testing/ Back-testing and Real-world “sandboxes” – not dissimilar to what banks are doing
18:18:28 From Rebecca Spour : What do folks think about the iso 307 blockchain working group?
18:18:28 From Ben Horlick : yup
18:18:32 From Alex Monaghan : I wrote a bit on LinkedIn about this. Bias is a judgement by humans. Fairness is a judgement by humans. You can’t automate those.
18:19:03 From Sabrina Sidl : automation does not make things safer or less bias – I believe the magic lies in collective intelligence – use technology with ongoing human/tech audits
18:19:05 From Markus : It also ignores “noise” in the data. As per Daniel Kahneman’s latest book
18:19:15 From Maribel Quezada : Yes. They already use something like that in the healthcare space
18:19:20 From Manijeh Motaghy : I’m sorry, I just got on. Will try and make sense of the rest of the content. thank you for this subject.
18:19:44 From Richard Foster-Fletcher : Luiz, there’s no dress code for MKAI 🙂 You are very welcome to contribute on or off camera
18:19:50 From Alex Monaghan : @manijeh – good luck with that 🙂 There will be a recording available …
18:20:25 From Luiz Fernando Contatori Romano Meneses Botelho : Wunderblasting Richard, thank you! I'll do my best off camera this time. How nice of you, btw.
18:20:40 From Richard Foster-Fletcher : Be great to hear from you Luiz
18:20:44 From Alex Monaghan : @Richard – woah – you are brave today! You want Italian politicians in the shower on your panel? 😉
18:20:52 From Hongdan Han : decentralized AI
18:21:04 From Markus : Distributed ledger technology / DLT – check out Hedera Hashgraph
18:21:25 From Hongdan Han : The combination of AI and blockchain technology
18:21:27 From Yeffe : Approx. 10 min appeal start 22 mark… Ai4 – Climate Science… Can we make it happen? 🚀
18:22:38 From Markus : I would argue anything relying on technology – such as computer / wifi access / network + connectivity carries the danger to exclude a large chunk of the world population. That’s bias too!
18:22:42 From Richard Foster-Fletcher : Alex posts interesting content – take a look
18:22:50 From Mike Nash : Do the panel think that there should be less regulation but stiffer penalties or unified firm frameworks for compliance?
18:23:45 From Ana Montes : Keep humanity the master and AI the servant!
18:23:51 From Markus : More globally-coordinated regulation rather than national / federal regs that may lead to regulatory arbitrage – certainly by the Big Tech / FANGs
18:24:00 From Deborah Power : A shared global code of ethics sets an obligation to provide increased professional ethical principles supported by AI digital technology systems, and the development of everyone to self-regulate and create human transparency.
18:24:23 From Alex Monaghan : Oh great – remind me how many million customer records were hacked from Microsoft this month …
18:24:34 From Ben Horlick : + T-Mobile
18:24:45 From Mike Nash : Good point @deborah
18:24:47 From KS : Vodaphone and Vodacom
18:24:57 From Markus : + (the ones not yet in the public domain / reported)
18:25:09 From Ben Horlick : apple backdoors 🙂 but that’s probably a topic for another day
18:25:30 From Karen Rea : This is such an important evening as we are seeing a real meeting of minds and a positive way forward that is both practical and workable. Recognising bias and dealing with it within AI/algorithms will be a vital step to making AI ethical and as good as we can for humans. Well done MKAI and For Humanity 👏👏
18:25:34 From KS : Public attitudes to data here are key, e.g. the NHS in the UK attempt to centralise all patient health records. Millions opted out. Lots of work to build up trust and participation
18:25:53 From Markus : ^^^ I opted out too ^^^
18:26:00 From Markus : How dare they!!!
18:26:23 From Markus : Fast way to destroy any remaining trust left in public service orgs and govts
18:26:56 From Richard Foster-Fletcher : NHS Data + Palantir – what will be the fall out of this, that we hear about in 5 years from now?
18:26:59 From Ana Montes : We have plenty of history that shows organizations are very poor at self regulating themselves.
18:27:26 From Markus : Not wanting to become political, but Afghanistan is a good example of how WRONG ALL governments got all of this. Do we want to trust govts and/or self-regulation? No thanks!
18:27:52 From Markus : (We need to begin with the end in mind)
18:28:04 From KS : It all rests on the shoulders of effective communication. Not enough PRs are upskilled themselves to support it. So, another issue: public relations practitioners are the guardians of the truth, the ethical guardians and the reputation guardians of governments, organisations, businesses, brands etc., so they have a huge role with huge responsibility
18:28:12 From Markus : Am doing it for everyone else
18:28:20 From Markus : (and survival)
18:28:39 From Markus : Love the cat!
18:28:43 From Mike Nash : One of the hardest issues will probably be setting the level of bias to create the benchmark. How would this be proposed?
18:28:46 From Alex Monaghan : Cats should decide. They know what’s good for them!
18:28:46 From michele abraham : Yes Dean!
18:28:55 From Mike Nash : Hi Cat. 😉
18:29:04 From KS : #This CatDoesNotExist – AI! haha!
18:29:14 From Alexandra : My cat agrees 🙂
18:29:16 From Markus : Animals have a sixth sense – maybe that’s what’s really needed here?
18:29:30 From Markus : Instinct / Gut feeling
18:29:33 From Pieter van der Walt : Reminds me of Schordinger
18:29:41 From Era : Catty 😄😍
18:29:47 From Markus : Schroedinger
18:29:53 From Maribel Quezada : When tools are militarized by those in power – not the intended use
18:29:56 From Ben Horlick : ha!
18:29:59 From Pieter van der Walt : My AI spelling wasn’t working
18:30:01 From Markus : Schroedinger’s Ai is determining the future state of Humanity
18:30:04 From KS : Why many governments are mass upskilling their citizens or trying to
18:30:39 From Markus : You’re making a v.valid point @Pieter 😉
18:31:06 From Ana Montes : That needs to be the most important premise: everything that we do needs to be done for humanity and the protection of the earth.
18:31:08 From Maribel Quezada : Ask Oppenheimer, who was horrified to find what his scientific experiments were ultimately used for
18:31:16 From Ben Horlick : ^^^
18:31:23 From Vibhav Mithal : Join the MKAI conversation here –
18:31:24 From Markus : @KS but ARE govts really upskilling their citizens? I’ve not seen enough of it (in the UK and Germany)
18:32:10 From Markus : Think Asimov’s three laws of robotics and the Zeroth law could apply to Ai too
18:32:16 From Richard Foster-Fletcher to Dean Svahnalam – AiBC(Direct Message) : Hmmm not sure who un-spotlighted you
18:32:24 From Ben Horlick : They should!
18:32:25 From Richard Foster-Fletcher to Dean Svahnalam – AiBC(Direct Message) : Thanks for your contributions!
18:32:36 From Markus : (at least that’s the basis of my conversations at the UN level)
18:32:44 From Ben Horlick : although if the baseline for an algo is “do no harm to any human” we probably wouldn’t have any algos lol
18:33:04 From Debby Kruzic : Good points, Ati. I have seen privacy falling into this point of thought as well. Companies that protect privacy would benefit with company reputation, sales and bottom line.
18:33:21 From michele abraham : are we saying bias in place of discrimination? I just had that thought, as bias is universal, but discrimination is often at play within diversifying workplaces in creating the software and tools we all use. Just a thought
18:33:28 From Maribel Quezada : Anything that can be weaponized should be regulated to some extent
18:33:42 From Alex Monaghan : People are a problem, as Douglas Adams said. People are responsible for Johnson’s Britain, for Orban’s Hungary, for Putin’s Russia, for Trump’s America – all these leaders were democratically elected, and I can’t imagine anyone is a fan of all four!
18:33:48 From Mike Nash : Thank you Simon..
18:34:01 From Simon Swan : no problem!
18:34:37 From Markus : @Maribel re. “weaponization” my biggest concern is that none of what we are discussing here is applicable to army / military
18:34:37 From Alexandra : People or leaders?
18:34:47 From KS : AI has already been weaponised and will be going forwards. We have had the first UK chief executive held to ransom over a deep fake audio call, which the police are still investigating.
18:35:10 From Markus : Lack of true LEADERS. These are all Mis-Leaders afaik
18:35:23 From Ben Horlick : @Markus – disagree, I mean if you watch the new boston dynamics videos, there are very clearly military applications there haha
18:35:30 From Alex Monaghan : @alexandra that’s why I picked democratically-elected leaders. The people decided. Why is a complex question, of course!
18:35:36 From Alexandra : ‘Good’ leaders are key to reach people. What a good leader is is another question
18:35:49 From Markus : We need to get rid of mis-leaders of the past and find Leaders of the Future (a lot on here maybe?)
18:35:50 From Mike Nash : Great point Hema….thank you.
18:35:57 From Luiz Fernando Contatori Romano Meneses Botelho : @AlexMonaghan… also responsible for Bolsonaro's current retrograde Brazil…
18:36:14 From Alexandra : Yes agree Alex
18:36:39 From Alexandra : And Markus
18:36:43 From KS : health care records will take a long, long time to get to any level of consistency, and the NHS, for example, is not one organisation; there are hundreds of acute hospitals, community providers, private providers and social care all collating and creating data in all different ways for different purposes. There is no single point of sharing data across the same organisation, let alone across whole geographic localities currently. So, huge – and why data trusts will be required
18:36:44 From Dean Svahnalam – AiBC to Richard Foster-Fletcher(Direct Message) : No worries <3
18:36:58 From Deborah Power : Great points Dean – everyone has to be represented for humanity to transition. A level of singularity education for every citizen.
18:37:11 From Ben Horlick : agree
18:37:13 From Markus : @Ben what I mean is that everything we’re discussing here wrt. regs / audit / rules / controls etc of Ai systems does NOT apply to military / army / secret services
18:37:49 From Ben Horlick : @Markus – I believe the military uses AI systems already though, so I don’t really see the difference as much
18:38:04 From Markus : We’ve already seen an autonomous Ai drone in Syria going haywire, searching for and shooting at human targets
18:38:07 From michele abraham : yes Ryan!!
18:38:29 From Ben Horlick :
18:38:39 From Markus : @ Ben, yes, they do and I have seen (some) of it. It scares the XXXXX out of me
18:38:50 From Ben Horlick : THAT I agree with haha
18:38:59 From Mike Nash : Great points Ryan. Do you think a global framework will happen or will it evolve and start from one country/region,
18:39:06 From Markus : The difference is they are OUTSIDE any regulation. That’s the difference.
18:39:13 From Dean Svahnalam – AiBC to Richard Foster-Fletcher(Direct Message) : Was I too much
18:39:19 From Richard Foster-Fletcher to Dean Svahnalam – AiBC(Direct Message) : Not for me
18:39:24 From KS : I’m on the fence, and it is why ethics is so difficult – ethics in different nations across the world are different to each other, so that’s a huge consideration.
18:39:29 From Dean Svahnalam – AiBC to Richard Foster-Fletcher(Direct Message) : Cool… Thanx
18:39:51 From Richard Foster-Fletcher to Dean Svahnalam – AiBC(Direct Message) : The team was very pleased to get your views, they shared this on the WhatsApp group with me
18:40:14 From Alex Monaghan : Yes, @markus, that needs to change. I always end up mentioning sci-fi, movies – these are all the imaginable scenarios, and very few of them end well!
18:40:21 From Dean Svahnalam – AiBC to Richard Foster-Fletcher(Direct Message) : Very Happy to hear that <3
18:41:04 From Mike Nash : Good point Vibhav, on a legal front. Do you think that it will be hard to regulate legally as most AI is international?
18:41:16 From Markus : Re. Hollywood, “Person of Interest” is a great series!
18:41:21 From Dean Svahnalam – AiBC to Richard Foster-Fletcher(Direct Message) : Maybe Jaisal took me way by mistake
18:41:24 From Deborah Power : Great points Vibhav 🔥
18:41:37 From Richard Foster-Fletcher to Dean Svahnalam – AiBC(Direct Message) : Maybe, do you want to contribute more comments?
18:41:50 From Mike Nash : Thank you panel for answering the question. Very good.
18:42:14 From Markus : @alex, the military / army will not change unfortunately. Maybe an international convention similar to restriction of biological/chemical weapons is needed here?
18:42:22 From michele abraham : I’m interested to know if anyone here has worked in tech with an actually diverse data or product team. I genuinely haven’t ever seen one or been part of one. That’s my biggest concern for future of AI in private consumer tech
18:42:36 From Alex Monaghan : Good point., @vibhav. You might recognise the root of this quotation: “There is no justice. There is only AI.” That day is coming, unless we are very lucky.
18:42:42 From Markus : @Michele – agree and see what happened at Google
18:42:46 From Dean Svahnalam – AiBC to Richard Foster-Fletcher(Direct Message) : No, I m good I guess, otherwise I can raise my hand
18:43:18 From Richard Foster-Fletcher to Dean Svahnalam – AiBC(Direct Message) : Ok all good
18:43:26 From Manijeh Motaghy : what if companies are big enough to be able to pay the penalties and would not care about following the regulation? As we have seen throughout the history of big business, penalties are not the most effective consequence – and they come after the harm has been done
18:43:54 From Richard Foster-Fletcher : Alibaba, Tencent….they can afford almost any fine
18:43:59 From Ben Horlick : make the penalties bigger 😈
18:44:09 From Manijeh Motaghy : What Preventative measure can there be?
18:44:12 From Richard Foster-Fletcher : These companies have warchests
18:44:20 From michele abraham : @Markus What happened – litigation and turnover within product?
18:44:30 From Markus : Same as in banking: banks broke rules because penalties were “affordable” and they had fighting funds. Since the GFC, this has changed a lot
18:45:01 From Vibhav Mithal : Penalties can work – but first you need laws. Laws need to be based on input. Input comes from awareness from all quarters. Awareness is built when we have conversations bringing our independent journey to the table and think on AI.
18:45:02 From Markus : @Michele – the treatment of their Ai research team incl. Timnit Gebru and others
18:45:19 From Ben Horlick : In my opinion the only penalty issued coming out of the GFC that had a material impact was the suspension of WF from expanding their balance sheet (which wasn’t even GFC related)
18:45:27 From Vibhav Mithal : Thank you Ati !!! Wonderful talk.
18:45:29 From Alex Monaghan : The main reason GDPR has been so successful – being taken seriously – is that 4% of global revenue is a painful price for any organisation!
18:45:30 From Karen Beraldo : Thank you so much Ati!
18:45:37 From Deborah Power : Thank you Ati
18:45:39 From Manijeh Motaghy : Education? Reforming humanity’s perspective?what other
18:45:53 From Markus : Yes, GDPR is VERY penal (and costly) and hence somewhat feared
18:45:54 From michele abraham : From what I see, the best strategy is tripling down our recruiting efforts, which the UK government has and I admire
18:46:11 From Alex Monaghan : Humour always works so well in this forum 🙂
18:46:14 From Hema Lakkaraju : Thank you Ati!
18:46:18 From Mike Nash : @vibahav – Good points.
18:46:23 From Vibhav Mithal : Thanks Mike.
18:46:32 From Richard Balele : Thanks Ati!
18:46:36 From michele abraham : Thank you Jaisal!
18:46:38 From Adewale : Thank you Ati
18:46:42 From Luiz Fernando Contatori Romano Meneses Botelho : @MicheleAbraham – by the looks of the tech/startup teams in my geography, you may infer how much we are missing diversity and inclusion (actually, if we don't expand and universalize education and opportunities for all walks of life, we might run the risk of perpetuating these privileges and excluding people who are not able to obtain the same opportunities):
18:46:53 From Mike Nash : Thanks Ati
18:47:48 From Vibhav Mithal : For anyone looking to connect with an IP lawyer, and an AI Enthusiast who can answer your questions on both MKAI and ForHumanity – please feel free to reach out –
18:48:36 From Ben Horlick : Have there been a lot of efforts on the audit side to create “standardized” out-of-sample datasets? That, along with open-sourced libraries, feels like most of the answer to me
18:48:46 From Mike Nash : @simon – Open sourcing can help. Maybe there could be a team of data experts to help with the audit process (Ryan’s point)
18:49:32 From michele abraham : Yes Luiz, by recruiting, essentially I mean solving economic discrimination gap, which happens to incredibly large populations. 51% women, 30% global population identifies within neurodivergence and accessibility. And within the private and public sector, these are groups that are very marginalized in tech
18:50:17 From Karen Beraldo : It is a great chat indeed, let us continue this conversation. Say hi and stay connected:
18:50:39 From Mike Nash : Crowdsourcing regulation too? Interesting concept.
18:50:43 From michele abraham : I do believe we can create pipelines, as a tech founder, our teach was 50% women, we had veterans, neurodivergence, had accessibility advisor
18:50:45 From Luiz Fernando Contatori Romano Meneses Botelho : Amazing, huh? @Michele! We've got to do something about it before something serious builds up, just as with climate change (that we couldn't tackle appropriately in due time)…
18:50:46 From Alex Monaghan : I’m plucking figures out of the air slightly here, but – there are about 3000 languages worldwide, fewer than 100 are well supported by speech technology – and that number has not really grown in the last ten years. AI will never support thousands of languages, until we get the Star Trek Universal Translator (i.e. never) – so we will always have bias.
18:50:57 From Manijeh Motaghy : Can regulations be coded within the AI programming process before any company uses the more standard technology?
18:51:00 From michele abraham : team*
18:51:05 From Markus : @Alex agree
18:51:11 From michele abraham : We put the extra work in
18:51:17 From Alexandra : I’m thinking, there are times when I want the focus of data to be local, and there are times when I’d like it to be global, so is diversity always key?
18:51:21 From Vibhav Mithal : Join the MKAI conversation here – Just sharing the link once again if you missed it.
18:51:49 From Markus : As Ryan said earlier, we cannot eliminate bias, but we can figure out how to mitigate it. Which is what FH does
18:51:56 From Vibhav Mithal to Richard Foster-Fletcher(Direct Message) : ForHumanity has a conference on the 8th and 9th. We still have 91 people here. Can we share that link here? I am expecting Ryan to do it – but not sure if he will.
18:52:05 From Vibhav Mithal to Richard Foster-Fletcher(Direct Message) : The conference will explain everything they do.
18:52:09 From Mike Nash : Maybe work to a standard of Explainable AI (XAI) that writes thinking to blockchain with open source (Crowd source regulation)?
18:52:17 From Alex Monaghan : Great reminder, @markus
18:52:26 From Richard Foster-Fletcher to Vibhav Mithal(Direct Message) : Sure
18:52:26 From michele abraham : From my lived experience in technology, it’s not just bias
18:52:32 From michele abraham : Its also discrimination.
18:52:45 From Ben Horlick : @Mike – agree that’s a good audit / transparency mechanism
18:52:48 From michele abraham : I really care, which is why I bring this up in this thought leadership space
18:52:52 From Markus : @Michele, sadly have to agree also
18:52:57 From Luiz Fernando Contatori Romano Meneses Botelho : Alex, this is appalling!!! “there are about 3000 languages worldwide, fewer than 100 are well supported by speech technology”
18:53:21 From Luiz Fernando Contatori Romano Meneses Botelho : Indeed Michelle!
18:53:51 From Alex Monaghan : @michele, that’s a very interesting distinction. We assume that AI bias is accidental – while discrimination is often deliberate. I hope there is almost no deliberate discrimination by AI, but who knows?
18:54:00 From Markus : @Richard very good points,
18:54:00 From Mike Nash : @ben you’re right, it would bring transparency.
18:54:23 From Deborah Power : Good points Ryan / Richard
18:54:30 From Markus : You’re going through the hiring / HR filter and suddenly have been on-biased (= onboarded)
18:55:04 From michele abraham : It’s indirect within data, by not having inclusive data sets, and deliberate in current hiring practices, which I keep ranting about lol.
18:55:11 From Mike Nash : Good point Richard.
18:55:12 From michele abraham : @ alex
18:55:33 From michele abraham : What we use, is created by people
18:55:41 From Gerry Copitch : We should also incorporate non-humans to eliminate bias 🙂
18:55:59 From michele abraham : Machines are trained by humans
18:56:06 From Mike Nash : @gerry 🙂
18:56:10 From michele abraham : so theoretically, it would create even more bias
18:56:38 From Richard Foster-Fletcher : Interesting decentralised, tokenised apps are appearing – some based on the Ethereum blockchain – that can be used to share data securely, protect IP and track contributions from external parties (individuals), and even pay them in cryptocurrency using embedded smart contracts. Very exciting and something MKAI plans to implement!
18:56:39 From michele abraham : I am interested in creating holistic AI which will take years but that is my MSc thesis
18:56:53 From Gerry Copitch : Ultimately machines will train themselves 🙂
18:57:23 From Markus : re. DLT tech, check out Hedera Hashgraph and HBARs
18:57:33 From Vibhav Mithal : Great point Hema! We need to de-mystify and understand what we mean by ‘diversity’.
18:57:33 From Luiz Fernando Contatori Romano Meneses Botelho : @michelle – so clear-cut to see – I also approached these biased hiring practices, but as @alex said, not all languages are fully mainstream – it was recorded in Portuguese:
18:57:36 From Karen Rea : I have to go but thank you for a fantastic event 👏
18:57:45 From Alex Monaghan : @michele true, but usually blindly these days – the information is collected in a pretty blind way; nobody can go through all the data and skew it deliberately in most cases. It does tend to be easily-collected data in many cases, as Jaisal is saying
18:57:46 From michele abraham : Very interesting things happening in blockchain!
18:57:50 From Deborah Power : Awesome panel as usual – Thanks everyone – I enjoy the discussions so much.
18:58:18 From Deborah Power : Have to leave for my next meeting. Thank you
18:58:30 From Richard Foster-Fletcher : Thank you Deborah
18:59:09 From Gerry Copitch : If Carlsberg ran AI webinars they wouldn’t even be in the same league as MKAI events 🙂
18:59:12 From Markus : (Big) Data = (Big) power
18:59:31 From Markus : (our) data = valuable to other companies
18:59:36 From Vibhav Mithal : Fantastic points – Asking the question why is key.
18:59:48 From Ben Horlick : agree
18:59:57 From Markus : That includes behavioural data etc, for instance Oculus Go collecting data about your body movements etc
19:00:07 From Ben Horlick : (Facebook)
19:00:28 From Ricardo Baeza-Yates : 90% of the institutions will never have big data, the current hype is just increasing the technological gap that already exists. Check
19:00:32 From Markus : WhatsApp (Facebook) Instagram (Facebook)
19:00:51 From Markus : Almost on a trip to collect as much as possible
19:01:03 From Alex Monaghan : Selling data will be lots of fun! I’m sure you know the story where Singapore had a problem with rats in WW2, so the British army offered a bounty on dead rats – just the tails actually. They bought millions of rat tails – more than there could reasonably have been – before they discovered that many entrepreneurs were running huge rat farms to get the bounty. Humans are special in so many ways!
19:01:06 From Alexandra : Likewise have to leave, thanks so much all, very insightful discussion!
19:01:09 From Ben Horlick : Thanks for the article Ricardo – looks interesting
19:02:13 From Simon Swan : Really well summarized!
19:02:31 From Jaisal Surana : Thanks everyone
19:02:38 From Mike Nash : Great discussion. Good nutshell Jaisal.
19:02:39 From Ben Horlick : open sourced!
19:02:43 From Lavina Ramkissoon 🙂 : 🤣
19:02:47 From Debby Kruzic : Thank you all very much.
19:02:50 From Markus : Massive thank you to all the panel members for their insights, Jaisal for being a good cop, Richard for facilitating all of this, and MKAI for putting on these mind-blowing events! You ALL ROCK!
19:02:55 From Rika Eichner : That was a wonderful and really insightful session – thank you!
19:03:04 From Markus : And Vibhav for managing the crowd and comms
19:03:06 From : Awesome conversation, thank you so much everyone!
19:03:16 From Daren Warburton : Brilliant session, thank you all so much
19:03:23 From mariacarolinagonzalezhernandez : Insightful event. Really enjoyed it!
19:03:25 From Will Peck : Thanks everyone, another great session!
19:03:30 From Jitendra Shakya : Thank you very much guys wonderful 👍🙏
19:03:36 From Markus : We want MORE!
19:03:38 From Hema Lakkaraju : Thank you everyone. Really enjoyed this session. Lets connect :
19:03:39 From Karen Beraldo : Join us at telegram to continue this conversation:
19:03:43 From Mike Nash : That was fantastic everyone. Really pushed the thinking forward. Thank you Mike Nash
19:03:47 From Vibhav Mithal : Thank you everyone for a great session!
19:04:12 From Dean Svahnalam – AiBC : Thank u so much everyone <3 I love to learn from u guys <3 Thank u again
19:04:13 From michele abraham : Amazing talk, thank you everyone and for inviting me Richard!!
19:04:14 From Jaisal Surana : Thanks everyone lovely session thanks to all panelists and speakers
19:04:18 From Aleah Shuren : Thank you for such an insightful session!
19:04:25 From Nayyara Rahman : thanks all
19:04:30 From Nayyara Rahman : goodbye and stay in touch
19:04:57 From Richard Balele : Thank you everyone for a great talk! ATB..
19:04:59 From Markus : Love your cat @Dean, where’s yours, @Ryan?
19:05:11 From Luiz Fernando Contatori Romano Meneses Botelho : Thanks / Gracias / Danke / धन्यवाद
19:05:14 From Adriana Esper : Thank you it was awesome!
19:05:17 From Ben Horlick : Thanks all!
19:05:23 From Markus : Vielen Dank!
19:05:26 From Murali Rao to Richard Foster-Fletcher(Direct Message) : let me know once you pause the recording
19:05:27 From Adewale : Thanks everyone
19:05:29 From Murali Rao to Richard Foster-Fletcher(Direct Message) : thank you
19:05:42 From Prathima Appaji : Thanks everyone
19:05:56 From Ed : Thank you very much everyone
19:06:00 From Edel Sanchez : Thanks all
19:06:25 From Shannon’s : Have to sign off… thanks so much everyone. We all need to be talking about bias. Please connect

19:06:38 From Gerry Copitch : Got to go. Love all of you!
19:07:17 From Karen Beraldo : Love you Gerry!
19:07:23 From Markus : Good night, Gerry – let’s do another WonderCafe soon…
19:07:42 From Karen Beraldo : Thanks everyone!
19:09:13 From Karen Beraldo : Join us at telegram:
19:10:13 From Manijeh Motaghy : I’m so glad to have been a part of this group today. I’m not a tech professional, but a human development trainer and mindfulness teacher. I am always interested in knowing about human biases and the channels they travel, how they impact others and the future generations…
19:10:17 From Vuppala Rohit Anjan : Hi Everyone

Thanks to the panelists for conducting such a good GM

It’s very interesting; I thank all, name by name

Follow me on

Subscribe to my #YouTube channel:-…

Contact me on:-

#WhatsApp:- 9963264764
19:11:06 From Karen Beraldo : Michele we would love to have you in Women in MKAI September meetup!
19:11:26 From Simon Swan : That’s such a great insight, thanks @michele!
19:11:37 From Karen Beraldo : All the women here are invited, of course!
19:11:42 From Markus : Fab idea, Karen, but please share the insights with the MKAI “men” too… This is relevant for ALL of US!
19:12:34 From Karen Beraldo : Of course Markus, hope there will be a time when all genders will be running the same marathon, with no obstacles on it
19:12:53 From Jaisal Surana : sorry guys need to drop I loved every part of it.
19:12:55 From Markus : Here’s to the crazy ones, the misfits, the rebels, the troublemakers, the round pegs in the square holes… the ones who see things differently – they’re not fond of rules… You can quote them, disagree with them, glorify or vilify them, but the only thing you can’t do is ignore them because they change things… they push the human race forward, and while some may see them as the crazy ones, we see genius, because the ones who are crazy enough to think that they can change the world, are the ones who do. – Steve Jobs –
19:13:09 From Karen Beraldo : Thank you so much Jaisal!
19:13:56 From Karen Beraldo : Brilliant
19:14:06 From Markus : @Richard, given you are still recording, could this talk from Michele be shared via MKAI?
19:14:26 From Markus : This is SO relevant and luckily you’re still running the recording 😉
19:14:26 From Richard Foster-Fletcher : Yes kept the record on, very glad I did
19:14:28 From Edoardo Di Maggio : Thank you very much everyone, very interesting and insightful conversation and arguments. Pleasure meeting you. Edoardo Di Maggio
19:14:32 From Lavina Ramkissoon 🙂 : haha yes everyone males, females, non binary, etc …. 😉😊
19:15:21 From Karen Beraldo : Thanks Edoardo!
19:16:34 From Vibhav Mithal : Thanks everyone – for a fantastic evening! Great insights. Have to drop off now. Please take care. Look forward to the recording!
19:16:41 From Markus : I think there’s talk that 90% of human operations / thoughts are done unconsciously
19:16:52 From Markus : Good night Vibhav
19:16:55 From Karen Beraldo : Thanks Vibhav!
19:17:10 From michele abraham : thanks so much for your thoughts Vibhav!
19:17:19 From Karen Beraldo : I don’t know if 90%, but I am sure a huge amount of them are
19:17:47 From Markus : 89% 😉
19:18:01 From Karen Beraldo : 🙂
19:19:23 From Markus : Is there something like MAindfulness?
19:19:54 From Markus : Love the work by Elisabeth Kuebler-Ross
19:20:55 From Mike Nash : Another interesting point… when a lot of AI is being developed to self-learn/evolve, would it be complicit in bias too, or would it only be the source (i.e. the human) that is responsible?
19:22:54 From michele abraham : So true!
19:22:58 From Markus : @Mike – Guess Ai could, via conditioning / nudging, reinforce human behaviour (which could be both good or bad)
19:23:36 From Ana Montes : Markus I like that.
19:23:46 From Simon Swan : Bias breeds bias
19:25:24 From Mike Nash : @Markus good point, so maybe bias monitoring should be an iterative process (continue – Monitor/change/monitor/change), channelling or focussing to a certain level or benchmark.
19:25:49 From michele abraham : LOL
19:26:08 From Simon Swan : @mike definitely! The assessment of bias should be under constant evaluation
19:26:24 From michele abraham : LOL
19:26:37 From michele abraham : Touche Richard!
19:26:56 From Markus : Could we teach Ai to “love” (humans)? Would that help?
19:27:01 From Murali Rao : Great session. Thank you !
19:27:03 From Ana Montes : Death my reluctant companion, which reminds me of the importance of living life fully
19:27:19 From Mike Nash : @simon Good point, I suppose being receptive all the time creates a better outcome quicker?
19:27:20 From Markus : Carpe diem, Ana 😉
19:27:27 From Karen Beraldo : I think we have another special guest for Women in MKAI
19:27:44 From michele abraham : yayyy
19:27:50 From Markus : Are men allowed to join Women in Ai?
19:27:58 From michele abraham : Is there a link to women in AI?
19:28:00 From Karen Beraldo : Maybe in the future…
19:28:04 From Markus : Guess not, otherwise what’s the point
19:28:17 From Karen Beraldo : I will send it to you Michele
19:28:19 From Markus : Doesn’t that introduce “bias”…
19:28:30 From Markus : [Playing with you, Karen]
19:29:04 From Karen Beraldo : 🙂
19:29:32 From Simon Swan : @Mike an interesting point you made earlier about AI evolving and the biases changing or affecting subsequent biases. A similar example of “AI”
19:30:04 From Markus : Guess there could be a Bias-reinforcement/feedback loop
19:30:08 From Simon Swan : …getting stuck in a bad feedback loop, is with social media connecting people who think in similar ways
19:30:28 From Markus : Bit like being in your Bias-Bubble (Social media bubble)
19:30:55 From Mike Nash : @simon yes so true….yes bad bias or ethics, breeds bad bias or ethics. So constant monitoring and breaking cycle is key. Thank you.
19:30:55 From Simon Swan : Yeah, a horrible example of data being used without acknowledgement of the impact on others
19:30:56 From Markus : We need ppl starting to ethically hack Ai systems
19:31:03 From Ryan Carrier :
19:31:55 From michele abraham : @Markus, with diversified, holistic data sets to train, AI will be a whole diff system in my view, do you disagree?
19:32:04 From michele abraham : Hi Karen, yes, it’s me!
19:32:20 From Mike Nash : Thanks Ryan…interesting.
19:32:42 From Markus : @Michele, agree, but wonder if more is needed in addition to your points
19:34:16 From michele abraham : for example, FB and Amazon just settled major lawsuits for using recruiting software that couldn’t identify female identifiable names
19:34:27 From michele abraham : So women weren’t being asked to interview
19:34:35 From michele abraham : Ryan is 100% correct
19:35:05 From Ana Montes : Play does that, there are rules and also there is the testing of rules.
19:35:27 From Richard Foster-Fletcher : “Algorithmic classifications are far from objective. They impose a social order, naturalize hierarchies, and magnify inequalities. Seen through this lens, AI can no longer be considered an objective or neutral technology.” Kate Crawford, Author
19:35:35 From V : I’m really sorry, I got to leave, thank you
19:35:52 From Karen Beraldo : Thanks V!
19:35:59 From michele abraham : The concern is 100% that if machine learning is done via internet databases, there will be inherent bias within the machines
19:36:04 From michele abraham : 100% concerning
19:36:21 From michele abraham : And yes data! So true Dean!
19:36:22 From Ryan Carrier :
19:36:37 From michele abraham : thanks Ryan, will check this out!
19:36:40 From michele abraham : Dean, 100%
19:36:41 From Ryan Carrier :
19:36:57 From Mike Nash : Very interesting points. Ryan, what happens when we setup a framework of compliance (possibly legal) and others don’t play the game. What would you suggest?
19:37:52 From Ryan Carrier : Mandate law first, enforcement second – it is our only choice – anti-trust third and criminality fourth
19:38:42 From Lukman Raimi : This is a beautiful conversation. I presented a paper 2 days ago advocating the deployment of PRECISION AGRICULTURE for enhancing FOOD SECURITY in Africa. The raging concern is the safety of the data and privacy, including theft by competitors.
19:38:52 From Simon Swan : Would love to hear some comments on how a tool such as synthetic data could be used to help mitigate biases:
19:39:04 From Ryan Carrier : until data can be aggregated at the 50+ million level, it is meaningless
19:39:45 From Ryan Carrier : and it would be duality of data because data is built with another entity – for example my data made with Amazon or with Spotify
19:39:52 From Mike Nash : Thank you. I guess the best thing is to light the compliance torch and set the standard first to quash most misuse. We may not stop everything, but we can stop most issues.
19:40:13 From Ryan Carrier : +1 Mike
19:40:58 From michele abraham : What do we think about the future of AI as it pertains to metaverse and decentralization?
19:41:22 From Mike Nash : @Ryan Got it, thank you.
19:41:27 From delphine nyaboke : gotta drop off now. Wonderful discussion 😊
19:41:55 From Karen Beraldo : Delphine thank you so much for supporting us!
19:42:25 From : The true challenge for the human race in the age of digital, is to re-discover human consciousness and connectedness in the real world.
19:42:56 From michele abraham : 100%
19:44:37 From michele abraham : yikes
19:48:03 From Divya Dwivedi : Another fascinating session. Thank you all. C u soon.
19:48:39 From Karen Beraldo : Thank you Divya! Yes, talk soon!
19:48:53 From AKveni : Great session! Thank you so much.
19:49:07 From Karen Beraldo : Thanks AKveni
19:49:16 From Ryan Carrier : Have to price it – must create a market and exchange for price discovery, but this will increase the digital divide
19:50:15 From michele abraham : Fascinating!
19:50:29 From Ana Montes : Scary
19:50:38 From Dean Svahnalam – AiBC : very
19:50:44 From Mike Nash : If setting up bias, ethics compliance levels in data or AI algorithms, how would we measure or set the agreed levels of compliance?
19:51:40 From Ryan Carrier : from our perspective we crowdsource those requirements and have the government or regulator approve them… all voices welcome here
19:51:51 From Hema Lakkaraju : thank you all
19:52:02 From Hema Lakkaraju : Have to leave
19:52:02 From michele abraham : Hema, thank you so much for your feedback!
19:52:13 From Hema Lakkaraju : great convo!
19:52:15 From michele abraham : Simon, such great feedback and insights!
19:53:15 From michele abraham : Are these monthly meetings?
19:53:38 From Karen Beraldo : Yes Michele. We have Inclusive Forums every month
19:53:52 From Richard Foster-Fletcher :
19:53:58 From michele abraham : Happy to connect with everyone! Amazing to listen to all the speakers and panel
19:53:59 From Richard Foster-Fletcher : Next one
19:54:05 From Manijeh Motaghy : If any of you are interested to connect with other humans who care about humanity, please join us here
19:54:07 From Ryan Carrier : for connections, but I will ask you if you want to know more about ForHumanity 😀
19:54:20 From Mike Nash : @ryan I suppose being receptive to all ideas and then get accredited. I can imagine that in some cases you exceed government standards?
19:54:37 From Ryan Carrier : yes we do and they either accept or push back
19:54:56 From Dean Svahnalam – AiBC : +1 <3
19:54:58 From Mike Nash : Good points Simon/Ana
19:54:58 From michele abraham : Well said!
19:55:18 From Lukman Raimi : nice conversation!
19:55:45 From Lukman Raimi : Although I raised my hand, I was not recognized. I enjoyed every bit.
19:55:49 From Mike Nash : @Ryan Yes I suppose it is to and fro. Wish we had this in the UK.
19:55:58 From Manijeh Motaghy :
19:56:27 From Mike Nash : Fantastic chat everyone. Thank you everyone. Mike