MKAI Inclusive AI Forum April 2021: The Business Rationale for Ethical AI

Speaker

Ton Wagemans, Partner at Considerati

Presentation: The Business Rationale for Ethical AI 

Ton Wagemans is a Partner at Considerati and a legal, policy and ethics consultant for the digital world, advising clients on how to deal with the legal, policy, ethical and strategic challenges of the information society and digital trust. He has experience in privacy and data protection, cybercrime, e-commerce, intellectual property, online piracy, internet freedom and internet regulation.

 

Speaker

Alex Edmans, Professor of Finance at London Business School

Presentation: The business case for responsibility

Alex Edmans is Professor of Finance at London Business School. Alex has a PhD from MIT as a Fulbright Scholar, and was previously a tenured professor at Wharton and an investment banker at Morgan Stanley. Alex has spoken at the World Economic Forum in Davos, testified in the UK Parliament, and given the TED talk “What to Trust in a Post-Truth World” and the TEDx talk “The Social Responsibility of Business” with a combined 2 million views. He serves as Mercers School Memorial Professor of Business at Gresham College, giving a four-year programme of lectures to the public, and on Royal London Asset Management’s Responsible Investment Advisory Committee. Alex’s book, “Grow the Pie: How Great Companies Deliver Both Purpose and Profit”, was featured in the Financial Times list of Business Books of the Year for 2020. He has been named to Poets and Quants Best 40 Professors Under 40 and Thinkers50 Radar.

Speaker

Merve Hickok, Founder @AIethicist.org

Talk: AI Ethics in Business: Risk Management to Brand Differentiation

Merve Hickok is the founder of AIethicist.org and Lighthouse Career Consulting. She is an independent consultant and trainer focused on capacity building in ethical and responsible AI and the governance of AI systems. Merve is a senior researcher at the Center for AI & Digital Policy; a founding editorial board member of the Springer Nature AI and Ethics journal; one of the 100 Brilliant Women in AI Ethics 2021; a Fellow at the ForHumanity Center; a regional lead for the Women in AI Ethics Collective; and a member of a number of IEEE and IEC working groups that set global standards for autonomous systems. Previously, Merve worked in a number of VP-level HR roles at Bank of America Merrill Lynch.

Chat Transcript:

17:25:45 From  Alex Monaghan : Yes. The main thing about GDPR is the possible fines – up to 4% of global turnover – big money. That has triggered action, but the effects are not yet clear, and refinement is certainly needed.

17:28:32 From  David Wood : Q: Are you saying, Tom, that companies should wait for clear laws to be in place before trying to be ethical?

17:28:34 From  Gerry Copitch : So happy to see you are here Vibhav!

17:29:02 From  Ana Montes : Can an equivalent of a Hippocratic Oath be created for AI that focuses on "first, do no harm"?

17:29:04 From  Harold Huggins : Tom, what is the socioeconomic effect, supported by longitudinal data from the Federal Reserve System, of the US Department of Education's purchase of over 1 trillion dollars in defaulted student loans? These loans were monetized through fraudulent bookkeeping records by minorities, through Gresham's Law: https://en.wikipedia.org/wiki/Gresham%27s_law

17:29:45 From  Alex Monaghan : A question – so what do we think ethics means now? Just abiding by the law, or following some social principles, or more ambitious goals like sustainability, equality, openness, …?

17:30:31 From  Vibhav Mithal : That was a wonderful talk Tom! Absolutely practical. Exceptionally insightful for a person from the legal field. Would be great to follow your work!

17:31:02 From  David Wood : @Ana – the problem with prioritising “do no harm” is that there may be harm from inaction as well as from action. In traditional language, there are sins of omission as well as sins of commission.

17:31:04 From  Marie : Hello Vibhav. Good to see you're having a speedy recovery. Sending good vibes from Brazil.

17:31:12 From  George Raikos : Can being "unethical" be a decision tied to a high risk appetite?

17:33:40 From  Paul Quaiser : AI discussion seems to be centered around large enterprises. Is anyone aware of edge-network AI systems that are oriented toward stakeholders? I am seeing development in this arena from personalized education developers and the gaming industry.

17:34:25 From  Alex Monaghan : That’s a VERY specific question!

17:35:18 From  David Wood : @Alex – I think your question is the key one.

To me, ethics means doing the right thing, even when there are pressures (or appetites) to do things that are expedient, or short-term

17:37:23 From  Angel Salazar : Open to collaboration on inclusive growth agenda enabled by new technologies https://www.linkedin.com/in/angeljsalazar/

17:37:25 From  Odilia Coi : Thank you  very much Tom, very interesting presentation!

17:37:29 From  Deborah Power : It is very important for boards to be involved in the ethics of their companies and to engage with the humanitarian aspects of their corporate social responsibilities.

17:37:36 From  Liam McDermott : Thank you Tom!

17:37:39 From  Vibhav Mithal : THANK YOU TON! Great presentation!

17:37:42 From  Richard Boocock : very interesting Tom, thank you

17:37:43 From  Deborah Power : Thank you Ton

17:37:43 From  Jaisal Surana : Thanks Ton for sharing the great work and client cases.

17:37:44 From  Jitendra Shakya : Thanks Ton

17:37:46 From  Lisa Welchman : Thanks, Ton.

17:37:46 From  Debbie Bandara : Thank you Tom, excellent presentation!

17:37:48 From  Alex Monaghan : David, the trouble is that flies in the face of competitive business – unless consumers vote with their dollars! Amazon for example has ethical issues, but consumers have turned to them more and more, and there is not much sign of Amazon wanting to prioritise ethics over profit. Amazon is not unusual in this respect – it’s just a very obvious example, and a lifeline for many people in lockdown!

17:37:51 From  Ana Montes : Thanks for the work you are doing

17:38:00 From  Jivan Shashikant Suryawanshi : Thank you tom

17:38:01 From  Richard Boocock : Sorry, Ton! (not Tom)

17:38:12 From  Monika Manolova : Thank you Ton, great presentation

17:38:41 From  Karen Beraldo : Harold, maybe you could find help on this matter from https://www.linkedin.com/in/nancyrubin/ – I know that she was involved in a project about student loans

17:38:58 From  Angel Salazar : Spot on questions from Jaisal Surana. Tough but necessary to keep ourselves as a society in check

17:39:16 From  Karen Beraldo : Thank you so much Ton!

17:39:19 From  Angel Salazar : Tom, great presentation!

17:39:20 From  David Wood : @Alex – You’re right, companies can grow their businesses by being unethical.

That doesn’t mean the definition of ethics should change!

17:43:57 From  Hamza Basyouni : M-Pesa has been phenomenal and a true use case for future economic development.

17:46:05 From  Alex Monaghan : I’m more interested in WHY Vodafone did this – for profit, for market share, or just because they are nice guys?

17:46:30 From  Liam McDermott : Can someone please provide a specific example of ethical AI in business? I am failing to see the differentiation between "ethical AI" and simple "data ethics" in its original form.

17:47:37 From  Alex Monaghan : @liam – a common example is racial or ethnic equality in AI apps – in loan outcomes, in security surveillance, or even in hand dryers!

17:48:52 From  Liam McDermott : But that "equality" is derived from the data itself, not the AI… the machine learning models don't affect that

17:51:13 From  Ana Montes : Actively doing good for whom? Who Decides?

17:52:09 From  David Wood : @Liam – Great question. Consider what rules might be put into the algorithms themselves.

For example, a surveillance drone might be coupled with a weapons system. If it detects a known terrorist, it could unilaterally decide to initiate a strike against the presumed terrorist.

Should an AI company provide the military with algorithms to enable that kind of behaviour? That’s an ethical decision for that company.

17:53:42 From  Alex Monaghan : @liam the difference between unethical AI and ethical AI can be whether you say “it’s down to the models” or you say “it’s our responsibility to ensure that the models promote equality”.

17:55:43 From  Liam McDermott : @David – I have heard the argument surrounding facial recognition quite often in a government or military context. I would put your example under this umbrella. My question is strictly in a business context, not life and death.

17:56:20 From  Ben Fraser : Really great points. It’s also worth highlighting that just because the raw data is fundamentally biased, it does not mean our models have to be. There are a huge range of modern techniques we use to counter such bias. The way we preprocess, engineer and model our data has a huge impact on the ethical outcome.
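Ben's point about countering bias at the preprocessing stage can be made concrete with a minimal sketch (the function, groups and data below are hypothetical illustrations, not anything presented at the event). One classic technique is reweighing: give each training example a weight so that group membership and outcome look statistically independent before a model is fit.

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Compute per-example weights w(g, y) = P(g) * P(y) / P(g, y).

    Under-represented (group, label) combinations receive weights
    above 1, over-represented ones below 1, so a weight-aware
    learner sees outcomes as independent of group membership
    (Kamiran & Calders-style reweighing)."""
    n = len(groups)
    group_counts = Counter(groups)              # counts per group g
    label_counts = Counter(labels)              # counts per label y
    joint_counts = Counter(zip(groups, labels))  # counts per (g, y) pair
    return [
        (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy data: group "b" rarely receives the favourable label 1.
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 1, 0, 0]
weights = reweighing_weights(groups, labels)
```

With this toy data the rare pairs, ("a", 0) and ("b", 1), are up-weighted to 1.5 while the common pairs are down-weighted to 0.75, which is exactly the "preprocessing" lever Ben describes: the raw data stays biased, but the model no longer has to be.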

17:56:40 From  Alex Monaghan : I think you might also look at companies which fail – maybe being nice to work for makes you more likely to go bust, and then you don’t figure in the 20-year-old company stats.

17:57:45 From  Liam McDermott : @ Ben How can data itself be biased? It is a simple reflection of what has been collected. AI is just statistical modeling… bias can only be introduced on purpose.

17:57:57 From  Partha Bhattacharyya : Creating optimal value as a purpose

17:58:14 From  Hamza Basyouni : I think transfer learning can help alleviate bias and improve models operationally

17:58:43 From  Alex Monaghan : Many data collection methods are biased. Many data sets are biased. Deliberately or otherwise.

17:59:05 From  Liam McDermott : @ Alex Should we be talking about the collection process then, not AI?

18:00:34 From  David Wood : An example from Hannah Fry. A Google image search for Maths Professor used to show only one picture of a woman in the first twenty hits. Hannah pointed out that, statistically, that was probably an accurate reflection of the actual gender statistics of maths professors. But an ethical decision might be to show more pictures of women, to reflect the kind of statistics we would like to see in society.

(Repeating that same search today gives something like 4 or 5 women in the first twenty pictures, interestingly.)
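David's Hannah Fry example amounts to re-ranking: nudging the top results toward a desired level of representation rather than the raw statistical one. A minimal greedy sketch of the idea (entirely hypothetical – this is not how Google Image Search actually works):

```python
def rerank(ranked, is_target, k, min_target):
    """Return a ranking whose top-k contains at least `min_target`
    items satisfying `is_target`, promoting the best-ranked such
    items from below and otherwise preserving the original order."""
    top = ranked[:k]
    have = sum(1 for item in top if is_target(item))
    if have >= min_target:
        return list(ranked)  # quota already met; no change needed
    # Best-ranked target items sitting just outside the top-k.
    promote = [item for item in ranked[k:] if is_target(item)][:min_target - have]
    # Lowest-ranked non-target items in the top-k make room for them.
    demote = [item for item in reversed(top) if not is_target(item)][:len(promote)]
    new_top = [item for item in top if item not in demote] + promote
    new_rest = [item for item in ranked[k:] if item not in promote] + demote[::-1]
    return new_top + new_rest

# Hypothetical image-search results: "w" items are the under-shown group.
results = ["m1", "m2", "m3", "m4", "w1", "m5", "w2"]
reranked = rerank(results, lambda r: r.startswith("w"), k=4, min_target=2)
# The top four now include both "w1" and "w2".
```

The interesting ethical question is exactly the one raised in the chat: who decides the value of `min_target`, i.e. the representation we would like to see?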

18:01:05 From  Alex Monaghan : @liam Yes, but it’s like water purification – you can’t always go into the geology and take out the impurities before you collect the water, you may need to analyse and remove the impurities after collection. Or you could just do what they did in Flint and say it’s not your problem.

18:01:48 From  Liam McDermott : Hope the AI doesn't catch fire like the water in Flint 🙂

18:02:54 From  David Wood : For a thorough but accessible introduction to the surprising conflicts between different notions of fairness, I strongly recommend “The Alignment Problem: Machine Learning and Human Values” by Brian Christian.

https://www.goodreads.com/book/show/50489349-the-alignment-problem

18:03:44 From  Alex Monaghan : Haha!

18:03:45 From  Liam McDermott : @ Alex Who decides what is “impure”? If you are removing data on the basis of it offending sensibilities, that is not very scientific.

18:04:17 From  Deborah Power : Amazing Alex

18:04:26 From  tonwagemans : Very interesting perspective! Thanks

18:04:30 From  Richard Foster-Fletcher : Questions for Alex Edmans?

18:05:05 From  Vibhav Mithal : Fantastic Alex! Great presentation.

18:05:31 From  Karen Beraldo : Thank you so much Alex!

18:06:07 From  Odilia Coi : Thank you very much for your  contribution Alex!

18:06:12 From  Gerry Copitch : A fascinating and insightful perspective! Thanks Alex!

18:06:16 From  Hamza Basyouni : Thanks enjoyed that Alex !

18:06:32 From  Alex Monaghan : @ liam that’s where it gets tricky of course! Positive discrimination? legislation? market share (there are lots of ethnic minorities)?

18:07:01 From  Janos PC : Questions – system and thinking change?

Social-purpose businesses are not respected within the traditional VC worldview.

So where is the door out of this trap?

18:07:18 From  Liam McDermott : @ Alex I think this is where the real danger lies. Allowing opinion to shape science.

18:08:09 From  Janos PC : Citizen Social Science in the age of the ALPHA GENERATION

https://www.linkedin.com/pulse/citizen-social-science-age-alpha-generation-humanner-/

18:08:25 From  Mus@fliptin.com : @Liam, check COMPAS (a system to predict whether or not a perpetrator was likely to recidivate). It is a good example of a model that could only perpetuate the inherent discrimination and prejudice of humans, irrespective of the biased data.

18:08:42 From  Ana Montes : Thanks Alex great presentation.

18:11:33 From  Ana Montes : Responsibility and impact go together. How can you make sure that the organizations take responsibility for the impact that they create?

18:11:33 From  301054 : There is a big ethics issue in AI systems. I was recently asked a good question by a business: "Why does our system have an ethical discrimination error?"

18:12:30 From  301054 : What I just posted is not a question for the speaker but for everyone in here

18:13:02 From  MBester : I do think the important part of this is to communicate intuitive assumptions and to think while you do AI/analytics in general. Over the years, businesses have been making wrong decisions based on analytics, and now on AI, because there is a lack of communication of expectations, and assumptions are made without anyone realising it.

18:13:48 From  Alex Monaghan : Ethics are complex. If you are a private school, for example, is it ethical to use AI to increase the advantages of your paying, already privileged customers – or would ethics mean you had to disadvantage your customers?

18:13:52 From  Richard Foster-Fletcher : Please join me in thanking Alex for his time 🙂

18:14:11 From  MBester : Thanks so so much. Very nice talk!

18:14:33 From  Liam McDermott : Thank you Alex!

18:14:38 From  Richard Boocock : Thank you Alex!

18:14:48 From  Deborah Power : Thank you Alex  great examples

18:14:48 From  Ratna : Thank you Alex!

18:14:49 From  Ben Fraser : Many thanks Alex, a hugely engaging and interesting presentation!

18:14:50 From  Dr. Chrissann R. Ruehle : Great information! Thank you for sharing your expertise, Alex!

18:14:51 From  Monika Manolova : Thank you Alex

18:14:53 From  Marie : Excellent presentation! But let us be realistic: companies look mainly at market share. Being a good guy translates into increased market share – that's how the business mind is wired (not being judgmental, it's just reality). They (businesses) want us to think otherwise (that they're just being the good guys) so that society can swallow and accept the trade-offs of certain technologies (layoffs etc.) peacefully, and also to position themselves against states'/governments' actions.

18:15:03 From  Oumayma Zeddini : thank you Alex!!

18:15:06 From  Jaisal Surana : Huge thanks to Alex for the insightful talk.

18:15:18 From  Partha Bhattacharyya : Wonderful insights on the practical impact of ethics from Prof Alex

18:16:41 From  Adina Tarry : Both Ton and Alex have been awesome! So many clear and useful  ideas to take away! A BIG THANK YOU TO BOTH!

18:16:53 From  Alex Edmans : Thanks for the challenge Marie. I understand that people look at market share; however, ethics and market share may support each other rather than being in contradiction. I’ve been teaching MBA students for 15 years, and 15 years ago people just wanted the highest salaries. Now things have changed a lot and they genuinely want purposeful careers. Thus, to hire top talent, it’s important to be a company that’s truly committed to purpose

18:17:19 From  Alex Edmans : also, regulation sometimes catches up with companies that aren’t seen as ethical – see, for example, regulators’ treatment of Uber vs. Airbnb

18:17:24 From  Mus@fliptin.com : @Alex, some credit on M-Pesa: it was technically launched by Safaricom, in which Vodafone was a 40% minority shareholder. I have some doubt that their initial purpose was as intentional as depicted; it was much more opportunistic, as they actually pivoted very early.

18:17:40 From  Richard Foster-Fletcher : All the MKAI Links

Join and contribute to the MKAI Conversation at:

– Telegram: https://t.me/joinchat/9_YAbNz6WQAyZWJk

Follow MKAI for info and updates at: 

– LinkedIn: https://www.linkedin.com/company/mkai

– Telegram: https://t.me/MKAI_ORG

Register for the next MKAI event at:

– https://mkai.org/event/may-inclusive-forum/

18:17:52 From  Deborah Power : Hi Vibhav good to see you.

18:20:10 From  Paul Levy : it refers to a “tipping point” where digital data becomes “authority” over human authority

18:20:46 From  Alex Edmans : Alex wrote:

“Ethics are complex. If you are a private school, for example, is it ethical to use AI to increase the advantages of your paying, already privileged customers – or would ethics mean you had to disadvantage your customers?”

This is only my opinion and I appreciate that opinions may differ; political views enter into topics such as this. I'm actually not so concerned with inequality if everyone is better off. If a private school makes their schoolkids better, this doesn't make state school kids worse off. The private school pupils become better business leaders, doctors etc. in the future and the pie grows. If the concern is that there are a limited number of slots at university, then universities should use AI to try to uncover hidden talent that may not be in the grades – just as Billy Beane (general manager of the Oakland Athletics, covered in the book Moneyball) used AI to uncover great baseball players whose ability wasn't in the stats.

18:22:18 From  Johanna Afrodita Zuleta : @Paul fascinating point!

18:22:39 From  Alex Edmans : Ana Montes: Responsibility and impact go together. How can you make sure that the organizations take responsibility for the impact that they create?

Key to this is impact reporting. This is indeed what the Sustainability Accounting Standards Board is aiming to do – come up with a common set of standards governing the non-financial impact that companies should report

18:22:40 From  Liam McDermott : Billy Beane just looked at "on-base %", which was an overlooked statistic. I don't believe he used machine learning.

18:23:15 From  Richard Boocock : I feel the Post Office situation is an absence of ethical leadership in the implementation of a ‘dumb’ IT financial/inventory management system in sub-post offices. Leadership chose to accept that their sub-post office owners were dishonest rather than asking – are we sure??

18:23:50 From  Alex Edmans : Liam: you’re right, I’m using “AI” loosely – but Beane used a much more statistical approach than what people used to use

18:24:36 From  Alex Edmans : not just OBP, but also what matters wasn’t just your batting average but whether your outs were ground balls / flyouts or instead lineouts (the latter are “less bad” ways to get out)

18:24:45 From  Alex Edmans : i.e. he was looking beyond the statistics
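For readers unfamiliar with the Moneyball statistics being discussed: on-base percentage (OBP) credits the walks and hit-by-pitches that batting average ignores, which is why two players with identical averages can have very different value. A quick illustration (the player numbers below are made up, not real statistics):

```python
def batting_average(hits, at_bats):
    """Hits per at-bat; walks don't count at all."""
    return hits / at_bats

def on_base_pct(hits, walks, hbp, at_bats, sac_flies=0):
    """OBP = (H + BB + HBP) / (AB + BB + HBP + SF)."""
    return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

# Two hypothetical players with identical batting averages...
avg_patient = batting_average(30, 100)     # 0.300
avg_hacker = batting_average(30, 100)      # 0.300

# ...but the patient hitter draws 25 walks to the other's 5.
obp_patient = on_base_pct(30, 25, 0, 100)  # 55/125 = 0.440
obp_hacker = on_base_pct(30, 5, 0, 100)    # 35/105 ≈ 0.333
```

This is the sense in which Beane "looked beyond the statistics": the conventional metric rated the two players identically, while the overlooked one did not.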

18:24:58 From  Paul Levy : At present on Facebook you can detag yourself from a picture. This is your right to privacy. But you can also detag in order to correct a digital error (e.g. "that is not me"). I predict that within a few years you won't be able to detag yourself from an image (it will be much harder), if AI has tagged you.

18:25:16 From  Alex Monaghan : @Alex Edmans – yes, my example was a trivial and tendentious one – but I don’t agree that “If a private school makes their schoolkids better, this doesn’t make state school kids worse off.” If there is a finite number of college places, this will impact state school kids. Society as a whole may benefit in the long term, but in the short term there will be an impact on the relatively disadvantaged.

18:25:36 From  Liam McDermott : @ Alex I love the part in the movie when the scouts talk about players having a clean cut good face…good jaw lol

18:25:37 From  301054 : This is an ethics talk

18:25:55 From  David Wood : Q for Merve – Your website lists a large number of different sets of AI Principles that different organisations have proposed.

Do you have a view as to which of these sets are the best?

18:27:16 From  hesam : AI is the water. You can drink it or drown in it!

18:27:50 From  Paul Levy : I humbly disagree. Morality is the water.

18:28:04 From  Liam McDermott : @ hesam The water is made of buzzwords

18:29:04 From  Gerry Copitch : Under the new EU proposals face recognition is prohibited only if it’s conducted in real time. It could still be used on video captured in the past.

18:29:55 From  Liam McDermott : @ gerry If they ban facial recognition, they should ban people's pictures in government databases as well

18:30:15 From  Richard Foster-Fletcher : Clearview AI 🙁

18:30:43 From  Vibhav Mithal : India's pending data protection bill has adopted from the GDPR. What the final version will be is up in the air. So that is relevant.

18:30:50 From  Paul Levy : interesting to know, Gerry

18:31:07 From  Marie : Are other countries likely to adopt EU-like regulations on AI?

18:32:01 From  Johanna Afrodita Zuleta : I arrived late – was there a report shared earlier?

18:32:18 From  Paul Levy : Do we need an international declaration on human rights in our relation to AI, and emerging AI's relation to itself and to us?

18:32:33 From  Frits Bussemaker : @merve, to what extent does culture play a role in determining the "best" principles, as these might differ per culture?

18:32:36 From  Richard Foster-Fletcher : https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence-artificial-intelligence

18:33:41 From  Jaisal Surana : @Merve Do countries also look at regulations and policies for protecting vulnerable groups?

18:33:56 From  David Wood : Regarding the EU proposed restrictions on real-time facial recognition, note that some exceptions are envisioned:

“There are numerous exceptions to this prohibition, including letting police use such systems to find the ‘perpetrator or suspect’ of any criminal act that carries a minimum three-year sentence…”

https://www.theverge.com/2021/4/21/22393785/eu-ai-regulation-proposal-social-credit-ban-biometric-surveillance-exceptions

18:35:11 From  Ana Montes : There are international standards for quality that organizations get certified against – can something similar be done for AI?

https://t.me/joinchat/9_YAbNz6WQAyZWJk

https://chat.whatsapp.com/FPt7DC7jzNg3m47tIvXhMB

18:40:50 From  Liam McDermott : Is it legal in the EU for a police officer to identify someone through a driver's license photo? If so, why would you ban AI facial recognition?


18:43:38 From  Fiona J McEvoy : @Liam because they’re known to introduce bias at scale https://www.pbs.org/independentlens/films/coded-bias/

18:44:07 From  David Wood : @Liam – My view is that AI facial recognition should NOT be banned. However, there have been horrific consequences of cases of misidentification. So it should only be used as ONE of several inputs before any drastic action is taken. (This echoes back to the Post Office case raised earlier by Richard.)

18:44:42 From  Liam McDermott : @ Fiona If someone is wrongly identified, you can just fingerprint them.

18:45:40 From  Liam McDermott : @ David That makes a lot of sense. I agree with you

18:45:41 From  Fiona J McEvoy : @Liam so it's okay to summarily arrest and release innocent people of color?

18:46:08 From  David Wood : @Liam some of the misidentification examples arose after video of a criminal act had been analysed by an AI, in situations when no fingerprints had been left behind.

18:46:30 From  Liam McDermott : @ Fiona Of course not, but the AI isn't racist. It is the data being fed into it

18:46:48 From  Paul Levy : good to see you here @davidwood

18:47:00 From  Liam McDermott : @ David Doubt that will be enough to convict someone

18:47:33 From  Fiona J McEvoy : @Liam, it’s the data and the way these things are built. There’s a lot of good scholarship on this. The fact they’re out in the wild should worry us all

18:48:17 From  David Wood : “Another Arrest, and Jail Time, Due to a Bad Facial Recognition Match” – 3 examples covered in https://www.nytimes.com/2020/12/29/technology/facial-recognition-misidentify-jail.html

18:48:38 From  Liam McDermott : @ Fiona As I tend to trust law enforcement, it makes me think the world will be safer for me with AI in it

18:49:25 From  Liam McDermott : @ David A reflection of our failing justice system, not AI

18:50:28 From  Marie : What do you think are the major barriers to coordinated participation of civil society in AI governance? Do you think global governance is possible?

18:50:30 From  Liam McDermott : @ David There are many people in prison on witness misidentification…nothing new

18:51:04 From  Ron Ozminkowski : Representative data are key, yes, though not sufficient.  Bias can be reduced further by making sure those who train the AI models and those who scale and use the models also represent the populations to whom those models will pertain.  It’s the diversity in their perspectives that can lead to better training and better models, and better use of those models.

18:51:09 From  David Wood : @Liam – I agree.

And there are many examples when facial recognition was used with a positive result: https://www.securityindustry.org/2020/07/16/facial-recognition-success-stories-showcase-positive-use-cases-of-the-technology/

18:51:56 From  Fiona J McEvoy : @Liam that’s you’re decision, but we wouldn’t be having this conversation now if there hadn’t been innumerable instances of supposedly neutral AI making decisions that are skewed, irrational or downright biased. Once blips in the system are scaled it’s difficult to row back. With facial recognition there are numerous issues and vulnerabilities that have led a ton of governments to restrict or ban. They aren’t acting through paranoia or resentment of the police

18:52:01 From  Fiona J McEvoy : *your

18:52:06 From  Pasquale : @Merve Are you saying, then, that the responsibility for a working system lies with the people who compose the training data, as opposed to the people who train or design the system?

18:52:23 From  Mus@fliptin.com : @Liam consent management is also a big concern in the use of facial recognition, and not just in law enforcement – look at facial recognition use cases in retail stores, for example.

18:54:29 From  Gerry Copitch : No other way to express this. Forgive the teenage terminology but You Rock Merve!

18:54:33 From  Ana Montes : Thanks Merve, you are doing wonderful and important work.

18:54:41 From  Liam McDermott : @ Fiona Again, it is the data, not the AI. Machine Learning is just a statistical engine on steroids. You should take a course…you will feel much better. 🙂

18:55:31 From  Fiona J McEvoy : @Liam — thanks for mansplaining. I write and work in AI/data ethics

18:55:32 From  Vibhav Mithal : Thanks Merve for your insights. Very very important discussion.

18:56:04 From  Paul Levy : We also need to build notions of recognition, respect, “snooping”, “space”, into some fundamental norms in early years education. it also starts there.

18:56:13 From  tonwagemans : Thanks Merve!

18:56:20 From  Richard Foster-Fletcher : KEY LINKS

Join and contribute to the MKAI Conversation at:

– Telegram: https://t.me/joinchat/9_YAbNz6WQAyZWJk

Follow MKAI for info and updates at: 

– LinkedIn: https://www.linkedin.com/company/mkai

– Telegram: https://t.me/MKAI_ORG

Register for the next MKAI event at:

– https://mkai.org/event/may-inclusive-forum/

18:56:26 From  Liam McDermott : @ Fiona Didn’t mean to seem sexist…thought you would know better if you were in AI 🙂

18:56:32 From  Karen Beraldo : Thank you so much Merve

18:56:36 From  Mus@fliptin.com : Thanks Merve

18:56:48 From  Ratna : Thanks Merve

18:56:52 From  Richard Boocock : Thank you all! Great event and very thought provoking both in content and chat

18:56:56 From  Diego Cammarano : Thank you everyone for another interesting event. Have a good evening

18:56:59 From  Johanna Afrodita Zuleta : Delighted to be part of the conversation. 

I am a cross-pollinator, building interdisciplinary alliances as a value catalyst, bridging the gap between the worlds of diplomacy, corporate life and arts & culture with a focus on sustainability.

I facilitate sense-making, exploring and growing our understanding of our humanity and what keeps us humane in the digital era. I am interested in the balance between the digital and the analogue, and in how AI is engaging with the humanities.

https://www.linkedin.com/in/johannazuleta/

18:57:01 From  Jaisal Surana : Thanks Merve, I am so touched by your work. I look forward to engaging with it.

18:57:19 From  Chris W : Many thanks to all the speakers. It was a great session.

18:57:27 From  Fiona J McEvoy : @Liam read more

18:57:37 From  Liam McDermott : @ thanks…will do!

18:57:46 From  Partha Bhattacharyya : Thank you Merve for the insightful discussion

18:58:55 From  Paul Levy : happy to continue conversations https://www.linkedin.com/in/paul-levy-1b35853

18:59:16 From  Alex Monaghan : Thanks all!

18:59:20 From  Monika Manolova : Thank you excellent speakers and wonderful MKAI team

18:59:31 From  David Wood : @Fiona – Many thanks for highlighting the video https://www.pbs.org/independentlens/films/coded-bias/

 – I’ll be watching it soon

18:59:37 From  Dr. Chrissann R. Ruehle : Thank you!!  Excellent speakers and really enjoyed this information.

18:59:41 From  Paul Levy : many thanks

18:59:43 From  Ratna : Excellent speakers, thanks MKAI team !

18:59:49 From  Paul Levy : important matters

18:59:59 From  Frits Bussemaker : Good session (as always!)

19:00:05 From  Oumayma Zeddini : thank you everyone, amazing speakers!

19:00:28 From  Dr. Chrissann R. Ruehle : Thank you!!  Excellent speakers and event. Really enjoyed it!!

19:00:29 From  Pinal Patel : Thank you everyone

19:00:44 From  Joseph David : thank you
