Speaker 1: Nikita Lukianets, Founder at Open Ethics
Presentation: AI Transparency and Self-Disclosure 101
Nikita Lukianets is the Founder of the Open Ethics initiative, which fosters inclusive dialogue between experts and citizens to design systems where humans and AI successfully work together. Nikita is also Founder and CTO at PocketConfidant AI, a 24/7 coaching technology powered by artificial intelligence. Previously, as a fellow with the SIGNALIFE Ph.D. program in life sciences in France, Nikita worked on supervised and unsupervised learning algorithms for neuronal classification, bridging approaches in neurobiology and statistical learning.
Speaker 2: Matthew Bailey, Founder at AIEthics.World
Presentation: Is Artificial Intelligence the New Guru?
Matthew James Bailey is an internationally recognised maven in the Internet of Things, Innovation, Smart Cities and Artificial Intelligence. His extraordinary leadership is widely acknowledged throughout governments and the private sector. He is a sought-after advisor, consultant and keynote speaker. Bailey has been privileged to meet with famous global leaders such as Steve Wozniak, innovation and technology; Sir David Attenborough, the environment; Professor Stephen Hawking, theoretical physicist and cosmologist; in addition to prime ministers, ministers, under secretaries of G7 Countries, and many more.
Speaker 3: Pamela Michelle Jasper, AI Ethicist
Presentation: Baking in AI ethics at every stage
Pamela M. Jasper, PMP is a global financial services technology leader with over 30 years of experience developing front office capital markets trading and quantitative risk management systems for investment banks and exchanges in NY, Tokyo, London, and Frankfurt. Pamela developed a proprietary Credit Derivative trading system for Deutsche Bank and a quantitative market risk VaR system for Nomura. Pamela is the CEO of Jasper Consulting Inc, a consulting firm through which she provides advisory and audit services for AI Ethics governance. Based on her experience as a software developer, auditor and model risk program manager, Pamela created an AI Ethics governance framework called FAIR – Framework for AI Risk which was presented at the NeurIPS 2020 AI conference.
Speaker 4: Matt Eustace, Head of Content at Aiimi
Presentation: Data Disclosure in the age of AI and Machine Learning
Matt Eustace is an expert at using technology to mitigate risk and solve privacy and compliance challenges, particularly in regulated industries and public sector organisations. Over his 20-year career, Matt has worked with leading global technology vendors and is a founding member of data and AI company Aiimi. Privacy whizz Matt spends his days helping organisations get started with advanced content and data analytics products, targeted at finding and disclosing information in response to requests like FOIs and DSARs.
17:00:15 From yissel contreras to Everyone : Hi Everyone, thanks, Markus, happy to connect! https://www.linkedin.com/in/yisselcontreras/
17:00:54 From Markus Krebsz to Everyone : someone sent me this the other day, great collections on ethical Ai frameworks, given tonight’s theme thought it may be useful: https://blog.einstein.ai/frameworks-tool-kits-principles-and-oaths-oh-my/
17:01:07 From Matthew Bailey to Everyone : https://aiethics.world
17:02:40 From Neeraj Satpall to Everyone : Good to see you all, my work is in the field of intelligent automation and business continuity. Very happy to connect https://www.linkedin.com/in/neeraj-satpall/
17:02:46 From Debby Kruzic – Teckedin.com to Everyone : Hello everyone. My first time.
17:07:01 From Linda Bano to Everyone : If you’d like to know more about who we are and what we do, reach out at firstname.lastname@example.org or visit the website https://lnkd.in/gcYEuPe.
17:10:40 From Richard Foster-Fletcher to Everyone : And key links for your action… MKAI Links: Next month's Inclusive AI Forum on 'the business rationale for responsible AI' – https://mkai.org/event/april-inclusive-forum/ MKAI WhatsApp group – https://chat.whatsapp.com/FPt7DC7jzNg3m47tIvXhMB Follow MKAI on LinkedIn for information and updates – https://www.linkedin.com/company/mkai
17:13:59 From Alex Monaghan to Everyone : I don't think we would all agree that Deep Learning is the end goal. I wouldn't even call it AI in some senses. Ethics regarding unintelligent technologies like hammers and Deep Learning are very different from ethics for intelligent or even autonomous technologies such as GAI, AI soldiers, …
17:14:19 From Alok to Everyone : Hi All, Alok here, stepped into analytics domain recently and pursuing my PGDM Data Analytics in Canada. Interested in AI ML. Would love to connect with you all. https://www.linkedin.com/in/calok64/
17:16:23 From Matthew Bailey to Everyone : Culture is the foundation for AI Ethics and AI itself
17:17:07 From Aleksandra Hadzic to Everyone : Hi everyone! Alex here. I am looking forward to this session! Still working a bit, but I’m listening in the background! I would love to connect with you on LinkedIn, as well: https://www.linkedin.com/in/aleksandra-hadzic-66a4bb136/
17:17:19 From Alok to Everyone : I am looking for some tools & certification on governance
17:17:58 From Dean Svahnalam – AiBC to Everyone : Which kind of tools r u looking for Alok?
17:18:01 From nickmaravich to Everyone : This is such a great gathering! Thank you all for doing this. Can't thank you all enough. What we are all trying to do (having a dialogue around AI) – is so very important! Thanks again!
17:18:23 From Alok to Everyone : Hi Dean, something like Collibra
17:18:52 From Markus Krebsz to Everyone : This is the 10th time this week that “risk” is mentioned in the context of Ai – love the fact ppl are waking up to this (as an old risk guy… 😉 )
17:19:21 From Matt Eustace to Everyone : I’ll mention it in my talk just for you 🙂
17:19:22 From nickmaravich to Everyone : Agreed Markus!
17:20:02 From Neeraj Satpall to Everyone : “Risk comes from not knowing what you are doing!” Warren Buffet
17:20:05 From Dean Svahnalam – AiBC to Everyone : Can u please tell me more @Alok?
17:20:23 From Hamza Lebbar to Everyone : Well said @Neeraj
17:22:40 From nickmaravich to Everyone : Doug Hubbard and Sam Savage – Risk and Uncertainty are not the same yet a lot of folks in IT treat those words as synonyms. This is a great prez!
17:22:48 From Markus Krebsz to Everyone : Can we have some Ai to replace “politicians”?
17:23:43 From Neeraj Satpall to Everyone : Managing risk needs to include people. The biggest risk in unsuccessful AI deployments is leaving people behind.
17:23:44 From nickmaravich to Everyone : This reminds me of BuiltWith.com but for AI and Ethics – a truly fantastic idea!
17:24:06 From Lavina R to Everyone : @Markus … don't say that, South African politicians say it will solve political issues within parties
17:24:20 From Richard Foster-Fletcher to Everyone : Here’s what Nikita is referring to: https://openethics.ai/events/self-disclosure-for-an-ai-product-call-for-companies/
17:24:28 From Karen Beraldo to Everyone : great question Markus!
17:25:17 From nickmaravich to Everyone : This has been fantastic – thanks again
17:25:22 From Markus Krebsz to Everyone : As a "risk guy", I argue ANY organisation should have two key strategic objectives: 1) survival, 2) a long-term sustainable benefit. Benefit for a not-for-profit/NGO would be "for the greater good of all" and for a corporate, "profit" obviously. But key is "long-term sustainability" in my view.
17:25:32 From Mateja K to Everyone : This is a great contribution to the AI ethics area. Very good idea.
17:26:25 From Monika Manolova to Everyone : Thank you Nikita, wonderful presentation. I really like the M&Ms comparison.
17:26:26 From Markus Krebsz to Everyone : Question: What are the three biggest RISKS in Ai Ethics?
17:26:38 From karen rea to Everyone : Would the big companies really adhere to anything that is not sanctioned?
17:26:38 From nickmaravich to Everyone : Great question Markus
17:26:43 From Bridget Greenwood to Everyone : Nikita mentioned they’ll be working with just 5 companies. What do these companies look like ideally?
17:27:50 From Lavina R to Everyone : do risk criteria change based on developed countries versus developing countries? Are they an influencing factor?
17:28:29 From Charlie Pownall to Everyone : Should AI governance be transparent? If so which elements?
17:28:31 From Markus Krebsz to Everyone : So bad data quality leads to Garbage in / Garbage out?
17:28:51 From Matthew Bailey to Everyone : Unethical data in = unethical AI
17:28:58 From Paul Levy to Everyone : have you looked at peer disclosure and community disclosure?
17:29:05 From Ana Montes to Everyone : If I understand, this would be like labels. In addition, would this become more global, incorporated into standards like the ISO standards? The International Organization for Standardization certifies quality around the world.
17:29:13 From nickmaravich to Everyone : Yes, do we have a baseline starting point? Data with algorithm, with or without a human decision maker? Can we score risk at layers – to say: hey, here is the risk in the data, here is the risk in the algorithm, and then here is how the human might choose incorrectly even if the data and algorithm were relatively bias-free – how can we build up from a baseline like this?
17:29:41 From Matthew Bailey to Everyone : Fairness is subjective……..has to be adaptable, dependent on the person and their culture at any moment in time
17:30:19 From Markus Krebsz to Everyone : So the cultural context (of the developer/coder) is ultimately important. Back to #CodedBias
17:30:20 From nickmaravich to Everyone : Yes, love the fact that the suggestion is to define an ethical fabric – it's key
17:30:31 From nickmaravich to Everyone : Or it all becomes too fluid
17:30:31 From Pamela Gupta to Everyone : I agree @Nikita – AI systems require holistic security and privacy but, unlike traditional systems, have additional considerations. I published an AI Security, Privacy, Integrity and Transparency model, AI SPIT, that covers these areas
17:30:45 From Pranil Shinde to Everyone : how do we put controls on self-supervised learning AI machines so that they stick to compliance, ethics and standards?
17:31:41 From Alex Monaghan to Everyone : I’m not convinced that this is qualitatively different from any other complex software. There are plenty of control programs which nobody fully understands – look at Windows for instance – and I’m sure there are similar issues with aeroplane autopilots and other complex systems. Deep Learning doesn’t really change the type (or even scale) of risk. Only when you get beyond Deep Learning is there a qualitative difference.
17:32:53 From Markus Krebsz to Everyone : I am currently working on what I call the KXIs, the Key Performance/Risk/Control Indicators. This comes from financial services originally, but I am trying to adapt it for AI ethics – once shareable, I will share via the MKAI WhatsApp group.
17:33:13 From Tom Allen to Everyone : Hey everyone. Tom Allen from The AI Journal. Great to be here with you all today and thank you for putting this on, MKAI!!!
17:33:34 From Markus Krebsz to Everyone : I love the fact that Humans are becoming MORE important in overseeing Ai (rather than less).
17:33:59 From nickmaravich to Everyone : Alex – I don't disagree, and as it sits today Windows will not do anything on its own. Yet as AI advances it could make decisions for us, which is the big question mark with AI – even if this never happens – but I think this is the difference between AI and anything else we have ever encountered in IT.
17:34:20 From Paul Levy to Everyone : i think we need to distinguish between ethics and ethos here
17:34:45 From Neeraj Satpall to Everyone : It is important for organisations to clearly differentiate between governance for AI programs and more conventional system implementations.
17:34:48 From Lavina R to Everyone : exactly @Paul
17:35:47 From Lavina R to Everyone : think understanding and context is as key to labels in our human understanding as is in AI application
17:35:52 From Odilia Coi to Everyone : Thank you for this interesting presentation. How do you convince companies to disclose their training data sets and source code?
17:36:18 From Markus Krebsz to Everyone : I would argue we need an “Ethics in Ai” taxonomy so everyone is speaking about the same thing without any room for (mis)interpretation. Again, I’ve seen this often in the financial products / risk space where taxonomy became a regulator requirement – how else can you compare/contrast what firms are doing and benchmark implementation?
17:36:46 From Pamela Jasper to Everyone : fully autonomous – the answer depends on the type of use case. if it is a high risk use case, then adding auto embedded agents, and continuous monitoring should be used to the maximum.
17:37:26 From nickmaravich to Everyone : Yeah, think benchmarking will help us be more scientific in this space – to put physics in the domain of AI and Ethics
17:37:48 From Henry Kafeman to Everyone : Computer languages like Ada inherently know range limits for variables and trap at compile and runtime for invalid values. This type of validation and means of improving reliability is needed for AI algorithms.
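Henry's point about Ada-style range checking could be sketched in Python as input validation in front of a model (a minimal illustration; the `BoundedFeature` class and the example range are assumptions, not from any specific library):

```python
from dataclasses import dataclass


@dataclass
class BoundedFeature:
    """Declare an allowed range for one model input, Ada-style."""
    name: str
    lo: float
    hi: float

    def check(self, value: float) -> float:
        # Trap out-of-range values before they reach the model,
        # analogous to Ada's compile-time/runtime range constraints.
        if not (self.lo <= value <= self.hi):
            raise ValueError(f"{self.name}={value} outside [{self.lo}, {self.hi}]")
        return value


age = BoundedFeature("age", 0, 120)
age.check(35)      # passes
# age.check(-5)    # raises ValueError
```

Declaring ranges alongside each feature makes invalid inputs fail loudly at the boundary instead of silently skewing predictions.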
17:37:51 From Odessa Sherreard to Everyone : Thank you for answering my question Nikita, I really appreciate the complexity of standardisation. This is a sector that REAS group is eager to understand as we develop our product, KonnekApp. We also offer 360 degree video content and digital marketing.
17:38:05 From Neeraj Satpall to Everyone : Great stuff!!
17:38:06 From Monika Manolova to Everyone : So happy you mentioned data topology @Nikita Excellent talk & presentation
17:38:08 From Bridget Greenwood to Everyone : Amazing talk and Q&A Nikita – thank you so much. I’m so impressed and keen to learn more
17:38:13 From nickmaravich to Everyone : Back to Nikita’s point about IT does not have a list of Ingredients
17:39:44 From Richard Foster-Fletcher to Everyone : His work can be found at https://openethics.ai/
17:43:15 From IIM Kozhikode | Debayan Pal to Everyone : Why the Indian flag though? @Matthew
17:43:27 From nickmaravich to Everyone : Yes ! Something giving me pause about Deep Learning, Blockchain and now NFTs – can our environment sustain this type of innovation given the energy demands alone? A very thorny question indeed.
17:43:45 From Neeraj Satpall to Everyone : Question – Which is a bigger risk for ethics in AI – privacy and surveillance, bias and discrimination, or the role of human judgment?
17:44:17 From Alex Monaghan to Everyone : @nickmaravich I disagree – I think we already have this problem, but the difference is that the software developers/sellers are clearly responsible at the moment. This changes when companies can start to blame the software rather than the developers/sellers – and start to blame the client’s data rather than their own configuration and controls. There was a case a few years back where iTunes was “storing” people’s music in the cloud, but actually replacing it with more standard versions – for example replacing a rare recording of a jazz trumpeter with the much-better-known commercial track which happened to have exactly the same name/date/etc. This was not AI – this was just a dumb algorithm, with consequences that nobody had thought through. Not sure how the problem was resolved, but that type of problem (deleting someone’s data, “normalising” services and experiences) has been around for a while. Deep Learning doesn’t do anything fundamentally different.
17:45:30 From Nikita Lukianets to Everyone : @Markus Krebsz absolutely agree, we need an AI ethics taxonomy first to speak the same language. We're building an open one, hosted on GitHub, with an approach to transition to ontology relationships and simple metaphors + complete definitions available for everyone to pull: https://openethics.ai/taxonomy/generate/
17:45:35 From Neeraj Satpall to Everyone : Very pertinent @Alex!
17:45:40 From nickmaravich to Everyone : Thank you Alex! Love the fact that we can all have this exact dialogue! 🙂
17:47:02 From Markus Krebsz to Everyone : @Nikita Lukianets – brilliant, thanks for sharing! Will follow-up separately with you via LinkedIn, just have to get through this week…. 😉
17:47:11 From nickmaravich to Everyone : Alex can you message me on LinkedIn? I don’t want to derail but I am not following what you are disagreeing with – not looking to be right – just looking to learn from you! 🙂
17:49:56 From Markus Krebsz to Everyone : Question for Matthew: What role, if any, should the regulators or quasi-regulators play in all of this evolution?
17:52:03 From nickmaravich to Everyone : But does that not go back to us as humans – we all have bias, thus the data coming from us humans – so how might we solve for this? Again, thank you for raising awareness of all of this!
17:53:02 From Markus Krebsz to Everyone : Music in my (risk & regs) ears: governance, compliance, metrics…!
17:53:12 From nickmaravich to Everyone : I see and that makes more sense – sorry for jumping the gun on that one 🙂
17:55:27 From Nikita Lukianets to Everyone : @Odilia Coi, great question, thanks. Actually, we're not asking to disclose the training data sets or algorithms, because these constitute IP. We don't want to undermine anyone's competitive position. Rather we ask to disclose approaches to how the data was labeled, thereby opening doors to understanding potential sources of bias/unfairness + showcasing security practices. Plus this is totally voluntary disclosure. This is the typical reaction of companies until they learn our approach, so we at Open Ethics still have to learn how to describe the disclosure approach so that businesses won't get scared
17:55:35 From Markus Krebsz to Everyone : (Digital) sovereignty is interesting, given that the day-to-day life sovereignty has seemingly been (partially) diminished/diluted with Covid/Lockdown measures etc etc
17:55:52 From nickmaravich to Everyone : Agree Markus
17:55:52 From Markus Krebsz to Everyone : Maybe it can help to (re-)empower ppl where they feel powerless currently
17:56:52 From Pamela Gupta to Everyone : This is Pamela Gupta, I am delighted to be here from AI Ethics World and Advancing Trust in AI. You can join Matthew and me this Saturday as we discuss AI Ethics and Trusted AI at https://us02web.zoom.us/webinar/register/WN_ehJ6nu7nQ0ezp3oe4In4sw. We will discuss the importance of bias in building trustworthy AI.
17:57:56 From Neeraj Satpall to Everyone : Thanks for sharing @Pamela Gupta
17:58:28 From Markus Krebsz to Everyone : I am glad @Matthew is referencing the environmental footprint, as this will become increasingly important (also for individual users who decide what service they subscribe to, or not)
17:58:31 From nickmaravich to Everyone : I really like how you are linking the natural world with AI
17:58:56 From nickmaravich to Everyone : As unfettered growth engines are not sustainable in biological systems
18:00:39 From Markus Krebsz to Everyone : I would argue this needs to be GLOBAL/UNIVERSAL, not just British
18:00:43 From nickmaravich to Everyone : This is really really powerful stuff – thanks again to all – back to the day job have to bolt.
18:00:53 From Henry Kafeman to Everyone : The complete lifecycle definitely needs to be considered front and centre for Environmental impacts in AI as well as more conventional/understood industries.
18:00:54 From Richard Foster-Fletcher to Everyone : Thanks for your contributions Nick
18:01:48 From Nikita Lukianets to Everyone : @Bridget Greenwood I’ve mentioned that during the workshop we’ll work only with 6 companies. Not because we want to filter some in or out but rather because we only can handle 3 breakout rooms with 3 facilitators from Open Ethics. We’ll be following up with others who applied but who didn’t get into this batch separately with the updated “disclosure kit” document.
18:02:00 From Patricia to Everyone : A lot of massacres and colonization have been done in the name of "culture". We have broken workplaces in the name of "our culture"…
18:02:16 From Markus Krebsz to Everyone : Since you mentioned “culture/conduct”, I am throwing my “universal conduct risk paradigm” for the UN into the mix: https://www.unece.org/fileadmin/DAM/trade/wp6/documents/2017/GRMF2F/2017_02_22_1400_Krebsz_UCRP_-_Draft_version_22_Feb_2017.pdf
18:02:55 From Markus Krebsz to Everyone : Thank you, humanity forgets so quick – maybe Ai can remind us of the things that went well in the past
18:04:49 From karen rea to Everyone : The process you describe is the oxygen mask! Thank you for a terrific talk.
18:05:25 From Gerry Copitch to Everyone : Thanks Matthew! Such a joy to listen to an ethicist who values ancient wisdom, especially Aristotle in this context.
18:05:30 From Odilia Coi to Everyone : @Nikita I see, a very delicate point to approach. Thank you so much for answering and for your awesome presentation!
18:06:38 From Ana Montes to Everyone : How much are you and others already incorporating other cultures into the equation?
18:07:10 From Hamza Lebbar to Everyone : AI has no power over the bias in data, this is something we, humans, create. And we should be very thankful for the AI breakthrough because it’s thanks to AI models that we’re capable of uncovering the bias in data that in most cases have been used for years, but hidden. AI is uncovering the bias in data, not creating it. A good control over data would help stop the spread of bias. That’s why I prefer to use the term Ethical data instead of ethical AI.
18:08:03 From Neeraj Satpall to Everyone : Well said @Hamza
18:09:00 From Clive Hudson to Everyone : I believe all of humanity will be “pushed back” on the arrival of superintelligence.
18:11:58 From karen rea to Everyone : Is the solution for the future to teach the children the data/AI risks and benefits from an early age at school, so they grow up with that awareness? We should be passing the baton to them but enabling them as well. We have that responsibility.
18:12:12 From Debby Kruzic – Teckedin.com to Everyone : A lot of food for thought, Matthew, and great conversation. Thank you.
18:12:23 From Anu Toor to Everyone : Magnificent talk Matthew… how you featured the association between ethical AI, the environment and ancient wisdom is interesting. Thank you!!
18:13:08 From Monika Manolova to Everyone : Excellent presentation Matthew, evolving towards a World 3.0
18:13:08 From Matthew James Bailey to Everyone : Thank you everyone. It's my pleasure to be with you. Don't forget to check https://aiethics.world
18:13:12 From Richard Foster-Fletcher to Everyone : MKAI WhatsApp group – https://chat.whatsapp.com/FPt7DC7jzNg3m47tIvXhMB
18:13:51 From Markus Krebsz to Everyone : WhatsApp group is fab, I suggest everyone joins us!
18:13:52 From Errol Finkelstein to Everyone : Please share the Telegram specific groups links
18:14:21 From Markus Krebsz to Everyone : Big thank you to @Vibhav – amazing work!
18:15:43 From Richard Foster-Fletcher to Everyone : Please join me in thanking Matthew
18:16:11 From karen rea to Everyone : Great to see Vibhav in real life! And I agree, Markus; it’s a great group.
18:16:25 From Neeraj Satpall to Everyone : Thanks a bunch Matthew for an amazing presentation!
18:16:44 From karen rea to Everyone : Please join me in thanking Matthew….thank you Matthew. Awesome.
18:16:50 From Neeraj Satpall to Everyone : Great credentials @Pamela Jasper!
18:16:52 From Richard Foster-Fletcher to Everyone : MKAI is on Telegram for ‘focussed discussions’ join us here https://t.me/joinchat/9_YAbNz6WQAyZWJk
18:17:23 From Dwight Nelson to Everyone : Thank you Matthew. Was really interesting to find out about the World 1.0 to World 3.0
18:17:37 From Richard Foster-Fletcher to Everyone : MKAI Channels: WhatsApp = General chat about the AI big issues Signal = Same for those that don’t WhatsApp Telegram = Focussed and themed chats
18:18:34 From Vibhav Mithal to Everyone : The WhatsApp group is for us to engage in general conversation on AI. The Telegram group is a little different where we choose specific themes – and then take the conversation forward on that accordingly. If you want to take a broad approach, join us on WhatsApp. If you have a focus on a specific area – and would like to take a specific approach to AI, join us on Telegram. The choice is yours. :-). We all “converse” on AI!
18:18:56 From Vibhav Mithal to Everyone : Exactly as described by Richard!
18:19:30 From Richard Foster-Fletcher to Everyone : Join us where you like to be 🙂 – WhatsApp – https://chat.whatsapp.com/FPt7DC7jzNg3m47tIvXhMB – Signal – https://signal.group/#CjQKIE5_q5MflReU4SiiI-IBLvJcryh9-m2wBIwRz-__zEWnEhClXL8q_KM70I2BENk76hOy – Telegram – https://t.me/joinchat/9_YAbNz6WQAyZWJk
18:21:02 From Matthew James Bailey to Everyone : People can use the AI Data Ethics framework – it's in the book and offered by AIEthics.World's academy for business, government and innovators….
18:21:54 From Hamza Lebbar to Everyone : Thanks for the great talks. Have to leave!. Would be happy to connect with AI enthusiasts around, I am NLP freelancer: https://www.linkedin.com/in/hamza-lebbar-1951aa173/
18:21:59 From Markus Krebsz to Everyone : This is a trip down memory lane now: I've done a lot of model risk management and even run several model validation courses for the British Bankers Institute.
18:22:25 From Markus Krebsz to Everyone : Still, much more work needs doing and Ai is adding to the complexity of all of this
18:25:29 From Dean Svahnalam – AiBC to Everyone : https://www.linkedin.com/in/dean-svahnalam-519b6189/
18:26:52 From Markus Krebsz to Everyone : I wonder how much of all of this Ai firms actually are doing? (I know banks do!)
18:31:06 From Markus Krebsz to Everyone : 50 Shades of Bias!
18:32:18 From Nikita Lukianets to Everyone : @Markus, according to the 360 survey, 1/3 use at least something… So… very few
18:33:04 From Neeraj Satpall to Everyone : Question – Are highly regulated industries like banks legally on the hook if the algorithms they use end up discriminating against classes of consumers, while a whole lot of big companies manage to get away?
18:33:18 From Pasquale to Everyone : How did you solve the problem of the absence of transparency of Machine Learning Systems in your model theory?
18:34:26 From Markus Krebsz to Everyone : @Neeraj – banks in principle are subject to TCF – treating customers fairly. If they don't (even if it's because of faulty models) – then yes, they ARE on the hook!
18:36:19 From Neeraj Satpall to Everyone : Thanks @Markus, but other big companies should also be mandated to treat customers fairly!
18:36:35 From Markus Krebsz to Everyone : @Neeraj, think of unfair lending decisions, as in #CodedBias
18:37:37 From Markus Krebsz to Everyone : I only fly (semi-autonomous) drones ;-(
18:37:53 From Neeraj Satpall to Everyone : Yes indeed @Markus, as machines learn from data sets they’re fed, there is high likelihood they may replicate many of the banking industry’s past failings!
18:38:03 From Ana Montes to Everyone : It is well documented that banks have shown bias against minorities and have used exploitative practices. How is this fact being used to make sure it stops happening and fairness is brought into the picture?
18:38:28 From Monika Manolova to Everyone : I love the concept of models needing to know who they are as well @Pamela
18:38:47 From Markus Krebsz to Everyone : Conduct risk regulation helps, but frankly a lot more needs doing, both by banks and non-banks
18:39:10 From Neeraj Satpall to Everyone : Yes indeed
18:39:28 From Henry Kafeman to Everyone : Where do the boundaries lie, or need to lie, between bias, discrimination and the distribution of a population? How can this be reconciled with the vast variety of cultural/country/population "norms"?
18:40:16 From Marie to Everyone : Great presentation. Privacy engineers and data privacy officers use these types of models, similar to COBIT, ISACA, and IEEE ones, which can be adapted to AI. GDPR and most privacy laws don't oblige companies to adopt models for data/AI, so it's pretty much up to the companies
18:40:50 From Richard Foster-Fletcher to Everyone : Here’s Pamela – https://jasperconsulting.ai/
18:40:50 From Markus Krebsz to Everyone : @Neeraj, here’s the Global Conduct Risk Paradigm I developed for banks: https://tinyurl.com/GCRP-Krebsz
18:41:12 From Nikita Lukianets to Everyone : Pamela, amazing talk as usual, great balance between technical and popular concepts
18:41:26 From Neeraj Satpall to Everyone : Thanks a bunch @Markus for sharing this!
18:41:39 From Aleksandra Hadzic to Everyone : That was amazing, Pamela! An effective informative speech, I love it, thank you!
18:42:42 From Charlie Pownall to Everyone : Pamela mentioned the AIID, which draws on the AI incident & controversy repository, amongst other sources. Everyone is welcome to use, copy and adapt the repository. http://repository.aiaaic.org
18:42:52 From Odilia Coi to Everyone : Thank you so much Pamela! Brilliant presentation!
18:43:02 From Pasquale to Everyone : @Pamela. How did you solve the problem of the absence of transparency of Machine Learning Systems in your model theory?
18:50:05 From Anu Toor to Everyone : Thank you @Pamela Jasper for sharing your thoughts on AI regulations
18:52:21 From Pamela Jasper to Everyone : Hi Pasquale, great question. So model risk management is a broader framework that provides guidance on what and how orgs need to prepare for and approach the development of models. It does not specify which decisions the org makes. Transparency, like data disclosure, can be addressed in many ways. One item in FAIR is Model Tiers, which evaluates your models and provides a risk-based approach to determining how to approach model risk. So for example if you have three models and one is used to determine loans, it may be ranked a higher risk tier, thus requiring ALL modes of transparency mitigation in the firm's arsenal. A lower-risk model may not require the full list but only one or two. Again the framework is broad, which is why it has stood the test of time in very sophisticated quantitative use cases.
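Pamela's risk-based tiering could be sketched like this (a hypothetical illustration only; the tier names and mitigation list are assumptions, not the actual FAIR framework):

```python
# Hypothetical sketch of a risk-based model-tier lookup: higher-risk
# use cases require more transparency mitigations.
MITIGATIONS = ["data disclosure", "model cards",
               "continuous monitoring", "independent validation"]


def required_mitigations(risk_tier: str) -> list[str]:
    # A high-risk model (e.g. loan decisions) needs every mitigation
    # in the firm's arsenal; lower tiers need only one or two.
    tiers = {
        "high": MITIGATIONS,
        "medium": MITIGATIONS[:2],
        "low": MITIGATIONS[:1],
    }
    return tiers[risk_tier]


required_mitigations("high")  # the full list of mitigations
```

The design point is that the mitigation burden follows the use case's risk tier rather than being uniform across all models.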
18:52:52 From Pamela Jasper to Everyone : Thanks everyone, happy to chat further. Thanks for the feedback! Great talk today!
18:54:21 From Pamela Jasper to Everyone : Hi Charlie, thanks for the repository reference. Yes, there are a few out there now: Germany has Unding.de, and the Algorithmic Justice League with Joy Buolamwini has her CRASH database, which sounds like a pun on the aviation incidents referred to by AIID. When AI fails, is it an incident, or a full-blown CRASH?
18:57:17 From Markus Krebsz to Everyone : Pretty impressive!
18:58:37 From delphine nyaboke to Everyone : thanks Mark
18:58:44 From Richard Boocock to Everyone : Great case study, great presentation, thank you
19:00:04 From Diego Cammarano to Everyone : Thank you everyone for the great presentations and to the organisers for another very interesting event
19:00:04 From Andrew Garrow to Everyone : Thanks all, amazing speakers again tonight 🙂
19:00:05 From Temur Chichua to Everyone : Thank you for interesting evening. What an event.
19:00:12 From Alok to Everyone : Thanks Richard for inviting
19:00:12 From Pamela Jasper to Everyone : Great job Matt, Nikita, Matthew!
19:00:14 From Richard Foster-Fletcher to Everyone : MKAI Links: Next months Inclusive AI Forum on ‘the business rationale for responsible AI’ – https://mkai.org/event/april-inclusive-forum/ Follow MKAI on LinkedIn for information and updates – https://www.linkedin.com/company/mkai All replays from today’s talks and a copy of the chat will be up tomorrow at https://mkai.org/march-inclusive-replays/ Continue the conversation: – WhatsApp – https://chat.whatsapp.com/FPt7DC7jzNg3m47tIvXhMB – Signal – https://signal.group/#CjQKIE5_q5MflReU4SiiI-IBLvJcryh9-m2wBIwRz-__zEWnEhClXL8q_KM70I2BENk76hOy – Telegram – https://t.me/joinchat/9_YAbNz6WQAyZWJk
19:00:24 From Odilia Coi to Everyone : Thank you so much, amazing session!
19:00:30 From Nilam Gupta to Everyone : Thank you so much team
19:00:42 From Markus Krebsz to Everyone : Amazing session, thank you to all the speakers and Jaisal, Vibhav and last but not least Richard & MKAI! See you all on WhatsApp
19:00:45 From Aleksandra Hadzic to Everyone : Thank you so much people, this is amazing!
19:01:00 From Jorge Assis to Everyone : Thank you all the speakers!
19:01:02 From Jaisal Surana to Everyone : Thanks to everyone!
19:01:10 From Nikita Lukianets to Everyone : Thank you a lot to Richard and the whole MKAI team
19:01:16 From Dwight Nelson to Everyone : Thanks again MKAI!
19:01:20 From Eleftherios Jerry Floros to Everyone : Thanks everyone, very interesting. !
19:01:29 From Pamela Jasper to Everyone : Thanks Richard, Jaisal!
19:01:37 From Vibhav Mithal to Everyone : https://www.linkedin.com/in/vibhav-mithal-9199b416/