MKAI Inclusive AI Forum February 2022: Cultural Representation in Artificial Intelligence

Chat Transcript:

16:59:23 From Odilia Coi : Hello everybody, happy to be here
17:00:14 From J J Bryson : Were we supposed to do “opening statements” and if so for how long?
17:00:29 From Markus Krebsz :
17:04:08 From Vibhav Mithal : Risk and trust – that is an excellent intersection. Your insights would be outstanding Chris! Welcome!
17:04:31 From Chris McClean (he/him) : Thanks Vibhav. 🙂
17:05:31 From Vibhav Mithal : Welcome Cansu!
17:06:07 From Markus Krebsz : Sharing my old “conduct risk & culture” chestnut here that I developed a few years back for the UN…
17:06:29 From Markus Krebsz : Wise choice!
17:08:04 From Vemir : Pleasure to see everyone!
17:08:41 From Vibhav Mithal : Welcome everyone to the MKAI Inclusive Forum! Hi Vemir!
17:08:56 From J J Bryson : Victoria Falls?
17:10:03 From Markus Krebsz : Sehr gut! (Very good!)
17:10:40 From J J Bryson : I forgot to say, I’ve also worked in industry for 7 years — 5 in financial industry in Chicago, a bit more than a year at LEGO, a bit less than a year in computer manufacturing (as an “object oriented reengineering” consultant — it was the 1990s..)
17:11:07 From J J Bryson : All as a programmer, systems architect & systems administrator
17:12:09 From Markus Krebsz : Yes, I missed this a lot!
17:12:42 From Karen Rea : Eeeek! You are too young, Richard, young man!
17:13:10 From Edward Darling : Young from our perspective!
17:13:15 From Axel : Axel Beelen, legal consultant specialised in copyright, data protection, blockchain and AI. Belgian and beer aficionado
17:13:59 From Hentie Stassen : Hi, Hentie Stassen here from South Africa. Not as illustrious as the other participants. Please feel free to connect.
17:14:26 From Evgeniya : Evgeniya Fedoseeva Founder & CEO (Knowledge Management 4.0, inclusive digital ecosystems, innovation, and tech). London LinkedIn:
17:14:39 From Francis Heritage : Francis Heritage, AI consultant with Faculty Science, primarily building tools for UK Defence. Former Naval Officer and Organisation Culture consultant. Whisky connoisseur…
17:14:52 From Faeqa Chowdhury : Hello everyone, Faeqa Chowdhury here – I am an AI legal technologist at ThoughtRiver and heading into Google’s Legal Team next week! Feel free to connect –
17:15:16 From Edward Darling : Edward Darling: designer of The Life Circle and creator of The Life Map addressing UN SDGs
17:15:54 From Programify : Hi, I’m Clive Hudson from Programify. R&D into language based reasoning leading to superintelligence.
17:16:44 From Alex ( : Hi (ciao), Alessandro Migliaccio, founder of the association AiShed ( devoted to research into AI topics in conjunction with Systems Engineering.
17:16:47 From Jennifer Galo : Hi everyone! My name is Jen, and I am pivoting out of education (ESL, analytical and research writing, lit analysis) into data science. I’m American but lived abroad for years until the pandemic, currently in Chicago. I’m here to learn from all of you as I continue to build my portfolio for interviews as I hope to get into ethical AI positions and value-aligned industries. I would love to connect if you’re interested
17:17:08 From Mehwish Arshad : Hi all, Mehwish Arshad here. I am a data scientist working for PA Consulting based in London. I have a PhD in bioengineering.
17:17:54 From Vibhav Mithal : Welcome Clive, Alessandro, Jen, Mehwish to the MKAI Inclusive Forum!
17:18:40 From Vibhav Mithal : Happy to connect with everyone here –
17:18:46 From Quinn (they/them) : Hi, all. Quinn McGee, from Brooklyn, New York. I’m a product manager working on Trust and Safety at Grindr.
17:18:51 From Richard Foster-Fletcher – : Jenn you have a question?
17:19:17 From Sue Turner : Hi all – I’ve converted from business / government relations / communications / charities into AI & data via MSc AI & Data Science. I’m particularly motivated by finding ways for disadvantaged people to benefit from AI. Love to connect
17:19:31 From Andres Leon : Hello all! Andres Leon-Geyer here, professor at 2 universities in Lima, Peru. Head of an XR Lab for scenic arts at the PUCP university and member of the board of XR-Latam, focused there on XR ethics.
17:19:32 From J J Bryson : I used to literally think Americans didn’t have accents and wonder how British kids that were so little could have learnt an accent so fast (these were the singers on Pink Floyd’s “the Wall” I was wondering about, as a child) – this is all confessions of ignorance driven by Patricia.
17:20:20 From Vibhav Mithal : Excellent points Joanna. And great point Patricia. It may not apply during Covid, but the most important thing one could do, if one can, is TRAVEL. Exposure to different cultures is critical. It helps us realise who we are, and understand others
17:20:56 From Syed Mustafa Ali : AI and digitalisation have a racial and colonial genealogical particularity that has laid down the contours of a universality.
17:20:57 From Patricia Shaw : I love that Joanna. Thank you for sharing
17:21:17 From Patricia Shaw : I think accents speak a lot into unconscious biases as well
17:22:36 From Karen Rea : Hello everybody. I am Karen Rea, UK Barrister and Deputy District Judge (civil), specialising as a Barrister in Regulatory Fitness to Practise Law. I have a growing and fast-paced interest in AI and all things related to Law and AI. I co-lead, with Vibhav, the MKAI AI and Law group. We meet monthly. I am also a qualified nurse from long ago, so don’t ask me to resuscitate you! My LinkedIn handle is: So wonderful to see you all here.
17:22:46 From Jenn Huff : Is AI actually breaking cultural norms or is it amplifying existing patterns — patterns that are usually unvoiced or even contradictory to voiced values, but nonetheless exist in the underlying data upon which AI (& ML generally) are based?
17:22:48 From J J Bryson : Well I think we’re achieving consensus. GPAI says we should just run with OECD + UN SDGs, and then almost every nation signed up to the UNESCO statement this past November.
17:23:17 From J J Bryson : @Jenn I would say AI doesn’t do anything, but various people and groups are doing both of those.
17:24:00 From J J Bryson : Twitter is my main hangout
17:24:12 From J J Bryson : But I do have a LI account, I’m just /Bryson there.
17:24:16 From Markus Krebsz : I am looking for a comprehensive Ai RISK TAXONOMY. If anyone is aware of a good one, can you please share the link? Many thanks!
17:24:24 From Jenn Huff : I mean, that is an argument that hammers or domesticated horses, telegraphs, or guns do not “do” anything — AI is a tool, of course
17:25:06 From Syed Mustafa Ali : @Joanna: Without falling into the posthumanist excesses of ANT, I think the claim that AI doesn’t ‘do’ anything is too hasty.
17:25:06 From Chris McClean (he/him) : Markus, I created a tech/AI impact taxonomy (considering risks and benefits) and am happy to share if you’d like.
17:25:42 From Matt Ensor : Hi everyone, I am Matt Ensor from – we use conversational AI for culturally appropriate consultation and engagement, particularly with indigenous communities.
17:26:45 From J J Bryson : Well, depending on your definition of AI, but my definition of intelligence (&therefore AI) is generating action from context, so hammers don’t act and robots do. But I don’t **attribute** responsibility to AI because there’s no way to hold it accountable, so from an ethical perspective I focus on developers and owner/operators & users as actors.
17:27:10 From Syed Mustafa Ali : AI might not be an actor, but I would suggest it is nonetheless a cause.
17:27:17 From Markus Krebsz : The Golden Rule
17:27:18 From J J Bryson : Ha ha I wrote that for Jenn but it works for Syed 🙂
17:27:55 From Francis Heritage : Markus, Harvard/MSFT have an interesting taxonomy:
17:28:22 From J J Bryson : Agree that AI is an agent / actor, but believe justice has to be between peers or else it’s not meaningful, which of course will lead into the discussion of colonialism.
17:28:33 From J J Bryson : And we can’t be peers with something we author entirely.
17:28:58 From Jenn Huff : Hi, this is me, btw! and you can also find me @jenn_huff on twitter
17:29:21 From : Great link!
17:29:25 From Syed Mustafa Ali : @Joanna: yet let us remind ourselves that correlation is not causation, as well as of the possibility of causal over-determination alongside the rhetorical force of certain tech assemblages…
17:30:01 From Evgeniya : People fear new things in general, and AI education is hugely important. Data trust is key to building AI trust. AI works with data, and the quality of the data input will affect AI outputs. AI bias is still a big problem. I witnessed a GPT-3 experiment live a few months ago, where internet data was used to illustrate how AI outputs can be misleading. The conclusion was: do not trust everything GPT-3 says before validating the data sources. Imagine AI in the recruitment or justice sectors?
17:30:42 From Syed Mustafa Ali : It’s bigger than data. It’s bigger than bias.
17:30:54 From : I spoke yesterday about the Digital Ostracism issue related to AI Ethics.
17:31:06 From Syed Mustafa Ali : It’s about world-(re)making.
17:31:12 From Andres Leon : When we say “more than anywhere in the world” or “most of the world”, which criteria are we using to compare with the world? Who makes the measuring criteria, where does it come from, and which parts of the world / groups in the world is it looking at?
17:31:18 From Jenn Huff : I think it is tricky when we get to anthropomorphizing technology — “what technology wants”, “technology will solve climate change/destroy us all/ etc” — BTW, I am an archaeologist _and_ a technologist, so certainly think about technology and how it works iteratively with culture, human evolution quite a bit
17:31:20 From J J Bryson : WRT Chris’ thing about French desire to have AI reflect own desires, that is absolutely the way I think AI ethics should go. I’m terrified of some of my colleagues who think we should somehow mine what our culture’s values are then program it into AI and have that AI constrain us to those ethics. Unfortunately, that’s what some mean by “value aligned design”. IMO ethics is a part of a living society and needs to stay agile. It can never be perfect so should always be allowed to improve.
17:32:15 From Vibhav Mithal to Richard Foster-Fletcher – (Direct Message) : Please come to me last. I do not want to take anyone else’s time. Leave the decision with Jaisal and you.
17:33:02 From Evgeniya : @Syed Mustafa: I am referring to the outcomes of human behaviours through technology. It is not a GPT-3 problem for sure.
17:33:10 From J J Bryson : [speaking of LI] if you want to actually link, not just follow me, please say how we “met” here & give me a little background in the note, because I’m hopeless with both names & faces, so I really do try to use my LI network to remind me of people I remember having met.
17:35:09 From Jenn Huff : There are roughly an infinite number of definitions of “culture” — anthropologists (who are not the only experts on the subject!!) can’t even agree — so all definitions are valid, but we probably need to say them out loud and agree on them in a conversation, instead of assuming that we are sharing a definition
17:35:21 From J J Bryson : Could I ask Mafunase a question about rights from her perspective?
17:35:31 From Richard Foster-Fletcher – : Yes please
17:35:35 From Cansu Canca : Happy to connect:
17:36:32 From Chris McClean (he/him) : Likewise, always happy to continue these conversations:
17:37:13 From J J Bryson : I thought it mostly emerged from the continent’s recent history of war crimes TBH
17:37:33 From : there is a problem in thinking that a generation can remake the world, as if everything before didn’t work. More and more democracies in the Western world are becoming illiberal and technocratic. Dissenting political views are increasingly not tolerated and are censored. This is leading to deplatformization and, in practice, a digital ostracism from modern society, emulating medieval practices.
17:37:35 From J J Bryson : history/experience
17:38:33 From J J Bryson : @renato I wouldn’t say that of every era; there have been oscillations in each direction in the last 80 years.
17:40:06 From J J Bryson : @Mafunase this relates to my question about rights. Some people are saying now that “ethics” being socially specific is a means by which corporations avoid regulation, whereas we ought to be grounding our AI governance in *rights* which have been negotiated at the UN and are now grounded in law. I was particularly wondering about Mafunase’s view on that but of course everyone’s.
17:40:21 From Andres Leon : A very relevant point, which Animesh mentioned. The difference in associations with concepts is central.
17:40:46 From Atef Ahmed : Happy to connect with you.
17:40:48 From istvan : Would AI not entrench these differences as it stands (facebook echo chambers, etc)?
17:41:00 From J J Bryson : I’d love to see what the dystopia / utopia ratio is across science fictions.
17:41:15 From Karen Rea : “Concreting” so-called typical ethical values, by whomsoever is given that remit in any one country, any one corporation or any one government, is the way to inbuilt bias and intransigence, as well as a stasis that would resist any iteration of ideas, ideals and agility founded in regulations and statutes. Flexibility and agility are the key words to making AI (in itself moving faster than light!) as future-proof as possible.
17:41:30 From Richard B : I feel it is very dangerous to make sweeping generalisations about ‘culture’ at the level of the nation state. I feel that culture and AI will be an ongoing discussion/debate and dialogue that is unlikely to arrive at an “answer”….
17:41:35 From : this phenomenon is from recent years; before, we didn’t have the digital platforms that concentrate so much power and influence over public opinion and over part of our digital lives.
17:43:05 From J J Bryson : Polarisation can be thought of as factionation of society, at least on behalf of those who feel polarised and choose to focus on a smaller than fully-inclusive national or transnational identity.
17:43:09 From : As I said yesterday, governments should not be allowed to concentrate so much power, nor to coerce people into capitulating to their will in order to avoid public demonstrations of dissent.
17:43:52 From J J Bryson : Another one of my colleagues from the Middle East, Danit Gal, does a great talk contrasting Chinese, Japanese & South Korean perspectives.
17:44:02 From : It is a contrast: the West criticises theocracies and does the exact same thing
17:44:07 From Andres Leon : There are always differences and similarities in human forms of thought due to cultural contexts. Problems arise, I believe, if we lose sight of the fact that there are both, and only focus on one of them.
17:44:13 From J J Bryson : I still have my intervention ready too 🙂
17:44:51 From : in order to lead, the Western countries should set the example in democratic behaviour
17:45:06 From Markus Krebsz : @Francis, great link – thank you! @Chris, yes pls, I would be interested
17:45:15 From Joe Fitzgerald : Love Jaisal’s question, dovetailing: The Radio was long a tool + cultural symbol of colonial oppression in Algeria during French occupation. Then a rapid cultural shift in the late 1950s transformed the technology into a mechanism for the liberation movement. (Frantz Fanon, A Dying Colonialism). Could an analogous shift in cultural perceptions of AI allow a similar transformation to limit control of values imposed by ‘Western’ society? Or are existing AI systems too complex to be re-appropriated, and must AI be redesigned from the ground up by and for real-world communities?
17:46:00 From : AI systems still do not capture the full complexity of our world
17:46:10 From Richard B : My experience of working in many countries is that there is little faith in the wisdom of so-called western liberal democracies, just sayin’!!
17:46:10 From Alex Monaghan : Sorry I missed the start, maybe this was raised – I am wondering if we feel that cultural differences will be strengthened or weakened by AI, and whether that is a good or bad thing. There is very little representation of different cultures in futuristic visions, for instance – there seems to be an assumption that globalisation will accelerate – do we agree? Do we think that this tendency should be fought?
17:46:33 From Shashwat Tripathi : Cultures affect products. Take the example of McDonald’s, which uses ham in the USA while it is replaced by potatoes in India. So we are all affected by local people; we won’t build something that is not used and waste our resources.
17:46:57 From : we are living through a paradigm shift now, and the situation has yet to reach a peak of cognitive dissonance between generations
17:47:03 From Markus Krebsz : I am currently working on a draft UN recommendation for Ai / NeuroTechnology / Robotics (AiNR-Tech) and as part of this shared my presentation to the UN this morning on the MKAI WhatsApp group. You can reach/connect with me here:
17:47:48 From Syed Mustafa Ali : @Joe: good question. Is the coloniality of AI contingent or necessary, at least in relation to large scale, data-expansionist ML/DL?
17:48:15 From Patricia Shaw : is an example of AI which has been developed with a particular cultural context in mind
17:48:16 From Hentie Stassen : Would culture affect something like the different recommendations that you provide through a recommender? The reverse would be that you might be able to ascertain and derive which ethnicity the subject associates with – which is definitely not an ethical use
17:48:21 From : I think it is dangerous for people to normalize the idea that those in the “dissent” group may be ostracized and must bear the consequences of standing by their opinion in a medieval way, as deplatformization can mean being left without the means to survive
17:48:28 From Matt Ensor : I work in Pacific and Māori cultures, and the face-to-face bias for all communication, which is more respectful than transactional, means that we have had to rethink things like how we design websites and interfaces.
17:48:33 From Chris McClean (he/him) : I just came across a great philosophical perspective on sovereignty from Judith Butler. She describes sovereignty and governmentality as tactical (a means rather than an end). So it’s one thing to talk about the similarities and differences between value frameworks of different cultures, but how governments use these values tactically to enforce laws and policies seems to be an important aspect of the conversation too.
17:49:45 From Jenn Huff : well, cappuccino and whatnot everywhere in the US is sending ideas and resources ‘back to the metropole’ — the colonial center… so we do expect globalism and colonialism to affect the more powerful side of the equation
17:50:05 From Andres Leon : I am sad I have to leave, the talk is fascinating. Thanks all & MKAI again for these worthwhile discussions and interesting topics.
17:50:21 From Syed Mustafa Ali : If the postcolonial world bears the systemic /structural legacy effects of colonialism, then in contributing to maintaining that order, it is already colonial.
17:50:21 From Shashwat Tripathi : Well, about the cognitive behaviour shift: we are living on social media. It is changing people; Instagram and TikTok reels have changed women’s physical behaviour. Capitol Hill is an example of large-scale manipulation of the mind, and the Delhi riots likewise.
17:50:33 From Shashwat Tripathi : We must be careful with what we draft.
17:50:53 From Evgeniya : @Shashwat: a great point re cultural product differentiators. “‘TeslaMic’ karaoke sets sell out in under an hour in China”
17:51:05 From Syed Mustafa Ali : AI, that is. It does not need to do any additional extractivist work to be colonial.
17:51:10 From Brent Zuber – Calgary, AB : If I understand the spectrum correctly – at one end is “Individual Identity” and at the other ultimate end is a “collective (average) human Identity” (and a digital AI equivalent for each). Then “culture” is a small collection of individuals’ beliefs. Is there a Venn diagram, therefore, of semi-overlapping cultures? Therefore an almost infinite set of pluralistic AI instances?
17:51:21 From : cultural shift comes from paradigm shift
17:51:33 From Markus Krebsz : Please join MKAI, Paul Levy and me for our forthcoming 2-part “Drones & Ai” course: It’s FREE, should be fun and NO coding or piloting skills required. 😉
17:51:35 From : but the problem is the tribalization
17:51:54 From : the view of the other as not good if not part of your own tribe
17:52:09 From Alex ( : Sociology, sociology… what about the link with AI development?
17:52:32 From : thus, if the other is different, people normalize the idea of punishing those who are the “other”
17:52:47 From Patricia Shaw : I hope to talk about the Māori principles today
17:53:20 From Patricia Shaw : @matt ensor
17:54:10 From : AI has a direct impact on sociology, since AI can be used to identify and classify people into different categories
17:54:58 From Jenn Huff : JJ Bryson is totally right about the cultural specificity of ethics… but there are also a lot of overlaps from culture to culture because cultures are loosely (amongst other things) a set of rules about how to get along together in a group
17:55:16 From Shashwat Tripathi : Someone asked what AI has to do with sociology; it is evident that social media is part of your life and it evidently affects your way of thinking. I guess you have your answer.
17:55:33 From : we should not underestimate the impact of technology
17:55:48 From : technology reshapes human behavior
17:56:36 From Richard B : Totally agree with cultural specificity of ethics – I have lived and breathed it
17:57:10 From : the ideal should be Humans driving change in technology
17:57:11 From Edward Darling : The Human Rights for states are fundamental but there needs to be balancing Human Responsibilities for citizens
17:57:23 From : Correct
17:57:28 From Cansu Canca : Mafunase said that human rights conflict with culture, no? Can we hear an example – I think it would be useful to think about it more concretely
17:57:40 From : There must be oversight over Government policies
17:57:55 From Jenn Huff : @Brent — I think we should think about status as overlapping layers, not opposite sides of a spectrum… you are a professional, a parent, a child, whatever the different individual and collective identities you have — but I agree that we in the western intellectual world — definitely in the US — tend to construct individualism as the opposite of “collectivism” or shared mutual obligations to our communities
17:58:13 From Matt Ensor : @patricia – thanks for raising Māori data sovereignty, data is treated like land.
17:58:15 From : Culture can conflict with Human Rights when the culture becomes toxic in the sense of separating people into tribes
17:58:20 From J J Bryson : @Jenn — yes, some things help with almost any society, e.g. prohibitions on murder, yet really fundamental parts of moral agency like how you choose or are assigned a life partner, when you are an adult, can consent to sex, can fight in a war — even these vary by society. Other things people feel are deeply ethical e.g. which parts of your body you cover in public are basically identity indicators, but identity also is part of the means by which societies sustain themselves. Security is an essential part of sociality.
17:58:50 From Alex Monaghan : Just looking around the world today, perhaps today specifically, human rights don’t seem to be universally practised or enforceable. Even the most “western” societies do not respect cultural rights. The Māori are an interesting case, but Canada has just had huge scandals with its indigenous people, the UK is very poor at respecting minority indigenous cultures, and pretty much all of Europe struggles with the rights of gypsies, of more recent migrants, and even of asylum-seekers.
17:59:17 From Markus Krebsz : In addition to what Patricia is just saying, there’s a greater recognition now that our “thoughts / thinking / brain activity” should be protected also. Chile has recently adopted the world’s 1st neuro-rights and I would expect to see more along those lines. As a German philosopher once said: “Die Gedanken sind frei.” (the thoughts are free). I would add then: “Die Gedanken sind frei und mein.” (the thoughts are free and my thoughts belong to me!)
17:59:20 From Evgeniya : Through my work I observe that technology changes human habits/simplifies operational segments of their lives rather than behaviors. Behaviors are deeper – at the motivation level.
17:59:35 From Evgeniya : Currently reading this book and highly recommend: “The Reasonable Robot. Artificial Intelligence and the Law.” by Ryan Abbott 📚 Chapter 6 – Punishing AI
17:59:36 From Markus Krebsz :
17:59:46 From Joe Fitzgerald : Great demo of the impossibility of ONE language for ethics: “Tikanga, kawa (protocols) and mātauranga (knowledge) shall underpin the protection, access and use of Māori data.” – thank you for sharing Patricia
17:59:59 From : awareness at the personal level is not enough when societal policies are in place that censor and limit the participation of the individual
18:00:22 From Matt Ensor : We have built our AI from the basis of Māori data sovereignty, so we need to identify any Māori data created, and then make sure it is protected from use not in the interest of Māori.
18:00:23 From J J Bryson : It’s not clear to what extent thoughts are free, if you haven’t learnt concepts it’s harder to understand and resolve some things. Also we are so essentially social that we are heavily influenced by the behaviour of those around us even when we are not conscious of it.
18:00:40 From Markus Krebsz : Great talk earlier where it was suggested that TikTok may be weaponised in a rising cyberwar
18:00:46 From Shashwat Tripathi : Machines today are capable of hurting you emotionally. Targeted branding is an example of emotional exploitation of the subject.
18:00:51 From : yes, there is peer pressure
18:01:02 From : bandwagon effect as well
18:01:08 From Joe Fitzgerald : Patricia’s link again for convenience:
18:01:15 From Richard B : Tiktok is the modern version of leaflet drops over countries in WW2
18:01:38 From Evgeniya : Thanks, Joe – was looking for that link!
18:01:44 From Markus Krebsz : Facebook years ago researched what people are “thinking” by collecting data specifically about posts composed and then deleted before posting. So BEFORE pressing the send button. A way to “read people’s minds”!
18:01:45 From Richard B : Tiktok is therefore a vector for disinformation in any conflict
18:02:00 From Markus Krebsz : Vector for disinformation & attack
18:02:02 From : Yes, disinformation can happen
18:02:16 From Shashwat Tripathi : I agree with @Richard
18:02:17 From : one can fight disinformation with good and trustworthy information
18:02:22 From J J Bryson to Richard Foster-Fletcher – (Direct Message) : Don’t forget that my question was aimed at Mafunase first & Animesh second, even though Patricia chose to field it & she hadn’t spoken for a while so fair enough, but you might circle back to the others on the rights question. I would sincerely like to hear Mafunase but I gather her culture includes not pushing forward for turn taking.
18:02:33 From : a bad idea can be tackled with a better one
18:02:36 From Markus Krebsz : If disinformation is used for making (misinformed) decisions and/or influencing what people are doing (or not doing)
18:02:54 From Richard B : ‘bad’ ‘good’ – very subjective
18:03:06 From Peter Scott : Agree with Richard B, and it goes further in that the recommendation algorithm makes it like a different leaflet custom-written for each person that picks one up.
18:03:07 From : the problem is to think that “centralization” is the answer
18:03:18 From Shashwat Tripathi : We have adversarial search, which may be applied to tackle some fake news, but it is resource-hungry and infeasible
18:03:21 From : or impose the will in a top down manner
18:03:40 From Richard Foster-Fletcher – to J J Bryson (Direct Message) : Yes I was just saying this to Jaisal, she is staying quiet and we need to get more from her
18:03:45 From Richard Foster-Fletcher – to J J Bryson (Direct Message) : Thanks for noticing
18:04:18 From J J Bryson : Back to simple definitions, I tend to think of “good” as contributing to a public good — for some public. “Bad” is destructive behaviour, which can sometimes be useful in competitive circumstances. But obviously in both cases, useful to whom, goods for which public.
18:04:22 From Chris McClean (he/him) :
18:04:31 From Markus Krebsz : The general public is likely not undertaking “adversarial search” in an e-slavement environment. Challenge of viewpoints is tiring whereas pure consumption is what most will prefer.
18:04:36 From : “who” defines what is good or bad?
18:04:50 From Guillaume CLAMART-MÉZERAY : Aye @Evgeniya about AI education; algorithmic bias must be explained to avoid much confusion. We need mediation to create confidence, and education to improve user experience.
18:05:19 From Shashwat Tripathi : An intelligent agent developed to have access to information and powerful reasoning may be a good solution, but who will decide what is fact and what is fiction?
18:05:28 From : if politicians become despotic, one would not agree with their definition
18:05:39 From Markus Krebsz : Looking forward to the mighty Paul L.’s point!
18:05:52 From J J Bryson : Definitions exist in usage; there are multiple definitions available and there’s no right one (this is also cultural), but where there are multiple available it is helpful to specify which one you are using right now. Also, I was just trying to communicate some concepts that don’t commonly get labelled at all but are close to some ideas of good & bad.
18:06:46 From Shashwat Tripathi : I disagree with the human rights framework’s claim that rights are universal; they are conditional.
18:07:29 From Joe Fitzgerald : As JJ alluded to earlier, Silicon Valley corps and neocolonialist nations have watered down capital H-R Human Rights language so effectively it functions as a rubber stamp / whitewashing mechanism for the very powerful
18:08:06 From Cansu Canca : I think even transparency is not universally agreed upon 🙂
18:08:25 From : anytime a government decides that basic human rights can be “suspended” and the police start to act in a completely autonomous way in order to “stop” dissent, things can go very wrong.
18:08:36 From J J Bryson : Actually I said that about ethics, I said some were suggesting that HR, rooted in international rights, is LESS easy to fudge.
18:08:37 From Chris McClean (he/him) : Agreed, @cansu
18:08:40 From Richard B : Transparency as a word is not universally agreed on…..
18:09:04 From Patricia Shaw : Interesting how a discussion on culture can easily be transformed into a discussion on asymmetries of power, where the guardrail is self-determination and the ability to control. But “empowerment”, especially of individuals, can easily be used through power asymmetries as a scapegoat for taking responsibility, and as a further way to misuse and disempower
18:09:09 From Richard B : I have found different cultures perceive transparency in very different ways
18:09:24 From Shashwat Tripathi : I don’t care about dissent; whatever you get is conditional and it deters exploitation.
18:09:27 From Richard B : In the USA transparency means clear and visible
18:09:33 From Gerry Copitch to Richard Foster-Fletcher – (Direct Message) : Ansgar Koene….. Heavyweight!! is in the audience 🙂
18:09:41 From Markus Krebsz : I read the UN HR over the weekend. And then put this into the perspective of our current / modern life, realising how HR breaches occur every day in everyday transactions, sadly.
18:09:43 From Richard B : In other countries transparency means invisible….
18:09:53 From Richard Foster-Fletcher – : Join your peers and friends in the ongoing MKAI conversation on Telegram here Everyone welcome and we will be very pleased to see you there
18:10:29 From : all liberal democracies have opposition
18:11:13 From : dissent is just a natural phenomenon
18:11:20 From Richard Foster-Fletcher – : Join us on Telegram 🙂
18:11:44 From Markus Krebsz : Digital currencies increase “Transparency”, in particular for govts – who love fully transparent citizens for them to tax every transaction etc.
18:11:52 From Cansu Canca : To be clear, I can think of many examples but I am curious about the issues that Mafunase is talking about
18:12:10 From : a “consensus” by fear is not democracy
18:12:36 From Chris McClean (he/him) : We could spend another several hours on “bias.” It’s impossible to remove bias from technology because all technology reflects the values of the creator. When humans decided to create a hammer, we decided that increasing the percussive force of our hands is worth cutting down trees and extracting minerals from the earth.
18:13:05 From Patricia Shaw : Culture is also embedded in our laws and these too can reinforce pre-existing and historic power asymmetries
18:14:04 From Shashwat Tripathi : Please share your telegram channel again
18:14:16 From Markus Krebsz : Join us on Telegram
18:14:20 From Vibhav Mithal : Join us on Telegram
18:14:37 From Richard Foster-Fletcher – :
18:14:57 From Jenn Huff to Richard Foster-Fletcher – Message) : I would be really interested in the AI – Risk cohort!
18:15:04 From Joe Fitzgerald : This is a great group of people, thanks @Cansu for turning me onto this event. I have to leave now, hope everyone stays healthy & happy, I hope to join again soon.
18:15:24 From Markus Krebsz : Thank you for joining, @Joe!
18:15:50 From Edward Darling : Having to leave, thank you panel for sharing your thoughts, experience and wisdom
18:15:59 From Chris McClean (he/him) : Instead of focusing on AI risk, I would suggest starting with AI impacts, which can be positive or negative. For example, AI can increase or decrease fair access to health care, it can increase or decrease fairness in criminal justice, etc.
18:16:15 From Cansu Canca : @Chris – agreed!
18:16:33 From : the problem is not the power asymmetry per se, it is negating the other’s possibility of autonomy
18:16:59 From Evgeniya to Richard Foster-Fletcher – Message) : Hi Richard, following-up. A great initiative and I am interested. I am a huge supporter of tech ethics and inclusive access to tech/AI education:
18:17:00 From : power asymmetry is natural to some extent
18:17:24 From Vibhav Mithal : Hi Chris – Thanks for your comments. I think the conversation will be both risk and impacts. Risk assessment will help us act before the AI enters the market, whereas impact will help us deal with situations after the fact. Of course, risk identification would also help anticipate impacts.
18:17:44 From Karen Rea : Proportionality, Trish!!
18:17:45 From Shashwat Tripathi : No, computers were developed by DARPA, and they are a source of high-tech defense. So AI will create more polarization in the world.
18:19:09 From Richard B : I don’t recall Turing working for DARPA……
18:19:12 From J J Bryson : @Renato this is more or less what I said in my intervention. We can’t expect perfect power symmetry, but the closer to symmetry the closer to justice, but then also the more difficult coordination so we need to make efforts to find the good compromises within our countries. But between countries, it is essential that interactions empower both sides and empower the weaker at least as much as the stronger, or else we are into the arena of exploitation.
18:19:17 From Shashwat Tripathi : Or to say more asymmetrical power distribution
18:19:21 From Chris McClean (he/him) : Thanks @vibhav. I understand that approach, and I think it’s valuable. But I’ve been around risk management for about 20 years, and I think there’s a potential that starting there will limit the discussion on AI benefits (which is necessary to determine whether or not AI should be pursued to begin with).
18:19:28 From Shashwat Tripathi : Turing didn’t make computers
18:19:40 From Shashwat Tripathi : Please stick to the facts
18:19:55 From J J Bryson : @RichardB Turing worked for Bell Labs which was R&D for AT&T which was a monopoly that helped the US substantially in WWII
18:19:56 From : The problem is when the stronger wants to enforce their will on the week
18:20:08 From : on the weak*
18:20:17 From Richard B : Bletchley part of Bell Labs?
18:20:41 From Peter Scott : Turing visited the US after WWII when Bletchley was dissolved.
18:20:47 From Richard B : after…..
18:21:05 From : who develops policies must be held accountable
18:21:12 From J J Bryson : He did his PhD at Princeton before the war, and came back to Bell Labs for a while.
18:21:24 From J J Bryson : No, not his PhD, it must have been a postdoc.
18:21:28 From Richard B : you can make the same argument for Teflon…
18:21:32 From Gerry Copitch : Special ‘Family MKAI’ welcomes to Tristi Tanaka and Ansgar Koene. Great to have you with us!
18:22:09 From J J Bryson : I think he was at Kings Cambridge for UG or PhD? Anyway, he has like the shortest ever accepted letter of application to Princeton, you can find it with a Web search.
18:22:24 From Shashwat Tripathi : Turing is called the father of the computer because he designed logical computing machines which were developed into physical computers.
18:22:45 From Richard B : I think Abraham Darby was an early conflicted tech entrepreneur……my point, this is not a new phenomenon…..
18:23:16 From istvan : yes
18:23:50 From Wade : @Renato: I would apply the same logic. Whose comments are toxic and part of a tribe, so whose opinion of their right or tribe becomes toxic…?
18:23:59 From Richard B : you can attribute any technology to the military machine, because the fact is that almost all technologies are utilised by the military machine, whether ISO containers or Xbox controllers
18:24:16 From Shashwat Tripathi : Korean
18:24:19 From Vibhav Mithal : Hi everyone – Come and join us in the AI conversation on Telegram –
18:25:15 From Richard B : antitrust is useful when a company in a foreign country assumes great market power too, so the nation state can use antitrust to shore up their own monopoly
18:26:44 From : the book Upscale shows that
18:27:02 From : digital colonialism by foreign multinational companies
18:27:19 From Karen Rea : Cuckoo clocks, Joanna!
18:27:27 From : with price subsidies in order to conquer markets
18:27:58 From Richard B : My circle of friends that have had hips and knees replaced using Swiss tech are not that concerned about monopoly….
18:29:38 From Markus Krebsz : How to Lie with Statistics, Darrell Huff, 1954 – a classic worth reading!
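The central trick in Huff's book, quoting the mean of a skewed distribution as "the average", can be shown with a toy sketch (all figures hypothetical, not from the chat):

```python
# Toy illustration of a classic trick from "How to Lie with Statistics":
# reporting the mean of a skewed distribution as "the average".
salaries = [20_000, 22_000, 25_000, 25_000, 30_000, 250_000]  # hypothetical firm

mean = sum(salaries) / len(salaries)

# Median for an even-sized list: average the two middle values.
mid = len(salaries) // 2
median = sum(sorted(salaries)[mid - 1:mid + 1]) / 2

print(f"mean:   {mean:,.0f}")    # the single outlier drags the "average" way up
print(f"median: {median:,.0f}")  # the typical salary is far lower
```

One outlier moves the mean far above what anyone typical earns, while the median resists it, which is why the choice of "average" matters.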
18:30:10 From J J Bryson : Ah, it was the Internet that is from DARPA. Turing worked for the British during the war of course in inventing computers, and invented theoretical computer science before then!
18:30:15 From Patricia Shaw : Can you put the link to the papers please Joanna
18:30:24 From : the problem is the temptation of governments to protect their political allies and to censor and deplatform dissent, a digital ostracism
18:30:30 From Markus Krebsz : Maybe MKAI could provide the links on the event page for today’s session?
18:30:36 From Jenn Huff : here is JJ’s google scholar:
18:30:36 From Vibhav Mithal : Hi everyone – Come and join us in the AI conversation on Telegram –
18:30:37 From Richard B : Yes, that is correct @jjbryson
18:30:41 From J J Bryson :
18:31:03 From J J Bryson : Argh, I still didn’t get to hear Mafunase’s answer, she did “un cloak” to answer & then I digressed 🙁
18:31:27 From Patricia Shaw : Please give me a moment. My dog is barking.
18:32:19 From Johanna Afrodita Zuleta : Dear MKAI fellows,
Fascinating to be here with you, and participate in the mix of topics you are dealing with, Richard thank you so much for this space.

I am a crosspollinator, cultivating interdisciplinary alliances, as a value catalyst; especially between the worlds of diplomacy, economics and arts & culture.
Particularly focused on sustainability and advancing the relevance of the humanities in the digital era.

Glad to connect, hear your views and explore collaborations.

Energised regards,
Johanna Afrodita
18:32:37 From Wade : what criteria are you using for “evil”…? I didn’t hear a standard introduced
18:32:41 From Evgeniya : #IsaacAsimov 🙂 Three Laws of Robotics
18:33:06 From Jenn Huff : So humans are not intellects floating at roughly brain height… we are embodied, so if we are able to achieve generalizable AI, then robots _will_ develop ethics quite different from ours based in their embodiment
18:33:15 From Brent Zuber – Calgary, AB : Paul’s metaphor comment reminds me of the quote – “Nothing is good nor bad, but thinking makes it so” …. so does ethics only apply to a thinking entity?
18:33:23 From J J Bryson : I’ve written quite a lot about “robot ethics” (robots as moral agents and patients) Here is a Web page
18:33:39 From Lavina Ramkissoon 🙂 : morals are personal to oneself whereas ethics is applied to community/social ? maybe
18:33:59 From J J Bryson : Here is a philosophical paper that’s probably the most relevant
18:34:20 From Evgeniya : @JJ Bryson: I suspect AI will have its own pyramid of needs soon! #MaslowAI
18:34:50 From hesam ghoochaninejad : from google: While morals are concerned with individuals feeling “good” or “bad,” ethics determine what behaviors are “right” or “wrong.”
18:35:11 From J J Bryson : @Evgeniya I don’t. Anyway, to finish out, here is probably the best written one because two lawyers helped, which talks about LEGAL agency
18:35:34 From Peter Scott : It’s been observed that we judge machines by their actions and people by their intentions. I presume this is because we don’t think machines can have intentions. I suspect we judge the creators of machines by the intentions we impute to them through the actions of their creations.
18:35:50 From Markus Krebsz : Worthwhile checking out what Megatron Transformer said about itself, humans and morals…
18:35:51 From J J Bryson : Thanks Hesam, bad for me, I’m using it exactly the reverse in the book I’m writing 🙂
18:35:51 From Vibhav Mithal : Hi everyone – Come and join us in the AI conversation on Telegram –
18:36:06 From Gerry Copitch : There are those who believe it’s ethical/moral to refuse a 14 year old rape victim an abortion?
18:36:12 From Jenn Huff : well, Maslow’s pyramid needs some historical deconstruction too…
18:36:15 From Richard B : Perhaps we have a privileged position on ethics in this group? If we stopped a person on the street, what would they think of ethics? I feel as though I have been fairly treated? I feel that I have not been screwed over this week? My point is, we use these words, but what do people interpret by these words?
18:36:22 From Evgeniya : @Jenn – agree!
18:36:52 From Markus Krebsz :
18:36:59 From Karen Rea : I am having wifi problems and also have to go sadly, but hope we can get there on ethics and morals! Goodnight all! Thank you and just terrific to see all here 🙏
18:37:04 From Vibhav Mithal : If you have any questions on MKAI and the Channels that we have for AI conversation (and are ok to connect with an IP lawyer, with an interest in AI and Data Protection), happy to connect with you:
18:37:11 From Vibhav Mithal : Thank you Karen !!
18:37:15 From Peter Scott : I do not think it is useful to attribute ethics to machines until they have independent agency. Otherwise that would be bypassing the ethical responsibilities of the people who were responsible for the machine’s actions.
18:37:16 From Jess : Maybe it is about introducing ‘circuit breakers’ (somehow) in the design of how we integrate AI, so that there are intervention points where we can review how we want to design the ethical frameworks. A bit like GDPR, it is always open to interpretation, right? And because it can be interpreted differently, there is a moral dilemma rather than an ethical dilemma
18:37:33 From Markus Krebsz : HUMAN circuit-breakers!
18:37:36 From Nicola Strong : Ethics are distinct from morals in that they’re much more practical.
18:37:40 From Paul Levy : I think we are all currently confused, which may be no bad thing at this early stage of AI development – it’s just inconvenient for some 😉
18:37:53 From Markus Krebsz : An AI kill-switch that can be deployed by humans
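The "circuit breaker" and human kill-switch ideas above can be sketched as a thin wrapper around an automated pipeline. Everything here, class and function names included, is a hypothetical illustration, not an existing API:

```python
# Hypothetical sketch of a human "circuit breaker" around an AI pipeline:
# each step runs only after a human review callback approves it, and a
# trip() kill-switch halts everything.
class HumanCircuitBreaker:
    def __init__(self):
        self.tripped = False

    def trip(self):
        """The human-operated kill-switch."""
        self.tripped = True

    def run(self, steps, approve):
        """Run pipeline steps, pausing at each intervention point."""
        results = []
        for step in steps:
            if self.tripped or not approve(step.__name__):
                break  # halt before this step ever executes
            results.append(step())
        return results

# Toy usage: the human reviewer approves training but rejects deployment.
def train():
    return "trained"

def deploy():
    return "deployed"

breaker = HumanCircuitBreaker()
out = breaker.run([train, deploy], approve=lambda name: name != "deploy")
print(out)  # only the approved step ran
```

The design choice mirrors the GDPR analogy: the breaker does not decide what is ethical, it only guarantees a human review point exists before each step can run.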
18:37:58 From J J Bryson : AI doesn’t become a moral agent. Moral agents are determined by societies, and just societies need to be equitable. It is never equitable to design another person. So we cannot pretend to give moral agency to commercial products.
18:38:53 From Refilwe Tlhabanyane : The ethical theory on AI will, or rather should, be aligned to basic human rights. If we use morality as a basis of definition then we will have subjective views across various regions.
18:38:55 From Peter Scott : If it’s okay to put in a plug here, we frequently explore different facets of this in my podcast “Artificial Intelligence and You” (
18:39:00 From Richard B : Moral agents have been established long before AI
18:39:21 From : no society is equitable per se, and we can think about it in detriment of the individual rights
18:39:31 From : we can not think*
18:39:31 From Evgeniya : The Reasonable Robot book. A quote by Gabriel Hallevy: “If all of its specific requirements are met, criminal liability may be imposed upon any entity – human, corporate or AI entity”. 📚 Chapter 6 – Punishing AI
18:40:11 From Paul Levy : My own view is that ethical AI is categorical, whereas moral AI will be necessarily mysterious, and moral AI will become increasingly and inconveniently important
18:40:13 From : people who want to “reshape” society can easily become dictators
18:40:33 From Markus Krebsz : Philosophical question: Does that make us the “gods” of the Ai?
18:40:46 From : in a sense yes
18:40:52 From Markus Krebsz : I.e. the ultimate source or ultimate creator
18:41:05 From Richard B : I think it makes us accountable for what we build
18:41:15 From : not the ultimate because we are not original creators, we are also creatures
18:41:25 From hesam ghoochaninejad : Hen or Egg
18:41:29 From Markus Krebsz : So Ai could at some point decide not to “believe” in its creators and turn against them
18:41:34 From Markus Krebsz : (i.e. us)
18:41:41 From : Asimov rules
18:41:46 From Richard B : and if you build something that has some capability to build itself, you are still accountable
18:42:05 From Peter Scott : Reminiscent of the observation: Dogs say, “My human gives me food, shelter, love… he must be god!” The cat says, “My human gives me food, shelter, love… I must be god!” Is AI a dog or a cat? 😉
18:42:14 From Paul Levy : Happy to continue conversations …
18:42:31 From Markus Krebsz : Catchup next week, @Paul?
18:42:41 From Matt Ensor : Thanks for a good discussion. You are welcome to join our webinar on Revitalising Indigenous Languages with Conversational AI on March 8:
18:42:42 From Vibhav Mithal : Hi everyone – Come and join us in the AI conversation on Telegram –
18:43:00 From Tristi Tanaka (she/her) : Thank you very much – very interesting discussion.
18:43:10 From Paul Levy : There is moral agency but there is also moral emergence and transcendence
18:43:11 From hesam ghoochaninejad : Unfortunately robots won’t need food like dogs, and probably won’t need human help further in the future
18:43:15 From Brent Zuber – Calgary, AB : It is interesting that it is our projection onto a technology that is debated. I am reminded of a quote from Hans Moravec’s book Mind Children: The Future of Robot and Human Intelligence, c. 1988: “But intelligent machines, however benevolent, threaten our existence because they are alternative inhabitants of our ecological niche.”
18:43:23 From Evgeniya : Artificial Emotional Intelligence doesn’t exist yet – AI can only simulate elements of human emotions.
18:43:37 From J J Bryson : If you had a system of mechanical entities that really sustainably perpetuated themselves then they might evolve their own ethics. But I doubt we will do this as I just said (just finishing typing it for the record). I also disagree that consciousness is a big deal. If you think robots have hands or feet, I guarantee you they also have explicit memory, and that’s as much like consciousness as any robot hand is like a hand.
18:43:40 From Gerry Copitch : Key distinction between awareness and consciousness?
18:43:54 From J J Bryson : Jinx, Gerry 🙂
18:44:23 From Wade : humans say percussive strike, yet the Bible permits certain actions such as eating meat and “subduing Earth”
18:44:31 From Refilwe Tlhabanyane : The sense of singularity and the absence of consciousness in AI will always be an inhibitor for AI ethics. The ethical burden of an AI technology lies with its natural creator.
18:45:01 From Markus Krebsz : Company start-ups do not work like that though: they have an idea and do it, typically to “make money”
18:45:49 From Markus Krebsz : Thought to “impact”, particularly negative ones, is usually given last… and, if there’s no regulation, why bother…
18:45:59 From Vibhav Mithal : Thank you Chris!
18:46:09 From Paul Levy : Many AI influencers prioritised ethics and AI because it is easier to artifactualise and productise. Moral AI is more elusive, harder to pin down and always archetypally holds humans (and in the future, robots) to account
18:46:10 From J J Bryson : Oh, are you asking me? Sorry. I’m not discriminating much between what we have explicit memory of (so are “aware” of) and consciousness. But of course many people use “conscious” to mean “moral agent” or “human” or “divine” or something, so in those cases then there are obviously huge differences.
18:46:16 From Markus Krebsz : (Not suggesting that’s right, but that’s how economic objectives overrule ethical ones)
18:47:18 From Jess : Big Bug – interesting new film that touches upon many of the things we talked about today 🙂
18:47:32 From Chris McClean (he/him) : Does anyone know whether there’s a way to export the chat into a Word doc or similar? There’s so many good ideas and resources I haven’t been able to keep up with.
18:47:42 From J J Bryson : @Paul I was talking about that earlier, that’s why the UN told us AI Ethics academics to start talking about Human Rights again, which is what I was trying to ask Mafunase & Animesh about their perspectives on that. So far I’ve heard Cansu & Patricia’s 🙂
18:47:48 From Markus Krebsz : Thank you @jess – my evening is saved!
18:48:06 From Peter Scott : Chris: click on chat, Control-A, Control-C, go to Word, Control-V
18:48:19 From Chris McClean (he/him) : Thanks Peter.
18:48:22 From Jaisal Surana : Thank you Markus for joining us
18:48:27 From Jenn Huff : go to the … menu (kebab menu). on the upper right of the text window — save chat
18:48:31 From J J Bryson : Steve Bannon said we should back Russia (really Putin) because it (he) is un-woke.
18:48:55 From Markus Krebsz : @Jaisal – thank you for having me 🙏
18:49:21 From Jaisal Surana : It’s always a pleasure to have you on the MKAI forum @Markus
18:49:21 From Richard Foster-Fletcher – to Animesh Jain(Direct Message) : Would you like to comment?
18:49:36 From Paul Levy : such an important point you make @JJbryson
18:49:51 From Markus Krebsz : Thank you – I very much enjoy the company of my old and new MKAI friends 💖
18:49:54 From : I think that negotiation is a forgotten skill
18:50:27 From : BATNA – Harvard Law School negotiation technique shows how to map interests
18:50:37 From Jess : “law is the product of the ages, wrapped in the opinion of the moment” – can’t remember who said this, but it is interesting 🙂
18:50:44 From Richard B : seat belts are a good example, as automobile safety is very different between the USA and western Europe in part because of cultural aspects enshrined in law….
18:50:48 From J J Bryson : Absolutely amazing book about this
18:51:08 From Jenn Huff : I mean, Bannon thinks that the whole of Western Civilization (however he defines it) is predicated on the gender hierarchy that puts (white cis het) men in charge… which is an interesting critique of “western civilization” if I ever heard one…
18:51:17 From Paul Levy : What a superb conversation!
18:51:29 From Markus Krebsz : Agree @Paul!
18:51:29 From Chris McClean (he/him) : History of corporations goes back to East India Company… lots of interesting aspects of sovereignty and power to consider there.
18:52:02 From Markus Krebsz : I am a big fan of tea – am not a big fan of the East India Company and its role in the tea trade!
18:52:16 From : the world needs to agree to disagree
18:52:18 From Vibhav Mithal : Nice one Markus !
18:52:19 From Richard B : activist shareholders rarely work on behalf of the collective, they work out of selfish self interest
18:52:27 From J J Bryson : Here’s an amazing video of that book in only 4 minutes
18:52:44 From Markus Krebsz : P.S. Growing my own green tea, despite the British weather 😉
18:52:44 From Vibhav Mithal : Thank you !
18:52:54 From Jenn Huff : @ Richard B — the gollective action problem?
18:53:04 From Markus Krebsz : Oh and a big fan of Richard FF’s Matcha tea!
18:53:17 From Jenn Huff : *collective
18:53:19 From Richard Foster-Fletcher – : ahhhhhhhh a plug! 🙂
18:53:29 From Richard B : most shareholders are passive – activist shareholders see an opportunity to make more money for themselves, not for the collective
18:53:32 From J J Bryson : Speaking of AI & tea, has anyone read Ancillary Justice?
18:53:33 From Markus Krebsz : ^^^ yes this – best Matcha you can get ^^^
18:54:07 From Wade : it’s a fact, the Law of Non-Contradiction: all religions and spiritualities are not the same or all right
18:54:12 From Markus Krebsz : Biased stakeholders unfortunately
18:54:21 From J J Bryson : Profit isn’t realised in an instant. Real efforts to maximise profit require sustainability and equity and security.
18:54:37 From J J Bryson : The well being of employees and customers both.
18:54:46 From Richard B : short sellers may disagree JJ
18:55:07 From Richard B : one of the reasons short selling is against the law in some countries
18:55:09 From Gerry Copitch : Most fascinating and insightful discussion I’ve heard this year!!! Huge THANKS!!!
18:55:18 From Richard B : again, law informed by culture and values
18:55:41 From Jenn Huff : sure, but shareholder capitalism has been ascending for at least 50 years…. is it a local fitness optimum that we can’t escape, or is there a path out of here?
18:55:42 From Markus Krebsz : Agree @gerry – thank god it’s only February – so look forward to many, many more to come!
18:55:48 From J J Bryson : True, I meant profit from companies. Actually, France introduced an awesome law where shareholders couldn’t vote unless they’d held the stock for some amount of time (3 months?)
18:56:21 From Richard B : USA has similar laws on casting votes at shareholder meetings, perhaps 2 months
18:57:02 From Patricia Shaw : and the most mistrusted industry is financial services!
18:57:17 From Richard B : happy to connect with anyone that wants to….
18:57:44 From Richard B : financial services mistrusted by people who lose money……
18:58:00 From J J Bryson : @Jenn actually I work on inequality too, we were this unequal around the time of WWI & the crash of 1929, and finally the elite realised they needed more stability & brought down both inequality and polarisation (they come together in most regions) Unfortunately in 1978 we stopped doing so much redistribution, my guess is because we knew the USSR’s economy had plateaued. Everyone enjoyed the ride through the 1990s but now here we are again. I recommend again the Pistor book I linked above.
18:58:05 From Johanna Afrodita Zuleta : Chris : AMEN
18:58:13 From Richard B : those who have made money are ominously silent……
18:58:24 From Peter Scott : Another great event @Richard F-F, thank you!
18:58:33 From Richard Foster-Fletcher – : Thanks Peter, great to see you
18:58:36 From J J Bryson : Here’s that paper…
18:59:38 From : I need to go!
18:59:39 From Vibhav Mithal : Happy to connect with everyone here – and JOIN US on Telegram to continue the AI conversation.: –
18:59:48 From Vibhav Mithal : Thank you Karen !!
18:59:54 From :
19:00:04 From Brent Zuber – Calgary, AB : Many great points, and considerations – thank you Richard and panel !
19:00:25 From Jess : Amazing discussions, all! Thank you 🙂
19:00:34 From Quinn (they/them) : Thank you, all!
19:00:44 From Paul Levy : A proper “deep” panel!
19:00:48 From : Bye all!
19:00:49 From Nicola Strong : Thank you!!!
19:01:32 From Paul Levy : Happy to continue the conversation
19:01:36 From Susan : Thank you all, very insightful contributions!
19:01:39 From Markus Krebsz : Thank you very much to the fantastic speakers and the fabulous MKAI team – really great start to 2022! (unfortunately have to drop now but will try joining the next after hours party 😉 )
19:01:46 From J J Bryson : I type fast 🙂
19:02:03 From istvan : Thank you all
19:02:13 From Jaisal Surana : Thank you @Gerry!
19:02:14 From Chris McClean (he/him) :
19:02:22 From Domingas Assis : Thank you all
19:02:25 From J J Bryson : Thanks for having us 🙂
19:02:34 From abimbola oyedepooyinloye : very interesting discussion
19:02:36 From Carolina Sanchez : Thank you so much MKAI!
19:03:04 From Vibhav Mithal : Thank you everyone !!!
19:03:07 From Cansu Canca : Thank you everyone!!
19:03:19 From Mustafa : thank you
19:03:21 From Refilwe Tlhabanyane : Thank you – great session!
19:03:24 From Guillaume CLAMART-MÉZERAY : Thank you!
19:03:48 From Chris McClean (he/him) : So energized by the conversation… great to see so many brilliant people working on these issues. <3
19:03:53 From Dwight Nelson : Yes it’s working
19:04:00 From Patricia Shaw : Thank you everyone. I need to go to another meeting!
19:04:16 From Lavina Ramkissoon 🙂 : transcripts worked
19:04:36 From Jess : Yes, the transcript works! Amazing feature 🙂
19:04:38 From Jenn Huff : wow… artists… yeah…
19:04:38 From : Yes on mobile it’s fine at the bottom of the screen
19:05:24 From Cansu Canca : Gotta jump to another meeting now… Again, happy to connect:
19:05:40 From Richard Foster-Fletcher – : Thanks Cansu
19:05:51 From Chris McClean (he/him) : The link again talking about indigenous futurism:
19:05:53 From Vibhav Mithal : Thank you everyone ! Going to drop off now.
19:06:34 From Mafunase ngosa Malenga : Thank you everyone 🙏🏾
19:08:11 From Richard Foster-Fletcher – : Defence of the dark arts
19:10:16 From Wade : if art is in the eye of the beholder, how would that be integrated into AI (as a proxy to general AI)…?
19:11:57 From Chris McClean (he/him) : Great comments, @johanna, thanks. Very curious to check out Documenta.
19:12:12 From Chris McClean (he/him) : Here’s the link I found:
19:12:32 From J J Bryson : IDK if anyone wants this level of legal geekery, but here’s a UK parliamentary testimony about intellectual property in generated art. I was one of the people (the only non-lawyer I believe) & I mostly talked about art as a metaphor.
19:14:31 From J J Bryson : I should really go home, I’m still at the office and it’s 20:14. See you!
19:15:28 From Richard Foster-Fletcher – : Thank you again Joanna
19:16:41 From Johanna Afrodita Zuleta :
19:17:10 From Chris McClean (he/him) : Hate to leave this conversation. Thank you all for the terrific insights and points of view. Hope to continue the conversation and see you all online again soon.
19:17:42 From Carolina Sanchez : Need to leave now….such a fantastic session. Thank you!
19:18:05 From Paul Levy : I am not sure you can ‘deploy’ AI by its very nature. You cannot deploy lions, or trees – they become part of the mysterious ecosystem
19:18:34 From Johanna Afrodita Zuleta : Chris delighted to connect and share more
19:18:56 From Johanna Afrodita Zuleta : I also made many remarks from your comments!
19:19:11 From Johanna Afrodita Zuleta : @chris
19:19:33 From Jess : Thanks @Paul. It’s the easiest way to introduce the ideas of AI in our local government, jungle. There are definitely a few crouching tigers and hidden dragons in there 😉
19:20:34 From Jaisal Surana : Amazing event and after party. I feel like staying but got to go for child care. Thanks to all panelists, Richard, Vibhav, Karen and all the attendees.
19:20:46 From Richard Foster-Fletcher – : Thanks Jaisal.
19:20:55 From Jaisal Surana : Cheers
19:21:22 From Jess : @Mariya, we are actually not looking to reduce staff : there are so few of us as it is #ThanksCentralGov! It is more about removing the mundane work from our officers 😀
19:21:40 From Jenn Huff : still listening!
19:21:41 From Jess : they can do their best work 😉
19:22:10 From : Thanks for a great conversation. See you soon!
19:23:06 From Jess : Thanks everyone.. unfortunately gotta go! will definitely catch up with you all at the next one 😀
19:23:38 From Paul Levy : Won’t AI create new AI robot artists?
19:23:41 From Vemir : Love Winston Churchill’s quote on this subject. The story is that when Churchill was asked to cut funding to the arts in order to support the war effort in World War II, he responded “Then what would we be fighting for?”
19:24:04 From Vemir : I believe the most exciting part of future AI will be AI artists
19:24:29 From Odilia Coi : Thank you so much everybody, it was an extremely interesting event! So insightful! I am sorry I have to leave at this point. See you all at the next event!
19:24:39 From Richard Foster-Fletcher – : Thanks Odilia
19:25:31 From Odilia Coi : Thanks Richard and @Jaisal for the wonderful moderation, and all the team for organising that!
19:26:04 From Johanna Afrodita Zuleta : Refilwe ! That was excellently summarised
19:36:37 From Karen Beraldo : New Zealand is the home of my heart!
19:43:35 From Gerry Copitch : Getting hungry…..hung back as long as I could in case we got to see the carrots again. Thanks for a wonderful evening everyone 🙂
19:44:07 From Richard Foster-Fletcher – : Thanks Gerry
19:57:40 From Wade : thank you for bringing the forums together Rich. thx Rich for your opinions. Missed out on more from Paul though.
19:58:19 From Wade : ^ second Rich is Rich B
19:58:19 From Richard Foster-Fletcher – : Thanks Wade
19:58:34 From Richard B : Thanks Wade!
19:58:38 From Richard Foster-Fletcher – : Good to hear your experiences today
19:59:12 From Wade : 👍
19:59:56 From Wade : hmmm… cultural data collection as a priority, to impress upon future human races its value, which is more anthropology
20:01:36 From Wade : if all Catholic/Orthodox leaders were implicated or if all had malicious intentions
20:01:59 From Wade : GPR is a statistical representation
20:03:00 From Wade : hey maybe AI can determine absolute culpability in the Residential Schools problem
20:04:21 From Wade : Biblical archaeology supports the reality of the Biblical timeline