Cover image: Resilience

Over the last month, what was once a niche research topic has emerged as a critical question and a subject in need of more debate in the MKAI community. When it comes to AI regulation, there is always a tricky part of any data-informed decision-making process – and that’s timing: being in the right place at the right time. It seems that the EU came late to the party and wasn’t the first to notice the urgency of regulation. Hence, the EU is still a tiny fish in a vast pond.

Although the EU has recognized AI regulation as a critical pillar for future development in the field, the problem is that the EU is not exactly a world leader in artificial intelligence. That is not the end of the story for Europe, though: the United Kingdom, on the other hand, is doing exceptionally well in the field of artificial intelligence.

While hoping that the EU will sort things out with AI regulation, China is setting a course to become a world leader in AI. What’s more, several key Chinese AI companies have already become a significant part of everyday life in China, the United States, and the rest of the world.

How can AI be regulated if many people have no idea that it is happening?

In the EU, some “high risk” activities would be permitted, subject to strict controls, such as safety measures against introducing racial, gender, or age bias into AI systems. In the meantime, China has already implemented a “social credit” scoring system, which tracks and rates the behaviors of its citizens, as shown in the film Coded Bias (2020). If regulations aren’t synced between countries, what can we say about AI ethics? In theory, progress in this fast-evolving field seems within reach, but it demands that fairness guidelines be applied to real-world problems.

Bias in AI

As in real life, there are also biases in AI. There shouldn’t be, but the first iterations seemed to have problems with racism and gender inequality that could easily undo centuries of fights for rights worldwide. Driven by this topic, we highlight resources from our community focused on fighting bias in algorithms, the accuracy of AI-powered gender classification products, intersectional accuracy disparities in commercial gender classification, and race and gender in cognitive AI.

Further recommendations on this topic inspired us to do some more research. It’s easy to point the finger at the algorithm, but the effort has to focus not only on the algorithms but also on the data. Humans are incapable of tracking the large amounts of data that flow across networks, yet we must still be aware of how data is collected and used. If the information is messy, things become even murkier in practice.

One of the reasons bias is such a problem in machine learning is that it is so hard to observe. It is becoming increasingly difficult to detect, even with careful model interrogation, as more complex modeling approaches emerge. Even when obvious bias is not visible in the results, one must adopt an anti-bias mentality, consciously identifying and quantifying bias in algorithmic outputs so that it can be unearthed where it occurs.
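To make that concrete, here is a minimal, hypothetical sketch of what “quantifying bias in algorithmic outputs” can look like in its simplest form: comparing how often a model makes a favorable prediction for two groups. The data, the demographic_parity_gap helper and the single metric are illustrative only; real audits use many metrics and far more context.

```python
# Minimal sketch: quantifying one simple notion of bias (demographic parity)
# in a model's outputs. All data below is invented for illustration.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between groups 0 and 1."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive rate for group 0
    rate_1 = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_0 - rate_1)

# Hypothetical predictions (1 = favorable outcome) and a sensitive attribute
y_pred = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

print(f"Demographic parity gap: {demographic_parity_gap(y_pred, group):.2f}")
```

A gap of zero would mean both groups receive favorable predictions at the same rate; anything larger is a signal to investigate, not a verdict.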

The question emerging from these discussions is: do we have a baseline starting point? Do we have the right tools to score risk at each layer? If we see risk in the data and trouble with the algorithm, how much do incorrect human-supervised choices affect them? And if the data and the algorithm are relatively bias-free, how do we build up from a baseline like that?

AI + Knowledge – a match made in heaven?

What can knowledge-based technologies do for Deep Learning? What are neural networks, how do they work, and what can they do? What’s next? What are the roadblocks and opportunities?

Whether you are just getting started with neural networks and deep learning or you are already advanced, the MKAI Community is where you can find both inspiration and hands-on knowledge. A direct example from pioneering and prolific researchers in our community explains bias in AI neural networks. With united forces, they have also brought us more resources on how the backpropagation algorithm works, neural networks, and backpropagation.
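For readers who prefer code to prose, the sketch below shows the idea of backpropagation in miniature: a tiny two-layer network learning XOR with nothing but NumPy. The architecture, learning rate and number of epochs are illustrative choices rather than recommendations, and this toy is not how the resources above implement it.

```python
# Minimal sketch of backpropagation: a tiny two-layer network learning XOR
# with plain NumPy. Sizes, learning rate and epochs are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for epoch in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)    # hidden activations
    out = sigmoid(h @ W2 + b2)  # network output

    # Backward pass: the chain rule applied layer by layer (squared-error loss)
    d_out = (out - y) * out * (1 - out)  # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)   # gradient pushed back to the hidden layer

    # Gradient-descent updates
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # should approach [[0], [1], [1], [0]] as training converges
```

Every deep-learning framework automates exactly these two passes; the resources above go into far more depth on why and how.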

Ascending from neural networks to neuroscience, we have only just started exploring the philosophy behind this match of AI and knowledge, merged and united. But once it becomes so advanced, how would we know if an AI is conscious? This debate has already started, with bias and with Facebook trying to make fairer AI, but what are fairness and collective memory, or consciousness, when it comes to AI? The anti-philosophy approach, however, may give us a different perspective on how to approach this sensitive topic.

If you have the answer to these questions, or want to contribute to the discussions, you can always join MKAI on Telegram or wherever you like to be – WhatsApp or Signal.

Learning about emotions from scratch

When talking about AI, however, be it from an ethical or a technical point of view, there are semantics involved. And not just in terms of defining what AI can do in these fields. The fact is that AI has transcended far beyond just ML. And how far is far enough, you might be wondering. Well, is identifying human emotions using machine learning algorithms good enough?

Recently, there has been an increase in the number of people requesting methods to connect the worlds of machine learning and knowledge-based technologies. While emotion recognition technology may have some advantages, these must be balanced against concerns about accuracy, racial bias, and whether the technology was indeed the best tool for the job.
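As a rough, hypothetical illustration of what “identifying human emotions using machine learning” boils down to in its simplest form, here is a toy text classifier built with scikit-learn. The sentences and emotion labels are invented, and a real emotion-recognition system would face exactly the accuracy and bias concerns raised above.

```python
# Toy sketch: emotion recognition framed as text classification with
# scikit-learn. The sentences and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I am so happy today", "this is wonderful news", "what a lovely surprise",
    "I feel terrible about this", "this is so sad", "I am really upset",
]
labels = ["joy", "joy", "joy", "sadness", "sadness", "sadness"]

# TF-IDF features feeding a linear classifier, trained on the toy data
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["what wonderful news"]))  # likely "joy" on this toy data
```

Whether a prediction like this is ever “good enough” for decisions about real people is, of course, the whole debate.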

Although there are new ML models that could remove bias, we need a much broader public conversation and deliberation about these technologies. Recognizing that the majority of social media users are reluctant to share sensitive information (such as gender or race), researchers at the Penn State College of Information Sciences and Technology developed a novel system that estimates this sensitive data to help GNNs make effective recommendations. A small step for a human, but a great one for humanity (or human-like AI?).
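The Penn State system itself is not reproduced here; as a deliberately simple, hypothetical illustration of the underlying idea (estimating an attribute a user chose not to share from the people they are connected to), consider the toy neighbor-vote sketch below. The graph and attribute values are invented.

```python
# Toy illustration (not the Penn State system): estimating an attribute a
# user chose not to share from the most common value among their friends.
from collections import Counter

friends = {                    # a tiny invented social graph
    "alice": ["bob", "carol", "dave"],
    "bob": ["alice", "carol"],
    "carol": ["alice", "bob", "dave"],
    "dave": ["alice", "carol"],
}
declared = {"bob": "A", "carol": "A", "dave": "B"}  # self-reported attribute

def estimate_attribute(user):
    """Guess a missing attribute by majority vote over known neighbors."""
    votes = [declared[f] for f in friends[user] if f in declared]
    return Counter(votes).most_common(1)[0][0] if votes else None

print(estimate_attribute("alice"))  # "A", although alice never shared it herself
```

A real GNN learns this kind of relational signal automatically, which is precisely why the privacy conversation matters.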

Rebooting AI: Human 2.0

For complex dynamic systems, however, knowing humans as we know them today is becoming less interesting. Even though systems can easily identify your face or your emotions while leaving gender and race out of the equation, there is still a need to update human performance in the labor market. Let’s look at the numbers and see how.

According to 77 percent of business leaders, under-the-skin chips and sensors will improve job efficiency and productivity. In Sweden, thousands of people have already had microchips implanted under their skin, replacing keys and bank cards.

For those who have read George Orwell’s “1984”, this feels all too much like a continuation of Big Brother. Chipped workers may be more productive, but how long before they outnumber the rest of us and become more machine-like?



Upcoming MKAI events

Are you searching for more answers about ethics and regulations? 

Don’t worry, we’ve got you covered! Are you interested in how corporations seek to comply with laws and regulations? Or in how consumers and employees are asking corporations to handle their data securely and with more attention to privacy? And if so, what market forces or technological disruptions will bring about this change?

If you would like answers to these questions, we’ll be as grateful as ever for your support and look forward to seeing you at the upcoming:

  • LinkedIn Live panel debate about ‘The Business Rationale for Ethical AI’ on Thursday, Apr 29, 2021, at 4:00 PM (BST). After that, come and join us for the community hour before the main event:
  • MKAI April Inclusive Forum on Responsible AI Event Series – Part 1: The Business Rationale for Ethical AI, Apr 29, 2021, 5 PM BST! Register here: https://lnkd.in/gmpAsgD.


Not yet a part of our community? Come and join us! 

We hope you enjoyed this month’s selection! If any of the topics left a particular impression or resonated with you, feel free to join our focused discussion group on Telegram or on other social platforms such as WhatsApp.

See you next week,

Alex


Getting value from the MKAI blog? Then please consider supporting our work. It’s easy to do; just subscribe at www.coil.com. Since we are Coil enabled, your monthly allocation will automatically include MKAI. That’s all you need to do. Thank you very much for your kind support.  

Written by: Aleksandra Hadzic

Analyzing data while providing strategy for growth, reach, and impact of the community at MKAI.
Experimenting with Data Science in Digital Marketing at Studio 33.
Staying up-to-date with digital technology trends. Otherwise, I'm dancing the tango.

Visuals by: Pinal Patel

The brains behind the designs at MKAI, I have been assisting distinguished researchers with their research on AI for close to 2 years at Rennes School of Business. Always been a tech aficionado and keen to keep up with emerging trends. Oh, and I'm also a yoga enthusiast.


4 Responses

  1. Absolutely fantastic Alex !!! Great insights into what the community is doing. Great to read this and keep going!

    1. Thank you, Vibhav, for your kind comments and support! I can’t wait to share new insights for the community made by the community itself 🙏

