Lessons learnt so far – MKAI Simplified AI Series

The Simplified AI series summarizes our community members’ thoughts, aiming to help people better understand complex AI topics.

Online forums were once a very foreign concept, both to moderators and to people accustomed to listening to expert-led talks. During the last year, however, we have witnessed many changes driven by external factors. Most importantly, we had to adapt and learn how to be agile.

While facing the many challenges of the new normal, the trickiest task as a moderator was keeping the energy high in the virtual room while following the flow of each event. Along that path, many lessons helped me both thrive and grow with the MKAI community. In this blog series, I want to share lessons learned from both a professional and a personal perspective, so that all of us can gain knowledge that can’t be learned from books: a direct output of experience in its purest form.

AI and the future of AI

Technology is ubiquitous and ever accelerating. We are constantly surrounded by technology powered by artificial intelligence. It has been fascinating to watch the successive waves of AI, from classical AI to neural networks to deep learning, and now beyond. What’s more, as new technologies have emerged, we have had the opportunity to hear about their development from professionals in the field.

We also had a community that wholeheartedly shared its views on how it understood these ongoing topics. That’s where we found our unique way of simplifying AI while shaping it at the same time. Whether we are aware of it or not, we are always surrounded by automation and AI, in forms such as Netflix, Amazon shopping, YouTube…you name it, and we don’t always know to what extent we are participants in AI models, be they rule-based, pattern-based, or otherwise. Nevertheless, we don’t have to understand the infrastructure behind it all. We can educate ourselves on the topics present in our daily lives and then initiate our own movements in areas of improvement.

As David Wood, one of our speakers, clearly explained at MKAI April 2020: You ain’t seen nothing yet: the coming acceleration of AI, we are now in the fourth industrial revolution. We can choose to better understand cognitive technologies or let technology shape our future – the future of humanity.

AI Explainability

Put most straightforwardly, explainable AI means that humans can understand the path an IT system or AI model took to make a decision. This contrasts with the black-box model of AI, where machine learning or deep learning models take inputs and produce outputs (or make decisions) with no decipherable explanation or context. As mentioned above, to explain or understand AI, we first need to simplify its definition.

That’s precisely what we’ve done at the MKAI: Using AI for Good Series (part 1, part 2, part 3). We learned that explainable AI can interpret the outcomes of AI while being able to clearly traverse back, from outputs to inputs, along the path the AI took to arrive at its results, thus increasing trust in AI systems.

It is vital to show business stakeholders both how the model performs (explanation) and how it arrives at its predictions: which features matter most and which features contributed to a given prediction (interpretation).
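To make the distinction between explanation and interpretation concrete, here is a toy sketch: a linear scoring model whose per-feature contributions can be read off directly, so the decision can be traced from output back to inputs. The feature names and weights below are invented for illustration, not taken from any real MKAI system.

```python
# Toy interpretable model: a linear score over named features.
# Each feature's contribution (weight * value) explains the prediction.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}  # hypothetical

def predict_with_explanation(features):
    """Return (decision, contributions) so the path from inputs to
    output can be traversed back, unlike a black-box model."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "approve" if score > 0 else "decline"
    return decision, contributions

decision, why = predict_with_explanation(
    {"income": 4.0, "debt": 3.0, "years_employed": 2.0})
print(decision)  # the model's output (explanation)
# The feature with the largest absolute contribution (interpretation):
print(max(why, key=lambda k: abs(why[k])))
```

A black-box model would return only the decision; returning the contributions alongside it is what lets a stakeholder see which feature drove the result.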

These are just some of the reasons why Explainable Artificial Intelligence (XAI) may be needed to ensure a better ROI, more trust, enhanced and credible decision making, and feasibility. With these in mind, it is equally important to align AI to the user’s beliefs, indicate what data is relevant, suggest when the model is not applicable, be transparent and follow a common-sense approach.

Human Compatible AI

Once we understand why we would need to implement AI in our business, what companies and organizations require to survive and then thrive is extracting real value from data and using that information to make business decisions. It’s time to expand our views and evolve beyond current data strategies.

At MKAI AI expert forum September 2020: Human-Compatible AI series, we learned that some companies are now looking at Augmented Intelligence which empowers their users. Augmented intelligence is a system built to enhance human capabilities and is more human-centric with solutions leading to higher productivity, higher earnings, and overall job growth.

It is essential to identify the personas behind a business solution and, first and foremost, to recognize the human needs it serves: the pain points, the gains, and the empathy map. Simple metrics for understanding the human condition, and for whether the solution serves a human purpose, lead to more success.

Human-centric solutions need simple techniques for researching users and surveying the solution provided. We can put a human in the loop to make machine learning models more human-centric, transparent, and explainable. At MKAI, we believe that AI is made more effective, fairer, and safer through human input. Beyond this, we believe in the need to regulate frontier technologies through external governance, oversight, and ethics boards.

To achieve that, design principles need to embody purpose, transparency, and skill in order to deliver a human-centric AI solution and augmented intelligence.

Since explainability also empowers users, it is essential to let them interact with the AI model through the interface, which underlines the importance of UX and helps the model learn faster and become more intelligent. Involving and empowering the user builds trust, thus supporting augmented intelligence.

For AI to succeed, it is essential to add the human dimension: not just technology, but a socio-technical or symbio-tech solution. No matter how sophisticated the algorithms are, they should serve the customers’ needs, which again brings us back to design principles.

Rethinking Data (Monetization)

Over time, data has become increasingly valuable. The more we make ourselves available online, the more data can be collected, often without people’s knowledge, to build a social profile. This raises issues of data disclosure and transparency, even as it drives business models towards greater profits.

These potential risks also call for governance, yet regulation moves much more slowly than technological progress in the use of data. One of the most sensitive uses of online data is surveillance in education, a safety concern for young children.

It takes XAI to explain AI. Data from social media (posts, videos, photos) captures personal behaviour millions of times per day. Sometimes, even more than data privacy, it is crucial to know how behavioural traits are captured and used to represent personalities, creating an AI content profile that becomes the e-business model.

AI collects data from various social media platforms, transforms this information into vectors, and learns from it to improve the model. However, several factors can lead AI models to fail: insufficient data, partial data, missing features, missing situations, and missing context.
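“Transforming information into vectors” can be shown with a minimal bag-of-words sketch: each post becomes a vector of word counts over a shared vocabulary. Real systems use far richer representations such as TF-IDF or learned embeddings; the two example posts here are made up purely to illustrate the idea.

```python
from collections import Counter

posts = ["great movie great cast", "boring movie"]

# Build a shared vocabulary across all posts.
vocab = sorted({word for post in posts for word in post.split()})

def vectorize(post):
    """Turn a post into a vector of word counts over the vocabulary."""
    counts = Counter(post.split())
    return [counts[word] for word in vocab]

vectors = [vectorize(p) for p in posts]
print(vocab)    # ['boring', 'cast', 'great', 'movie']
print(vectors)  # [[0, 1, 2, 1], [1, 0, 0, 1]]
```

Notice how the failure modes listed above show up directly in this representation: a word the vocabulary has never seen (a missing feature) simply has no dimension, and too few posts (insufficient data) leave most dimensions empty.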

That’s where explainability helps: it makes the solution more accessible and adaptable to change, increases the success rate of AI solutions, makes the business more trusted, helps remove unwanted decision biases as they surface, and detects missing common-sense knowledge.

To reduce the risks to privacy, AI needs to be human-centric. This requires algorithmic impact assessments, clear guidelines, and policies, including procedures for children’s safety, that can aid innovation by defining the data collection and processing practices needed to create a healthy ecosystem.

Hence there is a need for an open and transparent marketplace with a consistent set of rules for how Big Tech companies can use our personal data. According to BI Survey, only 17% of companies have an established data monetization initiative, while 13% are currently developing prototypes and a further 10% are creating a concept. These figures demonstrate that, in the twenty-first century, data is as valuable as any physical product.

The lessons still to be learned concern rethinking companies’ data monetization strategies and committing to more ethical practices. The right time to start reshaping them is now.

Upcoming events

Curious about AI in 2030? We invite you to the MKAI Inclusive AI Forum, which explores the impact of AI this decade: distribution, power, adoption, control, safety, and scale.

On 24th June at 5 PM (BST), please join us as our guest. We would love to hear your thoughts. You can register here.

 

Jaisal Surana

Jaisal Surana works as a Senior Business and Data Analyst and has been in the telecommunications industry for 15 years. Her passion for technology has driven her curiosity about machine learning and AI-based technologies. She is a Speaker Relationship Manager for MKAI and loves to moderate events. She is interested in making human life better with responsible and ethical next-generation technology.

