This year (2021) has been a remarkably productive one in the field of artificial intelligence (AI). Several large AI projects have received additional funding, and investment in AI startups is expected to set a new record.
As additional funding speeds up development in the sector, new pressure falls on educational institutions to educate their students about the risks associated with their future AI work.
Generations raised with technology in their hands will need more assistance to develop a reliable internal compass and the tools to navigate unfamiliar or ambiguous environments. Ironically, it may be artificial intelligence itself that we turn to in order to better educate the next generation of students about the risks of artificial intelligence. Covid-19 transformed teaching almost overnight: an industry that was decades away from transformation was suddenly thrust online. Online interactions, including teaching a class, produce data, and applying AI and analytics to this data is helping educational institutions streamline teaching operations and free up time to try out new ideas and innovate.
“Education is the passport to the future, for tomorrow belongs to those who prepare for it today.” – Malcolm X
With more autonomous systems running in the world, the only thing standing between those systems and the impact they create (intended or otherwise) is the moral compass of the people working on them. Social and cultural values will be reflected in advanced AI systems because humans themselves shape them.
Human AI meets emotions and values
The growing significance of humans and machines working together is often cited in conversations about jobs and productivity. However, experts note that AI ethics are hard to define, implement, and enforce. Cases are not always clear-cut and rely on an emotional and contextual understanding of the human component. There are many ways that AI can make people, systems, and processes more efficient, resulting in less waste, better health, and more access to education and vital resources. But this productivity may come at a high price if those in office pay too much attention to pseudo-philosophers arguing that we don’t know what ‘ethics’ means or what ‘good’ actually is. When it comes to implementing ethical-AI policies, leaders may do better to follow General Patton’s advice that “a good plan, violently executed now, is better than a perfect plan next week”.
Rethinking Ethics in Artificial Intelligence
Good ethical AI programming is only possible when programmers have the time and inclination to imagine the possible outcomes and scenarios of their work, and this must happen during the design process. Such an approach requires a supportive social and economic environment, one that allows developers to consider the potential social and economic impacts of their endeavours. Glynn Rogers of the CSIRO Centre for Complex Systems Science argues that AI developers should be taught moral and political philosophy alongside technical skills. One way to help could even be to team developers up with computer-based technologies that themselves have human-like abilities such as cognition, social relationships, and emotions.
Ethics and governance of artificial intelligence
In many countries, ethical AI is a critical social and moral challenge for governments, and AI-based tools could benefit low- and middle-income countries if used correctly and morally. Governments could improve public health by extending healthcare services to underserved populations and enabling healthcare providers to better attend to patients and engage in complex care.
For AI to positively impact public health, however, priority must be given to designing, developing, and deploying AI technologies for health in ways that avoid bias and prejudice. AI bias, also known as algorithmic bias, occurs when an algorithm produces systematically prejudiced results due to erroneous assumptions in the machine-learning process or incomplete data. Furthermore, existing biases in healthcare services and systems based on race, ethnicity, age, and gender, encoded in the data used to train algorithms, must be managed effectively for AI to be used safely for health.
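The mechanism behind this kind of bias is easy to demonstrate. Below is a minimal, purely illustrative Python sketch; the groups, approval counts, and frequency-based “model” are all invented for the example. It shows how a naive model trained on skewed historical records simply reproduces the disparity encoded in them:

```python
# Hypothetical illustration of algorithmic bias: a model that learns
# per-group approval rates from skewed historical data will reproduce
# the historical disparity rather than correct it.
from collections import defaultdict

# Invented historical records: (group, approved). Group "B" was
# historically approved far less often, regardless of merit.
historical = (
    [("A", True)] * 80 + [("A", False)] * 20
    + [("B", True)] * 30 + [("B", False)] * 70
)

def train(records):
    """'Train' by computing each group's historical approval rate."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approvals / total for g, (approvals, total) in counts.items()}

model = train(historical)
print(model)  # {'A': 0.8, 'B': 0.3} -- the skew in the data becomes the model
```

The point of the sketch is that nothing in the algorithm itself is prejudiced; the bias lives entirely in the training data, which is why curating and auditing that data matters as much as the model.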
The digital divide also influences a nation’s health. A digital divide occurs when a significant gap emerges between those who have ready access to computers and the internet and those who do not; it can disadvantage people based on their gender, geography, culture, religion, language, or age. If governments in low- and middle-income countries concentrate on reducing their pre-existing digital divides, they can overcome unequal access to health-related AI, information, and communication technologies.
What we focus on is what we notice and act on. Let’s make sure we focus on what we want and not on what we don’t want. Let us clearly spell out the story we want for humanity and the place, scope, and direction we want AI to have within it.
– Ana Irueste –
Community success
AI IN RESIDENCE AT SILVERSTONE PARK
AI in Residence is an MKAI-led program based in Milton Keynes, United Kingdom. The new pilot project at Silverstone Park, backed by investment from MEPC, aims to give the Park’s businesses an edge in the future through cognitive technologies such as artificial intelligence, robotics, augmented reality, and virtual reality.
According to a media release, this will assist local businesses based in Silverstone Park in identifying key technologies that will serve their organizations as they face future challenges.
Oak Brook teen receives Diana Award for nonprofit focusing on AI education
MKAI’s Executive Chair, Richard Foster-Fletcher, stated that AI needs more female role models like Jui Khankari, an Oak Brook teen who received the Diana Award for her outstanding work.
She was initially recognized for running the nonprofit AInspire, which has helped over 7,500 students in 58 countries. Following these accomplishments, she hosted machine-learning workshops in collaboration with MKAI that helped hundreds of people, both those looking to enter data science and those already working in the field, to build or improve their skills.
Khankari, who hopes to attend Stanford, said her biggest goal is to found, or join, a healthcare-related startup and build a product that provides the best possible service to underserved people. MKAI wishes her the best of luck!
See you next week,
A.