Every month, computers take on tasks that until recently required human intelligence, and algorithm designs described as "similar or superior" to those made by people are being generated ever faster. In the US, Google has utilised machine learning to help design its next generation of machine-learning chips; according to the IT giant, the AI can complete in under six hours work that would take people months. In China, the Beijing Academy of Artificial Intelligence (BAAI), drawing on more than 100 scientists from various organisations, has announced WuDao 2.0, a transformer model significantly larger than OpenAI's GPT-3, with 1.75 trillion parameters — roughly ten times GPT-3's 175 billion. A pre-trained model of this kind is well suited to simulating conversations, understanding images, writing poetry, and creating recipes. Machine learning models tune their parameters during training, and as those parameters improve, the algorithms predict results more accurately; a trained model can then be applied to similar problems.
Towards a social-relational justification of moral consideration
Today, almost everyone in society interacts with AI-enhanced services or products, whether consciously or unconsciously, and as AI scales exponentially across society it faces unique challenges. Artificial intelligence ethics research is still in its early stages, with issues such as trustworthiness, transparency, accountability, diversity, and non-discrimination needing immediate attention. The narrative, as well as the discursive framework, is still being written. Many researchers in academia and technology companies are developing frameworks and technical processes for detecting AI ethical issues across data collection, model building, and deployment. As a field, however, we still have much work to do: keeping AI use ethical and fair demands a deft combination of management and data science.
Responsibility and AI – Intellectual Freedom
Intellectual freedom is a necessary precondition for conducting ethical AI research. Restricting it may provide short-term benefits to corporations or governments, but in the medium and long term the negative externalities outweigh any gains: where healthy scepticism is suppressed, trust itself is called into question.
The intellectual freedom of researchers working in artificial intelligence ethics is critical to preserving society’s legitimate democratic interests alongside technological advances. While cautious regulatory approaches will be necessary and beneficial, it will be a strong, open, and communicative global AI ethics community that will serve as a model for corporations, universities, governments, and other stakeholders.
In an economy increasingly based on and powered by artificial intelligence, our responsibility as a community is to advocate for and defend intellectual freedom in AI ethics research, regardless of the context. Protecting the intellectual freedom of researchers employed and funded by corporations must be an inherent principle.
Right now, history is providing us with a once-in-a-lifetime opportunity to make things right.
Get involved with MKAI
Upcoming events
At the MKAI Inclusive AI forum in June, we will examine the state of Artificial Intelligence at the start of the new decade. Global thought leaders, including futurist David Wood and Rose Mwebaza, Director of the UN Climate Technology Centre and Network, will share their research and experience of this technology's impact: its distribution, power, adoption, control, safety, and scale.
We will explore questions such as:
- Will a small group of Big Tech companies maintain their dominance during this decade, or will new companies displace them? If the latter, what market forces will allow that to happen?
- How will artificial intelligence (AI) enter developing countries? What kind of leapfrogging do you expect to see?
- Over the next decade, how effective will various governance, policies, and legislation be in controlling artificial intelligence and data monopolies?
Until next week,
Alex