Over the years, human civilization has dreamt of cutting-edge technology that will change the world, and artificial intelligence (AI) is a perfect example. Now that the latest technology is openly and widely accessible, over half of the world’s population actively uses intelligent devices such as smartphones, often simply to take selfies and post them to social media. Innovative, isn’t it? This, however, shows just how unpredictable technological change is.
Beyond the numerous social media platforms, the internet is saturated with advertisements. Targeted advertising is the internet economy’s primary source of revenue. While the long-term mental health effects of this constant exposure are unknown, the short-term effects are readily apparent.
Good content needs to be separated from bad content online, yet there are far too many ads for human supervisors to vet each campaign. As a result, the effects that this flood of advertising has on people as they scroll cannot be precisely identified.
This is where machine learning algorithms come in: for tasks of this scale and complexity, they can screen out ad content that fails to meet industry standards, opening up new opportunities. While these predictions can be fairly accurate, they are prone to distortion and tend to catch only the most obvious violations. Such filters help cut down on potentially harmful advertisements, but a significant number still make it through.
AI legislation and bias in algorithmic decision-making systems
As discussed above, targeted or personalized advertising can shape people’s perspectives on life, and this can directly feed confirmation bias. Confirmation bias, also known as observer bias, means that when someone views data, they are more likely to interpret it in line with their expectations or desires. While most labelers keep personal beliefs out of the labeling process, some allow their own ideas to influence their labeling habits, resulting in inaccurate data. For example, researchers can unintentionally introduce bias into a study if they enter the project with subjective assumptions about their research, whether conscious or unconscious.
Nevertheless, there is promising legislation pending in Congress, introduced in May by Sen. Edward Markey and Rep. Doris Matsui. It seeks to protect the public from harmful algorithms while encouraging transparency in websites’ content amplification and moderation practices. The Algorithmic Justice and Online Platform Transparency bill also calls for a federal investigation into discriminatory algorithmic processes throughout the economy.
It is worth considering the potential consequences of adverse outcomes. Analyzing the negative influences doesn’t necessarily diminish social media’s positive achievement of connecting people and communities worldwide; it is simply essential to consider each possible effect on its own merits.
The new approach to legal liability
In reality, targeted advertising has been built by collecting personal information from consumers in the absence of data protection legislation. Many people have no idea that they have rights when it comes to their data, although there are platforms where they can learn more about those rights.
AI’s capabilities are advancing beyond novelty functions; as the technology “becomes real,” it takes a different but no less significant developmental path. Contrary to common public perception, artificial intelligence will continue to grow and transform our world even as it fades from public awareness.
According to Ben Winters, most AI-related legislation focuses almost solely on investment, research, and maintaining competitiveness with other countries, primarily China. Even though the future appears promising, the existing liability system in the United States and other countries is incapable of dealing with the potential negative consequences of artificial intelligence. That is a problem, because it will impede the advancement and adoption of artificial intelligence.
Revising standards of care; changing, through insurance and indemnity, who compensates parties when inevitable accidents occur; changing default liability options; establishing new adjudicators; and overhauling regulations to prevent mistakes and to exempt certain types of liability are all part of the solution.
It will be necessary to choose specific streams of law to investigate AI liability in greater depth. Following that, the current liability model should be reviewed. Only then will it be possible to see the effects of artificial intelligence on the development path of the future.
Upcoming event: MKAI AI Inclusive Forum – Adaptable Intelligence
The MKAI community contributes to the reemergence of AI for humans. MKAI’s mission includes a focus on long-term human values. As a result, we are working towards having a significant impact on the future of artificial intelligence. Using forums, we can effectively communicate our ideas, mission, and vision. In the upcoming AI Inclusive Forum titled Adaptable Intelligence, panelists and speakers will debate whether artificial intelligence is distinct from other technological waves.
There are several reasons to believe that AI will be distinct from previous technological waves: this new wave of AI research and development is taking place in an open-source computer science field, early publication is the norm, and large corporations’ investments almost always benefit the entire industry.
It is critical to discuss a new approach to revising standards around AI, and whether AI will have a long-term positive or adverse effect on your organization. If this topic interests you, don’t miss this opportunity to learn about and help shape your community’s future.
See you next week,