Survey XII: What Is the Future of Ethical AI Design?
As machine-driven algorithms swiftly become a dominant force, global attention has turned to the purpose and impact of artificial intelligence (AI). Of primary concern to many experts in this 2020 canvassing is that humanity’s rapidly advancing AI ecosystem is developed and dominated by businesses seeking to compete and maximize profits and by governments seeking to compete, surveil and exert control.
To analyze the direction of this technology, Pew Research Center and Elon University’s Imagining the Internet Center asked experts where they thought efforts aimed at ethical artificial intelligence design would stand in the year 2030. Some 602 technology innovators, developers, business and policy leaders, researchers and activists responded to this specific question: By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?
Survey Results and Expert Predictions
Regarding the application of AI Ethics by 2030, the survey data highlights a significant divide in expectations:
- 68% of respondents said they expect that ethical principles focused primarily on the public good will not be employed in most AI systems by 2030.
- 32% of respondents said they expect or at least hope that ethical principles focused primarily on the public good will be employed in most AI systems by 2030.
A majority said it is quite unlikely that AI design will evolve to focus more on the common good by 2030. They also noted that ethical behaviors and outcomes are extremely difficult to define, implement and enforce.
Key Themes and Worries
Among the key themes emerging in these respondents’ overall answers were several critical worries regarding the future of the industry:
- Definition Difficulties: It is difficult to define “ethical” AI: Context matters. There are cultural differences, and the nature and power of the actors in any given scenario are crucial. Norms and standards are currently under discussion, but global consensus may not be likely. In addition, formal ethics training and emphasis is not embedded in the human systems creating AI.
- Concentrated Control: Control of AI is concentrated in the hands of powerful companies and governments driven by motives other than ethical concerns. Over the next decade, AI development will continue to be aimed at finding ever-more-sophisticated ways to exert influence over people’s emotions and beliefs in order to convince them to buy goods, services and ideas.
- Systemic Opacity: The AI genie is already out of the bottle; abuses are already occurring, and some are barely visible and difficult to remedy. AI applications are already at work in systems that are opaque at best and, at worst, impossible to dissect.
As experts watch the global competition unfold, they continue to monitor scores of convenings and papers proposing ethical frameworks. These frameworks cover a host of issues, including transparency, justice and fairness, privacy, freedom and human autonomy, beneficence and non-maleficence, trust, sustainability and dignity.