Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade

A majority worries that the evolution of artificial intelligence by 2030 will continue to be primarily focused on optimizing profits and social control. They also cite the difficulty of achieving consensus about ethics. Many who expect progress say it is not likely within the next decade. Still, a portion celebrate coming AI breakthroughs that will improve life

How we did this

This is the 12th “Future of the Internet” canvassing Pew Research Center and Elon University’s Imagining the Internet Center have conducted together to get expert views about important digital issues. In this case, the questions focused on the prospects for ethical artificial intelligence (AI) by the year 2030. This is a nonscientific canvassing based on a nonrandom sample; this broad array of opinions about where current trends may lead in the next decade represents only the points of view of the individuals who responded to the queries.

Pew Research Center and Elon University’s Imagining the Internet Center built a database of experts to canvass from a wide range of fields, inviting people from several sectors, including professionals and policy people based in government bodies, nonprofits and foundations, technology businesses and think tanks, as well as people in networks of interested academics and technology innovators. The predictions reported here came in response to a set of questions in an online canvassing conducted between June 30 and July 27, 2020. In all, 602 technology innovators and developers, business and policy leaders, researchers and activists responded to at least one of the questions covered in this report. More on the methodology underlying this canvassing and the participants can be found in the final section.

Artificial intelligence systems “understand” and shape a lot of what happens in people’s lives. AI applications “speak” to people and answer questions when the name of a digital voice assistant is called out. They run the chatbots that handle customer-service issues people have with companies. They help diagnose cancer and other medical conditions. They scour the use of credit cards for signs of fraud, and they determine who could be a credit risk.

They help people drive from point A to point B and update traffic information to shorten travel times. They are the operating system of driverless vehicles. They sift applications to make recommendations about job candidates. They determine the material that is offered up in people’s newsfeeds and video choices.

They recognize people’s faces, translate languages and suggest how to complete people’s sentences or search queries. They can “read” people’s emotions. They beat them at sophisticated games. They write news stories, paint in the style of Vincent Van Gogh and create music that sounds quite like the Beatles and Bach.

Corporations and governments are charging ever more expansively into AI development. Increasingly, nonprogrammers can set up off-the-shelf, pre-built AI tools as they prefer.

As this unfolds, a number of experts and advocates around the world have become worried about the long-term impact and implications of AI applications. They have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will. Dozens of convenings and study groups have issued papers proposing what the tenets of ethical AI design should be, and government working teams have tried to address these issues. In light of this, Pew Research Center and Elon University’s Imagining the Internet Center asked experts where they thought efforts aimed at creating ethical artificial intelligence would stand in the year 2030. Some 602 technology innovators, developers, business and policy leaders, researchers and activists responded to this specific question:

By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?

In response, 68% chose the option declaring that ethical principles focused primarily on the public good will not be employed in most AI systems by 2030; 32% chose the option positing that ethical principles focused primarily on the public good will be employed in most AI systems by 2030.

This is a nonscientific canvassing, based on a nonrandom sample. The results represent only the opinions of the individuals who responded to the queries and are not projectable to any other population.

The bulk of this report covers these experts’ written answers explaining their responses. They sounded many broad themes about the ways in which individuals and groups are adjusting to AI systems. It is important to note that the responses were gathered in the summer of 2020, in a different cultural context: amid the pandemic, before COVID-19 vaccines had been approved, at a time when racial justice issues were particularly prominent in the U.S. and before the conclusion of the U.S. presidential election.

In addition, these responses came prior to the most recent studies aimed at addressing issues in ethical AI design and development. For instance, in early 2021 the Stanford Institute for Human-Centered Artificial Intelligence released an updated AI Index Report, the IEEE deepened its focus on setting standards for AI systems and the U.S. National Security Commission on AI, headed by tech leaders including Eric Schmidt, Andy Jassy, Eric Horvitz, Katharina McFarland and Robert Work, released its massive report on accelerating innovation while defending against malign uses of AI.

The key themes these experts voiced in the written elaborations explaining their choices are outlined in the shaded boxes below about “worries” and “hopes.”

Worries: The main developers and deployers of AI are focused on profit-seeking and social control, and there is no consensus about what ethical AI would look like

Even as global attention turns to the purpose and impact of artificial intelligence (AI), many experts worry that ethical behaviors and outcomes are hard to define, implement and enforce. They point out that the AI ecosystem is dominated by competing businesses seeking to maximize profits and by governments seeking to surveil and control their populations.

  • It is difficult to define “ethical” AI: Context matters. There are cultural differences, and the nature and power of the actors in any given scenario are crucial. Norms and standards are currently under discussion, but global consensus may not be likely. In addition, formal ethics training and emphasis is not embedded in the human systems creating AI.
  • Control of AI is concentrated in the hands of powerful companies and governments driven by motives other than ethical concerns: Over the next decade, AI development will continue to be aimed at finding ever-more-sophisticated ways to exert influence over people’s emotions and beliefs in order to convince them to buy goods, services and ideas.
  • The AI genie is already out of the bottle, abuses are already occurring, and some are barely visible and hard to remedy: AI applications are already at work in “black box” systems that are opaque at best and, at worst, impossible to dissect. How can ethical standards be applied under these conditions? While history has shown that when abuses arise as new tools are introduced societies always adjust and work to find remedies, this time it’s different: AI is a major threat.
  • Global competition, especially between China and the U.S., will matter more to the development of AI than any ethical issues: There is an arms race between the two tech superpowers that overshadows concerns about ethics. Plus, the two countries define ethics in different ways. The acquisition of techno-power is the real impetus for advancing AI systems. Ethics takes a back seat.

Source: Nonscientific canvassing of select experts conducted June 30-July 27, 2020.
“Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm in the Next Decade”
PEW RESEARCH CENTER and ELON UNIVERSITY’S IMAGINING THE INTERNET CENTER, 2021

Hopes: Progress is being made as AI spreads and shows its value; societies have always found ways to mitigate the problems arising from technological evolution

Artificial intelligence (AI) applications are already doing amazing things. Further breakthroughs will only add to this. The unstoppable rollout of new AI is inevitable. The development of harm-reducing strategies is also inevitable. Indeed, AI systems themselves can be used to identify and fix problems arising from unethical systems. The high-level global focus on ethical AI in recent years has been productive and is moving society toward agreement around the idea that further AI development should focus on beneficence, nonmaleficence, autonomy and justice.

  • AI advances are inevitable; we will work on fostering ethical AI design: Imagine a future where even more applications emerge to help make people’s lives easier and safer. Health care breakthroughs are coming that will allow better diagnosis and treatment, some of which will emerge from personalized medicine that radically improves the human condition. All systems can be enhanced by AI; thus, it is likely that support for ethical AI will grow.
  • A consensus around ethical AI is emerging and open-source solutions can help: There has been extensive study and discourse around ethical AI for several years, and it is bearing fruit. Many groups working on this are focusing on the already-established ethics of the biomedical community.
  • Ethics will evolve and progress will come as different fields show the way: No technology endures if it broadly delivers unfair or unwanted outcomes. The market and legal systems will drive out the worst AI systems. Some fields will be quicker off the mark in getting ethical AI rules and code in place, and they will point the way for laggards.

Source: Nonscientific canvassing of select experts conducted June 30-July 27, 2020.
“Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm in the Next Decade”
PEW RESEARCH CENTER and ELON UNIVERSITY’S IMAGINING THE INTERNET CENTER, 2021

The respondents whose insights are shared in this report focus their lives on technology and its study. They addressed some of the toughest questions that cultures confront. How do you apply ethics to any situation? Is maximum freedom the ethical imperative or is maximum human safety? Should systems steer clear of activities that substantially impact human agency, allowing people to make decisions for themselves, or should they be set up to intervene when it seems clear that human decision-making may be harmful?

They wrestled with the meaning of such grand concepts as beneficence, nonmaleficence, autonomy and justice (the foundational considerations of bioethicists) when it comes to tech systems. Some described their approach as a comparative one: It’s not whether AI systems alone produce questionable ethical outcomes, it’s whether the AI systems are less biased than the current human systems and their known biases. A share of these respondents began their comments on our question by arguing that the issue is not, “What do we want AI to be?” Instead, they noted the issue should be, “What kind of humans do we want to be? How do we want to evolve as a species?”

Many experts noted that much is at stake in these arguments. AI systems will be used in ways that affect people’s livelihoods and well-being – their jobs, their family environment, their access to things like housing and credit, the way they move around, the things they buy, the cultural activities to which they are exposed, their leisure activities and even what they believe to be true. One respondent noted, “Rabelais used to say, ‘Science without conscience is the ruin of the soul.’”

In the section below, we quote some of the experts who gave wide-ranging answers to our question about the future of ethical AI. After that, there is a chapter covering the responses that touched on the most troubling concerns these experts have about AI and another chapter with comments from those who expressed hope these issues will be sorted out by the year 2030 or soon thereafter.

The respondents’ remarks reflect their personal positions and are not the positions of their employers; the descriptions of their leadership roles help identify their background and the locus of their expertise. Some responses are lightly edited for style and readability.

Following is a selection of some of the most comprehensive overarching responses shared by 35 of the 602 thought leaders participating in this canvassing.

Is it possible to become comfortable with not knowing?

Barry Chudakov, founder and principal of Sertain Research, said, “Before answering whether AI will mostly be used in ethical or questionable ways in the next decade, a key question for guidance going forward will be, What is the ethical framework for understanding and managing artificial intelligence? Our ethical frameworks grew out of tribal wisdom, which was set down in so-called holy books that were the foundation of religions. These have been the ethical frameworks for the Judeo-Christian–Islamic–Buddhist world. While the humanitarian precepts of these teachings are valid today, modern technologies and artificial intelligence raise a host of AI quandaries these frameworks simply don’t address. Issues such as management of multiple identities; the impingement of the virtual world on the actual world and how boundaries should be drawn – if boundaries should be drawn; striking a balance between screen time and real-world time; parsing, analyzing and improving the use of tracking data to ensure individual liberty; collecting, analyzing and manipulating data exhaust from online ventures to ensure citizen privacy; the use of facial recognition technologies, at the front door of homes and by municipal police forces, to stop crime. That is a small set of examples, but there are many more that extend to air and water pollution, climate degradation, warfare, finance and investment trading and civil rights.

“Our ethical book is half-written. While we would not suggest our existing ethical frameworks have no value, there are pages and chapters missing. Further, while we have a host of regulatory injunctions such as speed limits, tax rates, mandatory housing codes and permits, etc., we consider our devices so much a part of our bodies that we use them without a moment’s thought for their effects upon the user. We accept the algorithms that enhance our searches and follow us around the internet and suggest another brand of facial moisturizer as a new wrinkle on a convenience and rarely give it a second thought. We do not acknowledge that our technologies change us as we use them; that our thinking and behaviors are altered by the cyber effect (Mary Aiken); that devices and gadgets don’t just turn us into gadget junkies, they may abridge our humanity, compassion, empathy and social fabric. As Greg Brockman, co-founder of OpenAI, remarked: ‘Now is the time to ask questions. Think about the kinds of thoughts you wish people had inventing fire, starting the industrial revolution, or [developing] atomic power.’

“Will AI mostly be used in ethical or questionable ways the next decade? I would start answering this question by referencing what Derrick de Kerckhove described recently in his ‘Five Words for the Future’: Big data is a paradigmatic change from networks and databases. The chief characteristic of big data is that the information does not exist until the question. It is not like the past where you didn’t know where the information was; it was somewhere, and you just had to find it. Now, and it’s a big challenge to intelligence, you create the answer by the question. (Ethics then effectively becomes) ‘How do you create the right question for the data?’ So, for AI to be mostly used in ethical ways, we must become comfortable with not knowing; with needing to ask the right question and understanding that this is an iterative process that is exploratory – not dogmatic. Beginner’s mind (Shunryu Suzuki) becomes our first principle – the understanding from which ethics flows. Many of our ethical frameworks have been built on dogmatic injunctions: Thou shalt and shalt not. Thus, big data effectively reimagines ethical discourse: If until you ask the question, you will not hear or know the answer, you proceed from unknowing. With that understanding, for AI to be used in ethical ways, and to avoid questionable approaches, we must begin by reimagining ethics itself.”

No matter how this complex problem is tackled, responses will be piecemeal and limited

Mike Godwin, former general counsel for the Wikimedia Foundation and creator of Godwin’s Law, wrote, “The most likely outcome, even in the face of increasing public discussions and convenings regarding ethical AI, will be that governments and public policy will be slow to adapt. The costs of AI-powered technologies will continue to decline, making deployment prior to privacy guarantees and other ethical safeguards more likely. The most likely scenario is that some kind of public abuse of AI technologies will come to light, and this will trigger reactive limitations on the use of AI, which will either be blunt-instrument categorical restrictions on its use or (more likely) a patchwork of particular ethical limitations addressed to particular use cases, with unrestricted use occurring outside the margins of these limitations.”

Sometimes there are no good answers, only varieties of bad outcomes

Jamais Cascio, research fellow at the Institute for the Future, observed, “I expect that there will be an effort to explicitly include ethical systems in AI that have direct interaction with humans but largely in the most clear-cut, unambiguous situations. The most important ethical dilemmas are ones where the correct behavior by the machine is situational: Health care AI that intentionally lies to memory care patients rather than re-traumatize them with news of long-dead spouses; military AI that recognizes and refuses an illegal order; all of the ‘trolley problem’-type dilemmas where there are no good answers, only varieties of bad outcomes. But, more importantly, the vast majority of AI systems will be deployed in systems for which ethical questions are indirect, even if they ultimately have outcomes that could be harmful.

“High-frequency trading AI will not be programmed to consider the ethical results of stock purchases. Deepfake AIs will not have built-in restrictions on use. And so forth.

“What concerns me the most about the wider use of AI is the lack of general awareness that digital systems can only manage problems that can be put in a digital format. An AI can’t reliably or consistently handle a problem that can’t be quantified. There are situations and systems for which AI is a perfect tool, but there are important arenas – largely in the realm of human behavior and personal interaction – where the limits of AI can be problematic. I would hate to see a world where some problems are ignored because we can’t easily design an AI response.”

AI is more capable than humans of delivering unemotional ethical judgment

Marcel Fafchamps, professor of economics and senior fellow at the Center on Democracy, Development and the Rule of Law at Stanford University, commented, “AI is just a small cog in a big system. The main danger currently associated with AI is that machine learning reproduces past discrimination – e.g., in judicial processes for setting bail, sentencing or parole review. But if there hadn’t been discrimination in the first place, machine learning would have worked fine. This means that AI, in this example, offers the possibility of improvement over unregulated social processes.

“AI is just a small cog in a big system. The main danger currently associated with AI is that machine learning reproduces past discrimination.”


Marcel Fafchamps, professor of economics and senior fellow at the Center on Democracy, Development and the Rule of Law at Stanford University

“A more subtle danger is when humans are actually more generous than machine-learning algorithms. For instance, it has been shown that judges are more lenient toward first offenders than machine learning in the sense that machine learning predicts a high probability of reoffending, and this probability is not taken into account by judges when sentencing. In other words, judges give first offenders ‘a second chance,’ a moral compass that the algorithm lacks. But, more generally, the algorithm only does what it is told to do: If the law that has been voted on by the public ends up throwing large fractions of poor young males in jail, then that’s what the algorithm will implement, removing the judge’s discretion to do some minor adjustment at the margin. Don’t blame AI for that: Blame the criminal justice system that has been created by voters.

“A more pernicious development is the loss of control people will have over their immediate environment, e.g., when their home appliances will make choices for them ‘in their interest.’ Again, this is not really new. But it will occur in a new way. My belief is as follows:

  1. By construction, AI implicitly or explicitly integrates ethical principles, whether people realize it or not. This is most easily demonstrated in the case of self-driving cars but will apply to all self-‘something’ technology, including health care AI apps, for instance. A self-driving car must, at some point, decide whether to protect its occupants or protect other people on the road. A human driver would make a choice partly based on social preferences, as has been shown for instance in ‘The Moral Machine Experiment’ (Nature, 2018), partly based on moral considerations (e.g., did the pedestrian have the right to be on the path of the car at that time? In the March 2018 fatality in Tempe, Arizona, a human driver could have argued that the pedestrian ‘appeared out of nowhere’ in order to be exonerated).
  2. The fact that AI integrates ethical principles does not mean that it integrates ‘your’ preferred ethical principles. So the question is not whether it integrates ethical principles, but which ethical principles it integrates.

“Here, the main difficulty will be that human morality is not always rational or even predictable. Hence, whatever principle is built into AI, there will be situations in which the application of that ethical principle to a particular situation will be found unacceptable by many people, no matter how well-meant that principle was. To minimize this possibility, the guideline at this point in time is to embed into AI whatever factual principles are applied by courts. This should minimize court litigation. But, of course, if the principles applied by courts are detrimental to certain groups, this will be reproduced by AI.

“What would be really novel would be to take AI as an opportunity to introduce more coherent ethical judgment than what people make based on an immediate emotional reaction. For instance, if the pedestrian in Tempe had been a just-married young bride, a pregnant woman or a drug offender, people would judge the outcome differently, even though, at the moment of the accident, this could not be deduced by the driver, whether human or AI. That does not make good sense: An action cannot be judged differently based on a consequence that was materially unpredictable to the perpetrator. AI can be an opportunity to improve the ethical behavior of cars (and other apps), based on rational principles instead of knee-jerk emotional reaction.”

“When it comes to AI, we should pay close attention to China, which has talked openly about its plans for cyber sovereignty. But we should also remember that there are cells of rogue actors who could cripple our economies simply by mucking with the power or traffic grids, causing traffic spikes on the internet or locking us out of our connected home appliances.”

Amy Webb, founder of the Future Today Institute

Global politics and rogue actors are oft-ignored aspects to consider

Amy Webb, founder of the Future Today Institute, wrote, “We’re living through a precarious moment in time. China is shaping the world order in its own image, and it is exporting its technologies and surveillance systems to other countries around the world. As China expands into African countries and throughout Southeast Asia and Latin America, it will also begin to eschew operating systems, technologies and infrastructure built by the West. China has already announced that it will no longer use U.S.-made computers and software. China is rapidly expanding its 5G and mobile footprints. At the same time, China is drastically expanding its trading partners. While India, Japan and South Korea have plenty of technologies to offer the world, it would appear as though China is quickly ascending to global supremacy. At the moment, the U.S. is enabling this, and our leaders do not appear to be thinking about the long-term consequences.

“When it comes to AI, we should pay close attention to China, which has talked openly about its plans for cyber sovereignty. But we should also remember that there are cells of rogue actors who could cripple our economies simply by mucking with the power or traffic grids, causing traffic spikes on the internet or locking us out of our connected home appliances. These aren’t big, obvious signs of aggression, and that is a problem for many countries, including the United States. Most governments don’t have a paradigm describing a constellation of aggressive actions. Each action on its own might be insignificant. What are the escalation triggers? We don’t have a definition, and that creates a strategic vulnerability.”

Concentrated wealth works against hope for a Human Spring and social justice

Stowe Boyd, consulting futurist expert in technological evolution and the future of work, noted, “I have projected a social movement that would require careful application of AI as one of several major pillars. I’ve called this the Human Spring, conjecturing that a worldwide social movement will arise in 2023, demanding the right to work and related social justice issues, a massive effort to counter climate catastrophe, and efforts to control artificial intelligence. AI, judiciously used, can lead to breakthroughs in many areas. But widespread automation of many kinds of work – unless introduced gradually, and not as fast as profit-driven companies would like – could be economically destabilizing.

“I’m concerned that AI will most likely be concentrated in the hands of corporations who are in the business of concentrating wealth for their owners and not primarily driven by bettering the world for all of us. AI applied in narrow domains that are really beyond the reach of human cognition – like searching for new ways to fold proteins to make new drugs or optimizing logistics to minimize the number of miles that trucks drive every day – is a sensible and safe application of AI. But AI directed toward making us buy consumer goods we don’t need or surveilling everyone moving through public spaces to track our every move, well, that should be prohibited.”

The principal use of AI is likely to remain convincing people to buy things they don’t need

Jonathan Grudin, principal researcher with the Natural Interaction Group at Microsoft Research, said, “The past quarter-century has seen an accelerating rise of online bad actors (not all of whom would agree they are bad actors) and an astronomical rise in the costs of efforts to combat them, with AI figuring in this. We pose impossible demands: We would like social media to preserve individual privacy but also identify Russian or Chinese hackers – which will require sophisticated construction of individual behavior patterns.

“The principal use of AI is likely to be finding ever more sophisticated ways to convince people to buy things that they don’t really need, leaving us deeper in debt with no money to contribute to efforts to combat climate change, environmental catastrophe, social injustice and inequality and so on.”

User-experience designers must play a key role in shaping human control of systems

Ben Shneiderman, distinguished professor of computer science and founder of the Human-Computer Interaction Lab at the University of Maryland, commented, “Ethical principles (responsibility, fairness, transparency, accountability, auditability, explainability, reliability, resilience, safety, trustworthiness) are a good starting point, but much more is needed to bridge the gap with the realities of practice in software engineering, organization management and independent oversight. … I see promising early signs. A simple step is a flight data recorder for every robot and AI system. The methods that have made civil aviation so safe could be adapted to record what every robot and AI system does, so that when errors occur, the forensic investigation will have the data it needs to understand what went wrong and make enforceable, measurable, testable improvements. AI applications can bring many benefits, but they are more likely to succeed when user-experience designers have a leading role in shaping human control of highly automated systems.”
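Shneiderman’s “flight data recorder” idea can be pictured as an append-only log that every automated decision is written to before the corresponding action is taken, so investigators can later reconstruct what the system saw and why it acted. The sketch below is a minimal, hypothetical illustration in Python; the class names, fields and the loan-screening example are our own assumptions, not part of any existing standard or of Shneiderman’s proposal.

    import json
    import time
    from dataclasses import dataclass, asdict

    @dataclass
    class DecisionRecord:
        """One 'flight data recorder' entry: enough context to reconstruct
        what the system saw, what it decided and how confident it was."""
        timestamp: float
        system_id: str
        inputs: dict          # features or sensor readings the model used
        model_version: str
        decision: str
        confidence: float
        operator_override: bool = False

    class FlightDataRecorder:
        """Append-only log of automated decisions, written before the
        action is executed so that failures can be investigated later."""
        def __init__(self, path: str):
            self.path = path

        def record(self, rec: DecisionRecord) -> None:
            with open(self.path, "a", encoding="utf-8") as f:
                f.write(json.dumps(asdict(rec)) + "\n")

    # Example: log a hypothetical loan-screening decision before acting on it.
    recorder = FlightDataRecorder("decisions.log")
    recorder.record(DecisionRecord(
        timestamp=time.time(),
        system_id="loan-screener-01",
        inputs={"income": 42000, "postal_code": "27514"},
        model_version="2030.1",
        decision="refer_to_human_review",
        confidence=0.58,
    ))

In civil aviation the analogous record is tamper-resistant and standardized; making such logs mandatory and enforceable for AI systems is the harder institutional step Shneiderman points to.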

AI systems today fetishize efficiency, scale and automation; should embrace social justice

danah boyd, founder and president of the Data & Society Research Institute, and principal researcher at Microsoft, explained, “We misunderstand ethics when we think of it as a binary, when we think that things can be ethical or unethical. A true commitment to ethics is a commitment to understanding societal values and power dynamics – and then working toward justice.

“Most data-driven systems, especially AI systems, entrench existing structural inequities into their systems by using training data to build models. The key here is to actively identify and combat these biases, which requires the digital equivalent of reparations. While most large corporations are willing to talk about fairness and eliminating biases, most are not willing to entertain the idea that they have a responsibility for data justice. These systems are also primarily being built within the context of late-stage capitalism, which fetishizes efficiency, scale and automation. A truly ethical stance on AI requires us to focus on augmentation, localized context and inclusion, three goals that are antithetical to the values justified by late-stage capitalism. We cannot meaningfully talk about ethical AI until we can call into question the logics of late-stage capitalism.”

“These systems are also primarily being built within the context of late-stage capitalism, which fetishizes efficiency, scale and automation. A truly ethical stance on AI requires us to focus on augmentation, localized context and inclusion, three goals that are antithetical to the values justified by late-stage capitalism. We cannot meaningfully talk about ethical AI until we can call into question the logics of late-stage capitalism.”

danah boyd, founder and president of the Data & Society Research Institute, and principal researcher at Microsoft

If we don’t fix this, we can’t even imagine how bad it will get when AI is creating AI

Gary A. Bolles, chair for the future of work at Singularity University, responded, “I hope we will shift the mindset of engineers, product managers and marketers from ethics and human centricity as a tack-on after AI products are released, to a model that guarantees ethical development from inception. Everyone in the technology development food chain will have the tools and incentives to ensure the creation of ethical and beneficial AI-related technologies, so there is no additional effort required. Massive energy will be focused on new technologies that can sense when new technologies are created that violate ethical guidelines and automatically mitigate those impacts.

“Humans will gain tremendous benefits as an increasing amount of technology advocates for them automatically. My concerns: None of this may happen, if we don’t change the financial structure. There are far too many incentives – not just to cut corners but to deliberately leave out ethical and inclusive functions, because those technologies aren’t perceived to make as much money, or to deliver as much power, as those that ignore them. If we don’t fix this, we can’t even imagine how much off the rails this can go once AI is creating AI.”

Even ethical people think in terms of using tech on humans instead of the opposite

Douglas Rushkoff, well-known media theorist, author and professor of media at City University of New York, wrote, “Why should AI become the very first technology whose development is dictated by moral principles? We haven’t done it before, and I don’t see it happening now. Most basically, the reason I think AI won’t be developed ethically is that AI is being developed by companies looking to make money – not to improve the human condition. So, while there will be a few simple AIs used to optimize water use on farms or help manage other limited resources, I think the majority will be used on people.

“My concern is that even the ethical people still think in terms of using technology on human beings instead of the other way around. So, we may develop a ‘humane’ AI, but what does that mean? It extracts value from us in the most ‘humane’ way possible?”

AIs built to be reciprocally competitive could keep an eye on each other, report bad things

David Brin, physicist, futures thinker and author of “Earth” and “Existence,” commented, “Isaac Asimov in his ‘Robots’ series conceived a future when ethical matters would be foremost in the minds of designers of AI brains, not for reasons of judiciousness, but in order to quell the fears of an anxious public, and hence Asimov’s famed ‘Three Laws of Robotics.’ No such desperate anxiety about AI seems to surge across today’s populace, perhaps because we are seeing our AI advances in more abstract ways, mostly on screens and such, not in powerful, clanking mechanical men. Oh, there are serious conferences on this topic. I’ve participated in many. Alas, statements urging ethical consideration in AI development are at best palliatives. I am often an outlier, proposing that AIs’ ‘ethical behavior’ be promoted the way it is in most humans – especially most males – via accountability.

“If AIs are many and diverse and reciprocally competitive, then it will be in their individual interest to keep an eye on each other and report bad things, because doing so will be to their advantage. This depends on giving them a sense of separate individuality. It is a simple recourse, alas seldom even discussed.”

Within the next 300 years, humans will be replaced by their own sentient creations

Michael G. Dyer, professor emeritus of computer science at UCLA, expert in natural language processing, responded, “Ethical software is an ambiguous notion and includes:

“Consider that you, in the more distant future, own a robot and you ask it to get you an umbrella because you see that it might rain today. Your robot goes out and sees a little old lady with an umbrella. Your robot takes the umbrella away from her and returns to hand it to you. That is a robot without ethical reasoning capability. It has a goal, and it achieves that goal without considering the effect of its plan on the goals of other agents; therefore, ethical planning is a much more complicated form of planning because it has to take into account the goals and plans of other agents. Another example. You tell your robot that Mr. Mean is your enemy (vs. friend). In this case, the robot might choose a plan to achieve your goal that, at the same time, harms some goal of Mr. Mean.

“Ethical reasoning is more complicated than ethical planning, because it requires building inverted ‘trees’ of logical (and/or probabilistic) support for any beliefs that themselves might support a given plan or goal. For example, if a robot believes that goal G1 is wrong, then the robot is not going to plan to achieve G1. However, if the robot believes that agent A1 has goal G1, then the robot might generate a counterplan to block A1 in executing the predicted plan (or plans) of agent A1 to achieve G1 (which is an undesirable goal for the robot). Software that is trained on data to categorize/classify already exists and is extremely popular and has been and will continue to be used to also classify people (does Joe go to jail for five years or 10 years? Does Mary get that job? etc.).

“Software that performs sophisticated moral reasoning will not be widespread by 2025 but will become more common in 2030. (You asked for predictions, so I am making them.) Like any technology, AI can be used for good or evil. Face recognition can be used to enslave everyone (à la Orwell’s ‘Nineteen Eighty-Four’) or to track down serial killers. Technology depends on how humans use it (since self-aware sentient robots are still at least 40 years away). It is possible that a ‘critical mass’ of intelligence could be reached, in which an AI entity works on improving its own intelligent design, thus entering into a positive feedback loop resulting rapidly in a super-intelligent form of AI (e.g., see D. Lenat’s Eurisko work done years ago, in which it invented not only various structures but also new heuristics of invention). A research project that also excites me is that of computer modeling of the human connectome. One could then build a humanoid form of intelligence without understanding how human neural intelligence actually works (which could be quite dangerous).

“I am concerned and also convinced that, at some point within the next 300 years, humanity will be replaced by its own creations, once they become sentient and more intelligent than ourselves. Computers are already smarter at many tasks, but they are not an existential threat to humanity (at this point) because they lack sentience. AI chess- (and now Go-) playing systems beat world grand masters, but they are not aware that they are playing a game. They currently lack the ability to converse (in human natural languages, such as English or Chinese) about the games they play, and they lack their own autonomous goals. However, subfields of AI include machine learning and computational evolution. AI systems are right now being evolved to survive (and learn) in simulated environments and such systems, if given language comprehension abilities (being developed in the AI field of natural language processing), would then achieve a form of sentience (awareness of one’s awareness and ability to communicate that awareness to others, and an ability to debate beliefs via reasoning, counterfactual and otherwise, e.g., see work of Judea Pearl).”
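Dyer’s distinction between goal-only planning and ethical planning can be made concrete with a toy filter over candidate plans: the ethical planner checks each plan’s side effects against the goals it knows other agents hold and discards plans that defeat them. The sketch below is a simplified, hypothetical illustration of that idea in Python; the umbrella scenario is Dyer’s, but the data structures and function names are ours and stand in for what would be a far richer model of goals, plans and beliefs.

    from dataclasses import dataclass, field

    @dataclass
    class Plan:
        name: str
        achieves: set                             # goals of the planning agent this plan satisfies
        harms: set = field(default_factory=set)   # other agents' goals this plan defeats

    def goal_only_planner(plans, my_goal):
        """Picks any plan that achieves the goal, ignoring everyone else."""
        return next((p for p in plans if my_goal in p.achieves), None)

    def ethical_planner(plans, my_goal, others_goals):
        """Like goal_only_planner, but rejects plans whose side effects
        defeat goals held by other agents."""
        for p in plans:
            if my_goal in p.achieves and not (p.harms & others_goals):
                return p
        return None

    # Toy version of the umbrella scenario.
    plans = [
        Plan("take_ladys_umbrella", achieves={"owner_stays_dry"}, harms={"lady_stays_dry"}),
        Plan("buy_umbrella_at_store", achieves={"owner_stays_dry"}),
    ]
    others_goals = {"lady_stays_dry"}

    print(goal_only_planner(plans, "owner_stays_dry").name)              # take_ladys_umbrella
    print(ethical_planner(plans, "owner_stays_dry", others_goals).name)  # buy_umbrella_at_store

Ethical reasoning in Dyer’s sense goes further still, since the system must also justify which goals count as wrong and anticipate other agents’ plans, which this sketch does not attempt.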

There are challenges, but better systems will emerge to improve the human condition

Marjory S. Blumenthal, director of the science, technology and policy program at RAND Corporation, observed, “This is the proverbial onion; there is no simple answer. Some of the challenge is in the lifecycle – it begins with how the data are collected, labeled (if they are for training) and then used, possibly involving different actors with different views of what is ethical. Some of the challenges involve the interest-balancing of companies, especially startups, that have always placed function and product delivery over security and privacy.

“Some of the challenges reflect the fact that, in addition to privacy and security for some applications, safety is also a concern (and there are others). Some of the challenges reflect the fact that, even with international efforts like that of the IEEE, views of what ethics are appropriate differ around the world.

“Today’s AI mania implies that a lot of people are rushing to be able to say that they use or produce AI, and anything rushed will have compromises. Notwithstanding the concerns, the enthusiasm for AI builds on a long history of improvements in processing hardware, data-handling capability and algorithms. Better systems for education and training should be available and should enable the kind of customization long promised but seldom achieved. Aids to medical diagnoses should become more credible, along with aids to development of new therapies. The support provided by today’s ‘smart speakers’ should become more meaningful and more useful (especially if clear attention to privacy and security comes with the increased functionality).”

Ethical AI is definitely being addressed, creating a rare opportunity to deploy it positively

Ethan Zuckerman, director of MIT’s Center for Civic Media and associate professor at the MIT Media Lab, commented, “The activists and academics advocating for ethical uses of AI have been remarkably successful in having their concerns aired even as harms of misused AI are just becoming apparent. The campaigns to stop the use of facial recognition because of racial biases are a precursor of a larger set of conversations about serious ethical issues around AI. Because these pioneers have been so active in putting AI ethics on the agenda, I think we have a rare opportunity to deploy AI in a vastly more thoughtful way than we otherwise might have.”

Failures despite good intentions loom ahead, but society will still reap benefits

Vint Cerf, Internet Hall of Fame member and vice president and chief internet evangelist at Google, observed, “There will be a good-faith effort, but I am skeptical that the good intentions will necessarily result in the desired outcomes. Machine learning is still in its early days, and our ability to predict various kinds of failures and their consequences is limited. The ML design space is huge and largely unexplored. If we have trouble with ordinary software whose behavior is at least analytic, ML is another story. And our track record on normal software stinks (buggy code!). We are, however, benefiting enormously from many ML applications, including speech recognition and language translation, search efficiency and effectiveness, medical diagnosis, exploration of massive data to find patterns, trends and unique properties (e.g., pharmaceuticals). Discovery science is benefiting (e.g., finding planets around distant stars). Pretty exciting stuff.”

“There will be a good-faith effort, but I am skeptical that the good intentions will necessarily result in the desired outcomes. Machine learning is still in its early days, and our ability to predict various kinds of failures and their consequences is limited.”

Vint Cerf, Internet Hall of Fame member and vice president and chief internet evangelist at Google

Commitment can bring positive results; all bets are off when it comes to weapons

Susan Etlinger, industry analyst for Altimeter, wrote, “AI is, fundamentally, an idea about how we can make machines that replicate some aspects of human ability. So, we should expect to see ethical norms around bias, governance and transparency become more common, much the same way we’ve seen the auto industry and others adopt safety measures like seatbelts, airbags and traffic signals over time. But of course people are people, so for every ethical principle there will always be someone who ignores or circumvents it.

“I’m heartened by some of the work I’ve seen from the large tech companies. It’s not consistent, it’s not enough, but there are enough people who are genuinely committed to using technology responsibly that we will see some measure of positive change. Of course, all claims of AGI – artificial general intelligence – are immediately suspect, not only because it’s still hypothetical at this point, but because we haven’t even ironed out the governance implications of automation. And all bets are off when we are talking about AI-enabled weaponry, which will require a level of diplomacy, policy and global governance similar to nuclear power.”

More transparency in digital and human systems can emerge from all of this

Esther Dyson, internet pioneer, journalist, entrepreneur and executive founder of Wellville, responded, “With luck, we’ll increase transparency around what AI is doing (as well as around what people are doing), because it will be easier to see the impact of decisions made by both people and algorithms. Cue the research about what time of day you want to go to trial (e.g., before or after the judge has lunch). The more we use AI to reveal such previously hidden patterns, the better for us all. So, a lot depends on society’s willingness to look at the truth and to act/make decisions accordingly. With luck, a culture of transparency will cause this to happen. But history shows that a smooth trajectory toward enlightenment is unlikely.”

AI making decisions on its own is an understandable but possibly unstoppable worry

Brad Templeton, internet pioneer, futurist, activist and chair emeritus of the Electronic Frontier Foundation, said, “For now, at least, and probably to 2030, AI is a tool, not an actor in its own right. It will not be good or evil, but it will be used with good and evil intent and also for unintended reasons. But this is not a question for a simple survey. People are writing books about this question. To go into a few of the popular topics: The use of AI to replace jobs is way overblown. We have 150 years of Chicken Little predictions that machines would take all the jobs, and they’ve always been wrong – first, because in most cases the machines didn’t take the jobs, and second, because we weren’t as bothered as predicted when they did. There are more bank tellers today than in 1970, it is reported. At the same time, half of us worked in agriculture in 1900, and now a small percentage do.

“The privacy worries are real, including the undefined threat that AI in the future will be able to examine the data of the present (which we are recording, but can’t yet process) in ways that will come back to bite you. I call this the threat of ‘time travelling robots from the future.’ They don’t really go back in time, but the AI of the future can affect what you do today. The fears of bias are both real and overblown. Yes, we will encode our biases into AIs. At the same time, the great thing about computers is once you see a problem you can usually fix it. Studies have shown it’s nearly impossible for humans to correct their biases, even when aware of them. For machines, that will be nothing. Strangely, when some people hear ‘AIs will be able to do one-third of the tasks you do in your work,’ some of them react with fear of losing a job. The other group reacts with, ‘Shut up and take my money!’ – they relish not having to do those tasks.

“When we start to worry about AI with agency – making decisions on its own – it is understandable why people worry about that. Unfortunately, relinquishment of AI development is not a choice. It just means the AIs of the future are built by others, which is to say your rivals. You can’t pick a world without AI; you can only pick a world where you have it or not.”

It will be very difficult to predict what will be important and how things will work

John L. King, a professor at the University of Michigan School of Information, commented, “There will be a huge increase in the discussion of revolutionary AI in daily life, but on closer inspection, things will be more incremental than most imagine. The ethical issues will sneak up on us as we move more slowly than people think when, suddenly, we cross some unforeseen threshold (it will be nonlinear) and things get serious. It will be very difficult to predict what will be important and how things will work.”

The public must take action to better align corporate interests with the public good

David Karger, professor at MIT’s Computer Science and Artificial Intelligence Laboratory, said, “The question as framed suggests that AI systems will be thinking by 2030. I don’t believe that’s the case. In 2030, AI systems will continue to be machines that do what their human users tell them to do. So, the important question is whether their human users will employ ethical principles focused primarily on the public good. Since that isn’t true now, I don’t expect it will be true in 2030 either. Just like now, most users of AI systems will be for-profit corporations, and just like now, they will be focused on profit rather than social good. These AI systems will certainly enable corporations to do a much better job of extracting profit, likely with a corresponding decrease in public good, unless the public itself takes action to better align the profit-interests of these corporations with the public good.

“In great part, this requires the passage of laws constraining what corporations can do in pursuit of profit; it also means the government quantifying and paying for public goods so that companies have a profit motive in pursuing them.

“Even in this time of tremendous progress, I find little to excite me about AI systems. In our frenzy to enhance the capabilities of machines, we are neglecting the existing and latent capabilities of human beings, where there is just as much opportunity for progress as there is in AI. We should be directing far more attention to research on helping people learn better, helping them interact online better and helping them make decisions better.”

AI tools must be designed with input from diverse groups of those affected by them

Beth Noveck, director, NYU Governance Lab and its MacArthur Research Network on Opening Governance, responded, “Successful AI applications depend upon the use of large quantities of data to develop algorithms. But a great deal of human decision-making is also involved in the design of such algorithms, beginning with the choice about what data to include and exclude. Today, most of that decision-making is done by technologists working behind closed doors on proprietary private systems.

“If we are to realize the positive benefits of AI, we first need to change the governance of AI and ensure that these technologies are designed in a more participatory fashion with input and oversight from diverse audiences, including those most affected by the technologies. While AI can help to increase the efficiency and decrease the cost, for example, of interviewing and selecting job candidates, these tools need to be designed with workers lest they end up perpetuating bias.

“While AI can make it possible to diagnose disease better than a single doctor can with the naked eye, if the tool is designed only using data from white men, it may be less optimal for diagnosing diseases among Black women. Until we commit to making AI more transparent and participatory, we will not realize its positive potential or mitigate the significant risks.”

There is no way to force unethical players to follow the ethics playbook

Sam S. Adams, a 24-year veteran of IBM, now working as a senior research scientist in artificial intelligence for RTI International, architecting national-scale knowledge graphs for global good, wrote, “The AI genie is completely out of the bottle already, and by 2030 there will be dramatic increases in the utility and universal access to advanced AI technology. This means there is practically no way to force ethical use in the fundamentally unethical fractions of global society.

“The multimillennial problem with ethics has always been: Whose ethics? Who decides and then who agrees to comply? That is a fundamentally human problem that no technical advance or even existential threat will totally eliminate. Basically, we are stuck with each other and hopefully at least a large fraction will try to make the best of it. But there is too much power and wealth available for those who will use advanced technology unethically, and universal access via cloud, IoT [Internet of Things] and open-source software will make it all too easy for an unethical player to exploit.

“I believe the only realistic path is to provide an open playing field. That universal access to the technology at least arms both sides equally. This may be the equivalent of a mutually assured destruction policy, but to take guns away from the good guys only means they can’t defend themselves from the bad guys anymore.”

AI for personalized medicine could lead to the ‘Brave New World’ of Aldous Huxley

Joël Colloc, professor of computer sciences at Le Havre University, Normandy, responded, “Most researchers in the public domain have an ethical and epistemological culture and do research to find new ways to improve the lives of humanity. Rabelais used to say, ‘Science without conscience is the ruin of the soul.’ Science provides powerful tools. When these tools are placed only in the hands of private interests, for the sole purpose of making profit and getting even more money and power, the use of science can lead to deviances and even uses against the states themselves – and it is increasingly difficult to enforce the laws on these companies, which do not necessarily have the public interest as their concern. It all depends on the degree of wisdom and ethics of the leader.

“Hope: Some leaders have an ethical culture and principles that can lead to interesting goals for citizens. All applications of AI (especially when they are in the field of health, the environment, etc.) should require a project submission to an ethics board composed of scientists and respect general charters of good conduct. A monitoring committee can verify that the ethics and the state of the art are well respected by private companies.

“The concern is what I see: clinical trials on people in developing countries where people are treated like guinea pigs under pretext that one claims to discover knowledge by applying deep learning algorithms. This is disgusting. AI can offer very good tools, but it can also be used to profile and to sort, monitor and constrain fundamental freedoms as seen in some countries. On AI competition, it is the acceptability and ability to make tools that end users find useful in improving their lives that will make the difference. Many gadgets or harmful devices are offered.

“I am interested in mastering time in clinical decision-making in medicine and how AI can take it into account. What scares me most is the use of AI for personalized medicine that, under the guise of prevention, will lead to a new eugenics and all the cloning drifts, etc., that can lead to the ‘Brave New World’ of Aldous Huxley.”

We have no institutions that can impose ethical constraints upon AI designers

Susan Crawford, a professor at Harvard Law School and former special assistant in the Obama White House for Science, Technology and Innovation Policy, noted, “For AI, just substitute ‘digital processing.’ We have no basis on which to believe that the animal spirits of those designing digital processing services, bent on scale and profitability, will be restrained by some internal memory of ethics, and we have no institutions that could impose those constraints externally.”

Unless the idea that all tech is neutral is corrected there is little hope

Paul Jones, professor emeritus of information science at the University of North Carolina, Chapel Hill, observed, “Unless, as I hope happens, the idea that all tech is neutral is corrected, there is little hope or incentive to create ethical AI. Current applications of AI and their creators rarely interrogate ethical issues except as some sort of parlor game. More often I hear data scientists disparaging what they consider ‘soft sciences’ and claiming that their socially agnostic engineering approach or their complex statistical approach is a ‘hard science.’ While I don’t fear an AI war, a Čapek-like robot uprising, I do fear the tendency not to ask the tough questions of AI – not just of general AI, where most of such questions are entertained, but of narrow AI, where most progress and deployment are happening quickly.

“I love to talk to Google about music, news and trivia. I love my home being alert to my needs. I love doctors getting integrated feedback on lab work and symptoms. I could not now live without Google Maps. But I am aware that ‘We become what we behold. We shape our tools and then our tools shape us,’ as Father John Culkin reminded us.

“For most of us, the day-to-day conveniences of AI by far outweigh the perceived dangers. Dangers will come on slow and then cascade before most of us notice. That’s not limited to AI. Can AI help us see the dangers before they cascade? And if AI does, will it listen and react properly?”

AI can and will be engineered toward utopian and dystopian ends

Dan S. Wallach, a professor in the systems group at Rice University’s Department of Computer Science, said, “Building an AI system that works well is an exceptionally hard task, currently requiring our brightest minds and huge computational resources. Adding the constraint that it also be built in an ethical fashion is harder still.

“Consider, for example, an AI intended for credit rating. It would be unethical for that AI to consider gender, race or a variety of other factors. Nonetheless, even if those features are explicitly excluded from the training set, the training data might well encode the biases of human raters, and the AI could pick up on secondary features that serve as proxies for the excluded ones (e.g., silently inferring race from income and postal address).
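
To make the proxy-feature concern concrete, here is a minimal, hypothetical sketch in Python. All data, column names and effect sizes are synthetic assumptions, not drawn from any real credit system: even though the protected attribute is excluded from the model's inputs, a simple classifier trained on biased historical decisions still scores the two groups differently by leaning on correlated columns such as postal district and income.

```python
# Hypothetical illustration of the proxy-variable problem: a credit model
# that never sees the protected attribute can still reconstruct it from
# correlated features. All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (never given to the model).
group = rng.integers(0, 2, n)

# Proxies that correlate with the protected attribute in this synthetic data:
# postal district and income are skewed by group membership.
postal_district = rng.normal(loc=group * 2.0, scale=1.0, size=n)
income = rng.normal(loc=50_000 - group * 12_000, scale=8_000, size=n)

# Historical labels produced by biased human raters: approval depends on
# income but also, unfairly, on group membership itself.
logit = 0.00008 * (income - 45_000) - 1.2 * group
approved = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train only on the "allowed" features -- group is explicitly excluded.
X = np.column_stack([postal_district, income])
model = LogisticRegression().fit(X, approved)

# The model's scores still separate the two groups via the proxies.
scores = model.predict_proba(X)[:, 1]
print("mean approval score, group 0:", scores[group == 0].mean())
print("mean approval score, group 1:", scores[group == 1].mean())
```

Running the sketch prints noticeably different mean approval scores for the two groups, even though the group column was never shown to the model.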

“Consider further the use of AI systems in warfare. The big buzzword today is ‘autonomy,’ which is to say, weapon systems that can make on-the-fly tactical decisions without human input while still following their orders. An ethical stance might say that we should never develop such systems, under any circumstances, yet exactly such systems are already in conception or development now and might well be used in the field by 2030.

“Without a doubt, AI will do great things for us, whether it’s self-driving cars that significantly reduce automotive death and injury, or whether it is computers reading radiological scans and identifying tumors earlier in their development than any human radiologist might do reliably. But AI will also be used in horribly dystopian situations, such as China’s rollout of facial-recognition camera systems throughout certain western provinces in the country. As such, AI is just a tool, just like computers are a tool. AI can and will be engineered toward utopian and dystopian ends.”

Government shouldn’t regulate AI until government is dedicated to serving the needs of the people

Shel Israel, Forbes columnist and author of many books on disruptive technologies, commented, “Most developers of AI are well-intentioned, but issues that have been around for over 50 years remain unresolved:

  1. Should AI replace people or back them up? I prefer the latter in many cases. But economics drive business and returns to shareholders. So current trends will continue for more than five years, because the problems will not become overwhelmingly obvious for at least that long.
  2. Google already knows who we are, where we are, the context of our activities and who we are with. Five years from now, technology will know our health, when we will die, whether it will be by natural causes, and so on down the line. Will AI help a patient by warning them of a likely cancer so they can get help, or warn an employer so it can get rid of those employees before they become an expense? I think both will occur, so AI will make things both better and worse.
  3. The technology itself is neither good nor evil. It is just a series of algorithms. It is how people will use it that will make a difference. Will government regulate it better? I doubt it. Should it? Not until we can have better governments who are more dedicated to serving the needs of everyday people.”

We are ill-prepared for the onslaught and implications of bad AI applications

Calton Pu, professor and chair in the School of Computer Science at Georgia Tech, wrote, “The main worry about the development of AI and ML (machine learning) technologies is the current AI/ML practice of using fixed training data (ground truth) for experimental evaluation as proof that they work. This proof is only valid for the relatively small and fixed training datasets. The gap between the limited ground truth and the actual reality has severely restricted the practical applicability of AI/ML systems, which rely on human operators to handle the gap. For example, the chatbots used in customer support contact centers can only handle the subset of most common conversations. …

“There is a growing gap between AI systems and the evolving reality, which explains the difficulties in the actual deployment of autonomous vehicles. This growing gap appears to be a blind spot for current AI/ML researchers and companies. With all due respect to the billions of dollars being invested, it is an inconvenient truth. As a result of this growing gap, the ‘good’ AI applications will see decreasing applicability, as their ground truth lags behind the evolving actual reality. However, I imagine the bad guys will see this growing gap soon and utilize it to create ‘bad’ AI applications by feeding their AI systems with distorted ground truth through skillful manipulations of training data. This can be done with today’s software tools. These bad AI applications can be distorted in many ways, one of them being unethical. With the AI/ML research community turning a blind eye to the growing gap, we will be ill-prepared for the onslaught of these bad AI applications. An early illustration of this kind of attack was Microsoft’s Tay chatbot, introduced in 2016 and deactivated within one day due to inappropriate postings learned from purposeful racist interactions.

“The global competition over AI systems with fixed training data is a game. These AI systems compete within the fixed ground truth and rules. Current AI/ML systems do quite well in games with fixed rules and data, e.g., AlphaGo. However, these AI systems modeled after games are unaware of the growing gap between their ground truth (within the game) and the evolving actual reality out there. … To change these limitations, the ML/AI community and companies will need to face the inconvenient truth, the growing gap, and start to work on the growing gap instead of simply shutting down AI systems that no longer work when the gap grows too wide, as has been the case with the Microsoft Tay chatbot and Google Flu Trends, among others.”
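
As a rough illustration of the gap Pu describes between a fixed training snapshot and an evolving reality, the following sketch (synthetic data and a made-up drift mechanism, not any production system) trains a classifier on one distribution, then evaluates it both on fresh data from that same distribution and on data where the underlying relationship has shifted. Accuracy holds up on the former and degrades on the latter.

```python
# Toy illustration of the gap between fixed "ground truth" and evolving
# reality: a model that scores well on its frozen training distribution
# degrades once the world drifts. Everything here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def make_data(n, shift=0.0):
    # One informative feature; `shift` moves the class boundary to mimic
    # the world changing after the training snapshot was collected.
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] + shift > 0).astype(int)
    return x, y

# Fixed training snapshot ("ground truth" at collection time).
X_train, y_train = make_data(5_000)
model = LogisticRegression().fit(X_train, y_train)

# Evaluation on the same distribution looks excellent.
X_same, y_same = make_data(5_000)
print("accuracy, unchanged world:", accuracy_score(y_same, model.predict(X_same)))

# Evaluation after the world drifts away from the snapshot does not.
X_drift, y_drift = make_data(5_000, shift=1.5)
print("accuracy, drifted world:  ", accuracy_score(y_drift, model.predict(X_drift)))
```

The point is not the specific numbers but the pattern: evaluation against the frozen ground truth keeps looking good while performance on the changed world quietly erodes.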

AI may not be as useful in the future due to its dependency on past data and patterns

Greg Sherwin, vice president for engineering and information technology at Singularity University, responded, “Explainable AI will become ever more important. As privileged classes on the edges get caught up in the vortex of negative algorithmic biases, political will must shift toward addressing the challenges of algorithmic oppression for all. For example, companies will be sued – unsuccessfully at first – for algorithmic discrimination. Processes for redress and appeal will need to be introduced to challenge the decisions of algorithms. Meanwhile, the hype cycle will drop for the practical value of AI.

“As the world and society become more defined by VUCA [volatile, uncertain, complex, ambiguous] forces, AI will become less useful, given its complete dependence on past data and existing patterns and its ineffectiveness in novel situations. AI will simply become much like what computers were to society a couple of decades ago: algorithmic tools in the background, with inherent and many known flaws (bugs, etc.), that are no longer revered for their mythical innovative novelty but are rather understood in context, within limits and boundaries that are more popularly understood.”

How will AI be used to assess, direct, control and alter human interaction?

Kathleen M. Carley, director of the Center for Computational Analysis of Social and Organizational Systems at Carnegie Mellon University, commented, “While there is a move toward ethical AI, it is unlikely to be realized in the next decade. First, there are a huge number of legacy systems that would need to be changed. Second, what it means for AI to be ethical is not well understood; and once it is understood, it is likely that there will be different ethical foundations that are not compatible with each other, which means that AI might be ethical by one framework but not by another. Third, for international conflict and for conflict with nonstate actors, terror groups and crime groups – there will be AI on both sides. It is unlikely that both sides would employ the same ethical frameworks.

“What gives me the most hope is that most people, regardless of where they are from, want AI and technology in general to be used in more ethical ways. What worries me the most is that, without a clear understanding of the ramifications of ethical principles, we will put in place guidelines and policies that will cripple the development of new technologies that would better serve humanity.

“AI will save time, allow for increased control over your living space, do boring tasks, help with planning, auto-park your car, fill out grocery lists, remind you to take medicines, support medical diagnosis, etc.

“The issues that are both exciting and concerning center on how AI will be used to assess, direct, control and alter human interaction and discourse. Where AI meets human social behavior is a difficult area. Tools that auto-declare messages as disinformation could be used by authoritarian states to harm individuals.”

We don’t really know what human, ethical, public-interest decision-making looks like

Chris Savage, a leading expert in legal and regulatory issues based in Washington, D.C., wrote, “AI is the new social network, by which I mean: Back in 2007 and 2008, it was easy to articulate the benefits of robust social networking, and people adopted the technology rapidly, but its toxic elements – cognitive and emotional echo chambers, economic incentives of the platforms to drive engagement via stirred-up negative emotions, rather than driving increased awareness and acceptance (or at least tolerance) of others – took some time to play out.

“Similarly, it is easy to articulate the benefits of robust and ubiquitous AI, and those benefits will drive substantial adoption in a wide range of contexts.

“But we simply do not know enough about what ‘ethical’ or ‘public-interested’ algorithmic decision-making looks like to build those concepts into actually deployed AI (in fact, we don’t know enough about what human ‘ethical’ and ‘public-interested’ decision-making looks like to model it effectively). Trying to address those concerns will take time and money on the part of the AI developers, with no evident return on that expenditure. So, it won’t happen, or will be short-changed, and – as with social media – I predict a ‘Ready, Fire, Aim’ scenario for the deployment of AI. On a longer timescale – give me 50 years instead of 10 – I think AI will be a net plus even in ethical/public interest terms. But the initial decade or so will be messy.”

Why the moral panic? Does this really require an entirely new branch of ethics?

Jeff Jarvis, director of the Tow-Knight Center and professor of journalism innovation at City University of New York, said, “AI is an overbroad label for sets of technical abilities to gather, analyze and learn from data to predict behavior, something we have done in our heads since some point in our evolution as a species. We did likewise with computers once we got them, getting help looking for correlations, asking ‘what if?’ and making predictions.

“Now, machines will make some predictions – often without explanation – better than we could, and that is leading to a level of moral panic sufficient to inspire questions such as this.

“The ethical challenges are not vastly different than they have ever been: Did you have permission to gather the data you did? Were you transparent about its collection and use? Did you allow people a choice in taking part in that process? Did you consider the biases and gaps in the data you gathered? Did you consider the implications of acting on mistaken predictions? And so on. I have trouble seeing this treated as if it is an entirely new branch of ethics, for that brings an air of mystery to what should be clear and understandable questions of responsibility.”

Perhaps traditional notions of civil liberties need to be revised and updated

David Krieger, director of the Institute for Communication and Leadership, based in Switzerland, commented, “It appears that, in the wake of the pandemic, we are moving faster toward the data-driven global network society than ever before. Some have predicted that the pandemic will end the ‘techlash,’ since what we need to survive is more information and not less about everyone and everything. This information must be analyzed and used as quickly as possible, which spurs on investments in AI and big data analytics.

“Calls for privacy, for regulation of tech giants and for moratoriums on the deployment of tracking, surveillance and AI are becoming weaker and losing support throughout the world. Perhaps traditional notions of civil liberties need to be revised and updated for a world in which connectivity, flow, transparency and participation are the major values.”

Post-2040, we’ll see truly powerful personal AIs that will help improve civil society

John Smart, foresight educator, scholar, author, consultant and speaker, predicted, “Ethical AI frameworks will be used in high-reliability and high-risk situations, but the frameworks will remain primitive and largely human-engineered (top-down) in 2030. Truly bottom-up, evolved and selected collective ethics and empathy (affective AI), similar to what we find in our domestic animals, won’t emerge until we have truly bottom-up, evo-devo [evolutionary developmental biology] approaches to AI. AI will be used well and poorly, like any tool. The worries are the standard ones: plutocracy, lack of transparency, unaccountability of our leaders. The real benefits of AI will come when we’ve moved into a truly bottom-up style of AI development, with hundreds of millions of coders using open-source AI code on GitHub, with natural language development platforms that lower the complexity of altering code, with deeply neuro-inspired commodity software and hardware, and with both evolutionary and developmental methods being used to select, test and improve AI. In that world, which I expect post-2040, we’ll see truly powerful personal AIs. Personal AIs are what really matter to improving civil society. The rest typically serve the plutocracy.”

The sections of this report that follow organize hundreds of additional expert quotes under headings that reflect the common themes listed in the tables at the beginning of this report. For more on how this canvassing was conducted, including full question wording, see “About this canvassing” at the end of this report.
