Artificial Intelligence and the Future of Humans

1. Concerns about human agency, evolution and survival

A clear majority of the responses from these experts outlined challenges, fears or concerns about the AI-infused future. The five most-often-mentioned concerns were: 1) the use of AI reduces individuals’ control over their lives; 2) surveillance and data systems designed primarily for efficiency, profit and control are inherently dangerous; 3) displacement of human jobs by AI will widen economic and digital divides, possibly leading to social upheaval; 4) individuals’ cognitive, social and survival skills will be diminished as they become dependent on AI; and 5) citizens will face increased vulnerabilities, such as exposure to cybercrime and cyberwarfare that spin out of control, and the possibility that essential organizations will be endangered by weaponized information. A few also worried about the wholesale destruction of humanity. The sections of this chapter cover experts’ answers tied to these themes.

The use of AI reduces individuals’ control over their lives

Autonomous systems can reduce or eliminate the need for human involvement in some tasks. Today’s ever-advancing artificial narrow intelligence (ANI) tools – for instance, search engines and digital “agents” such as Siri, Alexa and Cortana – are not close to reaching the goal of human-like artificial general intelligence (AGI). They are, however, continually becoming more powerful thanks to developments in machine learning and natural language processing and advances in materials science, networking, energy-storage and hardware capabilities.

ANI is machine intelligence that equals or exceeds people’s abilities or efficiency at a specific task. For years, code-based tools in robots and other systems have performed repetitive tasks like factory-floor assembly activities. These tools are quickly evolving to master human traits such as reason, logic, learning, task performance and creativity. Today’s smart, networked, software-equipped devices, cars, digital assistants and platforms, such as Google search and Facebook social mapping, accomplish extremely complex tasks. The systems underpinning today’s global financial markets, businesses, militaries, police forces, and medical, energy and industrial operations are all dependent upon networked AI of one type or another.

What is the future of humans in an age of accelerating technological change?

Many experts in this canvassing said that as AI advances, human autonomy and agency are at risk. They note that decision-making on key aspects of life is being ceded to code-driven tools. Individuals who function in this digital world sacrifice, to varying degrees, their independence, right to privacy and power over choice. Many of the experts who worry about this say humans accede to it in order to stay competitive, to participate socially and professionally in the world, to be entertained and to get things done. They say people hand over some control of their lives because of the perceived advantages they gain via digital tools – efficiency, convenience and superior pattern recognition, data storage, and search-and-find capabilities. Here is a selection of responses from these experts that touch on this:

An anonymous respondent summed up the concerns of many, writing, “The most-feared reversal in human fortune of the AI age is loss of agency. The trade-off for the near-instant, low-friction convenience of digital life is the loss of context about and control over its processes. People’s blind dependence on digital tools is deepening as automated systems become more complex and ownership of those systems rests with the elite.”

By 2030, we may cram more activities and interactions into our days, but I don’t think that will make our lives ‘better.’ – Baratunde Thurston

Baratunde Thurston, futurist, former director of digital at The Onion and co-founder of comedy/technology start-up Cultivated Wit, said, “For the record, this is not the future I want, but it is what I expect given existing default settings in our economic and sociopolitical system preferences. … The problems to which we are applying machine learning and AI are generally not ones that will lead to a ‘better’ life for most people. That’s why I say in 2030, most people won’t be better off due to AI. We won’t be more autonomous; we will be more automated as we follow the metaphorical GPS line through daily interactions. We don’t choose our breakfast or our morning workouts or our route to work. An algorithm will make these choices for us in a way that maximizes efficiency (narrowly defined) and probably also maximizes the profitability of the service provider. By 2030, we may cram more activities and interactions into our days, but I don’t think that will make our lives ‘better.’ A better life, by my definition, is one in which we feel more valued and happy. Given that the biggest investments in AI are on behalf of marketing efforts designed to deplete our attention and bank balances, I can only imagine this leading to days that are more filled but lives that are less fulfilled. To create a different future, I believe we must unleash these technologies toward goals beyond profit maximization. Imagine a mapping app that plotted your work commute through the most beautiful route, not simply the fastest. Imagine a communications app that facilitated deeper connections with people you deemed most important. These technologies must be more people-centric. We need to ask that they ask us, ‘What is important to you? How would you like to spend your time?’ But that’s not the system we’re building. All those decisions have been hoarded by the unimaginative pursuit of profit.”

Thad Hall, a researcher and co-author of “Politics for a Connected American Public,” added: “AI is likely to have benefits – from improving medical diagnoses to improving people’s consumer experiences. However, there are four aspects of AI that are very problematic. 1) It is likely to result in more economic uncertainty and dislocation for people, including employment issues and more need to change jobs to stay relevant. 2) AI will continue to erode people’s privacy as search becomes more thorough. China’s monitoring of populations illustrates what this could look like in authoritarian and Western countries, with greater facial recognition used to identify people and affect their privacy. 3) AI will likely continue to have biases that are negative toward minority populations, including groups we have not considered. Given that algorithms often have identifiable biases (e.g., favoring people who are white or male), they likely also have biases that are less well-recognized, such as biases that are negative toward people with disabilities, older people or other groups. These biases may ripple through society in unknown ways. Some groups are more likely to be monitored effectively. 4) AI is creating a world where reality can be manipulated in ways we do not appreciate. Fake videos, audio and similar media are likely to explode and create a world where ‘reality’ is hard to discern. The relativistic political world will become more so, with people having evidence to support their own reality or multiple realities that mean no one knows what is the ‘truth.’”

Thomas Schneider, head of International Relations Service and vice-director at the Federal Office of Communications (OFCOM) in Switzerland, said, “AI will help mankind to be more efficient, live safer and healthier, and manage resources like energy, transport, etc., more efficiently. At the same time, there are a number of risks that AI may be used by those in power to manipulate, control and dominate others. (We have seen this with every new technology: It can and will be used for good and bad.) Much will depend on how AI is governed: If we have an inclusive and bottom-up governance system of well-informed citizens, then AI will be used for improving our quality of life. If only a few people decide about how AI is used and what for, many others will be dependent on the decisions of these few and risk being manipulated by them. The biggest danger in my view is that there will be greater pressure on all members of our societies to live according to what ‘the system’ will tell us is ‘best for us’ to do and not to do, i.e., that we may lose the autonomy to decide ourselves how we want to live our lives, to choose diverse ways of doing things. With more and more ‘recommendations,’ ‘rankings’ and competition through social pressure and control, we may risk a loss of individual fundamental freedoms (including but not limited to the right to a private life) that we have fought for in the last decades and centuries.”

Bart Knijnenburg, assistant professor of computer science who is active in the Human Factors Institute at Clemson University, said, “Whether AI will make our lives better depends on how it is implemented. Many current AI systems (including adaptive content-presentation systems and so-called recommender systems) try to avoid information and choice overload by replacing our decision-making processes with algorithmic predictions. True empowerment will come from these systems supporting rather than replacing our decision-making practices. This is the only way we can overcome choice/information overload and at the same time avoid so-called ‘filter bubbles.’ For example, Facebook’s current post ranking systems will eventually turn us all into cat video watching zombies, because they follow our behavioral patterns, which may not be aligned with our preferences. The algorithms behind these tools need to support human agency, not replace it.”
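
Knijnenburg’s distinction between replacing and supporting decisions can be made concrete. The sketch below is hypothetical Python, with invented items and scores rather than any deployed system’s API: an engagement-optimized ranker silently picks for the user, while a supportive one surfaces a diverse shortlist and leaves the choice open.

```python
# Hypothetical sketch of the "replace vs. support" distinction.
# Items, scores and the diversity heuristic are invented for illustration.

items = [
    {"id": "cat_video",   "topic": "pets",    "predicted_clicks": 0.91},
    {"id": "local_news",  "topic": "civic",   "predicted_clicks": 0.42},
    {"id": "science_doc", "topic": "science", "predicted_clicks": 0.35},
    {"id": "cat_video_2", "topic": "pets",    "predicted_clicks": 0.89},
]

def replace_decision(items):
    """Engagement-optimized feed: silently serves whatever maximizes clicks."""
    return max(items, key=lambda i: i["predicted_clicks"])

def support_decision(items, n=3):
    """Preference-supporting feed: a topically diverse shortlist,
    with the final choice left to the user."""
    shortlist, seen = [], set()
    for item in sorted(items, key=lambda i: -i["predicted_clicks"]):
        if item["topic"] not in seen:
            shortlist.append(item)
            seen.add(item["topic"])
        if len(shortlist) == n:
            break
    return shortlist

print(replace_decision(items)["id"])               # cat_video, every time
print([i["id"] for i in support_decision(items)])  # one option per topic
```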

Peter Reiner, professor and co-founder of the National Core for Neuroethics at the University of British Columbia, commented, “I am confident that in 2030 both arms of this query will be true: AI-driven algorithms will substantially enhance our abilities as humans and human autonomy and agency will be diminished. Whether people will be better off than they are today is a separate question, and the answer depends to a substantial degree on how looming technological developments unfold. On the one hand, if corporate entities retain unbridled control over how AI-driven algorithms interact with humans, people will be less well off, as the loss of autonomy and agency will be largely to the benefit of the corporations. On the other hand, if ‘we the people’ demand that corporate entities deploy AI-algorithms in a manner that is sensitive to the issues of human autonomy and agency, then there is a real possibility for us to be better off – enhanced by the power of the AI-driven algorithm and yet not relegated to an impoverished seat at the decision-making table. One could even parse this further, anticipating that certain decisions can be comfortably left in the hands of the AI-driven algorithm, with other decisions either falling back on humans or arrived at through a combination of AI-driven algorithmic input and human decision making. If we approach these issues skillfully – and it will take quite a bit of collaborative work between ethicists and industry – we can have the best of both worlds. On the other hand, if we are lax in acting as watchdogs over industry we will be functionally rich and decisionally poor.”

Paul Vixie, an Internet Hall of Fame member known for designing and implementing several Domain Name System protocol extensions and applications, wrote, “Understanding is a perfect proxy for control. As we make more of the world’s economy non-understandable by the masses, we make it easier for powerful interests to practice control. Real autonomy or privacy or unpredictability will be seen as a threat and managed around.”

João Pedro Taveira, embedded systems researcher and smart grids architect for INOV INESC Inovação in Portugal, wrote, “Basically, we will lose several degrees of freedom. Are we ready for that? When we wake up to what is happening it might be too late to do anything about it. Artificial intelligence is a subject that must be studied philosophically, in open-minded, abstract and hypothetical ways. Using this perspective, the issues to be solved by humans are (but not limited to) AI, feelings, values, motivation, free will, solidarity, love and hate. Yes, we will have serious problems. Dropping the ‘artificial’ off AI, look at the concept of intelligence. As a computer-science person, I know that so-called ‘AI’ studies how an agent (a software program) increases its knowledge base using rules that are defined using pattern-recognition mechanisms. No matter which mechanisms are used to generate this rule set, the result will be always behavioral profiling. Right now, everybody uses and agrees to use a wide set of appliances, services and products without a full understanding of the information that is being shared with enterprises, companies and other parties. There’s a lack of needed regulation and audit mechanisms on who or what uses our information and how it is used and whether it is stored for future use. Governments and others will try to access this information using these tools by decree, arguing national security or administration efficiency improvements. Enterprises and companies might argue that these tools offer improvement of quality of service, but there’s no guarantee about individuals’ privacy, anonymity, individual security, intractability and so on.”
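
Taveira’s claim that pattern recognition over usage data always amounts to behavioral profiling is easy to see in miniature. The following sketch uses invented event data and no real service’s format; it shows how even a trivial log condenses into a profile without any step explicitly labeled “profiling.”

```python
# Invented illustration: routine usage logs condense into a behavioral profile.
from collections import Counter

events = [
    ("07:05", "fitness_app"), ("07:40", "news_app"), ("08:10", "maps"),
    ("12:30", "food_delivery"), ("22:15", "video"), ("23:50", "video"),
]

profile = {
    "apps_by_frequency": Counter(app for _, app in events),
    "active_hours": sorted({time.split(":")[0] for time, _ in events}),
}
print(profile)  # habits inferred from nothing but timestamps and app names
```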

Ramon Lopez de Mantaras, director of the Spanish National Research Council’s Artificial Intelligence Research Institute, said, “I do not think it is a good idea to give high levels of autonomy to AI systems. They are, and will be, weak AI systems without commonsense knowledge. They will have more and more competence, yes, but this will be competence without comprehension. AI machines should remain at the level of tools or, at most, assistants, always keeping the human in the loop. We should all read or re-read the book ‘Computer Power and Human Reason’ by Joseph Weizenbaum before deciding whether or not to give lots of autonomy to stupid machines.”

Oscar Gandy, emeritus professor of communication at the University of Pennsylvania, responded, “AI systems will make quite substantial and important contributions to the ability of health care providers to generate accurate diagnoses of maladies and threats to my well-being, now and in the future. I can imagine the development and deployment of systems in which my well-being is the primary basis of our relationship. I am less sure about how my access to and use of this resource may be constrained or distorted by the interests of the other actors (humans within profit/power-seeking orientations). I assume that they will be aided by their own AI systems informing them how to best present options to me. I am hopeful that we will have agents (whether private, social, governmental) whose interest and responsibility is in ensuring that my interests govern those relationships.”

Robert Epstein, senior research psychologist at the American Institute for Behavioral Research and Technology and the founding director of the Loebner Prize, a competition in artificial intelligence, said, “By 2030, it is likely that AIs will have achieved a type of sentience, even if it is not human-like. They will also be able to exercise varying degrees of control over most human communications, financial transactions, transportation systems, power grids and weapon systems. As I noted in my 2008 book, ‘Parsing the Turing Test,’ they will reside in the ‘InterNest’ we have been building for them, and we will have no way of dislodging them. How they decide to deal with humanity – to help us, ignore us or destroy us – will be entirely up to them, and there is no way currently to predict which avenue they will choose. Because a few paranoid humans will almost certainly try to destroy the new sentient AIs, there is at least a reasonable possibility that they will swat us like the flies we are – the possibility that Stephen Hawking, Elon Musk and others have warned about. There is no way, to my knowledge, of stopping this future from emerging. Driven by the convenience of connectivity, the greed that underlies business expansion and the pipedreams of muddle-headed people who confuse machine-like intelligence with biological intelligence, we will continue to build AIs we can barely understand and to expand the InterNest in which they will live – until the inevitable – whatever that proves to be – occurs.”

An attorney specializing in policy issues for a global digital rights organization commented, “I’m not sure, even today, whether the tech advances of the last 12 years have been net positive over the global population. We’ve seen a widening gap between the very rich and everybody else. That is likely bad for democracy. AI seems likely to make the employment/training problem worse in the U.S., and AI may have similar effects in countries that currently provide cheap labor. On the political-governmental side, AI will exacerbate current surveillance and accountability problems. I figure that AI will improve and speed up all biometric pattern recognition as well as DNA analysis and natural language processing. And though we know that much of this is biased, we’re not adequately counteracting the bias we know about. The companies who generate and disseminate AI technology have every incentive to continue. I’m not optimistic that collective action – at least in the U.S. system – will successfully counter those incentives.”

Brian Behlendorf, executive director of the Hyperledger project at The Linux Foundation and expert in blockchain technology, wrote, “I am concerned that AI will not be a democratizing power, but will enhance further the power and wealth of those who already hold it. This is because more data means better AI, and data is expensive to acquire, especially personal data, the most valuable kind. This is in contrast to networking technologies, whose benefits were shared fairly widely as the prices for components came down equally fast for everyone. One other reason: AI apps will be harder to debug than ordinary apps, and we already see hard-to-debug applications leading to disenfranchisement and deterioration of living. So, I do not take as a given that AI will enrich ‘most’ people’s lives over the next 12 years.”

While I do believe human-machine collaboration will bring many benefits to society over time, I fear that we will not have made enough progress by 2030 to ensure that benefits will be spread evenly… – Eileen Donahoe

Eileen Donahoe, executive director of the Global Digital Policy Incubator at Stanford University, commented, “While I do believe human-machine collaboration will bring many benefits to society over time, I fear that we will not have made enough progress by 2030 to ensure that benefits will be spread evenly or to protect against downside risks, especially as they relate to bias, discrimination and loss of accountability by that time.”

David Bray, executive director of People-Centered Internet, commented, “Hope: Human-machine/AI collaborations extend the abilities of humans while we (humans) intentionally strive to preserve values of respect, dignity and agency of choice for individuals. Machines bring together different groups of people and communities and help us work and live together by reflecting on our own biases and helping us come to understand the plurality of different perspectives of others. Big concern: Human-machine/AI collaborations turn out to not benefit everyone, only a few, and result in a form of ‘indentured servitude’ or ‘neo-feudalism’ that is not people-centered and not uplifting of people. Machines amplify existing confirmation biases and other human characteristics, resulting in sensationalist, emotion-ridden news and other communications that get page views and ad clicks yet lack nuance of understanding, resulting in tribalism and a devolution of open societies and pluralities to the detriment of the global human condition.”

Bernie Hogan, senior research fellow at Oxford Internet Institute, wrote, “The current political and economic climate suggests that existing technology, especially machine learning, will be used to create better decisions for those in power while creating an ever more tedious morass of bureaucracy for the rest. We see few examples of successful bottom-up technology, open source technology and hacktivism relative to the encroaching surveillance state and attention economy.”

Dan Buehrer, a retired professor of computer science formerly with the National Chung Cheng University in Taiwan, warned, “Statistics will be replaced by individualized models, thus allowing control of all individuals by totalitarian states and, eventually, by socially intelligent machines.”

Nathalie Marechal, doctoral candidate at the University of Southern California’s Annenberg School for Communication who researches the intersection of internet policy and human rights, said, “Absent rapid and decisive actions to rein in both government overreach and companies’ amoral quest for profit, technological developments – including AI – will bring about the infrastructure for total social control, threatening democracy and the right to individual self-determination.”

Katja Grace, contributor to the AI Impacts research project and a research associate with the Machine Intelligence Research Institute, said, “There is a substantial chance that AI will leave everyone worse off, perhaps radically so. The chance is less than 50 percent, but the downside risk is so large that, in expectation, the world might be worse off because of AI.”

David A. Banks, an associate research analyst with the Social Science Research Council, said, “AI will be very useful to a small professional class but will be used to monitor and control everyone else.”

Luis German Rodriguez Leal, teacher and researcher at the Universidad Central de Venezuela and consultant on technology for development, said, “Humankind is not addressing properly the issue of educating people about possibilities and risks of human-machine/AI collaboration. One can observe today the growing problems of ill-intentioned manipulation of information and technological resources. There are already plenty of examples about how decision-making is biased using big data, machine learning, privacy violations and social networks (just to mention a few elements) and one can see that the common citizen is unaware of how much of his/her will does not belong to him/her. This fact has a meaningful impact on our social, political, economic and private life. We are not doing enough to attend to this issue, and it is getting very late.”

Llewellyn Kriel, CEO of TopEditor International, a media services company based in Johannesburg, South Africa, wrote, “Current developments do not augur well for the fair growth of AI. Vast swaths of the population simply do not have the intellectual capacity or level of sophistication to understand 1) the technology itself and 2) the implications of its safe use. This entrenches and widens the digital divide in places like Africa. The socio-political implications of this breed deep primitive superstition, racial hatred toward whites and Asians who are seen as techno-colonialists and the growth of kleptocracies amid the current mushrooming of corruption.”

Steven Thompson, an author specializing in illuminating emerging issues and editor of “Androids, Cyborgs, and Robots in Contemporary Culture and Society,” wrote, “The keyword from the query is ‘dependence.’ I published pioneering quantitative research on internet addiction and dependency in 1996, and followed up 15 years later with a related, updated research talk on the future of AI and internet dependency at a UNESCO-sponsored conference on information literacy in Morocco. My expertise is in ethical and technological issues related to moving the internet appliance into the human body. … The internet is moving into the human body, and, in that process, societal statuses are altered, privileging some while abandoning others in the name of emerging technologies, and the global order is restructuring to the same effect. Think of net neutrality issues gone wild, corporately and humanly sustained with the privileges such creation and maintenance affords some members of society. Now think of the liberty issues arising from those persons who are digital outcasts, and wish to not be on the grid, yet will be forced to do so by society and even government edicts.”

Alan Mutter, a longtime Silicon Valley CEO, cable TV executive and now a teacher of media economics and entrepreneurism at the University of California, Berkeley, said, “The danger is that we will surrender thinking, exploring and experimentation to tools that hew to the rules but can’t color outside the lines. Would you like computers to select the president or decide if you need hip surgery?”

Dan Geer, a respondent who provided no identifying details, commented, “If you believe, as do I, that having a purpose to one’s life is all that enables both pride and happiness, then the question becomes whether AI will or will not diminish purpose. For the irreligious, AI will demolish purpose, yet if AI is truly intelligent, then AI will make serving it the masses’ purpose. Ergo …”

Cristobal Young, an associate professor of sociology at Cornell University specializing in economic sociology and stratification, commented, “I mostly base my response [that tech will not leave most people better off than they are today] on Twitter and other online media, which were initially praised as ‘liberation technology.’ It is clear that the internet has devastated professional journalism, filled the public sphere with trash that no one believes and degraded civil discourse. This isn’t about robots, but rather about how humans use the internet. Donald Trump himself says that without Twitter, he could never have been elected, and Twitter continues to be his platform for polarization, insult and attacks on the institutions of accountability.”

David J. Krieger, co-director of the Institute for Communication & Leadership in Lucerne, Switzerland, wrote, “The affordances of digital technologies bind people into information networks such that the network becomes the actor and intelligence as well as agency are qualities of the network as a whole and not any individual actors, whether human or non-human. Networks will have access to much more information than do any present-day actors and therefore be able to navigate complex environments, e.g., self-driving cars, personal assistants, smart cities. Typically, we will consult and cooperate with networks in all areas, but the price will be that we have no such thing as privacy. Privacy is indeed dead, but in the place of personal privacy management there will be network publicy governance [‘publicy’ is the opposite of privacy]. To ensure the use of these technologies for good instead of evil it will be necessary to dismantle and replace current divides between government and governed, workers and capitalists as well as to establish a working global governance.”

Wendy M. Grossman, author of “net.wars” and technology blogger, wrote, “2030 is 12 years from now. I believe human-machine AI collaboration will be successful in many areas, but that we will be seeing, like we are now over Facebook and other social media, serious questions about ownership and who benefits. It seems likely that the limits of what machines can do will be somewhat clearer than they are now, when we’re awash in hype. We will know by then, for example, how successful self-driving cars are going to be, and the problems inherent in handing off control from humans to machines in a variety of areas will also have become clearer. The big fight is to keep people from relying on experimental systems and turning off the legacy ones too soon – which is our current situation with the internet.”

Karl M. van Meter, founding editor of the Bulletin of Sociological Methodology and author of “Computational Social Science in the Age of Big Data,” said, “The well-being of the world’s population depends on governments making ‘intelligent’ decisions based on AI or other means. Moreover, environmental change may well be the determining factor for future well-being, with or without ‘intelligent’ decisions by world governments.”

Andrew Whinston, computer science professor and director of the Center for Research in Electronic Commerce at the University of Texas at Austin, said, “There are several issues. First, security problems do not get the attention needed. Second, the technology may be used to control the population – as we see developing in China. AI methodology is focused on prediction, at least so far, so methods to improve health or general welfare are lacking. Deep learning, which is getting the big hype, does not have a clear foundation. That makes it scientifically weak.”

An information administration manager responded, “We cede more and more decision-making and policymaking to self-interested parties in the private sphere. Our institutions are insufficiently nimble to keep up with the policy questions that arise, and attempts to regulate new industries are subverted by corrupt money politics at both the federal and state levels.”

There is no evidence that more AI will improve the lives of most people. In fact, the opposite is likely to be the case. There will be more unemployment, less privacy, etc. – An internet pioneer

An internet pioneer said, “Nothing in our current social, economic or political structures points to a positive outcome. There is no evidence that more AI will improve the lives of most people. In fact, the opposite is likely to be the case. There will be more unemployment, less privacy, etc.”

The following one-liners from anonymous respondents also tie into human agency:

  • An Internet Hall of Fame member commented, “AI will not leave most people better off than they are today because individuals will not be able to control their lives.”
  • A professor of AI and soft computing at a university in Italy said, “Development has brought humanity past the boundary, the survival limit; it is too easy to control technology in ways that are dangerous for people.”
  • An assistant professor of social justice wrote, “Technology magnifies what exists (for good or bad). There is simply more bad than good to be magnified.”
  • A professor of digital humanities at a Silicon-Valley-area university said, “Given increasing income disparity in much of the world, my fear is that AI will be used to repress the disenfranchised and create even more privilege for the few.”
  • A distinguished engineer and chief scientist at major technology companies commented, “Large actors will use AI for their benefit. Individual customers may have some benefits as a side effect, at a cost of lower autonomy.”
  • A professor of electrical engineering and innovation based in Europe said, “People will lose control of their lives, which will remain in the hands of a small group of experts or companies.”
  • A respondent based in Turkey wrote, “Due to the unknown logic of algorithms, we will lose our autonomy over our lives and everyday life decisions; humankind is depending on AI and not learning to be algorithmically literate.”
  • An engineer and chief operating officer said, “AI will be used to suppress rights.”
  • A technology fellow for a global organization commented, “I fear that AI will control many background choices with great implicating effects.”

Other anonymous respondents commented:

  • “More will be delegated to technology – smartphones, software. People will stop thinking or caring about ‘control’ and just delegate everything to ‘the system.’”
  • “You can deploy most any technology in ways that enhance freedom [and] autonomy [or] have the opposite effect.”
  • “With China aiming to ‘win’ the AI lead, I have serious doubts that any benefits will outweigh the negative effects on human rights for a majority of people.”
  • “AI is not intelligent; it is human-made, and therefore biased and unreliable. It cannot do now what it is claimed it can do.”
  • “Provided we are still locked in capitalism, I do not see how technology will help people stay engaged and empowered in our society.”
  • “My fear is that AI will be developed too quickly and that there may be severe repercussions once the genie is out of the bottle.”

Surveillance and data systems designed primarily for efficiency, profit and control are inherently dangerous

Who decides what about people’s code-defined lives, when, where, why and how? Many of these respondents cited concerns that the future of AI will be shaped by those driven by profit motives and a thirst for power. They note that many AI tools rely on individuals’ sharing of information, preferences, search strategies and data. Human values and ethics are not necessarily baked into the systems making people’s decisions for them. These experts worry that data-based decision-making can be prone to errors, biases and false logic or mistaken assumptions. And they argue that machine-based decisions often favor “efficiencies” in the name of profit or power that are extremely unfavorable to individuals and the betterment of the human condition.

Michael Kleeman, a senior fellow at the University of California, San Diego and board member at the Institute for the Future, wrote, “The utilization of AI will be disproportionate and biased toward those with more resources. In general, it will reduce autonomy, and, coupled with big data, it will reduce privacy and increase social control. There will be some areas where IA [intelligence augmentation] helps make things easier and safer, but by and large it will be a global net negative.”

A professor at a major U.S. university and expert in artificial intelligence as applied to social computing said, “As AI systems take in more data and make bigger decisions, people will be increasingly subject to their unaccountable decisions and non-auditable surveillance practices. The trends around democratic governance of AI are not encouraging. The big players are U.S.-based, and the U.S. is in an anti-regulation stance that seems fairly durable. Therefore, I expect AI technologies to evolve in ways that benefit corporate interests, with little possibility of meaningful public response.”

Justin Reich, executive director of MIT Teaching Systems Lab and research scientist in the MIT Office of Digital Learning, responded, “Systems for human-AI collaborations will be built by powerful, affluent people to solve the problems of powerful, affluent people. In the hands of autocratic leaders, AI will become a powerful tool of surveillance and control. In capitalist economies, human-AI collaboration will be deployed to find new, powerful ways of surveilling and controlling workers for the benefit of more-affluent consumers.”

Seth Finkelstein, consulting programmer at Finkelstein Consulting and EFF Pioneer Award winner, commented, “AI depends on algorithms and data. Who gets to code the algorithms and to challenge the results? Is the data owned as private property, and who can change it? As a very simple example, let’s take the topic of algorithmic recommendations for articles to read. Do they get tuned to produce suggestions which lead to more informative material – which, granted, is a relatively difficult task, and fraught with delicate determinations? Or are they optimized for ATTENTION! CLICKS! *OUTRAGE*!? To be sure, the latter is cheap and easy – and though it has its own share of political problems, they’re often more amenable to corporate management (i.e., what’s accurate vs. what’s unacceptable). There’s a whole structure of incentives that will push toward one outcome or the other.”
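
Finkelstein’s “structure of incentives” often comes down to a one-line choice of objective. The hypothetical sketch below (all titles and scores invented) shows the same ranking code producing very different feeds depending on which signal it is told to maximize.

```python
# Hypothetical sketch: one ranker, two objectives. All numbers are invented.
articles = [
    {"title": "Budget explainer", "informativeness": 0.9, "outrage_clicks": 0.20},
    {"title": "Celebrity feud",   "informativeness": 0.1, "outrage_clicks": 0.95},
    {"title": "Science briefing", "informativeness": 0.8, "outrage_clicks": 0.30},
]

def rank(articles, objective):
    """Sort descending by whichever signal the operator chooses to optimize."""
    return sorted(articles, key=lambda a: -a[objective])

print([a["title"] for a in rank(articles, "informativeness")])
print([a["title"] for a in rank(articles, "outrage_clicks")])
```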

Douglas Rushkoff, professor of media at City University of New York, responded, “The main reason I believe AI’s impact will be mostly negative is that we will be applying it mostly toward the needs of the market, rather than the needs of human beings. So while AI might get increasingly good at extracting value from people, or manipulating people’s behavior toward more consumption and compliance, much less attention will likely be given to how AI can actually create value for people. Even the most beneficial AI is still being measured in terms of its ability to provide utility, value or increase in efficiency – fine values, sure, but not the only ones that matter to quality of life.”

Annalie Killian, futurist and vice president for strategic partnerships at Sparks & Honey, wrote, “More technology does not make us more human; we have evidence for that now within 10 years of combining the smartphone device with persuasive and addictive designs that shape and hijack behavior. Technologists who are using emotional analytics, image-modification technologies and other hacks of our senses are destroying the fragile fabric of trust and truth that is holding our society together at a rate much faster than we are adapting and compensating – let alone comprehending what is happening. The sophisticated tech is affordable and investible in the hands of very few people who are enriching themselves and growing their power exponentially, and these actors are NOT acting in the best interest of all people.”

Collin Baker, senior AI researcher at the International Computer Science Institute at the University of California, Berkeley, commented, “I fear that advances in AI will be turned largely to the service of nation states and mega-corporations, rather than be used for truly constructive purposes. The positive potential, particularly in education and health care, is enormous, but people will have to fight to make it come about. … I hope that AI will get much better at understanding Gricean maxims for cooperative discourse and at understanding people’s beliefs, intentions and plans.”

Brian Harvey, lecturer on the social implications of computer technology at the University of California, Berkeley, said, “The question makes incorrect presuppositions, encapsulated in the word ‘we.’ There is no we; there are the owners and the workers. The owners (the 0.1%) will be better off because of AI. The workers (bottom 95%) will be worse off, as long as there are owners to own the AI, same as for any piece of technology.”

One of the world’s foremost social scientists studying human-technology interactions said, “My chief fear is face-recognition used for social control. Even Microsoft has begged for government regulation! Surveillance of all kinds is the future for AI. It is not benign if not controlled!”

Devin Fidler, futurist and founder of Rethinkery Labs, commented, “If earlier industrialization is any guide, we may be moving into a period of intensified creative destruction as AI technologies become powerful enough to overturn the established institutions and the ordering systems of modern societies. If the holes punched in macro-scale organizational systems are not explicitly addressed and repaired, there will be increased pressures on everyday people as they face not only the problems of navigating an unfamiliar new technology landscape themselves, but also the systemic failure of institutions they rely on that have failed to adapt.”

An anonymous respondent said, “My fear is that technology will further separate us from what makes us human and sensitive to others. My hope is that technology would be used to improve the quality of living, not supplant it. Much of the AI innovation is simply clogging our senses, stealing our time, increasing the channels and invasion of adverts. This has destroyed our phones, filled our mailboxes and crowded our email. No product is worth that level of incursion.”

Paola Perez, vice president of the Internet Society’s Venezuela chapter and chair of the LACNIC Public Policy Forum, responded, “Humans will be better with AI. Many problems will be solved, but many jobs are going to disappear, and there may be more poor people as a result. Will we see life extension? Maybe, and maybe not, because our dependence on technology may also be destructive to our health.”

Eliot Lear, principal engineer at Cisco Systems, predicted, “AI and tech will not leave most people better off than they are today. As always, technology outpaces our ability to understand its ramifications so as to properly govern its use. I have no reason to believe that we will have caught up by 2030.”

Olivia Coombe, a respondent who provided no identifying details, wrote, “Children learn from their parents. As AI systems become more complex and are given increasingly important roles in the functioning of day-to-day life, we should ask ourselves what we are teaching our artificial digital children. If we conceive and raise them in a world of individual self-interest, will they just strengthen these existing, and often oppressive, systems of capitalist competition? Or could they go their own way, aspiring to a life of entrepreneurship or collaboration? Worse yet, will they see the reverence we hold for empires and seek to build their own through conquest?”

AI will produce many advantages for many people, but it will also exacerbate many forms of inequality in society. – Peter Asaro

Peter Asaro, a professor at The New School and philosopher of science, technology and media who examines artificial intelligence and robotics, commented, “AI will produce many advantages for many people, but it will also exacerbate many forms of inequality in society. It is likely to greatly benefit a small group who design and control the technology, benefit a somewhat larger group of the already well-off in many ways but also potentially harm them in others, and for the vast majority of people in the world it will offer few visible benefits and be perceived primarily as a tool of the wealthy and powerful to enhance their wealth and power.”

Mark Deuze, a professor of media studies at the University of Amsterdam, wrote, “With the advances in AI and tech, the public debate grows over their impact. It is this debate that will contribute to the ethical and moral dimensions of AI, hopefully inspiring a society-wide discussion on what we want from tech and how we will take responsibility for that desire.”

Rob Frieden, professor and Pioneers Chair in Telecommunications and Law at Penn State University, said, “Any intelligent system depends on the code written to support it. If the code is flawed, the end product reflects those flaws. An old-school acronym spells this out: GIGO, Garbage In, Garbage Out. I have little confidence that AI can incorporate any and every real-world scenario, even with likely developments in machine learning. As AI expands in scope and reach, defects will have ever increasing impacts, largely on the negative side of the ledger.”

Anthony Judge, author, futurist, editor of the Encyclopedia of World Problems and Human Potential, and former head of the Union of International Associations, said, “AI will offer greater possibilities. My sense is that it will empower many (most probably 1% to 30%) and will disempower many (if not 99%). Especially problematic will be the level of complexity created for the less competent (notably the elderly) as is evident with taxation and banking systems – issues to which sysadmins are indifferent. For some it will be a boon – proactive companions (whether for quality dialogue or sex). Sysadmins will build in unfortunate biases. Missing will be the enabling of interdisciplinarity – as has long been possible but carefully designed out for the most dubious divide-and-rule reasons. Blinkered approaches and blind spots will set the scene for unexpected disasters – currently deniably incomprehensible (Black Swan effect). Advantages for governance will be questionable. Better oversight will be dubiously enabled.”

Stephanie Perrin, president of Digital Discretion, a data-privacy consulting firm, wrote, “There is a likelihood that, given the human tendency to identify risk when looking at the unknown future, AI will be used to attempt to predict risk. In other words, more and deeper surveillance will be used to determine who is a good citizen (purchaser, employee, student, etc.) and who [is] bad. This will find its way into public-space surveillance systems, employee-vetting systems (note the current court case where LinkedIn is suing data scrapers who offer to predict ‘flight risk’ employees), and all kinds of home-management systems and intelligent cars. While this might possibly introduce a measure of safety in some applications, the impact of fear that comes with unconscious awareness of surveillance will have a severe impact on creativity and innovation. We need that creativity as we address massive problems in climate change and reversing environmental impacts, so I tend to be pessimistic about outcomes.”

Alistair Knott, an associate professor specializing in cognitive science and AI at the University of Otago in Dunedin, New Zealand, wrote, “AI has the potential for both positive and negative impacts on society. [Negative impacts are rooted in] the current dominance of transnational companies (and tech companies in particular) in global politics. These companies are likely to appropriate the majority of advances in AI technology – and they are unlikely to spread the benefit of these advances throughout society. We are currently witnessing an extraordinary concentration of wealth in the hands of a tiny proportion of the world’s population. This is largely due to the mainstreaming of neoliberalism in the world’s dominant economies – but it is intensified by the massive success of tech companies, which achieve huge profits with relatively small workforces. The advance of AI technologies is just going to continue this trend, unless quite draconian political changes are effected that bring transnational companies under proper democratic control.”

Richard Forno, of the Center for Cybersecurity at the University of Maryland-Baltimore County, wrote, “AI is only as ‘smart’ and efficient as its human creators can make it. If AI in things like Facebook algorithms is causing this much trouble now, what does the future hold? The problem is less AI’s evolution and more about how humankind develops and uses it – that is where the real crisis in AI will turn out to be.”

Sam Punnett, research and strategy officer at TableRock Media, wrote, “The preponderance of AI-controlled systems are designed to take collected data and enable control advantage. Most of the organizations with the resources to develop these systems do so to enable advantages in commercial/financial transactions, manufacturing efficiency and surveillance. Self-regulation by industry has already been shown to fail (e.g., social media platforms and Wall Street). Government agencies are lagging in their will and understanding of the implications of the technology to effectively implement guidelines to curtail the impacts of unforeseen circumstances. As such, government participation will be reactive to the changes that the technology will bring. My greatest fear is a reliance on faulty algorithms that absolve responsibility while failing to account for exceptions.”

Luis Pereira, associate professor of electronics and nanotechnologies, Universidade NOVA de Lisboa, Portugal, responded, “I fear that more control and influence will be exerted on people, such as has started in China. There will be a greater wealth gap, benefits will not spread to all and a caste system will develop, unless a new social compact is put in place, which is unlikely. Widespread revolt is plausible.”

Stavros Tripakis, an associate professor of computer science at Aalto University in Finland and adjunct professor at the University of California, Berkeley, wrote, “‘1984,’ George Orwell, police state.”

A principal architect for a top-five technology company commented, “AI will enable vicious regimes to track citizens at all times. Mistaken identifications will put innocent people in jail and even execute them with no hope of appeal. In general, AI will only have a positive contribution in truly democratic states, which are dwindling in number.”

John Sniadowski, a director for a technology company, wrote, “As technology is currently instantiated it simply concentrates power into a smaller number of international corporations. That needs fixing for everyone to gain the best from AI.”

David Brake, senior lecturer in communications at the University of Bedfordshire, UK, said, “Like many colleagues I fear that AI will be framed as ‘neutral’ and ‘objective’ and thereby used as cover to make decisions that would be considered unfair if made by a human. If we do not act to properly regulate the use of AI we will not be able to interrogate the ways that AI decision-making is constructed or audit them to ensure their decisions are indeed fair. Decisions may also be made (even more than today) based on a vast array of collected data and if we are not careful we will be unable to control the flows of information about us used to make those decisions or to correct misunderstandings or errors which can follow us around indefinitely. Imagine being subject to repeated document checks as you travel around the country because you know a number of people who are undocumented immigrants and your movements therefore fit the profile of an illegal immigrant. And you are not sure whether to protest because you don’t know whether such protests could encourage an algorithm to put you into a ‘suspicious’ category which could get you harassed even more often.”

A longtime veteran of a pioneering internet company commented, “Profit motive and AI at scale nearly guarantee suffering for most people. It should be spiffy for the special people with wealth and power, though. Watching how machines are created to ensure addiction (to deliver ads) is a reminder that profit-driven exploitation always comes first. The push for driverless cars, too, is a push for increased profits.”

Joshua Loftus, assistant professor of information, operations and management sciences at New York University and co-author of “Counterfactual Fairness in Machine Learning,” commented, “How have new technologies shaped our lives in the past? It depends on the law, market structure and who wields political power. In the present era of extreme inequality and climate catastrophe, I expect technologies to be used by employers to make individual workers more isolated and contingent, by apps to make users more addicted on a second-by-second basis, and by governments for surveillance and increasingly strict border control.”

Eugene H. Spafford, internet pioneer and founder and executive director emeritus of the Center for Education and Research in Information Assurance and Security, commented, “Without active controls and limits, the primary adopters of AI systems will be governments and large corporations. Their use of it will be to dominate/control people, and this will not make our lives better.”

Michael Muller, a researcher in the AI interactions group for a global technology solutions provider, said it will leave some people better off and others not, writing, “For the wealthy and empowered, AI will help them with their daily lives – and it will probably help them to increase their wealth and power. For the rest of us, I anticipate that AI will help the wealthy and empowered people to surveil us, to manipulate us, and (in some cases) to control us or even imprison us. For those of us who do not have the skills to jump to the AI-related jobs, I think we will find employment scarce and without protections. In my view, AI will be a mixed and intersectional blessing at best.”

Estee Beck, assistant professor at the University of Texas at Arlington and author of “A Theory of Persuasive Computer Algorithms for Rhetorical Code Studies,” responded, “Tech design and policy affects our privacy in the United States so much so that most people do not think about the tracking of movements, behaviors and attitudes from smartphones, social media, search engines, ISPs [internet service providers] and even Internet of Things-enabled devices. Until tech designers and engineers build privacy into each design and policy decision for consumers, any advances with human-machine/AI collaboration will leave consumers with less security and privacy.”

Michael H. Goldhaber, an author, consultant and theoretical physicist who wrote early explorations on the digital attention economy, said, “For those without internet connection now, its expansion will probably be positive overall. For the rest we will see an increasing arms race between uses of control, destructive anarchism, racism, etc., and ad hoc, from-below efforts at promoting social and environmental good. Organizations and states will seek more control to block internal or external attacks of many sorts. The combined struggles will take up an increasing proportion of the world’s attention, efforts and so forth. I doubt that any very viable and democratic, egalitarian order will emerge over the next dozen years, and – even in a larger time frame – good outcomes are far from guaranteed.”

Dave Burstein, editor and publisher at Fast Net News, said, “There’s far too much second-rate AI that is making bad decisions based on inadequate statistical understanding. For example, a parole or sentencing AI probably would find a correlation between growing up in a single parent household and likelihood of committing another crime. Confounding variables, like the poverty of so many single mothers, need to be understood and dealt with. I believe it’s wrong for someone to be sent to jail longer because their father left. That kind of problem, confounding variables and the inadequacy of ‘preponderant’ data, is nearly ubiquitous in AI in practice.”
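
Burstein’s confounding-variable problem can be reproduced in a few lines of simulation. In the invented model below only poverty drives the outcome, yet a naive comparison makes the single-parent flag look predictive; stratifying by the confounder makes the apparent effect vanish.

```python
# Invented simulation of a confounder: single-parent status predicts nothing
# once poverty, which drives both variables, is held fixed.
import random
random.seed(0)

rows = []
for _ in range(100_000):
    poverty = random.random() < 0.3
    single_parent = random.random() < (0.6 if poverty else 0.2)  # correlated with poverty
    recidivism = random.random() < (0.4 if poverty else 0.1)     # caused by poverty alone
    rows.append((poverty, single_parent, recidivism))

def recid_rate(subset):
    return sum(r for _, _, r in subset) / len(subset)

# Naive comparison: the single-parent flag looks strongly "predictive".
print(recid_rate([r for r in rows if r[1]]),      # ~0.27
      recid_rate([r for r in rows if not r[1]]))  # ~0.15

# Stratified by the confounder, the gap disappears within each group.
for pov in (True, False):
    print(pov,
          recid_rate([r for r in rows if r[0] == pov and r[1]]),
          recid_rate([r for r in rows if r[0] == pov and not r[1]]))
```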

Ian Peter, pioneer internet activist and internet rights advocate, said, “Personal data accumulation is reaching a point where privacy and freedom from unwarranted surveillance are disappearing. In addition, the algorithms that control usage of such data are becoming more and more complex, leading to inevitable distortions. Henry Kissinger may not have been far off the mark when he described artificial intelligence as leading to ‘The End of the Age of Enlightenment.’”

Michael Zimmer, associate professor and privacy and information ethics scholar at the University of Wisconsin, Milwaukee, commented, “I am increasingly concerned that AI-driven decision-making will perpetuate existing societal biases and injustices, while obscuring these harms under the false belief that such systems are ‘neutral.’”

Martin Shelton, a professional technologist, commented, “There are many kinds of artificial intelligence – some kinds reliant on preset rules to appear ‘smart,’ and some which respond to changing conditions in the world. But because AI can be used anywhere we can recognize patterns, the potential uses for artificial intelligence are pretty huge. The question is, how will it be used? … While these tools will become cheaper and more widespread, we can expect that – like smartphones or web connectivity – their uses will be primarily driven by commercial interests. We’re beginning to see the early signs of AI failing to make smart predictions in larger institutional contexts. If Amazon fails to correctly suggest the right product in the future, everything is fine. You bought a backpack once, and now Amazon thinks you want more backpacks, forever. It’ll be okay. But sometimes these decisions have enormous stakes. ProPublica documented how automated ‘risk-assessment’ software used in U.S. courtroom sentencing procedures is only slightly more accurate at predicting recidivism than the flip of a coin. Likewise, hospitals using IBM Watson to make predictions about cancer treatments find the software often gives advice that humans would not. To mitigate harm in high-stakes situations, we must critically interrogate how our assumptions about our data and the rules that we use to create our AI promote harm.”

Nigel Hickson, an expert on technology policy development for ICANN based in Brussels, responded, “I am optimistic that AI will evolve in a way that benefits society by improving processes and giving people more control over what they do. This will only happen, though, if the technologies are deployed in a way that benefits all. My fear is that in non-democratic countries, AI will lessen freedom, choice and hope.”

Vian Bakir, a professor of political communication and journalism at Bangor University, responded, “I am pessimistic about the future in this scenario because of what has happened to date with AI and data surveillance. For instance, the recent furor over fake news/disinformation and the use of complex data analytics in the U.K.’s 2016 Brexit referendum and in the U.S. 2016 presidential election. To understand, influence and micro-target people in order to try to get them to vote a certain way is deeply undemocratic. It shows that current political actors will exploit technology for personal/political gains, irrespective of wider social norms and electoral rules. There is no evidence that current bad practices would not be replicated in the future, especially as each new wave of technological progress outstrips regulators’ ability to keep up and people’s ability to comprehend what is happening to them and their data. Furthermore, and related, the capabilities of mass dataveillance in private and public spaces are ever-expanding, and their uptake in states with weak civil society organs and minimal privacy regulation is troubling. In short, dominant global technology platforms show no signs of sacrificing their business models that depend on hoovering up ever more quantities of data on people’s lives and then hyper-targeting them with commercial messages; and across the world, political actors and state security and intelligence agencies also make use of such data acquisitions, frequently circumventing privacy safeguards or legal constraints.”

Tom Slee, senior product manager at SAP SE and author of “What’s Yours is Mine: Against the Sharing Economy,” wrote, “Many aspects of life will be made easier and more efficient by AI. But moving a decision such as health care or workplace performance to AI turns it into a data-driven decision based on optimization of some function, which in turn demands more data. Adopting AI-driven insurance ratings, for example, demands more and more lifestyle data from the insured if it is to produce accurate overall ratings. Optimized data-driven decisions about our lives unavoidably require surveillance, and once our lifestyle choices become input for such decisions we lose individual autonomy. In some cases we can ignore this data collection, but we are in the early days of AI-driven decisions: By 2030 I fear the loss will be much greater. I do hope I am wrong.”

Timothy Graham, a postdoctoral research fellow in sociology and computer science at Australian National University, commented, “There is already an explosion of research into ‘fairness and representation’ in ML (and conferences such as Fairness, Accountability and Transparency in Machine Learning), as it is difficult to engineer systems that do not simply reproduce existing social inequality, disadvantage and prejudice. Deploying such systems uncritically will only result in an aggregately worse situation for many individuals, whilst a comparatively small number benefit.”

A senior researcher and programmer for a major global think tank commented, “I expect AI to be embedded in systems, tools, etc., to make them more useful. However, I am concerned that AI’s role in decision-making will lead to more-brittle processes where exceptions are more difficult than today – this is not a good thing.”

Jenni Mechem, a respondent who provided no identifying details, said, “My two primary reasons for saying that advances in AI will not benefit most people by 2030 are, first, there will continue to be tremendous inequities in who benefits from these advances, and second, if the development of AI is controlled by for-profit entities there will be tremendous hidden costs and people will yield control over vast areas of their lives without realizing it. … The examples of Facebook as a faux community commons bent on extracting data from its users and of pervasive internet censoring in China should teach us that neither for-profit corporations nor government can be trusted to guide technology in a manner that truly benefits everyone. Democratic governments that enforce intelligent regulations as the European Union has done on privacy may offer the best hope.”

Suso Baleato, a fellow at Harvard University’s Institute of Quantitative Social Science and liaison for the Organization for Economic Cooperation and Development (OECD)’s Committee on Digital Economy Policy, commented, “The intellectual property framework impedes the necessary accountability of the underlying algorithms, and the lack of efficient redistributive economic policies will continue amplifying the bias of the datasets.”

Sasha Costanza-Chock, associate professor of civic media at MIT, said, “Unfortunately it is most likely that AI will be deployed in ways that deepen existing structural inequality along lines of race, class, gender, ability and so on. A small portion of humanity will benefit greatly from AI, while the vast majority will experience AI through constraints on life chances. Although it’s possible for us to design AI systems to advance social justice, our current trajectory will reinforce historic and structural inequality.”

Dalsie Green Baniala, CEO and regulator of the Telecommunications and Radiocommunications Regulator of Vanuatu, wrote, “Often, machine decisions do not produce an accurate result, they do not meet expectations or specific needs. For example, applications are usually invented to target the developed-world market. They may not work appropriately for countries like ours – small islands separated by big waters.”

Michiel Leenaars, director of strategy at NLnet Foundation and director of the Internet Society’s Netherlands chapter, responded, “Achieving trust is not the real issue; achieving trustworthiness and real empowerment of the individual is. As the technology that to a large extent determines the informational self disappears – or in practical terms is placed out of local control, going ‘underground’ under the perfect pretext of needing networked AI – the balance between societal well-being and human potential on the one hand and corporate ethics and opportunistic business decisions on the other stands to be disrupted. Following the typical winner-takes-all scenario the internet is known to produce, I expect that different realms of the internet will become even less transparent and more manipulative. For the vast majority of people (especially in non-democracies) there already is little real choice but to move and push along with the masses.”

Mike O’Connor, a retired technologist who worked at ICANN and on national broadband issues, commented, “I’m feeling ‘internet-pioneer regret’ about the Internet of S*** that is emerging from the work we’ve done over the last few decades. I actively work to reduce my dependence on internet-connected devices and the amount of data that is collected about me and my family. I will most certainly work equally hard to avoid human/AI devices/connections. I earnestly hope that I’m resoundingly proven wrong in this view when 2030 arrives.”

Luke Stark, a fellow in the department of sociology at Dartmouth College and at Harvard University’s Berkman Klein Center for Internet & Society, wrote, “AI technologies run the risk of providing a comprehensive infrastructure for corporate and state surveillance more granular and all-encompassing than any previous such regime in human history.”

Zoetanya Sujon, a senior lecturer specializing in digital culture at the University of the Arts London, commented, “As the history of so many technologies shows us, AI will not be the magic solution to the world’s problems or to symbolic and economic inequalities. Instead, AI most benefits those with the most power.”

Larry Lannom, internet pioneer and vice president at the Corporation for National Research Initiatives (CNRI), said, “I am hopeful that networked human-machine interaction will improve the general quality of life. … My fear: Will all of the benefits of more-powerful artificial intelligence benefit the human race as a whole or simply the thin layer at the top of the social hierarchy that owns the new advanced technologies?”

A professor and researcher in AI based in Europe noted, “Using technological AI-based capabilities will give people the impression that they have more power and autonomy. However, those capabilities will be available in contexts already framed by powerful companies and states. No real freedom. For the good and for the bad.”

An anonymous respondent said, “In the area of health care alone there will be tremendous benefits for those who can afford medicine employing AI. But at the same time, there is an enormous potential for widening inequality and for abuse. We can see the tip of this iceberg now with health insurance companies today scooping up readily available, poorly protected third-party data that will be used to discriminate.”

A senior data analyst and systems specialist expert in complex networks responded, “Artificial intelligence software will implement the priorities of the entities that funded development of the software. In some cases, this will [be] a generic service sold to the general public (much as we now have route-planning software in GPS units), and this will provide a definite benefit to consumers. In other cases, software will operate to the benefit of a large company but to the detriment of consumers (for example, calculating a price for a product that will be the highest that a given customer is prepared to pay). In yet a third category, software will provide effective decision-making in areas ranging from medicine to engineering, but will do so at the cost of putting human beings out of work.”

A distinguished engineer at one of the world’s largest computing hardware companies commented, “Tech will continue to be integrated into our lives in a seamless way. My biggest concern is responsible gathering of information and its use. Information can be abused in many ways as we are seeing today.”

A digital rights activist commented, “AI is already (through racial recognition, in particular) technologically laundering longstanding and pervasive bias in the context of police surveillance. Without algorithmic transparency and transparency into training data, AIs can be bent to any purpose.”

The following one-liners from anonymous respondents also tie into this theme:

  • A longtime economist for a top global technology company predicted, “The decline of privacy and increase in surveillance.”
  • A journalist and leading internet activist wrote, “Computer AI will only be beneficial to its users if it is owned by humans, and not ‘economic AI’ (that is, corporations).”
  • A strategy consultant wrote, “The problem is one of access. AI will be used to consolidate power and benefits for those who are already wealthy and further surveil, disenfranchise and outright rob the remaining 99% of the world.”
  • A policy analyst for a major internet services provider said, “We just need to be careful about what data is being used and how.”
  • A professor of information science wrote, “Systems will be developed that do not protect people’s privacy and security.”
  • The founder of a technology research firm wrote, “Neoliberal systems function to privilege corporations over individual rights, thus AI will be used in ways to restrict, limit, categorize – and, yes, it will also have positive benefits.”
  • A professor of electrical and computer engineering based in Europe commented, “The problem lies in human nature. The most powerful will try to use AI and technology to increase their power and not to the benefit of society.”

Other anonymous respondents commented:

  • “The panopticon and invasion of all personal aspects of our lives is already complete.”
  • “AI will allow greater control by the organized forces of tyranny, greater exploitation by the organized forces of greed and open a Pandora’s box of a future that we as a species are not mature enough to deal with.”
  • “The combination of widespread device connectivity and various forms of AI will provide a more pleasant everyday experience but at the expense of an even further loss of privacy.”
  • “I have two fears: 1) loss of privacy and 2) building a ‘brittle’ system that fails catastrophically.”
  • “AI strategic decisions with the most clout are made by corporations and they do not aim for human well-being in opposition to corporate profitability.”
  • “Data is too controlled by corporations and not individuals, and privacy is eroding as surveillance and stalking options have grown unchecked.”
  • “The capabilities are not shared equally, so the tendency will be toward surveillance by those with power to access the tools; verbal and visual are coming together with capacities to sort and focus the masses of data.”
  • “Knowing humanity, I assume particularly wealthy, white males will be better off, while the rest of humanity will suffer from it.”

Displacement of human jobs by AI will widen economic and digital divides, possibly leading to economic and social upheaval

One of the chief fears about today’s technological change is the possibility that autonomous hardware and software systems will cause millions of people globally to lose their jobs and, as a result, their means for affording life’s necessities and participating in society. Many of these experts say new jobs will emerge along with the growth of AI just as they have historically during nearly every human transition to new tools.

Brad Templeton, chair for computing at Singularity University, said, “While obviously there will be good and bad, the broad history of automation technologies is positive, even when it comes to jobs. There is more employment today than ever in history.”

Ben Shneiderman, distinguished professor and founder of the Human Computer Interaction Lab at the University of Maryland, said, “Automation is largely a positive force, which increases productivity, lowers costs and raises living standards. Automation expands the demand for services, thereby raising employment, which is what has happened at Amazon and FedEx. My position is contrary to those who believe that robots and artificial intelligence will lead to widespread unemployment. Over time I think AI/machine learning strategies will become merely tools embedded in ever-more-complex technologies for which human control and responsibility will become clearer.”

Robert D. Atkinson, president of the Information Technology and Innovation Foundation, wrote about how advances in AI are essential to expanded job opportunities: “The developed world faces an unprecedented productivity slowdown that promises to limit advances in living standards. AI has the potential to play an important role in boosting productivity and living standards.”

Toby Walsh, a professor of AI at the University of New South Wales in Australia and president of the AI Access Foundation, said, “I’m pessimistic in short term – we’re seeing already technologies like AI being used to make life worse for many – but I’m optimistic in long term that we’ll work out how to get machines to do the dirty, dull, dangerous and difficult, and leave us free to focus on all the more-important and human parts of our lives.”

Yet many others disagree. Some fear the collapse of the middle class and social and economic upheaval if most of the world’s economic power is held by a handful of technology behemoths that are reaping the great share of financial rewards in the digital age while employing far fewer people than the leading companies of the industrial age. A fairly large share of these experts warn that if steps are not taken now to adjust to this potential future, AI’s radical reduction in human work will be devastating.

David Cake, a leader with Electronic Frontiers Australia and vice-chair of the ICANN Generic Names Supporting Organization Council, wrote, “The greatest fear is that the social disruption due to changing employment patterns will be handled poorly and lead to widespread social issues.”

Jerry Michalski, founder of the Relationship Economy eXpedition, said, “We’re far from tipping into a better social contract. In a more-just world, AI could bring about utopias. However, many forces are shoving us in the opposite direction. 1) Businesses are doing all they can to eliminate full-time employees, who get sick and cranky, need retirement accounts and raises, while software gets better and cheaper. The precariat will grow. 2) Software is like a flesh-eating bacterium: Tasks it eats vanish from the employment landscape. Unlike previous technological jumps, this one unemploys people more quickly than we can retrain and reemploy them. 3) Our safety net is terrible and our beliefs about human motivations suck. 4) Consumerism still drives desires and expectations.”

James Hendler, professor of computer, web and cognitive sciences and director of the Rensselaer Polytechnic Institute for Data Exploration and Application, wrote, “I believe 2030 will be a point in the middle of a turbulent time when AI is improving services for many people, but it will also be a time of great change in society based on changes in work patterns that are caused, to a great degree, by AI. On the one hand, for example, doctors will have access to information that is currently hard for them to retrieve rapidly, resulting in better medical care for those who have coverage, and indeed in some countries the first point of contact in a medical situation may be an AI, which will help with early diagnoses/prescriptions. On the other hand, over the course of a couple of generations, starting in the not-too-distant future, we will see major shifts in the workforce, with not just blue-collar jobs but also many white-collar jobs lost. Many of these will not be people ‘replaced’ by AIs, but rather the result of a smaller number of people being able to accomplish the same amount of work – for example, in professions such as law clerks, physician assistants and many other currently skilled positions, we would project a need for fewer people (even as demand grows).”

Betsy Williams, a researcher at the Center for Digital Society and Data Studies at the University of Arizona, wrote, “AI’s benefits will be unequally distributed across society. Few will reap meaningful benefits. Large entities will use AI to deliver marginal improvements in service to their clients, at the cost of requiring more data and risking errors. Employment trends from computerization will continue. AI will threaten medium-skill jobs. Instead of relying on human expertise and context knowledge, many tasks will be handled directly by clients using AI interfaces or by lower-skilled people in service jobs, boosted by AI. AI will harm some consumers. For instance, rich consumers will benefit from self-driving cars, while others must pay to retrofit existing cars to become more visible to the AI. Through legal maneuvering, self-driving car companies will avoid many insurance costs and risks, shifting them to human drivers, pedestrians and bicyclists. In education, creating high quality automated instruction requires expertise and money. Research on American K-12 classrooms suggests that typical computer-aided instruction yields better test scores than instruction by the worst teachers. By 2030, most AI used in education will be of middling quality (for some, their best alternative). The children of the rich and powerful will not have AI used on them at school; instead, they will be taught to use it. For AI to significantly benefit the majority, it must be deployed in emergency health care (where quicker lab work, reviews of medical histories or potential diagnoses can save lives) or in aid work (say, to coordinate shipping of expiring food or medicines from donors to recipients in need).”

Nathaniel Borenstein, chief scientist at Mimecast, wrote, “Social analyses of IT [information technology] trends have consistently wildly exaggerated the human benefits of that technology, and underestimated the negative effects. … I foresee a world in which IT and so-called AI produce an ever-increasing set of minor benefits, while simultaneously eroding human agency and privacy and supporting authoritarian forms of governance. I also see the potential for a much worse outcome in which the productivity gains produced by technology accrue almost entirely to a few, widening the gap between the rich and poor while failing to address the social ills related to privacy. But if we can find a way to ensure that these benefits are shared equally among the population, it might yet prove to be the case that the overall effect of the technology is beneficial to humanity. This will only happen, however, if we manage to limit the role of the rich in determining how the fruits of increased productivity will be allocated.”

Andrea Romaoli Garcia, an international lawyer active in internet governance discussions, commented, “AI will improve the way people make decisions in all industries because it allows instant access to a multitude of information. People will require training for this future – educational and technological development. … This is a very high level of human development that poor countries don’t have access to. Without proper education and policies, they will not have access to wealth. The result may be a multitude of hungry and desperate people. This may be motivation for wars or invasion of borders. Future human-machine interaction (AI) will only be positive if richer countries develop policies to help poorer countries to develop and gain access to work and wealth.”

Josh Calder, a partner at the Foresight Alliance, commented, “The biggest danger is that workers are displaced on a mass scale, especially in emerging markets.”

Jeff Johnson, computer science professor at the University of San Francisco, previously with Xerox, HP Labs and Sun Microsystems, responded, “I believe advances in AI will leave many more people without jobs, which will increase the socioeconomic differences in society, but other factors could help mitigate this, e.g., adoption of guaranteed income.”

Alan Bundy, a professor of automated reasoning at the University of Edinburgh, wrote, “Unskilled people will suffer because there will be little employment for them. This may create disruption to society, some of which we have already seen with Trump, Brexit, etc.”

Peter Levine, associate dean for research and professor of citizenship and public affairs in Tufts University’s Tisch College of Civic Life, wrote, “Being a fully-fledged citizen has traditionally depended on work. I’m worried that rising levels of non-employment will detract from civic engagement. Also, AI is politically powerful and empowers the people and governments that own it. Thus, it may increase inequality and enhance authoritarianism.”

Hassaan Idrees, an electrical engineer and Fulbright Scholar active in creating energy systems for global good, commented, “I believe human-machine interaction will be more of [a] utility, and less fanciful than science fiction puts it. People will not need to see their physicians in person, their automated doctors making this irrelevant. Similarly, routine workplace activities like data processing and financial number crunching would be performed by AI. Humans with higher levels of intellect can survive this age, and those on the lower end of the spectrum of mental acumen would be rendered unnecessary.”

Ethem Alpaydın, a professor of computer engineering at Bogazici University in Istanbul, responded, “As with other technologies, I imagine AI will favor the developed countries that actually develop these technologies. … For the developing countries, however, whose labor force is mostly unskilled and whose exports are largely low-tech, AI implies higher unemployment, lower income and more social unrest. The aim of AI in such countries should be to add skill to the labor force rather than supplant it.”

Sam Ladner, a former UX researcher for Amazon and Microsoft, now an adjunct professor at Ontario College of Art and Design, wrote, “Technology is not a neutral tool, but one that has our existing challenges imprinted onto it. Inequality is high and growing. Too many companies deny their employees a chance to work with dignity, whether it be through providing them meaningful things to do, or with the basic means to live. AI will be placed on top of that existing structure. Those who already have dignified work with a basic income will see that enhanced; those who are routinely infantilized or denied basic rights will see that amplified. Some may slip into that latter category because their work is more easily replaced by AI and machine learning.”

Jonathan Swerdloff, consultant and data systems specialist for Driven Inc., wrote, “The more reliant on AI we become, the more we are at the mercy of its developers. While AI has the ability to augment professionals and to make decisions, I have three concerns which make me believe it will not leave us better off by 2030. This does not address fears that anything run via AI could be hacked and changed by bad faith third parties. 1) Until any sort of self-policed AI sentience is achieved, it will suffer from a significant GIGO [garbage-in, garbage-out] problem. As AI as currently conceived only knows what it is taught, the seed sets for teaching must be thought out in detail before the tools are deployed. Based on the experience with Microsoft’s Tay and some responses I’ve heard from the Sophia robot, I am concerned that AI will magnify humanity’s flaws. 2) Disparate access. Unless the cost for developing AI drops precipitously – and it may, since one AI tool could be leveraged into building further less expensive AI tools – access to whatever advantages the tools will bring will likely be clustered among a few beneficiaries. I view this as akin to high-frequency trading on Wall Street. Those who can, do. Those who can’t, lose. 3) Tool of control. If AI is deployed to make civic or corporate decisions, those who control the algorithms control everything. In the U.S. we’ve recently seen Immigration and Customs Enforcement change its bond algorithm to always detain in every case.”

Stuart A. Umpleby, a professor and director of the research program in social and organizational learning at George Washington University, wrote, “People who use AI and the internet will have their lives enhanced by these technologies. People who do not use them will be increasingly disconnected from opportunities. As the digital world becomes more complicated and remote from real-world experiences, the need will grow for people and software to make connections. There will be a need for methods to distinguish the real world from the scam world.”

Simeon Yates, director of the Centre for Digital Humanities and Social Science at the University of Liverpool, said, “AI will simply increase existing inequalities – it, like the internet, will fail in its emancipatory promise.”

Panagiotis T. Metaxas, author of “Technology, Propaganda and the Limits of Human Intellect” and professor of computer science at Wellesley College, responded, “There will be a lot of wealth that AI-supported devices will be producing. The new technologies will make it easier and cheaper to produce food and entertainment massively (‘bread and circus’). This wealth will not be distributed evenly, increasing the financial gap between the top small percentage of people and the rest. Even though this wealth will not be distributed evenly, the (relatively small) share given to the vast majority of people will be enough to improve their (2018) condition. In this respect, the majority of people will be ‘better off’ than they are today. They may not feel better off if they were aware of the inequalities compared to the top beneficiaries, but they will not be aware of them due to controlled propaganda. Unfortunately, there will not be much they could do about the increased inequalities. Technologies of police enforcement by robots and lack of private communication will make it impossible for them to organize, complain or push for change. They will not be valued as workers, citizens or soldiers. The desire for democracy as we know it today will be coming to an end. Many will feel depressed, but medical products will make it easy for them to increase pleasure and decrease pain.”

Grace Mutung’u, co-leader of the Kenya ICT Action Network, responded, “New technologies will more likely increase current inequalities unless there is a shift in world economics. From the experience of the UN work on Millennium Development Goals, while there has been improvement with the quality of life generally, low- and middle-income countries still suffer disparate inequalities. This will likely lead to governance problems. In any case, governments in these countries are investing heavily in surveillance which will likely have more negative effects on society.”

Danny Gillane, a netizen from Lafayette, Louisiana, commented, “Technology promises so much but delivers so little. Facebook gave us the ability to stay in touch with everyone but sacrificed its integrity and our personal information in pursuit of the dollar. The promise that our medical records would be digitized and more easily shared and drive costs down still has not materialized on a global scale. The chief drivers of AI innovation and application will be for-profit companies who have shown that their altruism only extends to their bottom lines. Like most innovations, I expect AI to leave our poor even poorer and our rich even richer, increasing the numbers of the former while consolidating power and wealth in an ever-shrinking group of currently rich people.”

A professional working on the setting of web standards wrote, “Looking ahead 12 years from now, I expect that AI will be enhancing the quality of life for some parts of some populations, and in some situations, while worsening the quality of life for others. AI will still be uneven in quality, and unevenly available throughout different parts of society. Privacy and security protections will be inadequate; data bias will still be common; many technologies and response patterns will be normed to the needs of the ‘common denominator’ user and misidentify or misinterpret interactions with people with disabilities or, if appropriately identifying their disability, will expose that information without user consent or control.”

So many people included comments and concerns about the future of jobs for humans in their wide-ranging responses to this canvassing that a later section of this report has more expert opinions on this topic.

The following one-liners from anonymous respondents also tie into AI and jobs:

  • An associate professor of computer science commented, “Machines will be able to do more-advanced work and improve accuracy, but this likely will expand manipulation of consumers/voters and automation may reduce available jobs.”
  • A director for a global digital rights organization said, “My concern is that human-machine collaboration will leave some of us far better off by automating our jobs, giving us more free and creative time, while doing little to improve the lives of billions of others.”
  • A professor expert in cultural geography and American studies said, “Given the majority human assumption that capitalism is something worth reproducing, the evacuation of most labor positions by AI would create vast poverty and cruelty by the ruling class.”
  • A lecturer in media studies based in New Zealand wrote, “The automation of large volumes of work by machine learning-based systems is unlikely to lead to an increase in social equity within a capitalist economy.”
  • A senior partner at one of the world’s foremost management consulting firms commented, “AI will benefit businesses, the economy and people as consumers, but likely increase income/wage polarization so most people as workers may not benefit.”
  • An engineer and chief operating officer for a project automating code said, “Those with the most money will leverage their position of power through AI; it will lead to possibly cataclysmic wealth disparity.”
  • A digital anthropologist for a major global technology company wrote, “The gap between those who benefit from advances in technology and those who do not have widened over the past three decades; I can’t see an easy or quick reversal.”

Other anonymous respondents commented:

  • “Some will benefit, while others will suffer. The bifurcated economy will continue to grow. … Those at the bottom of the ladder will see greater numbers of jobs being taken away by technology.”
  • “All in all, AI can be of great use, but we need to be vigilant of the repercussions instead of constantly leaping ‘forward’ only to find out later about all of the negatives.”
  • “In the U.S., the blue-collar job wages have been stagnant since the 1970s despite all of the advances with the internet and mobile devices, so I am not optimistic regarding AI.”
  • “Wealth distribution will continue to widen as the rich get richer.”
  • “AI is going to lead to the destruction of entire rungs of the economy, and the best way to boost an economy while holding together a fractured economy is war.”
  • “Many people will no longer be useful in the labor market. Such rapid economic and social change will leave many frightened and angry.”
  • “In 12 years AI may be more disruptive than enabling, leaving many without work until they retrain and transition.”
  • “There could be a thinning out of the middle – middle management and class.”
  • “AI will increasingly allow low-quality but passable substitutes for previously-skilled labor.”
  • “There are significant implications for unskilled or easily-automated tasks on one end of the spectrum and certain types of analysis on the other that will be automated away. My concern is that we have no plan for these people as these jobs disappear.”

Individuals’ cognitive, social and survival skills will be diminished as they become dependent on AI

While these experts expect AI to augment humans in many positive ways, some are concerned that a deepening dependence upon machine-intelligence networks will diminish crucial human capabilities. Some maintain there has already been an erosion of people’s abilities to think for themselves, to take action independent of automated systems and to interact effectively face-to-face with others.

Charles Ess, an expert in ethics and professor with the department of media and communication at the University of Oslo, said, “It seems quite clear that evolving AI systems will bring about an extraordinary array of options, making our lives more convenient. But convenience almost always comes at the cost of deskilling – of our offloading various cognitive practices and virtues to the machines and thereby our becoming less and less capable of exercising our own agency, autonomy and most especially our judgment (phronesis). In particular, empathy and loving itself are virtues that are difficult to acquire and enhance. My worst fears are not only severe degradation, perhaps more or less loss of such capacities – and, worst of all, our forgetting they even existed in the first place, along with the worlds they have made possible for us over most of our evolutionary and social history.”

Daniel Siewiorek, a professor with the Human-Computer Interaction Institute at Carnegie Mellon University, predicted, “The downside: isolating people, decreasing diversity, a loss of situational awareness (witness GPS directional systems) and ‘losing the receipt’ of how to do things. In the latter case, as we layer new capabilities on older technologies, if we forget how the older technology works we cannot fix it, and layered systems may collapse, thrusting us back into a more-primitive time.”

Marilyn Cade, longtime global internet policy consultant, responded, “Technology often reflects the ethics of its creators, but more significantly, those who commercialize it. Most individuals focus on how they personally use technology. They do not spend time (or even have the skills/expertise) to make judgments about the attributes of the way that technology is applied. … We must introduce and maintain a focus on critical thinking for our children/youth, so that they are capable of understanding the implications of a different fully digitized world. I love the fact that my typos are autocorrected, but I know how to spell all the words. I know how to construct a logical argument. If we don’t teach critical thinking at all points in education, we will have a 2030 world where the elites/scientists make decisions that are not even apparent to the average ‘person’ on the street/neighborhood.”

Garland McCoy, founder and chief development officer of the Technology Education Institute, wrote, “I am an optimist at heart and so believe that, given a decade-plus, the horror that is unfolding before our eyes will somehow be understood and resolved. That said, if the suicide epidemic we are witnessing continues to build and women continue to opt out of motherhood, all bets are off. I do think technology is at the core of both the pathology and the choice.”

Aneesh Aneesh, professor at the University of Wisconsin, Milwaukee, said, “Socially, AI systems will automate tasks that currently require human negotiation and interaction. Unless people feel the pressure, institutionally or otherwise, to interact with each other, they – more often than not – choose not to interact. The lack of physical, embodied interaction is almost guaranteed to result in social loneliness and anomie, and associated problems such as suicide, a phenomenon already on the rise in the United States.”

Ebenezer Baldwin Bowles, author, editor and journalist, responded, “If one values community and the primacy of face-to-face, eye-to-eye communication, then human-machine/AI collaboration in 2030 will have succeeded in greatly diminishing the visceral, primal aspects of humanity. Every expression of daily life, either civil or professional or familial or personal, will be diminished by the iron grip of AI on the fundamental realities of interpersonal communications. Already the reliance on voice-to-text technology via smartphone interface diminishes the ability of people to write with skill and cogency. Taking the time to ring-up another and chat requires too much psychic energy, so we ‘speak’ to one another in text box fragments written down and oft altered by digital assistants. The dismissive but socially acceptable ‘TL;DR’ becomes commonplace as our collective attention span disintegrates. Yes, diagnostic medicine and assembly-line production and expanded educational curriculum will surely be enhanced by cyber-based, one-and-zero technologies, but at what cost to humanity? Is it truly easier and safer to look into a screen and listen to an electronically delivered voice, far away on the other side of an unfathomable digital divide, instead of looking into another’s eyes, perhaps into a soul, and speaking kind words to one another, and perhaps singing in unison about the wonders of the universe? We call it ‘artificial intelligence’ for good reason.”

A principal design researcher at one of the world’s largest technology companies commented, “Although I have long worked in this area and been an optimist, I now fear that the goal of most AI and UX is geared toward pushing people to interact more with devices and less with other people. As a social species that is built to live in communities, reductions in social interaction will lead to erosion of community and rise in stress and depression over time. Although AI has the potential to improve lives as well, those advances will come more slowly than proponents think, due to the ‘complexity brake’ Paul Allen wrote about, among other things. There have been AI summers and AI winters. This is not an endless summer.”

A chief operating officer wrote, “No doubt in my mind, AI is and will continue to present benefits in simplifying and aiding human activities; however, the net effect is not likely ‘to leave people better off.’ The advances in AI-enabled tools are likely to expand the digital gap in human competencies. This growing gap will decrease the capacity of sizable portions of the population to survive an outage of the technology. This raises humanitarian and national-security concerns.”

Dalsie Green Baniala, CEO and regulator of the Telecommunications and Radiocommunications Regulator of Vanuatu, wrote, “With the introduction of the Internet of Things, human senses are in decline.”

Alper Dincel of T.C. Istanbul Kultur University in Turkey wrote, “Personal connections will continue to drop, as they are in today’s world. We are going to have more interest in fiction than in reality. These issues will affect human brain development as a result.”

Michael Dyer, an emeritus professor of computer science at the University of California, Los Angeles, commented, “As long as GAI (general AI) is not achieved, specialized AI will eliminate tasks associated with jobs but not the jobs themselves. A trucker does a lot more than merely drive a truck. A bartender does a lot more than merely pour drinks. Society will still have to deal with the effects of smart technologies encroaching ever further into new parts of the labor market. A universal basic income could mitigate increasing social instability. Later on, as general AI spreads, it will become an existential threat to humanity. My estimate is that this existential threat will not begin to arise until the second half of the 21st century. Unfortunately, by then humanity might have grown complacent, since specialized AI systems do not pose an existential threat.”

Mauro D. Ríos, an adviser to the E-Government Agency of Uruguay and director of the Internet Society’s Uruguay chapter, responded, “In 2030 dependence on AI will be greater in all domestic, personal, work and educational contexts; this will make the lives of many people better. However, it has risks. We must be able to maintain active survival capabilities without AI. Human freedom cannot be lost in exchange for the convenience of improving our living standards. … AI must continue to be subject to the rationality and control of the human being.”

Nancy Greenwald, a respondent who provided no identifying details, wrote, “Perhaps the primary downside is overreliance on AI, which 1) is only as good as the algorithms created (how are they instructed to ‘learn’?) and 2) has the danger of limiting independent human thinking. How many Millennials can read a map or navigate without the step-by-step instructions from Waze, Google or their iPhones? And information searches online don’t give you an overview. I once wasted 1.5 billable hours searching for a legal concept when the human-based BNA outline got me the result in two minutes. Let’s be thoughtful about how we use this amazing technology.”

Valarie Bell, a computational social scientist at the University of North Texas, commented, “As a social scientist I’m concerned that never before have we had more ways in which to communicate and yet we’ve never done it so poorly, so venomously and so wastefully. With devices replacing increasingly higher-order decisions and behaviors, people have become more detached, more disinterested and yet more self-focused and self-involved.”

Lane Jennings, managing editor for the World Future Review from 2009 to 2015, wrote, “It is most likely that advances in AI will improve technology and thus give people new capabilities. But this ‘progress’ will also make humanity increasingly vulnerable to accidental breakdowns, power failures and deliberate attacks. Example: Driverless cars and trucks and pilotless passenger aircraft will enhance speed and safety when they work properly, but they will leave people helpless if they fail. Fear and uncertainty could negate positive benefits after even a few highly publicized disasters.”

Michael Veale, co-author of “Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making” and a technology policy researcher at University College London, responded, “AI technologies will turn out to be more narrowly applicable than some hope. There will be a range of small tasks that will be more effectively automated. Whether these tasks leave individuals with increased ability to find meaning or support in life is debatable. Freed from some aspects of housework and administration, some individuals may feel empowered whereas others might feel aimless. Independent living for the elderly might be technologically mediated, but will it have the social connections and community that makes life worth living? Jobs too will change in nature, but it is not clear that new tasks will make people happy. It is important that all technologies and applications are backed up with social policies and systems to support meaning and connection, or else even effective AI tools might be isolating and even damaging on aggregate.”

The following one-liners from anonymous respondents also tie into this theme:

  • A British-American computer scientist commented, “Increasing dependence on AI will decrease societal resilience through centralization of essential systems in a few large companies.”
  • A leading infrastructure engineer for a social network company commented, “AI may make people’s lives better by making some things easier, but it will likely reduce human value along the way – I expect people to be less able to make decisions, less able to tolerate human interaction, etc.”
  • A representative for a nation-state’s directorate of telecommunications wrote, “My fear is that humans will become more and more dependent on AI, to the extent that their natural intelligence would be more and more diminished. The concern is that in the absence of AI they may not be able to act in a timely manner.”

Other anonymous respondents commented:

  • “We need to assure that we have individuals who are able to think and problem-solve and monitor that thinking without assistance.”
  • “Our ethical capabilities lag far behind our technical capabilities.”
  • “Lack of education in AI, and of the inclusion of individuals in their own decision-making, will make most people worse off in 2030.”
  • “Few people will understand what the AI is attempting to do and how it’s doing it; regular people without this knowledge will become more like sheep.”
  • “I have concerns about how people are adapting to these new changes, the continuing disconnection people have due to advances in AI, substituting AI connections for real people, leading to greater depression.”
  • “My fear is that we will spend even more time with machines than we do with talking with each other.”
  • “My fear is that the increasing ‘datafication’ of work and our lives as a whole will further increase the pressure we feel to reach an unrealistic apex of perfection.”
  • “As more and more people have AI/automation support in their daily lives, the interactions between people will lessen. People may feel more isolated and less socially interrelated. Social interaction must be carefully maintained and evolved.”

Citizens will face increased vulnerabilities, such as exposure to cybercrime and cyberwarfare that spins out of control, and the possibility that essential organizations are endangered by weaponized information

Some of these experts are particularly worried about how networked artificial intelligence can amplify cybercrime and create fearsome possibilities in cyberwarfare and enable the erosion of essential institutions and organizations.

Anthony Nadler, assistant professor of media and communication studies at Ursinus College, commented, “The question has to do with how decisions will be made that shape the contingent development of this potentially life-changing technology. And who will make those decisions? In the best-case scenario, the development of AI will be influenced by diverse stakeholders representing different communities who will be affected by its implementation, and this may mean that particular uses of AI – military applications, medical, marketing, etc. – will be overseen by reflective ethical processes. In the absolute worst-case scenario, unrestricted military development will lead to utter destruction – whether in a situation in which the ‘machines take over’ or, more likely, one in which weapons of tremendous destruction become all the more readily accessible.”

Jennifer J. Snow, an innovation officer with the U.S. Air Force, wrote, “Facets, including weaponized information, cyberbullying, privacy issues and other potential abuses that will come out of this technology, will need to be addressed by global leaders.”

Lee McKnight, associate professor at Syracuse University’s School of Information Studies, commented, “There will be good, bad and ugly outcomes from human-machine interaction in artificially intelligent systems, services and enterprises. … Poorly designed artificially intelligent services and enterprises will have unintended societal consequences, hopefully not catastrophic, but sure to damage people and infrastructure. Even more regrettably, defending ourselves against evil – or to be polite, bad AI systems turned ugly by humans, or other machines – must become a priority for societies well before 2030, given the clear and present danger. How can I be sure? What are bots and malware doing every day, today? Is there a reason to think ‘evil-doers’ will be less motivated in the future? No. So my fear is that the hopefully sunny future of AI, which in aggregate we may assume will be a net positive for all of us, will be marred by – many – unfortunate events.”

Robert M. Mason, a professor emeritus at the University of Washington’s Information School, responded, “Technologies, including AI, leverage human efforts. People find ways to apply technologies to enhance the human spirit and the human experience, yet others can use technologies to exploit human fears and satisfy personal greed. As the late Fred Robbins, Nobel Laureate in Physiology/Medicine, observed (my paraphrase when I asked why he was pessimistic about the future of mankind): ‘Of course I’m pessimistic. Humans have had millions of years to develop physically and mentally, but we’ve had only a few thousand years – as the world population has expanded – to develop the social skills that would allow us to live close together.’ I understand his pessimism, and it takes only a few people to use AI (or any technology) in ways that result in widespread negative societal impacts.”

Frank Feather, futurist and consultant with StratEDGY, commented, “AI by 2030 … This is only about a decade away, so despite AI’s continuing evolution, it will not have major widespread effects by 2030. With care in implementation, all effects should be positive in social and economic impact. That said, the changes will represent a significant step toward what I call a DigiTransHuman Future, where the utility of humans will increasingly be diminished as this century progresses, to the extent that humans may become irrelevant or extinct, replaced by DigiTransHumans and their technologies/robots that will appear and behave just like today’s humans, except at very advanced stages of humanoid development. This is not going to be a so-called ‘singularity’ and there is nothing ‘artificial’ about the DigiTransHuman Intelligence. It is part of designed evolution of the species.”

John Leslie King, a computer science professor at the University of Michigan and a consultant for several years on cyberinfrastructure for the National Science Foundation’s directorates for Computer and Information Science and Engineering (CISE) and Social, Behavioral, and Economic (SBE) sciences, commented, “If there are evil things to be done with AI, people will find out about them and do them. There will be an ongoing fight like the one between hackers and IT security people.”

John Markoff, fellow at the Center for Advanced Study in Behavioral Sciences at Stanford University and author of “Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots,” wrote, “There are expected and unexpected consequences to ‘AI and related technologies.’ It is quite possible that improvements in living standards will be offset by the use of autonomous weapons in new kinds of war.”

A veteran of a pioneering internet company commented, “In the face of managing resources and warfare – the big issues for AI at scale – the goals are not likely to be sharing and co-existence.”

Dan Schultz, senior creative technologist at Internet Archive, responded, “AI will no doubt result in life-saving improvements for a huge portion of the world’s population, but it will also be possible to weaponize in ways that further exacerbate divides of any kind you can imagine (political, economic, education, privilege, etc.). AI will amplify and enable the will of those in power; its net impact on humanity will depend on the nature of that will.”

Sam Gregory, director of WITNESS and digital human rights activist, responded, “Trends in AI suggest it will enable more individualized, personalized creation of synthetic media filter bubbles around people, including the use of deepfakes and related individualized synthetic audio and video micro-targeting based on personal data and trends in using AI-generated and directed bots. These factors may be controlled by increasing legislation and platform supervision, but by 2030 there is little reason to think that most people’s individual autonomy and ability to push back to understand the world around them will have improved.”

Miguel Moreno-Muñoz, a professor of philosophy specializing in ethics, epistemology and technology at the University of Granada in Spain, said, “There is a risk of overreliance on systems with poorly experienced intelligence augmentation due to pressure to reduce costs. This could lead to major dysfunctions in health care or in the supervision of highly complex processes. A hasty application of management systems based on the Internet of Things could be problematic in certain sectors of industry, transport or health, but its advantages will outweigh its disadvantages. I do believe there may be significant risks in the military applications of AI.”

Denise N. Rall, a professor of arts and social sciences at Southern Cross University in Australia, responded, “The basic problem with the human race and its continued existence on this planet is overpopulation and depletion of the Earth’s resources. So far, interactions with technology have reduced population in the ‘first world’ but not in developing countries, and poverty will fuel world wars. Technology may support robotic wars and reduce casualties for the wealthy countries. The disparity between rich and poor will continue unabated.”

Patrick Lambe, a partner at Straits Knowledge and president of the International Society for Knowledge Organization’s Singapore chapter, wrote, “I chose the negative answer not because of a dystopian vision for AI itself and technology interaction with human life, but because I believe social, economic and political contexts will be slow to adapt to technology’s capabilities. The real-world environment and the technological capability space are becoming increasingly disjointed and out of synch. Climate change, migration pressures, political pressures, food supply and water will create a self-reinforcing ‘crisis-loop’ with which human-machine/AI capabilities will be largely out of touch. There will be some capability enhancement (e.g., medicine), but on the whole technology contributions will continue to add negative pressures to the other environmental factors (employment, job security, left-right political swings). On the whole I think these disjoints will continue to become more enhanced until a major crisis point is reached (e.g., war).”

Mechthild Schmidt Feist, department coordinator for digital communications and media at New York University, said, “Historical precedent shows that inventions are just as powerful in the hands of criminals or irresponsible or uninformed people. The more powerful our communication, the more destructive it could be. We would need global, enforceable legislation to limit misuse. 1) That is highly unlikely. 2) It is hard to predict all misuses. My negative view is due to our inability to make responsible use of our current online communication and media models. The utopian freedom has become a dystopian battleground.”

Marc Brenman, managing partner at IDARE LLC, said, “We do not know all that machines can do. There is no inherent necessity that they will care for us. We may be an impediment to them. They may take orders from evil-doers. They will enable us to make mistakes even faster than we do now. Any technology is only as good as the morality and ethics of its makers, programmers and controllers. If machines are programmed to care more for the earth than for people, they may eliminate us anyway, since we are destroying the earth.”

Robert K. Logan, chief scientist at the Strategic Innovation Lab (sLab) at OCAD University and professor emeritus of physics at the University of Toronto, said, “The idea of the Singularity is an example of the over-extension of AI. Computers will never achieve an equivalency to human intelligence. There is no such thing as AW (artificial wisdom). AI as a tool to enhance human intelligence makes sense, but AI to replace human intelligence makes no sense and therefore is nonsense.”

Alexey Turchin, existential risks researcher and futurist, responded, “There are significant risks of AI misuse before 2030 in the form of swarms of AI-empowered drones or even non-aligned human-level AI.”

Adam Popescu, a writer who contributes frequently to the New York Times, Washington Post, Bloomberg Businessweek, Vanity Fair and the BBC, wrote, “We put too much naive hope in everything tech being the savior.”

The following one-liners from anonymous respondents also tie into this theme:

  • A cybersecurity strategist said, “The world has become technologically oriented and this creates challenges – for example, cybercrime.”
  • A respondent who works at a major global privacy initiative predicted AI and tech will not improve most people’s lives, citing, “Loss of jobs, algorithms run amuck.”

Other anonymous respondents commented:

  • “With increasing cyberattacks and privacy concerns AI could connect people to bad actors, which could cause stress and new problems – even the simplest of attacks/pranks could negatively affect people’s lives.”
  • “The increasing dependence of humans on computing coupled with the fundamental un-securability of general-purpose computing is going to lead to widespread exploitation.”