The Future of Human Agency

3. Themes from those who expect tech will not be designed to allow humans to control key decision-making

The next two sections of this report include hundreds of additional comments from experts, organized under the six most common themes in these experts’ responses. The remarks of many of the experts already quoted in the earlier pages of this report also tie into these themes.

This section covers the three themes that emerged among the answers from respondents who expect that individuals’ level of agency over their tech-abetted activities will not improve by 2035.

  • Powerful interests have little incentive to honor human agency: The dominant digital-intelligence tools and platforms the public depends upon are operated or influenced by powerful elites – both capitalist and authoritarian – who have little incentive to design them to allow individuals to exert more control over their tech-abetted daily activities. One result of this could be a broadening of the digital divide.
  • Humans value convenience and will continue to allow black-box systems to make decisions for them: People already allow invisible algorithms to influence and even sometimes “decide” many if not most aspects of their daily lives – that won’t change. In addition, when they have been given an opportunity to exercise some control over their tech tools and activities, most have not opted to do so.
  • AI technology’s scope, complexity, cost and rapid evolution are just too confusing and overwhelming to enable users to assert agency: These systems are designed for centralized control, not personalized control. It is not easy to allow the kind of customization that would hand essential decision-making power to individuals. And these systems can be too opaque even to their creators to allow for individual interventions.

Powerful interests that build this technology have little incentive to honor human agency

The largest share of the experts who participated in this canvassing argued that the world’s networked-knowledge, artificial intelligence (AI) and communications ecosystem is operated by powerful interests that follow the principles of market capitalism or political authoritarianism.

Bert Huang, a professor and scientist at Tufts University’s Data-Intensive Studies Center whose research is focused on incorporating human knowledge into algorithms to make them more reliable, efficient and fair, commented, “My pessimism about the chances that these tools will be built with humans retaining agency comes from the fact that primitive versions of them allowing no human agency are already embedded in our society. I find it hard to imagine any efforts to counteract this trend outpacing the incentives to deploy new technology.”

Peter Lunenfeld, professor and vice chair of design and media arts, UCLA, predicted, “Humans will not be in control of important decision-making in the year 2035. They are not in charge of those decisions now, and in fact rarely or never have been throughout human history. AI and smart systems are less likely to ‘take control’ autonomously than they are to be taken control of from the start by already existing power structures and systems. We already have algorithms controlling access to health care, economic metrics impeding social action on climate change and social media targeting propaganda to influence or even dismantle democratic institutions. If the first two decades of the 21st century haven’t been able to dim 1990s techno-positivism, I’m not sure what will. AI and smart systems could conceivably be integrated into self-monitoring systems – think advanced Fitbits – and allow people to actually understand their own health and how to contribute to the development of a healthier society. Likewise, such advances in machine intelligence could be harnessed to clarify complex, data-driven decision-making that true citizenship increasingly demands. But as long as the long tail of neo-liberalism is driven by profit rather than concerns for the greater good, and our society’s most powerful stakeholders benefit personally and professionally from interacting with avid consumers rather than informed citizens, AI and smart systems are likely to reduce rather than increase human flourishing.”

Lea Schlanger, a senior business intelligence analyst based in North America, commented, “Absent a major shift in practice and policies, tech companies will keep churning out technologies designed primarily for their own agency (as long as they are profitable and don’t generate an egregious amount of bad PR). Based on the current state of the tech industry and American policies specifically, the main reason(s) individuals will not be in control are:

1) “Advancements in AI and machine learning automation are currently happening faster than research on the impacts they’ll have on society as a whole.

2) “Not enough research into how new technologies will impact society is being conducted as part of the technology development process (see the issues with facial recognition AI only being trained on data that is skewed toward white men).

3) “Our most recent and current legislative bodies are so out of touch with how current technology works or is viewed that not only are they barely working through policies around them, but they are also more likely to use the talking points or full-on policy drafts from lobbyists and their political parties when it comes time to create and vote on legislation.”

Luis Germán Rodríguez Leal, teacher and researcher at the Universidad Central de Venezuela and consultant on technology for development, wrote, “Humans will not be in control of important decision-making in the year 2035 because the digitalization of society will continue to advance just as it has been to this point. Promoters of these controlling technologies encourage people to appreciate their convenience, and they say the loss of agency is necessary, unstoppable and imminent, a part of helping refine and enhance people’s experiences with this tech.

“The society based on the data economy will advance in the surveillance and control of people as citizens and as consumers. The creation, manipulation and propagation of consumption habits or ideological positions will be increasingly based on these types of resources. This is an issue of grave concern.

“Society might be entering a dangerous state of digital submission that will rapidly progress without the necessary counterweights that may recover and promote the role of the human over economic indicators. Individuals’ ownership of their digital identity must be guaranteed in order to stimulate the exercise of free will and to encourage a reasoned commitment to participate in the creation and achievement of objectives in the social contexts in which they live.

“The progress of thoughtless digital transformation led by commercial interests is stimulated by the actions or lack of action of governments, the private sector and a broad sector of academia and civil society. Many relevant voices have been raised with warnings about the digital emergency we are experiencing. International organizations such as UNESCO, the European Commission and others have highlighted the need to advance information and digital literacy strategies, together with alternative skills of personal communication, promotion of empathy, resilience and, above all, the raising of ethical awareness among those who create these tools and systems on the influence and impact of all aspects of the creation, introduction and use of digital tools.”

The co-founder of an award-winning nonprofit action network wrote, “The significant level of control corporations first gained over workers has now been extended to consumers. This does not bode well for democracy. Everything online is made ‘easy,’ and it is addictive. Children and young adults have no idea what they are missing – they expect the real world to operate this way. But it doesn’t, or it shouldn’t. Civilization and democracy require patience, grit, fortitude, long-term thinking, relationship skills and initiative. Online life does not foster these. As long as the profit motive rules the tech sector, this trend of decreasing agency of consumers and increasing power of tech companies will continue. Control equals profit, at least in the short to medium term. I am extremely pessimistic.”

Ayden Férdeline, a public-interest technologist based in Berlin, Germany, commented, “In 2035 smart machines, bots and systems powered by AI and ML will invariably be more sophisticated and perhaps even more opaque than current technologies. In a world where sensors could be nearly invisible or installed in such great numbers that it is impractical to understand how they are surveilling us, persistent power imbalances have the potential to reorder our society in ways that cause more pains than gains. As the [saying] goes, ‘It’s the business model, stupid!’ Enabled by technological change, we have already seen a series of economic, social and cultural adaptations that have radically undermined the potential for the Internet and other emerging technologies to elevate human trust, agency and democratic values. Persistent market failures have arisen as dominant platforms and mysterious algorithms box consumers inside of echo chambers. It is difficult to imagine the same market that has supported these problematic practices to, in parallel, support the emergence of technologies that promote human autonomy and decision-making. Unless there are incentives to develop an alternate ecosystem – perhaps regulation is needed to impose some kind of duty of care on data-extractive businesses – the supply of suitable for-profit entities willing to voluntarily surrender power to consumers is likely to be adversely imbalanced against the demand from consumers for such a marketplace.”

Aram Sinnreich, professor and chair of communication studies at American University, commented, “There is neither a political nor a financial incentive for powerful organizations to build systems that allow human beings to intercede into automated decision-making processes. This is already the case in 2022, when decisions about loan approval, bail amounts, health care options and other life-changing matters have been delegated to black-box algorithms embedded in unaccountable institutions. Arguably, part of the social function of AI is to serve as a (semi)credible mechanism for powerful institutions to avoid accountability for the consequences of their decisions.”

Ojelanki Ngwenyama, professor of global management and director of the Institute for Innovation and Technology Management at Toronto Metropolitan University, said, “It is pretty clear to me that it is not about the technology, but who controls it. Already tech firms determine what technologies we have and how we interact with them. Presently, we cannot even stop a mobile phone from recording our conversations and sending them to service providers, the makers of the technology, the developers of mobile operating systems, security agencies, etc.”

Cláudio Lucena, member of the National Data Protection Council of Brazil and professor of law at Paraíba State University, commented, “For the sake of efficiency and agility, most processes will depend upon some extent of automation in 2035. Proper oversight will be demanded by some segments and groups, but their influence will not be strong enough to prevent the broader rollout of automated decision-making. It is possible that a grave, impactful event may somehow shake things up and alter economic, social and political priorities. Incremental steps toward some sort of oversight might be expected if that happens, but the automation path will move further in spite of this, barely embedding mild adjustments.”

Rich Miller, CEO and managing director at Telematica and chair at Provenant Data, said, “For the next 10 to 12 years, the use of AI will not be totally autonomous; rather, it will give the end user the sense that the action being taken is ‘assisted’ by the AI and that the human is supervising and directing the offering. This illusion of control may be important to adoption and continued use, but it will be an illusion nonetheless. The sources, the intent and the governance of the AI are as much a factor as any other one could name in where this all will go. The intent of the provider/developer/trainer of the AI must be considered as the real locus of ‘control of important decision-making.’ Because the intent and objectives of these AI offerings are more than likely to be related to impacting the end-user’s behavior (whether consuming media, purchasing merchandise or services, voting, or managing the navigation of an automobile), it is unlikely that even well-intentioned attempts by government to regulate these offerings will be very effective.”

Rich Salz, senior architect and principal engineer at Akamai Technologies, wrote, “No, by 2035 machines, bots and systems powered by AI will not be designed to allow people to easily be in control over most tech-aided decision-making. Commerce will deploy things before government can catch up. Commerce has money as its focus, not individuals. (I’m not sure government has individuals as its focus either, but it’s better than commerce.)”

Mark Crowley, an assistant professor of computer engineering at the University of Waterloo, Canada, whose research seeks dependable and transparent ways to augment human decision-making, responded, “I see two completely separate questions here: 1) Will scientific advances in AI make it possible to give decision-making assistance in most human decision-making domains by 2035? 2) Will designers of available and popular AI systems such as bots, tools, search engines, cellphones, productivity software, etc., design their tools in such a way as to give people meaningful control over decision-making?

“My take on each: 1) Yes, it is entirely possible that most fields of human endeavour could have meaningful AI-powered decision-making assistance by 2035, and that it would be possible to allow meaningful human input, oversight, veto and control. 2) No, I am not confident at all that those who create these tools, or those who pay for them to be created, will do so in a way that supports meaningful human input from individuals.

“Here’s a related issue and another question that needs an answer. Ask yourself: Will the larger users, such as industry and government, create or request creation of tools that enable their constituencies to have meaningful control?”

Fred Zimmerman, publisher at Nimble Books, said, “Big Tech has one primary objective: making money. The degree to which ‘AI’ is available to enhance human decision-making will be primarily a function of revenue potential.”

Avi Bar-Zeev, an XR pioneer who has developed the tech at Microsoft, Apple, Amazon, Google and other businesses, wrote, “Ad-tech is a technology platform business model designed to offset the apparent price of digital goods and services down to ‘free.’ After decades of dominance by that model, the public has learned that nothing is free, and the actual cost has been harm to society. One price of this business model is in the loss of human agency. Advertising and other methods of influence serve a remote master who has an agenda. The better the ads are, the less autonomy we have. By 2035, ad-tech will finally somewhat diminish in dominance. There will be greater privacy controls. However, it is being replaced by personalized AI, which is attractive because it makes recommendations tailored to our personal needs based on data we share about ourselves. The algorithm has control over us because we only see what it shows us. We have to work harder to escape its natural bubble. The personal AI revolution has the potential to help AI make decisions the way we would, and thus do so to our benefit, or to do it mostly to benefit the people who control it. We will rely on it, one way or another. The key question is: Will we gain or lose by automating so much of our lives?”

Dan McGarry, journalist, editor and investigative reporter, said, “Human control of all decision-making must be vested in the tech equally as in law and regulation. Very little agency over agency should be given to algorithmically based services. While machine learning is an exceptionally good manner of facilitating interaction with large volumes of data, even apparently trivial decisions may lead to unforeseen negative consequences. The challenge we face in spreading the role of machine learning and algorithmically driven tech is that it’s treated as proprietary ‘secret sauce,’ owned and operated centrally by companies capable of insanely resource-intensive computation. Until that changes, we face a risk of increased authoritarianism, surveillance and control over human behaviour, some of it insidious and unremarked.”

Christopher Richter, professor and chair of communication studies at Hollins University, responded, “I am only moderately optimistic that AI will give people more extensive control by 2035 for three overlapping reasons. First, it will be designed up front and leveraged for profit, not for social benefit. Second, as with social media, there will be unintended and unforeseen consequences, both positive and negative, of its development. Third, even given the ever-increasing rate of tech development, 13 years seems too soon for both development and adoption of solutions.”

Thomas Levenson, professor and director of the graduate program in science writing at the Massachusetts Institute of Technology, commented, “The diminishment of human agency is already a feature – not a bug – of U.S. and world society. A lot of social infrastructure and the harsh realities of power are already diminishing it. To some extent, the use of AI (or AI-labeled automated systems) is just a way to mask existing threats to individual autonomy/agency. A brief, first-order answer to this difficult question is that AI-powered systems will be deeply embedded in decision-making because doing so serves several clear interests of those developing and deploying those systems. They’re cheaper at the marginal action than a human-staffed system (much of the time). Embedded assumptions disappear behind the technology; such assumptions most often reflect and serve the status quo ante, in which the deployer of bots/systems is a successful player.”

Ben Waber, president and CEO of Humanyze, a behavioral analytics company, and author of “People Analytics,” wrote, “Today, leaders in organizations (companies, governments, etc.) are making the design decisions for systems and algorithms they deploy. Individuals affected by those systems have little ability to question or influence them. This has been true for decades – think of credit rating systems, search algorithms and the like. These systems will continue to be controlled by humans – the humans with power within those organizations.”

Andrew Nachison, chief communications and marketing officer for the National Community Reinvestment Coalition, said, “Corporate interests will continue to shape and overshadow individual interests. Decisions mediated by computing may become more transparent, but transparency is not the same as agency. For instance, mortgage, insurance and other risk-related decisions that once were made by humans will increasingly be made by computing and algorithms. If new laws and regulations demand it, then we, the ‘subjects’ of these decisions, may have greater insight into how those algorithmic decisions were made. But we will not have agency over the decisions themselves. Most of these decisions will never be reviewed or vetted by humans.”

Steve Jones, professor of communication at the University of Illinois-Chicago, observed, “Unfortunately I think we have to look at this – to borrow from the film ‘All the President’s Men’ – in ‘follow the money’ fashion. Who would benefit from designing AI etc. in a way that gives people agency and who would benefit from not giving them agency? I expect few companies will want to give users much control, for a variety of reasons, not the least of which is that they will want to constrain what users can do within parameters that are a) profitable and b) serviceable (that is, achievable given the capabilities of the software and hardware). This is also going to be a delicate dance between what uses designers envision the technology being put to and what uses people are going to find for it, just as it has been for most every technology used widely by people. It will be the unanticipated uses that will surprise us, delight us and frighten us.”

Gary Marchionini, dean at the University of North Carolina-Chapel Hill School of Information and Library Science, responded, “There is little incentive for systems to be designed, marketed and improved unless there are strong economic payoffs for the tech companies. That is why I doubt the design advances for human agency will be significant. The advances will come from smaller companies or niche services that afford individuals control. For example, the Freedom app gives people control by blocking selected services. Full control would give people much more agency at the infrastructure level (e.g., to be able to manage the full range of data flows in one’s home or office or while mobile) but such control requires companies like AT&T, Spectrum, Apple, Alphabet, etc., to give people the ability to limit the fundamental profitability underlying the surveillance economy. I see little incentive for the platforms to give up such control without regulation.”

Marydee Ojala, editor-in-chief of Online Searcher, Information Today, said, “At what point will the ‘human in the loop’ be much more able to affect autonomous decision-making in the future? Will we only expand upon our reliance on algorithms we don’t understand? ‘Data-driven’ decisions are becoming more and more prevalent, but this type of decision-making is often a numbers game that ignores the human implications of its decisions.

“Example: If research and development were totally data-driven in the pharma industry, decisions about which diseases to research and fund a cure for would concentrate only on diseases that are the most widespread and reasonably common (and profitable?) at the expense of addressing the damage caused by lesser-known diseases affecting a smaller number of people. While the research into COVID that resulted in vaccines was stellar – with huge ramifications for immunity worldwide to a deadly disease – would AI-based decision-making, in the future, discount doing the same type of research if the disease affected only a small number of people rather than a larger population? Might we no longer work to develop ‘orphan drugs’?”

Daniel Berleant, professor of information science at the University of Arkansas at Little Rock and author of the book “The Human Race to the Future,” commented, “Software development focuses on the goal of meeting specific requirements. If human agency in such systems is not a specific requirement, it will not be specifically addressed. So, the real question is whether human agency will be required. Given the generally hands-off approach of government regulation of software design and many other areas, the only way it would become legally required is if there is a compelling reason that would force governments to respond. I don’t foresee such a compelling reason at this time.”

Heather Roff, nonresident fellow in the law, policy and ethics of emerging military technologies at Brookings Institution and senior research scientist at the University of Colorado Boulder, wrote, “In the present setting, companies are obtaining free data on human users and then using such data to provide them the use of intellectual property-protected applications. To crack this open for users to have ‘control’ over ‘important decision-making’ – whatever that really means – is to provide users with not merely access to their data, but also the underlying ‘how’ of the systems themselves. That would undermine their intellectual property, and thus their profits. Additionally, even with some privacy-control tools, users still have very little control over how their data is used or reused, or how that data places them in certain classes or ‘buckets’ for everything from something as simple as shopping histories to predictive policing, predictive intelligence, surveillance and reconnaissance, etc. The tools themselves are usually built to allow for the minimal levels of control while protecting IP.”

Ruben Nelson, executive director of Foresight Canada, predicted, “My sense is that the slow but continuous disintegration of modern techno-industrial (MTI) cultures will not be reversed. Even today many people, if not yet a majority, are unable to trust their lives to the authority claimed by the major institutions of the culture – science, religion, the academy, corporate business. Over the last 30 to 40 years, more and more folks – still a minority but more than a critical mass – have quietly withdrawn their trust in such institutions. They no longer feel a deep sense of safety in their own culture. The result is a great fracturing of what used to be a taken-for-granted societal cohesion. One result is that many no longer trust the culture they live in enough to be deferential and obedient enough to enable the cultures to work well. This means that those who can get away with behaviours that harm the culture will have no capacity for the self-limitation required for a civil society to be the norm.

“I expect greater turmoil and societal conflict between now and 2035, and many with power will take advantage of the culture without fear of being held accountable. So, yes, some AI will be used to serve a deeper sense of humanity, but minorities with power will use it to enhance their own game without regard for the game we officially say we are playing – that of liberal democracy. Besides, by 2035, the cat will be out of the bag that we are past the peak of MTI ascendency and into a longish decline into greater incoherence. This is not a condition that will increase the likelihood of actions which are self-sacrificial.”

An internet systems consultant wrote, “Large companies are dictating more and more decisions for consumers, generally favoring themselves. Their use of automatic remote firmware updates allows them to arbitrarily change the behavior of many products long after those products are initially purchased. The lack of competition for many kinds of technology products exacerbates this problem. The introduction of AI in various aspects of product and service design, implementation and operation will likely make the effects of choices still delegated to ordinary consumers less effective for those consumers. One example: A consumer will have no idea whether their deliberate or accidental viewing of particular images online will be seen by an AI as evidence of possible criminal activity warranting investigation by the government. Such triggers will have a chilling effect on human inquisitiveness. The use of AI will tip the balance away from individual rights toward increased automated surveillance. Companies’ uses of AI to detect potential criminal activity may even be seen by courts to shield those companies from violating privacy protection laws. It’s also possible that courts will treat AI estimates of potential criminal activity as relatively unbiased agents that are less subject to remedial civil action when their estimations are in error.”

Ramon F. Brena, a longtime professor of computer science based at Tecnológico de Monterrey, Mexico, commented, “A large percentage of humans will not be in control of many decisions that will impact their lives because the primary incentive for Big Tech is to make things that way. The problem is not in the technology itself but in the incentives for large tech companies like Meta, Google, Tesla and so on. Much of the relationship between people and digital products is shaped by marketing techniques like the ones described in the book ‘Hooked.’ Tech design is centered on making products addictive, thus driving people to make decisions with very little awareness about the likely consequences of their interactions with them. Digital products appear to make life ‘easy,’ but there is a hidden price. There is an implicit struggle between people’s interests and big companies’ interests. They could be aligned to some degree, but Big Tech companies choose to follow their own financial goals.”

An anonymous respondent wrote, “Your question should be ‘Will any human have control over large AI systems?’ Corporations are mostly agents already; they actually were way back in the 1970s and ’80s, before computers really took off. Large corporations are entities with minds of their own, with desires and ethics independent of any human working for them. I am more worried about monopolization and market power than about AI having a mind of its own. The problem of ‘Amazon controls the AI that runs your world’ is Amazon, not the AI.”

Sarita Schoenebeck, associate professor and director of the Living Online Lab at the University of Michigan, said, “Some people will be in charge of some automated decision-making systems by 2035, but I’m not confident that most people will be. Currently, people in positions of power have control over automated decision-making systems and the people whose lives are affected by such systems have very little power over them. We see this across industries: tech, health care, education, policing, etc. I believe that the people and companies building automated systems will recognize that humans should have oversight and involvement in those systems, but I also believe it is unlikely that there will be any meaningful redistribution in regard to who gets to have oversight.”

Ebenezer Baldwin Bowles, an activist and voice of the people who blogs at corndancer.com, wrote, “By keeping the proletariat clueless about the power of technology to directly and intentionally influence the important decisions of life, big money and big government will thrive behind a veil of cyber mystery and deception, designed and implemented to confuse and manipulate.

“To parse the question, it is not that humans won’t be in control, but rather that things won’t ‘easily be in control.’ (That is, of course, if we as a human race haven’t fallen into global civil warfare, insurrection and societal chaos by 2035, which some among us suspect is a distinct possibility.) Imagine yourself to be a non-expert and then see how not easy it becomes to navigate the cyberlands of government agencies, money management, regulatory bodies, medical providers, telecommunications entities and Internet pipelines (among others, I’m sure). Nothing in these realms shall come easily to most citizens.

“Since the early aughts I’ve maintained that there is no privacy on the Internet. I say the same now about the illusion of control over digital decision-making powered by AI. The choices offered, page to page, belong to the makers. We are seldom – and ne’er fully – in charge of technology – that is, unless we break connections with electricity. Systems are created by hundreds of tech-team members, who sling together multiple algorithms into programs and then spill them out into the universe of management and masters. We receive them at our peril.”

Greg Lindsay, nonresident senior fellow at the Atlantic Council’s Scowcroft Strategy Initiative, commented, “Humans will be out of the loop of many important decisions by 2035, but they shouldn’t be. And the reasons will have less to do with the evolution of the technology than politics, both big and small. For example, given current technological trajectories, we see a bias toward large, unsupervised models such as GPT-3 or DALL-E 2 trained on datasets riddled with cognitive and discriminatory biases using largely unsupervised methods. This produces results that can sometimes feel like magic (or ‘sapience,’ as one Google engineer has insisted) but will more often than not produce results that can’t be queried or audited.

“I expect to see an acceleration of automated decision-making in any area where the politics of such a decision are contentious – areas where hard-coding and obscuring the apparatus are useful to those with power and deployed on those who do not.

“Seemingly superior results and magical outcomes – e.g., an algorithm trained on historical crime rates to ‘predict’ future crimes – will be unthinkingly embraced by the powers that be. Why? First, because the results of automated decision-making along these lines will preserve the current priorities and prerogatives of institutions and the elites who benefit from them. A ‘pre-crime’ system built on the algorithm described above and employed by police departments will not only post outcomes ad infinitum, it will be useful for police to do so. Second, removing decisions from human hands and placing them under the authority of ‘the algorithm’ will only make it that much more difficult to question and challenge the underlying premises of the decisions being made.”

Jonathan Taplin, author of “Move Fast and Break Things: How Google, Facebook and Amazon Cornered Culture and Undermined Democracy,” focused his response on the choices people might or might not have in a further-out future world in virtual reality. “Every indication of the design of the metaverse is that humans will have less agency once they enter the virtual world. The very presentation of what you see will be driven by Big Tech algorithms, which in turn will be determined by advertisers bidding on the opportunity to present their product to the person with the VR gear on. The only decisions that will require human input will be the decision to purchase (with crypto) some product or experience. All of this will accelerate the move toward a transhumanist future, a future that Francis Fukuyama has called ‘the world’s most dangerous idea.’”

Eduardo Villanueva-Mansilla, associate professor at Pontificia Universidad Católica del Perú and editor of the Journal of Community Informatics, said, “Humans’ experiences will depend on the level of control that large corporations have over machines driven by artificial intelligence. As the experience thus far indicates, without regulation the potential for profit-driven design will determine how much and for what services these systems will be deployed and any social benefits thereof.”

Llewellyn Kriel, retired CEO of a media services company based in Johannesburg, South Africa, warned, “The future in this context looks bleaker by the day. This is primarily due to a venal confluence of cybercrime, corporate bungling and government ignorance. This has meant and will continue to mean that individuals (‘end users’) will inevitably be overlooked by programmers. The digital divide will see parts of Africa plunge further and further behind, as intractable corruption entrenches itself as a lifestyle and no longer merely an identifying oddity. The continent is already a go-to haven of exploitation in which the only winners are corruptocrats, some outside nation-states and a handful of mega corporations (especially banks, insurance, medical and IT).”

Karl M. van Meter, author of “Computational Social Science in the Era of Big Data” and leader with the Association Internationale de Méthodologie Sociologique, wrote, “In the automation cases of Boeing’s 737 and of assembly lines where workers have died due to lack of control, higher echelons decided safer ‘tech-aided decision-making’ was either too expensive or couldn’t be installed on time. Such administrative decisions will very probably determine where we will be by 2035, if we all agree to keep technology out of making major social, political, economic, ecological and strategic decisions. Does anyone consider that the roles of presidents, senators, judges and other powerful political positions might be filled on the basis of ‘tech-aided decision-making’?

“The same question can be asked of the financial, economic and business sectors. Should housing, poverty, health and environment policies also be based on ‘tech-aided decision-making,’ or is it more likely that the public might prefer that these decisions should come about through a process that includes discussion among stakeholder human beings, with technology simply providing additional information, analysis and the suggestion of possible future scenarios based on eventual choices made? We already have witnessed – on the negative side in the case of the Boeing 737 – the result when tech-aided decision-making computer flight programs could not ‘easily be controlled’ and – on the positive side – ‘tech-aided’ micro-surgery and tele-commanded operations in distant clinics during which no surgeon would ever let the tech work without keeping it under control.”

George Onyullo, an environmental-protection specialist at the U.S. Department of Energy and Environment, commented, “The broadening and accelerating rollout of tech-abetted, often autonomous decision-making may change human society by increasing human suspicion and decreasing trust. The relationship between humans and machines, bots and systems (powered mostly by autonomous and artificial intelligence) will likely be more complex than it is currently. The more machines are allowed to get into the decision-making spaces that are relevant to people’s lives, the more people will interrogate the ability of machines to make those decisions.”

Sebastian Hallensleben, head of digitalization and artificial intelligence at VDE Association for Electrical, Electronic and Information Technologies, responded, “In my view, the vast majority of humans who are affected (positively or negatively) will indeed not be in control of decisions made by machines. They will lack access to sufficient information as well as the technical competence to exert control. However, a small minority of humans will be in control. In a rosy future, these would be regulatory bodies ultimately shaped through democratic processes. In a less rosy (but probably more likely) future, these will be the leadership tiers of a small number of large global companies.”

Calton Pu, professor and co-director of the center for experimental research in computer systems, Georgia Tech, wrote, “There will not be one general trend for all AI systems. Given the variety of smart machines, bots and systems that will incorporate some kind of AI, a complex question has been simplified to provide the clarity of a binary answer. The question on human decision-making has two implicit dimensions: 1) technical vs. managerial, and 2) producer vs. consumer.

“On the first dimension, there are some technical constraints, but the manufacturers are expected to develop the technology to provide a wide range of capabilities that can support many degrees of human control and decision-making in AI-assisted products and systems. However, of the wide range of technical capabilities, the management will choose what they think will be appropriate for them either politically (governments) or for profit (companies).

“On the second dimension, the producers will be guided by managerial decisions (since the technical capabilities will be many). In contrast, the consumers will have their own wide range of preferences, from full human control to full automation (machines making many decisions). Producers may (or may not) choose to satisfy consumer preferences for a variety of political and monetary reasons.

“An analysis from these two dimensions would suggest that the relationship between humans and machines will not be dominated by technology forecasts. Instead, the selection of available technology in products and systems will be dictated by political/monetary concerns. Therefore, technological advances will provide the capabilities to enable a wide range of answers to the question on which key decisions that will be automated or requiring human input. However, the management will determine which way to go for each decision for political or monetary reasons that can change quickly in the space-time continuum.

“It is common for management to hide political/monetary reasons behind technological facilities, e.g., blaming an algorithm for ‘automated’ decision-making, when they specified the (questionable) policies implemented by the algorithm in the first place. In many such cases, the so-called autonomous decision-making is simply a convenient mislabel, when the systems and products have been carefully designed to follow specific political/monetary policies.”

Scott Santens, author of “Let There Be Money” and editor of @UBIToday, said, “Although it is entirely true that technology can liberate people and increase human agency and enable everyone to be more involved in the collective decision-making process of how to implement technology in a way that benefits everyone, the status quo is for it to benefit only some, and that will remain until people start thinking differently.

“Humankind has a trust problem. Society today seems to be built on distrust – we must assume by default we can’t trust each other. We are witnessing a continuing decline of trust around the world. The trend is not toward greater trust over time. Will that change because we have more technology? Technology is playing an active role in the decline of trust, as it is used to spread misinformation and disinformation and create polarization. More technology seems unlikely to resolve this. It is more likely that as technology advances, those in power will prefer to sustain that power by avoiding putting more control into the hands of humans. They will, instead, choose to utilize the opportunity tech provides to reduce the agency of humans. If humans can’t be trusted, trust machines instead. The public seems likely to go along with that as long as trust in each other remains low.”

Tyler Anderson, a senior user-experience designer at one of the top five global technology companies, commented, “While there are a few highly visible areas over which humanity will retain a degree of control (think self-driving cars as the most present example), many of the more important areas of our lives are already slipping solely into the hands of AI and related IT. The types of information we consume on a daily basis and the long arc of how our popular culture reflects our world are influenced by algorithms over which we have no control. This will only increase and spread to more areas of our lives.

“Our health care system, already heavily commoditized and abused, is soon to be invaded by big tech companies looking to automate the caregiving process. There still may be doctors present to add a human touch to medical visits, but behind the scenes all patients will have their anonymized health care data put into an information thresher and algorithmic diagnoses will be tested based on reported symptoms and demographic details. This is but one example of the critical areas of the human experience into which AI will be further integrated, and the results of this integration are likely to be uneven at best.

“It is in the day-to-day activities of society where the concern really lies, where it’s quite likely that human input won’t even be considered. Hiring processes, scripting on television shows, political campaigns – all of these are areas that have a direct impact on human lives, and already we’re seeing AI-fueled predictive algorithms affecting them, often without the proper knowledge of what’s going on behind the scenes to generate the decisions that are being put into action. If these aspects of decision-making have already lost the benefit of human input, how can we hope things will get better in the future?

“All of these and many more will one day be completely run by AI, with less human input (because just think of the savings you get by not having to employ a human!). As this continues to proliferate across industries and all aspects of our lives, the damage to our society could become irreparable. The influence of algorithmic AI is permanently changing the ways in which we view and interact with the world. If this continues unchecked, it could result in a new form of AI-driven fascism that may further decimate culture and society.”

Humans value convenience and will continue to allow black-box systems to make decisions for them

A share of these experts discussed the long-standing human practice of taking convenient shortcuts and made the case that most people are already allowing programmed machines to greatly narrow their choices or even decide what they can and should do. They noted that it is common for people to accept the first automated choice suggested by an app or only skim through a few options in a split second without examining more of the options available. Even when people are given an opportunity to exercise any type of control right now, most do not choose to do so – sometimes because making the choice is not easy to do.

Alan S. Inouye, senior director for public policy and government relations at the American Library Association, commented, “Many or most users don’t actually want control. As long as the recommendations or decisions that the system proposes seem to generally be on-target, people enjoy having the technology in control. And as more and more data are interconnected and there is more historical data to mine, personally as well as in the aggregate, system decisions are apt to improve. Even today, Google search results or Amazon recommendations are rather on-target. For most people, ease of use is more important than user control.”

Marti Hearst, professor and head of the school of information, University of California, Berkeley, said, “In general, interfaces to allow people to adjust settings do not work well because they are complicated and they are disfavored by users. Consider tools that aid people in what they are doing, such as search engines or machine translation. These tools use a lot of sophisticated computation under the hood, and they respond quickly and efficiently to people’s queries. Research shows that people do not want to adjust the settings of the underlying algorithms. They just want the algorithms to work as expected.

“Today’s machine translation tools work adequately for most uses. Research shows that translators do not want to expend a lot of effort correcting a poor translation. And users do not want to take the time to tweak the algorithm; they will use the results even if they are poor since there is often no other easy way to get translations. Alternative interfaces might give users lots of choices directly in the interface rather than anticipating the users’ needs. This can be seen in faceted navigation for search, as in websites for browsing and searching for sporting goods and home decor products.

“Tools will continue to make important decisions for people, whether they want this or not. This includes settings such as determining recidivism and bail, judging teacher performance and perhaps including push advertising and social media feeds. These tools will not allow for any user input since it is not in the interests of those imposing the decisions on others to do so.”

Ayden Férdeline, a public-interest technologist based in Berlin, Germany, asked, “Are consumers concerned enough about the risks associated with artificial intelligence and the deep analytics that AI can generate that they will actively seek out and shift their behavior to consciously adopt only the new technologies that support their agency and autonomy? Particularly if they don’t even know the current technologies are monitoring them? As it stands, in poll after poll, consumers seem to indicate that they don’t trust much of the information being shared on the World Wide Web and they say they believe their privacy and security are compromised – yet barely anyone uses an ad blocker on their web browser and billions of people use Facebook and Google, while they are quite cognizant of the fact that those companies’ business models are built off of monetizing their personal information.”

John L. King, professor of information studies and former dean, University of Michigan, said, “The issue is not that people cannot exercise some level of agency but instead that they usually will not when given a choice. Today, using button-click permission structures such as click-wrap, people simply give away control without thinking about it. Most users will do what is necessary to avoid extra work and denial of benefits. This is illustrated by current systems that allow users to prohibit cookies. Users face two choices: Allow cookies to get the goodies or prohibit cookies and get less. It’s hard to tell how much less. Users who remain in control get extra work. Most users will take the low-energy-cost path and opt for letting the system have its way as long as it appears not to hurt them directly. Those who benefit will make transfer of power to them easy and make it easy for the end user from then on. Users, like Pavlov’s dogs, click whenever they see the word ‘Accept.’ Those who benefit from this will push back on anything that makes it easier for users to be in control.”

The director of initiatives at a major global foundation focused on keeping communications networks open and accessible commented, “I expect that the majority of humans will not be in control of important decision-making in the year 2035. In addition to the fact that there is less profit for builders and managers of the tech if they work to support humans in understanding their options and exercising their agency:

  • “There appears to be a strong human tendency to give away agency to other entities, even if these entities are machines, especially when the interfaces undermining human agency are designed to be attractive and/or easier to use.
  • “A significant percentage of the population may not be concerned with exercising agency if the options given to them to help them manage or personalize their interactions with machines are either complex or already programmed to be close enough to what they would want in any case.”

Barry Chudakov, founder and principal, Sertain Research, wrote, “In a word, the relationship between humans and machines, bots and systems powered mostly by autonomous and artificial intelligence in 2035 will be fraught. In effect, questions of agency are off the table. Our devices as currently designed bypass agency, trick agency, deaden agency, lull agency – and these are just the crude forerunners to the good stuff, to devices and technologies just around the corner in the metaverse and omniverse.

“Looking at your phone can be so addictive you don’t notice you’re ignoring a loved one; your attention can be compromised when driving, with deadly consequences. These are agency compromises. If you had full-awareness agency, you would notice that being alone together is not the purpose of togetherness; or that driving while texting is dangerous. But the device has usurped a measure of your agency.

“The nature of consumer-focused smart tools is to keep the logic and dynamics of the tools hidden, or at least less noticeable, and to engage the user with features that are quick, convenient, pacifying. These are the enemies of agency. The inside revolt of people in technology development is an enlightened pushback against the usurping of agency: Steve Jobs wouldn’t let his kids use an iPad or iPhone. Jaron Lanier has written extensively about the danger of treating humans as gadgets. Former Google insider Tristan Harris has railed against social media algorithms and how they amplify nonsense, creating the new propaganda, which he calls ‘amplifiganda.’ Stephen Hawking said that efforts to create thinking machines pose a threat to our very existence.”

Steven Marsh, an associate professor at Ontario Tech University, Canada, and a computational philosopher with expertise in human norms, wrote, “To begin with, the question presupposes that the systems we will be using will indeed be ‘smart’ or ‘intelligent’ enough to be in control. I see no reason why this should be the case. It is indeed the case that we have plenty of machines now that we put ‘in control,’ but they’re not smart, and humans, as liminal creatures, are able to deal with the edge cases that systems cannot, and do so much better than the machines. I believe this is likely to continue to be the case for some time. The danger is when humans relinquish that ability to step in and correct. Will this be voluntary? Perhaps. There are organizations that are active in trying to ensure we can, or that systems are less opaque (like the Electronic Frontier Foundation, for instance), and this is going to be necessary.

“My own take on where humans might remain in the loop is in the area of ‘slow’ computing, where when systems do reach edge cases, situations they haven’t experienced or don’t know how to deal with, they will appropriately and obviously defer to humans. This is especially true where humans are present. There are plenty of philosophical problems that are present here (trust, for one) but if handled properly conundrums like the trolley problem will be seen to be the fallacy that they are.”

Robin Cooper, emeritus professor of computational linguistics at the University of Gothenburg, Sweden, commented, “I am pessimistic about the likelihood that humans will have come around to understanding the limits of technology and the right way to design appropriate interfaces by 2035. We will still be in the phase of people believing that AI and ML techniques are more powerful than they really are because we can design machines that appear to behave sensibly. Any key decision should require direct human input – with assistance from knowledge gained from AI technology if appropriate. Given current AI technology trends, some major consequences of the broadening of autonomous AI decision-making by 2035 could include:

  • Major disasters caused by absurdly wrong predictions or interpretations of events (e.g., early-warning systems indicate possible nuclear attack).
  • A perpetuation of discriminatory behaviour based on previous data (e.g., systems that screen job applicants).
  • A stifling of humans’ capabilities for change and creativity (again because current AI techniques are based on past behaviour rather than on reasoning about the future).”

Mark Crowley, an assistant professor of computer engineering at the University of Waterloo, Canada, whose research seeks dependable and transparent ways to augment human decision-making, responded, “The public already has far too much confidence today in accepting the advice coming from AI systems. They seem to have a false sense that if an AI/ML-powered system has generated this answer it must be correct, or at least very reasonable. This is actually very far from the truth. AI/ML can be arbitrarily wrong about predictions and advice in ways human beings have a difficult time accepting. We assume systems have some baseline of common sense, whereas this is not a given in any software system. Many AI/ML systems do provide very good predictions and advice, but it entirely depends on how hard the engineers/scientists building them have worked to ensure this and to test the boundaries. The current trend of ‘end-to-end learning’ in ML is very exciting and impressive technically, but it also magnifies this risk, since the entire point is that no human prior knowledge is needed. This leads to huge risks of blind spots in the system that are difficult to find.”

Daniel R. Mahanty, innovations unit director for the Center for Civilians in Conflict, commented, “The gradual and incremental concession of human control takes place in ways that most people either don’t recognize or don’t muster the energy to resist. Take, for example, the facial-recognition screening taking place in airports – it is a form of intrusion into human agency and privacy that most people simply don’t seem to see as worth questioning. We can also see this in other interactions influencing human behavior – e.g., along the lines of the nudge theories of behavioral economics. These have been applied through imperceptible changes in public policy; humans are and will be directed toward behaviors and decisions influenced by automated processes of which they are not even aware.”

Axel Bruns, Australian Research Council Future Fellow and professor at the Digital Media Research Centre, Queensland University of Technology, Australia, said, “Blind belief in automated solutions still prevails without sufficient caution. Yes, some aspects of human life will be improved by automated systems, especially for well-off segments of society, but there is an equal or even greater tendency for these systems to also be used for social sorting, surveillance and policing purposes, from the automated vetting of job applications through the generation of credit scores to sociodemographic profiling. In such contexts, the majority of people will be subject to rather than in control of these automated systems, and these systems will actively curtail rather than assist individual and collective human agency. The fundamental problem here is that such systems are often designed by a small and unrepresentative group of tech solutionists; fail to take into account the diverse needs of the population they are designed to address; fail to consider any unintended consequences of algorithmic intervention (i.e., apply a trial-and-error approach without sufficient safeguards); and are often too complex to be understood by those who are supposed to regulate them (or even by their designers).”

Bryan Alexander, futurist, consultant and senior scholar at Georgetown University, said, “We will cede a good amount of decision-making to automation (AI and/or robotics) for several reasons, the first being that powerful force: convenience. Letting software take care of tasks we usually don’t enjoy – arranging meetings, doing most email, filing taxes, etc. – is a relief to many people. The same is true of allowing robots to take care of dishwashing or driving in heavy traffic. A second reason is that while our society putatively claims to enjoy sociability, for some people interpersonal encounters can be draining or worse and they prefer to automate many interactions. Further, there are tasks for which human interaction is not something nearly anyone enjoys; consider, for example, most bureaucratic functions. A third reason is that many people will experience social and political instability over the next dozen years due to the climate crisis and legacy political settlements. Settings like these may advance automation, because governments and other authorities may find automating rule and order to be desirable in chaotic situations, either openly or in secrecy. People may cede decision-making because they have other priorities. Each of these causes, however, contains the possibility of people demanding more control over decision-making. Social unrest, for example, often features political actors vying for policy direction. Our issues with human interaction may drive us to want more choices over those interactions, and so on. Yet I think the overall tendency will be for more automated decision-making, rather than less.”

Emmanuel R. Goffi, co-founder and co-director of the Global AI Ethics Institute, noted, “By 2035, in most instances where fast decision-making is key, AI-fitted systems will be naturally granted autonomy. There is a good chance that by 2035 people will be accustomed to the idea of letting autonomous AI systems do most of the work of their decision-making. As any remaining reluctance to use machine autonomy weakens, autonomous systems will grow in number, and many industries will promote their advantages in order to make them the ‘new normal.’ You should know that many in the world see the idea of human control/oversight as a myth inherited from the idea that human beings must control their environment. This cosmogony, in which humans are at the top of the hierarchy, is not universally shared but greatly influences the way people in the global north understand the role and place of humans. Practically speaking, keeping control over technology does not mean anything. The question of how decisions should be made should be addressed on a case-by-case basis. Asserting general rules outside of any context is pointless and misleading.”

A research scientist expert in developing human-machine systems and machine common sense said, “I do think AI bots and systems will be ‘designed’ to allow people to be in control, but there will likely be many situations where humans will not understand how to be in control, or they will choose not to take advantage of the opportunity. AI currently is not all that good at explaining its behavior, and user-interface design is often not as friendly as it should be. My answer is that people are already interacting with sophisticated machines – e.g., their cars – and they allow their phones to support more and more transactions. Those who already use technology to support many of their daily tasks may have more trust in the systems and develop deeper dependencies on the decisions and actions enabled by these systems. How might the broadening and accelerating rollout of tech-abetted, often autonomous decision-making change human society? It could make people lazier and less interested in understanding certain aspects of daily life. Also, if these services get disrupted or stop, people might have a hard time going back to doing things in a ‘manual’ way.”

A researcher at a North American university said, “To date, humankind has shown an immense and innate capacity to turn over decision-making to others – religious leaders, political leaders, educational leaders, technology – even at the expense of the individual’s best interest or society’s best interest. It is unclear whether this choice has been made because it is easier, because it absolves the individual of responsibility or for some other reason. While tech-guided decision-making could be extraordinarily beneficial, it is likely that future tech-guided decision-making will further remove morality from decision-making. The responsibility will be on humans to ensure morality remains in the equation, whether as a component of tech-aided decision-making or as a complementary human element.”

An award-winning human-centered AI researcher who works with one of the top five global technology companies commented, “If you asked people in 1990 whether allowing Google to tell us how to get from every point A to every point B would be removing their agency, many would have said ‘yes,’ but now most would say ‘no.’ A similar analogy can be made for information retrieval/search. I am not a scholar of agency, but my guess is that it is about power. If people feel empowered by AI, they will feel they have agency. It’s subjective. What isn’t as subjective is whether the rewards from these empowerment tools more generally are being distributed equitably.”

Carl Schramm, professor of information science at Syracuse University and a leading authority on innovation, entrepreneurship and economic growth, commented, “Not only does the logic of decision-support technology work to displace the decision-making capacity of individuals, there is a denial of agency as they interface with such technology. A much larger issue is the overall societal damage being done to human agency by social theorists who seek to absolve individuals of individual and social responsibilities. One incontestable example is the government’s Social Determinants of Health. This rhetorical device is continuously used in public policymaking to deny agency as central to individuals’ taking responsibility for protecting their own health.”

A number of respondents noted that, of course, people have been finding and will continue to find great advantage in allowing machine-based decision-making to simply and nearly invisibly take over more aspects of their daily lives while still allowing a certain amount of individual control in specific situations.

The net impact will be greater opportunities for control of one’s life than before. Some people may be happy putting their lives completely on autopilot. Others will want to have full control. Most will probably be somewhere in the middle, allowing algorithms to make many decisions but scheduling regular check-ins to make sure things are going right.

Daniel Castro, vice president and director of the Center for Data Innovation at the Information Technology and Innovation Foundation

Daniel Castro, vice president and director of the Center for Data Innovation at the Information Technology and Innovation Foundation, asked, “When you wake up to an alarm, is this a machine in control of a human or a human in control of a machine? Some would argue the machine is waking up the human, so therefore the machine is in control. Others would say the human set the alarm, so therefore the human is in control. Both sides have a point. What is exciting about AI is that we can move the debate to a new level – humans will have the option to use technology to better understand their entire sleep patterns, and how factors like diet, exercise, health and behavior impact their sleep and what options are available to them to change. Some of this will be automated, some of this will involve direct human choice and input. But the net impact will be greater opportunities for control of one’s life than before. Some people may be happy putting their lives completely on autopilot. Others will want to have full control. Most will probably be somewhere in the middle, allowing algorithms to make many decisions but scheduling regular check-ins to make sure things are going right – the same way that people may check their credit card bills, even if they have autopay.”

Gary Marchionini, dean at the University of North Carolina-Chapel Hill School of Information and Library Science, commented, “I am optimistic that reflective individuals will continue to demand control of key life decisions. I don’t want to control the antilock braking system on my car because I believe the engineering solution is superior to my body-mind reflexes. But I do want to be able to talk to my physician about what kind of treatment plan is best for medical conditions. The physician may use all kinds of tools to give me scenarios and options, but the decision on whether to do surgery, chemotherapy or nothing should (and I believe will) continue to rest with the individual. Likewise with financial decisions, whom I choose to love, and how I advise my children, grandchildren and students. It is more difficult to imagine personal agency in decisions that affect society. I can choose whom to vote for, but only among two or possibly three candidates – how those candidates rise to those positions may be strongly influenced by bots, trolls and search engine optimization and marketing algorithms, and this is indeed worrisome.”

Marc Brenman, managing partner at IDARE LLC, observed, “Humans already make many bad decisions based on wrong assumptions with poor or no inference, logic, evidence or critical thinking. Often, researchers and thinkers compare machines to the best of humans, instead of comparing them to all humans as we are. Machines are already making better decisions, even simple machines like toasters, ovens and microwaves. In addition, humans are already becoming more bionic and artificial, for example through implants to help hearing and heartbeat, to reduce Parkinson’s disease and epilepsy; metal knees, hips and shoulders; teeth implants; pins and screws; prostheses; etc. Our cars already make many decisions for us, such as automatic stopping and lane-keeping. GPS systems tell us where to go.”

A U.S.-based designer expert in human-computer interfaces said, “The successes of widely adopted technologies have resulted in people already being much less ‘hands-on’ in understanding and analyzing their lives. Humans don’t seem to want to think for themselves; they seem to prefer allowing a black box to govern them. They are using fertility trackers, autonomous driving and automatic scheduling assistants to guide their daily activities. The key automations are probably purchasing (food/clothing/consumer item subscription services) and wellness (checking in on health statistics, scheduling appointments for doctors and exercise regimens).

“All this automation means people may be less aware of the cause-and-effect between their habits and their own health. Instead of reasoning and experimenting among the different choices they are making, they are given a ‘standard plan’ without needing to understand the science and components of wellness. They are also losing the ability to communicate about their own health choices, and services outside the norm may be missed or reduced. Further, as people take less active involvement in their own care and management, they will be less educated on how best to care for their own welfare.”

AI technology’s scope, complexity, cost and rapid evolution are just too confusing and overwhelming to allow users to assert agency

A portion of respondents noted that the digital technology ecosystem is fast-changing, complex, broadly developed and already mostly centralized by powerful players. They say this makes it far too complicated to build tools and platforms in a way that allows individuals the chance to have much agency over their actions when they are within these systems.

William Lehr, an economist and tech industry consultant who was previously associate director of the MIT Research Program on Internet and Telecoms Convergence, wrote, “Bots, agents, AI and still mostly non-AI ICTs [information and communication technologies] are already far more advanced than most folks recognize – your car, your appliances and the way companies make decisions are already heavily ICT-automated, and AI in multiple forms is part of that. Most folks are incapable of figuring out how their gadgets work even if they have lots of old-fashioned common sense and hands-on (non-tech) savvy and skills. When your car/washing machine/stove breaks today, it is often due to failure in a control board that requires specialized software/IT tools to diagnose.

“By 2035 we will have lots more AI to obscure and address the complexity of ICT automation. For many folks that will make life easier (fewer decisions requiring human, real-time cognition), although lots more human-focused decisions will be forced on people as a matter of policy (e.g., to enable realistic end-user ‘consent’ to data usage as a byproduct of privacy/security/data management policies being crafted and so on).

“Yes, this means that we will substitute one type of pain-in-the-neck problem (‘I have to think about my tech’ instead of ‘it just works’) for others – however, that is necessary. So, in the end will it really be ‘easier’? I doubt it. ‘Who is in control?’ is the big-bucks question: Who/what/how is control sustained? AI will have to be part of the solution because it will certainly be part of the problem.”

Federico Gobbo, professor of interlinguistics at the University of Amsterdam, Netherlands, said, “Humans are losing control of the proliferation of abstract machines. Most of the current systems are so complex that no single human can understand their complexity, both in terms of coding (the well-known ‘many hands problem’) and in the tower of interfaces. In fact, most communication now is no longer human-machine but machine-machine. Autonomous systems are prone to errors, but this is not the main issue. The crucial point is accountability: Who is responsible for errors?”

Humans are losing control of the proliferation of abstract machines. Most of the current systems are so complex that no single human can understand their complexity, both in terms of coding (the well-known ‘many hands problem’) and in the tower of interfaces.

Federico Gobbo, professor of interlinguistics at the University of Amsterdam, Netherlands

Gus Hosein, executive director of Privacy International, commented, “We need to step away from the consumer and individual frame in which we worry about a company designing something that will shape what people do based on some product or service or some device. Look at all the areas of our lives where we already have centralized control and very little autonomy. Look at all the institutions that make unaccountable procurement and deployment decisions, the institutions that already have access to our bodies and our lives. With their lack of accountability and their inappropriately near-unlimited resources, they are going to be the institutions developing and deploying the systems that matter most to individuals. And (whether willfully or not) they will deploy these systems in ways that undermine the autonomy and dignity of people. Border agencies. Welfare systems. Employers. Policing. Credit. Cost-limited health care. Schooling. Prisons.”

An accomplished professor of computer science at a leading U.S. Ivy League university wrote, “In scenarios involving decision-making mediated by machine learning (ML), it’s hard to imagine that humans will have agency in those interactions, barring significant advances in the field of explainable ML. Modern technological systems are extremely complex, so explaining how such a system works in a way that is complete, transparent and understandable to laypeople is hard. Lacking such explanations, laypeople will struggle to make fully informed decisions about how to use (or not use) technology. With respect to ML technologies in particular, models like GPT-3 are already sufficiently complicated that engineers (let alone laypeople) struggle to fully understand how the models work.”

An applications-design professional said, “I work with teams that create AI applications in areas such as cybersecurity. What I see is that the technology is almost completely incapable of collaborating with humans in any meaningful way. The ideal scenario would be one where the computer does complex analysis but then explains its analysis to end users, who can then make informed decisions about what they want to do with the analysis. But this is NOT what I am seeing. What I see is that the analysis is so complex the computer is not able to explain its reasoning, nor is it able to provide meaningful ways for the human to coach it into better decisions.”

A professor of computer science based in Canada said, “We interact with computers and AI systems within too many contexts to have all of these properly audited and controlled. The sum of many small and/or seemingly insignificant decisions suggested by our technology in the future will end up having larger unintended consequences on our daily lives. Humans should be in control of important decision-making, but without significant action by governing bodies and other regulations, this will not begin to happen, and even if effective governance might be adopted for some fraction of important decisions, it is unlikely to be universal.”

John Lazzaro, retired professor of electrical engineering and computer science at the University of California, Berkeley, said, “It is tempting to believe that we can outsource the details that determine our interactions to a machine while maintaining high-level control. But I would argue granular decisions are where the true power of being human lies. When we delegate nuanced choices away, we surrender much of our influence.

“We can see this dynamic in play whenever we compose using the Gmail web client. If one turns on the full suite of machine-learning text tools (smart compose, smart reply, grammar suggestions, spelling suggestions, autocorrect), preserving your personal voice in correspondence is a struggle. The medium quickly becomes the message, as you find yourself being prodded to use words, phrases and constructions that are entirely not your own.

“We also see this dynamic at play in the computational photography tools at the heart of the modern smartphone camera experience. Schematically, an algorithm recognizes that the photographer is taking an outdoor photo with a sky and uses machine-trained (or hand-crafted) models to fashion a sky that ‘looks like a sky should look.’ But on September 9, 2020, in the San Francisco Bay Area, when fire ash created an ‘apocalypse’ red-orange sky, computational photography models made it impossible to capture a sky that was ‘what a sky should never look like.’”

Alejandro Pisanty, Internet Hall of Fame member, longtime leader in the Internet Society and professor of internet and information society at UNAM, National Autonomous University of Mexico, predicted, “The rollout of automated, to some extent autonomous, decision-making systems is not happening as one central decision made in public. It is death by a thousand cuts in which smaller units develop or purchase and deploy such systems. The technical complexity is hidden, and even well-trained scientists and technologists struggle to keep up with the pace of developments and their integration. It is thus quite difficult for them to inform the general population clearly enough and in time.

“In addition, atavistic beliefs and disbeliefs have gained too much space in societies’ decision-making. This will be transferred to the development and deployment of automated systems. The politicization during the COVID-19 crisis of even basic decisions, such as respiratory hygiene for preventing a respiratory disease or immunization against it, negates the truly miraculous level of scientific and technological development that allowed humankind to have a vaccine against the disease less than a year after it began ravaging human life. The flaws in elementary logic exhibited by citizens who have completed 12 or more years of education in advanced economies are appalling and sadly have too much influence.

“That these systems are powered by what we now call ‘AI’ (in its different forms) is of secondary importance to the fact that the systems are automated and black-boxed. Technologists cite some good reasons for black-boxing, such as to prevent bad actors from hacking and weaponizing the systems; but this ‘security by obscurity’ is a rather naïve excuse for hiding the work behind the AI, because simple reverse engineering and social engineering can be applied to weaponize these systems anyway.”

Sean Mead, CEO at Ansuz Strategy, predicted, “By 2035, human moderation of AI and augmented technology will rarely be available in any significant manner in most settings. Cost control, speed and reduction of ambiguity in response will drive cutting humans out of the decision loop in most circumstances. One of the exceptions will be combat robots and drones deployed by the U.S., which will maintain humans in the loop at least as far as approval of targets; the same will not be true for Russian and Chinese forces. The improved automation will threaten economic security for wide swaths of today’s employees, as the creation of new jobs will fall far behind the automated replacement of jobs.”

An information security, risk, privacy and identity expert based in Texas said, “Designers are lazy and marketers are arrogant. Designing a complex system to be easy for non-expert users to understand and guide is very difficult, and most product designers will opt not to do it, falling back instead on simpler modes of operation for automated decision-making systems that don’t require human input. Product marketers will then overstate the advantages of these relatively limited applications of automation.”

Scott Johnston, an Australia-based researcher and educator, said, “AI systems are expensive to create, train and deploy, and the most effective of them will be ones created at the behest of the highly resourced ruling elite. Dominant AIs will be created so as to enhance the power of this elite. The social structures which imbue the very few with the vast majority of decision-making power will persist to 2035 and beyond. Because AIs are connected to extensive web-based ‘sensory systems’ and act on the basis of the rules created by their makers, their activities will be extraordinarily difficult to oversee. And as we have seen recently, ruling elites have the capacity to dominate the lens through which we are able to oversee such changes. The agency of the world’s population will not be limited by AI technologies as such; they are just another of our technological tools. The limits will be imposed as ‘normal’ by the demands of corporate empire-building and protection.”

Andre Popov, principal software engineer at Microsoft, wrote, “Humans have already outsourced important decision-making in a number of areas, including stock trading and operating machinery/vehicles. This outsourcing happens wherever possible, as a cost-cutting measure, because machines make decisions faster or in order to eliminate human error. Autonomous decision-making and improvements in AI further reduce the subset of the population that is needed for society to operate. These trends make human society even more dependent on and susceptible to complex technology and infrastructure that no one person really understands end-to-end. On the one hand, we have complex and therefore inherently fragile systems in charge of basic human needs. On the other hand, computer-assisted humans have less dependency on their own intellectual capabilities for survival.”

Lenhart Schubert, a prominent researcher in the field of commonsense reasoning and professor of computer science at the University of Rochester, commented, “We will not have AI personal assistants that make new technology easy to access and use for everyone by 2035. Thirteen more years of AI development will not be enough to shift from the current overwhelming mainstream preoccupation with deep learning – essentially mimicry based on vast oceans of data – to automated ‘thinking,’ i.e., knowledge-based reasoning, planning and acquisition of actionable new knowledge through reading, natural language processing interaction and perceptual experience.”

In 2035 scientists will still be debating whether and how decision-making by automated systems can reduce bias and discrimination on average, compared to human institutions.

J. Nathan Matias, leader of the Citizens and Technology Lab at Cornell University

J. Nathan Matias, leader of the Citizens and Technology Lab at Cornell University, predicted this scenario for 2035: “In 2035, automated decision-making systems will continue to be pervasive, powerful and impossible to monitor, let alone govern. Organizations will continue to use information asymmetries associated with technology-supported decision-making to gain outsized power and influence in society and will seek to keep the details of those systems secret from democratic publics, as has been the case throughout much of the last century and a half. In response to this situation, U.S. states and the federal government will develop regulations that mandate testing automated systems for bias, discrimination and other harms. Efforts at ensuring the safety and reliability of algorithmic decision-making systems may shift from a governance void into a co-evolutionary race among regulators, makers of the most widely known systems and civil society. Automated systems will be more prevalent in areas of labor involving human contact with institutions, supported by the invisible labor of people who are paid even less than today’s call-center workers. In some cases, such as the filtering of child sexual abuse material and threats of violence, the use of these imperfect systems will continue to reduce the secondary trauma of monitoring horrible things. In other cases, systemic errors from automated decision-making systems (both intentional and unintentional) will continue to reinforce deep inequalities in the U.S. and beyond, contributing to disparities in health, the economy and access to justice. In 2035 scientists will still be debating whether and how decision-making by automated systems can reduce bias and discrimination on average, compared to human institutions.”

James S. O’Rourke IV, professor of management at the University of Notre Dame and author of 23 books on communication, commented, “AI-aided decision and control will be far more dominant than it is today after it is enhanced by several fundamental breakthroughs, probably sometime after 2035. At that point ethical questions will multiply. How quickly can a machine program learn to please its master? How quickly can it overcome basic mistakes? If programmers rely on AI to solve its own learning problems, how will it know when it has the right answer, or whether it has overlooked some fundamental issue that would have been more obvious to a human? A colleague in biomechanical engineering who specializes in the use of AI to design motility devices for disabled children told me not long ago, ‘When people ask me what the one thing is that most folks do not understand about AI, I tell them: how really bad it is.’”

An expert in economic forecasting and policy analysis for a leading energy consultancy said, “Ever since the icon-controlled environment was adopted for computing technologies, vendors have prioritized usability over agency and widened what I referred to in my master’s thesis as the ‘real digital divide,’ the knowledge gap between skilled manufacturers (and fraudsters) and unskilled users. It is not merely application software that has become opaque and limited in its flexibility.

“Programming languages themselves often rely on legacy object libraries that programmers seldom try to understand and do not meaningfully control. It may be inefficient to build from first principles each time one sets out to write software, but the rapid development of the last several decades veers to the opposite extreme, exposing widely used applications to ‘black-box risk’ – the inadvertent incorporation of vulnerabilities and/or functional deficits.

“Neural nets are even more opaque, with inscrutable underlying decision algorithms. It seems highly unlikely that today’s results-oriented practices will become more detail-focused with the advent of a technology that does not lend itself to detailed scrutiny.”

A professor of political science based in the UK wrote, “The primary reasons I think these systems will not be designed to allow people to easily be in control over most tech-aided decision-making relevant to their lives are as follows:

  • It is not clear where in the chain of automated decision-making humans can or would be expected to make decisions (does this include automated advertising auction markets, for example?).
  • Allowing humans to be more in charge requires a new infrastructure to think about what sorts of control could be applied and how those would be realized.
  • To the extent that some of these machines/bots/systems are for use by law enforcement or security, it is not clear that more choice would be allowed. (Paradoxically, this may be where people MOST want such ability.)
  • Because the tech ethos remains ‘disruption’ and things move quickly, it is not clear how embedding more human/user control will work out, especially if users are given more choice. Giving them more choice also risks the demonetization of automated decisions.

“Ultimately, answering the complicated questions of when users should have more control, what type of control they should have, how this control would be exercised, and whether or how those who make such systems might be willing to acquiesce to giving users more control (especially if this must be forced through regulation) seems a tall order to achieve in just over 10 years. 2035 sounds far away, but these are a lot of hurdles to clear.”

Daniel Wyschogrod, senior scientist at BBN Technologies, wrote, “Systems are not likely to be designed to allow people to easily be in control over most tech-aided decision-making. Decisions on credit-worthiness, neighborhoods that need extra policing, etc., are already made today based on deep learning. This will only increase. Such systems’ decisions are heavily based on training corpora that are dependent on the availability of data resources with possible biases in collection.”
