
The Future of Human Agency

1. A sampling of overarching views on human agency

The following incisive and informative responses to our questions about the evolution of human agency by 2035 represent some of the big ideas shared by a small selection of the hundreds of thought leaders who participated in this canvassing.

Success of AI systems will remain constrained due to their inherent complexity, security vulnerabilities and the tension between targeted personalization and privacy

Mike Liebhold, retired distinguished fellow at the Institute for the Future, wrote, “By 2035 successful AI and robotic ‘full’ autonomous ‘control’ of ‘important’ decisions will be employed only in secure, well-managed and carefully controlled applications of highly refined generations of applied machine intelligence, where any autonomous processes are managed and operated carefully by highly skilled workforces with high degrees of systems literacy.

“While there will be gradually widespread adoption of AI augmentation (not necessarily replacement) of human decisions by applied AI and machine learning deeply embedded in hardware and digital services, in most cases fully autonomous systems will be successfully applied only gradually. They will still be constrained by evolved versions of the same systemic vulnerabilities [of current systems], including pervasive digital security vulnerabilities and the continued tension between targeted personalization and privacy surveillance.

“Complexity is a continuing challenge. Computing technology is enabling far more capabilities than humans can understand and use effectively. It’s a form of cognitive dissonance, like an impedance mismatch in electronic connections. Given the explosive availability of potentially useful data and structured knowledge resources, and promising but immature data fusion, AI and cloud computing capabilities, many years of work lie ahead to design systems that systematise and simplify the complexity of AI machines – systems that reliably summarise, explain and amplify growing but still limited useful datasets to fit human cognitive capabilities and focused tasks.”

Society will no longer be human but instead socio-technical: ‘Without technology there would be no “society” as we know it’

David J. Krieger, director of the Institute for Communication and Leadership in Lucerne, Switzerland, predicted, “Individual agency is already a myth, and this will become increasingly obvious with time. The problem here is not technological, but ideological. Humanism attempts to preserve the myth of individual agency and enshrine it in law. Good design of socio-technical networks will need to be explicit about its post-humanist presuppositions in order to bring the issue into public debate. Humans will act in partnership – that is, distributed agency – with technologies of all kinds. Already this is so, and it will be more so in the future.

“In a data-driven society, all decisions on all levels and in all areas – business, health care, education, etc. – will need to be evidence-based, not based on position in a hierarchy, intuition, gut feeling or experience. The degree of automation is secondary to the principle of evidence-based decision-making. When sufficient evidence is available, the level of automation will increase. Constraints of time and space will also condition the level of automation.

“No decisions will be left to individual agency since there is no such thing. Even decisions about who to marry, what to study, what job to take, what therapy is appropriate, etc., will be assisted by automated data evaluation. Society will no longer be ‘human’ but instead ‘sociotechnical.’ Already there is no such thing as human society, for without technology there would be no society as we know it. The problem is that our current political and social ideologies do not acknowledge this fact and continue to portray a mythical version of the social and the human.”

‘People tend to be submissive to machines or any source of authority. Most people don’t like to think for themselves but rather like the illusion that they are thinking for themselves’

Rakesh Khurana, professor of sociology and professor of leadership development at Harvard University, responded, “It is easier for many people to imagine the end of the world than it is for them to imagine the end of capitalism. People tend to be submissive to machines or any source of authority. Most people don’t like to think for themselves but rather like the illusion that they are thinking for themselves.

“Consider, for example, how often people follow GPS instructions even when instinct suggests they are going in the wrong direction. In politics or consumption, people often outsource their decision-making to what their friends ‘like’ on Facebook or the songs Pandora chooses, even if it is against their interests or might expose them to new ideas or music.

“In most instances, even without machines, there is a strong tendency among humans to rely on scripts and taken-for-granted unquestioned assumptions for their actions. Whether these scripts come from ‘society’ (a type of programmed machine) or an algorithm seems to be a difference of degree, not kind. For example, many people believe they have no agency in addressing problems linked to capitalism, human-caused climate change or any other ‘system’ that seems to exist outside of human control, even though these phenomena are designed and perpetuated by humans.”

‘Machines allow “guilt-free decision-making” along the lines of what the Nuremberg Trials revealed about armies’ chains of command’

Leiska Evanson, a Caribbean-based futurist and consultant, observed, “Machines allow ‘guilt-free decision-making’ along the lines of what the Nuremberg trials revealed about armies’ chains of command. Many will revel in such ‘freedom’ from decision burden and happily blame ‘the machine’ instead of their choice to trust the machine – much as they have blamed television, social media and video games for human failings. Programmers and computer scientists do not trust humans. Humans do not trust humans.

“Very simply, human programming of AI currently relies on reducing specific human input points to reduce the fallacy of ‘organic beings’ – twitches, mistakes, miscalculations or bias. It has been known for at least a century that cameras, infrared and other visual/light-based technologies do not capture darker skin tones well, yet this technology is being used for oxygen sensors, security cameras and facial recognition, yielding the same mistakes and leading to wrongful incarceration, poor medical monitoring and death.”

Machines that think could lead us to become humans who don’t think

Richard Watson, author of ‘Digital vs. Human: How We’ll Live, Love and Think in the Future,’ commented, “2035 is a bit early for humans to fall into deeper dependence on machine ‘intelligence’ – for that, 2045 is more likely. In 2035 humans will simply cooperate and collaborate with machines, and we will still trust human judgment ahead of AIs in important cases. This isn’t to say that the tech companies won’t try to remove individuals’ agency, though, and the work of Shoshana Zuboff is interesting in this context. How might automated decision-making change human society? As Zuboff asks: Who decides? Who is making the machines and to what ends? Who is responsible when they go wrong? What biases will they contain? I think it was Sherry Turkle who asked whether machines that think could lead us to become humans who don’t. That’s a strong possibility, and we can see signs of it already.”

‘The bubble of algorithmically protected comfort will force us to have to find new ways to look beyond ourselves and roll the dice of life’

Sean McGregor, technical lead for the IBM Watson AI XPRIZE and machine learning architect at Syntiant, said, “The people in control of automated decision-making will not necessarily be the people subject to those decisions. The world in 2022 already has autonomous systems supervised by people at credit-rating agencies, car companies, police, corporate HR departments and more. How might the broadening and accelerating rollout of tech-abetted, often autonomous decision-making change human society? We will better appreciate the importance of random chance. Non-random computers mean you will not discover the unexpected, experience and learn from what you don’t prefer, and grow beyond the bubble of algorithmically protected comfort. We will need to find new ways to look beyond ourselves and roll the dice of life.”

The goal for AI design is to ‘meet or exceed human-level performance … and this leads inexorably to the diminishment or replacement of human agency’

Rob Reich, professor of political science and director of the Center for Ethics in Society at Stanford University, said, “Systems powered by AI will not be designed to allow people to easily be in control over decision-making. The reigning paradigm for both basic research and industrial product design in AI is to strive to develop AI systems/models that meet or exceed human-level performance. This is the explicit and widely accepted goal of AGI, or artificial general intelligence. This approach sets AI on a course that leads inexorably to the diminishment or replacement of human agency.”

The manipulability of these systems has moved the world away from wider human agency and toward hostility to expertise: ‘In the future, very few people may have agency’

Jean Seaton, director of the Orwell Foundation and professor of media history at the University of Westminster, said, “Already we can see the impact of new, apparently ‘democratic’ ways of communicating on political choices and on political structures. The manipulability of apparently technical systems has already moved the world dramatically away from a wider human agency. The willingness – particularly of authoritarian states – to monitor but also ‘please’ people and manipulate understanding depends on these systems. The hostility toward expertise seen today, the politicization of every critical issue, and more – these are all manipulable. Which political systems do well out of this?

“In the future, very few people may have agency. How will they use it? Fear and anxiety are proper responses to the challenges we face. For one, the existential threat of climate extinction is about to be fogged by the initial waves of refugees from soon-to-be uninhabitable places – Delhi? Central and South Africa? Afghanistan and Pakistan? Mis-, dis- and mal-information succeed as distractions, and human agency is wasted on small revenges rather than solving the long-term challenges that must be addressed now.”

‘Digital tools to support decision-making are upgrades of old-fashioned bureaucracies; we turn over our agency to others to navigate our limitations’

Devin Fidler, futurist and founder of Rethinkery Labs, commented, “Turning over decisions to digital agents ultimately has the same downsides as turning over decisions to human agents and experts. In many ways, digital tools to support decision-making are upgrades of old-fashioned bureaucracies. For one thing, it can be easy to forget that, like digital systems, bureaucracies are built around tiered decision trees and step-by-step (algorithmic) processes. Indeed, the reason for both bureaucracy and digital agents is ultimately the same – humans have bounded attention, bounded time, bounded resources to support decision-making, and bounded information available. We turn over our agency to others to navigate these limitations. Importantly, however, we still need to establish a clear equivalent to the principle of ‘fiduciary duty’ that covers the majority of digital agents designed to act on our behalf.”

Humans will be augmented by autonomous systems that resolve complex problems and provide relevant data for informed decisions

Kunle Olorundare, vice president of the Internet Society, Nigeria Chapter, wrote, “By 2035, bots with high-level intelligence will take over most human decisions – key decisions in engineering design, finance, logistics tracking, the triggering of alerts about threats to public safety/the environment, and more. However, at the same time, human decisions will still be relevant even if seemingly relegated to the background. For example, ethical issues in engineering will still be taken on by humans because they require making relative arguments for and against. Our society will be changed for good, with integrated bots taking on most movement logistics decisions. There will be safer traffic practices on our roads, in the sky and on the ocean.

“Other important places in which autonomous systems and the Internet of Things will play roles in resolving complex problems are in hospitals – for diagnosis and other tasks – and in agriculture, where data analytics and unmanned aerial vehicles will be useful in all aspects of farming and food distribution. These autonomous systems will operate on a secured internet that allows for secure dissemination of relevant data for informed decisions based on analytics.”

‘Digital systems will let those willing to adopt them live a life of “luxury,” assuming subservient roles and freeing users of many tedious chores’

Michael Wollowski, professor of computer science, Rose-Hulman Institute of Technology, and associate editor of AI Magazine, said, “In order to ensure wide acceptability of digital systems, the users need to be in charge of any decision made, whether it has a seemingly large or an apparently small impact. Those systems need to be engineered to work as a pleasant assistant to the user, just as a personal assistant might be, and a user must be able to override any decision for any reason. Just as a navigation system recalculates driving directions, the system will continuously replan.

“Given that most humans are creatures of habit, all decisions that can be automated based on learning a human’s habits will be automated. Such systems should take into consideration human input, and they should ask the user whether they are sure they really want to go through with a decision that the system deems to have a significant impact. That type of decision depends on the person; what I consider a high-impact decision, my next-door neighbor may not care about. The system has to learn each user’s preferences. Digital systems will let those willing to adopt them live a life of ‘luxury.’ Just as people with means employ gardeners, nannies, housekeepers, pool boys, personal assistants, etc., these systems will assume many of those subservient roles and free users of many tedious chores.”
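
Wollowski’s description implies a simple control loop: automate only what has become habitual, ask before anything the user would consider high-impact, and replan after any override. Below is a minimal sketch of such a loop in Python; the class names, thresholds and habit-counting heuristic are illustrative assumptions, not a description of any existing product.

```python
from dataclasses import dataclass, field

@dataclass
class AssistantPolicy:
    """Per-user policy: which decisions are routine enough to automate."""
    # Hypothetical per-user threshold; what one person considers a
    # high-impact decision, their next-door neighbor may not.
    confirm_threshold: float = 0.7
    habit_counts: dict = field(default_factory=dict)

    def record_choice(self, decision: str) -> None:
        """Learn habits by counting how often a decision recurs."""
        self.habit_counts[decision] = self.habit_counts.get(decision, 0) + 1

    def is_habitual(self, decision: str, min_repeats: int = 3) -> bool:
        return self.habit_counts.get(decision, 0) >= min_repeats

def decide(policy: AssistantPolicy, decision: str, impact: float, user_confirms) -> str:
    """Automate habitual low-impact decisions; defer to the user otherwise.

    The user can override anything, after which the assistant simply
    replans, like a navigation system after a missed turn.
    """
    if impact >= policy.confirm_threshold:
        return "executed" if user_confirms(decision) else "overridden: replanning"
    if policy.is_habitual(decision):
        return "automated"
    policy.record_choice(decision)
    return "suggested (awaiting user)"

# Example: the usual coffee order becomes automatic after a few repeats;
# booking a flight always triggers a confirmation prompt.
policy = AssistantPolicy()
for _ in range(3):
    policy.record_choice("order usual coffee")
print(decide(policy, "order usual coffee", impact=0.1, user_confirms=lambda d: True))
print(decide(policy, "book flight", impact=0.9, user_confirms=lambda d: False))
```

The design choice worth noting is that the confirmation threshold lives in a per-user policy object, reflecting Wollowski’s point that the line between high- and low-impact decisions must be learned per person rather than fixed globally.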

In order for Big Tech to choose to design technologies that augment human control, ‘the incentives structure would have to be changed from profit to mutual flourishing’

Douglas Rushkoff, digital theorist and host of the NPR One podcast “Team Human,” wrote, “The incentives structure of Western civilization would have to be changed from profit to mutual flourishing in order for any technology development company to choose to design technologies that augment human control. I do believe we could easily shift the emphasis of technology development from control-over-others to augmentation of agency, but this would require a radical shift in our cultural value system. I don’t believe that billions of dollars will be spent on a counter-narrative until such a shift occurs. It’s also hard to imagine scenarios years in the future without also taking into account mass migrations, the rise of authoritarianism, climate change and global health catastrophe. So, are we talking about the ‘key decisions’ of 6 billion climate refugees, or those of 200,000 corporate executives?”

‘Everyone wants to believe they always have free will, and they will convince themselves of that while opening their wallets to pay for more GPUs that further direct human behavior’

Bill Woodcock, executive director of Packet Clearing House, commented, “The unholy trinity of the surveillance economy, pragmatic psychology and machine learning have dug us into a hole. They are convincing us to dig ever faster, and they are making us believe that it’s our own bright idea. I don’t see us getting out of this hole as long as the automated exploitation of human psychological weaknesses continues to be permitted. I’m very pessimistic about the balance of beneficial outcomes between humans and autonomous systems based on our track record thus far.

“For the first time in human history, we’ve created a stand-alone system that preys on people and has its own self-contained positive feedback loops driving it toward increased scale. What’s particularly problematic is that the last 40 years of investigation of human psychology have revealed how easily people can be externally directed and how much work their brains will do to rationalize their actions as having been self-determined. Everyone wants to believe that they always have free will – that they always make their own choices based on rational processes – so they’ll do all of the work necessary to convince themselves of that while simultaneously opening their wallets to pay for more GPUs that further direct their own and others’ behavior.”

The standardization of routine decisions as AI takes them over will make many of them more reliable, easier to justify and more consistent across people

Steven Sloman, a cognitive scientist at Brown University whose research focus is how people think, reason, make decisions and form attitudes and beliefs, commented, “The main changes I expect in human society are the standardization of routine decisions as AI takes them over and the uses of AI advice that make even unique decisions much more informed.

“Handing routine decisions over to AI will make many life decisions that are made repeatedly more reliable, easier to justify and more consistent across people. This approach could be applied everywhere in society, e.g., automating rulings in sports contests and other aspects of life. Should we interpret this type of radiology image as a tumor? Does a mechanic need to look at my car? Is it time for a new roof? Will student essays be graded automatically?

“My guess would be a bifurcation in class within society: Public schools with large demands will rely on automatic grading; private schools that demand a lot of tuition will not. Efficiency will trade off with cost, with the result that richer students will learn to express themselves with more freedom, less constrained by the less flexible, less insightful criteria of AI.

“Many difficult, unique decisions, though, involve large amounts of uncertainty and disagreement about objectives. Such decisions will never be handed over to AI. Doing so would reduce the justifiability of the decisions and put the responsible individuals in jeopardy. They will certainly be aided by AI, but I don’t see decision-making being handed over to it entirely. Should my country go to war? Who should I vote for? Even, is it time to buy a new dishwasher? Or what TV show should I watch tonight? All of these questions involve either enormous uncertainty about outcomes or large disagreements about values, and people will always want to make the final decision.”

‘We need to reinvent the concept of consumer protection for the information age’ and create checks and balances that move us in the right direction

Laurie L. Putnam, educator and communications consultant, commented, “If you look at where we are now and plot the trajectory of digital ‘tools,’ it looks like we’re going to land in a pretty dark place. Yes, digital technologies can do a lot of good in the world, but when they are created to improve a bottom line at any cost, or to control people through surveillance, then that is what they will do.

“If we want to alter our course and land in a better place, we will need to reinvent the concept of consumer protection for the information age. That will require thoughtful, well-informed human decision-making – now, not years from now – in legislative policies, legal standards and business practices. These are the checks and balances that can help move us in the right direction. Already we would be hard-pressed to live our lives without using digital technologies, and already we cannot use those phones and apps and cars and credit cards without having every bit of data we generate – every action we take, every purchase we make, every place we go – hoovered up and monetized. There is no way to opt out. Already we are losing rather than gaining control over our personal data, our privacy, our lives.”

‘Industry needs open protocols that allow users to manage decisions and data to provide transparent information that empowers them to know what the tech is doing’

Gary A. Bolles, chair for the future of work at Singularity University and author of “The Next Rules of Work,” predicted, “Innovators will continue to create usable, flexible tools that will allow individuals to more easily make decisions about key aspects of their lives and about the technologies they use. There’s also a high probability that 1) many important decisions will be made for people, by technology, without their knowledge, and 2) the creators of media and information platforms will lead the arms race, creating tools that are increasingly better at hacking human attention and intention, making implicit decisions for people and reaping the data and revenue that comes from those activities.

“First, every human needs education in what tech-fueled decision-making is and what decisions tech can and does make on its own. Second, tech innovators need a stringent code of ethics that requires them to notify humans when decisions are made on their behalf, disclose how related data is used and explain how the innovator benefits from the use of their tools. Finally, industry needs open protocols for dashboards of aggregated decisions and data, giving users (and their tools) transparent insight into what decisions technology is making on their behalf and empowering them to make better decisions.”
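
One way to picture the open protocol Bolles calls for is a per-user ledger of machine-made decisions that any tool could write to and any dashboard could read. The Python sketch below is a loose illustration of that idea under stated assumptions: the record fields, system names and example decisions are hypothetical, not part of any existing standard.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    """One machine-made decision, disclosed to the user it affects.

    Field names are illustrative assumptions; the point is that each
    record names the decision, the data used and who benefits.
    """
    timestamp: str
    made_by: str      # which system or model made the decision
    decision: str     # what was decided on the user's behalf
    data_used: list   # which personal data informed the decision
    beneficiary: str  # how the tool's maker benefits

def log_decision(ledger: list, made_by: str, decision: str,
                 data_used: list, beneficiary: str) -> None:
    """Append a disclosure record to the user's ledger."""
    ledger.append(DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        made_by=made_by, decision=decision,
        data_used=data_used, beneficiary=beneficiary))

def dashboard(ledger: list) -> str:
    """Aggregate the ledger by system into a transparent, readable view."""
    by_system: dict = {}
    for rec in ledger:
        by_system.setdefault(rec.made_by, []).append(asdict(rec))
    return json.dumps(by_system, indent=2)

# Example: two decisions made for a user without an explicit request.
ledger: list = []
log_decision(ledger, "feed-ranker", "hid 40 posts from your feed",
             ["watch history"], "ad engagement")
log_decision(ledger, "price-engine", "quoted you a personalized price",
             ["purchase history"], "revenue per user")
print(dashboard(ledger))
```

A standard like this would matter less for its exact fields than for being open: if every decision-making tool had to emit such records, third-party dashboards could aggregate them regardless of vendor, which is the kind of transparency Bolles describes.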

There’s a strong tendency in tech to look for the advantages that cutting out human agency, cognitive biases and other ‘failures of rationality’ bring to complex systems

Richard Ashcroft, deputy dean and professor of bioethics at City University of London Law School, an expert on AI and ethics in health care, commented, “I am not optimistic, because designing human agency into AI/ML [machine learning]-based systems is not easy from an engineering point of view, plus the industry and most of academia are mainly focused on ‘quick wins,’ ‘low-hanging fruit’ and gaining competitive advantage in so doing.

“There’s also a strong tendency in the field to look for the advantages that ‘cutting out’ human agency, cognitive biases and other ‘failures of rationality’ bring, so I don’t think there is much appetite for designing human agency into these systems, outside the rather narrow field of ‘AI ethics,’ and the general debate in that area is more about assuring us that AI is safe, rather than looking for ways to make it so. A third point: Only some of these problems are specific to AI/ML systems; many of the issues were already built into complex socio-technical systems, such as state bureaucracy, precisely to eliminate individual discretion because of issues around efficiency, avoidance of corruption and personal bias and so on. Also, any sufficiently complex system has ‘control problems’ that become problems of causal efficacy and epistemology. Humans have influence over such systems, but the effects of such influence are not always predictable or even desirable, from the point of view of the purposes built into such systems.”

AI will be built into so many systems that it will be hard to draw a line between machine decisions and human decisions

Leah Lievrouw, professor of information studies at UCLA, wrote, “Who exactly has ‘agency’? According to the June 11, 2022, cover feature on AI in The Economist, the only ‘designers’ – organizations? individuals? – with the cash and brute-force computing capabilities to create the newest ‘foundational AI’ are huge, private for-profits, with one or two nonprofits like OpenAI being supported by the private firms; there are also a few new startups attempting ‘responsible’ or ‘accountable’ algorithms. So, there’s the agency of designers (will they design for user control?) and the agency of users (decision-making based on what AI presents them?).

“Decision-making may not be the only aspect of agency involved. The ‘machine-human’ relationship binary has been around in popular culture for ages, but I think the current thinking among AI designers goes way beyond the one-to-one picture. Rather, AI will be integrated into many different digital activities for lots of reasons, with ripple effects and crossovers likely. Thus, there’s unlikely to be a bright-line division between machine decisions and human decisions, both for technical reasons and because who, exactly, is going to declare where the line is? Employers? Insurers/finance? State agencies? Legislatures? Any entity deploying AI will want to use it to the greatest extent possible unless specifically enjoined from doing so, but right now (except maybe in the EU…?) it seems to me that few regulators or organizations are there yet. We already see some very worrisome outcomes, for example, algorithmic systems used in legal sentencing.”

‘Our sense of control is increasingly illusory; unless the machine stops, this will spread by 2035, and not just a little’

Jonathan Grudin, a principal researcher at Microsoft and affiliate professor at the University of Washington Information School, observed, “People won’t control a lot of important decision-making in the year 2035. We’re already losing control. A few current examples:

  • “When Google exhibits the editorial control that has long been expected of publishers – removing 150,000 videos, turning off comments on more than 600,000 more and removing ads from nearly 2 million videos and more than 50,000 channels – algorithms decide. Overall, this is a great service, but thousands of false alarms will elude correction. 
  • “When an algorithm error dropped a store from Amazon, humans were unable to understand and fix the problem. 
  • “A human resources system that enforces a rule where it shouldn’t apply can be too much trouble for a manager to contest, even if it may drive away valued employees. 
  • “Human agency is undermined by machine learning (ML) that finds effective approaches to convince almost any individual to buy something they don’t need and can’t afford. 

“Our sense of control is increasingly illusory. Algorithms that support management and marketing decisions in some organizations operate on a scale too extensive for humans to validate specific decisions. Unless the machine stops, this will spread by 2035, and not just a little.”

‘Major tech-driven decisions affecting the rest of us are being made by smaller and smaller groups of humans’

Michael G. Dyer, professor emeritus of computer science at UCLA, wrote, “The smartest humans create the products of convenience that the rest of us use on a daily basis. A major goal of those smartest humans is to make a product easily usable without the user having to understand how the product works or how it was constructed. I turn on a flat-screen TV and use its controls to navigate the internet without having to understand its internal structure or manufacture. I get into a car and drive it in similar fashion. Many extremely important decisions are being made without input from a majority of humans. Heads of major tech companies make key decisions about how their products will affect the public (for example, in terms of surveillance and information-gathering on their consumers) without supplying much, if anything, in the way of human agency. While we will remain in control of products of convenience in 2035 (that’s what makes them convenient), we will continue to lose control in terms of the major command-and-control systems of big tech and government. In fact, major tech-driven decisions affecting the rest of us are being made by smaller and smaller groups of humans.”

When choice is diminished, we impede our ability to adapt and progress

Kenneth A. Grady, futurist and consultant on law and technology and editor of The Algorithmic Society newsletter, observed, “As we turn over more decisions to computers, we have seen choice diminish. Variety no longer is the spice of life. We have already reached a point where humans have relinquished important aspects of decision-making to computers. By broadening and accelerating the rollout of decision-making through computers rather than humans, we risk accelerating society’s movement toward the mean on a range of matters. We will drive out the unique, the outlier, the eccentric in favor of pattern behavior. The irony of this approach lies in its contradiction of nature and what got us to this point. Nature relies on mutations to drive adaptation and progress. We will retard our ability to adapt and progress. We have already seen early indications of this problem.”

‘Whoever writes the code controls the decision-making and its effects on society’

Ginger Paque, an expert in and teacher of internet governance with the Diplo Foundation, commented, “We are facing serious challenges today: pandemics, war, discrimination and polarization, for example. It’s impossible to predict what kind or level of civilization will prevail 13 years from now. AI will continue to be designed, coded and controlled by profit-seeking companies that have a vested interest in shaping and controlling our decision-making processes. So, it is not AI that controls our decisions; it is other humans who use the powerful resources of AI. Autonomous decision-making is directed by some agency, most often a profit-making entity that logically has its profit as a priority. Whoever writes the code controls the decision-making and its effects on society. It’s not autonomous, and we should have clear and transparent options to ensure we do not continue to cede control to known or unknown entities without proper awareness and transparency. It’s not AI that’s going to take humans’ decision-making faculties away, any more than phones and GPS ruin our memories. Humans choose – quite often without the proper awareness, information and training – to do so.”
