The Future of Digital Spaces and Their Role in Democracy

5. Closing thoughts

The following respondents wrote contributions that take a holistic look at the issues at hand, placing them in human and historical context.

Peter B. Reiner, co-founder of the National Core for Neuroethics at the University of British Columbia, wrote, “It is challenging to make plausible predictions about the impact that digital spaces will have upon society in 2035. For perspective, consider how things looked 14 years ago when the iPhone was first introduced to the world. A wondrous gadget it was, but nobody would have predicted that 14 years later, nearly half the population of the planet would own a smartphone, much less how reliant upon them people would become. With that disclaimer in mind, I expect that digital life will have both negative and positive effects in the year 2035. Among the positives, I would include automation of routine day-to-day tasks, improved algorithmic medical diagnoses and the availability of high-quality AI assistants that take over everything from making reservations to keeping track of personal spending. The worry is that such cognitive offloading will lead to the sort of corpulent torpor envisioned in the animated film ‘WALL-E,’ with humans increasingly unable to care for themselves in a world where the digital takes care of essentially all worldly needs.

“Yet such a dystopian outcome may be unlikely. Viktor Frankl vividly describes the human need for finding meaning in one’s life, even when the abyss seems near at hand. Faced with the manifold offerings of the digital world, many will look for meaning in creative tasks, in social discourse and perhaps even in improving the intolerable state of political affairs today. While some may blame digital spaces for providing a breeding ground for divisive political views, what we are witnessing seems more an amplification of persistent prejudice by people who are, for the first time in generations, feeling less powerful than their forebears.

“The real problem is that our digital spaces cater to assuaging the ego rather than considering what makes for a life well-lived. In the current instance, social media, driven by the dictates of surveillance capitalism, is largely predicated on individuals feeling better (for a few seconds) when someone notices them with a like or a mention. Harder to find are digital spaces that foster the sort of deep interpersonal interaction that Aristotle famously extolled as friendships of virtue. The optimistic view is that the public will tire of the artifice of saccharine digital interactions and gravitate toward more meaningful opportunities to engage with both human and artificial intelligence. The pessimistic view is that, well, I prefer not to go there.”

Michael Kleeman, senior fellow at the University of California-San Diego, commented, “The digital space has radically altered the costs of information distribution, including the costs of misinformation. This economic reality has created, and will likely continue to create, a cacophony with no filters, likely causing people to continue to move toward a few sources that echo their beliefs and simplify what are inherently complex issues. Threats to civil society, democracy and physical and mental health are very real and growing. The only hope I feel is a move toward more local information, where people can ‘test’ the digital data against what they see in the real world. But even that is complex and difficult, as partial truths can masquerade as more complete information and garner support for a distorted position. I am, sadly, not hopeful.”

Kenneth A. Grady, a lawyer and consultant working to transform the legal industry, said, “Could digital spaces and digital life be substantially better by 2035? Of course. But present circumstances and the foreseeable future suggest otherwise. For them to become substantially better, we need consensus on what ‘substantially better’ means. We need changes in laws, customs and practices aimed at realizing that consensus position. And we need time. At present, we have a gridlocked society with very different ideas of where digital space and digital life should be. These ideas reflect, in part, the different ideas we see in other areas of society on cultural issues. If we look back roughly 15 years at where things were, we can see that reaching a consensus (or something close to it) over the next 15 years seems unlikely. Without a consensus, changes to laws, customs and practices will fall over a spectrum rather than be concentrated in one direction. As a society, this reflects how we work out our collective thoughts and direction. We go a bit in one direction, course correct, move a bit in another direction, and continue the process over time. Will 15 years be enough time to reach a substantially better position for digital spaces and digital life? I doubt it. Inertia, vested capital interests and the lack of consensus mean that the give-and-take process will take longer. We may make progress toward ‘better,’ but to get to ‘substantially better’ will take longer and require a less-divisive society.”

Hans Klein, associate professor of public policy at Georgia Tech, responded, “The U.S. has a problem: ‘state autonomy.’ Its military and foreign policy establishments (‘the state’) are only imperfectly under civilian/democratic control. The American public is not committed to forever wars in the Middle East, Russia and China, nor to deindustrialization through global trade, but no matter who the citizens elect, the policies hardly change. Elections – the will of the people – have remarkably little effect on policy. Policies arguably do not represent the will of the people. The state is autonomous of the citizens.

“Large media corporations play an important role in enabling such state autonomy. The media corporations repeat and amplify policymakers’ narratives, with little criticism. They report on select issues while ignoring others and frame issues in ways that reinforce the status quo. So, in 2003, we heard endlessly about weapons of mass destruction but nothing about antiwar protests. In 2020, we heard endlessly about protests but nothing about people of color suffering from violent crime. What we call the ‘public sphere’ might better be called the narrative sphere. Citizens are enclosed in a state-corporate narrative sphere that tells them what to think and what to feel. Media corporations’ control of this narrative sphere is essential to state autonomy, because the narratives shape facts in ways that support the autonomy of policymakers.

“Around 2010, a revolution occurred: social media punctured the corporate narrative sphere. Alongside the narrative sphere there appeared a public sphere, in which the voices of people could be heard. This new social-media-enabled public sphere led to political movements on the left and the right. On the left, Bernie Sanders criticized state and especially corporate power. He focused citizens’ attention upward to the power structure. On the right, Donald Trump did something literally unthinkable prior to social media: He ran on an anti-war platform. Bernie Sanders was contained by his party, but Trump broke his party, won the nomination and won the election. This new, social-media-enabled public sphere is often crude, and the voices it empowers may be both constructive and destructive. Donald Trump manifested that. Those who could see beyond his personal style saw an elected official who finally raised important questions of war and peace, work and justice. The autonomy of the state was named and criticized (colorfully, as a ‘swamp’). Social media made it possible for such issues – perhaps the most important issues facing American society – to be publicly raised. Social media empowered the public. Therefore, social media had to be brought back under control.

“Following the election of such a critic of state autonomy, both the state and the corporate media have sharply attacked the social media that made his election possible. The corporate-created narrative sphere doubled down to inform the American public that the bad voices in social media are all there is. The power structure is working hard to demonize social media and the public sphere. Voices … are given outlet in state-quoting corporate media like The Atlantic. The public is being silenced. Looking ahead to 2035, it seems possible that the social-media-enabled public sphere will merely be a memory. Digital spaces and people’s use of them will be safely bounded by the understandings disseminated by the state. The wars will be good wars, and there will be no stories about people losing their livelihood to workers in Bangladesh. Perhaps the greatest challenge of our time is to prevent such a suppression of the social-media-enabled public sphere. Citizens on both the left and the right have a powerful interest in making sure that social media survives to 2035.”

Adam Nagy, project coordinator at Harvard Law School’s Cyberlaw Clinic, commented, “In general, the digitization of sectors that have lagged behind others – such as government social services, health care, education and agriculture – will unlock significant potential productivity and innovation. These areas are critical to accelerating economic growth and reducing poverty. At the same time, sectors that have led the pack in digitization, such as finance, insurance, media and advertising, are now facing regulatory headwinds and public scrutiny. Globally, politicians, regulators, civil society and even some industry players are increasingly trying to understand and mitigate harms to individual privacy rights, market competitiveness, consumer welfare, the spread of illegal or harmful content and various other issues. These are complex issues, and not every solution is waiting just around the corner, easy to achieve or free of difficult trade-offs.”

Ayden Férdeline, a public-interest technologist based in Berlin, Germany, said, “We have recentralized what was a decentralized network of networks by primarily relying on three or four content-distribution networks to store and cache our data. We are making the internet’s previously resilient architecture weaker, easier to censor and more reliant on the goodwill of commercial entities to make decisions in our interests. If we don’t course-correct soon, I worry that the internet of 2035 will be even more commercial, government-controlled and far less community-led. I am concerned that we are moving toward more closed ecosystems, proprietary protocols and standards, and national Splinternets that all abandon the very properties that made the internet such an impactful and positive tool for social change over the past 25 years.

“Of course, in not addressing many of the very real issues that the internet does pose for society, we have found ourselves in a situation where some kind of intervention is required. I just worry that the wrong actors have identified this opportunity to intervene. If we think back to how the internet was developed, it grew somewhat surreptitiously as far as commercial and political interests are concerned, which gave it the time and space to define around it the norms and governance structures that we now take for granted: values like interoperability, permissionless innovation and reusable building blocks. These are excellent properties, but they are not technical values; they were political choices, only possible because the internet was a publicly funded project intended for use in democracies for academic and military networks. As the internet has grown in importance and commercial interests have recognized opportunities to monetize it, the internet’s foundational values have been abandoned. Social media and messaging services have no interoperability.”

Steven Livingston, founding director of the Institute for Data, Democracy and Politics at George Washington University, wrote, “Narratives about technology tend to run hot or cold: ‘It is all terrific and a new democratic dawn is breaking!’ Or … ‘Technology is ushering in a dystopian nightmare!’ Both outcomes are possible. With the former, Western scholars tend to ignore or be unaware of digital network effects in the developing world that have a positive effect. This would include M-Pesa in Kenya and the entire array of information and communication technologies for development applications. I wrote an article several years ago about the positive effects of crowdsourced elections monitoring in Nigeria. I came up with a whopper example of academic jargon to describe this: Digitally enabled collective action in areas of limited statehood. Positive human intentions have been made actionable by the lower transaction costs in digital space.

“Another example of positive outcomes is found in the work of online information sites such as Bellingcat, Forensic Architecture, and The New York Times Visual Investigations Unit headed by Malachy Browne. We know things about war crimes and other horrific events because of the digital breadcrumbs left behind that are gathered and analyzed by people and organizations such as these. On the other hand, where human intentions are less laudable, these same affordances are used to erode confidence in institutions, spread disinformation and make the lives of others miserable. The kicker here is that digital phenomena such as QAnon are seen and understood by participants – at least many of them – as doing good. After all, QAnon is in a fight against evil, just as Forensic Architecture is out to expose war criminals. We end up judging the goodness and harmfulness of these two movements according to our own value structures. Is there some external position that allows us to determine which is misguided and which is God’s work? I believe there is. QAnon is no Forensic Architecture.”

A retired consultant based in Canada said, “Marshall McLuhan noted: ‘The most human thing about us is our technology.’ Language and culture are technology. Life is the emergence of complexity that engenders more complexity. Uncertainty is integral to evolutionary constraints shaping survival choices. We are at the threshold of a phase transition that demands we guide our choices during this struggle between empires ruled by elites and the next flourishing and ‘leveling-up’ toward a participatory democracy. All technologies can be weaponized. All weapons can find a positive use. There will never be a shortage of work and activity to do and to value when we are engaged in the enterprise of a flourishing life, community and ecology. In the 21st century, where everything that can be automated will be, there are three paradigms enabling response-able action:

  1. The power of a nation with its own currency – modern monetary theory.
  2. The enabling of the people to flourish as citizens – accomplished through universal basic assets (UBA) and guaranteed jobs (rather than unemployment insurance).
  3. Enabling communities to be response-able in a changing world through Asset-Based Community Development.”

Steve Jones, co-founder of the Association of Internet Researchers and distinguished professor of communication at the University of Illinois-Chicago, observed, “Digital spaces reflect analog spaces; that is, they are not separate from the pressures and tensions of social, political, economic and other dimensions of human life. It is not so much that digital spaces are ‘entrenched’ as that they will evolve in ways that are unpredictable while also predictably tracking social and political evolution.”

Russell Newman, associate professor of digital media and culture at Emerson College, wrote, “Perhaps most challengingly, our communications networks and the metadata of our use of them have themselves become intrinsically embedded within global capital flows, with aspects of our interactions with traditional media being as folded into this amalgam as the tracking of container-ship cargo. Making democratic media policy in its own right is challenging when it is interwoven with flows of global capital in this way. This is also another reason why antitrust has, itself, become fraught in many ways. New interest in resuscitating a moribund antitrust policy does not address the core logics in play here, as developing manifestations of power are unaddressable by it, barring much rethinking. There are numerous technical initiatives that seek to instill different rationales and logics for new forms of participation. Such initiatives, while useful to explore, neglect the most banal yet crucial insight of all: that all of the problems we face are social ones, not technological ones, and developing new web platforms of varying logics is ancillary to addressing the conditions that these trends do not just exacerbate but actually support.

“The notion that policy just ‘lags’ behind emergent tech is a red herring. The business models being pursued today were agendas before they became realities in policy debates, even if still gestating. I study this stuff intensively, and I was barely familiar with some of these initiatives introduced in the piece [the Applebaum and Pomerantsev article in The Atlantic that prompted the primary question asked in this canvassing of experts]. Participation in these new arenas is a privilege of both knowledge and, frankly, time that many working people do not possess (for that matter, even that I do not possess, and I occupy a position of relative privilege in that regard). … All of the ills identified are endemic to a time in which wages have effectively stagnated and the power of collective bargaining has been brought low (leading to greater efforts by necessity to pinpoint perfect audiences so as to clear markets); where policy toward corporate interests has intensified a divergence between the capital-owning sector and Main Street; where basic needs like health care are lacking for so many; where a personal-debt crisis (born not just of student debt but of historically stagnating wages) threatens the financial health of multiple generations and, by extension, the economy writ large.

“This is to leave aside the barriers being thrown up to voting itself and the constitution of right-wing echo chambers our new platforms have afforded, which have been armed and deployed to forestall these trends from changing. Elites across the globe share more commonalities in their interests and station than differences, even if national prerogatives differ. The climate crisis intensifies every single one of the trends above, and it is a crisis that these same economic elites look to evade themselves rather than solve. All of this does not portend stronger democratic features across our landscape. It portends continued division sown by artificial intelligence-driven suggestion engines, an economic climate that only finds bullet wounds covered over with Band-Aids that threaten new and larger future implosions, and a climate crisis that will only heighten these tensions.”

Thomas Lenzo, a consultant, community trainer and volunteer in Pasadena, California, said he expects that human behavior will not adapt to change, writing, “I expect a continuing transformation of digital spaces and life, and I expect it will be a mix of good and bad based on the driving actor:

  • Tech leaders, in general, will push the technology they create; some as visionaries, and some to make money.
  • Politicians will push technology in an effort to ensure they and their political party remain in office.
  • Public audiences for the most part will want those digital spaces that will improve the quality of their lives.
  • Criminals will seek digital spaces that enable them to commit crimes and get away without risk.”

Deirdre Williams, an independent internet governance consultant, commented, “I was lucky enough to attend an early demonstration of ‘Mosaic’ [the first graphical web browser] at the University of Illinois, Urbana-Champaign in 1993. I can still remember how I felt then – ‘Charm’d magic casements, opening,’ to borrow from Keats. I thought how wonderful this would be for the students in the rather remote small island state I had come from. Nearly 30 years later, it feels as though the miracle I was expecting didn’t happen. And plenty of unwelcome things happened instead – things to do with identity, with the community/individual balance in the society. Those unwelcome things are not all to be attributed to ‘digital life,’ but digital life seems to have failed to provide much of its positive potential. I may appear to be pessimistic; however, underneath there is optimism.

“The human perspective fails in its refusal to accept other ways of looking, of seeing, other priorities. Time is often ignored because it is an element beyond human control. And human agency is not the only agency. ‘There’s a divinity that shapes our ends, Rough-hew them how we will,’ says Hamlet to Horatio in Shakespeare’s ‘Hamlet,’ Act 5, Scene 2. Call it divinity, or Gaia, or simply serendipity, but the system is such that it always strives for balance. What is missing currently in ‘digital life’ is a sense of balance; the weightings are all uneven. They need time to reach equilibrium. The questions posed here are all about human agency, but the system itself is superhuman. Fourteen years may (or may not) be sufficient for the system to effect its levelling, but I would expect the pendulum to swing toward improvement because that is its nature.

“At the human level, ‘digital life’ has the potential to create globally shared experience and improve understanding, thus bringing greater balance among the human variable. Climate change, the movement of asteroids, solar flares and the evolution of the Earth’s geology will re-teach human animals their true place in the system and force them to learn humility again. Fourteen years and the opportunities provided by digital life will hopefully be enough to at least begin the reordering and balancing of a system in which humans acknowledge their place as members, not leaders; parts of a greater whole.”

Bill Woodcock, executive director at the Packet Clearing House, wrote, “For the internet’s first 40 years, digital spaces and the conversations they engendered were largely defined by individual interaction and real conversation between real people. In the past 10 or 15 years, though, we’ve moved away from humans talking with humans to machine-intermediated and machine-dominated ‘conversation’ that exists principally to exploit human psychological weaknesses and to direct human behavior. This is the ‘attention economy,’ in which bots interact with people or decide what people will see in order to guide them toward predetermined or profitable outcomes. This is destroying civic discourse, destroying the fundamental underpinnings of democracy and undermining the human intellectual processes that we think of as ‘free will.’ It’s not clear to me that any of the countervailing efforts will prevail, though 2035 is a long time from now, and I am irrationally optimistic.”

A well-known UK-based professor of media history said, “I am gloomy, but with hopeful glints. Based on my exchanges with policymakers and others, I don’t believe that they are up to speed on this. There is a vanishingly small opportunity, but presumably a real one, to get the right or better policies and regulations in place so that the digital space is tipped in a positive way. There never has been and never will be a ‘medium’ that is inherently anything. How things go depends on how they are used and regulated. Some ‘public-interest’ algorithms are being developed, and some governments have at last woken up to the real challenges that dis/mis/malinformation are causing. But it’s late. Plus, what might be seen as a ‘good’ regulation in a democratic society is a ‘bad’ one in an authoritarian one – so policy is quite complex.

“Looking at the changes in private and public lives over the last five years, it is remarkable how uncivil public discourse has become so swiftly. It is the degradation of manners that is so dangerous. Manners require taking into account the experiences of others. In addition, the capacity of the foreign/domestic/rich to attack and protect their own interests online has grown exponentially. So, there might be a policy shift; there might be an ability to bring the big social media companies that profit from divisive behaviors to take a more public-interest view of their power. Right now, we are looking at the tabloidisation of life. There are some ways forward. Vaccine hesitancy in the UK has been tackled really interestingly (locally and familiarly). On the other hand, the sense of collective values today is weakening.”

A distinguished professor of computer science at one of the largest universities in the U.S. commented, “As Richard Feynman put it, ‘To every man is given the key to the gates of heaven; the same key opens the gates of hell.’ This statement can be applied to nuclear technology and to digital technology. Whether a technology opens the door to heaven or hell is up to how well people regulate the use of it. The internet’s unprecedented growth took people by surprise, and thus society was unprepared. Few foresaw where it might be headed, and warning voices were not well heard (see the book ‘The Age of Surveillance Capitalism’ as one example). It takes a deep understanding to develop effective solutions to make digital technology better serve society’s needs and to raise the bar against abuse. We are not there yet, but we will get there.”

An award-winning author wrote, “While human survival depends upon a sophisticated ability to categorize, the current notion that our intellectual lives should rest upon aligning with one side or another, or seeing the world in either/or terms, can be traced to the chokehold that digital spaces have on our minds and lives today. We need digital spaces – from email to TikTok – that leave more room for not-knowing and for attending to issues and questions that are messy, murky and shifting. This kind of digital space might allow for multiple tempos of communication-and-response, as well as for operating systems that are more in sync with freeform, associational, ‘inefficient’ types of human thinking, such as reverie, forgetting, confusion, doubt and, above all, uncertainty. It is not a coincidence, in my view, that such mysterious yet astonishing realms of human thinking are devalued in society today.”

An author and journalist based in the Northeastern U.S. urged, “It’s important to rethink and re-envision the meta-architecture of digital spaces so that they can allow for more open-minded, dialectical, creative thinking and social connections. This meta-question seems to me to be almost entirely ignored in society today. What’s missing from national conversations about the nature of digital spaces is a realization that the architecture and aesthetics – i.e., the look and feel and bones – of our virtual realities exacerbate human inclinations to see the world in clear, binary and easily digestible terms.

“In a nutshell, I believe that the way digital spaces are set up deeply shapes our behavior in these spaces, just as strongly as physical landscapes and human-built structures implicitly and explicitly influence our actions and moods. In effect, the meta-quality of digital spaces disturbs me more than even the current alarming content of these realms.

“The digital realm is a space of boxes, templates, lists, bullet points and crisp brevity. In searching, most people are offered a linear, pre-prioritized list of ‘answers’ – often even before they finish asking a question. The value and worth of people and objects are aligned with explicit data; ratings have become a standard of measurement that squeezes out all room for in-betweenness or dynamic change. In these and many other ways, digital spaces narrow our vision of what it means to know, paving the way for the types of cursory, extremist and simplistic content online we see today.”
