The Future of Digital Spaces and Their Role in Democracy

1. A sampling of some of the key overarching views

The following selection of responses covers some of the more panoramic and incisive big ideas shared by several dozen of the 862 thought leaders participating in this canvassing.

This is the fork in the road where people can choose a better future – or a downward path

Mark Davis, associate professor of media and communications at the University of Melbourne, wrote, “Against all expectations otherwise, we are still in the ‘Wild West’ phase of the internet, where ethical and regulatory frameworks have failed to keep up with rapid advances in technology. The internet, in this phase, and against early utopic hopes for its democratic utility, has had severely negative impacts on democracy that are not offset by its more-hopeful developments such as Black Twitter and #metoo, among the many innovative, emancipatory uses of online media. One reason for this is that the surveillance business model on which digital platforms operate – which has seen traditional liberal democratic intermediaries displaced to some extent by algorithmic intermediaries – privileges quantities of engagement over the qualities of content.

“Emancipatory movements exist in the digital folds of an internet designed to maximise corporate profits. It has seen a new class of mega-rich individuals and corporations emerge that, in effect, now own the infrastructure of the ‘public sphere’ and have enormous lobbying power over government. The affordances of these systems have at the same time fostered the creation of alternative media spheres where extremism and hate discourse continue to proliferate.

“We are fast approaching a crisis point where the failures of the present hyper-corporate, relatively unregulated model of the internet are having severe, detrimental impacts on public communication. We are at a proverbial fork in the road. One route leads to an ever-deeper downward spiral into digital dystopia: hyper-surveillance, predictive technology working hand in hand with authoritarianism, disinformation overload and proliferating online divisiveness and hatred. The alternative route is a more-regulated internet where accountability matters, guided by a commonly assented ethics of public culture.

“Is this alternative possible in an era of winner-takes-all partisanship and corporate greed so vast that it is literally interplanetary in its ambitions? I fear not, but if we are to be civic optimists then it is the only possible hope, and we have no alternative but to say ‘yes’ to a better digital future and to become digital activists who collectively work to make it happen.”

Move to a new moral theory that it is wrong to exploit known flaws in the human psyche

Brad Templeton, internet pioneer, futurist, activist and former president of the Electronic Frontier Foundation, said, “I hold some hope for the advancement of a new moral theory I am exploring. Its thesis is that it is wrong to exploit known flaws in the human psyche. A well-known example is gambling addiction. We know it is wrong to exploit that, and we even make it illegal to exploit it and other addictive behaviours. On the other hand, we have no problem with all sorts of marketing and computer-interaction tricks that unconsciously lead us to do things that, when examined later, we agree are against our interests and which exploit flaws well established in the scientific literature. A/B testing to see what is more addictive would be deprecated rather than seen as a good idea.

“This psyche-exploitation avoidance approach is new but might lead to a way to design our systems that has stronger focus on our true interests. While it would be nice if we could make social media that are not driven by advertising, and thus work more toward serving the interests of users/customers than advertisers/customers, this is not enough. After all, Netflix also works hard to addict users and make them binge, even though it does not take advertising.

“I don’t think anybody knows what form the changes for the better in the digital public sphere will take, but it’s clear that the players and their customers find the current situation untenable. They will find solutions because they must. Tristan Harris has convinced Facebook to at least give lip service to his ‘time well spent’ positioning – to make people feel, upon reflection, that their time on social media was worthwhile, whereas today many feel it’s not.

“I have proposed there be a way for friends to anonymously ‘shame’ friends who post false and divisive material – a way that you can learn that some of your friends found your post false or lacking, without knowing who they were (so they don’t feel they will risk the relationship to tell you, for instance, that you fell for a false meme). This will not be enough, but it’s a start. I also hope we’ll be trained not to trust video evidence any more than we do text, because of deepfakes. It will get worse in some ways, too. This is an adversarial battle, with some forces deliberately trying to disrupt their enemies. But they will certainly try. Propaganda, driven by AI, will continue to be weaponized.”
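The mechanism Templeton proposes – friends flag a post as false or lacking, and the author learns only the aggregate count, never who flagged it – can be sketched in a few lines. This is a purely illustrative sketch; the class and method names are invented for this example and do not come from any real platform:

```python
# Illustrative sketch of an anonymous friend-feedback mechanism.
# The author of a post sees only how many friends flagged it,
# never which ones. All names here are hypothetical.

from collections import defaultdict


class AnonymousFeedback:
    def __init__(self):
        # post_id -> set of one-way-hashed flagger ids; the set
        # prevents double-flagging without storing identities
        # in a directly readable form.
        self._flags = defaultdict(set)

    def flag(self, post_id: str, flagger_id: str) -> None:
        # Store only a one-way hash of the flagger's id. (A real
        # system would use a salted cryptographic hash so stored
        # values cannot be linked back to a person.)
        self._flags[post_id].add(hash(flagger_id))

    def feedback_for_author(self, post_id: str) -> str:
        # The author-facing view: a count, with no identities.
        count = len(self._flags[post_id])
        if count == 0:
            return "No concerns raised."
        return f"{count} of your friends found this post false or lacking."


fb = AnonymousFeedback()
fb.flag("post-42", "alice")
fb.flag("post-42", "bob")
fb.flag("post-42", "alice")  # a repeat flag from the same friend is ignored
print(fb.feedback_for_author("post-42"))
```

The key design point is that the stored state supports only two queries – "has this friend already flagged this post?" and "how many flags does this post have?" – so the social cost of honest feedback that Templeton describes is removed by construction.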

People will use new tools to turn rage into public awareness, acceptance and rapport

Maja Vujovic, owner/director of Compass Communications in Belgrade, Serbia, predicted, “By engineering more tools to tap our commonalities rather than our differences, we will keep transcending our restrictive bubbles between now and 2035. Automatic translation and transcription already tackle our language differences. Our public fora, like Wikipedia and Quora, teach us about foreign cultures, customs or religions. We will also find new ways to manage our conflicting gender or political identities, by ‘translating,’ role-playing or modeling them (maybe through augmented reality and virtual reality). The gaming industry, for one, could creatively crush its misogyny and help reform hostile workplaces and audiences everywhere faster.

“Over these early digital decades, our online public spheres have brought major issues of contention to the surface – truly globally – for the first time ever. Social media algorithms exploited our many frustrations, thus the rage was all the rage. In the future, we’ll turn that public rage into public awareness, then into acceptance, then – in a distant future – into rapport. One step down, three to go; we will struggle through a four-step algorithm regarding each of our principal polar opposites. We will learn to hold ourselves accountable over time. When our public online spheres normalize our real identities (eliminating bozos and bots) we will prove civil on the whole. In the years to come, a new global consensus and protocols will inevitably emerge from and for dealing with worldwide emergencies such as pandemics or climate change.

“Improvements will largely be owed to the global public debates we passionately exercise online. If we, the taxpayers of all countries, crowdsource the most viable identity-vouching solutions, we could, de facto, become fully represented. The distributed technologies will boldly attempt to keep a tally of everyone in all of our demographic, economic, cultural and other tribes. …

“It would be ludicrous to not want to walk our talk directly once we become equipped to do so. We could then automate, gamify or distribute the governance (or choose ‘all of the above’). As a bonus, our global digital public spheres would vastly improve as well. In effect, we would be saving the civilization baby and purifying its bath water, too.”

We should do more work imagining and creating new spaces

Ethan Zuckerman, director of the Initiative on Digital Public Infrastructure at the University of Massachusetts-Amherst, said, “We can, absolutely, change digital spaces to better serve the public good. But we’ve not made the broad commitment to do so. Right now, we are overfocused on fixing existing broken spaces, for instance, making Facebook and Twitter less toxic. We need to do more work imagining and creating new spaces with explicit civic purposes and goals if we are to achieve better online communities by 2035. We begin solving the problem of digital public spaces by imagining spaces designed to encourage pro-social conversations. Instead of naively assuming that connecting people will lead toward increased social harmony, we need to recognize that functional public spaces require careful engineering, moderation and attention paid toward marginalized and traditionally silenced communities. This innovation is more likely to come from real-world communities who take control of their own digital public spaces than it is to come from tech entrepreneurs seeking the next billion-person network. Regulation has a secondary role to play here – its job is not to force Facebook and others into pro-social behavior, but to create a more level playing field for these new social networks.”

A robust regulatory approach can improve more of the digital sphere

Kunle Olorundare, vice president of the Nigeria Chapter of the Internet Society, said, “The Fourth Industrial Revolution has started in most countries, and we are witnessing manufacturing in the digital space in a way that is unprecedented. Our society will be smarter and have richer experiences – it will be bettered as it engages in more-immersive education and virtual-reality entertainment. Our currency may be totally digital. The Internet of Things (IoT) will facilitate a brighter society. However, there are many concerns. More financial heists and scams may be perpetrated through digital platforms. Cryptocurrency, due to its decentralised nature, is used to facilitate crime; ransomware perpetrators demand cryptocurrency as a method of untraceable payment, and illegal international deals are made possible by payment through untrackable cryptocurrency. Terrorism may be advanced using new robotics tools and digital identities to wreak more havoc. It is possible that, with a proper framework and a meticulous, robust regulatory approach, the positive advantages will outweigh the ills.

“Most aspects of our lives will be impacted positively by the emerging technologies. The IoT can usher in smart cities, smart agriculture, smart health, smart drugs, smart sports, smart businesses, smart digital currencies. Robotics will be used to combat pandemics by promoting less physical contact, which will help to flatten the curves, and it will be used in advanced industrial applications. The opportunities are limitless. However, all hands should be on deck so that the negative impact will not erode the gains of digital evolution. Global collaboration through global bodies is necessary for positive digital evolution. International governance and national governance of each country will have to be active. Sensitisation of the citizenry against the ills of digital transformation is key to sustaining the gains. Inventors and private businesses have roles to play. A future technological singularity is also a threat.”

Tech alone can’t solve inequality or hate; humans must collaborate to bring true change

danah boyd, founder and president of the Data & Society Research Institute and principal researcher at Microsoft, commented, “Technology mirrors and magnifies the good, bad and ugly of society. There are serious (and daunting) challenges to public life in front of us that are likely to result in significant civil unrest and chaos – and technology will be leveraged by those who are scared, angry or disenfranchised even as technology will also be used by those seeking to address the challenges in front of us. But technology can’t solve inequality. Technology can’t solve hate. These require humans working together. Moreover, technology is completely entangled with late-stage capitalism right now, and addressing inequality/hate and many other problems (e.g., climate change) will require a radical undoing/redoing of capitalism. My expectation is that technology will be leveraged to reify capitalism rather than to help undo its most harmful components.”

These are challenging issues, but people and tools will evolve a better public sphere online

Vinton G. Cerf, vice president and chief internet evangelist at Google and Internet Hall of Fame member, observed, “Digital spaces have evolved dramatically over the past 50 years. During that time, programmable devices have become central to an unlimited number of products upon which we increasingly depend. Information space is instantly accessible thanks to the World Wide Web and search engines such as Google. Collaboration is facilitated with email, texting, shared documents, access to immeasurable amounts of data and increasingly powerful computer-based tools for its use.

“Over the next 15 years, instrumentation in every dimension will color our lives to include remote medical care, robotics and self-driving cars. Cities will have models of themselves they can use to assess whether they are functioning properly or not; these models will be invaluable to aid in response to emergencies and to smooth the course of daily life.

“During this same period, we will have to continue to cope with the amplifying effects of social media, including the side effects of misinformation, disinformation, malware, stalking, bullying, fraud and a raft of other abuses. We will have made progress in international agreements on norms of civil behavior and law enforcement in online environments. The internet or its successor will have become safer and more secure, and preservation of these properties will be easier with the help of new devices and practices. There will be more collaboration between government and the private sector in the interest of citizen safety and privacy. These are hard problems, and abuses will continue, but tools will evolve to provide better protection in 2035.”

Requiring platforms to become interoperable would allow people to choose where they want to be

Cory Doctorow, activist, journalist and author of “How to Destroy Surveillance Capitalism” and many other books, recommended, “The move to lower switching costs – by imposing interoperability on online spaces – will correct the major source of online toxicity: the tyranny of network effects. Services like Facebook are so valuable due to network effects that users are loath to leave, even when they have negative experiences there.

“If you could leave Facebook but still connect to your Facebook friends, customers and communities, then the equilibrium would shift – Facebook would have to be more responsive to users because otherwise the users would depart and it would lose money. And if Facebook wasn’t responsive to user needs, the users could take advantage of interoperability to leave, because interoperability means they don’t have to give up the benefits of Facebook when they go.”

We need to start training our babies as carefully as we are talking about training our AIs

Esther Dyson, internet pioneer, entrepreneur and executive founder of Wellville.net, responded, “I see things getting both better and worse for people depending on who you are and under what jurisdiction you live. (It is ever thus.) There is no particular endpoint that will resolve the tension between more power for both good and bad actors. We will have AI [artificial intelligence] that can closely monitor speech and, to some extent, reactions to speech – but we will have both good and bad actors in charge of the AIs. As more of life goes online, people will have more freedom to choose their virtual jurisdictions, and the luckier ones will be able to get an education online and perhaps to move out to a better physical jurisdiction.

“By 2065, I would hope that there would be some worldwide movement that would simply rescue the bottom-of-the-pyramid citizens of the most toxic governments, but I believe that the (sometimes misguided) respect for sovereignty is strong enough to persist through 2035. At what point will we be able to escape to new floating jurisdictions (especially as many places get flooded by climate change) or even – though this will remain an expensive proposition – into space?

“Somehow, we have evolved to prefer superiority over absolute progress, and we are unlikely to move into a world of evenly distributed power. To get more specific, I do see business playing a bigger role, but businesses are seduced by and addicted to increasing profits just as political actors are seduced by and addicted to power.

“Somehow, we need to start training our babies as carefully as we are talking about training our AIs. Train them to think long-term, to favor their own species, to love justice and fairness.”

Putting people’s rights above companies’ rights will create better spaces

Adam Nelson, software development manager at Amazon, commented, “Initiatives around privacy, data portability and – most importantly – putting the rights of individuals, governments and marketplaces above those of companies will lead to a more-equitable digital space and digital life. This will be an uneven transition though, with many people still suffering from abuse.”

We need a Universal Declaration of Digital Rights

Raashi Saxena, project officer at The IO Foundation and scientific committee member at We, the Internet, wrote, “We need to move toward defining technical standards that will protect citizens’ data in digital spaces from harm. One such initiative from The IO Foundation is the Universal Declaration of Digital Rights, which would act as a technical reference for technologists, which we identify as the next generation of rights defenders, so that technology is designed and implemented in a way that proactively protects citizens. Governments are not closing the loop when it comes to tech policies by not offering infrastructures that implement them. Examples of how this is possible can be found in corporate tech: Apple can enforce its policy (its licensing business model) in its digital assets such as music because it has implemented its own infrastructure for that. The same degree of protection should be provided to citizens. Their sharing of data does not follow a different model from a technical perspective. In essence, they are licensing their personal data. The underlying problem is that we do not have a global, agreed-upon list of digital harms, that is, harms that can be inflicted upon us by the data that models all of us. In order to implement public infrastructures that foster meaningful connectivity, philanthropies should pursue the core principle of ‘Rights by Design.’ We first need to catalog and collectively agree on a common definition of digital harms so that we can proceed to define the rights to be protected. The areas of work for them should be around digital governance, sustainability and capital to promote the rise of other stakeholder groups that can sustain, scale and grow. 
Supporting projects to implement research-informed best practices for conflict zones and sparsely populated terrains should be the highest priority, since access to information and communication can constitute a critical step in the defense of the territories of these communities.”

Stop playing with ‘technocratic incrementalism’ and take big steps toward positive change

Caitlin Howarth, humanitarian data and security analyst, asked, “Are there ways that things can change for the better? Yes. Will that change be anything less than complex and dramatic? No. We need to stop playing at this with technocratic incrementalism. Here are some needed internet governance measures:

  1. Firmly establish that information is a human right, interdependent upon other established rights (particularly the right to protection). The right to information – accessing, creating, sharing, updating, storing and deleting it – is particularly critical during crises and must be protected as a vital condition for securing all other human rights. This right to information must also be protected and comprehensively advanced – along with its interdependent rights – through the activities and obligations of human rights and humanitarian organizations that operate according to shared standards. As Hugo Slim and others have called for, this is the moment for a fifth Geneva Convention given the fact that ICT systems are routinely targeted first as ‘dual use’ infrastructure and are therefore considered valid targets under outdated laws of armed conflict. 
  2. Using a rights-based approach, substantially advance these rights using a comprehensive framework of accessibility, security and protection (e.g., digital security and surveillance awareness), civilian redress and rectification measures (e.g., regulatory guidance and claims structure, akin to the original design of the Consumer Financial Protection Bureau) and eliminating or ending liability-shielding practices for major technology companies.
  3. Every cybersecurity professional is aware that governments, including the U.S., are on the cusp of achieving quantum computing breakthroughs that will render current digital security protocols meaningless. Invest explicitly and rapidly in quantum-era civilian-protection mechanisms that could meaningfully advance civilians’ human rights when such government capacity comes online; if not, we risk a rapid descent into wholesale authoritarianism.
  4. Establish hard national and international regulations on the propagation of cyber currencies and the use of blockchain technologies that bear disproportionately harmful environmental burdens without demonstrable, comparable benefits to society as a whole. Similarly, regulate the use of digital-identification systems, especially those connected to biometric data and irreversible data storage, to ensure the fundamental bodily integrity of human beings’ ‘digital bodies’ as well as their physical persons. When systems cannot pass the stress tests to meet minimum rights-based requirements, they should not be permitted to proliferate and cause harm. We need regulatory systems similar in focus and function to the FDA for platforms of such significance – and they must be free of regulatory capture.”

People should be recognized for what they do; they should not pollute their rivers

Srinivasan Ramani, Internet Hall of Fame member and pioneer of the internet in India, said, “I am reminded of life in Kerala, one of the states of India. There are many rivers and backwaters there and it is common for people to live on the rivers; that means that most people live on the edge of riverbanks or the backwaters. The rivers give them food (mostly fish) and transportation by boat. The rivers, of course, give them drinking water. The people are very hygiene-conscious, because if they pollute their river, they will be ruining their own lives.

“We now live by the internet, and we should be equally careful not to pollute it with misinformation, unreliable information, etc. Of course, people have freedom of expression. Going back to the river analogy, do they have freedom to pollute the river? I think, and I hope, that the amount of rubbish on the internet will diminish in the coming years. People should have freedom of expression, but they should not be able to hide behind anonymity. I would hope that every original post and every forwarding would be signed in a manner that would let us identify the person responsible. Then there is the question of ignorant postings. One may express one’s opinion and own the responsibility for it. That does not guarantee that it is a contribution for the good of society. You may claim in all sincerity that a certain herbal remedy protects you against COVID-19, but it may be a statement with no reliable evidence behind it whatever. It can land the reader in trouble by misleading him or her. We can probably invent an effective safeguard against it, but it may not be very easy.”

Changes in governance and law, amplified by tech, can help shape a better public sphere

Beth Simone Noveck, director of the Governance Lab and author of “Solving Public Problems: How to Fix Our Government and Change Our World,” observed, “Many people are working today on building better alternatives to the current social media dumpster fire, and many institutions are turning to the use of platforms designed to facilitate more-civil and engaged discourse. … Brazil has adopted platforms like Mudamos, which enables citizens to propose legislation and which is being used systematically and on an ongoing basis for ‘crowdlaw,’ namely to enable ordinary citizens to participate in the legislative process. Taiwan has engaged the public in co-creating 26 pieces of national legislation, but perhaps even more exciting is its creation of a ‘Participation Officers Network’ to train officials to work with the public in a more-conversational form of democratic engagement enabled by technology, day in and day out.

“The most exciting initiatives are those where institutions are collaborating with civil society, not as a pilot or experiment, but as an institutionalized and new form of governance and problem solving. In the UK, GoodSAM uses new technology to crowdsource a network of thousands of amateur first responders to offer bystander aid in the event of an emergency, thereby dramatically improving survival rates. Petabancana enables residents in parts of Indonesia and India to report on fair weather flooding to facilitate better governmental disaster response. Civic tech developers are creating exciting new alternatives designed to foster a more participatory future. Whether it is platforms for citizen engagement like Pol.is or Your Priorities or projects like Applied – hiring software designed by the UK Behavioral Insights Team to foster diversity rather than inadvertently entrenching new biases – there has always been a community of tech designers committed to using tech for good.

“But the technology is not enough. The reforms that have the biggest impact are those changes in law and governance that lead to uses of technology that promote systematically more responsive, engaged and conversational forms of governance on a quotidian basis by prohibiting malevolent uses of tech while encouraging good uses. For example, New Jersey is exploring opportunities to regulate uses of hiring technology that enable discrimination. But, at the same time, New Jersey is running a Future of Work Accelerator to invest in and promote technologies that protect workers, amplify workers’ voices and strengthen worker rights.

“In the United States, many positive uses of technology are happening in cities and at the local, rather than the national, level. The Biden Administration’s July 2021 OMB request for comments to explore more equitable forms of citizen engagement may portend greater investment in technology for sustained citizen engagement. Also, the introduction of machine learning is enabling the creation of new kinds of tools to facilitate more efficient forms of democratic engagement at scale.

“Given the proliferation of new platforms and initiatives designed to solve public problems using new technology and the collective intelligence of communities, I am hopeful that we will see increasing institutionalization of technologies that promote strong democracy and civil rights. However, in the absence of sufficient investments in civic infrastructure (i.e., government and philanthropy paying for these platforms) and investments in training citizens to learn how to be what Living Cities calls ‘resident engaged,’ the opportunity to use technology to enable the kind of democracy and society we want will go unrealized.”

Industry should come together with the public sector to broaden access to digital skills

Melissa Sassi, global head of the IBM Hyper Protect Accelerator, which is focused on empowering early-stage startups, suggested, “Initiatives for improvement that might have the largest impact on digital life include:

  1. Access to affordable internet for the 50% that are not currently connected and/or those that are unable to connect due to costs.
  2. Digital skill-building for those with access but currently unable to make meaningful use of the internet.
  3. Empowering underserved and underrepresented communities via digital inclusion (women/girls, youth, people with disabilities, Indigenous populations, elderly populations, etc.).
  4. Investment in locally generated tech entrepreneurship endeavors in hyper-local communities. Tech leaders play an important role by incorporating design thinking into everything and anything built. It is important to hire and involve a more-representative group of builders, design makers and experts in designing and creating solutions that are more empathetic with audience needs, making the customer and/or user central to what gets shipped and/or evolved.
  5. Tech leaders from social media platforms should be playing a greater role in data stewardship, protection, privacy and security, as well as incorporating more-informed consent protocols for those individuals who might lack the necessary skills to understand what data is going where and how data is being used when it comes to ad serving and other actions taken by social media networks.
  6. Tech leaders play a fundamental role in training our current and next generation of users on the introductory building blocks of learning to code, as well as what it means to be digitally skilled, ready, intelligent, literate and prepared for the future of work. This is something that could be incorporated into a multistakeholder approach where industry comes together with the public sector to broaden access to digital skills.
  7. Improvement areas relating to digital life include individuals becoming more productive at work and in their personal lives, utilizing technology to drive outcomes (health care, education, economic, agricultural, etc.) and incorporating technology to advance the 17 UN Sustainable Development Goals.
  8. Technology could play an incredibly important role in evolving the global monetary system to one that is decentralized. One that is for the people, with the people, by the people; where those at the bottom of the pyramid do not suffer from faulty monetary policies.”

Leaders will see they must cooperate to convert swords into sustainable solutions

Jonathan Grudin, principal human-computer design researcher at Microsoft and affiliate professor at the University of Washington, wrote, “In 2005, digital spaces served the public good. Recovering from the internet bubble, we were connecting with long-lost classmates and friends and conducting business more efficiently online. By 2020, digital spaces had become problematic. Mental health problems afflicted young and old, there was rising income inequality, trust in governments and institutions had eroded, there were elected politicians of staggering ineptitude, and tens of millions were drawn to online spaces rife with malicious conspiracy fantasies and big lies. Trillions of dollars are spent annually to combat bad actors who may have the upper hand. Debt-ridden consumers are succumbing to marketers armed with powerful digital technologies. In 2035, another 15 years will have elapsed. … Life may be worse for the average person in 2035 than today, but I’m betting the digital spaces will be better places.”

Tech will mostly be applied to controlling populations and resources and to entertainment

Douglas Rushkoff, digital theorist and host of the NPR One podcast “Team Human,” predicted, “There will be many terrific, wonderful innovations for civics in digital spaces moving forward. There will also be almost unimaginably cruel forms of oppression implemented through digital technology by 2035. It’s hard to talk too specifically about digital technology in 2035, since we will likely be primarily dealing with death and destruction from climate change. So, digital technology will be useful for organizing humanity’s broad retreat from coastal areas, organizing refugee camps for over a billion people, administrating medical and other forms of triage, and so on. That’s part of the problem when casting out this far. We don’t really know how much of the world will be on fire, whether America will be a democracy, whether China will be dominating global affairs, how disease and famine will have changed the geopolitical landscape, and so on. So, if I have to predict, I’d say digital technology will be mostly applied to: 1) control populations, 2) administrate mass migrations and resource allocation and 3) provide entertainment.”

Digital transformation arrives as climate change is at the top of the global agenda

Grace Wambura, an associate at DotConnectAfrica based in Nairobi, said, “Digital transformation will pursue unlimited growth, and our limitless consumption threatens to crowd out everything else on Earth. Climate change is already happening, we are overspending our financial resources, we require more fresh water than we have, income inequality is increasing, other species are diminishing, and all of these are triggering shockwaves. At this important time, technology initiatives aimed at ending climate change, achieving financial inclusion, overcoming gender inequalities and enabling the provision of safe drinking water will have a great impact on communities by 2035. Tech leaders are increasing their power and digital surveillance. They can also apply technology to come up with new options to cope with the problems arriving with the digital technology evolution. Thanks to technology, everyone will be able to access the world’s best services, resources and knowledge. One thing that will remain a puzzle and continue to cause concern is the vital need for both privacy and security.”

Bad actors are still going to act bad; no one is in charge of the internet

Alan Mutter, consultant and former Silicon Valley CEO, observed, “The internet is designed to be open. Accordingly, no one is in charge. While good actors will do many positive things with the freedom afforded by digital publishing, bad actors will continue to act badly with no one to stop them. Did I mention that no one is in charge?”

We need an FDA for tech, an agency to help monitor and regulate its effects on humans

Carolina Rossini, an international technology law and policy expert and consultant who is active in a number of global digital initiatives, predicted, “For years to come – based on the current world polarization and the polarization within various powerful and relevant countries – I feel speech and security risks will increase. Personal harm, including a greater impact on mental health, might also increase within digital realms like the metaverse. We might need some new form of a regulatory agency that has some input on how technology impacts people’s health. We have the FDA for medicines and more; why not something like that for the tech that is getting closer and closer to being put inside our bodies? If countries do not come together to deal with those issues, the future might be grim. From building trust and cooperation to good regulation against large monopolistic platforms to better review of the impact of technologies to good data governance frameworks that tackle society’s most pressing problems (e.g., climate change, food security, etc.) to digital literacy to building empathy early on, there is a lot to be done.”

New breeds of social platforms and other human institutions have to emerge

Robin Raskin, a writer, conference organizer and head of the Virtual Events Group, exclaimed, “There should be a UBI – Universal Basic Internet – for the good of all! Human nature never changes, so the internet will have to keep evolving to try to stay ahead of human greed; the same holds true for all of our other human institutions – evolution and change have to be the norm. Digital currency has to be regulated on a worldwide basis if the internet is NOT going to be a place for ransomware and money laundering. In 2035 there will be more social players. Facebook is already falling in popularity, paving the way for a new breed of social media platforms that seem more in tune with keeping their citizens safer. The metaverse – digital twins of real worlds or entirely fabricated worlds – will be a large presence by 2035, unfortunately with some of the same bad practices seen on the internet today, such as personal-identity infringements. Regulators will crack down on privacy violations. Posts clearly marked as to their origins (possibly on the blockchain) will authenticate the source of information. Warnings about information being suspect will be worked out. The Internet of Things will be in full swing, creating safer, more-efficient cities – provided adequate privacy practices are created. Advertisers hungry for information and the traditional ad model make the internet less important than it could be. Subscription models, possibly based on usage, are one possible answer.”

“The metaverse – digital twins of real worlds or entirely fabricated worlds – will be a large presence by 2035, unfortunately with some of the same bad practices on the internet today such as personal-identity infringements.”


Robin Raskin, a writer, conference organizer and head of the Virtual Events Group

Can we meet the challenge of automating trust, truth and ethics?

Francine Berman, distinguished professor of computer science at Rensselaer Polytechnic Institute, (sharing statements she had earlier made in a long interview with the Harvard Gazette) wrote, “Tech is critical infrastructure. It saved lives during the pandemic. It also enabled election manipulation, the rapid spread of misinformation and the growth of radicalism. The same internet supports both Pornhub and CDC.gov, Goodreads and Parler.com. The digital world we experience is a fusion of tech innovation and social controls. For cyberspace to be a force for good, it will require a societal shift in how we develop, use and oversee tech, a reprioritization of the public interest over private profit.

“Fundamentally, it is the public sector’s responsibility to create the social controls that promote the use of tech for good rather than for exploitation, manipulation, misinformation and worse. Doing so is enormously complex and requires a change in the broader culture of tech opportunism to a culture of tech in the public interest. There is no magic bullet that will create this culture change – no single law, federal agency, institutional policy or set of practices will do it, although all are needed. It’s a long, hard slog. Changing from a culture of tech opportunism to a culture of tech in the public interest will require many and sustained efforts on a number of fronts, just like we are experiencing now as we work hard to change from a culture of discrimination to a culture of inclusion. That being said, we need to create the building blocks for culture change now – proactive short-term solutions, foundational long-term solutions and serious efforts to develop strategies for challenges that we don’t yet know how to address. …

“At the root of our problems with misinformation and fake news online is the tremendous challenge of automating trust, truth and ethics. Social media largely removes context from information, and with it, many of the cues that enable us to vet what we hear. Online, we probably don’t know whom we’re talking with or where they got their information. There is a lot of piling on. In real life we have ways to vet information, assess credentials from context and utilize conversational dynamics to evaluate what we’re hearing. Few of those things are present in social media.

“Harnessing the tremendous power of tech is hard for everyone. Social media companies are struggling with their role as platform providers (where they are not responsible for content) versus their role as content modulators (where they commit to taking down hate speech, information that incites violence, etc.). They’ve yet to develop good solutions to the content-modulation problem. Crowdsourcing (allowing the crowd to determine what is valuable), third-party vetting (employing a fact-checking service), advisory groups and citizen-based editorial boards all have truth, trust and scale challenges. (Twitter alone hosts 500 million tweets per day.)”

Investing in change can have a multiplied impact

Amy Sample Ward, CEO of the Nonprofit Technology Enterprise Network, observed, “As we build policies, programs and funding mechanisms to support bringing more and more people online, it will be necessary to build policies and appropriate investments to ensure folks are coming online to digital spaces that are safe and accessible. Misinformation is a massive concern that will continue without deliberate and direct work, and the pandemic has made clear how necessary the internet is in everyday life, from virtual school to virtual work, telehealth to social connection. The inequities we have enabled around digital divides have exacerbated many other clear inequities around health, social supports, employment, schooling and much more. This means that while challenges are compounded, investing in change can have a multiplied impact.”

Tech leaders and politicians can play a beneficial role

Mei Lin Fung, chair of People-Centered Internet and former socio-technical lead for the U.S. Department of Defense’s Federal Health Futures initiative, predicted, “The trajectory of digital transformation in our lives and organizations will have parallels to the transformation that societies underwent with the introduction of electricity. Thus, the creation of digital public goods and digital utilities will allow for widespread participation and access in digital transformation. This is already underway at IEEE.org, the International Telecommunication Union and action-oriented forums like the World Summit on the Information Society and the Internet Governance Forum. There are tech leaders and/or politicians who are playing and can play a beneficial role: 1) Antonio Guterres, the first electrical engineer to be UN Secretary-General, has established the Digital Cooperation Roadmap, bringing together stakeholders from across many sectors of society; 2) Satya Nadella, CEO of Microsoft; 3) Ajay Banga, executive chair of MasterCard; 4) Marc Benioff, chairman of Salesforce; 5) an original innovator of the internet, Vint Cerf, now a Google vice president, and other internet pioneers who built the internet as a free and open resource.

“All of these and more will be working to build bridges to a better approach to digital transformation. The most noticeable improvement in the network in 2035 will be that digital will become more invisible, and it will be much more natural and easier to navigate the digital world.

“This transformation will be similar to the evolution of the impact of writing. At the beginning, it was difficult to learn to write, but it advanced broadly and quickly. After digital skills become a normal part of people’s education, we will see a shift to a digital world with many more digitally literate people. It will be like the film ‘Back to the Future’ – the best parts of human life will flourish, augmented by digital. Current problems that will be diminished include cyberattacks, misinformation, fake news and the stirring up of tribal conflicts. Major concerns that will persist include the uses of digital tools and networks by criminals – for human and sex trafficking, for online abuse of the vulnerable (especially children), for fraud, for violence and for drug trafficking – as well as increasing cyberattacks by both state and nonstate actors and increasing attempts to shape and manipulate political discourse by cyber means.”

Hoping for the decommodification of digital platforms and the rise of AI-generated ad hoc networks

Bart Knijnenburg, associate professor of human-centered computing at Clemson University, said, “One big transformation that I am really hoping for is the decommodification of the spaces that facilitate online discourse. Right now, most of our online interactions are aggregated on a few giant social networks (Twitter, Facebook, Instagram). We tend to use these networks for multiple purposes, which leads to context collapse: If you mostly talk on Facebook about cars and politics, your car junkie friends will be exposed to your political views and your political kindred spirits will learn about your mechanical skills. On the consumer side this context collapse may induce some serendipity, but on the author’s side it could have a stifling effect: If your words are shared with an increasingly broad audience, you will likely be less outspoken than you’d be in smaller circles. This problem is exacerbated by the lack of transparency in how social networks show content to your audience and by the tendency of social networks to make this audience as broad as possible (e.g., by encouraging users to add more ‘friends,’ or by automatically translating posts into other languages).

“I envision the decommodification of these spaces to result in interest-oriented peer networks (e.g., surrounding a common interest in a certain podcast, author, sports club, etc.), hosted on platforms like Slack, Clubhouse or Discord, which do not specifically aim to grow the network or to algorithmically control/manipulate the presentation of the shared information. By joining *multiple* networks like this, people can mentally separate the expression of a variety of their interests, thereby overcoming the current issue of context collapse. If AI technologies do end up playing a role in this scenario, then I hope it to be at the level of network creation rather than content distribution. The idea would be for an AI system to automatically create ad hoc networks of people with preferences that are similar enough to create an engaging discourse, but not so similar that they result in ‘echo chambers.’”

A new kind of information civilization is being built

Calton Pu, professor of computer science at Georgia Tech, wrote, “We are building an information civilization unlike anything else in the history of humankind. The information civilization is built on digital technologies and platforms that can be called digital spaces. The impact of information has been profound on the economy (both macro and micro), on society (as an organization affecting its population, and as the people transforming the social organization) and on humans (an aspect that can be called digital life). …

“Throughout human history, all civilizations have risen and fallen. It appears that as the builders construct an increasingly sophisticated civilization, the intricacy of its organization also makes it more susceptible to manipulation and disruption by the schemers. It is clear that the schemers are not acting alone: They reflect deep, dark desires in human nature. The battle between the builders and the schemers will persist in the information civilization, as it has through all the civilizations in history. …

“Technical leaders and politicians who help build the information civilization will make beneficial contributions, and those who misuse the digital spaces for their own benefit will lead us toward the downfall of the information civilization. For the information civilization to thrive, the builders must find technological and political means to distinguish factual information (the constructive building blocks) from misinformation and disinformation (the destructive, eroding bacteria/fungi). As the information civilization grows stronger, there is hope that its building blocks of factual information will become better organized and easier to adopt. This improvement will help more humans grow wiser and help build the human civilization, including its informational and physical dimensions.”

In the next section, we highlight the remarks of several dozen experts who gave some of the most wide-ranging answers or incisive responses to our question about the future of the digital public sphere. Following it, we offer a number of additional sections with respondents’ comments organized under the set of themes we set out at the top of this report.

The remarks made by the respondents to this canvassing reflect their personal positions and are not the positions of their employers; the descriptions of their leadership roles help identify their backgrounds and the locus of their expertise. Some responses are lightly edited for style and readability.

As long as there is profit to be made in scaring people, societies will continue to fracture

Larry Lannom, director of information services and vice president at the Corporation for National Research Initiatives (CNRI), commented, “Solutions will be hard to come by. The essential conundrum is how to preserve free speech in an environment in which the worst speech has a many-fold advantage. This general phenomenon is not new. Jonathan Swift wrote in ‘The Art of Political Lying’ in 1710, ‘If a lie be believed only for an hour, it has done its work, and there is no farther occasion for it. Falsehood flies, and Truth comes limping after it.’ Today the problem is enormously exacerbated by the ease of information spread across the internet, and it is unclear whether the virus-like behavior of misinformation that strikes the right chords in some subset of the population can be stopped.

“The negative sense I have is primarily about social media and the algorithms that drive users into more and more extreme positions. As long as there is profit in scaring people, in pushing conspiracy theories and in emphasizing wedge issues instead of the common good, societies will continue to fracture and good governance will be harder to achieve.

“There is still a lot of good in collaboration technologies. You can focus the world’s expertise on a given problem without having to get all of those experts together in a single room. It makes information more readily available. Consider the transformative protein-folding announcement from DeepMind. Researchers say the resource – which is set to grow to 130 million structures by the end of 2021 – has the potential to revolutionize the life sciences. These sorts of advances, widely shared, will increase over time, with great potential benefits.”

Citizens become targets in an evolving ecology in which their emotions are being datafied

A professor who studies civil society and intelligence elites observed, “The disinformation media ecology that generates and targets messages that are deceptive and/or designed to bypass thoughtful deliberation in favour of profiled, emotionalised engagement severely challenges the democratic ideal of treating people as citizens rather than as ‘targets’ or ‘consumers.’ This is an ecology in which the psychological and emotional behaviour of individuals and groups is increasingly being quantified and datafied, as evidenced by the rise of emotion AI or affective AI. Also important is the nature of psychology, in that influential behavioural sciences downplay rationality in favour of a neo-behaviourist outlook. In an applied context, neo-behaviourism and seeing people in psycho-physiological terms disregards (or denies) agency and civic autonomy. This near-horizon future is bleak, particularly since such techniques for emotional profiling are rapidly becoming commonplace in the political and civic world, starting with social media but spilling out into once offline domains (e.g., cities that have become ‘smart’, and dwellings that have become ‘Internet of Things-connected’).”

Cross-sector collaboration is needed to work toward the creation of aligned incentives

Perry Hewitt, chief marketing officer at data.org, a platform for partnerships to build the field of data science for social impact, urged, “Achieving a transformation of digital spaces and improved digital life will require collaboration: private-sector tech, government and social-impact organizations coming together in a combination of regulation and norms. Aligning incentives so that for-profit and social-impact efforts can come together is critical. Healthy, informed and engaged publics are better consumers and citizens. Public audiences will play a role to the extent that we build digital spaces that are engaging and convenient to use; it’s hard to see people flocking toward digital broccoli in a candy store of addictive apps. Nate Matias’ research into the civic labor of volunteer moderators online, showing how the actions of individuals can improve a platform’s algorithm, is hugely encouraging. I am very bullish on the ability to better manage spam, misinformation and hate speech, the scourge of digital spaces today. But it will be an ongoing battle as deepfakes and similar technologies (fake VR in one’s living room?) become more persuasive. Perhaps the biggest challenge will be the trade-offs between personal privacy and safe spaces. There are many legitimate reasons people require anonymity in public spaces (personal threats, whistleblowing, academic freedom), but it’s really tricky to moderate information and abuse in communities with high anonymity.”

Reasonable regulation can promote accountability and free expression

Nazar Nicholas Kirama, president and CEO of the Internet Society chapter in Tanzania and founder of the Digital Africa Forum, said, “The internet is a reflection of our own societies’ good and bad sides; the good far outweighs the harm. As digital spaces evolve, stakeholders need to find ways to curb online harms, not through ‘sanitation’ of digital spaces but by creating reasonable regulations that promote freedom of online expression and personal accountability that promote internet trust. The internet has evolved to a stage where it is now a necessary ‘commodity.’ Over the past year we have learned how key it is for communication and business continuity in times of global emergencies like the COVID-19 pandemic. During the first wave, many of the more than 1.5 billion learners who were put out of classrooms due to global lockdowns could not continue their education because they had no connection. Had their homes been connected, the disruption would have been minimal. Being online is vital and good for societies.”

“As digital spaces evolve, stakeholders need to find ways to curb online harms, not through ‘sanitation’ of digital spaces but by creating reasonable regulations that promote freedom of online expression and personal accountability that promote internet trust.” 


Nazar Nicholas Kirama, president and CEO of the Internet Society chapter in Tanzania and founder of the Digital Africa Forum

All stakeholders have to keep each other in check in the further development of digital life

Olivier Crépin-Leblond, internet policy expert and founding member of the European Dialogue on Internet Governance, wrote, “I am optimistic about the transformation of digital spaces for the following reasons:

  1. Natural Law will ensure that the extreme scenarios will ultimately not be successful.
  2. The Public, at large, is made up of people who want to live a positive, good life.
  3. Unless it is completely censored and controlled, the internet will provide a backstop to any democracy that is in trouble.
  4. The excesses of the early years’ GAFAs [an acronym for Google, Apple, Facebook, Amazon that is generally meant to represent all of the tech behemoths] will soon be kept more in check, whilst innovation will prevail.
  5. The next generations of political leaders will embrace and understand technology better than their predecessors.
  6. Past practice will help in addressing issues like cybersecurity, human rights, freedom of speech – issues that were very novel in the context of the internet only a few years ago.
  7. On the other hand, this is achievable only if all stakeholders of the multistakeholder model keep each other in check in the development of the future internet. If this model is not pursued, the internet’s characteristics and very fabric will change dramatically to serve the vested interests of the few at the expense of the whole population.”

Machines, bots will be more widespread and more spaces will be autonomously controlled

Marc Rotenberg, president and founder of the Center for AI and Digital Policy and editor of the AI Policy Sourcebook, said, “Digital Spaces will evolve as users become more sophisticated, more practical and more willing to turn aside from online environments that are harmful, abusive and toxic. But the techniques to lure people into digital spaces will also become more subtle and more effective, as interactive bots become more widespread and as more spaces are curated by autonomous programs. By 2035, we will begin to experience online a society of humans and machines that will also be making its way into the physical world.”

Politicians will be motivated to ensure resilient economic societies

Amali De Silva-Mitchell, futurist and founder/coordinator of the Internet Governance Forum’s Dynamic Coalition on Data-Driven Health Technologies, predicted, “The increasing knowledge of the space and of its benefits and risks by the average user of technology could be exponential, as digital becomes the norm in health, education, agriculture, transport, governance, climate change mitigation including waste management, and so forth. By 2035 most global citizens will be more conversant with the uses of technology, easing the delivery of technology goods and services.

“The biggest advances will be in the universal quality of connectivity and increased device accessibility. Citizens who are unable to participate digitally must be served by alternative means. This is a public duty. A 100% technology-user world is not possible, and this limitation must be recognized across all services and products in this space.

“Perfection of technology output will continue to be marred by misinformation, fake news, poor design, bias, privacy versus copyright, jurisdiction mismatches, interoperability issues, content struggles, security problems, data ocean issues (data silos, fickle data, data froth, receding-stability data and more) and yet-to-be-identified issues. All of these must be managed in order to create a more-positive digital public sphere with better opportunities.

“Politicians will be motivated to ensure resilient economic societies and will pursue the ideal of universal accessibility through all means such as satellite, quantum and other emerging technologies. The public will be focused on affordable, quality, unbiased (Artificial Intelligence/Machine Learning, quantum) internet access. In the nano, quantum and yet-unidentified operational spaces the private sector will be focused on issues of interoperability for the Internet of Things and other emerging applications (for market growth versus democratization).

“In the future, quantum entanglement will create new opportunities and unexpected results while challenging old principles and norms due to potential breakthroughs, for instance, telepathy for human information exchange competing with traditional wireless technology.”

All miraculous technologies eventually ‘settle down’ and steadily improve humanity

Frank Kaufmann, president of the Twelve Gates Foundation, responded, “I see digital life and digital spaces and the ‘evolution’ of these as following classic patterns of all prior major turning points of radical change that are connected to technological progress and development. Wheels, fire, the printing press, electricity, the railroads, flight and so forth.

“To me the pattern goes: 1) A genius visionary or visionary group opens a historical portal of magic and wonder. These first people tend to be visionaries with pure, wholesome dreams and the desire to help people. 2) The new technology explodes in a ‘Wild West’ environment during which time underdeveloped, avaricious, power-hungry, vile people amass obscene amounts of wealth and power by exploiting the technology and exploiting people. Eventually these criminals vanish into their private, petty hells and try to mask the horror they perpetrated by establishing self-serving veneers of work for ‘charitable’ causes and ‘grant-giving foundations.’ Their time of power lust has come and gone. In the meantime, 3) a widespread reaction by normal, good people to the harm and evil caused by the avaricious exploiters gradually 4) implements ‘checks and balances’ to bring the technology more fully into genuine healthy and wholesome service to people, plus a natural ‘decentralization’ occurs, yielding an explosion of creativity and positive growth and development.

“Both the implementation of guardrails and ‘checks and balances’ after the ‘Wild West’ time and the shift to smaller, more local, more manageable and humane subunits will let the boundless benefits afforded by all these miraculous technologies settle down, and they will help us improve steadily.”

Leaders’ primary role is to assure that decentralized and open systems can thrive

James Hendler, director of the Institute for Data Exploration and Applications and professor of computer, web and cognitive sciences at Rensselaer Polytechnic Institute, said, “In the fast-changing world of today we see new technologies emerging rapidly, and then they become institutionalized. Thus, most people only see the larger, more tech-giant-dominated applications. But we are seeing moves toward several things that encourage me to believe innovators will help to create useful solutions. In particular, much work is now going into ways to give individuals more control of their online identities in order to control the flow of information (privacy-enhancing technologies). By 2035, it is likely some of these will have broken through and they may become heavily used. Additionally, the further internationalization of communication technologies reaching more of the world can help break down barriers. The primary role of tech leaders and politicians is to help keep the innovation space alive and to make sure that decentralized and open systems can thrive (a counter to tendencies toward authoritarianism, etc.). Today’s children and teens are learning to be less trusting of everything they see online (much as in the past they had to learn not to believe everything one saw in a TV commercial or read in a newspaper), and that will also help in navigating a world where dis- and misinformation will continue to exist.”

“The primary role of tech leaders and politicians is to help keep the innovation space alive and to make sure that decentralized and open systems can thrive.”



James Hendler, director of the Institute for Data Exploration and Applications and professor of computer, web and cognitive sciences at Rensselaer Polytechnic Institute

The best spaces come with heterogeneity, collaboration and consequences

Gary A. Bolles, chair for the future of work at Singularity University, commented, “The greatest opportunity comes from community-anchored digital spaces that come with heterogeneity, collaboration and consequences. Community-anchored, because the more humans can interact both online and in person, the more potential there is for deeper connection. Heterogeneity, because homogeneous groups build effective echo chambers while heterogeneous groups expose members to a range of ideas and beliefs. Collaboration, because communities that solve one problem together can solve the next, and the next. Consequences, because effective public discourse requires people to be aware of and responsible for the potential negative results of their words and actions. What is critical is that the business models of the digital communications platforms must change. Tech leaders must turn the same level of innovation they have brought to their products toward business-model innovations that encourage them to design for more heterogeneity, collaboration and consequences.”

It is, as always, a war with Doomsday scenarios ready to write, yet the future is bright

David Porush, writer, longtime professor at Rensselaer Polytechnic Institute and author of “The Soft Machine: Cybernetic Fiction,” wrote, “Digital spaces are like all technologies: They change our minds, and even our brains, but not our souls. Or if the word ‘soul’ is too loaded for you, try ‘the eternal, enduring human instincts and impulses that drive our interactions with each other and considerations of our selves.’ (You can see why I prefer the shorthand). Digital spaces have unleashed new facilities for getting what’s in our souls into each other’s, for better or worse. We can do so wider, faster and with more fidelity and sensation (multimedia) and intimacy. New media grant us ways to express ourselves that were inconceivable without them. We can share subjectivities (i.e., Facebook) and objectivities (academic and scientific sites). The world is mostly made a better place by digital spaces, though new terrors and violence come with it, too. This is as always since we scrawled on cave walls and invented the phonetic alphabet and the printing press. It’s been a millennia-long ride on the asymptote, up toward technologically mediated telepathy. Neuralink is just the latest, most explicit manifestation of what’s always been implicit in the evolution of communication technologies. So, to answer the question at hand: I believe leaders, politicians and governments can do more to civilize the digital commons and regulate our behaviors in them, make the Wild West into a national park or theme park, but I both a) despair of them having the wisdom to do so, and b) sort of hope they don’t. I say a) because I don’t trust their wisdom beyond self-interest and ideology. I say b) because I believe the attempt is likely to do more damage to liberties in the short run up to 2035. In the long run, the digital commons, the virtual world – like the meatworld [in-person world] – will get better. It will be a healthier, safer, better, saner space. 
“Sneakers, air conditioning, food, vaccines, and knowledge and education available for everyone, though unevenly. It is always already, and will continue to be, a war with plenty of Doomsday scenarios ready to write. But the future is bright. And with the help of the digital commons, we’ll get there.”

A rising communications tide lifts hospital ships and pirate ships, altruists and fascists

Howard Rheingold, a pioneering sociologist who was one of the first to explore the early diffusion and impact of the internet, responded, “When I wrote ‘The Virtual Community’ (published in 1993), I felt that the most important question to ask about what was not yet known as ‘social media’ was whether the widespread use of computer-mediated communication would strengthen or weaken democracy, increase or decrease the health of the public sphere. Although many good and vital functions continue to be served by internet communications, I am far from sanguine about the health of the public sphere now and in the future. My two most important concerns are the amplification of the discourse of bad actors and the emergence and continuing evolution of computational propaganda (using tools like Facebook’s ability to segment the population according to their beliefs to deliver microtargeted misinformation to very large numbers of people). The rising tide of internet communications lifts all boats by enabling like-minded people to meet, communicate and organize; it lifts both the hospital ships and the pirate ships, the altruists and the fascists. Misinformation and disinformation about the COVID-19 epidemic have already contributed to mass deaths. Flat-earthers, QAnon cultists, racists, anti-Semites, vandals and hackers are growing in numbers and capabilities, and I see no effort of equivalent scale from governments and private parties. Facebook is the worst, and unless it dies, it will never get better, because Facebook’s business model of selling to advertisers microtargeted access to large, finely segmented populations is exactly the tool used by bad actors to disseminate misinformation and disinformation. I have called for the increased creation and use of smaller communities, either general-purpose or specialized (e.g., patient and caregiver support groups, to name just one example of many beneficial uses of social media).”

As communication becomes more bifurcated, things could become more deconstructed

Chris Arkenberg, research manager at Deloitte’s Center for Technology, Media and Communications, said, “The public discussion of this issue has focused only on the big social media services, but there are many other ‘digital spaces’ – in games, online forums, messaging platforms and the long tail of smaller niche groups both on the public internet and in dark nets. In 2035, there will be just as much – maybe more – fragmentation of the social commons through this proliferation of new types of ‘digital spaces.’ It is difficult to recover a shared, collective sense of what the world is – ideologically, culturally, politically, ethically, religiously, etc. – when people are scattered across innumerable disembodied and nonlocal digital networks. It’s very easy for fringes to connect and coordinate across the globe. Will this fact change by 2035? Or will it continue to deconstruct the social, political and economic mechanisms that are meant to contain such problems?”

Have faith in individuals’ improvisation, bricolage, resistance and reuse/reinterpretation

Jay Owens, a research and innovation consultant with New River Insight, responded, “You ask, ‘What aspects of human nature, internet governance, laws and regulations, technology tools and digital spaces do you think are so entrenched. …’ The entrenched issue here isn’t ‘human nature’ or technology or regulation – it’s capitalism. Unless we overthrow it prior to 2035, digital spaces will continue to be owned and controlled by profit-seeking companies who will claim they’re legally bound to spend as little as possible on ‘serving the public good’ – because it detracts from shareholder returns. The growth of Chinese social media companies in Western markets will mean there are firms driven by more than purely for-profit impulses, yes – but the vision of ‘good’ that they are required to serve is that of the Chinese state. Theirs is not a model of ‘public good’ that either speaks to Western publics or indeed Western ideas of ‘good.’ I retain faith in individual users’ capacity for improvisation, bricolage, resistance, creative reuse and reinterpretation. I do not think this will grow substantially from today – but it will remain a continuing contrapuntal thread.”

Don’t ignore the good on the net; media’s narrative about it is incomplete and dystopian

Jeff Jarvis, director of the Tow-Knight Center for Entrepreneurial Journalism at City University of New York, said, “We have time. The internet is yet young. I have confidence that society will understand how to benefit from the net just as it did with print. After Gutenberg, it took 150 years before innovations with print flourished: the creation of the first regularly published newspaper, the birth of the modern novel with Cervantes and of the essay with Montaigne. In the meantime, yes, there was a Reformation and the Thirty Years War. Here’s hoping we manage to avoid those detours.

“Media is engaged in a full-blown moral panic about the net. It is one of their own engineering and it is in their self-interest, as media choose to portray their new competitor as the folk devil that is causing every problem in sight. In the process, media ignore the good on the net. It is with the net and social media that #BlackLivesMatter rose to become a worldwide movement. Imagine enduring the pandemic without the net, preserving jobs, the economy, connections with friends and families. Media’s narrative about the net is dystopian. It is an incomplete and inaccurate picture of the net’s present and future.”

Increasing complexity will dominate our future; here’s a rundown of what will change

Mike Liebhold, distinguished fellow, retired, at The Institute for the Future, commented, “Here is an outline of a few of the technical foundations of the shifts in digital spaces and digital life expected by 2035:

  • Cross-Cutting Forces – (across the technology stack):
    • Applied machine intelligence everywhere.
    • Continuous pervasive cybersecurity vulnerabilities, and vastly amplified security and privacy engineering.
    • Energy efficiency and circular accountability will become critical factors in personal and organization decision processes.
  • Systemic Digital Technology Shifts – (layers of the technology stack):
    • User-experience technologies (conversational agents everywhere), and a shift from glass screens to augmented reality for common interaction, including holographic telepresence and media.
    • Continued evolution and adoption of embedded intelligent and automated technologies in physical spaces and in robotics and cobotics [collaborative robotics].
    • Connection and network technologies – continuous adoption of fiber and broadband wireless connections including low-Earth-orbit satellites providing broadband internet connections in remote geographies.
    • Advances in computing and in cloud technologies.”

New-gen platforms will live in our networked wearables, transportation and built environment

John Lazzaro, retired professor of electrical engineering and computer science, wrote, “The only way to make progress is to return to people being ‘the customer’ as opposed to ‘the product.’ By 2035, a new generation of platforms will replace smartphones (and the apps that run on them). The new platforms will be built from the ground up to address the intractable issues we face today. Unlike the smartphone – a single platform that tries to do it all – the new platforms will be customized to place and purpose.

  • A platform for the body: Wearables that function as stand-alone devices, incorporating augmented reality, with a direct connection to the cloud.
  • A platform for built environments: Displays, sensors, computing and communication built into the home and office environments not as add-ons, but as integral parts of the structure.
  • A platform for transportation: The passenger compartment of fully self-driving automobiles will be reimagined as a ‘third place’ with its own way to interface humans to the cloud.

“What the platforms will share is a way to structure interactions between an individual and the community that mirrors the way relationships work in the physical world. Inherent in this redesign will be a reworking of monetization.”

People have to adapt to overcome; there is no other workable solution

Anna Andreenkova, professor of sociology at CESSI, an institute for comparative social research based in Europe, predicted, “Attempts to censor or ‘clean up’ digital space by any actors – private or public – will not be possible or beneficial. People will have to learn and adapt to living in an open information space – how to sort the fake from the real, the trustworthy from the untrustworthy, the evidence-based from the interest-driven. Digital education is a more-fruitful approach than any limitations or attempts at guiding in a paternalistic way.

“Any innovation or social change always evokes concerns about its consequences. These concerns are often expressed in a very radical manner. Over the centuries, eschatological or catastrophic consequences have been predicted for most emerging processes or innovations, and most of these worries are eventually forgotten. Digitalization of life domains is certainly not straightforward or easy. But in the end it is inevitable and unavoidable. What is really important to discuss is how to minimize the negative sides.”

Every year of an unrestricted internet industry damages the public sphere more

Bruce Bimber, professor of political science and founder of the Center for Information Technology and Society at the University of California-Santa Barbara, observed, “I envision that, eventually, new ways of thinking about regulation and the responsibility of social media companies will have an influence on policy. Every major industry with an effect on the public’s safety and well-being is managed by a regulatory regime today, with principles of responsibility and accountability, with limits, with procedures for oversight, with legal principles enforced in courts.

“That is, except for internet industries, which instead enjoy Section 230. I anticipate that this will change by 2035, as countries come to understand how to think about the relationship of the state and the market in new and more productive ways.

“That being said, it is not at all clear that this will happen in time. Every year of unrestrained market activity and lack of accountability damages the public sphere more, and we may reach a point where things are too broken for policy to matter.”

The difficulty comes in generating the appropriate collective action and trust

Susan Crawford, a professor at Harvard Law School and former special assistant in the Obama White House for science, technology and innovation policy, noted, “Forwarding the public good requires both collective action and trust in democratic institutions. Online spaces may become even better places for yelling and organizing in the years to come, but so far they are of zero usefulness in causing genuine policy changes to happen through the public-spirited work of elected representatives. Restoring trust in our real-world democratic institutions will require some exogenous stuff. And online spaces don’t do exogenous.”

We hoped for cyberutopia, feared cybergeddon, and we’re getting ‘cyburbia’ – an amped-up analog reality

Paul Saffo, a leading Silicon Valley-based forecaster exploring long-term trends and their impact on society, wrote, “This particular media revolution – a shift from mass to personal media – is approximately 25 years old, and it has unfolded in precisely the same way every single prior media revolution has evolved. This is because beneath the technological novelty is a common constant of human behavior. Specifically, when a new media technology arrives, first it is hailed as the utopian solution to everything from the common cold to world peace. Then time passes, we realize there is a downside, and the new medium is demonized as the agent of the end of civilization. And finally, the medium, now no longer new, disappears into the cultural fabric of daily life. In short, we hoped cyberspace would deliver a new cyberutopia, then we feared cybergeddon. But what we are getting in the end is ‘cyburbia,’ an amplified version of our analog reality.”

‘Between Fear and Hope’ is a fitting title for today, but there’s hope for a brighter tomorrow

Ben Shneiderman, distinguished professor of computer science and founder of the Human-Computer Interaction Lab at the University of Maryland, said, “My view toward 2035 has been darkened by the harsh tone of politics over the past few years that is continuing to play out. … Journalists can’t resist reporting on outrageous behaviours, and false claims and lies still make the news. Social media have also been a problem, with algorithms that amplify misinformation rather than stopping bot farms and giving more control to users … My fears are that political maneuvers that encourage divisiveness will remain strong, misinformation will continue, and racism and other forms of violence will endure. I am troubled by the Google/Facebook surveillance capitalism (strong bravos to Shoshana Zuboff for her amazing book on the topic, ‘The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power’), social media abuses and the general tone of violence, anger and hate speech in the U.S. My journalist father wrote a book, ‘Between Fear and Hope,’ in 1947 about post-war Europe. That title fits for now, but I am hoping for a brighter tomorrow.”

We are divided by very real differences that did not originate with the internet

Michael H. Goldhaber, an author, consultant and theoretical physicist who wrote early explorations on the digital attention economy, commented, “Underlying the success of social media, and also their ills, is the widespread recognition that these media can be used to get potentially wide attention, and that it’s exceedingly easy to give that a try. And underlying that is the fact that a very large percentage of people worldwide want and desire attention, and possibly a lot of it. Algorithms used, for instance, by Facebook, may further distort what gets attention, but that’s not the only problem. The best way to get attention is to say or do something different from just the daily ‘boring’ sort of colloquy. You can do that with cute cat videos, by inventing and showing off a new dance, by juggling 13 balls at once, or by saying something that recognized authorities or widespread consensus is not saying. Thus, an outright lie is one attractive method. A whole series of lies and wild assertions gets you something like the attention that goes to QAnon. If what you say can be shared by an at-first-little, self-reinforcing community, that helps, too.

“When those lies underline and amplify a widely shared but not widely articulated attitude, such as the feeling of being oppressed by technocrats, experts or just the self-appointed ‘elite’ with supposedly more credentialized ‘merit’ than most people have (as pointed out for example in Michael Sandel’s ‘The Tyranny of Merit’) such views can easily gain wide followings. Algorithms may help further amplify support of such messages, but that ignores their underlying sources of strength. We, especially in the U.S. – though by no means only here – are divided by very real differences that did not at all originate with the internet. These are differences primarily in who gets heard and how, as well as in monetary income levels that partly follow along with the former. In one sense, social media offer a new path to greater equality. These are not refereed journals by any means. Anyone can try to seize an audience. Movements I would regard as positive, such as: the effort for stronger response to climate change; Black Lives Matter; #MeToo; LGBTQ rights – these all have been strengthened in my judgment by social media. …

“Clearly, over the next few years, until well beyond 2035, we are in for a wild ride, dealing with the ongoing pandemic, horrendous effects of climate change and social issues, including various kinds of inequality that are only exacerbated and, in some cases, brought to light through social media. Another crisis is that the political motion we might hope for is stalled by the inadequacies and susceptibilities to crass manipulation that our now elderly political institutions and constitutions now reveal. It will be more and more difficult to remain either aloof from or unaware of these interlocking struggles. It may well turn out to be a good thing in the long run that we are all drawn in. It will be good, if somehow, we move toward greater acknowledgment of all of the inequalities and problems and somehow forge a degree of consensus about the solution. We may not, but we could.”

These systems can be built to support full agency for everyone by design

Doc Searls, internet pioneer, co-author of “The Cluetrain Manifesto” and “The Intention Economy” and co-founder and board member at Customer Commons, predicted, “There is hope for 2035 if we think, work, invest and gather outside the web and the closed worlds of apps available only from the siloed spheres provided by giant companies and company stores. That closed world – or collection of private worlds – is based on a mainframe-era model of computing on networks called ‘client-server’ and might better have been called ‘slave-master.’ This model is now so normative that, without irony, Europe’s GDPR [General Data Protection Regulation] refers to the public as ‘data subjects,’ California’s CCPA calls us ‘consumers’ and the whole computer industry calls us ‘users’ – a label used elsewhere only by the drug industry. None call us ‘persons’ or ‘individuals’ because they see us always as mere clients. But the web and the tech giants’ app ecosystems are just early examples of what can be built on the Internet. By its open and supportive end-to-end design, however, the Internet can support full agency for everyone and not just the servers of the world and the companies that operate them.

“I don’t see full agency being provided by today’s tech leaders or politicians, all of whom are too subscribed to ‘business as usual.’ I do see lots of help coming from technologists working with communities, especially locally, on solutions to problems that can best be solved first by tools and systems serving individuals and small groups.

“I expect mostly good outcomes because it will soon be clear to all that we have no choice about working toward them. As Samuel Johnson said, ‘When a man knows he is to be hanged in a fortnight, it concentrates his mind wonderfully.’ Our species today is entering a metaphorical fortnight, knowing that it faces a global environmental catastrophe of its own making. To slow or stop this catastrophe we need to work together, learn from each other, and draw wisdom from our histories and sciences as widely and rapidly as possible. For this it helps enormously that we are digital now and that we live in the first and only technical environment where it is possible to mobilize globally to save what can still be saved. And we have already begun training at home during a strangely well-timed global pandemic. The internet and digital technology are the only ways we have to concentrate our collective minds in the metaphorical fortnight or less that is still left to us.”

The definition of ‘social good’ is evolving in digital environments

Jamais Cascio, distinguished fellow at the Institute for the Future, responded, “The further spread of internet use around the globe will mean that by 2035 a significant part – perhaps the majority – of active digital citizens will come from societies that are comfortable with online behavioral restrictions. Their 2035 definition of the ‘social good’ online will likely differ considerably from the definition we most frequently discuss in 2021. This isn’t to say that attempts to improve the social impacts of digital life won’t be ongoing, but they will be happening in an environment that is culturally fractured, politically restive and likely filled with bots and automated management relying on increasingly obscure machine-learning algorithms.

“The further spread of internet use around the globe will mean that by 2035 a significant part – perhaps the majority – of active digital citizens will come from societies that are comfortable with online behavioral restrictions.”

Jamais Cascio, distinguished fellow at the Institute for the Future

“Our definition of ‘social good’ in the context of digital environments is evolving. Outcomes that may seem attractive in 2021 could well be considered anathema by 2035, and vice versa. Censorship of extreme viewpoints offers a ready example. In 2021, we’re finding that silencing or deplatforming extreme political and social voices on digital media seems to have an overall calming effect on broader political/social discourse. At the same time, there remains broad opposition (at least in the West) to explicit ‘censorship’ of opinions. By 2035, we may find ourselves in a digital environment in which sharp controls on speech are widely accepted, where we generally favor stability over freedom. Conversely, we may find by 2035 that deplatforming and silencing opinions too quickly becomes a partisan weapon, and there’s widespread pushback against it, even if it means that radical and extreme voices again garner outsized attention. In both of these futures, the people of the time would see the development as generally supporting the social good – even though both of these futures are fairly unattractive to the people of today.”

The metaverse is nigh: The seeming gap between digital and real-world spaces will soon be gapless. What then?

Barry Chudakov, founder and principal at Sertain Research, said, “I imagine an awakening to the nature and logic of digital spaces, as people realize the profound human, psychological and material revolutions these spaces – the metaverse (virtual representation combined with simulation) – will provoke. I suspect we will go through a transition period of unlearning: We will look at emerging digital spaces and have to unlearn our inherited alphabetic logic to actually see their inherent dynamics.

“A central question: By 2035 what will constitute digital spaces? Today these are sites, streaming services, apps, recognition technologies, and a host of (touch)screen-enabled entertainments. But as we move into mirror worlds, as Things That Think begin to think harder and more seamlessly, as AI and federated learning begin to populate our worlds and thinking and behaviors – digital spaces will transform. It is happening already.

“Consider inventory tracking – making sure that a warehouse knows exactly what’s inside of it and where: Corvus Robotics uses autonomous drones that can fly unattended for weeks on end, collecting inventory data without any human intervention at all. Corvus Robotics’ drones are able to inventory an entire warehouse on a rolling basis in just a couple days, while it would take a human team weeks to do the same task. Effectively Corvus’ drones turn a warehouse into a working digital space. Another emerging digital space: health care. In the last couple of years, the sale of professional service robots has increased by 32% ($11.2 billion) worldwide; the sale of assistance robots for the elderly increased by 17% ($91 million) between 2018 and 2019 alone. Grace, a new medical robot from Singularity Net and Hanson Robotics, is part of a growing cohort of robot caregivers working in hospitals and eldercare facilities around the world. They do everything from bedside care and monitoring to stocking medical supplies, welcoming guests and even cohosting karaoke nights for isolated residents. As these robots warm, enlighten and aid us, they will also monitor, track and digitize our data.

“The gap between digital spaces and real-world space (i.e., us) is narrowing. Soon that seeming gap will be gapless. By 2035, a profound transition will be well on the way. The transition and distinction between digital worlds and spaces and the so-called real world will be less distinctive, and in many instances will disappear altogether. In this sense, digital spaces will become ubiquitous invisible spaces. Digital spaces will be breathing, will be blinking, will be moving. Digital spaces will surround us and enter us as we enter them. William Gibson said, ‘We have no future because our present is too volatile. … We have only risk management. The spinning of the given moment’s scenarios. Pattern recognition.’ The new immersion is submersion. We will swim through digital spaces as we now swim through water. Our oxygen tanks will be smart glasses, embedded chips, algorithms and AI. The larger question remains: What will this mean? What will this do to us and what will we do with this?

“Like Delmore Schwartz’s ‘heavy bear who goes with me,’ we carry our present dynamics into our conception of future digital spaces. Via cellphones, computers or consoles we click, swipe or talk to engage with digital spaces. That conception will be altered by advances in the following technologies, which will fuse, evolve, transform and blend to effect completely different dynamics:

“We presently approach technology like kids opening presents at Christmas. We can’t wait to get our hands on the tech and jump in and play with it. No curriculum or pedagogy exists to make us stop and consider what happens when we open the present. With all puns intended, once we open it, the present itself changes. As does the past. As do we. Digital spaces change us, and we change in digital spaces. So, we will transform digital spaces in crisis mode, instead of the better way: using game theory and simulation to map out options. …

“As reality is digitized, the digital artifact replaces the physical reality. We have no structural or institutional knowledge that aids us in understanding, preparing for or adjudicating this altered reality. What are the mores and ethics of a world where real and made-up identities mingle? Consider for a moment how digital dating sites have affected how people get to know and meet significant others. Or how COVID-19 changed the ways people worked in offices and from home. Ask yourself: How many kids play outside versus play video games? Digital spaces have already been replacing reality. The immediate effect of ubiquitous digital spaces that are not distinct spaces but extensions of the so-called real world will be reality replacement.”

A tale of 2035: It need not be this grim

Judith Donath, a faculty fellow at Harvard’s Berkman Klein Center whose work focuses on the co-evolution of technology and society, shared this predictive scenario set in 2035:

“Back in 2021, almost 5 billion people were connected to the internet (along with billions of objects – cameras, smart cars, shipping containers, bathroom scales and bear collars, to name a few). They thought of the internet as a useful if sometimes problematic technology, a communication utility that brought them news and movies, connections to other humans and convenient at-home shopping.

“In 2035, nearly all humans and innumerable animate and inanimate others are online. And while most people still think of the internet as a network for their use, that is an increasingly obvious illusion, a sedating fiction distracting them from the fact that it now makes more sense to consider the internet to be a vast information-digesting organism, one that has already subsumed them into its vast data and communication metabolism.

“As nectar is to bees, data is to The Internet (as we’ll refer to its emergent, sovereign incarnation). Rather than producing honey, though, it digests that data into persuasive algorithms, continually perfecting its ability to nudge people in one direction or another. It has learned to rile them up with dissatisfactions they must assuage with purchases of new shoes, a new drink, a trip to Disney or to the moon. It has mastered stoking fear of others, of immigrants, Black people, White people, smart people, dumb people – any ‘Other’ – to muster political frenzy. Its sensors are everywhere and it never tires of learning.

“In retrospect, it is easy to see the roots of humankind’s subsumption into The Internet. There was the early blithe belief that ads were somehow ‘free,’ that content which we were told would be prohibitively expensive if we paid its real cost was being provided to us gratis, in return for just a bit of exposure to some marketing material. Then came the astronomical fortunes made by tycoons of data harvesting, the bot-driven conspiracies.

“By the end of the 2020s, everything from hard news to soft porn was artificially generated. Never static, it was continuously refined – based on detailed biometric sensing of the audience’s response (the crude click-counting of the earlier web long gone) – to be evermore-addictively compelling.

“Arguably the most significant breakthrough in The Internet’s power over us came through our pursuit of health and wellness. Bodily monitoring, popularized by Fitbitters and quantified selfers, became widespread – even mandated – during the relentless waves of pandemics. But the radically transformative change came when The Internet went from just measuring your response to chemically inducing it with the advent of networked infusion devices, initially for delivering medicine to quarantined patients but quickly adapted to provide everyone with personalized, context-aware micro-doses of mood-shifting meds: a custom drip of caffeine and cannabis, a touch of Xanax, a little cortisol to boost that righteous anger.

“It is important to remember that The Internet, though unimaginably huge and complex, is not, as science fiction might lead you to believe, an emergent autonomous consciousness. It was and is still shaped and guided by humans. But which humans and toward what goal?

“The ultimate effect of The Internet (and its earlier incarnations) has been to make power and wealth accrue at the very top. As the attention and beliefs of the vast majority of people came increasingly under technological control, the right to rule, whether won by raising armies of voters or of soldiers, was gained by those who wield that control.”

Donath continued: “From the standpoint of 2021, this prediction seems grim. Is it inevitable? Is it inevitably grim? We are moving rapidly in the direction described in this scenario, but it is still not inevitable. The underlying business model of the internet should not be primarily based upon personal data extraction. Strong privacy protection laws would be a start. Serious work in developing fair and palatable ways of paying for content must be done. The full societal, political and environmental costs of advertising must be recognized: We are paying for the internet not only with the loss of privacy and, ultimately, of volition, but also with the artificial inflation of consumption on an overcrowded, climate-challenged and environmentally degraded planet. If we allow present trends to continue, one can argue the future is not inevitably grim: We simply place our faith in the mercy of a few hugely powerful corporations and the individuals who run them, hoping that instead of milking the world’s remaining resources in their bottomless status competition, they use their power to promote peace, love, sustainability and the advancement of the creative and spiritual potential of the humans under their control.”

The sections that follow organize hundreds of additional expert predictions under headings that reflect the common themes listed in the tables at the beginning of this report. For more details regarding how this canvassing was conducted, including full question wording, see the section “About this canvassing” at the end of this report.
