Visions of the Internet in 2035

2. Building better spaces

A large portion of respondents hoped and predicted that digital platforms will reform themselves, and that new platforms will arise by 2035, leading to a better online environment – one that enshrines new norms for discourse and allows for open and honest conversations that are less fractious and menacing. Among the reforms they imagine: people have control of their data and their relationships with commercial and nonprofit entities; big social media firms are regulated in ways that encourage them to create less socially harmful spaces; interoperable systems allow people to move smoothly among digital spaces rather than being confined to walled-garden commercial platforms; people are tied to clear online identities so they can be held accountable, though some forms of anonymity remain available to those who are beleaguered; artificial intelligence (AI) plays a greater role in isolating bad actors, encouraging connections and moderating discussions; government-supported media platforms arise with different incentives and algorithms that encourage pro-social engagement; and auditors track the performance and impact of for-profit online enterprises.

Eileen Donahoe, executive director of the Stanford Global Digital Policy Incubator, wrote, “In a new and improved digital realm in 2035, I hope private sector companies are expected to engage in human rights impact assessments with respect to the design, development and deployment of their digital products and services. In addition, new processes will be developed for democratic governments to engage in human rights impact assessments with respect to their own procurement, use and regulation of digital tools and services. Part of what may motivate this trend would be global recognition that in digitized society, open democratic governments that protect the human rights of citizens will be stronger than authoritarian-leaning governments that use digital technology to repress citizens, and private sector companies that support democracy and respect human rights will be more successful than those that do not.”

The historic inequities of the digital divide will be recognized as economic inefficiencies that hold back large reservoirs of human potential.

Andy Opel, professor of communications at Florida State University

Andy Opel, professor of communications at Florida State University, predicted, “A major change that will improve the digital realm will be the rise of public broadband and publicly owned and operated social media tools. Wireless broadband will be recognized as a public good, akin to electricity, and access to broadband will be reinforced under the equal protection clause of the Constitution. The historic inequities of the digital divide will be recognized as economic inefficiencies that hold back large reservoirs of human potential. The ubiquity of access to broadband will accelerate the robotic revolution and promote shorter work weeks that allow for more sustainable work/life balance that supports families, the elderly, and the mental health of the entire society. The seeds of these trends emerged during the COVID-19 pandemic, and this lived experience will not be forgotten quickly. Publicly owned social media options will allow communities to build strong connections and not market fear and polarization. … The ongoing climate crisis, the visibility of dysfunctional income inequality, and the accelerating digital realm create a dynamic force that is going to reshape our environment, culture, and economy and emphasize our interdependent, collective interests and limit the excesses of hyper-individualism. At this point in time in the near future, everyone will be fully in control of their digital profile. This control will allow people to edit their publicly available information and monetize their data, receiving micro-payments from companies that want to use personal data to market goods and services.

“Instead of black-box data harvesting of every moment online, with our data trails harvested and sold by Google, Facebook and others, a Digital Bill of Rights will empower people to both control and benefit from the use of their personal information. This concept will prove so popular that people will look back at our current era as the dark ages of digital technology. The idea that corporations created complex profiles of everyone and yet no one could access their own profile or know details about the algorithms that shaped their profile will be looked back upon as a repressive era of social control.”

Peter Levine, professor of citizenship and public affairs at Tisch College, Tufts University, said, “Maybe by 2035, people will be able to spend their time on digital platforms that are not relentlessly engineered to compel attention, to transmit advertising and to collect consumer data from us, and are instead designed to serve their users’ authentic needs and are accountable to the public. I don’t think the problem is designing such spaces; the problem is making them profitable so they can compete.”

Jeff Jarvis, director of the Tow-Knight Center for Entrepreneurial Journalism at the City University of New York, commented, “I imagine the creation of an expert network such as the one scientists and doctors created in a matter of weeks at the start of the pandemic using preprint servers to share research and data and using Twitter – yes, Twitter – to peer-review those papers. I imagine social networks that are based on constructive collaboration regarding shared concerns. I imagine a media ecosystem – online and off – that breaks free of the corruptions of the attention-based business model it imported into the net.”

Mary Chayko, distinguished teaching professor of communication and information at Rutgers University, predicted, “By 2035, politicians, the public and leaders from business, academic, technological and other communities can literally team up to address and solve pressing global social problems. For example, members from each of these groups could form actual teams that would take part in cross-field, interdisciplinary, international working ‘summits’ or ‘competitions’ aimed toward (and incentivizing) the creation of more equitable, just, inclusive, accessible digital spaces.”

Yasmin Ibrahim, professor of digital economy and culture and author of “Posthuman Capitalism: Dancing with Data in the Digital Economy,” commented, “We need to think through ownership and collective possession of data. A better online world is connected to a better offline world. When we pose questions about how to build a better online world, in essence we are asking how to build a better material, physical and tangible offline world as well. In essence, what we need to consider about the online environment is how it amplifies social ills and misogyny through design. Governments and regulators have to intentionally curb data empires as a form of power in their own right. Retentive economies that save and track data should be countered by technologies that can selectively erase data after a transaction, as a legal requirement where applicable. We need to think beyond consent and cookies, to think about how people may collectively own and repurpose data for the common good.”

Seth Finkelstein, principal at Finkelstein Consulting and Electronic Frontier Foundation Pioneer Award winner, noted, “A long time ago I wrote in a column, ‘There’s an old joke: In heaven the police are British, the mechanics German, the cooks French, the lovers Italian and the Swiss organize it. In hell the police are German, the mechanics French, the cooks British, the lovers Swiss and the Italians organize it. An internet version might be: In theory, topic experts would supply our information, social networks would connect us for common humanity and Google would organize it for authority. In practice, we get our information from the most attention-driven sites, social networks bundle us for marketing and Google organizes it for ad sales.’ … The public investment required to create such a system would be extensive. But it could happen – in the most utopian world.”

We can assume that at some point, perhaps as early as 2035, all data will be instantly available to all scientists in an understandable and reusable form, with essentially unlimited storage and compute power.

Larry Lannom, director of information services and vice president at the Corporation for National Research Initiatives (CNRI)

Larry Lannom, director of information services and vice president at the Corporation for National Research Initiatives (CNRI), said, “The vastly improved compute and network facilities developed over the next decade or so will accelerate an evolution in the quantity, quality and availability of the data on which science makes its advances. Mere availability of data is insufficient, but work is already underway on interoperability, adding layers of abstraction that will push the details down into the technology stack, much as computer users today do not need to think about where the bits comprising their files are actually held or how their data flows around the world. We can assume that at some point, perhaps as early as 2035, all data will be instantly available to all scientists in an understandable and reusable form, with essentially unlimited storage and compute power.”

William Lehr, an associate research scholar at MIT’s Computer Science & Artificial Intelligence Laboratory with more than 25 years of internet and telecommunications experience, wrote, “In a new and improved digital realm there would be better-trusted curation so that digital speech is more trustworthy. Enablers of ‘big lies’ ought to be criminally liable for their speech. Society has to figure out how to design a framework for stopping speech that has real effects that can cause major harm. When I go to my doctor and he diagnoses me with cancer, it is reasonable for me to trust his judgment and presume he is not lying to me, even though mistakes may happen. If a smart friend without medical training says he thinks I may have cancer and turns out to be wrong, I would not expect that friend to be liable. But if a doctor makes an egregious mistake that violates reasonable standards of professional judgment or – much worse – intentionally lies to me about my cancer diagnosis in ways that cause harm, then there ought to be real penalties/liability.

“As more things move online, more types of speech may need to be subject to such graduated liability. In the future, there could be less insincere communication (bullying for effect, lies because they are easy, etc.) so that online is an extension of our better natures and behavior, rather than the converse. Online can expand access and make modalities of communication more inclusive and complement human capabilities with information resources. It can also be manipulated and used to capture or distort public narratives.”

Adam Nagy, project coordinator at Harvard Law School’s Cyberlaw Clinic, commented, “In an imagined future, one might consider platforms that make it easier to build bridges across communities that are different from one another within the platform and even across platforms. Today, people are increasingly atomized, distrustful, depressed, unable to organize for public goods, and divorced from civic engagement. Popular social media platforms can accelerate the growth of ‘bonding’ social capital, which is the reinforcement of relationships within an existing group or community. For example, one joins a group dedicated to topic X or exclusive to residents of neighborhood Y. One may follow news sources and personalities that typically align with their own political views. Even if one is a member of many different forums, the platform architecture compartmentalizes those communities.”

Kent Landfield, a chief standards and technology policy strategist with 30 years of experience, wrote, “If we are truly to become a new and improved digital realm in 2035, it is critically important that we can trust the infrastructure we are so dependent on. If we do not get a handle on improving the security aspects of the internet, we will continue to see ever-increasing and more-elaborate cybercrime, ransomware and nation-state attacks. Populations will distrust the internet as a way to communicate, collaborate and live their daily lives. We need a secure foundation from which to operate. Today’s internet is fundamentally flawed because of the lack of built-in security in the foundational protocols. Identity theft brings the impact to the individual internet user. It adversely affects their lives, their finances and their futures. This is because we are currently operating on an Internet Protocol suite that is inherently and obviously insecure. If the government were to create a program for advanced research into a new set of Internet Protocols that are founded in a root of trust, we could, by 2035, create a foundation for a successful, valuable and useful digital realm. Without that transformation of our current infrastructure, we may find ourselves in a very scary place.”

Ayden Férdeline, a public-interest technologist based in Berlin, Germany, commented, “There are three protocols that are gaining traction and, if widely adopted, could change the internet for the better in the next several years. First, the Interledger protocol being developed by the Interledger Foundation is an open-payment network that seeks to provide unbanked and underserved populations with access to financial services. Organizations like Coil are now using Interledger to enable online micropayments, helping fund the work of independent content creators around the world. Second, the Unlock protocol, which runs on the Ethereum blockchain, is empowering individuals to ‘gate’ their data, on their own terms, while allowing people to sell their data on their terms if they wish. Third, the InterPlanetary File System developed by Protocol Labs is creating archival solutions for the Web so that content on the Web does not rot and disappear. All three of these protocols have robust, bottom-up governance processes and their builders are working to make the internet a healthier, better, more sustainable place.”
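
Of these, the InterPlanetary File System’s resistance to link rot comes from content addressing: an item’s address is derived from its bytes, so any retrieved content can be verified against its address, and a link can never silently point at altered material. Below is a minimal conceptual sketch of that idea using only the Python standard library – it illustrates the principle, not the actual IPFS API, which uses multihash-based content identifiers rather than bare SHA-256 digests:

```python
import hashlib

store = {}  # stand-in for a distributed, content-addressed storage network

def put(content: bytes) -> str:
    """Store content under an address derived from the content itself."""
    address = hashlib.sha256(content).hexdigest()
    store[address] = content
    return address

def get(address: str) -> bytes:
    """Fetch content and verify it still matches its address -- a link can
    break, but it can never silently point at altered content."""
    content = store[address]
    assert hashlib.sha256(content).hexdigest() == address
    return content

addr = put(b"a web page worth preserving")
print(addr[:16], "->", get(addr))
```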

Leah Lievrouw, professor of information studies at the University of California-Los Angeles, commented, “Way back in the 1980s, social psychologists already found that interaction online was often ‘disinhibited’ – rude, asocial, vulgar, etc. People would basically act out online in ways they would never do face-to-face, with few consequences. There were attempts at establishing expectations and etiquette, but we know where that went. So, at the micro level, I’d say from a young age, people should be taught to have higher expectations for their own actions and those of others. At the more-macro level, I would like to see innovative new arenas or landscapes invented in which people are expected to cultivate positive, constructive, considerate sociality online – not filled with the current ‘all about me’ content and not the type of brute-force broadcasting Castells refers to as ‘mass self-communication,’ but instead reflecting other types of more-beneficial expression. It’s interesting to me that even after nearly 20 years of social media, the genres of discourse and interaction online still mimic mass forms – broadcast video, ‘branding’ and self-promotion, telethons (‘ring the bell!’), performance for unseen audiences (hopefully massive). Conversation and small groups have perhaps seen a bit of resurgence with platforms such as Zoom, but those have been designed to mimic conference rooms, which isn’t exactly where we cultivate civil society, friendship and social capital in the Robert Putnam sense. I’d love to see the user-experience and interaction design communities really put their minds to this. Perhaps there will be a return to a kind of civics education that is more appropriate to the realities of public and private life online. People should be learning about the risks, responsibilities and ethics or ethos of ‘being a good citizen’ as well as being a genuinely good person online.”

Neil Richards, professor of law at Washington University in St. Louis and one of the country’s foremost academic experts on privacy law, wrote, “I’d love to see a duty of loyalty imposed on tech companies, requiring them to act in the best interests of the people whose lives they purport to improve. It’s a simple change, but one that would radically reshape the digital world, surveillance capitalism, advertising and our ability to trust that digital world.”

Charles Anaman, founder of waaliwireless.co, based in Ghana, responded, “No online platform of more than 400,000 users will exist (or at least less than 1 million, ideally). Users will be split up to manage the levels of disinformation, with admins who are dedicated to the verification of information with help to ONLY flag possible violations. Humans who are trained to conduct research will work around the clock to review content in a decentralised federation of networks to track and block malicious sources of misinformation with the assistance of a legal team in every country. Platforms that are open to new users without verification will be exempt and not be indexed by global search engines.”

Bart Knijnenburg, associate professor of human-centered computing at Clemson University, said, “The de-commodification of online discourse could result in a more diverse political landscape, where actors can discuss nuanced political positions with like-minded (but not too like-minded) peers, without directly being pigeonholed into the aggregate position of ‘the left’ or ‘the right’ by bystanders. With politics currently happening ‘center stage,’ it is difficult to move beyond black-and-white discussions (‘Should we abolish the police or not?’). I envision a future where such discussions happen in smaller spaces, so that they can be much more nuanced (‘What would abolishing the police entail?’). I also imagine that it would become easier for people to interact with others who are similar to them in unexpected ways. Currently, it might be easy to find a group for people who eat halal food, or a group for people who like to grill, but where does one find a group of people on the intersection of those two preferences? In a more-distributed environment, AI algorithms could predict what types of currently nonexistent discourses you would be most interested in, and then automatically find you groups of like-minded (but not too like-minded) individuals structured around those potential discourses.”
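
A minimal sketch of the ‘like-minded but not too like-minded’ matching Knijnenburg envisions, assuming each user is represented by a vector of interest weights (the users, vectors and band thresholds below are invented for illustration): candidates are grouped only when their similarity falls inside a band – alike enough to share common ground, different enough to add nuance.

```python
import math

def cosine(a, b):
    """Cosine similarity between two interest vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms if norms else 0.0

def band_matches(me, others, low=0.4, high=0.8):
    """Keep users who are like-minded (similarity >= low) but not too
    like-minded (similarity <= high) -- the band, not a maximum, is the point."""
    return [name for name, vec in others.items() if low <= cosine(me, vec) <= high]

# Illustrative interest dimensions: [halal food, grilling, gardening, politics]
me = [0.9, 0.8, 0.1, 0.3]
others = {
    "near-clone":   [0.9, 0.8, 0.1, 0.3],  # too similar: excluded
    "grill-friend": [0.2, 0.9, 0.5, 0.1],  # overlapping but distinct: included
    "stranger":     [0.0, 0.0, 0.9, 0.9],  # too different: excluded
}
print(band_matches(me, others))  # ['grill-friend']
```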

An author and social media and content marketing expert wrote, “One very simple way in which digital spaces – specifically social media platforms – can be improved is users having to verify they are indeed a real person with a real name. That alone would begin to improve social discourse. If you look at a platform like LinkedIn, it is a much more pleasant atmosphere. No one hides behind a username that is anything but who they are. Their job is a way of verifying they are indeed who they are. Another improvement would be breaking up the major players through the antitrust legislation currently in Congress. This would allow room for new and perhaps more creative social platforms and more choice.”

In 2035, smart technologies will ensure that political conversations online will look less like a series of increasingly heated comments and more like town hall meetings with relatively coherent conversations.

Alf Rehn, a professor of innovation, design and management at the University of Southern Denmark

Alf Rehn, a professor of innovation, design and management at the University of Southern Denmark, said, “In 2035, smart technologies will ensure that political conversations online will look less like a series of increasingly heated comments and more like town hall meetings with relatively coherent conversations. An AI system – tried, tested and trusted by all participants – will control the opening and closing of mics and other ways of commenting as well as ensuring that people’s XR [extended reality] glasses aren’t overrun by emoji and similar visual chaff. The AI will cycle participants through smaller breakout discussions, all whilst keeping everyone abreast of the general tenor of the conversation. People who are judged by the AI as being fair and equitable in the conversation will often get the best chance to input into the same – which will lead some to go for ‘strategic listening’ (keeping quiet and adopting a facial pose that indicates that they are taking in the argument) in order to game the AI for some extra time or decibels, but as listening is only part of the algorithm, this will not get anyone far. There will be no leaderboards, per se, as the AI will instead aim to continuously communicate the key argument in as fair a way as possible, taking on board all counterarguments whilst trying to filter out obvious rhetorical tricks and logical fallacies. As a result, seasoned politicians may find themselves marginalized in these conversations, as their pandering will often run afoul of what the AI sees as an interesting argument. Intelligent questions will tend to outperform the bombastic, and trolling will become difficult because people will not respond to obvious provocations simply because they never see them.”
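
One way to read Rehn’s scenario is as a scoring problem in which listening raises a participant’s speaking priority but only partially, so ‘strategic listening’ alone cannot game the floor. Here is a toy sketch of such a heuristic under that assumption – the weights, fields and participants are all invented for illustration, and a real system would derive these signals from far richer models:

```python
from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    listening_min: float  # minutes spent attentively listening to others
    points_made: int      # substantive contributions, as judged by the AI
    fallacies: int        # rhetorical tricks / logical fallacies flagged

def floor_priority(p: Participant) -> float:
    """Toy score for who gets the mic next: listening helps, but it is
    deliberately only part of the score, and flagged fallacies cost more
    than silence."""
    return 0.3 * min(p.listening_min / 30, 1.0) + 0.7 * p.points_made - 1.5 * p.fallacies

room = [
    Participant("strategic-listener", listening_min=30, points_made=0, fallacies=0),
    Participant("contributor",        listening_min=15, points_made=3, fallacies=0),
    Participant("panderer",           listening_min=5,  points_made=2, fallacies=4),
]
for p in sorted(room, key=floor_priority, reverse=True):
    print(f"{p.name}: {floor_priority(p):+.2f}")
# contributor ranks first; pure listening earns little; pandering backfires
```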

Russ White, a leading internet infrastructure architect at major technology companies for more than 20 years and a current member of the Internet Architecture Board of the IETF, responded, “Increasing transparency might help in some ways. While these big companies cannot be forced to open up their neural networks for examination (they often don’t know how these decisions are made themselves), they could be forced to provide the ability for researchers to openly seek out bias, exposing that bias to public view. Further, governments could encourage the creation and use of truly local digital spaces to encourage a stronger sense of place, and to break up the strong centralization that is currently occurring in the internet realm (both in terms of services and infrastructure). Finally and importantly, we could educate people to stop taking these digital spaces so seriously. Digital spaces should be seen as an adjunct to the real world, rather than as a replacement for the real world.”

Peng Hwa Ang, professor of media law and policy at Nanyang Technological University, Singapore, commented, “The current debate swirling around fake news and disinformation will lead to the development of rules, technologies and programs that will defeat such content. The quality of information will improve. The most downloaded economics article is the one by George Akerlof, who won the Nobel Prize for economics in 2001. His 1970 ‘Market for Lemons’ paper argues that in a market where information is asymmetrical, the absence of indicators of quality will destroy that market. In essence, if there is a buyer and a seller and only the seller knows the quality of the product (asymmetry of information), and there is no way for the seller to signal that quality, the market will be destroyed. Applied to the internet, if internet advertising continues as the Wild West, then the market for internet advertising will be destroyed.”
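
Akerlof’s unraveling argument can be made concrete with a toy simulation under standard textbook assumptions (the numbers here are illustrative: product quality is uniform between 0 and 1, buyers value quality q at 1.5q while sellers value it at q, and buyers cannot observe quality). Because buyers can only price the average quality on offer, each re-pricing drives the best remaining sellers out of the market:

```python
import random

random.seed(0)
qualities = [random.random() for _ in range(10_000)]  # each seller knows its own q
BUYER_PREMIUM = 1.5  # buyers value quality q at 1.5*q; sellers value it at q

price = 1.0  # buyers' initial offer for a product of unknown quality
for round_no in range(1, 16):
    # Only sellers whose product is worth no more than the price will sell...
    offered = [q for q in qualities if q <= price]
    if not offered:
        print("market destroyed: nothing left to sell")
        break
    # ...so buyers rationally re-price to the average quality actually on offer.
    price = BUYER_PREMIUM * sum(offered) / len(offered)
    print(f"round {round_no:>2}: {len(offered):>5} offered, price falls to {price:.3f}")
```

Each pass, the price falls by roughly a quarter and the pool of sellers shrinks toward nothing – the ‘destruction’ Akerlof describes; with a credible quality signal, the unraveling never starts.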

Alex Halavais, associate professor of data and society and director of the master’s program in social technologies at Arizona State University, commented, “Some people recognize that they cannot offload the responsibility for fighting disinformation or for reining in toxic interactions to platform owners; that if they want spaces that uphold their values they will need to shape those spaces themselves. This will necessarily mean a fracturing of online spaces into smaller groups, and I suspect the early abandonment of some of the massive platforms that have benefitted from economies of scale will continue. Of course, this is currently marked by splintering into groups where those values are not those we might prefer within a liberal democracy: hate groups, cults of personality and those that amplify disinformation, for example. Of course, these are concerning, but I suspect they will continue to be matched by communities that reproduce more prosocial values. The question will become what this kind of centrifugal disaggregation will mean; what happens to our public when it is divided into small intentional spheres?”

Kate Klonick, a law professor at St. John’s University whose research has focused on private internet platforms’ policies and social responsibilities, responded, “I’d like to see a rise in platforms creating different governance structures that imbue their private platforms with the democratic and participatory ideals that we imagine for powerful actors that control our public rights. In particular, I’d like to see this in the context of speech platforms – which would effectively create different systems to govern their platforms. Some platforms might use a court-like model to solve speech disputes, some might use a user-choice/choice architecture model – but there would be a consolidation of a few different ways that we see as normatively acceptable for private platforms to govern speech online.”

Mark Andrejevic, head of the Culture, Media and Economy Program at Australia’s Monash University, responded, “I would like to see large-scale public investment in the collaborative, international development of public-service media platforms that combine content, sociality and public informational resources. I would like this to take place at the scale of a Google or Facebook – an international collaboration of public-service media that provides free access to news, information and entertainment, both professional and amateur, and uses this free access as a means of bringing people to the platform for the purposes of sociality and public education and deliberation. This would be subsidized by eliminating the loopholes that make it possible for tech companies to avoid a reasonable level of taxation. It would be a platform that does not need to collect and store the amount of personal information collected by the platforms (thus saving some costs). All platforms are curated – this one would be curated in the public interest using independent mechanisms similar to those developed for public service broadcasting.”

Tim Bray, founder and principal at Textuality Services, previously a vice president in the cloud computing division at Amazon, wrote, “I would like to see larger players of the internet adopt the Wikipedia practice of flagging content as unsubstantiated, with such material subject to removal if supporting evidence is not supplied.”

Paul Manuel Aviles Baker, senior director for research and innovation at Georgia Tech’s Center for Advanced Communications Policy, wrote, “I could envision the continued splintering of digital spaces to occur in a way that cocoons people in a comfortable self-reinforcing space. The model would be something like the way the old larger broadcast television networks lost dominance to, first, cable, then online distribution and engagement platforms. As possible alternative innovative platforms spring up, one role for the public sector – or even the private sector – would be to provide relatively neutral spaces that could serve as moderated channels, as generally recognized ‘fact-checking’ resources, or both.”

Ellery Biddle, projects director at Ranking Digital Rights, predicted, “One encouraging trend that I could see continuing to rise and produce strong outcomes is that of messaging services that cater to small groups. Signal supports this well and has increased its userbase substantially in recent years. It has become a place where people who know each other in real life can meet and chat with relatively strong guarantees of privacy and an ad-free experience, allowing us to use digital technology to improve and enrich our lives without some of the risks presented by more open and more commercial platforms.”

George Sadowsky, Internet Hall of Fame member and trustee of the Internet Society, said, “I hope the growth of healthy online communities will be a feature of 2035. These communities can be geographic, professional, hobby-oriented or even ideological, provided that we can get the meanness out of online behavior. Some modification of Section 230 might require online posters to assume more responsibility for their remarks, and some combination of technical progress and regulation would work to make the originators of content easily identifiable and therefore more responsible for their contributions to online conversations. We should definitely preserve the right to anonymity but restrict its use to concerns where anonymity is needed rather than letting it protect those who intend to make incendiary remarks without taking responsibility for them. This will not be easy, and it will require rethinking our relationships to one another at the community, country and global levels.”

An angel and venture investor who previously led innovation and investment for a major U.S. government organization commented, “There will be no such thing as passwords. All digital behavior will be discoverable and trackable. It will be impossible to turn off location services because of legal mandates, so all people will be trackable physically wherever they are all the time.”

In 2035 there will be a host of ‘trusted’ oracles for information. They won’t all agree on policy and politics, but they will have moved away from unsophisticated lying as a source of persuasion.

Jan English-Lueck, professor of anthropology at San Jose State University and distinguished fellow at the Institute for the Future

Jan English-Lueck, professor of anthropology at San Jose State University and distinguished fellow at the Institute for the Future, responded, “In 2035 there will be a host of ‘trusted’ oracles for information. They won’t all agree on policy and politics, but they will have moved away from unsophisticated lying as a source of persuasion. A new generation of technology developers will emerge from disgruntled youth to create an array of tools to maximize transparency and build a culture of accountability in the organizations in which they work. The overall environment of a multiverse of information will not have gone away; if anything, it will have intensified. But new tools will be leveraged to judge the quality of the information used in public debates.”

Bill Woodcock, executive director at the Packet Clearing House, wrote, “The single most important factor in improving the quality of digital life and the trajectory of digital interaction is the disintermediation of human communication: The removal of the agents with separate and competing agendas, like Facebook and Twitter, that have positioned themselves between people who could otherwise be engaging directly in civil discourse. This requires decentralization, federation and the empowerment of the users of digital technology to act on their own behalf.”

Scott Santens, senior advisor at Humanity Forward, commented, “In a new and improved digital realm, clickbait is a thing of the past. With incentives realigned, there is no longer the same incentive to provoke outrage, anger and fear in order to cause people to click stuff, and the stuff they click no longer does the best if it’s false, greatly exaggerated or highly polarizing. What people are more interested in is helpful, accurate information and healthy community spaces. This may seem like science fiction at this point, but it is possible if we make the many systemic changes necessary to make it happen.”

A professor whose work is focused on technology and society wrote, “Wouldn’t it be beautiful if people had a good reason to gather and deliberate and exchange ideas in safe spaces? Capitalism rules the roost and likely will have sway for a long, long time, so I hopefully imagine future spaces in which companies are incentivized to create these kinds of structures. It’s extremely Pollyanna-ish, but you asked! I love James Fishkin’s model of deliberative polling and gathering people of all stripes. Once gathered and in a neutral space, Americans do great things. They’re less stupid, less reactive, more tolerant and come up with better ideas. I would like to imagine structures online that incentivize these kinds of civic gatherings and mixings, as opposed to our social network-based echo chambers.”

Sean Mead, strategic lead at Ansuz Strategy, commented, “I imagine that in 2035 AI agents could exhibit a high degree of understanding of and customization for what people need; agents that can act autonomously for the benefit of individuals within useful parameters. There could also be network and software security redesigns that limit or largely eliminate cyberattacks, ransomware, information leaks and similar trust destroyers.”

Oscar Gandy, an emeritus scholar of the political economy of information at the University of Pennsylvania, said, “I had once upon a time imagined that we would see the development, promotion and use of something akin to ‘informational dietary supplements,’ perhaps akin to personal digital assistants, that would take note of our informational exposure and recommend, or even ‘nudge’ us toward more-comprehensive, challenging, reflective materials for consumption, while also providing assistance to us in exploring our own contributions to public discussions. My concern, of course, is whether we would see the development of trustworthy, rather than manipulatory, digital assistants. Ideally, the development of trust would have to be based on some form of personal investment, rather than third-party investments by marketers or influencers seeking to promote their own visions of what I need individually and we all need collectively.”

A scholar, knowledge manager and adjunct professor listed the following as his top wishes for improvement of the digital public sphere by 2035:

  • “Establish a clear distinction between open crowdsourcing and ‘qualified crowdsourcing’ – including in the latter only those with demonstrable competence.
  • Make free and open-access the default mode for all internet content.
  • Make exertion of intellectual property rights possible but difficult, and free ALL legacy sci-tech materials from intellectual property constraints, with the exception of potentially dangerous content – for example, insider information on chemical/biological/nuclear technology.
  • All university courses should be free and open worldwide.
  • Medical care should be freely and directly available on global scale.”

Christopher Yoo, founding director of the Center for Technology, Innovation and Competition at the University of Pennsylvania, responded, “Most of the ways that digital life could change for the better involve users, who are ultimately the main determinant of what practices gain traction online. My hope would be to give users better tools to be more discerning about the information they encounter online. I would also hope that practices emerge that curb bullying, flaming and other forms of antisocial behavior, both by empowering users to avoid encountering such attacks and by encouraging a healthy distance from social media.”

Russell Newman, associate professor of digital media and culture at Emerson College, wrote, “I hope we will have developed not just policies but entirely new schools of thought so that we can rethink our relationships with communication, with politics, with our economy and with our ecology. I imagine new schools of thought that supersede antitrust as a solution, simultaneously developing policies that address the very real material needs whose lack provides openings for ‘culture warriors’ to wedge us apart, with public subsidization of experiments directed toward the betterment of democratic discourse as opposed to simply the fomenting of new markets; and maybe we find new uses for markets within a more just framework in which markets serve as means instead of ends. We won’t tech our way out of our tech problems, even as we cannot leave the intricacies of the tech problems themselves to the dominant players today. Rather, we need to reframe the problems themselves. Perhaps by 2035, with enough effort, we will have conjured productive new formulations.”

An eminent expert in technology and global political policy said in a better world in 2035, “The much-celebrated tech concept of ‘permissionless innovation’ will be replaced by ‘responsible and accountable innovation,’ in which digital businesses engage in serious dialogue with those with expertise in the areas likely to be affected by digital innovation and truly take into account, as an integral part of decision-making, its non-commercial impacts and risks (such as those that have impact on people’s rights, the environment and in/equality).”

A share of these experts expect that government moves will have significant impacts on today’s most-popular commercial social platforms.

A machine learning research scientist based in the U.S. wrote this futuristic news report: “Oct. 1, 2035. After five years of litigation, trillions in fines and countless incidents of civil strife, today YouTube, Facebook, TikTok and Twitter collectively announced they are abandoning the algorithmic ranking of user content. Content will now no longer be personalized to individual users but will instead present communities of information indifferent to the preferences of the user. The action follows a long series of incidents related to violent extremism as people’s worst instincts were reinforced in deepening filter bubbles.”
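
The mechanical change this imagined announcement describes is the difference between ranking a feed by predicted engagement and showing every user the same non-personalized ordering. A minimal sketch of the contrast (the posts and field names are invented for illustration):

```python
from datetime import datetime

posts = [
    {"id": 1, "ts": datetime(2035, 10, 1, 8, 0),  "outrage_clicks": 9500, "topic": "politics"},
    {"id": 2, "ts": datetime(2035, 10, 1, 9, 30), "outrage_clicks": 12,   "topic": "gardening"},
    {"id": 3, "ts": datetime(2035, 10, 1, 7, 15), "outrage_clicks": 4100, "topic": "local news"},
]

# Engagement ranking: whatever provokes the most clicks rises to the top.
engagement_feed = sorted(posts, key=lambda p: p["outrage_clicks"], reverse=True)

# Non-personalized feed: identical for every user, ordered only by recency.
chronological_feed = sorted(posts, key=lambda p: p["ts"], reverse=True)

print([p["id"] for p in engagement_feed])     # [1, 3, 2]
print([p["id"] for p in chronological_feed])  # [2, 1, 3]
```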

Several respondents suggested requiring public digital platforms be categorized as public utilities.

Brent Shambaugh, developer, researcher and consultant, said, “Order in the data world may allow for more chaos in the physical world that is conducive to creativity and innovation. Digital communication technologies have allowed for collaboration across space. By 2035, many organizations may be more flexible than before if interoperability of data is achieved. Online games will become more immersive, but the physical world will not be replaced. Social media has led to isolation and relationships that cannot compete with physical ones. However, it has also initiated relationships that when they enter the physical world are of high quality. The digital realm in 2035 will be improved when people are able to express their views openly.”

An AI scientist at a major global technology company said, “Social media and tech companies that provide basic internet services (i.e., internet connectivity, internet search, email, website hosting) are categorized as utilities and forced to be open about how their algorithms work. Certified auditors monitor their algorithms in a secure manner to protect their intellectual property, but they are able to see when algorithms violate human rights or social good, for instance, selling private data without consent, targeting ads to vulnerable populations (e.g., gambling ads to addicts) or promoting violent, hateful or disinformation content. When this activity is detected, companies are immediately warned to take action, victims are notified and compensated, and companies are fined if they do not alter the algorithms within 48 hours. Broadband is considered a basic necessity like water and electricity and provided to every citizen as a public good.”
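
What a certified auditor’s check might look like in practice is an open question; one minimal, hypothetical sketch is a set of declarative rules run over per-decision audit logs exposed by the platform (every record field and rule below is invented for illustration):

```python
# Hypothetical audit-log records a platform might expose to a certified auditor.
decisions = [
    {"action": "ad_target", "segment": "gambling_addiction_risk", "ad": "casino"},
    {"action": "ad_target", "segment": "gardeners", "ad": "seed catalog"},
    {"action": "data_sale", "consent": False, "buyer": "broker_x"},
]

RULES = [
    ("targeting a vulnerable population",
     lambda d: d["action"] == "ad_target" and "addiction" in d.get("segment", "")),
    ("selling private data without consent",
     lambda d: d["action"] == "data_sale" and not d.get("consent", False)),
]

for decision in decisions:
    for name, rule in RULES:
        if rule(decision):
            # In the scenario: warn the company, notify and compensate victims,
            # and start the 48-hour clock for altering the algorithm.
            print(f"VIOLATION ({name}): {decision}")
```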

An activist and voice of the people wrote, “Ultimately the internet has to be governed like a public utility … to ensure that they are no longer so vulnerable to crime and trafficking and are held responsible for outcomes.”

A French professor of information science suggested a revamp for 2035 that divides the internet into non-commercial and commercial branches, writing, “I propose a division of the internet into two distinct networks: the Original Internet and the Business Internet. The Original Internet would have a general ban on all for-profit activities, advertising, sales, marketing and so on. The Original Internet would be refocused on human activities: art, science, nature, knowledge, education, health, leisure, amateur sport, gardening, games (non-gambling games). An e-reputation for users is possible in this setting, but only their non-commercial human activities can be mentioned. Any solicitation by email or sale of lists would be prohibited. The Original Internet could be moderated a priori. No private company could impose anything on the Original Internet, which would be placed under the control and authority of a commission from the governments of sovereign states (for example, a National Commission for Data Processing and Freedoms, as France’s CNIL is known in English). The Original Internet would be maintained by accredited companies and under the control of the CNIL, which could, at any time, withdraw its approval and take legal action in case of breach and fraud. The Business Internet would be dedicated to commercial activity and allow advertising, online shopping and so forth. Individuals’ secondary, business email addresses would be available on the Business Internet to those who would like them (but people would not be required to have one). If a user deems that their Business email address is misused, they can at any time delete it and create a new one. All lucrative activities (transactions, sales, advertising revenues) on the Business Internet are taxed and all such proceeds are earmarked for the updating and maintenance of the nonprofit activities of the Original Internet.”

New norms could be the principal driving factor for better human engagement online

A share of these respondents said that change by 2035 will emerge organically or with some focused assistance due to the gradual acceptance of new norms as the public adjusts to operating in online spaces.

Steve Jones, co-founder of the Association of Internet Researchers and distinguished professor of communication at the University of Illinois-Chicago, said, “By 2035 we will hopefully have a better understanding of how to evaluate and integrate the digital into everyday life and how to manage our online and offline interactions more holistically.”

Users of public digital spaces must evolve the same kind of etiquette that governs their behavior when in public.

Robert Bell, co-founder of Intelligent Community Forum

Robert Bell, co-founder of Intelligent Community Forum, urged, “Users of public digital spaces must evolve the same kind of etiquette that governs their behavior when in public. Most of us do not go to the biggest public square we can find and shout vile imaginings, filthy wishes and threats. Most of us do not form impromptu gangs and roam around seeking those who might disagree with us so that we can punish them. Most of us do not laugh uproariously in the face of another’s harm or embarrassment. Why? Because we have been socialized to keep such things to ourselves when the eyes of others are upon us. How we get to that kind of etiquette, I have no idea. But the fact we have done it before, evolving over decades and centuries, shows that it can be done and what is required.”

An expert in urban studies based in Venezuela observed, “We find ourselves in the middle of a digital emergency today, similar to the one we already recognize regarding climate change, and with dramatic consequences if we do not act promptly. There are two complementary fronts that should be addressed in parallel. One is information literacy. That is, disruptive educational programs (not under traditional pedagogical models) so that people are continuously (throughout life, from children to adults) developing skills that allow them a critical use of digital tools. The other aspect is the construction of socially acceptable behavior patterns to be applied to the development and use of these technological resources. Ethical codes agreed between the various social actors and gradually implemented are necessary. The goal would be to consolidate a culture toward information (infoculture) that recognizes the peculiarities of each set of actors (children, developers, teachers, officials, entrepreneurs, parents, etc.) and that regulates the relationships between all these groups.”

Chris Arkenberg, research manager at Deloitte’s Center for Technology Media and Communications, shared this scenario: “It’s morning and I want to check in on the two major social nets I use regularly. I put on my viz lenses and their biometrics certify my identity, allowing me access – my followers only see my username, but it’s just a couple layers to get to my real name. I have other nets I can use anonymously because they haven’t met the 10-million-user mark that requires that I use my real ID. I scan the feed on my desktop screen as the viz lens adds additional layers. Any users with more than 100,000 follows have an overlay. If I scan it, I can see the largest nodes they’re connected to on the network, for instance, I can see who they share the most and who shares them as well as their other major non-network relationships – employers, major stock holdings, military or law enforcement affiliation, etc.

“If people want to use the largest social nets, they have to be open and honest about it. In real life, actions have consequences and it’s clear that this was needed in our digital lives as well. That’s how socialization and mores work. In 2035, I can pull a layer that shows their network and how information and content move across it, tracking it all the way back to the originating source. Along the way, each of the nodes (i.e., accounts) is marked with trustworthiness, transparency and history ratings. It’s a great way to determine where information comes from, how it moves across the network and gets amplified, and whether or not it’s objectively credible or malicious.

“When I start to interact with another user’s post – for instance gazing at it or starting to comment or annotate – some will show emotional indicators. These are mostly a mix of animated emojis rendered directly from the poster’s face, but some include videos and livestreams. Once people understood how much communication is physical and embodied and how much of that is removed from a primarily textual medium online, they started to add more ways to signal emotions and body language. Then the services adopted it. You can even use the viz lenses to stream your face and EEG readings from your scalp. It’s a start at actually being able to see in a person’s eyes how they feel about something shared on the network, how they react to an insult or compliment and how the information highway is often an emotional roller coaster. So far, this has started to noticeably reduce the amount of trolling and griefing. And, with user transparency and traceability, it’s much harder to insult someone with anonymity.

“There are consequences, and the network can see it all. And, of course, these days the network is also part of the real world and all the physical touchpoints around us that connect to the net. It’s not a centralized, government-mandated social credit system. It’s just society and sociology and norms and mores and the consequences of violating them finally starting to take form online.”
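
Mechanically, the scenario’s ‘tracking it all the way back to the originating source’ is a walk backward through a share graph. A minimal sketch, assuming each post records which post it was shared from (the graph and trust scores are invented for illustration):

```python
# Each post records the post it was shared from (None marks an original source).
shared_from = {"post_d": "post_c", "post_c": "post_b", "post_b": "post_a", "post_a": None}
trust = {"post_a": 0.2, "post_b": 0.6, "post_c": 0.9, "post_d": 0.8}

def trace_to_origin(post_id: str) -> list[str]:
    """Walk the share chain back to the originating post."""
    chain = [post_id]
    while shared_from.get(chain[-1]) is not None:
        chain.append(shared_from[chain[-1]])
    return chain

chain = trace_to_origin("post_d")
print(" -> ".join(chain))                 # post_d -> post_c -> post_b -> post_a
print("origin trust:", trust[chain[-1]])  # 0.2 -- widely amplified, dubious source
```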

Christopher Savage, partner and cyberlaw specialist at Davis Wright Tremaine, responded, “People will learn to identify clickbait and propaganda more effectively by 2035 than they do today. We were spoiled, in a sense, from roughly 1940 through roughly 1990, in having a national and largely responsible, reasonable set of media outlets. All of that splintered with the growth of the internet and social media, and we are still learning how to separate wheat from chaff, etc. Once new norms arise regarding these issues, the online/digital world will be much more civilized and hospitable.”

Valerie Bock, principal at VCB Consulting, wrote, “I hope that by 2035 we will have become sufficiently familiar with online meeting spaces that a rich set of cultural norms will have replaced the ‘anything goes’ craziness which started with the advent of large-scale online anonymity and pseudonymity. Children will have been taught that the other person in a digitally mediated conversation is just as human, with feelings just as deep, as the kids they talk to on the playground. (Except, of course, in those cases where they are speaking with a robot. I’d like to see cultural norms developed around those conversations as well, involving 1) the preservation of a respectful tone, just to keep those muscles in good shape and 2) some sort of intermittent reminder, akin to the beep on a recorded line, that one’s interlocutor is not actually human.) We will be mostly known by our real names and/or persistent, traceable pseudonyms in our digital conversations in quality venues, and hence will be less likely to spew angry words into what is no longer a void but is, instead, a valued, shared space where a welcoming, patient, kind tone of expression is expected. More of us will have had long-term practice in developing such a conversational tone, even in writing, and more will be aware of the common pitfalls where voice tone is missing – hence less likely to use sarcasm without explicitly making it clear that that is what we are doing.”
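
Bock’s ‘beep on a recorded line’ suggests a simple mechanical norm: periodic, automatic disclosure inside conversations with a bot. A toy sketch of such a reminder (the function, message and interval are invented for illustration):

```python
def bot_turn(transcript: list[str], reply: str, remind_every: int = 5) -> list[str]:
    """Append a bot's reply, inserting a periodic disclosure -- the
    conversational analogue of the beep on a recorded phone line."""
    transcript.append(f"bot: {reply}")
    if sum(1 for line in transcript if line.startswith("bot:")) % remind_every == 0:
        transcript.append("[reminder: your interlocutor is an automated agent]")
    return transcript

transcript: list[str] = []
for i in range(6):
    bot_turn(transcript, f"reply {i}")
print("\n".join(transcript))  # the reminder appears after every fifth bot reply
```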

Stephan G. Humer, of Fresenius University of Applied Sciences in Berlin, said, “Despite all the difficulties, the trend of digitization is a positive one, because people want the continuous improvement of their lives and digitization can help them do that, just as industrialization did before. There will be a better digital culture, a more diverse internet and a broader usage of digitization. We will see more socio-technical knowledge and more holistic designs in digital technology. The internet will be more interweaved with our lives, there will be fewer gaps where digitization hasn’t been thought about at least once and there will be better solutions for a better life.”

Deanna Zandt, media technologist and author of “Share This: How You Will Change the World with Social Networking,” said, “I wrote a book about this back in 2010. While I didn’t foresee Russian troll farms and how many people were willing to attach their names to their abusive behavior, I still see a future where our digital tools create meaningful collective participation and the ability to hold power accountable. I still believe we create empathy when we share our stories. How could we improve one aspect of making that happen? I think we need to focus on digital literacy and understanding the impact of what we do online. I suspect Gen Z and even younger folks already understand much more than folks my age. When I used to teach workshops and give talks explaining how the neuroscience of digital interaction works, people were always sort of dumbfounded. Teaching and training each other in intentional ways should be part of a larger media literacy/criticism effort, and rather than demonizing the tools themselves as hopeless, we can and should learn and have agency over what we choose to do with them.”

Howard Rheingold, a pioneering sociologist who was one of the first to explore the early diffusion and impact of the internet, wrote, “Between now and 2035, more and more people become more and more disillusioned by Facebook, and new regulations by governments around the world begin to enable people to port their friendship networks to other online venues. The widespread use of synchronous and asynchronous media and educational dissemination of knowledge of how to use these free media then leads to a kind of renaissance of mass-creatorship, similar to the way the web originally grew.”

Dan Caprio, founder and CEO of The Providence Group, a privacy and security firm based in Washington, DC, said, “I hope the golden rule will be back in vogue in 2035.”
