Code-Dependent: Pros and Cons of the Algorithm Age

Theme 1: Algorithms will continue to spread everywhere

Nearly all of these respondents see great advantages in the algorithms that are already changing how connected institutions and people live and work. A significant majority expects them to continue to proliferate, mostly invisibly, and anticipates an exponential rise in their influence. They say this will bring many benefits and some challenges.

Jim Warren, longtime technology entrepreneur and activist, described algorithms this way: “Any sequence of instructions for how to do something (or how a machine that can understand said instructions can do it) is – by definition – an ‘algorithm.’ All sides – great and small, benevolent and malevolent – have always created and exercised such algorithms (recipes for accomplishing a desired function), and always will. Almost all of the ‘good’ that humankind has created – as well as all the harm (sometimes only in the eye of the beholder) – has been from discovering how to do something, and then repeating that process. And more often than not, sharing it with others. Like all-powerful but double-edged tools, algorithms are. ;-)”
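
Warren's definition can be made concrete with a few lines of code. The sketch below is a hypothetical illustration, not drawn from any respondent: a tiny "recipe" that converts temperature readings from Celsius to Fahrenheit, exactly the kind of repeatable, shareable instruction sequence he describes.

    def celsius_to_fahrenheit(readings):
        """A small 'recipe': apply the same steps to every input value."""
        converted = []
        for c in readings:
            converted.append(c * 9 / 5 + 32)  # the rule, applied identically each time
        return converted

    # Once written down, the recipe can be repeated and shared with others.
    print(celsius_to_fahrenheit([0, 20, 37, 100]))  # four readings in, four converted readings out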

Terry Langendoen, a U.S. National Science Foundation expert whose job is to support research on algorithms, is enthusiastic about what lies ahead. “The technological improvements in the past 50 years in such areas as speech-recognition and synthesis, machine translation and information retrieval have had profound beneficial impacts …,” he said. “The field is poised to make significant advances in the near future.”

The benefits will be visible and invisible and can lead to greater human insight into the world

Patrick Tucker, author and technology editor at Defense One, pointed out how today’s networked communications amplify the impacts of algorithms. “The internet is turning prediction into an equation,” he commented. “From programs that chart potential flu outbreaks to expensive (yet imperfect) ‘quant’ algorithms that anticipate bursts of stock market volatility, computer-aided prediction is everywhere. As I write in The Naked Future, in the next two decades, as a function of machine learning and big data, we will be able to predict huge areas of the future with far greater accuracy than ever before in human history, including events long thought to be beyond the realm of human inference. That will have an impact in all areas including health care, consumer choice, educational opportunities, etc. The rate by which we can extrapolate meaningful patterns from the data of the present is quickening as rapidly as is the spread of the internet because the two are inexorably linked.”
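
Tucker's phrase "turning prediction into an equation" is, at its simplest, a matter of fitting a model to past observations and extrapolating forward. The sketch below is a toy example with invented numbers rather than data from the report: it fits an ordinary least-squares line to a short weekly series and projects the next value.

    def fit_line(xs, ys):
        """Ordinary least squares for y = slope * x + intercept, computed directly."""
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
                 / sum((x - mean_x) ** 2 for x in xs))
        return slope, mean_y - slope * mean_x

    # Hypothetical weekly counts (say, flu-related searches); the numbers are invented.
    weeks = [1, 2, 3, 4, 5]
    counts = [120, 135, 160, 170, 190]
    slope, intercept = fit_line(weeks, counts)
    print(f"projected count for week 6: {slope * 6 + intercept:.0f}")  # ~208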

Code, flexible and open code, can make you free – or at least a bit freer. Paul Jones

Paul Jones, clinical professor at the University of North Carolina-Chapel Hill and director of ibiblio.org, was optimistic. “The promise of standardization of best practices into code is a promise of stronger best practices and a hope of larger space for human insight,” he predicted. “Code, flexible and open code, can make you free – or at least a bit freer.”

David Krieger, director of the Institute for Communication & Leadership IKF, predicted, “Data-driven algorithmic cognition and agency will characterize all aspects of society. Humans and non-humans will become partners such that identity(ies) will be distributed and collective. Individualism will become anachronistic. The network is the actor. It is the network that learns, produces, decides, much like the family or clan in collective societies of the past, but now on the basis of big data, AI and transparency. Algorithmic auditing, accountability, benchmarking procedures in machine learning, etc., will play an important role in network governance frameworks that will replace hierarchical, bureaucratic government. Not government, but governance.”

An anonymous software security consultant noted, “There will be many positive impacts that aren’t even noticed. Having an ‘intelligent’ routing system for cars may mean most people won’t notice when everyone gets to their destination as fast as they used to even with twice the traffic. Automated decisions will indeed have significant impacts upon lots of people, most of the time in ways they won’t ever recognize. Already they’re being used heavily in financial situations, but most people don’t see a significant difference between ‘a VP at the bank denied my loan’ and ‘software at the bank denied my loan’ (and in practice, the main difference is an inability to appeal the decision).”
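
The "intelligent" routing the consultant mentions is, at bottom, a shortest-path computation over a road network, repeated for every driver as conditions change. A minimal sketch using Dijkstra's algorithm follows, with a hypothetical road graph and invented travel times rather than anything from the report.

    import heapq

    def fastest_route(graph, start, goal):
        """Dijkstra's shortest path; graph maps node -> [(neighbor, minutes), ...]."""
        queue = [(0, start, [start])]  # (elapsed minutes, node, path so far)
        visited = set()
        while queue:
            minutes, node, path = heapq.heappop(queue)
            if node == goal:
                return minutes, path
            if node in visited:
                continue
            visited.add(node)
            for neighbor, cost in graph.get(node, []):
                if neighbor not in visited:
                    heapq.heappush(queue, (minutes + cost, neighbor, path + [neighbor]))
        return None

    # Invented travel times; a real system would refresh these from live traffic data.
    roads = {
        "home": [("highway", 10), ("back_road", 7)],
        "highway": [("office", 12)],
        "back_road": [("office", 20)],
    }
    print(fastest_route(roads, "home", "office"))  # (22, ['home', 'highway', 'office'])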

Another anonymous respondent wrote, “Algorithms in general enable people to benefit from the results of the synthesis of large volumes of information where such synthesis was not available in any form before – or at least only to those with significant resources. This will be increasingly positive in terms of enabling better-informed choices. As algorithms scale and become more complex, unintended consequences become harder to predict and harder to fix if they are detected, but the positive benefit above seems so dramatic it should outweigh this effect. Particularly if there are algorithms designed to detect unintended discriminatory or other consequences of other algorithms.”
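
The closing idea, algorithms that audit other algorithms, often takes the form of a disparate-impact check: compare outcome rates across groups and flag any group that falls far behind. The sketch below uses invented decision records and the commonly cited four-fifths rule of thumb as its threshold; both are illustrative assumptions, not anything prescribed in the report.

    from collections import defaultdict

    def disparate_impact(decisions, threshold=0.8):
        """Flag groups whose approval rate is below `threshold` times the best group's rate."""
        approved = defaultdict(int)
        total = defaultdict(int)
        for group, outcome in decisions:  # outcome: 1 = approved, 0 = denied
            total[group] += 1
            approved[group] += outcome
        rates = {g: approved[g] / total[g] for g in total}
        best = max(rates.values())
        return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

    # Invented log of another algorithm's decisions: (group label, approved?)
    log = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
           ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
    print(disparate_impact(log))  # {'B': 0.333...}, far below the 0.8 rule of thumb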

The many upsides of algorithms are accompanied by challenges

Respondents often hailed the positives while noting the need to address the downsides.

If we guard the core values of civil society (like equality, respect, transparency), the most valuable algorithms will be those that help the greatest numbers of people. Galen Hunt

Galen Hunt, partner research manager at Microsoft Research NExT, reflected the hopes of many when he wrote, “Algorithms will accelerate in their impact on society. If we guard the core values of civil society (like equality, respect, transparency), the most valuable algorithms will be those that help the greatest numbers of people.”

Alf Rehn, professor and chair of management and organization at Åbo Akademi University in Finland, commented, “New algorithmic thinking will be a great boon for many people. They will make life easier, shopping less arduous, banking a breeze and a hundred other great things besides. But a shaved monkey can see the upsides. The important thing is to realize the threats, major and minor, of a world run by algorithms. They can enhance filter bubbles for both individuals and companies, limit our view of the world, create more passive consumers, and create a new kind of segregation – think algorithmic haves and have-nots. In addition, for an old hacker like me, as algorithmic logics get more and more prevalent in more and more places, they also increase the number of attack vectors for people who want to pervert their logic, for profit, for more nefarious purposes, or just for the lulz.”

Andrew Nachison, founder at We Media, observed, “The positives will be enormous – better shopping experiences, better medical experience, even better experiences with government agencies. Algorithms could even make ‘bureaucrat’ a friendlier word. But the dark sides of the ‘optimized’ culture will be profound, obscure and difficult to regulate – including pervasive surveillance of individuals and predictive analytics that will do some people great harm (‘Sorry, you’re pre-disqualified from a loan.’ ‘Sorry, we’re unable to sell you a train ticket at this time.’). Advances in computing, tracking and embedded technology will herald a quantified culture that will be ever more efficient, magical and terrifying.”

Luis Lach, president of the Sociedad Mexicana de Computación en la Educación, A.C., said, “On the negative side we will see huge threats to security, data privacy and attacks to individuals, by governments, private entities and other social actors. And on the positive we will have the huge opportunity for collective and massive collaboration across the entire planet. Of course the science will rise and we will see marvelous advances. Of course we will have a continuum between positive and negative scenarios. What we will do depends on individuals, governments, private companies, nonprofits, academia, etc.”

Frank Pasquale, author of The Black Box Society: The Secret Algorithms That Control Money and Information and professor of law at the University of Maryland, wrote, “Algorithms are increasingly important because businesses rarely thought of as high-tech have learned the lessons of the internet giants’ successes. Following the advice of Jeff Jarvis’ What Would Google Do, they are collecting data from both workers and customers, using algorithmic tools to make decisions, to sort the desirable from the disposable. Companies may be parsing your voice and credit record when you call them, to determine whether you match up to ‘ideal customer’ status, or are simply ‘waste’ who can be treated with disdain. Epagogix advises movie studios on what scripts to buy based on how closely they match past, successful scripts. Even winemakers make algorithmic judgments, based on statistical analyses of the weather and other characteristics of good and bad vintage years. For wines or films, the stakes are not terribly high. But when algorithms start affecting critical opportunities for employment, career advancement, health, credit and education, they deserve more scrutiny. U.S. hospitals are using big data-driven systems to determine which patients are high-risk – and data far outside traditional health records is informing those determinations. IBM now uses algorithmic assessment tools to sort employees worldwide on criteria of cost-effectiveness, but spares top managers the same invasive surveillance and ranking. In government, too, algorithmic assessments of dangerousness can lead to longer sentences for convicts, or no-fly lists for travelers. Credit scoring drives billions of dollars in lending, but the scorers’ methods remain opaque. The average borrower could lose tens of thousands of dollars over a lifetime, thanks to wrong or unfairly processed data. It took a combination of computational, legal and social scientific skills to unearth each of the examples discussed above – troubling collection, bad or biased analysis, and discriminatory use. Collaboration among experts in different fields is likely to yield even more important work. Grounded in well-established empirical social science methods, their models can and should inform the regulation of firms and governments using algorithms.”
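
Pasquale's point about opaque scoring is easier to see with a toy scorecard. The sketch below is entirely hypothetical (invented features, weights and base score, not any real lender's model); the borrower sees only the final number, never the weights that produced it.

    def credit_score(applicant, weights, base=600):
        """Base score plus a weighted sum of attributes; the weights are the opaque part."""
        return base + sum(weights[feature] * value for feature, value in applicant.items())

    # Invented weights and base; real scorers keep theirs proprietary and unpublished.
    weights = {"years_employed": 12.0, "late_payments": -40.0, "utilization": -55.0}
    applicant = {"years_employed": 4, "late_payments": 1, "utilization": 0.6}
    print(round(credit_score(applicant, weights)))  # 575: the applicant sees only this number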

Cindy Cohn, executive director at the Electronic Frontier Foundation, wrote, “The lack of critical thinking among the people embracing these tools is shocking and can lead to some horrible civil liberties outcomes …. I don’t think it’s possible to assign an overall ‘good’ or ‘bad’ to the use of algorithms, honestly. As they say on Facebook, ‘It’s complicated.’”

Bernardo A. Huberman, senior fellow and director of the Mechanisms and Design Lab at HPE Labs, Hewlett Packard Enterprise, said, “Algorithms do lead to the creation of filters through which people see the world and are informed about it. This will continue to increase. If the negative aspects eventually overtake the positive ones, people will stop resorting to interactions with institutions, media, etc. People’s lives are going to continue to be affected by the collection of data about them, but I can also see a future where they won’t care as much or will be compensated every time their data is used for money-making purposes.”

Marcel Bullinga, trend watcher and keynote speaker, commented, “AI will conquer the world, like the internet and the mobile phone once did. It will end the era of apps. Millions of useless apps (because there are way too many for any individual) will become useful on a personal level if they are integrated and handled by AI. For healthy robots/AI, we must have transparent, open source AI. The era of closed is over. If we stick to closed AI, we will see the rise of more and more tech monopolies dominating our world as Facebook and Google and Uber do now.”

In today’s market economy, driven by profit and shareholder value, the possibility of widespread abuse is quite high. Michael Rogers

Michael Rogers, author and futurist at Practical Futurist, said, “In a sense, we’re building a powerful nervous system for society. Big data, real-time analytics, smart software could add great value to our lives and communities. But at the same time they will be powerful levers of social control, many in corporate hands. In today’s market economy, driven by profit and shareholder value, the possibility of widespread abuse is quite high. Hopefully society as a whole will be able to use these tools to advance more humanistic values. But whether that is the case lies not in the technology, but in the economic system and our politics.”

An anonymous principal engineer commented, “The effect will depend on the situation. In areas where human judgment is required, I foresee negative effects. In areas where human judgment is a hindrance it could be beneficial. For example, I don’t see any reason for there to be train accidents (head-on collisions, speeding around a curve) with the correct design of an intelligent train system. Positive and negative effects will also depend on the perception of the person involved. For example, an intelligent road system could relieve congestion and reduce accidents, but also could restrict freedom of people to drive their cars as they wish (e.g., fast). This could be generalized to a reduction in freedom in general, which could be beneficial to some but detrimental to others.”
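
The engineer's train example reduces to a simple supervision rule: given current speed and the distance to a slower restriction ahead, check whether normal braking can shed the excess speed in time, and intervene if not. The toy check below uses invented parameters; real automatic train protection systems are far more elaborate.

    def must_brake(speed_mps, limit_ahead_mps, distance_m, decel_mps2=0.7, margin_m=50):
        """Return True if the train must start braking now to meet the speed limit ahead."""
        if speed_mps <= limit_ahead_mps:
            return False
        # Distance needed to slow from v to u at constant deceleration a: (v**2 - u**2) / (2 * a).
        braking_distance = (speed_mps ** 2 - limit_ahead_mps ** 2) / (2 * decel_mps2)
        return braking_distance + margin_m >= distance_m

    # 45 m/s (~160 km/h) approaching a 25 m/s curve restriction.
    print(must_brake(45, 25, 1000))  # True: ~1,000 m of braking distance leaves no slack
    print(must_brake(45, 25, 2000))  # False: plenty of room before braking is required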
