Many respondents to the canvassing pointed out that algorithms are already the backbone of most systems, that they have clearly been of great benefit, and that they will continue to improve every aspect of life. Their driving idea is that great things will be achieved thanks to recent and coming advances in algorithm-based actions. These respondents said that algorithms will help make sense of massive amounts of data, and this will inspire breakthroughs in science, new conveniences and human capacities in everyday life, and an ever-better capacity to link people to the information that will help them. As an anonymous senior researcher employed by Microsoft replied, “They enable us to search the web and sequence genomes. These two activities alone dwarf the negatives.”
Demian Perry, director of mobile at NPR, said algorithmic “helpmates” add efficiencies. “An algorithm is just a way to apply decision-making at scale,” he explained. “Mass-produced decisions are, if nothing else, more consistent. Depending on the algorithm (and whom you ask), that consistency is either less nuanced or more disciplined than you might expect from a human. In the NPR One app, we have yet to find an algorithm that can be trusted to select the most important news and the most engrossing stories that everyone must hear. At the same time, we rely heavily on algorithms to help us make fast, real-time decisions about what a listener’s behavior tells us about their program preferences, and we use these algorithms to pick the best options to present to them at certain points in their listening experience. Thus algorithms are helpmates in the process of curating the news, but they’ll probably never run the show. We believe they will continue to make our drudge work more efficient, so that we have more time to spend on the much more interesting work of telling great stories.”
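To make the general idea concrete, here is a minimal, hypothetical sketch in Python of how a listening app might turn behavior signals into topic preferences and then rank candidate stories. The signal names, weights and functions are illustrative assumptions for this report, not a description of NPR One’s actual system.

```python
from collections import defaultdict

# Behavior signals and weights are illustrative assumptions, not documented values.
SIGNAL_WEIGHTS = {"completed": 1.0, "skipped": -0.5, "shared": 1.5}

def update_preferences(prefs, events):
    """prefs: topic -> score; events: list of (topic, signal) pairs."""
    for topic, signal in events:
        prefs[topic] += SIGNAL_WEIGHTS.get(signal, 0.0)
    return prefs

def rank_stories(prefs, candidates):
    """candidates: list of (story_id, topic); stories on preferred topics come first."""
    return sorted(candidates, key=lambda c: prefs.get(c[1], 0.0), reverse=True)

prefs = defaultdict(float)
update_preferences(prefs, [("politics", "skipped"),
                           ("science", "completed"),
                           ("science", "shared")])
ranked = rank_stories(prefs, [("story1", "politics"),
                              ("story2", "science"),
                              ("story3", "sports")])
print(ranked)  # science first (2.5), sports neutral (0.0), politics last (-0.5)
```

Even in this toy form, the division of labor Perry describes is visible: the code handles the fast, repetitive scoring, while deciding which stories matter remains an editorial judgment.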
Stowe Boyd, chief researcher at Gigaom, said, “Algorithms and AI will have an enormous impact on the conduct of business. HR is one enormous area that will be revamped top to bottom by this revolution. Starting at a more fundamental level, education will be recast and AI will be taking a lead role. We will rely on AI to oversee other AIs.”
Data-driven approaches to problem-solving will expand – and good design is a plus
We have several thousand years of human history showing the severe limitations of human judgment. Data-driven approaches based on careful analysis and thoughtful design can only improve the situation. Jason Hong
Why is the “monitoring of every aspect of life” likely to be “unavoidable and irreversible”? Because all of these improvements are data-dependent. Among the data-reliant innovations expected to rapidly expand are cognitive AI “digital agents” or “assistants.”
Scott Amyx, CEO of Amyx+, commented, “Within the field of artificial intelligence, there has been significant progress on cognitive AI as evidenced by Viv, IBM Watson, Amazon Echo, Alexa, Siri, Cortana and X.ai. Advancement in cognitive AI will usher in a new era of orchestration, coordination and automation that will enable humans to focus on human value-add activities (creativity, friendship, perseverance, resolve, hope, etc.) while systems and machines will manage task orientation. More exciting, in my opinion, is the qualitative, empathetic AI – AI that understands our deep human thoughts, desires and drivers and works to support our psychological, emotional and physical well-being. To that end, we are kicking off a research consortium that will further explore this area of research and development with emphasis on friend AI, empathetic AI, humorous AI and confidant AI. To enable hyper-personalization, these neural network AI agents would have to be at the individual level. All of us at some point in the future will have our own ambient AI virtual assistant and friend to help navigate and orchestrate life. It will coordinate with other people, other AI agents, devices and systems on our behalf. Naturally, concerns of strong AI emerge for some. There is active research, private and public, targeted at friendly AI. We will never know for sure if the failsafe measures that we institute could be broken by self-will.”
Algorithms will enable each one of us to have a multitude of various types of assistants that would do things on our behalf, amplifying our abilities and reach in ways that we’ve never seen before. Marina Gorbis
Marina Gorbis, executive director at the Institute for the Future, suggested these as “main positive impacts”: “Algorithms will enable each one of us to have a multitude of various types of assistants that would do things on our behalf, amplifying our abilities and reach in ways that we’ve never seen before. Imagine instead of typing search words and getting a list of articles, pushing a button and getting a narrative paper on a specific topic of interest. It’s the equivalent of each one of us having many research and other assistants …. Algorithms also have the potential to uncover current biases in hiring, job descriptions and other text information. Startups like Unitive and Knack show the potential of this.”
An anonymous deputy CEO wrote, “I hope we will finally see evidence-based medicine and integrated planning in the human habitat. The latter should mean cities developed with appropriate service delivery across a range of infrastructures.”
An anonymous computer security researcher observed, “Algorithms combined with machine learning and data analysis could result in products that predict self-defeating behaviors and react and incentivize in ways that could push users far further than they could go by themselves.”
Code processes will be refined and improved; ethical issues are being worked out
If we want algorithms that don’t discriminate, we will be able to design algorithms that do not discriminate. David Karger
David Karger, a professor of computer science at MIT, said, “Algorithms are just the latest tools to generate fear as we consider their potential misuse, like the power loom (put manual laborers out of jobs), the car (puts kids beyond the supervision of their parents), and the television (same fears as today’s internet). In all these cases there were downsides but the upsides were greater. The question of algorithmic fairness and discrimination is an important one but it is already being considered. If we want algorithms that don’t discriminate, we will be able to design algorithms that do not discriminate. Of course, there are ethical questions: If we have an algorithm that can very accurately predict whether someone will benefit from a certain expensive medical treatment, is it fair to withhold the treatment from people the algorithm thinks it won’t help? But the issue here is not with the algorithm but with our specification of our ethical principles.”
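Karger’s point that non-discrimination can be designed for is often made concrete with explicit fairness checks. Below is a minimal, illustrative sketch in Python of one such check, demographic parity (comparing positive-decision rates across groups). The data, group labels and 0.1 threshold are invented for illustration, and real fairness criteria are more varied and more contested than this single metric.

```python
# Illustrative fairness check: demographic parity compares the rate of positive
# decisions across groups. All data and the 0.1 threshold are made up.
def positive_rate(decisions, groups, group):
    member_decisions = [d for d, g in zip(decisions, groups) if g == group]
    return sum(member_decisions) / len(member_decisions) if member_decisions else 0.0

def demographic_parity_gap(decisions, groups):
    rates = {g: positive_rate(decisions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values()), rates

decisions = [1, 0, 1, 1, 0, 1, 0, 0]                  # 1 = approved, 0 = denied
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]  # hypothetical group labels
gap, rates = demographic_parity_gap(decisions, groups)
print(rates, gap)  # {'A': 0.75, 'B': 0.25} and a gap of 0.5
if gap > 0.1:      # the threshold here is an arbitrary example, not a standard
    print("Potential disparity: re-examine features, training data or decision thresholds.")
```

As Karger notes, the harder part is not writing such a check but agreeing on which ethical principle the check should encode.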
Respondents predict the development of “ethical machines” and “iteratively improved” code that will diminish the negatives.
Lee McKnight, an associate professor at Syracuse University’s School of Information Studies, wrote, “Algorithms coded in smart service systems will have many positive, life-saving and job-creating impacts in the next decade. Social machines will become much better at understanding your needs, and attempting to help you meet them. Ethical machines – such as drones – will know to sense and avoid collisions with other drones, planes, birds or people, recognize restricted air space, and respect privacy law. Algorithmically driven vehicles will similarly learn to better avoid each other. Health care smart-service systems will be driven by algorithms to recognize human and machine errors and omissions, improving care and lowering costs.”
Jon Lebkowsky, CEO of Polycot Associates, wrote, “I’m personally committed to agile process, through which code is iteratively improved based on practice and feedback. Algorithms can evolve through agile process. So while there may be negative effects from some of the high-impact algorithms we develop, my hope and expectation is that those algorithms will be refined to diminish the negative and enhance the positive impact.”
Edward Friedman, emeritus professor of technology management at the Stevens Institute of Technology, expects more algorithms will be established to evaluate algorithms, writing, “As more algorithms enter the interactive digital world, there will be an increase of Yelp-type evaluation sites that guide users in their most constructive use.”
Ed Dodds, a digital strategist, wrote, “Algorithms will force persons to be more reflective about their own personal ontologies, fixed taxonomies, etc., regarding how they organize their own digital assets or bookmark the assets of others. AI will extrapolate. Users will then be able to run thought experiments such as ‘OK, show the opposite of those assumptions’ and such in natural-language queries. A freemium model will [show] whether or not inputting a user’s own preferred filters will be of enough value.”
An anonymous chief scientist observed, “Short-term, the negatives will outweigh the positives, but as we learn and go through various experiences, the balance will eventually go positive. We always need algorithms to be tweakable by humans according to context, creating an environment of IA (intelligent assistants) instead of AI (artificial intelligence).”
Another anonymous respondent agreed, writing, “Algorithms will be improved as a reactive response. So negative results of using them will be complained about loudly at first, word-workers will work on them and identify the language that is at issue, and fine-tune them. At some point it will be 50-50. New ones will always have to be fine-tuned, and it will be the complaining that helps us fine-tune them.”
‘Algorithms don’t have to be perfect; they just have to be better than people’
Some respondents who predicted a mostly positive future said algorithms are unfairly criticized, noting they outperform human capabilities, accomplish great feats and can always be improved.
An anonymous professor who works at New York University said algorithm-based systems are a requirement of our times and mostly work out for the best. “Automated filtering and management of information and decisions is a move forced on us by complexity,” he wrote. “False positives and false negatives will remain a problem, but they will be edge cases.”
An anonymous chief scientist wrote, “Whenever algorithms replace illogical human decision-making, the result is likely to be an improvement.” And an anonymous principal consultant at a top consulting firm wrote, “Fear of algorithms is ridiculously overblown. Algorithms don’t have to be perfect, they just have to be better than people.”
Algorithms are less subject to hidden agendas than human advisors and managers. … Humans are a lot more suspect in their advice and decisions than computers are. Daniel Berleant
Daniel Berleant, author of The Human Race to the Future, noted, “Algorithms are less subject to hidden agendas than human advisors and managers. Hence the output of these algorithms will be more socially and economically efficient, in the sense that they will be better aligned with their intended goals. Humans are a lot more suspect in their advice and decisions than computers are.”
Avery Holton, an assistant professor and humanities scholar at the University of Utah, got into the details. “In terms of communication across social networks both present and future, algorithms can work quickly to identify our areas of interest as well as others who may share those interests. Yes, this has the potential to create silos and echo chambers, but it also holds the promise of empowerment through engagement encouragement. We can certainly still seek information and relationships by combing through keywords and hashtags, but algorithms can supplement those efforts by showing us not only ‘what’ we might be interested in and ‘what’ we might be missing, but ‘who’ we might be interested in and ‘who’ we might be missing. Further, these algorithms may be able to provide us some insights about others (e.g., their interests, their engagement habits) that help us better approach, develop and sustain relationships.”
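One simple way to picture the “who we might be missing” idea Holton describes is interest-overlap matching. The sketch below is a hypothetical Python example, with made-up accounts and hashtags, that ranks unfollowed accounts by how much their hashtag interests overlap with a user’s; it illustrates the concept rather than any platform’s actual recommendation algorithm.

```python
# Hypothetical sketch of interest-overlap matching: rank accounts a user does not
# yet follow by how much their hashtag interests overlap with the user's own.
def jaccard(a, b):
    """Similarity of two interest sets, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def suggest_accounts(my_tags, others, already_following, top_n=3):
    """others maps account handle -> set of hashtags that account posts about."""
    scored = [(acct, jaccard(my_tags, tags))
              for acct, tags in others.items()
              if acct not in already_following]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:top_n]

my_tags = {"#opensource", "#datajournalism", "#python"}
others = {
    "@newsroom_dev": {"#datajournalism", "#python", "#maps"},
    "@chef_anna":    {"#baking", "#sourdough"},
    "@viz_person":   {"#python", "#dataviz"},
}
print(suggest_accounts(my_tags, others, already_following={"@newsroom_dev"}))
# -> "@viz_person" ranks first on shared tags; "@chef_anna" last with no overlap
```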
Dan Ryan, professor of sociology at Mills College in Oakland, California, wrote, “The worry that algorithms might introduce subtle biases strikes me as much social-science ado about very little. No more true than the ways that architecture, cartography, language, organizational rules, credentialing systems, etc., produce these effects.”
An anonymous respondent said, “It would be a fallacy to say that without algorithms our society would be more fair. We can ‘unteach’ discrimination in computers more easily than we can in human beings. The more algorithms are capable of mimicking human behavior, the more we will need to reconsider the implications of what makes us human and how we interact.”
An anonymous principal consultant at a consulting firm wrote, “People often confuse a biased algorithm for an algorithm that doesn’t confirm their biases. If Facebook shows more liberal stories than conservative, that doesn’t mean something is wrong. It could be a reflection of their user base, or of their media sources, or just random chance. What is important is to realize that everything has some bias, intentional or not, and to develop the critical thinking skills to process bias.”
In the future, the world may be governed by benevolent AI
An anonymous respondent projected ahead several hundred years, writing, “Algorithms initially will be an extension of the ‘self’ to help individuals maintain and process the overload of information they have to manage on a daily basis. ‘How’ identities are managed and ‘who’ develops the algorithms will dictate the degree of usefulness and/or exploitation. Fast-forward 200 years – no governments or individuals hold a position of power. The world is governed by a self-aware, ego-less, benevolent AI. A single currency of credit (a la bitcoin) is earned by individuals and distributed by the AI according to the ‘good’ you contribute to society. The algorithm governing the global, collective AI will be optimized toward the common good, maximizing health, safety, happiness, conservation, etc.”