
Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade

4. Could a quantum leap someday aid ethical AI?

As they considered the potential evolution of ethical AI design, the people responding to this canvassing were given the opportunity to speculate as to whether quantum computing (QC), which is still in its early days of development, might somehow be employed in the future in support of the development of ethical AI systems.

In March 2021, a team at the University of Vienna announced it had designed a hybrid AI that relies on quantum and classical computing and showed that – thanks to quantum quirkiness – it could simultaneously screen a handful of different ways to solve a problem. The result was a reinforcement learning AI that learned more than 60% faster than a nonquantum-enabled setup. This was one of the first tests to show that adding quantum speed can accelerate AI agent training/learning. It is projected that this capability, when scaled up, also might lead to a more-capable “quantum internet.”
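For readers who want a sense of where such speedup claims come from, the sketch below compares the query counts of classical unstructured search with the Grover-style amplitude amplification often cited as the generic source of quantum search advantages. It is an illustration of the scaling argument only, not a description of the Vienna team's actual hybrid setup.

```python
# Illustrative only: classical vs. Grover-style query counts for finding one
# "good" strategy among N candidates. This is the generic quantum-search
# scaling argument, not the mechanism of the Vienna hybrid-RL experiment.
import math

def classical_queries(n_candidates: int) -> float:
    # Unstructured classical search inspects about half the candidates on average.
    return n_candidates / 2

def grover_queries(n_candidates: int) -> float:
    # Amplitude amplification finds one marked item in roughly (pi/4) * sqrt(N) queries.
    return (math.pi / 4) * math.sqrt(n_candidates)

for n in (16, 256, 4096):
    print(f"N={n:5d}  classical ~ {classical_queries(n):7.1f}  quantum ~ {grover_queries(n):6.1f}")
```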

Although there have been many other announcements of new steps toward advancing QC in the past year or two, it is still so nascent that even its developers are somewhat uncertain about its likely future applications, and the search is on for specific use cases.

Because this technology is still in its early days, these respondents’ answers are, of course, speculative reflections, but their discussion of the question raises a number of important issues. The question they considered was:

How likely is it that quantum computing will evolve over the next decade to assist in creating ethical artificial intelligence systems? If you think that will be likely, why do you think so? If you do not think it likely that quantum computing will evolve to assist in building ethical AI, why not?

Here, we share four overarching responses. They are followed by responses from those who said it might be possible in the future for AI ethics systems to get a boost from QC and by responses from experts who said that such a development is somewhat or quite unlikely.

Greg Sherwin, vice president for engineering and information technology at Singularity University, wrote, “Binary computing is a lossy, reductionist crutch that models the universe along the lines of false choices. Quantum computing has an opportunity to better embrace the complexity of humanity and the world, as humans can hold paradoxes in their minds while binary computers cannot. Probabilistic algorithms and thinking will predominate the future, leading to more emphasis on the necessary tools for such scenario planning, which is where quantum computers can serve and binary computers fail. That demand for the answers that meet the challenges of the future will require us to abandon our old, familiar tools of the past and to explore and embrace new paradigms of thinking, planning and projecting. I do see ethical AI as something orthogonal to the question of binary vs. quantum computing. It will be required in either context. So, the question of whether quantum computing will evolve as a tool to assist building ethical AI is a nonstarter. Either because there is little ‘quantum’ specialty about it, or because building ethical AI is a need independent of its computational underpinnings. Humans will continue to be in the loop for decisions that have significant impacts to our lives, our health, our governance and our social well-being. Machines will be wholly entrusted for only those things that are mechanized, routine and subject to linear optimization.”

Barry Chudakov, founder and principal of Sertain Research, said, “I believe quantum computers may evolve to assist in building ethical AI, not just because they can work faster than traditional computers, but because they operate differently. AI systems depend on massive amounts of data that algorithms ingest, classify and analyze using specific characteristics; quantum computers enable more precise classification of that data. Eventually, quantum computing-based AI algorithms could find patterns that are invisible to classical computers, making certain types of intractable problems solvable. But there is a fundamental structural problem that must be addressed first: vastly more powerful computing power may not resolve the human factor. Namely, that the moral and ethical framework for building societal entities (churches, governments, constitutions, laws, etc.) grew out of tribal culture, nomadic culture, which recorded precepts which then turned into codified law. …

“We’re in a different world now. As William Gibson said in 2007: ‘The distinction between cyberspace and that which isn’t cyberspace is going to be unimaginable.’ It’s now time to imagine the unimaginable. This is because AI operates from an entirely different playbook. The tool logic of artificial intelligence is embedded machine learning; it is quantum, random, multifarious. We are leaving the Gutenberg Galaxy and its containment patterns of rule-based injunctions. The tool logic of the book is linear, celebrates one-at-a-timeness and the single point of view; alphabetic sequentiality supplanted global/spatial awareness and fostered fear of the image; literacy deified books as holy and the ‘word of God.’ AI, on the other hand, takes datasets and ‘learns’ or improves from the analysis of that data. This is a completely different dynamic, with a different learning curve and demands. …

“I believe we need a 21st-century Quantum AI Constitutional Convention. The purpose of such a convention is clear: To inaugurate a key issue not only for AI tech companies in the coming decade but for the known world, namely, establishing clear ethical guidelines and protocols for the deployment of AI and then creating an enlightened, equitable means of policing and enforcing those guidelines. This will necessitate addressing the complexities of sensitive contexts and environments (face recognition, policing, security, travel, etc.) as well as a host of intrusive data collection and use case issues, such as tracking, monitoring, AI screening for employment, or algorithmic biases. This will demand transparency, both at the site of the deployment of AI as well as addressing its implications. Without those guidelines and protocols – the 21st-century equivalent of the Magna Carta and its evolved cousin, the U.S. Constitution – there will be manufactured controversy over what is ethical and what is questionable. … AI is ubiquitous and pervasive. We hardly have the language or the inclination to fully appreciate what AI can and will do in our lives. This is not to say that we cannot; it is to say that we are unprepared to see, think, debate and wisely decide how to best move forward with AI development.


“Once there is a global constitution and Bill of AI Rights, with willing signatories around the world, quantum computing will be on track to evolve in assisting the building of ethical AI. However, the unfolding of that evolution will collide with legacy cultural and societal structures. So, as we embrace and adopt the logic of AI, we will change ourselves and our mores; effectively, we will be turning from hundreds or thousands of years of codified traditional behaviors to engage with and adapt to the ‘chaotic implications’ of AI. …

“AI represents not human diminishment and replacement but a different way of being in the world, a different way of thinking about and responding to the world, namely, to use designed intelligence to augment and expand human intelligence. Yes, this will create new quandaries and dilemmas for us – some of which may portend great danger. … We will braid AI into the fabric of our lives, and, in order to do so successfully, society at many levels must be present and mindful at every step of AI integration into human society.”

David Brin, physicist, futures thinker and author of the science fiction novels “Earth” and “Existence,” predicted, “Quantum computing has genuine potential. Roger Penrose and associates believe it already takes place, in trillions of subcellular units inside human neurons. If so, it may take a while to build quantum computers on that kind of scale. The ethical matter is interesting, though totally science fictional, that quantum computers might connect in ways that promote reciprocal understanding and empathy.”

Jerome C. Glenn, co-founder and CEO of the futures-research organization The Millennium Project, wrote, “Elementary quantum computing is already here and will accelerate faster than people think, but the applications will take longer to implement than people think. It will improve computer security, AI and computational sciences, which in turn accelerate scientific breakthroughs and tech applications, which in turn increase both positive and negative impacts for humanity. These potentials are too great for humanity to remain so ignorant. We are in a new arms race for artificial general intelligence and more-mature quantum computing, but, like the nuclear race that got agreements about standards and governance (International Atomic Energy Agency), we will need the same for these new technologies while the race continues.”

Responses from those who said quantum computing is very or somewhat likely to assist in working toward ethical design of artificial intelligence

Stanley Maloy, associate vice president for research and innovation and professor of biology at San Diego State University, responded, “Quantum computing will develop hand-in-hand with 5G technologies to provide greater access to computer applications that will affect everyone’s lives, from self-driving cars to effective drone delivery systems, and many, many other applications that require both decision-making and rapid analysis of large datasets. This technology can also be used in harmful ways, including misuse of identification technologies that bypass privacy rights.”

A longtime network technology administrator and leader based in Oceania said, “Quantum computing gives us greater computational power to tackle complex problems. It is therefore a simple relationship: if more computational power is available, it will be used to tackle those complex problems that are too difficult to solve today.”

Sean Mead, senior director of strategy and analytics at Interbrand, said, “Quantum computing enables an exponential increase in computing power, which frees up the processing overhead so that more ethical considerations can be incorporated into AI decision-making. Quantum computing injects its own ethical dilemmas in that it makes the breaking of modern encryption trivial. Quantum computing’s existence means current techniques to protect financial information, privacy, control over network-connected appliances, etc., are no longer valid, and any security routines relying on them are likewise no longer valid and effective.”
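Mead’s encryption point rests on a widely cited result: RSA’s security depends on the difficulty of factoring the product of two large primes, which a sufficiently large quantum computer running Shor’s algorithm could do in polynomial time. The toy sketch below shows the classical brute-force approach whose cost a quantum attacker would sidestep; the numbers are deliberately tiny and purely illustrative.

```python
# Illustrative only: why factoring underpins RSA's security. Real RSA moduli are
# hundreds of digits long; Shor's algorithm on a large quantum computer would
# factor them in polynomial time, where classical methods scale far worse.
from math import isqrt

def trial_division_factor(n: int) -> int:
    """Classical brute-force factoring; work grows roughly with sqrt(n)."""
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            return d
    return n  # n is prime

p, q = 10007, 10009              # toy primes; real keys use far larger ones
n = p * q
print(trial_division_factor(n))  # recovers 10007, but only because n is tiny
```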

David Mussington, a senior fellow at CIGI and professor and director at the Center for Public Policy and Private Enterprise at the University of Maryland, wrote, “I am guardedly optimistic that quantum computing ‘could’ develop in a salutary direction. The question is, ‘whose values will AI research reflect?’ It is not obvious to me that the libertarian ideologies of many private sector ICT and software companies will ‘naturally’ lead to the deployment of safe, let alone secure, AI tools and AI-delivered digital services. Transparency in the technologies, and in the decisions that AI may enable, may run into information-sharing limits due to trade secrets, nondisclosure agreements and international competition for dominance in cyberspace. Humans will still be in the loop of decisions, but those humans have different purposes, cultural views and, to the extent that they represent states, conflicting interests.”

Vint Cerf, Internet Hall of Fame member and vice president and chief internet evangelist at Google, observed, “There is some evidence that quantum methods may be applicable to ML systems for optimization, for example. But it’s early days yet.”

Jamais Cascio, research fellow at the Institute for the Future, observed, “To the degree that quantum computing will allow for the examination of a wide variety of possible answers to a given problem, quantum computing may enhance the capacity of systems to evaluate best long-term outcomes. There’s no reason to believe that quantum computing will make ethical systems easier to create, however. And if quantum computing doesn’t allow for ready examination of multiple outcomes, then it would be no better or worse than conventional systems.”

Gary A. Bolles, chair for the future of work at Singularity University, responded, “We might as well ask if faster cars will allow us to go help people more quickly. Sure, but they can also deliver bad actors to their destination faster, as well. The quantum computing model lends itself to certain processes that will eventually blow past traditional microprocessors, such as completely new forms of encryption. Those methods, and the products created using them, could enable unbreakable privacy. Or they could be used to circumvent traditional approaches to encryption and create far more risk for anyone depending on traditional computing systems. As Benjamin Bratton presciently discusses in ‘The Stack,’ if we don’t specifically create technology to help us manage the complexity of technology, that complexity alone will ensure that only a rarefied few will benefit.”

A journalist and industry analyst expert in AI ethics said, “Quantum is going to be made available via the cloud because of the cooling requirements. A lot of innovation has already happened, but in the next decade there will be major advancements. It will break cybersecurity as we know it today. Humans need to be in the loop. However, they will likely find themselves out of the loop unless safeguards are built into the system. AI can already do many tasks several orders of magnitude faster than humans. Quantum computing will add yet more orders of speed magnitude.”

A professor of digital economy based in Europe responded, “The fascination with quantum computing means that technology companies will do a lot of work on it without being too concerned about how many of these new inventions will facilitate human life. The emphasis will remain on monetizing this frontier and enabling AI that is less guided by human interventions. In effect, these technologies will be more error-prone, and as such they will unleash even more ethical concerns as they unravel through time. Its speed of calculation will be matched by glitches that will require human deliberation.”

Ibon Zugasti, futurist, strategist and director with Prospektiker, wrote, “Artificial intelligence will drive the development of quantum computing, and then quantum computing will further drive the development of artificial intelligence. This mutual acceleration could grow beyond human control and understanding. Scientific and technological leaders, advanced research institutes and foundations are exploring how to anticipate and manage this issue.”

Joshua Hatch, a journalist who covers technology issues, commented, “It seems to me that every technological advance will be put to use to solve technological dilemmas, and this is no different. As for when we’ll see dramatic advances, I would guess over the next 10 years.”

A director of standards and strategy at a major technology company commented, “In general, our digital future depends on advances in two very broad, very basic areas: bandwidth and computer power. Most generally, I need to be able to complete tasks, and I need to be able to move information and generally communicate with others. Quantum computing is one of the promising areas for computing power.”

Ray Schroeder, associate vice chancellor of online learning, University of Illinois-Springfield, responded, “The power of quantum computing will enable AI to bridge the interests of the few to serve the interests of the many. These values will become part of the AI ethos, built into the algorithms of our advanced programs. Humans will continue to be part of the partnership with the technologies as they evolve – but this will become more of an equal partnership with technology rather than humans micromanaging technology as we have in the past.”

A complex systems researcher based in Australia wrote, “Once AI systems can start to self-replicate, then there will be an explosive evolution. I doubt it will become the fabled singularity (where humans are no longer needed), but there will be many changes.”

A technology developer/administrator commented, “Quantum computing may be a more efficient way to implement a neural network. That doesn’t change the final result, though. Just as I can compile my C for any architecture, an AI algorithm may be implemented on a different hardware platform. The results will be equivalent, though hopefully faster/cheaper to execute.”
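This respondent’s compiler analogy can be made concrete with a device-agnostic training loop. The PyTorch snippet below (the framework choice is ours, not the respondent’s) runs unchanged on a CPU or a GPU; in principle, the same separation of algorithm from hardware would apply to any future quantum back end.

```python
# Illustrative only: the same learning algorithm running on whichever hardware
# back end is available. Swapping the device changes speed and cost, not the
# logic of the algorithm or the kind of result it produces.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(8, 1).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(64, 8, device=device)
y = x.sum(dim=1, keepdim=True)   # a simple target the linear model can learn

for _ in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

print(f"final loss on {device}: {loss.item():.4f}")
```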

Moira de Roche, chair of IFIP IP3, noted, “AI systems rely on massive amounts of data. Quantum computing can help classify the data in meaningful ways. Quantum will boost machine learning.”

A futurist and consultant responded, “AI is about managing ever-larger datasets and machine learning. Quantum accelerates both.”

Eric Knorr, pioneering technology journalist and editor in chief of IDG, commented, “Yes, computing power a magnitude greater than currently available could raise the possibility of some emulation of general intelligence at some point. But how we apply that is up to us.”


Philip M. Neches, lead mentor at Enterprise Roundtable Accelerator and longtime trustee at California Institute of Technology, commented, “I expect cost-effective quantum computing hardware to emerge by 2030. Programming will remain a work-in-progress for some decades after 2030.”

Nigel Cameron, president emeritus at the Center for Policy on Emerging Technologies, commented, “It’s hard to predict the timeline, even though it does seem inevitable that quantum systems will dominate. It’s a tantalizing idea that we can just build ethics into the algorithms. Some years back, the Department of Defense issued a strange press release in defense of robotic warfare that suggested it would be more humane, since the Geneva Conventions could be built into the programming. I’m fascinated, and horrified, by the experiences of military drone operators playing de facto video games before going home for dinner after taking out terrorists on the other side of the world. A phrase from a French officer during our high-level, super-safe (for the U.S.) bombing of Serbia comes to mind: If a cause is worth killing for, it has to be worth dying for. The susceptibility of our democracy to various forms of AI-related subversion could lead to a big backlash. I remember the general counsel of Blackberry, a former federal prosecutor, saying that ‘we have yet to have our cyber 9/11.’ When I chaired the GITEX conference in Dubai some years back, we had a huge banner on the stage that said, I think, ‘Our Digital Tomorrow.’ In my closing remarks, I suggested that, unless we get a whole lot more serious about cybersecurity, one big disaster, say, a hacked connected car system that leaves 10,000 dead and 100,000 injured by making a million cars turn left at 8:45 one morning, will give us an Analog Tomorrow instead.”

Richard Lachmann, professor of political sociology at the State University of New York-Albany, said, “Whatever quantum computing achieves, it will be created by humans who serve particular interests, either for corporations of making profit or for governments of controlling populations. So, humans will be in the loop, but not all humans, most likely just those with money and power. Those people always work to serve their own interests, and so it is unrealistic to expect that the AI systems they create or contract to be created will be ethical. The only hope for ethical AI is if social movements make those demands and keep up the pressure to be able to see what is being created and to impose controls.”

The digital minister for a Southeast Asian nation-state said, “The problem with programming anything is the programmer. If they are not ethical, their system will not be.”

A professor of government at one of the world’s leading universities said, “Ethical efforts will occur in parallel with efforts that are not. The question is not whether quantum computing will assist in building ethical AI but whether it will significantly retard less-favorable developments.”

A research director for a major university center investigating the impact of digital evolution on humanity said, “Computing power for AI is advancing faster than Moore’s law. There have been recent breakthroughs in quantum computing publicized by Google and other companies. However, although system performance may improve, transparency may not – such systems may become even more complicated, unintelligible and more difficult to regulate.”

Shel Israel, Forbes columnist and author of many books on disruptive technologies, commented, “Quantum computing does not change the principles of computing. But, in theory, it allows computers to solve problems and perform faster by orders of magnitude. They will be smarter because AI is starting to improve exponentially. Once again, the computing itself will be neither good nor evil. That is up to those who develop, sell and use the technologies. Perhaps gunmakers intend them for defense, but that does not stop thousands and thousands of human deaths and animals being killed just for the fun of it.”

Andrea Romaoli Garcia, an international lawyer actively involved with multistakeholder activities of the International Telecommunication Union and Internet Society, said, “Classical computers have limitations, and quantum computers are necessary to allow the ultimate implementations of AI and machine learning. However, ethical regulation and laws are not keeping up with advances in AI and are not ready for the arrival of quantum computing. Quantum’s capability to process huge volumes of data will create a huge profit center for corporations, and this has typically led them to move quickly and not always ethically. It also allows bad actors to operate freely. Ethical AI should be supported by strong regulatory tools that encourage safe technological advancement. If not, we will face new and dangerous cyber threats.”

Maja Vujovic, a consultant for digital and ICT at Compass Communications, noted, “Quantum computing will prove quite elusive and hard to measure and therefore will progress slowly and painstakingly. Combining two insufficiently understood technologies would not be prudent. Perhaps the right approach would be to couple each with blockchain-based ledgers, as a way to track and decode their black-box activity.”

Monica Murero, director, E-Life International Institute and associate professor in Communication and New Technologies at the University of Naples Federico II, noted, “A quantum computing superpower may somewhat assist in creating ethical artificial intelligence systems that help regulate, evaluate and ‘control’ AI in-out process. But I do not think that a cool technological solution is enough or is the key. In the near future, society will rapidly change thanks to AI and quantum computing. It’s like reorganizing society. We need, as a community, to work together and rewrite the fundamental rules of coexistence that go well beyond ethical considerations. A sort of Rousseau’s new social contract: An AIQC contract. We need the means to enforce the new rules because quantum computing super-power can be extremely attractive for governments and big companies. Think about generating fake news at quantum computing super-power: unacceptable. Now think about quantum computing fighting fake news: pretty cool. My view of quantum computing in the next decade is systemic. Quantum computing can somewhat help an ethical development of AI if we regulate it. I see quantum computing super-power to have the potential of solving (faster) many complex scientific problems – in health care, for example. But I also see this technology being able to break ‘normal’ encryption systems that are currently protecting our society around the world. I also see a developing business to make quantum computing and machine learning run and then sell ‘the antidote’ to protect our systems at a fair price to cure the problem: Quantum-safe cryptography blockchain. It’s like a computer virus and the antivirus business. We truly have to urgently work as a society to regulate our ecosystem and arrive in the next decade by planning in advance rather than by going along with the outcomes.”

A military strategy and technology director responded, “Quantum will evolve. The timescale is uncertain, but my gut sense is quantum computing will emerge in a significant way in the early to mid-2030s. How much it will assist in creating AI appears to be dependent on the nature of the AI. Quantum computing may help in complex pattern recognition.”

The head of research at a major U.S. wireless communications trade association responded, “It is likely that quantum computing will evolve, and that it might be deployed by those hoping to build ethical AI, but that those responsible for implementing the AI systems will either underrate its importance in the nominally neutral systems being deployed by local governments and private-sector institutions or consider it irrelevant or even hostile to the intended uses of the non-neutral monitoring and control systems being developed for use by state and nonstate institutions. Those who may not underrate its importance will not be those with the decision-making power with respect to its implementation. Ethical individuals are essential but will be marginalized by significant decision-makers.”

An expert in learning technologies and digital life wrote, “Many folks, including experts, still don’t know what they think about quantum computing and how to think about quantum computing in relation to AI, much less about the possibilities of its assistance with ethical AI. The theoretical musings on the subject cover the waterfront of exploratory communication among experts and amateur experts. Humans will still be in the loop as AI systems are created and implemented, assuming we don’t create our own destruction device, which we are perfectly capable of doing through ignorance, fatigue, lack of care, existing unethical practice, etc. A crisis can help an evolution unfold because of great need(s), but crisis-driven thinking and feeling are not always rational enough to benefit the changes needed.”

The chief technology officer for a technology strategies and solutions company said, “This isn’t a technical question. It’s about the people charged with research and development. I hope no one has cause to repeat Robert Oppenheimer’s thought after the first atomic bomb exploded.”

Responses from those who said quantum computing is somewhat unlikely or very unlikely to assist in working toward ethical design of artificial intelligence

David Karger, professor at MIT’s Computer Science and Artificial Intelligence Laboratory, said, “Quantum computing is, in public, being used as shorthand for ‘really fast computers.’ But that’s not what it is. Quantum computers are highly specialized devices that are good at very specific tasks such as factoring. There’s a small chance these computers will have a significant impact on cryptography by 2030 (I doubt it), but I see almost no chance that they will improve our ability to solve complex machine-learning problems, much less have any impact on our understanding of knowledge representation or creativity or any of the other key attributes of natural intelligence that we have been trying to understand and emulate in machines for decades. Finally, even if we do somehow create super-fast computers, they still won’t help us with the key challenge in the design of ethical AI, which is to understand ethics. After thousands of years, this is something people are still arguing about. Having faster computers won’t change the arguments one bit.”


Jim Spohrer, director of cognitive open technologies and the AI developer ecosystem at IBM, said, “Quantum computing is decades away from being practical. It will be important by 2040.”

Michael Wollowski, a professor of computer science at Rose-Hulman Institute of Technology and an expert in artificial intelligence, said, “Quantum computing is still in its infancy. In 15 or 20 years, yes, we can build real systems. I don’t think we will be able to build usable systems in 10 years. Furthermore, quantum computing is still a computational system. It is the software, or, in the case of statistical machine learning, the data, that makes a system ethical or not.”

Sam S. Adams, a 24-year veteran of IBM now working as a senior research scientist in artificial intelligence for RTI International, said, “Quantum computing, if and when it becomes a commercially scalable reality, will basically allow AI systems to consider vast high-dimensional alternatives at near-instantaneous speed. This will allow not only playing hyper-dimensional chess in real-time but consider the impact of being able to simulate an entire economy at high resolution faster than in real-time. Program trading run amok in financial markets has caused global economic crises before. Now, accelerate that risk by orders of magnitude. Again, too much opportunity to gain extreme wealth and power for bad actors to ignore. The threat/opportunity of QC already fuels a global arms race in cryptography and privacy. Ethics barely has a chair in the hallway – let alone at the table in the national war rooms. That said, if a cost and scale breakthrough allows for the widespread democratization of QC, then the playing field is leveled. What if a $30 Raspberry Pi/Q gave every device a quantum-supremacy-level capability?”

Charlie Kaufman, a security architect with Dell EMC, said, “Quantum computing may have an important influence on cryptography and in solving problems in physics and chemistry, and it might be used to accelerate AI if it is developed to solve those other problems, but AI doesn’t need it. AI will benefit from computation becoming cheaper and more parallel. In terms of hardware advances, the most important are likely to be in GPUs, FPGAs [field-programmable gate arrays] and customized CPUs.”

Dan S. Wallach, a professor in the systems group at Rice University’s Department of Computer Science, said, “Quantum computing promises speedups over classical computing in a very small number of circumstances. Probably the only such task of note today is that quantum computers have the potential to break cryptographic algorithms in widespread use today. Academic cryptographers are already hard at work on ‘post-quantum’ cryptography, which works today but is significantly less efficient than classical cryptosystems. Hopefully, by the time quantum computers are operational, we’ll have better substitutes ready. It is, of course, entirely possible that quantum computers will be able to someday accelerate the process of training machine learning models or other tasks that today are exceptionally computationally intensive. That would be fantastic, but it really has nothing to do with ethical vs. unethical AI. It’s just about spending less electricity and time to compute the same solution.”

John Smart, foresight educator, scholar, author, consultant and speaker, observed, “Quantum computing should be thought of like fusion. A high-cost, complex technology easily captured, slowed down and restricted by plutocrats. There’s nothing commodity about it. Human brains don’t use quantum computing. The real coming disruption is in neuro-inspired, self-improving AI. Quantum computing could definitely assist in building more brain-inspired systems, via simulation of neurodynamics. Simulation of biological and chemical processes to improve medicine, find new materials, etc., is the killer app.”

Glenn Edens, professor at Thunderbird School of Global Management, Arizona State University, previously a vice president at PARC, observed, “Quantum computing has a long way to go, and we barely understand it, how to ‘program’ it and how to build it at cost-effective scale. My point of view is that we will just be crossing those thresholds in 10 years’ time, maybe eight years. I’d be surprised (pleasantly so) if we got to commercial scale QC in five years. Meanwhile, AI and ML are well on the way to commercialization at scale, as well as custom silicon SoCs (system on chip) targeted to provide high-speed performance for AI and ML algorithms. This custom silicon will have the most impact in the next five to 10 years, as well as the continued progress of memory systems, CPUs and GPUs. Quantum computing will ‘miss’ this first wave of mass commercialization of AI and ML and thus will not be a significant factor. Why? It is possible that QC might have an impact in the 10- to 20-year timeframe, but it’s way too early to predict with any confidence (we simply have too much work ahead). Will humans still be in the loop? That is as much a policy decision as a pragmatic decision – we are rapidly getting to the point where synthetically created algorithms (be it AI, CA, etc.) will be very hard for humans to understand; there are a few examples that suggest we may already be to that point. Whether we can even create testing and validation algorithms for ML (much less AI) is a key question, and how will we verify these systems?”

Michael Richardson, open-source consulting engineer, responded, “It is very unlikely that a practical quantum computer will become available before 2030 that will be cheap enough to apply to AI. Will a big company and/or government manage to build a QC with enough qubits to factor current 2048-bit RSA keys easily? Maybe. At a cost that breaks the internet? Not sure. At a cost where it can be applied to AI? No. Will ML chips able to simulate thousands of neurons become very cheap? Yes, and the Moore’s Law for them will be very different because the power usage will be far more distributed. This will open many opportunities, but none of them are in the AI of science fiction.”

Neil Davies, co-founder of Predictable Network Solutions and a pioneer of the committee that oversaw the UK’s initial networking developments, commented, “Quantum computing only helps on algorithms where the underlying relationships are reversible; it has the potential to reduce the elapsed time for a ‘result’ to appear; it is not a magical portal to a realm where things that were intrinsically unanswerable suddenly become answerable. Where is the underlying theoretical basis for the evaluation of ethics as a function of a set of numerical values that underpin the process? Without such a framework, accelerating the time to get a ‘result’ only results in creating more potential hazards. Why? Because to exploit quantum computation means deliberately not using a whole swath of techniques, hence reducing the diversity (thus negating any self-correcting assurance that may have been latent).”
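Davies’s reference to reversibility reflects a standard fact about quantum computation: quantum circuits are built from unitary operations, which are invertible by construction,

$$U^{\dagger}U = I \;\Longrightarrow\; U^{-1} = U^{\dagger},$$

so only computations that can be cast in this reversible form are candidates for a quantum speedup. That constraint, not raw speed, is the limit he is pointing to.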

Kenneth A. Grady, adjunct professor at Michigan State University College of Law and editor of “The Algorithmic Society” on Medium, said, “Despite the many impressive advances of the entities pursuing quantum computing, it is a complicated, expensive and difficult-to-scale technology at this time. The initial uses will be high-end, such as military and financial, and key applications such as pharmaceutical development. Widespread application of quantum computing to enforce ethical AI will face many challenges that quantum computing alone cannot solve (e.g., what is ‘ethical,’ when should it be enforced). Those pursuing quantum computing fall into more than one category. That is, for every entity who sees its ‘ethical’ potentials, we must assume there is an entity who sees its ‘unethical’ potentials. As with prior technology races, the participants are not limited to those who share one ideology.”

Chris Savage, a leading expert in legal and regulatory issues based in Washington, D.C., noted, “AI has something of an architecture problem: It is highly computationally intensive (think Alexa or Siri), to such a degree that it is difficult to do onsite. Instead, robust connections to a powerful central processing capability (in the cloud) are necessary to make it work, which requires robust high-speed connectivity to the end points, which raises problems of latency (too much time getting the bits between the endpoint and the processing) for many applications. Quantum computing may make the centralized/cloud-based computations more rapid and thorough, but it will have no effect on latency. And if we can’t get enough old-style Boolean silicon-based computing power out to the edges, which we seem unable to do, the prospect of getting enough quantum computing resources to the edges is bleak. As to ethics, the problem with building ethical AI isn’t that we don’t have enough computational power to do it right (an issue that quantum computing could, in theory, address), it’s that we don’t know what ‘doing it right’ means in the first place.”

Carol Smith, a senior research scientist in human-machine interaction at Carnegie Mellon University’s Software Engineering Institute, said, “Quantum computing will likely evolve to improve computing power, but people are what will make AI systems ethical … AI systems created by humans will be no better at ethics than we are and, in many cases, much worse, as they will struggle to see the most important aspects. The humanity of each individual, and the context in which significant decisions are made, must always be considered.”

Kevin T. Leicht, professor and head of the department of sociology at the University of Illinois-Urbana-Champaign, commented, “Relying on one technology to fix the potential defects in another technology suffers from the same basic problem: technologies don’t determine ethics. People, cultures and institutions do. If those people, cultures and institutions are strong, then the potential of getting more ethical outcomes is more likely than not. We simply don’t have that. In fact, relying on quantum computing to fix anything sounds an awful lot like expecting free markets to fix the problems created by free markets. This homeopathic solution has not worked with markets, so it is difficult to see how it will work with computing. So, let’s take an elementary example that may be more applicable to the English-speaking world than elsewhere. The inventor of an AI program seeks to make as much money as possible in the shortest amount of time, because that is the prevailing institutional and economic model they have been exposed to. They develop their AI/quantum computing platform to make ‘ethical decisions,’ but those decisions happen in a context where the institutional environment in which the inventor operates rewards the behaviors associated with making as much money as possible in the shortest amount of time. I ask you, given the initial constraint (‘The primary goal is to be a billionaire’), all of the ethical decisions programmed into the AI/quantum computing application will be oriented toward that primary goal and make ethical decisions around that.”

Paul Jones, professor emeritus of information science at the University of North Carolina, Chapel Hill, observed, “While engineers are excited about quantum computing, it only answers part of what is needed to improve AI challenges. Massive amounts of data, massive amounts of computing power (not limited to quantum as a source), reflexive software design, heuristic environments, highly connected devices, sensors (or other inputs) in real time are all needed. Quantum computing is only part of the solution. More important will be insight as to how to evaluate AI’s overall impact and learning.”

Glynn Rogers, retired, previously senior principal engineer and a founding member at the CSIRO Centre for Complex Systems Science, said, “Computer power is not the fundamental issue. What we mean by AI, what expectations we have of it and what constraints we need to place on it are the fundamental issues. It may be that implementing AI systems that satisfy these requirements will need the level of computing power that quantum computing provides if it is the full understanding of the implications of quantum mechanics that will provide insights into the nature of intelligence, not quantum computing technology itself.”

A telecommunications and internet industry economist, architect and consultant with over 25 years of experience responded, “Quantum computing will develop, yes, but will it benefit ethical AI systems? AI systems will, once fully unleashed, have their own biology. I do not think we understand their complex system interaction effects any more than we understand pre-AI economics. All of our models are at best partial.”

An ethics expert who served as an advisor on the UK’s report on “AI in Health Care” responded, “Quantum computing will take an already barely tractable problem (AI explainability) and make it completely intractable. Quantum algorithms will be even less susceptible of description and verification by external parties, in particular laypeople, than current statistical algorithms.”

Gregory Shannon, chief scientist at the CERT software engineering institute at Carnegie Mellon University, wrote, “I don’t see the connection between quantum computing and AI ethics. They seem very orthogonal. QC in 2030 might make building AI models/systems faster/more efficient, but that doesn’t impact ethics per se. If anything, QC could make AI systems less ethical because it will still take significant financial resources in 2030 for QC. So, a QC-generated model might be able to ‘hide’ features/decisions that non-QC capable users/inspectors would not see/observe due to their limited computational resources.”

Micah Altman, a social and information scientist at MIT, said, “Quantum computing will not be of great help in building ethical AI in the next decade, since the most fundamental technical challenge in building ethical systems is in our basic theoretical understanding of how to encode within algorithms and/or teach ethical rules to learning systems. Although QC is certain to advance, and likely to advance substantially, such advances are [also] likely to apply to specific problem domains that are not closely related, such as cryptography and secure communication, and solving difficult search and optimization problems. Even if QC advances in a revolutionary way, for example by (despite daunting theoretical and practical barriers) exponentially speeding up computing broadly or even to the extent of catalyzing the development of self-aware general artificial intelligence, this will serve only to make the problem of developing ethical AI more urgent.”

A distinguished professor of computer science and engineering said, “Quantum computing might be helpful in some limited utilitarian ethical evaluations (i.e., pre-evaluating the set of potential outcomes to identify serious failings), but I don’t see most ethical frameworks benefiting from the explore/recognize model of quantum computing.”

Michael G. Dyer, a professor emeritus of computer science at UCLA and an expert in natural language processing, responded, “What quantum computing offers is an incredible speed-up for certain tasks. It is possible that some task (e.g., hunting for certain patterns in large datasets) would be a subfunction in a larger classical reasoning/planning system with moral-based reasoning/planning capabilities. If we are talking simply about classification tasks (which artificial neural networks, such as ‘deep’ neural networks, already perform) then, once scaled up, a quantum computer could aid in classification tasks. Some classification tasks might be deemed ‘moral’ in the sense that [for example] people would get classified in various ways, affecting their career outcomes. I do not think quantum computing will ‘assist in building ethical AI.’”
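Dyer’s distinction, between a possibly quantum-accelerated classification subroutine and the surrounding system where moral judgments actually reside, can be sketched in code. Everything below, including the names and thresholds, is our hypothetical illustration rather than anything Dyer proposed.

```python
# Illustrative only, with hypothetical names: a classifier (quantum-accelerated
# or not) produces scores, and any "ethics" lives in the ordinary, human-written
# rules around it -- here, routing low-confidence cases to a human reviewer.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    score: float  # output of some classifier; a quantum subroutine might only compute this faster

def decide(applicant: Applicant, threshold: float = 0.7, margin: float = 0.1) -> str:
    # The moral weight is in rules like these, not in the speed of the classifier.
    if abs(applicant.score - threshold) < margin:
        return "refer to human reviewer"
    return "accept" if applicant.score >= threshold else "reject"

for a in [Applicant("A", 0.95), Applicant("B", 0.72), Applicant("C", 0.40)]:
    print(a.name, decide(a))
```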

An anonymous respondent observed, “I expect that quantum computing will evolve to assist in building AI. The sheer increase in computation capacity will make certain problems tractable that simply wouldn’t be otherwise. However, I don’t know that these improvements will be particularly biased toward ethical AI. I suppose there is some hope that greater computing capacity (and hence lower cost) will allow for the inclusion of factors in models that otherwise would have been considered marginal, making it easier in some sense to do the right thing.”

John Harlow, smart cities research specialist at the Engagement Lab @ Emerson College, noted, “We don’t really have quantum computing now, or ethical AI, so the likeliest scenario is that they don’t mature into being and interact in mutually reinforcing ways. Maybe I’m in the wrong circles, but I don’t see momentum toward ethical AI anywhere. I see momentum toward effective AI, and effective AI relying on biased datasets. I see momentum toward banning facial recognition technologies in the U.S. and some GDPR movement in Europe about data. I don’t see ethicists embedded with the scientists developing AI, and even if there were, how exactly will we decide what is ethical at scale? I mean, ethicists have differences of opinion. Clearly, individuals have different ethics. How would it be possible to attach a consensus ‘ethics’ to AI in general? The predictive policing model is awful: Pay us to run your data through a racist black box. Ethics in AI is expansive, though (https://anatomyof.ai/). Where are we locating AI ethics that we could separate it from the stack of ethical crises we have already? Is it ethical for Facebook workers to watch traumatic content to moderate the site? Is it ethical for slaves to mine the materials that make up the devices and servers needed for AI? Is it ethical for AI to manifest in languages, places and applications that have historically been white supremacist?”

John L. King, a professor at the University of Michigan School of Information, commented, “There could be earth-shattering, unforeseen breakthroughs. They have happened before. But they are rare. It is likely that the effect of technological advances will be held back by the sea anchor of human behavior (e.g., individual choices, folkways, mores, social conventions, rules, regulations, laws).”

Douglas Rushkoff, well-known media theorist, author and professor of media at City University of New York, wrote, “I am thinking, or at least hoping, that quantum computing is further off than we imagine. We are just not ready for it as a civilization. I don’t know if humans will be ‘in the loop’ because quantum isn’t really a cybernetic feedback loop like what we think of as computers today. I don’t know how much humans are in the loop even now, between capitalism and digital. Quantum would take us out of the equation.”
