ABSTRACT
This essay examines Artificial Intelligence (AI) through the lens of Responsible Innovation (RI), especially the dimension of anticipatory governance. The vast potential of AI has sparked a heated debate about how to regulate it. I argue that RI could be a useful framework in this endeavor and offer four suggestions for the anticipatory governance of AI. First, recognizing AI’s nature as a General-Purpose Technology could help predict its future. Second, issues of equity and equality in AI need urgent attention. Third, sci-fi could be very helpful for developing scenarios, raising public awareness, and encouraging discussions about AI. Fourth, design principles established in Human-Computer Interaction could be applied to help formulate AI policies.
Keywords: Artificial Intelligence, Responsible Innovation, Anticipatory Governance, Science Fiction, Design Thinking, General-Purpose Technology, Technological Unemployment.
INTRODUCTION
Artificial Intelligence (AI) is a collection of disciplines that use machines to draw human-level inferences, perform physical or cognitive tasks automatically, and carry out other activities that require intelligence. The term AI was coined by John McCarthy along with Marvin Minsky, Nathan Rochester, and Claude Shannon in the proposal for the Dartmouth Conference in 1956, where AI was defined as the attempt to “find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves” (McCarthy et al. 1955). Over more than 60 years of development, AI has been through several “AI Winters,” periods when investment fell and public interest waned because the field failed to fulfill promises made by overoptimistic experts.
In the past decade, however, AI has returned to the center of public attention because its performance began to take off: recent technological developments have created near-perfect conditions for AI to thrive, including massive training data, unprecedented computing power, vast storage, and sophisticated algorithms (Kaplan 2016). A series of events and applications demonstrating AI’s power and potential has astonished and at the same time frightened people, for example, AlphaGo, AlphaZero, Siri, Amazon Alexa, and IBM Watson. These systems either provide us with valuable services or beat humans at intelligent tasks once thought uniquely human.
Given AI’s perceived vast potential, concerns about its risks are emerging. In both public and academic discussions, I identify two types of concerns—short-term and long-term. The short-term concern is that AI entails risks such as algorithmic bias, technological unemployment, and ubiquitous surveillance, scenarios that jeopardize human values such as privacy, equity, equality, and autonomy. In fact, such scenarios are already happening: an Amazon hiring algorithm favored male over female candidates (Dastin 2018), the FBI’s facial recognition technology is biased against people with darker skin (Garvie 2016), and risk assessment algorithms used by courts are biased against minorities (Tan et al. 2017). Long-term concerns, on the other hand, focus on more speculative, existentially risky “Singularity” scenarios in which AIs become smarter than humans, take over the world, and perhaps destroy humanity entirely (Bostrom 2016).
With these concerns and issues arising, calls for better governance of AI are heard everywhere. Debates about whether and how AI should be regulated have been sparked, resonating with an earlier discussion in the 1980s about the social control of technology, especially the “dilemma of control” described by David Collingridge. As he put it, a technology is easy to control early in its development, but at that point little is known about how it might affect society, so it is uncertain what to control, how to control it, and how to avoid killing its beneficial potential; on the other hand, if we wait until its influences, especially the negative ones, become obvious, control becomes tough, slow, and costly because the technology has already become “locked in” to society through path dependence (Collingridge 1980).
Many people now believe that AI is at an early stage of its development, and it is our moral imperative to discuss how AI can be better governed at this stage (Tegmark 2017). Many attempts have been made in both public and academic discourse. For example, the Future of Life Institute, an NGO co-founded by MIT cosmologist Max Tegmark, has organized discussions, conferences, and grant programs to facilitate AI safety research and promote the concept of “beneficial AI.” At the same time, the legal profession is looking for ways to regulate AI-powered technologies, for instance, the efforts by Georgetown Law to urge the FBI to be more transparent about how citizens’ pictures are used in its facial recognition program (Garvie 2016), and the discussion about whether driverless vehicles should be treated as legal persons (Čerka, Grigienė, and Sirbikytė 2017). Governments are also introducing regulations on data privacy, for instance, the EU’s General Data Protection Regulation (GDPR) (Laybats and Davies 2018).
RESPONSIBLE INNOVATION OF AI
In the conversation about technology governance, Responsible Innovation (RI) is a new concept that I consider a fitting framework for AI. RI describes scientific research and technological development processes that take the social and environmental impacts of new technologies into account. It stems from debates about broadening the basis for technology assessment and aligning scientific innovation with social goals (Stilgoe 2016). RI prompts us to consider not only the consequences but also the “very purpose” of innovations (Owen et al. 2013). RI can be translated into four integrated dimensions—anticipatory, reflective, inclusively deliberative, and responsive. Anticipatory governance describes and analyzes the potential impacts, intended or unintended, of the technology of interest, using methodologies developed in futures studies, technology assessment, and scenario development. Reflexivity means reflecting on the “underlying purposes, motivations, and potential impacts.” Inclusive deliberation invites perspectives from the public and plural stakeholders, opening up questions through dialogue, engagement, and debate. Responsiveness focuses on influencing the “subsequent trajectory” of innovation based on the outcomes of the previous three processes (Owen et al. 2013).
Among the four dimensions of RI, anticipatory governance is a starting point for all other discussions, especially for technologies still in their infancy such as AI. On the one hand, it is of great significance because it can not only provide an empirical toolkit for future policy-making but also address issues in advance, before they are locked in. On the other hand, it is also difficult and thorny because of the considerable challenge of uncertainty: we simply do not know what will happen in the future.
New theories and methods have been developed to cope with the uncertainty and complexity of anticipating the future, for example, real-time technology assessment (Guston and Sarewitz 2002), constructive technology assessment (Fisher and Rip 2013), and the Ethical, Legal and Social Implications (ELSI) program (Carvalho 2012). Ongoing practices are also providing empirical evidence on how to incorporate broader and more democratic perspectives into anticipatory governance, such as the consensus conferences and scenario workshops in Denmark (Andersen and Jæger 1999). Many of these efforts attempt to formulate frameworks that not only work in the contexts in which they were developed, such as nanotechnology and human genome research, but are also generalizable to other emerging technologies. These studies provide valuable insights for thinking about the anticipatory governance of AI.
SUGGESTIONS ON ANTICIPATORY GOVERNANCE OF AI
In this essay, I offer four directions of thinking that could help anticipate the potential impacts of AI under uncertainty.
First, recognizing AI’s nature as a General-Purpose Technology (GPT) could help anticipate its future. GPTs are technologies that can potentially affect “many sectors of the economy” (Wright 2000). A GPT should be “pervasive, improving over time, and able to spawn new innovations” (Brynjolfsson and McAfee 2014); the steam engine and electricity are classic examples. AI has the potential to become a GPT because it can be incorporated into many areas, its power can grow as computing power and training data grow, and it can not only help with innovation but also improve itself. Thus, when anticipating the future of AI, we should not be limited by the applications seen so far but consider the possibility that AI could be used in almost every sector of the economy. AI has already been integrated into many industries, such as agriculture, education, transportation, the military, law enforcement, retail, and medical care, and these deserve great attention when predicting its future. However, three further aspects should also be appreciated. First, existing areas where AI has not yet proved extensively useful should not be overlooked, for example, coal mining, gene editing, fashion design, and personal training. Second, AI could be integrated into areas that do not exist now but might emerge in the future, for example, astrobiology, asteroid mining, and planet-scale geoengineering. Third, the co-evolution and convergence of AI and other technologies could breed entirely new areas barely imaginable today, for example, psychohistory as envisioned by Isaac Asimov in his famous Foundation Trilogy (Asimov 1951). In this way, the potential of AI can be genuinely recognized, which would help tremendously with the practice of anticipatory governance, for instance, in scenario development for citizen workshops.
Second, issues of equality and equity deserve scrupulous attention because AI might make them far more pronounced. Ideas from the anticipatory governance of other emerging technologies can be borrowed. For example, Cozzens offered a two-dimensional framework for building equity and equality into nanotechnology using “pro-poor,” “fairness,” and “equalizing” approaches. The framework consists of a vertical dimension, covering groups with different income and wealth, and a horizontal dimension, covering groups with different characteristics, such as men and women; it also emphasizes creating mid-wage, local jobs (Cozzens 2010). This framework could work well for AI too. AI is expected to cause technological unemployment because it could automate most routine jobs in the short term (Frey and Osborne 2013). Thus, one of the critical questions is how to deal with this unemployment. Should we protect old jobs from automation, focus on creating new jobs, or subsidize people who lose their jobs to AI with a universal basic income (UBI) (Murray 2016)? Also, how do we make sure the benefits and risks of AI are distributed evenly among different groups of people? And how do we prevent algorithmic bias against minorities so as to increase equality? A minimal sketch of what such a bias check could look like follows below.
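To make the question of algorithmic bias concrete, one common quantitative check compares a model’s selection rates across demographic groups. The Python sketch below is only an illustration under stated assumptions: the audit data are hypothetical, and the 0.8 threshold reflects the “four-fifths rule” used in US employment discrimination practice; it is not drawn from any of the works cited above.

    from collections import defaultdict

    def selection_rates(decisions):
        """Fraction of positive decisions per group.
        `decisions` is a list of (group, selected) pairs, where
        `selected` is True if the model recommended the candidate."""
        totals = defaultdict(int)
        positives = defaultdict(int)
        for group, selected in decisions:
            totals[group] += 1
            if selected:
                positives[group] += 1
        return {g: positives[g] / totals[g] for g in totals}

    def disparate_impact(decisions, protected, reference):
        """Ratio of the protected group's selection rate to the
        reference group's. Ratios below 0.8 are commonly treated as
        evidence of adverse impact (the "four-fifths rule")."""
        rates = selection_rates(decisions)
        return rates[protected] / rates[reference]

    # Hypothetical audit data: (group, did the model recommend hiring?)
    audit = ([("female", True)] * 20 + [("female", False)] * 80
             + [("male", True)] * 35 + [("male", False)] * 65)

    ratio = disparate_impact(audit, protected="female", reference="male")
    print("Selection rates:", selection_rates(audit))
    print("Disparate impact ratio: %.2f" % ratio)  # 0.57, well below 0.8

Of course, such an audit captures only one narrow, quantitative slice of equity; deciding which fairness criterion applies, and to which groups, is itself a governance question of the kind RI asks us to deliberate inclusively.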
Third, I argue that science fiction can be a powerful tool for the anticipatory governance of AI. Existing AI sci-fi works can help develop future scenarios in discussions. There is a vast repository of sci-fi works, including films, novels, comics, and animations, that can be used to improve the engagement of target audiences, kindle public interest, and provide materials for analysis. Many of them have evidently sparked serious, heated discussions about AI among both the public and academia, for instance, The Matrix, Ghost in the Shell, The Terminator, 2001: A Space Odyssey, Blade Runner, and A.I. Artificial Intelligence. Indeed, the general public’s perception of AI is arguably shaped by sci-fi films. Many sci-fi works also involve policy and governance topics and can be read as social experiments. For example, the policy of exiling all replicants to off-world colonies in Blade Runner triggered revolts. Short Circuit 2 (1988) explored the possibility of granting an AI citizenship. A.I. Artificial Intelligence depicted a scenario in which AIs develop human-level affection and emotion but are not treated humanely. These are all examples of how AI governance could go right or wrong.

The thorny issue of “whose future matters” in foresight and STS (Selin 2008) can also be discussed extensively through sci-fi. For example, RoboCop presented a cyborg who was supposed to be detached from his previous life but retained residual memories that influenced his judgment. Ex Machina told the story of a highly intelligent AI who manipulated human emotions and escaped because she longed for freedom. These works raise valuable questions: whose future matters, and further, who gets to decide whose future matters? Should AIs be treated as persons? Should AIs be considered as subjects or objects in future governance?

Finally, sci-fi written for academic purposes can be used as a tool or experimental instrument to facilitate debates about anticipatory governance. Other disciplines show that such fiction writing can serve scholarly ends; for example, Charis Cussins’s fictional piece “Confessions of a Bioterrorist” served as an instrument for an academic discussion about gender and reproductive rights (Cussins 1999). Some efforts in this direction are already under way. For instance, AI Policy Futures, a project initiated by the Center for Science and the Imagination (CSI) at Arizona State University (ASU), aims to draw insights for AI policy-making from original sci-fi stories (“Policy Futures” n.d.).
Fourth, I argue that design principles developed in Human-Computer Interaction (HCI) can offer helpful insights for the design of AI policy, and technology policy in general. The World Bank’s 2015 report Mind, Society, and Behavior, inspired by recent developments in psychology and behavioral studies, proposes that policymakers pay attention to “the processes of mind” in policy design, putting forward three principles: thinking automatically, thinking socially, and thinking with mental models. HCI is a discipline that has extensively and successfully applied theories from psychological and behavioral studies, so, in my opinion, it can offer policymakers valuable insights. The principles from Donald Norman’s book The Design of Everyday Things are worth noting: visibility, feedback, constraints, mapping, consistency, and affordance. For Norman, the fundamental principle of design is to “provide a good conceptual model,” one that allows people to predict the effects of their actions and ensures that users’ mental models match the design model (Norman 2002). Another useful set of design principles, proposed by Lidwell, Holden, and Butler, includes affordance, hierarchy, feedback loops, forgiveness, layering, mental models, modularity, and prototyping, among others (Lidwell, Holden, and Butler 2003). Each of these principles deserves a lengthy essay on its policy-making implications; given the limits of this essay, I leave that discussion for another piece.
CONCLUSION
In conclusion, Responsible Innovation (RI) deserves careful attention when thinking about how to regulate AI. Among the four dimensions of RI, anticipatory governance is the starting point, which requires predicting the future of AI development. I have argued in this essay that recognizing AI’s nature as a General-Purpose Technology could help us predict that future, and that issues of equity and equality in AI need urgent attention. When predicting the future of AI, sci-fi works can be very constructive for developing scenarios, raising public awareness, and encouraging discussion. In addition, design principles such as affordance and mental models, which have proved useful in HCI, could be applied to help formulate science and technology policies.
REFERENCES
Andersen, Ida-Elisabeth, and Birgit Jæger. 1999. “Scenario Workshops and Consensus Conferences: Towards More Democratic Decision-Making.” Science and Public Policy 26 (October): 331–40. https://doi.org/10.3152/147154399781782301.
Bostrom, Nick. 2016. Superintelligence: Paths, Dangers, Strategies. Reprint edition. Oxford: Oxford University Press.
Brynjolfsson, Erik, and Andrew McAfee. 2014. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. First edition. New York: W.W. Norton & Company.
Čerka, Paulius, Jurgita Grigienė, and Gintarė Sirbikytė. 2017. “Is It Possible to Grant Legal Personality to Artificial Intelligence Software Systems?” Computer Law & Security Review 33 (5): 685–99. https://doi.org/10.1016/j.clsr.2017.03.022.
Cozzens, Susan E. 2010. “Building Equity and Equality into Nanotechnology.” In Nanotechnology and the Challenges of Equity, Equality and Development, 433–46. Yearbook of Nanotechnology in Society, vol. 2. Dordrecht: Springer.
Cussins, Charis Thompson. 1999. “Confessions of a Bioterrorist: Subject Position and Reproductive Technologies.” In Playing Dolly: Technocultural Formations, Fantasies, and Fictions of Assisted Reproduction. New Brunswick, NJ: Rutgers University Press.
Dastin, Jeffrey. 2018. “Amazon Scraps Secret AI Recruiting Tool That Showed Bias against Women.” Reuters, October 9, 2018. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.
Collingridge, David. 1980. The Social Control of Technology. New York: St. Martin’s Press.
Fisher, E., and A. Rip. 2013. “Responsible Innovation: Multi-Level Dynamics and Soft Intervention Practices.” In Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society, 165–83. Chichester: John Wiley & Sons. http://ebookcentral.proquest.com/lib/asulib-ebooks/detail.action?docID=1166329.
Frey, Carl, and Michael Osborne. 2013. “The Future of Employment: How Susceptible Are Jobs to Computerization?” Oxford University. http://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf.
Garvie, Clare. 2016. The Perpetual Line-up: Unregulated Police Face Recognition in America. Washington, DC: Georgetown Law, Center on Privacy & Technology.
Guston, David H., and Daniel Sarewitz. 2002. “Real-Time Technology Assessment.” Technology in Society 24 (1): 93–109. https://doi.org/10.1016/S0160-791X(01)00047-1.
Asimov, Isaac. 1951. The Foundation Trilogy: Three Classics of Science Fiction. Garden City, NY: Doubleday.
Kaplan, Jerry. 2016. Artificial Intelligence: What Everyone Needs to Know. New York: Oxford University Press.
Laybats, Claire, and John Davies. 2018. “GDPR: Implementing the Regulations.” Business Information Review35 (2): 81–83. https://doi.org/10.1177/0266382118777808.
Lidwell, William, Kritina Holden, and Jill Butler. 2003. Universal Principles of Design. Gloucester, Mass: Rockport.
McCarthy, John, Marvin Lee Minsky, Nathan Rochester, and Claude Shannon. 1955. “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.” http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html.
Murray, Charles. 2016. “A Guaranteed Income for Every American.” Wall Street Journal, June 3, 2016, sec. Life. http://www.wsj.com/articles/a-guaranteed-income-for-every-american-1464969586.
Norman, Donald. 2002. The Design of Everyday Things. New York: Basic Books.
Owen, Richard, J. Stilgoe, P. Macnaghten, M. Gorman, E. Fisher, and D. Guston. 2013. “A Framework for Responsible Innovation.” In Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society, 27–50. Chichester, West Sussex: John Wiley & Sons.
“Policy Futures.” n.d. Accessed December 7, 2018. https://www.policyfutures.org/.
Selin, Cynthia. 2008. “The Sociology of the Future: Tracing Stories of Technology and Time.” Sociology Compass 2 (6): 1878–95. https://doi.org/10.1111/j.1751-9020.2008.00147.x.
Stilgoe, Jack. 2016. “Geoengineering as Collective Experimentation.” Science and Engineering Ethics 22 (3): 851–69. https://doi.org/10.1007/s11948-015-9646-0.
Tan, Sarah, Rich Caruana, Giles Hooker, and Yin Lou. 2017. “Auditing Black-Box Models Using Transparent Model Distillation with Side Information.” arXiv:1710.06169 [cs, stat], October. http://arxiv.org/abs/1710.06169.
Tegmark, Max. 2017. Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Knopf.
Carvalho, Tito. 2012. “The Human Genome Project and ELSI: The Imperative of Technology and the Reduction of the Public Ethics Debate.” Master’s thesis, Arizona State University. http://hdl.handle.net/2286/e9c61r4dbun.
Wright, Gavin. 2000. “General Purpose Technologies and Economic Growth” (book review). Journal of Economic Literature 38 (1): 161–62.