When Edmond de Belamy, a blurry portrait, sold for $432,500 at Christie’s in October 2018, a heated discussion broke out. Some celebrated the debut of AI art; others warned of the legal ambiguity of its authorship or questioned whether it was real art at all (Elgammal, 2018; Epstein et al., 2020; Turnbull, 2020). The painting was created by a Paris-based artist collective using a generative adversarial network (GAN), a type of AI framework that can generate new data with features similar to those of its training data, such as human faces that look deceptively realistic (Goodfellow et al., 2014; Karras et al., 2019). In addition to GANs, many other recent AI advances appear to create or innovate on their own. For instance, OpenAI’s language model GPT-3 generates remarkably human-like and creative texts (Heaven, 2020). Protein structures predicted by DeepMind’s AlphaFold rivaled the accuracy of structures determined experimentally by methods such as X-ray crystallography, and the system was credited with solving a 50-year-old grand challenge in biology (Callaway, 2020; DeepMind, 2020; Mater & Coote, 2019; Wu et al., 2021).
All these advances appear to convincingly demonstrate that AI can create novel and useful knowledge and artifacts. When properly translated into industries, this ability to create is believed to augment innovation in a wide range of domains and stimulate enormous economic growth.
Innovation has been conceptualized as an adaptive search process over a space of combinatorial possibilities (Kauffman et al., 2000; Wagner & Rosen, 2014; Youn et al., 2015). In this context, a search problem means conducting specific algorithms over a well-defined search space. A search space consists of nodes and edges. Nodes represent possible innovations, which could be either combinations of existing knowledge or new knowledge created through standardized R&D. A directed edge between two nodes represents a possible path from one node to the other. The goal of the search process is to find nodes that meet certain criteria, in this case, nodes that are novel and useful, through a plausible trajectory that satisfies constraints such as resources, time, and available intellectual property (Kauffman et al., 2000; Macready et al., 1996). Given the complexity of the knowledge space, the search process can become costly, slow, and erratic. Some call it a needle-in-the-haystack problem and find it pervasive in many cutting-edge fields. As Agrawal et al. (2018) pointed out, current AI techniques, especially deep learning, are particularly suited to this type of problem because they automate feature extraction and significantly increase the ability to search complex, non-linear, high-dimensional spaces. Given the financial rewards, more R&D is expected to adopt AI with little human involvement (Cockburn et al., 2018).
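The search formalization above can be made concrete with a toy sketch. Below, a node is a bit-string encoding a combination of existing knowledge elements, directed edges add or drop one element, and a hypothetical "usefulness" landscape (per-element values plus pairwise interactions, both invented for illustration) stands in for the novelty-and-usefulness criteria; simple hill climbing then searches for a local peak. This is a minimal illustration of adaptive search on a rugged landscape, not any specific model from the cited literature.

```python
import random

random.seed(42)
N = 10  # number of existing knowledge elements; a node is a combination (bit-string)

# Hypothetical "usefulness" landscape: element values plus pairwise interactions.
weights = [random.uniform(0, 1) for _ in range(N)]
pair = {(i, j): random.uniform(-1, 1) for i in range(N) for j in range(i + 1, N)}

def fitness(node):
    """Score a combination; interactions make the landscape rugged."""
    active = [i for i, bit in enumerate(node) if bit == "1"]
    return (sum(weights[i] for i in active)
            + sum(pair[(i, j)] for i in active for j in active if i < j))

def neighbors(node):
    """Directed edges: add or drop one element from the combination."""
    return [node[:i] + ("1" if node[i] == "0" else "0") + node[i + 1:]
            for i in range(N)]

def hill_climb(start):
    """Follow improving edges until no neighbor is more useful (a local peak)."""
    current = start
    while True:
        best = max(neighbors(current), key=fitness)
        if fitness(best) <= fitness(current):
            return current
        current = best

start = "0" * N          # begin with no elements combined
peak = hill_climb(start)
```

Because interactions can be negative, the climb typically stops at a local peak rather than the global optimum, which is precisely why such search can become costly and erratic.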
Nevertheless, psychology and cognitive science theories suggest that creativity is extremely complicated, implying that innovation extends beyond merely solving search problems. Boden (1990) distinguished three types of creativity. Combinatorial creativity occurs when familiar ideas are combined in new ways. Exploratory creativity generates new ideas by exploring an established conceptual space and reaching points that have never been accessed before, such as inventing a new material for fighter jets through experimentation in wind tunnels. Transformational creativity reaches previously inaccessible points by changing the rules by which the conceptual space is constructed, resonating with Kuhn’s (1996) concept of paradigm shift. For example, Einstein’s general theory of relativity is a transformational creation that fundamentally changed the Newtonian conception of gravity. This taxonomy of creativity can be readily applied to innovation. Accordingly, we can distinguish three types of innovation – combinatorial innovation, exploratory innovation, and transformational innovation.
I argue that current AI is undoubtedly valuable in combinatorial and exploratory innovation, which rely heavily on search, but holds little promise for transformational innovation. As Agrawal et al. (2018) put it, AI can increase humans’ access to the existing knowledge stock and the ability to combine elements of that stock into valuable new ideas. With the expansion of human knowledge, individual innovators face “the burden of knowledge” (Jones, 2009). For example, a biologist cannot read and remember all the biology papers published every day before starting her work, let alone identify gaps and opportunities in the literature. Current mainstream AI, namely connectionist models, can help ease this burden and assist with innovation through searching. For example, AlphaFold predicts protein structures by searching the protein landscape for candidate structures that match its scoring functions (DeepMind, 2020). Similarly, GANs work essentially by searching for a sufficiently good “Generator,” starting from random noise. Another good example is AlphaGo’s move 37 while playing Go against Lee Sedol in 2016. When AlphaGo made that move, everyone was surprised: even though it was a legal move, it ran against heuristics established over 3,000 years of play and would have been unimaginable, and considered a mistake, if made by a human player. Fifteen moves later, move 37 proved to be a creative and decisive move that helped AlphaGo win the game (Menick, 2016). AlphaGo made this move because it searched many more possibilities than a human could, not to mention that human players, in their search process, would have excluded this move altogether (Zarkadakis, 2016). A well-designed system with proper AI components can even automate combinatorial and exploratory innovation with little human intervention.
For instance, adding a GAN into the design pipeline can significantly speed up the innovation process for a fashion company (Raffiee & Sollami, 2020; Tautkute et al., 2019; Zhu et al., 2017).
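The argument that machine search can surface moves a human heuristic would exclude can be illustrated at toy scale. Go is far too large to search exhaustively, so the sketch below uses tic-tac-toe as a stand-in: exhaustive minimax game-tree search evaluates every legal continuation, with no received wisdom about which moves "look" reasonable. The game choice and encoding are my own illustration, not AlphaGo's actual method (which combined neural networks with Monte Carlo tree search).

```python
from functools import lru_cache

# Winning lines on a 3x3 board, encoded as a 9-character string of "X"/"O"/".".
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Value of the position for X (+1 win, 0 draw, -1 loss) under perfect play,
    found by searching every legal continuation rather than applying heuristics."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0  # full board, no winner: draw
    moves = [i for i, cell in enumerate(board) if cell == "."]
    nxt = "O" if player == "X" else "X"
    vals = [value(board[:i] + player + board[i + 1:], nxt) for i in moves]
    return max(vals) if player == "X" else min(vals)

result = value("." * 9, "X")  # perfect play from the empty board is a draw
```

Exhaustive search here considers all 255,168 possible games; AlphaGo's search space is astronomically larger, which is exactly why guided search, rather than human heuristics, could find move 37.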
However, AI appears unpromising in transformational innovation. Current AI must search in a well-defined space where information is well structured and well represented. Therefore, AI can “think” inside a giant box but cannot “think” outside the box. The ability of AI to perform combinatorial and exploratory innovation, however advanced, cannot carry it past the tipping point of transformational innovation. Just as building an ever-higher ladder will never land us on the moon, narrow intelligence is not on a continuum with general intelligence (Dreyfus, 2012; Mitchell, 2021). More ironically, today even AI research itself is combinatorial and exploratory in nature, with little transformation – consider how much AI research has become hyperparameter tuning (Hutson, 2020). What connectionist models learn are features, or more precisely, the statistical distributions of the features in the training data. Thus, an AI’s creation is only as good as its training data. If a GAN is trained on MNIST, a handwritten-digit database, it learns nothing about generating handwritten letters or Chinese characters. Likewise, a style-transfer AI may be good at rendering photos in the style of van Gogh’s Starry Night but cannot create a new style that is sufficiently meaningful, intelligible, and comprehensible to humans, let alone create an original image significant enough to initiate an art movement equivalent to Impressionism.
On the other hand, we humans can innovate by changing the rules. For example, Picasso’s “Tête de Femme” was a profound change in artistic representation that had never existed before. The Copernican system radically changed the Ptolemaic assumption about the earth’s position in the universe. Humans can change the rules partly because we have a more holistic understanding of the world than AIs. Therefore, we know which rules are mutable under which conditions. In other words, we are gifted with common sense. When I draw a cat, I am not creating based on a dataset of millions of cats. Instead, I draw on all my experience, understanding, and knowledge regardless of its relevance to cats. In addition, each person senses the world slightly differently, but we share conceptual spaces symbolically, allowing us to comprehend the meanings and appreciate the surprises. However, AIs cannot “recognize” or “understand” anything that is not in their training data. Indeed, by adding some random factors, such as an exploration function in a reinforcement learning agent (Russell & Norvig, 2010, p. 842), AIs can exhibit certain behaviors resembling free will. However, humans’ free will is not random. Innovation is not random either. Even August Kekulé’s dream of snakes that inspired his conception of the structure of benzene was not random; at the very least, the transfer of rules from the conceptual space of dreams to the conceptual space of chemistry was based on prior knowledge structure rather than randomness and blindness (Sutton, 2015). In short, current AI is unable to climb the meta-mountain without significant human input.
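The "exploration function" remark can be made concrete with the standard epsilon-greedy scheme on a multi-armed bandit: with small probability the agent acts uniformly at random, and otherwise it exploits its current estimates. The arm payoffs below are hypothetical; the sketch simply shows that the "free-will-like" deviation is blind chance, not insight.

```python
import random

random.seed(0)

true_means = [0.2, 0.5, 0.8]   # hypothetical payoff probabilities of three arms
counts = [0, 0, 0]             # pulls per arm
estimates = [0.0, 0.0, 0.0]    # running mean reward per arm
eps = 0.1                      # probability of a purely random action

for t in range(5000):
    if random.random() < eps:
        arm = random.randrange(3)                         # explore: blind chance
    else:
        arm = max(range(3), key=lambda a: estimates[a])   # exploit best estimate
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean
```

The agent ends up concentrating on the best arm, but every surprising choice it ever made was a coin flip; nothing in the mechanism transfers structure from one space to another, which is the contrast the paragraph draws with Kekulé.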
Therefore, I argue that the next significant breakthrough in AI-augmented innovation must improve AI’s ability to help with transformational innovation. In other words, AI must understand the rules by which conceptual spaces are constructed and learn how to change those rules in meaningful ways, like Neo bending the spoon. It must know how to alter, remove, or add dimension(s) to the spaces – acting as if there is no spoon. It must also evaluate the resulting spaces, drop valueless ones, and search further in promising ones (Boden, 1998), and subsequently learn how to construct new valuable spaces more effectively in the future. Achieving this goal requires advances beyond the current mainstream machine learning paradigm, namely deep learning, because connectionist models like neural networks, without symbolic components, learn nothing but patterns at the mesa level and are unlikely to climb the “meta mountain” on their own. It also requires revisiting GOFAI – symbolic AI. Symbolic AI is an AI paradigm that gained fame initially but lost popularity in the late 1980s during the AI Winter. It attempts to achieve intelligence by manipulating symbols as humans do. Adding symbolic components may be the first step toward teaching machines how to alter rules. A feasible direction for AI-augmented transformational innovation would be the one advocated by Gary Marcus and others – a hybrid AI model that is knowledge-driven, reasoning-based, and centered on cognitive models (Marcus, 2020). It may include at least two components, one top-down, the other bottom-up. The top-down component builds cognitive models of transformational innovation into the system.
It will benefit from psychology, cognitive science, science and technology studies (STS), innovation studies, and other fields that investigate how humans innovate: in particular, how humans change rules, where the rule-changing rules come from, whether the rule-changing rules in one conceptual space can be applied to another, which rules can be changed, and under what conditions. It will be crucial to systematically study and theorize transformational innovation in human history. It also requires domain-specific as well as cross-domain expertise to map conceptual spaces. The bottom-up component, on the other hand, can use connectionist AI to search the alternative conceptual spaces and evaluate their performance.
To illustrate such hybrid systems, imagine a language model that writes science fiction stories. It has two components, one top-down and one bottom-up. The top-down component is built with an innate cognitive model that captures how human writers change rules, for instance, by altering some dimension(s) of a cultural aspect that human beings take for granted, to create a pleasant yet comprehensible surprise. A good example is the alien language depicted in Ted Chiang’s Story of Your Life, where he altered the space and time dimensions of linguistic rules. (Of course, the way Chiang changed the rules might be inspired by human languages: Arabic, for instance, is written from right to left, and classical Chinese from top to bottom and then right to left.) With this component, the model can create multiple new conceptual spaces as story settings. For example, one such space could contain a universe where time runs backward; another could feature a “spaceship” that travels toward the earth’s center instead of into space; yet another could harbor a fluid and dynamic cosmological constant. Then the bottom-up component, much like GPT-3, generates stories in each new space and assesses: 1) whether the stories are comprehensible to human readers; 2) among all comprehensible stories in each new space, which one attracts the most readers; 3) among all new spaces, which generates the most engaging stories; and 4) which altered rules produce the best space(s). In this way, the model can produce highly novel and attractive stories.
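The two-component pipeline described above can be sketched at toy scale. All names and the scoring function below are hypothetical: the top-down component holds explicit, mutable rules and generates candidate conceptual spaces by altering exactly one taken-for-granted dimension; the bottom-up component is represented only by a placeholder scorer, standing in for a learned evaluator of "surprising yet comprehensible."

```python
import random

random.seed(1)

# Top-down (symbolic): a conceptual space as explicit, mutable rules.
BASE_SPACE = {"time": "forward", "writing": "left-to-right", "gravity": "down"}
ALTERNATIVES = {
    "time": ["forward", "backward", "circular"],
    "writing": ["left-to-right", "right-to-left", "top-to-bottom"],
    "gravity": ["down", "up", "none"],
}

def transform(space):
    """Yield candidate spaces, each with exactly one rule altered
    (a toy stand-in for Boden's transformational move)."""
    for dim, options in ALTERNATIVES.items():
        for alt in options:
            if alt != space[dim]:
                new_space = dict(space)
                new_space[dim] = alt
                yield new_space

def score(space):
    """Bottom-up placeholder: in a real system this would be a learned model
    generating stories in the space and measuring reader engagement."""
    return random.random()  # pure noise here, since no real model is assumed

candidates = list(transform(BASE_SPACE))   # every single-rule alteration
best = max(candidates, key=score)          # keep the best-scoring space
```

The design point is the division of labor: the symbolic layer makes rule changes explicit and enumerable, so the system can later learn *which* alterations produced valuable spaces, while the connectionist layer only ever searches and evaluates within a given space.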
Throughout the history of AI, most effort has been devoted to making AI follow the rules. I believe teaching AI how to break the rules is equally important, not only for AI-augmented innovation but also for achieving artificial general intelligence. It requires us to understand how the human mind works no less than how machines work. It can be accomplished by hybrid systems that mix symbolic and connectionist approaches, the former changing the rules and creating new conceptual spaces, the latter following the altered rules, conducting search tasks, and optimizing the results. Once AI can bend the spoon and develop transformational creativity, a general-purpose innovation engine becomes plausible. It will dramatically accelerate the development of science and technology. By then, we can finally say the Singularity is nearer.
Agrawal, A., McHale, J., & Oettl, A. (2018). Finding Needles in Haystacks: Artificial Intelligence and Recombinant Growth (No. w24541). National Bureau of Economic Research. https://doi.org/10.3386/w24541
Boden, M. A. (1990). The Creative Mind: Myths and Mechanisms (1st Edition). Weidenfeld & Nicolson.
Boden, M. A. (1998). Creativity and artificial intelligence. Artificial Intelligence, 103(1), 347–356. https://doi.org/10.1016/S0004-3702(98)00055-1
Callaway, E. (2020). ‘It will change everything’: DeepMind’s AI makes gigantic leap in solving protein structures. Nature, 588(7837), 203–204. https://doi.org/10.1038/d41586-020-03348-4
Cockburn, I., Henderson, R., & Stern, S. (2018). The Impact of Artificial Intelligence on Innovation. IDEAS Working Paper Series from RePEc. http://search.proquest.com/docview/2059182462/?pq-origsite=primo
DeepMind. (2020, January 15). AlphaFold: Using AI for scientific discovery. https://deepmind.com/blog/article/AlphaFold-Using-AI-for-scientific-discovery
DeepMind. (2020, November 30). AlphaFold: A solution to a 50-year-old grand challenge in biology. https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology
Dreyfus, H. L. (2012). A History of First Step Fallacies. Minds and Machines, 22(2), 87–99. https://doi.org/10.1007/s11023-012-9276-0
Elgammal, A. (2018, October 29). What the Art World Is Failing to Grasp about Christie’s AI Portrait Coup. Artsy. https://www.artsy.net/article/artsy-editorial-art-failing-grasp-christies-ai-portrait-coup
Epstein, Z., Levine, S., Rand, D. G., & Rahwan, I. (2020). Who Gets Credit for AI-Generated Art? IScience, 23(9). https://doi.org/10.1016/j.isci.2020.101515
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative Adversarial Nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, & K. Q. Weinberger (Eds.), Advances in Neural Information Processing Systems 27 (pp. 2672–2680). Curran Associates, Inc. http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf
Heaven, Wi. D. (2020, July 20). OpenAI’s new language generator GPT-3 is shockingly good—And completely mindless. MIT Technology Review. https://www.technologyreview.com/2020/07/20/1005454/openai-machine-learning-language-generator-gpt-3-nlp/
Hutson, M. (2020). Core progress in AI has stalled in some fields. Science, 368(6494), 927–927. https://doi.org/10.1126/science.368.6494.927
Jones, B. F. (2009). The Burden of Knowledge and the “Death of the Renaissance Man”: Is Innovation Getting Harder? The Review of Economic Studies, 76(1), 283–317. https://doi.org/10.1111/j.1467-937X.2008.00531.x
Karras, T., Laine, S., & Aila, T. (2019). A Style-Based Generator Architecture for Generative Adversarial Networks. ArXiv:1812.04948 [Cs, Stat]. http://arxiv.org/abs/1812.04948
Kauffman, S., Lobo, J., & Macready, W. G. (2000). Optimal search on a technology landscape. Journal of Economic Behavior & Organization, 43(2), 141–166. https://doi.org/10.1016/S0167-2681(00)00114-1
Kuhn, T. S. (1996). The structure of scientific revolutions (3rd ed.). University of Chicago Press.
Lamb, C., Brown, D., & Clarke, C. (2018). Evaluating Computational Creativity: An Interdisciplinary Tutorial. ACM Computing Surveys, 51(2), 1–34. https://doi.org/10.1145/3167476
Licklider, J. C. R. (1960). Man-Computer Symbiosis. IRE Transactions on Human Factors in Electronics, HFE-1(1), 4–11. https://doi.org/10.1109/THFE2.1960.4503259
Macready, W. G., Siapas, A. G., & Kauffman, S. A. (1996). Criticality and Parallelism in Combinatorial Optimization. Science, 271(5245), 56–59. https://doi.org/10.1126/science.271.5245.56
Marcus, G. (2020). The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence. ArXiv:2002.06177 [Cs]. http://arxiv.org/abs/2002.06177
Mater, A. C., & Coote, M. L. (2019). Deep Learning in Chemistry. Journal of Chemical Information and Modeling, 59(6), 2545–2559. https://doi.org/10.1021/acs.jcim.9b00266
Menick, J. (2016, October 17). Move 37: Artificial Intelligence, Randomness, and Creativity. Mousse Magazine, 53.
Mitchell, M. (2021). Why AI is Harder Than We Think. ArXiv:2104.12871 [Cs]. http://arxiv.org/abs/2104.12871
Raffiee, A. H., & Sollami, M. (2020). GarmentGAN: Photo-realistic Adversarial Fashion Transfer. ArXiv:2003.01894 [Cs]. http://arxiv.org/abs/2003.01894
Russell, S. J., & Norvig, P. (2010). Artificial intelligence: A modern approach (3rd ed). Prentice Hall.
Sutton, M. (2015, October 8). Snakes, sausages and structural formulae. Chemistry World. https://www.chemistryworld.com/features/snakes-sausages-and-structural-formulae/9038.article
Tautkute, I., Trzcinski, T., Skorupa, A., Brocki, L., & Marasek, K. (2019). DeepStyle: Multimodal Search Engine for Fashion and Interior Design. ArXiv:1801.03002 [Cs]. http://arxiv.org/abs/1801.03002
Turnbull, A. (2020, January 6). The price of AI art: Has the bubble burst? The Conversation. http://theconversation.com/the-price-of-ai-art-has-the-bubble-burst-128698
Wagner, A., & Rosen, W. (2014). Spaces of the possible: Universal Darwinism and the wall between technological and biological innovation. Journal of the Royal Society Interface, 11(97), 20131190–20131190. https://doi.org/10.1098/rsif.2013.1190
Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., & Yu, P. S. (2021). A Comprehensive Survey on Graph Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1), 4–24. https://doi.org/10.1109/TNNLS.2020.2978386
Youn, H., Strumsky, D., Bettencourt, L. M. A., & Lobo, J. (2015). Invention as a combinatorial process: Evidence from US patents. Journal of the Royal Society, Interface, 12(106), 20150272–20150272. https://doi.org/10.1098/rsif.2015.0272
Zarkadakis, G. (2016, November 26). Move 37, or how AI can change the world | HuffPost. Huffpost. https://www.huffpost.com/entry/move-37-or-how-ai-can-change-the-world_b_58399703e4b0a79f7433b675
Zhu, S., Fidler, S., Urtasun, R., Lin, D., & Loy, C. C. (2017). Be Your Own Prada: Fashion Synthesis with Structural Coherence. ArXiv:1710.07346 [Cs]. http://arxiv.org/abs/1710.07346
In this piece, “innovation” and “creativity” are used interchangeably. Both involve the production of novel and useful knowledge or artifacts (Pereira, 2007), with innovation focusing more on economic aspects and creativity more on cognitive aspects.