Abstract
The technological singularity is a hypothesized point in the near future at which machine intelligence surpasses human intelligence. Many people consider it a threat to humanity, while others believe it will augment humans to a much higher level, both physiologically and psychologically. In this paper, I review the state of the art of computing and conclude that the singularity will not happen in the near future because of the physical limits of the silicon-based computing paradigm. Alternative paradigms are not promising enough to carry the observed exponential growth into the future. Even if artificial general intelligence eventually becomes possible, we need not worry about the bad AI scenario; we need to worry about the stupidity of machines. AI-related research should be encouraged in order to uncover the real problems and their solutions. To regulate AIs, we can treat them as legal persons, much as we do companies.
1. The prophet of the technological singularity
In his Hugo-Award-winning science fiction novel A Fire Upon the Deep, Vernor Vinge vividly depicts a Milky Way divided into four “Zones of Thought”[1] that represent his hierarchy of intelligence spatially. The outermost layer is called the Transcend, where “incomprehensible, superintelligent beings dwell.” When a civilization reaches the “technological singularity,” it enters the Transcend. An evil superintelligence called the Blight arises from a “bad singularity” and becomes a malevolent power, indifferent to intellectually inferior beings. Vinge warned that this could be our future if we leave artificial intelligence (AI) research unchecked[2].
Vinge is one of the most prominent supporters of the technological singularity hypothesis. In contrast to his pessimism, Google’s Ray Kurzweil, a futurist well known for his many inventions, enthusiastically believes the singularity will raise humans to a much higher level, one in which memories and thoughts can be uploaded and downloaded and the human life span is extended enormously. In his book The Singularity Is Near, he famously defined the technological singularity as “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed[3]”. To hasten his transhumanist vision, he reportedly takes more than 200 pills a day[4].
Although Vinge and Kurzweil disagree about the singularity’s impact on humans, both are convinced it will happen in the near future. By Kurzweil’s estimate, the tipping point will arrive in 2045[5].
Looking around, however, I find no trace of such a superintelligence. Not a clue. Siri on my iPhone seems clever, but when I ask whether the singularity will come, it answers “interesting question,” and when I ask whether it believes in the technological singularity, it says “I don’t believe that I have beliefs.” It is merely programmed to answer these questions in a way that seems clever while saying nothing at all. Meanwhile, my calendar app, which claims to use AI to arrange my schedule, keeps mistaking room numbers for times. Hey, I have a meeting in Car Barn 311, not at 3:11 am!
So where does Kurzweil’s conclusion come from? His theory rests on an assumption he calls the “law of accelerating returns,” that is, exponential growth. According to Moore’s Law, the number of transistors on an integrated circuit doubles roughly every eighteen months, so computing power and storage capacity both grow exponentially. Kurzweil believes this exponential growth will reach a point at which machine intelligence surpasses human intelligence. By then, artificial general intelligence (AGI) will be capable of nearly everything humans can do, and, combined with other advanced technologies such as nanotechnology, biotechnology, and neuroscience, it will become competent at tasks beyond our imagination.
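To make the arithmetic behind this claim concrete, here is a minimal sketch in Python of what doubling every eighteen months implies; the 1971 starting point (the roughly 2,300-transistor Intel 4004) is a commonly cited figure used purely for illustration, not a value taken from Kurzweil.

```python
# A minimal sketch of Moore's-Law-style doubling: transistor counts
# doubling every 18 months from an illustrative 1971 baseline
# (Intel 4004, ~2,300 transistors).
def transistors(year, base_year=1971, base_count=2_300, doubling_months=18):
    """Projected transistor count assuming one doubling every 18 months."""
    months_elapsed = (year - base_year) * 12
    return base_count * 2 ** (months_elapsed / doubling_months)

for year in (1971, 1990, 2010, 2030, 2045):
    print(f"{year}: ~{transistors(year):.2e} transistors per chip")
```

The point is not the particular numbers but the shape of the curve; whether the doubling actually continues is exactly what the following sections question.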
This hypothesis echoes numerous science fiction works in which wars between humans and machines seem endless. The Matrix series depicts a future in which AI takes control of the world and uses humans as batteries, while the Terminator series tells of a rogue AI named Skynet that tries to wipe out humanity. Not to mention the notorious HAL 9000 in 2001: A Space Odyssey, which kills most of its spaceship’s crew.
Although the singularity sounds like fiction, it enjoys widespread support, even among AI practitioners. In 2014, the Swedish philosopher Nick Bostrom and Vincent Müller published a survey, conducted in 2012 and 2013, of more than 500 experts. They found that the median estimate for when high-level machine intelligence has a 50% chance of existing falls between 2040 and 2050, while the median estimate for a 90% chance is 2075. The respondents also put the probability at roughly one in three that such machine intelligence would turn out “bad” or “extremely bad” for humanity. Stephen Hawking, Elon Musk, Max Tegmark, and other prominent figures have devoted great effort to warning people about the bad AI scenario. “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks,” Hawking wrote in 2014.
Is the singularity really inevitable in the near future? I don’t think so.
2. Silicon-based transistors are approaching their physical limits
The singularity hypothesis rests on exponential growth, which is merely an empirical observation, not a universal law of nature, and that growth will soon stall in the absence of new computing paradigms.
Admittedly, Moore’s Law has held for the past half century. During that time, transistors have shrunk by a factor of roughly one thousand, giving us unprecedented computing power and enabling mobile computing, the Internet of Things, and cloud computing. More importantly, it rescued AI from the last AI winter in the 1990s and revealed AI’s enormous potential. As transistors become smaller and smaller, however, problems emerge. For example, the shrinking distance between source and drain weakens the gate’s control over the transistor channel, producing short-channel effects and quantum tunneling, so that current leaks through the channel and the underlying silicon substrate, causing errors[6].
To solve this problem, chip makers such as Intel and AMD started developing 3D transistors in the 2000s, pushing Moore’s Law forward for another decade[7]. But 3D transistors are not the ultimate savior. At the nanometer scale, features are only a few dozen atoms across, and nano-scale transistors are approaching an insurmountable physical limit. Take Intel as an example. Intel originally planned to release its 10-nanometer chips at the end of 2016 but had to postpone them to the second half of 2017, a clear sign that the pace of development is slowing. William Holt, executive vice president and general manager of Intel’s Technology and Manufacturing Group, has said that Intel will produce only two more generations of silicon chips, hopefully reaching 7 nanometers within five years, and will then turn to other technologies such as resonant tunneling transistors (RTTs) or spin transistors, because silicon can no longer support further scaling.
Without ever-shrinking transistors, computing power will stop growing; the exponential curve will slow and flatten into the plateau of an S-shaped curve.
3. The highly likely uncomputability of human-level intelligence
Amazed by technological progress every day, we tend to regard computers as nearly almighty. They are not, at least not under the current paradigm dominated by the von Neumann architecture and the Turing machine.
Computer scientists categorize practical problems into five groups according to how hard they are to compute. Calculating the greatest common divisor, for example, is easy, so it belongs to the class of tractable, or P, problems, where P stands for polynomial time. Nondeterministic polynomial (NP) and NP-complete problems, by contrast, appear to require exponential time to solve, although their solutions are easy to verify. One example is the RSA cryptosystem, which “uses two large primes as its secret key and their product as the basis of the public key.” In Great Principles of Computing, Peter Denning writes that with current computing power, an RSA system with 2048-bit public keys may be crackable by 2020, while one with 4096-bit public keys would be “uncrackable forever.” For exponentially hard problems, the difficulty lies not in the algorithm but in the time required to “enumerate all the states.” Beyond them lie intractable and noncomputable problems, whose orders of difficulty are worse still. Early computer scientists and mathematicians such as Kurt Gödel and Alan Turing were aware of these problems. One instance is Turing’s halting problem, which asks whether it is feasible to determine, from a program and its input, whether the program will halt or run forever[8]. It is uncomputable for any Turing machine, and hence for any current computer[9].
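To see why “enumerating all the states” is the real obstacle, here is a small Python sketch of how brute-force search scales with problem size; the rate of one billion states checked per second is an arbitrary illustrative assumption, not a benchmark of any real machine.

```python
# A minimal sketch of exponential blow-up: the number of candidate
# states doubles with every added bit, so brute-force enumeration
# quickly exceeds any realistic time budget.
CHECKS_PER_SECOND = 1e9       # assumed rate, purely illustrative
SECONDS_PER_YEAR = 3.156e7

for bits in (40, 64, 128, 256):
    states = 2 ** bits
    years = states / CHECKS_PER_SECOND / SECONDS_PER_YEAR
    print(f"{bits:3d} bits: {states:.2e} states, ~{years:.2e} years to enumerate")
```

At 128 bits the enumeration already takes on the order of 10²² years, vastly longer than the age of the universe, which is why key sizes in this range are considered safe against brute force.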
In the view of the English mathematical physicist Roger Penrose, human consciousness involves such “noncomputable ingredients[10]” and is therefore beyond the capability of today’s computers. Penrose locates the mysterious ingredient in quantum processes within hypothesized microtubules inhabiting the brain’s neurons, but he has failed to convince most physicists and neuroscientists[11]. One of them is Max Tegmark, an MIT cosmologist who openly opposes Penrose’s theory yet strongly supports the singularity hypothesis and stands in the pessimistic camp with Hawking. Quantum computation, whether in a brain or in a quantum computer, is possible only in a state of quantum coherence. According to Tegmark’s calculation, however, in a warm, wet brain full of vibrating molecules, a neuron and a microtubule undergo quantum decoherence within roughly 10⁻²⁰ and 10⁻¹³ seconds respectively, far too short a time for any computation at all[12].
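A quick back-of-the-envelope comparison, using the decoherence times cited above and the standard order-of-magnitude figure of about a millisecond for neural firing (my addition, not one of Tegmark’s quoted numbers), shows how mismatched the timescales are:

```python
# Comparing Tegmark's estimated decoherence times with the roughly
# millisecond timescale on which neurons fire. The millisecond value
# is a textbook order-of-magnitude figure used only for comparison.
decoherence_times = {"neuron superposition": 1e-20, "microtubule": 1e-13}  # seconds
NEURON_FIRING_TIME = 1e-3  # seconds, order of magnitude

for system, t in decoherence_times.items():
    ratio = NEURON_FIRING_TIME / t
    print(f"{system}: decoheres ~{ratio:.0e} times faster than a neuron fires")
```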
Tegmark’s argument convincingly refutes Penrose’s quantum consciousness, but it is not sufficient to prove that human-level intelligence is computable under the current paradigm. Human brains are enormously complex, with roughly a hundred billion neurons and vastly more interactions among them. To emulate a human brain, one must first understand not only its physical structure but also how neurons interact with one another and how thoughts and consciousness emerge from those interactions[5]. Even if we solved these problems, enumerating every bit of that astronomical amount of information on today’s most powerful computers would be anything but easy, because no fast algorithm is known, and for some subproblems no algorithm may exist at all. The task could take a computer longer than the age of the universe.
Computer scientists have, however, found ways to sidestep endless enumeration, for instance heuristic techniques that shrink the search space. Monte Carlo tree search, the technique used by DeepMind’s AlphaGo, is one such heuristic. Unfortunately, although AlphaGo beats humans, “the heuristic is not guaranteed to find the best subset,” as Peter Denning puts it[8]. A greedy heuristic, for example, can get stuck at a local maximum that is inferior to the global maximum[13], as the sketch below illustrates. In other words, these sidesteps are only approximations. And because machine learning tends to produce incomprehensible black boxes, such approximate models are sometimes overfitted to their training data, creating an illusion of perfection.
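The failure mode of a greedy heuristic is easy to demonstrate. The following minimal Python sketch, with a toy objective function and step size chosen purely for illustration, climbs uphill from its starting point and settles on the nearer, lower peak:

```python
import math

def f(x):
    # Toy objective with a local peak near x = 1 and a higher
    # global peak near x = 4 (two Gaussian bumps).
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 4) ** 2)

def greedy_climb(x, step=0.01):
    """Move uphill in small steps; stop when neither neighbor is better."""
    while True:
        left, right = f(x - step), f(x + step)
        if left <= f(x) and right <= f(x):
            return x
        x = x - step if left > right else x + step

print("greedy result starting from x = 0:", round(greedy_climb(0.0), 2))  # stops near 1
print("global maximum is near x = 4, value", round(f(4.0), 2))
```

Monte Carlo tree search is far more sophisticated than this toy climber, but the underlying caveat is the same: pruning the search space trades away any guarantee of optimality.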
It might be argued that AGI can be achieved without fully understanding and emulating the brain and human intelligence. A favorite analogy is the bird and the airplane: humans invented planes not by imitating birds but by understanding aerodynamics. True, but intelligence is different from flight. Fluid mechanics was the hardest course I took in college, yet it is far simpler than intelligence, whose surface today’s researchers have barely scratched. The most advanced MRI scanners are thousands of times too crude for even the first step of brain emulation[11], to say nothing of understanding the scans and then modeling them in computers. Scientists have, it is true, built a robot that simulates the neural connections of the roundworm C. elegans[14]. But a roundworm has fewer than 400 neurons, while a human has roughly 100 billion; the gap is enormous, as the rough comparison below suggests. In addition, human brains differ from computers in every respect: neurons are not binary[11], and brains lack the von Neumann architecture of distinct components such as memory and CPU. Understanding the human brain is “getting harder as we learn more,” a phenomenon Paul Allen calls “the complexity brake”[5].
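A rough order-of-magnitude comparison makes the scale gap explicit; the human figures below are the commonly cited estimates of about 10¹¹ neurons and 10¹⁴ synapses, used here only to show how far the roundworm result is from a human brain.

```python
# Order-of-magnitude comparison between the C. elegans connectome
# and a human brain. All human figures are rough textbook estimates.
ROUNDWORM_NEURONS = 302          # fully mapped
HUMAN_NEURONS = 1e11             # ~100 billion
HUMAN_SYNAPSES = 1e14            # ~100 trillion connections

print(f"neuron gap: ~{HUMAN_NEURONS / ROUNDWORM_NEURONS:.1e} times more neurons")
print(f"connections to map in a human brain: ~{HUMAN_SYNAPSES:.0e}")
```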
4. So many long ways to go
From roundworm to human brain is still a very long way. In fact, there are many long roads ahead of us, and we simply do not know which one to take.
According to Calum Chace in Surviving AI, there are several routes to an artificial mind, among them whole brain emulation, building on artificial narrow intelligence, and developing a comprehensive theory of mind[11].
Whole brain emulation, as the previous section argued, remains a distant prospect, and building on artificial narrow intelligence is no more promising. Narrow AIs are everywhere nowadays, and this approach would combine different narrow AIs in the hope that human-level intelligence emerges from the combination. The problem is that neither the way an individual narrow AI works nor the way they are combined resembles human intelligence, so how can we expect human-level intelligence to emerge from such a system? In particular, deep learning, the most widely used technique in narrow AI, is nothing like the way humans learn, even though it is bio-inspired. Gary Marcus, a cognitive psychologist at New York University and a former student of Steven Pinker, criticizes deep learning as a false hope for the future of AI. Deep learning needs vast amounts of training data to form simple concepts that humans form from very few examples. This is especially evident in language: researchers feed natural language processing (NLP) algorithms huge corpora only to find them dull compared with a two-year-old child. AIs are also notoriously bad at common sense. Marcus believes a better approach than deep learning is to figure out how human children learn from their environment[15]. In principle this is a more promising route from narrow AI toward AGI, but it is largely overlooked and underexplored because its short-term returns are less lucrative.
5. Alternative paradigms are not promising enough
Supporters of the singularity would argue that other computing paradigms, such as quantum computing, will keep the exponential growth going after silicon reaches its limits. But these new paradigms are not yet feasible, at least not in the near future.
For example, the RTTs and spin transistors mentioned by William Holt are no faster than silicon transistors; they are favored because they consume less energy. Holt considers energy more important than speed and believes these technologies have great potential in the Internet of Things.
Other paradigms do seem to herald a future of powerful computing, the most noted being quantum computing. In one popular interpretation, a quantum computer shares information with versions of itself in parallel universes, giving it far more bits, called qubits, and far more computing resources than a conventional computer. If built successfully, quantum computers could rescue Moore’s Law and the exponential growth curve; problems such as the integer factoring behind RSA would become tractable, making the RSA cryptosystem obsolete overnight[8]. But the challenges are formidable. The greatest is quantum decoherence, the very process Tegmark invoked against Penrose’s quantum consciousness. To stay coherent, the system must sit in a dark, cold vacuum, isolated from the outside world. Moreover, most machines claimed to be working quantum computers handle only specific tasks, much like narrow AI. The Canadian company D-Wave’s computers, for example, are built to solve “a particular NP-complete problem related to the two-dimensional Ising model in a magnetic field[16].” No fully functional, general-purpose quantum computer has yet been built. We still have a very long way to go.
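One way to appreciate why qubits are so attractive, and why general-purpose quantum machines are so hard to build and even to simulate, is to look at how fast the classical description of a quantum state grows. The sketch below assumes 16 bytes per complex amplitude (two 64-bit floats); it illustrates the bookkeeping cost only and makes no claim about any particular quantum computer.

```python
# Describing n qubits classically requires 2**n complex amplitudes,
# so the memory needed to simulate them on a conventional machine
# explodes. 16 bytes per amplitude (two 64-bit floats) is assumed.
BYTES_PER_AMPLITUDE = 16

for n_qubits in (10, 30, 50, 300):
    n_amplitudes = 2 ** n_qubits
    gib = n_amplitudes * BYTES_PER_AMPLITUDE / 2 ** 30
    print(f"{n_qubits:3d} qubits: {n_amplitudes:.2e} amplitudes, ~{gib:.2e} GiB to store")
```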
Other alternatives include DNA computing, molecular computing, optical computing, and neuromorphic computing, and some researchers are turning to new materials such as germanium and III-V compounds. These approaches share one problem: they just don’t work well yet.
6. Machine intelligence is not growing exponentially
Even if exponential growth continues, AGI and the singularity will remain hard to reach, because machine intelligence does not grow at the same rate as computing power.
According to Peter Cochrane, we have overemphasized computing power and storage while overlooking systems’ interaction with the environment[17], a viewpoint Gary Marcus would likely share. A hard disk with enormous capacity will never give rise to intelligence, because it lacks the necessary complexity. In Cochrane’s view, two things are crucial for AGI: input sensors and output actuators, which let an entity interact with, learn from, and cause “state changes” in its environment. He argues that intelligence is still possible with little processing or memory power, but impossible without sensors and actuators. He also observes that while computing power and storage have been growing exponentially, sensors and actuators are improving far more slowly, so machine intelligence as a whole advances roughly linearly rather than exponentially, as the sketch below illustrates. On this view, AGI and the singularity are not near.
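The divergence Cochrane describes is easy to illustrate. In the sketch below, one capability doubles every two years while the other improves by a fixed increment each year; the starting values and rates are arbitrary choices of mine, not measurements of real processors or sensors.

```python
# Exponential growth (compute, storage) versus linear improvement
# (sensors, actuators). All rates are illustrative assumptions.
def exponential(years, doubling_period=2.0):
    return 2 ** (years / doubling_period)

def linear(years, gain_per_year=1.0):
    return 1 + gain_per_year * years

for years in (0, 10, 20, 40):
    print(f"after {years:2d} years: compute x{exponential(years):,.0f}, "
          f"sensors/actuators x{linear(years):.0f}")
```

If overall machine intelligence is gated by the slowest of these ingredients, it tracks the linear curve, which is Cochrane’s point.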
7. How to avoid the “Bad AI Scenario”?
If the singularity does happen, should we be concerned about the “Bad AI Scenario”? People like Tegmark and Hawking think so. An unfriendly AI may have goals different from ours, and if we are in its way it may destroy us without mercy. An analogy Tegmark offers in his book Our Mathematical Universe is that humans will build a dam even though it kills an entire colony of ants, because the ants’ lives are insignificant to us[18].
Will AIs turn bad? The question is oversimplified. Intelligence is not all-or-nothing but a continuous spectrum with multiple dimensions, and so is morality. Absolute morality belongs to fairy tales, not real life. Moral standards also differ enormously across cultures, and even the same person may apply slightly different standards over a short span of time, or to different people depending on their degree of intimacy; the Stanford prison experiment is one demonstration. How can we teach machines something that is difficult even for the smartest human philosophers? We ask far too much when we expect driverless cars to make moral decisions about the trolley problem; we do not have the right answer either.
How can a world with ambiguous and contradictory moral standards work so well? Human society’s secret weapon is law. Jerry Kaplan, an AI professor at Stanford, proposes a clever way to control AIs’ behavior: treat them as legal persons and regulate them with laws. We have long treated non-human entities, such as companies, as legal persons. Discharging pollutants improperly does not make a company evil; it costs the company penalties or pollution taxes under the law. The same can apply to machines and AIs: laws, rather than moral standards, should be what we build into future AIs. If a driverless car hits a pedestrian, for example, it is punished with restrictions and fines, leaving it fewer resources, fuel among them, to do whatever it is supposed to do. Taking such consequences into account, the AI will behave more carefully when a similar situation arises; this is itself a process of machine learning, as the toy sketch below suggests. Making rules and laws that treat AIs as legal persons is therefore an important near-term task, and it is the right way for AIs to learn to understand and respect human ethics[13].
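To show how a legal penalty could feed back into a machine’s own decision-making as a learning signal, here is a toy Python sketch; the actions, reward numbers, fine size, and learning rate are entirely hypothetical and are meant only to illustrate the mechanism, not Kaplan’s proposal in any technical detail.

```python
# A toy value-learning loop in which a fine enters the expected
# reward as a negative term, so the risky action loses its appeal.
# All numbers are hypothetical.
values = {"drive cautiously": 0.0, "drive aggressively": 0.0}
expected_reward = {
    "drive cautiously": 1.0,                 # small steady benefit
    "drive aggressively": 2.0 - 0.1 * 50.0,  # bigger benefit minus a 10% chance of a 50-unit fine
}
LEARNING_RATE = 0.5

for _ in range(20):  # repeated experience nudges estimates toward expected reward
    for action, reward in expected_reward.items():
        values[action] += LEARNING_RATE * (reward - values[action])

print(values, "-> preferred action:", max(values, key=values.get))
```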
Someone might ask: what if a powerful AGI emerges that does not care about laws and kills people anyway? I would argue that such a system is not generally intelligent enough to deserve the name AGI. Intelligence arises from a dynamic balance between order and chaos, and I believe an entity that can be called generally intelligent has the capacity to understand and respect order and rules if its goals include surviving as long as possible. Evolution demonstrates this again and again; even animals of modest intelligence obey rules such as social hierarchy.
So the fear of a bad AI is really a fear of a stupid AI, of artificial stupidity. Intelligent beings are predictable; stupid ones are not. That is what Peter Denning worries about in “What about an unintelligent singularity?” He also worries about another kind of machine stupidity at the opposite end of the spectrum: programs that respect rules too much rather than ignoring them. Such programs would create a large-scale “bureaucratic system” that sacrifices human free will for efficiency. In that case we need not fear being killed by machines, but our autonomy would be at high risk. Indeed, this Tayloristic scenario is already playing out today, especially in factories where humans must follow machines’ instructions and are supervised by machines.
8. Should we restrict AI research?
The last question is whether we should restrict AI research. My answer is definitely no. AI research should instead be funded so that its potential can be explored; that is the only way to discover the real problems and their solutions. If we restrict AI research, people will pursue it anyway, because AI yields enormous financial returns when used properly. In the 1990s, for example, while Yahoo relied on human editors to curate search results, Google developed algorithms to do the job, an approach that was far more cost-efficient and scalable and quickly overtook Yahoo.
Moreover, it is impossible to ban AI research in the 21st century: nearly every university has professors and students working on AI-related projects. It is better to encourage such work than to restrict it. This semester, for example, I worked on a facial recognition project with a group of CCT students and learned a great deal about the technology’s advantages and drawbacks. We interviewed Alvaro Bedoya and Clare Garvie of the Center on Privacy and Technology at Georgetown Law, who study the use of facial recognition in law enforcement and are urging the FBI to be more transparent about it. Their work shows that adequate research improves social institutions rather than encouraging abuse.
9. Conclusion
In conclusion, the technological singularity will not happen, at least not in the near future, because silicon’s physical limits will halt the exponential growth. The existing computing paradigm cannot handle NP, NP-complete, and uncomputable problems, which is where the problems of AGI dwell. Alternative paradigms such as quantum computing are not promising enough in the short term. Even if they work and exponential growth continues, the other essential ingredients of AGI, namely sensors, actuators, and algorithms, are advancing much more slowly.
Even if AGI turns out to be feasible, we need not worry about the bad AI scenario; we need to worry about the stupidity of machines. To regulate AIs’ behavior and prompt them to learn and respect human morality, we can enact laws that treat AIs as legal persons rather than trying to program ambiguous moral standards into them.
AI research should not be restricted but encouraged; it is the only way to find the real problems and their solutions.
References
[1] Vinge, Vernor. 1992. A Fire upon the Deep. 1st ed. New York: TOR.
[2] Newitz, Annalee. 2017. “Vernor Vinge Says That When the Singularity Happens, It Will Be ‘very Obvious.’” io9. Accessed May 8. http://io9.com/5860524/vernor-vinge-says-that-when-the-singularity-happens-it-will-be-very-obvious.
[3] Kurzweil, Ray. 2006. The Singularity Is Near: When Humans Transcend Biology. New York: Penguin.
[4] “Ray Kurzweil.” 2017. Wikipedia. https://en.wikipedia.org/w/index.php?title=Ray_Kurzweil&oldid=776673080.
[5] Allen, Paul G. 2017. “Paul Allen: The Singularity Isn’t Near.” MIT Technology Review. Accessed May 7. https://www.technologyreview.com/s/425733/paul-allen-the-singularity-isnt-near/.
[6] Ahmed, Khaled, and Klaus Schuegraf. 2011. “Transistor Wars.” IEEE Spectrum: Technology, Engineering, and Science News. October 28. http://spectrum.ieee.org/semiconductors/devices/transistor-wars.
[7] Wang, Jieshu, and David H. Freedman. 2016. “3D Transistors” [3D晶体管]. In 科技之巅 [The Summit of Technology: An In-Depth Analysis of MIT Technology Review’s 50 Global Breakthrough Technologies], 1st ed. Beijing: Posts & Telecom Press (人民邮电出版社).
[8] Denning, Peter J., and Craig H. Martell. 2015. Great Principles of Computing. Cambridge, Massachusetts: The MIT Press.
[9] Gleick, James. 2011. The Information: A History, a Theory, a Flood. New York: Pantheon.
[10] Brockman, John. 1996. The Third Culture: Beyond the Scientific Revolution. New York: Simon and Schuster.
[11] Chace, Calum. 2015. Surviving AI: The Promise and Peril of Artificial Intelligence. Three Cs.
[12] Tegmark, Max. 2000. “Importance of Quantum Decoherence in Brain Processes.” Physical Review E 61 (4): 4194–4206. doi:10.1103/PhysRevE.61.4194.
[13] Kaplan, Jerry. 2016. Artificial Intelligence: What Everyone Needs to Know. What Everyone Needs to Know. New York, NY: Oxford University Press.
[14] Fessenden, Marissa. 2017. “We’ve Put a Worm’s Mind in a Lego Robot’s Body.” Smithsonian. Accessed May 10. http://www.smithsonianmag.com/smart-news/weve-put-worms-mind-lego-robot-body-180953399/.
[15] Knight, Will. 2017. “The Man with a Plan to Make AI More Human.” MIT Technology Review. Accessed May 10. https://www.technologyreview.com/s/544606/can-this-man-make-ai-more-human/.
[16] “D-Wave Systems.” 2017. Wikipedia. https://en.wikipedia.org/w/index.php?title=D-Wave_Systems&oldid=779582112.
[17] Cochrane, Peter. 2014. “Exponential Technology and The Singularity: The Technological Singularity (Ubiquity Symposium).” Ubiquity 2014 (November): 1:1–1:9. doi:10.1145/2667644.
[18] Tegmark, Max. 2014. Our Mathematical Universe: My Quest for the Ultimate Nature of Reality. First edition. New York: Alfred A. Knopf.