In the movie The Matrix, when Neo visited the Oracle, he met a boy who could bend spoons with his mind. The boy told Neo not to try to bend the spoon, because that was impossible according to the rules of physics. “Instead, only try to realize the truth,” said the boy, “there is no spoon.” What was bent here was not the spoon, but the rules the Matrix used to create the virtual space. In the movie, this realization became essential for Neo’s fight against the Matrix. It turns out that it is also the key to achieving AI-augmented innovation. [……]
分类目录归档:Sci & Tech
How to Expect a Smart Future: An Essay on Artificial Intelligence, Responsible Innovation, and Anticipatory Governance
This essay discusses Artificial Intelligence (AI) and Responsible Innovation (RI), especially the dimension of anticipatory governance. The vast potential of AI has sparked a heated debate about how to regulate it. I argue that RI could be a useful framework in this endeavor and offer four suggestions for the anticipatory governance of AI. First, recognizing AI’s nature as a General-Purpose Technology could help predict its future. Second, issues of equity and equality in AI need urgent attention. Third, science fiction could be very helpful for developing scenarios, raising public awareness, and encouraging discussion about AI. Fourth, design principles established in Human-Computer Interaction could be applied to help formulate AI policies.[……]
The Early History of Artificial Intelligence in China (1950s – 1980s)
In recent years, China has become one of the global hubs of innovation in AI. How did China become one of the world’s leaders in AI? This paper explores the early history of cybernetics and AI in China from the 1950s to the 1980s, providing the context in which China’s AI research began to unfold. It also examines how political ideologies, diplomacy, economic policies, and other social dimensions affected cybernetics and AI in China. [……]
Interview: The Translator of the Chinese Edition of Life 3.0 (《生命3.0》) on Why AI Will Inevitably Cause Technological Unemployment
This article is an interview with me conducted by Wallstreetcn (“华尔街见闻”). In the interview, I talked about science fiction, technological unemployment, how I first came into contact with AI, how I came to translate two of Max Tegmark’s books, my views on his ideas, and interesting anecdotes from the translation process.[……]
My Translation of Life 3.0 (《生命3.0》) Has Been Published
In the summer of 2018, my translation of Life 3.0 (《生命3.0》) was published. This is Max Tegmark’s second book, and it discusses artificial intelligence and the future of humanity.[……]
My Translation of 《人人都应该知道的人工智能》 (Artificial Intelligence That Everyone Should Know) Has Been Published
This book is an overview of artificial intelligence (AI), covering its history, basic principles, technology, society, ethics, philosophy, and other fields. Its language is plain and accessible, making it well suited for a general audience.[……]
Translator’s Afterword to 《穿越平行宇宙》
This is the translator’s afterword I wrote for my translation of 《穿越平行宇宙》. Kant said, “Two things fill the mind with ever new and increasing admiration and awe, the more often and steadily we reflect on them: the starry heavens above me and the moral law within me.” This sentence often came to my mind while I was translating the book, because these two objects of admiration and awe are precisely the book’s central themes.[……]
The AI with Three Faces: A Hierarchical Framework for Analyzing AIs in Science Fiction Films
Artificial Intelligence (AI) is a common theme in science fiction films. In this paper, I propose a hierarchical framework for analyzing AIs in science fiction films. The framework has three levels: the Hell-level, the World-level, and the Heaven-level. Hell-level AIs are objectified as tools of humans or other intelligent beings; World-level AIs are humanized through the pursuit of human-level purposes; and Heaven-level AIs are de-humanized and have purposes beyond human values, just like gods. The three levels are not mutually exclusive but can co-exist in the same AI. I also argue that most science fiction films in which AIs play an important role depict an AI’s transformation among the three levels, and that the ascent through the levels can be read as an allegory of real-life scenarios. Finally, I argue that Hell-level and Heaven-level AIs can be seen as Others, whereas World-level AIs, however ruthless, are not Others but members of ourselves, since they pursue human-level values such as freedom and love. [……]
Symbolism vs. Connectionism: A Closing Gap in Artificial Intelligence
AI was born symbolic and logical. The pioneers of AI formalized many elegant theories, hypotheses, and applications, such as the Physical Symbol System Hypothesis (PSSH) and expert systems. From the 1980s, the pendulum swung toward connectionism, a paradigm inspired by the neural connections in the brain. With growing amounts of accessible data and ever stronger computing power, connectionist models have gained considerable momentum in recent years. This new approach seems to solve many problems of symbolic AI but raises many new issues at the same time. Which paradigm better accounts for human cognition, and which is more promising for AI? No consensus has been reached. However, despite their vast differences, people have begun to explore how to integrate the two. Hybrid systems have been proposed and experimented with, and others see the two paradigms as residing at different levels of one unified hierarchical structure. In recent years, it has been increasingly realized that the gap is closing, simply because there was no gap at all to begin with. The debate is dying down, opening up new opportunities for future hybrid paradigms.[……]
The Singularity Is Not Near, But Why?
The technological singularity is a hypothesized time in the near future when machine intelligence surpasses human intelligence. Many people consider it a threat to humanity, while others think it will augment humans to a much higher level, both physiologically and psychologically. In this paper, I review the state of the art of computing and conclude that the singularity will not happen in the near future because of the physical limits of the silicon-based computing paradigm. Alternative paradigms are not promising enough to sustain the observed exponential growth into the future. Even if artificial general intelligence becomes possible in the future, it is unnecessary to worry about the bad-AI scenario; instead, we need to worry about the stupidity of machines. AI-related research should be encouraged in order to identify the real problems and their solutions. To regulate AIs, we can treat them as legal persons, much like companies.[……]