
机器有可能比人类更聪明吗?

 陈农 2018-08-31
Courier 联合国教科文组织 / 2018-08-30  
 

机器有可能比人类更聪明吗?不可能!计算机科学家让-加布里埃尔·加纳西亚说,这只是科幻小说虚构的神话罢了。在今天的文章中,他将带领大家了解人工智能领域的重大发展,回顾最新的技术进步,讨论亟待回答的道德问题。


人工智能: 神话与现实


作者:让-加布里埃尔·加纳西亚

 

1956年夏天,约翰·麦卡锡(John McCarthy)、马文·明斯基(Marvin Minsky)、纳撒尼尔·罗切斯特(Nathaniel Rochester)和克劳德·香农(Claude Shannon)四位美国研究人员在美国新罕布什尔州的达特茅斯学院组织了一场研讨会,正式开启了人工智能这一科学学科。此后,“人工智能”这个最初可能是为了引人瞩目而创造出来的名称,已经变成众所周知的流行词语。计算机科学的这项应用多年来不断发展,它所催生的技术在过去60年为改变这个世界做出了巨大贡献。


然而,“人工智能”一词成功得到普及,有时是基于一种误解,即认为它指的是拥有智慧、并因此会与人类竞争的人工实体。



由浅田稔教授(Minoru Asada,日本)研制的机器人宝宝CB2正在学习爬行 / The infant robot CB2, built by Minoru Asada, Japan, is being taught to crawl.


这种观念与古代神话传说有关,例如关于泥人的神话传说(出自犹太民间传说,一尊被赋予生命的泥像)。最近,英国物理学家斯蒂芬·霍金(Stephen Hawking, 1942—2018年)、美国企业家埃隆·马斯克(Elon Musk)和美国未来学家雷·库兹韦尔(Ray Kurzweil)等当代名人,以及我们现在所说的“强人工智能”或“通用人工智能”(AGI)的支持者们,推动这种观念再次流行起来。在这里,我们不讨论第二种含义。因为至少现在,这一含义只能算是天马行空的想象,更多的是受到科幻小说的启发,而不是经过实验和实证观察确认的实实在在的科学现实。


对麦卡锡、明斯基,以及达特茅斯夏季人工智能研究项目的其他研究人员而言,人工智能最初的意图是用机器来模拟人类、动物、植物、社会乃至种系演化所体现的各种智能功能。更准确地说,这一科学学科建立在这样一种猜想之上:所有认知功能——尤其是学习、推理、计算、感知、记忆,乃至科学发现或艺术创造——都能被足够精确地描述,从而有可能通过计算机编程加以复制。在人工智能存在的60多年里,既没有证据能够反驳这一猜想,也没有证据能够无可辩驳地证明它,这一猜想仍然悬而未决,并充满潜力。


ENIAC(电子数字积分计算机)是世界上第一台可编程的电子数字计算机,于1946年问世,占地约170平方米,重达30吨。这台计算机由美国宾夕法尼亚大学研制,曾用于解决核物理学和气象学问题。/ ENIAC (Electronic Numerical Integrator and Computer), the first programmable electronic digital computer, built during the Second World War and completed in 1946.


  坎坷的发展之路  


人工智能出现的时间很短,却已经历了多次变革,可以概括为六个阶段。


◼️ 预言者时期


最初,人工智能起步并取得早期成功,研究人员在欣喜之余任由想象驰骋,轻率地发表了某些预言,因此后来饱受批评。


例如,1978年诺贝尔经济学奖得主美国政治科学家、经济学家赫伯特·西蒙(Herbert A. Simon)曾于1958年宣称,如果能够参加国际比赛的话,计算机将在10年之内问鼎国际象棋世界冠军。


◼️ 低谷时期


到了20世纪60年代中期,进展似乎变得非常缓慢。1965年,一个10岁的孩子在国际象棋比赛中击败了计算机;1966年,一份由美国参议院委托撰写的报告阐述了机器翻译的内在局限性。在此后大约10年的时间里,人工智能的负面消息层出不穷。


◼️ 语义人工智能 


然而,研究工作并未停止,只是有了新的方向:侧重于记忆心理学和理解的机制,并尝试在计算机上对其进行模拟,同时还关注知识在推理中的作用。这推动了语义知识表示技术的产生,这项技术在20世纪70年代中期取得了相当大的发展,同时也促进了专家系统的发展——之所以称为专家系统,是因为这类系统利用技能娴熟的专家的知识来再现他们的思维过程。20世纪80年代早期,专家系统在医疗诊断等众多领域得到应用,给人们带来了巨大的希望。


◼️ 新连接主义和机器学习 


技术进步带来了机器学习算法的发展,这使得计算机能够积累知识,并利用它们自己的经验,自动进行自我重新编程。


工业应用(指纹识别、语音识别等)由此发展起来。其中,人工智能、计算机科学、人造生命和其他学科结合在一起,产生了混合系统。


◼️ 从人工智能到人机界面


从20世纪90年代末开始,人工智能与机器人技术和人机界面相结合,产生了仿佛具有情感与情绪的智能代理。这尤其推动了情绪计算(或称情感计算,即评估体验情绪的主体的反应,并在机器上加以再现)的发展,特别是对话代理(聊天机器人)的发展。


◼️ 人工智能的复兴


自2010年以来,机器运算能力的提升使人们得以借助基于形式化神经网络的深度学习技术来利用海量数据(大数据)。在语音和图像识别、自然语言理解、自动驾驶汽车等多个领域,一系列非常成功的应用正在引领人工智能的复兴。


蓝脑计划(BBP)是欧洲人类大脑计划(HBP)的组成部分,旨在模拟大鼠虚拟神经元微电路的电活动(2015年)。据研究人员称,这标志着向模拟人类大脑功能迈出了一步。/ Simulation of electrical activity in a microcircuit of virtual neurons of a rat (2015), by the Blue Brain Project (BBP) team, part of Europe’s Human Brain Project (HBP).


  应  用  


许多借助人工智能技术取得的成果已经超越了人类的能力——1997年,一个计算机程序击败了卫冕的国际象棋世界冠军;2016年,其他计算机程序又击败了世界顶尖的围棋选手和一些顶级扑克选手。计算机正在证明或帮助证明数学定理;机器学习技术正在从以太字节(10¹²字节)乃至拍字节(10¹⁵字节)计的海量数据中自动构建知识。


因此,机器可以识别语音并进行转录,就像打字员过去所做的那样。计算机可以准确识别数以千万计的面孔和指纹,也可以理解以自然语言编写的文本。利用机器学习技术,汽车可以实现无人驾驶;机器比皮肤科医生更善于利用手机拍摄的皮肤痣照片诊断黑色素瘤;机器人代替人类参与作战;工厂生产线变得越来越自动化。


科学家也在利用这些技术,根据某些生物大分子(特别是蛋白质和基因组)的组成序列——蛋白质的氨基酸、基因组的碱基——来确定它们的某些功能。更普遍地说,所有科学都在经历由“计算机内”(in silico)实验带来的重大认识论断裂。之所以这样称呼,是因为这类实验由计算机在海量数据上进行,而计算机强大处理器的核心正是由硅制成的。因此,这类实验不同于在生命体上进行的“体内”(in vivo)实验,更不同于在玻璃试管中进行的“体外”(in vitro)实验。


如今,人工智能的应用几乎影响了所有的活动领域,尤其是在工业、银行、保险、健康和国防部门。许多日常任务现在已经实现自动化,这使许多行业发生转变,同时也导致一些行业最终消失。


  存在哪些道德风险?


有了人工智能,除了幽默或许是例外,智能的大部分维度都可以通过计算机进行理性分析和重构。此外,机器正在大多数领域超越我们的认知能力,这引发了人们对道德风险的担忧。这些风险分为三类:一是工作变得稀缺,因为原本由人类完成的工作可以改由机器承担;二是对个人自主性,特别是自由与安全的影响;三是人类有可能被更“智能”的机器所取代。


但如果我们审视现实,就会发现(人类从事的)工作并未消失。恰恰相反,这些工作正在发生变化,并且需要新的技能。同样,只要我们在面对侵入我们私人生活的技术时保持警惕,个人的自主和自由并不一定会因为人工智能的发展而遭到破坏。


最后,与某些人宣称的相反,机器并未对人类的生存构成威胁。机器的自主性纯粹是技术性的,它只对应从获取信息到决策的物质因果关系链。另一方面,机器没有道德自主性,因为即使它们在决策过程中迷惑和误导我们,它们也不具备自己的意志,仍然服从于我们指派给它们的目标。


作者简介


让-加布里埃尔·加纳西亚(法国):计算机科学家,巴黎索邦大学教授,索邦大学计算机科学实验室(LIP6)研究人员,欧洲人工智能协会会士,法兰西大学研究院成员,法国国家科学研究中心(CNRS)伦理委员会主席。他目前的研究兴趣包括机器学习、符号数据融合、计算伦理学、计算机伦理和数字人文。


本文发布于联合国教科文组织《信使》杂志(2018-3)。《信使》创办于1948年,旨在宣传教科文组织的理念,充当文化间对话的平台,并为国际讨论提供论坛。自2006年3月起,《信使》以网络版发行,面向全球读者提供教科文组织六种官方语言(英文、阿拉伯文、中文、西班牙文、法文、俄文)以及葡萄牙文、世界语和撒丁语版本,同时还印刷少量纸质版。


 

Are machines likely to become smarter than humans? No, says Jean-Gabriel Ganascia: this is a myth inspired by science fiction. The computer scientist walks us through the major milestones in artificial intelligence (AI), reviews the most recent technical advances, and discusses the ethical questions that require increasingly urgent answers.


Artificial intelligence: between myth and reality


Jean-Gabriel Ganascia

 

A scientific discipline, AI officially began in 1956, during a summer workshop organized by four American researchers – John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon – at Dartmouth College in New Hampshire, United States. Since then, the term “artificial intelligence”, probably first coined to create a striking impact, has become so popular that today everyone has heard of it. This application of computer science has continued to expand over the years, and the technologies it has spawned have contributed greatly to changing the world over the past sixty years.


However, the success of the term AI is sometimes based on a misunderstanding, when it is used to refer to an artificial entity endowed with intelligence and which, as a result, would compete with human beings. This idea, which refers to ancient myths and legends, like that of the golem [from Jewish folklore, an image endowed with life], has recently been revived by contemporary personalities including the British physicist Stephen Hawking (1942-2018), American entrepreneur Elon Musk, American futurist Ray Kurzweil, and proponents of what we now call Strong AI or Artificial General Intelligence (AGI). We will not discuss this second meaning here, because at least for now, it can only be ascribed to a fertile imagination, inspired more by science fiction than by any tangible scientific reality confirmed by experiments and empirical observations.


For McCarthy, Minsky, and the other researchers of the Dartmouth Summer Research Project on Artificial Intelligence, AI was initially intended to simulate each of the different faculties of intelligence – human, animal, plant, social or phylogenetic – using machines. More precisely, this scientific discipline was based on the conjecture that all cognitive functions – especially learning, reasoning, computation, perception, memorization, and even scientific discovery or artistic creativity – can be described with such precision that it would be possible to programme a computer to reproduce them. In the more than sixty years that AI has existed, there has been nothing to disprove or irrefutably prove this conjecture, which remains both open and full of potential. 


  Uneven progress  


In the course of its short existence, AI has undergone many changes. These can be summarized in six stages.


◼️ The time of the prophets


First of all, in the euphoria of AI’s origins and early successes, the researchers had given free rein to their imagination, indulging in certain reckless pronouncements for which they were heavily criticized later. For instance, in 1958, American political scientist and economist Herbert A. Simon – who received the Nobel Prize in Economic Sciences in 1978 – declared that, within ten years, machines would become world chess champions if they were not barred from international competitions.


◼️ The dark years


By the mid-1960s, progress seemed to be slow in coming. A 10-year-old child beat a computer at a chess game in 1965, and a report commissioned by the US Senate in 1966 described the intrinsic limitations of machine translation. AI got bad press for about a decade.


◼️ Semantic AI


The work went on nevertheless, but the research was given new direction. It focused on the psychology of memory and the mechanisms of understanding – with attempts to simulate these on computers – and on the role of knowledge in reasoning. This gave rise to techniques for the semantic representation of knowledge, which developed considerably in the mid-1970s, and also led to the development of expert systems, so called because they use the knowledge of skilled specialists to reproduce their thought processes. Expert systems raised enormous hopes in the early 1980s with a whole range of applications, including medical diagnosis.


◼️ Neo-connectionism and machine learning


Technical improvements led to the development of machine learning algorithms, which allowed computers to accumulate knowledge and to automatically reprogramme themselves, using their own experiences.


This led to the development of industrial applications (fingerprint identification, speech recognition, etc.), where techniques from AI, computer science, artificial life and other disciplines were combined to produce hybrid systems.
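To make the idea of a programme that “reprogrammes itself, using its own experience” a little more concrete, here is a minimal, purely illustrative sketch in Python. It is not drawn from any system mentioned in this article, and the data are made up: it trains a classic perceptron, where the numerical weights of the decision rule are adjusted from a handful of labelled examples instead of being fixed by a programmer.

# Minimal, self-contained illustration (hypothetical toy data): a perceptron whose
# weights are adjusted from labelled examples rather than hand-coded by a programmer.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (features, label) pairs, with label in {0, 1}."""
    n_features = len(examples[0][0])
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            # Prediction with the weights learned so far.
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            # "Experience": shift the weights toward the correct answer.
            error = label - prediction
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

# Toy experience: points whose coordinates sum to more than 1 are labelled 1.
data = [([0.2, 0.1], 0), ([0.9, 0.8], 1), ([0.4, 0.3], 0), ([0.7, 0.9], 1)]
weights, bias = train_perceptron(data)
print(weights, bias)  # the learned rule, produced from data rather than written by hand

The deep learning systems discussed below rest on the same principle, except that millions of such adjustable weights are arranged in layers of formal neurons and tuned on very large datasets.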


◼️ From AI to human-machine interfaces


Starting in the late 1990s, AI was coupled with robotics and human-machine interfaces to produce intelligent agents that suggested the presence of feelings and emotions. This gave rise, among other things, to the calculation of emotions (affective computing), which evaluates the reactions of a subject feeling emotions and reproduces them on a machine, and especially to the development of conversational agents (chatbots).


◼️ Renaissance of AI


Since 2010, the power of machines has made it possible to exploit enormous quantities of data (big data) with deep learning techniques, based on the use of formal neural networks. A range of very successful applications in several areas – including speech and image recognition, natural language comprehension and autonomous cars – is leading to an AI renaissance.


  Applications  


Many achievements using AI techniques surpass human capabilities – in 1997, a computer programme defeated the reigning world chess champion, and more recently, in 2016, other computer programmes beat the world’s best Go [an ancient Chinese board game] players and some top poker players. Computers are proving, or helping to prove, mathematical theorems; knowledge is being automatically constructed from huge masses of data, in terabytes (10¹² bytes), or even petabytes (10¹⁵ bytes), using machine learning techniques.


As a result, machines can recognize speech and transcribe it – just like typists did in the past. Computers can accurately identify faces or fingerprints from among tens of millions, or understand texts written in natural languages. Using machine learning techniques, cars drive themselves; machines are better than dermatologists at diagnosing melanomas using photographs of skin moles taken with mobile phone cameras; robots are fighting wars instead of humans; and factory production lines are becoming increasingly automated.


Scientists are also using AI techniques to determine the function of certain biological macromolecules, especially proteins and genomes, from the sequences of their constituents ‒ amino acids for proteins, bases for genomes. More generally, all the sciences are undergoing a major epistemological rupture with in silico experiments – so named because they are carried out by computers from massive quantities of data, using powerful processors whose cores are made of silicon. In this way, they differ from in vivo experiments, performed on living matter, and above all, from in vitro experiments, carried out in glass test-tubes.


Today, AI applications affect almost all fields of activity – particularly in the industry, banking, insurance, health and defence sectors. Several routine tasks are now automated, transforming many trades and eventually eliminating some.


  What are the ethical risks? 


With AI, most dimensions of intelligence ‒ except perhaps humour ‒ are subject to rational analysis and reconstruction, using computers. Moreover, machines are exceeding our cognitive faculties in most fields, raising fears of ethical risks. These risks fall into three categories – the scarcity of work, because it can be carried out by machines instead of humans; the consequences for the autonomy of the individual, particularly in terms of freedom and security; and the overtaking of humanity, which would be replaced by more “intelligent” machines. 


However, if we examine the reality, we see that work (done by humans) is not disappearing – quite the contrary – but it is changing and calling for new skills. Similarly, an individual’s autonomy and freedom are not inevitably undermined by the development of AI – so long as we remain vigilant in the face of technological intrusions into our private lives.


Finally, contrary to what some people claim, machines pose no existential threat to humanity. Their autonomy is purely technological, in that it corresponds only to material chains of causality that go from the taking of information to decision-making. On the other hand, machines have no moral autonomy, because even if they do confuse and mislead us in the process of making decisions, they do not have a will of their own and remain subjugated to the objectives that we have assigned to them.


About the Author: 


French computer scientist Jean-Gabriel Ganascia is a professor at Sorbonne University, Paris. He is also a researcher at LIP6, the computer science laboratory at the Sorbonne, a fellow of the European Association for Artificial Intelligence, a member of the Institut Universitaire de France and chairman of the ethics committee of the National Centre for Scientific Research (CNRS), Paris. His current research interests include machine learning, symbolic data fusion, computational ethics, computer ethics and digital humanities.


This article was published in the UNESCO Courier (2018-3), a UNESCO magazine first published in 1948. It aims to promote UNESCO’s ideals, provide a platform for dialogue between cultures, and offer a forum for international debate. Available online since March 2006, the UNESCO Courier serves readers around the world in the six official languages of the Organization (Arabic, Chinese, English, French, Russian and Spanish), and also in Portuguese, Esperanto and Sardinian. A limited number of issues are also produced in print.

