
TED演讲 | 人工智能时代,我们更需坚守人类道德

hello大家好,我是达达。机器智能已经到来,我们已经在用它做出主观决策。但是,人工智能成长和改进的复杂方式,使它难以理解,甚至难以控制。在这个警示性的演讲中,技术社会学家Zeynep Tufekci解释了智能机器会如何以不符合人类出错模式、我们意料不到也没有准备的方式失效。她说:“我们不能把责任外包给机器。”“我们必须更加坚守人类的价值观和人类的道德。”

演说者:Zeynep Tufekci
演说题目:人工智能时代,我们更需坚守人类道德

 中英文对照翻译
So, I started my first job as a computer programmer in my very first year of college -- basically, as a teenager. 
我的第一份工作是程序员,那是在我刚上大学的时候,不到二十岁。 
Soon after I started working, writing software in a company, a manager who worked at the company came down to where I was, and he whispered to me, 'Can he tell if I'm lying?' There was nobody else in the room. 
我开始工作不久,正在一家公司里写软件,公司的一位经理来到我身边,悄悄地对我说:“他能看出来我在撒谎吗?”当时屋子里没有别人。 
'Can who tell if you're lying? And why are we whispering?' The manager pointed at the computer in the room. 'Can he tell if I'm lying?' Well, that manager was having an affair with the receptionist. 
“你是指谁能看出你在撒谎?还有,我们干嘛要悄悄地说话?” 那个经理指着屋子里的电脑,说: “他能看出我在撒谎吗?” 其实,那个经理和前台有一腿。 
And I was still a teenager. So I whisper-shouted back to him, 'Yes, the computer can tell if you're lying.' 
当时我还只是个十几岁的孩子,所以我压低声音回答他:“是的,电脑能看出你在撒谎。” 
Well, I laughed, but actually, the laugh's on me. Nowadays, there are computational systems that can suss out emotional states and even lying from processing human faces. Advertisers and even governments are very interested. 
我笑了,但其实我是在笑自己,现在,计算机系统已经可以通过分析人脸来辨别人的情绪,甚至包括是否在撒谎。广告商,甚至政府都对此很感兴趣。 
I had become a computer programmer because I was one of those kids crazy about math and science. But somewhere along the line I'd learned about nuclear weapons, and I'd gotten really concerned with the ethics of science. I was troubled. 
我选择成为电脑程序员,因为我是那种痴迷于数学和科学的孩子。但在成长的过程中,我了解到了核武器,开始非常关注科学伦理的问题。我为此感到不安。
However, because of family circumstances, I also needed to start working as soon as possible. So I thought to myself, hey, let me pick a technical field where I can get a job easily and where I don't have to deal with any troublesome questions of ethics. So I picked computers. 
但是,因为家庭原因,我需要尽快参加工作。我对自己说,嘿,选一个容易找工作的科技领域吧,并且找个不需要操心伦理问题的。所以我选了计算机。 
Well, ha, ha, ha! All the laughs are on me. Nowadays, computer scientists are building platforms that control what a billion people see every day. They're developing cars that could decide who to run over. They're even building machines, weapons, that might kill human beings in war. It's ethics all the way down. 
哈哈哈,可笑的还是我。如今,计算机科学家们构建的平台,控制着十亿人每天看到的内容;他们在开发能够决定撞向谁的汽车;他们甚至在制造可能在战争中杀人的机器和武器。这归根结底都是伦理问题。 
Machine intelligence is here. We're now using computation to make all sort of decisions, but also new kinds of decisions. We're asking questions to computation that have no single right answers, that are subjective and open-ended and value-laden. 
机器智能来了。我们现在用计算来做各种决策,也包括全新类型的决策。我们向计算提出的,是没有唯一正确答案的问题,是主观的、开放的、涉及价值判断的问题。 
We're asking questions like, 'Who should the company hire?' 'Which update from which friend should you be shown?' 'Which convict is more likely to reoffend?' 'Which news item or movie should be recommended to people?' 
我们会问:“公司应该聘用谁?”“应该给你展示哪个朋友的哪条更新?”“哪个罪犯更可能再次犯罪?”“应该给人们推荐哪条新闻或哪部电影?” 
Look, yes, we've been using computers for a while, but this is different. This is a historical twist, because we cannot anchor computation for such subjective decisions the way we can anchor computation for flying airplanes, building bridges, going to the moon. 
看,没错,我们使用计算机已经有一段时间了,但这次不一样。这是一个历史性的转折,因为在这类主观决策上,我们无法像在驾驶飞机、建造桥梁、登月这些事情上那样,为计算找到可靠的依据。
Are airplanes safer? Did the bridge sway and fall? There, we have agreed-upon, fairly clear benchmarks, and we have laws of nature to guide us. We have no such anchors and benchmarks for decisions in messy human affairs. 
飞机会更安全吗?桥梁会摇晃或倒塌吗?在这些问题上,我们有统一而清晰的判断标准,我们有自然定律来指导。但是在复杂的人类事务上,我们没有这样的客观标准。 
To make things more complicated, our software is getting more powerful, but it's also getting less transparent and more complex. Recently, in the past decade, complex algorithms have made great strides. 
让事情更复杂的是,我们的软件正变得越来越强大,但同时也越来越不透明、越来越复杂。最近十年,复杂算法已经取得了长足的进展。
They can recognize human faces. They can decipher handwriting. They can detect credit card fraud and block spam and they can translate between languages. They can detect tumors in medical imaging. They can beat humans in chess and Go. 
它们可以识别人脸,可以识别手写文字,可以检测信用卡欺诈、屏蔽垃圾信息,可以在不同语言之间翻译,可以在医学影像中检测出肿瘤,还可以在国际象棋和围棋上击败人类。 
Much of this progress comes from a method called 'machine learning.' Machine learning is different than traditional programming, where you give the computer detailed, exact, painstaking instructions. It's more like you take the system and you feed it lots of data, including unstructured data, like the kind we generate in our digital lives. 
类似的很多发展,都来自一种叫“机器学习”的方法。机器学习不像传统程序一样,需要给计算机详细、准确的逐条指令。它更像是你给系统喂了很多数据,包括非结构化数据,比如我们在数字生活中产生的数据。
And the system learns by churning through this data. And also, crucially, these systems don't operate under a single-answer logic. They don't produce a simple answer; it's more probabilistic: 'This one is probably more like what you're looking for.' 
系统通过咀嚼这些数据来学习。而且,关键在于,这些系统并不按单一答案的逻辑运作。它们给出的不是一个简单的答案,而是概率性的:“这个可能更接近你要找的。” 
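为了直观说明“输出的是概率而不是唯一答案”这一点,下面给出一段极简的示意代码(纯属假设性示例,使用 Python 和 scikit-learn 的 LogisticRegression,数据完全虚构,并非演讲中提到的任何真实系统):

```python
# 极简示意:机器学习给出的是概率,而不是唯一答案
# (假设性示例,使用 scikit-learn;数据完全虚构)
from sklearn.linear_model import LogisticRegression

# 历史数据:每行是一条样本的两个特征,标签表示"是否符合要求"
X = [[3, 120], [5, 80], [1, 200], [4, 150], [2, 90], [6, 60]]
y = [1, 0, 1, 1, 0, 0]

model = LogisticRegression()
model.fit(X, y)

# 对一条新样本,模型输出的是各个类别的概率,
# 更像是在说"这个可能更接近你要找的",而不是给出确定的对错
print(model.predict_proba([[3, 110]]))
```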
Now, the upside is: this method is really powerful. The head of Google's AI systems called it, 'the unreasonable effectiveness of data.' The downside is, we don't really understand what the system learned. In fact, that's its power. This is less like giving instructions to a computer; it's more like training a puppy-machine-creature we don't really understand or control. 
它的优势是:这个方法真的非常强大。Google人工智能系统的负责人称之为“数据的不合理有效性”。缺点在于,我们并不真正了解系统学到了什么。事实上,这也正是它的强大之处。这不太像是给计算机下达指令,更像是在训练一个我们并不真正了解、也无法完全控制的小狗般的机器生物。
So this is our problem. It's a problem when this artificial intelligence system gets things wrong. It's also a problem when it gets things right, because we don't even know which is which when it's a subjective problem. We don't know what this thing is thinking. 
这就是我们遇到的问题。当人工智能系统出错时,这是一个问题;当它做对时,同样也是问题,因为面对主观问题,我们甚至分不清哪个算对、哪个算错。我们不知道这东西在想什么。 
So, consider a hiring algorithm -- a system used to hire people, using machine-learning systems. Such a system would have been trained on previous employees' data and instructed to find and hire people like the existing high performers in the company. Sounds good. 
所以,考虑一下招聘算法,也就是用机器学习来招人的系统。这样的系统会用以往员工的数据进行训练,并被要求寻找和聘用与公司现有优秀员工相似的人。听起来很好。
I once attended a conference that brought together human resources managers and executives, high-level people, using such systems in hiring. They were super excited. They thought that this would make hiring more objective, less biased, and give women and minorities a better shot against biased human managers. 
有次我参加了一个会议,会上聚集了在招聘中使用这类系统的人力资源经理和高管们,都是高层人士。他们非常兴奋,认为这能让招聘变得更客观、偏见更少,让女性和少数族裔在面对有偏见的人类经理时获得更好的机会。 
And look -- human hiring is biased. I know. I mean, in one of my early jobs as a programmer, my immediate manager would sometimes come down to where I was really early in the morning or really late in the afternoon, and she'd say, 'Zeynep, let's go to lunch!' I'd be puzzled by the weird timing. It's 4pm. Lunch? 
你看,人类的招聘本来就是有偏见的,这我很清楚。我刚做程序员的那会儿,我的直接主管有时会在清晨很早或下午很晚的时候来找我,说:“Zeynep,我们去吃午饭!”我被这奇怪的时间搞糊涂了:现在是下午4点,吃午饭?
I was broke, so free lunch. I always went. I later realized what was happening. My immediate managers had not confessed to their higher-ups that the programmer they hired for a serious job was a teen girl who wore jeans and sneakers to work. I was doing a good job, I just looked wrong and was the wrong age and gender. 
我当时很穷,有免费午餐,所以每次都去。后来我才明白是怎么回事:我的主管们没有向他们的上级坦白,他们雇来做重要编程工作的,是一个穿牛仔裤和运动鞋上班的十几岁女孩。我的工作做得很好,只是我看起来不对,年龄和性别也不对。 
So hiring in a gender- and race-blind way certainly sounds good to me. But with these systems, it is more complicated, and here's why: Currently, computational systems can infer all sorts of things about you from your digital crumbs, even if you have not disclosed those things. 
所以,不考虑性别和种族的招聘,在我听来当然很好。但有了这些系统,事情会变得更复杂,原因是:如今,计算机系统可以根据你留下的零散数字痕迹,推断出关于你的各种各样的事情,甚至是你从未公开过的事。

They can infer your sexual orientation, your personality traits, your political leanings. They have predictive power with high levels of accuracy. Remember -- for things you haven't even disclosed. This is inference. 
它们可以推断你的性取向,你的性格特点,你的政治倾向。它们有高准确度的预测能力,记住,是你没有公开的事情,这就是推断。 
I have a friend who developed such computational systems to predict the likelihood of clinical or postpartum depression from social media data. The results are impressive. Her system can predict the likelihood of depression months before the onset of any symptoms -- months before. No symptoms, there's prediction. She hopes it will be used for early intervention. Great! But now put this in the context of hiring. 
我有个朋友就是开发这种系统,从社交媒体的数据中,推断患临床或产后抑郁症的可能性。结果令人印象深刻,她的系统可以在症状出现前几个月成功预测到患抑郁的可能性,提前几个月。在有症状之前,就可以预测到,她希望这可以用于临床早期干预,这很棒!现在我们把这项技术放到招聘中来看。 
So at this human resources managers conference, I approached a high-level manager in a very large company, and I said to her, 'Look, what if, unbeknownst to you, your system is weeding out people with high future likelihood of depression? They're not depressed now, just maybe in the future, more likely. What if it's weeding out women more likely to be pregnant in the next year or two but aren't pregnant now? 
在那次人力资源管理者的会议上,我找到一位大公司的高管,对她说:“你看,如果在你不知情的情况下,这个系统正在剔除未来患抑郁症可能性高的人,怎么办?他们现在并不抑郁,只是未来更有可能。如果它正在剔除未来一两年内更可能怀孕、但现在没有怀孕的女性,怎么办?
What if it's hiring aggressive people because that's your workplace culture?' You can't tell this by looking at gender breakdowns. Those may be balanced. And since this is machine learning, not traditional coding, there is no variable there labeled 'higher risk of depression,' 'higher risk of pregnancy,' 'aggressive guy scale.' 
如果因为你们的职场文化,它专挑有攻击性的人来招,怎么办?”只看性别比例,你发现不了这些问题,比例可能看起来很均衡。而且因为这是机器学习,不是传统编程,系统里不会有一个变量被标注为“高抑郁风险”、“高怀孕风险”或“攻击性程度”。
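下面用一段假设性的 Python 示意代码说明这一点:即使特征表里没有任何名为“高抑郁风险”或“高怀孕风险”的变量,只要存在与隐藏属性相关的代理特征,模型依然可能按这个隐藏属性来筛选人(数据和变量名均为虚构,仅作演示):

```python
# 假设性示意:训练数据里没有任何名为"高风险"的变量,
# 但若某个"无害"特征与隐藏属性高度相关,模型仍会间接按隐藏属性筛选人。
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
hidden = rng.integers(0, 2, n)            # 隐藏属性:从未出现在特征表里
proxy = hidden + rng.normal(0, 0.3, n)    # 与隐藏属性高度相关的代理特征
other = rng.normal(0, 1, n)               # 一个无关特征
X = np.column_stack([proxy, other])

# 标签沿用了过去有偏的决策:隐藏属性为 1 的人更少被录用
y = (rng.random(n) > 0.3 + 0.4 * hidden).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)
print("隐藏属性=0 的通过率:", pred[hidden == 0].mean())
print("隐藏属性=1 的通过率:", pred[hidden == 1].mean())
# 两个通过率会明显不同:偏见通过代理特征被模型学了回来
```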

Not only do you not know what your system is selecting on, you don't even know where to begin to look. It's a black box. It has predictive power, but you don't understand it. 
你不仅无法了解系统在选什么样的人,你甚至不知道从哪里入手了解。它是个暗箱。它有预测的能力,但你不了解它。 
'What safeguards,' I asked, 'do you have to make sure that your black box isn't doing something shady?' She looked at me as if I had just stepped on 10 puppy tails.
我问,“你有什么措施可以保证,你的暗箱没有在做些见不得人的事?” 她看着我,就好像我刚踩了10只小狗的尾巴。 
She stared at me and she said, 'I don't want to hear another word about this.' And she turned around and walked away. Mind you -- she wasn't rude. It was clearly: what I don't know isn't my problem, go away, death stare. 
她瞪着我说:“关于这件事,我一个字也不想再听。”然后她转身走开了。注意,她并不是无礼。她的意思很明确:我不知道的就不是我的问题,走开。外加一记死亡凝视。 
Look, such a system may even be less biased than human managers in some ways. And it could make monetary sense. But it could also lead to a steady but stealthy shutting out of the job market of people with higher risk of depression. Is this the kind of society we want to build, without even knowing we've done this, because we turned decision-making to machines we don't totally understand? 
你看,这样的系统在某些方面甚至可能比人类经理的偏见更少,而且在经济上也说得通。但它也可能持续而隐秘地把抑郁风险较高的人挡在就业市场之外。这就是我们想要建立的社会吗?把决策交给我们并不完全理解的机器,甚至都不知道自己做了这样的事? 
Another problem is this: these systems are often trained on data generated by our actions, human imprints. Well, they could just be reflecting our biases, and these systems could be picking up on our biases and amplifying them and showing them back to us, while we're telling ourselves, 'We're just doing objective, neutral computation.' 
另一个问题是:这些系统通常是用我们的行为所产生的数据,也就是人类的印记,来训练的。那么,它们可能只是在反映我们的偏见,这些系统会学到我们的偏见,把它们放大,再呈现给我们,而我们却告诉自己:“我们只是在做客观、中立的计算。” 
Researchers found that on Google, women are less likely than men to be shown job ads for high-paying jobs. And searching for African-American names is more likely to bring up ads suggesting criminal history, even when there is none. Such hidden biases and black-box algorithms that researchers uncover sometimes but sometimes we don't know, can have life-altering consequences. 
研究者发现,在Google上,女性比男性更少被展示高薪工作的广告;搜索非裔美国人的名字,更可能出现暗示有犯罪记录的广告,即使当事人根本没有犯罪记录。这些隐藏的偏见和暗箱算法,研究者有时能揭露出来,但有时我们根本无从得知,而它们的后果可能改变一个人的一生。 
In Wisconsin, a defendant was sentenced to six years in prison for evading the police. You may not know this, but algorithms are increasingly used in parole and sentencing decisions. He wanted to know: How is this score calculated? It's a commercial black box. The company refused to have its algorithm be challenged in open court. 
在威斯康星州,一名被告因逃避警察被判处六年监禁。你可能不知道,算法正越来越多地被用于假释和量刑裁定。他想弄清楚:这个分数是怎么算出来的?这是一个商业暗箱,这家公司拒绝让自己的算法在公开法庭上受到质询。
But ProPublica, an investigative nonprofit, audited that very algorithm with what public data they could find, and found that its outcomes were biased and its predictive power was dismal, barely better than chance, and it was wrongly labeling black defendants as future criminals at twice the rate of white defendants. 
但是,一家名为ProPublica的非营利调查机构,用他们能找到的公开数据对这个算法进行了审计,发现它的结论带有偏见,预测能力很差,比碰运气好不了多少,而且它把黑人被告错误标记为未来罪犯的比率,是白人被告的两倍。 
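这类审计的核心之一,是分组比较“假阳性率”,也就是没有再犯的人被错误标为高风险的比例。下面是一段假设性的 Python 示意(数据为虚构,仅展示计算方式,并非 ProPublica 的实际代码):

```python
# 假设性示意:用结果数据审计一个"风险评分"
# 关键指标:各组的假阳性率 = 没有再犯却被标为高风险的人数 / 没有再犯的人数
def false_positive_rate(labels, preds):
    """labels: 是否真的再犯(1/0);preds: 是否被标为高风险(1/0)"""
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    negatives = sum(1 for y in labels if y == 0)
    return fp / negatives if negatives else float("nan")

# 按群体分别计算,再比较两组的假阳性率是否接近
group_a = {"labels": [0, 0, 1, 0, 1, 0], "preds": [1, 0, 1, 1, 1, 0]}
group_b = {"labels": [0, 0, 1, 0, 1, 0], "preds": [0, 0, 1, 0, 1, 1]}
print("A 组假阳性率:", false_positive_rate(**group_a))
print("B 组假阳性率:", false_positive_rate(**group_b))
```

如果两组的假阳性率差距很大,就说明算法的错误并不是均匀分布的,这正是演讲中提到的那种需要被审计出来的偏差。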
So, consider this case: This woman was late picking up her godsister from a school in Broward County, Florida, running down the street with a friend of hers. They spotted an unlocked kid's bike and a scooter on a porch and foolishly jumped on it. As they were speeding off, a woman came out and said, 'Hey! That's my kid's bike!' They dropped it, they walked away, but they were arrested. 
看下这个案例:这个女人去佛罗里达州布劳沃德县的一所学校接她的教妹,已经迟到了,正和一个朋友沿街奔跑。她们看到门廊上有一辆没上锁的儿童自行车和一辆滑板车,就愚蠢地骑了上去。正当她们要骑走时,一个女人出来喊道:“嘿!那是我孩子的自行车!”她们丢下车走开了,但还是被逮捕了。 
She was wrong, she was foolish, but she was also just 18. She had a couple of juvenile misdemeanors. Meanwhile, that man had been arrested for shoplifting in Home Depot -- 85 dollars' worth of stuff, a similar petty crime. But he had two prior armed robbery convictions. But the algorithm scored her as high risk, and not him. 
她做错了,她很愚蠢,但她也才18岁,之前只有过几次青少年轻罪记录。与此同时,那个男人因在家得宝(Home Depot)偷窃被捕,偷了价值85美元的东西,同样是轻微犯罪,但他此前有两次持枪抢劫的定罪记录。然而算法把她评定为高风险,却没有这样评定他。
Two years later, ProPublica found that she had not reoffended. It was just hard to get a job for her with her record. He, on the other hand, did reoffend and is now serving an eight-year prison term for a later crime. Clearly, we need to audit our black boxes and not have them have this kind of unchecked power. 
两年后,ProPublica发现她没有再次犯罪,只是这条记录让她很难找到工作。而那个男人却再次犯罪,正因为之后的罪行服着八年监禁。显然,我们需要审计这些暗箱,不能让它们拥有这种不受约束的权力。 
Audits are great and important, but they don't solve all our problems. Take Facebook's powerful news feed algorithm -- you know, the one that ranks everything and decides what to show you from all the friends and pages you follow. Should you be shown another baby picture? 
审计很好也很重要,但解决不了我们所有的问题。拿Facebook强大的新闻流算法来说,就是那个对所有内容进行排序、并从你关注的所有朋友和页面中决定给你看什么的算法。该不该再给你推一张婴儿照片?
A sullen note from an acquaintance? An important but difficult news item? There's no right answer. Facebook optimizes for engagement on the site: likes, shares, comments. 
要不要推一条熟人的沮丧状态?要不要推一条重要但艰涩的新闻?这个问题没有正确答案。Facebook是针对网站上的参与度来优化的:点赞、分享、评论。 
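这种“按参与度优化”的排序逻辑,可以用一段极简的 Python 示意来理解(权重、字段和示例数据均为虚构假设,并非 Facebook 的真实算法):

```python
# 假设性示意:一个按"参与度"排序的信息流
# (点赞、分享、评论的加权和;权重和数据均为虚构)
posts = [
    {"title": "婴儿照片", "likes": 900, "shares": 40, "comments": 120},
    {"title": "熟人的沮丧状态", "likes": 12, "shares": 0, "comments": 3},
    {"title": "重要但艰涩的新闻", "likes": 30, "shares": 5, "comments": 8},
]

def engagement_score(post):
    return post["likes"] + 3 * post["shares"] + 2 * post["comments"]

# 参与度低的内容被排到后面,曝光进一步减少,互动也就更少,形成循环
for post in sorted(posts, key=engagement_score, reverse=True):
    print(engagement_score(post), post["title"])
```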
In August of 2014, protests broke out in Ferguson, Missouri, after the killing of an African-American teenager by a white police officer, under murky circumstances. The news of the protests was all over my algorithmically unfiltered Twitter feed, but nowhere on my Facebook. Was it my Facebook friends? 
在2014年8月,密苏里州弗格森市爆发了游行,一个白人警察在不明状况下杀害了一位非裔少年。关于游行的新闻在我的未经算法过滤的Twitter上大量出现,但Facebook上却没有。是因为我的Facebook好友不关注这事吗?
I disabled Facebook's algorithm, which is hard because Facebook keeps wanting to make you come under the algorithm's control, and saw that my friends were talking about it. It's just that the algorithm wasn't showing it to me. I researched this and found this was a widespread problem. 
我禁用了Facebook的算法,这很费劲,因为Facebook总想让你回到算法的控制之下。然后我发现,我的朋友们其实一直在谈论这件事,只是算法没有把这些内容展示给我。我研究了这个现象,发现这是个普遍的问题。 
The story of Ferguson wasn't algorithm-friendly. It's not 'likable.' Who's going to click on 'like?' It's not even easy to comment on. Without likes and comments, the algorithm was likely showing it to even fewer people, so we didn't get to see this. 
弗格森事件对算法并不友好,它不是能让人点“赞”的内容。谁会在这样的文章下点“赞”呢?它甚至不容易被评论。因为没有赞和评论,算法可能把它展示给了更少的人,所以我们就看不到它了。
Instead, that week, Facebook's algorithm highlighted this, which is the ALS Ice Bucket Challenge. Worthy cause; dump ice water, donate to charity, fine. But it was super algorithm-friendly. The machine made this decision for us. A very important but difficult conversation might have been smothered, had Facebook been the only channel. 
相反,就在同一周,Facebook的算法热推了ALS冰桶挑战。这个事业很有意义:浇冰水,为慈善捐款,很好。但它对算法极其友好。机器替我们做了这个决定。如果Facebook是唯一的信息渠道,一场非常重要但艰难的对话可能就这样被扼杀了。 
Now, finally, these systems can also be wrong in ways that don't resemble human systems. Do you guys remember Watson, IBM's machine-intelligence system that wiped the floor with human contestants on Jeopardy? It was a great player. But then, for Final Jeopardy, Watson was asked this question: 'Its largest airport is named for a World War II hero, its second-largest for a World War II battle.' 
最后,这些系统出错的方式,也可能和人类完全不同。你们还记得Watson吧,IBM那个在智力竞赛《危险边缘》中横扫人类选手的机器智能系统,它是个很厉害的选手。但在最后一轮“终极危险边缘”中,Watson被问到这样一道题:“它最大的机场以一位二战英雄命名,第二大机场以一场二战战役命名。” 
(Hums Final Jeopardy music) 
(哼唱《危险边缘》插曲)
Chicago. The two humans got it right. Watson, on the other hand, answered 'Toronto' -- for a US city category! The impressive system also made an error that a human would never make, a second-grader wouldn't make. 
芝加哥。两位人类选手答对了,但Watson答的是, “多伦多”,这是个猜美国城市的环节!这个厉害的系统也会犯人类都不会犯的,二年级小孩都不会犯的错误。 
Our machine intelligence can fail in ways that don't fit error patterns of humans, in ways we won't expect and be prepared for. It'd be lousy not to get a job one is qualified for, but it would triple suck if it was because of stack overflow in some subroutine. 
我们的机器智能可能会以不符合人类出错模式的方式失败,以我们意料不到、也没有准备的方式失败。得不到一份自己完全胜任的工作已经够糟了,但如果原因是某个子程序里的堆栈溢出,那就糟糕三倍了。 
In May of 2010, a flash crash on Wall Street fueled by a feedback loop in Wall Street's 'sell' algorithm wiped a trillion dollars of value in 36 minutes. I don't even want to think what 'error' means in the context of lethal autonomous weapons. 
2010年5月,华尔街发生了一次闪电崩盘,起因是华尔街“卖出”算法中的反馈回路,在36分钟内蒸发了上万亿美元的市值。我甚至不敢去想,在致命的自主武器的场景下,“错误”意味着什么。 
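反馈回路为什么会放大错误,可以用一个高度简化的 Python 玩具模型来感受一下(参数与逻辑纯属虚构假设,和真实的交易算法无关):

```python
# 假设性示意:一个过度简化的"卖出"反馈回路
# 价格下跌触发算法卖出,卖出又进一步压低价格,如此循环
price = 100.0
for minute in range(1, 37):
    sell_pressure = max(0.0, (100.0 - price) * 0.5 + 1.0)  # 跌得越多,卖得越多
    price -= sell_pressure * 0.8                            # 卖出进一步压低价格
    print(f"第 {minute} 分钟: 价格 {price:.2f}")
    if price <= 0:
        break
```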
So yes, humans have always made biases. Decision makers and gatekeepers, in courts, in news, in war ... they make mistakes; but that's exactly my point. We cannot escape these difficult questions. We cannot outsource our responsibilities to machines. Artificial intelligence does not give us a 'Get out of ethics free' card. 
是的,人类一直都有偏见。法庭上、新闻里、战争中的决策者和把关人……他们都会犯错,但这恰恰是我要说的:我们无法回避这些困难的问题,我们不能把自己的责任外包给机器。人工智能不会给我们一张“伦理免责卡”。  
Data scientist Fred Benenson calls this math-washing. We need the opposite. We need to cultivate algorithm suspicion, scrutiny and investigation. We need to make sure we have algorithmic accountability, auditing and meaningful transparency. We need to accept that bringing math and computation to messy, value-laden human affairs does not bring objectivity; rather, the complexity of human affairs invades the algorithms. 
数据科学家Fred Benenson称之为“数学粉饰”。我们需要的恰恰相反:我们需要培养对算法的怀疑、审视和调查。我们需要确保算法的问责、审计和有意义的透明。我们必须承认,把数学和计算引入混乱的、涉及价值判断的人类事务,并不能带来客观性;相反,人类事务的复杂性会侵入算法。
Yes, we can and we should use computation to help us make better decisions. But we have to own up to our moral responsibility to judgment, and use algorithms within that framework, not as a means to abdicate and outsource our responsibilities to one another as human to human. 
是的,我们可以并且应该利用计算来帮助我们做出更好的决策,但我们必须承担起做出判断的道德责任,并在这个框架内使用算法,而不是以此为手段,放弃并外包我们人与人之间彼此应尽的责任。 
Machine intelligence is here. That means we must hold on ever tighter to human values and human ethics. Thank you. 
人工智能到来了,这意味着我们要格外坚守人类的价值观和伦理。谢谢。
