
AI claims to tell criminals apart by their facial features: what do you think?

 琴贵铧 2017-11-07

The fields of artificial intelligence and machine learning are moving so quickly that any notion of ethics is lagging decades behind, or left to works of science fiction. This might explain a new study out of Shanghai Jiao Tong University, which says computers can tell whether you will be a criminal based on nothing more than your facial features.


Not so in the modern age of Artificial Intelligence, apparently: in a paper titled "Automated Inference on Criminality using Face Images," two Shanghai Jiao Tong University researchers say they fed "facial images of 1,856 real persons" into computers and found "some structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so-called nose-mouth angle." They conclude that "all classifiers perform consistently well and produce evidence for the validity of automated face-induced inference on criminality, despite the historical controversy surrounding the topic."

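For readers curious what such a pipeline looks like mechanically, here is a deliberately toy sketch: a nearest-centroid classifier over the three geometric features the paper names. Every number, label, and group here is invented for illustration and has nothing to do with the study's actual model or data; the point is that a classifier of this kind will "separate" any two labeled groups it is given, faithfully encoding whatever biases produced the labels in the first place.

```python
# Toy sketch only: a nearest-centroid classifier over the kinds of geometric
# features the paper names (lip curvature, eye inner-corner distance,
# nose-mouth angle). All values are fabricated for illustration.
import math

# Hypothetical feature vectors: (lip_curvature, eye_corner_dist, nose_mouth_angle)
group_a = [(0.12, 3.1, 18.0), (0.10, 3.0, 17.5), (0.11, 3.2, 18.2)]
group_b = [(0.18, 2.8, 21.0), (0.20, 2.9, 20.5), (0.19, 2.7, 21.3)]

def centroid(rows):
    """Mean of each feature across a group of feature vectors."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(3))

def classify(x, c_a, c_b):
    """Assign x to whichever group's centroid is nearer in Euclidean distance."""
    return "a" if math.dist(x, c_a) < math.dist(x, c_b) else "b"

c_a, c_b = centroid(group_a), centroid(group_b)
print(classify((0.11, 3.05, 17.8), c_a, c_b))  # falls nearer group_a's centroid
```

Note that nothing in the code knows or cares what the labels "a" and "b" mean: swap in labels produced by a biased justice system and the classifier will dutifully reproduce that bias while appearing perfectly objective.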

In the 1920s and 1930s, the Belgians, in their role as occupying power, put together a national program to try to identify individuals’ ethnic identity through phrenology, an abortive attempt to create an ethnicity scale based on measurable physical features such as height, nose width and weight.


This can’t be overstated: The authors of this paper — in 2016 — believe computers are capable of scanning images of your lips, eyes, and nose to detect future criminality.


The study contains virtually no discussion of why there is a "historical controversy" over this kind of analysis — namely, that it was debunked hundreds of years ago. Rather, the authors trot out another discredited argument to support their main claims: that computers can’t be racist, because they’re computers.


Unlike a human examiner/judge, a computer vision algorithm or classifier has absolutely no subjective baggages, having no emotions, no biases whatsoever due to past experience, race, religion, political doctrine, gender, age, etc. Besides the advantage of objectivity, sophisticated algorithms based on machine learning may discover very delicate and elusive nuances in facial characteristics and structures that correlate to innate personal traits.


Source: The Intercept, via 爱语吧. Author: Summer.

