Fake fingerprints are growing rampant thanks to artificial intelligence, and you should beware

Jonathan Vanian | December 3, 2018

Fake digital fingerprints developed with artificial intelligence can fool the fingerprint scanners on smartphones.


Fake digital fingerprints created by artificial intelligence can fool fingerprint scanners on smartphones, according to new research, raising the risk of hackers using the vulnerability to steal from victims’ online bank accounts.

A recent paper by New York University and Michigan State University researchers detailed how deep learning technologies could be used to weaken biometric security systems. The research, supported by a United States National Science Foundation grant, won a best paper award at a conference on biometrics and cybersecurity in October.

Smartphone makers like Apple and Samsung typically use biometric technology in their phones so that people can use fingerprints to easily unlock their devices instead of entering a passcode. Hoping to add some of that convenience, major banks like Wells Fargo are increasingly letting customers access their checking accounts using their fingerprints.

But while fingerprint scanners may be convenient, researchers have found that the software that runs these systems can be fooled. The discovery is important because it underscores how criminals can potentially use cutting-edge AI technologies to do an end run around conventional cybersecurity.

The latest paper about the problem builds on previous research published last year by some of the same NYU and Michigan State researchers. The authors of that paper discovered that they could fool some fingerprint security systems by using either digitally modified or partial images of real fingerprints. These so-called MasterPrints could trick biometric security systems that only rely on verifying certain portions of a fingerprint image rather than the entire print.

One irony is that humans who inspect MasterPrints could likely tell immediately that they were fake, because they contained only partial fingerprints. Software, it turns out, could not.

In the new paper, the researchers used neural networks, foundational AI software that is trained on data, to create convincing-looking digital fingerprints that performed even better than the images used in the earlier study. Not only did the fake fingerprints look real, they contained hidden properties, undetectable by the human eye, that could confuse some fingerprint scanners.

An example of a real fingerprint (left) and an AI-generated fake fingerprint image (right).


Translator: Charlie | Reviewer: Xia Lin

Julian Togelius, one of the paper’s authors and an NYU associate computer science professor, said the team created the fake fingerprints, dubbed DeepMasterPrints, using a variant of neural network technology called “generative adversarial networks (GANs),” which he said “have taken the AI world by storm for the last two years.”

Researchers have used GANs to create convincing-looking but fabricated photos and videos known as “deep fakes,” which some lawmakers worry could be used to create fake videos and propaganda that the general public would think was true. For example, several researchers have described how they could use AI techniques to create fabricated videos of former President Barack Obama giving speeches that never took place, among other things.

AI-altered photos are also fooling computers, as MIT researchers showed last year when they created an image of a turtle that confused Google’s image-recognition software. The technology mistook the turtle for a rifle because it identified hidden elements embedded in the image that shared certain properties with an image of a gun, all of which were unnoticeable by the human eye.
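The mechanism behind that turtle, an adversarial perturbation, can be sketched in a few lines. This is a minimal illustration against an invented toy linear classifier, nothing like Google's actual model: a tiny per-pixel nudge in the direction the classifier is most sensitive to flips its decision while barely changing the image.

```python
import numpy as np

rng = np.random.default_rng(3)
DIM = 1024  # pixels of a flattened toy "image"

# Assumed toy linear classifier: positive score means "rifle",
# negative means "turtle". (A stand-in, not a real vision model.)
w = rng.normal(size=DIM)

turtle = rng.normal(size=DIM)
if turtle @ w > 0:
    turtle = -turtle  # ensure the image starts out as "turtle"

# FGSM-style step: nudge every pixel slightly in the direction
# that most increases the "rifle" score. eps is chosen just large
# enough to flip the decision and stays tiny per pixel.
eps = (abs(float(turtle @ w)) + 1.0) / np.abs(w).sum()
adversarial = turtle + eps * np.sign(w)
```

Here the per-pixel change is a few percent of a typical pixel value, far below what a person would notice, yet the classifier's score crosses zero. The MIT turtle embedded the same kind of structure in a physical object.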

With GANs, researchers typically use a combination of two neural networks that work together to create realistic images embedded with mysterious properties that can fool image-recognition software. Using thousands of publicly available fingerprint images, the researchers trained one neural network to recognize real fingerprint images, and trained the other to create its own fake fingerprints.

They then fed the second neural network’s fake fingerprint images into the first neural network to test how effective they were, explained Philip Bontrager, an NYU PhD candidate in computer science who also worked on the paper. Over time, the second neural network learned to generate realistic-looking fingerprint images that could trick the other neural network.
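The back-and-forth described above can be sketched as a toy adversarial loop. Everything here is an assumed simplification in plain NumPy, not the paper's model: "fingerprints" are short vectors, the discriminator is logistic regression, and the generator learns only a mean vector.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16

# "Real" fingerprints: noisy copies of one fixed pattern (a toy
# stand-in for the public fingerprint images the paper used).
target = rng.normal(size=DIM)
def real_batch(n):
    return target + 0.1 * rng.normal(size=(n, DIM))

# Discriminator: logistic regression that scores a sample as real.
w, b = np.zeros(DIM), 0.0
def d_score(x):
    logits = np.clip(x @ w + b, -30, 30)
    return 1.0 / (1.0 + np.exp(-logits))

# Generator: emits fakes around a learned mean vector.
g_mean = rng.normal(size=DIM)
def fake_batch(n):
    return g_mean + 0.1 * rng.normal(size=(n, DIM))

init_dist = float(np.linalg.norm(g_mean - target))

lr_d, lr_g = 0.5, 0.5
for step in range(300):
    # 1) Train the discriminator: push real toward 1, fake toward 0.
    for x, label in ((real_batch(32), 1.0), (fake_batch(32), 0.0)):
        grad = d_score(x) - label          # dLoss/dlogit for log-loss
        w -= lr_d * (x * grad[:, None]).mean(axis=0)
        b -= lr_d * grad.mean()
    # 2) Train the generator: nudge its fakes toward scores near 1.
    xf = fake_batch(32)
    grad_x = (d_score(xf) - 1.0)[:, None] * w  # dLoss/dx against label 1
    g_mean -= lr_g * grad_x.mean(axis=0)

final_dist = float(np.linalg.norm(g_mean - target))
```

After a few hundred rounds the generator's mean drifts toward the real pattern, so its fakes become hard for the discriminator to reject. DeepMasterPrints trains image-generating networks the same adversarial way, at far larger scale.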

The researchers then fed the fake fingerprint images into fingerprint-scanning software sold by tech companies like Innovatrics and Neurotechnology to see if they could be fooled. Each time a fake fingerprint image tricked one of the commercial systems, the researchers were able to improve their technology to produce more convincing fakes.

The neural network responsible for creating the bogus images embeds a random set of computer code that Bontrager referred to as “noisy data” that can fool fingerprint image recognition software. Although the researchers were able to calibrate this “noisy data” to trip the fingerprint software using what’s known as an evolutionary algorithm, it’s unclear what this code does to the image, since humans are unable to see its impact.
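The evolutionary calibration Bontrager describes can be illustrated with a simple (1+λ) hill climb over the "noisy" latent input. The `matcher_score` function below is an invented black box standing in for a commercial matcher; the loop assumes only that it can query scores, which matches the setting of the attack.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 32  # size of the latent "noise" vector

# Invented black-box matcher: returns a match score in [0, 1].
# The real attack queried commercial matchers; this toy version
# simply rewards latents near a hidden optimum.
hidden_optimum = rng.normal(size=DIM)

def matcher_score(latent):
    return float(np.exp(-np.sum((latent - hidden_optimum) ** 2) / DIM))

def evolve(generations=200, children=16, sigma=0.3):
    """(1+lambda) evolution strategy: mutate the best latent so far
    and keep any child the black-box matcher scores higher."""
    best = rng.normal(size=DIM)
    best_score = matcher_score(best)
    for _ in range(generations):
        for _ in range(children):
            child = best + sigma * rng.normal(size=DIM)
            s = matcher_score(child)
            if s > best_score:
                best, best_score = child, s
    return best, best_score

latent, score = evolve()
```

No gradients of the matcher are needed, which is why this style of search suits a closed commercial system: each query returns only a score.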

To be sure, criminals face a number of obstacles cracking fingerprint scanners. For one, many fingerprint systems rely on other security checks like heat sensors that are used to detect human fingers, Bontrager explained.

But these newly developed DeepMasterPrints show that AI technology can be used for nefarious purposes, which means that cybersecurity firms, banks, smartphone makers, and other companies using biometric technology must constantly improve their systems to keep up with rapid advances in AI.

Togelius said that prior to the paper, researchers hadn’t considered AI-created fake images a “serious threat to biometric systems.” Since its publication, he said, unspecified “large companies” have contacted him to learn more about the possible security threats of fake fingerprints.

Dr. Justas Kranauskas, a research and development manager for Neurotechnology, the maker of fingerprint sensor software, told Fortune in an email that the recent research paper about fooling fingerprint readers “touched” on an important point. But he pointed out that his company uses other kinds of security that the researchers did not incorporate into their study that would, as he put it, ensure a “very low false acceptance risk in real applications.”

Kranauskas also said that Neurotechnology recommends that its corporate customers set their fingerprint scanning software at a higher security level than the levels the researchers used in their paper.

Bontrager, the researcher, noted, however, that the higher the fingerprint security level, the less convenient it is for users, because companies typically want some leeway so that customers don’t have to repeatedly press their fingers on scanners to get accurate reads.

“So obviously, if you choose a high security setting, [spoofing attacks] are less successful,” Bontrager said. “But then it is less convenient,” he added.
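The tradeoff Bontrager describes is a threshold choice, which a short simulation makes concrete. The score distributions below are invented for illustration (no real matcher behaves exactly like this): raising the acceptance threshold cuts the false accept rate for spoofs but raises the false reject rate for genuine users.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented match-score distributions: genuine attempts tend to
# score high, spoof attempts lower, with some overlap.
genuine = np.clip(rng.normal(0.80, 0.10, 10_000), 0.0, 1.0)
spoof = np.clip(rng.normal(0.55, 0.15, 10_000), 0.0, 1.0)

def rates(threshold):
    """False accept rate (spoofs let in) and false reject rate
    (genuine users locked out) at a given acceptance threshold."""
    far = float((spoof >= threshold).mean())
    frr = float((genuine < threshold).mean())
    return far, frr

low_far, low_frr = rates(0.60)    # convenient, lower-security setting
high_far, high_frr = rates(0.90)  # strict, higher-security setting
```

Moving the threshold up, as Neurotechnology recommends, buys security at exactly the convenience cost Bontrager points out: more genuine users have to press their fingers again.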
