Paper Title


DF-Captcha: A Deepfake Captcha for Preventing Fake Calls

Authors

Mirsky, Yisroel

Abstract


Social engineering (SE) is a form of deception that aims to trick people into giving access to data, information, networks, and even money. For decades, SE has been a key method for attackers to gain access to an organization, virtually skipping all lines of defense. Attackers also regularly use SE to scam innocent people by making threatening phone calls that impersonate an authority or by sending infected emails that look like they were sent by a loved one. SE attacks will likely remain a top attack vector for criminals because humans are the weakest link in cyber security. Unfortunately, the threat will only get worse now that a new technology called deepfakes has arrived. A deepfake is believable media (e.g., video) created by an AI. Although the technology has mostly been used to swap the faces of celebrities, it can also be used to 'puppet' different personas. Recently, researchers have shown how this technology can be deployed in real time to clone someone's voice in a phone call or reenact a face in a video call. Given that any novice user can download and use this technology, it is no surprise that criminals have already begun to monetize it to perpetrate SE attacks. In this paper, we propose a lightweight application that can protect organizations and individuals from deepfake SE attacks. Through a challenge-and-response approach, we leverage the technical and theoretical limitations of deepfake technologies to expose the attacker. Existing defense solutions are too heavyweight to serve as end-point solutions and can be evaded by a dynamic attacker. In contrast, our approach is lightweight and breaks the reactive arms race, putting the attacker at a disadvantage.
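The challenge-and-response idea from the abstract can be illustrated with a minimal sketch. Everything below is hypothetical and not from the paper: the challenge bank, the round count, the threshold, and the `respond` callback (a stand-in for whatever media analysis scores the caller's reaction) are all illustrative assumptions.

```python
import random

# Hypothetical challenge bank: tasks that real-time deepfake models
# tend to render poorly (occlusions, profile views, unusual sounds).
# These examples are assumptions for illustration, not the paper's list.
CHALLENGES = [
    "cover part of your face with your hand",
    "turn your head to show your profile",
    "hum a short tune",
    "press a finger against your cheek",
]

def run_captcha(respond, rounds=3, threshold=0.5, seed=0):
    """Issue `rounds` random challenges. `respond(challenge)` is a
    stand-in callback returning a realness score in [0, 1] from some
    external analyzer. The caller passes only if every round clears
    `threshold`; one low score exposes the attacker."""
    rng = random.Random(seed)
    for _ in range(rounds):
        challenge = rng.choice(CHALLENGES)
        score = respond(challenge)
        if score < threshold:
            return False  # visible artifacts -> likely a deepfake
    return True

# Stand-in responders: a genuine caller scores uniformly high, while a
# puppeted deepfake would degrade on occlusion/profile challenges.
genuine = lambda challenge: 0.9
suspected_fake = lambda challenge: 0.2
```

Because each challenge exploits a structural limitation of the generator rather than a fixable artifact, the defender is not locked into a reactive arms race: for example, `run_captcha(genuine)` passes all rounds, while `run_captcha(suspected_fake)` fails on the first.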
