Paper Title
Source Printer Identification from Document Images Acquired using Smartphone
Authors
Abstract
Vast volumes of printed documents continue to be used for various important as well as trivial applications. Such applications often rely on information provided in the form of printed text documents, whose integrity verification poses a challenge due to time constraints and lack of resources. Source printer identification provides essential information about the origin and integrity of a printed document in a fast and cost-effective manner. Even when fraudulent documents are identified, information about their origin can help prevent future fraud. If a smartphone camera replaces the scanner in the document acquisition process, document forensics becomes more economical, more user-friendly, and even faster in the many applications where remote and distributed analysis is beneficial. Building on existing methods, we propose to learn a single CNN model from the fusion of letter images and their printer-specific noise residuals. In the absence of any publicly available dataset, we created a new dataset consisting of 2250 document images of text documents printed by eighteen printers and acquired by a smartphone camera under five acquisition settings. The proposed method achieves 98.42% document classification accuracy using images of the letter 'e' under a 5×2 cross-validation approach. Further, when tested using about half a million letters of all types, it achieves 90.33% letter and 98.01% document classification accuracy, respectively, highlighting its ability to learn a discriminative model without depending on a single letter type. Classification accuracies also remain encouraging under various acquisition settings, including low illumination and changes in the angle between the document and camera planes.
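The fusion of a letter image with its printer-specific noise residual can be illustrated with a minimal sketch. The residual here is estimated simply as the image minus a box-filtered (denoised) version of itself, and the two are stacked as input channels for a CNN; the paper's actual denoising filter and fusion scheme may differ, and all function names below are illustrative.

```python
import numpy as np

def noise_residual(img, k=3):
    """Estimate a noise residual as the difference between the letter image
    and a smoothed (denoised) version of it.
    Illustrative only: a simple k-by-k box filter stands in for whatever
    denoiser the paper actually uses."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    smooth = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            smooth += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    smooth /= k * k
    return img - smooth

def fuse(img):
    """Stack the letter image and its noise residual as a 2-channel array
    (channels-last), ready to feed a CNN as a single fused input."""
    return np.stack([img, noise_residual(img)], axis=-1)

# Stand-in for a cropped, normalized image of the letter 'e'.
letter = np.random.rand(64, 64)
x = fuse(letter)
print(x.shape)  # (64, 64, 2)
```

A 2-channel input like this lets one CNN see both the letter shape and the high-frequency noise pattern that is characteristic of the source printer, rather than training separate models for each cue.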