Paper Title
Towards Robust Handwritten Text Recognition with On-the-fly User Participation
Paper Authors
Paper Abstract
Long-term OCR services aim to provide high-quality output to their users at competitive costs. Because users load complex data, the models must be upgraded continually. Service providers encourage users who provide data on which the OCR model fails by rewarding them based on data complexity, readability, and the available budget. Hitherto, OCR work has prepared models on standard datasets without considering the end-users. We propose a strategy of consistently upgrading an existing Handwritten Hindi OCR model three times on the dataset of 15 users. We fix a budget of 4 users for each iteration. For the first iteration, the model is trained directly on the dataset from the first four users. For the remaining iterations, each remaining user writes one page, which the service provider then analyzes to select the 4 (new) best users based on the quality of the model's predictions on the human-readable words. The selected users write 23 more pages for upgrading the model. We upgrade the model with Curriculum Learning (CL) on the data available in the current iteration and compare it against subsets from previous iterations. The upgraded model is tested on a held-out set of one page each from all 23 users. We provide insights from our investigations into the effect of CL, of user selection, and especially of data from unseen writing styles. Our work can be used for long-term OCR services in crowd-sourcing scenarios by both service providers and end-users.
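The per-iteration user-selection step in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes "best users" means those whose submitted page the current model reads with the highest word-level accuracy, and all identifiers (`word_accuracy`, `select_users`, the toy user ids) are hypothetical.

```python
# Hypothetical sketch of the user-selection step: each remaining user submits
# one page; the provider scores the OCR model's predictions on the
# human-readable words of that page and picks the top `budget` users, who
# then write more pages for the next model upgrade.

BUDGET = 4  # users selected per iteration, as fixed in the paper


def word_accuracy(predicted_words, ground_truth_words):
    """Fraction of human-readable words the current model predicts correctly."""
    correct = sum(p == g for p, g in zip(predicted_words, ground_truth_words))
    return correct / len(ground_truth_words)


def select_users(candidate_pages, budget=BUDGET):
    """candidate_pages: {user_id: (predicted_words, transcribed_words)}.
    Returns the `budget` user ids whose pages the model reads best."""
    scores = {
        user: word_accuracy(pred, truth)
        for user, (pred, truth) in candidate_pages.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:budget]


# Toy example: three candidate users with very short "pages" of Hindi words.
pages = {
    "u5": (["राम", "घर"], ["राम", "घर"]),              # 2/2 words correct
    "u6": (["रम", "घर"], ["राम", "घर"]),               # 1/2 words correct
    "u7": (["रम", "घर", "जल"], ["राम", "घर", "जल"]),    # 2/3 words correct
}
print(select_users(pages, budget=2))  # → ['u5', 'u7']
```

In the paper's setting the selected users then contribute 23 further pages each, and the model is upgraded on that data with curriculum learning; the ranking criterion above is only one plausible reading of "quality of predictions on the human-readable words".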