Paper Title
How Much User Context Do We Need? Privacy by Design in Mental Health NLP Application
Paper Authors
Paper Abstract
Clinical NLP tasks, such as mental health assessment from text, must take social constraints into account: performance maximization must be constrained by the utmost importance of guaranteeing the privacy of user data. Consumer protection regulations, such as the GDPR, generally handle privacy by restricting data availability, for example by requiring that user data be limited to 'what is necessary' for a given purpose. In this work, we argue that providing stricter formal privacy guarantees while increasing the volume of user data in the model, in most cases, increases the benefit for all parties involved, especially for the user. We demonstrate our arguments on two existing suicide risk assessment datasets of Twitter and Reddit posts. We present the first analysis juxtaposing user history length and differential privacy budgets, and we elaborate on how modeling additional user context enables utility preservation while maintaining acceptable user privacy guarantees.
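The differential privacy budgets mentioned in the abstract quantify a tradeoff between privacy and utility: a smaller budget ε forces larger random noise, protecting individual users at the cost of accuracy. As a minimal, self-contained illustration of this tradeoff (not the paper's actual training-time mechanism), the classic Laplace mechanism releases a numeric statistic under an ε budget:

```python
import math
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with epsilon-differential privacy via Laplace noise.

    sensitivity: the maximum change one user's data can cause in true_value.
    Smaller epsilon (stricter privacy) yields a larger noise scale b = sensitivity / epsilon.
    """
    scale = sensitivity / epsilon
    # Inverse-CDF sampling of Laplace(0, scale): u uniform on (-0.5, 0.5)
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

if __name__ == "__main__":
    random.seed(0)
    # Strict budget (epsilon = 0.1) -> noisy answers; loose budget (epsilon = 10) -> nearly exact.
    strict = [laplace_mechanism(10.0, 1.0, 0.1) for _ in range(1000)]
    loose = [laplace_mechanism(10.0, 1.0, 10.0) for _ in range(1000)]
    avg_err_strict = sum(abs(v - 10.0) for v in strict) / len(strict)
    avg_err_loose = sum(abs(v - 10.0) for v in loose) / len(loose)
    print(f"avg |error| at eps=0.1: {avg_err_strict:.2f}")
    print(f"avg |error| at eps=10:  {avg_err_loose:.4f}")
```

The paper's setting applies an analogous budget to model training on user histories rather than to a single statistic, but the same principle holds: the budget ε, not the raw amount of data, is what bounds how much any one user's posts can influence the released output.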