Paper Title
Multimodal Privacy-preserving Mood Prediction from Mobile Data: A Preliminary Study
Paper Authors
Paper Abstract
Mental health conditions remain under-diagnosed even in countries with common access to advanced medical care. The ability to accurately and efficiently predict mood from easily collectible data has several important implications towards the early detection and intervention of mental health disorders. One promising data source to help monitor human behavior is daily smartphone usage. However, care must be taken to summarize behaviors without identifying the user through personal (e.g., personally identifiable information) or protected attributes (e.g., race, gender). In this paper, we study behavioral markers of daily mood using a recent dataset of mobile behaviors from high-risk adolescent populations. Using computational models, we find that multimodal modeling of both text and app usage features is highly predictive of daily mood over each modality alone. Furthermore, we evaluate approaches that reliably obfuscate user identity while remaining predictive of daily mood. By combining multimodal representations with privacy-preserving learning, we are able to push forward the performance-privacy frontier as compared to unimodal approaches.
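The abstract names the goal (mood prediction from fused text and app-usage features while obfuscating user identity) but not the mechanism. One common way to realize this combination is adversarial training with a gradient-reversal layer, sketched below in PyTorch-style Python. This is a minimal illustration, not the paper's confirmed method: the encoder architectures, feature dimensions, class counts, and the gradient-reversal objective are all assumptions.

```python
# Minimal sketch (assumed approach, not the authors' exact method):
# fuse text and app-usage features, predict daily mood, and use a
# gradient-reversal adversary so the shared representation stops
# encoding user identity.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates gradients on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

class MultimodalMoodModel(nn.Module):
    def __init__(self, text_dim, app_dim, hidden=64, n_users=100, lam=1.0):
        super().__init__()
        self.lam = lam
        self.text_enc = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.app_enc = nn.Sequential(nn.Linear(app_dim, hidden), nn.ReLU())
        self.mood_head = nn.Linear(2 * hidden, 3)      # e.g., negative/neutral/positive mood
        self.id_head = nn.Linear(2 * hidden, n_users)  # adversary guessing which user it is

    def forward(self, text_x, app_x):
        # Late fusion: concatenate per-modality encodings.
        z = torch.cat([self.text_enc(text_x), self.app_enc(app_x)], dim=-1)
        mood_logits = self.mood_head(z)
        # The adversary trains normally, but reversed gradients push the
        # shared representation away from being predictive of identity.
        id_logits = self.id_head(GradReverse.apply(z, self.lam))
        return mood_logits, id_logits

# Toy usage with random tensors (dimensions are illustrative).
model = MultimodalMoodModel(text_dim=300, app_dim=50)
text_x, app_x = torch.randn(8, 300), torch.randn(8, 50)
mood_y, id_y = torch.randint(0, 3, (8,)), torch.randint(0, 100, (8,))
mood_logits, id_logits = model(text_x, app_x)
loss = nn.functional.cross_entropy(mood_logits, mood_y) \
     + nn.functional.cross_entropy(id_logits, id_y)
loss.backward()
```

Under this setup, the performance-privacy trade-off the abstract mentions can be traced by varying the reversal strength lam: larger values obfuscate identity more aggressively at some cost to mood accuracy.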