Title
Efficient Continuous Pareto Exploration in Multi-Task Learning
Authors
Abstract
Tasks in multi-task learning often correlate, conflict, or even compete with each other. As a result, a single solution that is optimal for all tasks rarely exists. Recent papers introduced the concept of Pareto optimality to this field and directly cast multi-task learning as a multi-objective optimization problem, but the solutions returned by existing methods are typically finite, sparse, and discrete. We present a novel, efficient method that generates locally continuous Pareto sets and Pareto fronts, which opens up the possibility of continuous analysis of Pareto optimal solutions in machine learning problems. We scale up theoretical results in multi-objective optimization to modern machine learning problems by proposing a sample-based sparse linear system, to which standard Hessian-free solvers in machine learning can be applied. We compare our method to state-of-the-art algorithms and demonstrate its use for analyzing local Pareto sets on various multi-task classification and regression problems. The experimental results confirm that our algorithm reveals the primary directions in local Pareto sets for trade-off balancing, finds more solutions with different trade-offs efficiently, and scales well to tasks with millions of parameters.