Title
A conditional one-output likelihood formulation for multitask Gaussian processes
Authors
Abstract
Multitask Gaussian processes (MTGP) are the Gaussian process (GP) framework's solution for multioutput regression problems in which the $T$ elements of the regressors cannot be considered conditionally independent given the observations. Standard MTGP models assume both a multitask covariance matrix, expressed as a function of an intertask matrix, and a noise covariance matrix. These matrices need to be approximated by a low-rank simplification of order $P$ in order to reduce the number of parameters to be learned from $T^2$ to $TP$. Here we introduce a novel approach that simplifies multitask learning by reducing it to a set of conditioned univariate GPs without the need for any low-rank approximation, therefore completely eliminating the requirement to select an adequate value for the hyperparameter $P$. At the same time, by extending this approach with both a hierarchical and an approximate model, the proposed extensions are capable of recovering the multitask covariance and noise matrices after learning only $2T$ parameters, avoiding the validation of any model hyperparameter and reducing the overall complexity of the model as well as the risk of overfitting. Experimental results over synthetic and real problems confirm the advantages of this inference approach in its ability to accurately recover the original noise and signal matrices, as well as the performance improvements achieved in comparison with other state-of-the-art MTGP approaches. We have also integrated the model with standard GP toolboxes, showing that it is computationally competitive with state-of-the-art options.
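The reduction of a multitask likelihood to a set of conditioned univariate likelihoods rests on the chain rule for Gaussians: the joint $T$-variate Gaussian log-likelihood equals the sum of $T$ univariate conditional Gaussian log-likelihoods, each with moments given by standard Gaussian conditioning. The following NumPy sketch verifies this identity numerically for a single multitask observation with a randomly generated covariance; the variable names and the toy setup are illustrative, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 4
A = rng.standard_normal((T, T))
K = A @ A.T + T * np.eye(T)   # a toy T x T positive-definite multitask covariance
y = rng.standard_normal(T)    # one zero-mean multitask observation

# Joint T-variate Gaussian log-likelihood, computed directly.
_, logdet = np.linalg.slogdet(K)
joint = -0.5 * (y @ np.linalg.solve(K, y) + logdet + T * np.log(2 * np.pi))

# Chain rule: log p(y) = sum_t log p(y_t | y_1, ..., y_{t-1}), where each
# factor is a univariate Gaussian whose mean and variance follow from
# conditioning on the previously seen outputs.
cond = 0.0
for t in range(T):
    if t == 0:
        mu, var = 0.0, K[0, 0]
    else:
        sol = np.linalg.solve(K[:t, :t], K[:t, t])
        mu = sol @ y[:t]               # conditional mean
        var = K[t, t] - sol @ K[:t, t]  # conditional variance
    cond += -0.5 * ((y[t] - mu) ** 2 / var + np.log(2 * np.pi * var))

print(np.allclose(joint, cond))  # the two computations agree
```

This identity holds for any ordering of the tasks, which is what makes it possible to train the $T$ conditioned single-output GPs in place of one joint $T$-output model.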