Paper Title
PixelHop++: A Small Successive-Subspace-Learning-Based (SSL-based) Model for Image Classification
Paper Authors
Paper Abstract
The successive subspace learning (SSL) principle was developed and used to design an interpretable learning model, known as the PixelHop method, for image classification in our prior work. Here, we propose an improved PixelHop method and call it PixelHop++. First, to make the PixelHop model size smaller, we decouple a joint spatial-spectral input tensor into multiple spatial tensors (one for each spectral component) under the spatial-spectral separability assumption and perform the Saab transform in a channel-wise manner, called the channel-wise (c/w) Saab transform. Second, by performing this operation successively from one hop to the next, we construct a channel-decomposed feature tree whose leaf nodes contain one-dimensional (1D) features. Third, these 1D features are ranked according to their cross-entropy values, which allows us to select a subset of discriminant features for image classification. In PixelHop++, one can control the learning model size at fine granularity, offering a flexible tradeoff between model size and classification performance. We demonstrate the flexibility of PixelHop++ on three datasets: MNIST, Fashion MNIST, and CIFAR-10.
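The channel-wise (c/w) Saab idea described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes 2x2 non-overlapping patches and a toy kernel count, and it uses the standard Saab construction (one constant DC kernel plus PCA-derived AC kernels) applied to each input channel independently, as the separability assumption permits. All function names and parameters are hypothetical.

```python
import numpy as np

def saab_kernels(patches, num_ac_kernels):
    """Learn one-stage Saab kernels from flattened patches of shape (N, D):
    a DC (constant) kernel plus PCA-derived AC kernels."""
    D = patches.shape[1]
    dc = np.ones((1, D)) / np.sqrt(D)            # DC kernel (unit constant vector)
    resid = patches - patches @ dc.T @ dc        # remove the DC component
    resid = resid - resid.mean(axis=0)           # center before PCA
    cov = resid.T @ resid / len(resid)
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    ac = eigvecs[:, ::-1][:, :num_ac_kernels].T  # top-energy AC kernels
    return np.vstack([dc, ac])

def cw_saab(images, patch=2, num_ac_kernels=3):
    """Channel-wise Saab sketch: each channel of an (N, H, W, C) batch is
    treated as an independent spatial tensor and transformed with its own
    kernels, rather than learning one joint spatial-spectral transform."""
    N, H, W, C = images.shape
    outputs = []
    for c in range(C):
        # extract non-overlapping patch x patch blocks from channel c
        x = images[:, :, :, c]
        x = x.reshape(N, H // patch, patch, W // patch, patch)
        x = x.transpose(0, 1, 3, 2, 4).reshape(-1, patch * patch)
        kernels = saab_kernels(x, num_ac_kernels)
        resp = (x @ kernels.T).reshape(N, H // patch, W // patch, -1)
        outputs.append(resp)
    return outputs  # one response tensor per input channel
```

Repeating `cw_saab` hop by hop on each output channel yields the channel-decomposed feature tree; in PixelHop++, low-energy branches can then be pruned and leaf features ranked by cross-entropy, which is how the model size becomes controllable at fine granularity.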