Paper Title


A new Sparse Auto-encoder based Framework using Grey Wolf Optimizer for Data Classification Problem

Paper Author

Karim, Ahmad Mozaffer

Paper Abstract


One of the most important properties of deep auto-encoders (DAEs) is their capability to extract high-level features from raw data. Hence, autoencoders have recently been preferred in various classification problems such as image and voice recognition, computer security, medical data analysis, etc. Despite their popularity and high performance, the training phase of autoencoders remains a challenging task, which involves selecting the best parameters that allow the model to approach optimal results. Different training approaches are applied to train sparse autoencoders. Previous studies and preliminary experiments reveal that those approaches may yield remarkable results on some problems but disappointing results on other, more complex problems. Metaheuristic algorithms have emerged over the last two decades and have become an essential part of contemporary optimization techniques. Grey Wolf Optimization (GWO) is one of the most recent of those algorithms and is applied to train sparse auto-encoders in this study. The model is validated on several popular gene expression databases. Results are compared with previous state-of-the-art methods studied on the same data sets and also with other popular metaheuristic algorithms, namely Genetic Algorithms (GA), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). The results reveal that the model trained using GWO outperforms both the conventional models and the models trained with the most popular metaheuristic algorithms.
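The abstract does not specify the authors' exact network architecture, fitness function, or GWO settings, so the following is only a minimal illustrative sketch of the general idea: a single-hidden-layer sparse autoencoder whose flattened weights are searched by a standard Grey Wolf Optimizer, using reconstruction error plus a KL-divergence sparsity penalty as the fitness. All function names, hyper-parameters (rho, beta, population size, iteration count), and the toy data below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ae_loss(params, X, n_in, n_hid, rho=0.05, beta=3.0):
    """Reconstruction MSE plus KL-divergence sparsity penalty for a
    single-hidden-layer sparse autoencoder (assumed fitness, not the paper's)."""
    # Unpack the flat parameter vector into weights and biases.
    idx = 0
    W1 = params[idx:idx + n_in * n_hid].reshape(n_in, n_hid); idx += n_in * n_hid
    b1 = params[idx:idx + n_hid]; idx += n_hid
    W2 = params[idx:idx + n_hid * n_in].reshape(n_hid, n_in); idx += n_hid * n_in
    b2 = params[idx:idx + n_in]
    H = sigmoid(X @ W1 + b1)        # hidden activations
    X_hat = sigmoid(H @ W2 + b2)    # reconstruction
    mse = np.mean((X - X_hat) ** 2)
    rho_hat = np.clip(H.mean(axis=0), 1e-6, 1 - 1e-6)  # mean activation per hidden unit
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return mse + beta * kl

def gwo_train(X, n_hid, n_wolves=20, n_iters=100, seed=0):
    """Standard GWO position update over the flattened autoencoder parameters."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    dim = n_in * n_hid + n_hid + n_hid * n_in + n_in
    wolves = rng.uniform(-1, 1, size=(n_wolves, dim))
    fitness = np.array([ae_loss(w, X, n_in, n_hid) for w in wolves])
    for t in range(n_iters):
        order = np.argsort(fitness)
        alpha = wolves[order[0]].copy()   # best wolf
        beta_w = wolves[order[1]].copy()  # second best
        delta = wolves[order[2]].copy()   # third best
        a = 2.0 * (1 - t / n_iters)       # decreases linearly from 2 to 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta_w, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += leader - A * D
            wolves[i] = new_pos / 3.0     # average of the three leader-guided moves
            fitness[i] = ae_loss(wolves[i], X, n_in, n_hid)
    best = np.argmin(fitness)
    return wolves[best], fitness[best]

# Toy usage: 50 samples with 8 features, compressed to 4 hidden units.
X_demo = np.random.default_rng(1).random((50, 8))
best_params, best_loss = gwo_train(X_demo, n_hid=4, n_wolves=15, n_iters=50)
print("best sparse-AE loss found by GWO:", best_loss)
```

In a full experiment along the lines the abstract describes, the features produced by the trained encoder would be passed to a downstream classifier, and the GA, PSO, or ABC baselines could be compared by swapping only the search routine in place of gwo_train.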
