Paper Title


Seq2Seq and Joint Learning Based Unix Command Line Prediction System

Authors

Thoudam Doren Singh, Abdullah Faiz Ur Rahman Khilji, Divyansha, Apoorva Vikram Singh, Surmila Thokchom, Sivaji Bandyopadhyay

Abstract


Despite being an open-source operating system pioneered in the early 90s, UNIX-based platforms have not been able to garner an overwhelming reception from amateur end users. One of the rationales for the under-popularity of UNIX-based systems is the steep learning curve associated with them, owing to the extensive use of a command line interface instead of the usual interactive graphical user interface. In past years, the majority of insights used to explore this concern have been centered around the notion of utilizing the chronic log history of the user to predict the successive command. The approaches directed at the anatomization of this notion are predominantly based on probabilistic inference models. The techniques employed in the past, however, have not been competent enough to address the predicament as legitimately as anticipated. Instead of deploying the usual mechanisms of recommendation systems, we have employed a simple yet novel approach of a Seq2seq model, leveraging continuous representations of a self-curated exhaustive Knowledge Base (KB) to enhance the embeddings employed in the model. This work describes an assistive, adaptive and dynamic way of enhancing UNIX command line prediction systems. Experimental results show that our model achieves an accuracy surpassing a mixture of other techniques and adaptive command line interface mechanisms acclaimed in the past.
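
To make the described setup concrete, below is a minimal illustrative sketch (in PyTorch) of a Seq2seq next-command predictor whose embedding layer can be initialized from externally derived vectors, standing in for the paper's KB-based continuous representations. This is not the authors' implementation; the vocabulary size, dimensions, and the kb_embeddings matrix are hypothetical placeholders.

```python
# Minimal sketch: GRU-based Seq2seq that maps a sequence of past UNIX command
# tokens to the tokens of the next command. Not the authors' released code.
import torch
import torch.nn as nn

class CommandSeq2Seq(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128, kb_embeddings=None):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        if kb_embeddings is not None:
            # Hypothetical KB-derived vectors of shape [vocab_size, emb_dim]
            # used to initialize the embedding table.
            self.embedding.weight.data.copy_(kb_embeddings)
        self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, history, target):
        # history: [batch, src_len] token ids of past commands
        # target:  [batch, tgt_len] token ids of the next command (teacher forcing)
        _, hidden = self.encoder(self.embedding(history))
        dec_out, _ = self.decoder(self.embedding(target), hidden)
        return self.out(dec_out)  # [batch, tgt_len, vocab_size] logits

# Toy usage with a hypothetical 500-token command vocabulary.
model = CommandSeq2Seq(vocab_size=500)
history = torch.randint(0, 500, (2, 10))  # two sessions, last 10 command tokens each
target = torch.randint(0, 500, (2, 3))    # next command, 3 tokens
logits = model(history, target)
print(logits.shape)  # torch.Size([2, 3, 500])
```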
