Title
ULDGNN: A Fragmented UI Layer Detector Based on Graph Neural Networks
Authors
Abstract
While some prior works attempt to generate front-end code intelligently from UI screenshots, it may be more convenient to utilize UI design drafts created in Sketch, a popular UI design tool, because they give direct access to multimodal UI information such as layer type, position, size, and visual appearance. However, if fragmented layers are fed into code generation without first being merged into whole components, they can degrade the quality of the generated code. In this paper, we propose a pipeline that merges fragmented layers automatically. We first construct a graph representation of the layer tree of a UI design draft and detect all fragmented layers using visual features and graph neural networks. A rule-based algorithm is then designed to merge the fragmented layers. In experiments on a newly constructed dataset, our approach retrieves most of the fragmented layers in UI design drafts and achieves 87% accuracy on the detection task, and the post-processing algorithm clusters associated layers correctly in simple and common cases.
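As a rough illustration of the first pipeline step only (not the authors' actual implementation), the sketch below builds a graph over UI layers by linking layers whose bounding-box centers lie within a distance threshold. The `Layer` fields, the threshold value, and the toy data are all hypothetical; a real system would derive richer node features (layer type, visual embedding) for the GNN.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Layer:
    # Minimal stand-in for a Sketch layer: name plus bounding box.
    name: str
    x: float
    y: float
    w: float
    h: float

def center(layer):
    """Return the center point of a layer's bounding box."""
    return (layer.x + layer.w / 2, layer.y + layer.h / 2)

def build_layer_graph(layers, max_dist=50.0):
    """Connect pairs of layers whose centers are within max_dist.

    The resulting edge list is one simple way to form the graph
    that a GNN could then classify for fragmented layers.
    """
    edges = []
    for i, j in combinations(range(len(layers)), 2):
        (xi, yi), (xj, yj) = center(layers[i]), center(layers[j])
        if ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5 <= max_dist:
            edges.append((i, j))
    return edges

# Two nearby icon fragments and one distant button (toy data).
layers = [
    Layer("icon/star-part-1", 10, 10, 20, 20),
    Layer("icon/star-part-2", 32, 12, 18, 18),
    Layer("button/submit", 300, 400, 120, 40),
]
print(build_layer_graph(layers))  # → [(0, 1)]: only the fragments are linked
```

In practice the threshold and edge criterion (center distance, bounding-box overlap, or tree adjacency) are design choices; this toy version only shows how spatial proximity can induce the graph structure.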