Search Results for author: Chaofan Wang

Found 8 papers, 2 papers with code

Incorporating Multi-scale Feature Attention into Capsule Network and its Application in Text Classification

no code implementations • CCL 2020 • Chaofan Wang, Shenggen Ju, Jieping Sun, Run Chen

In recent years, capsule networks (Capsnets) have been applied to text classification thanks to their strong text feature learning ability. Most existing work treats the extracted n-gram features of a text as equally important, overlooking the fact that the importance of each n-gram feature attached to a word should be decided by its specific context, which directly affects the model's semantic understanding of the whole text. To address this, this paper proposes the multi-scale feature partially-connected capsule network (MulPart-Capsnets). The method incorporates multi-scale feature attention into Capsnets: the attention automatically selects n-gram features at different scales, and their weighted sum captures rich n-gram features precisely for each word (see the sketch below). To reduce redundant information transfer between child and parent capsules, the routing algorithm is also improved. The proposed algorithm is validated on seven well-known text classification datasets and significantly outperforms existing work, showing that it captures richer n-gram features and has a stronger text feature learning ability.

Text Classification
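
The scale-weighted sum the abstract describes can be sketched as follows. This is a minimal PyTorch illustration, not the authors' MulPart-Capsnets code; the module name, feature size, and scale set (1-, 2-, and 3-grams) are assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleFeatureAttention(nn.Module):
    """Per-word softmax-weighted sum of n-gram features at several scales."""

    def __init__(self, embed_dim=128, scales=(1, 2, 3)):
        super().__init__()
        # One 1-D convolution per n-gram scale; padding="same" keeps length T.
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, embed_dim, k, padding="same") for k in scales
        )
        self.score = nn.Linear(embed_dim, 1)  # one attention logit per scale

    def forward(self, x):  # x: (B, T, D) word embeddings
        feats = torch.stack(
            [conv(x.transpose(1, 2)).transpose(1, 2) for conv in self.convs],
            dim=2,
        )  # (B, T, S, D) with S = number of scales
        attn = torch.softmax(self.score(feats), dim=2)  # weights over scales
        return (attn * feats).sum(dim=2)  # (B, T, D) context-weighted features

x = torch.randn(4, 20, 128)                   # 4 texts, 20 words each
print(MultiScaleFeatureAttention()(x).shape)  # torch.Size([4, 20, 128])
```

The softmax over scales is what lets each word weight its own 1-, 2-, and 3-gram features by context instead of treating them as equally important.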

Factory Operators' Perspectives on Cognitive Assistants for Knowledge Sharing: Challenges, Risks, and Impact on Work

no code implementations • 30 Sep 2024 • Samuel Kernan Freire, Tianhao He, Chaofan Wang, Evangelos Niforatos, Alessandro Bozzon

In the shift towards human-centered manufacturing, our two-year longitudinal study investigates the real-world impact of deploying Cognitive Assistants (CAs) in factories.

Knowledge Sharing in Manufacturing using Large Language Models: User Evaluation and Model Benchmarking

no code implementations • 10 Jan 2024 • Samuel Kernan Freire, Chaofan Wang, Mina Foosherian, Stefan Wellsandt, Santiago Ruiz-Arenas, Evangelos Niforatos

This paper introduces a Large Language Model (LLM)-based system designed to retrieve information from the extensive knowledge contained in factory documentation and from knowledge shared by expert operators (see the sketch below).

Benchmarking • Information Retrieval +4
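
How the retrieval step of such a system could work is sketched below. This is a generic retrieval-augmented pattern under assumptions, not the paper's architecture: `embed` and `llm_complete` are hypothetical placeholders for an embedding model and an LLM client.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical embedding call; returns a deterministic placeholder vector."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

def llm_complete(prompt: str) -> str:
    """Stub for an LLM completion endpoint (swap in a real client)."""
    return f"[answer grounded in a {len(prompt)}-character prompt]"

def answer(question: str, chunks: list[str], k: int = 3) -> str:
    # Rank documentation chunks by cosine similarity to the question ...
    q = embed(question)
    top = sorted(chunks, key=lambda c: float(q @ embed(c)), reverse=True)[:k]
    # ... then ask the LLM to answer only from the retrieved context.
    context = "\n".join(top)
    return llm_complete(f"Using only this documentation:\n{context}\n\nQ: {question}\nA:")

docs = ["Press reset after an E-stop.", "Lubricate the rail weekly.", "Torque bolts to 40 Nm."]
print(answer("What do I do after an emergency stop?", docs))
```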

Heterogeneous Graph Tree Networks

1 code implementation • 1 Sep 2022 • Nan Wu, Chaofan Wang

Heterogeneous graph neural networks (HGNNs) have attracted increasing research interest over the past three years.

Node Classification

GTNet: A Tree-Based Deep Graph Learning Architecture

1 code implementation • 27 Apr 2022 • Nan Wu, Chaofan Wang

We formulate a general propagation rule, following the nature of message passing in the tree, that updates a node's feature by aggregating its initial feature with its neighbor nodes' updated features (see the sketch below).

Graph Attention • Graph Learning +1
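
A minimal sketch of that propagation rule, under assumed notation: each layer recomputes node v's state from its initial feature x_v plus an aggregation of its neighbors' current states. The mean aggregator, tanh nonlinearity, and weight shapes are illustrative choices, not GTNet's exact formulation.

```python
import numpy as np

def propagate(x, adj, num_layers=2, seed=0):
    """x: (N, D) initial node features; adj: neighbor lists for a tree."""
    rng = np.random.default_rng(seed)
    n, d = x.shape
    w_self = rng.normal(scale=0.1, size=(d, d))  # acts on the *initial* feature
    w_nbr = rng.normal(scale=0.1, size=(d, d))   # acts on neighbor messages
    h = x.copy()
    for _ in range(num_layers):
        h_new = np.empty_like(h)
        for v in range(n):
            msg = h[adj[v]].mean(axis=0) if adj[v] else np.zeros(d)
            # aggregate the initial feature with neighbors' updated features
            h_new[v] = np.tanh(x[v] @ w_self + msg @ w_nbr)
        h = h_new
    return h

x = np.random.rand(5, 8)
adj = [[1, 2], [0, 3], [0, 4], [1], [2]]  # a small tree, root = node 0
print(propagate(x, adj).shape)            # (5, 8)
```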

Deep Graph Tree Networks

no code implementations • 29 Sep 2021 • Nan Wu, Chaofan Wang

Models adopting this scheme have the capability of going deep.

Graph Learning • Graph Neural Network
