Search Results for author: Yuchen Shi

Found 8 papers, 3 papers with code

Negation Triplet Extraction with Syntactic Dependency and Semantic Consistency

1 code implementation15 Apr 2024 Yuchen Shi, Deqing Yang, Jingping Liu, Yanghua Xiao, ZongYu Wang, Huimin Xu

To achieve NTE, we devise a novel Syntax&Semantic-Enhanced Negation Extraction model, namely SSENE, which is built upon a generative pre-trained language model (PLM) of encoder-decoder architecture with a multi-task learning framework.

Language Modelling Multi-Task Learning +2
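The SSENE abstract names a multi-task learning framework on top of a generative PLM, but not the auxiliary tasks or their weights. Purely as a generic sketch (the function and argument names below are illustrative, not the paper's), a main generation loss is typically combined with weighted auxiliary losses like this:

```python
def joint_loss(gen_loss, aux_losses, weights):
    """Weighted sum of a main generation loss and auxiliary task losses.

    gen_loss:   scalar loss of the primary seq2seq objective
    aux_losses: scalars from auxiliary tasks (e.g. a consistency objective)
    weights:    one non-negative weight per auxiliary loss
    """
    if len(aux_losses) != len(weights):
        raise ValueError("one weight per auxiliary loss")
    return gen_loss + sum(w * l for w, l in zip(weights, aux_losses))
```

In practice the weights trade off how much each auxiliary signal shapes the shared encoder-decoder.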

ToNER: Type-oriented Named Entity Recognition with Generative Language Model

no code implementations14 Apr 2024 Guochao Jiang, Ziqin Luo, Yuchen Shi, Dixuan Wang, Jiaqing Liang, Deqing Yang

In recent years, fine-tuned generative models have proven more powerful than previous tagging-based or span-based models on the named entity recognition (NER) task.

Binary Classification Language Modelling +4

CARSS: Cooperative Attention-guided Reinforcement Subpath Synthesis for Solving Traveling Salesman Problem

no code implementations24 Dec 2023 Yuchen Shi, Congying Han, Tiande Guo

The algorithm's primary objective is to enhance efficiency in terms of training memory consumption, testing time, and scalability, through the adoption of a multi-agent divide-and-conquer paradigm.

Multi-agent Reinforcement Learning Traveling Salesman Problem
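CARSS itself uses attention-guided cooperative agents; the underlying divide-and-conquer idea can be illustrated, very loosely, with a plain heuristic that splits the points in half, builds a nearest-neighbour subpath in each part, and concatenates the subpaths (all names below are illustrative, not the paper's method):

```python
import math

def nearest_neighbour_path(points, start=0):
    """Greedy nearest-neighbour ordering of a point set."""
    unvisited = list(range(len(points)))
    path = [unvisited.pop(start)]
    while unvisited:
        last = points[path[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        unvisited.remove(nxt)
        path.append(nxt)
    return path

def divide_and_conquer_tour(points):
    """Split points at the median x-coordinate, solve each half greedily,
    then stitch the two subpaths into one tour (a crude stand-in for
    multi-agent subpath synthesis)."""
    order = sorted(range(len(points)), key=lambda i: points[i][0])
    mid = len(order) // 2
    left, right = order[:mid], order[mid:]
    tour = []
    for part in (left, right):
        sub = nearest_neighbour_path([points[i] for i in part])
        tour.extend(part[j] for j in sub)
    return tour
```

Solving the halves independently is what caps per-subproblem memory; the learned version replaces the greedy choice with a trained policy.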

FedNC: A Secure and Efficient Federated Learning Method with Network Coding

no code implementations5 May 2023 Yuchen Shi, Zheqi Zhu, Pingyi Fan, Khaled B. Letaief, Chenghui Peng

Federated Learning (FL) is a promising distributed learning mechanism that still faces two major challenges, namely privacy breaches and system efficiency.

Federated Learning
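The FedNC abstract only names network coding as the tool, not the concrete scheme. As a toy illustration (an assumption on my part, not the paper's construction), random linear network coding over the reals mixes client update vectors into coded packets, so that intermediate nodes forward only mixtures while the server decodes by solving a linear system:

```python
import numpy as np

def encode_updates(updates, rng):
    """Mix client update vectors into coded packets via random linear coding.

    Returns the random coefficient matrix A and the coded packets A @ U;
    a forwarding node holding one packet never sees a raw client update.
    """
    U = np.stack(updates)  # one update vector per row
    A = rng.standard_normal((len(updates), len(updates)))
    return A, A @ U

def decode_updates(A, coded):
    """Recover the original updates by solving the linear system A X = coded."""
    return np.linalg.solve(A, coded)
```

A random Gaussian coefficient matrix is invertible with probability one, which is what makes decoding reliable in this sketch.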

FedLP: Layer-wise Pruning Mechanism for Communication-Computation Efficient Federated Learning

1 code implementation11 Mar 2023 Zheqi Zhu, Yuchen Shi, Jiajun Luo, Fei Wang, Chenghui Peng, Pingyi Fan, Khaled B. Letaief

By adopting layer-wise pruning in local training and federated updating, we formulate an explicit FL pruning framework, FedLP (Federated Layer-wise Pruning), which is model-agnostic and universal across different types of deep learning models.

Federated Learning
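FedLP's key idea, per the abstract, is pruning whole layers in local training and aggregating layer-wise. A minimal numpy sketch of that idea (the keep-probability scheme and layer names are illustrative assumptions, not the paper's exact algorithm) could look like:

```python
import numpy as np

def local_prune(global_model, keep_prob, rng):
    """A client keeps each layer independently with probability keep_prob
    (layer-wise pruning); only the kept layers would be trained and uploaded."""
    return {name: w.copy() for name, w in global_model.items()
            if rng.random() < keep_prob}

def aggregate(global_model, client_models):
    """Layer-wise federated averaging: each layer is averaged over the
    clients that kept it; layers no client kept retain the global value."""
    new_model = {}
    for name, w in global_model.items():
        kept = [cm[name] for cm in client_models if name in cm]
        new_model[name] = np.mean(kept, axis=0) if kept else w
    return new_model
```

Because clients upload only the layers they kept, both local computation and uplink communication shrink with keep_prob, which is the communication-computation trade-off the title refers to.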

NeuroPrim: An Attention-based Model for Solving NP-hard Spanning Tree Problems

no code implementations22 Oct 2022 Yuchen Shi, Congying Han, Tiande Guo

We apply our framework to three difficult problems on Euclidean space: the Degree-constrained Minimum Spanning Tree (DCMST) problem, the Minimum Routing Cost Spanning Tree (MRCST) problem, and the Steiner Tree Problem in graphs (STP).

Combinatorial Optimization Steiner Tree Problem
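NeuroPrim learns the edge-selection policy with attention; as a point of reference only, a plain greedy Prim-style baseline for the degree-constrained MST (one of the three problems listed above) can be sketched as follows, assuming max_degree >= 2 so the greedy step always has a feasible tree endpoint:

```python
import math

def dcmst_prim(points, max_degree):
    """Greedy Prim-style heuristic for the degree-constrained MST on
    Euclidean points: repeatedly add the cheapest edge from the tree to a
    new vertex, skipping tree vertices already at max_degree."""
    n = len(points)
    in_tree = {0}
    degree = [0] * n
    edges = []
    while len(in_tree) < n:
        best = None
        for u in in_tree:
            if degree[u] >= max_degree:
                continue  # this endpoint is saturated
            for v in range(n):
                if v in in_tree:
                    continue
                d = math.dist(points[u], points[v])
                if best is None or d < best[0]:
                    best = (d, u, v)
        _, u, v = best
        edges.append((u, v))
        degree[u] += 1
        degree[v] += 1
        in_tree.add(v)
    return edges
```

A learned policy replaces the inner argmin with a trained scoring function, which is how the same sequential framework can cover MRCST and STP as well.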

Improving Information Cascade Modeling by Social Topology and Dual Role User Dependency

1 code implementation7 Apr 2022 Baichuan Liu, Deqing Yang, Yueyi Wang, Yuchen Shi

However, the user dependencies in a cascade sequence captured by sequential models are generally unidirectional and inconsistent with diffusion trees.

Adversarial Joint Training with Self-Attention Mechanism for Robust End-to-End Speech Recognition

no code implementations3 Apr 2021 Lujun Li, Yikai Kang, Yuchen Shi, Ludwig Kürzinger, Tobias Watzel, Gerhard Rigoll

Inspired by the extensive applications of the generative adversarial networks (GANs) in speech enhancement and ASR tasks, we propose an adversarial joint training framework with the self-attention mechanism to boost the noise robustness of the ASR system.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +2
