no code implementations • 13 May 2024 • Clinton Mo, Kun Hu, Chengjiang Long, Dong Yuan, Zhiyong Wang
Comprehensive experiments demonstrate the effectiveness of PC-MRL in motion interpolation for desired skeletons without supervision from native datasets.
no code implementations • 3 May 2024 • Yanli Li, Jehad Ibrahim, Huaming Chen, Dong Yuan, Kim-Kwang Raymond Choo
However, the evaluation of such approaches often relies on a single metric (e.g., accuracy).
no code implementations • 14 Mar 2024 • Dong Yuan, Eti Rastogi, Gautam Naik, Sree Prasanna Rajagopal, Sagar Goyal, Fen Zhao, Bharath Chintagunta, Jeff Ward
LLMs are revolutionizing NLP tasks.
1 code implementation • 11 Jan 2024 • Zhiyu Zhu, Huaming Chen, Xinyi Wang, Jiayu Zhang, Zhibo Jin, Kim-Kwang Raymond Choo, Jun Shen, Dong Yuan
With the functional and characteristic similarity analysis, we introduce a novel gradient editing (GE) mechanism and verify its feasibility in generating transferable samples on various models.
1 code implementation • 18 Dec 2023 • Jiawen Wen, Dong Yuan, Lei Ma, Huaming Chen
As open-source AI software projects become an integral component of AI software development, it is critical to develop novel methods to ensure and measure the security of open-source projects for developers.
no code implementations • 16 Apr 2023 • Yu Zhang, Huaming Chen, Wei Bao, Zhongzheng Lai, Zao Zhang, Dong Yuan
Identifying and tracking all pedestrians in dense crowd scenes with computer vision approaches is a typical challenge in this field, known as Multiple Object Tracking (MOT).
no code implementations • 20 Mar 2023 • Nan Yang, Xuanyu Chen, Charles Z. Liu, Dong Yuan, Wei Bao, Lizhen Cui
Recent federated learning (FL) methods have started to focus on how to use unlabeled data on clients for training, due to users' privacy concerns, high labeling costs, or lack of expertise.
no code implementations • 23 Feb 2023 • Nan Yang, Dong Yuan, Charles Z Liu, Yongkun Deng, Wei Bao
Most existing federated learning methods assume that clients have fully labeled data to train on, while in reality, it is hard for the clients to get task-specific labels due to users' privacy concerns, high labeling costs, or lack of expertise.
no code implementations • 17 Feb 2023 • Nan Yang, Laicheng Zhong, Fan Huang, Dong Yuan, Wei Bao
Random Padding is parameter-free, simple to construct, and compatible with the majority of CNN-based recognition models.
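The abstract does not spell out the mechanism, so the following is only a hedged illustration of one parameter-free random-padding idea: keep the total padding per axis fixed (so output sizes match symmetric padding) but split it randomly between the two sides. The function name and exact behaviour are assumptions for illustration, not the paper's definition.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_pad(x, total=2):
    """Pad a 2-D feature map with `total` zero rows and columns per axis,
    split randomly between the two sides (illustrative sketch only)."""
    top = rng.integers(0, total + 1)    # rows added above; rest go below
    left = rng.integers(0, total + 1)   # columns added on the left; rest right
    return np.pad(x, ((top, total - top), (left, total - left)))

fmap = np.ones((4, 4))
out = random_pad(fmap)
print(out.shape)  # (6, 6): same output size as symmetric padding of 1 per side
```

Because the total amount of padding is fixed, such a layer introduces no learnable parameters and leaves downstream tensor shapes unchanged, which is consistent with the compatibility claim in the abstract.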
no code implementations • 5 Dec 2022 • Weiyuan Gong, Dong Yuan, Weikang Li, Dong-Ling Deng
To address this issue, we propose a general scheme to protect quantum learning systems from adversarial attacks by randomly encoding the legitimate data samples through unitary or quantum error correction encoders.
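As a toy classical illustration of the random-encoding idea (not the paper's actual protection scheme), the sketch below draws a Haar-random unitary via the standard QR construction and applies it to a data state; unitarity guarantees the encoded state remains normalized.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_unitary(dim):
    """Haar-distributed random unitary via QR of a complex Gaussian matrix."""
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    Q, R = np.linalg.qr(A)
    # Fix the phases of Q's columns so the distribution is Haar-uniform.
    phases = np.diag(R) / np.abs(np.diag(R))
    return Q * phases

# Encode a normalized 2-qubit basis state with a random unitary.
psi = np.array([1.0, 0.0, 0.0, 0.0], dtype=complex)
U = random_unitary(4)
encoded = U @ psi

print(np.linalg.norm(encoded))  # unitary encoding preserves the norm, ≈ 1.0
```

The legitimate receiver, who knows which unitary was sampled, can invert the encoding with `U.conj().T`, while an adversary crafting perturbations against the raw data sees effectively randomized inputs.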
no code implementations • 26 Oct 2022 • Zhengjie Yang, Sen Fu, Wei Bao, Dong Yuan, Albert Y. Zomaya
In this paper, we propose Hierarchical Federated Learning with Momentum Acceleration (HierMo), a three-tier worker-edge-cloud federated learning algorithm that applies momentum for training acceleration.
no code implementations • NeurIPS 2021 • Xiuwen Gong, Dong Yuan, Wei Bao
To deal with ambiguities in partial multilabel learning (PML), state-of-the-art methods perform disambiguation by identifying ground-truth labels directly.
no code implementations • 31 Aug 2021 • Xiuwen Gong, Dong Yuan, Wei Bao
The goal of this paper is to provide a simple method, yet with provable guarantees, which can achieve competitive performance without a complex training process.
no code implementations • 24 Feb 2021 • Xuejun Li, Tianxiang Chen, Dong Yuan, Jia Xu, Xiao Liu
To achieve better Quality of Service (QoS), for instance, faster response time and lower energy consumption, computation offloading is widely used in the MEC environment.
no code implementations • 18 Sep 2020 • Zhengjie Yang, Wei Bao, Dong Yuan, Nguyen H. Tran, Albert Y. Zomaya
It is well-known that Nesterov Accelerated Gradient (NAG) is a more advantageous form of momentum, but it is not clear how to quantify the benefits of NAG in FL so far.
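To make the momentum variants concrete (a toy sketch on f(x) = x², not the paper's federated algorithm): classical heavy-ball momentum accumulates the gradient at the current point, while NAG evaluates the gradient at a "look-ahead" point, which typically damps oscillations and speeds convergence.

```python
def grad(x):
    # Gradient of the toy objective f(x) = x^2.
    return 2.0 * x

def heavy_ball(x0, lr=0.1, gamma=0.9, steps=100):
    x, v = x0, 0.0
    for _ in range(steps):
        v = gamma * v + grad(x)                   # gradient at current point
        x = x - lr * v
    return x

def nag(x0, lr=0.1, gamma=0.9, steps=100):
    x, v = x0, 0.0
    for _ in range(steps):
        v = gamma * v + grad(x - lr * gamma * v)  # gradient at look-ahead point
        x = x - lr * v
    return x

print(abs(heavy_ball(5.0)), abs(nag(5.0)))  # NAG lands much closer to 0
```

On this quadratic both iterations are contractions, but NAG's spectral radius is smaller, so it converges noticeably faster for the same learning rate and momentum coefficient.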
no code implementations • 12 Jun 2020 • Xiuwen Gong, Jiahui Yang, Dong Yuan, Wei Bao
Specifically, in order to learn the new $k$NN-based metric, we first project instances in the training dataset into the label space, which makes it possible to compare instances and labels in the same dimension.
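A minimal numpy sketch of this projection idea, using a least-squares map into the label space (one simple choice of projection; the paper's learned metric may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-label data: n instances, d features, q labels.
n, d, q = 50, 8, 3
X = rng.normal(size=(n, d))
W_true = rng.normal(size=(d, q))
Y = (X @ W_true > 0).astype(float)       # binary label matrix, shape (n, q)

# Project instances into the q-dimensional label space via least squares.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
Z = X @ W                                # projected instances, shape (n, q)

# Instances and labels now live in the same space, so kNN comparisons
# between a projected query and the training labels are well-defined.
def knn_predict(x_new, k=5):
    z = x_new @ W                        # project the query instance
    dist = np.linalg.norm(Z - z, axis=1) # distances in label space
    nbrs = np.argsort(dist)[:k]
    return (Y[nbrs].mean(axis=0) >= 0.5).astype(float)

pred = knn_predict(X[0])
```

After the projection, the prediction step is an ordinary $k$NN vote, but carried out in the label space rather than the original feature space.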