no code implementations • 27 Dec 2024 • Dong Yuan, Eti Rastogi, Fen Zhao, Sagar Goyal, Gautam Naik, Sree Prasanna Rajagopal
Due to the exponential growth of information and the need for efficient information consumption, the task of summarization has gained paramount importance.
no code implementations • 26 Dec 2024 • Haonan He, Yuchen Ren, Yining Tang, Ziyang Xu, Junxian Li, Minghao Yang, Di Zhang, Dong Yuan, Tao Chen, Shufei Zhang, Yuqiang Li, Nanqing Dong, Wanli Ouyang, Dongzhan Zhou, Peng Ye
Large language models have already demonstrated their formidable capabilities in general domains, ushering in a revolutionary transformation.
no code implementations • 13 Dec 2024 • Yuchen Ren, Wenwei Han, Qianyuan Zhang, Yining Tang, Weiqiang Bai, Yuchen Cai, Lifeng Qiao, Hao Jiang, Dong Yuan, Tao Chen, Siqi Sun, Pan Tan, Wanli Ouyang, Nanqing Dong, Xinzhu Ma, Peng Ye
To address this, we introduce the first comprehensive multi-omics benchmark COMET (Benchmark for Biological COmprehensive Multi-omics Evaluation Tasks and Language Models), designed to evaluate models across single-omics, cross-omics, and multi-omics tasks.
no code implementations • 17 Oct 2024 • Zao Zhang, Huaming Chen, Pei Ning, Nan Yang, Dong Yuan
Leveraging these insights, we present the Correlation-Aware Knowledge Distillation (CAKD) framework.
1 code implementation • 28 Aug 2024 • Dong Yuan, Frederic Maire, Feras Dayoub
This paper introduces a novel approach to enhancing cross-view localization, focusing on the fine-grained, sequential localization of street-view images within a single known satellite image patch, a significant departure from traditional one-to-one image retrieval methods.
1 code implementation • 27 Jul 2024 • Penghui Wen, Kun Hu, Dong Yuan, Zhiyuan Ning, Changyang Li, Zhiyong Wang
Additionally, spatio-temporal patterns of human motion dynamics have not been fully explored in HSS.
no code implementations • 9 Jul 2024 • Yanli Li, Zhongliang Guo, Nan Yang, Huaming Chen, Dong Yuan, Weiping Ding
To provide a clear understanding of the current research landscape, this paper reviews the most representative and state-of-the-art threats and defense frameworks throughout the FL service life cycle.
no code implementations • 25 Jun 2024 • Kacy Zhou, Jiawen Wen, Nan Yang, Dong Yuan, Qinghua Lu, Huaming Chen
Intersectional bias, which disproportionately affects members of subgroups, is a prime example of this.
1 code implementation • 14 Jun 2024 • Yuchen Ren, ZhiYuan Chen, Lifeng Qiao, Hongtai Jing, Yuchen Cai, Sheng Xu, Peng Ye, Xinzhu Ma, Siqi Sun, Hongliang Yan, Dong Yuan, Wanli Ouyang, Xihui Liu
RNA plays a pivotal role in translating genetic instructions into functional outcomes, underscoring its importance in biological processes and disease mechanisms.
1 code implementation • 12 Jun 2024 • Zhongzheng Lai, Huaming Chen, Ruoxi Sun, Yu Zhang, Minhui Xue, Dong Yuan
In this work, we specifically look into deep learning (DL) frameworks and perform the first systematic study of vulnerabilities in DL systems through a comprehensive analysis of identified vulnerabilities from Common Vulnerabilities and Exposures (CVE) and open-source DL tools, including TensorFlow, Caffe, OpenCV, Keras, and PyTorch.
no code implementations • 13 May 2024 • Clinton Mo, Kun Hu, Chengjiang Long, Dong Yuan, Zhiyong Wang
Comprehensive experiments demonstrate the effectiveness of PC-MRL in motion interpolation for desired skeletons without supervision from native datasets.
no code implementations • 3 May 2024 • Yanli Li, Jehad Ibrahim, Huaming Chen, Dong Yuan, Kim-Kwang Raymond Choo
However, the evaluation of such approaches often relies on a single metric (e.g., accuracy).
no code implementations • 14 Mar 2024 • Dong Yuan, Eti Rastogi, Gautam Naik, Sree Prasanna Rajagopal, Sagar Goyal, Fen Zhao, Bharath Chintagunta, Jeff Ward
LLMs are revolutionizing NLP tasks.
1 code implementation • 11 Jan 2024 • Zhiyu Zhu, Huaming Chen, Xinyi Wang, Jiayu Zhang, Zhibo Jin, Kim-Kwang Raymond Choo, Jun Shen, Dong Yuan
With the functional and characteristic similarity analysis, we introduce a novel gradient editing (GE) mechanism and verify its feasibility in generating transferable samples on various models.
1 code implementation • 18 Dec 2023 • Jiawen Wen, Dong Yuan, Lei Ma, Huaming Chen
As open-source AI software projects become an integral component of AI software development, it is critical to develop novel methods to ensure and measure the security of open-source projects for developers.
no code implementations • 16 Apr 2023 • Yu Zhang, Huaming Chen, Wei Bao, Zhongzheng Lai, Zao Zhang, Dong Yuan
Being able to identify and track all the pedestrians in dense crowd scenes with computer vision approaches is a typical challenge in this field, also known as the Multiple Object Tracking (MOT) challenge.
no code implementations • 20 Mar 2023 • Nan Yang, Xuanyu Chen, Charles Z. Liu, Dong Yuan, Wei Bao, Lizhen Cui
The latest federated learning (FL) methods have started to focus on how to use unlabeled data on clients for training, due to users' privacy concerns, high labeling costs, or lack of expertise.
no code implementations • 23 Feb 2023 • Nan Yang, Dong Yuan, Charles Z Liu, Yongkun Deng, Wei Bao
Most existing federated learning methods assume that clients have fully labeled data to train on, while in reality, it is hard for the clients to get task-specific labels due to users' privacy concerns, high labeling costs, or lack of expertise.
no code implementations • 17 Feb 2023 • Nan Yang, Laicheng Zhong, Fan Huang, Dong Yuan, Wei Bao
Random Padding is parameter-free, simple to construct, and compatible with the majority of CNN-based recognition models.
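As a rough illustration of the general idea only, the sketch below applies a parameter-free random padding to an input image before it enters a CNN; the pad budget, tensor layout, and zero-fill are assumptions for this toy example rather than the paper's exact construction.

```python
import numpy as np

def random_pad(image: np.ndarray, pad: int = 4) -> np.ndarray:
    """Pad an HxWxC image with `pad` extra zero pixels per spatial axis,
    splitting the budget randomly between the two sides of each axis."""
    top = np.random.randint(0, pad + 1)
    left = np.random.randint(0, pad + 1)
    return np.pad(
        image,
        ((top, pad - top), (left, pad - left), (0, 0)),
        mode="constant",
    )

# Example: augment a 32x32 RGB image before it enters the network.
img = np.zeros((32, 32, 3), dtype=np.float32)
augmented = random_pad(img)  # shape (36, 36, 3), object shifted at random
```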
no code implementations • 5 Dec 2022 • Weiyuan Gong, Dong Yuan, Weikang Li, Dong-Ling Deng
To address this issue, we propose a general scheme to protect quantum learning systems from adversarial attacks by randomly encoding the legitimate data samples through unitary or quantum error correction encoders.
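As a minimal sketch of the random-encoding idea (the Haar-random unitary construction, the dimension, and the toy state below are illustrative assumptions, not the paper's scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(dim: int) -> np.ndarray:
    """Haar-random unitary via QR decomposition of a complex Gaussian matrix."""
    Z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))  # fix column phases

dim = 8
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)            # legitimate sample as a state vector

U = haar_unitary(dim)                 # secret, randomly drawn encoder
encoded = U @ psi                     # what the learning system receives
recovered = U.conj().T @ encoded      # the legitimate pipeline can invert it

print(np.allclose(recovered, psi))    # True: the encoding is lossless
```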
no code implementations • 26 Oct 2022 • Zhengjie Yang, Sen Fu, Wei Bao, Dong Yuan, Albert Y. Zomaya
In this paper, we propose Hierarchical Federated Learning with Momentum Acceleration (HierMo), a three-tier worker-edge-cloud federated learning algorithm that applies momentum for training acceleration.
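A simplified, hedged sketch of worker-edge-cloud aggregation with local momentum on a toy least-squares problem is shown below; the tier sizes, update rule, and placement of momentum are assumptions for illustration rather than the HierMo algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, beta, lr = 5, 0.9, 0.05
w_true = rng.normal(size=dim)

def make_worker():
    X = rng.normal(size=(20, dim))
    return X, X @ w_true + 0.1 * rng.normal(size=20)

# Two edges, each serving three workers (toy least-squares tasks).
edges = [[make_worker() for _ in range(3)] for _ in range(2)]
w_cloud = np.zeros(dim)
velocity = {(e, k): np.zeros(dim) for e in range(2) for k in range(3)}

for _ in range(50):                              # cloud rounds
    edge_models = []
    for e, workers in enumerate(edges):
        worker_models = []
        for k, (X, y) in enumerate(workers):
            w = w_cloud.copy()
            for _ in range(5):                   # local momentum SGD steps
                grad = X.T @ (X @ w - y) / len(y)
                velocity[(e, k)] = beta * velocity[(e, k)] + grad
                w = w - lr * velocity[(e, k)]
            worker_models.append(w)
        edge_models.append(np.mean(worker_models, axis=0))  # edge aggregation
    w_cloud = np.mean(edge_models, axis=0)                  # cloud aggregation

print(np.linalg.norm(w_cloud - w_true))          # should be small
```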
no code implementations • NeurIPS 2021 • Xiuwen Gong, Dong Yuan, Wei Bao
To deal with ambiguities in partial multilabel learning (PML), state-of-the-art methods perform disambiguation by identifying ground-truth labels directly.
no code implementations • 31 Aug 2021 • Xiuwen Gong, Dong Yuan, Wei Bao
The goal of this paper is to provide a simple method, yet with provable guarantees, which can achieve competitive performance without a complex training process.
no code implementations • 24 Feb 2021 • Xuejun Li, Tianxiang Chen, Dong Yuan, Jia Xu, Xiao Liu
To achieve better Quality of Service (QoS), such as faster response times and lower energy consumption, computation offloading is widely used in the MEC environment. A toy, hedged illustration of the latency/energy trade-off behind offloading decisions follows below; the CPU energy model, parameter values, and decision rule are assumptions, not the paper's formulation.
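```python
def should_offload(cycles, data_bits, f_local=1e9, f_edge=8e9,
                   bandwidth=10e6, p_tx=0.5, kappa=1e-27):
    """Toy rule: offload only if both latency and device energy improve.
    Device energy: kappa * f^2 per cycle locally; transmit power * time otherwise."""
    t_local = cycles / f_local
    e_local = kappa * f_local ** 2 * cycles
    t_offload = data_bits / bandwidth + cycles / f_edge
    e_offload = p_tx * (data_bits / bandwidth)
    return t_offload < t_local and e_offload < e_local

# Example: a task of 5e8 CPU cycles with 2 Mb of input data.
print(should_offload(cycles=5e8, data_bits=2e6))  # True under these assumptions
```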
no code implementations • 18 Sep 2020 • Zhengjie Yang, Wei Bao, Dong Yuan, Nguyen H. Tran, Albert Y. Zomaya
It is well-known that Nesterov Accelerated Gradient (NAG) is a more advantageous form of momentum, but so far it has not been clear how to quantify the benefits of NAG in FL.
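For context, the sketch below contrasts classical heavy-ball momentum with the Nesterov look-ahead update on a toy quadratic; it is a generic illustration under assumed hyperparameters, not the paper's federated formulation.

```python
import numpy as np

def heavy_ball(grad, w, v, lr=0.1, beta=0.9):
    """Classical (heavy-ball) momentum step."""
    v = beta * v + grad(w)
    return w - lr * v, v

def nesterov(grad, w, v, lr=0.1, beta=0.9):
    """Nesterov step: evaluate the gradient at the look-ahead point."""
    v = beta * v + grad(w - lr * beta * v)
    return w - lr * v, v

# Toy quadratic f(w) = 0.5 * w^T A w with ill-conditioned A.
A = np.diag([1.0, 10.0])
grad = lambda w: A @ w

w_hb = w_nag = np.array([1.0, 1.0])
v_hb = v_nag = np.zeros(2)
for _ in range(50):
    w_hb, v_hb = heavy_ball(grad, w_hb, v_hb)
    w_nag, v_nag = nesterov(grad, w_nag, v_nag)

print(np.linalg.norm(w_hb), np.linalg.norm(w_nag))  # NAG ends up closer to the optimum
```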
no code implementations • 12 Jun 2020 • Xiuwen Gong, Jiahui Yang, Dong Yuan, Wei Bao
Specifically, in order to learn the new $k$NN-based metric, we first project instances in the training dataset into the label space, which makes it possible to compare instances and labels in the same dimension.
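A rough sketch of the label-space-projection idea on synthetic multi-label data is given below; the ridge-regression projection, neighbor count, and voting threshold are illustrative assumptions rather than the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-label data: X (n x d) features, Y (n x L) binary label vectors.
n, d, L, k = 100, 10, 5, 5
X = rng.normal(size=(n, d))
W_true = rng.normal(size=(d, L))
Y = (X @ W_true > 0).astype(float)

# Linear projection into the label space via ridge regression (assumption).
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def predict(x_new: np.ndarray) -> np.ndarray:
    """Project a new instance into label space and vote over its k nearest
    training projections."""
    z = x_new @ W                               # instance in label space
    Z = X @ W                                   # training projections
    nn = np.argsort(np.linalg.norm(Z - z, axis=1))[:k]
    return (Y[nn].mean(axis=0) >= 0.5).astype(float)

print(predict(rng.normal(size=d)))
```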