no code implementations • 27 Mar 2025 • YuHan Liu, Yixiong Zou, Yuhua Li, Ruixuan Li
Based on this phenomenon and interpretation, we further propose a method comprising two plug-and-play modules: one flattens the loss landscapes of low-level features during source-domain training as a novel sharpness-aware minimization method, and the other directly supplements target-domain information to the model during target-domain testing through low-level-feature-based calibration.
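Since the abstract only names the technique, here is a minimal sketch of a generic sharpness-aware minimization (SAM) step for context; the paper's variant targeting low-level features specifically is not reproduced, and `rho`, the model, and the loss function are illustrative assumptions.

```python
import torch

def sam_step(model, loss_fn, x, y, optimizer, rho=0.05):
    # First forward/backward: gradient at the current weights.
    loss_fn(model(x), y).backward()
    params = [p for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([p.grad.norm() for p in params]))
    # Ascend to a nearby "worst-case" point: eps = rho * g / ||g||.
    with torch.no_grad():
        eps = [rho * p.grad / (grad_norm + 1e-12) for p in params]
        for p, e in zip(params, eps):
            p.add_(e)
    optimizer.zero_grad()
    # Second forward/backward: gradient at the perturbed weights.
    loss_fn(model(x), y).backward()
    # Restore the original weights, then step with the perturbed gradient.
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
```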
1 code implementation • 8 Mar 2025 • Wei Liu, Zhiying Deng, Zhongyu Niu, Jun Wang, Haozhao Wang, Zhigang Zeng, Ruixuan Li
If an input is fully utilized by the network, it generally matches these directions (e.g., a portion of a hypersphere), resulting in a representation with a high norm.
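As a toy illustration of the norm-based reading above (not the paper's actual procedure), one can score how fully an input is utilized by the norm of its representation; the encoder and sizes below are assumptions.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
x = torch.randn(4, 16)                 # a batch of hypothetical inputs
with torch.no_grad():
    rep = encoder(x)
    usage_score = rep.norm(dim=-1)     # higher norm ~ input better aligned
print(usage_score)                     # one score per input
```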
1 code implementation • 5 Mar 2025 • Yixin Su, Wei Jiang, Fangquan Lin, Cheng Yang, Sarah M. Erfani, Junhao Gan, Yunxiang Zhao, Ruixuan Li, Rui Zhang
In recommender systems, the patterns of user behaviors (e.g., purchase, click) may vary greatly in different contexts (e.g., time and location).
no code implementations • 13 Feb 2025 • Xuzhao Geng, Haozhao Wang, Jun Wang, Wei Liu, Ruixuan Li
Retrieval-augmented generation (RAG) is a key technique for leveraging external knowledge and reducing hallucinations in large language models (LLMs).
no code implementations • 10 Feb 2025 • Longtao Xiao, Haozhao Wang, Cheng Wang, Linfei Ji, Yifan Wang, Jieming Zhu, Zhenhua Dong, Rui Zhang, Ruixuan Li
In the second stage, we propose an in-modality knowledge distillation task, designed to effectively capture and integrate knowledge from both semantic and collaborative modalities.
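For context, a minimal knowledge-distillation loss of the usual temperature-softened form is sketched below; the paper's in-modality distillation between semantic and collaborative modalities is only loosely echoed, and the temperature `T` is an assumption.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=2.0):
    # Soften both distributions with temperature T, then match with KL.
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```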
no code implementations • 15 Jan 2025 • Yichen Li, Yuying Wang, Jiahua Dong, Haozhao Wang, Yining Qi, Rui Zhang, Ruixuan Li
We revisit this problem with a large-scale benchmark and analyze the performance of state-of-the-art FCL approaches under different resource-constrained settings.
no code implementations • 10 Jan 2025 • You Li, Heyu Huang, Chi Chen, Kaiyu Huang, Chao Huang, Zonghao Guo, Zhiyuan Liu, Jinan Xu, Yuhua Li, Ruixuan Li, Maosong Sun
The recent advancement of Multimodal Large Language Models (MLLMs) has significantly improved their fine-grained perception of single images and general comprehension across multiple images.
no code implementations • 26 Dec 2024 • Ran Ma, Yixiong Zou, Yuhua Li, Ruixuan Li
We find that MAE tends to focus on low-level domain information when reconstructing pixels, and that changing the reconstruction target to token features can mitigate this problem.
no code implementations • 24 Dec 2024 • Tianzhe Xiao, Yichen Li, Yining Qi, Haozhao Wang, Ruixuan Li
Recent studies have shown that federated learning (FL) is vulnerable to gradient inversion attacks (GIA), which can recover private training data from shared gradients.
no code implementations • 18 Dec 2024 • Yichen Li, Yuying Wang, Tianzhe Xiao, Haozhao Wang, Yining Qi, Ruixuan Li
Specifically, we first apply traditional regularization techniques to CFL and observe that existing regularization techniques, especially synaptic intelligence, can achieve promising results under homogeneous data distribution but fail when the data is heterogeneous.
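For readers unfamiliar with this family of regularizers, below is a generic importance-weighted quadratic penalty in the style of synaptic intelligence; the estimation of the importance weights `omega` is omitted, and all names are illustrative assumptions rather than the paper's implementation.

```python
def si_penalty(model, old_params, omega, lam=1.0):
    # old_params / omega: dicts of previous-task weights and importance
    # estimates, keyed by parameter name (estimation of omega omitted).
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (omega[name] * (p - old_params[name]) ** 2).sum()
    return lam * penalty  # added to the new task's loss
```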
no code implementations • 18 Dec 2024 • Yichen Li, Haozhao Wang, Wenchao Xu, Tianzhe Xiao, Hong Liu, Minzhu Tu, Yuying Wang, Xin Yang, Rui Zhang, Shui Yu, Song Guo, Ruixuan Li
To achieve high reliability and scalability when deploying this paradigm in distributed systems, it is essential to overcome challenges stemming from both the spatial and temporal dimensions, which manifest as distribution shifts, catastrophic forgetting, heterogeneity, and privacy issues.
1 code implementation • 29 Oct 2024 • Jintao Tong, Yixiong Zou, Yuhua Li, Ruixuan Li
Cross-domain few-shot segmentation (CD-FSS) is proposed to first pre-train the model on a large-scale source-domain dataset, and then transfer the model to data-scarce target-domain datasets for pixel-level segmentation.
1 code implementation • 8 Oct 2024 • Wei Liu, Zhiying Deng, Zhongyu Niu, Jun Wang, Haozhao Wang, Yuankai Zhang, Ruixuan Li
In the optimization objectives of these methods, spurious features are still distinguished from plain noise, which hinders the discovery of causal rationales.
1 code implementation • 30 Sep 2024 • Shiwei Li, Zhuoqi Hu, Xing Tang, Haozhao Wang, Shijie Xu, Weihong Luo, Yuhua Li, Xiuqiang He, Ruixuan Li
Specifically, to reduce the size of the search space, we first group features by frequency and then search precision for each feature group.
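A hedged sketch of the grouping step follows: bucket features by occurrence frequency and assign one candidate bit-width per bucket instead of per feature. The group count and bit-widths are illustrative assumptions, not the paper's search procedure.

```python
from collections import Counter

def group_by_frequency(feature_ids, n_groups=4):
    # Rank features by how often they occur, then split into equal buckets.
    counts = Counter(feature_ids)
    ranked = sorted(counts, key=counts.get, reverse=True)
    bucket = max(1, len(ranked) // n_groups)
    return {f: min(i // bucket, n_groups - 1) for i, f in enumerate(ranked)}

groups = group_by_frequency([3, 3, 3, 7, 7, 9, 1])  # group 0 = most frequent
candidate_bits = [16, 8, 4, 2]  # one bit-width searched per group (illustrative)
```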
1 code implementation • 26 Aug 2024 • Yang Qiu, Wei Liu, Jun Wang, Ruixuan Li
Due to the dimensionality reduction of features in the auto-encoder's latent space, it becomes easier to extract the causal features leading to the model's output, which can then be employed to generate explanations.
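A toy auto-encoder makes the setting concrete: features are compressed into a low-dimensional latent code, within which causal features are then sought. The sizes below and the extraction step itself are assumptions.

```python
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, d_in=128, d_latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_latent), nn.ReLU())
        self.dec = nn.Linear(d_latent, d_in)

    def forward(self, x):
        z = self.enc(x)          # low-dimensional latent features
        return self.dec(z), z    # reconstruction plus latent code

x = torch.randn(4, 128)
x_hat, z = AE()(x)               # causal features are sought within `z`
```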
1 code implementation • 23 Aug 2024 • Zhenyu Zhang, Guangyao Chen, Yixiong Zou, Yuhua Li, Ruixuan Li
Few-shot open-set recognition (FSOR) is a challenging task that requires a model to recognize known classes and identify unknown classes with limited labeled data.
1 code implementation • 23 Aug 2024 • Zhenyu Zhang, Guangyao Chen, Yixiong Zou, Zhimeng Huang, Yuhua Li, Ruixuan Li
Humans exhibit a remarkable ability to learn quickly from a limited number of labeled samples, a capability that starkly contrasts with that of current machine learning systems.
no code implementations • 20 Aug 2024 • Yuankai Zhang, Lingxiao Kong, Haozhao Wang, Ruixuan Li, Jun Wang, Yuhua Li, Wei Liu
Based on this, we make a series of recommendations for improving rationalization models in terms of explanation.
1 code implementation • 6 Aug 2024 • Shiwei Li, Yingyi Cheng, Haozhao Wang, Xing Tang, Shijie Xu, Weihong Luo, Yuhua Li, Dugang Liu, Xiuqiang He, Ruixuan Li
For this purpose, we propose Federated Masked Random Noise (FedMRN), a novel framework that enables clients to learn a 1-bit mask for each model parameter and apply masked random noise (i.e., the Hadamard product of random noise and masks) to represent model updates.
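A minimal sketch of the masked-random-noise idea: the noise can be regenerated from a shared seed, so only the 1-bit mask needs to be communicated. The mask-learning procedure from the paper is not reproduced; all names are illustrative.

```python
import torch

def masked_update(shape, seed, mask):
    # The server can regenerate the same noise from the shared seed,
    # so only the 1-bit mask has to travel over the network.
    g = torch.Generator().manual_seed(seed)
    noise = torch.randn(shape, generator=g)
    return mask.float() * noise          # Hadamard product of mask and noise

mask = torch.randint(0, 2, (10,))        # stand-in for the learned 1-bit mask
update = masked_update((10,), seed=42, mask=mask)
```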
1 code implementation • 6 Aug 2024 • Shiwei Li, Wenchao Xu, Haozhao Wang, Xing Tang, Yining Qi, Shijie Xu, Weihong Luo, Yuhua Li, Xiuqiang He, Ruixuan Li
To this end, we propose Federated Binarization-Aware Training (FedBAT), a novel framework that directly learns binary model updates during the local training process, thus inherently reducing the approximation errors.
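The binarization-aware idea can be sketched with a straight-through estimator (STE): updates are binarized in the forward pass while gradients flow through unchanged. The per-tensor scale `alpha` and the surrounding loop are assumptions, not FedBAT's exact algorithm.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, delta):
        return torch.sign(delta)   # forward: binarize the update

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out            # backward: pass gradients straight through

delta = torch.randn(5, requires_grad=True)  # latent real-valued update
alpha = delta.abs().mean().detach()         # per-tensor scale (an assumption)
binary_update = alpha * BinarizeSTE.apply(delta)
binary_update.sum().backward()              # gradients still reach `delta`
```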
no code implementations • 5 Aug 2024 • Shiwei Li, Huifeng Guo, Xing Tang, Ruiming Tang, Lu Hou, Ruixuan Li, Rui Zhang
In this survey, we provide a comprehensive review of embedding compression approaches in recommender systems.
no code implementations • 6 Jul 2024 • Yichen Li, Wenchao Xu, Haozhao Wang, Ruixuan Li, Yining Qi, Jingcai Guo
Then, based on these correlations, the client can choose to train the new task from either a fresh initial model or a previous model with similar knowledge, while simultaneously migrating knowledge from previous tasks.
1 code implementation • 27 May 2024 • Yixiong Zou, Shanghang Zhang, Haichen Zhou, Yuhua Li, Ruixuan Li
Few-shot class-incremental learning (FSCIL) is proposed to continually learn from novel classes with only a few samples after the (pre-)training on base classes with sufficient data.
class-incremental learning
Few-Shot Class-Incremental Learning
no code implementations • 8 May 2024 • Haichen Zhou, Yixiong Zou, Ruixuan Li, Yuhua Li, Kui Xiao
We first interpret the confusion as the collision between the novel-class and the base-class region in the feature space.
class-incremental learning
Few-Shot Class-Incremental Learning
1 code implementation • 24 Mar 2024 • Ziwen Zhao, Yixin Su, Yuhua Li, Yixiong Zou, Ruixuan Li, Rui Zhang
As the ultimate goal of GFMs is to learn generalized graph knowledge, we provide a comprehensive survey of self-supervised GFMs from a novel knowledge-based perspective.
no code implementations • CVPR 2024 • Yichen Li, Qunwei Li, Haozhao Wang, Ruixuan Li, Wenliang Zhong, Guannan Zhang
Then, the client trains the local model with both the cached samples and the samples from the new task.
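A minimal replay sketch of that training step: mix a few cached samples from earlier tasks into each new-task batch. The buffer capacity and replacement rule are simplifying assumptions.

```python
import random

class ReplayBuffer:
    def __init__(self, capacity=200):
        self.capacity, self.data = capacity, []

    def add(self, sample):
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:  # random replacement once full (a deliberate simplification)
            self.data[random.randrange(self.capacity)] = sample

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

# per step: batch = new_task_batch + buffer.sample(k), then train as usual
```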
1 code implementation • CVPR 2024 • Yixiong Zou, Yicong Liu, Yiman Hu, Yuhua Li, Ruixuan Li
To enhance the transferability and facilitate fine-tuning, we introduce a simple yet effective approach to achieve long-range flattening of the minima in the loss landscape.
1 code implementation • 6 Feb 2024 • Ziwen Zhao, Yuhua Li, Yixiong Zou, Jiliang Tang, Ruixuan Li
Inspired by these understandings, we explore non-discrete edge masks, which are sampled from a continuous and dispersive probability distribution instead of the discrete Bernoulli distribution.
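To make the contrast concrete, here is a hedged sketch that draws soft edge-mask values from a continuous, dispersive distribution (a Beta is used as an illustrative stand-in for the paper's choice) instead of sampling 0/1 masks from a Bernoulli.

```python
import torch

def soft_edge_mask(num_edges, a=0.5, b=0.5):
    # Beta(0.5, 0.5) is dispersive on (0, 1): mass near both endpoints,
    # yet every edge keeps a continuous (non-binary) weight.
    return torch.distributions.Beta(a, b).sample((num_edges,))

edge_weight = soft_edge_mask(1000)  # multiplied into message passing
```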
no code implementations • 21 Dec 2023 • Jie Han, Yixiong Zou, Haozhao Wang, Jun Wang, Wei Liu, Yao Wu, Tao Zhang, Ruixuan Li
Therefore, current works first train a model on source domains with sufficient labeled data, and then transfer the model to target domains where labeled data is scarce.
1 code implementation • 7 Dec 2023 • Wei Liu, Haozhao Wang, Jun Wang, Zhiying Deng, Yuankai Zhang, Cheng Wang, Ruixuan Li
Rationalization empowers deep learning models with self-explaining capabilities through a cooperative game, where a generator selects a semantically consistent subset of the input as a rationale, and a subsequent predictor makes predictions based on the selected rationale.
1 code implementation • NeurIPS 2023 • Wei Liu, Jun Wang, Haozhao Wang, Ruixuan Li, Zhiying Deng, Yuankai Zhang, Yang Qiu
Instead of attempting to rectify the issues of the MMI criterion, we propose a novel criterion to uncover the causal rationale, termed the Minimum Conditional Dependence (MCD) criterion, which is grounded on our finding that the non-causal features and the target label are d-separated by the causal rationale.
1 code implementation • 23 May 2023 • Wei Liu, Jun Wang, Haozhao Wang, Ruixuan Li, Yang Qiu, Yuankai Zhang, Jie Han, Yixiong Zou
However, such a cooperative game may incur the degeneration problem, where the predictor overfits to the uninformative pieces generated by a not-yet-well-trained generator, which in turn leads the generator to converge to a sub-optimal model that tends to select senseless pieces.
1 code implementation • 8 May 2023 • Wei Liu, Haozhao Wang, Jun Wang, Ruixuan Li, Xinyang Li, Yuankai Zhang, Yang Qiu
Rationalization employs a generator and a predictor to construct a self-explaining NLP model, in which the generator selects a subset of human-intelligible pieces of the input text and passes it to the following predictor.
1 code implementation • 8 May 2023 • Han Chen, Ziwen Zhao, Yuhua Li, Yixiong Zou, Ruixuan Li, Rui Zhang
Graph Contrastive Learning (GCL) is an effective way to learn generalized graph representations in a self-supervised manner, and has grown rapidly in recent years.
no code implementations • 26 Apr 2023 • Meixuan Qiao, Jun Wang, Junfu Xiang, Qiyu Hou, Ruixuan Li
Accurately extracting structured data from structure diagrams in financial announcements is of great practical importance for building financial knowledge graphs and further improving the efficiency of various financial applications.
no code implementations • CVPR 2023 • Haozhao Wang, Yichen Li, Wenchao Xu, Ruixuan Li, Yufeng Zhan, Zhigang Zeng
In this paper, we propose a new perspective that treats the local data in each client as a specific domain, and design a novel domain-knowledge-aware federated distillation method, dubbed DaFKD, which can discern the importance of each model to a given distillation sample and is thus able to optimize the ensemble of soft predictions from diverse models.
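The per-sample weighting idea can be sketched as an importance-weighted ensemble of client soft predictions; how the importance scores are produced (DaFKD's discriminative component) is omitted, and the function below is a simplification under stated assumptions.

```python
import torch
import torch.nn.functional as F

def weighted_soft_labels(logits_list, importance):
    # importance: one relevance score per client model for this sample batch.
    w = F.softmax(importance, dim=0)
    probs = torch.stack([F.softmax(l, dim=-1) for l in logits_list])
    return (w.view(-1, 1, 1) * probs).sum(dim=0)   # ensemble soft targets
```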
no code implementations • 12 Dec 2022 • Shiwei Li, Huifeng Guo, Lu Hou, Wei Zhang, Xing Tang, Ruiming Tang, Rui Zhang, Ruixuan Li
To this end, we formulate a novel quantization training paradigm to compress the embeddings from the training stage, termed low-precision training (LPT).
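A common way to realize such low-precision training is fake quantization with a straight-through estimator, sketched below; the bit-width and per-tensor scale are assumptions, not necessarily LPT's exact scheme.

```python
import torch

def fake_quant(emb, bits=8):
    qmax = 2 ** (bits - 1) - 1
    scale = emb.detach().abs().max().clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(emb / scale), -qmax - 1, qmax)
    # Straight-through: forward uses quantized values, backward is identity.
    return emb + (q * scale - emb).detach()

emb = torch.randn(100, 16, requires_grad=True)
out = fake_quant(emb)   # use `out` in the forward pass during training
```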
1 code implementation • 10 Oct 2022 • Yixiong Zou, Shanghang Zhang, Yuhua Li, Ruixuan Li
Few-shot class-incremental learning (FSCIL) is designed to incrementally recognize novel classes with only a few training samples after the (pre-)training on base classes with sufficient samples, which focuses on both base-class performance and novel-class generalization.
class-incremental learning
Few-Shot Class-Incremental Learning
1 code implementation • 17 Sep 2022 • Wei Liu, Haozhao Wang, Jun Wang, Ruixuan Li, Chao Yue, Yuankai Zhang
Conventional works generally employ a two-phase model in which a generator selects the most important pieces, followed by a predictor that makes predictions based on the selected pieces.
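A skeletal generator-predictor pipeline makes the two-phase structure concrete; the module sizes, soft sigmoid selection, and mean pooling are assumptions used only for illustration.

```python
import torch
import torch.nn as nn

class Rationalizer(nn.Module):
    def __init__(self, vocab=1000, dim=64, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.generator = nn.Linear(dim, 1)        # per-token selection logit
        self.predictor = nn.Linear(dim, n_classes)

    def forward(self, tokens):                    # tokens: (B, T) long tensor
        h = self.emb(tokens)                      # (B, T, D)
        mask = torch.sigmoid(self.generator(h).squeeze(-1))  # soft rationale
        pooled = (mask.unsqueeze(-1) * h).mean(dim=1)  # predictor sees only
        return self.predictor(pooled), mask            # the selected pieces
```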
no code implementations • 31 Aug 2022 • Xiang Fang, Daizong Liu, Pan Zhou, Zichuan Xu, Ruixuan Li
To address this issue, in this paper, we propose a novel Hierarchical Local-Global Transformer (HLGT) to leverage this hierarchy information and model the interactions between different levels of granularity and different modalities for learning more fine-grained multi-modal representations.
no code implementations • 8 Dec 2021 • Wenbo Gou, Wen Shi, Jian Lou, Lijie Huang, Pan Zhou, Ruixuan Li
Natural language video localization (NLVL) is an important task in the vision-language understanding area, which calls for an in-depth understanding of not only computer vision and natural language side alone, but more importantly the interplay between both sides.
no code implementations • 22 Jan 2020 • Haozhao Wang, Zhihao Qu, Song Guo, Xin Gao, Ruixuan Li, Baoliu Ye
A major bottleneck in the performance of the distributed Stochastic Gradient Descent (SGD) algorithm for large-scale Federated Learning is the communication overhead of pushing local gradients and pulling the global model.
no code implementations • 21 Feb 2019 • Chengjie Li, Ruixuan Li, Haozhao Wang, Yuhua Li, Pan Zhou, Song Guo, Keqin Li
Distributed asynchronous offline training has received widespread attention in recent years because of its high performance on large-scale data and complex models.
no code implementations • 21 Jan 2019 • Jinrong Guo, Wantao Liu, Wang Wang, Qu Lu, Songlin Hu, Jizhong Han, Ruixuan Li
Typically, an ultra-deep neural network (UDNN) tends to yield a high-quality model, but its training process is usually resource-intensive and time-consuming.