Search Results for author: Ruixuan Li

Found 20 papers, 8 papers with code

A Survey on Self-Supervised Pre-Training of Graph Foundation Models: A Knowledge-Based Perspective

1 code implementation · 24 Mar 2024 · Ziwen Zhao, Yuhua Li, Yixiong Zou, Ruixuan Li, Rui Zhang

Graph self-supervised learning is now a go-to method for pre-training graph foundation models, including graph neural networks, graph transformers, and more recent large language model (LLM)-based graph models.

Language Modelling · Large Language Model · +1

Flatten Long-Range Loss Landscapes for Cross-Domain Few-Shot Learning

no code implementations · 1 Mar 2024 · Yixiong Zou, Yicong Liu, Yiman Hu, Yuhua Li, Ruixuan Li

To enhance the transferability and facilitate fine-tuning, we introduce a simple yet effective approach to achieve long-range flattening of the minima in the loss landscape.

cross-domain few-shot learning
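
The abstract describes the flattening goal only at a high level. As a rough, hypothetical sketch (not the paper's actual procedure), one generic way to flatten minima is to average the task loss over weight copies perturbed along random directions, with the perturbation radius controlling how "long-range" the flattening is:

```python
import torch

def flatness_regularized_loss(model, loss_fn, x, y, radius=0.05, n_samples=2):
    """Average the task loss over randomly perturbed copies of the weights.

    A generic sketch of loss-landscape flattening: a larger `radius` probes
    longer-range directions around the current minimum. Illustrative only,
    not the method proposed in the paper.
    """
    total_loss = loss_fn(model(x), y)
    params = [p for p in model.parameters() if p.requires_grad]
    for _ in range(n_samples):
        noises = [radius * torch.randn_like(p) for p in params]
        with torch.no_grad():
            for p, n in zip(params, noises):
                p.add_(n)                      # perturb the weights
        total_loss = total_loss + loss_fn(model(x), y)
        with torch.no_grad():
            for p, n in zip(params, noises):
                p.sub_(n)                      # restore the weights
    return total_loss / (n_samples + 1)
```

Averaging the loss over such perturbed copies pushes the optimizer toward wider basins, which is the usual rationale for flatness-seeking training.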

Masked Graph Autoencoder with Non-discrete Bandwidths

1 code implementation · 6 Feb 2024 · Ziwen Zhao, Yuhua Li, Yixiong Zou, Jiliang Tang, Ruixuan Li

Inspired by these understandings, we explore non-discrete edge masks, which are sampled from a continuous and dispersive probability distribution instead of the discrete Bernoulli distribution.

Blocking · Link Prediction · +2
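
To make the contrast concrete, here is a minimal sketch (hypothetical function names; the paper's actual bandwidth distribution may differ from the uniform one used here) of discrete Bernoulli edge masking versus continuous, non-discrete masks applied as edge weights:

```python
import torch

def bernoulli_edge_mask(num_edges, p=0.3):
    # Discrete masking: each edge is either fully dropped or fully kept.
    return torch.bernoulli(torch.full((num_edges,), 1.0 - p))

def nondiscrete_edge_mask(num_edges):
    # Non-discrete masking: sample continuous "bandwidths" in (0, 1) from a
    # dispersive distribution (uniform, as one example) and use them to
    # rescale edge weights instead of zeroing edges out.
    return torch.rand(num_edges)

edge_weight = torch.ones(10)
masked_discrete = edge_weight * bernoulli_edge_mask(10)
masked_continuous = edge_weight * nondiscrete_edge_mask(10)
```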

Decoupling Representation and Knowledge for Few-Shot Intent Classification and Slot Filling

no code implementations · 21 Dec 2023 · Jie Han, Yixiong Zou, Haozhao Wang, Jun Wang, Wei Liu, Yao Wu, Tao Zhang, Ruixuan Li

Therefore, current works first train a model on source domains with sufficiently labeled data, and then transfer the model to target domains where only scarce labeled data is available.

intent-classification · Intent Classification · +4

Enhancing the Rationale-Input Alignment for Self-explaining Rationalization

no code implementations · 7 Dec 2023 · Wei Liu, Haozhao Wang, Jun Wang, Zhiying Deng, Yuankai Zhang, Cheng Wang, Ruixuan Li

Rationalization empowers deep learning models with self-explaining capabilities through a cooperative game, where a generator selects a semantically consistent subset of the input as a rationale, and a subsequent predictor makes predictions based on the selected rationale.
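
For readers unfamiliar with this cooperative game, the generic select-then-predict scheme it builds on can be sketched as follows (a minimal illustration with hypothetical module sizes, using straight-through Gumbel-Softmax for differentiable token selection; this is the common framework, not the paper's alignment method itself):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RationaleGame(nn.Module):
    """Generic select-then-predict rationalization (illustrative sketch)."""
    def __init__(self, vocab, emb=64, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.generator = nn.Linear(emb, 2)     # per-token keep/drop logits
        self.predictor = nn.Sequential(nn.Linear(emb, emb), nn.ReLU(),
                                       nn.Linear(emb, classes))

    def forward(self, tokens):                 # tokens: (B, L)
        h = self.embed(tokens)                                     # (B, L, E)
        mask_logits = self.generator(h)                            # (B, L, 2)
        # Differentiable binary selection via straight-through Gumbel-Softmax.
        mask = F.gumbel_softmax(mask_logits, hard=True)[..., 1:]   # (B, L, 1)
        rationale = h * mask                   # zero out the dropped tokens
        return self.predictor(rationale.mean(dim=1)), mask
```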

D-Separation for Causal Self-Explanation

1 code implementation · NeurIPS 2023 · Wei Liu, Jun Wang, Haozhao Wang, Ruixuan Li, Zhiying Deng, Yuankai Zhang, Yang Qiu

Instead of attempting to rectify the issues of the MMI criterion, we propose a novel criterion to uncover the causal rationale, termed the Minimum Conditional Dependence (MCD) criterion, which is grounded on our finding that the non-causal features and the target label are d-separated by the causal rationale.
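
A heavily hedged sketch of one way such a criterion could be operationalized (assuming, as one plausible surrogate, that conditional dependence is measured by the divergence between predictions from the full input and from the rationale alone; the paper's exact estimator may differ):

```python
import torch.nn.functional as F

def mcd_surrogate(logits_full, logits_rationale):
    """If the rationale d-separates the non-causal features from the label,
    predictions conditioned on the rationale alone should match predictions
    conditioned on the full input, so their KL divergence is penalized.
    (Illustrative surrogate only.)"""
    p_full = F.softmax(logits_full.detach(), dim=-1)
    log_p_rat = F.log_softmax(logits_rationale, dim=-1)
    return F.kl_div(log_p_rat, p_full, reduction="batchmean")
```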

Decoupled Rationalization with Asymmetric Learning Rates: A Flexible Lipschitz Restraint

1 code implementation · 23 May 2023 · Wei Liu, Jun Wang, Haozhao Wang, Ruixuan Li, Yang Qiu, Yuankai Zhang, Jie Han, Yixiong Zou

However, such a cooperative game may incur the degeneration problem, where the predictor overfits to the uninformative pieces generated by a not-yet-well-trained generator, which in turn leads the generator to converge to a sub-optimal model that tends to select senseless pieces.
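
The asymmetric-learning-rate idea in the title can be sketched with two parameter groups, restraining the predictor's step size so it cannot race ahead of the generator (illustrative stand-in modules and an assumed 10:1 ratio; the paper derives the admissible restraint from a Lipschitz bound):

```python
import torch
import torch.nn as nn

generator = nn.Linear(64, 64)   # stand-in for the rationale generator
predictor = nn.Linear(64, 2)    # stand-in for the predictor

# Asymmetric learning rates: restrain the predictor so it cannot overfit
# to a not-yet-well-trained generator's uninformative selections.
optimizer = torch.optim.Adam([
    {"params": generator.parameters(), "lr": 1e-3},
    {"params": predictor.parameters(), "lr": 1e-4},
])
```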

MGR: Multi-generator Based Rationalization

1 code implementation · 8 May 2023 · Wei Liu, Haozhao Wang, Jun Wang, Ruixuan Li, Xinyang Li, Yuankai Zhang, Yang Qiu

Rationalization employs a generator and a predictor to construct a self-explaining NLP model, in which the generator selects a subset of human-intelligible pieces of the input text and passes it to the following predictor.
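
A minimal sketch of the multi-generator variant named in the title (aggregation by averaging the generators' soft masks is an assumption here; MGR's actual training and aggregation may differ):

```python
import torch
import torch.nn as nn

class MultiGeneratorRationalizer(nn.Module):
    """Several generators each propose a soft token mask; the predictor
    sees the averaged selection. (Illustrative sketch.)"""
    def __init__(self, emb=64, classes=2, n_generators=3):
        super().__init__()
        self.generators = nn.ModuleList(
            nn.Linear(emb, 1) for _ in range(n_generators))
        self.predictor = nn.Linear(emb, classes)

    def forward(self, h):                    # h: (B, L, E) token embeddings
        masks = [torch.sigmoid(g(h)) for g in self.generators]  # (B, L, 1) each
        mask = torch.stack(masks).mean(dim=0)
        return self.predictor((h * mask).mean(dim=1))
```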

CSGCL: Community-Strength-Enhanced Graph Contrastive Learning

1 code implementation · 8 May 2023 · Han Chen, Ziwen Zhao, Yuhua Li, Yixiong Zou, Ruixuan Li, Rui Zhang

Graph Contrastive Learning (GCL) is an effective way to learn generalized graph representations in a self-supervised manner, and has grown rapidly in recent years.

Attribute · Contrastive Learning · +3

Structure Diagram Recognition in Financial Announcements

no code implementations · 26 Apr 2023 · Meixuan Qiao, Jun Wang, Junfu Xiang, Qiyu Hou, Ruixuan Li

Accurately extracting structured data from structure diagrams in financial announcements is of great practical importance for building financial knowledge graphs and further improving the efficiency of various financial applications.

Knowledge Graphs

DaFKD: Domain-Aware Federated Knowledge Distillation

no code implementations · CVPR 2023 · Haozhao Wang, Yichen Li, Wenchao Xu, Ruixuan Li, Yufeng Zhan, Zhigang Zeng

In this paper, we propose a new perspective that treats the local data in each client as a specific domain, and design a novel domain-knowledge-aware federated distillation method, dubbed DaFKD, which can discern the importance of each model to each distillation sample and is thus able to optimize the ensemble of soft predictions from diverse models.

Knowledge Distillation
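
The per-sample importance weighting described above can be sketched as follows (a hypothetical sketch: `domain_scores` stands in for per-sample domain-relevance scores, e.g. from per-client discriminators, which is an assumption rather than the paper's exact design):

```python
import torch
import torch.nn.functional as F

def domain_aware_distill_loss(student_logits, client_logits, domain_scores, T=2.0):
    """Distill toward an importance-weighted ensemble of client predictions.

    client_logits: (K, B, C) soft predictions from K client models.
    domain_scores: (K, B) per-sample relevance of each client's domain.
    """
    w = F.softmax(domain_scores, dim=0).unsqueeze(-1)            # (K, B, 1)
    teacher = (w * F.softmax(client_logits / T, dim=-1)).sum(0)  # (B, C)
    return F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    teacher, reduction="batchmean") * T * T
```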

Adaptive Low-Precision Training for Embeddings in Click-Through Rate Prediction

no code implementations · 12 Dec 2022 · Shiwei Li, Huifeng Guo, Lu Hou, Wei Zhang, Xing Tang, Ruiming Tang, Rui Zhang, Ruixuan Li

To this end, we formulate a novel quantization training paradigm to compress the embeddings from the training stage, termed low-precision training (LPT).

Click-Through Rate Prediction · Quantization
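
A sketch of the general low-precision-training idea (fake quantization of the embedding table with a straight-through estimator; the "adaptive" part of the paper, which learns the quantization step size, is omitted, and the uniform step below is an assumption):

```python
import torch
import torch.nn as nn

class LowPrecisionEmbedding(nn.Module):
    """Fake-quantize embedding weights during training (illustrative)."""
    def __init__(self, num_embeddings, dim, bits=8):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_embeddings, dim) * 0.01)
        self.qmax = 2 ** (bits - 1) - 1

    def forward(self, idx):
        step = self.weight.abs().max() / self.qmax       # uniform step size
        q = torch.round(self.weight / step).clamp(-self.qmax, self.qmax) * step
        # Straight-through estimator: forward uses quantized weights,
        # backward flows through the full-precision weights.
        w = self.weight + (q - self.weight).detach()
        return w[idx]
```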

Margin-Based Few-Shot Class-Incremental Learning with Class-Level Overfitting Mitigation

1 code implementation · 10 Oct 2022 · Yixiong Zou, Shanghang Zhang, Yuhua Li, Ruixuan Li

Few-shot class-incremental learning (FSCIL) is designed to incrementally recognize novel classes with only a few training samples after the (pre-)training on base classes with sufficient samples, which focuses on both base-class performance and novel-class generalization.

Few-Shot Class-Incremental Learning · Incremental Learning
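
The margin-based idea in the title can be illustrated with a cosine classifier that subtracts an additive margin from the true-class similarity during training (a common construction; the paper's class-level-overfitting analysis and its choice of margin are beyond this hypothetical sketch):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MarginCosineClassifier(nn.Module):
    """Cosine classifier with an additive margin on the true class, one
    common way to trade base-class fit for novel-class generalization."""
    def __init__(self, dim, n_classes, scale=16.0, margin=0.1):
        super().__init__()
        self.proto = nn.Parameter(torch.randn(n_classes, dim))
        self.scale, self.margin = scale, margin

    def forward(self, feat, label=None):     # feat: (B, D)
        cos = F.normalize(feat) @ F.normalize(self.proto).t()   # (B, C)
        if label is not None:                # apply the margin in training
            onehot = F.one_hot(label, cos.size(1)).to(cos.dtype)
            cos = cos - self.margin * onehot
        return self.scale * cos
```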

FR: Folded Rationalization with a Unified Encoder

1 code implementation · 17 Sep 2022 · Wei Liu, Haozhao Wang, Jun Wang, Ruixuan Li, Chao Yue, Yuankai Zhang

Conventional works generally employ a two-phase model in which a generator selects the most important pieces, followed by a predictor that makes predictions based on the selected pieces.
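
One plausible reading of the "folded" design in the title, sketched minimally (hypothetical layer choices): a single unified encoder serves both the selection head and the prediction head, in place of the two separate encoders of the conventional two-phase model:

```python
import torch
import torch.nn as nn

class FoldedRationalizer(nn.Module):
    """One shared encoder for both the selector and the predictor."""
    def __init__(self, vocab, emb=64, classes=2):
        super().__init__()
        self.encoder = nn.Embedding(vocab, emb)    # shared by both phases
        self.select_head = nn.Linear(emb, 1)       # scores each token
        self.predict_head = nn.Linear(emb, classes)

    def forward(self, tokens):                     # tokens: (B, L)
        h = self.encoder(tokens)                   # (B, L, E)
        mask = torch.sigmoid(self.select_head(h))  # soft rationale mask
        return self.predict_head((h * mask).mean(dim=1)), mask
```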

Hierarchical Local-Global Transformer for Temporal Sentence Grounding

no code implementations · 31 Aug 2022 · Xiang Fang, Daizong Liu, Pan Zhou, Zichuan Xu, Ruixuan Li

To address this issue, in this paper, we propose a novel Hierarchical Local-Global Transformer (HLGT) to leverage this hierarchy information and model the interactions between different levels of granularity and different modalities for learning more fine-grained multi-modal representations.

Sentence · Temporal Sentence Grounding
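
As a loose illustration only (HLGT itself is multi-modal and more involved), combining window-local attention with sequence-global attention in one block might look like:

```python
import torch
import torch.nn as nn

class LocalGlobalBlock(nn.Module):
    """Sketch of combining window-local and global self-attention, one
    plausible reading of a local-global transformer. (Illustrative only.)"""
    def __init__(self, dim=64, heads=4, window=8):
        super().__init__()
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.window = window

    def forward(self, x):            # x: (B, L, D); L divisible by window
        B, L, D = x.shape
        w = x.reshape(B * L // self.window, self.window, D)
        local, _ = self.local_attn(w, w, w)     # attention within each window
        local = local.reshape(B, L, D)
        glob, _ = self.global_attn(x, x, x)     # attention across the sequence
        return x + local + glob                 # residual fusion of both levels
```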

SNEAK: Synonymous Sentences-Aware Adversarial Attack on Natural Language Video Localization

no code implementations · 8 Dec 2021 · Wenbo Gou, Wen Shi, Jian Lou, Lijie Huang, Pan Zhou, Ruixuan Li

Natural language video localization (NLVL) is an important task in the vision-language understanding area, which calls for an in-depth understanding of not only the computer vision and natural language sides alone, but more importantly the interplay between them.

Adversarial Attack · Adversarial Robustness

Intermittent Pulling with Local Compensation for Communication-Efficient Federated Learning

no code implementations · 22 Jan 2020 · Haozhao Wang, Zhihao Qu, Song Guo, Xin Gao, Ruixuan Li, Baoliu Ye

A major bottleneck on the performance of the distributed Stochastic Gradient Descent (SGD) algorithm for large-scale Federated Learning is the communication overhead of pushing local gradients and pulling the global model.

BIG-bench Machine Learning · Federated Learning
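
A hypothetical single-worker sketch of the intermittent-pulling idea (pull the global model only every `tau` steps and keep applying local updates in between; the names and the compensation scheme here are assumptions, not the paper's exact algorithm):

```python
import torch

def worker_step(local_model, grad_fn, global_params, step, tau=10, lr=0.1):
    """One worker iteration of intermittent pulling (illustrative).

    The worker always applies its own local gradient (local compensation);
    it synchronizes with the global model only every `tau` steps, cutting
    pull-communication by roughly a factor of tau.
    """
    if step % tau == 0 and global_params is not None:   # intermittent pull
        with torch.no_grad():
            for p, g in zip(local_model.parameters(), global_params):
                p.copy_(g)
    loss = grad_fn(local_model)     # grad_fn returns the local training loss
    loss.backward()
    with torch.no_grad():
        for p in local_model.parameters():
            if p.grad is not None:
                p.add_(p.grad, alpha=-lr)               # local SGD update
                p.grad = None
    return loss.item()
```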

Gradient Scheduling with Global Momentum for Non-IID Data Distributed Asynchronous Training

no code implementations · 21 Feb 2019 · Chengjie Li, Ruixuan Li, Haozhao Wang, Yuhua Li, Pan Zhou, Song Guo, Keqin Li

Distributed asynchronous offline training has received widespread attention in recent years because of its high performance on large-scale data and complex models.

Scheduling

AccUDNN: A GPU Memory Efficient Accelerator for Training Ultra-deep Neural Networks

no code implementations · 21 Jan 2019 · Jinrong Guo, Wantao Liu, Wang Wang, Qu Lu, Songlin Hu, Jizhong Han, Ruixuan Li

Typically, an ultra-deep neural network (UDNN) tends to yield a high-quality model, but its training process is usually resource-intensive and time-consuming.
