Search Results for author: Ming Gao

Found 44 papers, 28 papers with code

Federated Learning via Consensus Mechanism on Heterogeneous Data: A New Perspective on Convergence

1 code implementation21 Nov 2023 Shu Zheng, Tiandi Ye, Xiang Li, Ming Gao

We theoretically show that the consensus mechanism can guarantee the convergence of the global objective.

Fairness Federated Learning

Prioritized Propagation in Graph Neural Networks

no code implementations6 Nov 2023 Yao Cheng, Minjie Chen, Xiang Li, Caihua Shan, Ming Gao

Specifically, the framework consists of three components: a backbone GNN model, a propagation controller to determine the optimal propagation steps for nodes, and a weight controller to compute the priority scores for nodes.
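The idea of per-node propagation depth can be sketched in a few lines. This is a hypothetical simplification: in the paper the step counts come from a learned propagation controller, whereas here they are simply given as input.

```python
import numpy as np

def prioritized_propagation(A, X, steps):
    """Node-wise propagation depth: node i stops updating after steps[i]
    rounds of neighborhood averaging. A toy stand-in for the paper's
    propagation controller, which learns these steps instead."""
    deg = A.sum(axis=1, keepdims=True)
    P = A / np.maximum(deg, 1)              # row-normalized adjacency
    H = X.astype(float).copy()
    steps = np.asarray(steps)
    for t in range(int(steps.max())):
        H_new = P @ H
        active = steps > t                  # nodes still propagating
        H[active] = H_new[active]
    return H

# path graph 0-1-2; node 0 propagates 0 steps, node 2 propagates 2 steps
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = prioritized_propagation(A, np.eye(3), steps=[0, 1, 2])
```

Node 0 keeps its original features untouched, while node 2's representation mixes in two hops of neighborhood information.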

Uncertainty-aware Parameter-Efficient Self-training for Semi-supervised Language Understanding

1 code implementation19 Oct 2023 Jianing Wang, Qiushi Sun, Nuo Chen, Chengyu Wang, Jun Huang, Ming Gao, Xiang Li

The recent success of large pre-trained language models (PLMs) heavily hinges on massive labeled data, which typically produces inferior performance in low-resource scenarios.

DialCoT Meets PPO: Decomposing and Exploring Reasoning Paths in Smaller Language Models

1 code implementation8 Oct 2023 Chengcheng Han, Xiaowei Du, Che Zhang, Yixin Lian, Xiang Li, Ming Gao, Baoyuan Wang

Chain-of-Thought (CoT) prompting has proven to be effective in enhancing the reasoning capabilities of Large Language Models (LLMs) with at least 100 billion parameters.

Arithmetic Reasoning

Boosting In-Context Learning with Factual Knowledge

no code implementations26 Sep 2023 Jianing Wang, Chengyu Wang, Chuanqi Tan, Jun Huang, Ming Gao

In-Context Learning (ICL) over Large language models (LLMs) aims at solving previously unseen tasks by conditioning on a few training examples, eliminating the need for parameter updates and achieving competitive performance.

Few-Shot Learning Question Answering +2

Exchanging-based Multimodal Fusion with Transformer

1 code implementation5 Sep 2023 Renyu Zhu, Chengcheng Han, Yong Qian, Qiushi Sun, Xiang Li, Ming Gao, Xuezhi Cao, Yunsen Xian

To solve these issues, in this paper, we propose a novel exchanging-based multimodal fusion model MuSE for text-vision fusion based on Transformer.

Image Captioning Multimodal Sentiment Analysis +2

Graph Self-Contrast Representation Learning

no code implementations5 Sep 2023 Minjie Chen, Yao Cheng, Ye Wang, Xiang Li, Ming Gao

Further, since the triplet loss only optimizes the relative distance between the anchor and its positive/negative samples, it is difficult to constrain the absolute distance between the anchor and the positive sample.

Contrastive Learning Graph Representation Learning +1
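The limitation noted in the abstract is easy to demonstrate: the standard triplet loss can be exactly zero even when the anchor and positive are far apart, as long as the negative is farther still. A minimal numeric illustration (not the authors' code):

```python
import numpy as np

def triplet_loss(a, p, n, margin=1.0):
    """Standard triplet loss: only the *relative* gap between the
    anchor-positive and anchor-negative distances is penalized."""
    d_ap = np.linalg.norm(a - p)
    d_an = np.linalg.norm(a - n)
    return max(0.0, d_ap - d_an + margin)

a = np.zeros(2)
p = np.array([10.0, 0.0])   # anchor-positive distance is large (10)
n = np.array([20.0, 0.0])   # but the negative is even farther (20)
loss = triplet_loss(a, p, n)  # max(0, 10 - 20 + 1) = 0
```

So nothing penalizes the large anchor-positive distance itself, which is the gap the self-contrast formulation aims to close.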

TransPrompt v2: A Transferable Prompting Framework for Cross-task Text Classification

no code implementations29 Aug 2023 Jianing Wang, Chengyu Wang, Cen Chen, Ming Gao, Jun Huang, Aoying Zhou

We propose TransPrompt v2, a novel transferable prompting framework for few-shot learning across similar or distant text classification tasks.

Few-Shot Learning Few-Shot Text Classification +2

You Can Backdoor Personalized Federated Learning

1 code implementation29 Jul 2023 Tiandi Ye, Cen Chen, Yinggui Wang, Xiang Li, Ming Gao

The resistance of pFL methods with parameter decoupling is attributed to the heterogeneous classifiers between malicious clients and benign counterparts.

Backdoor Attack Meta-Learning +1

UPFL: Unsupervised Personalized Federated Learning towards New Clients

no code implementations29 Jul 2023 Tiandi Ye, Cen Chen, Yinggui Wang, Xiang Li, Ming Gao

To address this challenge, we extend the adaptive risk minimization technique into the unsupervised personalized federated learning setting and propose our method, FedTTA.

Knowledge Distillation Personalized Federated Learning

Boosting Language Models Reasoning with Chain-of-Knowledge Prompting

no code implementations10 Jun 2023 Jianing Wang, Qiushi Sun, Nuo Chen, Xiang Li, Ming Gao

To mitigate this brittleness, we propose a novel Chain-of-Knowledge (CoK) prompting method, where we aim at eliciting LLMs to generate explicit pieces of knowledge evidence in the form of structured triples.

Arithmetic Reasoning
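A CoK-style prompt might look something like the following. The demonstration, triples, and template here are made up for illustration; the paper's actual prompt format may differ.

```python
def cok_prompt(question: str) -> str:
    """Build an illustrative Chain-of-Knowledge prompt: the demonstration
    shows the model emitting explicit knowledge triples before answering.
    Template and example are hypothetical, not the paper's exact ones."""
    demo = (
        "Question: Is the capital of France north of Madrid?\n"
        "Knowledge triples: (Paris, capital_of, France), "
        "(Paris, latitude, 48.9N), (Madrid, latitude, 40.4N)\n"
        "Answer: yes\n\n"
    )
    return demo + f"Question: {question}\nKnowledge triples:"

prompt = cok_prompt("Is the Nile longer than the Danube?")
```

The prompt ends at "Knowledge triples:", so the model is steered to state its evidence as triples before committing to an answer.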

CLIP3Dstyler: Language Guided 3D Arbitrary Neural Style Transfer

no code implementations25 May 2023 Ming Gao, Yanwu Xu, Yang Zhao, Tingbo Hou, Chenkai Zhao, Mingming Gong

In this paper, we propose a novel language-guided 3D arbitrary neural style transfer method (CLIP3Dstyler).

Style Transfer

TransCoder: Towards Unified Transferable Code Representation Learning Inspired by Human Skills

no code implementations23 May 2023 Qiushi Sun, Nuo Chen, Jianing Wang, Xiang Li, Ming Gao

To tackle the issue, in this paper, we present TransCoder, a unified Transferable fine-tuning strategy for Code representation learning.

Clone Detection Code Summarization +2

HugNLP: A Unified and Comprehensive Library for Natural Language Processing

1 code implementation28 Feb 2023 Jianing Wang, Nuo Chen, Qiushi Sun, Wenkang Huang, Chengyu Wang, Ming Gao

In this paper, we introduce HugNLP, a unified and comprehensive library for natural language processing (NLP) with the prevalent backend of HuggingFace Transformers, which is designed for NLP researchers to easily utilize off-the-shelf algorithms and develop novel methods with user-defined models and tasks in real-world scenarios.

GradMA: A Gradient-Memory-based Accelerated Federated Learning with Alleviated Catastrophic Forgetting

1 code implementation CVPR 2023 Kangyang Luo, Xiang Li, Yunshi Lan, Ming Gao

Federated Learning (FL) has emerged as a prominent machine learning area and received rapidly increasing research interest from the community.

Continual Learning Federated Learning +1

Uncertainty-aware Self-training for Low-resource Neural Sequence Labeling

no code implementations17 Feb 2023 Jianing Wang, Chengyu Wang, Jun Huang, Ming Gao, Aoying Zhou

Neural sequence labeling (NSL) aims at assigning labels for input language tokens, which covers a broad range of applications, such as named entity recognition (NER) and slot filling, etc.

named-entity-recognition Named Entity Recognition +3

Meta-Learning Triplet Network with Adaptive Margins for Few-Shot Named Entity Recognition

1 code implementation14 Feb 2023 Chengcheng Han, Renyu Zhu, Jun Kuang, FengJiao Chen, Xiang Li, Ming Gao, Xuezhi Cao, Wei Wu

We design an improved triplet network to map samples and prototype vectors into a low-dimensional space that is easier to classify, and propose an adaptive margin for each entity type.

few-shot-ner Few-shot NER +5

Meta-Learning Siamese Network for Few-Shot Text Classification

1 code implementation5 Feb 2023 Chengcheng Han, Yuhe Wang, Yingnan Fu, Xiang Li, Minghui Qiu, Ming Gao, Aoying Zhou

Few-shot learning has been used to tackle the problem of label scarcity in text classification, of which meta-learning-based methods have been shown to be effective, such as prototypical networks (PROTO).

Descriptive Few-Shot Learning +3

SeeGera: Self-supervised Semi-implicit Graph Variational Auto-encoders with Masking

no code implementations29 Jan 2023 Xiang Li, Tiandi Ye, Caihua Shan, Dongsheng Li, Ming Gao

In this paper, to comprehensively enhance the performance of generative graph SSL against other GCL models on both unsupervised and supervised learning tasks, we propose the SeeGera model, which is based on the family of self-supervised variational graph auto-encoder (VGAE).

Contrastive Learning Self-Supervised Learning +1

DDIT: Semantic Scene Completion via Deformable Deep Implicit Templates

no code implementations ICCV 2023 Haoang Li, Jinhu Dong, Binghui Wen, Ming Gao, Tianyu Huang, Yun-hui Liu, Daniel Cremers

It abstracts the shape prior of a category, and thus can provide constraints on the overall shape of an instance.

Knowledge Prompting in Pre-trained Language Model for Natural Language Understanding

1 code implementation16 Oct 2022 Jianing Wang, Wenkang Huang, Qiuhui Shi, Hongbin Wang, Minghui Qiu, Xiang Li, Ming Gao

In this paper, to address these problems, we introduce a seminal knowledge prompting paradigm and further propose a knowledge-prompting-based PLM framework KP-PLM.

Language Modelling Natural Language Understanding

Understanding Long Programming Languages with Structure-Aware Sparse Attention

1 code implementation27 May 2022 Tingting Liu, Chengyu Wang, Cen Chen, Ming Gao, Aoying Zhou

With top-$k$ sparse attention, the most crucial attention relation can be obtained with a lower computational cost.
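The top-$k$ sparse attention mentioned here can be sketched in a few lines of NumPy: keep only the $k$ largest attention logits per query and mask the rest before the softmax. This is an illustrative sketch, not the structure-aware model from the paper.

```python
import numpy as np

def topk_sparse_attention(scores, k):
    """Keep only the k largest attention logits per query row;
    mask the rest to -inf so they vanish under the softmax."""
    masked = np.full_like(scores, -np.inf)
    idx = np.argpartition(scores, -k, axis=-1)[:, -k:]   # top-k per row
    rows = np.arange(scores.shape[0])[:, None]
    masked[rows, idx] = scores[rows, idx]
    # softmax over the surviving entries
    e = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

scores = np.array([[1.0, 3.0, 2.0, 0.5],
                   [0.1, 0.2, 5.0, 4.0]])
weights = topk_sparse_attention(scores, k=2)
```

Each row of `weights` is a valid attention distribution with exactly $k$ nonzero entries, so the value aggregation afterward touches only $k$ keys per query.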

Towards Unified Prompt Tuning for Few-shot Text Classification

1 code implementation11 May 2022 Jianing Wang, Chengyu Wang, Fuli Luo, Chuanqi Tan, Minghui Qiu, Fei Yang, Qiuhui Shi, Songfang Huang, Ming Gao

Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-shot text classification by employing task-specific prompts.

Few-Shot Learning Few-Shot Text Classification +5

KECP: Knowledge Enhanced Contrastive Prompting for Few-shot Extractive Question Answering

1 code implementation6 May 2022 Jianing Wang, Chengyu Wang, Minghui Qiu, Qiuhui Shi, Hongbin Wang, Jun Huang, Ming Gao

Extractive Question Answering (EQA) is one of the most important tasks in Machine Reading Comprehension (MRC), which can be solved by fine-tuning the span selecting heads of Pre-trained Language Models (PLMs).

Contrastive Learning Extractive Question-Answering +5

Optimal estimation of Gaussian DAG models

1 code implementation25 Jan 2022 Ming Gao, Wai Ming Tai, Bryon Aragam

In other words, at least for Gaussian models with equal error variances, learning a directed graphical model is statistically no more difficult than learning an undirected graphical model.

Programming Knowledge Tracing: A Comprehensive Dataset and A New Model

no code implementations11 Dec 2021 Renyu Zhu, Dongxiang Zhang, Chengcheng Han, Ming Gao, Xuesong Lu, Weining Qian, Aoying Zhou

More specifically, we construct a bipartite graph for programming problem embedding, and design an improved pre-training model PLCodeBERT for code embedding, as well as a double-sequence RNN model with exponential decay attention for effective feature fusion.

Clone Detection Knowledge Tracing
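The "exponential decay attention" mentioned above can be pictured as attention weights that shrink exponentially with a step's distance from the present. A sketch of that idea (not the paper's exact formulation, which also involves the double-sequence RNN):

```python
import numpy as np

def exp_decay_attention(values, theta=0.5):
    """Weight past steps by exp(-theta * distance) from the current step,
    normalize, and aggregate -- an illustrative sketch of exponential
    decay attention over a student's exercise history."""
    T = len(values)
    dist = np.arange(T - 1, -1, -1)      # distance to the most recent step
    w = np.exp(-theta * dist)
    w = w / w.sum()                      # normalized attention weights
    return w @ values, w

values = np.array([1.0, 2.0, 3.0, 4.0])
ctx, w = exp_decay_attention(values)
```

Recent attempts dominate the aggregated context, which matches the intuition that a student's latest submissions say the most about their current knowledge state.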

Efficient Bayesian network structure learning via local Markov boundary search

1 code implementation NeurIPS 2021 Ming Gao, Bryon Aragam

Perhaps surprisingly, we show that for certain graph ensembles, a simple forward greedy search algorithm (i.e., without a backward pruning phase) suffices to learn the Markov boundary of each node.
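A forward greedy search of this flavor can be sketched as follows. This is a simplified linear-Gaussian stand-in using residual variance as the score, not the authors' algorithm or guarantees:

```python
import numpy as np

def greedy_markov_boundary(X, y, threshold=1e-2):
    """Forward greedy search with no backward pruning: repeatedly add the
    variable that most reduces the residual variance of y given the
    selected set, stopping when the best gain falls below `threshold`."""
    n, d = X.shape

    def resid_var(cols):
        if not cols:
            return float(np.var(y))
        A = X[:, cols]
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        return float(np.var(y - A @ beta))

    selected = []
    current = resid_var(selected)
    while len(selected) < d:
        gains = {j: current - resid_var(selected + [j])
                 for j in range(d) if j not in selected}
        best = max(gains, key=gains.get)
        if gains[best] < threshold:
            break
        selected.append(best)
        current = resid_var(selected)
    return sorted(selected)

# toy example: y depends on columns 0 and 1 only
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 2 * X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=500)
mb = greedy_markov_boundary(X, y)
```

On this toy problem the forward pass alone recovers columns 0 and 1 and never admits the irrelevant variables, illustrating why a backward pruning phase can be unnecessary in benign regimes.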

Structure learning in polynomial time: Greedy algorithms, Bregman information, and exponential families

no code implementations NeurIPS 2021 Goutham Rajendran, Bohdan Kivva, Ming Gao, Bryon Aragam

Greedy algorithms have long been a workhorse for learning graphical models, and more broadly for learning statistical models with sparse structure.

On Disambiguating Authors: Collaboration Network Reconstruction in a Bottom-up Manner

1 code implementation29 Nov 2020 Na Li, Renyu Zhu, Xiaoxu Zhou, Xiangnan He, Wenyuan Cai, Ming Gao, Aoying Zhou

In this paper, we model the author disambiguation as a collaboration network reconstruction problem, and propose an incremental and unsupervised author disambiguation method, namely IUAD, which performs in a bottom-up manner.

Learning Relation Prototype from Unlabeled Texts for Long-tail Relation Extraction

1 code implementation27 Nov 2020 Yixin Cao, Jun Kuang, Ming Gao, Aoying Zhou, Yonggang Wen, Tat-Seng Chua

In this paper, we propose a general approach to learn relation prototypes from unlabeled texts, to facilitate long-tail relation extraction by transferring knowledge from the relation types with sufficient training data.

Relation Extraction Transfer Learning

A polynomial-time algorithm for learning nonparametric causal graphs

1 code implementation NeurIPS 2020 Ming Gao, Yi Ding, Bryon Aragam

We establish finite-sample guarantees for a polynomial-time algorithm for learning a nonlinear, nonparametric directed acyclic graphical (DAG) model from data.

Hierarchical Optimization Time Integration for CFL-rate MPM Stepping

1 code implementation18 Nov 2019 Xinlei Wang, Minchen Li, Yu Fang, Xinxin Zhang, Ming Gao, Min Tang, Danny M. Kaufman, Chenfanfu Jiang

We propose Hierarchical Optimization Time Integration (HOT) for efficient implicit time-stepping of the Material Point Method (MPM) irrespective of simulated materials and conditions.


Improving Neural Relation Extraction with Implicit Mutual Relations

1 code implementation8 Jul 2019 Jun Kuang, Yixin Cao, Jianbing Zheng, Xiangnan He, Ming Gao, Aoying Zhou

In contrast to existing distant supervision approaches that suffer from insufficient training corpora to extract relations, our proposal of mining implicit mutual relations from massive unlabeled corpora transfers the semantic information of entity pairs into the RE model, which is more expressive and semantically plausible.

Relation Extraction

Learning Vertex Representations for Bipartite Networks

1 code implementation16 Jan 2019 Ming Gao, Xiangnan He, Leihui Chen, Tingting Liu, Jinglin Zhang, Aoying Zhou

Recent years have witnessed a widespread increase of interest in network representation learning (NRL).

Collaborative Filtering Knowledge Graphs +2

BiRank: Towards Ranking on Bipartite Graphs

3 code implementations15 Aug 2017 Xiangnan He, Ming Gao, Min-Yen Kan, Dingxian Wang

In this paper, we study the problem of ranking vertices of a bipartite graph, based on the graph's link structure as well as prior information about vertices (which we term a query vector).
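The back-and-forth propagation at the heart of a BiRank-style method can be sketched in a few lines. This is an illustrative NumPy version under simplifying assumptions (symmetric normalization, uniform query vectors), not a faithful reimplementation of the paper:

```python
import numpy as np

def birank(W, qu, qp, alpha=0.85, beta=0.85, iters=100):
    """Iteratively propagate scores across a bipartite graph: W is the
    adjacency between the two vertex types, qu/qp are the prior (query)
    vectors toward which scores are smoothed. Parameter names are
    illustrative, not the paper's notation."""
    du, dp = W.sum(axis=1), W.sum(axis=0)
    S = W / np.sqrt(np.outer(du, dp))        # symmetric normalization
    u, p = qu.copy(), qp.copy()
    for _ in range(iters):
        p = alpha * (S.T @ u) + (1 - alpha) * qp
        u = beta * (S @ p) + (1 - beta) * qu
    return u, p

# tiny example: two users, three items; item 0 is linked by both users
W = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
qu = np.full(2, 0.5)
qp = np.full(3, 1 / 3)
u, p = birank(W, qu, qp)
```

With a uniform query vector, the item linked by both users ends up ranked highest, while a non-uniform `qp` would bias the ranking toward the prior, which is the role the query vector plays in the paper.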
