2 code implementations • EMNLP 2021 • Chengyu Wang, Jianing Wang, Minghui Qiu, Jun Huang, Ming Gao
Based on continuous prompt embeddings, we propose TransPrompt, a transferable prompting framework for few-shot learning across similar tasks.
1 code implementation • 21 Nov 2023 • Shu Zheng, Tiandi Ye, Xiang Li, Ming Gao
We theoretically show that the consensus mechanism can guarantee the convergence of the global objective.
no code implementations • 6 Nov 2023 • Yao Cheng, Minjie Chen, Xiang Li, Caihua Shan, Ming Gao
Specifically, the framework consists of three components: a backbone GNN model, a propagation controller to determine the optimal propagation steps for nodes, and a weight controller to compute the priority scores for nodes.
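A minimal sketch of the node-adaptive propagation idea: here a fixed per-node step budget stands in for the learned propagation controller, and the weight controller is omitted entirely, so this only illustrates the mechanism, not the paper's model.

```python
import numpy as np

def adaptive_propagate(adj, feats, steps_per_node):
    # Row-normalize so one propagation step averages neighbor features.
    deg = adj.sum(axis=1, keepdims=True)
    P = adj / np.clip(deg, 1, None)
    h = feats.copy()
    for t in range(int(steps_per_node.max())):
        h_next = P @ h
        active = (steps_per_node > t)[:, None]   # nodes whose budget remains
        h = np.where(active, h_next, h)          # exhausted nodes stay frozen
    return h

# Toy path graph: node 0 propagates once, node 3 three times.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
X = np.eye(4)
print(adaptive_propagate(A, X, np.array([1, 2, 2, 3])))
```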
1 code implementation • 19 Oct 2023 • Jianing Wang, Qiushi Sun, Nuo Chen, Chengyu Wang, Jun Huang, Ming Gao, Xiang Li
The recent success of large pre-trained language models (PLMs) hinges heavily on massive labeled data, which means they typically perform poorly in low-resource scenarios.
1 code implementation • 8 Oct 2023 • Chengcheng Han, Xiaowei Du, Che Zhang, Yixin Lian, Xiang Li, Ming Gao, Baoyuan Wang
Chain-of-Thought (CoT) prompting has proven to be effective in enhancing the reasoning capabilities of Large Language Models (LLMs) with at least 100 billion parameters.
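For readers unfamiliar with the technique, this is what a CoT prompt looks like in practice; the worked demonstration below is the standard hand-written style from the CoT literature, not content from this paper.

```python
# An illustrative Chain-of-Thought prompt: one worked example with explicit
# intermediate reasoning, followed by the question we actually want answered.
cot_prompt = """Q: Roger has 5 tennis balls. He buys 2 more cans of tennis
balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis
balls. 5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6
more, how many apples do they have?
A:"""  # the LLM is expected to continue with step-by-step reasoning
```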
no code implementations • 26 Sep 2023 • Jianing Wang, Chengyu Wang, Chuanqi Tan, Jun Huang, Ming Gao
In-Context Learning (ICL) over large language models (LLMs) aims at solving previously unseen tasks by conditioning on a few training examples, eliminating the need for parameter updates while achieving competitive performance.
1 code implementation • 5 Sep 2023 • Renyu Zhu, Chengcheng Han, Yong Qian, Qiushi Sun, Xiang Li, Ming Gao, Xuezhi Cao, Yunsen Xian
To solve these issues, in this paper, we propose MuSE, a novel exchanging-based multimodal fusion model for text-vision fusion built on the Transformer.
no code implementations • 5 Sep 2023 • Minjie Chen, Yao Cheng, Ye Wang, Xiang Li, Ming Gao
Further, since the triplet loss only optimizes the relative distance between the anchor and its positive/negative samples, it is difficult to constrain the absolute distance between the anchor and the positive sample.
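A tiny numeric illustration of that limitation: both configurations below already satisfy the margin, so the triplet loss is zero in each case, even though the anchor-positive distances differ by an order of magnitude.

```python
import numpy as np

def triplet_loss(anchor, pos, neg, margin=1.0):
    d_ap = np.linalg.norm(anchor - pos)
    d_an = np.linalg.norm(anchor - neg)
    return max(0.0, d_ap - d_an + margin)

a = np.array([0.0, 0.0])
# Both satisfy the margin, so the loss is zero for each, even though the
# anchor-positive distance is 0.1 in one case and 1.0 in the other.
print(triplet_loss(a, np.array([0.1, 0.0]), np.array([2.0, 0.0])))  # 0.0
print(triplet_loss(a, np.array([1.0, 0.0]), np.array([3.0, 0.0])))  # 0.0
```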
no code implementations • 29 Aug 2023 • Jianing Wang, Chengyu Wang, Cen Chen, Ming Gao, Jun Huang, Aoying Zhou
We propose TransPrompt v2, a novel transferable prompting framework for few-shot learning across similar or distant text classification tasks.
1 code implementation • 29 Jul 2023 • Tiandi Ye, Cen Chen, Yinggui Wang, Xiang Li, Ming Gao
The resistance of pFL methods with parameter decoupling is attributed to the heterogeneous classifiers between malicious clients and benign counterparts.
no code implementations • 29 Jul 2023 • Tiandi Ye, Cen Chen, Yinggui Wang, Xiang Li, Ming Gao
To address this challenge, we extend the adaptive risk minimization technique into the unsupervised personalized federated learning setting and propose our method, FedTTA.
no code implementations • 10 Jun 2023 • Jianing Wang, Qiushi Sun, Nuo Chen, Xiang Li, Ming Gao
To mitigate this brittleness, we propose novel Chain-of-Knowledge (CoK) prompting, which elicits LLMs to generate explicit pieces of knowledge evidence in the form of structured triples.
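A sketch of what such a prompt might look like; the template and the triple format below are assumptions for illustration, not the paper's verbatim format.

```python
# Hypothetical CoK-style prompt: the model is asked to emit explicit
# (subject, relation, object) evidence triples before answering.
cok_prompt = """Question: Was Barack Obama born in the capital of the USA?
Evidence triples:
(Barack Obama, born_in, Honolulu)
(Washington, D.C., capital_of, USA)
Answer: No, he was born in Honolulu, not Washington, D.C."""
```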
no code implementations • 25 May 2023 • Ming Gao, Yanwu Xu, Yang Zhao, Tingbo Hou, Chenkai Zhao, Mingming Gong
In this paper, we propose a novel language-guided 3D arbitrary neural style transfer method (CLIP3Dstyler).
no code implementations • 23 May 2023 • Qiushi Sun, Nuo Chen, Jianing Wang, Xiang Li, Ming Gao
To tackle the issue, in this paper, we present TransCoder, a unified Transferable fine-tuning strategy for Code representation learning.
no code implementations • 17 May 2023 • Chengcheng Han, Liqing Cui, Renyu Zhu, Jianing Wang, Nuo Chen, Qiushi Sun, Xiang Li, Ming Gao
In this paper, we introduce gradient descent into the black-box tuning scenario through knowledge distillation.
1 code implementation • 14 May 2023 • Qiushi Sun, Chengcheng Han, Nuo Chen, Renyu Zhu, Jingyang Gong, Xiang Li, Ming Gao
Large language models (LLMs) have shown increasing power on various natural language processing (NLP) tasks.
1 code implementation • 28 Feb 2023 • Jianing Wang, Nuo Chen, Qiushi Sun, Wenkang Huang, Chengyu Wang, Ming Gao
In this paper, we introduce HugNLP, a unified and comprehensive library for natural language processing (NLP) with the prevalent backend of HuggingFace Transformers, which is designed for NLP researchers to easily utilize off-the-shelf algorithms and develop novel methods with user-defined models and tasks in real-world scenarios.
1 code implementation • CVPR 2023 • Kangyang Luo, Xiang Li, Yunshi Lan, Ming Gao
Federated Learning (FL) has emerged as a de facto machine learning paradigm and has received rapidly increasing research interest from the community.
no code implementations • 17 Feb 2023 • Jianing Wang, Chengyu Wang, Jun Huang, Ming Gao, Aoying Zhou
Neural sequence labeling (NSL) aims at assigning labels to input language tokens, which covers a broad range of applications such as named entity recognition (NER) and slot filling.
1 code implementation • 14 Feb 2023 • Chengcheng Han, Renyu Zhu, Jun Kuang, FengJiao Chen, Xiang Li, Ming Gao, Xuezhi Cao, Wei Wu
We design an improved triplet network to map samples and prototype vectors into a low-dimensional space that is easier to classify, and propose an adaptive margin for each entity type.
1 code implementation • 5 Feb 2023 • Chengcheng Han, Yuhe Wang, Yingnan Fu, Xiang Li, Minghui Qiu, Ming Gao, Aoying Zhou
Few-shot learning has been used to tackle the problem of label scarcity in text classification, among which meta-learning-based methods such as prototypical networks (PROTO) have been shown to be effective.
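For readers unfamiliar with PROTO, here is a minimal sketch of generic prototypical-network inference (the base method, not this paper's improved variant): each class is represented by the mean of its support embeddings, and queries are assigned to the nearest prototype.

```python
import numpy as np

def proto_classify(support_x, support_y, query_x):
    # Each class prototype is the mean of its support embeddings.
    classes = np.unique(support_y)
    protos = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    # Assign each query to the class of its nearest prototype.
    dists = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return classes[dists.argmin(axis=1)]

# Toy 2-way 2-shot episode with 2-d "embeddings".
sx = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
sy = np.array([0, 0, 1, 1])
print(proto_classify(sx, sy, np.array([[0.2, 0.5], [4.8, 5.5]])))  # [0 1]
```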
no code implementations • 29 Jan 2023 • Xiang Li, Tiandi Ye, Caihua Shan, Dongsheng Li, Ming Gao
In this paper, to comprehensively enhance the performance of generative graph SSL against other GCL models on both unsupervised and supervised learning tasks, we propose the SeeGera model, which is based on the family of self-supervised variational graph auto-encoder (VGAE).
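As background, a minimal numpy skeleton of the vanilla VGAE that SeeGera builds on; this is the base model, not SeeGera itself, and the one-layer encoder is a simplification.

```python
import numpy as np

rng = np.random.default_rng(0)

def vgae_forward(A, X, W_mu, W_logvar):
    # One-layer GCN-style encoder with symmetric normalization.
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))
    mu = A_norm @ X @ W_mu
    logvar = A_norm @ X @ W_logvar
    z = mu + rng.standard_normal(mu.shape) * np.exp(0.5 * logvar)  # reparameterize
    return 1.0 / (1.0 + np.exp(-(z @ z.T)))     # inner-product edge decoder
```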
no code implementations • ICCV 2023 • Haoang Li, Jinhu Dong, Binghui Wen, Ming Gao, Tianyu Huang, Yun-hui Liu, Daniel Cremers
It abstracts the shape prior of a category, and thus can provide constraints on the overall shape of an instance.
no code implementations • 24 Oct 2022 • Jiakuan Fan, Haoyue Wang, Wei Wang, Ming Gao, Shengyu Zhao
In open-source project governance, much attention has been paid to how to measure developers' contributions.
no code implementations • 17 Oct 2022 • Jianing Wang, Chengcheng Han, Chengyu Wang, Chuanqi Tan, Minghui Qiu, Songfang Huang, Jun Huang, Ming Gao
Few-shot Named Entity Recognition (NER) aims to identify named entities with very little annotated data.
1 code implementation • 16 Oct 2022 • Jianing Wang, Wenkang Huang, Qiuhui Shi, Hongbin Wang, Minghui Qiu, Xiang Li, Ming Gao
In this paper, to address these problems, we introduce a novel knowledge prompting paradigm and further propose a knowledge-prompting-based PLM framework, KP-PLM.
1 code implementation • 7 Oct 2022 • Nuo Chen, Qiushi Sun, Renyu Zhu, Xiang Li, Xuesong Lu, Ming Gao
To interpret these models, some probing methods have been applied.
1 code implementation • 27 May 2022 • Tingting Liu, Chengyu Wang, Cen Chen, Ming Gao, Aoying Zhou
With top-$k$ sparse attention, the most crucial attention relations can be obtained at a lower computational cost.
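A minimal numpy sketch of the top-$k$ masking step, assuming single-head dot-product attention; ties at the $k$-th score are kept here, which a real implementation would handle more carefully.

```python
import numpy as np

def topk_attention(q, k, v, topk=2):
    # Standard scaled dot-product scores.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    # Keep only the top-k scores per query row; mask the rest to -inf
    # so their softmax weight becomes exactly zero.
    kth = np.sort(scores, axis=-1)[:, -topk][:, None]
    masked = np.where(scores >= kth, scores, -np.inf)
    w = np.exp(masked - masked.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v
```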
1 code implementation • 11 May 2022 • Jianing Wang, Chengyu Wang, Fuli Luo, Chuanqi Tan, Minghui Qiu, Fei Yang, Qiuhui Shi, Songfang Huang, Ming Gao
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-shot text classification by employing task-specific prompts.
1 code implementation • ACL 2022 • Renyu Zhu, Lei Yuan, Xiang Li, Ming Gao, Wenyuan Cai
In this paper, we consider human behaviors and propose the PGNN-EK model that consists of two main components.
1 code implementation • 6 May 2022 • Jianing Wang, Chengyu Wang, Minghui Qiu, Qiuhui Shi, Hongbin Wang, Jun Huang, Ming Gao
Extractive Question Answering (EQA) is one of the most important tasks in Machine Reading Comprehension (MRC), which can be solved by fine-tuning the span-selection heads of Pre-trained Language Models (PLMs).
1 code implementation • 25 Jan 2022 • Ming Gao, Wai Ming Tai, Bryon Aragam
In other words, at least for Gaussian models with equal error variances, learning a directed graphical model is statistically no more difficult than learning an undirected graphical model.
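One classical way to exploit the equal-error-variance assumption is to recover a topological order by repeatedly selecting the variable with the smallest conditional variance given those already chosen. The sketch below implements that generic ordering step for the linear-Gaussian case; it is an illustration of the setting, not the paper's exact estimator.

```python
import numpy as np

def equal_variance_order(X):
    # Valid for Gaussian SEMs with equal error variances: sources have the
    # smallest (conditional) variance, so pick them greedily in order.
    n, p = X.shape
    order, rest = [], list(range(p))
    while rest:
        best, best_var = None, np.inf
        for j in rest:
            if order:
                # Residual variance of X_j after regressing on chosen columns.
                Z = X[:, order]
                beta, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
                resid = X[:, j] - Z @ beta
            else:
                resid = X[:, j] - X[:, j].mean()
            if resid.var() < best_var:
                best, best_var = j, resid.var()
        order.append(best)
        rest.remove(best)
    return order
```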
no code implementations • 11 Dec 2021 • Renyu Zhu, Dongxiang Zhang, Chengcheng Han, Ming Gao, Xuesong Lu, Weining Qian, Aoying Zhou
More specifically, we construct a bipartite graph for programming problem embedding, and design an improved pre-training model PLCodeBERT for code embedding, as well as a double-sequence RNN model with exponential decay attention for effective feature fusion.
1 code implementation • NeurIPS 2021 • Ming Gao, Bryon Aragam
Perhaps surprisingly, we show that for certain graph ensembles, a simple forward greedy search algorithm (i.e., without a backward pruning phase) suffices to learn the Markov boundary of each node.
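A hedged sketch of such a forward-only greedy search in the linear-Gaussian case, using residual-variance reduction as a stand-in for the score; note the absence of any backward pruning phase.

```python
import numpy as np

def greedy_markov_boundary(X, target, tol=1e-3):
    # Repeatedly add the variable that most reduces the residual variance
    # of the target; stop when no candidate improves the fit by at least tol.
    n, p = X.shape
    chosen, rest = [], [j for j in range(p) if j != target]
    y = X[:, target]
    current = y.var()
    while rest:
        gains = []
        for j in rest:
            Z = X[:, chosen + [j]]
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            gains.append(current - (y - Z @ beta).var())
        best = int(np.argmax(gains))
        if gains[best] < tol:
            break
        current -= gains[best]
        chosen.append(rest.pop(best))
    return chosen
```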
no code implementations • NeurIPS 2021 • Goutham Rajendran, Bohdan Kivva, Ming Gao, Bryon Aragam
Greedy algorithms have long been a workhorse for learning graphical models, and more broadly for learning statistical models with sparse structure.
1 code implementation • Findings (ACL) 2021 • Chengcheng Han, Zeqiu Fan, Dongxiang Zhang, Minghui Qiu, Ming Gao, Aoying Zhou
Meta-learning has emerged as a trending technique to tackle few-shot text classification and achieved state-of-the-art performance.
1 code implementation • 29 Nov 2020 • Na Li, Renyu Zhu, Xiaoxu Zhou, Xiangnan He, Wenyuan Cai, Ming Gao, Aoying Zhou
In this paper, we model author disambiguation as a collaboration network reconstruction problem, and propose an incremental and unsupervised author disambiguation method, namely IUAD, which performs in a bottom-up manner.
1 code implementation • 27 Nov 2020 • Yixin Cao, Jun Kuang, Ming Gao, Aoying Zhou, Yonggang Wen, Tat-Seng Chua
In this paper, we propose a general approach to learn relation prototypes from unlabeled texts, to facilitate long-tail relation extraction by transferring knowledge from relation types with sufficient training data.
1 code implementation • 6 Jul 2020 • Yingnan Fu, Tingting Liu, Ming Gao, Aoying Zhou
The symbol-level image encoder of EDSL consists of a segmentation module and a reconstruction module.
1 code implementation • NeurIPS 2020 • Ming Gao, Yi Ding, Bryon Aragam
We establish finite-sample guarantees for a polynomial-time algorithm for learning a nonlinear, nonparametric directed acyclic graphical (DAG) model from data.
1 code implementation • 18 Nov 2019 • Xinlei Wang, Minchen Li, Yu Fang, Xinxin Zhang, Ming Gao, Min Tang, Danny M. Kaufman, Chenfanfu Jiang
We propose Hierarchical Optimization Time Integration (HOT) for efficient implicit time-stepping of the Material Point Method (MPM) irrespective of simulated materials and conditions.
1 code implementation • 8 Jul 2019 • Jun Kuang, Yixin Cao, Jianbing Zheng, Xiangnan He, Ming Gao, Aoying Zhou
In contrast to existing distant-supervision approaches, which suffer from insufficient training corpora for extracting relations, our proposal mines implicit mutual relations from massive unlabeled corpora and transfers the semantic information of entity pairs into the RE model, making it more expressive and semantically plausible.
1 code implementation • 16 Jan 2019 • Ming Gao, Xiangnan He, Leihui Chen, Tingting Liu, Jinglin Zhang, Aoying Zhou
Recent years have witnessed a widespread increase of interest in network representation learning (NRL).
3 code implementations • 15 Aug 2017 • Xiangnan He, Ming Gao, Min-Yen Kan, Dingxian Wang
In this paper, we study the problem of ranking vertices of a bipartite graph, based on the graph's link structure as well as prior information about vertices (which we term a query vector).
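The fixed-point iteration at the heart of this line of work (published as BiRank) smooths scores over the bipartite structure while anchoring them to the query vector. A minimal sketch with symmetric normalization; the damping parameters and iteration count here are illustrative defaults.

```python
import numpy as np

def birank(W, u0, p0, alpha=0.85, beta=0.85, iters=50):
    # W is the |U| x |P| bipartite adjacency; u0 and p0 are the query
    # (prior) vectors for the two vertex sets.
    du = np.clip(W.sum(axis=1), 1, None)
    dp = np.clip(W.sum(axis=0), 1, None)
    S = W / np.sqrt(np.outer(du, dp))             # symmetric normalization
    u, p = u0.copy(), p0.copy()
    for _ in range(iters):
        p = alpha * (S.T @ u) + (1 - alpha) * p0  # smooth, then pull to prior
        u = beta * (S @ p) + (1 - beta) * u0
    return u, p
```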