2 code implementations • EMNLP 2021 • Chengyu Wang, Jianing Wang, Minghui Qiu, Jun Huang, Ming Gao
Based on continuous prompt embeddings, we propose TransPrompt, a transferable prompting framework for few-shot learning across similar tasks.
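As a rough illustration of the idea of continuous prompts, the following minimal PyTorch sketch prepends trainable prompt vectors to a sequence of token embeddings (the class name, token count, and dimensions are our own illustrative choices, not details from the paper):

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Minimal sketch: trainable continuous prompt vectors prepended to the
    input embeddings (names and sizes are illustrative, not from the paper)."""

    def __init__(self, n_prompt_tokens: int = 10, hidden_dim: int = 768):
        super().__init__()
        # The prompt is a free parameter learned by gradient descent,
        # instead of being chosen from the discrete vocabulary.
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, hidden_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, hidden_dim)
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        # Prepend the soft prompt; a (typically frozen) PLM consumes the result.
        return torch.cat([prompt, input_embeds], dim=1)

x = torch.randn(2, 16, 768)   # toy batch of token embeddings
print(SoftPrompt()(x).shape)  # torch.Size([2, 26, 768])
```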
no code implementations • 2 Apr 2025 • Jin Lian, Zhongyu Wan, Ming Gao, Junfeng Chen
Cross-layer feature pyramid networks (CFPNs) have achieved notable progress in multi-scale feature fusion and boundary detail preservation for salient object detection.
no code implementations • 5 Feb 2025 • Ming Gao, Ruichen Qiu, Zeng Hui Chang, Kanjian Zhang, Haikun Wei, Hong Cai Chen
The proposed methodology consists of two principal components: (1) a circuit diagram recognition algorithm that extracts the circuit components and topological structure using the proposed GAM-YOLO model and a two-step connected-domain filtering algorithm, and (2) a hierarchical retrieval strategy based on graph similarity and different graph representations for analog circuits.
1 code implementation • 19 Dec 2024 • Jiayi Wu, Hengyi Cai, Lingyong Yan, Hao Sun, Xiang Li, Shuaiqiang Wang, Dawei Yin, Ming Gao
The emergence of Retrieval-Augmented Generation (RAG) has alleviated the problems of outdated and hallucinated content in the outputs of large language models (LLMs), yet RAG still exhibits numerous limitations.
no code implementations • 25 Nov 2024 • Hong Cai Chen, Longchang Wu, Ming Gao, Lingrui Shen, Jiarui Zhong, Yipin Xu
Efficient and accurate extraction of electrical parameters from circuit datasheets and design documents is critical for accelerating circuit design in Electronic Design Automation (EDA).
1 code implementation • 23 Oct 2024 • Jiayi Wu, Hao Sun, Hengyi Cai, Lixin Su, Shuaiqiang Wang, Dawei Yin, Xiang Li, Ming Gao
Based on this insight, we incorporate a tiny language model with a minimal number of parameters.
1 code implementation • 26 Sep 2024 • Guangyu Wang, Yujie Chen, Ming Gao, Zhiqiao Wu, Jiafu Tang, Jiabi Zhao
Accurate traffic prediction faces significant challenges, necessitating a deep understanding of both temporal and spatial cues and their complex interactions across multiple variables.
Ranked #1 on Traffic Prediction on METR-LA
no code implementations • 24 Sep 2024 • Xiaohong Liu, Guoxing Yang, Yulin Luo, Jiaji Mao, Xiang Zhang, Ming Gao, Shanghang Zhang, Jun Shen, Guangyu Wang
When evaluated on the real-world benchmark involving three representative modalities, 2D images (chest X-rays), multi-view images (mammograms), and 3D images (thyroid CT scans), RadFound significantly outperforms other VL foundation models on both quantitative metrics and human evaluation.
no code implementations • 12 Sep 2024 • Kangyang Luo, Shuai Wang, Yexuan Fu, Renrong Shao, Xiang Li, Yunshi Lan, Ming Gao, Jinlong Shu
In dual-model distillation, the trained dual generators work together to provide the training data for updates of the global model.
no code implementations • 11 Sep 2024 • Kangyang Luo, Shuai Wang, Xiang Li, Yunshi Lan, Ming Gao, Jinlong Shu
Federated Learning (FL) is gaining popularity as a distributed learning framework that only shares model parameters or gradient updates and keeps private data locally.
no code implementations • 27 Aug 2024 • Andrew Saba, Aderotimi Adetunji, Adam Johnson, Aadi Kothari, Matthew Sivaprakasam, Joshua Spisak, Prem Bharatia, Arjun Chauhan, Brendan Duff Jr., Noah Gasparro, Charles King, Ryan Larkin, Brian Mao, Micah Nye, Anjali Parashar, Joseph Attias, Aurimas Balciunas, Austin Brown, Chris Chang, Ming Gao, Cindy Heredia, Andrew Keats, Jose Lavariega, William Muckelroy III, Andre Slavescu, Nickolas Stathas, Nayana Suvarna, Chuan Tian Zhang, Sebastian Scherer, Deva Ramanan
While far from challenging what a human racecar driver can do, the IAC is pushing the state of the art by facilitating full-sized ARV competitions.
no code implementations • 21 May 2024 • Jiahao Zhang, Yin Gu, Deyu Sun, Yuhua Gao, Ming Gao, Ming Cui, Teng Zhang, He Ma
Fusing the complementary information of computed tomography (CT) and magnetic resonance imaging (MRI) may help to precisely delineate the extent of paracervical tissue invasion.
no code implementations • 6 May 2024 • Qunlong Ma, Zhi Ma, Ming Gao
Here we design a Hamming distance regularizer in the framework of a class of generative models, variational autoregressive networks (VAN), to quantify the generalization capabilities of various network architectures combined with VAN.
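One plausible form of such a regularizer, sketched below under our own assumptions (the paper's exact definition may differ), is the mean pairwise Hamming distance over a batch of binary spin samples:

```python
import torch

def hamming_regularizer(samples: torch.Tensor) -> torch.Tensor:
    """One plausible form of a Hamming-distance regularizer (an illustrative
    guess, not the paper's exact definition): the mean pairwise Hamming
    distance over a batch of binary spin samples.

    samples: (batch, n_spins) tensor with entries in {0, 1}.
    """
    # Elementwise disagreement counts differing spins for each pair of samples.
    diff = (samples.unsqueeze(0) != samples.unsqueeze(1)).float()
    pairwise = diff.sum(-1)  # (batch, batch) matrix of Hamming distances
    b = samples.size(0)
    # Mean over ordered pairs (the diagonal contributes zero); a scaled
    # version of this could be subtracted from the loss to reward diversity.
    return pairwise.sum() / (b * (b - 1))

s = torch.randint(0, 2, (8, 100)).float()
print(hamming_regularizer(s))
```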
no code implementations • 11 Apr 2024 • Jiayi Wu, Renyu Zhu, Nuo Chen, Qiushi Sun, Xiang Li, Ming Gao
Over the past few years, we have witnessed remarkable advancements in Code Pre-trained Models (CodePTMs).
no code implementations • 9 Apr 2024 • Qunlong Ma, Zhi Ma, Jinlong Xu, Hairui Zhang, Ming Gao
Many deep neural networks have been used to solve Ising models, including autoregressive neural networks, convolutional neural networks, recurrent neural networks, and graph neural networks.
no code implementations • 20 Feb 2024 • Yingfan Liu, Renyu Zhu, Ming Gao
With the rapid development of big data and AI technology, programming is in high demand and has become an essential skill for students.
1 code implementation • 13 Feb 2024 • Jianing Wang, Junda Wu, Yupeng Hou, Yao Liu, Ming Gao, Julian McAuley
In this paper, we propose InstructGraph, a framework that empowers LLMs with the abilities of graph reasoning and generation by instruction tuning and preference alignment.
1 code implementation • 9 Feb 2024 • Yuhao Wang, Ming Gao, Wai Ming Tai, Bryon Aragam, Arnab Bhattacharyya
We develop optimal algorithms for learning undirected Gaussian trees and directed Gaussian polytrees from data.
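For intuition, a standard baseline for the undirected case is the classical Chow-Liu construction, which for jointly Gaussian variables reduces to a maximum-weight spanning tree over absolute correlations; the sketch below illustrates that baseline and is not necessarily the paper's optimal algorithm:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def chow_liu_gaussian_tree(X: np.ndarray):
    """Classical Chow-Liu step for Gaussian trees (a standard baseline, not
    the paper's algorithm): for jointly Gaussian variables, mutual
    information is increasing in |correlation|, so the best tree is a
    maximum-weight spanning tree of the absolute correlation matrix.

    X: (n_samples, n_vars) data matrix; returns the tree edges.
    """
    corr = np.corrcoef(X, rowvar=False)
    np.fill_diagonal(corr, 0.0)
    # Negate so that a *minimum* spanning tree maximizes |correlation|.
    mst = minimum_spanning_tree(-np.abs(corr))
    rows, cols = mst.nonzero()
    return list(zip(rows.tolist(), cols.tolist()))

rng = np.random.default_rng(0)
z = rng.normal(size=(1000, 1))
X = np.hstack([z, z + 0.1 * rng.normal(size=(1000, 1)),
               z + 0.5 * rng.normal(size=(1000, 1))])
print(chow_liu_gaussian_tree(X))
```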
1 code implementation • 27 Jan 2024 • Yiyuan Zhu, Yongjun Li, Jialiang Wang, Ming Gao, Jiali Wei
Over the past few years, a large number of deep-learning-based fake news detection algorithms have emerged.
1 code implementation • 15 Jan 2024 • Yunshi Lan, Xinyuan Li, Hanyue Du, Xuesong Lu, Ming Gao, Weining Qian, Aoying Zhou
Natural Language Processing (NLP) aims to analyze text and speech using computational techniques.
1 code implementation • 21 Nov 2023 • Shu Zheng, Tiandi Ye, Xiang Li, Ming Gao
We theoretically show that the consensus mechanism can guarantee the convergence of the global objective.
no code implementations • 6 Nov 2023 • Yao Cheng, Minjie Chen, Xiang Li, Caihua Shan, Ming Gao
Specifically, the framework consists of three components: a backbone GNN model, a propagation controller to determine the optimal propagation steps for nodes, and a weight controller to compute the priority scores for nodes.
1 code implementation • 19 Oct 2023 • Jianing Wang, Qiushi Sun, Nuo Chen, Chengyu Wang, Jun Huang, Ming Gao, Xiang Li
The recent success of large pre-trained language models (PLMs) hinges heavily on massive labeled data, which typically leads to inferior performance in low-resource scenarios.
1 code implementation • 8 Oct 2023 • Chengcheng Han, Xiaowei Du, Che Zhang, Yixin Lian, Xiang Li, Ming Gao, Baoyuan Wang
Chain-of-Thought (CoT) prompting has proven to be effective in enhancing the reasoning capabilities of Large Language Models (LLMs) with at least 100 billion parameters.
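A minimal example of what a chain-of-thought prompt looks like (the demonstration below is a standard illustrative example, not taken from this paper): the in-context demonstration includes intermediate reasoning steps, which the model is encouraged to imitate.

```python
# Illustrative chain-of-thought prompt: the worked answer shows its
# reasoning before the final result.
cot_prompt = """Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each.
How many tennis balls does he have now?
A: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11.
The answer is 11.

Q: A cafeteria had 23 apples. It used 20 and bought 6 more.
How many apples are there now?
A:"""
```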
no code implementations • 26 Sep 2023 • Jianing Wang, Chengyu Wang, Chuanqi Tan, Jun Huang, Ming Gao
Large language models (LLMs) enable in-context learning (ICL) by conditioning on a few labeled training examples as a text-based prompt, eliminating the need for parameter updates and achieving competitive performance.
no code implementations • 5 Sep 2023 • Minjie Chen, Yao Cheng, Ye Wang, Xiang Li, Ming Gao
Further, since the triplet loss only optimizes the relative distance between the anchor and its positive/negative samples, it is difficult to constrain the absolute distance between the anchor and the positive sample.
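The toy computation below illustrates the point: the standard triplet loss max(0, d(a,p) - d(a,n) + margin) can be exactly zero even when the anchor-positive distance is large (the numbers are ours, not from the paper):

```python
import torch
import torch.nn.functional as F

# The loss is zero whenever d(a, n) >= d(a, p) + margin, even if
# d(a, p) itself is large, so the absolute anchor-positive distance
# is left unconstrained.
a = torch.tensor([[0.0, 0.0]])
p = torch.tensor([[10.0, 0.0]])  # far from the anchor...
n = torch.tensor([[20.0, 0.0]])  # ...but the negative is even farther
loss = F.triplet_margin_loss(a, p, n, margin=1.0)
print(loss)                      # tensor(0.) despite d(a, p) = 10
```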
1 code implementation • 5 Sep 2023 • Renyu Zhu, Chengcheng Han, Yong Qian, Qiushi Sun, Xiang Li, Ming Gao, Xuezhi Cao, Yunsen Xian
To solve these issues, in this paper we propose MuSE, a novel exchanging-based multimodal fusion model for text-vision fusion built on the Transformer architecture.
no code implementations • 29 Aug 2023 • Jianing Wang, Chengyu Wang, Cen Chen, Ming Gao, Jun Huang, Aoying Zhou
We propose TransPrompt v2, a novel transferable prompting framework for few-shot learning across similar or distant text classification tasks.
no code implementations • 29 Jul 2023 • Tiandi Ye, Cen Chen, Yinggui Wang, Xiang Li, Ming Gao
To address this challenge, we extend the adaptive risk minimization technique into the unsupervised personalized federated learning setting and propose our method, FedTTA.
1 code implementation • 29 Jul 2023 • Tiandi Ye, Cen Chen, Yinggui Wang, Xiang Li, Ming Gao
The resistance of pFL methods with parameter decoupling is attributed to the heterogeneous classifiers between malicious clients and benign counterparts.
1 code implementation • 10 Jun 2023 • Jianing Wang, Qiushi Sun, Xiang Li, Ming Gao
To mitigate this brittleness, we propose a novel Chain-of-Knowledge (CoK) prompting, where we aim at eliciting LLMs to generate explicit pieces of knowledge evidence in the form of structure triple.
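The sketch below shows one plausible format for such a prompt, with explicit (head, relation, tail) evidence triples preceding the answer (the template is our illustration, not the paper's exact wording):

```python
# Illustrative chain-of-knowledge style prompt: the model is asked to
# emit explicit evidence triples before committing to an answer.
cok_prompt = """Question: Where was the author of 'Hamlet' born?
Evidence triples:
(Hamlet, written_by, William Shakespeare)
(William Shakespeare, born_in, Stratford-upon-Avon)
Answer: Stratford-upon-Avon

Question: What currency is used in the country whose capital is Oslo?
Evidence triples:"""
```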
no code implementations • 25 May 2023 • Ming Gao, Yanwu Xu, Yang Zhao, Tingbo Hou, Chenkai Zhao, Mingming Gong
In this paper, we propose a novel language-guided 3D arbitrary neural style transfer method (CLIP3Dstyler).
1 code implementation • 23 May 2023 • Qiushi Sun, Nuo Chen, Jianing Wang, Xiang Li, Ming Gao
To tackle the issue, in this paper, we present TransCoder, a unified Transferable fine-tuning strategy for Code representation learning.
no code implementations • 17 May 2023 • Chengcheng Han, Liqing Cui, Renyu Zhu, Jianing Wang, Nuo Chen, Qiushi Sun, Xiang Li, Ming Gao
In this paper, we introduce gradient descent into the black-box tuning scenario through knowledge distillation.
1 code implementation • 14 May 2023 • Qiushi Sun, Chengcheng Han, Nuo Chen, Renyu Zhu, Jingyang Gong, Xiang Li, Ming Gao
Large language models (LLMs) have shown increasing power on various natural language processing (NLP) tasks.
2 code implementations • 28 Feb 2023 • Jianing Wang, Nuo Chen, Qiushi Sun, Wenkang Huang, Chengyu Wang, Ming Gao
In this paper, we introduce HugNLP, a unified and comprehensive library for natural language processing (NLP) built on the HuggingFace Transformers backend, designed to let NLP researchers easily utilize off-the-shelf algorithms and develop novel methods with user-defined models and tasks in real-world scenarios.
1 code implementation • CVPR 2023 • Kangyang Luo, Xiang Li, Yunshi Lan, Ming Gao
Federated Learning (FL) has emerged as a de facto machine learning paradigm and has received rapidly increasing research interest from the community.
no code implementations • 17 Feb 2023 • Jianing Wang, Chengyu Wang, Jun Huang, Ming Gao, Aoying Zhou
Neural sequence labeling (NSL) aims at assigning labels to input language tokens and covers a broad range of applications, such as named entity recognition (NER) and slot filling.
1 code implementation • 14 Feb 2023 • Chengcheng Han, Renyu Zhu, Jun Kuang, FengJiao Chen, Xiang Li, Ming Gao, Xuezhi Cao, Wei Wu
We design an improved triplet network to map samples and prototype vectors into a low-dimensional space in which they are easier to classify, and propose an adaptive margin for each entity type.
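A minimal sketch of a triplet loss with a per-entity-type margin is given below; the paper's exact adaptive rule may differ, and the tensor names are our own:

```python
import torch
import torch.nn.functional as F

def adaptive_margin_triplet(anchor, positive, negative, margins, labels):
    """Sketch of a triplet loss with a per-entity-type margin (the exact
    adaptive rule in the paper may differ; `margins` here is simply a
    learnable tensor indexed by entity type).

    anchor/positive/negative: (batch, dim) embeddings
    margins: (n_types,) tensor of per-type margins
    labels:  (batch,) entity-type index of each anchor
    """
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    m = margins[labels]  # margin chosen per entity type
    return F.relu(d_pos - d_neg + m).mean()

emb = lambda: torch.randn(4, 32)
margins = torch.nn.Parameter(torch.full((5,), 0.5))
labels = torch.randint(0, 5, (4,))
print(adaptive_margin_triplet(emb(), emb(), emb(), margins, labels))
```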
1 code implementation • 5 Feb 2023 • Chengcheng Han, Yuhe Wang, Yingnan Fu, Xiang Li, Minghui Qiu, Ming Gao, Aoying Zhou
Few-shot learning has been used to tackle the problem of label scarcity in text classification, among which meta-learning-based methods have been shown to be effective, such as the prototypical networks (PROTO).
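For reference, the core PROTO step is small enough to sketch directly: class prototypes are the means of the support embeddings, and queries are assigned to the nearest prototype (a generic sketch of the standard method, not this paper's extension):

```python
import torch

def proto_classify(support, support_labels, query, n_classes):
    """Minimal prototypical-network step: each class prototype is the mean
    of its support embeddings; queries go to the nearest prototype.

    support: (n_support, dim), query: (n_query, dim)
    """
    protos = torch.stack([support[support_labels == c].mean(0)
                          for c in range(n_classes)])  # (n_classes, dim)
    dists = torch.cdist(query, protos)                 # Euclidean distances
    return dists.argmin(dim=1)                         # predicted class per query

sup = torch.randn(10, 64)
lab = torch.tensor([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(proto_classify(sup, lab, torch.randn(3, 64), n_classes=2))
```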
no code implementations • 29 Jan 2023 • Xiang Li, Tiandi Ye, Caihua Shan, Dongsheng Li, Ming Gao
In this paper, to comprehensively enhance the performance of generative graph SSL against other GCL models on both unsupervised and supervised learning tasks, we propose the SeeGera model, which is based on the family of self-supervised variational graph auto-encoder (VGAE).
no code implementations • ICCV 2023 • Haoang Li, Jinhu Dong, Binghui Wen, Ming Gao, Tianyu Huang, Yun-hui Liu, Daniel Cremers
It abstracts the shape prior of a category, and thus can provide constraints on the overall shape of an instance.
no code implementations • 24 Oct 2022 • Jiakuan Fan, Haoyue Wang, Wei Wang, Ming Gao, Shengyu Zhao
In open source project governance, considerable attention has been paid to how to measure developers' contributions.
no code implementations • 17 Oct 2022 • Jianing Wang, Chengcheng Han, Chengyu Wang, Chuanqi Tan, Minghui Qiu, Songfang Huang, Jun Huang, Ming Gao
Few-shot Named Entity Recognition (NER) aims to identify named entities with very little annotated data.
1 code implementation • 16 Oct 2022 • Jianing Wang, Wenkang Huang, Qiuhui Shi, Hongbin Wang, Minghui Qiu, Xiang Li, Ming Gao
In this paper, to address these problems, we introduce a seminal knowledge prompting paradigm and further propose a knowledge-prompting-based PLM framework KP-PLM.
1 code implementation • 7 Oct 2022 • Nuo Chen, Qiushi Sun, Renyu Zhu, Xiang Li, Xuesong Lu, Ming Gao
To interpret these models, some probing methods have been applied.
1 code implementation • 27 May 2022 • Tingting Liu, Chengyu Wang, Cen Chen, Ming Gao, Aoying Zhou
With top-$k$ sparse attention, the most crucial attention relations can be obtained at a lower computational cost.
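A generic form of top-$k$ sparse attention is sketched below, assuming the usual scaled dot-product formulation (the paper's exact variant may differ): logits below the $k$-th largest per query are masked out before the softmax.

```python
import torch

def topk_sparse_attention(q, k, v, topk: int = 8):
    """Generic top-k sparse attention: keep only the k largest attention
    logits per query and mask out the rest before the softmax.

    q, k, v: (batch, seq_len, dim)
    """
    scores = q @ k.transpose(-2, -1) / k.size(-1) ** 0.5  # (b, lq, lk)
    kth = scores.topk(topk, dim=-1).values[..., -1:]      # k-th largest per query
    masked = scores.masked_fill(scores < kth, float("-inf"))
    return torch.softmax(masked, dim=-1) @ v

x = torch.randn(1, 32, 16)
print(topk_sparse_attention(x, x, x).shape)  # torch.Size([1, 32, 16])
```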
1 code implementation • 11 May 2022 • Jianing Wang, Chengyu Wang, Fuli Luo, Chuanqi Tan, Minghui Qiu, Fei Yang, Qiuhui Shi, Songfang Huang, Ming Gao
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-shot text classification by employing task-specific prompts.
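For readers unfamiliar with the setup, a cloze-style prompt converts classification into masked-token prediction; the template and verbalizer below are our own illustrative example, not the paper's:

```python
# Illustrative cloze-style prompt for sentiment classification: the PLM
# fills [MASK], and the verbalizer maps label words back to classes.
template = "{sentence} It was [MASK]."
verbalizer = {"great": "positive", "terrible": "negative"}
print(template.format(sentence="The movie was a delight."))
```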
1 code implementation • ACL 2022 • Renyu Zhu, Lei Yuan, Xiang Li, Ming Gao, Wenyuan Cai
In this paper, we consider human behaviors and propose the PGNN-EK model that consists of two main components.
1 code implementation • 6 May 2022 • Jianing Wang, Chengyu Wang, Minghui Qiu, Qiuhui Shi, Hongbin Wang, Jun Huang, Ming Gao
Extractive Question Answering (EQA) is one of the most important tasks in Machine Reading Comprehension (MRC), which can be solved by fine-tuning the span selecting heads of Pre-trained Language Models (PLMs).
1 code implementation • 25 Jan 2022 • Ming Gao, Wai Ming Tai, Bryon Aragam
In other words, at least for Gaussian models with equal error variances, learning a directed graphical model is statistically no more difficult than learning an undirected graphical model.
no code implementations • 11 Dec 2021 • Renyu Zhu, Dongxiang Zhang, Chengcheng Han, Ming Gao, Xuesong Lu, Weining Qian, Aoying Zhou
More specifically, we construct a bipartite graph for programming problem embedding, and design an improved pre-training model PLCodeBERT for code embedding, as well as a double-sequence RNN model with exponential decay attention for effective feature fusion.
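One plausible form of exponential decay attention over RNN hidden states is sketched below (our assumption of the formulation, with more recent steps weighted exponentially higher):

```python
import torch

def exp_decay_attention(h, gamma: float = 0.1):
    """Sketch of exponential-decay attention over a sequence of hidden
    states (one plausible form; the paper's exact formulation may differ):
    more recent steps receive exponentially larger weight.

    h: (batch, seq_len, dim) hidden states; returns (batch, dim).
    """
    T = h.size(1)
    ages = torch.arange(T - 1, -1, -1, dtype=h.dtype)  # T-1, ..., 0
    w = torch.softmax(-gamma * ages, dim=0)            # weight decays with age
    return (w.view(1, T, 1) * h).sum(dim=1)

print(exp_decay_attention(torch.randn(2, 20, 8)).shape)  # torch.Size([2, 8])
```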
1 code implementation • NeurIPS 2021 • Ming Gao, Bryon Aragam
Perhaps surprisingly, we show that for certain graph ensembles, a simple forward greedy search algorithm (i.e., without a backward pruning phase) suffices to learn the Markov boundary of each node.
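The sketch below conveys the flavor of a forward greedy search with no backward phase, using the residual variance of a least-squares fit as an illustrative score (the paper's analysis uses its own score and stopping rule):

```python
import numpy as np

def greedy_markov_boundary(X, target: int, tol: float = 1e-3):
    """Forward greedy search for a node's neighborhood (illustrative score:
    residual variance of a least-squares fit; no backward pruning phase).

    X: (n_samples, n_vars); returns indices selected for `target`.
    """
    n, d = X.shape
    y = X[:, target]
    selected, rest = [], [j for j in range(d) if j != target]
    best_var = y.var()
    while rest:
        # Try adding each remaining variable; keep the best reduction.
        scores = []
        for j in rest:
            A = np.column_stack([X[:, selected + [j]], np.ones(n)])
            resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
            scores.append(resid.var())
        j_best = rest[int(np.argmin(scores))]
        if best_var - min(scores) < tol:
            break  # no candidate improves the fit enough; stop
        best_var = min(scores)
        selected.append(j_best)
        rest.remove(j_best)
    return selected

rng = np.random.default_rng(1)
a = rng.normal(size=1000); b = rng.normal(size=1000)
y = a + b + 0.1 * rng.normal(size=1000)
X = np.column_stack([a, b, rng.normal(size=1000), y])
print(greedy_markov_boundary(X, target=3))  # expected: columns 0 and 1
```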
no code implementations • NeurIPS 2021 • Goutham Rajendran, Bohdan Kivva, Ming Gao, Bryon Aragam
Greedy algorithms have long been a workhorse for learning graphical models, and more broadly for learning statistical models with sparse structure.
1 code implementation • Findings (ACL) 2021 • Chengcheng Han, Zeqiu Fan, Dongxiang Zhang, Minghui Qiu, Ming Gao, Aoying Zhou
Meta-learning has emerged as a trending technique to tackle few-shot text classification and achieved state-of-the-art performance.
1 code implementation • 29 Nov 2020 • Na Li, Renyu Zhu, Xiaoxu Zhou, Xiangnan He, Wenyuan Cai, Ming Gao, Aoying Zhou
In this paper, we model the author disambiguation as a collaboration network reconstruction problem, and propose an incremental and unsupervised author disambiguation method, namely IUAD, which performs in a bottom-up manner.
1 code implementation • 27 Nov 2020 • Yixin Cao, Jun Kuang, Ming Gao, Aoying Zhou, Yonggang Wen, Tat-Seng Chua
In this paper, we propose a general approach to learn relation prototypes from unlabeled texts, to facilitate long-tail relation extraction by transferring knowledge from the relation types with sufficient training data.
1 code implementation • 6 Jul 2020 • Yingnan Fu, Tingting Liu, Ming Gao, Aoying Zhou
The symbol-level image encoder of EDSL consists of a segmentation module and a reconstruction module.
1 code implementation • NeurIPS 2020 • Ming Gao, Yi Ding, Bryon Aragam
We establish finite-sample guarantees for a polynomial-time algorithm for learning a nonlinear, nonparametric directed acyclic graphical (DAG) model from data.
1 code implementation • 18 Nov 2019 • Xinlei Wang, Minchen Li, Yu Fang, Xinxin Zhang, Ming Gao, Min Tang, Danny M. Kaufman, Chenfanfu Jiang
We propose Hierarchical Optimization Time Integration (HOT) for efficient implicit time-stepping of the Material Point Method (MPM) irrespective of simulated materials and conditions.
1 code implementation • 8 Jul 2019 • Jun Kuang, Yixin Cao, Jianbing Zheng, Xiangnan He, Ming Gao, Aoying Zhou
In contrast to existing distant supervision approaches, which suffer from insufficient training corpora for relation extraction, our proposal mines implicit mutual relations from massive unlabeled corpora and transfers the semantic information of entity pairs into the RE model, making it more expressive and semantically plausible.
1 code implementation • 16 Jan 2019 • Ming Gao, Xiangnan He, Leihui Chen, Tingting Liu, Jinglin Zhang, Aoying Zhou
Recent years have witnessed a widespread increase of interest in network representation learning (NRL).
2 code implementations • 15 Aug 2017 • Xiangnan He, Ming Gao, Min-Yen Kan, Dingxian Wang
In this paper, we study the problem of ranking vertices of a bipartite graph, based on the graph's link structure as well as prior information about vertices (which we term a query vector).
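A generic power-iteration sketch of query-dependent bipartite ranking is given below; the normalization and update are our simplification in the spirit of the problem statement, not the paper's published algorithm:

```python
import numpy as np

def bipartite_rank(W, query, alpha: float = 0.85, iters: int = 100):
    """Query-dependent bipartite ranking by power iteration (a generic
    formulation; the published algorithm's normalization may differ).

    W: (n_u, n_v) biadjacency matrix; query: prior scores over the n_v
    vertices. Scores propagate back and forth across the two sides.
    """
    P_uv = W / W.sum(axis=1, keepdims=True)       # u <- v transitions
    P_vu = (W / W.sum(axis=0, keepdims=True)).T   # v <- u transitions
    v = query / query.sum()
    for _ in range(iters):
        u = P_uv @ v                                   # rank u-side from v-side
        v = alpha * (P_vu @ u) + (1 - alpha) * query   # mix in the prior
        v = v / v.sum()
    return u, v

W = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
u, v = bipartite_rank(W, query=np.array([1.0, 0.0, 0.0]))
print(u.round(3), v.round(3))
```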