Search Results for author: Yaqing Wang

Found 48 papers, 16 papers with code

CoRelation: Boosting Automatic ICD Coding Through Contextualized Code Relation Learning

no code implementations • 24 Feb 2024 • Junyu Luo, Xiaochen Wang, Jiaqi Wang, Aofei Chang, Yaqing Wang, Fenglong Ma

Automatic International Classification of Diseases (ICD) coding plays a crucial role in the extraction of relevant information from clinical notes for proper recording and billing.

Relation

Accurate and interpretable drug-drug interaction prediction enabled by knowledge subgraph learning

1 code implementation • 25 Nov 2023 • Yaqing Wang, Zaifei Yang, Quanming Yao

Thus, the lack of DDIs is implicitly compensated by the enriched drug representations and propagated drug similarities.

Knowledge Graphs

Automated Evaluation of Personalized Text Generation using Large Language Models

no code implementations • 17 Oct 2023 • Yaqing Wang, Jiepu Jiang, Mingyang Zhang, Cheng Li, Yi Liang, Qiaozhu Mei, Michael Bendersky

Personalized text generation presents a specialized mechanism for delivering content that is specific to a user's personal context.

Text Generation text similarity

Hierarchical Pretraining on Multimodal Electronic Health Records

1 code implementation • 11 Oct 2023 • Xiaochen Wang, Junyu Luo, Jiaqi Wang, Ziyi Yin, Suhan Cui, Yuan Zhong, Yaqing Wang, Fenglong Ma

Pretraining has proven to be a powerful technique in natural language processing (NLP), exhibiting remarkable success in various NLP downstream tasks.

Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity

1 code implementation • 8 Oct 2023 • Lu Yin, You Wu, Zhenyu Zhang, Cheng-Yu Hsieh, Yaqing Wang, Yiling Jia, Mykola Pechenizkiy, Yi Liang, Zhangyang Wang, Shiwei Liu

Large Language Models (LLMs), renowned for their remarkable performance across diverse domains, present a challenge when it comes to practical deployment due to their colossal model size.

Network Pruning

MedDiffusion: Boosting Health Risk Prediction via Diffusion-based Data Augmentation

no code implementations • 4 Oct 2023 • Yuan Zhong, Suhan Cui, Jiaqi Wang, Xiaochen Wang, Ziyi Yin, Yaqing Wang, Houping Xiao, Mengdi Huai, Ting Wang, Fenglong Ma

Health risk prediction is one of the fundamental tasks under predictive modeling in the medical domain, which aims to forecast the potential health risks that patients may face in the future using their historical Electronic Health Records (EHR).

Data Augmentation

DeeDiff: Dynamic Uncertainty-Aware Early Exiting for Accelerating Diffusion Model Generation

no code implementations • 29 Sep 2023 • Shengkun Tang, Yaqing Wang, Caiwen Ding, Yi Liang, Yao Li, Dongkuan Xu

In this work, we propose DeeDiff, an early exiting framework that adaptively allocates computation resources in each sampling step to improve the generation efficiency of diffusion models.

text-guided-generation
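The DeeDiff snippet describes allocating computation adaptively per sampling step via uncertainty-aware early exiting. As a rough illustration only (not the DeeDiff architecture; the dummy layers, the toy uncertainty score, and the threshold below are all invented for this sketch), an uncertainty-gated exit loop might look like:

```python
# Toy early-exit loop: stop refining once a predicted uncertainty score
# drops below a threshold. Everything here is illustrative, not DeeDiff.
def run_with_early_exit(layers, x, uncertainty_fn, threshold=0.25):
    exits_used = 0
    for layer in layers:
        x = layer(x)
        exits_used += 1
        if uncertainty_fn(x) < threshold:  # confident enough: exit early
            break
    return x, exits_used

# Tiny demo with scalar "activations" and a hand-made uncertainty score.
layers = [lambda v: v + 1.0] * 5          # five identical dummy layers
uncertainty = lambda v: 1.0 / v           # uncertainty shrinks as v grows
out, used = run_with_early_exit(layers, 1.0, uncertainty, threshold=0.25)
print(out, used)  # 5.0 4 -- exits after 4 of the 5 layers
```

In a diffusion model this gate would sit inside each denoising step, so easy steps spend fewer layers than hard ones.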

Teach LLMs to Personalize -- An Approach inspired by Writing Education

no code implementations • 15 Aug 2023 • Cheng Li, Mingyang Zhang, Qiaozhu Mei, Yaqing Wang, Spurthi Amba Hombaiah, Yi Liang, Michael Bendersky

Inspired by the practice of writing education, we develop a multistage and multitask framework to teach LLMs for personalized generation.

Retrieval Text Generation

Efficient and Joint Hyperparameter and Architecture Search for Collaborative Filtering

1 code implementation • 12 Jul 2023 • Yan Wen, Chen Gao, Lingling Yi, Liwei Qiu, Yaqing Wang, Yong Li

Automated Machine Learning (AutoML) techniques have recently been introduced to design Collaborative Filtering (CF) models in a data-specific manner.

AutoML Collaborative Filtering

ColdNAS: Search to Modulate for User Cold-Start Recommendation

1 code implementation • 6 Jun 2023 • Shiguang Wu, Yaqing Wang, Qinghe Jing, Daxiang Dong, Dejing Dou, Quanming Yao

Instead of using a fixed modulation function and deciding modulation position by expertise, we propose a modulation framework called ColdNAS for user cold-start problem, where we look for proper modulation structure, including function and position, via neural architecture search.

Neural Architecture Search Position +1
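ColdNAS searches over both the modulation function and its position. A minimal sketch of one candidate function it could choose among, FiLM-style feature-wise modulation (all names, shapes, and the random weights below are illustrative placeholders, not the searched architecture), using numpy:

```python
import numpy as np

# FiLM-style modulation: a user-context vector produces a per-feature scale
# (gamma) and shift (beta) that transform item features. This is one generic
# candidate a modulation-structure search could consider; the weights are
# random placeholders, not trained parameters.
rng = np.random.default_rng(0)
d_user, d_item = 4, 3
W_gamma = rng.normal(size=(d_user, d_item))
W_beta = rng.normal(size=(d_user, d_item))

def modulate(user_ctx, item_feats):
    gamma = user_ctx @ W_gamma          # per-feature scale from user context
    beta = user_ctx @ W_beta            # per-feature shift from user context
    return gamma * item_feats + beta    # feature-wise affine modulation

user_ctx = rng.normal(size=d_user)
item_feats = rng.normal(size=d_item)
print(modulate(user_ctx, item_feats).shape)  # (3,)
```

Swapping the affine form for other functions, or applying it at different layers, gives the kind of search space the snippet refers to.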

SimFair: A Unified Framework for Fairness-Aware Multi-Label Classification

no code implementations • 19 Feb 2023 • Tianci Liu, Haoyu Wang, Yaqing Wang, Xiaoqian Wang, Lu Su, Jing Gao

This new framework utilizes data that have similar labels when estimating fairness on a particular label group for better stability, and can unify DP and EOp.

Classification Fairness +1

Generative Time Series Forecasting with Diffusion, Denoise, and Disentanglement

1 code implementation • 8 Jan 2023 • Yan Li, Xinjiang Lu, Yaqing Wang, Dejing Dou

In this work, we address the time series forecasting problem with generative modeling, proposing a bidirectional variational auto-encoder (BVAE) equipped with diffusion, denoise, and disentanglement, namely D3VAE.

Denoising Disentanglement +2

You Need Multiple Exiting: Dynamic Early Exiting for Accelerating Unified Vision Language Model

1 code implementation • CVPR 2023 • Shengkun Tang, Yaqing Wang, Zhenglun Kong, Tianchi Zhang, Yao Li, Caiwen Ding, Yanzhi Wang, Yi Liang, Dongkuan Xu

To handle this challenge, we propose a novel early exiting strategy for unified visual language models, namely MuE, which dynamically skips layers in both the encoder and decoder based on layer-wise similarities of the input, allowing multiple early exits.

Language Modelling

AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning

1 code implementation • 31 Oct 2022 • Yaqing Wang, Sahaj Agarwal, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao

Standard fine-tuning of large pre-trained language models (PLMs) for downstream tasks requires updating hundreds of millions to billions of parameters and storing a large copy of the PLM weights for every task, resulting in increased cost for storing, sharing, and serving the models.

Distilling Ensemble of Explanations for Weakly-Supervised Pre-Training of Image Segmentation Models

2 code implementations • 4 Jul 2022 • Xuhong Li, Haoyi Xiong, Yi Liu, Dingfu Zhou, Zeyu Chen, Yaqing Wang, Dejing Dou

Though image classification datasets could provide the backbone networks with rich visual features and discriminative ability, they are incapable of fully pre-training the target model (i.e., backbone + segmentation modules) in an end-to-end manner.

Classification Image Classification +3

AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning

1 code implementation • 24 May 2022 • Yaqing Wang, Sahaj Agarwal, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao

Standard fine-tuning of large pre-trained language models (PLMs) for downstream tasks requires updating hundreds of millions to billions of parameters and storing a large copy of the PLM weights for every task, resulting in increased cost for storing, sharing, and serving the models.

Natural Language Understanding Sparse Learning

Low-rank Tensor Learning with Nonconvex Overlapped Nuclear Norm Regularization

no code implementations • 6 May 2022 • Quanming Yao, Yaqing Wang, Bo Han, James Kwok

While the optimization problem is nonconvex and nonsmooth, we show that its critical points still have good statistical performance on the tensor completion problem.

Exploring the Common Principal Subspace of Deep Features in Neural Networks

no code implementations • 6 Oct 2021 • Haoran Liu, Haoyi Xiong, Yaqing Wang, Haozhe An, Dongrui Wu, Dejing Dou

Specifically, we design a new metric $\mathcal{P}$-vector to represent the principal subspace of deep features learned in a DNN, and propose to measure angles between the principal subspaces using $\mathcal{P}$-vectors.

Image Reconstruction Self-Supervised Learning
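The entry above measures angles between principal subspaces of deep features via $\mathcal{P}$-vectors. The underlying linear-algebra step, computing principal angles between two subspaces from the SVD of the product of their orthonormal bases, can be sketched as follows (the feature matrices here are synthetic stand-ins, not deep features):

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (radians) between the column spaces of A and B."""
    Qa, _ = np.linalg.qr(A)             # orthonormal basis of span(A)
    Qb, _ = np.linalg.qr(B)             # orthonormal basis of span(B)
    sigma = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    sigma = np.clip(sigma, -1.0, 1.0)   # guard against rounding past 1
    return np.arccos(sigma)

# Two planes in R^3 sharing the x-axis: one angle 0, one angle pi/2.
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # xy-plane
B = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])  # xz-plane
print(principal_angles(A, B))  # [0.  1.5708...]
```

With real networks, A and B would be matrices of feature vectors collected from two models or layers.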

FedTriNet: A Pseudo Labeling Method with Three Players for Federated Semi-supervised Learning

no code implementations • 12 Sep 2021 • Liwei Che, Zewei Long, Jiaqi Wang, Yaqing Wang, Houping Xiao, Fenglong Ma

In particular, we propose to use three networks and a dynamic quality control mechanism to generate high-quality pseudo labels for unlabeled data, which are added to the training set.

Federated Learning
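FedTriNet's snippet describes promoting unlabeled data to the training set only after a quality-control gate. A generic confidence-thresholded pseudo-labeling filter (a simplification invented for illustration; FedTriNet's actual mechanism uses three networks and a dynamic gate) can be sketched:

```python
# Generic pseudo-labeling with a confidence gate: only predictions whose
# top probability clears a threshold are promoted to training labels.
# Toy probabilities below are invented for the demo.
def pseudo_label(prob_rows, threshold=0.9):
    labeled = []
    for i, probs in enumerate(prob_rows):
        conf = max(probs)
        if conf >= threshold:                  # quality gate
            labeled.append((i, probs.index(conf)))
    return labeled

probs = [
    [0.95, 0.03, 0.02],   # confident -> kept, label 0
    [0.40, 0.35, 0.25],   # ambiguous -> dropped
    [0.05, 0.05, 0.90],   # confident -> kept, label 2
]
print(pseudo_label(probs))  # [(0, 0), (2, 2)]
```

Making the threshold adaptive, or requiring agreement among multiple models, are the kinds of refinements the paper's dynamic mechanism adds.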

FedCon: A Contrastive Framework for Federated Semi-Supervised Learning

no code implementations • 9 Sep 2021 • Zewei Long, Jiaqi Wang, Yaqing Wang, Houping Xiao, Fenglong Ma

Most existing FedSSL methods focus on the classical scenario, i.e., the labeled and unlabeled data are stored at the client side.

Multimodal Emergent Fake News Detection via Meta Neural Process Networks

no code implementations • 22 Jun 2021 • Yaqing Wang, Fenglong Ma, Haoyu Wang, Kishlay Jha, Jing Gao

The experimental results show our proposed MetaFEND model can detect fake news on never-seen events effectively and outperform the state-of-the-art methods.

Fake News Detection Hard Attention +1

Adaptive Self-training for Neural Sequence Labeling with Few Labels

no code implementations • 1 Jan 2021 • Yaqing Wang, Subhabrata Mukherjee, Haoda Chu, Yuancheng Tu, Ming Wu, Jing Gao, Ahmed Hassan Awadallah

Neural sequence labeling is an important technique employed for many Natural Language Processing (NLP) tasks, such as Named Entity Recognition (NER), slot tagging for dialog systems and semantic parsing.

Meta-Learning named-entity-recognition +3

Empirical Studies on the Convergence of Feature Spaces in Deep Learning

no code implementations • 1 Jan 2021 • Haoran Liu, Haoyi Xiong, Yaqing Wang, Haozhe An, Dongrui Wu, Dejing Dou

While deep learning is effective to learn features/representations from data, the distributions of samples in feature spaces learned by various architectures for different training tasks (e.g., latent layers of AEs and feature vectors in CNN classifiers) have not been well-studied or compared.

Image Reconstruction Self-Supervised Learning

FedSiam: Towards Adaptive Federated Semi-Supervised Learning

no code implementations • 6 Dec 2020 • Zewei Long, Liwei Che, Yaqing Wang, Muchao Ye, Junyu Luo, Jinze Wu, Houping Xiao, Fenglong Ma

In this paper, we focus on designing a general framework FedSiam to tackle different scenarios of federated semi-supervised learning, including four settings in the labels-at-client scenario and two settings in the labels-at-server scenario.

Federated Learning

Adaptive Self-training for Few-shot Neural Sequence Labeling

no code implementations • 7 Oct 2020 • Yaqing Wang, Subhabrata Mukherjee, Haoda Chu, Yuancheng Tu, Ming Wu, Jing Gao, Ahmed Hassan Awadallah

While self-training serves as an effective mechanism to learn from large amounts of unlabeled data, meta-learning helps in adaptive sample re-weighting to mitigate error propagation from noisy pseudo-labels.

Meta-Learning named-entity-recognition +3

Efficient Knowledge Graph Validation via Cross-Graph Representation Learning

no code implementations • 16 Aug 2020 • Yaqing Wang, Fenglong Ma, Jing Gao

To tackle this challenging task, we propose a cross-graph representation learning framework, i.e., CrossVal, which can leverage an external KG to validate the facts in the target KG efficiently.

Graph Representation Learning Knowledge Graphs

A Scalable, Adaptive and Sound Nonconvex Regularizer for Low-rank Matrix Completion

no code implementations • 14 Aug 2020 • Yaqing Wang, Quanming Yao, James T. Kwok

Extensive low-rank matrix completion experiments on a number of synthetic and real-world data sets show that the proposed method obtains state-of-the-art recovery performance while being the fastest in comparison to existing low-rank matrix learning methods.

Collaborative Filtering Low-Rank Matrix Completion
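The entry above benchmarks a nonconvex regularizer against existing low-rank matrix completion methods. The classical convex baseline it improves on, iterative singular-value soft-thresholding on the observed entries, can be sketched as follows (step size, threshold, and iteration count are arbitrary toy choices; this is the standard nuclear-norm baseline, not the paper's nonconvex regularizer):

```python
import numpy as np

def svt_complete(M_obs, mask, tau=0.2, n_iters=500):
    """Fill missing entries of M_obs (mask==True where observed) by iterative
    singular-value soft-thresholding: the convex nuclear-norm baseline."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - tau, 0.0)          # shrink singular values
        X = (U * s) @ Vt                      # low-rank reconstruction
        X = np.where(mask, M_obs, X)          # keep observed entries fixed
    return X

# Rank-1 ground truth with a few entries hidden.
rng = np.random.default_rng(0)
u, v = rng.normal(size=(5, 1)), rng.normal(size=(1, 5))
M = u @ v
mask = rng.random(M.shape) > 0.3              # ~70% of entries observed
X = svt_complete(M, mask)
print(np.abs(X - M).max())                    # recovery error on all entries
```

Nonconvex regularizers like the one proposed reduce the bias that the uniform shrinkage `s - tau` introduces on large singular values.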

Automatic Validation of Textual Attribute Values in E-commerce Catalog by Learning with Limited Labeled Data

no code implementations • 15 Jun 2020 • Yaqing Wang, Yifan Ethan Xu, Xi-An Li, Xin Luna Dong, Jing Gao

(1) We formalize the problem of validating the textual attribute values of products from a variety of categories as a natural language inference task in the few-shot learning setting, and propose a meta-learning latent variable model to jointly process the signals obtained from product profiles and textual attribute values.

Attribute Few-Shot Learning +1

Decomposed Adversarial Learned Inference

no code implementations • 21 Apr 2020 • Alexander Hanbo Li, Yaqing Wang, Changyou Chen, Jing Gao

Effective inference for a generative adversarial model remains an important and challenging problem.

Weak Supervision for Fake News Detection via Reinforcement Learning

1 code implementation • 28 Dec 2019 • Yaqing Wang, Weifeng Yang, Fenglong Ma, Jin Xu, Bin Zhong, Qiang Deng, Jing Gao

In order to tackle this challenge, we propose a reinforced weakly-supervised fake news detection framework, i.e., WeFEND, which can leverage users' reports as weak supervision to enlarge the amount of training data for fake news detection.

Fake News Detection reinforcement-learning +1

Generalizing from a Few Examples: A Survey on Few-Shot Learning

4 code implementations • 10 Apr 2019 • Yaqing Wang, Quanming Yao, James Kwok, Lionel M. Ni

Machine learning has been highly successful in data-intensive applications but is often hampered when the data set is small.

BIG-bench Machine Learning Few-Shot Learning

General Convolutional Sparse Coding with Unknown Noise

no code implementations • 8 Mar 2019 • Yaqing Wang, James T. Kwok, Lionel M. Ni

However, existing CSC methods can only model noises from Gaussian distribution, which is restrictive and unrealistic.

AIM: Adversarial Inference by Matching Priors and Conditionals

no code implementations • 27 Sep 2018 • Hanbo Li, Yaqing Wang, Changyou Chen, Jing Gao

We propose a novel approach, Adversarial Inference by Matching priors and conditionals (AIM), which explicitly matches prior and conditional distributions in both data and code spaces, and puts a direct constraint on the dependency structure of the generative model.

Online Convolutional Sparse Coding with Sample-Dependent Dictionary

no code implementations • ICML 2018 • Yaqing Wang, Quanming Yao, James T. Kwok, Lionel M. Ni

Convolutional sparse coding (CSC) has been popularly used for the learning of shift-invariant dictionaries in image and signal processing.

Scalable Online Convolutional Sparse Coding

no code implementations • 21 Jun 2017 • Yaqing Wang, Quanming Yao, James T. Kwok, Lionel M. Ni

Convolutional sparse coding (CSC) improves sparse coding by learning a shift-invariant dictionary from the data.
