Search Results for author: Ruiyi Zhang

Found 51 papers, 17 papers with code

Influence Diagram Bandits

no code implementations ICML 2020 Tong Yu, Branislav Kveton, Zheng Wen, Ruiyi Zhang, Ole J. Mengshoel

We experiment with three structured bandit problems: cascading bandits, online learning to rank in the position-based model, and rank-1 bandits.

Learning-To-Rank

Few-Shot Class-Incremental Learning for Named Entity Recognition

no code implementations ACL 2022 Rui Wang, Tong Yu, Handong Zhao, Sungchul Kim, Subrata Mitra, Ruiyi Zhang, Ricardo Henao

In this work, we study a more challenging but practical problem, i.e., few-shot class-incremental learning for NER, where an NER model is trained with only a few labeled samples of the new classes, without forgetting knowledge of the old ones.

class-incremental learning Few-Shot Class-Incremental Learning +4

Bias and Fairness in Large Language Models: A Survey

1 code implementation 2 Sep 2023 Isabel O. Gallegos, Ryan A. Rossi, Joe Barrow, Md Mehrab Tanjim, Sungchul Kim, Franck Dernoncourt, Tong Yu, Ruiyi Zhang, Nesreen K. Ahmed

Rapid advancements of large language models (LLMs) have enabled the processing, understanding, and generation of human-like text, with increasing integration into systems that touch our social sphere.

Fairness

Knowledge Graph Prompting for Multi-Document Question Answering

no code implementations 22 Aug 2023 Yu Wang, Nedim Lipka, Ryan A. Rossi, Alexa Siu, Ruiyi Zhang, Tyler Derr

Concurrently, the LM-guided traverser acts as a local navigator that gathers pertinent context to progressively approach the question and guarantee retrieval quality.

graph construction Open-Domain Question Answering +1

InfoPrompt: Information-Theoretic Soft Prompt Tuning for Natural Language Understanding

no code implementations 8 Jun 2023 Junda Wu, Tong Yu, Rui Wang, Zhao Song, Ruiyi Zhang, Handong Zhao, Chaochao Lu, Shuai Li, Ricardo Henao

With this framework, we develop two novel mutual information based loss functions, to (i) discover proper prompt initialization for the downstream tasks and learn sufficient task-relevant information from prompt tokens and (ii) encourage the output representation from the pretrained language model to be more aware of the task-relevant information captured in the learnt prompt.

Language Modelling Natural Language Understanding

Label-Retrieval-Augmented Diffusion Models for Learning from Noisy Labels

1 code implementation 31 May 2023 Jian Chen, Ruiyi Zhang, Tong Yu, Rohan Sharma, Zhiqiang Xu, Tong Sun, Changyou Chen

Remarkably, by incorporating conditional information from the powerful CLIP model, our method can boost the current SOTA accuracy by 10-20 absolute points in many cases.

Image Classification Retrieval

Enhancing Detail Preservation for Customized Text-to-Image Generation: A Regularization-Free Approach

1 code implementation 23 May 2023 Yufan Zhou, Ruiyi Zhang, Tong Sun, Jinhui Xu

However, generating images of a novel concept provided by the user's input image is still a challenging task.

Few-Shot Dialogue Summarization via Skeleton-Assisted Prompt Transfer

no code implementations 20 May 2023 Kaige Xie, Tong Yu, Haoliang Wang, Junda Wu, Handong Zhao, Ruiyi Zhang, Kanak Mahadik, Ani Nenkova, Mark Riedl

In this paper, we focus on improving the prompt transfer from dialogue state tracking to dialogue summarization and propose Skeleton-Assisted Prompt Transfer (SAPT), which leverages skeleton generation as extra supervision that functions as a medium connecting the distinct source and target tasks, enabling the model to better consume dialogue state information.

Dialogue State Tracking Transfer Learning

DrugChat: Towards Enabling ChatGPT-Like Capabilities on Drug Molecule Graphs

1 code implementation 18 May 2023 Youwei Liang, Ruiyi Zhang, Li Zhang, Pengtao Xie

The DrugChat system consists of a graph neural network (GNN), a large language model (LLM), and an adaptor.

Drug Discovery Language Modelling +1
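
To make the three-component design concrete, below is a minimal, hypothetical sketch of how a graph encoder and an adaptor could map a molecule graph to a single "soft token" in an LLM's embedding space. The module names, dimensions, and one-layer GNN are illustrative assumptions, not the DrugChat implementation.

import torch
import torch.nn as nn

class SimpleGNN(nn.Module):
    """One round of mean-aggregation message passing over a molecule graph (toy stand-in for the paper's GNN)."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, node_feats, adj):
        # node_feats: (N, in_dim) atom features, adj: (N, N) adjacency matrix
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        msg = adj @ node_feats / deg               # mean over neighbors
        h = torch.relu(self.lin(node_feats + msg))
        return h.mean(dim=0)                       # graph-level embedding

class GraphAdaptor(nn.Module):
    """Projects the graph embedding into the LLM's token-embedding space."""
    def __init__(self, hid_dim, llm_dim):
        super().__init__()
        self.proj = nn.Linear(hid_dim, llm_dim)

    def forward(self, graph_emb):
        return self.proj(graph_emb).unsqueeze(0)   # one "soft token" of shape (1, llm_dim)

gnn = SimpleGNN(in_dim=16, hid_dim=64)
adaptor = GraphAdaptor(hid_dim=64, llm_dim=4096)
node_feats, adj = torch.randn(9, 16), torch.eye(9)
soft_token = adaptor(gnn(node_feats, adj))         # would be prepended to the LLM's input embeddings

In a full system of this kind, the resulting soft token conditions the language model's generation on the drug graph alongside the user's text prompt.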

Towards Building the Federated GPT: Federated Instruction Tuning

1 code implementation 9 May 2023 Jianyi Zhang, Saeed Vahidian, Martin Kuo, Chunyuan Li, Ruiyi Zhang, Guoyin Wang, Yiran Chen

This repository offers a foundational framework for exploring federated fine-tuning of LLMs using heterogeneous instructions across diverse categories.

Federated Learning

Robustness of Demonstration-based Learning Under Limited Data Scenario

1 code implementation 19 Oct 2022 Hongxin Zhang, Yanzhe Zhang, Ruiyi Zhang, Diyi Yang

Demonstration-based learning has shown great potential in stimulating pretrained language models' ability under limited-data scenarios.

Few-shot NER

STT: Soft Template Tuning for Few-Shot Adaptation

no code implementations 18 Jul 2022 Ping Yu, Wei Wang, Chunyuan Li, Ruiyi Zhang, Zhanpeng Jin, Changyou Chen

Significantly, it can even outperform the time- and resource-consuming fine-tuning method on sentiment classification tasks.

Few-Shot Learning Language Modelling +3

Federated Non-negative Matrix Factorization for Short Texts Topic Modeling with Mutual Information

no code implementations 26 May 2022 Shijing Si, Jianzong Wang, Ruiyi Zhang, Qinliang Su, Jing Xiao

Non-negative matrix factorization (NMF) based topic modeling is widely used in natural language processing (NLP) to uncover hidden topics of short text documents.

Federated Learning text-classification +1
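
For orientation, here is a minimal sketch of plain (centralized) NMF topic modeling with scikit-learn; the corpus and hyperparameters are made up, and the paper's federated aggregation and mutual-information regularization are not shown.

from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "cheap flights to tokyo",
    "tokyo hotel deals",
    "nba playoff scores tonight",
    "live basketball scores",
]
X = TfidfVectorizer().fit_transform(docs)         # term-document TF-IDF matrix
nmf = NMF(n_components=2, init="nndsvd", random_state=0)
W = nmf.fit_transform(X)                          # document-topic weights
H = nmf.components_                               # topic-word weights (the "topics")

Each row of H can be read off as a topic by taking its highest-weighted vocabulary terms.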

Towards Language-Free Training for Text-to-Image Generation

no code implementations CVPR 2022 Yufan Zhou, Ruiyi Zhang, Changyou Chen, Chunyuan Li, Chris Tensmeyer, Tong Yu, Jiuxiang Gu, Jinhui Xu, Tong Sun

One of the major challenges in training text-to-image generation models is the need for a large number of high-quality text-image pairs.

Zero-Shot Text-to-Image Generation

Improving Zero-shot Voice Style Transfer via Disentangled Representation Learning

1 code implementation ICLR 2021 Siyang Yuan, Pengyu Cheng, Ruiyi Zhang, Weituo Hao, Zhe Gan, Lawrence Carin

Voice style transfer, also called voice conversion, seeks to modify one speaker's voice to generate speech as if it came from another (target) speaker.

Representation Learning Style Transfer +1

SDA: Improving Text Generation with Self Data Augmentation

no code implementations 2 Jan 2021 Ping Yu, Ruiyi Zhang, Yang Zhao, Yizhe Zhang, Chunyuan Li, Changyou Chen

Data augmentation has been widely used to improve deep neural networks in many research fields, such as computer vision.

Data Augmentation Imitation Learning +1

Reinforcement Learning for Flexibility Design Problems

no code implementations 2 Jan 2021 Yehua Wei, Lei Zhang, Ruiyi Zhang, Shijing Si, Hao Zhang, Lawrence Carin

Flexibility design problems are a class of problems that appear in strategic decision-making across industries, where the objective is to design a (e.g., manufacturing) network that affords flexibility and adaptivity.

Decision Making reinforcement-learning +1

Semantic Matching for Sequence-to-Sequence Learning

no code implementations Findings of the Association for Computational Linguistics 2020 Ruiyi Zhang, Changyou Chen, Xinyuan Zhang, Ke Bai, Lawrence Carin

In sequence-to-sequence models, classical optimal transport (OT) can be applied to semantically match generated sentences with target sentences.
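
For reference, the classical discrete OT problem alluded to here matches the empirical distributions $\mu$ and $\nu$ of generated and target word embeddings under a transport cost $c(x_i, y_j)$ (e.g., cosine distance). This is the textbook formulation; the paper's specific matching objective may use a relaxed or regularized variant:

$$ \mathcal{D}_{\mathrm{OT}}(\mu, \nu) = \min_{T \in \Pi(\mu, \nu)} \sum_{i=1}^{m} \sum_{j=1}^{n} T_{ij}\, c(x_i, y_j), \qquad \Pi(\mu, \nu) = \{ T \ge 0 : T \mathbf{1} = \mu, \; T^{\top} \mathbf{1} = \nu \}. $$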

Unsupervised Abstractive Dialogue Summarization for Tete-a-Tetes

no code implementations 15 Sep 2020 Xinyuan Zhang, Ruiyi Zhang, Manzil Zaheer, Amr Ahmed

High-quality dialogue-summary paired data is expensive to produce and domain-sensitive, making abstractive dialogue summarization a challenging task.

Abstractive Dialogue Summarization dialogue summary +1

When does MAML Work the Best? An Empirical Study on Model-Agnostic Meta-Learning in NLP Applications

no code implementations 24 May 2020 Zequn Liu, Ruiyi Zhang, Yiping Song, Ming Zhang

Model-Agnostic Meta-Learning (MAML) has been successfully employed in NLP applications, including few-shot text classification and multi-domain low-resource language generation.

Few-Shot Text Classification Language Modelling +4

Reward Constrained Interactive Recommendation with Natural Language Feedback

no code implementations 4 May 2020 Ruiyi Zhang, Tong Yu, Yilin Shen, Hongxia Jin, Changyou Chen, Lawrence Carin

Text-based interactive recommendation provides richer user feedback and has demonstrated advantages over traditional interactive recommender systems.

Recommendation Systems reinforcement-learning +2

Bayesian Meta Sampling for Fast Uncertainty Adaptation

1 code implementation ICLR 2020 Zhenyi Wang, Yang Zhao, Ping Yu, Ruiyi Zhang, Changyou Chen

Specifically, we propose a Bayesian meta sampling framework consisting of two main components: a meta sampler and a sample adapter.

Meta-Learning

GenDICE: Generalized Offline Estimation of Stationary Values

1 code implementation ICLR 2020 Ruiyi Zhang, Bo Dai, Lihong Li, Dale Schuurmans

An important problem that arises in reinforcement learning and Monte Carlo methods is estimating quantities defined by the stationary distribution of a Markov chain.
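
As a recap of the standard setup (not necessarily GenDICE's exact objective): a stationary distribution $\mu$ of a chain with transition kernel $P$ satisfies $\mu^{\top} P = \mu^{\top}$, and the target quantity is $\rho = \mathbb{E}_{s \sim \mu}[r(s)]$. Offline estimation from samples drawn from a data distribution $p$ typically learns the density ratio $\tau(s) = \mu(s)/p(s)$, since

$$ \mathbb{E}_{s \sim p}\big[ \tau(s)\, r(s) \big] = \mathbb{E}_{s \sim \mu}\big[ r(s) \big] = \rho. $$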

Learning Diverse Stochastic Human-Action Generators by Learning Smooth Latent Transitions

1 code implementation AAAI 2019 Zhenyi Wang, Ping Yu, Yang Zhao, Ruiyi Zhang, Yufan Zhou, Junsong Yuan, Changyou Chen

In this paper, we focus on skeleton-based action generation and propose to model smooth and diverse transitions on a latent space of action sequences with much lower dimensionality.

Action Generation

Text-Based Interactive Recommendation via Constraint-Augmented Reinforcement Learning

no code implementations NeurIPS 2019 Ruiyi Zhang, Tong Yu, Yilin Shen, Hongxia Jin, Changyou Chen

Text-based interactive recommendation provides richer user preferences and has demonstrated advantages over traditional interactive recommender systems.

Recommendation Systems reinforcement-learning +2

Learning to Recommend from Sparse Data via Generative User Feedback

no code implementations ICLR 2020 Wenlin Wang, Hongteng Xu, Ruiyi Zhang, Wenqi Wang, Piyush Rai, Lawrence Carin

To address this, we propose a learning framework that improves collaborative filtering with a synthetic feedback loop (CF-SFL) to simulate the user feedback.

Collaborative Filtering Recommendation Systems

Figure Captioning with Reasoning and Sequence-Level Training

no code implementations 7 Jun 2019 Charles Chen, Ruiyi Zhang, Eunyee Koh, Sungchul Kim, Scott Cohen, Tong Yu, Ryan Rossi, Razvan Bunescu

In this work, we investigate the problem of figure captioning where the goal is to automatically generate a natural language description of the figure.

Image Captioning

Scalable Thompson Sampling via Optimal Transport

no code implementations 19 Feb 2019 Ruiyi Zhang, Zheng Wen, Changyou Chen, Lawrence Carin

Thompson sampling (TS) is a class of algorithms for sequential decision-making, which requires maintaining a posterior distribution over a model.

Decision Making Thompson Sampling
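
As background, the classical Beta-Bernoulli instance of Thompson sampling maintains an exact conjugate posterior per arm. The sketch below illustrates only that baseline, not the paper's scalable, optimal-transport-based posterior approximation; the arm means and horizon are made up.

import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.3, 0.5, 0.7])      # unknown to the learner
alpha = np.ones(3)                          # Beta(1, 1) prior per arm
beta = np.ones(3)

for t in range(1000):
    theta = rng.beta(alpha, beta)           # one posterior sample per arm
    arm = int(np.argmax(theta))             # act greedily w.r.t. the sample
    reward = float(rng.random() < true_means[arm])
    alpha[arm] += reward                    # conjugate posterior update
    beta[arm] += 1.0 - reward

The paper's concern is the setting where such a posterior cannot be maintained in closed form and must be approximated at scale.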

Sequence Generation with Guider Network

no code implementations 2 Nov 2018 Ruiyi Zhang, Changyou Chen, Zhe Gan, Wenlin Wang, Liqun Chen, Dinghan Shen, Guoyin Wang, Lawrence Carin

Sequence generation with reinforcement learning (RL) has received significant attention recently.

Reinforcement Learning (RL)

Towards More Theoretically-Grounded Particle Optimization Sampling for Deep Learning

no code implementations 27 Sep 2018 Jianyi Zhang, Ruiyi Zhang, Changyou Chen

With such theoretical guarantees, SPOS can be safely and effectively applied on both Bayesian DL and deep RL tasks.

POS Reinforcement Learning (RL)

Stochastic Particle-Optimization Sampling and the Non-Asymptotic Convergence Theory

no code implementations 5 Sep 2018 Jianyi Zhang, Ruiyi Zhang, Lawrence Carin, Changyou Chen

Particle-optimization-based sampling (POS) is a recently developed effective sampling technique that interactively updates a set of particles.

POS

Policy Optimization as Wasserstein Gradient Flows

no code implementations ICML 2018 Ruiyi Zhang, Changyou Chen, Chunyuan Li, Lawrence Carin

Policy optimization is a core component of reinforcement learning (RL), and most existing RL methods directly optimize parameters of a policy based on maximizing the expected total reward, or its surrogate.

Reinforcement Learning (RL)
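
For context, the expected-total-reward objective referred to here, and its standard likelihood-ratio policy gradient, can be written as below; the notation ($\gamma$, advantage estimate $\hat{A}_t$) is standard and not specific to this paper, which instead studies policy optimization as a gradient flow over distributions of policies in Wasserstein space.

$$ J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\Big[ \sum_{t \ge 0} \gamma^{t}\, r(s_t, a_t) \Big], \qquad \nabla_\theta J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\Big[ \sum_{t \ge 0} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \hat{A}_t \Big]. $$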

Understanding and Accelerating Particle-Based Variational Inference

1 code implementation 4 Jul 2018 Chang Liu, Jingwei Zhuo, Pengyu Cheng, Ruiyi Zhang, Jun Zhu, Lawrence Carin

Particle-based variational inference methods (ParVIs) have gained attention in the Bayesian inference literature, for their capacity to yield flexible and accurate approximations.

Bayesian Inference Variational Inference
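
A representative ParVI is Stein variational gradient descent (SVGD), whose standard update is reproduced below for reference; the paper's accelerated variants build on, but are not identical to, this rule. Here $k$ is a kernel (e.g., RBF), $p$ the target density, and $\epsilon$ a step size:

$$ x_i \leftarrow x_i + \epsilon\, \hat{\phi}(x_i), \qquad \hat{\phi}(x) = \frac{1}{n} \sum_{j=1}^{n} \Big[ k(x_j, x)\, \nabla_{x_j} \log p(x_j) + \nabla_{x_j} k(x_j, x) \Big]. $$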

Variational Inference and Model Selection with Generalized Evidence Bounds

no code implementations ICML 2018 Liqun Chen, Chenyang Tao, Ruiyi Zhang, Ricardo Henao, Lawrence Carin

Recent advances on the scalability and flexibility of variational inference have made it successful at unravelling hidden patterns in complex data.

Model Selection Variational Inference

A Unified Particle-Optimization Framework for Scalable Bayesian Sampling

no code implementations 29 May 2018 Changyou Chen, Ruiyi Zhang, Wenlin Wang, Bai Li, Liqun Chen

There has been recent interest in developing scalable Bayesian sampling methods such as stochastic gradient MCMC (SG-MCMC) and Stein variational gradient descent (SVGD) for big-data analysis.

Learning Structural Weight Uncertainty for Sequential Decision-Making

1 code implementation 30 Dec 2017 Ruiyi Zhang, Chunyuan Li, Changyou Chen, Lawrence Carin

Learning probability distributions on the weights of neural networks (NNs) has recently proven beneficial in many applications.

Decision Making Multi-Armed Bandits +1

Particle Optimization in Stochastic Gradient MCMC

no code implementations 29 Nov 2017 Changyou Chen, Ruiyi Zhang

Stochastic gradient Markov chain Monte Carlo (SG-MCMC) has been increasingly popular in Bayesian learning due to its ability to deal with large data.
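
The canonical SG-MCMC example is stochastic gradient Langevin dynamics (SGLD), whose single-chain update on a minibatch $\mathcal{B}_t$ of a dataset of size $N$ is shown below for reference; the paper's particle-optimization coupling of multiple chains is not captured by this update.

$$ \theta_{t+1} = \theta_t + \frac{\epsilon_t}{2} \Big( \nabla_\theta \log p(\theta_t) + \frac{N}{|\mathcal{B}_t|} \sum_{i \in \mathcal{B}_t} \nabla_\theta \log p(x_i \mid \theta_t) \Big) + \sqrt{\epsilon_t}\, \xi_t, \qquad \xi_t \sim \mathcal{N}(0, I). $$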
