Search Results for author: Yuexin Wu

Found 24 papers, 13 papers with code

Computational Protein Design Using AND/OR Branch-and-Bound Search

no code implementations • 8 Dec 2014 • Yichao Zhou, Yuexin Wu, Jianyang Zeng

The computation of the global minimum energy conformation (GMEC) is an important and challenging topic in structure-based computational protein design.

Combinatorial Optimization • Protein Design

Analogical Inference for Multi-Relational Embeddings

1 code implementation • ICML 2017 • Hanxiao Liu, Yuexin Wu, Yiming Yang

Large-scale multi-relational embedding refers to the task of learning the latent representations for entities and relations in large knowledge graphs.

Knowledge Graphs • Link Prediction
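As a minimal illustration of the link-prediction task described above (a generic bilinear scoring sketch, not the paper's ANALOGY method; the entity and relation names are hypothetical), each entity and relation gets a vector, and a triple is scored by a three-way inner product:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Toy vocabulary of entities and relations (hypothetical names).
entities = {"paris": 0, "france": 1, "tokyo": 2, "japan": 3}
relations = {"capital_of": 0}

E = rng.normal(scale=0.1, size=(len(entities), dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(len(relations), dim))  # relation embeddings

def score(head, rel, tail):
    """DistMult-style bilinear score: sum_i E[h,i] * R[r,i] * E[t,i]."""
    return float(np.sum(E[entities[head]] * R[relations[rel]] * E[entities[tail]]))

# Link prediction ranks candidate tails by score for a (head, relation) query;
# with trained embeddings, the true tail should rank first.
candidates = sorted(entities, key=lambda t: -score("paris", "capital_of", t))
print(candidates)
```

In practice the embeddings are trained so that observed triples outscore corrupted ones; the ranking above is only meaningful after such training.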

Contextual Encoding for Translation Quality Estimation

1 code implementation • WS 2018 • Junjie Hu, Wei-Cheng Chang, Yuexin Wu, Graham Neubig

In this paper, we propose a method to effectively encode the local and global contextual information for each target word using a three-part neural network approach.

Sentence • Translation

Unsupervised Cross-lingual Transfer of Word Embedding Spaces

1 code implementation • EMNLP 2018 • Ruochen Xu, Yiming Yang, Naoki Otani, Yuexin Wu

Supervised methods for this problem rely on the availability of cross-lingual supervision, either using parallel corpora or bilingual lexicons as the labeled data for training, which may not be available for many low resource languages.

Bilingual Lexicon Induction • Cross-Lingual Transfer +4
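The supervised baseline that the abstract contrasts with (not the paper's unsupervised method) is commonly solved as an orthogonal Procrustes problem: given paired word vectors from a bilingual lexicon, find the orthogonal map aligning one embedding space to the other. A minimal sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "bilingual lexicon": rows of X are source-language word vectors,
# rows of Y are the vectors of their translations (synthetic data).
X = rng.normal(size=(50, 20))
true_W, _ = np.linalg.qr(rng.normal(size=(20, 20)))  # hidden rotation
Y = X @ true_W

# Orthogonal Procrustes: W* = argmin_{W orthogonal} ||XW - Y||_F,
# solved in closed form from the SVD of X^T Y.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

print(np.allclose(X @ W, Y))  # the recovered mapping aligns the two spaces
```

The point of the paper is precisely that such paired (X, Y) supervision may be unavailable for low-resource languages, motivating an unsupervised alternative.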

Switch-based Active Deep Dyna-Q: Efficient Adaptive Planning for Task-Completion Dialogue Policy Learning

1 code implementation • 19 Nov 2018 • Yuexin Wu, Xiujun Li, Jingjing Liu, Jianfeng Gao, Yiming Yang

Training task-completion dialogue agents with reinforcement learning usually requires a large number of real user experiences.

Active Learning • Q-Learning +1

Active Learning Graph Neural Networks via Node Feature Propagation

no code implementations • 25 Sep 2019 • Yuexin Wu, Yichong Xu, Aarti Singh, Artur Dubrawski, Yiming Yang

Graph Neural Networks (GNNs) for prediction tasks like node classification or edge prediction have received increasing attention in recent machine learning research on graph-structured data.

Active Learning • Node Classification +1

Active Learning for Graph Neural Networks via Node Feature Propagation

no code implementations • 16 Oct 2019 • Yuexin Wu, Yichong Xu, Aarti Singh, Yiming Yang, Artur Dubrawski

Graph Neural Networks (GNNs) for prediction tasks like node classification or edge prediction have received increasing attention in recent machine learning research on graph-structured data.

Active Learning • Clustering +3

Graph-Revised Convolutional Network

4 code implementations • 17 Nov 2019 • Donghan Yu, Ruohong Zhang, Zhengbao Jiang, Yuexin Wu, Yiming Yang

Graph Convolutional Networks (GCNs) have received increasing attention in the machine learning community for effectively leveraging both the content features of nodes and the linkage patterns across graphs in various applications.
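A single GCN propagation layer of the kind these models build on (a generic sketch of the standard layer, not the graph-revision mechanism proposed in this paper) computes H' = ReLU(D^{-1/2} (A+I) D^{-1/2} H W), mixing each node's features with its neighbors':

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: symmetric normalization of A+I, then linear map + ReLU."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees (including self-loop)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^{-1/2}
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Tiny 4-node path graph with 3-dim node features and a 3->2 weight matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.arange(12, dtype=float).reshape(4, 3)
W = np.ones((3, 2)) * 0.1

out = gcn_layer(A, H, W)
print(out.shape)  # (4, 2): a new 2-dim representation per node
```

The graph-revision idea in the paper then operates on the adjacency matrix A itself rather than on this fixed propagation rule.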

Knowledge Embedding Based Graph Convolutional Network

1 code implementation • 12 Jun 2020 • Donghan Yu, Yiming Yang, Ruohong Zhang, Yuexin Wu

Recently, a considerable literature has grown up around the theme of Graph Convolutional Networks (GCNs).

Knowledge Graph Embedding • Knowledge Graphs +1

TADO: Time-varying Attention with Dual-Optimizer Model

1 code implementation • 8 Dec 2020 • Yuexin Wu, Tianyu Gao, Sihao Wang, Zhongmin Xiong

As the first attempt in this field to address this problem, we propose a flexible dual-optimizer model to gain robustness from both regression loss and classification loss.

Recommendation Systems

A Gumbel-based Rating Prediction Framework for Imbalanced Recommendation

1 code implementation • 9 Dec 2020 • Yuexin Wu, Xiaolei Huang

Rating prediction is a core problem in recommender systems that quantifies users' preferences towards items; however, rating imbalance is naturally rooted in real-world user ratings, causing biased predictions and poor performance on tail ratings.

Recommendation Systems

Provable Stochastic Optimization for Global Contrastive Learning: Small Batch Does Not Harm Performance

1 code implementation • 24 Feb 2022 • Zhuoning Yuan, Yuexin Wu, Zi-Hao Qiu, Xianzhi Du, Lijun Zhang, Denny Zhou, Tianbao Yang

In this paper, we study contrastive learning from an optimization perspective, aiming to analyze and address a fundamental issue of existing contrastive learning methods that either rely on a large batch size or a large dictionary of feature vectors.

Contrastive Learning • Self-Supervised Learning +1
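The batch-size dependence this paper targets arises from the standard mini-batch InfoNCE loss, in which each anchor is contrasted only against the negatives that happen to share its batch. A minimal sketch of that baseline loss (not the paper's proposed global objective or optimizer):

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """Mini-batch InfoNCE: z1[i] and z2[i] are two views of example i.
    Each anchor's negatives are the other rows of the same batch, which is
    why a small batch gives a poor estimate of the global contrastive loss."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature        # similarity of every cross-view pair
    m = logits.max(axis=1, keepdims=True)     # numerically stable log-sum-exp
    log_norm = m.squeeze(1) + np.log(np.exp(logits - m).sum(axis=1))
    # Positive pairs sit on the diagonal; the loss is -log softmax of them.
    return float(np.mean(log_norm - np.diag(logits)))

rng = np.random.default_rng(2)
z = rng.normal(size=(16, 4))
views = z + 0.01 * rng.normal(size=z.shape)   # two noisy "views" per example
loss = info_nce(z, views)
print(loss >= 0.0)
```

Because the normalizer `log_norm` sums only over the batch, the loss value shifts with batch size; the paper's contribution is an optimization scheme whose convergence does not hinge on making that batch large.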

Unsupervised Reinforcement Adaptation for Class-Imbalanced Text Classification

1 code implementation • *SEM (NAACL) 2022 • Yuexin Wu, Xiaolei Huang

Unsupervised domain adaptation (UDA) augments model performance with only accessible annotations from the source domain and unlabeled data from the target domain.

reinforcement-learning • Reinforcement Learning (RL) +3

Large Language Models Can Self-Improve

no code implementations • 20 Oct 2022 • Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han

We show that our approach improves the general reasoning ability of a 540B-parameter LLM (74.4%->82.1% on GSM8K, 78.2%->83.0% on DROP, 90.0%->94.4% on OpenBookQA, and 63.4%->67.9% on ANLI-A3) and achieves state-of-the-art-level performance, without any ground-truth labels.

Arithmetic Reasoning • Common Sense Reasoning +3

Augmentation with Projection: Towards an Effective and Efficient Data Augmentation Paradigm for Distillation

1 code implementation • 21 Oct 2022 • Ziqi Wang, Yuexin Wu, Frederick Liu, Daogao Liu, Le Hou, Hongkun Yu, Jing Li, Heng Ji

However, these data augmentation methods either potentially cause shifts in decision boundaries (representation interpolation), are not expressive enough (token replacement), or introduce too much computational overhead (augmentation with models).

Data Augmentation • Knowledge Distillation

Token Imbalance Adaptation for Radiology Report Generation

1 code implementation • 18 Apr 2023 • Yuexin Wu, I-Chan Huang, Xiaolei Huang

Experiments demonstrate the effectiveness of our approach in enhancing model robustness both overall and on infrequent tokens.

Enabling Language Models to Implicitly Learn Self-Improvement

no code implementations • 2 Oct 2023 • Ziqi Wang, Le Hou, Tianjian Lu, Yuexin Wu, Yunxuan Li, Hongkun Yu, Heng Ji

Specifically, we reformulate the training objective of reinforcement learning from human feedback (RLHF) -- instead of maximizing response quality for a given input, we maximize the quality gap of the response conditioned on a reference response.

Text Generation

Multi-step Problem Solving Through a Verifier: An Empirical Analysis on Model-induced Process Supervision

no code implementations • 5 Feb 2024 • Zihan Wang, Yunxuan Li, Yuexin Wu, Liangchen Luo, Le Hou, Hongkun Yu, Jingbo Shang

Process supervision, which uses a trained verifier to evaluate the intermediate steps generated by a reasoner, has demonstrated significant improvements in multi-step problem solving.

GSM8K • Math
