Search Results for author: Huaxiu Yao

Found 34 papers, 19 papers with code

Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback

no code implementations • 24 May 2023 • Katherine Tian, Eric Mitchell, Allan Zhou, Archit Sharma, Rafael Rafailov, Huaxiu Yao, Chelsea Finn, Christopher D. Manning

A trustworthy real-world prediction system should be well-calibrated; that is, its confidence in an answer is indicative of the likelihood that the answer is correct, enabling deferral to a more expensive expert in cases of low-confidence predictions.

Unsupervised Pre-training
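The calibration property described above, confidence that tracks the likelihood of being correct, is commonly quantified with the expected calibration error (ECE). The sketch below is a standard, minimal definition of ECE in plain Python, not code from the paper:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected calibration error: the weighted average gap between mean
    confidence and accuracy within equal-width confidence bins."""
    assert len(confidences) == len(correct)
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Each prediction falls in exactly one bin (the last bin is closed).
        idx = [i for i, c in enumerate(confidences)
               if lo <= c < hi or (b == n_bins - 1 and c == hi)]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)
        conf = sum(confidences[i] for i in idx) / len(idx)
        ece += len(idx) / n * abs(acc - conf)
    return ece

# A perfectly calibrated toy case: 0.8-confidence answers, correct 80% of the time.
confs = [0.8] * 10
right = [1] * 8 + [0] * 2
print(expected_calibration_error(confs, right))  # ≈ 0.0 (up to float rounding)
```

A well-calibrated predictor has ECE near zero; a model that always says 0.9 but is right only half the time has ECE around 0.4.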

Last-Layer Fairness Fine-tuning is Simple and Effective for Neural Networks

1 code implementation • 8 Apr 2023 • Yuzhen Mao, Zhun Deng, Huaxiu Yao, Ting Ye, Kenji Kawaguchi, James Zou

Although imposing fairness constraints has been studied extensively for classical machine learning models, the effect these techniques have on deep neural networks is still unclear.

Fairness · Representation Learning

Leveraging Domain Relations for Domain Generalization

no code implementations • 6 Feb 2023 • Huaxiu Yao, Xinyu Yang, Xinyi Pan, Shengchao Liu, Pang Wei Koh, Chelsea Finn

Distribution shift is a major challenge in machine learning, as models often perform poorly during the test stage if the test distribution differs from the training distribution.

Domain Generalization

Wild-Time: A Benchmark of in-the-Wild Distribution Shift over Time

1 code implementation • 25 Nov 2022 • Huaxiu Yao, Caroline Choi, Bochuan Cao, Yoonho Lee, Pang Wei Koh, Chelsea Finn

Temporal shifts -- distribution shifts arising from the passage of time -- often occur gradually and have the additional structure of timestamp metadata.

Continual Learning · Domain Generalization +3

Surgical Fine-Tuning Improves Adaptation to Distribution Shifts

no code implementations • 20 Oct 2022 • Yoonho Lee, Annie S. Chen, Fahim Tajwar, Ananya Kumar, Huaxiu Yao, Percy Liang, Chelsea Finn

A common approach to transfer learning under distribution shift is to fine-tune the last few layers of a pre-trained model, preserving learned features while also adapting to the new task.

Transfer Learning
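The fine-tune-only-some-layers idea above can be illustrated on a toy two-layer model. The following is a hypothetical scalar example of the general pattern (freeze early layers, update only the last one), not the paper's procedure:

```python
# Toy "network": two scalar layers, y_hat = w2 * (w1 * x).
# Surgical fine-tuning here means updating only the last layer (w2)
# while keeping the pre-trained first layer (w1) frozen.

def fit_last_layer(data, w1, w2, lr=0.05, steps=200):
    for _ in range(steps):
        for x, y in data:
            h = w1 * x                     # frozen feature extractor
            y_hat = w2 * h
            grad_w2 = 2 * (y_hat - y) * h  # d/dw2 of the squared error
            w2 -= lr * grad_w2             # only w2 is updated
    return w1, w2

# "Pre-trained" weights; the new task's target function is y = 6 * x,
# reachable by adjusting w2 alone (w1 * w2 = 2 * 3 = 6).
data = [(x / 10, 6 * x / 10) for x in range(1, 11)]
w1, w2 = fit_last_layer(data, w1=2.0, w2=1.0)
print(w1)         # 2.0 (unchanged: it was frozen)
print(round(w2))  # 3   (adapted so that w1 * w2 ≈ 6)
```

The paper's point is that which layers to "operate on" should depend on the type of distribution shift; this sketch only shows the freezing mechanics.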

C-Mixup: Improving Generalization in Regression

1 code implementation • 11 Oct 2022 • Huaxiu Yao, Yiping Wang, Linjun Zhang, James Zou, Chelsea Finn

In this paper, we propose a simple yet powerful algorithm, C-Mixup, to improve generalization on regression tasks.

regression
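For context, the core idea of C-Mixup is to mix example pairs sampled by label similarity rather than uniformly, so that inputs with very different targets are rarely interpolated. The sketch below is a simplified scalar rendition of that idea; the kernel bandwidth and Beta parameter are illustrative choices, not the paper's settings:

```python
import math
import random

def c_mixup_batch(xs, ys, alpha=2.0, bandwidth=1.0, rng=None):
    """Sketch of C-Mixup for scalar regression: for each example, sample a
    mixing partner with probability proportional to a Gaussian kernel on the
    label distance (similar targets are mixed more often), then interpolate
    both inputs and labels with a Beta-distributed coefficient."""
    rng = rng or random.Random()
    mixed = []
    for x_i, y_i in zip(xs, ys):
        # Gaussian kernel on label distance: partners with close labels get
        # much higher sampling weight than partners with distant labels.
        weights = [math.exp(-((y_i - y_j) ** 2) / (2 * bandwidth ** 2)) for y_j in ys]
        j = rng.choices(range(len(xs)), weights=weights, k=1)[0]
        lam = rng.betavariate(alpha, alpha)
        mixed.append((lam * x_i + (1 - lam) * xs[j],
                      lam * y_i + (1 - lam) * ys[j]))
    return mixed

# Scalar toy data: the two near-zero examples mostly mix with each other,
# rarely with the distant y = 5.0 example.
mixed = c_mixup_batch([0.0, 0.1, 5.0], [0.0, 0.1, 5.0], rng=random.Random(0))
```

Each mixed pair is a convex combination of two originals, so mixed labels always stay within the original label range.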

Knowledge-Driven New Drug Recommendation

no code implementations • 11 Oct 2022 • Zhenbang Wu, Huaxiu Yao, Zhe Su, David M Liebovitz, Lucas M Glass, James Zou, Chelsea Finn, Jimeng Sun

However, newly approved drugs do not have much historical prescription data and cannot leverage existing drug recommendation methods.

Few-Shot Learning · Multi-Label Classification

Spatio-Temporal Graph Few-Shot Learning with Cross-City Knowledge Transfer

1 code implementation • 27 May 2022 • Bin Lu, Xiaoying Gan, Weinan Zhang, Huaxiu Yao, Luoyi Fu, Xinbing Wang

To address this challenge, cross-city knowledge transfer has shown its promise, where the model learned from data-sufficient cities is leveraged to benefit the learning process of data-scarce cities.

Few-Shot Learning · Graph Learning +2

Diversify and Disambiguate: Learning From Underspecified Data

1 code implementation • 7 Feb 2022 • Yoonho Lee, Huaxiu Yao, Chelsea Finn

Many datasets are underspecified: there exist multiple equally viable solutions to a given task.

Image Classification

Improving Out-of-Distribution Robustness via Selective Augmentation

2 code implementations • 2 Jan 2022 • Huaxiu Yao, Yu Wang, Sai Li, Linjun Zhang, Weixin Liang, James Zou, Chelsea Finn

Machine learning algorithms typically assume that training and test examples are drawn from the same distribution.

Functionally Regionalized Knowledge Transfer for Low-resource Drug Discovery

no code implementations • NeurIPS 2021 • Huaxiu Yao, Ying Wei, Long-Kai Huang, Ding Xue, Junzhou Huang, Zhenhui (Jessie) Li

More recently, there has been a surge of interest in employing machine learning approaches to expedite the drug discovery process where virtual screening for hit discovery and ADMET prediction for lead optimization play essential roles.

Drug Discovery · Meta-Learning +1

Meta-learning with an Adaptive Task Scheduler

1 code implementation • NeurIPS 2021 • Huaxiu Yao, Yu Wang, Ying Wei, Peilin Zhao, Mehrdad Mahdavi, Defu Lian, Chelsea Finn

In ATS, we design, for the first time, a neural scheduler that decides which meta-training tasks to use next by predicting the probability of each candidate task being sampled, and we train the scheduler to optimize the meta-model's generalization to unseen tasks.

Drug Discovery · Meta-Learning
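The scheduler described above outputs a sampling probability per candidate task. One minimal way to realize that sampling step is sketched below; the softmax scoring is an illustrative stand-in for the paper's learned neural scheduler:

```python
import math
import random

def sample_meta_training_tasks(scores, k, rng):
    """Turn per-task scores (e.g. outputs of a learned scheduler network)
    into sampling probabilities with a softmax, then draw k tasks."""
    m = max(scores)                      # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    probs = [e / z for e in exps]
    picked = rng.choices(range(len(scores)), weights=probs, k=k)
    return picked, probs

# Three candidate tasks; the third is scored far higher, so it dominates sampling.
picked, probs = sample_meta_training_tasks([0.0, 0.0, 10.0], k=5, rng=random.Random(1))
```

In the paper the scores themselves are trained so that the sampled tasks improve the meta-model's generalization; this sketch only shows the probabilistic selection.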

Knowledge-Aware Meta-learning for Low-Resource Text Classification

1 code implementation • EMNLP 2021 • Huaxiu Yao, Yingxin Wu, Maruan Al-Shedivat, Eric P. Xing

Meta-learning has achieved great success in leveraging historically learned knowledge to facilitate the learning of new tasks.

Meta-Learning · text-classification +1

Meta-Learning with Fewer Tasks through Task Interpolation

1 code implementation • ICLR 2022 • Huaxiu Yao, Linjun Zhang, Chelsea Finn

Meta-learning enables algorithms to quickly learn a newly encountered task with just a few labeled examples by transferring previously learned knowledge.

Image Classification · Medical Image Classification +2

Few-Shot Learning with Weak Supervision

no code implementations • ICLR Workshop Learning_to_Learn 2021 • Ali Ghadirzadeh, Petra Poklukar, Xi Chen, Huaxiu Yao, Hossein Azizpour, Mårten Björkman, Chelsea Finn, Danica Kragic

Few-shot meta-learning methods aim to learn the common structure shared across a set of tasks to facilitate learning new tasks with small amounts of data.

Meta-Learning · Variational Inference

Online Structured Meta-learning

no code implementations • NeurIPS 2020 • Huaxiu Yao, Yingbo Zhou, Mehrdad Mahdavi, Zhenhui Li, Richard Socher, Caiming Xiong

When a new task is encountered, it constructs a meta-knowledge pathway by either utilizing the most relevant knowledge blocks or exploring new blocks.

Meta-Learning

Relation-aware Meta-learning for Market Segment Demand Prediction with Limited Records

no code implementations • 1 Aug 2020 • Jiatu Shi, Huaxiu Yao, Xian Wu, Tong Li, Zedong Lin, Tengfei Wang, Binqiang Zhao

The goal is to facilitate the learning process in the target segments by leveraging the learned knowledge from data-sufficient source segments.

Meta-Learning

Improving Generalization in Meta-learning via Task Augmentation

1 code implementation • 26 Jul 2020 • Huaxiu Yao, Long-Kai Huang, Linjun Zhang, Ying Wei, Li Tian, James Zou, Junzhou Huang, Zhenhui Li

Moreover, both MetaMix and Channel Shuffle outperform state-of-the-art results by a large margin across many datasets and are compatible with existing meta-learning algorithms.

Meta-Learning

Investigating and Mitigating Degree-Related Biases in Graph Convolutional Networks

no code implementations • 28 Jun 2020 • Xianfeng Tang, Huaxiu Yao, Yiwei Sun, Yiqi Wang, Jiliang Tang, Charu Aggarwal, Prasenjit Mitra, Suhang Wang

Pseudo labels increase the chance of connecting to labeled neighbors for low-degree nodes, thus reducing the biases of GCNs from the data perspective.

Self-Supervised Learning

Automated Relational Meta-learning

1 code implementation • ICLR 2020 • Huaxiu Yao, Xian Wu, Zhiqiang Tao, Yaliang Li, Bolin Ding, Ruirui Li, Zhenhui Li

In order to learn efficiently from small amounts of data on new tasks, meta-learning transfers knowledge learned from previous tasks to the new ones.

Few-Shot Image Classification · Meta-Learning

Few-Shot Knowledge Graph Completion

1 code implementation • 26 Nov 2019 • Chuxu Zhang, Huaxiu Yao, Chao Huang, Meng Jiang, Zhenhui Li, Nitesh V. Chawla

Knowledge graphs (KGs) serve as useful resources for various natural language processing applications.

One-Shot Learning

Transferable Neural Processes for Hyperparameter Optimization

no code implementations • 7 Sep 2019 • Ying Wei, Peilin Zhao, Huaxiu Yao, Junzhou Huang

Automated machine learning aims to automate the whole process of machine learning, including model configuration.

BIG-bench Machine Learning · Hyperparameter Optimization +1

Targeted Source Detection for Environmental Data

no code implementations • 29 Aug 2019 • Guanjie Zheng, Mengqi Liu, Tao Wen, Hongjian Wang, Huaxiu Yao, Susan L. Brantley, Zhenhui Li

In the face of growing needs for water and energy, a fundamental understanding of the environmental impacts of human activities becomes critical for managing water and energy resources, remedying water pollution, and making regulatory policy wisely.

Transferring Robustness for Graph Neural Network Against Poisoning Attacks

1 code implementation • 20 Aug 2019 • Xianfeng Tang, Yandong Li, Yiwei Sun, Huaxiu Yao, Prasenjit Mitra, Suhang Wang

To optimize PA-GNN for a poisoned graph, we design a meta-optimization algorithm that trains PA-GNN to penalize perturbations using clean graphs and their adversarial counterparts, and transfers such ability to improve the robustness of PA-GNN on the poisoned graph.

Node Classification · Transfer Learning

Joint Modeling of Dense and Incomplete Trajectories for Citywide Traffic Volume Inference

no code implementations • 25 Feb 2019 • Xianfeng Tang, Boqing Gong, Yanwei Yu, Huaxiu Yao, Yandong Li, Haiyong Xie, Xiaoyu Wang

In this paper, we propose a novel framework for the citywide traffic volume inference using both dense GPS trajectories and incomplete trajectories captured by camera surveillance systems.

Graph Embedding

Revisiting Spatial-Temporal Similarity: A Deep Learning Framework for Traffic Prediction

5 code implementations • 3 Mar 2018 • Huaxiu Yao, Xianfeng Tang, Hua Wei, Guanjie Zheng, Zhenhui Li

Although both factors have been considered in modeling, existing works make strong assumptions about spatial dependence and temporal dynamics, i.e., that spatial dependence is stationary in time and temporal dynamics are strictly periodic.

Traffic Prediction

Deep Multi-View Spatial-Temporal Network for Taxi Demand Prediction

1 code implementation • 23 Feb 2018 • Huaxiu Yao, Fei Wu, Jintao Ke, Xianfeng Tang, Yitian Jia, Siyu Lu, Pinghua Gong, Jieping Ye, Zhenhui Li

Traditional demand prediction methods mostly rely on time series forecasting techniques, which fail to model the complex non-linear spatial and temporal relations.

Image Classification · Time Series Forecasting +1
