Search Results for author: Huaxiu Yao

Found 56 papers, 34 papers with code

LITE: Modeling Environmental Ecosystems with Multimodal Large Language Models

1 code implementation 1 Apr 2024 Haoran Li, Junqi Liu, Zexian Wang, Shiyuan Luo, Xiaowei Jia, Huaxiu Yao

To address these issues, we propose LITE -- a multimodal large language model for environmental ecosystems modeling.

Decision Making Language Modelling +1

Electrocardiogram Instruction Tuning for Report Generation

no code implementations 7 Mar 2024 Zhongwei Wan, Che Liu, Xin Wang, Chaofan Tao, Hui Shen, Zhenwu Peng, Jie Fu, Rossella Arcucci, Huaxiu Yao, Mi Zhang

Electrocardiogram (ECG) serves as the primary non-invasive diagnostic tool for monitoring cardiac conditions and is crucial in assisting clinicians.

HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding

1 code implementation 1 Mar 2024 Zhaorun Chen, Zhuokai Zhao, Hongyin Luo, Huaxiu Yao, Bo Li, Jiawei Zhou

While large vision-language models (LVLMs) have demonstrated impressive capabilities in interpreting multi-modal contexts, they invariably suffer from object hallucinations (OH).

Hallucination Object +1

Distribution-Free Fair Federated Learning with Small Samples

no code implementations 25 Feb 2024 Qichuan Yin, Junzhou Huang, Huaxiu Yao, Linjun Zhang

As federated learning gains increasing importance in real-world applications due to its capacity for decentralized data training, addressing fairness concerns across demographic groups becomes critically important.

Fairness Federated Learning

$C^3$: Confidence Calibration Model Cascade for Inference-Efficient Cross-Lingual Natural Language Understanding

no code implementations 25 Feb 2024 Taixi Lu, Haoyu Wang, Huajie Shao, Jing Gao, Huaxiu Yao

Existing model cascade methods seek to enhance inference efficiency by greedily selecting the lightest model capable of processing the current input from a variety of models, based on model confidence scores.

Natural Language Understanding
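The greedy cascade described in the snippet above can be sketched as follows; the (label, confidence) interface, the per-model thresholds, and the toy models are illustrative assumptions, not the paper's $C^3$ calibration method.

```python
def cascade_predict(models, x, thresholds):
    """Try models from lightest to heaviest and return the first prediction
    whose confidence clears that model's threshold (early exit)."""
    for model, tau in zip(models, thresholds):
        label, conf = model(x)
        if conf >= tau:
            return label, conf  # confident enough: stop here
    return label, conf  # otherwise fall back to the heaviest model's answer

# Toy models: a cheap one that is unsure, an expensive one that is sure.
weak = lambda x: ("positive", 0.55)
strong = lambda x: ("negative", 0.97)
print(cascade_predict([weak, strong], "some input", [0.9, 0.0]))  # -> ('negative', 0.97)
```

Lowering the first threshold (e.g. to 0.5) would make the weak model's answer acceptable, trading accuracy for inference cost.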

Aligning Modalities in Vision Large Language Models via Preference Fine-tuning

1 code implementation 18 Feb 2024 Yiyang Zhou, Chenhang Cui, Rafael Rafailov, Chelsea Finn, Huaxiu Yao

This procedure is not perfect and can cause the model to hallucinate, i.e., provide answers that do not accurately reflect the image, even when the core LLM is highly factual and the vision backbone has sufficiently complete representations.

Hallucination Instruction Following +1
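A direct-preference-optimization-style objective of the kind used in such preference fine-tuning can be sketched as below; this is the generic DPO loss on the log-probabilities of a preferred (non-hallucinated) and a dispreferred answer, not necessarily this paper's exact training recipe.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Negative log-sigmoid of the reward margin: push the policy to rank
    the preferred answer above the dispreferred one, relative to a frozen
    reference model; beta controls the strength of the KL-like constraint."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss shrinks as the preferred answer becomes relatively more likely.
print(dpo_loss(-1.0, -2.0, -1.5, -1.5) < dpo_loss(-2.0, -1.0, -1.5, -1.5))  # -> True
```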

AutoPRM: Automating Procedural Supervision for Multi-Step Reasoning via Controllable Question Decomposition

no code implementations 18 Feb 2024 Zhaorun Chen, Zhuokai Zhao, Zhihong Zhu, Ruiqi Zhang, Xiang Li, Bhiksha Raj, Huaxiu Yao

Recent advancements in large language models (LLMs) have shown promise in multi-step reasoning tasks, yet their reliance on extensive manual labeling to provide procedural feedback remains a significant impediment.

Selective Learning: Towards Robust Calibration with Dynamic Regularization

no code implementations 13 Feb 2024 Zongbo Han, Yifeng Yang, Changqing Zhang, Linjun Zhang, Joey Tianyi Zhou, Qinghua Hu, Huaxiu Yao

The objective can be understood as seeking a model that fits the ground-truth labels by increasing the confidence while also maximizing the entropy of predicted probabilities by decreasing the confidence.
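That trade-off can be written as a cross-entropy term minus an entropy bonus; the fixed scalar weight `lam` below is an illustrative knob, not the paper's dynamic regularization schedule.

```python
import math

def objective(probs, label, lam=0.5):
    """Cross-entropy pulls confidence on the true label up, while subtracting
    lam * entropy rewards keeping the predictive distribution spread out."""
    ce = -math.log(probs[label])
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return ce - lam * entropy
```

With lam = 0 the objective reduces to ordinary cross-entropy; as lam grows, flatter (less confident) predictions are preferred.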

Generating Chain-of-Thoughts with a Direct Pairwise-Comparison Approach to Searching for the Most Promising Intermediate Thought

no code implementations 10 Feb 2024 Zhen-Yu Zhang, Siwei Han, Huaxiu Yao, Gang Niu, Masashi Sugiyama

In this paper, motivated by Vapnik's principle, we propose a novel comparison-based CoT generation algorithm that directly identifies the most promising thoughts with the noisy feedback from the LLM.

Language Modelling Large Language Model
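One way to picture comparison-based selection under noisy feedback is a simple tournament with majority voting; the comparator interface and vote count here are assumptions for illustration, not the paper's algorithm.

```python
def select_best_thought(thoughts, compare, rounds=3):
    """Keep an incumbent thought and challenge it with each remaining one;
    compare(a, b) is a (possibly noisy) judgment that a beats b, and a
    challenger replaces the incumbent only on a majority of repeated votes."""
    best = thoughts[0]
    for challenger in thoughts[1:]:
        wins = sum(compare(challenger, best) for _ in range(rounds))
        if wins > rounds / 2:
            best = challenger
    return best
```

In the intended setting the comparator would be an LLM judging which intermediate thought looks more promising; repeating each comparison dampens the noise in that feedback.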

Multimodal Clinical Trial Outcome Prediction with Large Language Models

1 code implementation 9 Feb 2024 Wenhao Zheng, Dongsheng Peng, Hongxia Xu, Hongtu Zhu, Tianfan Fu, Huaxiu Yao

To address these issues, we propose a multimodal mixture-of-experts (LIFTED) approach for clinical trial outcome prediction.

Mementos: A Comprehensive Benchmark for Multimodal Large Language Model Reasoning over Image Sequences

1 code implementation 19 Jan 2024 Xiyao Wang, YuHang Zhou, Xiaoyu Liu, Hongjin Lu, Yuancheng Xu, Feihong He, Jaehong Yoon, Taixi Lu, Gedas Bertasius, Mohit Bansal, Huaxiu Yao, Furong Huang

However, current MLLM benchmarks are predominantly designed to evaluate reasoning based on static information about a single image, and the ability of modern MLLMs to extrapolate from image sequences, which is essential for understanding our ever-changing world, has been less investigated.

Language Modelling Large Language Model

How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs

1 code implementation 27 Nov 2023 Haoqin Tu, Chenhang Cui, Zijun Wang, Yiyang Zhou, Bingchen Zhao, Junlin Han, Wangchunshu Zhou, Huaxiu Yao, Cihang Xie

Different from prior studies, we shift our focus from evaluating standard performance to introducing a comprehensive safety evaluation suite, covering both out-of-distribution (OOD) generalization and adversarial robustness.

Adversarial Robustness Visual Question Answering (VQA) +1

FREE: The Foundational Semantic Recognition for Modeling Environmental Ecosystems

no code implementations 17 Nov 2023 Shiyuan Luo, Juntong Ni, Shengyu Chen, Runlong Yu, Yiqun Xie, Licheng Liu, Zhenong Jin, Huaxiu Yao, Xiaowei Jia

This raises a fundamental question in advancing the modeling of environmental ecosystems: how to build a general framework for modeling the complex relationships amongst various environmental data over space and time?

Future prediction

Multimodal Representation Learning by Alternating Unimodal Adaptation

1 code implementation 17 Nov 2023 Xiaohui Zhang, Jaehong Yoon, Mohit Bansal, Huaxiu Yao

This optimization process is controlled by a gradient modification mechanism to prevent the shared head from losing previously acquired information.

Representation Learning

Fine-tuning Language Models for Factuality

no code implementations 14 Nov 2023 Katherine Tian, Eric Mitchell, Huaxiu Yao, Christopher D. Manning, Chelsea Finn

The fluency and creativity of large pre-trained language models (LLMs) have led to their widespread use, sometimes even as a replacement for traditional search engines.

Misconceptions Misinformation +1

Holistic Analysis of Hallucination in GPT-4V(ision): Bias and Interference Challenges

1 code implementation 6 Nov 2023 Chenhang Cui, Yiyang Zhou, Xinyu Yang, Shirley Wu, Linjun Zhang, James Zou, Huaxiu Yao

To bridge this gap, we introduce a new benchmark, namely, the Bias and Interference Challenges in Visual Language Models (Bingo).

Hallucination

Conformal Prediction for Deep Classifier via Label Ranking

2 code implementations 10 Oct 2023 Jianguo Huang, Huajun Xi, Linjun Zhang, Huaxiu Yao, Yue Qiu, Hongxin Wei

In this paper, we empirically and theoretically show that disregarding the probabilities' value will mitigate the undesirable effect of miscalibrated probability values.

Conformal Prediction
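A minimal rank-based conformal sketch, assuming only the ordering of class probabilities is used: calibrate the smallest k such that the true label ranks in the top k on most calibration points, then emit top-k prediction sets. This illustrates the "ignore the values, keep the ranking" idea, not the paper's exact conformity score.

```python
import math

def rank_of(probs, label):
    """Position of `label` when classes are sorted by predicted probability
    (1 = most likely); only the ordering matters, never the raw values."""
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    return order.index(label) + 1

def calibrate_k(cal_probs, cal_labels, alpha=0.1):
    """Finite-sample (1 - alpha) quantile of true-label ranks on held-out
    calibration data: the smallest top-k that covers most true labels."""
    ranks = sorted(rank_of(p, y) for p, y in zip(cal_probs, cal_labels))
    n = len(ranks)
    q = math.ceil((n + 1) * (1 - alpha)) - 1  # conformal quantile index
    return ranks[min(q, n - 1)]

def prediction_set(probs, k):
    """Top-k labels by predicted probability."""
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    return order[:k]
```

Because only ranks enter the calibration, a miscalibrated softmax that preserves the ordering yields the same prediction sets.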

Conservative Prediction via Data-Driven Confidence Minimization

1 code implementation 8 Jun 2023 Caroline Choi, Fahim Tajwar, Yoonho Lee, Huaxiu Yao, Ananya Kumar, Chelsea Finn

Taking inspiration from this result, we present data-driven confidence minimization (DCM), which minimizes confidence on an uncertainty dataset containing examples that the model is likely to misclassify at test time.
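The combined objective can be sketched as ordinary cross-entropy on labeled data plus a confidence penalty on the uncertainty set; the batch format and the max-probability confidence measure below are illustrative choices, not the released DCM code.

```python
import math

def dcm_loss(train_batch, uncertainty_batch, lam=1.0):
    """train_batch: list of (probs, label) pairs; uncertainty_batch: list of
    probs for inputs the model is likely to misclassify at test time. The
    second term pushes predictions on those inputs toward low confidence."""
    ce = sum(-math.log(p[y]) for p, y in train_batch) / len(train_batch)
    conf = sum(max(p) for p in uncertainty_batch) / len(uncertainty_batch)
    return ce + lam * conf
```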

Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback

no code implementations 24 May 2023 Katherine Tian, Eric Mitchell, Allan Zhou, Archit Sharma, Rafael Rafailov, Huaxiu Yao, Chelsea Finn, Christopher D. Manning

A trustworthy real-world prediction system should produce well-calibrated confidence scores; that is, its confidence in an answer should be indicative of the likelihood that the answer is correct, enabling deferral to an expert in cases of low-confidence predictions.

TriviaQA Unsupervised Pre-training
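Calibration in this sense is commonly measured with the expected calibration error; the standard binned estimator below is a generic sketch, not a metric introduced by this paper.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence and average the |accuracy - confidence|
    gap per bin, weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for c, ok in zip(confidences, correct):
        bins[min(int(c * n_bins), n_bins - 1)].append((c, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if b:
            avg_conf = sum(c for c, _ in b) / len(b)
            acc = sum(ok for _, ok in b) / len(b)
            ece += (len(b) / n) * abs(acc - avg_conf)
    return ece
```

A model that says 90% but is right only half the time contributes a 0.4 gap; once confidences are calibrated, a simple threshold rule can defer low-confidence cases to an expert.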

Last-Layer Fairness Fine-tuning is Simple and Effective for Neural Networks

2 code implementations 8 Apr 2023 Yuzhen Mao, Zhun Deng, Huaxiu Yao, Ting Ye, Kenji Kawaguchi, James Zou

As machine learning has been deployed ubiquitously across applications in modern data science, algorithmic fairness has become a great concern.

Fairness Open-Ended Question Answering +1

Improving Domain Generalization with Domain Relations

no code implementations 6 Feb 2023 Huaxiu Yao, Xinyu Yang, Xinyi Pan, Shengchao Liu, Pang Wei Koh, Chelsea Finn

Distribution shift presents a significant challenge in machine learning, where models often underperform during the test stage when faced with a different distribution than the one they were trained on.

Domain Generalization

Wild-Time: A Benchmark of in-the-Wild Distribution Shift over Time

1 code implementation 25 Nov 2022 Huaxiu Yao, Caroline Choi, Bochuan Cao, Yoonho Lee, Pang Wei Koh, Chelsea Finn

Temporal shifts -- distribution shifts arising from the passage of time -- often occur gradually and have the additional structure of timestamp metadata.

Continual Learning Domain Generalization +3

Multi-Domain Long-Tailed Learning by Augmenting Disentangled Representations

1 code implementation 25 Oct 2022 Xinyu Yang, Huaxiu Yao, Allan Zhou, Chelsea Finn

We study this multi-domain long-tailed learning problem and aim to produce a model that generalizes well across all classes and domains.

Data Augmentation Disentanglement +1

Surgical Fine-Tuning Improves Adaptation to Distribution Shifts

1 code implementation 20 Oct 2022 Yoonho Lee, Annie S. Chen, Fahim Tajwar, Ananya Kumar, Huaxiu Yao, Percy Liang, Chelsea Finn

A common approach to transfer learning under distribution shift is to fine-tune the last few layers of a pre-trained model, preserving learned features while also adapting to the new task.

Transfer Learning
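The recipe reduces to updating only a chosen block of parameters while freezing the rest; the dict-of-layers representation and plain SGD step below are illustrative assumptions, and the paper's actual contribution concerns which layers to tune for a given kind of shift.

```python
def surgical_update(params, grads, tune, lr=0.1):
    """One SGD step that only touches the layers named in `tune`; every
    other layer keeps its pre-trained weights (the 'surgical' part)."""
    return {
        name: [w - lr * g for w, g in zip(ws, grads[name])] if name in tune else ws
        for name, ws in params.items()
    }

params = {"conv1": [1.0, 2.0], "head": [0.5]}
grads = {"conv1": [0.2, 0.2], "head": [1.0]}
new = surgical_update(params, grads, tune={"conv1"})  # only conv1 moves
```

Tuning early layers suits input-level shifts (e.g. corrupted images), while tuning the head suits label-level shifts.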

C-Mixup: Improving Generalization in Regression

1 code implementation 11 Oct 2022 Huaxiu Yao, Yiping Wang, Linjun Zhang, James Zou, Chelsea Finn

In this paper, we propose a simple yet powerful algorithm, C-Mixup, to improve generalization on regression tasks.

regression
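The core idea, sampling mixup partners with probability based on label distance so that examples with closer labels are mixed more often, can be sketched as follows; the Gaussian kernel and the Beta mixing distribution are plausible choices in the paper's spirit, not its exact released implementation.

```python
import math, random

def c_mixup_pair(xs, ys, i, bandwidth=1.0, alpha=2.0, rng=random):
    """Pick a mixing partner j for example i with probability proportional
    to a Gaussian kernel on label distance, then mixup-interpolate both the
    (scalar, for simplicity) inputs and labels."""
    weights = [math.exp(-((ys[i] - y) ** 2) / (2 * bandwidth ** 2)) for y in ys]
    weights[i] = 0.0  # never mix an example with itself
    j = rng.choices(range(len(xs)), weights=weights)[0]
    lam = rng.betavariate(alpha, alpha)  # mixing coefficient in (0, 1)
    return lam * xs[i] + (1 - lam) * xs[j], lam * ys[i] + (1 - lam) * ys[j]
```

With a small bandwidth an example is almost always mixed with its nearest-label neighbor, which is what keeps the augmented targets consistent with the regression structure.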

Knowledge-Driven New Drug Recommendation

no code implementations 11 Oct 2022 Zhenbang Wu, Huaxiu Yao, Zhe Su, David M Liebovitz, Lucas M Glass, James Zou, Chelsea Finn, Jimeng Sun

However, newly approved drugs do not have much historical prescription data and cannot leverage existing drug recommendation methods.

Few-Shot Learning Multi-Label Classification

Spatio-Temporal Graph Few-Shot Learning with Cross-City Knowledge Transfer

1 code implementation 27 May 2022 Bin Lu, Xiaoying Gan, Weinan Zhang, Huaxiu Yao, Luoyi Fu, Xinbing Wang

To address this challenge, cross-city knowledge transfer has shown its promise, where the model learned from data-sufficient cities is leveraged to benefit the learning process of data-scarce cities.

Few-Shot Learning Graph Learning +2

Diversify and Disambiguate: Learning From Underspecified Data

1 code implementation 7 Feb 2022 Yoonho Lee, Huaxiu Yao, Chelsea Finn

Many datasets are underspecified: there exist multiple equally viable solutions to a given task.

Image Classification

Improving Out-of-Distribution Robustness via Selective Augmentation

2 code implementations 2 Jan 2022 Huaxiu Yao, Yu Wang, Sai Li, Linjun Zhang, Weixin Liang, James Zou, Chelsea Finn

Machine learning algorithms typically assume that training and test examples are drawn from the same distribution.

Functionally Regionalized Knowledge Transfer for Low-resource Drug Discovery

no code implementations NeurIPS 2021 Huaxiu Yao, Ying Wei, Long-Kai Huang, Ding Xue, Junzhou Huang, Zhenhui (Jessie) Li

More recently, there has been a surge of interest in employing machine learning approaches to expedite the drug discovery process where virtual screening for hit discovery and ADMET prediction for lead optimization play essential roles.

Drug Discovery Meta-Learning +1

Meta-learning with an Adaptive Task Scheduler

2 code implementations NeurIPS 2021 Huaxiu Yao, Yu Wang, Ying Wei, Peilin Zhao, Mehrdad Mahdavi, Defu Lian, Chelsea Finn

In ATS, for the first time, we design a neural scheduler to decide which meta-training tasks to use next by predicting the probability being sampled for each candidate task, and train the scheduler to optimize the generalization capacity of the meta-model to unseen tasks.

Drug Discovery Meta-Learning

Knowledge-Aware Meta-learning for Low-Resource Text Classification

1 code implementation EMNLP 2021 Huaxiu Yao, Yingxin Wu, Maruan Al-Shedivat, Eric P. Xing

Meta-learning has achieved great success in leveraging the historical learned knowledge to facilitate the learning process of the new task.

Meta-Learning Sentence +2

Meta-Learning with Fewer Tasks through Task Interpolation

1 code implementation ICLR 2022 Huaxiu Yao, Linjun Zhang, Chelsea Finn

Meta-learning enables algorithms to quickly learn a newly encountered task with just a few labeled examples by transferring previously learned knowledge.

Image Classification Medical Image Classification +3

Few-Shot Learning with Weak Supervision

no code implementations ICLR Workshop Learning_to_Learn 2021 Ali Ghadirzadeh, Petra Poklukar, Xi Chen, Huaxiu Yao, Hossein Azizpour, Mårten Björkman, Chelsea Finn, Danica Kragic

Few-shot meta-learning methods aim to learn the common structure shared across a set of tasks to facilitate learning new tasks with small amounts of data.

Meta-Learning Variational Inference

Online Structured Meta-learning

no code implementations NeurIPS 2020 Huaxiu Yao, Yingbo Zhou, Mehrdad Mahdavi, Zhenhui Li, Richard Socher, Caiming Xiong

When a new task is encountered, it constructs a meta-knowledge pathway by either utilizing the most relevant knowledge blocks or exploring new blocks.

Meta-Learning

Relation-aware Meta-learning for Market Segment Demand Prediction with Limited Records

no code implementations 1 Aug 2020 Jiatu Shi, Huaxiu Yao, Xian Wu, Tong Li, Zedong Lin, Tengfei Wang, Binqiang Zhao

The goal is to facilitate the learning process in the target segments by leveraging the learned knowledge from data-sufficient source segments.

Meta-Learning Relation

Improving Generalization in Meta-learning via Task Augmentation

1 code implementation 26 Jul 2020 Huaxiu Yao, Long-Kai Huang, Linjun Zhang, Ying Wei, Li Tian, James Zou, Junzhou Huang, Zhenhui Li

Moreover, both MetaMix and Channel Shuffle outperform state-of-the-art results by a large margin across many datasets and are compatible with existing meta-learning algorithms.

Meta-Learning

Investigating and Mitigating Degree-Related Biases in Graph Convolutional Networks

no code implementations 28 Jun 2020 Xianfeng Tang, Huaxiu Yao, Yiwei Sun, Yiqi Wang, Jiliang Tang, Charu Aggarwal, Prasenjit Mitra, Suhang Wang

Pseudo labels increase the chance of connecting to labeled neighbors for low-degree nodes, thus reducing the biases of GCNs from the data perspective.

Self-Supervised Learning

Automated Relational Meta-learning

1 code implementation ICLR 2020 Huaxiu Yao, Xian Wu, Zhiqiang Tao, Yaliang Li, Bolin Ding, Ruirui Li, Zhenhui Li

In order to efficiently learn with small amount of data on new tasks, meta-learning transfers knowledge learned from previous tasks to the new ones.

Few-Shot Image Classification Meta-Learning

Few-Shot Knowledge Graph Completion

1 code implementation 26 Nov 2019 Chuxu Zhang, Huaxiu Yao, Chao Huang, Meng Jiang, Zhenhui Li, Nitesh V. Chawla

Knowledge graphs (KGs) serve as useful resources for various natural language processing applications.

One-Shot Learning Relation

Transferable Neural Processes for Hyperparameter Optimization

no code implementations 7 Sep 2019 Ying Wei, Peilin Zhao, Huaxiu Yao, Junzhou Huang

Automated machine learning aims to automate the whole process of machine learning, including model configuration.

BIG-bench Machine Learning Hyperparameter Optimization +1

Targeted Source Detection for Environmental Data

no code implementations 29 Aug 2019 Guanjie Zheng, Mengqi Liu, Tao Wen, Hongjian Wang, Huaxiu Yao, Susan L. Brantley, Zhenhui Li

In the face of growing needs for water and energy, a fundamental understanding of the environmental impacts of human activities becomes critical for managing water and energy resources, remedying water pollution, and making regulatory policy wisely.

Transferring Robustness for Graph Neural Network Against Poisoning Attacks

1 code implementation 20 Aug 2019 Xianfeng Tang, Yandong Li, Yiwei Sun, Huaxiu Yao, Prasenjit Mitra, Suhang Wang

To optimize PA-GNN for a poisoned graph, we design a meta-optimization algorithm that trains PA-GNN to penalize perturbations using clean graphs and their adversarial counterparts, and transfers such ability to improve the robustness of PA-GNN on the poisoned graph.

Node Classification Transfer Learning

Hierarchically Structured Meta-learning

1 code implementation 13 May 2019 Huaxiu Yao, Ying Wei, Junzhou Huang, Zhenhui Li

In order to learn quickly with few samples, meta-learning utilizes prior knowledge learned from previous tasks.

Clustering Continual Learning +2

Joint Modeling of Dense and Incomplete Trajectories for Citywide Traffic Volume Inference

no code implementations 25 Feb 2019 Xianfeng Tang, Boqing Gong, Yanwei Yu, Huaxiu Yao, Yandong Li, Haiyong Xie, Xiaoyu Wang

In this paper, we propose a novel framework for the citywide traffic volume inference using both dense GPS trajectories and incomplete trajectories captured by camera surveillance systems.

Graph Embedding

Revisiting Spatial-Temporal Similarity: A Deep Learning Framework for Traffic Prediction

5 code implementations 3 Mar 2018 Huaxiu Yao, Xianfeng Tang, Hua Wei, Guanjie Zheng, Zhenhui Li

Although both factors have been considered in modeling, existing works make strong assumptions about spatial dependence and temporal dynamics, i.e., that spatial dependence is stationary in time and temporal dynamics are strictly periodic.

Traffic Prediction

Deep Multi-View Spatial-Temporal Network for Taxi Demand Prediction

1 code implementation 23 Feb 2018 Huaxiu Yao, Fei Wu, Jintao Ke, Xianfeng Tang, Yitian Jia, Siyu Lu, Pinghua Gong, Jieping Ye, Zhenhui Li

Traditional demand prediction methods mostly rely on time series forecasting techniques, which fail to model the complex non-linear spatial and temporal relations.

Image Classification Time Series Forecasting +1
