no code implementations • 9 Jul 2014 • Ming Xu, Jianping Wu, Yiman Du, Haohan Wang, Geqi Qi, Kezhen Hu, Yun-Peng Xiao
However, none of the existing approaches addresses the problem of identifying network-wide important crossroads in real road networks.
no code implementations • 9 Dec 2014 • Seungwhan Moon, Suyoun Kim, Haohan Wang
We propose a transfer deep learning (TDL) framework that can transfer the knowledge obtained from a single-modal neural network to a network with a different modality.
no code implementations • 18 Sep 2015 • Haohan Wang, Madhavi K. Ganapathiraju
In this paper, we focus on the problem that the unavailability of accurately labeled test data sets in the domain of protein-protein interaction (PPI) prediction may lead to biased evaluation results.
no code implementations • 16 Oct 2015 • Haohan Wang, Bhiksha Raj
Further, we will also look into the development history of modelling time series data with neural networks.
no code implementations • 6 Nov 2015 • Haohan Wang, Madhavi K. Ganapathiraju
In order for the predicted interactions to be directly adopted by biologists, the machine learning predictions have to be of high precision, regardless of recall.
1 code implementation • 16 Sep 2016 • Haohan Wang, Aaksha Meghawat, Louis-Philippe Morency, Eric P. Xing
In this paper, we propose a Select-Additive Learning (SAL) procedure that improves the generalizability of trained neural networks for multimodal sentiment analysis.
no code implementations • 29 Nov 2016 • Jingkang Yang, Haohan Wang, Jun Zhu, Eric P. Xing
In this paper, we propose an extension of the State Space Model that works with different sources of information, together with its learning and inference algorithms.
no code implementations • 24 Feb 2017 • Haohan Wang, Bhiksha Raj
This paper is a review of the evolutionary history of deep learning models.
1 code implementation • 20 Mar 2018 • Haohan Wang, Zhenglin Wu, Eric P. Xing
The proliferation of healthcare data has brought opportunities to apply data-driven approaches, such as machine learning methods, to assist diagnosis.
no code implementations • 7 Sep 2018 • Haohan Wang, Da Sun, Eric P. Xing
In this paper, we further investigate the statistical irregularities, which we refer to as confounding factors, of the NLI data sets.
no code implementations • ICLR 2019 • Haohan Wang, Zexue He, Zachary C. Lipton, Eric P. Xing
We test our method on the battery of standard domain generalization data sets and, interestingly, achieve performance comparable to or better than other domain generalization methods that explicitly require samples from the target distribution for training.
Ranked #113 on Domain Generalization on PACS
1 code implementation • 28 May 2019 • Haohan Wang, Xindi Wu, Zeyi Huang, Eric P. Xing
We investigate the relationship between the frequency spectrum of image data and the generalization behavior of convolutional neural networks (CNN).
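The frequency-domain view above can be illustrated with a toy sketch (not the paper's code; the split radius is an arbitrary choice for demonstration): decompose an image into low- and high-frequency components with a circular mask in the centered Fourier spectrum.

```python
import numpy as np

def frequency_split(img, radius):
    """Split a grayscale image into low- and high-frequency components
    using a circular mask in the centered 2-D Fourier spectrum."""
    f = np.fft.fftshift(np.fft.fft2(img))            # centered spectrum
    h, w = img.shape
    y, x = np.ogrid[:h, :w]
    dist = np.sqrt((y - h / 2) ** 2 + (x - w / 2) ** 2)
    mask = dist <= radius                            # keep low frequencies
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(f * ~mask)).real
    return low, high

# The two components sum back (up to float error) to the original image.
img = np.random.rand(32, 32)
low, high = frequency_split(img, radius=8)
print(np.allclose(low + high, img))  # True
```

Feeding `low` or `high` alone to a trained CNN is one simple way to probe which part of the spectrum its predictions rely on.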
4 code implementations • NeurIPS 2019 • Haohan Wang, Songwei Ge, Eric P. Xing, Zachary C. Lipton
Despite their renowned predictive power on i.i.d. data.
Ranked #104 on Domain Generalization on PACS
1 code implementation • WS 2019 • He He, Sheng Zha, Haohan Wang
We first learn a biased model that only uses features that are known to relate to dataset bias.
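One common way to exploit such a bias-only model is a product-of-experts ensemble, where the main model's logits are combined with the frozen biased model's logits during training; a toy NumPy sketch of this idea (an illustration, not the released code):

```python
import numpy as np

def log_softmax(z):
    z = np.asarray(z, dtype=float)
    z = z - z.max()
    return z - np.log(np.exp(z).sum())

def poe_loss(main_logits, bias_logits, label):
    """Cross-entropy on the combined (main + frozen biased) log-probabilities.
    Gradients flow only into the main model, so it is pushed to explain
    what the bias-only model cannot."""
    combined = log_softmax(main_logits) + log_softmax(bias_logits)
    return -log_softmax(combined)[label]

# When the biased model is already confident on the gold label, the combined
# loss puts less pressure on the main model for that example.
easy = poe_loss([0.0, 0.0], [4.0, 0.0], label=0)   # bias solves it
hard = poe_loss([0.0, 0.0], [0.0, 0.0], label=0)   # bias is uninformative
print(easy < hard)  # True
```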
no code implementations • ICLR 2020 • Haohan Wang, Xindi Wu, Songwei Ge, Zachary C. Lipton, Eric P. Xing
Recent research has shown that CNNs are often overly sensitive to high-frequency textural patterns.
1 code implementation • CVPR 2020 • Haohan Wang, Xindi Wu, Zeyi Huang, Eric P. Xing
We investigate the relationship between the frequency spectrum of image data and the generalization behavior of convolutional neural networks (CNN).
8 code implementations • ECCV 2020 • Zeyi Huang, Haohan Wang, Eric P. Xing, Dong Huang
We introduce a simple training heuristic, Representation Self-Challenging (RSC), that significantly improves the generalization of CNN to the out-of-domain data.
Ranked #27 on Domain Generalization on PACS
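The self-challenging idea can be sketched in a few lines (a simplified NumPy illustration of the masking step, not the released RSC training code; `drop_pct` is an assumed hyperparameter name): the feature dimensions with the largest gradient magnitude are muted, forcing the classifier to rely on the remaining, less dominant features.

```python
import numpy as np

def self_challenge(features, gradients, drop_pct=0.5):
    """Zero out the feature dimensions with the largest gradient magnitude,
    so the classifier must rely on the remaining, less dominant features."""
    k = int(len(features) * drop_pct)
    top = np.argsort(np.abs(gradients))[-k:]   # most relied-upon dims
    masked = np.array(features, dtype=float)
    masked[top] = 0.0
    return masked

feats = [1.0, 2.0, 3.0, 4.0]
grads = [0.1, 0.9, 0.2, 0.8]
print(self_challenge(feats, grads))  # [1. 0. 3. 0.]
```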
no code implementations • 20 Oct 2020 • Haohan Wang, Peiyan Zhang, Eric P. Xing
Neural machine translation has achieved remarkable empirical performance over standard benchmark datasets, yet recent evidence suggests that the models can still fail easily when dealing with substandard inputs such as misspelled words. To overcome this issue, we introduce a new encoding heuristic for the input symbols of character-level NLP models: it encodes the shape of each character through images depicting the letters when printed.
1 code implementation • 25 Nov 2020 • Haohan Wang, Zeyi Huang, Xindi Wu, Eric P. Xing
Data augmentation is one of the most popular techniques for improving the robustness of neural networks.
no code implementations • 1 Jan 2021 • Haohan Wang, Zeyi Huang, Eric Xing
In this paper, we formally study the generalization error bound for this setup with the knowledge of how the spurious features are associated with the label.
no code implementations • 1 Jan 2021 • Haohan Wang, Zeyi Huang, Xindi Wu, Eric Xing
Data augmentation is one of the most popular techniques for improving the robustness of neural networks.
no code implementations • 24 Feb 2021 • Zhuoling Li, Haohan Wang, Tymoteusz Swistek, Weixin Chen, Yuanzheng Li, Haoqian Wang
Few-shot learning is challenging due to the limited data and labels.
no code implementations • 24 Feb 2021 • Xuefeng Du, Haohan Wang, Zhenxi Zhu, Xiangrui Zeng, Yi-Wei Chang, Jing Zhang, Min Xu
Deep learning based subtomogram classification has played a critical role in such tasks.
1 code implementation • NeurIPS 2021 • Songwei Ge, Shlok Mishra, Haohan Wang, Chun-Liang Li, David Jacobs
We also show that model bias favors texture and shape features differently under different test settings.
no code implementations • 5 Nov 2021 • Haohan Wang, Bryon Aragam, Eric Xing
Motivated by empirical arguments that are well-known from the genome-wide association studies (GWAS) literature, we study the statistical properties of linear mixed models (LMMs) applied to GWAS.
1 code implementation • 5 Nov 2021 • Haohan Wang, Zeyi Huang, Hanlin Zhang, Yong Jae Lee, Eric Xing
Machine learning has demonstrated remarkable prediction accuracy over i.i.d. data, but the accuracy often drops when tested with data from another distribution.
no code implementations • NAACL 2022 • Xuezhi Wang, Haohan Wang, Diyi Yang
Despite robustness being an increasingly studied topic, it has been separately explored in applications like vision and NLP, with various definitions, evaluation and mitigation strategies in multiple lines of research.
no code implementations • CVPR 2022 • Zeyi Huang, Haohan Wang, Dong Huang, Yong Jae Lee, Eric P. Xing
Training with an emphasis on "hard-to-learn" components of the data has proven to be an effective method for improving the generalization of machine learning models, especially in settings where robustness (e.g., generalization across distributions) is valued.
1 code implementation • 9 Apr 2022 • Zeyi Huang, Haohan Wang, Dong Huang, Yong Jae Lee, Eric P. Xing
Training with an emphasis on "hard-to-learn" components of the data has proven to be an effective method for improving the generalization of machine learning models, especially in settings where robustness (e.g., generalization across distributions) is valued.
1 code implementation • 4 Jun 2022 • Haohan Wang, Zeyi Huang, Xindi Wu, Eric P. Xing
Finally, we test this simple technique we identify (worst-case data augmentation with squared l2 norm alignment regularization) and show that the benefits of this method outrun those of the specially designed methods.
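The named technique can be sketched in miniature (a hypothetical NumPy illustration with toy loss, embedding, and augmentation functions, not the paper's implementation): choose the augmentation that maximizes the current loss, then add a squared l2 penalty aligning the clean and augmented representations.

```python
import numpy as np

def worst_case_augment(x, model_loss, augmentations):
    """Pick the augmented view that maximizes the training loss."""
    return max((aug(x) for aug in augmentations), key=model_loss)

def aligned_loss(model_loss, embed, x, x_aug, lam=1.0):
    """Task loss on the worst-case view plus a squared l2 penalty
    aligning clean and augmented representations."""
    gap = embed(x) - embed(x_aug)
    return model_loss(x_aug) + lam * float(np.dot(gap, gap))

# Toy setup: loss is squared distance from the origin, embedding is
# the identity, augmentations are simple shifts and a rescaling.
loss = lambda v: float(np.sum(v ** 2))
augs = [lambda v: v + 1.0, lambda v: v - 1.0, lambda v: v * 0.5]
x = np.array([1.0, 2.0])
x_worst = worst_case_augment(x, loss, augs)
total = aligned_loss(loss, lambda v: v, x, x_worst)
print(x_worst, total)  # [2. 3.] 15.0
```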
no code implementations • 18 Jun 2022 • Chonghan Chen, Qi Jiang, Chih-Hao Wang, Noel Chen, Haohan Wang, Xiang Li, Bhiksha Raj
With our proposed QCM, the downstream fusion module receives visual features that are more discriminative and focused on the desired object described in the expression, leading to more accurate predictions.
1 code implementation • 26 Jun 2022 • Peiyan Zhang, Jiayan Guo, Chaozhuo Li, Yueqi Xie, Jaeboum Kim, Yan Zhang, Xing Xie, Haohan Wang, Sunghun Kim
Based on this observation, we propose to remove the GNN propagation part, while the readout module takes on more responsibility in the model's reasoning process.
no code implementations • 18 Jul 2022 • Yifan Zhong, Haohan Wang, Eric P. Xing
Many recent neural models have shown remarkable empirical results in Machine Reading Comprehension, but evidence suggests that the models sometimes take advantage of dataset biases to predict and fail to generalize to out-of-sample data.
1 code implementation • 18 Jul 2022 • Chonghan Chen, Haohan Wang, Leyang Hu, Yuhao Zhang, Shuguang Lyu, Jingcheng Wu, Xinnuo Li, Linjing Sun, Eric P. Xing
We introduce the initial release of our software Robustar, which aims to improve the robustness of vision classification machine learning models through a data-driven perspective.
no code implementations • 5 Sep 2022 • Esla Timothy Anzaku, Haohan Wang, Arnout Van Messem, Wesley De Neve
Deep Neural Network (DNN) models are increasingly evaluated using new replication test datasets, which have been carefully created to be similar to older and popular benchmark datasets.
1 code implementation • 30 Nov 2022 • Kun Xiang, Xing Zhang, Jinwen She, Jinpeng Liu, Haohan Wang, Shiqi Deng, Shancheng Jiang
As the COVID-19 pandemic puts pressure on healthcare systems worldwide, the computed tomography image based AI diagnostic system has become a sustainable solution for early diagnosis.
1 code implementation • 9 Dec 2022 • Minh-Long Luu, Zeyi Huang, Eric P. Xing, Yong Jae Lee, Haohan Wang
Mix-up training approaches have proven to be effective in improving the generalization ability of Deep Neural Networks.
Ranked #1 on Classifier calibration on CIFAR-100
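The classic mix-up operation that this line of work builds on is a convex combination of two examples and their one-hot labels, with the mixing weight drawn from a Beta distribution (a minimal sketch of standard mix-up, not the paper's specific variant):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Classic mix-up: a convex combination of two inputs and their
    one-hot labels, with the weight drawn from Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x_mix, y_mix = mixup(np.ones(4), np.array([1.0, 0.0]),
                     np.zeros(4), np.array([0.0, 1.0]))
print(y_mix)  # the mixed label is still a valid distribution (sums to ~1)
```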
1 code implementation • 10 Mar 2023 • Haohan Wang, Liang Liu, Wuhao Zhang, Jiangning Zhang, Zhenye Gan, Yabiao Wang, Chengjie Wang, Haoqian Wang
Few-shot semantic segmentation aims to learn to segment unseen class objects with the guidance of only a few support images.
Ranked #40 on Few-Shot Semantic Segmentation on COCO-20i (1-shot)
1 code implementation • 14 Mar 2023 • Haohan Wang, Liang Liu, Boshen Zhang, Jiangning Zhang, Wuhao Zhang, Zhenye Gan, Yabiao Wang, Chengjie Wang, Haoqian Wang
Recent works on sparsely annotated object detection alleviate this problem by generating pseudo labels for the missing annotations.
1 code implementation • 28 May 2023 • Jingfeng Zhang, Bo Song, Haohan Wang, Bo Han, Tongliang Liu, Lei Liu, Masashi Sugiyama
To address the challenge posed by BadLabel, we further propose a robust LNL method that perturbs the labels in an adversarial manner at each epoch to make the loss values of clean and noisy labels again distinguishable.
no code implementations • 9 Jun 2023 • Mu Cai, Zeyi Huang, Yuheng Li, Haohan Wang, Yong Jae Lee
By leveraging the XML-based textual descriptions of SVG representations instead of raster images, we aim to bridge the gap between the visual and textual modalities, allowing LLMs to directly understand and manipulate images without the need for parameterized visual components.
1 code implementation • 10 Jun 2023 • Wenxuan Bao, Haohan Wang, Jun Wu, Jingrui He
In federated learning (FL), multiple clients collaborate to train machine learning models together while keeping their data decentralized.
no code implementations • 31 Jul 2023 • Haoyang Liu, Maheep Chaudhary, Haohan Wang
Accordingly, this survey presents the background of trustworthy machine learning development using a unified set of concepts, connects this language to Pearl's causal hierarchy, and finally discusses methods explicitly inspired by causality literature.
no code implementations • 21 Aug 2023 • Peiyan Zhang, Haoyang Liu, Chaozhuo Li, Xing Xie, Sunghun Kim, Haohan Wang
Machine learning has demonstrated remarkable performance over finite datasets, yet whether the scores over fixed benchmarks can sufficiently indicate a model's performance in the real world remains under discussion.
1 code implementation • ICCV 2023 • Zeyi Huang, Andy Zhou, Zijian Lin, Mu Cai, Haohan Wang, Yong Jae Lee
Domain generalization studies the problem of training a model with samples from several domains (or distributions) and then testing the model with samples from a new, unseen domain.
Ranked #15 on Domain Generalization on PACS
no code implementations • 1 Oct 2023 • Yijiang Li, Ying Gao, Haohan Wang
We investigate robustness and security issues in a novel and practical setting: a group of malicious clients impacts the model during training by disguising their identities and acting as benign clients, revealing their adversarial position only after training in order to conduct transferable adversarial attacks with their data, which is usually a subset of the data the FL system was trained with.
1 code implementation • 6 Oct 2023 • Andy Zhou, Kai Yan, Michal Shlapentokh-Rothman, Haohan Wang, Yu-Xiong Wang
While large language models (LLMs) have demonstrated impressive performance on a range of decision-making tasks, they rely on simple acting processes and fall short of broad deployment as autonomous agents.
Ranked #3 on Code Generation on HumanEval
1 code implementation • 8 Oct 2023 • Wang Lu, Hao Yu, Jindong Wang, Damien Teney, Haohan Wang, Yiqiang Chen, Qiang Yang, Xing Xie, Xiangyang Ji
When personalized federated learning (FL) meets large foundation models, new challenges arise from various limitations in resources.
1 code implementation • NeurIPS 2023 • Wenxuan Bao, Tianxin Wei, Haohan Wang, Jingrui He
To tackle this challenge, we propose a novel algorithm called ATP that adaptively learns the adaptation rates for each module in the model from distribution shifts among source domains.
1 code implementation • NeurIPS 2023 • Andy Zhou, Jindong Wang, Yu-Xiong Wang, Haohan Wang
We propose a conceptually simple and lightweight framework for improving the robustness of vision models through the combination of knowledge distillation and data augmentation.
Ranked #13 on Domain Generalization on ImageNet-Sketch
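The core of combining distillation with augmentation can be sketched as follows (a toy NumPy illustration, not the released framework; function names and the temperature value are assumptions): the student, fed an augmented view, is trained to match the teacher's softened predictions on the clean input via a KL term.

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distill_loss(student_logits_aug, teacher_logits_clean, T=2.0):
    """KL(teacher || student): the student, fed an augmented view,
    matches the teacher's temperature-softened predictions on the
    clean input."""
    p = softmax(teacher_logits_clean, T)
    q = softmax(student_logits_aug, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Identical predictions give zero loss; diverging ones give a positive loss.
print(distill_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
print(distill_loss([3.0, 2.0, 1.0], [1.0, 2.0, 3.0]) > 0)  # True
```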
no code implementations • 26 Nov 2023 • Jixuan Leng, Yijiang Li, Haohan Wang
In this paper, we introduce a novel approach, namely, Selective Cross-Modality Distillation for Domain Generalization (SCMD).
no code implementations • 27 Nov 2023 • Tong Zhang, Haoyang Liu, Peiyan Zhang, Yuxuan Cheng, Haohan Wang
Our method focuses on producing SVGs that are both accurate and simple, aligning with human readability and understanding.
no code implementations • 30 Nov 2023 • Haoyang Liu, Yijiang Li, Tiancheng Xing, Vibhu Dalal, Luwei Li, Jingrui He, Haohan Wang
Dataset Distillation (DD) emerges as a powerful strategy to encapsulate the expansive information of large datasets into significantly smaller, synthetic equivalents, thereby preserving model performance with reduced computational overhead.
no code implementations • 20 Dec 2023 • Haohan Wang, Wei Feng, Yang Lu, Yaoyu Li, Zheng Zhang, Jingjing Lv, Xin Zhu, Junjie Shen, Zhangang Lin, Lixing Bo, Jingping Shao
Furthermore, for products with specific and fine-grained requirements in layout, elements, etc., a Personality-Wise Generator is devised to learn such personalized styles directly from a reference image to resolve textual ambiguities; it is trained in a self-supervised manner for more efficient use of training data.
no code implementations • 12 Jan 2024 • Yifeng Wang, Ke Chen, Haohan Wang
Automated diagnosis of Alzheimer's disease (AD) from brain imaging, such as magnetic resonance imaging (MRI), has become increasingly important and has attracted many deep learning contributions from the community.
1 code implementation • 30 Jan 2024 • Andy Zhou, Bo Li, Haohan Wang
Despite advances in AI alignment, language models (LM) remain vulnerable to adversarial attacks or jailbreaking, in which adversaries modify input prompts to induce harmful behavior.
no code implementations • 5 Feb 2024 • Haibo Jin, Ruoxi Chen, Andy Zhou, Jinyin Chen, Yang Zhang, Haohan Wang
Our system of different roles will leverage this knowledge graph to generate new jailbreaks, which have proved effective in inducing LLMs to generate unethical or guideline-violating responses.
no code implementations • 15 Feb 2024 • Haoyang Liu, Yijiang Li, Jinglin Jian, Yuxuan Cheng, Jianrong Lu, Shuyi Guo, Jinglei Zhu, Mianchen Zhang, Miantong Zhang, Haohan Wang
For instance, it has facilitated the identification of disease-predictive genes from gene expression data, significantly advancing healthcare.
no code implementations • 8 Mar 2024 • Yijiang Li, Sucheng Ren, Weipeng Deng, Yuzhi Xu, Ying Gao, Edith Ngai, Haohan Wang
Starting with the class of interest, we query the LLMs to extract relevant knowledge for these novel domains.
no code implementations • 15 Mar 2024 • Haoyang Liu, Aditya Singh, Yijiang Li, Haohan Wang
In this work, we provide a finetuning approach to enhance the robustness of vision transformers inspired by the concept of nullspace from linear algebra.
no code implementations • 15 Mar 2024 • Eric Xue, Yijiang Li, Haoyang Liu, Yifan Shen, Haohan Wang
Extensive empirical experiments suggest that our method not only outperforms standard adversarial training on both accuracy and robustness with less computational overhead, but is also capable of generating robust distilled datasets that can withstand various adversarial attacks.