no code implementations • SemEval (NAACL) 2022 • Ziming Zhou, Han Zhao, Jingjing Dong, Ning Ding, Xiaolong Liu, Kangli Zhang
This paper describes our submission for task 5 Multimedia Automatic Misogyny Identification (MAMI) at SemEval-2022.
no code implementations • CONSTRAINT (ACL) 2022 • Ziming Zhou, Han Zhao, Jingjing Dong, Jun Gao, Xiaolong Liu
Memes serve as an important tool in online communication, but some hateful memes endanger cyberspace by attacking certain people or subjects.
no code implementations • Findings (EMNLP) 2021 • Zixuan Zhang, Hongwei Wang, Han Zhao, Hanghang Tong, Heng Ji
Relations in most traditional knowledge graphs (KGs) reflect only static, factual connections and fail to represent the dynamic activities and state changes of entities.
1 code implementation • 11 Sep 2024 • Yang Liu, Pengxiang Ding, Siteng Huang, Min Zhang, Han Zhao, Donglin Wang
Fueled by the Large Language Models (LLMs) wave, Large Visual-Language Models (LVLMs) have emerged as a pivotal advancement, bridging the gap between image and text.
1 code implementation • 10 Sep 2024 • Yifei He, Haoxiang Wang, Ziyan Jiang, Alexandros Papangelis, Han Zhao
Reward models (RM) capture the values and preferences of humans and play a central role in Reinforcement Learning with Human Feedback (RLHF) to align pretrained large language models (LLMs).
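Reward models for RLHF are commonly trained on human preference pairs with a Bradley-Terry objective: the model should assign a higher scalar reward to the preferred response. The sketch below shows the per-pair loss only; it is a generic illustration, not the specific training recipe of the paper above.

```python
import math

def bradley_terry_loss(r_chosen: float, r_rejected: float) -> float:
    """Negative log-likelihood of preferring `chosen` over `rejected`
    under the Bradley-Terry model: p = sigmoid(r_chosen - r_rejected)."""
    margin = r_chosen - r_rejected
    # -log(sigmoid(margin)), written in a numerically stable form
    return math.log1p(math.exp(-margin))
```

The loss shrinks as the reward gap in favor of the chosen response grows, which is what pushes the RM toward the annotators' preference ordering.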
1 code implementation • 4 Sep 2024 • Xiaoyuan Zhang, Liang Zhao, Yingying Yu, Xi Lin, Zhenkun Wang, Han Zhao, Qingfu Zhang
Multiobjective optimization problems (MOPs) are prevalent in machine learning, with applications in multi-task learning, learning under fairness or robustness constraints, etc.
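The simplest baseline for an MOP is weighted-sum scalarization, which collapses the objective vector into a single objective. This is a generic illustration of the problem setup, not the method of the paper above (and it is known to miss non-convex parts of the Pareto front).

```python
def weighted_sum_scalarization(objectives, weights):
    """Collapse a multiobjective problem into one objective via a
    weighted sum; each weight trades off one objective against the rest."""
    return lambda x: sum(w * f(x) for f, w in zip(objectives, weights))

# Two conflicting 1-D objectives: their equal-weight scalarization is
# minimized at a compromise point between the two individual minima.
f1 = lambda x: x * x            # minimized at x = 0
f2 = lambda x: (x - 2) ** 2     # minimized at x = 2
g = weighted_sum_scalarization([f1, f2], [0.5, 0.5])
```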
1 code implementation • 24 Aug 2024 • Yifei He, Yuzheng Hu, Yong Lin, Tong Zhang, Han Zhao
Our algorithm works in two steps: i) Localization: identify tiny ($1\%$ of the total parameters) localized regions in the finetuned models containing essential skills for the downstream tasks, and ii) Stitching: reintegrate only these essential regions back into the pretrained model for task synergy.
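The two-step recipe above can be sketched at the level of flat parameter vectors. This is a simplified illustration (magnitude-of-delta top-1% masking), assuming the localized region is found by ranking finetuning deltas; the paper's actual localization procedure may differ.

```python
def stitch(pretrained, finetuned, keep=0.01):
    """Localize-then-stitch sketch: keep only the top `keep` fraction of
    parameters (ranked by magnitude of the finetuning delta) from the
    finetuned model; take every other parameter from the pretrained model."""
    deltas = [f - p for p, f in zip(pretrained, finetuned)]
    k = max(1, int(keep * len(deltas)))
    # indices of the k largest |delta| entries form the "localized" region
    top = set(sorted(range(len(deltas)), key=lambda i: abs(deltas[i]))[-k:])
    return [finetuned[i] if i in top else pretrained[i] for i in range(len(deltas))]
```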
no code implementations • 23 Aug 2024 • Dillon Davis, Huiji Gao, Weiwei Guo, Thomas Legrand, Malay Haldar, Alex Deng, Han Zhao, Liwei He, Sanjeev Katariya
The Airbnb search system grapples with many unique challenges as it continues to evolve.
2 code implementations • 18 Jun 2024 • Haoxiang Wang, Wei Xiong, Tengyang Xie, Han Zhao, Tong Zhang
The trained RM serves as a proxy for human preferences.
3 code implementations • 13 May 2024 • Hanze Dong, Wei Xiong, Bo Pang, Haoxiang Wang, Han Zhao, Yingbo Zhou, Nan Jiang, Doyen Sahoo, Caiming Xiong, Tong Zhang
We present the workflow of Online Iterative Reinforcement Learning from Human Feedback (RLHF) in this technical report, which is widely reported to outperform its offline counterpart by a large margin in the recent large language model (LLM) literature.
1 code implementation • 7 May 2024 • Ruicheng Xian, Qiaobo Li, Gautam Kamath, Han Zhao
This paper describes a differentially private post-processing algorithm for learning fair regressors satisfying statistical parity, addressing privacy concerns of machine learning models trained on sensitive data, as well as fairness concerns of their potential to propagate historical biases.
1 code implementation • 7 May 2024 • Ruicheng Xian, Han Zhao
We propose a post-processing algorithm for fair classification that mitigates model bias under a unified family of group fairness criteria covering statistical parity, equal opportunity, and equalized odds, applicable to multi-class problems and both attribute-aware and attribute-blind settings.
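In the attribute-aware setting, the simplest post-processing scheme for statistical parity gives each group its own decision threshold so that all groups receive positive predictions at the same rate. The sketch below is this naive quantile-threshold baseline, not the paper's algorithm, which handles a unified family of criteria.

```python
def parity_thresholds(scores_by_group, target_rate):
    """Pick a per-group score threshold so each group's positive-prediction
    rate is (approximately) `target_rate` — statistical parity by quantiles."""
    thresholds = {}
    for g, scores in scores_by_group.items():
        ranked = sorted(scores, reverse=True)
        k = int(target_rate * len(ranked))  # number of positives to allow
        # admit exactly the k highest-scoring members of the group
        thresholds[g] = ranked[k - 1] if k > 0 else float("inf")
    return thresholds

def predict(score, group, thresholds):
    return int(score >= thresholds[group])
```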
no code implementations • 2 May 2024 • Jayanth Shenoy, Xingjian Davis Zhang, Shlok Mehrotra, Bill Tao, Rem Yang, Han Zhao, Deepak Vasisht
We propose S4, a new self-supervised pre-training approach that significantly reduces the requirement for labeled training data by utilizing two new insights: (a) satellites capture images in different parts of the spectrum, such as radio and visible frequencies.
1 code implementation • 10 Apr 2024 • Longwei Zou, Qingyang Wang, Han Zhao, Jiangang Kong, Yi Yang, Yangdong Deng
Empirical experiments of the proposed approach on the LLaMA models confirm that Concurrent Computation of Quasi-Independent Layers (CQIL) can reduce latency by up to 48.3% on LLaMA-33B, while maintaining a close level of performance.
no code implementations • 24 Mar 2024 • Chunyu Xue, Weihao Cui, Han Zhao, Quan Chen, Shulai Zhang, Pengyu Yang, Jing Yang, Shaobo Li, Minyi Guo
The exponentially enlarged scheduling space, together with the ever-changing optimal parallelism plan induced by adaptive parallelism, creates a tension between low-overhead and accurate performance-data acquisition for efficient cluster scheduling.
1 code implementation • 21 Mar 2024 • Han Zhao, Min Zhang, Wei Zhao, Pengxiang Ding, Siteng Huang, Donglin Wang
In recent years, the application of multimodal large language models (MLLM) in various fields has achieved remarkable success.
no code implementations • 20 Mar 2024 • Wenxuan Song, Han Zhao, Pengxiang Ding, Can Cui, Shangke Lyu, Yaning Fan, Donglin Wang
Multi-task robot learning holds significant importance in tackling diverse and complex scenarios.
1 code implementation • 17 Mar 2024 • Zihan Wang, Fanheng Kong, Shi Feng, Ming Wang, Xiaocui Yang, Han Zhao, Daling Wang, Yifei Zhang
For TSF tasks, these characteristics enable Mamba to comprehend hidden patterns as the Transformer does, while reducing computational overhead compared to the Transformer.
Ranked #55 on Time Series Forecasting on ETTh1 (336) Multivariate
1 code implementation • 2 Mar 2024 • Shikun Liu, Deyu Zou, Han Zhao, Pan Li
Graph-based methods, pivotal for label inference over interconnected objects in many real-world applications, often encounter generalization challenges, if the graph used for model training differs significantly from the graph used for testing.
1 code implementation • 28 Feb 2024 • Haoxiang Wang, Yong Lin, Wei Xiong, Rui Yang, Shizhe Diao, Shuang Qiu, Han Zhao, Tong Zhang
Additionally, DPA models user preferences as directions (i.e., unit vectors) in the reward space to achieve user-dependent preference control.
no code implementations • 13 Feb 2024 • Yifan Yang, Mingquan Lin, Han Zhao, Yifan Peng, Furong Huang, Zhiyong Lu
Such biases can occur before, during, or after the development of AI models, making it critical to understand and address potential biases to enable the accurate and reliable application of AI models in clinical settings.
1 code implementation • 5 Feb 2024 • Haoxiang Wang, Haozhe Si, Huajie Shao, Han Zhao
To delve into the CG challenge, we develop CG-Bench, a suite of CG benchmarks derived from existing real-world image datasets, and observe that the prevalent pretraining-finetuning paradigm on foundational models, such as CLIP and DINOv2, struggles with the challenge.
1 code implementation • 3 Feb 2024 • Yifei He, Shiji Zhou, Guojun Zhang, Hyokun Yun, Yi Xu, Belinda Zeng, Trishul Chilimbi, Han Zhao
To overcome this limitation, we propose Multi-Task Learning with Excess Risks (ExcessMTL), an excess risk-based task balancing method that updates the task weights by their distances to convergence instead.
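The excess-risk idea can be sketched as: weight each task by how far its current loss still is from its (estimated) loss at convergence, so lagging tasks get more attention. This is a hedged illustration only; the inputs `best_losses` are hypothetical estimates, and ExcessMTL's actual estimation and update rules differ.

```python
def excess_risk_weights(current_losses, best_losses):
    """Weight tasks proportionally to their excess risk, i.e., the gap
    between the current loss and an estimate of the converged loss.
    Returns normalized weights summing to 1 (or all zeros if converged)."""
    excess = [max(c - b, 0.0) for c, b in zip(current_losses, best_losses)]
    total = sum(excess) or 1.0  # avoid division by zero when all converged
    return [e / total for e in excess]
```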
no code implementations • CVPR 2024 • Fuli Wan, Han Zhao, Xu Yang, Cheng Deng
In contrast, this paper advocates that exploring unknown classes can better identify known ones, and proposes a domain adaptation model that transfers knowledge on known and unknown classes jointly.
no code implementations • 22 Dec 2023 • Pengxiang Ding, Han Zhao, Wenxuan Song, Wenjie Zhang, Min Zhang, Siteng Huang, Ningxi Yang, Donglin Wang
Within this framework, a notable challenge lies in aligning fine-grained instructions with visual perception information.
no code implementations • 10 Nov 2023 • Hongyin Zhang, Diyuan Shi, Zifeng Zhuang, Han Zhao, Zhenyu Wei, Feng Zhao, Sibo Gai, Shangke Lyu, Donglin Wang
Developing robotic intelligent systems that can adapt quickly to unseen wild situations is one of the critical challenges in pursuing autonomous robotics.
1 code implementation • 2 Nov 2023 • Haoxiang Wang, Gargi Balasubramaniam, Haozhe Si, Bo Li, Han Zhao
First, in the binary classification setup of Rosenfeld et al. (2021), we show that our first algorithm, ISR-Mean, can identify the subspace spanned by invariant features from the first-order moments of the class-conditional distributions, and achieve provable domain generalization with $d_s+1$ training environments.
no code implementations • 30 Oct 2023 • Ziyu Gong, Ben Usman, Han Zhao, David I. Inouye
Distribution matching can be used to learn invariant representations with applications in fairness and robustness.
1 code implementation • 20 Oct 2023 • Yifei He, Haoxiang Wang, Bo Li, Han Zhao
Unsupervised domain adaptation (UDA) adapts a model from a labeled source domain to an unlabeled target domain in a one-off way.
no code implementations • 16 Oct 2023 • Makoto Yamada, Yuki Takezawa, Guillaume Houry, Kira Michaela Dusterwald, Deborah Sulem, Han Zhao, Yao-Hung Hubert Tsai
We find that the model performance depends on the combination of TWD and probability model, and that the Jeffrey divergence regularization helps in model training.
1 code implementation • 10 Oct 2023 • Bowen Jin, Wentao Zhang, Yu Zhang, Yu Meng, Han Zhao, Jiawei Han
Mainstream text representation learning methods use pretrained language models (PLMs) to generate one embedding for each text unit, expecting that all types of relations between texts can be captured by these single-view embeddings.
no code implementations • 10 Oct 2023 • Zikun Chen, Han Zhao, Parham Aarabi, Ruowei Jiang
We propose a novel framework SC$^2$GAN that achieves disentanglement by re-projecting low-density latent code samples in the original latent space and correcting the editing directions based on both the high-density and low-density regions.
no code implementations • 12 Sep 2023 • Yong Lin, Hangyu Lin, Wei Xiong, Shizhe Diao, Jianmeng Liu, Jipeng Zhang, Rui Pan, Haoxiang Wang, Wenbin Hu, Hanning Zhang, Hanze Dong, Renjie Pi, Han Zhao, Nan Jiang, Heng Ji, Yuan YAO, Tong Zhang
Building on the analysis and the observation that averaging different layers of the transformer leads to significantly different reward-tax trade-offs, we propose Adaptive Model Averaging (AMA) to adaptively find various combination ratios of model layers.
1 code implementation • 15 Jun 2023 • Xiaotian Han, Jianfeng Chi, Yu Chen, Qifan Wang, Han Zhao, Na Zou, Xia Hu
This paper introduces the Fair Fairness Benchmark (\textsf{FFB}), a benchmarking framework for in-processing group fairness methods.
1 code implementation • 5 Jun 2023 • Shikun Liu, Tianchun Li, Yongbin Feng, Nhan Tran, Han Zhao, Qiu Qiang, Pan Li
This work examines different impacts of distribution shifts caused by either graph structure or node attributes and identifies a new type of shift, named conditional structure shift (CSS), which current GDA approaches are provably sub-optimal to deal with.
no code implementations • 22 May 2023 • Chi Han, Ziqi Wang, Han Zhao, Heng Ji
Then, we empirically investigate the in-context behaviors of language models.
1 code implementation • 20 Apr 2023 • Costas Mavromatis, Vassilis N. Ioannidis, Shen Wang, Da Zheng, Soji Adeshina, Jun Ma, Han Zhao, Christos Faloutsos, George Karypis
Different from conventional knowledge distillation, GRAD jointly optimizes a GNN teacher and a graph-free student over the graph's nodes via a shared LM.
no code implementations • CVPR 2023 • Qian Jiang, Changyou Chen, Han Zhao, Liqun Chen, Qing Ping, Son Dinh Tran, Yi Xu, Belinda Zeng, Trishul Chilimbi
Hence we advocate that the key of better performance lies in meaningful latent modality structures instead of perfect modality alignment.
no code implementations • 27 Jan 2023 • Fei Pan, Yutong Wu, Kangning Cui, Shuxun Chen, Yanfang Li, Yaofang Liu, Adnan Shakoor, Han Zhao, Beijia Lu, Shaohua Zhi, Raymond Chan, Dong Sun
In this study, we developed a novel deep-learning algorithm called dual-view selective instance segmentation network (DVSISN) for segmenting unstained adherent cells in differential interference contrast (DIC) images.
1 code implementation • 28 Nov 2022 • Yuzheng Hu, Fan Wu, Hongyang Zhang, Han Zhao
More specifically, we demonstrate that while the constraint of adversarial robustness consistently degrades the standard accuracy in the balanced class setting, the class imbalance ratio plays a fundamentally different role in accuracy disparity compared to the Gaussian case, due to the heavy tail of the stable distribution.
1 code implementation • 3 Nov 2022 • Ruicheng Xian, Lang Yin, Han Zhao
To mitigate the bias exhibited by machine learning models, fairness criteria can be integrated into the training process to ensure fair treatment across all demographics, but it often comes at the expense of model performance.
no code implementations • 22 Oct 2022 • Runxiang Cheng, Gargi Balasubramaniam, Yifei He, Yao-Hung Hubert Tsai, Han Zhao
We formulate a theoretical framework for optimizing modality selection in multimodal learning and introduce a utility measure to quantify the benefit of selecting a modality.
no code implementations • 27 Sep 2022 • Ruikang Luo, Yaofeng Song, Han Zhao, YiCheng Zhang, Yi Zhang, Nanbin Zhao, Liping Huang, Rong Su
Accurate vehicle type classification plays a significant role in the intelligent transportation system.
no code implementations • 7 Sep 2022 • Yaofeng Song, Han Zhao, Ruikang Luo, Liping Huang, YiCheng Zhang, Rong Su
To better serve further research on various related Deep Reinforcement Learning (DRL) EV dispatching algorithms, an efficient simulation environment is necessary to ensure success.
1 code implementation • 1 Sep 2022 • Zikun Chen, Ruowei Jiang, Brendan Duke, Han Zhao, Parham Aarabi
Generative Adversarial Networks (GANs) have been widely applied in modeling diverse image distributions.
no code implementations • 31 Aug 2022 • Kaifang Long, Jikun Dong, Shengyu Fan, Yanfang Geng, Yang Cao, Han Zhao, Hui Yu, Weizhi Xu
Recently, with the continuous development of deep learning, the performance of named entity recognition tasks has been dramatically improved.
no code implementations • 21 Jul 2022 • Wenda Chu, Chulin Xie, Boxin Wang, Linyi Li, Lang Yin, Arash Nourian, Han Zhao, Bo Li
However, due to the heterogeneous nature of local data, it is challenging to optimize or even define fairness of the trained global model for the agents.
1 code implementation • 23 May 2022 • Jianfeng Chi, William Shand, Yaodong Yu, Kai-Wei Chang, Han Zhao, Yuan Tian
Contrastive representation learning has gained much attention due to its superior performance in learning representations from both image and sequential data.
no code implementations • 25 Apr 2022 • Jing Dong, Shiji Zhou, Baoxiang Wang, Han Zhao
We thus study the problem of supervised gradual domain adaptation, where labeled data from shifting distributions are available to the learner along the trajectory, and we aim to learn a classifier on a target data distribution of interest.
2 code implementations • 18 Apr 2022 • Haoxiang Wang, Bo Li, Han Zhao
Gradual domain adaptation (GDA), on the other hand, assumes a path of $(T-1)$ unlabeled intermediate domains bridging the source and target, and aims to provide better generalization in the target domain by leveraging the intermediate ones.
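The standard baseline GDA analyzes is gradual self-training: pseudo-label each intermediate domain with the current model, refit on the pseudo-labels, and repeat along the path toward the target. A minimal 1-D sketch (threshold classifier, synthetic drifting data) looks like this:

```python
def fit_threshold(xs, ys):
    """Fit a 1-D classifier predicting 1 iff x >= threshold, placing the
    threshold midway between the two class means."""
    mean0 = sum(x for x, y in zip(xs, ys) if y == 0) / max(1, ys.count(0))
    mean1 = sum(x for x, y in zip(xs, ys) if y == 1) / max(1, ys.count(1))
    return (mean0 + mean1) / 2

def gradual_self_train(labeled_source, intermediate_domains):
    """Gradual self-training: pseudo-label each unlabeled intermediate
    domain with the current model, then refit on those pseudo-labels,
    walking the classifier from the source toward the target."""
    xs, ys = labeled_source
    t = fit_threshold(xs, ys)
    for domain in intermediate_domains:          # each is a list of unlabeled x
        pseudo = [int(x >= t) for x in domain]   # pseudo-label with current model
        t = fit_threshold(domain, pseudo)
    return t
```

With slowly shifting domains the threshold tracks the drift, whereas fitting on the source alone would leave it stuck at the source optimum.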
1 code implementation • MMMPIE (COLING) 2022 • Zhenhailong Wang, Hang Yu, Manling Li, Han Zhao, Heng Ji
While much literature has been devoted to exploring alternative optimization strategies, we identify another essential aspect of effective few-shot transfer learning, task sampling, which was previously viewed only as part of data pre-processing in MAML.
1 code implementation • ICLR 2022 • Yao-Hung Hubert Tsai, Tianqin Li, Martin Q. Ma, Han Zhao, Kun Zhang, Louis-Philippe Morency, Ruslan Salakhutdinov
Conditional contrastive learning frameworks consider the conditional sampling procedure that constructs positive or negative data pairs conditioned on specific variables.
1 code implementation • 30 Jan 2022 • Haoxiang Wang, Haozhe Si, Bo Li, Han Zhao
Our first algorithm, ISR-Mean, can identify the subspace spanned by invariant features from the first-order moments of the class-conditional distributions, and achieve provable domain generalization with $d_s+1$ training environments under the data model of Rosenfeld et al. (2021).
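The first-order-moment idea can be illustrated in a deliberately simplified, axis-aligned form: a feature coordinate is invariant when its class-conditional mean is identical across training environments. ISR-Mean itself recovers a full linear subspace (not just coordinates) from the spectrum of mean differences; this sketch only handles the axis-aligned special case.

```python
def invariant_coords(env_class_means, tol=1e-6):
    """Axis-aligned sketch of ISR-Mean: keep coordinates whose
    class-conditional means agree (numerically) across all environments.
    `env_class_means[e][c]` is the mean feature vector of class c in
    environment e."""
    envs = list(env_class_means)
    classes = list(env_class_means[envs[0]])
    dim = len(env_class_means[envs[0]][classes[0]])
    keep = []
    for d in range(dim):
        stable = all(
            abs(env_class_means[e][c][d] - env_class_means[envs[0]][c][d]) < tol
            for e in envs for c in classes
        )
        if stable:
            keep.append(d)
    return keep
```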
no code implementations • CVPR 2022 • Huajie Shao, Yifei Yang, Haohong Lin, Longzhong Lin, Yizhuo Chen, Qinmin Yang, Han Zhao
It has shown success in a variety of applications, such as image generation, disentangled representation learning, and language modeling.
no code implementations • 7 Dec 2021 • Han Zhao, Lei Guo
In this paper, a novel model-free algorithm is proposed.
1 code implementation • 19 Nov 2021 • Jianfeng Chi, Jian Shen, Xinyi Dai, Weinan Zhang, Yuan Tian, Han Zhao
We first provide a decomposition theorem for return disparity, which decomposes the return disparity of any two MDPs sharing the same state and action spaces into the distance between group-wise reward functions, the discrepancy of group policies, and the discrepancy between state visitation distributions induced by the group policies.
no code implementations • 25 Oct 2021 • Lei Guo, Han Zhao, Yuan Song
First, the chattering deficiencies of traditional SMC and the quasi-SMC method are analyzed in this paper.
1 code implementation • 16 Oct 2021 • Yan Shen, Jian Du, Han Zhao, Benyu Zhang, Zhanghexuan Ji, Mingchen Gao
Federated adversary domain adaptation is a unique distributed minimax training task due to the prevalence of label imbalance among clients, with each client only seeing a subset of the classes of labels required to train a global model.
no code implementations • ICLR 2022 • Ruicheng Xian, Heng Ji, Han Zhao
Recent advances in neural modeling have produced deep multilingual language models capable of extracting cross-lingual knowledge from non-parallel texts, as evidenced by their decent zero-shot transfer performance.
no code implementations • 29 Sep 2021 • Xiaoyang Wang, Han Zhao, Klara Nahrstedt, Oluwasanmi O Koyejo
To this end, we propose a strategy to mitigate the effect of spurious features based on our observation that the global model in the federated learning step has a low accuracy disparity due to statistical heterogeneity.
no code implementations • 16 Jun 2021 • Han Zhao
In this paper, we characterize the inherent tradeoff between statistical parity and accuracy in the regression setting by providing a lower bound on the error of any fair regressor.
1 code implementation • 16 Jun 2021 • Haoxiang Wang, Han Zhao, Bo Li
Despite the subtle difference between MTL and meta-learning in the problem formulation, both learning paradigms share the same insight that the shared structure between existing training tasks could lead to better generalization and adaptation.
Ranked #17 on Few-Shot Image Classification on FC100 5-way (1-shot)
no code implementations • 11 Jun 2021 • Bo Li, Yifei Shen, Yezhen Wang, Wenzhen Zhu, Colorado J. Reed, Jun Zhang, Dongsheng Li, Kurt Keutzer, Han Zhao
IIB significantly outperforms IRM on synthetic datasets where pseudo-invariant features and geometric skews occur, showing the effectiveness of the proposed formulation in overcoming the failure modes of IRM.
no code implementations • 11 Jun 2021 • Shiji Zhou, Han Zhao, Shanghang Zhang, Lianzhe Wang, Heng Chang, Zhi Wang, Wenwu Zhu
Our theoretical results show that OSAMD can fast adapt to changing environments with active queries.
2 code implementations • NeurIPS 2021 • Guojun Zhang, Han Zhao, YaoLiang Yu, Pascal Poupart
We then prove that our transferability can be estimated with enough samples and give a new upper bound for the target error based on our transferability.
no code implementations • 5 Jun 2021 • Martin Q. Ma, Yao-Hung Hubert Tsai, Paul Pu Liang, Han Zhao, Kun Zhang, Ruslan Salakhutdinov, Louis-Philippe Morency
In this paper, we propose a Conditional Contrastive Learning (CCL) approach to improve the fairness of contrastive SSL methods.
no code implementations • 19 May 2021 • Lei Guo, Han Zhao
In this paper, we present a novel algorithm named synchronous integral Q-learning, which is based on synchronous policy iteration, to solve the continuous-time infinite horizon optimal control problems of input-affine system dynamics.
no code implementations • 23 Mar 2021 • Xiaolong Chen, Wenyu Liang, Han Zhao, Abdullah Al Mamun
Ultrasonic motors (USMs) are commonly used in aerospace, robotics, and medical devices, where fast and precise motion is needed.
1 code implementation • ICLR 2021 • Yao-Hung Hubert Tsai, Martin Q. Ma, Muqiao Yang, Han Zhao, Louis-Philippe Morency, Ruslan Salakhutdinov
This paper introduces Relative Predictive Coding (RPC), a new contrastive representation learning objective that maintains a good balance among training stability, minibatch size sensitivity, and downstream task performance.
1 code implementation • 24 Feb 2021 • Jianfeng Chi, Yuan Tian, Geoffrey J. Gordon, Han Zhao
With the widespread deployment of large-scale prediction systems in high-stakes domains, e.g., face recognition, criminal justice, etc., disparity in prediction accuracy between different demographic subgroups has called for fundamental understanding on the source of such disparity and algorithmic intervention to mitigate it.
no code implementations • 29 Jan 2021 • Xuecong Sun, Han Jia, Yuzhen Yang, Han Zhao, Yafeng Bi, Zhaoyong Sun, Jun Yang
From ancient to modern times, acoustic structures have been used to control the propagation of acoustic waves.
1 code implementation • ICLR 2021 • Peizhao Li, Yifei Wang, Han Zhao, Pengyu Hong, Hongfu Liu
Disparate impact has raised serious concerns in machine learning applications and its societal impacts.
no code implementations • NeurIPS 2023 • Han Zhao, Chen Dan, Bryon Aragam, Tommi S. Jaakkola, Geoffrey J. Gordon, Pradeep Ravikumar
A wide range of machine learning applications such as privacy-preserving learning, algorithmic fairness, and domain adaptation/generalization among others, involve learning invariant representations of the data that aim to achieve two competing goals: (a) maximize information or accuracy with respect to a target response, and (b) maximize invariance or independence with respect to a set of protected features (e.g., for fairness, privacy, etc.).
1 code implementation • NeurIPS 2020 • Jian Shen, Han Zhao, Weinan Zhang, Yong Yu
However, due to the potential distribution mismatch between simulated data and real data, this could lead to degraded performance.
no code implementations • CVPR 2021 • Bo Li, Yezhen Wang, Shanghang Zhang, Dongsheng Li, Trevor Darrell, Kurt Keutzer, Han Zhao
First, we provide a finite sample bound for both classification and regression problems under Semi-DA.
1 code implementation • 28 Sep 2020 • Peiyuan Liao, Han Zhao, Keyulu Xu, Tommi Jaakkola, Geoffrey Gordon, Stefanie Jegelka, Ruslan Salakhutdinov
While the advent of Graph Neural Networks (GNNs) has greatly improved node and graph representation learning in many applications, the neighborhood aggregation scheme exposes additional vulnerabilities to adversaries seeking to extract node-level information about sensitive attributes.
no code implementations • 15 Sep 2020 • Huajie Shao, Haohong Lin, Qinmin Yang, Shuochao Yao, Han Zhao, Tarek Abdelzaher
Existing methods, such as $\beta$-VAE and FactorVAE, assign a large weight to the KL-divergence term in the objective function, leading to high reconstruction errors for the sake of better disentanglement.
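The tradeoff described above comes straight from the $\beta$-VAE objective: reconstruction error plus $\beta$ times the KL term. The toy selection below illustrates how a large $\beta$ favors low-KL (more disentangled) solutions at the cost of reconstruction quality; the candidate numbers are made up for illustration.

```python
def beta_vae_loss(recon_error, kl_divergence, beta):
    """beta-VAE objective: reconstruction error plus beta times the KL
    divergence between the approximate posterior and the prior."""
    return recon_error + beta * kl_divergence

def preferred(candidates, beta):
    """Among (recon_error, kl) candidate solutions, return the one the
    beta-VAE objective would prefer at this beta."""
    return min(candidates, key=lambda rk: beta_vae_loss(rk[0], rk[1], beta))
```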
1 code implementation • 1 Sep 2020 • Sicheng Zhao, Xiangyu Yue, Shanghang Zhang, Bo Li, Han Zhao, Bichen Wu, Ravi Krishna, Joseph E. Gonzalez, Alberto L. Sangiovanni-Vincentelli, Sanjit A. Seshia, Kurt Keutzer
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
no code implementations • ICML 2020 • Han Zhao, Junjie Hu, Andrej Risteski
The goal of universal machine translation is to learn to translate between any pair of languages, given a corpus of paired translated documents for \emph{a small subset} of all pairs of languages.
1 code implementation • NeurIPS 2020 • Yao-Hung Hubert Tsai, Han Zhao, Makoto Yamada, Louis-Philippe Morency, Ruslan Salakhutdinov
Since its inception, the neural estimation of mutual information (MI) has demonstrated the empirical success of modeling expected dependency between high-dimensional random variables.
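Neural MI estimators maximize a variational lower bound over a learned critic; the classic one is the Donsker-Varadhan bound, $I(X;Y) \ge \mathbb{E}_{p(x,y)}[T] - \log \mathbb{E}_{p(x)p(y)}[e^{T}]$. The sketch below evaluates this bound for a fixed critic on samples (the critic here is a toy hand-written function, not a trained network):

```python
import math

def dv_lower_bound(critic, joint_samples, marginal_samples):
    """Donsker-Varadhan lower bound on mutual information:
    E_joint[T(x, y)] - log E_marginal[exp(T(x, y))]."""
    e_joint = sum(critic(x, y) for x, y in joint_samples) / len(joint_samples)
    e_marg = sum(math.exp(critic(x, y)) for x, y in marginal_samples) / len(marginal_samples)
    return e_joint - math.log(e_marg)
```

Any critic yields a valid lower bound; training the critic (as neural estimators do) tightens it toward the true MI.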
1 code implementation • NeurIPS 2020 • Remi Tachet, Han Zhao, Yu-Xiang Wang, Geoff Gordon
However, recent work has shown limitations of this approach when label distributions differ between the source and target domains.
no code implementations • ICLR 2020 • Tameem Adel, Han Zhao, Richard E. Turner
Approaches to continual learning aim to successfully learn a set of related tasks that arrive in an online manner.
1 code implementation • ICLR 2020 • Han Zhao, Amanda Coston, Tameem Adel, Geoffrey J. Gordon
We propose a novel algorithm for learning fair representations that can simultaneously mitigate two notions of disparity among different demographic subgroups in the classification setting.
no code implementations • 25 Sep 2019 • Han Zhao, Jianfeng Chi, Yuan Tian, Geoffrey J. Gordon
With the prevalence of machine learning services, crowdsourced data containing sensitive information poses substantial privacy challenges.
1 code implementation • NeurIPS 2019 • Han Zhao, Yao-Hung Hubert Tsai, Ruslan Salakhutdinov, Geoffrey J. Gordon
Feed-forward neural networks can be understood as a combination of an intermediate representation and a linear hypothesis.
no code implementations • NeurIPS 2019 • Han Zhao, Geoffrey J. Gordon
On the upside, we prove that if the group-wise Bayes optimal classifiers are close, then learning fair representations leads to an alternative notion of fairness, known as the accuracy parity, which states that the error rates are close between groups.
no code implementations • NeurIPS 2020 • Han Zhao, Jianfeng Chi, Yuan Tian, Geoffrey J. Gordon
Meanwhile, it is clear that in general there is a tension between minimizing information leakage and maximizing task accuracy.
no code implementations • ICLR 2019 • Han Zhao, Yao-Hung Hubert Tsai, Ruslan Salakhutdinov, Geoff Gordon
Learning deep neural networks could be understood as the combination of representation learning and learning halfspaces.
2 code implementations • 27 Jan 2019 • Han Zhao, Remi Tachet des Combes, Kun Zhang, Geoffrey J. Gordon
Our result characterizes a fundamental tradeoff between learning invariant representations and achieving small joint error on both domains when the marginal label distributions differ from source to target.
no code implementations • NeurIPS 2018 • Han Zhao, Shanghang Zhang, Guanhang Wu, José M. F. Moura, Joao P. Costeira, Geoffrey J. Gordon
In this paper we propose new generalization bounds and algorithms under both classification and regression settings for unsupervised multiple source domain adaptation.
Ranked #3 on Domain Adaptation on GTA5+Synscapes to Cityscapes
1 code implementation • 16 Jun 2018 • Yichong Xu, Han Zhao, Xiaofei Shi, Jeremy Zhang, Nihar B. Shah
We then empirically show that the requisite property on the authorship graph is indeed satisfied in the submission data from the ICLR conference, and further demonstrate a simple trick to make the partitioning method more practically appealing for conference peer review.
no code implementations • 2 May 2018 • Han Zhao, Shuayb Zarar, Ivan Tashev, Chin-Hui Lee
By incorporating prior knowledge of speech signals into the design of model structures, we build a model that is more data-efficient and achieves better generalization on both seen and unseen noise.
no code implementations • 19 Jan 2018 • Chen Liang, Jianbo Ye, Han Zhao, Bart Pursel, C. Lee Giles
Strict partial order is a mathematical structure commonly seen in relational data.
no code implementations • ICLR 2018 • Yao-Hung Hubert Tsai, Han Zhao, Nebojsa Jojic, Ruslan Salakhutdinov
The assumption that data samples are independently identically distributed is the backbone of many learning algorithms.
no code implementations • ICLR 2018 • Han Zhao, Shanghang Zhang, Guanhang Wu, João P. Costeira, José M. F. Moura, Geoffrey J. Gordon
We propose a new generalization bound for domain adaptation when there are multiple source domains with labeled instances and one target domain with unlabeled instances.
no code implementations • ICLR 2018 • Yao-Hung Hubert Tsai, Han Zhao, Ruslan Salakhutdinov, Nebojsa Jojic
In this technical report, we introduce OrderNet that can be used to extract the order of data instances in an unsupervised way.
no code implementations • 20 Jun 2017 • Han Zhao, Geoff Gordon
Symmetric nonnegative matrix factorization has found abundant applications in various domains by providing a symmetric low-rank decomposition of nonnegative matrices.
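Symmetric NMF seeks a nonnegative $H$ with $A \approx HH^\top$. A standard damped multiplicative update for this problem is $H \leftarrow H \circ \left(\tfrac{1}{2} + \tfrac{(AH)}{2\,(HH^\top H)}\right)$ (elementwise); the pure-Python sketch below shows it on a tiny rank-1 example. It is a generic illustration of the factorization, not the specific algorithm of the paper above.

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def symnmf(A, H, iters=300, eps=1e-9):
    """Damped multiplicative updates for symmetric NMF, A ~= H H^T with
    H >= 0; A must be symmetric and nonnegative. At a fixed point the
    ratio (AH) / (H H^T H) equals 1 elementwise."""
    for _ in range(iters):
        AH = matmul(A, H)
        HHtH = matmul(matmul(H, transpose(H)), H)
        H = [[H[i][j] * (0.5 + AH[i][j] / (2 * HHtH[i][j] + eps))
              for j in range(len(H[0]))] for i in range(len(H))]
    return H
```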
4 code implementations • 26 May 2017 • Han Zhao, Shanghang Zhang, Guanhang Wu, João P. Costeira, José M. F. Moura, Geoffrey J. Gordon
As a step toward bridging the gap, we propose a new generalization bound for domain adaptation when there are multiple source domains with labeled instances and one target domain with unlabeled instances.
no code implementations • ICLR 2018 • Han Zhao, Zhenyao Zhu, Junjie Hu, Adam Coates, Geoff Gordon
This provides us a very general way to interpolate between generative and discriminative extremes through different choices of priors.
no code implementations • NeurIPS 2017 • Han Zhao, Geoff Gordon
We propose a dynamic programming method to further reduce the computation of the moments of all the edges in the graph from quadratic to linear.
no code implementations • 14 Feb 2017 • Han Zhao, Otilia Stretcu, Alex Smola, Geoff Gordon
In this paper, we consider a formulation of multitask learning that learns the relationships both between tasks and between features, represented through a task covariance and a feature covariance matrix, respectively.
no code implementations • NeurIPS 2016 • Han Zhao, Pascal Poupart, Geoff Gordon
We present a unified approach for learning the parameters of Sum-Product networks (SPNs).
1 code implementation • 20 Apr 2015 • Han Zhao, Zhengdong Lu, Pascal Poupart
The ability to accurately model a sentence at varying stages (e.g., word-phrase-sentence) plays a central role in natural language processing.
Ranked #5 on Subjectivity Analysis on SUBJ
no code implementations • 6 Jan 2015 • Han Zhao, Mazen Melibari, Pascal Poupart
We conclude the paper with some discussion of the implications of the proof and establish a connection between the depth of an SPN and a lower bound of the tree-width of its corresponding BN.
no code implementations • 18 Jun 2014 • Han Zhao, Pascal Poupart
In contrast, maximum likelihood estimates may get trapped in local optima due to the non-convex nature of the likelihood function of latent variable models.