no code implementations • 30 Jun 2023 • Yajing Liu, Christina M Cole, Chris Peterson, Michael Kirby
A ReLU neural network leads to a finite polyhedral decomposition of input space and a corresponding finite dual graph.
no code implementations • 2 May 2023 • Huma Jamil, Yajing Liu, Turgay Caglar, Christina M. Cole, Nathaniel Blanchard, Christopher Peterson, Michael Kirby
Here, we investigate the potential for ReLU activation patterns (encoded as bit vectors) to aid in understanding and interpreting the behavior of neural networks.
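The idea of encoding ReLU activation patterns as bit vectors can be sketched in a few lines. Below is a minimal illustration with a hypothetical two-layer network and random weights (not the paper's architecture): each input maps to a bit vector recording which units fire, and inputs sharing a bit vector lie in the same polyhedral cell of the decomposition the network induces on input space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny 2-layer ReLU network with random weights.
W1, b1 = rng.standard_normal((4, 2)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((3, 4)), rng.standard_normal(3)

def activation_pattern(x):
    """Encode which ReLU units fire as a bit vector.

    Inputs sharing a bit vector lie in the same polyhedral cell
    of the decomposition the network induces on input space.
    """
    h1 = W1 @ x + b1
    bits1 = (h1 > 0).astype(int)
    h2 = W2 @ (h1 * bits1) + b2
    bits2 = (h2 > 0).astype(int)
    return np.concatenate([bits1, bits2])

pattern = activation_pattern(np.array([0.5, -0.2]))
# A slightly perturbed input typically stays in the same cell,
# so its bit vector usually matches.
nearby = activation_pattern(np.array([0.5, -0.2]) + 1e-6)
```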
no code implementations • CVPR 2023 • Yajing Liu, Yuning Lu, Hao Liu, Yaozu An, Zhuoran Xu, Zhuokun Yao, Baofeng Zhang, Zhiwei Xiong, Chenguang Gui
Considering this, we present Hierarchical Prompt (HiPro) learning, a simple and effective method for jointly adapting a pre-trained VLM to multiple downstream tasks.
no code implementations • 23 Nov 2022 • Huma Jamil, Yajing Liu, Christina M. Cole, Nathaniel Blanchard, Emily J. King, Michael Kirby, Christopher Peterson
This paper illustrates how one can utilize the dual graph to detect and analyze adversarial attacks in the context of digital images.
4 code implementations • 19 Jul 2022 • Yuning Lu, Liangjian Wen, Jianzhuang Liu, Yajing Liu, Xinmei Tian
Specifically, we maximize the mutual information (MI) of instances and their representations with a low-bias MI estimator to perform self-supervised pre-training.
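As a rough illustration of MI-based pre-training objectives, the sketch below computes the standard InfoNCE lower bound on mutual information between paired representations. This is one common MI estimator family, not necessarily the specific low-bias estimator the paper uses.

```python
import numpy as np

def infonce_lower_bound(z_x, z_y, temperature=0.1):
    """InfoNCE lower bound on MI between paired representations.

    Rows of z_x and z_y are matched positive pairs; all other
    rows serve as negatives. The bound is capped at log N.
    """
    z_x = z_x / np.linalg.norm(z_x, axis=1, keepdims=True)
    z_y = z_y / np.linalg.norm(z_y, axis=1, keepdims=True)
    logits = z_x @ z_y.T / temperature              # pairwise similarities
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    n = len(z_x)
    # Mean log-probability of the correct (diagonal) pairing plus log N.
    return np.log(n) + np.mean(np.diag(log_probs))
```

Maximizing this bound with respect to the encoder producing `z_x` and `z_y` pulls matched pairs together relative to negatives.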
no code implementations • 14 Jul 2022 • Hu Yu, Jie Huang, Yajing Liu, Qi Zhu, Man Zhou, Feng Zhao
Although certain Domain Adaptation (DA) dehazing methods have been presented, they inevitably require access to the source dataset to reduce the gap between the source synthetic and target real domains.
no code implementations • CVPR 2022 • Yuning Lu, Jianzhuang Liu, Yonggang Zhang, Yajing Liu, Xinmei Tian
We present prompt distribution learning for effectively adapting a pre-trained vision-language model to address downstream recognition tasks.
no code implementations • 12 Feb 2022 • Yajing Liu, Zhengya Sun, Wensheng Zhang
In this paper, we propose a Hierarchical Attention-based Graph Neural Network (HA-GNN) for fraud detection, which incorporates weighted adjacency matrices across different relations to counter fraudster camouflage.
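A minimal sketch of propagating node features over several relation graphs, each with its own weighted adjacency matrix, is shown below. The per-relation weights stand in for learned attention scores; this is a simplification for illustration, not the actual HA-GNN layer.

```python
import numpy as np

def relation_aggregate(features, adjs, rel_weights):
    """Aggregate node features over multiple relation graphs.

    Each relation has its own weighted adjacency matrix; rel_weights
    play the role of (hypothetical) attention weights over relations.
    """
    out = np.zeros_like(features)
    for A, w in zip(adjs, rel_weights):
        deg = A.sum(axis=1, keepdims=True)
        deg[deg == 0] = 1.0                     # guard isolated nodes
        out += w * (A / deg) @ features         # row-normalised propagation
    return out
```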
no code implementations • CVPR 2022 • Jie Huang, Yajing Liu, Xueyang Fu, Man Zhou, Yang Wang, Feng Zhao, Zhiwei Xiong
However, the procedures for correcting underexposure and overexposure to normal exposure differ substantially from each other, creating a large discrepancy for a single network that must correct multiple exposures and thus degrading its performance.
no code implementations • 8 Apr 2021 • Yajing Liu, Xiulian Peng, Zhiwei Xiong, Yan Lu
Specifically, we propose a phoneme-based distribution regularization (PbDr) for speech enhancement, which incorporates frame-wise phoneme information into the speech enhancement network in a conditional manner.
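Conditioning per-frame features on phoneme information can be sketched with a FiLM-style affine modulation, where frame-wise phoneme posteriors produce a scale and shift. This is an assumed simplification for illustration, not the paper's exact PbDr formulation.

```python
import numpy as np

def phoneme_conditioned(features, phoneme_post, W_gamma, W_beta):
    """Condition per-frame enhancement features on phoneme posteriors.

    phoneme_post: (T, P) frame-wise phoneme posteriors.
    W_gamma, W_beta: hypothetical learned projections to feature dim D.
    """
    gamma = phoneme_post @ W_gamma   # (T, D) per-frame scale
    beta = phoneme_post @ W_beta     # (T, D) per-frame shift
    return gamma * features + beta
```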
no code implementations • 13 Apr 2020 • Ahmed S. Zamzam, Yajing Liu, Andrey Bernstein
As electric grids experience high penetration levels of renewable generation, fundamental changes are required to address real-time situational awareness.
no code implementations • CVPR 2019 • Yajing Liu, Xinmei Tian, Ya Li, Zhiwei Xiong, Feng Wu
However, they view the distributions of features from different classes as a single general distribution and try to match these distributions across domains, which mixes features from different classes across domains and degrades classification performance.
no code implementations • ECCV 2018 • Ya Li, Xinmei Tian, Mingming Gong, Yajing Liu, Tongliang Liu, Kun Zhang, DaCheng Tao
Under the assumption that the conditional distribution $P(Y|X)$ remains unchanged across domains, earlier approaches to domain generalization learned the invariant representation $T(X)$ by minimizing the discrepancy of the marginal distribution $P(T(X))$.
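A standard way to measure the discrepancy of the marginal distribution $P(T(X))$ across domains is the squared maximum mean discrepancy (MMD) with an RBF kernel, sketched below. MMD is one common choice for this discrepancy; the specific measure used by earlier approaches may differ.

```python
import numpy as np

def mmd2(X, Y, sigma=1.0):
    """Biased squared MMD between two samples with an RBF kernel.

    X, Y: (n, d) and (m, d) samples of representations T(X) from
    two domains; zero when the samples coincide.
    """
    def k(A, B):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
```

Minimizing this quantity over the representation $T$ aligns the domains' marginal feature distributions.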
Ranked #67 on Domain Generalization on PACS