1 code implementation • 4 Sep 2023 • Yu-Neng Chuang, Guanchu Wang, Chia-Yuan Chang, Kwei-Herng Lai, Daochen Zha, Ruixiang Tang, Fan Yang, Alfredo Costilla Reyes, Kaixiong Zhou, Xiaoqian Jiang, Xia Hu
The exponential growth in scholarly publications necessitates advanced tools for efficient article retrieval, especially in interdisciplinary fields where diverse terminologies are used to describe similar research.
no code implementations • 14 Jul 2023 • Chia-Yuan Chang, Yu-Neng Chuang, Guanchu Wang, Mengnan Du, Na Zou
Domain generalization aims to learn a model that performs well on unseen test domains while training only on a limited set of source domains.
no code implementations • 9 Jun 2023 • Yao Rong, Guanchu Wang, Qizhang Feng, Ninghao Liu, Zirui Liu, Enkelejda Kasneci, Xia Hu
A strategy of subgraph sampling is designed in LARA to improve the scalability of the training process.
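The abstract mentions subgraph sampling for scalable training but gives no details of LARA's actual strategy; the sketch below shows generic uniform node sampling with induced edges, purely as an illustration of the idea (all names are hypothetical).

```python
import random

def sample_subgraph(edges, num_nodes, sample_size, seed=0):
    """Uniformly sample a node subset and return the induced subgraph.

    A generic sketch of subgraph sampling for scalable training on
    large graphs; LARA's actual sampling strategy is not specified
    in the abstract and may differ.
    """
    rng = random.Random(seed)
    kept = set(rng.sample(range(num_nodes), sample_size))
    # Keep only edges whose endpoints both survived the node sampling.
    induced = [(u, v) for u, v in edges if u in kept and v in kept]
    return kept, induced

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
nodes, sub_edges = sample_subgraph(edges, num_nodes=4, sample_size=3)
```

Training on such induced subgraphs bounds the per-step memory by the sample size rather than the full graph size.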
no code implementations • 21 Apr 2023 • Guanchu Wang, Ninghao Liu, Daochen Zha, Xia Hu
Anomaly detection, the discovery of data instances whose feature patterns differ from those of the majority, plays a fundamental role in various applications.
1 code implementation • NeurIPS 2023 • Zhimeng Jiang, Xiaotian Han, Hongye Jin, Guanchu Wang, Rui Chen, Na Zou, Xia Hu
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR) by considering the worst case within the model weight perturbation ball for each sensitive attribute group.
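The snippet mentions taking the worst case within a weight perturbation ball per sensitive-attribute group. As a rough illustration only, the sketch below approximates that worst case with a first-order (sharpness-aware-style) ascent step; the function names, the L2 ball, and the max-minus-min penalty are all assumptions, not the paper's exact RFR formulation.

```python
import numpy as np

def robust_fairness_penalty(w, group_losses, group_grads, radius=0.05):
    """Illustrative robust fairness regularizer.

    For each sensitive-attribute group, approximate the worst-case loss
    inside an L2 ball of weight perturbations via one first-order ascent
    step, then penalize the gap between groups' worst-case losses.
    This is a hypothetical sketch, not the paper's exact method.
    """
    worst = []
    for loss_fn, grad_fn in zip(group_losses, group_grads):
        g = grad_fn(w)
        norm = np.linalg.norm(g) + 1e-12
        # Perturbation direction that maximally increases this group's loss.
        w_adv = w + radius * g / norm
        worst.append(loss_fn(w_adv))
    return max(worst) - min(worst)

# Toy quadratic per-group losses around different optima.
loss_a = lambda w: float(np.sum((w - 1.0) ** 2))
loss_b = lambda w: float(np.sum((w + 1.0) ** 2))
grad_a = lambda w: 2.0 * (w - 1.0)
grad_b = lambda w: 2.0 * (w + 1.0)

penalty = robust_fairness_penalty(np.zeros(3), [loss_a, loss_b], [grad_a, grad_b])
```

At the symmetric point `w = 0` both groups' worst-case losses coincide, so the penalty vanishes; away from it the regularizer pushes the weights back toward equal worst-case group risk.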
1 code implementation • 5 Mar 2023 • Yu-Neng Chuang, Guanchu Wang, Fan Yang, Quan Zhou, Pushkar Tripathi, Xuanting Cai, Xia Hu
In this work, we propose a COntrastive Real-Time eXplanation (CoRTX) framework to learn explanation-oriented representations and reduce explainer training's heavy dependence on explanation labels.
no code implementations • 7 Feb 2023 • Yu-Neng Chuang, Guanchu Wang, Fan Yang, Zirui Liu, Xuanting Cai, Mengnan Du, Xia Hu
Finally, we summarize the challenges of deploying XAI acceleration methods to real-world scenarios, overcoming the trade-off between faithfulness and efficiency, and the selection of different acceleration methods.
Explainable Artificial Intelligence (XAI)
1 code implementation • 5 Aug 2022 • Guanchu Wang, Zirui Liu, Zhimeng Jiang, Ninghao Liu, Na Zou, Xia Hu
Activation compressed training provides a solution towards reducing the memory cost of training deep neural networks (DNNs).
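To make the idea of activation compressed training concrete, here is a minimal sketch: activations saved for the backward pass are quantized to 8-bit integers and dequantized when gradients are computed. This uses plain uniform quantization as an assumption; the paper's actual compression scheme may differ.

```python
import numpy as np

def compress(act, num_bits=8):
    """Quantize float activations to unsigned integers to cut the memory
    held between the forward and backward passes.

    A generic uniform-quantization sketch of activation compressed
    training, not the paper's specific algorithm.
    """
    lo = float(act.min())
    hi = float(act.max())
    scale = (hi - lo) / (2 ** num_bits - 1)
    if scale == 0.0:
        scale = 1.0  # constant activations: avoid division by zero
    q = np.round((act - lo) / scale).astype(np.uint8)
    return q, lo, scale

def decompress(q, lo, scale):
    # Reconstruct an approximation of the activations for the backward pass.
    return q.astype(np.float32) * scale + lo

act = np.linspace(-1.0, 1.0, 16).astype(np.float32)
q, lo, scale = compress(act)
rec = decompress(q, lo, scale)
```

The stored tensor shrinks from 32-bit floats to 8-bit integers (roughly 4x), at the cost of a bounded reconstruction error of at most half a quantization step.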
1 code implementation • 20 Jul 2022 • Guanchu Wang, Mengnan Du, Ninghao Liu, Na Zou, Xia Hu
Existing work on fairness modeling commonly assumes that sensitive attributes for all instances are fully available, which may not be true in many real-world applications due to the high cost of acquiring sensitive information.
1 code implementation • 17 Jun 2022 • Guanchu Wang, Yu-Neng Chuang, Mengnan Du, Fan Yang, Quan Zhou, Pushkar Tripathi, Xuanting Cai, Xia Hu
Although the Shapley value provides an effective explanation for a DNN model's predictions, its computation relies on enumerating all possible input feature coalitions, which leads to exponentially growing complexity.
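The exponential cost described here is easy to see in code: exact Shapley values require iterating over all 2^n coalitions. The toy value function below is illustrative, not from the paper.

```python
from itertools import combinations
from math import factorial

def exact_shapley(value_fn, n):
    """Exact Shapley values by enumerating every feature coalition.

    The inner loops visit all 2^n subsets, which is exactly the
    exponential blow-up that motivates approximation methods.
    """
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Standard Shapley weight for a coalition of size k.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (value_fn(set(S) | {i}) - value_fn(set(S)))
    return phi

# Toy additive game: each feature contributes a fixed amount.
contrib = [1.0, 2.0, 3.0]
v = lambda S: sum(contrib[j] for j in S)
print(exact_shapley(v, 3))  # additive game: Shapley value equals each contribution
```

For an additive game the Shapley value recovers each feature's fixed contribution, which makes the example easy to verify by hand; at n = 30 features the same loop would already need over a billion subset evaluations.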
3 code implementations • 14 Feb 2022 • Guanchu Wang, Zaid Pervaiz Bhat, Zhimeng Jiang, Yi-Wei Chen, Daochen Zha, Alfredo Costilla Reyes, Afshin Niktash, Gorkem Ulkar, Erman Okman, Xuanting Cai, Xia Hu
Deep neural networks (DNNs) have been an effective tool for data processing and analysis.
no code implementations • NeurIPS 2021 • Mengnan Du, Subhabrata Mukherjee, Guanchu Wang, Ruixiang Tang, Ahmed Hassan Awadallah, Xia Hu
This process not only requires many instance-level annotations of sensitive attributes, but also does not guarantee that all fairness-sensitive information has been removed from the encoder.
1 code implementation • 18 Sep 2020 • Kwei-Herng Lai, Daochen Zha, Guanchu Wang, Junjie Xu, Yue Zhao, Devesh Kumar, Yile Chen, Purav Zumkhawaka, Minyang Wan, Diego Martinez, Xia Hu
We present TODS, an automated Time Series Outlier Detection System for research and industrial applications.
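As a point of reference for what a time series outlier detector does, here is a minimal z-score baseline; this is a stand-alone illustration, not TODS's actual pipeline (TODS automates the construction of full detection pipelines).

```python
def zscore_outliers(series, threshold=3.0):
    """Flag points whose z-score exceeds a threshold.

    A minimal baseline time series outlier detector for illustration;
    TODS itself automates far richer detection pipelines.
    """
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    std = var ** 0.5
    if std == 0.0:
        return []  # constant series: nothing deviates
    return [i for i, x in enumerate(series) if abs(x - mean) / std > threshold]

data = [1.0] * 20 + [50.0] + [1.0] * 20
print(zscore_outliers(data))  # flags index 20, the single spike
```

The single spike at index 20 is the only point more than three standard deviations from the mean, so it is the only index returned.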