1 code implementation • 10 Jun 2023 • Weiyan Xie, Xiao-Hui Li, Zhi Lin, Leonard K. M. Poon, Caleb Chen Cao, Nevin L. Zhang
The need to explain the output of a deep neural network classifier is now widely recognized.
no code implementations • ICCV 2023 • Yunfei Guo, Fei Yin, Xiao-Hui Li, Xudong Yan, Tao Xue, Shuqi Mei, Cheng-Lin Liu
Although previous works on traffic scene understanding have achieved great success, most of them stop at a low-level perception stage, such as road segmentation and lane detection, and few address high-level understanding.
1 code implementation • 6 Nov 2022 • Weiyan Xie, Xiao-Hui Li, Caleb Chen Cao, Nevin L. Zhang
Despite the popularity of Vision Transformers (ViTs) and eXplainable AI (XAI), only a few explanation methods have been designed specifically for ViTs thus far.
2 code implementations • International Conference on Data Engineering 2022 • Shendi Wang, Haoyang Li, Caleb Chen Cao, Xiao-Hui Li, Ng Ngai Fai, Jianxin Liu, Xun Xue, Hu Song, Jinyu Li, Guangye Gu, Lei Chen
Recently, neural network-based models have been widely used in recommender systems (RS).
1 code implementation • 16 Mar 2022 • Nevin L. Zhang, Weiyan Xie, Zhi Lin, Guanfang Dong, Xiao-Hui Li, Caleb Chen Cao, Yunpeng Wang
Some examples are easier for humans to classify than others.
no code implementations • Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining 2021 • Xiao-Hui Li, Yuhan Shi, Haoyang Li, Wei Bai, Caleb Chen Cao, Lei Chen
It has long been argued that eXplainable AI (XAI) is an important technology for model and data exploration, validation, and debugging.
no code implementations • 31 Dec 2020 • Xiao-Hui Li, Yuhan Shi, Haoyang Li, Wei Bai, Yuanwei Song, Caleb Chen Cao, Lei Chen
It has long been argued that eXplainable AI (XAI) is an important topic, but it lacks rigorous definitions and fair metrics.