no code implementations • 24 Nov 2021 • Yicong Li, Hongxu Chen, Yile Li, Lin Li, Philip S. Yu, Guandong Xu
Recent advances in path-based explainable recommendation systems have attracted increasing attention thanks to the rich information provided by knowledge graphs.
no code implementations • 3 Jan 2023 • Yicong Li, Yang Tan, Jingyun Yang, Yang Li, Xiao-Ping Zhang
Furthermore, within the same modality, transferring from a source task that has stronger RoI shape similarity to the target task can significantly improve the final transfer performance.
no code implementations • 1 Feb 2023 • Jingpeng Wu, Yicong Li, Nishika Gupta, Kazunori Shinomiya, Pat Gunn, Alexey Polilov, Hanspeter Pfister, Dmitri Chklovskii, Donglai Wei
The size of image stacks in connectomics studies now reaches the terabyte and often petabyte scales with a great diversity of appearance across brain regions and samples.
no code implementations • 8 Feb 2023 • Tri Nguyen, Mukul Narwani, Mark Larson, Yicong Li, Shuhan Xie, Hanspeter Pfister, Donglai Wei, Nir Shavit, Lu Mi, Alexandra Pacureanu, Wei-Chung Lee, Aaron T. Kuan
In this task, we provide volumetric XNH images of cortical white matter axons from the mouse brain along with ground truth annotations for axon trajectories.
no code implementations • 2 Mar 2023 • Yicong Li, Yaron Meirovitch, Aaron T. Kuan, Jasper S. Phelps, Alexandra Pacureanu, Wei-Chung Allen Lee, Nir Shavit, Lu Mi
Comprehensive, synapse-resolution imaging of the brain will be crucial for understanding neuronal computations and function.
no code implementations • 7 Aug 2023 • Yicong Li, Xun Yang, An Zhang, Chun Feng, Xiang Wang, Tat-Seng Chua
This paper identifies two kinds of redundancy in the current VideoQA paradigm.
no code implementations • 11 Jan 2024 • Yicong Li, Xiangguo Sun, Hongxu Chen, Sixiao Zhang, Yu Yang, Guandong Xu
Unfortunately, these attention weights are intentionally designed for model accuracy but not explainability.
no code implementations • 7 Mar 2024 • Shuaiqi Liu, Jiannong Cao, Yicong Li, Ruosong Yang, Zhiyuan Wen
Current summarization datasets are insufficient to satisfy the demands of summarizing precedents across multiple jurisdictions, especially when labeled data are scarce for many jurisdictions.
1 code implementation • ICCV 2023 • Yicong Li, Junbin Xiao, Chun Feng, Xiang Wang, Tat-Seng Chua
We then conduct extensive studies to verify the importance of STR as well as the proposed answer interaction mechanism.
1 code implementation • 26 Jul 2022 • Yicong Li, Xiang Wang, Junbin Xiao, Tat-Seng Chua
Specifically, equivariant grounding encourages the answer to be sensitive to semantic changes in the causal scene and question; in contrast, invariant grounding enforces the answer to be insensitive to changes in the environment scene.
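The equivariant/invariant grounding idea above can be illustrated with a minimal numpy sketch. This is a toy rendering of the general principle, not the paper's actual objective: predictions should move when the causal content is edited, and stay put when only the environment changes. All names and distances here are illustrative assumptions.

```python
import numpy as np

def grounding_losses(ans_causal, ans_causal_edit, ans_env_a, ans_env_b):
    """Toy losses for equivariant/invariant grounding (illustrative only).

    ans_causal / ans_causal_edit: predictions before and after editing the
        causal scene or question -- these SHOULD differ (equivariance).
    ans_env_a / ans_env_b: predictions under two different environment
        scenes -- these should NOT differ (invariance).
    """
    # Equivariant term: reward a large gap between the two predictions
    # when the causal content changed (negative distance = lower loss).
    equi = -np.linalg.norm(ans_causal - ans_causal_edit)
    # Invariant term: penalize any gap when only the environment changed.
    inv = np.linalg.norm(ans_env_a - ans_env_b)
    return equi, inv
```

In a real model both terms would be computed on answer logits and combined with the standard QA loss; here they just make the two opposing pressures concrete.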
1 code implementation • 27 Feb 2023 • Junbin Xiao, Pan Zhou, Angela Yao, Yicong Li, Richang Hong, Shuicheng Yan, Tat-Seng Chua
CoVGT is unique and superior in three ways: 1) it proposes a dynamic graph transformer module that encodes video by explicitly capturing visual objects, their relations, and their dynamics for complex spatio-temporal reasoning.
Ranked #12 on Video Question Answering on NExT-QA (using extra training data)
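The core of a graph transformer as described above is self-attention restricted to a relation graph over visual objects. A minimal numpy sketch, with hypothetical node features and an adjacency mask standing in for detected object relations (not the CoVGT implementation):

```python
import numpy as np

def masked_self_attention(X, adj):
    """Single-head self-attention over object nodes, masked by relations.

    X:   (n, d) node features, one row per detected object.
    adj: (n, n) 0/1 relation matrix; attention flows only along edges.
    """
    d = X.shape[1]
    logits = (X @ X.T) / np.sqrt(d)          # pairwise affinities
    logits = np.where(adj > 0, logits, -1e9)  # block non-related pairs
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)      # row-wise softmax
    return w @ X                              # aggregate neighbor features
```

Running this per frame (with adjacency updated as objects move) gives the "dynamic" part; real models add learned projections, multiple heads, and temporal layers on top.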
1 code implementation • 5 Jan 2021 • Hongxu Chen, Yicong Li, Xiangguo Sun, Guandong Xu, Hongzhi Yin
This paper uses item-item path modelling between consecutive items, combined with attention mechanisms, to sequentially model dynamic user-item evolution on a dynamic knowledge graph for explainable recommendation.
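The path-attention idea in the entry above can be sketched in a few lines of numpy: score candidate knowledge-graph paths between consecutive items against a user representation, and let the softmax weights double as the explanation. Embedding sizes and the path set are illustrative assumptions, not the paper's design.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
# Hypothetical embeddings for 4 candidate paths between two consecutive
# items, e.g. (item_i -> relation -> entity -> relation -> item_j).
path_emb = rng.normal(size=(4, 8))
user_emb = rng.normal(size=8)

# Attention over paths: the highest-weighted path is surfaced as the
# human-readable explanation for recommending item_j after item_i.
scores = path_emb @ user_emb
weights = softmax(scores)
best_path = int(np.argmax(weights))
```

The weighted sum of path embeddings would feed the recommendation score, while `best_path` indexes the path shown to the user.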
1 code implementation • 20 Jan 2022 • Sixiao Zhang, Hongxu Chen, Xiangguo Sun, Yicong Li, Guandong Xu
Extensive experiments show that our attack outperforms unsupervised baseline attacks and has comparable performance with supervised attacks in multiple downstream tasks including node classification and link prediction.
1 code implementation • 4 Sep 2023 • Junbin Xiao, Angela Yao, Yicong Li, Tat-Seng Chua
We study visually grounded VideoQA in response to the emerging trends of utilizing pretraining techniques for video-language understanding.
1 code implementation • CVPR 2022 • Yicong Li, Xiang Wang, Junbin Xiao, Wei Ji, Tat-Seng Chua
At its core is understanding the alignments between visual scenes in video and linguistic semantics in question to yield the answer.
1 code implementation • 12 Dec 2021 • Junbin Xiao, Angela Yao, Zhiyuan Liu, Yicong Li, Wei Ji, Tat-Seng Chua
To align with the multi-granular essence of linguistic concepts in language queries, we propose to model video as a conditional graph hierarchy which weaves together visual facts of different granularity in a level-wise manner, with the guidance of corresponding textual cues.
Ranked #24 on Video Question Answering on NExT-QA
1 code implementation • 2 Mar 2022 • Yaoyao Zhong, Junbin Xiao, Wei Ji, Yicong Li, Weihong Deng, Tat-Seng Chua
Video Question Answering (VideoQA) aims to answer natural language questions according to the given videos.
2 code implementations • CVPR 2022 • Richard J. Chen, Chengkuan Chen, Yicong Li, Tiffany Y. Chen, Andrew D. Trister, Rahul G. Krishnan, Faisal Mahmood
Vision Transformers (ViTs) and their multi-scale, hierarchical variants have been successful at capturing image representations, but their use has generally been studied for low-resolution images (e.g., 256x256, 384x384).
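Scaling ViTs to gigapixel images typically relies on hierarchical aggregation: patch-level tokens are pooled into region tokens, which are pooled again into an image-level representation. A minimal numpy sketch of that two-level pooling, with all dimensions and group sizes chosen for illustration (mean pooling stands in for the transformer aggregation each level would actually use):

```python
import numpy as np

def pool_tokens(tokens, group):
    """Average every `group` consecutive tokens into one coarser token."""
    n, d = tokens.shape
    assert n % group == 0, "token count must divide evenly into groups"
    return tokens.reshape(n // group, group, d).mean(axis=1)

rng = np.random.default_rng(1)
patch_tokens = rng.normal(size=(256, 64))      # 256 patch embeddings
region_tokens = pool_tokens(patch_tokens, 16)  # 16 region tokens
slide_token = pool_tokens(region_tokens, 16)   # 1 image-level token
```

In a real hierarchical ViT, each `pool_tokens` call would be a transformer stage with its own positional structure; the sketch only shows how the token count shrinks level by level so attention stays tractable.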