Search Results for author: Yicong Li

Found 13 papers, 8 papers with code

Contrastive Video Question Answering via Video Graph Transformer

1 code implementation • 27 Feb 2023 • Junbin Xiao, Pan Zhou, Angela Yao, Yicong Li, Richang Hong, Shuicheng Yan, Tat-Seng Chua

CoVGT's uniqueness and superiority are three-fold: 1) It proposes a dynamic graph transformer module which encodes video by explicitly capturing the visual objects, their relations and dynamics, for complex spatio-temporal reasoning.

Tasks: Contrastive Learning, Question Answering, +1
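The abstract above mentions a dynamic graph transformer that encodes video by relating detected objects. As a rough illustration only (not CoVGT's actual code; all names and shapes are assumptions), one self-attention step over per-frame object nodes might look like:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def object_graph_attention(nodes):
    """nodes: (num_objects, dim) object features for one frame.
    One attention (message-passing) step: score pairwise object
    relations, then aggregate features from related objects."""
    d = nodes.shape[1]
    attn = softmax(nodes @ nodes.T / np.sqrt(d))  # (num_objects, num_objects)
    return attn @ nodes                           # updated node features

rng = np.random.default_rng(0)
objs = rng.normal(size=(5, 32))   # 5 detected objects, 32-d features each
updated = object_graph_attention(objs)
```

The real module additionally models temporal dynamics across frames; this sketch covers only the intra-frame relational step.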

An Out-of-Domain Synapse Detection Challenge for Microwasp Brain Connectomes

no code implementations • 1 Feb 2023 • Jingpeng Wu, Yicong Li, Nishika Gupta, Kazunori Shinomiya, Pat Gunn, Alexey Polilov, Hanspeter Pfister, Dmitri Chklovskii, Donglai Wei

The size of image stacks in connectomics studies now reaches the terabyte and often petabyte scales with a great diversity of appearance across brain regions and samples.

Tasks: Domain Adaptation

Finding the Most Transferable Tasks for Brain Image Segmentation

no code implementations • 3 Jan 2023 • Yicong Li, Yang Tan, Jingyun Yang, Yang Li, Xiao-Ping Zhang

Furthermore, within the same modality, transferring from a source task with stronger RoI shape similarity to the target task can significantly improve the final transfer performance.

Tasks: Brain Image Segmentation, Image Segmentation, +2

Equivariant and Invariant Grounding for Video Question Answering

2 code implementations • 26 Jul 2022 • Yicong Li, Xiang Wang, Junbin Xiao, Tat-Seng Chua

Specifically, the equivariant grounding encourages the answering to be sensitive to the semantic changes in the causal scene and question; in contrast, the invariant grounding enforces the answering to be insensitive to the changes in the environment scene.

Tasks: Question Answering, Video Question Answering
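The abstract above contrasts two objectives: answers should change when the causal scene or question changes (equivariance) but stay stable under environment-scene edits (invariance). A toy sketch of such paired losses, using cosine-similarity surrogates (this is an illustration under assumed names, not the paper's implementation):

```python
import numpy as np

def cosine(a, b):
    # cosine similarity with a small epsilon for stability
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def equivariant_loss(ans, ans_causal_edit):
    # Editing the causal scene/question SHOULD change the answer
    # representation, so penalize high similarity between the two.
    return cosine(ans, ans_causal_edit)

def invariant_loss(ans, ans_env_edit):
    # Editing only the environment scene should NOT change the answer,
    # so penalize disagreement (low similarity).
    return 1.0 - cosine(ans, ans_env_edit)

rng = np.random.default_rng(0)
ans = rng.normal(size=8)                       # answer representation
total = (equivariant_loss(ans, rng.normal(size=8))
         + invariant_loss(ans, ans + 0.01 * rng.normal(size=8)))
```

Minimizing the sum pushes the model toward grounding its answer in the causal content rather than spurious background cues.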

Invariant Grounding for Video Question Answering

1 code implementation • CVPR 2022 • Yicong Li, Xiang Wang, Junbin Xiao, Wei Ji, Tat-Seng Chua

At its core is understanding the alignment between visual scenes in the video and linguistic semantics in the question to yield the answer.

Tasks: Question Answering, Video Question Answering

Scaling Vision Transformers to Gigapixel Images via Hierarchical Self-Supervised Learning

1 code implementation • CVPR 2022 • Richard J. Chen, Chengkuan Chen, Yicong Li, Tiffany Y. Chen, Andrew D. Trister, Rahul G. Krishnan, Faisal Mahmood

Vision Transformers (ViTs) and their multi-scale and hierarchical variants have been successful at capturing image representations, but their use has generally been studied for low-resolution images (e.g., 256x256, 384x384).

Tasks: Self-Supervised Learning, Survival Prediction

Video Question Answering: Datasets, Algorithms and Challenges

1 code implementation • 2 Mar 2022 • Yaoyao Zhong, Junbin Xiao, Wei Ji, Yicong Li, Weihong Deng, Tat-Seng Chua

Video Question Answering (VideoQA) aims to answer natural language questions according to the given videos.

Tasks: Question Answering, Video Question Answering

Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation

1 code implementation • 20 Jan 2022 • Sixiao Zhang, Hongxu Chen, Xiangguo Sun, Yicong Li, Guandong Xu

Extensive experiments show that our attack outperforms unsupervised baseline attacks and has comparable performance with supervised attacks in multiple downstream tasks including node classification and link prediction.

Tasks: Adversarial Attack, Contrastive Learning, +3

Video as Conditional Graph Hierarchy for Multi-Granular Question Answering

1 code implementation • 12 Dec 2021 • Junbin Xiao, Angela Yao, Zhiyuan Liu, Yicong Li, Wei Ji, Tat-Seng Chua

To align with the multi-granular essence of linguistic concepts in language queries, we propose to model video as a conditional graph hierarchy which weaves together visual facts of different granularity in a level-wise manner, with the guidance of corresponding textual cues.

Tasks: Question Answering, Video Question Answering, +1

Reinforcement Learning based Path Exploration for Sequential Explainable Recommendation

no code implementations • 24 Nov 2021 • Yicong Li, Hongxu Chen, Yile Li, Lin Li, Philip S. Yu, Guandong Xu

Recent advances in path-based explainable recommendation systems have attracted increasing attention thanks to the rich information provided by knowledge graphs.

Tasks: Explainable Recommendation, Knowledge Graphs, +3

Temporal Meta-path Guided Explainable Recommendation

1 code implementation • 5 Jan 2021 • Hongxu Chen, Yicong Li, Xiangguo Sun, Guandong Xu, Hongzhi Yin

This paper utilizes well-designed item-item path modelling between consecutive items with attention mechanisms to sequentially model dynamic user-item evolutions on a dynamic knowledge graph for explainable recommendation.

Tasks: Social and Information Networks
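The abstract above describes attention over item-item paths between consecutive items. As a rough illustration (not the paper's code; the path embeddings and query vector are assumed inputs), attention-weighted pooling of candidate meta-path embeddings could be sketched as:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attend_paths(path_embs, query):
    """path_embs: (num_paths, dim) embeddings of candidate item-item paths.
    query: (dim,) user/context vector. Scores each path against the query
    and returns the attention-weighted summary of the paths; the weights
    themselves can serve as the explanation (which path mattered most)."""
    scores = path_embs @ query     # (num_paths,) relevance of each path
    weights = softmax(scores)      # attention distribution over paths
    return weights @ path_embs     # (dim,) pooled path representation

rng = np.random.default_rng(1)
paths = rng.normal(size=(4, 16))   # 4 candidate item-item paths, 16-d each
user = rng.normal(size=16)
summary = attend_paths(paths, user)
```

In a sequential model, one such pooled representation per consecutive item pair would then feed a recurrent or transformer layer to track user-item evolution over time.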
