no code implementations • 25 Feb 2025 • Hans Hao-Hsun Hsu, Shikun Liu, Han Zhao, Pan Li
Graph-based learning has achieved remarkable success in domains ranging from recommendation to fraud detection and particle physics by effectively capturing underlying interaction patterns.
1 code implementation • 17 Feb 2025 • Haoyu Wang, Shikun Liu, Rongzhe Wei, Pan Li
Large language models (LLMs) have recently been introduced to graph learning, aiming to extend their zero-shot generalization success to tasks where labeled graph data is scarce.
1 code implementation • 11 Dec 2024 • Zijian Zhou, Shikun Liu, Xiao Han, Haozhe Liu, Kam Woh Ng, Tian Xie, Yuren Cong, Hang Li, Mengmeng Xu, Juan-Manuel Pérez-Rúa, Aditya Patel, Tao Xiang, Miaojing Shi, Sen He
Additionally, we show that our loss is model-agnostic and can be used to improve the performance of other diffusion models.
Ranked #1 on Pose Transfer on Deep-Fashion (FID metric)
no code implementations • 26 Oct 2024 • Haozhe Liu, Shikun Liu, Zijian Zhou, Mengmeng Xu, Yanping Xie, Xiao Han, Juan C. Pérez, Ding Liu, Kumara Kahatapitiya, Menglin Jia, Jui-Chieh Wu, Sen He, Tao Xiang, Jürgen Schmidhuber, Juan-Manuel Pérez-Rúa
We introduce MarDini, a new family of video diffusion models that integrate the advantages of masked auto-regression (MAR) into a unified diffusion model (DM) framework.
1 code implementation • 2 Mar 2024 • Shikun Liu, Deyu Zou, Han Zhao, Pan Li
Graph-based methods, pivotal for label inference over interconnected objects in many real-world applications, often encounter generalization challenges when the graph used for model training differs significantly from the graph used for testing.
1 code implementation • CVPR 2024 • Xin Kong, Shikun Liu, Xiaoyang Lyu, Marwan Taher, Xiaojuan Qi, Andrew J. Davison
We introduce EscherNet, a multi-view conditioned diffusion model for view synthesis.
2 code implementations • 12 Oct 2023 • Deyu Zou, Shikun Liu, Siqi Miao, Victor Fung, Shiyu Chang, Pan Li
Geometric deep learning (GDL) has gained significant attention in scientific fields for its proficiency in modeling data with intricate geometric structures.
1 code implementation • 5 Jun 2023 • Shikun Liu, Tianchun Li, Yongbin Feng, Nhan Tran, Han Zhao, Qiu Qiang, Pan Li
This work examines the different impacts of distribution shifts caused by either graph structure or node attributes, and identifies a new type of shift, named conditional structure shift (CSS), for which current GDA approaches are provably sub-optimal.
2 code implementations • 4 Mar 2023 • Shikun Liu, Linxi Fan, Edward Johns, Zhiding Yu, Chaowei Xiao, Anima Anandkumar
Recent vision-language models have shown impressive multi-modal generation capabilities.
Ranked #1 on Image Captioning on nocaps val
1 code implementation • CVPR 2023 • Xin Kong, Shikun Liu, Marwan Taher, Andrew J. Davison
We present vMAP, an object-level dense SLAM system using neural field representations.
1 code implementation • 7 Feb 2022 • Shikun Liu, Stephen James, Andrew J. Davison, Edward Johns
Unlike previous methods where task relationships are assumed to be fixed, Auto-Lambda is a gradient-based meta-learning framework that explores continuous, dynamic task relationships via task-specific weightings. It can optimise any combination of tasks through the formulation of a meta-loss, in which the validation loss automatically influences task weightings throughout training.
Ranked #3 on Robot Manipulation on RLBench (Succ. Rate, 10 tasks, 100 demos/task)
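As a toy illustration of the gradient-based weighting idea above, here is a minimal sketch in plain NumPy with a scalar model and quadratic task losses. All names, the quadratic losses, and the one-step lookahead are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Two toy task losses L_i(w) = (w - t_i)^2 and one validation target.
targets = np.array([1.0, 3.0])   # task-specific optima (illustrative)
t_val = 2.0                      # validation optimum (illustrative)

w = 0.0                          # shared model parameter
lam = np.array([0.5, 0.5])       # task-specific weightings (lambda)
alpha, beta = 0.1, 0.05          # model / weighting learning rates

for _ in range(200):
    # Model step on the weighted training loss:
    # w' = w - alpha * sum_i lam_i * dL_i/dw
    grads = 2.0 * (w - targets)
    w_next = w - alpha * np.dot(lam, grads)

    # Meta step ("double gradient"): the validation loss is
    # differentiated through the model update w.r.t. the weightings.
    dval_dw = 2.0 * (w_next - t_val)
    dval_dlam = dval_dw * (-alpha * grads)
    lam = np.clip(lam - beta * dval_dlam, 0.0, None)

    w = w_next

# The weightings adapt so that the validation loss shrinks.
assert abs(w - t_val) < 0.1
```

The key point the sketch shows is that the task weightings are updated by a gradient that flows through the model's own update step, which is the "meta-loss" interaction described in the abstract.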
no code implementations • NeurIPS Workshop AI4Science 2021 • Tianchun Li, Shikun Liu, Yongbin Feng, Nhan Tran, Miaoyuan Liu, Pan Li
The graph neural network is trained on charged particles with well-known labels, which can be obtained from simulation truth information or from measurements on data, and is then applied to neutral particles for which such labels are missing.
2 code implementations • ICLR 2022 • Shikun Liu, Shuaifeng Zhi, Edward Johns, Andrew J. Davison
We present ReCo, a contrastive learning framework designed at a regional level to assist learning in semantic segmentation.
3 code implementations • ICCV 2021 • Edgar Sucar, Shikun Liu, Joseph Ortiz, Andrew J. Davison
We show for the first time that a multilayer perceptron (MLP) can serve as the only scene representation in a real-time SLAM system for a handheld RGB-D camera.
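To make the idea of an MLP as the sole scene representation concrete, here is a minimal NumPy sketch of such a map: a small network that takes a 3-D point and returns colour and volume density, so the entire "map" is just the network's weights. The layer sizes and random weights are illustrative assumptions, not the system's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# The whole scene representation is these weights: a function from a
# 3-D point to RGB colour and volume density (shapes are illustrative).
W1 = rng.standard_normal((3, 64)) * 0.1
b1 = np.zeros(64)
W2 = rng.standard_normal((64, 4)) * 0.1
b2 = np.zeros(4)

def scene(points):
    """Map (N, 3) world coordinates to (N, 3) RGB and (N,) density."""
    h = np.maximum(points @ W1 + b1, 0.0)      # ReLU hidden layer
    out = h @ W2 + b2
    rgb = 1.0 / (1.0 + np.exp(-out[:, :3]))    # colours squashed to [0, 1]
    density = np.maximum(out[:, 3], 0.0)       # non-negative density
    return rgb, density

rgb, density = scene(rng.standard_normal((5, 3)))
assert rgb.shape == (5, 3) and density.shape == (5,)
```

In a SLAM setting, this function would be queried along camera rays and its weights optimised online against incoming RGB-D frames.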
1 code implementation • ECCV 2020 • Shikun Liu, Zhe Lin, Yilin Wang, Jianming Zhang, Federico Perazzi, Edward Johns
We present a novel resizing module for neural networks: shape adaptor, a drop-in enhancement built on top of traditional resizing layers, such as pooling, bilinear sampling, and strided convolution.
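A drop-in resizing module of this kind can be sketched as a soft blend of two resizing branches controlled by a learnable scalar. The nearest-neighbour resize, the 2x2 average-pool branch, and the sigmoid gating below are illustrative simplifications, not the paper's exact formulation:

```python
import numpy as np

def nearest_resize(x, out_h, out_w):
    """Nearest-neighbour resize of a (H, W) array."""
    h, w = x.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return x[np.ix_(rows, cols)]

def shape_adaptor(x, alpha, out_h, out_w):
    """Softly blend two resizing branches, both mapped to the same
    target shape; alpha is a learnable scalar (illustrative sketch)."""
    s = 1.0 / (1.0 + np.exp(-alpha))            # sigmoid gate in (0, 1)
    branch_a = nearest_resize(x, out_h, out_w)  # plain resample
    pooled = x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))
    branch_b = nearest_resize(pooled, out_h, out_w)  # 2x2 avg-pool first
    return (1 - s) * branch_a + s * branch_b

x = np.arange(64, dtype=float).reshape(8, 8)
y = shape_adaptor(x, alpha=0.0, out_h=6, out_w=6)
assert y.shape == (6, 6)
```

Because the gate is differentiable, the blend (and hence the effective feature-map shape) can be learned jointly with the rest of the network.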
4 code implementations • NeurIPS 2019 • Shikun Liu, Andrew J. Davison, Edward Johns
The loss for the label-generation network incorporates the loss of the multi-task network, so the interaction between the two networks can be seen as a form of meta-learning with a double gradient.
4 code implementations • CVPR 2019 • Shikun Liu, Edward Johns, Andrew J. Davison
Our design, the Multi-Task Attention Network (MTAN), consists of a single shared network containing a global feature pool, together with a soft-attention module for each task.
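The shared-pool-plus-attention design can be sketched in a few lines: each task applies a learned soft mask to features from the shared network. The single linear gate below stands in for the paper's convolutional attention modules, and the task names and sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
C = 16                                   # shared feature channels (illustrative)
shared = rng.standard_normal((1, C))     # output of the global feature pool

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One soft-attention module per task: a learned gate over the shared
# features (a linear layer here; the real modules are conv blocks).
task_gates = {t: rng.standard_normal((C, C)) * 0.1 for t in ("seg", "depth")}

task_features = {}
for task, W in task_gates.items():
    attn = sigmoid(shared @ W)           # soft mask, entries in (0, 1)
    task_features[task] = shared * attn  # task-specific view of the pool

assert all(f.shape == (1, C) for f in task_features.values())
```

Each task thus selects its own subset of the shared features while the backbone is trained once for all tasks.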
1 code implementation • 17 May 2017 • Shikun Liu, C. Lee Giles, Alexander G. Ororbia II
We propose the Variational Shape Learner (VSL), a generative model that learns the underlying structure of voxelized 3D shapes in an unsupervised fashion.
Ranked #6 on 3D Object Recognition on ModelNet40