Search Results for author: Shikun Liu

Found 14 papers, 13 papers with code

Pairwise Alignment Improves Graph Domain Adaptation

1 code implementation 2 Mar 2024 Shikun Liu, Deyu Zou, Han Zhao, Pan Li

Graph-based methods, pivotal for label inference over interconnected objects in many real-world applications, often encounter generalization challenges when the graph used for model training differs significantly from the graph used for testing.

Domain Adaptation · Node Classification

GDL-DS: A Benchmark for Geometric Deep Learning under Distribution Shifts

1 code implementation 12 Oct 2023 Deyu Zou, Shikun Liu, Siqi Miao, Victor Fung, Shiyu Chang, Pan Li

Geometric deep learning (GDL) has gained significant attention in various scientific fields, chiefly for its proficiency in modeling data with intricate geometric structures.

Structural Re-weighting Improves Graph Domain Adaptation

1 code implementation 5 Jun 2023 Shikun Liu, Tianchun Li, Yongbin Feng, Nhan Tran, Han Zhao, Qiu Qiang, Pan Li

This work examines the different impacts of distribution shifts caused by either graph structure or node attributes, and identifies a new type of shift, named conditional structure shift (CSS), which current GDA approaches are provably sub-optimal at handling.

Attribute · Domain Adaptation +1

vMAP: Vectorised Object Mapping for Neural Field SLAM

1 code implementation CVPR 2023 Xin Kong, Shikun Liu, Marwan Taher, Andrew J. Davison

We present vMAP, an object-level dense SLAM system using neural field representations.

Object

Auto-Lambda: Disentangling Dynamic Task Relationships

1 code implementation 7 Feb 2022 Shikun Liu, Stephen James, Andrew J. Davison, Edward Johns

Unlike previous methods, where task relationships are assumed to be fixed, Auto-Lambda is a gradient-based meta-learning framework that explores continuous, dynamic task relationships via task-specific weightings. It can optimise any chosen combination of tasks through the formulation of a meta-loss, in which the validation loss automatically influences the task weightings throughout training.
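The meta-loss idea above can be sketched numerically. This is a toy, hypothetical setup — scalar model, quadratic task losses, finite-difference meta-gradients standing in for autodiff — not the paper's implementation: the task weightings are updated by the gradient of a validation (primary-task) loss evaluated after a look-ahead training step.

```python
# Toy sketch of gradient-based task weighting in the spirit of
# Auto-Lambda. A scalar model w trains on a lambda-weighted sum of
# two hypothetical quadratic task losses; the weightings follow the
# meta-gradient of the primary task's validation loss.

def task_losses(w):
    # Two illustrative task losses with different optima (1 and 3).
    return (w - 1.0) ** 2, (w - 3.0) ** 2

def lookahead(w, lambdas, lr):
    # One inner SGD step on the lambda-weighted training loss.
    g = lambdas[0] * 2 * (w - 1.0) + lambdas[1] * 2 * (w - 3.0)
    return w - lr * g

def meta_grad(w, lambdas, lr, eps=1e-4):
    # Finite-difference gradient of the primary-task (validation)
    # loss at the looked-ahead weights, w.r.t. each task weighting.
    grads = []
    for i in range(len(lambdas)):
        up = list(lambdas); up[i] += eps
        dn = list(lambdas); dn[i] -= eps
        f_up = task_losses(lookahead(w, up, lr))[0]
        f_dn = task_losses(lookahead(w, dn, lr))[0]
        grads.append((f_up - f_dn) / (2 * eps))
    return grads

w, lambdas, lr, meta_lr = 0.0, [0.5, 0.5], 0.1, 0.5
for _ in range(200):
    g = meta_grad(w, lambdas, lr)
    lambdas = [max(0.0, l - meta_lr * gi) for l, gi in zip(lambdas, g)]
    w = lookahead(w, lambdas, lr)
```

Because the meta-loss here is the primary task's loss, the weighting of the conflicting auxiliary task is driven to zero and the model converges to the primary optimum.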

Ranked #3 on Robot Manipulation on RLBench (Succ. Rate (10 tasks, 100 demos/task) metric)

Auxiliary Learning · Meta-Learning +2

Semi-supervised Graph Neural Network for Particle-level Noise Removal

no code implementations NeurIPS Workshop AI4Scien 2021 Tianchun Li, Shikun Liu, Yongbin Feng, Nhan Tran, Miaoyuan Liu, Pan Li

The graph neural network is trained on charged particles with well-known labels, which can be obtained from simulation truth information or from measurements on data, and is then used for inference on neutral particles, for which such labels are missing.
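The train-on-labeled, infer-on-unlabeled recipe described above can be sketched as a masked loss. All names and the one-parameter "model" below are illustrative stand-ins, not the paper's network:

```python
# Minimal sketch of the semi-supervised recipe: compute the training
# loss only over labeled nodes (charged particles), then run
# inference on the unlabeled nodes (neutrals).

def predict(x, w):
    return x * w  # stand-in for a graph neural network

def masked_loss(xs, ys, labeled, w):
    # Squared error averaged over labeled nodes only.
    errs = [(predict(x, w) - y) ** 2
            for x, y, m in zip(xs, ys, labeled) if m]
    return sum(errs) / len(errs)

xs      = [1.0, 2.0, 3.0, 4.0]    # node features
ys      = [2.0, 4.0, None, None]  # labels exist only for "charged" nodes
labeled = [True, True, False, False]

# Fit w by gradient descent on the labeled subset only.
w = 0.0
for _ in range(100):
    grad = sum(2 * (predict(x, w) - y) * x
               for x, y, m in zip(xs, ys, labeled) if m) / 2
    w -= 0.1 * grad

final_loss = masked_loss(xs, ys, labeled, w)
unlabeled_preds = [predict(x, w) for x, m in zip(xs, labeled) if not m]
```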

Graph Neural Network

iMAP: Implicit Mapping and Positioning in Real-Time

3 code implementations ICCV 2021 Edgar Sucar, Shikun Liu, Joseph Ortiz, Andrew J. Davison

We show for the first time that a multilayer perceptron (MLP) can serve as the only scene representation in a real-time SLAM system for a handheld RGB-D camera.
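The core idea — one MLP as the entire scene map — can be sketched as a network that maps a 3D point to colour and volume density. The layer sizes and the sinusoidal positional encoding below are illustrative choices, not the exact iMAP architecture:

```python
import math, random

# Toy sketch: a single MLP as the whole scene representation,
# mapping a 3D point to (RGB colour, volume density).

random.seed(0)

def init_layer(n_in, n_out):
    W = [[random.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    return W, [0.0] * n_out

def linear(x, layer):
    W, b = layer
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def relu(x):
    return [max(0.0, v) for v in x]

def posenc(p, freqs=4):
    # Sinusoidal encoding of a 3D point, as in NeRF-style MLP maps.
    out = []
    for v in p:
        for k in range(freqs):
            out += [math.sin((2 ** k) * v), math.cos((2 ** k) * v)]
    return out

l1 = init_layer(24, 32)   # 3 coords * 4 freqs * (sin, cos) = 24 inputs
l2 = init_layer(32, 4)    # 3 colour channels + 1 density

def scene_mlp(p):
    h = relu(linear(posenc(p), l1))
    out = linear(h, l2)
    return out[:3], out[3]

rgb, density = scene_mlp([0.1, 0.2, 0.3])
```

In a SLAM loop this network would be queried along camera rays and optimised online against depth and colour observations.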

Shape Adaptor: A Learnable Resizing Module

1 code implementation ECCV 2020 Shikun Liu, Zhe Lin, Yilin Wang, Jianming Zhang, Federico Perazzi, Edward Johns

We present a novel resizing module for neural networks: shape adaptor, a drop-in enhancement built on top of traditional resizing layers, such as pooling, bilinear sampling, and strided convolution.
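One way to picture a learnable resizing module is as a mix of two candidate reshaping factors (say, a 0.5x pooling branch and a 1.0x identity branch) controlled by a learned scalar. The sigmoid-mix parameterisation below follows the paper's spirit but simplifies away the feature-map combination:

```python
import math

# Hedged sketch of a shape-adaptor-style resizing scale: a learnable
# parameter interpolates between two branch reshaping factors r1, r2.

def shape_adaptor_scale(alpha_raw, r1=0.5, r2=1.0):
    # alpha in (0, 1); the effective scale is a convex mix of r1, r2.
    alpha = 1.0 / (1.0 + math.exp(-alpha_raw))
    return alpha * r1 + (1.0 - alpha) * r2

def output_size(in_size, alpha_raw):
    # Spatial size of the feature map after the adaptor.
    return max(1, round(in_size * shape_adaptor_scale(alpha_raw)))
```

For example, an unbiased parameter (`alpha_raw = 0`) gives a 0.75x scale, so a 32-pixel map resizes to 24; gradient updates to `alpha_raw` let training pick the network's shape.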

Image Classification · Neural Architecture Search +1

Self-Supervised Generalisation with Meta Auxiliary Learning

4 code implementations NeurIPS 2019 Shikun Liu, Andrew J. Davison, Edward Johns

The loss for the label-generation network incorporates the loss of the multi-task network, and so this interaction between the two networks can be seen as a form of meta learning with a double gradient.

Auxiliary Learning · Meta-Learning +1

End-to-End Multi-Task Learning with Attention

4 code implementations CVPR 2019 Shikun Liu, Edward Johns, Andrew J. Davison

Our design, the Multi-Task Attention Network (MTAN), consists of a single shared network containing a global feature pool, together with a soft-attention module for each task.
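The shared-pool-plus-soft-attention design can be sketched as a per-task gate over a single shared feature vector. The linear gate below is an illustrative stand-in for the paper's convolutional attention modules, and the task names are hypothetical:

```python
import math, random

# Illustrative sketch of per-task soft attention over features from
# one shared network: each task learns a gate that softly selects a
# task-specific subset of the shared representation.

random.seed(0)
FEAT = 8

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def make_task_gate():
    # One learnable gate (weights + biases) per task.
    return ([random.uniform(-1, 1) for _ in range(FEAT)],
            [random.uniform(-1, 1) for _ in range(FEAT)])

def attend(shared, gate):
    w, b = gate
    mask = [sigmoid(wi * si + bi) for wi, si, bi in zip(w, shared, b)]
    return [m * s for m, s in zip(mask, shared)]  # soft feature selection

shared_features = [random.uniform(0, 1) for _ in range(FEAT)]
task_gates = {"segmentation": make_task_gate(), "depth": make_task_gate()}
task_features = {t: attend(shared_features, g)
                 for t, g in task_gates.items()}
```

Each task head then consumes its own gated features, while gradients from all tasks update the shared pool.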

Multi-Task Learning

Learning a Hierarchical Latent-Variable Model of 3D Shapes

1 code implementation 17 May 2017 Shikun Liu, C. Lee Giles, Alexander G. Ororbia II

We propose the Variational Shape Learner (VSL), a generative model that learns the underlying structure of voxelized 3D shapes in an unsupervised fashion.

3D Object Classification · 3D Object Recognition +3
