Search Results for author: Shikun Liu

Found 8 papers, 7 papers with code

Auto-Lambda: Disentangling Dynamic Task Relationships

1 code implementation • 7 Feb 2022 • Shikun Liu, Stephen James, Andrew J. Davison, Edward Johns

Unlike previous methods that assume fixed task relationships, Auto-Lambda is a gradient-based meta-learning framework that explores continuous, dynamic task relationships via task-specific weightings. It can optimise any combination of tasks through the formulation of a meta-loss, in which the validation loss automatically influences the task weightings throughout training.

Auxiliary Learning • Meta-Learning • +1
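The meta-loss idea can be sketched in a few lines. This is a toy, first-order stand-in for Auto-Lambda's second-order meta-gradient, not the authors' implementation: a linear model with two outputs plays the role of a two-task network, the data is random, and each task weighting is nudged up when its training gradient aligns with the validation gradient.

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(4, 2)                  # toy model: one output per task
lambdas = torch.ones(2)                        # dynamic task-specific weightings
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x_tr, y_tr = torch.randn(16, 4), torch.randn(16, 2)
x_va, y_va = torch.randn(16, 4), torch.randn(16, 2)

for _ in range(25):
    params = list(model.parameters())
    task_losses = ((model(x_tr) - y_tr) ** 2).mean(dim=0)   # per-task training loss
    val_loss = ((model(x_va) - y_va) ** 2).mean()           # meta-loss on held-out data
    g_val = torch.autograd.grad(val_loss, params)
    # First-order stand-in for the meta-gradient: increase a task's weighting
    # when its training gradient aligns with the validation gradient.
    for t in range(2):
        g_t = torch.autograd.grad(task_losses[t], params, retain_graph=True)
        align = sum((gv * gt).sum() for gv, gt in zip(g_val, g_t))
        # clamp to keep the toy example numerically stable
        lambdas[t] = min(5.0, max(1e-3, lambdas[t].item() + 0.01 * align.item()))
    opt.zero_grad()
    (lambdas * task_losses).sum().backward()
    opt.step()
```

The real method differentiates the validation loss through the inner training step; the gradient-alignment update above is only a cheap approximation of that signal.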

Semi-supervised Graph Neural Network for Particle-level Noise Removal

no code implementations • NeurIPS Workshop AI4Science 2021 • Tianchun Li, Shikun Liu, Yongbin Feng, Nhan Tran, Miaoyuan Liu, Pan Li

The graph neural network is trained on charged particles with well-known labels, which can be obtained from simulation truth information or from measurements on data, and is then applied at inference to neutral particles, for which such labels are missing.

iMAP: Implicit Mapping and Positioning in Real-Time

2 code implementations • ICCV 2021 • Edgar Sucar, Shikun Liu, Joseph Ortiz, Andrew J. Davison

We show for the first time that a multilayer perceptron (MLP) can serve as the only scene representation in a real-time SLAM system for a handheld RGB-D camera.
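The core claim, a single MLP standing in for the entire scene representation, can be illustrated with a toy fitting loop. This is not iMAP itself (which renders colour and depth along camera rays and optimises jointly with camera poses in real time); the point supervision here is random stand-in data:

```python
import torch

torch.manual_seed(0)
mlp = torch.nn.Sequential(                    # the entire scene representation
    torch.nn.Linear(3, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 4))                   # per-point output: RGB + occupancy
pts = torch.rand(256, 3)                      # 3D points sampled along camera rays
target = torch.rand(256, 4)                   # stand-in for supervision from posed RGB-D frames
opt = torch.optim.Adam(mlp.parameters(), lr=1e-2)
initial = ((mlp(pts) - target) ** 2).mean().item()
for _ in range(50):                           # online fitting, as new frames arrive
    opt.zero_grad()
    loss = ((mlp(pts) - target) ** 2).mean()
    loss.backward()
    opt.step()
```

Because the map is a single network, updating it on new observations automatically propagates information to nearby regions of the scene, which is part of the appeal over discrete map structures.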

Shape Adaptor: A Learnable Resizing Module

1 code implementation • ECCV 2020 • Shikun Liu, Zhe Lin, Yilin Wang, Jianming Zhang, Federico Perazzi, Edward Johns

We present a novel resizing module for neural networks: shape adaptor, a drop-in enhancement built on top of traditional resizing layers, such as pooling, bilinear sampling, and strided convolution.

Image Classification • Neural Architecture Search • +1
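A hedged sketch of a learnable resizing layer in this spirit: a sigmoid-squashed scalar blends a pooled branch and an identity branch at a shared intermediate resolution, so the output shape itself becomes trainable. The exact parameterisation in the paper may differ; this is an illustration, not the released module.

```python
import torch
import torch.nn.functional as F

class ShapeAdaptor(torch.nn.Module):
    """Toy learnable resizing module: blends a pooled branch and an
    identity branch at a resolution set by a learned scalar."""
    def __init__(self):
        super().__init__()
        self.alpha = torch.nn.Parameter(torch.zeros(1))  # learnable shape parameter

    def forward(self, x):
        a = torch.sigmoid(self.alpha)                    # in (0, 1)
        scale = 0.5 + 0.5 * a.item()                     # output scale in (0.5, 1)
        size = max(1, round(x.shape[-1] * scale))
        down = F.interpolate(F.max_pool2d(x, 2), size=size,
                             mode='bilinear', align_corners=False)
        same = F.interpolate(x, size=size, mode='bilinear', align_corners=False)
        return (1 - a) * down + a * same                 # soft blend keeps alpha trainable

x = torch.randn(1, 3, 32, 32)
y = ShapeAdaptor()(x)                                    # alpha=0 gives a 24x24 output here
```

Because the blend weight is differentiable, the network can learn how aggressively to downsample at each stage instead of fixing the strides by hand.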

Self-Supervised Generalisation with Meta Auxiliary Learning

4 code implementations • NeurIPS 2019 • Shikun Liu, Andrew J. Davison, Edward Johns

The loss for the label-generation network incorporates the loss of the multi-task network, so the interaction between the two networks can be seen as a form of meta-learning with a double gradient.

Auxiliary Learning • Meta-Learning • +1

End-to-End Multi-Task Learning with Attention

3 code implementations • CVPR 2019 • Shikun Liu, Edward Johns, Andrew J. Davison

Our design, the Multi-Task Attention Network (MTAN), consists of a single shared network containing a global feature pool, together with a soft-attention module for each task.

Multi-Task Learning
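The MTAN design described above, one shared feature pool plus a soft-attention mask per task, can be sketched as a minimal toy model. The layer sizes and heads here are invented for illustration and are far smaller than the released network:

```python
import torch

class MTAN(torch.nn.Module):
    """Toy version: a shared feature pool with per-task soft-attention masks."""
    def __init__(self, in_ch=3, ch=16, n_tasks=2):
        super().__init__()
        self.shared = torch.nn.Conv2d(in_ch, ch, 3, padding=1)   # global feature pool
        self.attn = torch.nn.ModuleList(
            torch.nn.Sequential(torch.nn.Conv2d(ch, ch, 1), torch.nn.Sigmoid())
            for _ in range(n_tasks))                             # one mask per task
        self.heads = torch.nn.ModuleList(
            torch.nn.Conv2d(ch, 1, 1) for _ in range(n_tasks))   # one output head per task

    def forward(self, x):
        f = torch.relu(self.shared(x))
        # each task selects its own subset of the shared features via a soft mask
        return [head(mask(f) * f) for mask, head in zip(self.attn, self.heads)]

outs = MTAN()(torch.randn(2, 3, 8, 8))   # one feature map per task
```

Because the masks are learned end-to-end, each task can emphasise the shared features it needs while the backbone parameters stay common to all tasks.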

Learning a Hierarchical Latent-Variable Model of 3D Shapes

1 code implementation • 17 May 2017 • Shikun Liu, C. Lee Giles, Alexander G. Ororbia II

We propose the Variational Shape Learner (VSL), a generative model that learns the underlying structure of voxelized 3D shapes in an unsupervised fashion.

3D Object Classification • 3D Object Recognition • +3
