Search Results for author: Thalaiyasingam Ajanthan

Found 30 papers, 16 papers with code

Adaptive Cross Batch Normalization for Metric Learning

no code implementations30 Mar 2023 Thalaiyasingam Ajanthan, Matt Ma, Anton Van Den Hengel, Stephen Gould

In particular, it is necessary to circumvent the representational drift between the accumulated embeddings and the feature embeddings at the current training iteration as the learnable parameters are being updated.

Image Retrieval Metric Learning +1

Understanding and Improving the Role of Projection Head in Self-Supervised Learning

no code implementations22 Dec 2022 Kartik Gupta, Thalaiyasingam Ajanthan, Anton Van Den Hengel, Stephen Gould

Most current contrastive learning approaches append a parametrized projection head to the end of some backbone network to optimize the InfoNCE objective and then discard the learned projection head after training.

Contrastive Learning Image Classification +1
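
For context, a minimal sketch of the setup described above: a parametrized projection head appended to a backbone and trained with the InfoNCE (NT-Xent) objective, after which only the backbone representation is kept. This is a generic SimCLR-style illustration, not the architecture or code from the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ProjectedEncoder(nn.Module):
        """Backbone plus a parametrized projection head; the head is discarded after pretraining."""
        def __init__(self, backbone: nn.Module, feat_dim: int, proj_dim: int = 128):
            super().__init__()
            self.backbone = backbone
            self.head = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                                      nn.Linear(feat_dim, proj_dim))

        def forward(self, x):
            h = self.backbone(x)              # representation kept for downstream tasks
            z = self.head(h)                  # projection used only by the contrastive loss
            return F.normalize(z, dim=1)

    def info_nce(z1, z2, tau: float = 0.1):
        """InfoNCE / NT-Xent over a batch of positive pairs (z1[i], z2[i]); z1, z2 are L2-normalized."""
        n = z1.size(0)
        z = torch.cat([z1, z2], dim=0)                          # 2N x d
        sim = (z @ z.t()) / tau                                  # cosine similarities
        sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool, device=z.device), float('-inf'))
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
        return F.cross_entropy(sim, targets)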

Few-shot Weakly-Supervised Object Detection via Directional Statistics

no code implementations25 Mar 2021 Amirreza Shaban, Amir Rahimi, Thalaiyasingam Ajanthan, Byron Boots, Richard Hartley

When the novel objects are localized, we utilize them to learn a linear appearance model to detect novel classes in new images.

Multiple Instance Learning Object +3

RANP: Resource Aware Neuron Pruning at Initialization for 3D CNNs

1 code implementation9 Feb 2021 Zhiwei Xu, Thalaiyasingam Ajanthan, Vibhav Vineet, Richard Hartley

In this work, we introduce a Resource Aware Neuron Pruning (RANP) algorithm that prunes 3D CNNs at initialization to high sparsity levels.

3D Semantic Segmentation Stereo Matching +1

A Chaos Theory Approach to Understand Neural Network Optimization

no code implementations1 Jan 2021 Michele Sasdelli, Thalaiyasingam Ajanthan, Tat-Jun Chin, Gustavo Carneiro

Then, we empirically show that, for a large range of learning rates, SGD traverses the loss landscape across regions where the largest eigenvalue of the Hessian is similar to the inverse of the learning rate.

Second-order methods
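
A toy numerical illustration (mine, not the paper's experiments) of why curvature and learning rate are linked: for gradient descent on a quadratic with curvature lambda, the update x <- (1 - eta * lambda) * x diverges once lambda exceeds 2 / eta, so stable training is confined to regions whose largest Hessian eigenvalue is on the order of the inverse learning rate.

    import numpy as np

    def gd_on_quadratic(lam: float, eta: float, steps: int = 100, x0: float = 1.0) -> float:
        """Gradient descent on f(x) = 0.5 * lam * x**2; the update is x <- (1 - eta * lam) * x."""
        x = x0
        for _ in range(steps):
            x -= eta * lam * x
        return abs(x)

    eta = 0.1                                  # divergence threshold is 2 / eta = 20
    for lam in [5.0, 15.0, 19.0, 21.0, 30.0]:
        print(f"lambda = {lam:5.1f}   |x_T| = {gd_on_quadratic(lam, eta):.3e}")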

Refining Semantic Segmentation with Superpixel by Transparent Initialization and Sparse Encoder

1 code implementation9 Oct 2020 Zhiwei Xu, Thalaiyasingam Ajanthan, Richard Hartley

We achieve this with fully-connected layers using Transparent Initialization (TI) and efficient logit consistency via a sparse encoder.

Segmentation Semantic Segmentation +1
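
A plausible reading of Transparent Initialization, sketched here as an assumption rather than the paper's exact construction: initialize the appended fully-connected layer so that it implements the identity map at the start of training and therefore leaves the pretrained logits unchanged.

    import torch.nn as nn

    def transparent_linear(dim: int) -> nn.Linear:
        """Hypothetical 'transparent' FC layer: initialized to the identity map (weight = I, bias = 0),
        so at the start of training it passes the pretrained logits through unchanged."""
        layer = nn.Linear(dim, dim)
        nn.init.eye_(layer.weight)
        nn.init.zeros_(layer.bias)
        return layer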

RANP: Resource Aware Neuron Pruning at Initialization for 3D CNNs

1 code implementation6 Oct 2020 Zhiwei Xu, Thalaiyasingam Ajanthan, Vibhav Vineet, Richard Hartley

Specifically, the core idea is to obtain an importance score for each neuron based on its sensitivity to the loss function.

3D Semantic Segmentation Video Classification
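
A rough sketch of the sensitivity idea in a simplified form of my own (not the released RANP code): attach a multiplicative all-ones mask to each output channel (neuron) and score the neuron by the magnitude of the loss gradient with respect to its mask entry. The stand-in classification head and the 3D conv sizes below are arbitrary.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def neuron_sensitivity(conv: nn.Conv3d, x, target):
        """Score each output channel ('neuron') of a 3D conv by |dL/dm_c| for an all-ones channel mask m."""
        mask = torch.ones(conv.out_channels, requires_grad=True)
        out = conv(x) * mask.view(1, -1, 1, 1, 1)    # broadcast the mask over the channel dimension
        logits = out.mean(dim=(2, 3, 4))             # arbitrary stand-in head to keep the example self-contained
        loss = F.cross_entropy(logits, target)
        grad, = torch.autograd.grad(loss, mask)
        return grad.abs()                            # larger score = neuron the loss is more sensitive to

    conv = nn.Conv3d(1, 8, kernel_size=3, padding=1)
    x = torch.randn(4, 1, 16, 16, 16)
    target = torch.randint(0, 8, (4,))
    print(neuron_sensitivity(conv, x, target))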

Calibration of Neural Networks using Splines

1 code implementation ICLR 2021 Kartik Gupta, Amir Rahimi, Thalaiyasingam Ajanthan, Thomas Mensink, Cristian Sminchisescu, Richard Hartley

From this, by approximating the empirical cumulative distribution using a differentiable function via splines, we obtain a recalibration function, which maps the network outputs to actual (calibrated) class assignment probabilities.

Decision Making Image Classification
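
A loose sketch of the recalibration idea, heavily simplified relative to the paper: sort validation samples by confidence, fit a smoothing spline to the empirical cumulative accuracy, and take the spline's derivative as the map from confidence to calibrated probability. The smoothing value, the clipping, and the interpolation step are my own choices, not the paper's.

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    def spline_recalibration(scores, correct, smoothing=1.0):
        """Fit a smoothing spline to the empirical cumulative accuracy (samples sorted by confidence);
        its derivative approximates P(correct | confidence) and serves as the recalibration map."""
        order = np.argsort(scores)
        s_sorted = scores[order]
        cum_acc = np.cumsum(correct[order]) / len(scores)       # empirical cumulative accuracy
        u = (np.arange(len(scores)) + 1) / len(scores)          # fractional sample index
        spline = UnivariateSpline(u, cum_acc, k=3, s=smoothing)
        local_acc = np.clip(spline.derivative()(u), 0.0, 1.0)   # calibrated probability per sorted sample
        return lambda s: np.interp(s, s_sorted, local_acc)      # interpolate for arbitrary confidences

    rng = np.random.default_rng(0)
    scores = rng.uniform(0.5, 1.0, 1000)                        # hypothetical validation confidences
    correct = (rng.uniform(size=1000) < scores - 0.2).astype(float)   # an over-confident model
    recalibrate = spline_recalibration(scores, correct)
    print(recalibrate(np.array([0.6, 0.8, 0.95])))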

Post-hoc Calibration of Neural Networks by g-Layers

no code implementations23 Jun 2020 Amir Rahimi, Thomas Mensink, Kartik Gupta, Thalaiyasingam Ajanthan, Cristian Sminchisescu, Richard Hartley

Calibration of neural networks is a critical aspect to consider when incorporating machine learning models in real-world decision-making systems, where the confidence of a decision is as important as the decision itself.

Decision Making Image Classification

Bidirectionally Self-Normalizing Neural Networks

1 code implementation22 Jun 2020 Yao Lu, Stephen Gould, Thalaiyasingam Ajanthan

The problem of vanishing and exploding gradients has been a long-standing obstacle that hinders the effective training of neural networks.

Improved Gradient based Adversarial Attacks for Quantized Networks

1 code implementation30 Mar 2020 Kartik Gupta, Thalaiyasingam Ajanthan

In this work, we systematically study the robustness of quantized networks against gradient based adversarial attacks and demonstrate that these quantized models suffer from gradient vanishing issues and exhibit a false sense of robustness.

Image Classification Quantization
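
A small illustration of the gradient-vanishing issue (my own example, not the paper's attack): a hard sign quantizer has zero gradient almost everywhere, so a gradient-based attack computed through it receives no signal unless a surrogate such as a straight-through estimator is used.

    import torch

    class STESign(torch.autograd.Function):
        """Binarization with a straight-through estimator: forward = sign, backward = identity."""
        @staticmethod
        def forward(ctx, x):
            return torch.sign(x)

        @staticmethod
        def backward(ctx, grad_output):
            return grad_output

    def attack_gradient(x, w, use_ste: bool):
        x = x.clone().requires_grad_(True)
        q = STESign.apply(x) if use_ste else torch.sign(x)
        loss = (q * w).sum()                         # stand-in loss on the quantized activations
        loss.backward()
        return x.grad

    x, w = torch.randn(8), torch.randn(8)
    print(attack_gradient(x, w, use_ste=False))      # all zeros: the hard sign kills the gradient
    print(attack_gradient(x, w, use_ste=True))       # nonzero: gradient-based (e.g. FGSM) steps become possible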

Understanding the Effects of Data Parallelism and Sparsity on Neural Network Training

no code implementations ICLR 2021 Namhoon Lee, Thalaiyasingam Ajanthan, Philip H. S. Torr, Martin Jaggi

As a result, across various workloads of dataset, network model, and optimization algorithm, we find a general scaling trend between batch size and the number of training steps to convergence that characterizes the effect of data parallelism, and find, further, that training becomes more difficult under sparsity.

Network Pruning

Pairwise Similarity Knowledge Transfer for Weakly Supervised Object Localization

1 code implementation ECCV 2020 Amir Rahimi, Amirreza Shaban, Thalaiyasingam Ajanthan, Richard Hartley, Byron Boots

Weakly Supervised Object Localization (WSOL) methods only require image level labels as opposed to expensive bounding box annotations required by fully supervised algorithms.

Transfer Learning Weakly-Supervised Object Localization

Fast and Differentiable Message Passing on Pairwise Markov Random Fields

1 code implementation24 Oct 2019 Zhiwei Xu, Thalaiyasingam Ajanthan, Richard Hartley

In addition to differentiability, the two main aspects that enable learning these model parameters are the forward and backward propagation time of the MRF optimization algorithm and its inference capabilities.

Denoising Semantic Segmentation

Mirror Descent View for Neural Network Quantization

1 code implementation18 Oct 2019 Thalaiyasingam Ajanthan, Kartik Gupta, Philip H. S. Torr, Richard Hartley, Puneet K. Dokania

Quantizing large Neural Networks (NN) while maintaining the performance is highly desirable for resource-limited devices due to reduced memory and time complexity.

Quantization

A Signal Propagation Perspective for Pruning Neural Networks at Initialization

1 code implementation ICLR 2020 Namhoon Lee, Thalaiyasingam Ajanthan, Stephen Gould, Philip H. S. Torr

Alternatively, a recent approach shows that pruning can be done at initialization prior to training, based on a saliency criterion called connection sensitivity.

Image Classification Network Pruning

Learning to Adapt for Stereo

1 code implementation CVPR 2019 Alessio Tonioni, Oscar Rahnama, Thomas Joy, Luigi Di Stefano, Thalaiyasingam Ajanthan, Philip H. S. Torr

Real-world applications of stereo depth estimation require models that are robust to dynamic variations in the environment.

Autonomous Driving Stereo Depth Estimation

Proximal Mean-field for Neural Network Quantization

1 code implementation ICCV 2019 Thalaiyasingam Ajanthan, Puneet K. Dokania, Richard Hartley, Philip H. S. Torr

Compressing large Neural Networks (NN) by quantizing the parameters while maintaining performance is highly desirable due to the reduced memory and time complexity.

Image Classification Quantization

Generalized Range Moves

no code implementations22 Nov 2018 Richard Hartley, Thalaiyasingam Ajanthan

We consider move-making algorithms for energy minimization of multi-label Markov Random Fields (MRFs).

SNIP: Single-shot Network Pruning based on Connection Sensitivity

8 code implementations ICLR 2019 Namhoon Lee, Thalaiyasingam Ajanthan, Philip H. S. Torr

To achieve this, we introduce a saliency criterion based on connection sensitivity that identifies structurally important connections in the network for the given task.

Image Classification Network Pruning +1
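
A minimal sketch in the spirit of the connection-sensitivity criterion: multiply each weight by a virtual all-ones mask, take the gradient of the loss with respect to the masks on one mini-batch, and keep the top-k connections by normalized absolute sensitivity. The functional masked forward below only handles nn.Linear layers and differs from the authors' released implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def snip_masks(model: nn.Sequential, x, y, keep_ratio: float = 0.1):
        """Connection sensitivity at initialization: s_j = |dL/dc_j| for virtual all-ones masks c."""
        masks = {n: torch.ones_like(p, requires_grad=True)
                 for n, p in model.named_parameters() if p.dim() > 1}

        def masked_forward(inp):
            out = inp
            for name, module in model.named_children():
                if isinstance(module, nn.Linear):
                    out = F.linear(out, module.weight * masks[f"{name}.weight"], module.bias)
                else:
                    out = module(out)
            return out

        loss = F.cross_entropy(masked_forward(x), y)
        grads = torch.autograd.grad(loss, list(masks.values()))
        scores = torch.cat([g.abs().flatten() for g in grads])
        scores /= scores.sum()                                   # normalized sensitivities
        threshold = torch.topk(scores, int(keep_ratio * scores.numel())).values.min()

        keep, idx = {}, 0                                        # reshape the flat keep-mask per layer
        for name, g in zip(masks, grads):
            n = g.numel()
            keep[name] = (scores[idx:idx + n] >= threshold).view_as(g).float()
            idx += n
        return keep

    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
    x, y = torch.randn(32, 20), torch.randint(0, 10, (32,))
    print({k: int(v.sum().item()) for k, v in snip_masks(model, x, y).items()})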

Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence

2 code implementations ECCV 2018 Arslan Chaudhry, Puneet K. Dokania, Thalaiyasingam Ajanthan, Philip H. S. Torr

We observe that, in addition to forgetting (the well-known issue of failing to preserve knowledge), IL also suffers from a problem we call intransigence: the inability of a model to update its knowledge.

Incremental Learning

Memory Efficient Max Flow for Multi-label Submodular MRFs

1 code implementation CVPR 2016 Thalaiyasingam Ajanthan, Richard Hartley, Mathieu Salzmann

Multi-label submodular Markov Random Fields (MRFs) have been shown to be solvable using max-flow based on an encoding of the labels proposed by Ishikawa, in which each variable $X_i$ is represented by $\ell$ nodes (where $\ell$ is the number of labels) arranged in a column.

Efficient Linear Programming for Dense CRFs

no code implementations CVPR 2017 Thalaiyasingam Ajanthan, Alban Desmaison, Rudy Bunel, Mathieu Salzmann, Philip H. S. Torr, M. Pawan Kumar

To this end, we develop a proximal minimization framework, where the dual of each proximal problem is optimized via block coordinate descent.

Semantic Segmentation

Iteratively Reweighted Graph Cut for Multi-label MRFs with Non-convex Priors

no code implementations CVPR 2015 Thalaiyasingam Ajanthan, Richard Hartley, Mathieu Salzmann, Hongdong Li

While widely acknowledged as highly effective in computer vision, multi-label MRFs with non-convex priors are difficult to optimize.
