no code implementations • ECCV 2020 • Viveka Kulharia, Siddhartha Chandra, Amit Agrawal, Philip Torr, Ambrish Tyagi
We propose a weakly supervised approach to semantic segmentation using bounding box annotations.
no code implementations • 18 Apr 2022 • Feihu Zhang, Vladlen Koltun, Philip Torr, René Ranftl, Stephan R. Richter
Semantic segmentation models struggle to generalize in the presence of domain shift.
no code implementations • 18 Apr 2022 • Menghan Wang, Yuchen Guo, Zhenqi Zhao, Guangzheng Hu, Yuming Shen, Mingming Gong, Philip Torr
To alleviate the influence of the annotation bias, we perform a momentum update to ensure a consistent item representation.
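A minimal sketch of what such a momentum update could look like, assuming a PyTorch embedding table; the function name, tensor names, and the momentum value of 0.999 are illustrative, not the paper's implementation.

```python
import torch

def momentum_update(item_emb: torch.Tensor,
                    item_emb_ema: torch.Tensor,
                    momentum: float = 0.999) -> torch.Tensor:
    """Blend the current item embeddings into a slowly moving (EMA) copy,
    so the item representation stays consistent across training steps."""
    with torch.no_grad():
        item_emb_ema.mul_(momentum).add_(item_emb, alpha=1.0 - momentum)
    return item_emb_ema
```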
no code implementations • 15 Mar 2022 • A. Tuan Nguyen, Ser Nam Lim, Philip Torr
To tackle this problem, a great deal of research has gone into studying the training procedure of a network to improve its robustness.
no code implementations • 8 Mar 2022 • Chuhui Xue, Yu Hao, Shijian Lu, Philip Torr, Song Bai
This paper presents a weakly supervised pre-training method that can acquire effective scene text representations by jointly learning and aligning visual and textual information.
1 code implementation • 17 Feb 2022 • Atılım Güneş Baydin, Barak A. Pearlmutter, Don Syme, Frank Wood, Philip Torr
Using backpropagation to compute gradients of objective functions for optimization has remained a mainstay of machine learning.
no code implementations • 23 Jan 2022 • Ming-Ming Cheng, Peng-Tao Jiang, Ling-Hao Han, Liang Wang, Philip Torr
The proposed framework can generate a deep hierarchy of strongly associated supporting evidence for the network decision, which provides insight into the decision-making process.
1 code implementation • 9 Dec 2021 • Yujun Shi, Kuangqi Zhou, Jian Liang, Zihang Jiang, Jiashi Feng, Philip Torr, Song Bai, Vincent Y. F. Tan
Specifically, we experimentally show that directly encouraging the CIL learner at the initial phase to output representations similar to those of a model jointly trained on all classes can greatly boost CIL performance.
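A hedged sketch of one way such an initial-phase alignment could be expressed as a loss, assuming PyTorch feature tensors from the CIL learner and from a jointly trained reference model; the choice of MSE on normalised features and all names are assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def initial_phase_alignment_loss(learner_feats: torch.Tensor,
                                 joint_feats: torch.Tensor) -> torch.Tensor:
    # Encourage the initial-phase CIL learner to produce representations
    # close to those of a model trained jointly on all classes.
    return F.mse_loss(F.normalize(learner_feats, dim=1),
                      F.normalize(joint_feats.detach(), dim=1))
```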
no code implementations • NeurIPS 2021 • Keyu Tian, Chen Lin, Ser Nam Lim, Wanli Ouyang, Puneet Dokania, Philip Torr
Automated data augmentation (ADA) techniques have played an important role in boosting the performance of deep models.
no code implementations • NeurIPS 2021 • Harkirat Singh Behl, M. Pawan Kumar, Philip Torr, Krishnamurthy Dvijotham
Recent progress in neural network verification has challenged the notion of a convex barrier, that is, an inherent weakness in the convex relaxation of the output of a neural network.
no code implementations • NeurIPS 2021 • Feihu Zhang, Philip Torr, Rene Ranftl, Stephan Richter
We present an approach to contrastive representation learning for semantic segmentation.
1 code implementation • 23 Nov 2021 • Zitong Yu, Yuming Shen, Jingang Shi, Hengshuang Zhao, Philip Torr, Guoying Zhao
Remote photoplethysmography (rPPG), which aims at measuring heart activities and physiological signals from facial video without any contact, has great potential in many applications (e.g., remote healthcare and affective computing).
no code implementations • British Machine Vision Conference 2021 • Zhao Yang, Yansong Tang, Luca Bertinetto, Hengshuang Zhao, Philip Torr
In this paper, we investigate the problem of video object segmentation from referring expressions (VOSRE).
Ranked #1 on Referring Expression Segmentation on J-HMDB (Precision@0.9 metric)
no code implementations • 22 Nov 2021 • Jindong Gu, Hengshuang Zhao, Volker Tresp, Philip Torr
The high transferability achieved by our method shows that, in contrast to the observations in previous work, adversarial examples crafted on a segmentation model can transfer easily to other segmentation models.
1 code implementation • 18 Nov 2021 • Jie-Neng Chen, Shuyang Sun, Ju He, Philip Torr, Alan Yuille, Song Bai
The confidence of the label is higher when the corresponding input image is weighted more heavily by the attention map.
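As a rough illustration (not the paper's code), the rule could be written as follows in PyTorch, assuming per-token attention weights that sum to one and a binary mask indicating which tokens come from the second image; all names are hypothetical.

```python
import torch

def attention_weighted_label_mix(attn: torch.Tensor, mask: torch.Tensor,
                                 y_a: torch.Tensor, y_b: torch.Tensor) -> torch.Tensor:
    # attn: (B, N) attention over N input tokens, summing to 1 per sample.
    # mask: (B, N) with 1 where a token originates from image B, 0 otherwise.
    lam = (attn * mask).sum(dim=1, keepdim=True)   # attention mass on image B
    return (1.0 - lam) * y_a + lam * y_b           # soft, attention-weighted label
```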
no code implementations • 29 Sep 2021 • Francesco Pinto, Harry Yang, Ser-Nam Lim, Philip Torr, Puneet K. Dokania
We propose an extremely simple approach to regularize a single deterministic neural network to obtain improved accuracy and reliable uncertainty estimates.
no code implementations • 29 Sep 2021 • Bowen Li, Philip Torr, Thomas Lukasiewicz
We introduce a memory-driven semi-parametric approach to text-to-image generation, which is based on both parametric and non-parametric techniques.
no code implementations • 29 Sep 2021 • Pau de Jorge, Adel Bibi, Riccardo Volpi, Amartya Sanyal, Philip Torr, Grégory Rogez, Puneet K. Dokania
In this work, we methodically revisit the role of noise and clipping in single-step adversarial training.
no code implementations • 29 Sep 2021 • Samuel Sokota, Christian Schroeder de Witt, Maximilian Igl, Luisa M Zintgraf, Philip Torr, J Zico Kolter, Shimon Whiteson, Jakob Nicolaus Foerster
We consider the problem of communicating exogenous information by means of Markov decision process trajectories.
no code implementations • 29 Sep 2021 • Motasem Alfarra, Adel Bibi, Philip Torr, Bernard Ghanem
In this work, we revisit Gaussian randomized smoothing and show that the variance of the Gaussian distribution can be optimized at each input so as to maximize the certification radius for the construction of the smooth classifier.
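A simplified sketch of optimising a per-input smoothing variance, assuming a differentiable PyTorch classifier `model`; it gradient-ascends a Monte-Carlo proxy of the certified radius sigma * Phi^{-1}(p_A), and the sample count, step count, learning rate, and clamping are illustrative choices rather than the paper's procedure.

```python
import torch

def optimize_sigma(model, x: torch.Tensor,
                   sigma0: float = 0.25, steps: int = 10,
                   lr: float = 0.05, n: int = 32) -> torch.Tensor:
    normal = torch.distributions.Normal(0.0, 1.0)
    sigma = torch.tensor(sigma0, requires_grad=True)
    opt = torch.optim.Adam([sigma], lr=lr)
    for _ in range(steps):
        eps = torch.randn(n, *x.shape)                                  # Gaussian samples
        probs = model(x.unsqueeze(0) + sigma * eps).softmax(dim=-1).mean(dim=0)
        p_a = probs.max().clamp(max=0.999)                              # top-class probability
        radius = sigma * normal.icdf(p_a)                               # proxy certified radius
        opt.zero_grad()
        (-radius).backward()                                            # maximise the radius
        opt.step()
    return sigma.detach().clamp(min=1e-3)
```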
no code implementations • NeurIPS Workshop DLDE 2021 • Naeemullah Khan, Angira Sharma, Philip Torr, Ganesh Sundaramoorthi
ST-DNNs are deep networks formulated using partial differential equations (PDEs) so that they are defined on arbitrarily shaped regions.
1 code implementation • ICCV 2021 • Xiaoyu Yue, Shuyang Sun, Zhanghui Kuang, Meng Wei, Philip Torr, Wayne Zhang, Dahua Lin
As a typical example, the Vision Transformer (ViT) directly applies a pure transformer architecture on image classification, by simply splitting images into tokens with a fixed length, and employing transformers to learn relations between these tokens.
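For context, splitting an image into fixed-length patch tokens can be done as in the following PyTorch sketch; the patch size of 16 and the function name are illustrative, and a real ViT additionally applies a learned linear projection and positional embeddings to the tokens.

```python
import torch

def image_to_tokens(img: torch.Tensor, patch: int = 16) -> torch.Tensor:
    # img: (B, C, H, W) -> tokens: (B, N, C * patch * patch), N = (H/patch) * (W/patch)
    B, C, H, W = img.shape
    x = img.unfold(2, patch, patch).unfold(3, patch, patch)   # (B, C, H/p, W/p, p, p)
    x = x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * patch * patch)
    return x
```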
no code implementations • 2 Aug 2021 • Botos Csaba, Xiaojuan Qi, Arslan Chaudhry, Puneet Dokania, Philip Torr
The key ingredients of our approach are: (a) mapping the source to the target domain at the pixel level; (b) training a teacher network on the mapped source and the unannotated target domain using adversarial feature alignment; and (c) finally training a student network using the pseudo-labels obtained from the teacher.
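A minimal sketch of step (c), under the assumption that pseudo-labels are taken from the teacher's confident predictions on the unlabeled target domain; the confidence threshold, names, and masking scheme are illustrative and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      threshold: float = 0.9) -> torch.Tensor:
    # Distil the teacher's confident predictions into the student.
    with torch.no_grad():
        probs = teacher_logits.softmax(dim=1)
        conf, pseudo = probs.max(dim=1)          # confidence and hard pseudo-label
        mask = conf.ge(threshold)                # keep only confident pixels/samples
    loss = F.cross_entropy(student_logits, pseudo, reduction='none')
    return (loss * mask).sum() / mask.sum().clamp(min=1)
```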
2 code implementations • 29 Jul 2021 • Lu Qi, Jason Kuen, Yi Wang, Jiuxiang Gu, Hengshuang Zhao, Zhe Lin, Philip Torr, Jiaya Jia
By removing the need of class label prediction, the models trained for such task can focus more on improving segmentation quality.
no code implementations • 17 Jul 2021 • Samuel Sokota, Christian Schroeder de Witt, Maximilian Igl, Luisa Zintgraf, Philip Torr, Shimon Whiteson, Jakob Foerster
In many common-payoff games, achieving good performance requires players to develop protocols for communicating their private information implicitly -- i.e., using actions that have non-communicative effects on the environment.
1 code implementation • 16 Jul 2021 • Angira Sharma, Naeemullah Khan, Muhammad Mubashar, Ganesh Sundaramoorthi, Philip Torr
For low-fidelity training data (incorrect class label) class-agnostic segmentation loss outperforms the state-of-the-art methods on salient object detection datasets by staggering margins of around 50%.
2 code implementations • 13 Jul 2021 • Shuyang Sun, Xiaoyu Yue, Song Bai, Philip Torr
To model the representations of the two levels, we first encode the information from the whole into part vectors through an attention mechanism, then decode the global information within the part vectors back into the whole representation.
Ranked #159 on Image Classification on ImageNet
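A rough sketch of the whole-to-part encode/decode pattern described in the entry above, using standard PyTorch cross-attention; the number of part vectors, dimensions, and module structure are assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class WholePartAttention(nn.Module):
    """Encode whole-level tokens into a few part vectors via cross-attention,
    then decode the global information in the parts back into the whole."""
    def __init__(self, dim: int = 256, parts: int = 8, heads: int = 4):
        super().__init__()
        self.part_queries = nn.Parameter(torch.randn(parts, dim))
        self.encode = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.decode = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, whole: torch.Tensor) -> torch.Tensor:      # whole: (B, N, dim)
        q = self.part_queries.unsqueeze(0).expand(whole.size(0), -1, -1)
        parts, _ = self.encode(q, whole, whole)                   # (B, parts, dim)
        out, _ = self.decode(whole, parts, parts)                 # (B, N, dim)
        return out
```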
no code implementations • 6 Jun 2021 • ShangHua Gao, Zhong-Yu Li, Ming-Hsuan Yang, Ming-Ming Cheng, Junwei Han, Philip Torr
Powered by the ImageNet dataset, unsupervised learning on large-scale data has made significant advances for classification tasks.
no code implementations • 1 Jan 2021 • Roy Henha Eyono, Fabio Maria Carlucci, Pedro M Esperança, Binxin Ru, Philip Torr
State-of-the-art results in deep learning have been improving steadily, in good part due to the use of larger models.
no code implementations • 1 Jan 2021 • Xiaogang Xu, Hengshuang Zhao, Philip Torr, Jiaya Jia
Specifically, compared with previous methods, we propose a more efficient pixel-level training constraint that reduces the difficulty of aligning adversarial samples to clean samples, which noticeably enhances robustness on adversarial samples.
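One plausible reading of such a pixel-level constraint, as a sketch only: penalise the per-pixel distance between feature maps of the adversarial and clean inputs; the L2 choice and the stop-gradient on the clean branch are assumptions, not the paper's exact constraint.

```python
import torch
import torch.nn.functional as F

def pixel_alignment_loss(feat_adv: torch.Tensor, feat_clean: torch.Tensor) -> torch.Tensor:
    # feat_*: (B, C, H, W) feature maps from the same network for the
    # adversarial and clean versions of an image.
    return F.mse_loss(feat_adv, feat_clean.detach())
```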
no code implementations • ICLR 2021 • Amartya Sanyal, Puneet K. Dokania, Varun Kanade, Philip Torr
We investigate two causes for adversarial vulnerability in deep neural networks: bad data and (poorly) trained models.
no code implementations • 1 Jan 2021 • Naeemullah Khan, Angira Sharma, Philip Torr, Ganesh Sundaramoorthi
We present Shape-Tailored Deep Neural Networks (ST-DNN).
7 code implementations • ICCV 2021 • Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip Torr, Vladlen Koltun
For example, on the challenging S3DIS dataset for large-scale semantic scene segmentation, the Point Transformer attains an mIoU of 70.4% on Area 5, outperforming the strongest prior model by 3.3 absolute percentage points and crossing the 70% mIoU threshold for the first time.
Ranked #2 on Semantic Segmentation on S3DIS Area5
no code implementations • NeurIPS 2020 • Arnab Ghosh, Harkirat Behl, Emilien Dupont, Philip Torr, Vinay Namboodiri
Training Neural Ordinary Differential Equations (ODEs) is often computationally expensive.
no code implementations • 24 Nov 2020 • Shuyang Sun, Liang Chen, Gregory Slabaugh, Philip Torr
Some image restoration tasks like demosaicing require difficult training samples to learn effective models.
1 code implementation • 28 Oct 2020 • Angira Sharma, Naeemullah Khan, Ganesh Sundaramoorthi, Philip Torr
For low-fidelity training data (incorrect class label) class-agnostic segmentation loss outperforms the state-of-the-art methods on salient object detection datasets by staggering margins of around 50%.
no code implementations • 10 Oct 2020 • Thomas Tanay, Aivar Sootla, Matteo Maggioni, Puneet K. Dokania, Philip Torr, Ales Leonardis, Gregory Slabaugh
Recurrent models are becoming a popular choice for video enhancement tasks such as video denoising.
2 code implementations • 16 Sep 2020 • Jonathon Luiten, Aljosa Osep, Patrick Dendorfer, Philip Torr, Andreas Geiger, Laura Leal-Taixe, Bastian Leibe
Multi-Object Tracking (MOT) has been notoriously difficult to evaluate.
1 code implementation • ECCV 2020 • Carlo Biffi, Steven McDonagh, Philip Torr, Ales Leonardis, Sarah Parisot
Object detection has witnessed significant progress by relying on large, manually annotated datasets.
2 code implementations • 14 May 2020 • Christian Schroeder de Witt, Bradley Gram-Hansen, Nantas Nardelli, Andrew Gambardella, Rob Zinkov, Puneet Dokania, N. Siddharth, Ana Belen Espinosa-Gonzalez, Ara Darzi, Philip Torr, Atılım Güneş Baydin
The COVID-19 pandemic has highlighted the importance of in-silico epidemiological modelling in predicting the dynamics of infectious diseases to inform health policy and decision makers about suitable prevention and containment strategies.
no code implementations • 19 Feb 2020 • Arslan Chaudhry, Albert Gordo, Puneet K. Dokania, Philip Torr, David Lopez-Paz
In continual learning, the learner faces a stream of data whose distribution changes over time.
no code implementations • CVPR 2020 • Zhengzhe Liu, Xiaojuan Qi, Philip Torr
In this paper, we conduct an empirical study on fake/real faces, and have two important observations: firstly, the texture of fake faces is substantially different from real ones; secondly, global texture statistics are more robust to image editing and transferable to fake faces from different GANs and datasets.
1 code implementation • ECCV 2020 • Feihu Zhang, Xiaojuan Qi, Ruigang Yang, Victor Prisacariu, Benjamin Wah, Philip Torr
State-of-the-art stereo matching networks have difficulties in generalizing to new unseen environments due to significant domain differences, such as color, illumination, contrast, and texture.
no code implementations • Approximate Inference (AABI) Symposium 2019 • Bradley Gram-Hansen, Christian Schroeder de Witt, Robert Zinkov, Saeid Naderiparizi, Adam Scibior, Andreas Munk, Frank Wood, Mehrdad Ghadiri, Philip Torr, Yee Whye Teh, Atilim Gunes Baydin, Tom Rainforth
We introduce two approaches for conducting efficient Bayesian inference in stochastic simulators containing nested stochastic sub-procedures, i.e., internal procedures for which the density cannot be calculated directly, such as rejection sampling loops.
no code implementations • 25 Sep 2019 • Saumya Jetley, Tommaso Cavallari, Philip Torr, Stuart Golodetz
Deep CNNs have achieved state-of-the-art performance for numerous machine learning and computer vision tasks in recent years, but as they have become increasingly deep, the number of parameters they use has also increased, making them hard to deploy in memory-constrained environments and difficult to interpret.
no code implementations • 25 Sep 2019 • Jishnu Mukhoti, Viveka Kulharia, Amartya Sanyal, Stuart Golodetz, Philip Torr, Puneet Dokania
When combined with temperature scaling, focal loss, whilst preserving accuracy and yielding state-of-the-art calibrated models, also preserves the confidence of the model's correct predictions, which is extremely desirable for downstream tasks.
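For reference, a minimal PyTorch sketch of focal loss followed by post-hoc temperature scaling; gamma and the calibration details are illustrative and this is not the authors' exact setup.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    # Down-weights the loss on well-classified examples by (1 - p_t)^gamma.
    logp_t = F.log_softmax(logits, dim=1).gather(1, targets.unsqueeze(1)).squeeze(1)
    p_t = logp_t.exp()
    return (-(1.0 - p_t) ** gamma * logp_t).mean()

def temperature_scale(logits: torch.Tensor, temperature: float) -> torch.Tensor:
    # Post-hoc calibration: divide logits by a temperature fitted on a held-out set.
    return logits / temperature
```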
3 code implementations • 8 Jul 2019 • Atılım Güneş Baydin, Lei Shao, Wahid Bhimji, Lukas Heinrich, Lawrence Meadows, Jialin Liu, Andreas Munk, Saeid Naderiparizi, Bradley Gram-Hansen, Gilles Louppe, Mingfei Ma, Xiaohui Zhao, Philip Torr, Victor Lee, Kyle Cranmer, Prabhat, Frank Wood
Probabilistic programming languages (PPLs) are receiving widespread attention for performing Bayesian inference in complex generative models.
no code implementations • 20 Jun 2019 • Tommaso Cavallari, Luca Bertinetto, Jishnu Mukhoti, Philip Torr, Stuart Golodetz
Many applications require a camera to be relocalised online, without expensive offline training on the target scene.
18 code implementations • 2 Apr 2019 • Shang-Hua Gao, Ming-Ming Cheng, Kai Zhao, Xin-Yu Zhang, Ming-Hsuan Yang, Philip Torr
We evaluate the Res2Net block on all these models and demonstrate consistent performance gains over baseline models on widely-used datasets, e.g., CIFAR-100 and ImageNet.
Ranked #7 on RGB Salient Object Detection on PASCAL-S
3 code implementations • NeurIPS 2019 • Atılım Güneş Baydin, Lukas Heinrich, Wahid Bhimji, Lei Shao, Saeid Naderiparizi, Andreas Munk, Jialin Liu, Bradley Gram-Hansen, Gilles Louppe, Lawrence Meadows, Philip Torr, Victor Lee, Prabhat, Kyle Cranmer, Frank Wood
We present a novel probabilistic programming framework that couples directly to existing large-scale simulators through a cross-platform probabilistic execution protocol, which allows general-purpose inference engines to record and control random number draws within simulators in a language-agnostic way.
no code implementations • 17 Apr 2018 • Rodrigo de Bem, Arnab Ghosh, Thalaiyasingam Ajanthan, Ondrej Miksik, Adnane Boukhayma, N. Siddharth, Philip Torr
However, the latent space learned by such approaches is typically not interpretable, resulting in less flexibility.
no code implementations • ECCV 2018 • Jack Valmadre, Luca Bertinetto, João F. Henriques, Ran Tao, Andrea Vedaldi, Arnold Smeulders, Philip Torr, Efstratios Gavves
We introduce the OxUvA dataset and benchmark for evaluating single-object tracking algorithms.
no code implementations • 24 Jan 2017 • Måns Larsson, Anurag Arnab, Fredrik Kahl, Shuai Zheng, Philip Torr
It is empirically demonstrated that such learned potentials can improve segmentation accuracy and that certain label class interactions are indeed better modelled by a non-Gaussian potential.
no code implementations • 7 Dec 2016 • Qibin Hou, Puneet Kumar Dokania, Daniela Massiceti, Yunchao Wei, Ming-Ming Cheng, Philip Torr
We focus on the following three aspects of EM: (i) initialization; (ii) latent posterior estimation (E-step); and (iii) the parameter update (M-step).
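As a generic illustration of these three ingredients (not the paper's segmentation-specific EM), here is a NumPy sketch of EM on a 1-D Gaussian mixture.

```python
import numpy as np

def em_gmm_1d(x: np.ndarray, k: int = 2, iters: int = 50, seed: int = 0):
    # (i) Initialization: random means, shared variance, uniform mixing weights.
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, k)
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # (ii) E-step: posterior responsibility of each component for each point.
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # (iii) M-step: re-estimate parameters from the responsibilities.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return mu, var, pi
```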
4 code implementations • ICCV 2017 • Gurkirt Singh, Suman Saha, Michael Sapienza, Philip Torr, Fabio Cuzzolin
To the best of our knowledge, ours is the first real-time (up to 40fps) system able to perform online S/T action localisation and early action prediction on the untrimmed videos of UCF101-24.
2 code implementations • CVPR 2017 • Qibin Hou, Ming-Ming Cheng, Xiao-Wei Hu, Ali Borji, Zhuowen Tu, Philip Torr
Recent progress on saliency detection is substantial, benefiting mostly from the explosive development of Convolutional Neural Networks (CNNs).
Ranked #4 on RGB Salient Object Detection on SBU
no code implementations • 18 Mar 2016 • Julien Valentin, Angela Dai, Matthias Nießner, Pushmeet Kohli, Philip Torr, Shahram Izadi, Cem Keskin
We demonstrate the efficacy of our approach on the challenging problem of RGB Camera Relocalization.
no code implementations • 10 Jan 2016 • Anurag Arnab, Michael Sapienza, Stuart Golodetz, Julien Valentin, Ondrej Miksik, Shahram Izadi, Philip Torr
It is not always possible to recognise objects and infer material properties for a scene from visual cues alone, since objects can look visually similar whilst being made of very different materials.
2 code implementations • CVPR 2016 • Luca Bertinetto, Jack Valmadre, Stuart Golodetz, Ondrej Miksik, Philip Torr
Correlation Filter-based trackers have recently achieved excellent performance, showing great robustness to challenging situations exhibiting motion blur and illumination changes.
Ranked #13 on Visual Object Tracking on TrackingNet
no code implementations • 3 Dec 2015 • Saumya Jetley, Bernardino Romera-Paredes, Sadeep Jayasumana, Philip Torr
Recent works on zero-shot learning make use of side information such as visual attributes or natural language semantics to define the relations between output visual classes and then use these relationships to draw inference on new unseen classes at test time.
1 code implementation • 25 Nov 2015 • Anurag Arnab, Sadeep Jayasumana, Shuai Zheng, Philip Torr
Recent deep learning approaches have incorporated CRFs into Convolutional Neural Networks (CNNs), with some even training the CRF end-to-end with the rest of the network.
Ranked #45 on Semantic Segmentation on PASCAL Context
no code implementations • CVPR 2014 • Ming-Ming Cheng, Ziming Zhang, Wen-Yan Lin, Philip Torr
Training a generic objectness measure to produce a small set of candidate object windows has been shown to speed up the classical sliding window object detection paradigm.
no code implementations • 20 Apr 2014 • Peng Wang, Chunhua Shen, Anton Van Den Hengel, Philip Torr
We propose a Branch-and-Cut (B&C) method for solving general MAP-MRF inference problems.
no code implementations • NeurIPS 2013 • Vibhav Vineet, Carsten Rother, Philip Torr
Many methods have been proposed to recover the intrinsic scene properties such as shape, reflectance and illumination from a single image.
no code implementations • 16 Oct 2013 • Ming-Ming Cheng, Shuai Zheng, Wen-Yan Lin, Jonathan Warrell, Vibhav Vineet, Paul Sturgess, Nigel Crook, Niloy Mitra, Philip Torr
This allows us to formulate the image parsing problem as one of jointly estimating per-pixel object and attribute labels from a set of training images.
no code implementations • NeurIPS 2011 • Ziming Zhang, Lubor Ladicky, Philip Torr, Amir Saffari
It provides a set of anchor points which form a local coordinate system, such that each data point on the manifold can be approximated by a linear combination of its anchor points, and the linear weights become the local coordinate coding.
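A small NumPy sketch of the idea, in the spirit of locality-constrained coding: approximate a data point by an affine combination of its nearest anchor points, with the weights serving as the local coordinate code. The k-NN restriction, the regulariser, and all names are assumptions, not the paper's exact formulation.

```python
import numpy as np

def local_coordinate_code(x: np.ndarray, anchors: np.ndarray,
                          knn: int = 5, reg: float = 1e-4) -> np.ndarray:
    # x: (D,) data point; anchors: (M, D) anchor points on the manifold.
    d = np.linalg.norm(anchors - x, axis=1)
    idx = np.argsort(d)[:knn]                      # nearest anchors only
    A = anchors[idx] - x                           # shift anchors into the local frame
    G = A @ A.T + reg * np.eye(knn)                # regularised local Gram matrix
    w = np.linalg.solve(G, np.ones(knn))
    w /= w.sum()                                   # affine weights (sum to 1)
    code = np.zeros(len(anchors))
    code[idx] = w
    return code                                    # reconstruction: code @ anchors ≈ x
```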
no code implementations • NeurIPS 2008 • Philip Torr, M. P. Kumar
Compared to previous approaches based on the LP relaxation, e.g., interior-point algorithms or tree-reweighted message passing (TRW), our method is faster as it uses only the efficient st-mincut algorithm in its design.