no code implementations • ECCV 2020 • Viveka Kulharia, Siddhartha Chandra, Amit Agrawal, Philip Torr, Ambrish Tyagi
We propose a weakly supervised approach to semantic segmentation using bounding box annotations.
no code implementations • 28 Nov 2023 • Hang Li, Chengzhi Shen, Philip Torr, Volker Tresp, Jindong Gu
Previous work interprets vectors in an interpretable latent space of diffusion models as semantic concepts.
1 code implementation • 26 Oct 2023 • Jindong Gu, Xiaojun Jia, Pau de Jorge, Wenqian Yu, Xinwei Liu, Avery Ma, Yuan Xun, Anjun Hu, Ashkan Khakzar, Zhijiang Li, Xiaochun Cao, Philip Torr
This survey explores the landscape of the adversarial transferability of adversarial examples.
no code implementations • 26 Oct 2023 • Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Dawn Song, Pieter Abbeel, Yuval Noah Harari, Ya-Qin Zhang, Lan Xue, Shai Shalev-Shwartz, Gillian Hadfield, Jeff Clune, Tegan Maharaj, Frank Hutter, Atılım Güneş Baydin, Sheila Mcilraith, Qiqi Gao, Ashwin Acharya, David Krueger, Anca Dragan, Philip Torr, Stuart Russell, Daniel Kahneman, Jan Brauner, Sören Mindermann
In this short consensus paper, we outline risks from upcoming, advanced AI systems.
no code implementations • 16 Oct 2023 • Jianhao Yuan, Jie Zhang, Shuyang Sun, Philip Torr, Bo Zhao
Synthetic training data has gained prominence in numerous learning tasks and scenarios, offering advantages such as dataset augmentation, generalization evaluation, and privacy preservation.
no code implementations • 12 Oct 2023 • Luke Marks, Amir Abdullah, Luna Mendez, Rauno Arike, Philip Torr, Fazl Barez
We propose a novel method for interpreting implicit reward models (IRMs) in LLMs learned through RLHF.
no code implementations • 11 Oct 2023 • Jia-Wang Bian, Wenjing Bian, Victor Adrian Prisacariu, Philip Torr
On the MobileBrick dataset, which contains casually captured unbounded 360-degree videos, our method refines ARKit poses and improves the reconstruction F1 score from 69.18 to 75.67, outperforming that with the dataset-provided ground-truth pose (75.14).
no code implementations • 10 Oct 2023 • Yang Zhang, Yawei Li, Hannah Brown, Mina Rezaei, Bernd Bischl, Philip Torr, Ashkan Khakzar, Kenji Kawaguchi
In this paper, we address this missing link by explicitly designing the neural network, manually setting its weights, and by designing the data, so that we know precisely which input features in the dataset are relevant to the designed network.
no code implementations • 12 Sep 2023 • Jindong Gu, Fangyun Wei, Philip Torr, Han Hu
In this work, we first taxonomize the stochastic defense strategies against QBBA.
2 code implementations • 3 Aug 2023 • Yibo Yang, Haobo Yuan, Xiangtai Li, Jianlong Wu, Lefei Zhang, Zhouchen Lin, Philip Torr, DaCheng Tao, Bernard Ghanem
Beyond the normal case, long-tail class-incremental learning and few-shot class-incremental learning have also been proposed to address data imbalance and data scarcity, respectively; both are common in real-world deployments and further exacerbate the well-known problem of catastrophic forgetting.
1 code implementation • 24 Jul 2023 • Jindong Gu, Zhen Han, Shuo Chen, Ahmad Beirami, Bailan He, Gengyuan Zhang, Ruotong Liao, Yao Qin, Volker Tresp, Philip Torr
This paper aims to provide a comprehensive survey of cutting-edge research in prompt engineering on three types of vision-language models: multimodal-to-text generation models (e.g. Flamingo), image-text matching models (e.g.
no code implementations • ICCV 2023 • Runjia Li, Shuyang Sun, Mohamed Elhoseiny, Philip Torr
Hence, humour generation and understanding can serve as a new task for evaluating the ability of deep-learning methods to process abstract and subjective information.
no code implementations • 14 Jun 2023 • Wenqian Yu, Jindong Gu, Zhijiang Li, Philip Torr
Adversarial examples (AEs) with small adversarial perturbations can mislead deep neural networks (DNNs) into wrong predictions.
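As a hedged illustration of how a small perturbation can flip a prediction (not the paper's own attack), the classic fast-gradient-sign method on a toy linear scorer shows the mechanism in a few lines; the weights and input below are made up for the example:

```python
import numpy as np

def fgsm_linear(w, b, x, eps):
    """Fast-gradient-sign perturbation for a linear scorer f(x) = w.x + b.

    The gradient of the score w.r.t. x is just w, so the attack moves x
    against its current decision: -eps * sign(w) when the score is
    positive, +eps * sign(w) otherwise."""
    score = w @ x + b
    direction = -np.sign(w) if score > 0 else np.sign(w)
    return x + eps * direction

w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.3, -0.2, 0.1])         # clean input, score = 0.75 > 0
x_adv = fgsm_linear(w, b, x, eps=0.5)  # L-inf perturbation of size 0.5
print(w @ x + b, w @ x_adv + b)        # score drops from 0.75 to -1.0
```

Each coordinate of the input moves by at most `eps`, yet the decision flips; deep networks exhibit the same sensitivity, which is the phenomenon the abstract refers to.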
1 code implementation • 27 May 2023 • Liheng Ma, Chen Lin, Derek Lim, Adriana Romero-Soriano, Puneet K. Dokania, Mark Coates, Philip Torr, Ser-Nam Lim
Graph inductive biases are crucial for Graph Transformers, and previous works incorporate them using message-passing modules and/or positional encodings.
Ranked #1 on Graph Classification on CIFAR10 100k (Accuracy metric)
no code implementations • 17 May 2023 • Francisco Eiras, Adel Bibi, Rudy Bunel, Krishnamurthy Dj Dvijotham, Philip Torr, M. Pawan Kumar
Recent work provides promising evidence that physics-informed neural networks (PINNs) can efficiently solve partial differential equations (PDEs).
1 code implementation • 16 May 2023 • Ameya Prabhu, Zhipeng Cai, Puneet Dokania, Philip Torr, Vladlen Koltun, Ozan Sener
In this paper, we target such applications, investigating the online continual learning problem under relaxed storage constraints and limited computational budgets.
no code implementations • 17 Apr 2023 • Jindong Gu, Ahmad Beirami, Xuezhi Wang, Alex Beutel, Philip Torr, Yao Qin
With the advent of vision-language models (VLMs) that can perform in-context and prompt-based learning, how can we design prompting approaches that robustly generalize to distribution shift and can be used on novel classes outside the support set of the prompts?
no code implementations • 21 Mar 2023 • Haoheng Lan, Jindong Gu, Philip Torr, Hengshuang Zhao
In this work, we explore backdoor attacks on segmentation models that misclassify all pixels of a victim class by injecting a specific trigger on non-victim pixels during inference, which we dub the Influencer Backdoor Attack (IBA).
1 code implementation • CVPR 2023 • Pau de Jorge, Riccardo Volpi, Philip Torr, Gregory Rogez
We analyze a broad variety of models, spanning from older ResNet-based architectures to novel transformers and assess their reliability based on four metrics: robustness, calibration, misclassification detection and out-of-distribution (OOD) detection.
no code implementations • 7 Feb 2023 • Zitong Yu, Yuming Shen, Jingang Shi, Hengshuang Zhao, Yawen Cui, Jiehua Zhang, Philip Torr, Guoying Zhao
As key modules in PhysFormer, the temporal difference transformers first enhance the quasi-periodic rPPG features with temporal difference guided global attention, and then refine the local spatio-temporal representation against interference.
1 code implementation • 6 Feb 2023 • Yibo Yang, Haobo Yuan, Xiangtai Li, Zhouchen Lin, Philip Torr, DaCheng Tao
In this paper, we address this misalignment dilemma in FSCIL, inspired by the recently discovered phenomenon of neural collapse: the last-layer features of each class collapse to a vertex, and the vertices of all classes align with the classifier prototypes, together forming a simplex equiangular tight frame (ETF).
1 code implementation • ICLR 2023 • Yibo Yang, Haobo Yuan, Xiangtai Li, Zhouchen Lin, Philip Torr, DaCheng Tao
Ranked #2 on Few-Shot Class-Incremental Learning on CUB-200-2011 (Average Accuracy metric)
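The simplex ETF mentioned above has a simple closed form: K unit vectors whose pairwise cosine similarity is exactly -1/(K-1). A minimal sketch (the centered-basis construction, not the paper's code) builds one and checks that property:

```python
import numpy as np

def simplex_etf(K):
    """K class prototypes forming a simplex equiangular tight frame:
    every pair of prototypes has cosine similarity -1/(K-1)."""
    E = np.eye(K)
    V = E - np.ones((K, K)) / K          # center the standard basis
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    return V                             # rows are unit-norm prototypes

K = 5
V = simplex_etf(K)
G = V @ V.T                              # Gram matrix of cosines
off_diag = G[~np.eye(K, dtype=bool)]
print(off_diag)                          # every entry equals -1/(K-1) = -0.25
```

This maximal equiangular separation is what the classifier prototypes are aligned to in the neural-collapse-inspired FSCIL method.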
no code implementations • 21 Dec 2022 • Jianhao Yuan, Francesco Pinto, Adam Davies, Philip Torr
Neural image classifiers are known to undergo severe performance degradation when exposed to inputs that exhibit covariate shifts with respect to the training distribution.
no code implementations • 11 Dec 2022 • Xiaogang Xu, Hengshuang Zhao, Philip Torr, Jiaya Jia
In this paper, we use Deep Generative Networks (DGNs) with a novel training mechanism to eliminate the distribution gap.
no code implementations • 29 Nov 2022 • Shuyang Sun, Jie-Neng Chen, Ruifei He, Alan Yuille, Philip Torr, Song Bai
LUMix is simple: it can be implemented in just a few lines of code and can be universally applied to any deep network, e.g., CNNs and Vision Transformers, with minimal computational cost.
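For context on what "a few lines of code" means for this family of augmentations, here is vanilla MixUp (the baseline LUMix builds on, not LUMix itself): two samples and their one-hot labels are convexly combined with a Beta-distributed coefficient.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Vanilla MixUp: convex-combine two inputs and their one-hot labels
    with a coefficient lam ~ Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x1, y1 = np.full((4, 4), 1.0), np.array([1.0, 0.0])
x2, y2 = np.full((4, 4), 0.0), np.array([0.0, 1.0])
x_mix, y_mix = mixup(x1, y1, x2, y2)
print(y_mix.sum())                       # mixed label still sums to 1.0
```

LUMix modifies how the label coefficient is set, but the overall shape of the augmentation is this small.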
1 code implementation • 14 Oct 2022 • Ruifei He, Shuyang Sun, Xin Yu, Chuhui Xue, Wenqing Zhang, Philip Torr, Song Bai, Xiaojuan Qi
Recent text-to-image generation models have shown promising results in generating high-fidelity photo-realistic images.
no code implementations • 26 Sep 2022 • Botos Csaba, Adel Bibi, Yanwei Li, Philip Torr, Ser-Nam Lim
Deep learning models for vision tasks are trained on large datasets under the assumption that there exists a universal representation that can be used to make predictions for all samples.
1 code implementation • 25 Jul 2022 • Jindong Gu, Hengshuang Zhao, Volker Tresp, Philip Torr
Since SegPGD can create more effective adversarial examples, the adversarial training with our SegPGD can boost the robustness of segmentation models.
no code implementations • 13 Jul 2022 • Ziyi Shen, Qianye Yang, Yuming Shen, Francesco Giganti, Vasilis Stavrinides, Richard Fan, Caroline Moore, Mirabela Rusu, Geoffrey Sonn, Philip Torr, Dean Barratt, Yipeng Hu
Image registration is useful for quantifying morphological changes in longitudinal MR images from prostate cancer patients.
no code implementations • 18 Apr 2022 • Menghan Wang, Yuchen Guo, Zhenqi Zhao, Guangzheng Hu, Yuming Shen, Mingming Gong, Philip Torr
To alleviate the influence of the annotation bias, we perform a momentum update to ensure a consistent item representation.
no code implementations • 18 Apr 2022 • Feihu Zhang, Vladlen Koltun, Philip Torr, René Ranftl, Stephan R. Richter
Semantic segmentation models struggle to generalize in the presence of domain shift.
no code implementations • 15 Mar 2022 • A. Tuan Nguyen, Ser Nam Lim, Philip Torr
To tackle this problem, a great amount of research has been done to study the training procedure of a network to improve its robustness.
no code implementations • 8 Mar 2022 • Chuhui Xue, Wenqing Zhang, Yu Hao, Shijian Lu, Philip Torr, Song Bai
Our network consists of an image encoder and a character-aware text encoder that extract visual and textual features, respectively, as well as a visual-textual decoder that models the interaction between textual and visual features to learn effective scene text representations.
Optical Character Recognition (OCR)
2 code implementations • 17 Feb 2022 • Atılım Güneş Baydin, Barak A. Pearlmutter, Don Syme, Frank Wood, Philip Torr
Using backpropagation to compute gradients of objective functions for optimization has remained a mainstay of machine learning.
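The alternative this paper develops is the forward gradient: for a random direction v ~ N(0, I), the quantity (∇f·v)v is an unbiased estimate of ∇f and needs only a forward-mode directional derivative. A minimal sketch (using a central difference in place of true forward-mode AD, and averaging many directions to make the estimate visible) assumes a toy quadratic objective:

```python
import numpy as np

def forward_gradient(f, x, n_dirs=20000, h=1e-5, rng=None):
    """Unbiased gradient estimate from directional derivatives:
    E[(grad f . v) v] = grad f for v ~ N(0, I). The directional
    derivative is approximated with a central difference here; a true
    forward-mode AD JVP would compute it exactly in one pass."""
    rng = rng or np.random.default_rng(0)
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        v = rng.standard_normal(x.shape)
        dd = (f(x + h * v) - f(x - h * v)) / (2 * h)   # grad f . v
        g += dd * v
    return g / n_dirs

f = lambda x: float(x @ x)               # grad f = 2x
x = np.array([1.0, -0.5, 2.0])
g = forward_gradient(f, x)
print(g)                                 # approaches [2.0, -1.0, 4.0]
```

In practice a single direction per step suffices for stochastic optimization; the averaging above is only to demonstrate unbiasedness.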
no code implementations • 23 Jan 2022 • Ming-Ming Cheng, Peng-Tao Jiang, Ling-Hao Han, Liang Wang, Philip Torr
The proposed framework can generate a deep hierarchy of strongly associated supporting evidence for the network decision, which provides insight into the decision-making process.
2 code implementations • 17 Jan 2022 • Shashwat Goel, Ameya Prabhu, Amartya Sanyal, Ser-Nam Lim, Philip Torr, Ponnurangam Kumaraguru
Machine Learning models face increased concerns regarding the storage of personal user data and adverse impacts of corrupted data like backdoors or systematic bias.
1 code implementation • CVPR 2022 • Yujun Shi, Kuangqi Zhou, Jian Liang, Zihang Jiang, Jiashi Feng, Philip Torr, Song Bai, Vincent Y. F. Tan
Specifically, we experimentally show that directly encouraging CIL Learner at the initial phase to output similar representations as the model jointly trained on all classes can greatly boost the CIL performance.
no code implementations • NeurIPS 2021 • Keyu Tian, Chen Lin, Ser Nam Lim, Wanli Ouyang, Puneet Dokania, Philip Torr
Automated data augmentation (ADA) techniques have played an important role in boosting the performance of deep models.
no code implementations • NeurIPS 2021 • Feihu Zhang, Philip Torr, Rene Ranftl, Stephan Richter
We present an approach to contrastive representation learning for semantic segmentation.
no code implementations • NeurIPS 2021 • Harkirat Singh Behl, M. Pawan Kumar, Philip Torr, Krishnamurthy Dvijotham
Recent progress in neural network verification has challenged the notion of a convex barrier, that is, an inherent weakness in the convex relaxation of the output of a neural network.
1 code implementation • CVPR 2022 • Zitong Yu, Yuming Shen, Jingang Shi, Hengshuang Zhao, Philip Torr, Guoying Zhao
Remote photoplethysmography (rPPG), which aims at measuring heart activities and physiological signals from facial video without any contact, has great potential in many applications (e.g., remote healthcare and affective computing).
no code implementations • British Machine Vision Conference 2021 • Zhao Yang, Yansong Tang, Luca Bertinetto, Hengshuang Zhao, Philip Torr
In this paper, we investigate the problem of video object segmentation from referring expressions (VOSRE).
Ranked #1 on Referring Expression Segmentation on J-HMDB (Precision@0.9 metric)
Optical Flow Estimation • Referring Expression Segmentation
no code implementations • 22 Nov 2021 • Jindong Gu, Hengshuang Zhao, Volker Tresp, Philip Torr
The high transferability achieved by our method shows that, in contrast to the observations in previous work, adversarial examples on a segmentation model can be easy to transfer to other segmentation models.
2 code implementations • CVPR 2022 • Jie-Neng Chen, Shuyang Sun, Ju He, Philip Torr, Alan Yuille, Song Bai
The confidence of the label will be larger if the corresponding input image is weighted higher by the attention map.
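A hedged sketch of that idea (my own toy version, not the paper's implementation): after a CutMix-style paste, the label mixing coefficient is set by the attention mass inside the pasted box rather than by its area, so a highly attended pasted region earns its label a larger share.

```python
import numpy as np

def attention_label_weight(attn, box):
    """Re-weight a CutMix label by attention mass rather than box area:
    returns lam, the original image's label share, i.e. the fraction of
    total attention falling OUTSIDE the pasted box."""
    r0, r1, c0, c1 = box
    pasted = attn[r0:r1, c0:c1].sum()
    return 1.0 - pasted / attn.sum()

attn = np.ones((8, 8))
attn[2:6, 2:6] = 4.0                     # model attends strongly to the centre
lam_area = 1.0 - 16 / 64                 # plain CutMix: 1 - box-area fraction = 0.75
lam_attn = attention_label_weight(attn, (2, 6, 2, 6))
print(lam_area, lam_attn)                # 0.75 vs ~0.43: pasted region dominates
```

Because the pasted box covers the most-attended pixels, its label share grows well beyond its 25% area share.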
no code implementations • 29 Sep 2021 • Francesco Pinto, Harry Yang, Ser-Nam Lim, Philip Torr, Puneet K. Dokania
We propose an extremely simple approach to regularize a single deterministic neural network to obtain improved accuracy and reliable uncertainty estimates.
no code implementations • 29 Sep 2021 • Pau de Jorge, Adel Bibi, Riccardo Volpi, Amartya Sanyal, Philip Torr, Grégory Rogez, Puneet K. Dokania
In this work, we methodically revisit the role of noise and clipping in single-step adversarial training.
no code implementations • NeurIPS Workshop DLDE 2021 • Naeemullah Khan, Angira Sharma, Philip Torr, Ganesh Sundaramoorthi
ST-DNNs are deep networks formulated through partial differential equations (PDEs) so that they are defined on arbitrarily shaped regions.
1 code implementation • ICCV 2021 • Xiaoyu Yue, Shuyang Sun, Zhanghui Kuang, Meng Wei, Philip Torr, Wayne Zhang, Dahua Lin
As a typical example, the Vision Transformer (ViT) directly applies a pure transformer architecture on image classification, by simply splitting images into tokens with a fixed length, and employing transformers to learn relations between these tokens.
no code implementations • 2 Aug 2021 • Botos Csaba, Xiaojuan Qi, Arslan Chaudhry, Puneet Dokania, Philip Torr
The key ingredients to our approach are -- (a) mapping the source to the target domain on pixel-level; (b) training a teacher network on the mapped source and the unannotated target domain using adversarial feature alignment; and (c) finally training a student network using the pseudo-labels obtained from the teacher.
2 code implementations • 29 Jul 2021 • Lu Qi, Jason Kuen, Yi Wang, Jiuxiang Gu, Hengshuang Zhao, Zhe Lin, Philip Torr, Jiaya Jia
By removing the need of class label prediction, the models trained for such task can focus more on improving segmentation quality.
1 code implementation • 17 Jul 2021 • Samuel Sokota, Christian Schroeder de Witt, Maximilian Igl, Luisa Zintgraf, Philip Torr, Martin Strohmeier, J. Zico Kolter, Shimon Whiteson, Jakob Foerster
We contribute a theoretically grounded approach to MCGs based on maximum entropy reinforcement learning and minimum entropy coupling that we call MEME.
1 code implementation • 16 Jul 2021 • Angira Sharma, Naeemullah Khan, Muhammad Mubashar, Ganesh Sundaramoorthi, Philip Torr
For low-fidelity training data (incorrect class label) class-agnostic segmentation loss outperforms the state-of-the-art methods on salient object detection datasets by staggering margins of around 50%.
2 code implementations • 13 Jul 2021 • Shuyang Sun, Xiaoyu Yue, Song Bai, Philip Torr
To model the representations of the two levels, we first encode the information from the whole into part vectors through an attention mechanism, then decode the global information within the part vectors back into the whole representation.
Ranked #292 on Image Classification on ImageNet
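The whole-to-part encoding described above is cross-attention: each part vector queries all tokens of the whole and pools them into a convex combination. A minimal sketch (random toy tokens, no learned projections) shows the shape of the operation:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def encode_parts(parts, tokens):
    """Cross-attention that pools 'whole' tokens into a few part vectors:
    each part queries all tokens and takes a convex combination of them."""
    d = tokens.shape[-1]
    attn = softmax(parts @ tokens.T / np.sqrt(d))  # (n_parts, n_tokens)
    return attn @ tokens, attn

rng = np.random.default_rng(0)
tokens = rng.standard_normal((16, 8))    # the "whole": 16 tokens of dim 8
parts = rng.standard_normal((4, 8))      # 4 part queries
out, attn = encode_parts(parts, tokens)
print(out.shape, attn.sum(axis=1))       # (4, 8); each attention row sums to 1
```

Decoding the global information back into the whole representation is the same operation with the roles of queries and keys swapped.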
3 code implementations • 6 Jun 2021 • ShangHua Gao, Zhong-Yu Li, Ming-Hsuan Yang, Ming-Ming Cheng, Junwei Han, Philip Torr
In this work, we propose a new problem of large-scale unsupervised semantic segmentation (LUSS) with a newly created benchmark dataset to help the research progress.
Ranked #1 on Unsupervised Semantic Segmentation on ImageNet-S-50
no code implementations • 1 Jan 2021 • Xiaogang Xu, Hengshuang Zhao, Philip Torr, Jiaya Jia
Specifically, compared with previous methods, we propose a more efficient pixel-level training constraint that reduces the difficulty of aligning adversarial samples to clean samples, which markedly enhances robustness on adversarial samples.
no code implementations • 1 Jan 2021 • Naeemullah Khan, Angira Sharma, Philip Torr, Ganesh Sundaramoorthi
We present Shape-Tailored Deep Neural Networks (ST-DNN).
no code implementations • ICLR 2021 • Amartya Sanyal, Puneet K. Dokania, Varun Kanade, Philip Torr
We investigate two causes for adversarial vulnerability in deep neural networks: bad data and (poorly) trained models.
no code implementations • 1 Jan 2021 • Roy Henha Eyono, Fabio Maria Carlucci, Pedro M Esperança, Binxin Ru, Philip Torr
State-of-the-art results in deep learning have been improving steadily, in good part due to the use of larger models.
22 code implementations • ICCV 2021 • Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip Torr, Vladlen Koltun
For example, on the challenging S3DIS dataset for large-scale semantic scene segmentation, the Point Transformer attains an mIoU of 70.4% on Area 5, outperforming the strongest prior model by 3.3 absolute percentage points and crossing the 70% mIoU threshold for the first time.
Ranked #3 on 3D Semantic Segmentation on STPLS3D
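The mIoU metric quoted above is the per-class intersection-over-union averaged over classes. A minimal sketch on flat label maps (a generic implementation, not the benchmark's evaluation code):

```python
import numpy as np

def miou(pred, gt, n_classes):
    """Mean intersection-over-union over classes present in pred or gt."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union:                        # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

gt = np.array([0, 0, 1, 1, 2, 2])
pred = np.array([0, 0, 1, 2, 2, 2])
print(miou(pred, gt, 3))                 # (1.0 + 0.5 + 2/3) / 3 ≈ 0.722
```

Benchmark evaluations typically accumulate the per-class intersections and unions over the whole dataset before dividing, rather than averaging per image.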
no code implementations • NeurIPS 2020 • Arnab Ghosh, Harkirat Behl, Emilien Dupont, Philip Torr, Vinay Namboodiri
Training Neural Ordinary Differential Equations (ODEs) is often computationally expensive.
no code implementations • 24 Nov 2020 • Shuyang Sun, Liang Chen, Gregory Slabaugh, Philip Torr
Some image restoration tasks like demosaicing require difficult training samples to learn effective models.
1 code implementation • 28 Oct 2020 • Angira Sharma, Naeemullah Khan, Ganesh Sundaramoorthi, Philip Torr
For low-fidelity training data (incorrect class label) class-agnostic segmentation loss outperforms the state-of-the-art methods on salient object detection datasets by staggering margins of around 50%.
no code implementations • 10 Oct 2020 • Thomas Tanay, Aivar Sootla, Matteo Maggioni, Puneet K. Dokania, Philip Torr, Ales Leonardis, Gregory Slabaugh
Recurrent models are a popular choice for video enhancement tasks such as video denoising or super-resolution.
4 code implementations • 16 Sep 2020 • Jonathon Luiten, Aljosa Osep, Patrick Dendorfer, Philip Torr, Andreas Geiger, Laura Leal-Taixe, Bastian Leibe
Multi-Object Tracking (MOT) has been notoriously difficult to evaluate.
1 code implementation • ECCV 2020 • Carlo Biffi, Steven McDonagh, Philip Torr, Ales Leonardis, Sarah Parisot
Object detection has witnessed significant progress by relying on large, manually annotated datasets.
2 code implementations • 14 May 2020 • Christian Schroeder de Witt, Bradley Gram-Hansen, Nantas Nardelli, Andrew Gambardella, Rob Zinkov, Puneet Dokania, N. Siddharth, Ana Belen Espinosa-Gonzalez, Ara Darzi, Philip Torr, Atılım Güneş Baydin
The COVID-19 pandemic has highlighted the importance of in-silico epidemiological modelling in predicting the dynamics of infectious diseases to inform health policy and decision makers about suitable prevention and containment strategies.
no code implementations • 19 Feb 2020 • Arslan Chaudhry, Albert Gordo, Puneet K. Dokania, Philip Torr, David Lopez-Paz
In continual learning, the learner faces a stream of data whose distribution changes over time.
1 code implementation • CVPR 2020 • Zhengzhe Liu, Xiaojuan Qi, Philip Torr
In this paper, we conduct an empirical study on fake/real faces, and have two important observations: firstly, the texture of fake faces is substantially different from real ones; secondly, global texture statistics are more robust to image editing and transferable to fake faces from different GANs and datasets.
1 code implementation • ECCV 2020 • Feihu Zhang, Xiaojuan Qi, Ruigang Yang, Victor Prisacariu, Benjamin Wah, Philip Torr
State-of-the-art stereo matching networks have difficulties in generalizing to new unseen environments due to significant domain differences, such as color, illumination, contrast, and texture.
no code implementations • Approximate Inference (AABI) Symposium 2019 • Bradley Gram-Hansen, Christian Schroeder de Witt, Robert Zinkov, Saeid Naderiparizi, Adam Scibior, Andreas Munk, Frank Wood, Mehrdad Ghadiri, Philip Torr, Yee Whye Teh, Atilim Gunes Baydin, Tom Rainforth
We introduce two approaches for conducting efficient Bayesian inference in stochastic simulators containing nested stochastic sub-procedures, i.e., internal procedures for which the density cannot be calculated directly, such as rejection sampling loops.
no code implementations • 25 Sep 2019 • Saumya Jetley, Tommaso Cavallari, Philip Torr, Stuart Golodetz
Deep CNNs have achieved state-of-the-art performance for numerous machine learning and computer vision tasks in recent years, but as they have become increasingly deep, the number of parameters they use has also increased, making them hard to deploy in memory-constrained environments and difficult to interpret.
no code implementations • 25 Sep 2019 • Jishnu Mukhoti, Viveka Kulharia, Amartya Sanyal, Stuart Golodetz, Philip Torr, Puneet Dokania
When combined with temperature scaling, focal loss, whilst preserving accuracy and yielding state-of-the-art calibrated models, also preserves the confidence of the model's correct predictions, which is extremely desirable for downstream tasks.
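The focal loss referred to here has a one-line form: it scales the cross-entropy of the true class by (1-p)^gamma, down-weighting easy, confidently-correct examples. A minimal sketch:

```python
import numpy as np

def focal_loss(p, gamma):
    """Focal loss for true-class probability p: -(1 - p)^gamma * log(p).
    gamma = 0 recovers plain cross-entropy; larger gamma suppresses the
    contribution of easy (high-p) examples."""
    return -((1.0 - p) ** gamma) * np.log(p)

p = np.array([0.9, 0.6, 0.1])            # easy, medium, hard examples
print(focal_loss(p, 0.0))                # plain cross-entropy
print(focal_loss(p, 2.0))                # easy examples down-weighted sharply
```

That down-weighting of over-confident easy examples is what makes focal-loss-trained models better calibrated, the property the abstract highlights.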
3 code implementations • 8 Jul 2019 • Atılım Güneş Baydin, Lei Shao, Wahid Bhimji, Lukas Heinrich, Lawrence Meadows, Jialin Liu, Andreas Munk, Saeid Naderiparizi, Bradley Gram-Hansen, Gilles Louppe, Mingfei Ma, Xiaohui Zhao, Philip Torr, Victor Lee, Kyle Cranmer, Prabhat, Frank Wood
Probabilistic programming languages (PPLs) are receiving widespread attention for performing Bayesian inference in complex generative models.
no code implementations • 20 Jun 2019 • Tommaso Cavallari, Luca Bertinetto, Jishnu Mukhoti, Philip Torr, Stuart Golodetz
Many applications require a camera to be relocalised online, without expensive offline training on the target scene.
26 code implementations • 2 Apr 2019 • Shang-Hua Gao, Ming-Ming Cheng, Kai Zhao, Xin-Yu Zhang, Ming-Hsuan Yang, Philip Torr
We evaluate the Res2Net block on all these models and demonstrate consistent performance gains over baseline models on widely-used datasets, e.g., CIFAR-100 and ImageNet.
Ranked #2 on Image Classification on GasHisSDB
3 code implementations • NeurIPS 2019 • Atılım Güneş Baydin, Lukas Heinrich, Wahid Bhimji, Lei Shao, Saeid Naderiparizi, Andreas Munk, Jialin Liu, Bradley Gram-Hansen, Gilles Louppe, Lawrence Meadows, Philip Torr, Victor Lee, Prabhat, Kyle Cranmer, Frank Wood
We present a novel probabilistic programming framework that couples directly to existing large-scale simulators through a cross-platform probabilistic execution protocol, which allows general-purpose inference engines to record and control random number draws within simulators in a language-agnostic way.
no code implementations • 17 Apr 2018 • Rodrigo de Bem, Arnab Ghosh, Thalaiyasingam Ajanthan, Ondrej Miksik, Adnane Boukhayma, N. Siddharth, Philip Torr
However, the latent space learned by such approaches is typically not interpretable, resulting in less flexibility.
no code implementations • ECCV 2018 • Jack Valmadre, Luca Bertinetto, João F. Henriques, Ran Tao, Andrea Vedaldi, Arnold Smeulders, Philip Torr, Efstratios Gavves
We introduce the OxUvA dataset and benchmark for evaluating single-object tracking algorithms.
no code implementations • 24 Jan 2017 • Måns Larsson, Anurag Arnab, Fredrik Kahl, Shuai Zheng, Philip Torr
It is empirically demonstrated that such learned potentials can improve segmentation accuracy and that certain label class interactions are indeed better modelled by a non-Gaussian potential.
no code implementations • 7 Dec 2016 • Qibin Hou, Puneet Kumar Dokania, Daniela Massiceti, Yunchao Wei, Ming-Ming Cheng, Philip Torr
We focus on the following three aspects of EM: (i) initialization; (ii) latent posterior estimation (E-step) and (iii) the parameter update (M-step).
Weakly-Supervised Semantic Segmentation
4 code implementations • ICCV 2017 • Gurkirt Singh, Suman Saha, Michael Sapienza, Philip Torr, Fabio Cuzzolin
To the best of our knowledge, ours is the first real-time (up to 40fps) system able to perform online S/T action localisation and early action prediction on the untrimmed videos of UCF101-24.
2 code implementations • CVPR 2017 • Qibin Hou, Ming-Ming Cheng, Xiao-Wei Hu, Ali Borji, Zhuowen Tu, Philip Torr
Recent progress on saliency detection is substantial, benefiting mostly from the explosive development of Convolutional Neural Networks (CNNs).
Ranked #4 on RGB Salient Object Detection on SBU
no code implementations • 18 Mar 2016 • Julien Valentin, Angela Dai, Matthias Nießner, Pushmeet Kohli, Philip Torr, Shahram Izadi, Cem Keskin
We demonstrate the efficacy of our approach on the challenging problem of RGB Camera Relocalization.
no code implementations • 10 Jan 2016 • Anurag Arnab, Michael Sapienza, Stuart Golodetz, Julien Valentin, Ondrej Miksik, Shahram Izadi, Philip Torr
It is not always possible to recognise objects and infer material properties for a scene from visual cues alone, since objects can look visually similar whilst being made of very different materials.
3 code implementations • CVPR 2016 • Luca Bertinetto, Jack Valmadre, Stuart Golodetz, Ondrej Miksik, Philip Torr
Correlation Filter-based trackers have recently achieved excellent performance, showing great robustness to challenging situations exhibiting motion blur and illumination changes.
Ranked #24 on Visual Object Tracking on TrackingNet
no code implementations • 3 Dec 2015 • Saumya Jetley, Bernardino Romera-Paredes, Sadeep Jayasumana, Philip Torr
Recent works on zero-shot learning make use of side information such as visual attributes or natural language semantics to define the relations between output visual classes and then use these relationships to draw inference on new unseen classes at test time.
1 code implementation • 25 Nov 2015 • Anurag Arnab, Sadeep Jayasumana, Shuai Zheng, Philip Torr
Recent deep learning approaches have incorporated CRFs into Convolutional Neural Networks (CNNs), with some even training the CRF end-to-end with the rest of the network.
Ranked #56 on Semantic Segmentation on PASCAL Context
no code implementations • CVPR 2014 • Ming-Ming Cheng, Ziming Zhang, Wen-Yan Lin, Philip Torr
Training a generic objectness measure to produce a small set of candidate object windows, has been shown to speed up the classical sliding window object detection paradigm.
no code implementations • 20 Apr 2014 • Peng Wang, Chunhua Shen, Anton Van Den Hengel, Philip Torr
We propose a Branch-and-Cut (B&C) method for solving general MAP-MRF inference problems.
no code implementations • NeurIPS 2013 • Vibhav Vineet, Carsten Rother, Philip Torr
Many methods have been proposed to recover the intrinsic scene properties such as shape, reflectance and illumination from a single image.
no code implementations • 16 Oct 2013 • Ming-Ming Cheng, Shuai Zheng, Wen-Yan Lin, Jonathan Warrell, Vibhav Vineet, Paul Sturgess, Nigel Crook, Niloy Mitra, Philip Torr
This allows us to formulate the image parsing problem as one of jointly estimating per-pixel object and attribute labels from a set of training images.
no code implementations • NeurIPS 2011 • Ziming Zhang, Lubor Ladicky, Philip Torr, Amir Saffari
It provides a set of anchor points which form a local coordinate system, such that each data point on the manifold can be approximated by a linear combination of its anchor points, and the linear weights become the local coordinate coding.
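A hedged sketch of that coordinate coding (the classic LLE-style affine reconstruction, not necessarily the paper's exact solver): given a point and a few nearby anchors, solve for reconstruction weights constrained to sum to one.

```python
import numpy as np

def local_coding_weights(x, anchors, reg=1e-8):
    """LLE-style local coordinate coding: reconstruct x as an affine
    combination of nearby anchor points (weights sum to one). Solves the
    constrained least-squares problem via the local Gram matrix."""
    D = anchors - x                      # (k, d) anchor-to-point differences
    G = D @ D.T
    G += reg * np.trace(G) * np.eye(len(anchors))  # regularize for stability
    w = np.linalg.solve(G, np.ones(len(anchors)))
    return w / w.sum()

anchors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
x = np.array([0.25, 0.25])               # lies inside the anchors' triangle
w = local_coding_weights(x, anchors)
print(w, w @ anchors)                    # barycentric weights reconstruct x
```

For a point in the anchors' affine hull the weights are exactly its barycentric coordinates, which is the "local coordinate" interpretation in the abstract.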
no code implementations • NeurIPS 2008 • Philip Torr, M. P. Kumar
Compared to previous approaches based on the LP relaxation, e.g., interior-point algorithms or tree-reweighted message passing (TRW), our method is faster as it uses only the efficient st-mincut algorithm in its design.