2 code implementations • ECCV 2020 • Ameya Prabhu, Philip H. S. Torr, Puneet K. Dokania
We discuss a general formulation for the Continual Learning (CL) problem for classification---a learning task where a stream provides samples to a learner and the goal of the learner, depending on the samples it receives, is to continually upgrade its knowledge about the old classes and learn new ones.
1 code implementation • 17 May 2023 • Aleksandar Petrov, Emanuele La Malfa, Philip H. S. Torr, Adel Bibi
Recent language models have shown impressive multilingual performance, even when not explicitly trained for it.
1 code implementation • 16 May 2023 • Hasan Abed Al Kader Hammoud, Ameya Prabhu, Ser-Nam Lim, Philip H. S. Torr, Adel Bibi, Bernard Ghanem
We revisit the common practice of evaluating adaptation of Online Continual Learning (OCL) algorithms through the metric of online accuracy, which measures the accuracy of the model on the immediate next few samples.
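For intuition, here is a minimal sketch of what an online-accuracy style metric looks like when computed over a stream; the `window` size and the `predict`/`partial_fit` interface are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def online_accuracy(model, stream, window=10):
    """Evaluate on the next `window` unseen samples before adapting (illustrative sketch)."""
    samples = list(stream)  # list of (x, y) pairs arriving in order
    accs = []
    for t in range(len(samples) - window):
        x_next = np.stack([x for x, _ in samples[t:t + window]])
        y_next = np.array([y for _, y in samples[t:t + window]])
        preds = model.predict(x_next)          # predict before adapting
        accs.append(float(np.mean(preds == y_next)))
        model.partial_fit(*samples[t])         # then adapt on the current sample
    return float(np.mean(accs))
```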
no code implementations • 25 Apr 2023 • Aleksandar Petrov, Francisco Eiras, Amartya Sanyal, Philip H. S. Torr, Adel Bibi
Improving and guaranteeing the robustness of deep learning models has been a topic of intense research.
no code implementations • 16 Apr 2023 • Ondrej Bohdal, Timothy Hospedales, Philip H. S. Torr, Fazl Barez
Successful deployment of artificial intelligence (AI) in various settings has led to numerous positive outcomes for individuals and society.
no code implementations • 23 Mar 2023 • Hasan Abed Al Kader Hammoud, Adel Bibi, Philip H. S. Torr, Bernard Ghanem
In this paper we investigate the frequency sensitivity of Deep Neural Networks (DNNs) when presented with clean samples versus poisoned samples.
1 code implementation • CVPR 2023 • Ameya Prabhu, Hasan Abed Al Kader Hammoud, Puneet Dokania, Philip H. S. Torr, Ser-Nam Lim, Bernard Ghanem, Adel Bibi
Our conclusions are consistent across different numbers of stream time steps, e.g., 20 to 200, and under several computational budgets.
no code implementations • 11 Mar 2023 • Zhao Yang, Jiaqi Wang, Yansong Tang, Kai Chen, Hengshuang Zhao, Philip H. S. Torr
Referring image segmentation segments the region of an image described by a natural language expression.
1 code implementation • CVPR 2023 • Kejie Li, Jia-Wang Bian, Robert Castle, Philip H. S. Torr, Victor Adrian Prisacariu
The distinct data modality offered by high-resolution RGB images and low-resolution depth maps captured on a mobile device, when combined with precise 3D geometry annotations, presents a unique opportunity for future research on high-fidelity 3D reconstruction.
1 code implementation • 3 Feb 2023 • Henghui Ding, Chang Liu, Shuting He, Xudong Jiang, Philip H. S. Torr, Song Bai
However, since the target objects in these existing datasets are usually relatively salient, dominant, and isolated, VOS under complex scenes has rarely been studied.
1 code implementation • CVPR 2023 • Yasir Ghunaim, Adel Bibi, Kumail Alhamoud, Motasem Alfarra, Hasan Abed Al Kader Hammoud, Ameya Prabhu, Philip H. S. Torr, Bernard Ghanem
We show that a simple baseline outperforms state-of-the-art CL methods under this evaluation, questioning the applicability of existing methods in realistic settings.
no code implementations • CVPR 2023 • Jishnu Mukhoti, Tsung-Yu Lin, Omid Poursaeed, Rui Wang, Ashish Shah, Philip H. S. Torr, Ser-Nam Lim
We introduce Patch Aligned Contrastive Learning (PACL), a modified compatibility function for CLIP's contrastive loss, intending to train an alignment between the patch tokens of the vision encoder and the CLS token of the text encoder.
no code implementations • 27 Nov 2022 • Guangrun Wang, Philip H. S. Torr
Proving that classifiers have learned the data distribution and are ready for image generation has far-reaching implications, for classifiers are much easier to train than generative models like DDPMs and GANs.
no code implementations • 12 Nov 2022 • Xipeng Chen, Guangrun Wang, Dizhong Zhu, Xiaodan Liang, Philip H. S. Torr, Liang Lin
In this paper, we propose a novel Neural Sewing Machine (NSM), a learning-based framework for structure-preserving 3D garment modeling, which is capable of learning representations for garments with diverse shapes and topologies and is successfully applied to 3D garment reconstruction and controllable manipulation.
1 code implementation • 12 Nov 2022 • Hao Tang, Ling Shao, Philip H. S. Torr, Nicu Sebe
To further capture the change in pose of each part more precisely, we propose a novel part-aware bipartite graph reasoning (PBGR) block to decompose the task of reasoning the global structure transformation with a bipartite graph into learning different local transformations for different semantic body/face parts.
1 code implementation • 24 Oct 2022 • Nan Xue, Tianfu Wu, Song Bai, Fu-Dong Wang, Gui-Song Xia, Liangpei Zhang, Philip H. S. Torr
At the core is a parsimonious representation that encodes a line segment using a closed-form 4D geometric vector, which enables lifting line segments in wireframe to an end-to-end trainable holistic attraction field that has built-in geometry-awareness, context-awareness and robustness.
no code implementations • 24 Sep 2022 • Jishnu Mukhoti, Tsung-Yu Lin, Bor-Chun Chen, Ashish Shah, Philip H. S. Torr, Puneet K. Dokania, Ser-Nam Lim
In this paper, we define 2 categories of OoD data using the subtly different concepts of perceptual/visual and semantic similarity to in-distribution (iD) data.
no code implementations • 24 Sep 2022 • Tim Franzmeyer, Philip H. S. Torr, João F. Henriques
We study how an autonomous agent learns to perform a task from demonstrations in a different domain, such as a different environment or different agent.
2 code implementations • 20 Sep 2022 • Li Zhang, Mohan Chen, Anurag Arnab, xiangyang xue, Philip H. S. Torr
A fully-connected graph, such as the self-attention operation in Transformers, is beneficial for such modelling, however, its computational overhead is prohibitive.
no code implementations • 15 Aug 2022 • Bowen Li, Philip H. S. Torr, Thomas Lukasiewicz
We introduce a memory-driven semi-parametric approach to text-to-image generation, which is based on both parametric and non-parametric techniques.
no code implementations • 22 Jul 2022 • Francesco Pinto, Philip H. S. Torr, Puneet K. Dokania
Following the surge of popularity of Transformers in Computer Vision, several studies have attempted to determine whether they could be more robust to distribution shifts and provide better uncertainty estimates than Convolutional Neural Networks (CNNs).
no code implementations • 20 Jul 2022 • Tim Franzmeyer, Stephen Mcaleer, João F. Henriques, Jakob N. Foerster, Philip H. S. Torr, Adel Bibi, Christian Schroeder de Witt
Autonomous agents deployed in the real world need to be robust against adversarial attacks on sensory inputs.
1 code implementation • 13 Jul 2022 • Tom Joy, Francesco Pinto, Ser-Nam Lim, Philip H. S. Torr, Puneet K. Dokania
The most common post-hoc approach to compensate for this is to perform temperature scaling, which adjusts the confidences of the predictions on any input by scaling the logits by a fixed value.
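As a reminder of the baseline being referred to, a minimal sketch of post-hoc temperature scaling, where a single scalar T is fitted on held-out logits; the optimiser and iteration count are placeholders.

```python
import torch
import torch.nn.functional as F

def fit_temperature(logits, labels, iters=200, lr=0.01):
    """Fit a single temperature T on held-out logits by minimising NLL."""
    log_t = torch.zeros(1, requires_grad=True)   # T = exp(log_t) > 0
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        opt.step()
    return log_t.exp().item()

def calibrated_probs(logits, T):
    return F.softmax(logits / T, dim=-1)   # same argmax, softened confidences
```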
no code implementations • 5 Jul 2022 • Weiming Hu, Qiang Wang, Li Zhang, Luca Bertinetto, Philip H. S. Torr
In this paper we introduce SiamMask, a framework to perform both visual object tracking and video object segmentation, in real-time, with the same simple method.
1 code implementation • 29 Jun 2022 • Francesco Pinto, Harry Yang, Ser-Nam Lim, Philip H. S. Torr, Puneet K. Dokania
We show that the effectiveness of the well-celebrated Mixup [Zhang et al., 2018] can be further improved if, instead of using it as the sole learning objective, it is utilized as an additional regularizer to the standard cross-entropy loss.
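A minimal sketch of the idea of combining a standard cross-entropy term with a Mixup term used as an additional regulariser; the Beta parameter and the weighting are illustrative, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def ce_plus_mixup_loss(model, x, y, alpha=0.2, weight=1.0):
    """Standard cross-entropy plus an additional Mixup regulariser."""
    loss_ce = F.cross_entropy(model(x), y)          # plain CE on the clean batch
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[idx]          # convex combination of inputs
    logits_mix = model(x_mix)
    loss_mix = lam * F.cross_entropy(logits_mix, y) + \
               (1.0 - lam) * F.cross_entropy(logits_mix, y[idx])
    return loss_ce + weight * loss_mix
```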
no code implementations • 17 Jun 2022 • Yuge Shi, Imant Daunhawer, Julia E. Vogt, Philip H. S. Torr, Amartya Sanyal
As such, there is a lack of insight on the robustness of the representations learned from unsupervised methods, such as self-supervised learning (SSL) and auto-encoder based algorithms (AE), to distribution shift.
1 code implementation • 16 Jun 2022 • Guillermo Ortiz-Jiménez, Pau de Jorge, Amartya Sanyal, Adel Bibi, Puneet K. Dokania, Pascal Frossard, Gregory Rogéz, Philip H. S. Torr
Despite clear computational advantages in building robust neural networks, adversarial training (AT) using single-step methods is unstable as it suffers from catastrophic overfitting (CO): Networks gain non-trivial robustness during the first stages of adversarial training, but suddenly reach a breaking point where they quickly lose all robustness in just a few iterations.
1 code implementation • 25 Apr 2022 • Dubing Chen, Yuming Shen, Haofeng Zhang, Philip H. S. Torr
As a consequence of our derivation, the aforementioned two properties are incorporated into the classifier training as seen-unseen priors via logit adjustment.
Ranked #1 on Generalized Zero-Shot Learning on AwA2 (Accuracy Unseen metric)
1 code implementation • 24 Apr 2022 • Dubing Chen, Yuming Shen, Haofeng Zhang, Philip H. S. Torr
Recent research on Generalized Zero-Shot Learning (GZSL) has focused primarily on generation-based methods.
1 code implementation • CVPR 2022 • Kejie Li, Yansong Tang, Victor Adrian Prisacariu, Philip H. S. Torr
Dense 3D reconstruction from a stream of depth images is the key to many mixed reality and robotic applications.
1 code implementation • 28 Feb 2022 • Hao Tang, Ling Shao, Philip H. S. Torr, Nicu Sebe
To learn more discriminative class-specific feature representations for the local generation, we also propose a novel classification module.
1 code implementation • 2 Feb 2022 • Pau de Jorge, Adel Bibi, Riccardo Volpi, Amartya Sanyal, Philip H. S. Torr, Grégory Rogez, Puneet K. Dokania
Recently, Wong et al. showed that adversarial training with single-step FGSM leads to a characteristic failure mode named Catastrophic Overfitting (CO), in which a model becomes suddenly vulnerable to multi-step attacks.
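For context, a minimal sketch of a single-step FGSM adversarial training step of the kind in which catastrophic overfitting is observed; epsilon and the clamping range assume image inputs in [0, 1], and this is the standard recipe rather than the paper's proposed remedy.

```python
import torch
import torch.nn.functional as F

def fgsm_adv_training_step(model, opt, x, y, eps=8 / 255):
    """One step of single-step (FGSM) adversarial training."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()  # single FGSM step
    opt.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)            # train on the perturbed batch
    adv_loss.backward()
    opt.step()
    return adv_loss.item()
```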
no code implementations • 31 Jan 2022 • Jiaguo Yu, Yuming Shen, Menghan Wang, Haofeng Zhang, Philip H. S. Torr
In this paper, we tackle this problem by introducing Naturally-Sorted Hashing (NSH).
1 code implementation • 31 Jan 2022 • Motasem Alfarra, Juan C. Pérez, Anna Frühstück, Philip H. S. Torr, Peter Wonka, Bernard Ghanem
Finally, we show that the FID can be robustified by simply replacing the standard Inception with a robust Inception.
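For reference, a minimal sketch of the Fréchet Inception Distance between two feature sets; robustifying the metric in the sense above amounts to swapping the feature extractor, which is left abstract here.

```python
import numpy as np
from scipy import linalg

def fid(feats_real, feats_fake):
    """Fréchet distance between Gaussians fitted to two (N, D) feature sets."""
    mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real        # numerical noise can add tiny imaginary parts
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1 + s2 - 2.0 * covmean))
```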
1 code implementation • 31 Jan 2022 • Yuge Shi, N. Siddharth, Philip H. S. Torr, Adam R. Kosiorek
We propose ADIOS, a masked image model (MIM) framework for self-supervised learning, which simultaneously learns a masking function and an image encoder using an adversarial objective.
1 code implementation • CVPR 2022 • Zhao Yang, Jiaqi Wang, Yansong Tang, Kai Chen, Hengshuang Zhao, Philip H. S. Torr
Referring image segmentation is a fundamental vision-language task that aims to segment out an object referred to by a natural language expression from an image.
Ranked #3 on Referring Expression Segmentation on RefCOCOg-test
no code implementations • 23 Nov 2021 • Christian Schroeder de Witt, Yongchao Huang, Philip H. S. Torr, Martin Strohmeier
We then argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions, and introduce a temporally extended multi-agent reinforcement learning framework in which the resultant dynamics can be studied.
no code implementations • 15 Nov 2021 • Jiyang Qi, Yan Gao, Yao Hu, Xinggang Wang, Xiaoyu Liu, Xiang Bai, Serge Belongie, Alan Yuille, Philip H. S. Torr, Song Bai
To promote the development of occlusion understanding, we collect a large-scale dataset called OVIS for video instance segmentation in the occluded scenario.
no code implementations • 29 Oct 2021 • Jishnu Mukhoti, Joost van Amersfoort, Philip H. S. Torr, Yarin Gal
We extend Deep Deterministic Uncertainty (DDU), a method for uncertainty estimation using feature space densities, to semantic segmentation.
no code implementations • 6 Oct 2021 • Andrew Gambardella, Bogdan State, Naeemullah Khan, Leo Tsourides, Philip H. S. Torr, Atılım Güneş Baydin
We propose the use of probabilistic programming techniques to tackle the malicious user identification problem in a recommendation algorithm.
1 code implementation • 11 Sep 2021 • Shiyu Tang, Ruihao Gong, Yan Wang, Aishan Liu, Jiakai Wang, Xinyun Chen, Fengwei Yu, Xianglong Liu, Dawn Song, Alan Yuille, Philip H. S. Torr, DaCheng Tao
Thus, we propose RobustART, the first comprehensive Robustness investigation benchmark on ImageNet regarding ARchitecture design (49 human-designed off-the-shelf architectures and 1200+ networks from neural architecture search) and Training techniques (10+ techniques, e.g., data augmentation) towards diverse noises (adversarial, natural, and system noises).
1 code implementation • 9 Jul 2021 • Francisco Eiras, Motasem Alfarra, M. Pawan Kumar, Philip H. S. Torr, Puneet K. Dokania, Bernard Ghanem, Adel Bibi
Randomized smoothing has recently emerged as an effective tool that enables certification of deep neural network classifiers at scale.
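A minimal sketch of how a randomized-smoothing classifier predicts under Gaussian input noise; the abstention rule and certification statistics of the full procedure are omitted, and sigma and the sample count are placeholders.

```python
import torch

@torch.no_grad()
def smoothed_predict(model, x, sigma=0.25, n=100, num_classes=10):
    """Majority vote of the base classifier under Gaussian input noise.
    Assumes x is a single example with a leading batch dimension of 1."""
    counts = torch.zeros(num_classes)
    for _ in range(n):
        noisy = x + sigma * torch.randn_like(x)
        counts[model(noisy).argmax(dim=-1).item()] += 1
    return counts.argmax().item()   # the class the smoothed classifier returns
```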
1 code implementation • NeurIPS 2021 • Zhongdao Wang, Hengshuang Zhao, Ya-Li Li, Shengjin Wang, Philip H. S. Torr, Luca Bertinetto
We show how most tracking tasks can be solved within this framework, and that the same appearance model can be successfully used to obtain results that are competitive against specialised methods for most of the tasks considered.
Ranked #2 on Video Object Segmentation on DAVIS 2017 (mIoU metric)
2 code implementations • 2 Jul 2021 • Motasem Alfarra, Adel Bibi, Naeemullah Khan, Philip H. S. Torr, Bernard Ghanem
Deep neural networks are vulnerable to input deformations in the form of vector fields of pixel displacements and to other parameterized geometric deformations, e.g., translations, rotations, etc.
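To make the deformation threat model concrete, a minimal sketch of warping an image by a vector field of pixel displacements with bilinear sampling; the random field used in the example stands in for an optimised deformation.

```python
import torch
import torch.nn.functional as F

def deform(image, flow):
    """Warp a (B, C, H, W) image by a (B, H, W, 2) pixel-displacement field."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base_grid = torch.stack([xs, ys], dim=-1).expand(b, h, w, 2)
    # normalise pixel displacements to grid_sample's [-1, 1] coordinates
    norm_flow = flow / torch.tensor([(w - 1) / 2.0, (h - 1) / 2.0])
    return F.grid_sample(image, base_grid + norm_flow, align_corners=True)

img = torch.rand(1, 3, 32, 32)
warped = deform(img, 2.0 * torch.randn(1, 32, 32, 2))  # small random deformation
```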
1 code implementation • ICLR 2022 • Tom Joy, Yuge Shi, Philip H. S. Torr, Tom Rainforth, Sebastian M. Schmon, N. Siddharth
Here we introduce a novel alternative, the MEME, that avoids such explicit combinations by repurposing semi-supervised VAEs to combine information between modalities implicitly through mutual supervision.
1 code implementation • ICLR 2022 • A. Tuan Nguyen, Toan Tran, Yarin Gal, Philip H. S. Torr, Atılım Güneş Baydin
A common approach in the domain adaptation literature is to learn a representation of the input that has the same (marginal) distribution over the source and the target domain.
no code implementations • NeurIPS 2021 • Yuming Shen, Ziyi Shen, Menghan Wang, Jie Qin, Philip H. S. Torr, Ling Shao
On one hand, with the corresponding assignment variables being the weight, a weighted aggregation along the data points implements the set representation of a cluster.
1 code implementation • 12 May 2021 • Yansong Tang, Zhenyu Jiang, Zhenda Xie, Yue Cao, Zheng Zhang, Philip H. S. Torr, Han Hu
Previous cycle-consistency correspondence learning methods usually leverage image patches for training.
2 code implementations • ICLR 2022 • Yuge Shi, Jeffrey Seely, Philip H. S. Torr, N. Siddharth, Awni Hannun, Nicolas Usunier, Gabriel Synnaeve
We perform experiments on both the Wilds benchmark, which captures distribution shift in the real world, as well as datasets in DomainBed benchmark that focuses more on synthetic-to-real transfer.
1 code implementation • ICCV 2021 • Guangrun Wang, Keze Wang, Guangcong Wang, Philip H. S. Torr, Liang Lin
In this paper, we reveal two contradictory phenomena in contrastive learning that we call under-clustering and over-clustering problems, which are major obstacles to learning efficiency.
Ranked #1 on Self-Supervised Person Re-Identification on SYSU-30k
no code implementations • 14 Apr 2021 • Alessandro De Palma, Rudy Bunel, Alban Desmaison, Krishnamurthy Dvijotham, Pushmeet Kohli, Philip H. S. Torr, M. Pawan Kumar
Finally, we design a BaB framework, named Branch and Dual Network Bound (BaDNB), based on our novel bounding and branching algorithms.
1 code implementation • 12 Apr 2021 • Bin Ren, Hao Tang, Fanyang Meng, Runwei Ding, Ling Shao, Philip H. S. Torr, Nicu Sebe
2D image-based virtual try-on has attracted increased attention from the multimedia and computer vision communities.
1 code implementation • ICML Workshop AML 2021 • Motasem Alfarra, Juan C. Pérez, Ali Thabet, Adel Bibi, Philip H. S. Torr, Bernard Ghanem
Deep neural networks are vulnerable to small input perturbations known as adversarial attacks.
4 code implementations • 23 Feb 2021 • Jishnu Mukhoti, Andreas Kirsch, Joost van Amersfoort, Philip H. S. Torr, Yarin Gal
Reliable uncertainty from deterministic single-forward pass models is sought after because conventional methods of uncertainty quantification are computationally expensive.
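A minimal sketch of the general idea of feature-space density estimation for uncertainty, assuming one Gaussian fitted per class to penultimate-layer features; this illustrates the idea rather than reproducing the exact DDU recipe.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_class_gaussians(features, labels):
    """Fit one Gaussian per class to (N, D) deterministic feature vectors."""
    gaussians, priors = {}, {}
    for c in np.unique(labels):
        fc = features[labels == c]
        cov = np.cov(fc, rowvar=False) + 1e-4 * np.eye(fc.shape[1])  # jitter for stability
        gaussians[c] = multivariate_normal(fc.mean(0), cov)
        priors[c] = len(fc) / len(features)
    return gaussians, priors

def epistemic_score(feature, gaussians, priors):
    """Low density under the class-conditional mixture means high uncertainty."""
    density = sum(priors[c] * g.pdf(feature) for c, g in gaussians.items())
    return -np.log(density + 1e-12)
```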
no code implementations • 16 Feb 2021 • Naeemullah Khan, Angira Sharma, Ganesh Sundaramoorthi, Philip H. S. Torr
We stack multiple PDE layers to generalize a deep CNN to arbitrary regions, and apply it to segmentation.
1 code implementation • 2 Feb 2021 • Jiyang Qi, Yan Gao, Yao Hu, Xinggang Wang, Xiaoyu Liu, Xiang Bai, Serge Belongie, Alan Yuille, Philip H. S. Torr, Song Bai
On the OVIS dataset, the highest AP achieved by state-of-the-art algorithms is only 16.3, which reveals that we are still at a nascent stage for understanding objects, instances, and videos in a real-world scenario.
Ranked #20 on Video Instance Segmentation on OVIS validation
no code implementations • ICLR 2021 • Alessandro De Palma, Harkirat Singh Behl, Rudy Bunel, Philip H. S. Torr, M. Pawan Kumar
Tight and efficient neural network bounding is crucial to the scaling of neural network verification systems.
5 code implementations • CVPR 2021 • Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu, Zekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao Xiang, Philip H. S. Torr, Li Zhang
In this paper, we aim to provide an alternative perspective by treating semantic segmentation as a sequence-to-sequence prediction task.
Ranked #1 on Semantic Segmentation on FoodSeg103 (using extra training data)
no code implementations • AABI Symposium 2021 • Jishnu Mukhoti, Puneet K. Dokania, Philip H. S. Torr, Yarin Gal
We study batch normalisation in the context of variational inference methods in Bayesian neural networks, such as mean-field or MC Dropout.
1 code implementation • CVPR 2021 • Xiaolong Liu, Yao Hu, Song Bai, Fei Ding, Xiang Bai, Philip H. S. Torr
Current developments in temporal event or action localization usually target actions captured by a single camera.
Ranked #2 on Temporal Action Localization on MUSES
2 code implementations • 13 Dec 2020 • Xiaojuan Qi, Zhengzhe Liu, Renjie Liao, Philip H. S. Torr, Raquel Urtasun, Jiaya Jia
Note that GeoNet++ is generic and can be used in other depth/normal prediction frameworks to improve the quality of 3D reconstruction and pixel-wise accuracy of depth and surface normals.
no code implementations • 8 Dec 2020 • Motasem Alfarra, Adel Bibi, Philip H. S. Torr, Bernard Ghanem
In this work, we revisit Gaussian randomized smoothing and show that the variance of the Gaussian distribution can be optimized at each input so as to maximize the certification radius for the construction of the smooth classifier.
4 code implementations • 18 Nov 2020 • Christian Schroeder de Witt, Tarun Gupta, Denys Makoviichuk, Viktor Makoviychuk, Philip H. S. Torr, Mingfei Sun, Shimon Whiteson
Most recently developed approaches to cooperative multi-agent reinforcement learning in the \emph{centralized training with decentralized execution} setting involve estimating a centralized, joint value function.
1 code implementation • NeurIPS 2020 • Bowen Li, Xiaojuan Qi, Philip H. S. Torr, Thomas Lukasiewicz
To achieve this, a new word-level discriminator is proposed, which provides the generator with fine-grained training feedback at word-level, to facilitate training a lightweight generator that has a small number of parameters, but can still correctly focus on specific visual attributes of an image, and then edit them without affecting other contents that are not described in the text.
1 code implementation • NeurIPS 2020 • Arslan Chaudhry, Naeemullah Khan, Puneet K. Dokania, Philip H. S. Torr
In continual learning (CL), a learner is faced with a sequence of tasks, arriving one after the other, and the goal is to remember all the tasks once the continual learning experience is finished.
no code implementations • ECCV 2020 • Harkirat Singh Behl, Atılım Güneş Baydin, Ran Gal, Philip H. S. Torr, Vibhav Vineet
Simulation is increasingly being used for generating large labelled datasets in many machine learning problems.
1 code implementation • 10 Aug 2020 • Hao Tang, Song Bai, Philip H. S. Torr, Nicu Sebe
We present a novel Bipartite Graph Reasoning GAN (BiGraphGAN) for the challenging person image generation task.
Ranked #1 on Pose Transfer on Market-1501 (PCKh metric)
2 code implementations • ECCV 2020 • Hao Tang, Song Bai, Li Zhang, Philip H. S. Torr, Nicu Sebe
We propose a novel Generative Adversarial Network (XingGAN or CrossingGAN) for person image generation tasks, i.e., translating the pose of a given person to a desired one.
Ranked #1 on Pose Transfer on Market-1501 (IS metric)
1 code implementation • ICML Workshop LaReL 2020 • Minqi Jiang, Jelena Luketina, Nantas Nardelli, Pasquale Minervini, Philip H. S. Torr, Shimon Whiteson, Tim Rocktäschel
This is partly due to the lack of lightweight simulation environments that sufficiently reflect the semantics of the real world and provide knowledge sources grounded with respect to observations in an RL environment.
no code implementations • 8 Jul 2020 • Amartya Sanyal, Puneet K. Dokania, Varun Kanade, Philip H. S. Torr
We investigate two causes for adversarial vulnerability in deep neural networks: bad data and (poorly) trained models.
no code implementations • ICLR 2021 • Yuge Shi, Brooks Paige, Philip H. S. Torr, N. Siddharth
Multimodal learning for generative models often refers to the learning of abstract concepts from the commonality of information in multiple modalities, such as vision and language.
no code implementations • 18 Jun 2020 • Arnab Ghosh, Harkirat Singh Behl, Emilien Dupont, Philip H. S. Torr, Vinay Namboodiri
Training Neural Ordinary Differential Equations (ODEs) is often computationally expensive.
2 code implementations • ICLR 2021 • Tom Joy, Sebastian M. Schmon, Philip H. S. Torr, N. Siddharth, Tom Rainforth
We present a principled approach to incorporating labels in VAEs that captures the rich characteristic information associated with those labels.
1 code implementation • ICLR 2021 • Pau de Jorge, Amartya Sanyal, Harkirat S. Behl, Philip H. S. Torr, Gregory Rogez, Puneet K. Dokania
Recent studies have shown that skeletonization (pruning parameters) of networks \textit{at initialization} provides all the practical benefits of sparsity both at inference and training time, while only marginally degrading their performance.
1 code implementation • 20 Apr 2020 • Daniela Massiceti, Viveka Kulharia, Puneet K. Dokania, N. Siddharth, Philip H. S. Torr
Evaluating Visual Dialogue, the task of answering a sequence of questions relating to a visual input, remains an open research challenge.
no code implementations • ICLR 2021 • Namhoon Lee, Thalaiyasingam Ajanthan, Philip H. S. Torr, Martin Jaggi
As a result, across various workloads of dataset, network model, and optimization algorithm, we find that there exists a general scaling trend between batch size and the number of training steps to convergence for the effect of data parallelism, and, further, for the difficulty of training under sparsity.
1 code implementation • CVPR 2020 • Victoria Fernandez Abrevaya, Adnane Boukhayma, Philip H. S. Torr, Edmond Boyer
Core to our approach is a novel module that we call deactivable skip connections, which allows integrating both the auto-encoded and image-to-normal branches within the same architecture that can be trained end-to-end.
3 code implementations • NeurIPS 2021 • Bei Peng, Tabish Rashid, Christian A. Schroeder de Witt, Pierre-Alexandre Kamienny, Philip H. S. Torr, Wendelin Böhmer, Shimon Whiteson
We propose FACtored Multi-Agent Centralised policy gradients (FACMAC), a new method for cooperative multi-agent reinforcement learning in both discrete and continuous action spaces.
1 code implementation • CVPR 2020 • Nan Xue, Tianfu Wu, Song Bai, Fu-Dong Wang, Gui-Song Xia, Liangpei Zhang, Philip H. S. Torr
For computing line segment proposals, a novel exact dual representation is proposed which exploits a parsimonious geometric reparameterization for line segments and forms a holistic 4-dimensional attraction field map for an input image.
Ranked #1 on Line Segment Detection on wireframe dataset (FH metric)
2 code implementations • 24 Feb 2020 • Rudy Bunel, Alessandro De Palma, Alban Desmaison, Krishnamurthy Dvijotham, Pushmeet Kohli, Philip H. S. Torr, M. Pawan Kumar
Both algorithms offer three advantages: (i) they yield bounds that are provably at least as tight as previous dual algorithms relying on Lagrangian relaxations; (ii) they are based on operations analogous to the forward/backward passes of neural network layers and are therefore easily parallelizable, amenable to GPU implementation and able to take advantage of the convolutional structure of problems; and (iii) they allow for anytime stopping while still providing valid bounds.
2 code implementations • NeurIPS 2020 • Jishnu Mukhoti, Viveka Kulharia, Amartya Sanyal, Stuart Golodetz, Philip H. S. Torr, Puneet K. Dokania
To facilitate the use of focal loss in practice, we also provide a principled approach to automatically select the hyperparameter involved in the loss function.
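For reference, a minimal sketch of the focal loss itself; the focusing parameter gamma shown here is the hyperparameter that the automatic selection procedure (not reproduced) is concerned with.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Focal loss: cross-entropy down-weighted for well-classified samples."""
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log-prob of the true class
    pt = log_pt.exp()
    return (-((1.0 - pt) ** gamma) * log_pt).mean()
```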
no code implementations • 12 Feb 2020 • Bowen Li, Xiaojuan Qi, Philip H. S. Torr, Thomas Lukasiewicz
The goal of this paper is to embed controllable factors, i.e., natural language descriptions, into image-to-image translation with generative adversarial networks, which allows text descriptions to determine the visual attributes of synthetic images.
1 code implementation • 3 Feb 2020 • Hao Tang, Philip H. S. Torr, Nicu Sebe
In the first stage, the input image and the conditional semantic guidance are fed into a cycled semantic-guided generation network to produce initial coarse results.
no code implementations • CVPR 2020 • Qizhu Li, Xiaojuan Qi, Philip H. S. Torr
This panoptic submodule gives rise to a novel propagation mechanism for panoptic logits and enables the network to output a coherent panoptic segmentation map for both "stuff" and "thing" classes, without any post-processing.
1 code implementation • CVPR 2021 • Hongguang Zhang, Piotr Koniusz, Songlei Jian, Hongdong Li, Philip H. S. Torr
The majority of existing few-shot learning methods describe image relations with binary labels.
1 code implementation • ECCV 2020 • Hongguang Zhang, Li Zhang, Xiaojuan Qi, Hongdong Li, Philip H. S. Torr, Piotr Koniusz
Such encoded blocks are aggregated by permutation-invariant pooling to make our approach robust to varying action lengths and long-range temporal dependencies whose patterns are unlikely to repeat even in clips of the same class.
Ranked #6 on Few Shot Action Recognition on Kinetics-100
no code implementations • 6 Jan 2020 • Hongguang Zhang, Philip H. S. Torr, Piotr Koniusz
In this paper, we study the impact of scale and location mismatch in the few-shot learning scenario, and propose a novel Spatially-aware Matching (SM) scheme to effectively perform matching across multiple scales and locations, and learn image relations by giving the highest weights to the best matching pairs.
2 code implementations • CVPR 2020 • Hao Tang, Dan Xu, Yan Yan, Philip H. S. Torr, Nicu Sebe
To tackle this issue, in this work we consider learning the scene generation in a local context, and correspondingly design a local class-specific generative network with semantic maps as a guidance, which separately constructs and learns sub-generators concentrating on the generation of different classes, and is able to provide more scene details.
no code implementations • 18 Dec 2019 • Nan Xue, Song Bai, Fu-Dong Wang, Gui-Song Xia, Tianfu Wu, Liangpei Zhang, Philip H. S. Torr
Given a line segment map, the proposed regional attraction first establishes the relationship between line segments and regions in the image lattice.
no code implementations • 13 Dec 2019 • Alex Muryy, Siddharth Narayanaswamy, Nantas Nardelli, Andrew Glennerster, Philip H. S. Torr
Neuroscientists postulate 3D representations in the brain in a variety of different coordinate frames (e.g., 'head-centred', 'hand-centred' and 'world-based').
3 code implementations • 12 Dec 2019 • Bowen Li, Xiaojuan Qi, Thomas Lukasiewicz, Philip H. S. Torr
The goal of our paper is to semantically edit parts of an image matching a given text that describes desired attributes (e.g., texture, colour, and background), while preserving other contents that are irrelevant to the text.
no code implementations • 29 Nov 2019 • Andrew Gambardella, Atılım Güneş Baydin, Philip H. S. Torr
It is well known that deep generative models have a rich latent space, and that it is possible to smoothly manipulate their outputs by traversing this latent space.
1 code implementation • CVPR 2020 • Paul Voigtlaender, Jonathon Luiten, Philip H. S. Torr, Bastian Leibe
We present Siam R-CNN, a Siamese re-detection architecture which unleashes the full power of two-stage object detection approaches for visual object tracking.
Ranked #14 on Visual Object Tracking on TrackingNet
2 code implementations • 27 Nov 2019 • Hao Tang, Hong Liu, Dan Xu, Philip H. S. Torr, Nicu Sebe
State-of-the-art methods in image-to-image translation are capable of learning a mapping from a source domain to a target domain with unpaired image data.
Ranked #1 on Facial Expression Translation on CelebA
2 code implementations • NeurIPS 2019 • Yuge Shi, N. Siddharth, Brooks Paige, Philip H. S. Torr
In this work, we characterise successful learning of such models as the fulfillment of four criteria: i) implicit latent decomposition into shared and private subspaces, ii) coherent joint generation over all modalities, iii) coherent cross-generation across individual modalities, and iv) improved model learning for individual modalities through multi-modal integration.
1 code implementation • ICCV 2019 • Zhao Yang, Qiang Wang, Luca Bertinetto, Weiming Hu, Song Bai, Philip H. S. Torr
Unsupervised video object segmentation has often been tackled by methods based on recurrent neural networks and optical flow.
Ranked #15 on Unsupervised Video Object Segmentation on DAVIS 2016 val
1 code implementation • 20 Oct 2019 • Saeid Naderiparizi, Adam Ścibior, Andreas Munk, Mehrdad Ghadiri, Atılım Güneş Baydin, Bradley Gram-Hansen, Christian Schroeder de Witt, Robert Zinkov, Philip H. S. Torr, Tom Rainforth, Yee Whye Teh, Frank Wood
Naive approaches to amortized inference in probabilistic programs with unbounded loops can produce estimators with infinite variance.
1 code implementation • 18 Oct 2019 • Thalaiyasingam Ajanthan, Kartik Gupta, Philip H. S. Torr, Richard Hartley, Puneet K. Dokania
Quantizing large Neural Networks (NN) while maintaining the performance is highly desirable for resource-limited devices due to reduced memory and time complexity.
1 code implementation • ICCV 2019 • Arnab Ghosh, Richard Zhang, Puneet K. Dokania, Oliver Wang, Alexei A. Efros, Philip H. S. Torr, Eli Shechtman
We propose an interactive GAN-based sketch-to-image translation method that helps novice users create images of simple objects.
2 code implementations • NeurIPS 2019 • Bowen Li, Xiaojuan Qi, Thomas Lukasiewicz, Philip H. S. Torr
In this paper, we propose a novel controllable text-to-image generative adversarial network (ControlGAN), which can effectively synthesise high-quality images and also control parts of the image generation according to natural language descriptions.
Ranked #6 on Text-to-Image Generation on Multi-Modal-CelebA-HQ
no code implementations • 14 Sep 2019 • Rudy Bunel, Jingyue Lu, Ilker Turkaslan, Philip H. S. Torr, Pushmeet Kohli, M. Pawan Kumar
We use the data sets to conduct a thorough experimental comparison of existing and new algorithms and to provide an inclusive analysis of the factors impacting the hardness of verification problems.
5 code implementations • 13 Sep 2019 • Li Zhang, Xiangtai Li, Anurag Arnab, Kuiyuan Yang, Yunhai Tong, Philip H. S. Torr
Exploiting long-range contextual information is key for pixel-wise prediction tasks such as semantic segmentation.
Ranked #26 on Semantic Segmentation on Cityscapes test
1 code implementation • CVPR 2020 • Li Zhang, Dan Xu, Anurag Arnab, Philip H. S. Torr
We propose a dynamic graph message passing network, that significantly reduces the computational complexity compared to related works modelling a fully-connected graph.
no code implementations • 17 Jul 2019 • Oscar Rahnama, Tommaso Cavallari, Stuart Golodetz, Alessio Tonioni, Thomas Joy, Luigi Di Stefano, Simon Walker, Philip H. S. Torr
Obtaining highly accurate depth from stereo images in real time has many applications across computer vision and robotics, but in some contexts, upper bounds on power consumption constrain the feasible hardware to embedded platforms such as FPGAs.
1 code implementation • ICLR 2020 • Namhoon Lee, Thalaiyasingam Ajanthan, Stephen Gould, Philip H. S. Torr
Alternatively, a recent approach shows that pruning can be done at initialization prior to training, based on a saliency criterion called connection sensitivity.
no code implementations • ICLR 2020 • Amartya Sanyal, Philip H. S. Torr, Puneet K. Dokania
Exciting new work on the generalization bounds for neural networks (NN) by Neyshabur et al. and Bartlett et al. closely depends on two parameter-dependent quantities: the Lipschitz constant upper bound and the stable rank (a softer version of the rank operator).
Ranked #95 on Image Generation on CIFAR-10
no code implementations • 29 May 2019 • Bradley Gram-Hansen, Christian Schröder de Witt, Tom Rainforth, Philip H. S. Torr, Yee Whye Teh, Atılım Güneş Baydin
Epidemiology simulations have become a fundamental tool in the fight against the epidemics of various infectious diseases like AIDS and malaria.
1 code implementation • 27 May 2019 • Laurynas Miksys, Saumya Jetley, Michael Sapienza, Stuart Golodetz, Philip H. S. Torr
The STS model can run at 35 FPS on a high-end desktop, but its accuracy is significantly worse than that of offline state-of-the-art methods.
no code implementations • 17 May 2019 • Harkirat Singh Behl, Atılım Güneş Baydin, Philip H. S. Torr
Model-agnostic meta-learning (MAML) is a meta-learning technique to train a model on a multitude of learning tasks in a way that primes the model for few-shot learning of new tasks.
3 code implementations • CVPR 2019 • Feihu Zhang, Victor Prisacariu, Ruigang Yang, Philip H. S. Torr
In the stereo matching task, matching cost aggregation is crucial in both traditional methods and deep neural network models in order to accurately estimate disparities.
no code implementations • CVPR 2019 • Eunwoo Kim, Chanho Ahn, Philip H. S. Torr, Songhwai Oh
To this end, we propose a novel network architecture producing multiple networks of different configurations, termed deep virtual networks (DVNs), for different tasks.
1 code implementation • CVPR 2019 • Alessio Tonioni, Oscar Rahnama, Thomas Joy, Luigi Di Stefano, Thalaiyasingam Ajanthan, Philip H. S. Torr
Real world applications of stereo depth estimation require models that are robust to dynamic variations in the environment.
5 code implementations • 27 Feb 2019 • Arslan Chaudhry, Marcus Rohrbach, Mohamed Elhoseiny, Thalaiyasingam Ajanthan, Puneet K. Dokania, Philip H. S. Torr, Marc'Aurelio Ranzato
But for a successful knowledge transfer, the learner needs to remember how to perform previous tasks.
Ranked #8 on Class Incremental Learning on cifar100
no code implementations • 21 Feb 2019 • Botos Csaba, Adnane Boukhayma, Viveka Kulharia, András Horváth, Philip H. S. Torr
Standard adversarial training involves two agents, namely a generator and a discriminator, playing a mini-max game.
20 code implementations • 11 Feb 2019 • Mikayel Samvelyan, Tabish Rashid, Christian Schroeder de Witt, Gregory Farquhar, Nantas Nardelli, Tim G. J. Rudner, Chia-Man Hung, Philip H. S. Torr, Jakob Foerster, Shimon Whiteson
In this paper, we propose the StarCraft Multi-Agent Challenge (SMAC) as a benchmark problem to fill this gap.
Ranked #5 on SMAC on SMAC 6h_vs_8z
2 code implementations • CVPR 2019 • Adnane Boukhayma, Rodrigo de Bem, Philip H. S. Torr
We present in this work the first end-to-end deep learning based method that predicts both 3D hand shape and pose from RGB images in the wild.
Ranked #10 on 3D Hand Pose Estimation on FreiHAND (PA-MPVPE metric)
1 code implementation • 30 Jan 2019 • Song Bai, Yingwei Li, Yuyin Zhou, Qizhu Li, Philip H. S. Torr
However, our work observes the extreme vulnerability of existing distance metrics to adversarial examples, generated by simply adding human-imperceptible perturbations to person images.
1 code implementation • 23 Jan 2019 • Song Bai, Feihu Zhang, Philip H. S. Torr
To efficiently learn deep embeddings on the high-order graph-structured data, we introduce two end-to-end trainable operators to the family of graph neural networks, i. e., hypergraph convolution and hypergraph attention.
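A minimal sketch of a hypergraph convolution step written with a dense incidence matrix and the common symmetric normalisation; it illustrates the operator family rather than the paper's exact implementation.

```python
import numpy as np

def hypergraph_conv(X, H, Theta, edge_w=None):
    """One hypergraph convolution step:
    X' = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Theta
    X: (N, d_in) node features, H: (N, E) incidence matrix,
    Theta: (d_in, d_out) weights, edge_w: optional per-hyperedge weights."""
    N, E = H.shape
    w = np.ones(E) if edge_w is None else edge_w
    dv = H @ w                      # weighted vertex degrees
    de = H.sum(axis=0)              # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv + 1e-12))
    De_inv = np.diag(1.0 / (de + 1e-12))
    A = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt
    return A @ X @ Theta            # followed by a nonlinearity in practice
```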
1 code implementation • 29 Dec 2018 • Zhao Yang, Song Bai, Li Zhang, Philip H. S. Torr
Deep reinforcement learning (DeepRL) agents surpass human-level performance in many tasks.
2 code implementations • 16 Dec 2018 • Daniela Massiceti, Puneet K. Dokania, N. Siddharth, Philip H. S. Torr
We characterise some of the quirks and shortcomings in the exploration of Visual Dialogue - a sequential question-answering task where the questions and corresponding answers are related through given visual stimuli.
3 code implementations • CVPR 2019 • Qiang Wang, Li Zhang, Luca Bertinetto, Weiming Hu, Philip H. S. Torr
In this paper we illustrate how to perform both visual object tracking and semi-supervised video object segmentation, in real-time, with a single simple approach.
Ranked #3 on Visual Object Tracking on YouTube-VOS 2018
1 code implementation • ICCV 2019 • Thalaiyasingam Ajanthan, Puneet K. Dokania, Richard Hartley, Philip H. S. Torr
Compressing large Neural Networks (NN) by quantizing the parameters, while maintaining the performance is highly desirable due to reduced memory and time complexity.
1 code implementation • 4 Dec 2018 • Harkirat Singh Behl, Mohammad Najafi, Anurag Arnab, Philip H. S. Torr
We address this problem by considering the task of video object segmentation.
no code implementations • 19 Nov 2018 • Tian Xu, Jiayu Zhan, Oliver G. B. Garrod, Philip H. S. Torr, Song-Chun Zhu, Robin A. A. Ince, Philippe G. Schyns
However, understanding the information represented and processed in CNNs remains in most cases challenging.
no code implementations • 30 Oct 2018 • Oscar Rahnama, Tommaso Cavallari, Stuart Golodetz, Simon Walker, Philip H. S. Torr
Stereo depth estimation is used for many computer vision applications.
1 code implementation • 29 Oct 2018 • Tommaso Cavallari, Stuart Golodetz, Nicholas A. Lord, Julien Valentin, Victor A. Prisacariu, Luigi Di Stefano, Philip H. S. Torr
The adapted forests achieved relocalisation performance that was on par with that of offline forests, and our approach was able to estimate the camera pose in close to real time.
1 code implementation • NeurIPS 2019 • Christian A. Schroeder de Witt, Jakob N. Foerster, Gregory Farquhar, Philip H. S. Torr, Wendelin Boehmer, Shimon Whiteson
In this paper, we show that common knowledge between agents allows for complex decentralised coordination.
8 code implementations • ICLR 2019 • Namhoon Lee, Thalaiyasingam Ajanthan, Philip H. S. Torr
To achieve this, we introduce a saliency criterion based on connection sensitivity that identifies structurally important connections in the network for the given task.
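A minimal sketch of a connection-sensitivity style saliency: score each weight by the magnitude of gradient times weight on one mini-batch at initialisation, then keep the top-k connections; the keep ratio and per-parameter granularity here are illustrative.

```python
import torch
import torch.nn.functional as F

def connection_saliency(model, x, y):
    """Score each weight by |dL/dw * w| on a single mini-batch at initialisation."""
    loss = F.cross_entropy(model(x), y)
    params = list(model.parameters())
    grads = torch.autograd.grad(loss, params)
    return [(g * p).abs() for g, p in zip(grads, params)]

def prune_masks(scores, keep_ratio=0.05):
    """Keep the globally top-k most salient connections, zero out the rest."""
    flat = torch.cat([s.flatten() for s in scores])
    k = max(1, int(keep_ratio * flat.numel()))
    threshold = torch.topk(flat, k).values.min()
    return [(s >= threshold).float() for s in scores]
```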
1 code implementation • ECCV 2018 • Qizhu Li, Anurag Arnab, Philip H. S. Torr
We present a weakly supervised model that jointly performs both semantic- and instance-segmentation -- a particularly relevant problem given the substantial cost of obtaining pixel-perfect annotation for these tasks.
Ranked #31 on Panoptic Segmentation on Cityscapes val
1 code implementation • NeurIPS 2018 • Saumya Jetley, Nicholas A. Lord, Philip H. S. Torr
Via a novel experimental analysis, we illustrate some facts about deep convolutional networks for image classification that shed new light on their behaviour and how it connects to the problem of adversaries.
no code implementations • ICLR 2018 • Nantas Nardelli, Gabriel Synnaeve, Zeming Lin, Pushmeet Kohli, Philip H. S. Torr, Nicolas Usunier
We present Value Propagation (VProp), a set of parameter-efficient differentiable planning modules built on Value Iteration that can successfully be trained using reinforcement learning to solve unseen tasks, can generalize to larger map sizes, and can learn to navigate in dynamic environments.
no code implementations • 23 May 2018 • Thomas Joy, Alban Desmaison, Thalaiyasingam Ajanthan, Rudy Bunel, Mathieu Salzmann, Pushmeet Kohli, Philip H. S. Torr, M. Pawan Kumar
The presented algorithms can be applied to any labelling problem using a dense CRF with sparse higher-order potentials.
5 code implementations • ICLR 2019 • Luca Bertinetto, João F. Henriques, Philip H. S. Torr, Andrea Vedaldi
The main idea is to teach a deep network to use standard machine learning tools, such as ridge regression, as part of its own internal model, enabling it to quickly adapt to novel data.
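A minimal sketch of the kind of closed-form, differentiable ridge-regression inner solver being described: every operation is differentiable, so the solution can sit inside a network and be back-propagated through. Dimensions and the regularisation strength are placeholders.

```python
import torch

def ridge_solver(X, Y, lam=0.1):
    """Closed-form ridge regression W = (X^T X + lam I)^{-1} X^T Y.
    X: (n, d) embedded support examples, Y: (n, c) targets (e.g. one-hot)."""
    d = X.size(1)
    A = X.t() @ X + lam * torch.eye(d, device=X.device)
    return torch.linalg.solve(A, X.t() @ Y)   # differentiable w.r.t. X and Y

# e.g. adapt to a few labelled examples, then classify queries:
X_support = torch.randn(25, 64)
Y_support = torch.eye(5).repeat_interleave(5, 0)   # 5 classes, 5 shots each
W = ridge_solver(X_support, Y_support)
query_logits = torch.randn(10, 64) @ W
```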
no code implementations • ICLR 2019 • Amartya Sanyal, Varun Kanade, Philip H. S. Torr, Puneet K. Dokania
To achieve low dimensionality of learned representations, we propose an easy-to-use, end-to-end trainable, low-rank regularizer (LR) that can be applied to any intermediate layer representation of a DNN.
4 code implementations • ICLR 2018 • Saumya Jetley, Nicholas A. Lord, Namhoon Lee, Philip H. S. Torr
We propose an end-to-end-trainable attention module for convolutional neural network (CNN) architectures built for image classification.
no code implementations • 27 Mar 2018 • Qibin Hou, Jiang-Jiang Liu, Ming-Ming Cheng, Ali Borji, Philip H. S. Torr
Although these tasks are inherently very different, we show that our unified approach performs very well on all of them and works far better than current single-purpose state-of-the-art methods.
no code implementations • 27 Mar 2018 • Qibin Hou, Ming-Ming Cheng, Jiang-Jiang Liu, Philip H. S. Torr
In this paper, we improve semantic segmentation by automatically learning from Flickr images associated with a particular keyword, without relying on any explicit user annotations, thus substantially alleviating the dependence on accurate annotations when compared to previous weakly supervised methods.
1 code implementation • 20 Feb 2018 • Oscar Rahnama, Duncan Frost, Ondrej Miksik, Philip H. S. Torr
For many applications in low-power real-time robotics, stereo cameras are the sensors of choice for depth perception as they are typically cheaper and more versatile than their active counterparts.
no code implementations • 20 Feb 2018 • Yao Lu, Jack Valmadre, Heng Wang, Juho Kannala, Mehrtash Harandi, Philip H. S. Torr
State-of-the-art neural network models estimate large displacement optical flow in multi-resolution and use warping to propagate the estimation between two resolutions.
no code implementations • CVPR 2018 • Daniela Massiceti, N. Siddharth, Puneet K. Dokania, Philip H. S. Torr
We are the first to extend this paradigm to full two-way visual dialogue, where our model is capable of generating both questions and answers in sequence based on a visual input, for which we propose a set of novel evaluation measures and metrics.
1 code implementation • ECCV 2018 • Arslan Chaudhry, Puneet K. Dokania, Thalaiyasingam Ajanthan, Philip H. S. Torr
We observe that, in addition to forgetting, a known issue while preserving knowledge, IL also suffers from a problem we call intransigence, i.e., the inability of a model to update its knowledge.
no code implementations • 25 Jan 2018 • Stuart Golodetz, Tommaso Cavallari, Nicholas A. Lord, Victor A. Prisacariu, David W. Murray, Philip H. S. Torr
Reconstructing dense, volumetric models of real-world 3D scenes is important for many tasks, but capturing large scenes can take significant time, and the risk of transient changes to the scene goes up as the capture time increases.
no code implementations • ICLR 2018 • Rudy Bunel, Ilker Turkaslan, Philip H. S. Torr, Pushmeet Kohli, M. Pawan Kumar
Motivated by the need of accelerating progress in this very important area, we investigate the trade-offs of a number of different approaches based on Mixed Integer Programming, Satisfiability Modulo Theory, as well as a novel method based on the Branch-and-Bound framework.
1 code implementation • CVPR 2018 • Anurag Arnab, Ondrej Miksik, Philip H. S. Torr
Deep Neural Networks (DNNs) have demonstrated exceptional performance on most recognition tasks such as image classification and segmentation.
12 code implementations • CVPR 2018 • Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip H. S. Torr, Timothy M. Hospedales
Once trained, an RN is able to classify images of new classes by computing relation scores between query images and the few examples of each new class without further updating the network.
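A minimal sketch of computing relation scores between a query embedding and a set of class embeddings with a small learned relation module; the layer sizes are placeholders rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class RelationModule(nn.Module):
    """Scores the similarity of a (query, class) embedding pair in [0, 1]."""
    def __init__(self, feat_dim=64, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * feat_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, query_emb, class_embs):
        # concatenate the query with each class embedding and score each pair
        q = query_emb.expand(class_embs.size(0), -1)
        return self.net(torch.cat([q, class_embs], dim=-1)).squeeze(-1)

rel = RelationModule()
scores = rel(torch.randn(1, 64), torch.randn(5, 64))  # one relation score per class
pred_class = scores.argmax().item()
```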
2 code implementations • NeurIPS 2018 • Rudy Bunel, Ilker Turkaslan, Philip H. S. Torr, Pushmeet Kohli, M. Pawan Kumar
The success of Deep Learning and its potential use in many safety-critical applications has motivated research on formal verification of Neural Network (NN) models.
1 code implementation • 11 Sep 2017 • Qizhu Li, Anurag Arnab, Philip H. S. Torr
We address this problem by segmenting the parts of objects at an instance-level, such that each pixel in the image is assigned a part label, as well as the identity of the object it belongs to.
Ranked #2 on Multi-Human Parsing on PASCAL-Part
1 code implementation • 2 Aug 2017 • Victor Adrian Prisacariu, Olaf Kähler, Stuart Golodetz, Michael Sapienza, Tommaso Cavallari, Philip H. S. Torr, David W. Murray
Representing the reconstruction volumetrically as a TSDF leads to most of the simplicity and efficiency that can be achieved with GPU implementations of these systems.
no code implementations • 22 Jul 2017 • Suman Saha, Gurkirt Singh, Michael Sapienza, Philip H. S. Torr, Fabio Cuzzolin
Current state-of-the-art human action recognition is focused on the classification of temporally trimmed videos in which only one action occurs per frame.
1 code implementation • 18 Jul 2017 • Arslan Chaudhry, Puneet K. Dokania, Philip H. S. Torr
We propose an approach to discover class-specific pixels for the weakly-supervised semantic segmentation task.
1 code implementation • NeurIPS 2017 • N. Siddharth, Brooks Paige, Jan-Willem van de Meent, Alban Desmaison, Noah D. Goodman, Pushmeet Kohli, Frank Wood, Philip H. S. Torr
We propose to learn such representations using model architectures that generalise from standard VAEs, employing a general graphical model structure in the encoder and decoder.
no code implementations • CVPR 2017 • Jack Valmadre, Luca Bertinetto, João F. Henriques, Andrea Vedaldi, Philip H. S. Torr
The Correlation Filter is an algorithm that trains a linear template to discriminate between images and their translations.
Ranked #3 on Visual Object Tracking on OTB-50
3 code implementations • CVPR 2017 • Namhoon Lee, Wongun Choi, Paul Vernaza, Christopher B. Choy, Philip H. S. Torr, Manmohan Chandraker
DESIRE effectively predicts future locations of objects in multiple scenes by 1) accounting for the multi-modal nature of future prediction (i.e., given the same context, the future may vary), 2) foreseeing potential future outcomes and making a strategic prediction based on them, and 3) reasoning not only from the past motion history, but also from the scene context as well as the interactions among the agents.
Ranked #1 on Trajectory Prediction on PAID
1 code implementation • CVPR 2018 • Arnab Ghosh, Viveka Kulharia, Vinay Namboodiri, Philip H. S. Torr, Puneet K. Dokania
Second, to enforce that different generators capture diverse high probability modes, the discriminator of MAD-GAN is designed such that along with finding the real and fake samples, it is also required to identify the generator that generated the given fake sample.
1 code implementation • CVPR 2017 • Anurag Arnab, Philip H. S. Torr
This subnetwork uses the initial category-level segmentation, along with cues from the output of an object detector, within an end-to-end CRF to predict instances.
Ranked #8 on Instance Segmentation on Cityscapes test
1 code implementation • 5 Apr 2017 • Harkirat Singh Behl, Michael Sapienza, Gurkirt Singh, Suman Saha, Fabio Cuzzolin, Philip H. S. Torr
In this work, we introduce a real-time and online joint-labelling and association algorithm for action detection that can incrementally construct space-time action tubes on the most challenging action videos in which different action categories occur concurrently.
4 code implementations • ICML 2017 • Jakob Foerster, Nantas Nardelli, Gregory Farquhar, Triantafyllos Afouras, Philip H. S. Torr, Pushmeet Kohli, Shimon Whiteson
Many real-world problems, such as network packet routing and urban traffic control, are naturally modeled as multi-agent reinforcement learning (RL) problems.
no code implementations • CVPR 2017 • Tommaso Cavallari, Stuart Golodetz, Nicholas A. Lord, Julien Valentin, Luigi Di Stefano, Philip H. S. Torr
Camera relocalisation is an important problem in computer vision, with applications in simultaneous localisation and mapping, virtual/augmented reality and navigation.
no code implementations • CVPR 2017 • Ondrej Miksik, Juan-Manuel Pérez-Rúa, Philip H. S. Torr, Patrick Pérez
Rotoscoping, the detailed delineation of scene elements through a video shot, is a painstaking task of tremendous importance in professional post-production pipelines.
no code implementations • 4 Dec 2016 • Rudy Bunel, Alban Desmaison, M. Pawan Kumar, Philip H. S. Torr, Pushmeet Kohli
Superoptimization requires the estimation of the best program for a given computational task.
1 code implementation • 1 Dec 2016 • Shehroze Bhatti, Alban Desmaison, Ondrej Miksik, Nantas Nardelli, N. Siddharth, Philip H. S. Torr
A number of recent approaches to policy learning in 2D game domains have been successful in going directly from raw input images to actions.