no code implementations • ACL 2022 • Aishwarya Agrawal, Damien Teney, Aida Nematzadeh
In addition to the larger pretraining datasets, the transformer architecture (Vaswani et al., 2017) and in particular self-attention applied to two modalities are responsible for the impressive performance of the recent pretrained models on downstream tasks (Hendricks et al., 2021).
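The mechanism credited above, self-attention spanning two modalities, can be illustrated with a minimal numpy sketch of scaled dot-product cross-attention (Vaswani et al., 2017). This is a generic illustration, not the architecture of any specific pretrained model; the shapes and variable names are invented for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(text_tokens, image_regions):
    """Each text token attends over image region features via
    scaled dot-product attention, mixing the two modalities."""
    d = text_tokens.shape[-1]
    scores = text_tokens @ image_regions.T / np.sqrt(d)  # (n_text, n_regions)
    weights = softmax(scores, axis=-1)                   # each row sums to 1
    return weights @ image_regions                       # text tokens as mixtures of regions

# toy data: 4 text tokens and 9 image regions, both in a 16-dim space
rng = np.random.default_rng(0)
text = rng.standard_normal((4, 16))
image = rng.standard_normal((9, 16))
out = cross_modal_attention(text, image)
print(out.shape)  # (4, 16)
```

Real vision-and-language transformers stack many such layers (plus projections and feed-forward blocks); the point here is only the attention pattern that lets one modality condition on the other.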
1 code implementation • 4 Nov 2022 • Inwoo Hwang, Sangjun Lee, Yunhyeok Kwak, Seong Joon Oh, Damien Teney, Jin-Hwa Kim, Byoung-Tak Zhang
Experiments on standard benchmarks demonstrate the effectiveness of the method, in particular when label noise complicates the identification of bias-conflicting examples.
no code implementations • 1 Sep 2022 • Damien Teney, Yong Lin, Seong Joon Oh, Ehsan Abbasnejad
This paper shows that inverse correlations between ID and OOD performance do happen in real-world benchmarks.
no code implementations • 6 Jul 2022 • Damien Teney, Maxime Peyrard, Ehsan Abbasnejad
Underspecification refers to the existence of multiple models that are indistinguishable in their in-domain accuracy, even though they differ in other desirable properties such as out-of-distribution (OOD) performance.
no code implementations • 29 Jun 2022 • Violetta Shevchenko, Ehsan Abbasnejad, Anthony Dick, Anton Van Den Hengel, Damien Teney
In a simple setting similar to CLEVR, we find that CL representations also improve systematic generalization, and even match the performance of representations from a larger, supervised, ImageNet-pretrained model.
1 code implementation • CVPR 2022 • Amin Parvaneh, Ehsan Abbasnejad, Damien Teney, Reza Haffari, Anton Van Den Hengel, Javen Qinfeng Shi
We identify unlabelled instances with sufficiently-distinct features by seeking inconsistencies in predictions resulting from interventions on their representations.
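The selection criterion described above can be sketched as follows. This is a loose illustration under simplifying assumptions (a linear toy classifier, Gaussian noise as the "intervention"), not the paper's actual method: examples whose predictions flip most often under perturbations of their representations are selected for labelling.

```python
import numpy as np

def predict(reps, w):
    # toy linear classifier head: predicted class = argmax of logits
    return (reps @ w).argmax(axis=-1)

def select_by_inconsistency(reps, w, n_interventions=8, k=2, noise=0.5, seed=0):
    """Score each unlabelled example by how often its predicted label
    changes under random perturbations of its representation, and
    return the indices of the k most inconsistent examples."""
    rng = np.random.default_rng(seed)
    base = predict(reps, w)
    flips = np.zeros(len(reps))
    for _ in range(n_interventions):
        perturbed = reps + noise * rng.standard_normal(reps.shape)
        flips += (predict(perturbed, w) != base)
    return np.argsort(-flips)[:k]

# toy pool: 20 unlabelled examples with 8-dim representations, 3 classes
rng = np.random.default_rng(1)
reps = rng.standard_normal((20, 8))
w = rng.standard_normal((8, 3))
picked = select_by_inconsistency(reps, w)
print(picked)
```

Examples far from any decision boundary keep their label under perturbation and score low; ambiguous ones flip often and are queried first.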
3 code implementations • ICCV 2021 • Zheyuan Liu, Cristian Rodriguez-Opazo, Damien Teney, Stephen Gould
We demonstrate that with a relatively simple architecture, CIRPLANT outperforms existing methods on open-domain images, while matching state-of-the-art accuracy on the existing narrow datasets, such as fashion.
Ranked #3 on Image Retrieval on CIRR
1 code implementation • CVPR 2022 • Damien Teney, Ehsan Abbasnejad, Simon Lucey, Anton Van Den Hengel
The method - the first to evade the simplicity bias - highlights the need for a better understanding and control of inductive biases in deep learning.
1 code implementation • ICCV 2021 • Corentin Dancette, Remi Cadene, Damien Teney, Matthieu Cord
We use this new evaluation in a large-scale study of existing approaches for VQA.
Ranked #1 on Visual Question Answering (VQA) on VQA-CE
no code implementations • EACL (LANTERN) 2021 • Violetta Shevchenko, Damien Teney, Anthony Dick, Anton Van Den Hengel
The technique brings clear benefits to knowledge-demanding question answering tasks (OK-VQA, FVQA) by capturing semantic and relational knowledge absent from existing models.
no code implementations • ICCV 2021 • Damien Teney, Ehsan Abbasnejad, Anton Van Den Hengel
subsets treated as multiple training environments can guide the learning of models with better out-of-distribution generalization.
no code implementations • NeurIPS 2020 • Amin Parvaneh, Ehsan Abbasnejad, Damien Teney, Qinfeng Shi, Anton Van Den Hengel
The task of vision-and-language navigation (VLN) requires an agent to follow text instructions to find its way through simulated household environments.
no code implementations • NeurIPS 2020 • Damien Teney, Kushal Kafle, Robik Shrestha, Ehsan Abbasnejad, Christopher Kanan, Anton Van Den Hengel
Out-of-distribution (OOD) testing is increasingly popular for evaluating a machine learning system's ability to generalize beyond the biases of a training set.
no code implementations • 4 May 2020 • Violetta Shevchenko, Damien Teney, Anthony Dick, Anton Van Den Hengel
We present a novel mechanism to embed prior knowledge in a model for visual question answering.
no code implementations • ECCV 2020 • Damien Teney, Ehsan Abbasnejad, Anton Van Den Hengel
One of the primary challenges limiting the applicability of deep learning is its susceptibility to learning spurious correlations rather than the underlying mechanisms of the task of interest.
Tasks: Multi-Label Image Classification, Natural Language Inference, +3
no code implementations • 27 Feb 2020 • Damien Teney, Ehsan Abbasnejad, Anton Van Den Hengel
subsets treated as multiple training environments can guide the learning of models with better out-of-distribution generalization.
no code implementations • 30 Sep 2019 • Damien Teney, Ehsan Abbasnejad, Anton Van Den Hengel
We also show that incorporating this type of prior knowledge with our method brings consistent improvements, independently of the amount of supervised data used.
no code implementations • 25 Sep 2019 • Damien Teney, Ehsan Abbasnejad, Anton Van Den Hengel
We also show that incorporating this type of prior knowledge with our method brings consistent improvements, independently of the amount of supervised data used.
no code implementations • 29 Jul 2019 • Damien Teney, Peng Wang, Jiewei Cao, Lingqiao Liu, Chunhua Shen, Anton Van Den Hengel
One of the primary challenges faced by deep learning is the degree to which current methods exploit superficial statistics and dataset bias, rather than learning to generalise over the specific representations they have experienced.
no code implementations • CVPR 2019 • Damien Teney, Anton Van Den Hengel
One of the key limitations of traditional machine learning methods is their requirement for training data that exemplifies all the information to be learned.
no code implementations • ECCV 2018 • Damien Teney, Anton Van Den Hengel
At test time, the method is provided with a support set of example questions/answers, over which it reasons to resolve the given question.
7 code implementations • CVPR 2018 • Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, Anton Van Den Hengel
This is significant because a robot interpreting a natural-language navigation instruction on the basis of what it sees is carrying out a vision and language process that is similar to Visual Question Answering.
Ranked #3 on Visual Navigation on R2R
10 code implementations • CVPR 2018 • Damien Teney, Peter Anderson, Xiaodong He, Anton Van Den Hengel
This paper presents a state-of-the-art model for visual question answering (VQA), which won the first place in the 2017 VQA Challenge.
Ranked #32 on Visual Question Answering (VQA) on VQA v2 test-dev
63 code implementations • CVPR 2018 • Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, Lei Zhang
Top-down visual attention mechanisms have been used extensively in image captioning and visual question answering (VQA) to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning.
Ranked #57 on Visual Question Answering (VQA) on VQA v2 test-std
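The top-down attention idea above can be sketched in a few lines: a task query (e.g. a question embedding) scores a set of image region features, and a softmax-weighted pool produces the attended image representation. This is a minimal single-head illustration with invented shapes, not the full bottom-up/top-down model.

```python
import numpy as np

def top_down_attention(regions, query):
    """Weight detected image regions by their relevance to a task
    query, then pool them into one attended feature vector."""
    scores = regions @ query                 # one relevance score per region
    e = np.exp(scores - scores.max())
    weights = e / e.sum()                    # softmax over regions
    return weights @ regions, weights

# toy data: 36 detected regions with 32-dim features, 32-dim question embedding
rng = np.random.default_rng(2)
regions = rng.standard_normal((36, 32))
query = rng.standard_normal(32)
pooled, weights = top_down_attention(regions, query)
print(pooled.shape)  # (32,)
```

The "bottom-up" half, proposing the candidate regions in the first place (e.g. with an object detector), is what distinguishes this line of work from attention over a uniform grid of CNN features.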
no code implementations • 17 Nov 2016 • Damien Teney, Anton Van Den Hengel
Answering general questions about images requires methods capable of Zero-Shot VQA, that is, methods able to answer questions beyond the scope of the training questions.
no code implementations • CVPR 2017 • Damien Teney, Lingqiao Liu, Anton Van Den Hengel
This paper proposes to improve visual question answering (VQA) with structured representations of both scene contents and questions.
1 code implementation • 20 Jul 2016 • Qi Wu, Damien Teney, Peng Wang, Chunhua Shen, Anthony Dick, Anton Van Den Hengel
Visual Question Answering (VQA) is a challenging task that has received increasing attention from both the computer vision and the natural language processing communities.
no code implementations • 27 Jan 2016 • Damien Teney, Martial Hebert
Our contributions on network design and rotation invariance offer insights that are not specific to motion estimation.
no code implementations • CVPR 2015 • Damien Teney, Matthew Brown, Dmitry Kit, Peter Hall
This paper addresses the segmentation of videos with arbitrary motion, including dynamic textures, using novel motion features and a supervised learning approach.