1 code implementation • 27 Mar 2023 • Senmao Li, Joost Van de Weijer, Yaxing Wang, Fahad Shahbaz Khan, Meiqin Liu, Jian Yang
In the second step, based on the well-trained multi-class 3D-aware GAN architecture that preserves view-consistency, we construct a 3D-aware I2I translation system.
no code implementations • 14 Mar 2023 • Dawid Rymarczyk, Joost Van de Weijer, Bartosz Zieliński, Bartłomiej Twardowski
Continual learning enables incremental learning of new tasks without forgetting those previously learned, resulting in positive knowledge transfer that can enhance performance on both new and old tasks.
1 code implementation • 1 Feb 2023 • Mert Kilickaya, Joost Van de Weijer, Yuki M. Asano
The current dominant paradigm when building a machine learning model is to iterate over a dataset over and over until convergence.
1 code implementation • 22 Nov 2022 • Marco Cotogni, Fei Yang, Claudio Cusano, Andrew D. Bagdanov, Joost Van de Weijer
Secondly, we propose a new method of feature drift compensation that accommodates feature drift in the backbone when learning new tasks.
1 code implementation • 13 Oct 2022 • Dipam Goswami, René Schuster, Joost Van de Weijer, Didier Stricker
In class-incremental semantic segmentation (CISS), deep learning architectures suffer from the critical problems of catastrophic forgetting and semantic background shift.
Ranked #1 on Overlapped 14-1 on Cityscapes
1 code implementation • 4 Oct 2022 • Kai Wang, Chenshen Wu, Andy Bagdanov, Xialei Liu, Shiqi Yang, Shangling Jui, Joost Van de Weijer
Lifelong object re-identification incrementally learns from a stream of re-identification tasks.
1 code implementation • 3 Oct 2022 • Kai Wang, Fei Yang, Joost Van de Weijer
In experiments on ImageNet-Subset and ImageNet-1K, we show that our method AttnDistill outperforms existing self-supervised knowledge distillation (SSKD) methods and achieves state-of-the-art k-NN accuracy compared with self-supervised learning (SSL) methods learning from scratch (with the ViT-S model).
1 code implementation • 7 Jun 2022 • Shiqi Yang, Yaxing Wang, Kai Wang, Shangling Jui, Joost Van de Weijer
In this paper, we investigate Source-free Open-partial Domain Adaptation (SF-OPDA), which addresses the situation where there exist both domain and category shifts between source and target domains.
no code implementations • 30 May 2022 • Aitor Alvarez-Gila, Joost Van de Weijer, Yaxing Wang, Estibaliz Garrote
We present MVMO (Multi-View, Multi-Object dataset): a synthetic dataset of 116,000 scenes containing randomly placed objects of 10 distinct classes and captured from 25 camera locations in the upper hemisphere.
1 code implementation • 9 May 2022 • Shiqi Yang, Yaxing Wang, Kai Wang, Shangling Jui, Joost Van de Weijer
Treating SFDA as an unsupervised clustering problem and following the intuition that local neighbors in feature space should have more similar predictions than other features, we propose to optimize an objective of prediction consistency.
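The prediction-consistency idea above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: for each target sample we retrieve its `k` nearest neighbours in feature space and penalise disagreement between their class predictions via a negative dot product of softmax outputs; the function and argument names are hypothetical.

```python
import numpy as np

def neighbor_consistency_loss(feats, probs, k=3):
    """Illustrative sketch: encourage each sample's class probabilities
    to agree with those of its k nearest neighbours in feature space."""
    # cosine similarity between all pairs of L2-normalised features
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)          # exclude self-matches
    loss = 0.0
    for i in range(len(feats)):
        nbrs = np.argsort(sim[i])[-k:]      # indices of the k nearest neighbours
        # negative dot product of predictions: low when neighbours agree
        loss -= (probs[i] * probs[nbrs]).sum()
    return loss / len(feats)
```

Minimising this loss pushes samples that cluster together in feature space toward the same prediction, which is exactly the stated intuition.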
1 code implementation • 24 Mar 2022 • Francesco Pelosin, Saurav Jha, Andrea Torsello, Bogdan Raducanu, Joost Van de Weijer
In this paper, we investigate the continual learning of Vision Transformers (ViT) for the challenging exemplar-free scenario, with special focus on how to efficiently distill the knowledge of its crucial self-attention mechanism (SAM).
1 code implementation • 16 Feb 2022 • Simone Zini, Alex Gomez-Villa, Marco Buzzelli, Bartłomiej Twardowski, Andrew D. Bagdanov, Joost Van de Weijer
The data augmentations used are of crucial importance to the quality of learned feature representations.
no code implementations • 25 Jan 2022 • Vacit Oguz Yazici, LongLong Yu, Arnau Ramisa, Luis Herranz, Joost Van de Weijer
Computer vision has established a foothold in the online fashion retail industry.
1 code implementation • 30 Dec 2021 • Alex Gomez-Villa, Bartlomiej Twardowski, Lu Yu, Andrew D. Bagdanov, Joost Van de Weijer
Recent self-supervised learning methods are able to learn high-quality image representations and are closing the gap with supervised approaches.
1 code implementation • 10 Dec 2021 • Vacit Oguz Yazici, Joost Van de Weijer, LongLong Yu
However, inputting the same set of object queries to different decoder layers hinders the training: it results in lower performance and delays convergence.
Multi-Label Classification • Multi-Label Image Classification
1 code implementation • 4 Dec 2021 • Héctor Laria, Yaxing Wang, Joost Van de Weijer, Bogdan Raducanu
GANs have matured in recent years and are able to generate high-resolution, realistic images.
1 code implementation • 9 Nov 2021 • Kai Wang, Xialei Liu, Andy Bagdanov, Luis Herranz, Shangling Jui, Joost Van de Weijer
We propose an approach to IML, which we call Episodic Replay Distillation (ERD), that mixes classes from the current task with class exemplars from previous tasks when sampling episodes for meta-learning.
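The episode-sampling step described above can be sketched as follows. This is a hedged illustration of the idea of mixing current-task classes with exemplar classes when building an episode; `sample_episode` and its parameters (`n_way`, `k_shot`, `mix_ratio`) are hypothetical names, not the paper's API.

```python
import random

def sample_episode(current_data, exemplars, n_way=4, k_shot=2, mix_ratio=0.5):
    """Illustrative ERD-style episode sampling: draw part of the episode's
    classes from the current task and part from stored exemplars.
    current_data / exemplars: dict mapping class_id -> list of samples."""
    n_old = int(n_way * mix_ratio)
    n_new = n_way - n_old
    classes = (random.sample(list(exemplars), min(n_old, len(exemplars)))
               + random.sample(list(current_data), n_new))
    episode = {}
    for c in classes:
        # look the class up in whichever store holds it
        pool = exemplars.get(c) or current_data.get(c)
        episode[c] = random.sample(pool, min(k_shot, len(pool)))
    return episode
```

Each episode then contains both old and new classes, so the meta-learner is repeatedly rehearsed on previous tasks while adapting to the current one.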
1 code implementation • 21 Oct 2021 • Kai Wang, Xialei Liu, Luis Herranz, Joost Van de Weijer
To overcome forgetting in this benchmark, we propose Hierarchy-Consistency Verification (HCV) as an enhancement to existing continual learning methods.
1 code implementation • 9 Oct 2021 • Javad Zolfaghari Bengar, Joost Van de Weijer, Laura Lopez Fuentes, Bogdan Raducanu
Results on three datasets showed that the method is general (it can be combined with most existing active learning algorithms) and can effectively boost the performance of both informativeness-based and representativeness-based active learning methods.
2 code implementations • NeurIPS 2021 • Shiqi Yang, Yaxing Wang, Joost Van de Weijer, Luis Herranz, Shangling Jui
In this paper, we address the challenging source-free domain adaptation (SFDA) problem, where the source pretrained model is adapted to the target domain in the absence of source data.
no code implementations • ICLR 2022 • Yaxing Wang, Joost Van de Weijer, Lu Yu, Shangling Jui
Therefore, we investigate knowledge distillation to transfer knowledge from a high-quality unconditioned generative model (e.g., StyleGAN) to a conditioned synthetic image generation module in a variety of systems.
no code implementations • 25 Aug 2021 • Javad Zolfaghari Bengar, Joost Van de Weijer, Bartlomiej Twardowski, Bogdan Raducanu
Our experiments reveal that self-training is remarkably more efficient than active learning at reducing the labeling effort, that for a low labeling budget, active learning offers no benefit to self-training, and finally that the combination of active learning and self-training is fruitful when the labeling budget is high.
1 code implementation • ICCV 2021 • Shiqi Yang, Yaxing Wang, Joost Van de Weijer, Luis Herranz, Shangling Jui
In this paper, we propose a new domain adaptation paradigm called Generalized Source-free Domain Adaptation (G-SFDA), where the learned model needs to perform well on both the target and source domains, with only access to current unlabeled target data during adaptation.
no code implementations • 30 Jul 2021 • Javad Zolfaghari Bengar, Bogdan Raducanu, Joost Van de Weijer
Many methods approach this problem by measuring the informativeness of samples, typically based on the certainty of the network's predictions for those samples.
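One common instantiation of such a certainty-based informativeness score is the entropy of the softmax output: near-uniform predictions score high (uncertain, informative), confident predictions score low. The sketch below is a generic illustration, not this paper's specific criterion.

```python
import math

def prediction_entropy(probs):
    """Shannon entropy of a softmax output; a standard certainty-based
    informativeness score in active learning (higher = more uncertain)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# A near-uniform prediction is more informative than a confident one:
uncertain = prediction_entropy([0.34, 0.33, 0.33])
confident = prediction_entropy([0.98, 0.01, 0.01])
```

An active learner would then query labels for the unlabeled samples with the highest entropy.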
1 code implementation • 22 Jun 2021 • Albin Soutif--Cormerais, Marc Masana, Joost Van de Weijer, Bartłomiej Twardowski
We also define a new forgetting measure for class-incremental learning, and see that forgetting is not the principal cause of low performance.
no code implementations • 18 May 2021 • Kai Wang, Luis Herranz, Joost Van de Weijer
Methods are typically allowed to use a limited buffer to store some of the images in the stream.
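One standard policy for maintaining such a fixed-size buffer over a stream is reservoir sampling, shown below as a generic illustration (the paper's buffer policy may differ); the class name is a hypothetical choice.

```python
import random

class ReservoirBuffer:
    """Fixed-capacity buffer over a data stream. Reservoir sampling keeps
    every item seen so far in the buffer with equal probability.
    Illustrative only -- not necessarily the policy used in the paper."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.seen = 0

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)       # buffer not full yet: always keep
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:         # keep with prob capacity / seen
                self.items[j] = item
```

After processing the whole stream, the buffer holds a uniform random subset of it, regardless of the stream's length.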
no code implementations • ICCV 2021 • Yaxing Wang, Hector Laria Mantecon, Joost Van de Weijer, Laura Lopez-Fuentes, Bogdan Raducanu
In this paper, we propose a new transfer learning method for I2I translation (TransferI2I).
1 code implementation • 28 Apr 2021 • Yaxing Wang, Abel Gonzalez-Garcia, Chenshen Wu, Luis Herranz, Fahad Shahbaz Khan, Shangling Jui, Joost Van de Weijer
Therefore, we propose a novel knowledge transfer method for generative models based on mining the knowledge that is most beneficial to a specific target domain, either from a single or multiple pretrained GANs.
no code implementations • 14 Apr 2021 • Kai Wang, Luis Herranz, Joost Van de Weijer
We found that the indexing stage plays an important role and that simply avoiding reindexing the database with updated embedding networks can lead to significant gains.
4 code implementations • 1 Apr 2021 • Vincenzo Lomonaco, Lorenzo Pellegrini, Andrea Cossu, Antonio Carta, Gabriele Graffieti, Tyler L. Hayes, Matthias De Lange, Marc Masana, Jary Pomponi, Gido van de Ven, Martin Mundt, Qi She, Keiland Cooper, Jeremy Forest, Eden Belouadah, Simone Calderara, German I. Parisi, Fabio Cuzzolin, Andreas Tolias, Simone Scardapane, Luca Antiga, Subutai Ahmad, Adrian Popescu, Christopher Kanan, Joost Van de Weijer, Tinne Tuytelaars, Davide Bacciu, Davide Maltoni
Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in machine learning.
no code implementations • 8 Mar 2021 • Shiqi Yang, Kai Wang, Luis Herranz, Joost Van de Weijer
Zero-shot learning (ZSL) aims to discriminate images from unseen classes by exploiting relations to seen classes via their attribute-based descriptions.
1 code implementation • 27 Jan 2021 • Akshita Gupta, Sanath Narayan, Salman Khan, Fahad Shahbaz Khan, Ling Shao, Joost Van de Weijer
Nevertheless, computing reliable attention maps for unseen classes during inference in a multi-label setting is still a challenge.
Ranked #8 on Multi-label zero-shot learning on NUS-WIDE
no code implementations • 6 Dec 2020 • Lu Yu, Xialei Liu, Joost Van de Weijer
In class-incremental semantic segmentation, we have no access to the labeled data of previous tasks.
Class-Incremental Semantic Segmentation
Incremental Learning
1 code implementation • NeurIPS 2020 • Yaxing Wang, Lu Yu, Joost Van de Weijer
To enable the training of deep I2I models on small datasets, we propose a novel transfer learning method that transfers knowledge from pre-trained GANs.
1 code implementation • 28 Oct 2020 • Marc Masana, Xialei Liu, Bartlomiej Twardowski, Mikel Menta, Andrew D. Bagdanov, Joost Van de Weijer
For future learning systems, incremental learning is desirable because it allows for: efficient resource usage by eliminating the need to retrain from scratch at the arrival of new data; reduced memory usage by preventing or limiting the amount of data required to be stored -- also important when privacy limitations are imposed; and learning that more closely resembles human learning.
2 code implementations • 23 Oct 2020 • Shiqi Yang, Yaxing Wang, Joost Van de Weijer, Luis Herranz, Shangling Jui
When adapting to the target domain, the additional classifier, initialized from the source classifier, is expected to find misclassified features.
Source-Free Domain Adaptation
Unsupervised Domain Adaptation
no code implementations • 31 Jul 2020 • Minghan Li, Xialei Liu, Joost Van de Weijer, Bogdan Raducanu
Active learning emerged as an alternative to alleviate the effort of labeling huge amounts of data for data-hungry applications (such as image/video indexing and retrieval, autonomous driving, etc.).
no code implementations • 24 Jul 2020 • Carola Figueroa-Flores, Bogdan Raducanu, David Berga, Joost Van de Weijer
Most of the saliency methods are evaluated on their ability to generate saliency maps, and not on their functionality in a complete vision pipeline, like for instance, image classification.
no code implementations • 13 Jul 2020 • David Berga, Marc Masana, Joost Van de Weijer
We hypothesize that disentangled feature representations suffer less from catastrophic forgetting.
1 code implementation • NeurIPS 2020 • Riccardo Del Chiaro, Bartłomiej Twardowski, Andrew D. Bagdanov, Joost Van de Weijer
We call our method Recurrent Attention to Transient Tasks (RATT), and also show how to adapt continual learning approaches based on weight regularization and knowledge distillation to recurrent continual learning problems.
no code implementations • 4 Jul 2020 • Marc Masana, Bartłomiej Twardowski, Joost Van de Weijer
The influence of class orderings in the evaluation of incremental learning has received very little attention.
no code implementations • 26 Jun 2020 • Kai Wang, Luis Herranz, Anjan Dutta, Joost Van de Weijer
We propose bookworm continual learning (BCL), a flexible setting where unseen classes can be inferred via a semantic model, and the visual model can be updated continually.
no code implementations • 10 Jun 2020 • Shiqi Yang, Kai Wang, Luis Herranz, Joost Van de Weijer
Zero-shot learning (ZSL) aims to discriminate images from unseen classes by exploiting relations to seen classes via their semantic descriptions.
no code implementations • 22 Apr 2020 • Sudeep Katakol, Basem Elbarashy, Luis Herranz, Joost Van de Weijer, Antonio M. Lopez
Moreover, we may only have compressed images at training time but are able to use original images at inference time, or vice versa, and in such a case, the downstream model suffers from covariate shift.
1 code implementation • 20 Apr 2020 • Xialei Liu, Chenshen Wu, Mikel Menta, Luis Herranz, Bogdan Raducanu, Andrew D. Bagdanov, Shangling Jui, Joost Van de Weijer
To prevent forgetting, we combine generative feature replay in the classifier with feature distillation in the feature extractor.
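The combination above can be sketched as a two-term loss. This is a hedged, simplified reconstruction with hypothetical names: term (i) makes the new classifier mimic the frozen old classifier on replayed (generated) features of old classes, and term (ii) keeps the new feature extractor's outputs close to the old extractor's on current data.

```python
import numpy as np

def replay_distillation_loss(cls_new, cls_old, feat_new, feat_old,
                             replay_feats, lam=1.0):
    """Illustrative sketch combining (i) generative feature replay in the
    classifier and (ii) L2 feature distillation in the feature extractor."""
    # (i) replay: the new classifier should score replayed old-class
    # features like the frozen old classifier did (L2 on the logits here)
    replay = np.mean((cls_new(replay_feats) - cls_old(replay_feats)) ** 2)
    # (ii) distillation: keep the backbone's features close to its old self
    distill = np.mean((feat_new - feat_old) ** 2)
    return replay + lam * distill
```

In the actual method a generator produces `replay_feats`, so no old images need to be stored; here they are simply passed in as an array.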
2 code implementations • CVPR 2020 • Lu Yu, Bartłomiej Twardowski, Xialei Liu, Luis Herranz, Kai Wang, Yongmei Cheng, Shangling Jui, Joost Van de Weijer
The vast majority of methods have studied this scenario for classification networks, where for each new task the classification layer of the network must be augmented with additional weights to make room for the newly added classes.
1 code implementation • CVPR 2020 • Yaxing Wang, Salman Khan, Abel Gonzalez-Garcia, Joost Van de Weijer, Fahad Shahbaz Khan
In this work, we go one step further and reduce the amount of required labeled data also from the source domain during training.
no code implementations • 23 Jan 2020 • Marc Masana, Tinne Tuytelaars, Joost Van de Weijer
To allow already learned features to adapt to the current task without changing the behavior of these features for previous tasks, we introduce task-specific feature normalization.
1 code implementation • 22 Jan 2020 • Mikel Menta, Adriana Romero, Joost Van de Weijer
Recent advances in unsupervised domain adaptation have shown the effectiveness of adversarial training to adapt features across domains, endowing neural networks with the capability of being tested on a target domain without requiring any training annotations in this domain.
2 code implementations • CVPR 2020 • Yaxing Wang, Abel Gonzalez-Garcia, David Berga, Luis Herranz, Fahad Shahbaz Khan, Joost Van de Weijer
We propose a novel knowledge transfer method for generative models based on mining the knowledge that is most beneficial to a specific target domain, either from a single or multiple pretrained GANs.
1 code implementation • 11 Dec 2019 • Fei Yang, Luis Herranz, Joost Van de Weijer, José A. Iglesias Guitián, Antonio López, Mikhail Mozerov
Addressing these limitations, we formulate the problem of variable rate-distortion optimization for deep image compression, and propose modulated autoencoders (MAEs), where the representations of a shared autoencoder are adapted to the specific rate-distortion tradeoff via a modulation network.
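The modulation idea can be illustrated with a toy one-layer version: a tiny network maps the rate-distortion tradeoff to positive per-channel scales that adapt a shared latent representation. The real MAE modulates the autoencoder's internal representations with a learned network; `modulate`, `w`, and `b` below are assumed, illustrative parameters.

```python
import numpy as np

def modulate(latent, tradeoff, w, b):
    """Illustrative MAE-style modulation: map the rate-distortion tradeoff
    (lambda) to per-channel scales and apply them to a shared latent."""
    # one-layer modulation net in log-space: lambda -> positive scales
    scales = np.exp(w * np.log(tradeoff) + b)   # shape: (channels,)
    return latent * scales                       # broadcast over channels
```

Because only the small modulation network depends on the tradeoff, a single shared autoencoder can cover a whole range of rate-distortion operating points.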
1 code implementation • CVPR 2020 • Vacit Oguz Yazici, Abel Gonzalez-Garcia, Arnau Ramisa, Bartlomiej Twardowski, Joost Van de Weijer
Recurrent neural networks (RNN) are popular for many computer vision tasks, including multi-label classification.
1 code implementation • ICCV 2019 • Hamed H. Aghdam, Abel Gonzalez-Garcia, Joost Van de Weijer, Antonio M. López
In this paper, we propose a method to perform active learning of object detectors based on convolutional neural networks.
1 code implementation • 30 Aug 2019 • Lichao Zhang, Martin Danelljan, Abel Gonzalez-Garcia, Joost Van de Weijer, Fahad Shahbaz Khan
Our tracker is trained in an end-to-end manner, enabling the components to learn how to fuse the information from both modalities.
no code implementations • 30 Aug 2019 • Javad Zolfaghari Bengar, Abel Gonzalez-Garcia, Gabriel Villalonga, Bogdan Raducanu, Hamed H. Aghdam, Mikhail Mozerov, Antonio M. Lopez, Joost Van de Weijer
Our active learning criterion is based on the estimated number of errors in terms of false positives and false negatives.
1 code implementation • 28 Aug 2019 • Aitor Alvarez-Gila, Adrian Galdran, Estibaliz Garrote, Joost Van de Weijer
Blur detection aims at segmenting the blurred areas of a given image.
2 code implementations • 19 Aug 2019 • Yaxing Wang, Abel Gonzalez-Garcia, Joost Van de Weijer, Luis Herranz
Recently, image-to-image translation research has witnessed remarkable progress.
1 code implementation • ICCV 2019 • Lichao Zhang, Abel Gonzalez-Garcia, Joost Van de Weijer, Martin Danelljan, Fahad Shahbaz Khan
In general, this template is linearly combined with the accumulated template from the previous frame, resulting in an exponential decay of information over time.
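The linear combination described above is a simple exponential moving average: after `T` frames, the template from frame `t` survives with weight `alpha * (1 - alpha)**(T - t)`, which is the exponential decay of information the sentence refers to. A minimal sketch (function name assumed):

```python
def update_template(prev, new, alpha=0.1):
    """Linear template update used by many correlation-filter trackers:
    older frames decay exponentially, the newest frame enters with
    weight alpha."""
    return [(1 - alpha) * p + alpha * n for p, n in zip(prev, new)]
```

Repeatedly feeding the same appearance drives the accumulated template toward it, while a sudden appearance change takes many frames to dominate, which is exactly the memory limitation the paper addresses.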
no code implementations • 23 Jul 2019 • Yaxing Wang, Abel Gonzalez-Garcia, Joost Van de Weijer, Luis Herranz
The task of unpaired image-to-image translation is highly challenging due to the lack of explicit cross-domain pairs of instances.
no code implementations • 6 May 2019 • Mikhail G. Mozerov, Fei Yang, Joost Van de Weijer
In this paper, we adapt the geodesic distance-based recursive filter to the sparse data interpolation problem.
1 code implementation • CVPR 2019 • Lu Yu, Vacit Oguz Yazici, Xialei Liu, Joost Van de Weijer, Yongmei Cheng, Arnau Ramisa
In this paper, we propose to use network distillation to efficiently compute image embeddings with small networks.
no code implementations • 8 Mar 2019 • Yaxing Wang, Luis Herranz, Joost Van de Weijer
This paper addresses the problem of inferring unseen cross-modal image-to-image translations between multiple modalities.
2 code implementations • 17 Feb 2019 • Xialei Liu, Joost Van de Weijer, Andrew D. Bagdanov
Our results show that networks trained to regress to the ground truth targets for labeled data and to simultaneously learn to rank unlabeled data obtain significantly better, state-of-the-art results for both IQA and crowd counting.
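The joint objective can be sketched as a regression loss on labeled data plus a margin ranking loss on unlabeled pairs whose ordering is known for free (e.g., a crop of a crowd image cannot contain more people than the full image). This is a hedged sketch with hypothetical names, not the paper's exact formulation.

```python
def combined_loss(preds_lab, targets, preds_rank_pairs, margin=1.0, lam=0.5):
    """Illustrative regress-and-rank objective: MSE on labeled samples plus
    a margin ranking loss on unlabeled pairs (hi, lo) where the first
    element should receive the higher score."""
    mse = sum((p - t) ** 2 for p, t in zip(preds_lab, targets)) / len(targets)
    rank = sum(max(0.0, margin - (hi - lo)) for hi, lo in preds_rank_pairs)
    rank /= max(1, len(preds_rank_pairs))
    return mse + lam * rank
```

The ranking term is zero whenever the network already orders each pair correctly by at least the margin, so unlabeled data only contributes gradient where the ordering is violated.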
no code implementations • 12 Jan 2019 • Mikhail G. Mozerov, Joost Van de Weijer
We show that the OVOD approach considerably improves results for cost augmentation and energy minimization techniques in comparison with the standard one-view affinity space implementation.
no code implementations • 9 Dec 2018 • Rada Deeb, Joost Van de Weijer, Damien Muselet, Mathieu Hebert, Alain Tremeau
In this work, we propose a CNN-based approach to estimate the spectral reflectance of a surface and the spectral power distribution of the light from a single RGB image of a V-shaped surface.
1 code implementation • NeurIPS 2018 • Chenshen Wu, Luis Herranz, Xialei Liu, Yaxing Wang, Joost Van de Weijer, Bogdan Raducanu
In particular, we investigate generative adversarial networks (GANs) in the task of learning new categories in a sequential fashion.
no code implementations • WS 2018 • Ozan Caglayan, Adrien Bardet, Fethi Bougares, Loïc Barrault, Kai Wang, Marc Masana, Luis Herranz, Joost Van de Weijer
This paper describes the multimodal Neural Machine Translation systems developed by LIUM and CVC for WMT18 Shared Task on Multimodal Translation.
1 code implementation • 16 Aug 2018 • Marc Masana, Idoia Ruiz, Joan Serrat, Joost Van de Weijer, Antonio M. Lopez
When neural networks process images which do not resemble the distribution seen during training, so-called out-of-distribution images, they often make wrong predictions, and do so too confidently.
no code implementations • 1 Aug 2018 • Carola Figueroa Flores, Abel Gonzalez-García, Joost Van de Weijer, Bogdan Raducanu
Our proposed pipeline allows to evaluate saliency methods for the high-level task of object recognition.
no code implementations • 27 Jun 2018 • Aymen Azaza, Joost Van de Weijer, Ali Douik, Marc Masana
Therefore, we extend object proposal methods with context proposals, which allow us to incorporate the immediate context in the saliency computation.
no code implementations • 4 Jun 2018 • Lichao Zhang, Abel Gonzalez-Garcia, Joost Van de Weijer, Martin Danelljan, Fahad Shahbaz Khan
These methods provide us with a large labeled dataset of synthetic TIR sequences, on which we can train end-to-end optimal features for tracking.
1 code implementation • NeurIPS 2018 • Abel Gonzalez-Garcia, Joost Van de Weijer, Yoshua Bengio
We compare our model to the state-of-the-art in multi-modal image translation and achieve better results for translation on challenging datasets as well as for cross-domain retrieval on realistic datasets.
no code implementations • 23 May 2018 • Marco Buzzelli, Joost Van de Weijer, Raimondo Schettini
In this paper we present a deep learning method to estimate the illuminant of an image.
1 code implementation • 11 May 2018 • Lu Yu, Yongmei Cheng, Joost Van de Weijer
The attention branch is used to modulate the pixel-wise color naming predictions of the network.
1 code implementation • ECCV 2018 • Yaxing Wang, Chenshen Wu, Luis Herranz, Joost Van de Weijer, Abel Gonzalez-Garcia, Bogdan Raducanu
Transferring the knowledge of pretrained networks to new domains by means of finetuning is a widely used practice for applications based on discriminative models.
1 code implementation • CVPR 2018 • Yaxing Wang, Joost Van de Weijer, Luis Herranz
We address the problem of image translation between domains or modalities for which no direct paired data is available (i.e., zero-pair translation).
1 code implementation • CVPR 2018 • Xialei Liu, Joost Van de Weijer, Andrew D. Bagdanov
We propose a novel crowd counting approach that leverages abundantly available unlabeled crowd imagery in a learning-to-rank framework.
Ranked #17 on Crowd Counting on ShanghaiTech B
2 code implementations • 8 Feb 2018 • Xialei Liu, Marc Masana, Luis Herranz, Joost Van de Weijer, Antonio M. Lopez, Andrew D. Bagdanov
In this paper we propose an approach to avoiding catastrophic forgetting in sequential task learning scenarios.
2 code implementations • ICCV 2017 • Marc Masana, Joost Van de Weijer, Luis Herranz, Andrew D. Bagdanov, Jose M. Alvarez
We show that domain transfer leads to large shifts in network activations and that it is desirable to take this into account when compressing.
no code implementations • 1 Sep 2017 • Aitor Alvarez-Gila, Joost Van de Weijer, Estibaliz Garrote
Hyperspectral signal reconstruction aims at recovering the original spectral input that produced a certain trichromatic (RGB) response from a capturing device or observer.
no code implementations • 24 Aug 2017 • Laura Lopez-Fuentes, Joost Van de Weijer, Manuel Gonzalez-Hidalgo, Harald Skinnemoen, Andrew D. Bagdanov
The range of emergencies in which computer vision tools have been considered or used is very wide, and there is great overlap across related emergency research.
2 code implementations • ICCV 2017 • Xialei Liu, Joost Van de Weijer, Andrew D. Bagdanov
Furthermore, on the LIVE benchmark we show that our approach is superior to existing NR-IQA techniques and that we even outperform the state-of-the-art in full-reference IQA (FR-IQA) methods without having to resort to high-quality reference images to infer IQA.
no code implementations • WS 2017 • Ozan Caglayan, Walid Aransa, Adrien Bardet, Mercedes García-Martínez, Fethi Bougares, Loïc Barrault, Marc Masana, Luis Herranz, Joost Van de Weijer
This paper describes the monomodal and multimodal Neural Machine Translation systems developed by LIUM and CVC for WMT17 Shared Task on Multimodal Translation.
no code implementations • 5 Jun 2017 • Rao Muhammad Anwer, Fahad Shahbaz Khan, Joost Van de Weijer, Matthieu Molinier, Jorma Laaksonen
To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification.
Ranked #9 on Aerial Scene Classification on AID (20% as trainset)
no code implementations • 16 Jan 2017 • Laura Lopez-Fuentes, Andrew D. Bagdanov, Joost Van de Weijer, Harald Skinnemoen
This paper proposes a novel method to optimize bandwidth usage for object detection in critical communication scenarios.
no code implementations • 14 Dec 2016 • Fahad Shahbaz Khan, Joost Van de Weijer, Rao Muhammad Anwer, Andrew D. Bagdanov, Michael Felsberg, Jorma Laaksonen
Most approaches to human attribute and action recognition in still images are based on image representation in which multi-scale local features are pooled across scale into a single, scale-invariant encoding.
no code implementations • 3 Dec 2016 • Yaxing Wang, Lichao Zhang, Joost Van de Weijer
The first one is based on the fact that, in the minimax game played to optimize the GAN objective, the generator network keeps on changing even after the network can be considered optimal.
6 code implementations • 19 Nov 2016 • Guim Perarnau, Joost Van de Weijer, Bogdan Raducanu, Jose M. Álvarez
Generative Adversarial Networks (GANs) have recently been shown to successfully approximate complex data distributions.
Ranked #4 on Image-to-Image Translation on RaFD
1 code implementation • WS 2016 • Ozan Caglayan, Walid Aransa, Yaxing Wang, Marc Masana, Mercedes García-Martínez, Fethi Bougares, Loïc Barrault, Joost Van de Weijer
This paper presents the systems developed by LIUM and CVC for the WMT16 Multimodal Machine Translation challenge.
no code implementations • 11 May 2016 • Marc Masana, Joost Van de Weijer, Andrew D. Bagdanov
Object detection with deep neural networks is often performed by passing a few thousand candidate bounding boxes through a deep neural network for each image.
no code implementations • ICCV 2015 • Adria Ruiz, Joost Van de Weijer, Xavier Binefa
Additionally, we show that SHTL achieves competitive performance compared with state-of-the-art Transductive Learning approaches which face the problem of limited training data by using unlabelled test samples during training.
no code implementations • CVPR 2014 • Martin Danelljan, Fahad Shahbaz Khan, Michael Felsberg, Joost Van de Weijer
This paper investigates the contribution of color in a tracking-by-detection framework.
no code implementations • CVPR 2013 • Rahat Khan, Joost Van de Weijer, Fahad Shahbaz Khan, Damien Muselet, Christophe Ducottet, Cecile Barat
This results in a drop of discriminative power of the color description.