1 code implementation • 27 Nov 2024 • David Serrano-Lozano, Luis Herranz, Shaolin Su, Javier Vazquez-Corral
Blind all-in-one image restoration models aim to recover a high-quality image from an input degraded with unknown distortions.
Ranked #1 on Blind All-in-One Image Restoration on 3-Degradations
1 code implementation • 13 Jul 2024 • David Serrano-Lozano, Luis Herranz, Michael S. Brown, Javier Vazquez-Corral
A popular method for enhancing images involves learning the style of a professional photo editor using pairs of training images comprised of the original input with the editor-enhanced version.
no code implementations • 1 Sep 2023 • Shiqi Yang, Yaxing Wang, Joost Van de Weijer, Luis Herranz, Shangling Jui, Jian Yang
We capture this intrinsic structure by defining local affinity of the target data, and encourage label consistency among data with high local affinity.
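The idea above can be sketched concretely: compute each target sample's nearest neighbours in feature space (its high-affinity points) and penalize disagreement between their predicted class distributions. This is a minimal, hypothetical illustration of the principle, not the paper's implementation; the function names and the inner-product agreement term are assumptions for the sketch.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def consistency_loss(features, probs, k=1):
    """Encourage agreement among each point's k nearest neighbours
    in feature space, i.e. points with high local affinity."""
    n = len(features)
    loss = 0.0
    for i in range(n):
        # rank all other points by affinity to point i
        sims = sorted(
            ((cosine(features[i], features[j]), j) for j in range(n) if j != i),
            reverse=True,
        )
        for _, j in sims[:k]:
            # negative prediction inner product: lower when neighbours agree
            loss -= sum(p * q for p, q in zip(probs[i], probs[j]))
    return loss / (n * k)
```

Minimizing this loss pulls the predictions of nearby (high-affinity) target samples toward one another, which is the label-consistency effect the excerpt describes.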
no code implementations • 13 Jul 2022 • Danna Xue, Fei Yang, Pei Wang, Luis Herranz, Jinqiu Sun, Yu Zhu, Yanning Zhang
Accurate semantic segmentation models typically require significant computational resources, inhibiting their use in practical applications.
1 code implementation • 16 Jun 2022 • Saiping Zhang, Luis Herranz, Marta Mrak, Marc Gorriz Blanch, Shuai Wan, Fuzheng Yang
In this paper we propose a generative adversarial network (GAN) framework to enhance the perceptual quality of compressed videos.
no code implementations • 13 May 2022 • Zhaocheng Liu, Luis Herranz, Fei Yang, Saiping Zhang, Shuai Wan, Marta Mrak, Marc Górriz Blanch
Neural video compression has emerged as a novel paradigm combining trainable multilayer neural networks and machine learning, achieving competitive rate-distortion (RD) performance; however, it remains impractical due to heavy neural architectures with large memory and computational demands.
no code implementations • 25 Jan 2022 • Vacit Oguz Yazici, LongLong Yu, Arnau Ramisa, Luis Herranz, Joost Van de Weijer
Computer vision has established a foothold in the online fashion retail industry.
no code implementations • 22 Jan 2022 • Saiping Zhang, Luis Herranz, Marta Mrak, Marc Gorriz Blanch, Shuai Wan, Fuzheng Yang
Deformable convolutions can operate on multiple frames, thus leveraging more temporal information, which is beneficial for enhancing the perceptual quality of compressed videos.
no code implementations • 25 Nov 2021 • Fei Yang, Yaxing Wang, Luis Herranz, Yongmei Cheng, Mikhail Mozerov
Thus, we further propose a unified framework that allows both translation and autoencoding capabilities in a single codec.
1 code implementation • 9 Nov 2021 • Kai Wang, Xialei Liu, Andy Bagdanov, Luis Herranz, Shangling Jui, Joost Van de Weijer
We propose an approach to IML, which we call Episodic Replay Distillation (ERD), that mixes classes from the current task with class exemplars from previous tasks when sampling episodes for meta-learning.
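The episode-mixing step can be illustrated with a small sampler: each meta-learning episode draws part of its classes from the current task and part from the exemplar classes of previous tasks. This is a hedged sketch of the mixing idea only; the function signature, the `mix_ratio` parameter, and the 50/50 default are illustrative assumptions, not details from the paper.

```python
import random

def sample_episode(current_classes, exemplar_classes, n_way, mix_ratio=0.5, seed=None):
    """Sample an n-way episode that mixes classes from the current task
    with class exemplars from previous tasks."""
    rng = random.Random(seed)
    # how many episode slots go to previous-task exemplar classes
    n_old = min(int(n_way * mix_ratio), len(exemplar_classes))
    n_new = n_way - n_old
    episode = rng.sample(exemplar_classes, n_old) + rng.sample(current_classes, n_new)
    rng.shuffle(episode)
    return episode
```

Sampling episodes this way exposes the meta-learner to old and new classes jointly, which is what lets replayed exemplars counteract forgetting.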
1 code implementation • 21 Oct 2021 • Kai Wang, Xialei Liu, Luis Herranz, Joost Van de Weijer
To overcome forgetting in this benchmark, we propose Hierarchy-Consistency Verification (HCV) as an enhancement to existing continual learning methods.
2 code implementations • NeurIPS 2021 • Shiqi Yang, Yaxing Wang, Joost Van de Weijer, Luis Herranz, Shangling Jui
In this paper, we address the challenging source-free domain adaptation (SFDA) problem, where the source pretrained model is adapted to the target domain in the absence of source data.
Ranked #7 on Source-Free Domain Adaptation on VisDA-2017
1 code implementation • 22 Sep 2021 • Saiping Zhang, Marta Mrak, Luis Herranz, Marc Górriz, Shuai Wan, Fuzheng Yang
In this paper, we introduce deep video compression with perceptual optimizations (DVC-P), which aims at increasing perceptual quality of decoded videos.
1 code implementation • ICCV 2021 • Shiqi Yang, Yaxing Wang, Joost Van de Weijer, Luis Herranz, Shangling Jui
In this paper, we propose a new domain adaptation paradigm called Generalized Source-free Domain Adaptation (G-SFDA), where the learned model needs to perform well on both the target and source domains, with only access to current unlabeled target data during adaptation.
Ranked #8 on Source-Free Domain Adaptation on VisDA-2017
no code implementations • 18 May 2021 • Kai Wang, Luis Herranz, Joost Van de Weijer
Methods are typically allowed to use a limited buffer to store some of the images in the stream.
1 code implementation • 28 Apr 2021 • Yaxing Wang, Abel Gonzalez-Garcia, Chenshen Wu, Luis Herranz, Fahad Shahbaz Khan, Shangling Jui, Joost Van de Weijer
Therefore, we propose a novel knowledge transfer method for generative models based on mining the knowledge that is most beneficial to a specific target domain, either from a single or multiple pretrained GANs.
no code implementations • 19 Apr 2021 • Sudeep Katakol, Luis Herranz, Fei Yang, Marta Mrak
Neural image compression (NIC) is a new coding paradigm where coding capabilities are captured by deep models learned from data.
no code implementations • 14 Apr 2021 • Kai Wang, Luis Herranz, Joost Van de Weijer
We found that the indexing stage plays an important role and that simply avoiding reindexing the database with updated embedding networks can lead to significant gains.
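The "avoid reindexing" idea can be sketched as follows: the gallery is indexed once with the old embedding network, and queries embedded by the updated network are matched against that existing index, provided the new network is trained to stay feature-compatible. This is an illustrative sketch under that assumption; the retrieval function and inner-product scoring are not taken from the paper.

```python
def retrieve(query_vec, gallery):
    """Rank gallery ids by inner-product similarity to the query.

    `gallery` maps id -> embedding produced by the *old* network; if the
    updated query network is feature-compatible, new queries can be matched
    against this index without re-extracting any gallery embeddings."""
    scored = sorted(
        gallery.items(),
        key=lambda kv: -sum(q * g for q, g in zip(query_vec, kv[1])),
    )
    return [gid for gid, _ in scored]
```

The gain the excerpt mentions comes from skipping the costly re-extraction of the whole database every time the embedding network is updated.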
1 code implementation • CVPR 2021 • Fei Yang, Luis Herranz, Yongmei Cheng, Mikhail G. Mozerov
Neural image compression leverages deep neural networks to outperform traditional image codecs in rate-distortion performance.
no code implementations • 8 Mar 2021 • Shiqi Yang, Kai Wang, Luis Herranz, Joost Van de Weijer
Zero-shot learning (ZSL) aims to discriminate images from unseen classes by exploiting relations to seen classes via their attribute-based descriptions.
2 code implementations • 23 Oct 2020 • Shiqi Yang, Yaxing Wang, Joost Van de Weijer, Luis Herranz, Shangling Jui
When adapting to the target domain, the additional classifier initialized from the source classifier is expected to find misclassified features.
no code implementations • 26 Jun 2020 • Kai Wang, Luis Herranz, Anjan Dutta, Joost Van de Weijer
We propose bookworm continual learning (BCL), a flexible setting where unseen classes can be inferred via a semantic model, and the visual model can be updated continually.
no code implementations • 10 Jun 2020 • Shiqi Yang, Kai Wang, Luis Herranz, Joost Van de Weijer
Zero-shot learning (ZSL) aims to discriminate images from unseen classes by exploiting relations to seen classes via their semantic descriptions.
no code implementations • 22 Apr 2020 • Sudeep Katakol, Basem Elbarashy, Luis Herranz, Joost Van de Weijer, Antonio M. Lopez
Moreover, we may only have compressed images at training time but are able to use original images at inference time, or vice versa, and in such a case, the downstream model suffers from covariate shift.
1 code implementation • 20 Apr 2020 • Xialei Liu, Chenshen Wu, Mikel Menta, Luis Herranz, Bogdan Raducanu, Andrew D. Bagdanov, Shangling Jui, Joost Van de Weijer
To prevent forgetting, we combine generative feature replay in the classifier with feature distillation in the feature extractor.
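The combined objective can be sketched as a sum of three terms: classification on new data, classification on generator-replayed features, and an L2 distillation term tying the current feature extractor to a frozen copy of the previous one. This is a minimal sketch of the loss structure only; the function name, the equal weighting of the two classification terms, and the `lam` coefficient are assumptions for illustration.

```python
def continual_loss(ce_new, ce_replay, feat_new, feat_old, lam=1.0):
    """Total loss = cross-entropy on new data
                  + cross-entropy on features replayed by the generator
                  + L2 feature distillation against the frozen old extractor."""
    # mean squared difference between current and previous features
    distill = sum((a - b) ** 2 for a, b in zip(feat_new, feat_old)) / len(feat_new)
    return ce_new + ce_replay + lam * distill
```

Replaying features (rather than images) keeps the generator cheap, while the distillation term keeps the shared feature extractor stable across tasks.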
2 code implementations • CVPR 2020 • Lu Yu, Bartłomiej Twardowski, Xialei Liu, Luis Herranz, Kai Wang, Yongmei Cheng, Shangling Jui, Joost Van de Weijer
The vast majority of methods have studied this scenario for classification networks, where for each new task the classification layer of the network must be augmented with additional weights to make room for the newly added classes.
1 code implementation • 11 Dec 2019 • Fei Yang, Luis Herranz, Joost Van de Weijer, José A. Iglesias Guitián, Antonio López, Mikhail Mozerov
Addressing these limitations, we formulate the problem of variable rate-distortion optimization for deep image compression, and propose modulated autoencoders (MAEs), where the representations of a shared autoencoder are adapted to the specific rate-distortion tradeoff via a modulation network.
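The modulation idea can be sketched in a few lines: a single shared latent representation is scaled channel-wise by a modulation vector chosen for the requested rate-distortion tradeoff, so one autoencoder serves several operating points. This is a hedged toy sketch; in the paper the modulation vectors are produced by a learned network, whereas here a lookup table stands in for it, and all names are illustrative.

```python
def modulate(latent, tradeoff, table):
    """Scale each channel of a shared latent representation by the
    modulation vector associated with the requested rate-distortion
    tradeoff (here looked up in a table standing in for the learned
    modulation network)."""
    m = table[tradeoff]  # per-channel scaling factors for this tradeoff
    return [c * s for c, s in zip(latent, m)]
```

Shrinking the latent channels for low-rate operating points reduces the entropy of the code, which is how one set of autoencoder weights covers multiple points on the rate-distortion curve.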
2 code implementations • CVPR 2020 • Yaxing Wang, Abel Gonzalez-Garcia, David Berga, Luis Herranz, Fahad Shahbaz Khan, Joost Van de Weijer
We propose a novel knowledge transfer method for generative models based on mining the knowledge that is most beneficial to a specific target domain, either from a single or multiple pretrained GANs.
2 code implementations • 19 Aug 2019 • Yaxing Wang, Abel Gonzalez-Garcia, Joost Van de Weijer, Luis Herranz
Recently, image-to-image translation research has witnessed remarkable progress.
no code implementations • 23 Jul 2019 • Yaxing Wang, Abel Gonzalez-Garcia, Joost Van de Weijer, Luis Herranz
The task of unpaired image-to-image translation is highly challenging due to the lack of explicit cross-domain pairs of instances.
no code implementations • 11 Jul 2019 • Xiang-Yang Li, Luis Herranz, Shuqiang Jiang
In this paper, we introduce and systematically investigate several factors that influence the performance of fine-tuning for visual recognition.
no code implementations • 8 Mar 2019 • Yaxing Wang, Luis Herranz, Joost Van de Weijer
This paper addresses the problem of inferring unseen cross-modal image-to-image translations between multiple modalities.
no code implementations • 1 Dec 2018 • Hugo Prol, Vincent Dumoulin, Luis Herranz
A family of recent successful approaches to few-shot learning relies on learning an embedding space in which predictions are made by computing similarities between examples.
1 code implementation • NeurIPS 2018 • Chenshen Wu, Luis Herranz, Xialei Liu, Yaxing Wang, Joost Van de Weijer, Bogdan Raducanu
In particular, we investigate generative adversarial networks (GANs) in the task of learning new categories in a sequential fashion.
no code implementations • 17 Sep 2018 • Xinhang Song, Shuqiang Jiang, Luis Herranz, Chengpeng Chen
We show that this limitation can be addressed by using RGB-D videos, where more comprehensive depth information is accumulated as the camera travels across the scene.
2 code implementations • 6 Sep 2018 • Chenshen Wu, Luis Herranz, Xialei Liu, Yaxing Wang, Joost Van de Weijer, Bogdan Raducanu
In particular, we investigate generative adversarial networks (GANs) in the task of learning new categories in a sequential fashion.
no code implementations • WS 2018 • Ozan Caglayan, Adrien Bardet, Fethi Bougares, Loïc Barrault, Kai Wang, Marc Masana, Luis Herranz, Joost Van de Weijer
This paper describes the multimodal Neural Machine Translation systems developed by LIUM and CVC for WMT18 Shared Task on Multimodal Translation.
1 code implementation • ECCV 2018 • Yaxing Wang, Chenshen Wu, Luis Herranz, Joost Van de Weijer, Abel Gonzalez-Garcia, Bogdan Raducanu
Transferring the knowledge of pretrained networks to new domains by means of finetuning is a widely used practice for applications based on discriminative models.
Ranked #7 on 10-shot image generation on Babies
1 code implementation • CVPR 2018 • Yaxing Wang, Joost Van de Weijer, Luis Herranz
We address the problem of image translation between domains or modalities for which no direct paired data is available (i.e., zero-pair translation).
2 code implementations • 8 Feb 2018 • Xialei Liu, Marc Masana, Luis Herranz, Joost Van de Weijer, Antonio M. Lopez, Andrew D. Bagdanov
In this paper we propose an approach to avoiding catastrophic forgetting in sequential task learning scenarios.
no code implementations • 22 Jan 2018 • Luis Herranz, Weiqing Min, Shuqiang Jiang
The central role of food in our individual and social life, combined with recent technological advances, has motivated a growing interest in applications that help to better monitor dietary habits as well as the exploration and retrieval of food-related information.
no code implementations • CVPR 2016 • Luis Herranz, Shuqiang Jiang, Xiang-Yang Li
Thus, adapting the feature extractor to each particular scale (i.e., scale-specific CNNs) is crucial to improve recognition, since the objects in the scenes have their specific range of scales.
1 code implementation • 21 Jan 2018 • Xinhang Song, Luis Herranz, Shuqiang Jiang
However, we show that this approach has the limitation of hardly reaching bottom layers, which are key to learning modality-specific features.
2 code implementations • ICCV 2017 • Marc Masana, Joost Van de Weijer, Luis Herranz, Andrew D. Bagdanov, Jose M. Alvarez
We show that domain transfer leads to large shifts in network activations and that it is desirable to take this into account when compressing.
no code implementations • WS 2017 • Ozan Caglayan, Walid Aransa, Adrien Bardet, Mercedes García-Martínez, Fethi Bougares, Loïc Barrault, Marc Masana, Luis Herranz, Joost Van de Weijer
This paper describes the monomodal and multimodal Neural Machine Translation systems developed by LIUM and CVC for WMT17 Shared Task on Multimodal Translation.
no code implementations • CVPR 2015 • Xinhang Song, Shuqiang Jiang, Luis Herranz
An important advantage of modeling features in a semantic space is that this space is feature independent.