Search Results for author: Firas Laakom

Found 20 papers, 4 papers with code

Color Constancy Convolutional Autoencoder

no code implementations • 4 Jun 2019 • Firas Laakom, Jenni Raitoharju, Alexandros Iosifidis, Jarno Nikkanen, Moncef Gabbouj

In this paper, we study the importance of pre-training for the generalization capability in the color constancy problem.

Color Constancy · Unsupervised Pre-training

Bag of Color Features For Color Constancy

1 code implementation • 11 Jun 2019 • Firas Laakom, Nikolaos Passalis, Jenni Raitoharju, Jarno Nikkanen, Anastasios Tefas, Alexandros Iosifidis, Moncef Gabbouj

To further improve the illumination estimation accuracy, we propose a novel attention mechanism for the BoCF model with two variants based on self-attention.

Color Constancy

Probabilistic Color Constancy

no code implementations • 6 May 2020 • Firas Laakom, Jenni Raitoharju, Alexandros Iosifidis, Uygar Tuna, Jarno Nikkanen, Moncef Gabbouj

In this paper, we propose a novel unsupervised color constancy method, called Probabilistic Color Constancy (PCC).

Color Constancy

Graph Embedding with Data Uncertainty

no code implementations • 1 Sep 2020 • Firas Laakom, Jenni Raitoharju, Nikolaos Passalis, Alexandros Iosifidis, Moncef Gabbouj

Spectral-based subspace learning is a common data preprocessing step in many machine learning pipelines.

Graph Embedding

On Neural Network Generalization via Promoting Within-Layer Activation Diversity

no code implementations • 1 Jan 2021 • Firas Laakom, Jenni Raitoharju, Alexandros Iosifidis, Moncef Gabbouj

During the last decade, neural networks have been intensively used to tackle various problems and they have often led to state-of-the-art results.

Learning distinct features helps, provably

no code implementations • 10 Jun 2021 • Firas Laakom, Jenni Raitoharju, Alexandros Iosifidis, Moncef Gabbouj

We study the diversity of the features learned by a two-layer neural network trained with the least squares loss.

Generalization Bounds

Improving Neural Network Generalization via Promoting Within-Layer Diversity

no code implementations • 29 Sep 2021 • Firas Laakom, Jenni Raitoharju, Alexandros Iosifidis, Moncef Gabbouj

Neural networks are composed of multiple layers arranged in a hierarchical structure jointly trained with a gradient-based optimization, where the errors are back-propagated from the last layer back to the first one.

Robust channel-wise illumination estimation

1 code implementation • 10 Nov 2021 • Firas Laakom, Jenni Raitoharju, Jarno Nikkanen, Alexandros Iosifidis, Moncef Gabbouj

We test this approach with the proposed method and show that it can indeed be used to avoid several extreme error cases and, thus, improve the practicality of the proposed technique.

Color Constancy

Learning to ignore: rethinking attention in CNNs

1 code implementation • 10 Nov 2021 • Firas Laakom, Kateryna Chumachenko, Jenni Raitoharju, Alexandros Iosifidis, Moncef Gabbouj

Based on this idea, we propose to reformulate the attention mechanism in CNNs to learn to ignore instead of learning to attend.
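The reformulation described above can be sketched numerically: instead of predicting an attention map directly, the network predicts an "ignorance" map and gates the features with its complement. The sigmoid gating and array shapes below are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def attend(features, logits):
    # Conventional attention: scale features by a learned attention gate.
    gate = 1.0 / (1.0 + np.exp(-logits))  # sigmoid attention map
    return features * gate

def ignore(features, logits):
    # "Learning to ignore": the network predicts an ignorance map I,
    # and the effective attention is its complement, A = 1 - I.
    ignorance = 1.0 / (1.0 + np.exp(-logits))
    return features * (1.0 - ignorance)

# With zero logits both gates equal 0.5, so the two views coincide;
# large positive ignore-logits suppress the corresponding features.
feats = np.ones((2, 4))
```

The two formulations are symmetric at initialization; the difference lies in what the logits are trained to highlight (relevant vs. irrelevant regions).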

Non-Linear Spectral Dimensionality Reduction Under Uncertainty

no code implementations • 9 Feb 2022 • Firas Laakom, Jenni Raitoharju, Nikolaos Passalis, Alexandros Iosifidis, Moncef Gabbouj

In this paper, we consider the problem of non-linear dimensionality reduction under uncertainty, from both theoretical and algorithmic perspectives.

Dimensionality Reduction

Reducing Redundancy in the Bottleneck Representation of the Autoencoders

no code implementations • 9 Feb 2022 • Firas Laakom, Jenni Raitoharju, Alexandros Iosifidis, Moncef Gabbouj

We tested our approach across different tasks: dimensionality reduction using three different datasets, image compression using the MNIST dataset, and image denoising using Fashion-MNIST.

Dimensionality Reduction · Image Compression +1

Efficient CNN with uncorrelated Bag of Features pooling

no code implementations • 22 Sep 2022 • Firas Laakom, Jenni Raitoharju, Alexandros Iosifidis, Moncef Gabbouj

In this paper, we propose an approach that builds on top of BoF pooling to boost its efficiency by ensuring that the items of the learned dictionary are non-redundant.
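The mechanism can be sketched as follows: BoF pooling soft-assigns local features to a learned dictionary and averages the assignments into a fixed-length histogram, while a penalty on off-diagonal codeword correlations discourages redundant dictionary items. The softmax assignment and the exact penalty form are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def bof_pool(features, codebook):
    # Soft-assign each local feature (row) to the dictionary items and
    # average the memberships into a fixed-length histogram.
    sims = features @ codebook.T                        # (N, K) similarities
    exp = np.exp(sims - sims.max(axis=1, keepdims=True))
    assign = exp / exp.sum(axis=1, keepdims=True)       # softmax memberships
    return assign.mean(axis=0)                          # (K,) histogram

def redundancy_penalty(codebook):
    # Sum of squared off-diagonal correlations between dictionary items;
    # driving this toward zero encourages a non-redundant codebook.
    normed = codebook / np.linalg.norm(codebook, axis=1, keepdims=True)
    gram = normed @ normed.T
    off_diag = gram - np.diag(np.diag(gram))
    return np.sum(off_diag ** 2)
```

An orthogonal codebook incurs zero penalty, while duplicated codewords are penalized maximally.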

WLD-Reg: A Data-dependent Within-layer Diversity Regularizer

no code implementations • 3 Jan 2023 • Firas Laakom, Jenni Raitoharju, Alexandros Iosifidis, Moncef Gabbouj

At each optimization step, neurons at a given layer receive feedback from neurons belonging to higher layers of the hierarchy.
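A within-layer diversity regularizer of the kind described above can be sketched as a penalty on the pairwise correlation between units of the same layer; the correlation-based form and normalization below are illustrative assumptions, not the exact WLD-Reg objective.

```python
import numpy as np

def within_layer_diversity_penalty(activations):
    # activations: (batch, units). Penalize pairwise correlation between
    # units of the same layer so that neurons learn distinct features.
    a = activations - activations.mean(axis=0, keepdims=True)
    a = a / (np.linalg.norm(a, axis=0, keepdims=True) + 1e-8)
    corr = a.T @ a                          # (units, units) correlations
    off = corr - np.diag(np.diag(corr))     # keep only cross-unit terms
    return np.sum(off ** 2) / (a.shape[1] * (a.shape[1] - 1))
```

Identical units yield the maximal penalty of 1, while decorrelated units yield 0; the term would be added to the task loss with a weighting coefficient.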

On Feature Diversity in Energy-based Models

no code implementations • ICLR Workshop EBM 2021 • Firas Laakom, Jenni Raitoharju, Alexandros Iosifidis, Moncef Gabbouj

Energy-based learning is a powerful learning paradigm that encapsulates various discriminative and generative approaches.

Generalization Bounds · regression

Newton Method-based Subspace Support Vector Data Description

no code implementations • 25 Sep 2023 • Fahad Sohrab, Firas Laakom, Moncef Gabbouj

The objective of S-SVDD is to map the original data to a subspace optimized for one-class classification, and the iterative optimization process of data mapping and description in S-SVDD relies on gradient descent.

Classification · One-Class Classification
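The contrast between the gradient-descent update used in S-SVDD and a Newton-type update can be illustrated on a toy one-dimensional quadratic; the objective and step size below are illustrative assumptions, not the S-SVDD formulation itself.

```python
import numpy as np

def gradient_step(x, grad, lr=0.1):
    # Plain gradient-descent update: move against the gradient
    # by a fixed learning rate.
    return x - lr * grad(x)

def newton_step(x, grad, hess):
    # Newton update: rescale the gradient by the inverse of the
    # second derivative, which adapts the step to local curvature.
    return x - grad(x) / hess(x)

# Toy objective f(x) = (x - 3)^2, minimized at x = 3.
grad = lambda x: 2.0 * (x - 3.0)
hess = lambda x: 2.0
```

On a quadratic, a single Newton step lands exactly at the minimizer, while gradient descent only moves part of the way, which is the motivation for replacing gradient descent in the iterative data-description optimization.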

Class-wise Generalization Error: an Information-Theoretic Analysis

no code implementations • 5 Jan 2024 • Firas Laakom, Yuheng Bu, Moncef Gabbouj

Existing generalization theories of supervised learning typically take a holistic approach and provide bounds for the expected generalization over the whole data distribution, which implicitly assumes that the model generalizes similarly for all the classes.

Generalization Bounds

Pixel-Wise Color Constancy via Smoothness Techniques in Multi-Illuminant Scenes

no code implementations • 5 Feb 2024 • Umut Cem Entok, Firas Laakom, Farhad Pakdaman, Moncef Gabbouj

Motivated by this, we propose a novel multi-illuminant color constancy method, by learning pixel-wise illumination maps caused by multiple light sources.

Color Constancy
