Search Results for author: Emre Aksu

Found 13 papers, 0 papers with code

Learning to Learn to Compress

no code implementations · 31 Jul 2020 · Nannan Zou, Honglei Zhang, Francesco Cricri, Hamed R. Tavakoli, Jani Lainema, Miska Hannuksela, Emre Aksu, Esa Rahtu

In a second phase, the Model-Agnostic Meta-learning approach is adapted to the specific case of image compression, where the inner-loop performs latent tensor overfitting, and the outer loop updates both encoder and decoder neural networks based on the overfitting performance.

Image Compression · Meta-Learning +1
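The two-phase scheme described above splits training into an inner loop that overfits the latent tensor of each image and an outer loop that updates the encoder and decoder. Below is a minimal, first-order PyTorch sketch of that split; the `encoder`/`decoder` modules, losses, and hyper-parameters are placeholders, not the paper's actual configuration.

```python
import torch
import torch.nn.functional as F

def meta_step(encoder, decoder, outer_opt, images, inner_steps=5, inner_lr=1e-2):
    """One outer-loop update: overfit each image's latent tensor (inner loop),
    then update encoder/decoder weights based on the post-overfitting loss
    (outer loop). First-order simplification; all hyper-parameters are
    illustrative, not the paper's configuration."""
    outer_opt.zero_grad()
    for x in images:                                   # x: (1, C, H, W) tensor
        z = encoder(x).detach().requires_grad_(True)
        for _ in range(inner_steps):                   # inner loop: latent overfitting
            rec_loss = F.mse_loss(decoder(z), x)
            (g,) = torch.autograd.grad(rec_loss, z)    # gradient w.r.t. the latent only
            z = (z - inner_lr * g).detach().requires_grad_(True)
        # outer loss: how well encoder/decoder perform *after* latent overfitting
        outer_loss = F.mse_loss(decoder(z.detach()), x) \
                   + F.mse_loss(encoder(x), z.detach())
        outer_loss.backward()                          # accumulates grads on enc/dec
    outer_opt.step()
```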

End-to-End Learning for Video Frame Compression with Self-Attention

no code implementations · 20 Apr 2020 · Nannan Zou, Honglei Zhang, Francesco Cricri, Hamed R. Tavakoli, Jani Lainema, Emre Aksu, Miska Hannuksela, Esa Rahtu

One of the core components of conventional (i.e., non-learned) video codecs consists of predicting a frame from a previously decoded frame by leveraging temporal correlations.

MS-SSIM · Optical Flow Estimation +1
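The prediction step the abstract refers to is, in conventional codecs, typically realized by motion-compensated prediction. As a rough illustration only (not the learned, self-attention-based method proposed in the paper), here is a full-search block-matching predictor in NumPy:

```python
import numpy as np

def predict_frame(prev, cur, block=16, search=8):
    """Predict `cur` from the previously decoded frame `prev` by full-search
    block matching (illustrative; real codecs use far more elaborate tools)."""
    H, W = cur.shape
    pred = np.zeros_like(cur)
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            target = cur[by:by+block, bx:bx+block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):       # search window in prev frame
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= H - block and 0 <= x <= W - block:
                        cand = prev[y:y+block, x:x+block]
                        sad = np.abs(cand.astype(np.int32) - target.astype(np.int32)).sum()
                        if sad < best_sad:
                            best_sad, best = sad, (dy, dx)
            dy, dx = best
            pred[by:by+block, bx:bx+block] = prev[by+dy:by+dy+block, bx+dx:bx+dx+block]
    return pred  # the encoder would then code only the residual cur - pred
```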

Calibration of fisheye camera using entrance pupil

no code implementations · 3 Jul 2019 · Peter Fasogbon, Emre Aksu

This is not necessarily true for special imaging devices such as fisheye lenses.
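For context, the standard single-viewpoint assumption places the entrance pupil at a fixed point; this paper's premise is that for fisheye lenses the pupil position varies with the incident angle. The sketch below is an illustrative equidistant fisheye projection with a hypothetical angle-dependent pupil shift (`pupil_shift` is a placeholder, not the calibration model from the paper):

```python
import numpy as np

def project_fisheye(P, f=300.0, pupil_shift=lambda theta: 0.0):
    """Project a 3D point P = (X, Y, Z) with an equidistant fisheye model,
    letting the entrance pupil move along the optical axis by pupil_shift(theta).
    pupil_shift is a hypothetical placeholder; the paper estimates this
    variation as part of calibration."""
    X, Y, Z = P
    theta = np.arctan2(np.hypot(X, Y), Z)        # incident angle w.r.t. optical axis
    Zs = Z - pupil_shift(theta)                  # angle-dependent projection centre
    theta = np.arctan2(np.hypot(X, Y), Zs)       # angle seen from the shifted centre
    r = f * theta                                # equidistant mapping: r = f * theta
    phi = np.arctan2(Y, X)
    return r * np.cos(phi), r * np.sin(phi)      # image-plane coordinates (u, v)
```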

A Compression Objective and a Cycle Loss for Neural Image Compression

no code implementations · 24 May 2019 · Caglar Aytekin, Francesco Cricri, Antti Hallapuro, Jani Lainema, Emre Aksu, Miska Hannuksela

In this manuscript we propose two objective terms for neural image compression: a compression objective and a cycle loss.

Image Compression · MS-SSIM +1
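As a rough sketch of how such terms could be combined during training (the exact formulations of the paper's compression objective and cycle loss may differ, so treat this as an assumption-laden illustration):

```python
import torch
import torch.nn.functional as F

def training_loss(encoder, decoder, x, lam_comp=0.1, lam_cycle=0.1):
    """Illustrative combination of a distortion term, a compression-style term on
    the latent, and a cycle-consistency term. The paper's exact formulations and
    weights may differ; treat this as a generic sketch."""
    z = encoder(x)
    x_hat = decoder(z)
    distortion = F.mse_loss(x_hat, x)
    # compression term: l1/l2 ratio favouring sparse, low-entropy latents (assumption)
    comp = z.abs().sum() / (z.norm(p=2) + 1e-8)
    # cycle loss: re-encoding the reconstruction should recover (roughly) the same latent
    cycle = F.mse_loss(encoder(x_hat), z.detach())
    return distortion + lam_comp * comp + lam_cycle * cycle
```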

Compressing Weight-updates for Image Artifacts Removal Neural Networks

no code implementations · 10 May 2019 · Yat Hong Lam, Alireza Zare, Caglar Aytekin, Francesco Cricri, Jani Lainema, Emre Aksu, Miska Hannuksela

In this paper, we present a novel approach for fine-tuning a decoder-side neural network in the context of image compression, such that the weight-updates are more compressible.

Fine-tuning · Image Compression +1
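A minimal sketch of the general idea, assuming the weight-update is made compressible by penalizing its l1 norm during content-specific fine-tuning (the paper's actual regularizer and setup may differ):

```python
import torch
import torch.nn.functional as F

def finetune_compressible(post_filter, decoded, original, steps=100, lr=1e-4, lam=1e-3):
    """Fine-tune a decoder-side post-filter on specific content while penalising
    the weight-update so that (finetuned - base) stays sparse, and hence cheap to
    signal. The l1 penalty and hyper-parameters are illustrative, not the paper's recipe."""
    base = {n: p.detach().clone() for n, p in post_filter.named_parameters()}
    opt = torch.optim.Adam(post_filter.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        rec = F.mse_loss(post_filter(decoded), original)      # artifacts-removal objective
        delta_l1 = sum((p - base[n]).abs().sum()
                       for n, p in post_filter.named_parameters())
        (rec + lam * delta_l1).backward()
        opt.step()
    # only the (sparse) weight-update would be sent to the receiver:
    return {n: (p.detach() - base[n]) for n, p in post_filter.named_parameters()}
```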

Compressibility Loss for Neural Network Weights

no code implementations · 3 May 2019 · Caglar Aytekin, Francesco Cricri, Emre Aksu

In this paper we apply a compressibility loss that enables learning highly compressible neural network weights.
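A minimal sketch of such a regularizer, here using an l1/l2 ratio as the compressibility term (check the manuscript for the exact definition used):

```python
import torch

def compressibility_loss(params):
    """l1/l2 ratio over all weights: smaller values mean sparser, lower-entropy,
    and hence more compressible weight tensors. Added here as an illustrative
    regulariser on top of the task loss."""
    flat = torch.cat([p.reshape(-1) for p in params])
    return flat.abs().sum() / (flat.norm(p=2) + 1e-8)

# illustrative usage:
# loss = task_loss + 1e-4 * compressibility_loss(model.parameters())
```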

Saliency-Enhanced Robust Visual Tracking

no code implementations · 8 Feb 2018 · Caglar Aytekin, Francesco Cricri, Emre Aksu

In this work, we propose an improvement over DCF-based trackers by combining saliency-based filter responses with the responses obtained from other features.

RGB Salient Object Detection · Salient Object Detection +2
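Illustrative only: a DCF-style tracker produces one correlation response map per feature type, and one simple way to combine a saliency-based response with the others is a weighted sum followed by peak picking. The fixed fusion weight below is a placeholder, not the rule from the paper.

```python
import numpy as np

def fuse_responses(resp_saliency, resp_other, w=0.3):
    """Fuse the correlation-filter response computed on saliency features with the
    response from the remaining feature channels, then localise the target at the
    peak. The fixed weight w is a placeholder for whatever fusion rule is used."""
    fused = w * resp_saliency + (1.0 - w) * resp_other
    dy, dx = np.unravel_index(np.argmax(fused), fused.shape)
    return fused, (dy, dx)          # peak location gives the new target position
```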

A Theoretical Investigation of Graph Degree as an Unsupervised Normality Measure

no code implementations · 24 Jan 2018 · Caglar Aytekin, Francesco Cricri, Lixin Fan, Emre Aksu

In order to gain an in-depth theoretical understanding, in this manuscript we investigate the graph degree from both a spectral-graph-clustering and a kernel-based point of view, and draw connections to a recent kernel method for the two-sample problem.

Graph Clustering · Spectral Graph Clustering +1
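A small worked example of graph degree as a normality score under an RBF affinity graph, which also makes the link to kernel mean embeddings (and hence two-sample statistics such as MMD) concrete; the kernel choice and scaling are illustrative:

```python
import numpy as np

def degree_normality_scores(X, sigma=1.0):
    """Normality score = graph degree under an RBF affinity graph. Up to a
    constant, the degree of x_i is the empirical kernel mean embedding
    evaluated at x_i, the same quantity that appears in kernel two-sample
    statistics such as MMD. Low degree suggests an anomaly."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-sq / (2.0 * sigma ** 2))                  # RBF affinity matrix
    return K.sum(axis=1) / len(X)                         # degree ~ mean embedding at x_i
```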

Memory-Efficient Deep Salient Object Segmentation Networks on Gridized Superpixels

no code implementations · 27 Dec 2017 · Caglar Aytekin, Xingyang Ni, Francesco Cricri, Lixin Fan, Emre Aksu

By using these encoded images, we train a memory-efficient network with only 0.048% of the parameters used by other deep salient object detection networks.

RGB Salient Object Detection · Salient Object Detection +2
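A hedged sketch of the input-encoding idea, assuming superpixels (SLIC here) are summarized by their mean colour and arranged into a small grid by centroid order; the paper's actual gridization and network are not reproduced:

```python
import numpy as np
from skimage.segmentation import slic

def gridize_superpixels(image, grid=64):
    """Encode an image as a small grid of superpixel mean colours. The
    'gridization' here (ordering superpixels by centroid position) is a simple
    stand-in for the paper's encoding; it only illustrates feeding a tiny,
    superpixel-level input to a small segmentation network."""
    labels = slic(image, n_segments=grid * grid, compactness=10, start_label=0)
    labs = np.unique(labels)
    feats = np.zeros((len(labs), image.shape[2]))
    ys, xs = np.zeros(len(labs)), np.zeros(len(labs))
    for i, s in enumerate(labs):
        mask = labels == s
        feats[i] = image[mask].mean(axis=0)          # mean colour of the superpixel
        yy, xx = np.nonzero(mask)
        ys[i], xs[i] = yy.mean(), xx.mean()          # superpixel centroid
    order = np.lexsort((xs, ys))                     # roughly row-major by centroid
    side = int(np.sqrt(len(labs)))
    return feats[order][: side * side].reshape(side, side, -1)
```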

Video Ladder Networks

no code implementations · 6 Dec 2016 · Francesco Cricri, Xingyang Ni, Mikko Honkala, Emre Aksu, Moncef Gabbouj

Thanks to the recurrent connections, the decoder can exploit temporal summaries generated from all layers of the encoder.
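A toy PyTorch sketch of the recurrent-summary idea: each encoder layer keeps a per-layer temporal state that a decoder could read at every step. Channel widths, the pooling, and the use of `nn.LSTMCell` are illustrative choices, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LadderSummaries(nn.Module):
    """Toy sketch: every encoder layer keeps a recurrent 'temporal summary'
    (an LSTMCell over spatially pooled activations) that a decoder could read
    at each time step. Channel sizes and pooling are arbitrary choices."""
    def __init__(self, chans=(3, 32, 64), hidden=64):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(chans[i], chans[i + 1], 3, stride=2, padding=1)
            for i in range(len(chans) - 1))
        self.cells = nn.ModuleList(nn.LSTMCell(c, hidden) for c in chans[1:])

    def forward(self, frames):                       # frames: (T, B, 3, H, W)
        B = frames.shape[1]
        states = [(frames.new_zeros(B, cell.hidden_size),
                   frames.new_zeros(B, cell.hidden_size)) for cell in self.cells]
        for x in frames:                             # one time step per frame
            for i, (conv, cell) in enumerate(zip(self.convs, self.cells)):
                x = torch.relu(conv(x))
                pooled = x.mean(dim=(2, 3))          # spatial summary of this layer
                states[i] = cell(pooled, states[i])  # update the layer's temporal state
        # a decoder would predict the next frame conditioned on all layer summaries
        return [h for (h, _) in states]
```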
