no code implementations • 10 Nov 2023 • Bharath Bhushan Damodaran, Francois Schnitzler, Anne Lambert, Pierre Hellier
Positional encodings are employed to capture the high-frequency information of the encoded signals in implicit neural representations (INRs).
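A minimal sketch of the idea: a Fourier-feature positional encoding maps a low-dimensional coordinate onto sines and cosines at geometrically spaced frequencies, which lets an INR's MLP fit high-frequency detail. The number of frequencies and the frequency ladder here are illustrative choices, not the paper's configuration.

```python
import numpy as np

def positional_encoding(x, num_freqs=6):
    """Fourier-feature positional encoding as commonly used in INRs.

    Maps each coordinate t to [sin(2^k * pi * t), cos(2^k * pi * t)]
    for k = 0..num_freqs-1. `num_freqs` is an illustrative default,
    not a value taken from the paper.
    """
    x = np.atleast_1d(np.asarray(x, dtype=float))
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi  # geometric frequency ladder
    angles = x[..., None] * freqs                  # shape (..., num_freqs)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

# two 1-D coordinates, four frequencies -> 8-D embedding per coordinate
emb = positional_encoding(np.array([0.0, 0.5]), num_freqs=4)
print(emb.shape)  # (2, 8)
```

The encoded coordinates, rather than the raw ones, are then fed to the MLP.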
no code implementations • 1 Aug 2023 • Muhammet Balcilar, Bharath Bhushan Damodaran, Karam Naser, Franck Galpin, Pierre Hellier
End-to-end image/video codecs are becoming competitive with traditional compression techniques developed through decades of manual engineering effort.
no code implementations • 6 Mar 2023 • Bharath Bhushan Damodaran, Muhammet Balcilar, Franck Galpin, Pierre Hellier
Deep variational autoencoders for image and video compression have gained significant traction in recent years, owing to their potential to offer compression rates competitive with or better than decades-old traditional codecs such as AVC, HEVC, or VVC.
no code implementations • 12 Oct 2022 • Muhammet Balcilar, Bharath Bhushan Damodaran, Pierre Hellier
In this paper, we propose to evaluate the amortization gap for three state-of-the-art ML video compression methods.
no code implementations • 9 Jul 2022 • Mustafa Shukor, Bharath Bhushan Damodaran, Xu Yao, Pierre Hellier
We leverage the generative capacity of GANs such as StyleGAN to represent and compress a video, including intra and inter compression.
no code implementations • 5 Oct 2021 • Bharath Bhushan Damodaran, Emmanuel Jolly, Gilles Puy, Philippe Henri Gosselin, Cédric Thébault, Junghyun Ahn, Tim Christensen, Paul Ghezzo, Pierre Hellier
We present FacialFilmroll, a solution for spatially and temporally consistent editing of faces in one or multiple shots.
no code implementations • 29 Sep 2021 • Mustafa Shukor, Xu Yao, Bharath Bhushan Damodaran, Pierre Hellier
We leverage the generative capacity of GANs such as StyleGAN to represent and compress each video frame (intra compression), as well as the successive differences between frames (inter compression).
no code implementations • 9 Jul 2021 • Mustafa Shukor, Xu Yao, Bharath Bhushan Damodaran, Pierre Hellier
Generative adversarial networks (GANs) have proven to be surprisingly efficient for image editing by inverting and manipulating the latent code corresponding to a natural image.
1 code implementation • 8 Apr 2019 • Kilian Fatras, Bharath Bhushan Damodaran, Sylvain Lobry, Rémi Flamary, Devis Tuia, Nicolas Courty
Noisy labels often occur in vision datasets, especially when they are obtained from crowdsourcing or Web scraping.
no code implementations • 2 Oct 2018 • Bharath Bhushan Damodaran, Rémi Flamary, Vivien Seguy, Nicolas Courty
The state-of-the-art performance of deep neural networks is conditioned on the availability of a large number of accurately labeled samples.
no code implementations • 19 Apr 2018 • Chippy Jayaprakash, Bharath Bhushan Damodaran, Sowmya V, K. P. Soman
In the literature, a small subset of pixels is randomly selected to overcome this issue; however, this sub-optimal strategy might neglect important information in the HSI.
no code implementations • 14 Apr 2018 • Bharath Bhushan Damodaran
Thus, the objective of this letter is to propose a fast and efficient method to select the bandwidth parameter of the Gaussian kernel in kernel-based classification methods.
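As a point of reference for what bandwidth selection looks like, the sketch below uses the median heuristic: set sigma to the median pairwise distance of the data. This is a standard fast baseline, not necessarily the selection rule proposed in the letter.

```python
import numpy as np

def median_heuristic_bandwidth(X):
    """Median heuristic for the Gaussian (RBF) kernel bandwidth.

    Sets sigma to the median pairwise Euclidean distance between
    samples. Illustrative baseline only; the letter's method may differ.
    """
    diffs = X[:, None, :] - X[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    iu = np.triu_indices(len(X), k=1)   # unique pairs, excluding the diagonal
    return np.median(dists[iu])

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
sigma = median_heuristic_bandwidth(X)
# Gaussian kernel matrix with the selected bandwidth
K = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / (2 * sigma ** 2))
```

The resulting kernel matrix `K` can then be plugged into any kernel classifier (e.g. an SVM).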
4 code implementations • ECCV 2018 • Bharath Bhushan Damodaran, Benjamin Kellenberger, Rémi Flamary, Devis Tuia, Nicolas Courty
In computer vision, one is often confronted with problems of domain shifts, which occur when one applies a classifier trained on a source dataset to target data sharing similar characteristics (e.g. same classes), but also different latent data structures (e.g. different acquisition conditions).
Ranked #2 on Domain Adaptation on MNIST-to-MNIST-M
no code implementations • ICLR 2018 • Vivien Seguy, Bharath Bhushan Damodaran, Rémi Flamary, Nicolas Courty, Antoine Rolet, Mathieu Blondel
First, we learn an optimal transport (OT) plan, which can be thought of as a one-to-many map between the two distributions.
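To make the "one-to-many map" concrete, the sketch below computes an entropy-regularized OT plan between two discrete distributions with plain Sinkhorn iterations; each row of the resulting coupling spreads one source point's mass over several targets. The regularization strength and iteration count are illustrative, and the paper's large-scale stochastic formulation differs from this small dense example.

```python
import numpy as np

def sinkhorn_plan(a, b, C, eps=0.05, iters=500):
    """Entropy-regularized OT plan via Sinkhorn iterations.

    a, b : source/target probability vectors; C : cost matrix.
    Returns a coupling P whose marginals approximate a and b.
    eps and iters are illustrative hyper-parameters.
    """
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# toy example: 3 source points and 4 target points on a line
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 0.5, 1.5, 2.5])
C = (x[:, None] - y[None, :]) ** 2        # squared-distance cost
a = np.full(3, 1 / 3)
b = np.full(4, 1 / 4)
P = sinkhorn_plan(a, b, C)
print(P.sum())  # total transported mass, ~1.0
```

Row `P[i]` shows how source point `i`'s mass is split across the targets — the one-to-many behaviour the paper refers to.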
no code implementations • 27 Nov 2017 • Bharath Bhushan Damodaran, Nicolas Courty, Philippe-Henri Gosselin
Thus, reducing the number of feature dimensions is necessary to effectively scale to large datasets.
2 code implementations • 7 Nov 2017 • Vivien Seguy, Bharath Bhushan Damodaran, Rémi Flamary, Nicolas Courty, Antoine Rolet, Mathieu Blondel
We prove two theoretical stability results of regularized OT which show that our estimations converge to the OT plan and Monge map between the underlying continuous measures.