Search Results for author: Maksim Makarenko

Found 5 papers, 2 papers with code

Artificial intelligence optical hardware empowers high-resolution hyperspectral video understanding at 1.2 Tb/s

no code implementations • 17 Dec 2023 • Maksim Makarenko, Qizhou Wang, Arturo Burguete-Lopez, Silvio Giancola, Bernard Ghanem, Luca Passone, Andrea Fratalocchi

The technology platform combines artificial intelligence hardware, processing information optically, with state-of-the-art machine vision networks, resulting in a data processing speed of 1.2 Tb/s with hundreds of frequency bands and megapixel spatial resolution at video rates.

Semantic Segmentation • Video Semantic Segmentation • +1

Adaptive Compression for Communication-Efficient Distributed Training

no code implementations • 31 Oct 2022 • Maksim Makarenko, Elnur Gasanov, Rustem Islamov, Abdurakhmon Sadiev, Peter Richtarik

We propose Adaptive Compressed Gradient Descent (AdaCGD), a novel optimization algorithm for communication-efficient training of supervised machine learning models with an adaptive compression level.

Quantization
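To make the idea of communication-efficient compressed gradient descent concrete, here is a minimal sketch of top-k gradient sparsification, a standard compressor in this line of work. This is an illustrative toy, not the AdaCGD algorithm itself; the adaptive compression-level selection described in the paper is not reproduced, and the function and variable names are invented for the example.

```python
import numpy as np

def topk_compress(grad, k):
    """Top-k sparsification: keep only the k largest-magnitude entries
    of the gradient and zero the rest, reducing communication cost."""
    if k >= grad.size:
        return grad.copy()
    # Indices of the k entries with the largest absolute value.
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    out = np.zeros_like(grad)
    out[idx] = grad[idx]
    return out

# Toy distributed step: each worker compresses its local gradient
# before "sending" it; the server averages the sparse gradients.
rng = np.random.default_rng(0)
worker_grads = [rng.normal(size=10) for _ in range(4)]  # 4 workers
compressed = [topk_compress(g, k=3) for g in worker_grads]
averaged = np.mean(compressed, axis=0)  # server-side aggregation
```

In practice such compressors are paired with error feedback or variance-reduction mechanisms to preserve convergence; the fixed `k` here stands in for the adaptive compression level that AdaCGD chooses automatically.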

Real-time Hyperspectral Imaging in Hardware via Trained Metasurface Encoders

1 code implementation CVPR 2022 Maksim Makarenko, Arturo Burguete-Lopez, Qizhou Wang, Fedor Getman, Silvio Giancola, Bernard Ghanem, Andrea Fratalocchi

Hyperspectral imaging has attracted significant attention for its ability to identify spectral signatures for image classification and automated pattern recognition in computer vision.

Image Classification • Semantic Segmentation • +1

Broadband vectorial ultra-flat optics with experimental efficiency up to 99% in the visible via universal approximators

1 code implementation • 5 May 2020 • Fedor Getman, Maksim Makarenko, Arturo Burguete-Lopez, Andrea Fratalocchi

In this work, we developed an inverse design approach that enables the realization of highly efficient (up to $99\%$) ultra-flat (down to $50$ nm thick) optics for vectorial light control and broadband input-output responses with a desired wavefront shape.

Optics
