Search Results for author: Vadim Lebedev

Found 8 papers, 2 papers with code

Precipitation Nowcasting with Satellite Imagery

no code implementations · 23 May 2019 · Vadim Lebedev, Vladimir Ivashkin, Irina Rudenko, Alexander Ganshin, Alexander Molchanov, Sergey Ovcharenko, Ruslan Grokhovetskiy, Ivan Bushmarinov, Dmitry Solomentsev

Precipitation nowcasting is a short-range forecast of rain or snow (up to 2 hours), often displayed on top of a geographical map by a weather service.

Optical Flow Estimation

Spatiotemporal Data Fusion for Precipitation Nowcasting

no code implementations · 28 Dec 2018 · Vladimir Ivashkin, Vadim Lebedev

Precipitation nowcasting using neural networks and ground-based radars has become one of the key components of modern weather prediction services, but it is limited to the regions covered by ground-based radars.

Impostor Networks for Fast Fine-Grained Recognition

no code implementations · 13 Jun 2018 · Vadim Lebedev, Artem Babenko, Victor Lempitsky

In this work we introduce impostor networks, an architecture that performs fine-grained recognition with high accuracy using a light-weight convolutional network, making it particularly suitable for fine-grained applications on low-power, non-GPU-enabled platforms.

Learnable Visual Markers

no code implementations · NeurIPS 2016 · Oleg Grinchuk, Vadim Lebedev, Victor Lempitsky

We propose a new approach to designing visual markers (analogous to QR-codes, markers for augmented reality, and robotic fiducial tags) based on the advances in deep generative networks.

Texture Networks: Feed-forward Synthesis of Textures and Stylized Images

10 code implementations · 10 Mar 2016 · Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, Victor Lempitsky

Gatys et al. recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example.

Style Transfer

Fast ConvNets Using Group-wise Brain Damage

no code implementations · CVPR 2016 · Vadim Lebedev, Victor Lempitsky

We revisit the idea of brain damage, i.e., the pruning of the coefficients of a neural network, and suggest how brain damage can be modified and used to speed up convolutional layers.
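The core idea of group-wise pruning can be sketched in a few lines of NumPy: instead of zeroing individual weights, zero out entire structured groups of a convolutional kernel so that whole columns of the underlying matrix multiply can be skipped. The shapes, grouping choice, and thresholding rule below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32, 3, 3))  # (out_channels, in_channels, kH, kW)

# One group = all output-channel weights tied to a single
# (input channel, kernel row, kernel column) position.
group_norms = np.linalg.norm(W, axis=0)              # shape (32, 3, 3)

# Illustrative rule: keep the half of the groups with the largest L2 norm.
mask = group_norms >= np.quantile(group_norms, 0.5)
W_pruned = W * mask[None, :, :, :]

# Pruned groups are exactly zero, so when the convolution is lowered to a
# matrix multiply, the corresponding input columns can be dropped entirely,
# which is what yields a practical (not just theoretical) speed-up.
kept = int(mask.sum())
print(kept, mask.size)
```

The point of the group structure is that unstructured sparsity rarely speeds up dense BLAS kernels, whereas removing whole groups shrinks the actual matrix dimensions.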

Speeding-up Convolutional Neural Networks Using Fine-tuned CP-Decomposition

10 code implementations19 Dec 2014 Vadim Lebedev, Yaroslav Ganin, Maksim Rakhuba, Ivan Oseledets, Victor Lempitsky

We propose a simple two-step approach for speeding up convolution layers within large convolutional neural networks based on tensor decomposition and discriminative fine-tuning.
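The source of the speed-up can be illustrated with a back-of-the-envelope multiply-accumulate count: a rank-R CP decomposition replaces one dense S-to-T convolution with a d×d kernel by a sequence of four cheap convolutions (1×1 channel projection, d×1 and 1×d depth-wise passes, 1×1 channel expansion). The specific sizes and the rank below are illustrative assumptions.

```python
def conv_flops(h, w, c_in, c_out, kh, kw):
    """Multiply-accumulates for a dense convolution with an h x w output."""
    return h * w * c_in * c_out * kh * kw

H = W = 32           # feature-map size
S, T, d = 64, 64, 3  # input channels, output channels, kernel size
R = 16               # assumed CP rank

full = conv_flops(H, W, S, T, d, d)

cp = (conv_flops(H, W, S, R, 1, 1)     # 1x1 projection: S -> R channels
      + H * W * R * d                  # d x 1 depth-wise pass over R channels
      + H * W * R * d                  # 1 x d depth-wise pass over R channels
      + conv_flops(H, W, R, T, 1, 1))  # 1x1 expansion: R -> T channels

print(full, cp, round(full / cp, 1))
```

The ratio grows as the rank R shrinks relative to the channel counts, which is why the discriminative fine-tuning step matters: it recovers the accuracy lost by the low-rank approximation.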

General Classification · Tensor Decomposition
