1 code implementation • 25 Mar 2024 • Artem Khrapov, Vadim Popov, Tasnima Sadekova, Assel Yermekova, Mikhail Kudinov
Diffusion models are known to be vulnerable to outliers in training data.
4 code implementations • ICLR 2022 • Vadim Popov, Ivan Vovk, Vladimir Gogoryan, Tasnima Sadekova, Mikhail Kudinov, Jiansheng Wei
Voice conversion is a common speech synthesis task that can be solved in different ways depending on the particular real-world scenario.
6 code implementations • 13 May 2021 • Vadim Popov, Ivan Vovk, Vladimir Gogoryan, Tasnima Sadekova, Mikhail Kudinov
Recently, denoising diffusion probabilistic models and generative score matching have shown high potential in modelling complex data distributions, while stochastic calculus has provided a unified point of view on these techniques, allowing for flexible inference schemes.
Ranked #3 on Text-To-Speech Synthesis on LJSpeech (using extra training data)
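As a brief illustrative sketch of the stochastic-calculus viewpoint mentioned above (generic score-SDE notation, not the specific drift and diffusion terms used in this paper): the noising process is written as a forward SDE, and generation follows the corresponding reverse-time SDE driven by the score of the intermediate densities.

```latex
% Generic score-based SDE view (illustrative only; f, g and W_t are
% placeholder drift, diffusion and Wiener-process terms).
\begin{align}
  \mathrm{d}X_t &= f(X_t, t)\,\mathrm{d}t + g(t)\,\mathrm{d}W_t
    && \text{forward (noising) SDE} \\
  \mathrm{d}X_t &= \bigl[f(X_t, t) - g(t)^2 \nabla_x \log p_t(X_t)\bigr]\,\mathrm{d}t
    + g(t)\,\mathrm{d}\bar{W}_t
    && \text{reverse-time (generative) SDE}
\end{align}
```

Different discretisations of the reverse-time SDE give different samplers, which is what makes the inference schemes flexible.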
no code implementations • 12 Nov 2018 • Vadim Popov, Mikhail Kudinov
Cross-entropy loss is a common choice for multiclass classification tasks, and for language modeling in particular.
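For context, a minimal sketch of the standard multiclass cross-entropy loss for a single example (a generic formulation, not the specific variant studied in the paper):

```python
import numpy as np

def cross_entropy(logits: np.ndarray, target: int) -> float:
    """Standard multiclass cross-entropy for one example.

    Illustrative only: `logits` are unnormalised class scores and
    `target` is the index of the correct class (e.g. the next token).
    """
    # log-sum-exp with max-shift for numerical stability
    shifted = logits - logits.max()
    log_z = np.log(np.exp(shifted).sum())
    # -log softmax(logits)[target] = log_z - shifted[target]
    return float(log_z - shifted[target])

# Example: 5-way classification (a tiny next-token prediction vocabulary)
print(cross_entropy(np.array([2.0, 0.5, -1.0, 0.0, 1.5]), target=0))
```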
no code implementations • ICLR 2018 • Vadim Popov, Mikhail Kudinov, Irina Piontkovskaya, Petr Vytovtov, Alex Nevidomsky
In language modeling, users’ language (e.g. in private messaging) can change within a year and become completely different from what we observe in publicly available data.
no code implementations • 20 Dec 2017 • Vadim Popov, Mikhail Kudinov, Irina Piontkovskaya, Petr Vytovtov, Alex Nevidomsky
One of the major challenges in machine learning applications is that training data can differ from the real-world data the algorithm later faces.