no code implementations • 2 Dec 2024 • Anton Voronov, Denis Kuznedelev, Mikhail Khoroshikh, Valentin Khrulkov, Dmitry Baranchuk
This work presents Switti, a scale-wise transformer for text-to-image generation.
no code implementations • 3 Oct 2024 • Mikhail Persiianov, Arip Asadulaev, Nikita Andreev, Nikita Starodubcev, Dmitry Baranchuk, Anastasis Kratsios, Evgeny Burnaev, Alexander Korotin
To tackle this issue, we propose a new learning paradigm that integrates both paired and unpaired data $\textbf{seamlessly}$ through data likelihood maximization techniques.
1 code implementation • 25 Sep 2024 • Harsha Vardhan Simhadri, Martin Aumüller, Amir Ingber, Matthijs Douze, George Williams, Magdalen Dobson Manohar, Dmitry Baranchuk, Edo Liberty, Frank Liu, Ben Landrum, Mazin Karjikar, Laxman Dhulipala, Meng Chen, Yue Chen, Rui Ma, Kai Zhang, Yuzheng Cai, Jiayang Shi, Yizhuo Chen, Weiguo Zheng, Zihao Wan, Jie Yin, Ben Huang
The 2023 Big ANN Challenge, held at NeurIPS 2023, focused on advancing the state-of-the-art in indexing data structures and search algorithms for practical variants of Approximate Nearest Neighbor (ANN) search that reflect the growing complexity and diversity of workloads.
no code implementations • 31 Aug 2024 • Vage Egiazarian, Denis Kuznedelev, Anton Voronov, Ruslan Svirschevski, Michael Goin, Daniil Pavlov, Dan Alistarh, Dmitry Baranchuk
Specifically, we tailor vector-based PTQ methods to recent billion-scale text-to-image models (SDXL and SDXL-Turbo), and show that diffusion models with 2B+ parameters compressed to around 3 bits using VQ exhibit image quality and textual alignment similar to previous 4-bit compression techniques.
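As a rough illustration of the idea (not the paper's actual PTQ pipeline, which involves calibration data and fine-tuning), the sketch below vector-quantizes a random weight matrix with a small k-means codebook and works out the resulting bits-per-weight; all sizes and names are illustrative.

```python
import numpy as np

# Toy sketch of the vector-quantization (VQ) idea behind sub-4-bit weight
# compression: split a weight matrix into short groups and replace each group
# with the index of its nearest codebook entry (codebook fitted with plain
# k-means). E.g. a 2^12-entry codebook over groups of 4 weights costs
# 12 / 4 = 3 bits per weight, plus the (amortized) codebook itself.

def kmeans(x, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)].copy()
    for _ in range(iters):
        # Squared distances via ||x||^2 - 2 x.c + ||c||^2 (avoids huge broadcasts).
        d = (x ** 2).sum(1, keepdims=True) - 2 * x @ centers.T + (centers ** 2).sum(1)
        assign = d.argmin(1)
        for j in range(k):
            members = x[assign == j]
            if len(members):
                centers[j] = members.mean(0)
    return centers, assign

group_dim, codebook_size = 4, 256                 # small demo codebook: 8 bits over 4 weights
w = np.random.randn(512, 512).astype(np.float32)  # stand-in for one layer's weights
groups = w.reshape(-1, group_dim)
centers, codes = kmeans(groups[:16384], codebook_size)
w_hat = centers[codes]                            # dequantized groups for the fitted part
bits_per_weight = np.log2(codebook_size) / group_dim  # 2.0 in this demo, 3.0 for a 2^12 codebook
```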
no code implementations • 20 Jun 2024 • Nikita Starodubcev, Mikhail Khoroshikh, Artem Babenko, Dmitry Baranchuk
Diffusion distillation represents a highly promising direction for achieving faithful text-to-image generation in a few sampling steps.
1 code implementation • CVPR 2024 • Nikita Starodubcev, Artem Fedorov, Artem Babenko, Dmitry Baranchuk
While several powerful distillation methods have recently been proposed, the overall quality of student samples is typically lower than that of the teacher, which hinders their practical usage.
2 code implementations • NeurIPS 2023 • Alexander Borzunov, Max Ryabinin, Artem Chumachenko, Dmitry Baranchuk, Tim Dettmers, Younes Belkada, Pavel Samygin, Colin Raffel
Large language models (LLMs) are useful in many NLP tasks and become more capable with size, with the best open-source models having over 50 billion parameters.
no code implementations • ICCV 2023 • Dmitry Baranchuk, Matthijs Douze, Yash Upadhyay, I. Zeki Yalniz
We investigate the impact of this "content drift" for large-scale similarity search tools, based on nearest neighbor search in embedding space.
1 code implementation • 10 Apr 2023 • Nikita Starodubcev, Dmitry Baranchuk, Valentin Khrulkov, Artem Babenko
Finally, we show that our approach can adapt the pretrained model to a user-specified image and text description on the fly in just 4 seconds.
3 code implementations • 30 Sep 2022 • Akim Kotelnikov, Dmitry Baranchuk, Ivan Rubachev, Artem Babenko
Denoising diffusion probabilistic models are currently becoming the leading paradigm of generative modeling for many important data modalities.
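For reference, the standard Gaussian forward (noising) process that DDPM-based models, including tabular variants such as the one proposed here for continuous features, are built on:

$$
q(x_t \mid x_{t-1}) = \mathcal{N}\bigl(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t I\bigr), \qquad
q(x_t \mid x_0) = \mathcal{N}\bigl(x_t;\ \sqrt{\bar\alpha_t}\,x_0,\ (1-\bar\alpha_t)\, I\bigr), \quad
\bar\alpha_t = \prod_{s=1}^{t}(1-\beta_s),
$$

with a neural network trained to approximate the reverse transitions $p_\theta(x_{t-1} \mid x_t)$.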
2 code implementations • 2 Sep 2022 • Alexander Borzunov, Dmitry Baranchuk, Tim Dettmers, Max Ryabinin, Younes Belkada, Artem Chumachenko, Pavel Samygin, Colin Raffel
However, these techniques have innate limitations: offloading is too slow for interactive inference, while APIs are not flexible enough for research that requires access to weights, attention or logits.
no code implementations • 8 May 2022 • Harsha Vardhan Simhadri, George Williams, Martin Aumüller, Matthijs Douze, Artem Babenko, Dmitry Baranchuk, Qi Chen, Lucas Hosseini, Ravishankar Krishnaswamy, Gopal Srinivasa, Suhas Jayaram Subramanya, Jingdong Wang
The outcome of the competition was ranked leaderboards of algorithms in each track based on recall at a query throughput threshold.
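For concreteness, a minimal sketch of the competition-style metric: recall@k, reported only for configurations whose measured throughput clears a queries-per-second bar. Function names and the 10,000 QPS threshold are illustrative, not the official evaluation harness.

```python
import numpy as np

def recall_at_k(approx_ids: np.ndarray, true_ids: np.ndarray, k: int) -> float:
    # approx_ids, true_ids: (num_queries, k) arrays of neighbor indices.
    hits = sum(len(set(a[:k]) & set(t[:k])) for a, t in zip(approx_ids, true_ids))
    return hits / (true_ids.shape[0] * k)

def leaderboard_score(approx_ids, true_ids, queries_per_second, k=10, qps_threshold=10_000):
    # Rank by recall, but only for runs meeting the throughput requirement.
    if queries_per_second < qps_threshold:
        return None  # configuration does not qualify for the leaderboard
    return recall_at_k(approx_ids, true_ids, k)
```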
1 code implementation • ICLR 2022 • Dmitry Baranchuk, Ivan Rubachev, Andrey Voynov, Valentin Khrulkov, Artem Babenko
Denoising diffusion probabilistic models have recently received much research attention since they outperform alternative approaches, such as GANs, and currently provide state-of-the-art generative performance.
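A hypothetical, self-contained sketch of the general recipe behind diffusion-based label-efficient segmentation: noise an image with the DDPM forward process, collect intermediate denoiser activations via forward hooks, upsample them to pixel resolution, and train a small per-pixel classifier on them. `TinyDenoiser`, the timestep, and all dimensions are stand-ins, not the paper's released model or settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDenoiser(nn.Module):
    # Stand-in for a pretrained diffusion U-Net; the timestep t is unused here.
    def __init__(self):
        super().__init__()
        self.block1 = nn.Conv2d(3, 32, 3, padding=1)
        self.block2 = nn.Conv2d(32, 64, 3, stride=2, padding=1)
        self.out = nn.Conv2d(64, 3, 3, padding=1)

    def forward(self, x, t):
        h1 = F.silu(self.block1(x))
        h2 = F.silu(self.block2(h1))
        return F.interpolate(self.out(h2), size=x.shape[-2:])

def extract_pixel_features(model, blocks, x0, t, alpha_bar_t):
    feats = []
    hooks = [b.register_forward_hook(lambda m, i, o: feats.append(o)) for b in blocks]
    # Standard DDPM forward noising at the chosen timestep.
    x_t = alpha_bar_t.sqrt() * x0 + (1 - alpha_bar_t).sqrt() * torch.randn_like(x0)
    with torch.no_grad():
        model(x_t, t)
    for h in hooks:
        h.remove()
    h_, w_ = x0.shape[-2:]
    feats = [F.interpolate(f, size=(h_, w_), mode="bilinear", align_corners=False) for f in feats]
    return torch.cat(feats, dim=1)  # (B, C_total, H, W) pixel-wise features

model = TinyDenoiser().eval()
x0 = torch.rand(2, 3, 64, 64)
t = torch.full((2,), 250)
feats = extract_pixel_features(model, [model.block1, model.block2], x0, t, torch.tensor(0.5))
# Per-pixel classifier over the 32 + 64 = 96 concatenated feature channels.
pixel_clf = nn.Conv2d(96, 21, kernel_size=1)
logits = pixel_clf(feats)  # (B, num_classes, H, W), trained with cross-entropy on labeled pixels
```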
no code implementations • ICLR 2022 • Liudmila Prokhorenkova, Dmitry Baranchuk, Nikolay Bogachev, Yury Demidovich, Alexander Kolpakov
From a theoretical perspective, we rigorously analyze the time and space complexity of graph-based NNS, assuming that an n-element dataset is uniformly distributed within a d-dimensional ball of radius R in the hyperbolic space of curvature -1.
1 code implementation • ICML Workshop INNF 2021 • Dmitry Baranchuk, Vladimir Aliev, Artem Babenko
Normalizing flows are a powerful class of generative models demonstrating strong performance in several speech and vision problems.
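For context, every normalizing flow is trained by exact maximum likelihood through the change-of-variables identity: for an invertible map $f_\theta$ with tractable Jacobian determinant and base density $p_Z$,

$$
\log p_X(x) = \log p_Z\bigl(f_\theta(x)\bigr) + \log\left|\det \frac{\partial f_\theta(x)}{\partial x}\right|.
$$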
1 code implementation • ICML Workshop AutoML 2021 • Dmitry Baranchuk, Artem Babenko
In this study, we propose a task-agnostic approach that discovers initializers for specific network architectures and optimizers by learning the initial weight distributions directly via meta-learning.
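A toy, hypothetical sketch of the general idea (not the paper's algorithm): parameterize a Gaussian distribution over initial weights, differentiate through a few inner SGD steps on a task, and update the distribution's parameters with the resulting meta-gradient.

```python
import torch

torch.manual_seed(0)
d_in, d_hid, d_out = 8, 16, 1

# Learnable initialization distribution: per-layer mean and log-std.
init_params = {
    "w1_mu": torch.zeros(d_hid, d_in, requires_grad=True),
    "w1_logsig": torch.full((d_hid, d_in), -2.0, requires_grad=True),
    "w2_mu": torch.zeros(d_out, d_hid, requires_grad=True),
    "w2_logsig": torch.full((d_out, d_hid), -2.0, requires_grad=True),
}
meta_opt = torch.optim.Adam(list(init_params.values()), lr=1e-2)

def sample_weights():
    # Reparameterized sample so gradients flow back to (mu, log_sigma).
    return {
        "w1": init_params["w1_mu"] + init_params["w1_logsig"].exp() * torch.randn(d_hid, d_in),
        "w2": init_params["w2_mu"] + init_params["w2_logsig"].exp() * torch.randn(d_out, d_hid),
    }

def forward(w, x):
    return torch.relu(x @ w["w1"].t()) @ w["w2"].t()

for meta_step in range(100):
    x = torch.randn(64, d_in)
    y = x[:, :1]  # toy regression target
    w = sample_weights()
    # A few differentiable inner SGD steps starting from the sampled initialization.
    for _ in range(3):
        loss = ((forward(w, x) - y) ** 2).mean()
        grads = torch.autograd.grad(loss, list(w.values()), create_graph=True)
        w = {k: v - 0.1 * g for (k, v), g in zip(w.items(), grads)}
    meta_loss = ((forward(w, x) - y) ** 2).mean()
    meta_opt.zero_grad()
    meta_loss.backward()   # meta-gradient w.r.t. the initialization distribution
    meta_opt.step()
```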
1 code implementation • 27 Nov 2019 • Dmitry Baranchuk, Artem Babenko
New algorithms for similarity graph construction are continuously being proposed and analyzed by both theoreticians and practitioners.
no code implementations • Approximate Inference AABI Symposium 2019 • Vincent Fortuin, Dmitry Baranchuk, Gunnar Rätsch, Stephan Mandt
Multivariate time series with missing values are common in areas such as healthcare and finance, and have grown in number and complexity over the years.
3 code implementations • 9 Jul 2019 • Vincent Fortuin, Dmitry Baranchuk, Gunnar Rätsch, Stephan Mandt
Multivariate time series with missing values are common in areas such as healthcare and finance, and have grown in number and complexity over the years.
1 code implementation • 27 May 2019 • Dmitry Baranchuk, Dmitry Persiyanov, Anton Sinitsin, Artem Babenko
Recently, similarity graphs have become the leading paradigm for efficient nearest neighbor search, outperforming traditional tree-based and LSH-based methods.
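For orientation, a minimal example of building and querying a similarity-graph index with the hnswlib library; the parameters are illustrative and this is not the configuration studied in the paper.

```python
import hnswlib
import numpy as np

d, num_elements = 64, 10_000
data = np.random.rand(num_elements, d).astype("float32")

# Build an HNSW similarity graph: M controls graph degree, ef_construction the
# beam width used while inserting elements.
index = hnswlib.Index(space="l2", dim=d)
index.init_index(max_elements=num_elements, ef_construction=200, M=16)
index.add_items(data, np.arange(num_elements))

index.set_ef(64)                                  # search-time beam width (recall/speed trade-off)
labels, distances = index.knn_query(data[:5], k=10)
```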
12 code implementations • ECCV 2018 • Dmitry Baranchuk, Artem Babenko, Yury Malkov
This work addresses the problem of billion-scale nearest neighbor search.
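As a rough illustration of the setting, a minimal FAISS example of an inverted-file index with product quantization (IVF+PQ), the general family of billion-scale indexes this line of work operates in; the paper's proposed index structure differs from this off-the-shelf configuration, and all parameters are illustrative.

```python
import faiss
import numpy as np

d, nb, nq = 128, 100_000, 1_000
xb = np.random.rand(nb, d).astype("float32")   # database vectors
xq = np.random.rand(nq, d).astype("float32")   # query vectors

nlist, m = 1024, 16                            # number of coarse cells, PQ sub-quantizers
quantizer = faiss.IndexFlatL2(d)               # coarse quantizer over cell centroids
index = faiss.IndexIVFPQ(quantizer, d, nlist, m, 8)  # 8 bits per sub-quantizer

index.train(xb)                                # learn coarse centroids and PQ codebooks
index.add(xb)
index.nprobe = 32                              # number of cells visited per query
k = 10
D, I = index.search(xq, k)                     # distances and neighbor ids
```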