3 code implementations • 22 Feb 2023 • Oleg Platonov, Denis Kuznedelev, Michael Diskin, Artem Babenko, Liudmila Prokhorenkova
Graphs without this property (homophily, i.e., edges tending to connect nodes of the same class) are called heterophilous, and it is typically assumed that specialized methods are required to achieve strong performance on such graphs.
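As a hypothetical illustration (not code from the paper), homophily is often quantified with the edge homophily ratio: the fraction of edges whose endpoints share a class label. The function name and the toy graph below are assumptions chosen for this sketch.

```python
# Minimal sketch of the edge homophily ratio: the fraction of edges
# connecting nodes with the same class label. A low ratio indicates
# a heterophilous graph. (Illustrative only; not the paper's code.)

def edge_homophily(edges, labels):
    """edges: list of (u, v) pairs; labels: dict mapping node -> class."""
    if not edges:
        return 0.0
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)

# Example: only 1 of 3 edges joins same-class nodes -> heterophilous.
edges = [(0, 1), (1, 2), (2, 3)]
labels = {0: "a", 1: "b", 2: "b", 3: "a"}
print(edge_homophily(edges, labels))  # 0.333...
```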
2 code implementations • 27 Jan 2023 • Max Ryabinin, Tim Dettmers, Michael Diskin, Alexander Borzunov
Many deep learning applications benefit from using large models with billions of parameters.
1 code implementation • 7 Jul 2022 • Alexander Borzunov, Max Ryabinin, Tim Dettmers, Quentin Lhoest, Lucile Saulnier, Michael Diskin, Yacine Jernite, Thomas Wolf
The infrastructure necessary for training state-of-the-art models is becoming prohibitively expensive, which makes training such models affordable only to large corporations and institutions.
no code implementations • 7 Oct 2021 • Aleksandr Beznosikov, Peter Richtárik, Michael Diskin, Max Ryabinin, Alexander Gasnikov
Due to these considerations, it is important to equip existing methods with strategies that reduce the volume of information transmitted during training while still obtaining a model of comparable quality.
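As a rough, hypothetical sketch (not the method proposed in the paper), one standard way to reduce transmitted information is top-k gradient sparsification: each worker sends only the k largest-magnitude gradient entries. The function names and the value of k below are illustrative assumptions.

```python
# Hypothetical sketch of top-k gradient sparsification, a common
# communication-compression strategy. Not the paper's algorithm.
import torch

def topk_compress(grad: torch.Tensor, k: int):
    # Keep only the k largest-magnitude entries; transmit (indices, values).
    flat = grad.flatten()
    _, indices = torch.topk(flat.abs(), k)
    return indices, flat[indices]

def topk_decompress(indices, values, shape):
    # Rebuild a dense tensor with zeros in the untransmitted positions.
    flat = torch.zeros(shape).flatten()
    flat[indices] = values
    return flat.reshape(shape)

grad = torch.randn(4, 4)
idx, vals = topk_compress(grad, k=4)   # send 4 of 16 entries
restored = topk_decompress(idx, vals, grad.shape)
```

In practice, such schemes are usually paired with error feedback to compensate for the discarded entries; the sketch omits that for brevity.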
3 code implementations • 21 Jun 2021 • Eduard Gorbunov, Alexander Borzunov, Michael Diskin, Max Ryabinin
Training such models requires substantial computational resources (e.g., HPC clusters) that are not available to small research groups and independent researchers.
2 code implementations • NeurIPS 2021 • Michael Diskin, Alexey Bukhtiyarov, Max Ryabinin, Lucile Saulnier, Quentin Lhoest, Anton Sinitsin, Dmitry Popov, Dmitry Pyrkin, Maxim Kashirin, Alexander Borzunov, Albert Villanova del Moral, Denis Mazur, Ilia Kobelev, Yacine Jernite, Thomas Wolf, Gennady Pekhimenko
Modern deep learning applications require ever-increasing amounts of compute to train state-of-the-art models.