no code implementations • 10 Oct 2023 • Mikhail Salnikov, Maria Lysyuk, Pavel Braslavski, Anton Razzhigaev, Valentin Malykh, Alexander Panchenko
Pre-trained Text-to-Text Language Models (LMs), such as T5 or BART, yield promising results in the Knowledge Graph Question Answering (KGQA) task.
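As a rough illustration of this setup, here is a minimal sketch of querying a pre-trained text-to-text LM via the HuggingFace transformers library; the t5-small checkpoint, the "question:" prefix, and the example question are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch: answer a KG-style question with an off-the-shelf
# text-to-text LM. Model choice and prompt format are assumptions.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

question = "question: Who wrote War and Peace?"  # hypothetical input format
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```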
no code implementations • 3 Oct 2023 • Mikhail Salnikov, Hai Le, Prateek Rajput, Irina Nikishina, Pavel Braslavski, Valentin Malykh, Alexander Panchenko
Recently, it has been shown that incorporating structured knowledge into Large Language Models significantly improves results on a variety of NLP tasks.
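One common way to do this, sketched hypothetically below, is to linearize knowledge-graph triples into text and prepend them to the model input; the triples, separators, and prompt format here are assumptions for illustration, not the paper's method.

```python
# Hypothetical sketch: linearize KG triples into a textual context that is
# prepended to the question. Triple format and separators are assumptions.
triples = [
    ("War and Peace", "author", "Leo Tolstoy"),
    ("Leo Tolstoy", "born in", "Yasnaya Polyana"),
]
context = " ; ".join(f"{s} | {p} | {o}" for s, p, o in triples)
prompt = f"context: {context} question: Where was the author of War and Peace born?"
print(prompt)
```

The resulting prompt can then be fed to a pre-trained seq2seq model exactly as in the previous snippet.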
1 code implementation • 27 Jul 2020 • Anastasiia Kornilova, Mikhail Salnikov, Olga Novitskaya, Maria Begicheva, Egor Sevriugov, Kirill Shcherbakov, Valeriya Pronina, Dmitry V. Dylov
Mobile microscopy is a promising technology for assisting and accelerating disease diagnostics, but its widespread adoption is hindered by the mediocre quality of acquired images.
1 code implementation • 15 Jun 2020 • Ilya Trofimov, Nikita Klyuchnikov, Mikhail Salnikov, Alexander Filippov, Evgeny Burnaev
The method relies on a new approach to low-fidelity evaluation of neural architectures: training each candidate for only a few epochs using knowledge distillation.
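A generic sketch of the knowledge-distillation objective such a low-fidelity proxy can use (the general technique, not the paper's exact recipe): a candidate "student" architecture is trained briefly to match a fixed teacher's soft targets. The temperature, loss weighting, and random tensors are illustrative assumptions.

```python
# Generic knowledge-distillation loss: soft-target KL term plus
# hard-label cross-entropy. T and alpha are assumed hyperparameters.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft-target term: KL divergence between temperature-scaled distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

student_logits = torch.randn(8, 10)    # batch of 8, 10 classes (placeholder)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student_logits, teacher_logits, labels).item())
```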
1 code implementation • 12 Jun 2020 • Nikita Klyuchnikov, Ilya Trofimov, Ekaterina Artemova, Mikhail Salnikov, Maxim Fedorov, Evgeny Burnaev
In this work, we step outside the computer vision domain by leveraging the language modeling task, which is the core of natural language processing (NLP).
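To make the evaluation signal concrete, here is a toy sketch of the kind of language-modeling metric such a benchmark can report for a candidate architecture: next-token cross-entropy and its exponential, perplexity. The tiny LSTM, vocabulary size, and random batch are placeholders, not the benchmark's actual search space or corpus.

```python
# Toy sketch: score a candidate recurrent LM by next-token perplexity.
# Architecture sizes and the random token batch are placeholders.
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    def __init__(self, vocab=100, emb=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.rnn = nn.LSTM(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h)

tokens = torch.randint(0, 100, (4, 16))        # random placeholder batch
logits = TinyLM()(tokens[:, :-1])              # predict each next token
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 100), tokens[:, 1:].reshape(-1)
)
print("perplexity:", loss.exp().item())        # standard LM quality metric
```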