Search Results for author: Ekaterina Artemova

Found 31 papers, 14 papers with code

Single Example Can Improve Zero-Shot Data Generation

no code implementations INLG (ACL) 2021 Pavel Burnyshev, Valentin Malykh, Andrey Bout, Ekaterina Artemova, Irina Piontkovskaya

We explore two approaches to the generation of task-oriented utterances: in the zero-shot approach, the model is trained to generate utterances from seen intents and is further used to generate utterances for intents unseen during training.

Intent Classification Text Generation
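As an illustrative aside, a minimal sketch of the zero-shot setup this abstract describes: a seq2seq model trained to map intent names to utterances is prompted with an intent unseen during training. The checkpoint name and prompt format below are placeholders, not the authors' released model.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder checkpoint; assume it was fine-tuned on (intent -> utterance) pairs.
model_name = "t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def generate_utterances(intent: str, n: int = 3) -> list[str]:
    """Sample n candidate utterances for a (possibly unseen) intent."""
    inputs = tokenizer(f"generate utterance for intent: {intent}", return_tensors="pt")
    outputs = model.generate(
        **inputs, do_sample=True, top_p=0.95, num_return_sequences=n, max_new_tokens=32
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

print(generate_utterances("book_flight"))  # an intent never seen in fine-tuning
```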

Russian SuperGLUE 1.1: Revising the Lessons not Learned by Russian NLP models

no code implementations15 Feb 2022 Alena Fenogenova, Maria Tikhonova, Vladislav Mikhailov, Tatiana Shavrina, Anton Emelyanov, Denis Shevelev, Alexandr Kukushkin, Valentin Malykh, Ekaterina Artemova

In the last year, new neural architectures and multilingual pre-trained models have been released for Russian, which led to performance evaluation problems across a range of language understanding tasks.

Common Sense Reasoning Reading Comprehension

Call Larisa Ivanovna: Code-Switching Fools Multilingual NLU Models

1 code implementation29 Sep 2021 Alexey Birshert, Ekaterina Artemova

This is in line with the common understanding of how multilingual models transfer knowledge between languages.

Cross-Lingual Transfer Natural Language Understanding +1
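A toy sketch of the code-switching perturbation this paper studies: randomly substituting words with translations from another language and checking whether the NLU model's prediction changes. The three-entry dictionary is invented for illustration; a real attack would draw substitutions from a bilingual lexicon or an MT system.

```python
import random

# Toy English-to-Russian lexicon, for illustration only.
RU_LEXICON = {"call": "позвони", "please": "пожалуйста", "tomorrow": "завтра"}

def code_switch(utterance: str, p: float = 0.5, seed: int = 0) -> str:
    """Randomly replace known words with their Russian translations."""
    rng = random.Random(seed)
    tokens = [
        RU_LEXICON.get(tok.lower(), tok) if rng.random() < p else tok
        for tok in utterance.split()
    ]
    return " ".join(tokens)

# Feed both versions to the NLU model and compare intent predictions
# to measure robustness to code-switching.
print(code_switch("please call Larisa Ivanovna tomorrow"))
```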

Shaking Syntactic Trees on the Sesame Street: Multilingual Probing with Controllable Perturbations

1 code implementation EMNLP (MRL) 2021 Ekaterina Taktasheva, Vladislav Mikhailov, Ekaterina Artemova

Recent research has adopted a new experimental field centered around the concept of text perturbations which has revealed that shuffled word order has little to no impact on the downstream performance of Transformer-based language models across many NLP tasks.
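The word-order perturbation at the core of this line of work is simple to reproduce; a minimal sketch (the probing experiment then compares model behaviour on original versus shuffled inputs):

```python
import random

def shuffle_words(sentence: str, seed: int = 0) -> str:
    """Destroy word order while keeping the bag of words intact."""
    tokens = sentence.split()
    random.Random(seed).shuffle(tokens)
    return " ".join(tokens)

original = "the cat sat on the mat"
perturbed = shuffle_words(original)
# Run the same classifier or probe on both inputs and measure
# how much accuracy changes when word order is removed.
print(original, "->", perturbed)
```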

Artificial Text Detection via Examining the Topology of Attention Maps

1 code implementation EMNLP 2021 Laida Kushnareva, Daniil Cherniavskii, Vladislav Mikhailov, Ekaterina Artemova, Serguei Barannikov, Alexander Bernstein, Irina Piontkovskaya, Dmitri Piontkovski, Evgeny Burnaev

The impressive capabilities of recent generative models to create texts that are challenging to distinguish from human-written ones can be misused for generating fake news, product reviews, and even abusive content.

Topological Data Analysis
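A sketch of the general recipe behind this approach: extract attention maps from a transformer, turn each head's map into a distance matrix, and summarize its persistent homology (here with the ripser package). The paper's exact topological features differ; this only illustrates the pipeline.

```python
import numpy as np
import torch
from ripser import ripser
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

text = "This text may or may not be machine-generated."
with torch.no_grad():
    out = model(**tokenizer(text, return_tensors="pt"))

features = []
for layer_attn in out.attentions:          # one (1, heads, seq, seq) tensor per layer
    for head in layer_attn[0]:             # (seq, seq) attention map
        a = head.numpy()
        dist = 1.0 - np.maximum(a, a.T)    # symmetrize into a pseudo-distance matrix
        np.fill_diagonal(dist, 0.0)
        dgm0 = ripser(dist, distance_matrix=True, maxdim=0)["dgms"][0]
        finite = np.isfinite(dgm0[:, 1])
        # One crude persistence statistic per head: total bar length.
        features.append((dgm0[finite, 1] - dgm0[finite, 0]).sum())

print(len(features), "topological features")  # fed to a downstream classifier
```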

A Single Example Can Improve Zero-Shot Data Generation

no code implementations16 Aug 2021 Pavel Burnyshev, Valentin Malykh, Andrey Bout, Ekaterina Artemova, Irina Piontkovskaya

In the zero-shot approach, the model is trained to generate utterances from seen intents and is further used to generate utterances for intents unseen during training.

Intent Classification Text Generation

A Differentiable Language Model Adversarial Attack on Text Classifiers

no code implementations23 Jul 2021 Ivan Fursov, Alexey Zaytsev, Pavel Burnyshev, Ekaterina Dmitrieva, Nikita Klyuchnikov, Andrey Kravchenko, Ekaterina Artemova, Evgeny Burnaev

Moreover, because a fine-tuned language model is used, the generated adversarial examples are hard to detect; thus, current models are not robust.

Adversarial Attack Language Modelling

MOROCCO: Model Resource Comparison Framework

2 code implementations29 Apr 2021 Valentin Malykh, Alexander Kukushkin, Ekaterina Artemova, Vladislav Mikhailov, Maria Tikhonova, Tatiana Shavrina

The new generation of pre-trained NLP models pushes the SOTA to new limits, but at the cost of computational resources, to the point that using them in real production environments is often prohibitively expensive.
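This is not the MOROCCO API, just a sketch of the two quantities such a resource-comparison framework measures for each model: throughput (samples per second) and peak GPU memory of a forward pass.

```python
import time
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

batch = tokenizer(["a test sentence"] * 32, return_tensors="pt", padding=True).to(device)
if device == "cuda":
    torch.cuda.reset_peak_memory_stats()

start = time.perf_counter()
with torch.no_grad():
    for _ in range(10):
        model(**batch)
elapsed = time.perf_counter() - start

print(f"throughput: {32 * 10 / elapsed:.1f} samples/s")
if device == "cuda":
    print(f"peak memory: {torch.cuda.max_memory_allocated() / 2**20:.0f} MiB")
```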

Morph Call: Probing Morphosyntactic Content of Multilingual Transformers

1 code implementation NAACL (SIGTYP) 2021 Vladislav Mikhailov, Oleg Serikov, Ekaterina Artemova

The outstanding performance of transformer-based language models on a great variety of NLP and NLU tasks has stimulated interest in exploring their inner workings.

Common Sense Reasoning POS
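A sketch of the standard morphosyntactic probing setup this paper builds on: freeze the transformer, extract token representations, and fit a linear classifier to predict POS tags. The two labeled sentences are a toy stand-in for a real treebank such as Universal Dependencies.

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased").eval()

sentences = [("cats sleep", ["NOUN", "VERB"]), ("dogs bark", ["NOUN", "VERB"])]
X, y = [], []
for text, tags in sentences:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]   # (seq, dim)
    word_ids = enc.word_ids()                        # subword position -> word index
    for w, tag in enumerate(tags):
        X.append(hidden[word_ids.index(w)].numpy())  # first subword of each word
        y.append(tag)

probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.score(X, y))  # probe accuracy ~ how linearly decodable POS is
```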

RuSentEval: Linguistic Source, Encoder Force!

2 code implementations EACL (BSNLP) 2021 Vladislav Mikhailov, Ekaterina Taktasheva, Elina Sigdel, Ekaterina Artemova

The success of pre-trained transformer language models has sparked a great deal of interest in how these models work and what they learn about language.


RuREBus: a Case Study of Joint Named Entity Recognition and Relation Extraction from e-Government Domain

no code implementations29 Oct 2020 Vitaly Ivanin, Ekaterina Artemova, Tatiana Batura, Vladimir Ivanov, Veronika Sarkisyan, Elena Tutubalina, Ivan Smurov

We showcase an application of information extraction methods, such as named entity recognition (NER) and relation extraction (RE), to a novel corpus consisting of documents issued by a state agency.

Named Entity Recognition NER +1
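A sketch of the two-stage information extraction pipeline such a case study evaluates: a NER tagger followed by a relation classifier over entity pairs. The relation step here is a placeholder stub; a real system trains a classifier on annotated entity-pair examples.

```python
from itertools import combinations
from transformers import pipeline

# Off-the-shelf English NER model, standing in for a domain-specific tagger.
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

def extract(text: str):
    entities = ner(text)
    relations = []
    for e1, e2 in combinations(entities, 2):
        # Placeholder heuristic; replace with a trained relation classifier.
        relations.append((e1["word"], "related_to", e2["word"]))
    return entities, relations

ents, rels = extract("The Ministry of Finance signed an agreement with Gazprom.")
print(ents, rels, sep="\n")
```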

ELMo and BERT in semantic change detection for Russian

no code implementations7 Oct 2020 Julia Rodina, Yuliya Trofimova, Andrey Kutuzov, Ekaterina Artemova

We study the effectiveness of contextualized embeddings for the task of diachronic semantic change detection for Russian language data.

Change Detection
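A sketch of a common contextualized-embedding recipe for diachronic change detection: average a target word's BERT vectors across its occurrences in two time periods and compare the averages with cosine distance. The one-sentence "corpora" are toy stand-ins.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def word_vector(word: str, corpus: list[str]) -> torch.Tensor:
    """Average contextual embeddings of `word` over its corpus occurrences."""
    target = tokenizer.convert_tokens_to_ids(word)
    vecs = []
    for sent in corpus:
        enc = tokenizer(sent, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]
        ids = enc["input_ids"][0].tolist()
        vecs += [hidden[i] for i, t in enumerate(ids) if t == target]
    return torch.stack(vecs).mean(dim=0)

old = word_vector("cell", ["the monk slept in his cell"])
new = word_vector("cell", ["she answered her cell on the train"])
change = 1 - torch.cosine_similarity(old, new, dim=0).item()
print(f"semantic change score: {change:.3f}")  # higher = more change
```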

NAS-Bench-NLP: Neural Architecture Search Benchmark for Natural Language Processing

1 code implementation12 Jun 2020 Nikita Klyuchnikov, Ilya Trofimov, Ekaterina Artemova, Mikhail Salnikov, Maxim Fedorov, Evgeny Burnaev

In this work, we step outside the computer vision domain by leveraging the language modeling task, which is the core of natural language processing (NLP).

Language Modelling Neural Architecture Search

Data-driven models and computational tools for neurolinguistics: a language technology perspective

1 code implementation23 Mar 2020 Ekaterina Artemova, Amir Bakarov, Aleksey Artemov, Evgeny Burnaev, Maxim Sharaev

In this paper, our focus is the connection and influence of language technologies on the research in neurolinguistics.

Word Embeddings

A Joint Approach to Compound Splitting and Idiomatic Compound Detection

no code implementations LREC 2020 Irina Krotova, Sergey Aksenov, Ekaterina Artemova

Applications such as machine translation, speech recognition, and information retrieval require efficient handling of noun compounds as they are one of the possible sources for out-of-vocabulary (OOV) words.

Information Retrieval Machine Translation +2
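For flavour, a simple dictionary-based compound splitter (not the paper's joint model, which additionally detects idiomatic compounds that should not be split): recursively cover the word with known vocabulary parts, preferring fewer parts.

```python
VOCAB = {"auto", "bahn", "haus", "kranken", "wagen"}  # toy German lexicon

def split_compound(word: str):
    """Return a list of vocabulary parts covering `word`, or None."""
    if word in VOCAB:
        return [word]
    best = None
    for i in range(1, len(word)):
        head, tail = word[:i], word[i:]
        if head in VOCAB:
            rest = split_compound(tail)
            if rest and (best is None or 1 + len(rest) < len(best)):
                best = [head] + rest
    return best

print(split_compound("krankenwagen"))  # ['kranken', 'wagen']
print(split_compound("autobahn"))      # ['auto', 'bahn']
```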

Word Sense Disambiguation for 158 Languages using Word Embeddings Only

no code implementations LREC 2020 Varvara Logacheva, Denis Teslenko, Artem Shelmanov, Steffen Remus, Dmitry Ustalov, Andrey Kutuzov, Ekaterina Artemova, Chris Biemann, Simone Paolo Ponzetto, Alexander Panchenko

We use this method to induce a collection of sense inventories for 158 languages on the basis of the original pre-trained fastText word embeddings by Grave et al. (2018), enabling WSD in these languages.

Word Embeddings Word Sense Disambiguation
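A sketch of the embeddings-only idea: induce senses by clustering a word's nearest neighbours in a static (fastText-style) embedding space, then disambiguate a context by matching it to the closest sense centroid. The vector file path is a placeholder for locally downloaded fastText vectors.

```python
import numpy as np
from gensim.models import KeyedVectors
from sklearn.cluster import KMeans

# Placeholder path to pre-trained fastText vectors in word2vec text format.
kv = KeyedVectors.load_word2vec_format("cc.en.300.vec", limit=100_000)

def induce_senses(word: str, topn: int = 50, k: int = 2) -> np.ndarray:
    """Cluster the word's neighbourhood; each centroid is one induced sense."""
    neighbours = [w for w, _ in kv.most_similar(word, topn=topn)]
    vectors = np.array([kv[w] for w in neighbours])
    return KMeans(n_clusters=k, n_init=10).fit(vectors).cluster_centers_

def disambiguate(word: str, context: list[str], senses: np.ndarray) -> int:
    """Pick the sense whose centroid is closest to the averaged context."""
    ctx = np.mean([kv[w] for w in context if w in kv], axis=0)
    sims = senses @ ctx / (np.linalg.norm(senses, axis=1) * np.linalg.norm(ctx))
    return int(np.argmax(sims))

senses = induce_senses("bank")
print(disambiguate("bank", ["river", "water", "shore"], senses))
```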

Char-RNN and Active Learning for Hashtag Segmentation

no code implementations8 Nov 2019 Taisiya Glushkova, Ekaterina Artemova

We explore the abilities of a character-level recurrent neural network (char-RNN) for hashtag segmentation.

Active Learning
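The char-RNN formulation as a minimal sketch: a character-level BiLSTM tags each character with "insert a space after me" (1) or not (0). Shown untrained; a real model fits this on labeled hashtag segmentations.

```python
import torch
import torch.nn as nn

class HashtagSegmenter(nn.Module):
    def __init__(self, n_chars: int = 128, emb: int = 32, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(n_chars, emb)
        self.rnn = nn.LSTM(emb, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, 2)  # per-character split / no-split

    def forward(self, char_ids: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(self.embed(char_ids))
        return self.out(h)

text = "nowthatcherisdead"
ids = torch.tensor([[min(ord(c), 127) for c in text]])
logits = HashtagSegmenter()(ids)        # (1, len(text), 2)
splits = logits.argmax(-1)[0].tolist()  # 1 = put a space after this char
print(splits)
```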

A Dataset for Noun Compositionality Detection for a Slavic Language

1 code implementation WS 2019 Dmitry Puzyrev, Artem Shelmanov, Alexander Panchenko, Ekaterina Artemova

This paper presents the first gold-standard resource for Russian annotated with compositionality information of noun compounds.
