Search Results for author: Vadim Kimmelman

Found 11 papers, 2 papers with code

Using Computer Vision to Analyze Non-manual Marking of Questions in KRSL

no code implementations MTSummit 2021 Anna Kuznetsova, Alfarabi Imashev, Medet Mukushev, Anara Sandygulova, Vadim Kimmelman

This paper presents a study that compares non-manual markers of polar and wh-questions to statements in Kazakh-Russian Sign Language (KRSL) in a dataset collected for NLP tasks.

Crowdsourcing Kazakh-Russian Sign Language: FluentSigners-50

no code implementations LREC 2022 Medet Mukushev, Aigerim Kydyrbekova, Alfarabi Imashev, Vadim Kimmelman, Anara Sandygulova

This paper presents the methodology we used to crowdsource a new large-scale signer-independent dataset for Kazakh-Russian Sign Language (KRSL), created for Sign Language Processing.

Phonetics of Negative Headshake in Russian Sign Language: A Small-Scale Corpus Study

1 code implementation SignLang (LREC) 2022 Anastasia Chizhikova, Vadim Kimmelman

We applied OpenFace, a Computer Vision toolkit, to extract head rotation measurements from video recordings, and analyzed the headshake in terms of the number of peaks (turns), the amplitude of the turns, and their frequency.
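As a rough illustration of that kind of analysis (not the authors' code), the sketch below reads OpenFace's standard CSV output, assuming a head-yaw column named pose_Ry (in radians) and a timestamp column in seconds, and summarizes a headshake by its number of turns, mean turn amplitude, and turn frequency.

```python
# Minimal sketch (not the paper's pipeline): summarizing a headshake from an
# OpenFace CSV. Assumes the standard OpenFace output with a head-yaw column
# "pose_Ry" (radians) and a "timestamp" column (seconds).
import numpy as np
import pandas as pd
from scipy.signal import find_peaks

def headshake_stats(csv_path, min_amplitude_rad=0.05):
    df = pd.read_csv(csv_path)
    df.columns = [c.strip() for c in df.columns]  # OpenFace headers contain spaces
    yaw = df["pose_Ry"].to_numpy()
    t = df["timestamp"].to_numpy()

    # Centre the yaw signal so left and right turns become peaks and troughs.
    yaw = yaw - np.median(yaw)

    # A "turn" is a local extremum (in either direction) above the threshold.
    peaks, _ = find_peaks(yaw, height=min_amplitude_rad)
    troughs, _ = find_peaks(-yaw, height=min_amplitude_rad)
    turns = np.sort(np.concatenate([peaks, troughs]))

    duration = t[-1] - t[0] if len(t) > 1 else 0.0
    return {
        "n_turns": int(len(turns)),
        "mean_amplitude_rad": float(np.mean(np.abs(yaw[turns]))) if len(turns) else 0.0,
        "turns_per_second": len(turns) / duration if duration > 0 else 0.0,
    }
```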

Functional Data Analysis of Non-manual Marking of Questions in Kazakh-Russian Sign Language

1 code implementation SignLang (LREC) 2022 Anna Kuznetsova, Alfarabi Imashev, Medet Mukushev, Anara Sandygulova, Vadim Kimmelman

This paper is a continuation of Kuznetsova et al. (2021), which described non-manual markers of polar and wh-questions in comparison with statements in an NLP dataset of Kazakh-Russian Sign Language (KRSL) using Computer Vision.
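For readers unfamiliar with the functional-data view, here is a minimal, hypothetical sketch of the preprocessing step such an analysis rests on: smoothing per-sentence marker trajectories and resampling them onto a common normalized time grid so question and statement curves can be averaged and compared pointwise. The input arrays and parameter values are placeholders, not the paper's data or pipeline.

```python
# Minimal sketch: align variable-length marker trajectories on a common grid.
import numpy as np
from scipy.interpolate import splrep, splev

def to_common_grid(trajectory, n_points=100, smoothing=0.01):
    """Smooth one trajectory with a B-spline and resample it to n_points."""
    t = np.linspace(0.0, 1.0, len(trajectory))   # normalized time
    tck = splrep(t, trajectory, s=smoothing)     # smoothing spline fit
    grid = np.linspace(0.0, 1.0, n_points)
    return splev(grid, tck)

# Hypothetical stand-in data: average curves per sentence type, compare pointwise.
questions = [np.random.rand(80), np.random.rand(120)]
statements = [np.random.rand(95), np.random.rand(110)]
q_mean = np.mean([to_common_grid(tr) for tr in questions], axis=0)
s_mean = np.mean([to_common_grid(tr) for tr in statements], axis=0)
difference_curve = q_mean - s_mean               # question-vs-statement contrast
```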

Towards Large Vocabulary Kazakh-Russian Sign Language Dataset: KRSL-OnlineSchool

no code implementations SignLang (LREC) 2022 Medet Mukushev, Aigerim Kydyrbekova, Vadim Kimmelman, Anara Sandygulova

This corpus contains video recordings of Kazakhstan's online school translated into Kazakh-Russian Sign Language (KRSL) by 7 interpreters.

Sign Language Translation

Testing MediaPipe Holistic for Linguistic Analysis of Nonmanual Markers in Sign Languages

no code implementations 15 Mar 2024 Anna Kuznetsova, Vadim Kimmelman

Advances in Deep Learning have made reliable landmark tracking of human bodies and faces possible, which can be used for a variety of tasks (see the sketch after this entry).

Landmark Tracking
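A minimal sketch of the kind of landmark extraction the paper evaluates, using MediaPipe Holistic's Python API; the video filename is a placeholder and this is not the authors' experimental setup.

```python
import cv2
import mediapipe as mp

mp_holistic = mp.solutions.holistic

cap = cv2.VideoCapture("signing_video.mp4")  # placeholder path
face_tracks = []  # per-frame face landmark coordinates

with mp_holistic.Holistic(static_image_mode=False,
                          refine_face_landmarks=True) as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.face_landmarks:
            # Normalized (x, y, z) coordinates for each face landmark.
            face_tracks.append([(lm.x, lm.y, lm.z)
                                for lm in results.face_landmarks.landmark])
        else:
            face_tracks.append(None)  # face not detected in this frame

cap.release()
print(f"Extracted face landmarks for {len(face_tracks)} frames")
```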

Evaluation of Manual and Non-manual Components for Sign Language Recognition

no code implementations LREC 2020 Medet Mukushev, Arman Sabyrov, Alfarabi Imashev, Kenessary Koishybay, Vadim Kimmelman, Anara Sandygulova

The motivation behind this work lies in the need to differentiate between similar signs that differ only in their non-manual components.

Sign Language Recognition
