no code implementations • EACL (DravidianLangTech) 2021 • Bharathi Raja Chakravarthi, Ruba Priyadharshini, Shubhanker Banerjee, Richard Saldanha, John P. McCrae, Anand Kumar M, Parameswari Krishnamurthy, Melvin Johnson
This paper describes the datasets used, the methodology for evaluating participants, and the overall results of the experiments.
no code implementations • WMT (EMNLP) 2021 • Richard Saldanha, Ananthanarayana V. S, Anand Kumar M, Parameswari Krishnamurthy
The OpenNMT-py toolkit was used to create quick prototypes of the systems; the models were then trained on the training datasets containing the parallel corpus and finally evaluated on the dev datasets provided as part of the task.
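The OpenNMT-py workflow described above (prototype, train on a parallel corpus, evaluate on a dev set) can be sketched as a minimal training configuration. This is an illustrative fragment following the OpenNMT-py 2.x quickstart conventions, not the authors' actual setup; all file paths, step counts, and the corpus name are hypothetical placeholders.

```yaml
# Minimal OpenNMT-py 2.x configuration (hypothetical paths and settings).
# Typical workflow:
#   onmt_build_vocab -config config.yaml -n_sample 10000
#   onmt_train -config config.yaml
#   onmt_translate -model run/model_step_10000.pt -src data/dev.src -output dev.hyp

save_data: run/example            # prefix for generated artifacts
src_vocab: run/example.vocab.src  # source vocabulary file
tgt_vocab: run/example.vocab.tgt  # target vocabulary file

data:
  corpus_1:                       # parallel training corpus
    path_src: data/train.src
    path_tgt: data/train.tgt
  valid:                          # dev set used for validation during training
    path_src: data/dev.src
    path_tgt: data/dev.tgt

save_model: run/model
train_steps: 10000
valid_steps: 1000
```

The trained checkpoint is then scored against the dev set with `onmt_translate`, mirroring the evaluation step mentioned in the abstract.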
no code implementations • PAIL (ICON) 2021 • P Sangeetha, Parameswari Krishnamurthy, Amba Kulkarni
Parsing has been gaining popularity in recent years and has attracted the interest of NLP researchers around the world.
no code implementations • DravidianLangTech (ACL) 2022 • Anbukkarasi Sampath, Thenmozhi Durairaj, Bharathi Raja Chakravarthi, Ruba Priyadharshini, Subalalitha Cn, Kogilavani Shanmugavadivel, Sajeetha Thavareesan, Sathiyaraj Thangasamy, Parameswari Krishnamurthy, Adeep Hande, Sean Benhur, Kishore Ponnusamy, Santhiya Pandiyan
This paper presents the dataset used in the shared task, the task description, the methodologies used by the participants, and the evaluation results of the submissions.
no code implementations • DravidianLangTech (ACL) 2022 • Bharathi Raja Chakravarthi, Ruba Priyadharshini, Subalalitha Cn, Sangeetha S, Malliga Subramanian, Kogilavani Shanmugavadivel, Parameswari Krishnamurthy, Adeep Hande, Siddhanth U Hegde, Roshan Nayak, Swetha Valli
It is one of the first shared tasks to focus on multi-task learning for closely related tasks, especially for a very low-resourced language family such as the Dravidian languages.
no code implementations • 15 Nov 2023 • Vandan Mujadia, Ashok Urlana, Yash Bhaskar, Penumalla Aditya Pavani, Kukkapalli Shravya, Parameswari Krishnamurthy, Dipti Misra Sharma
In this work, we aim to explore the multilingual capabilities of large language models by using machine translation between English and 22 Indian languages as the task.
1 code implementation • 17 Apr 2021 • Nikhil Ghanghor, Parameswari Krishnamurthy, Sajeetha Thavareesan, Ruba Priyadharshini, Bharathi Raja Chakravarthi
This paper describes the IIITK team’s submissions to the offensive language identification, and troll memes classification shared tasks for Dravidian languages at DravidianLangTech 2021 workshop@EACL 2021.