no code implementations • ACL 2022 • Suma Reddy Duggenpudi, Subba Reddy Oota, Mounika Marreddy, Radhika Mamidi
Our contributions in this paper include: (i) two annotated NER datasets for the Telugu language in multiple domains, the Newswire Dataset (ND) and the Medical Dataset (MD), which we combine to form the Combined Dataset (CD); (ii) a comparison of fine-tuned Telugu pretrained transformer models (BERT-Te, RoBERTa-Te, and ELECTRA-Te) with baseline models (CRF, LSTM-CRF, and BiLSTM-CRF); and (iii) a further investigation of the performance of the Telugu pretrained transformer models against the multilingual models mBERT, XLM-R, and IndicBERT.
no code implementations • WS 2019 • Suma Reddy Duggenpudi, Kusampudi Siva Subrahamanyam Varma, Radhika Mamidi
In this paper, we build a dialogue system for the hospital domain in Telugu, a resource-poor Dravidian language.
no code implementations • WS 2019 • Rama Rohit Reddy Gangula, Suma Reddy Duggenpudi, Radhika Mamidi
Language is a powerful tool that can be used to state facts as well as to express our views and perceptions.