Search Results for author: Ratish Puduppully

Found 20 papers, 14 papers with code

IndicTrans2: Towards High-Quality and Accessible Machine Translation Models for all 22 Scheduled Indian Languages

2 code implementations • 25 May 2023 • Jay Gala, Pranjal A. Chitale, Raghavan AK, Varun Gumma, Sumanth Doddapaneni, Aswanth Kumar, Janki Nawale, Anupama Sujatha, Ratish Puduppully, Vivek Raghavan, Pratyush Kumar, Mitesh M. Khapra, Raj Dabre, Anoop Kunchukuttan

Prior to this work, there was (i) no parallel training data spanning all 22 languages, (ii) no robust benchmark covering all of these languages and containing content relevant to India, and (iii) no existing translation model supporting all 22 scheduled languages of India.

Machine Translation, Sentence

Data-to-Text Generation with Content Selection and Planning

2 code implementations • 3 Sep 2018 • Ratish Puduppully, Li Dong, Mirella Lapata

Recent advances in data-to-text generation have led to the use of large-scale datasets and neural network models which are trained end-to-end, without explicitly modeling what to say and in what order.

Data-to-Text Generation, Descriptive
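
As an illustration of the paper's premise that generation benefits from explicitly deciding what to say and in what order, here is a minimal sketch of a select-then-plan-then-realise pipeline. The record schema, the threshold rule, the sort-based plan, and the template realiser are illustrative assumptions standing in for learned components, not the paper's architecture.

```python
# Minimal sketch of explicit content selection and planning for data-to-text
# generation. The record schema, scoring rule, and plan ordering below are
# illustrative assumptions, not the architecture from the paper.
from dataclasses import dataclass
from typing import List


@dataclass
class Record:
    entity: str      # e.g. a team or player name
    attribute: str   # e.g. "points", "assists"
    value: int


def select_content(records: List[Record], threshold: int = 8) -> List[Record]:
    """Content selection: keep only records deemed worth mentioning.

    A toy rule (value above a threshold) stands in for a learned selector."""
    return [r for r in records if r.value >= threshold]


def plan_content(selected: List[Record]) -> List[Record]:
    """Content planning: decide in what order to mention the selected records.

    A toy ordering (group by entity, then descending value) stands in for a
    learned pointer-network style planner."""
    return sorted(selected, key=lambda r: (r.entity, -r.value))


def realise(plan: List[Record]) -> str:
    """Surface realisation: a trivial template stands in for a neural decoder."""
    return " ".join(f"{r.entity} recorded {r.value} {r.attribute}." for r in plan)


if __name__ == "__main__":
    box_score = [
        Record("LeBron James", "points", 32),
        Record("LeBron James", "assists", 9),
        Record("Kevin Love", "points", 6),
    ]
    plan = plan_content(select_content(box_score))
    print(realise(plan))
    # -> LeBron James recorded 32 points. LeBron James recorded 9 assists.
```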

Data-to-text Generation with Entity Modeling

2 code implementations • ACL 2019 • Ratish Puduppully, Li Dong, Mirella Lapata

Recent approaches to data-to-text generation have shown great promise thanks to the use of large-scale datasets and the application of neural network architectures which are trained end-to-end.

Data-to-Text Generation, Representation Learning
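
To make the idea of entity modeling concrete, below is a minimal PyTorch sketch of a dynamic entity memory whose vectors are refreshed whenever an entity is mentioned. The dimensions, the GRU-cell update, and the omitted write-back are illustrative assumptions, not the exact model from the paper.

```python
# Minimal sketch of dynamic entity representations: each entity keeps a vector
# that is updated (here with a GRU cell) whenever the decoder mentions it.
import torch
import torch.nn as nn


class EntityMemory(nn.Module):
    def __init__(self, num_entities: int, dim: int = 64):
        super().__init__()
        self.states = nn.Parameter(torch.randn(num_entities, dim) * 0.1)
        self.update_cell = nn.GRUCell(dim, dim)

    def forward(self, decoder_state: torch.Tensor, entity_id: int) -> torch.Tensor:
        """Refresh the mentioned entity's vector from the current decoder state."""
        old = self.states[entity_id].unsqueeze(0)            # (1, dim)
        new = self.update_cell(decoder_state.unsqueeze(0), old)
        # A full model would write the new state back differentiably and use it
        # to condition subsequent decoding steps; here we just return it.
        return new.squeeze(0)


if __name__ == "__main__":
    memory = EntityMemory(num_entities=3)
    h_t = torch.randn(64)                 # placeholder decoder hidden state
    refreshed = memory(h_t, entity_id=1)  # entity 1 was just mentioned
    print(refreshed.shape)                # torch.Size([64])
```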

Data-to-text Generation with Macro Planning

1 code implementation • 4 Feb 2021 • Ratish Puduppully, Mirella Lapata

Recent approaches to data-to-text generation have adopted the very successful encoder-decoder architecture or variants thereof.

Data-to-Text Generation

Data-to-text Generation with Variational Sequential Planning

1 code implementation • 28 Feb 2022 • Ratish Puduppully, Yao Fu, Mirella Lapata

We consider the task of data-to-text generation, which aims to create textual output from non-linguistic input.

Data-to-Text Generation

A Comprehensive Analysis of Adapter Efficiency

2 code implementations • 12 May 2023 • Nandini Mundra, Sumanth Doddapaneni, Raj Dabre, Anoop Kunchukuttan, Ratish Puduppully, Mitesh M. Khapra

However, adapters have not been sufficiently analyzed to understand whether PEFT translates into benefits in training/deployment efficiency and maintainability/extensibility.

Natural Language Understanding
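
For context, a bottleneck adapter of the kind analyzed here is a small residual module inserted into a frozen pretrained network. The sketch below assumes a hidden size of 768 and a bottleneck of 64; its placement inside a transformer layer (typically after the feed-forward sub-layer) is left out for brevity.

```python
# Minimal sketch of a bottleneck adapter, the parameter-efficient fine-tuning
# (PEFT) module whose efficiency the paper analyzes. Sizes are illustrative.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)  # down-projection
        self.up = nn.Linear(bottleneck, hidden_size)    # up-projection
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen backbone's representation intact.
        return x + self.up(self.act(self.down(x)))


if __name__ == "__main__":
    adapter = Adapter()
    hidden_states = torch.randn(2, 16, 768)  # (batch, seq_len, hidden)
    out = adapter(hidden_states)
    trainable = sum(p.numel() for p in adapter.parameters())
    print(out.shape, f"{trainable} trainable adapter parameters")
```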

Transition-Based Deep Input Linearization

1 code implementation • EACL 2017 • Ratish Puduppully, Yue Zhang, Manish Shrivastava

Traditional methods for deep NLG adopt pipeline approaches comprising stages such as constructing syntactic input, predicting function words, linearizing the syntactic input and generating the surface forms.

Data-to-Text Generation, Machine Translation
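
A minimal sketch of the transition-based view of linearization follows: the unordered input is emptied by repeated shift actions, each appending the highest-scoring remaining token to the output. The bigram score table and greedy search are toy assumptions standing in for the learned transition classifier and beam search.

```python
# Toy transition-based linearization: turn an unordered bag of tokens into a
# word sequence by greedy "shift" actions scored with a bigram table.
from typing import Dict, List, Tuple

# Toy bigram scores: score of placing `next` right after `prev` ("<s>" = start).
BIGRAM_SCORES: Dict[Tuple[str, str], float] = {
    ("<s>", "the"): 2.0, ("the", "dog"): 2.0, ("dog", "barked"): 2.0,
    ("barked", "loudly"): 1.5,
}


def score(prev: str, nxt: str) -> float:
    return BIGRAM_SCORES.get((prev, nxt), 0.0)


def linearize(bag: List[str]) -> List[str]:
    """Greedy transition-based linearization of an unordered token bag."""
    output: List[str] = []
    remaining = list(bag)
    prev = "<s>"
    while remaining:
        best = max(remaining, key=lambda tok: score(prev, tok))  # shift action
        output.append(best)
        remaining.remove(best)
        prev = best
    return output


if __name__ == "__main__":
    print(linearize(["barked", "loudly", "dog", "the"]))
    # -> ['the', 'dog', 'barked', 'loudly']
```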

Multi-Document Summarization with Centroid-Based Pretraining

1 code implementation • 1 Aug 2022 • Ratish Puduppully, Parag Jain, Nancy F. Chen, Mark Steedman

In Multi-Document Summarization (MDS), the input can be modeled as a set of documents, and the output is a summary of that set.

Document Summarization, Multi-Document Summarization
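
The centroid intuition can be sketched in a few lines: pick the document in a cluster that is most similar to all the others, which can then serve as a pseudo-summary target during pretraining. TF-IDF cosine similarity below is an illustrative choice of similarity measure, not necessarily the one used in the paper.

```python
# Minimal sketch of centroid selection for a document cluster: return the
# document with the highest average similarity to the other documents.
from typing import List

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def centroid_document(documents: List[str]) -> str:
    """Return the document most similar, on average, to the rest of the cluster."""
    tfidf = TfidfVectorizer().fit_transform(documents)
    sims = cosine_similarity(tfidf)                              # (n_docs, n_docs)
    avg_sim = (sims.sum(axis=1) - 1.0) / (len(documents) - 1)    # drop self-similarity
    return documents[avg_sim.argmax()]


if __name__ == "__main__":
    cluster = [
        "The storm hit the coast on Monday, causing flooding.",
        "Flooding followed the storm that struck the coast on Monday.",
        "A new species of frog was discovered in the rainforest.",
    ]
    print(centroid_document(cluster))  # one of the two storm documents
```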

Decomposed Prompting for Machine Translation Between Related Languages using Large Language Models

1 code implementation • 22 May 2023 • Ratish Puduppully, Anoop Kunchukuttan, Raj Dabre, Ai Ti Aw, Nancy F. Chen

This study investigates machine translation between related languages, i.e., languages within the same family that share linguistic characteristics such as word order and lexical similarity.

Machine Translation, Translation

CTQScorer: Combining Multiple Features for In-context Example Selection for Machine Translation

1 code implementation • 23 May 2023 • Aswanth Kumar, Ratish Puduppully, Raj Dabre, Anoop Kunchukuttan

We learn a regression model, CTQ Scorer (Contextual Translation Quality), that selects examples based on multiple features in order to maximize the translation quality.

In-Context Learning, Machine Translation
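
A minimal sketch of feature-based example selection is shown below: a regression model predicts a quality score for each candidate in-context example, and candidates are ranked by that prediction. The two toy features, the synthetic training targets, and the Ridge regressor are assumptions for illustration, not the actual feature set or model behind CTQ Scorer.

```python
# Minimal sketch of regression-based in-context example selection for MT.
from sklearn.linear_model import Ridge


def features(source: str, candidate_src: str) -> list:
    """Toy features: word-overlap ratio and length ratio between the test
    source sentence and a candidate example's source side."""
    src, cand = set(source.lower().split()), set(candidate_src.lower().split())
    overlap = len(src & cand) / max(len(src), 1)
    length_ratio = min(len(source), len(candidate_src)) / max(len(source), len(candidate_src), 1)
    return [overlap, length_ratio]


# Training data: (test source, candidate example source) pairs with an observed
# translation-quality score; the numbers here are synthetic placeholders.
train_pairs = [("the cat sat", "the cat slept"), ("the cat sat", "markets fell today")]
quality = [0.9, 0.2]
scorer = Ridge().fit([features(s, c) for s, c in train_pairs], quality)

# Selection: rank candidate in-context examples for a new test sentence.
test_source = "the dog sat"
candidates = ["the dog ran", "inflation rose sharply", "a dog sat down"]
ranked = sorted(candidates,
                key=lambda c: scorer.predict([features(test_source, c)])[0],
                reverse=True)
print(ranked[0])  # candidate with the highest predicted quality score
```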

GEMv2: Multilingual NLG Benchmarking in a Single Line of Code

no code implementations • 22 Jun 2022 • Sebastian Gehrmann, Abhik Bhattacharjee, Abinaya Mahendiran, Alex Wang, Alexandros Papangelis, Aman Madaan, Angelina McMillan-Major, Anna Shvets, Ashish Upadhyay, Bingsheng Yao, Bryan Wilie, Chandra Bhagavatula, Chaobin You, Craig Thomson, Cristina Garbacea, Dakuo Wang, Daniel Deutsch, Deyi Xiong, Di Jin, Dimitra Gkatzia, Dragomir Radev, Elizabeth Clark, Esin Durmus, Faisal Ladhak, Filip Ginter, Genta Indra Winata, Hendrik Strobelt, Hiroaki Hayashi, Jekaterina Novikova, Jenna Kanerva, Jenny Chim, Jiawei Zhou, Jordan Clive, Joshua Maynez, João Sedoc, Juraj Juraska, Kaustubh Dhole, Khyathi Raghavi Chandu, Laura Perez-Beltrachini, Leonardo F. R. Ribeiro, Lewis Tunstall, Li Zhang, Mahima Pushkarna, Mathias Creutz, Michael White, Mihir Sanjay Kale, Moussa Kamal Eddine, Nico Daheim, Nishant Subramani, Ondrej Dusek, Paul Pu Liang, Pawan Sasanka Ammanamanchi, Qi Zhu, Ratish Puduppully, Reno Kriz, Rifat Shahriyar, Ronald Cardenas, Saad Mahamood, Salomey Osei, Samuel Cahyawijaya, Sanja Štajner, Sebastien Montella, Shailza, Shailza Jolly, Simon Mille, Tahmid Hasan, Tianhao Shen, Tosin Adewumi, Vikas Raunak, Vipul Raheja, Vitaly Nikolaev, Vivian Tsai, Yacine Jernite, Ying Xu, Yisi Sang, Yixin Liu, Yufang Hou

This problem is especially pertinent in natural language generation which requires ever-improving suites of datasets, metrics, and human evaluation to make definitive claims.

Benchmarking, Text Generation

VerityMath: Advancing Mathematical Reasoning by Self-Verification Through Unit Consistency

no code implementations • 13 Nov 2023 • Vernon Toh, Ratish Puduppully, Nancy F. Chen

Large Language Models (LLMs) combined with program-based solving techniques are increasingly demonstrating proficiency in mathematical reasoning.

Math, Mathematical Reasoning
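
To illustrate the program-based solving mentioned in the abstract, the sketch below uses a hypothetical `generate_program` placeholder to stand in for an LLM call that writes a short Python program, which is then executed to read off the answer. The unit-consistency self-verification the paper adds on top is not shown.

```python
# Minimal sketch of program-based math solving: an LLM emits a small Python
# program and the answer is obtained by executing it.
def generate_program(question: str) -> str:
    # Hypothetical stand-in for an LLM call that returns Python source code.
    return (
        "apples_per_bag = 6\n"
        "bags = 4\n"
        "answer = apples_per_bag * bags\n"
    )


def solve(question: str) -> int:
    program = generate_program(question)
    scope: dict = {}
    exec(program, scope)       # run the generated program
    return scope["answer"]     # convention: the program stores its result in `answer`


if __name__ == "__main__":
    print(solve("If each bag holds 6 apples and there are 4 bags, how many apples are there?"))
    # -> 24
```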
