1 code implementation • 2 Apr 2024 • Mandar Sharma, Rutuja Murlidhar Taware, Pravesh Koirala, Nikhil Muralidhar, Naren Ramakrishnan
Off-the-shelf pre-trained language models have become the de facto standard in NLP pipelines for a multitude of downstream tasks.
1 code implementation • 14 May 2023 • Mandar Sharma, Nikhil Muralidhar, Naren Ramakrishnan
The field of Math-NLP has witnessed significant growth in recent years, motivated by the desire to extend LLM capabilities to non-linguistic notions (numerals and, subsequently, arithmetic reasoning).
1 code implementation • 3 Nov 2022 • Mandar Sharma, Nikhil Muralidhar, Naren Ramakrishnan
Through their transfer-learning abilities, highly parameterized large pre-trained language models have dominated the NLP landscape across a multitude of downstream language tasks.
no code implementations • 25 Jul 2022 • Mandar Sharma, Ajay Gogineni, Naren Ramakrishnan
The neural boom that has invigorated natural language processing (NLP) research over the last decade has likewise led to significant innovations in data-to-text generation (DTG).
1 code implementation • 11 Oct 2021 • Mandar Sharma, John S. Brownstein, Naren Ramakrishnan
We present TCube (Time-series-to-text), a domain-agnostic neural framework for time-series narration. TCube represents essential time-series elements as a dense knowledge graph and translates that graph into rich, fluent narratives through the transfer-learning capabilities of pre-trained language models (PLMs).
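The two-stage pipeline the TCube abstract describes can be illustrated roughly as follows. This is a hypothetical sketch, not the authors' implementation: the triple extraction is simplified to basic trend and extrema facts, and a string template stands in for the PLM-based surface realizer.

```python
# Hypothetical sketch of a time-series-to-text pipeline in the spirit of
# TCube: (1) extract salient time-series elements as knowledge-graph
# triples, (2) verbalize the graph. A template stands in for the PLM.

def extract_triples(name, series):
    """Represent basic extrema/trend facts as (subject, relation, object) triples."""
    return [
        (name, "has_max", max(series)),
        (name, "has_min", min(series)),
        (name, "has_trend", "rising" if series[-1] > series[0] else "falling"),
    ]

def verbalize(triples):
    """Template-based surface realization (TCube would use a fine-tuned PLM)."""
    parts = []
    for subj, rel, obj in triples:
        if rel == "has_max":
            parts.append(f"{subj} peaked at {obj}")
        elif rel == "has_min":
            parts.append(f"dipped to a low of {obj}")
        elif rel == "has_trend":
            parts.append(f"and was {obj} overall")
    return ", ".join(parts) + "."

series = [3, 5, 9, 7, 11]
print(verbalize(extract_triples("weekly sales", series)))
# → weekly sales peaked at 11, dipped to a low of 3, and was rising overall.
```

The intermediate triples act as the "dense knowledge graph" interface: the extraction side stays domain-specific while the realization side (here a template, in TCube a PLM) remains reusable across domains.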
no code implementations • 6 Sep 2020 • Arjun Choudhry, Mandar Sharma, Pramod Chundury, Thomas Kapler, Derek W. S. Gray, Naren Ramakrishnan, Niklas Elmqvist
In this paper, we propose the use of textual narratives as a data-driven storytelling method to augment causality visualization.