Search Results for author: Yash Kumar Lal

Found 13 papers, 4 papers with code

IrEne-viz: Visualizing Energy Consumption of Transformer Models

1 code implementation EMNLP (ACL) 2021 Yash Kumar Lal, Reetu Singh, Harsh Trivedi, Qingqing Cao, Aruna Balasubramanian, Niranjan Balasubramanian

IrEne is an energy prediction system that accurately predicts the interpretable inference energy consumption of a wide range of Transformer-based NLP models.

SOCIALITE-LLAMA: An Instruction-Tuned Model for Social Scientific Tasks

no code implementations 3 Feb 2024 Gourab Dey, Adithya V Ganesan, Yash Kumar Lal, Manal Shah, Shreyashee Sinha, Matthew Matero, Salvatore Giorgi, Vivek Kulkarni, H. Andrew Schwartz

Social science NLP tasks, such as emotion or humor detection, are required to capture the semantics along with the implicit pragmatics from text, often with limited amounts of training data.

Humor Detection · Reading Comprehension

One Size Does Not Fit All: Customizing Open-Domain Procedures

no code implementations 16 Nov 2023 Yash Kumar Lal, Li Zhang, Faeze Brahman, Bodhisattwa Prasad Majumder, Peter Clark, Niket Tandon

Our approach is to test several simple multi-LLM-agent architectures for customization, as well as an end-to-end LLM, using a new evaluation set, called CustomPlans, of over 200 WikiHow procedures each with a customization need.

Evaluating Paraphrastic Robustness in Textual Entailment Models

no code implementations 29 Jun 2023 Dhruv Verma, Yash Kumar Lal, Shreyashee Sinha, Benjamin Van Durme, Adam Poliak

We present PaRTE, a collection of 1,126 pairs of Recognizing Textual Entailment (RTE) examples to evaluate whether models are robust to paraphrasing.

Natural Language Inference · RTE

Systematic Evaluation of GPT-3 for Zero-Shot Personality Estimation

no code implementations 1 Jun 2023 Adithya V Ganesan, Yash Kumar Lal, August Håkan Nilsson, H. Andrew Schwartz

Very large language models (LLMs) perform extremely well on a spectrum of NLP tasks in a zero-shot setting.

TellMeWhy: A Dataset for Answering Why-Questions in Narratives

1 code implementation Findings (ACL) 2021 Yash Kumar Lal, Nathanael Chambers, Raymond Mooney, Niranjan Balasubramanian

Models perform especially poorly on questions whose answers are external to the narrative, making the dataset a challenge for future QA and narrative understanding research.

IrEne: Interpretable Energy Prediction for Transformers

1 code implementation ACL 2021 Qingqing Cao, Yash Kumar Lal, Harsh Trivedi, Aruna Balasubramanian, Niranjan Balasubramanian

We present IrEne, an interpretable and extensible energy prediction system that accurately predicts the inference energy consumption of a wide range of Transformer-based NLP models.

Johns Hopkins University Submission for WMT News Translation Task

no code implementations WS 2019 Kelly Marchisio, Yash Kumar Lal, Philipp Koehn

We describe the work of Johns Hopkins University for the shared task of news translation organized by the Fourth Conference on Machine Translation (2019).

Machine Translation · Translation

De-Mixing Sentiment from Code-Mixed Text

no code implementations ACL 2019 Yash Kumar Lal, Vaibhav Kumar, Mrinal Dhar, Manish Shrivastava, Philipp Koehn

The Collective Encoder captures the overall sentiment of the sentence, while the Specific Encoder utilizes an attention mechanism in order to focus on individual sentiment-bearing sub-words.
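As a rough illustration of the Specific Encoder's attention mechanism summarized above, here is a minimal pure-Python sketch of dot-product attention over sub-word vectors. The function names and the use of a fixed query vector are hypothetical simplifications; the paper's actual model uses learned parameters and recurrent encoders, which are omitted here.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(subword_vecs, query):
    """Dot-product attention: score each sub-word vector against a
    query, normalize the scores, and return the weighted sum, so
    sentiment-bearing sub-words dominate the sentence representation."""
    scores = [sum(q * v for q, v in zip(query, vec)) for vec in subword_vecs]
    weights = softmax(scores)
    dim = len(subword_vecs[0])
    return [sum(w * vec[d] for w, vec in zip(weights, subword_vecs))
            for d in range(dim)]
```

With a query strongly aligned to one sub-word vector, the output collapses toward that vector; with a zero query, all sub-words are weighted equally.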

Sentence · Sentiment Analysis +1

SWDE: A Sub-Word And Document Embedding Based Engine for Clickbait Detection

no code implementations 2 Aug 2018 Vaibhav Kumar, Mrinal Dhar, Dhruv Khattar, Yash Kumar Lal, Abhimanshu Mishra, Manish Shrivastava, Vasudeva Varma

We generate sub-word level embeddings of the title using Convolutional Neural Networks and use them to train a bidirectional LSTM architecture.
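The sub-word pipeline summarized above can be hinted at with a small pure-Python sketch: extract character n-grams as sub-word units and slide a convolution filter over a sequence. Both helper names are hypothetical, the filter weights here are fixed rather than learned, and the bidirectional LSTM stage of the actual model is not shown.

```python
def char_ngrams(word, n=3):
    """Sub-word units as character n-grams with boundary markers
    (a simplification of the paper's sub-word extraction)."""
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def conv1d(seq, kernel):
    """Valid-mode 1-D convolution over a sequence of scalars,
    standing in for one CNN filter over sub-word embeddings."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]
```

In the full model, each n-gram would map to a learned embedding vector and many such filters would run in parallel before the BiLSTM.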

Clickbait Detection · Document Embedding +1
