Search Results for author: Meghana Bhat

Found 4 papers, 2 papers with code

XGen-7B Technical Report

1 code implementation • 7 Sep 2023 • Erik Nijkamp, Tian Xie, Hiroaki Hayashi, Bo Pang, Congying Xia, Chen Xing, Jesse Vig, Semih Yavuz, Philippe Laban, Ben Krause, Senthil Purushwalkam, Tong Niu, Wojciech Kryściński, Lidiya Murakhovs'ka, Prafulla Kumar Choubey, Alex Fabbri, Ye Liu, Rui Meng, Lifu Tu, Meghana Bhat, Chien-Sheng Wu, Silvio Savarese, Yingbo Zhou, Shafiq Joty, Caiming Xiong

Most open-source LLMs, on the other hand, are limited in their ability to support longer sequence lengths, which is a key requirement for many tasks that require inference over a long input context.

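The report's focus is on supporting longer input sequences than most open-source LLMs. As a rough illustration only (not taken from the report), the sketch below loads the publicly released 8k-context checkpoint for long-context generation; the checkpoint name Salesforce/xgen-7b-8k-base, the dtype, and the generation settings are assumptions based on the public Hugging Face release.

# Minimal sketch (assumption: the public checkpoint "Salesforce/xgen-7b-8k-base"
# and its custom tokenizer are available via Hugging Face Transformers).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Salesforce/xgen-7b-8k-base"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Encode a long document and generate a continuation within the 8k-token window.
long_document = "..."  # placeholder for a long input context
inputs = tokenizer(long_document, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))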

Large Language Models as Tax Attorneys: A Case Study in Legal Capabilities Emergence

no code implementations • 12 Jun 2023 • John J. Nay, David Karamardian, Sarah B. Lawsky, WenTing Tao, Meghana Bhat, Raghav Jain, Aaron Travis Lee, Jonathan H. Choi, Jungo Kasai

Better understanding of Large Language Models' (LLMs) legal analysis abilities can contribute to improving the efficiency of legal services, governing artificial intelligence, and leveraging LLMs to identify inconsistencies in law.

Logical Reasoning

Few-shot Unified Question Answering: Tuning Models or Prompts?

no code implementations • 23 May 2023 • Srijan Bansal, Semih Yavuz, Bo Pang, Meghana Bhat, Yingbo Zhou

Question-answering (QA) tasks often investigate specific question types, knowledge domains, or reasoning skills, leading to specialized models catering to specific categories of QA tasks.

Question Answering • Transfer Learning
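The title contrasts tuning full models with tuning prompts for few-shot unified QA. As a rough sketch of the prompt-tuning side only, the snippet below attaches trainable soft-prompt tokens to a frozen seq2seq model using the Hugging Face PEFT library; the library choice, base model (t5-base), and hyperparameters are illustrative assumptions, not the paper's actual setup.

# Minimal sketch of soft prompt tuning with Hugging Face PEFT (assumed tooling;
# the base model, virtual-token count, and init text are illustrative only).
from transformers import AutoModelForSeq2SeqLM
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

base_model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

peft_config = PromptTuningConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="Answer the question given the context:",
    num_virtual_tokens=20,
    tokenizer_name_or_path="t5-base",
)

# Only the soft-prompt embeddings are trainable; the base model stays frozen.
model = get_peft_model(base_model, peft_config)
model.print_trainable_parameters()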
