Search Results for author: Ameya Godbole

Found 12 papers, 5 papers with code

Interrogating LLM design under a fair learning doctrine

no code implementations • 22 Feb 2025 • Johnny Tian-Zheng Wei, Maggie Wang, Ameya Godbole, Jonathan H. Choi, Robin Jia

The current discourse on large language models (LLMs) and copyright largely takes a "behavioral" perspective, focusing on model outputs and evaluating whether they are substantially similar to training data.

Memorization

Verify with Caution: The Pitfalls of Relying on Imperfect Factuality Metrics

no code implementations • 24 Jan 2025 • Ameya Godbole, Robin Jia

Improvements in large language models have led to increasing optimism that they can serve as reliable evaluators of natural language generation outputs.

Question Answering • Retrieval-augmented Generation +1

Analysis of Plan-based Retrieval for Grounded Text Generation

no code implementations • 20 Aug 2024 • Ameya Godbole, Nicholas Monath, Seungyeon Kim, Ankit Singh Rawat, Andrew McCallum, Manzil Zaheer

In text generation, hallucinations refer to the generation of seemingly coherent text that contradicts established knowledge.

Language Modeling • Language Modelling +2

SCENE: Self-Labeled Counterfactuals for Extrapolating to Negative Examples

1 code implementation • 13 May 2023 • Deqing Fu, Ameya Godbole, Robin Jia

In this work, we propose Self-labeled Counterfactuals for Extrapolating to Negative Examples (SCENE), an automatic method for synthesizing training data that greatly improves models' ability to detect challenging negative examples.

Data Augmentation • Natural Language Inference +2

Benchmarking Long-tail Generalization with Likelihood Splits

1 code implementation • 13 Oct 2022 • Ameya Godbole, Robin Jia

In order to reliably process natural language, NLP systems must generalize to the long tail of rare utterances.

Benchmarking • Language Modeling +4

Knowledge Base Question Answering by Case-based Reasoning over Subgraphs

1 code implementation • 22 Feb 2022 • Rajarshi Das, Ameya Godbole, Ankita Naik, Elliot Tower, Robin Jia, Manzil Zaheer, Hannaneh Hajishirzi, Andrew McCallum

Question answering (QA) over knowledge bases (KBs) is challenging because of the diverse, essentially unbounded, types of reasoning patterns needed.

Knowledge Base Question Answering

A Simple Approach to Case-Based Reasoning in Knowledge Bases

1 code implementation • AKBC 2020 • Rajarshi Das, Ameya Godbole, Shehzaad Dhuliawala, Manzil Zaheer, Andrew McCallum

We present a surprisingly simple yet accurate approach to reasoning in knowledge graphs (KGs) that requires no training, and is reminiscent of case-based reasoning in classical artificial intelligence (AI).

Knowledge Graphs • Meta-Learning +1

Chains-of-Reasoning at TextGraphs 2019 Shared Task: Reasoning over Chains of Facts for Explainable Multi-hop Inference

no code implementations • WS 2019 • Rajarshi Das, Ameya Godbole, Manzil Zaheer, Shehzaad Dhuliawala, Andrew McCallum

This paper describes our submission to the shared task on "Multi-hop Inference Explanation Regeneration" in the TextGraphs workshop at EMNLP 2019 (Jansen and Ustalov, 2019).

Multi-step Entity-centric Information Retrieval for Multi-Hop Question Answering

no code implementations • WS 2019 • Ameya Godbole, Dilip Kavarthapu, Rajarshi Das, Zhiyu Gong, Abhishek Singhal, Hamed Zamani, Mo Yu, Tian Gao, Xiaoxiao Guo, Manzil Zaheer, Andrew McCallum

Multi-hop question answering (QA) requires an information retrieval (IR) system that can find the multiple pieces of supporting evidence needed to answer the question, which makes the retrieval process very challenging.

Information Retrieval • Multi-hop Question Answering +2

Siamese Neural Networks with Random Forest for detecting duplicate question pairs

no code implementations • 22 Jan 2018 • Ameya Godbole, Aman Dalmia, Sunil Kumar Sahu

We obtained our best result with a Siamese adaptation of a Bidirectional GRU combined with a Random Forest classifier, which placed us among the top 24% in the Quora Question Pairs competition hosted on Kaggle.

BIG-bench Machine Learning
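No code is linked for this entry, so the snippet below is only a minimal, hypothetical sketch of the pipeline the abstract names: a weight-sharing (Siamese) bidirectional GRU encoder whose pairwise features are fed to a Random Forest classifier. The vocabulary size, embedding and hidden dimensions, the feature construction (|u − v| and u ⊙ v), and the use of PyTorch with scikit-learn are all assumptions, not the authors' released implementation.

```python
# Hypothetical sketch (not the paper's code) of a Siamese BiGRU + Random Forest
# duplicate-question detector. Dimensions and features are illustrative guesses.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier


class SiameseBiGRUEncoder(nn.Module):
    """Encodes a tokenised question into a fixed-size vector; the same weights
    are applied to both questions in a pair (the 'Siamese' part)."""

    def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, token_ids):                   # (batch, seq_len)
        x = self.embed(token_ids)                   # (batch, seq_len, embed_dim)
        _, h = self.gru(x)                          # h: (2, batch, hidden_dim)
        return torch.cat([h[0], h[1]], dim=-1)      # (batch, 2 * hidden_dim)


def pair_features(encoder, q1_ids, q2_ids):
    """Assumed pairwise features: concatenation of |u - v| and u * v."""
    with torch.no_grad():
        u, v = encoder(q1_ids), encoder(q2_ids)
    return torch.cat([(u - v).abs(), u * v], dim=-1).numpy()


if __name__ == "__main__":
    encoder = SiameseBiGRUEncoder()
    # Toy stand-ins for tokenised question pairs and duplicate/non-duplicate labels.
    q1 = torch.randint(1, 20000, (32, 20))
    q2 = torch.randint(1, 20000, (32, 20))
    labels = np.random.randint(0, 2, size=32)

    features = pair_features(encoder, q1, q2)
    clf = RandomForestClassifier(n_estimators=100).fit(features, labels)
    print(clf.predict(features[:5]))
```

In practice the GRU encoder would first be trained with a Siamese similarity objective on labelled question pairs before its features are handed to the Random Forest; that training step is omitted from this sketch.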
