Search Results for author: Ahmed H. Awadallah

Found 7 papers, 6 papers with code

EcoAssistant: Using LLM Assistant More Affordably and Accurately

1 code implementation • 3 Oct 2023 • Jieyu Zhang, Ranjay Krishna, Ahmed H. Awadallah, Chi Wang

Today, users turn to large language models (LLMs) as assistants to answer queries that require external knowledge; they ask about the weather in a specific city, about stock prices, and even about where specific locations are within their neighborhood.

Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference

3 code implementations • 8 Mar 2023 • Chi Wang, Susan Xueqing Liu, Ahmed H. Awadallah

Large Language Models (LLMs) have sparked significant interest in their generative capabilities, leading to the development of various commercial applications.

Hyperparameter Optimization • Language Modelling • +2
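The entry above concerns tuning text-generation hyperparameters under an inference budget. As a rough illustration of the general idea only (not the paper's actual algorithm), the sketch below randomly samples generation settings and keeps the best-scoring configuration found before a token budget runs out; `query_llm`, `score`, and the token-based cost model are hypothetical stand-ins.

```python
import random

# Hypothetical search space over common generation hyperparameters.
SEARCH_SPACE = {
    "temperature": [0.2, 0.5, 0.8, 1.0],
    "max_tokens": [64, 128, 256],
    "n": [1, 2, 4],  # number of sampled completions per prompt
}

def query_llm(prompt, config):
    # Stand-in for a real LLM call: returns dummy completions and a token count.
    completions = [f"answer to: {prompt}"] * config["n"]
    tokens_used = config["n"] * config["max_tokens"]
    return completions, tokens_used

def score(completions, reference):
    # Stand-in utility: fraction of completions that contain the reference string.
    return sum(reference in c for c in completions) / len(completions)

def random_search(validation_set, budget_tokens, trials=20, seed=0):
    """Pick the generation config with the best average validation utility
    while staying within a total token budget (a simple proxy for cost)."""
    rng = random.Random(seed)
    best_cfg, best_util, spent = None, float("-inf"), 0
    for _ in range(trials):
        cfg = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        utils = []
        for prompt, reference in validation_set:
            completions, tokens = query_llm(prompt, cfg)
            spent += tokens
            if spent > budget_tokens:
                return best_cfg, best_util
            utils.append(score(completions, reference))
        util = sum(utils) / len(utils)
        if util > best_util:
            best_cfg, best_util = cfg, util
    return best_cfg, best_util

validation = [("capital of France?", "Paris"), ("2 + 2 = ?", "4")]
print(random_search(validation, budget_tokens=50_000))
```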

On Improving Summarization Factual Consistency from Natural Language Feedback

1 code implementation • 20 Dec 2022 • Yixin Liu, Budhaditya Deb, Milagro Teruel, Aaron Halfaker, Dragomir Radev, Ahmed H. Awadallah

We collect a high-quality dataset, DeFacto, containing human demonstrations and informational natural language feedback consisting of corrective instructions, edited summaries, and explanations with respect to the factual consistency of the summary.

Text Generation • Zero-Shot Learning
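Going only by the description above, a DeFacto-style example pairs a source document and a model summary with human feedback: a corrective instruction, an edited summary, and an explanation of the factual issue. The field names below are illustrative guesses, not the dataset's actual schema.

```python
from dataclasses import dataclass

@dataclass
class FeedbackExample:
    # Hypothetical field names; the real DeFacto schema may differ.
    document: str        # source text being summarized
    summary: str         # model-generated summary to be checked
    is_factual: bool     # human judgment of factual consistency
    instruction: str     # corrective instruction in natural language
    edited_summary: str  # human-corrected summary
    explanation: str     # why the original summary was (in)consistent

example = FeedbackExample(
    document="The bridge opened in 1932 after eight years of construction.",
    summary="The bridge opened in 1923.",
    is_factual=False,
    instruction="Fix the opening year of the bridge.",
    edited_summary="The bridge opened in 1932.",
    explanation="The source states the bridge opened in 1932, not 1923.",
)
```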

Leveraging Locality in Abstractive Text Summarization

1 code implementation • 25 May 2022 • Yixin Liu, Ansong Ni, Linyong Nan, Budhaditya Deb, Chenguang Zhu, Ahmed H. Awadallah, Dragomir Radev

Our experimental results show that our model outperforms strong baselines with efficient attention modules, and our analysis provides further insights into our locality-aware modeling strategy.

Abstractive Text Summarization • Text Generation

Sparsely Activated Mixture-of-Experts are Robust Multi-Task Learners

no code implementations • 16 Apr 2022 • Shashank Gupta, Subhabrata Mukherjee, Krishan Subudhi, Eduardo Gonzalez, Damien Jose, Ahmed H. Awadallah, Jianfeng Gao

Traditional multi-task learning (MTL) methods use dense networks that share the same set of weights across several different tasks.

Multi-Task Learning
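The entry above contrasts dense weight sharing with sparsely activated mixture-of-experts. The toy NumPy sketch below is only a generic illustration of that contrast, not the paper's model: a dense layer applies one shared weight matrix to every input, while an MoE layer routes each input to its top-k experts and uses only those experts' weights.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_experts, top_k = 8, 4, 4, 2

# Dense MTL baseline: one shared weight matrix applied to inputs from all tasks.
W_shared = rng.normal(size=(d_in, d_out))

def dense_layer(x):
    return x @ W_shared

# Sparsely activated MoE: a router scores experts per input, and only the
# top-k experts contribute to the output.
W_experts = rng.normal(size=(n_experts, d_in, d_out))
W_router = rng.normal(size=(d_in, n_experts))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def moe_layer(x):
    probs = softmax(x @ W_router)                  # (batch, n_experts)
    top = np.argsort(-probs, axis=-1)[:, :top_k]   # indices of top-k experts
    out = np.zeros((x.shape[0], d_out))
    for i in range(x.shape[0]):
        gates = probs[i, top[i]]
        gates = gates / gates.sum()                # renormalize over selected experts
        for gate, e in zip(gates, top[i]):
            out[i] += gate * (x[i] @ W_experts[e])
    return out

x = rng.normal(size=(3, d_in))
print(dense_layer(x).shape, moe_layer(x).shape)    # (3, 4) (3, 4)
```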
