Search Results for author: Ahmed Hassan Awadallah

Found 59 papers, 24 papers with code

Say ‘YES’ to Positivity: Detecting Toxic Language in Workplace Communications

no code implementations Findings (EMNLP) 2021 Meghana Moorthy Bhat, Saghar Hosseini, Ahmed Hassan Awadallah, Paul Bennett, Weisheng Li

Specifically, the lack of a corpus, the sparsity of toxic content in enterprise emails, and the absence of well-defined criteria for annotating toxic conversations have prevented researchers from addressing the problem at scale.

Hybrid LLM: Cost-Efficient and Quality-Aware Query Routing

no code implementations 22 Apr 2024 Dujian Ding, Ankur Mallick, Chi Wang, Robert Sim, Subhabrata Mukherjee, Victor Ruhle, Laks V. S. Lakshmanan, Ahmed Hassan Awadallah

Large language models (LLMs) excel in most NLP tasks but also require expensive cloud servers for deployment due to their size, while smaller models that can be deployed on lower-cost (e.g., edge) devices tend to lag behind in terms of response quality.
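
To make the routing idea concrete, below is a minimal sketch of quality-aware query routing; the router interface, model handles, and threshold are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of quality-aware routing between a small (edge) and a large (cloud) model.
from dataclasses import dataclass
from typing import Callable

@dataclass
class HybridRouter:
    small_model: Callable[[str], str]          # cheap model deployable on edge devices
    large_model: Callable[[str], str]          # expensive cloud-hosted LLM
    quality_predictor: Callable[[str], float]  # predicts P(small-model answer is acceptable)
    threshold: float = 0.8                     # cost/quality trade-off knob (illustrative)

    def answer(self, query: str) -> str:
        # Route to the small model only when the predictor is confident enough;
        # otherwise fall back to the large model.
        if self.quality_predictor(query) >= self.threshold:
            return self.small_model(query)
        return self.large_model(query)
```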

Sweeping Heterogeneity with Smart MoPs: Mixture of Prompts for LLM Task Adaptation

no code implementations 4 Oct 2023 Chen Dun, Mirian Hipolito Garcia, Guoqing Zheng, Ahmed Hassan Awadallah, Anastasios Kyrillidis, Robert Sim

Large Language Models (LLMs) have the ability to solve a variety of tasks, such as text summarization and mathematical questions, just out of the box, but they are often trained with a single task in mind.

Model Compression Text Summarization

Automatic Pair Construction for Contrastive Post-training

1 code implementation 3 Oct 2023 Canwen Xu, Corby Rosset, Ethan C. Chau, Luciano del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao

Remarkably, our automatic contrastive post-training further improves the performance of Orca, already a state-of-the-art instruction learning model tuned with GPT-4 outputs, to outperform ChatGPT.
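
A rough sketch of the automatic pair-construction idea, assuming preference pairs are formed from a stronger and a weaker model's responses to the same prompt; the function names and output format are hypothetical, not the paper's code.

```python
# Illustrative sketch: build contrastive (chosen, rejected) pairs automatically by
# pairing responses from a stronger and a weaker model on the same prompts.
def build_contrastive_pairs(prompts, strong_generate, weak_generate):
    """Return preference pairs suitable for contrastive post-training."""
    pairs = []
    for prompt in prompts:
        chosen = strong_generate(prompt)    # response from the higher-quality model
        rejected = weak_generate(prompt)    # response from the lower-quality model
        pairs.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return pairs
```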

GRILL: Grounded Vision-language Pre-training via Aligning Text and Image Regions

no code implementations 24 May 2023 Woojeong Jin, Subhabrata Mukherjee, Yu Cheng, Yelong Shen, Weizhu Chen, Ahmed Hassan Awadallah, Damien Jose, Xiang Ren

Generalization to unseen tasks is an important ability for few-shot learners to achieve better zero-/few-shot performance on diverse tasks.

Object Question Answering +2

Improving Grounded Language Understanding in a Collaborative Environment by Interacting with Agents Through Help Feedback

no code implementations 21 Apr 2023 Nikhil Mehta, Milagro Teruel, Patricio Figueroa Sanz, Xin Deng, Ahmed Hassan Awadallah, Julia Kiseleva

We explore multiple types of help players can give to the AI to guide it and analyze the impact of this help on AI behavior, resulting in performance improvements.

An Empirical Study of Metrics to Measure Representational Harms in Pre-Trained Language Models

1 code implementation 22 Jan 2023 Saghar Hosseini, Hamid Palangi, Ahmed Hassan Awadallah

Large-scale Pre-Trained Language Models (PTLMs) capture knowledge from massive human-written data which contains latent societal biases and toxic content.

Language Modelling

AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning

1 code implementation 31 Oct 2022 Yaqing Wang, Sahaj Agarwal, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao

Standard fine-tuning of large pre-trained language models (PLMs) for downstream tasks requires updating hundreds of millions to billions of parameters and storing a large copy of the PLM weights for every task, resulting in increased costs for storing, sharing, and serving the models.
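
For intuition, here is a rough sketch of a mixture-of-adaptations layer in this spirit: several small adapter modules share one frozen PLM layer, one adapter is picked at random per training step, and the adapter outputs are averaged at inference. The dimensions, routing rule, and averaging step are simplifying assumptions rather than the exact AdaMix method.

```python
# Rough sketch of a mixture-of-adaptations layer attached to a frozen PLM layer.
import random
import torch
import torch.nn as nn

class MixtureOfAdapters(nn.Module):
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 16, num_adapters: int = 4):
        super().__init__()
        # Each adapter is a small bottleneck MLP added residually to the layer output.
        self.adapters = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden_dim, bottleneck_dim),
                nn.ReLU(),
                nn.Linear(bottleneck_dim, hidden_dim),
            )
            for _ in range(num_adapters)
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Stochastic routing: pick one adapter at random for this step.
            adapter = random.choice(list(self.adapters))
            return hidden_states + adapter(hidden_states)
        # At inference, average the adapter outputs (a simple stand-in for module merging).
        outputs = torch.stack([a(hidden_states) for a in self.adapters])
        return hidden_states + outputs.mean(dim=0)
```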

Boosting Natural Language Generation from Instructions with Meta-Learning

no code implementations 20 Oct 2022 Budhaditya Deb, Guoqing Zheng, Ahmed Hassan Awadallah

Recent work has shown that language models (LMs) trained with multi-task instructional learning (MTIL) can solve diverse NLP tasks in zero- and few-shot settings with improved performance compared to prompt tuning.

Meta-Learning Text Generation

AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning

1 code implementation 24 May 2022 Yaqing Wang, Sahaj Agarwal, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao

Standard fine-tuning of large pre-trained language models (PLMs) for downstream tasks requires updating hundreds of millions to billions of parameters and storing a large copy of the PLM weights for every task, resulting in increased costs for storing, sharing, and serving the models.

Natural Language Understanding Sparse Learning

PREME: Preference-based Meeting Exploration through an Interactive Questionnaire

1 code implementation 5 May 2022 Negar Arabzadeh, Ali Ahmadvand, Julia Kiseleva, Yang Liu, Ahmed Hassan Awadallah, Ming Zhong, Milad Shokouhi

The recent increase in the volume of online meetings necessitates automated tools for managing and organizing the material, especially when an attendee has missed the discussion and needs assistance in quickly exploring it.

Pathologies of Pre-trained Language Models in Few-shot Fine-tuning

no code implementations insights (ACL) 2022 Hanjie Chen, Guoqing Zheng, Ahmed Hassan Awadallah, Yangfeng Ji

Although adapting pre-trained language models with few examples has shown promising performance on text classification, there is a lack of understanding of where the performance gain comes from.

text-classification Text Classification

Knowledge Infused Decoding

1 code implementation ICLR 2022 Ruibo Liu, Guoqing Zheng, Shashank Gupta, Radhika Gaonkar, Chongyang Gao, Soroush Vosoughi, Milad Shokouhi, Ahmed Hassan Awadallah

Hence, they tend to suffer from counterfactual or hallucinatory generation when used in knowledge-intensive natural language generation (NLG) tasks.

counterfactual Question Answering +1

AutoDistil: Few-shot Task-agnostic Neural Architecture Search for Distilling Large Language Models

no code implementations 29 Jan 2022 Dongkuan Xu, Subhabrata Mukherjee, Xiaodong Liu, Debadeepta Dey, Wenhui Wang, Xiang Zhang, Ahmed Hassan Awadallah, Jianfeng Gao

Our framework AutoDistil addresses the above challenges with the following steps: (a) incorporate inductive bias and heuristics to partition the Transformer search space into K compact sub-spaces (K=3 for typical student sizes of base, small, and tiny); (b) train one SuperLM for each sub-space using a task-agnostic objective (e.g., self-attention distillation) with weight-sharing of students; (c) perform a lightweight search for the optimal student without re-training.

Inductive Bias Knowledge Distillation +1
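
A schematic sketch of the three steps listed above; the partitioning function, SuperLM training call, and student-scoring function are passed in as placeholders rather than taken from the paper's code.

```python
# Schematic sketch of the partition / train / search pipeline described above.
def autodistil(search_space, partition, train_superlm, score_student, k=3):
    # (a) Partition the Transformer search space into K compact sub-spaces
    #     (e.g., base / small / tiny student sizes).
    sub_spaces = partition(search_space, k)

    # (b) Train one weight-sharing SuperLM per sub-space with a task-agnostic
    #     objective such as self-attention distillation.
    super_lms = [train_superlm(space) for space in sub_spaces]

    # (c) Lightweight search: score candidate students extracted from each SuperLM
    #     and return the best one without re-training.
    candidates = [student for lm in super_lms for student in lm.sample_students()]
    return max(candidates, key=score_student)
```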

Compositional Generalization for Natural Language Interfaces to Web APIs

no code implementations 9 Dec 2021 Saghar Hosseini, Ahmed Hassan Awadallah, Yu Su

We define new compositional generalization tasks for NL2API which explore the models' ability to extrapolate from simple API calls in the training set to new and more complex API calls in the inference phase.

Semantic Parsing

Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models

1 code implementation 4 Nov 2021 Boxin Wang, Chejian Xu, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah, Bo Li

In this paper, we present Adversarial GLUE (AdvGLUE), a new multi-task benchmark to quantitatively and thoroughly explore and evaluate the vulnerabilities of modern large-scale language models under various types of adversarial attacks.

Adversarial Attack Adversarial Robustness +1

CLUES: Few-Shot Learning Evaluation in Natural Language Understanding

1 code implementation 4 Nov 2021 Subhabrata Mukherjee, Xiaodong Liu, Guoqing Zheng, Saghar Hosseini, Hao Cheng, Greg Yang, Christopher Meek, Ahmed Hassan Awadallah, Jianfeng Gao

We demonstrate that while recent models reach human performance when they have access to large amounts of labeled data, there is a huge gap in performance in the few-shot setting for most tasks.

Few-Shot Learning Natural Language Understanding

DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models

1 code implementation 30 Oct 2021 Xuxi Chen, Tianlong Chen, Weizhu Chen, Ahmed Hassan Awadallah, Zhangyang Wang, Yu Cheng

To address these pain points, we propose a framework for resource- and parameter-efficient fine-tuning by leveraging the sparsity prior in both weight updates and the final model weights.
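
As a toy illustration of the sparsity prior on weight updates, the sketch below keeps only the largest-magnitude entries of a fine-tuning delta before adding it to the frozen pre-trained weights; the keep ratio and thresholding rule are assumptions, not the paper's exact algorithm.

```python
# Toy sketch of a sparse weight update: prune the fine-tuning delta by magnitude.
import torch

def apply_sparse_update(pretrained_weight: torch.Tensor,
                        dense_update: torch.Tensor,
                        keep_ratio: float = 0.05) -> torch.Tensor:
    k = max(1, int(keep_ratio * dense_update.numel()))
    # Magnitude threshold that keeps roughly `keep_ratio` of the update entries.
    threshold = dense_update.abs().flatten().kthvalue(dense_update.numel() - k + 1).values
    mask = dense_update.abs() >= threshold
    # Only the surviving entries modify the frozen pre-trained weights.
    return pretrained_weight + dense_update * mask
```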

Robustness Challenges in Model Distillation and Pruning for Natural Language Understanding

no code implementations 16 Oct 2021 Mengnan Du, Subhabrata Mukherjee, Yu Cheng, Milad Shokouhi, Xia Hu, Ahmed Hassan Awadallah

Recent work has focused on compressing pre-trained language models (PLMs) like BERT, with the main goal of improving in-distribution performance on downstream tasks.

Knowledge Distillation Model Compression +1

An Exploratory Study on Long Dialogue Summarization: What Works and What's Next

1 code implementation 10 Sep 2021 Yusen Zhang, Ansong Ni, Tao Yu, Rui Zhang, Chenguang Zhu, Budhaditya Deb, Asli Celikyilmaz, Ahmed Hassan Awadallah, Dragomir Radev

Dialogue summarization helps readers capture salient information from long conversations in meetings, interviews, and TV series.

Retrieval

MetaXT: Meta Cross-Task Transfer between Disparate Label Spaces

no code implementations 9 Sep 2021 Srinagesh Sharma, Guoqing Zheng, Ahmed Hassan Awadallah

In this paper, we aim to address the problem of few-shot task learning by exploiting and transferring from a different task that admits a related but disparate label space.

Language Modelling

SummerTime: Text Summarization Toolkit for Non-experts

1 code implementation EMNLP (ACL) 2021 Ansong Ni, Zhangir Azerbayev, Mutethia Mutuma, Troy Feng, Yusen Zhang, Tao Yu, Ahmed Hassan Awadallah, Dragomir Radev

We also provide explanations for models and evaluation metrics to help users understand the model behaviors and select models that best suit their needs.

Document Summarization Multi-Document Summarization

WALNUT: A Benchmark on Semi-weakly Supervised Learning for Natural Language Understanding

no code implementations NAACL 2022 Guoqing Zheng, Giannis Karamanolakis, Kai Shu, Ahmed Hassan Awadallah

In this paper, we propose such a benchmark, named WALNUT (semi-WeAkly supervised Learning for Natural language Understanding Testbed), to advocate and facilitate research on weak supervision for NLU.

Natural Language Understanding Weakly-supervised Learning

Fairness via Representation Neutralization

no code implementations NeurIPS 2021 Mengnan Du, Subhabrata Mukherjee, Guanchu Wang, Ruixiang Tang, Ahmed Hassan Awadallah, Xia Hu

This process not only requires many instance-level annotations for sensitive attributes, but also does not guarantee that all fairness-sensitive information has been removed from the encoder.

Attribute Classification +1

XtremeDistilTransformers: Task Transfer for Task-agnostic Distillation

1 code implementation 8 Jun 2021 Subhabrata Mukherjee, Ahmed Hassan Awadallah, Jianfeng Gao

While deep and large pre-trained models are the state of the art for various natural language processing tasks, their huge size poses significant challenges for practical use in resource-constrained settings.

Knowledge Distillation NER +1

MetaXL: Meta Representation Transformation for Low-resource Cross-lingual Learning

2 code implementations NAACL 2021 Mengzhou Xia, Guoqing Zheng, Subhabrata Mukherjee, Milad Shokouhi, Graham Neubig, Ahmed Hassan Awadallah

Extensive experiments on real-world low-resource languages - without access to large-scale monolingual corpora or large amounts of labeled data - for tasks like cross-lingual sentiment analysis and named entity recognition show the effectiveness of our approach.

Cross-Lingual Transfer Meta-Learning +5

QMSum: A New Benchmark for Query-based Multi-domain Meeting Summarization

1 code implementation NAACL 2021 Ming Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan Awadallah, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, Dragomir Radev

As increasing numbers of meetings are recorded and transcribed, meeting summaries have become essential to remind those who may or may not have attended the meetings about the key decisions made and the tasks to be completed.

Meeting Summarization

Self-Training with Weak Supervision

1 code implementation NAACL 2021 Giannis Karamanolakis, Subhabrata Mukherjee, Guoqing Zheng, Ahmed Hassan Awadallah

In this work, we develop a weak supervision framework (ASTRA) that leverages all the available data for a given task.

text-classification Text Classification

NL-EDIT: Correcting semantic parse errors through natural language interaction

1 code implementation NAACL 2021 Ahmed Elgohary, Christopher Meek, Matthew Richardson, Adam Fourney, Gonzalo Ramos, Ahmed Hassan Awadallah

We present NL-EDIT, a model for interpreting natural language feedback in the interaction context to generate a sequence of edits that can be applied to the initial parse to correct its errors.

Semantic Parsing Text-To-SQL
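
A minimal sketch of the edit-application loop implied by the description: a model maps the feedback and the initial parse to a list of edits, which are applied in order. The edit representation and the apply_edit semantics are assumptions for illustration.

```python
# Minimal sketch of correcting a semantic parse with a sequence of predicted edits.
def correct_parse(initial_sql, feedback, edit_model, apply_edit):
    # The edit model interprets NL feedback in the context of the initial parse
    # and proposes edits (e.g., add / replace / remove a clause).
    edits = edit_model(initial_sql, feedback)
    corrected = initial_sql
    for edit in edits:
        corrected = apply_edit(corrected, edit)
    return corrected
```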

SCoRe: Pre-Training for Context Representation in Conversational Semantic Parsing

no code implementations NeurIPS Workshop CAP 2020 Tao Yu, Rui Zhang, Alex Polozov, Christopher Meek, Ahmed Hassan Awadallah

Conversational Semantic Parsing (CSP) is the task of converting a sequence of natural language queries to formal language (e.g., SQL, SPARQL) that can be executed against a structured ontology (e.g., databases, knowledge bases).

Ranked #3 on Multi-domain Dialogue State Tracking on MULTIWOZ 2.1 (using extra training data)

Dialogue State Tracking Language Modelling +4

Adaptive Self-training for Neural Sequence Labeling with Few Labels

no code implementations 1 Jan 2021 Yaqing Wang, Subhabrata Mukherjee, Haoda Chu, Yuancheng Tu, Ming Wu, Jing Gao, Ahmed Hassan Awadallah

Neural sequence labeling is an important technique employed for many Natural Language Processing (NLP) tasks, such as Named Entity Recognition (NER), slot tagging for dialog systems and semantic parsing.

Meta-Learning named-entity-recognition +3

Structure-Grounded Pretraining for Text-to-SQL

no code implementations NAACL 2021 Xiang Deng, Ahmed Hassan Awadallah, Christopher Meek, Oleksandr Polozov, Huan Sun, Matthew Richardson

Additionally, to evaluate different methods under more realistic text-table alignment settings, we create a new evaluation set, Spider-Realistic, based on the Spider dev set with explicit mentions of column names removed, and adopt eight existing text-to-SQL datasets for cross-database evaluation.

Text-To-SQL

Adaptive Self-training for Few-shot Neural Sequence Labeling

no code implementations 7 Oct 2020 Yaqing Wang, Subhabrata Mukherjee, Haoda Chu, Yuancheng Tu, Ming Wu, Jing Gao, Ahmed Hassan Awadallah

While self-training serves as an effective mechanism to learn from large amounts of unlabeled data, meta-learning helps with adaptive sample re-weighting to mitigate error propagation from noisy pseudo-labels.

Meta-Learning named-entity-recognition +3
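
A high-level sketch of self-training with re-weighted pseudo-labels; the confidence-based weighting here is a simple stand-in for the meta-learned re-weighting described in the paper, and the teacher/student interfaces are hypothetical.

```python
# High-level sketch of one self-training step with down-weighted noisy pseudo-labels.
def self_training_step(teacher_predict, student_update, unlabeled_batch):
    weighted_examples = []
    for x in unlabeled_batch:
        probs = teacher_predict(x)                 # teacher distribution over labels
        pseudo_label = max(probs, key=probs.get)   # most likely label
        confidence = probs[pseudo_label]
        # Down-weight low-confidence pseudo-labels to limit error propagation.
        weighted_examples.append((x, pseudo_label, confidence))
    student_update(weighted_examples)              # weighted student training step
```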

Smart To-Do: Automatic Generation of To-Do Items from Emails

no code implementations ACL 2020 Sudipto Mukherjee, Subhabrata Mukherjee, Marcello Hasegawa, Ahmed Hassan Awadallah, Ryen White

Intelligent features in email service applications aim to increase productivity by helping people organize their folders, compose their emails and respond to pending tasks.

Management Text Generation

An Empirical Study of Software Exceptions in the Field using Search Logs

no code implementations 30 May 2020 Foyzul Hassan, Chetan Bansal, Nachiappan Nagappan, Thomas Zimmermann, Ahmed Hassan Awadallah

Using the machine learning model, we extracted exceptions from raw queries and performed popularity, effort, success, query characteristic and web domain analysis.

BIG-bench Machine Learning

Learning with Weak Supervision for Email Intent Detection

no code implementations 26 May 2020 Kai Shu, Subhabrata Mukherjee, Guoqing Zheng, Ahmed Hassan Awadallah, Milad Shokouhi, Susan Dumais

In this paper, we propose to leverage user actions as a source of weak supervision, in addition to a limited set of annotated examples, to detect intents in emails.

intent-classification Intent Classification +2
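
A toy illustration of how user actions could serve as weak supervision for intent labels; the action names and the mapping below are invented for illustration and are not taken from the paper.

```python
# Toy mapping from (hypothetical) user actions on an email to weak intent labels.
ACTION_TO_WEAK_INTENT = {
    "created_calendar_event": "schedule_meeting",
    "flagged_for_follow_up": "request_action",
    "forwarded_with_attachment": "share_document",
}

def weak_label(email_actions):
    """Return the weak intent labels implied by the actions a user took on an email."""
    return [ACTION_TO_WEAK_INTENT[a] for a in email_actions if a in ACTION_TO_WEAK_INTENT]
```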

Smart To-Do: Automatic Generation of To-Do Items from Emails

no code implementations 5 May 2020 Sudipto Mukherjee, Subhabrata Mukherjee, Marcello Hasegawa, Ahmed Hassan Awadallah, Ryen White

Intelligent features in email service applications aim to increase productivity by helping people organize their folders, compose their emails and respond to pending tasks.

Management Text Generation

Analyzing Web Search Behavior for Software Engineering Tasks

no code implementations 19 Dec 2019 Nikitha Rao, Chetan Bansal, Thomas Zimmermann, Ahmed Hassan Awadallah, Nachiappan Nagappan

Subsequently, we propose a taxonomy of intents to identify the various contexts in which web search is used in software engineering.

Meta Label Correction for Noisy Label Learning

1 code implementation 10 Nov 2019 Guoqing Zheng, Ahmed Hassan Awadallah, Susan Dumais

We view the label correction procedure as a meta-process and propose a new meta-learning based framework termed MLC (Meta Label Correction) for learning with noisy labels.

Ranked #9 on Image Classification on Clothing1M (using clean data) (using extra training data)

Learning with noisy labels Meta-Learning +2
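
A schematic sketch of the meta label-correction loop: a correction network rewrites noisy labels, the main model trains on the corrected labels, and the correction network is updated against a small clean set. The method names (train_on, loss_on, meta_update) are placeholders, and the bi-level optimization is heavily simplified.

```python
# Schematic sketch of one MLC-style training step (placeholder interfaces).
def mlc_training_step(correction_net, main_model, noisy_batch, clean_batch):
    # 1. Correct the noisy labels: soft labels conditioned on the input and the noisy label.
    corrected = [(x, correction_net(x, y_noisy)) for x, y_noisy in noisy_batch]

    # 2. Update the main model on the corrected labels.
    main_model.train_on(corrected)

    # 3. Meta step: adjust the correction network so that the updated main model
    #    performs well on clean, trusted data.
    correction_net.meta_update(main_model.loss_on(clean_batch))
```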

Detecting Fake News with Weak Social Supervision

no code implementations 24 Oct 2019 Kai Shu, Ahmed Hassan Awadallah, Susan Dumais, Huan Liu

This is especially the case for many real-world tasks where large-scale annotated examples are either too expensive to acquire or unavailable due to privacy or data-access constraints.

Fake News Detection

Distilling BERT into Simple Neural Networks with Unlabeled Transfer Data

no code implementations 4 Oct 2019 Subhabrata Mukherjee, Ahmed Hassan Awadallah

We show that our student models can compress the huge teacher by up to 26x while still matching or even marginally exceeding the teacher performance in low-resource settings with a small amount of labeled data.

Knowledge Distillation NER
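
For context, below is a generic knowledge-distillation objective of the kind typically used when compressing a large teacher into a small student; the temperature and mixing weight are illustrative, and this is not the paper's exact loss.

```python
# Generic distillation loss: softened teacher logits (KL) plus ground-truth cross-entropy.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 2.0, alpha: float = 0.5):
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        soft_targets,
        reduction="batchmean",
    ) * (temperature ** 2)                       # rescale gradients for the temperature
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```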

On Domain Transfer When Predicting Intent in Text

no code implementations NeurIPS Workshop on Document Intelligence 2019 Petar Stojanov, Ahmed Hassan Awadallah, Paul Bennett, Saghar Hosseini

In many domains, especially enterprise text analysis, there is an abundance of data which can be used for the development of new AI-powered intelligent experiences to improve people's productivity.

Multi-Source Cross-Lingual Model Transfer: Learning What to Share

1 code implementation ACL 2019 Xilun Chen, Ahmed Hassan Awadallah, Hany Hassan, Wei Wang, Claire Cardie

In this work, we focus on the multilingual transfer setting where training data in multiple source languages is leveraged to further boost target language performance.

Cross-Lingual NER text-classification +2

Zero-Resource Multilingual Model Transfer: Learning What to Share

no code implementations 27 Sep 2018 Xilun Chen, Ahmed Hassan Awadallah, Hany Hassan, Wei Wang, Claire Cardie

In this work, we propose a zero-resource multilingual transfer learning model that can utilize training data in multiple source languages, while not requiring target language training data nor cross-lingual supervision.

Cross-Lingual Transfer text-classification +2
