Search Results for author: Saghar Hosseini

Found 10 papers, 4 papers with code

Say ‘YES’ to Positivity: Detecting Toxic Language in Workplace Communications

no code implementations • Findings (EMNLP) 2021 • Meghana Moorthy Bhat, Saghar Hosseini, Ahmed Hassan Awadallah, Paul Bennett, Weisheng Li

Specifically, the lack of a corpus, the sparsity of toxicity in enterprise emails, and the absence of well-defined criteria for annotating toxic conversations have prevented researchers from addressing the problem at scale.

ROBBIE: Robust Bias Evaluation of Large Generative Language Models

no code implementations • 29 Nov 2023 • David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith

In this work, our focus is two-fold: (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity metrics across 12 demographic axes and 5 families of generative LLMs.

Benchmarking • Fairness

An Empirical Study of Metrics to Measure Representational Harms in Pre-Trained Language Models

1 code implementation • 22 Jan 2023 • Saghar Hosseini, Hamid Palangi, Ahmed Hassan Awadallah

Large-scale Pre-Trained Language Models (PTLMs) capture knowledge from massive amounts of human-written data that contain latent societal biases and toxic content.

Language Modelling

Compositional Generalization for Natural Language Interfaces to Web APIs

no code implementations • 9 Dec 2021 • Saghar Hosseini, Ahmed Hassan Awadallah, Yu Su

We define new compositional generalization tasks for NL2API which explore the models' ability to extrapolate from simple API calls in the training set to new and more complex API calls in the inference phase.

Semantic Parsing
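The compositional generalization tasks described above hinge on how the train/test split is constructed. A minimal sketch of one such split, assuming a simple complexity measure (number of API parameters per call) and hypothetical field names not taken from the paper:

```python
# Hedged sketch: a compositional train/test split for NL2API-style data.
# The complexity measure (parameter count) and the "utterance"/"params"
# field names are illustrative assumptions, not the paper's exact setup.

def compositional_split(examples, max_train_params=1):
    """Hold out API calls that combine more parameters than seen in training."""
    train = [ex for ex in examples if len(ex["params"]) <= max_train_params]
    test = [ex for ex in examples if len(ex["params"]) > max_train_params]
    return train, test


examples = [
    {"utterance": "emails from Ann", "params": ["from"]},
    {"utterance": "unread emails", "params": ["is_read"]},
    {"utterance": "unread emails from Ann sorted by date",
     "params": ["from", "is_read", "order_by"]},
]
train_set, test_set = compositional_split(examples)
```

Here a model trained only on single-parameter calls is evaluated on the held-out multi-parameter composition, which is the extrapolation the task targets.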

CLUES: Few-Shot Learning Evaluation in Natural Language Understanding

1 code implementation • 4 Nov 2021 • Subhabrata Mukherjee, Xiaodong Liu, Guoqing Zheng, Saghar Hosseini, Hao Cheng, Greg Yang, Christopher Meek, Ahmed Hassan Awadallah, Jianfeng Gao

We demonstrate that while recent models reach human performance when they have access to large amounts of labeled data, there is a huge gap in performance in the few-shot setting for most tasks.

Few-Shot Learning • Natural Language Understanding

On Domain Transfer When Predicting Intent in Text

no code implementations • NeurIPS Workshop on Document Intelligence 2019 • Petar Stojanov, Ahmed Hassan Awadallah, Paul Bennett, Saghar Hosseini

In many domains, especially enterprise text analysis, there is an abundance of data which can be used for the development of new AI-powered intelligent experiences to improve people's productivity.

Online Distributed Optimization on Dynamic Networks

no code implementations • 22 Dec 2014 • Saghar Hosseini, Airlie Chapman, Mehran Mesbahi

This paper presents a distributed optimization scheme over a network of agents in the presence of cost uncertainties and over switching communication topologies.

Distributed Optimization
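The general shape of such a scheme can be illustrated with a minimal sketch of distributed online gradient descent: agents mix their estimates through a (possibly time-varying) doubly stochastic weight matrix, then each takes a local gradient step on its own time-varying cost. This is a generic illustration of the setting, not the paper's specific algorithm.

```python
import numpy as np

# Hedged sketch of distributed online gradient descent over a switching
# network (illustrative only; not the paper's exact method). At step t,
# agents average with neighbors via mixing matrix W_t, then descend
# their own current cost with a decaying step size.

def distributed_online_gd(costs_grad, mixing_matrices, n_agents, dim, step=0.1):
    """costs_grad[t][i] maps an agent's estimate to its gradient at time t."""
    x = np.zeros((n_agents, dim))
    for t, W in enumerate(mixing_matrices):
        x = W @ x  # consensus step over the current communication topology
        for i in range(n_agents):
            x[i] -= step / np.sqrt(t + 1) * costs_grad[t][i](x[i])
    return x
```

With static quadratic costs pulling each agent toward a private target and a complete-graph averaging matrix, the agents reach consensus near the average of the targets, which matches the intuition that the network jointly minimizes the sum of local costs.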
