Search Results for author: Maria Mahbub

Found 7 papers, 2 papers with code

Advancing NLP Security by Leveraging LLMs as Adversarial Engines

no code implementations • 23 Oct 2024 • Sudarshan Srinivasan, Maria Mahbub, Amir Sadovnik

This position paper proposes a novel approach to advancing NLP security by leveraging Large Language Models (LLMs) as engines for generating diverse adversarial attacks.

Position

Hiding-in-Plain-Sight (HiPS) Attack on CLIP for Targeted Object Removal from Images

no code implementations • 16 Oct 2024 • Arka Daw, Megan Hong-Thanh Chung, Maria Mahbub, Amir Sadovnik

Machine learning models are known to be vulnerable to adversarial attacks, but traditional attacks have mostly focused on a single modality.

Image Captioning • Object

BioADAPT-MRC: Adversarial Learning-based Domain Adaptation Improves Biomedical Machine Reading Comprehension Task

1 code implementation • 26 Feb 2022 • Maria Mahbub, Sudarshan Srinivasan, Edmon Begoli, Gregory D Peterson

We present an adversarial learning-based domain adaptation framework for the biomedical machine reading comprehension task (BioADAPT-MRC), a neural network-based method to address the discrepancies in the marginal distributions between the general and biomedical domain datasets.

Domain Adaptation • Machine Reading Comprehension • +1
