Search Results for author: Marcello Hasegawa

Found 6 papers, 0 papers with code

Smart To-Do: Automatic Generation of To-Do Items from Emails

no code implementations5 May 2020 Sudipto Mukherjee, Subhabrata Mukherjee, Marcello Hasegawa, Ahmed Hassan Awadallah, Ryen White

Intelligent features in email service applications aim to increase productivity by helping people organize their folders, compose their emails and respond to pending tasks.

Management, Text Generation

Smart To-Do: Automatic Generation of To-Do Items from Emails

no code implementations ACL 2020 Sudipto Mukherjee, Subhabrata Mukherjee, Marcello Hasegawa, Ahmed Hassan Awadallah, Ryen White

Intelligent features in email service applications aim to increase productivity by helping people organize their folders, compose their emails and respond to pending tasks.

Management, Text Generation

Privacy Regularization: Joint Privacy-Utility Optimization in Language Models

no code implementations12 Mar 2021 FatemehSadat Mireshghallah, Huseyin A. Inan, Marcello Hasegawa, Victor Rühle, Taylor Berg-Kirkpatrick, Robert Sim

In this work, we introduce two privacy-preserving regularization methods for training language models that enable joint optimization of utility and privacy through (1) the use of a discriminator and (2) the inclusion of a triplet-loss term.

Memorization, Privacy Preserving
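
The regularization approach described above combines the usual language-modeling objective with a triplet-loss term. The snippet below is a minimal, hypothetical PyTorch sketch of such a joint objective, not the paper's implementation: the margin, the weighting factor lambda_priv, and how anchor/positive/negative representations are formed are illustrative assumptions.

# Minimal sketch (assumption-laden, not the paper's implementation): add a
# triplet-loss privacy regularizer to a standard language-modeling loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, hidden = 1000, 64                  # toy sizes, chosen for illustration
lm_head = nn.Linear(hidden, vocab_size)        # maps hidden states to token logits
triplet = nn.TripletMarginLoss(margin=1.0)     # margin value is an illustrative choice
lambda_priv = 0.1                              # privacy/utility trade-off weight (assumption)

def joint_loss(hidden_states, targets, anchor, positive, negative):
    """Utility term (next-token cross-entropy) plus a triplet privacy term.

    How anchor/positive/negative representations are chosen is the crux of the
    method and is left abstract here; this sketch only shows the joint objective.
    """
    logits = lm_head(hidden_states)                                 # (batch, seq, vocab)
    lm_loss = F.cross_entropy(logits.view(-1, vocab_size), targets.view(-1))
    priv_loss = triplet(anchor, positive, negative)
    return lm_loss + lambda_priv * priv_loss

# Toy forward/backward pass with random tensors, just to show the shapes involved.
h = torch.randn(2, 8, hidden)
y = torch.randint(0, vocab_size, (2, 8))
a, p, n = (torch.randn(2, hidden) for _ in range(3))
joint_loss(h, y, a, p, n).backward()

In practice the triplets and the weight lambda_priv would be chosen to trade off utility (perplexity) against how much user-identifying structure the representations retain.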

On Privacy and Confidentiality of Communications in Organizational Graphs

no code implementations27 May 2021 Masoumeh Shafieinejad, Huseyin Inan, Marcello Hasegawa, Robert Sim

We propose a model that captures the correlation in the social network graph, and incorporates this correlation in the privacy calculations through Pufferfish privacy principles.

Language Modelling
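
For reference, the Pufferfish framework of Kifer and Machanavajjhala cited above generalizes differential privacy with an explicit set of secrets, secret pairs Q, and data-distribution assumptions Theta. A standard statement of the epsilon-Pufferfish guarantee, written here from the general framework rather than from this paper's specific instantiation for organizational graphs, is:

\[
  e^{-\epsilon} \;\le\;
  \frac{\Pr\left[M(X) = w \mid s_i, \theta\right]}
       {\Pr\left[M(X) = w \mid s_j, \theta\right]}
  \;\le\; e^{\epsilon}
\]

for every secret pair (s_i, s_j) \in Q, every distribution \theta \in \Theta under which both secrets have nonzero probability, and every output w of the mechanism M. The graph correlation mentioned in the abstract would typically enter through the choice of the distribution class \Theta.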

Privacy Regularization: Joint Privacy-Utility Optimization in Language Models

no code implementations NAACL 2021 FatemehSadat Mireshghallah, Huseyin Inan, Marcello Hasegawa, Victor Rühle, Taylor Berg-Kirkpatrick, Robert Sim

In this work, we introduce two privacy-preserving regularization methods for training language models that enable joint optimization of utility and privacy through (1) the use of a discriminator and (2) the inclusion of a novel triplet-loss term.

Memorization, Privacy Preserving

Membership Inference on Word Embedding and Beyond

no code implementations21 Jun 2021 Saeed Mahloujifar, Huseyin A. Inan, Melissa Chase, Esha Ghosh, Marcello Hasegawa

Indeed, our attack is a cheaper membership inference attack on text-generative models that requires neither knowledge of the target model nor any expensive training of text-generative models as shadow models.

Inference Attack, Language Modelling, +3
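
The abstract above notes that membership inference on text-generative models can be done without shadow models. As a purely illustrative stand-in (this is the classic loss-thresholding baseline, not the attack proposed in the paper), the sketch below scores a candidate sequence by the target model's average next-token loss and flags unusually low losses as likely training members.

# Illustrative loss-thresholding membership inference (a generic baseline,
# not the attack proposed in the paper above).
import torch

@torch.no_grad()
def sequence_nll(model, token_ids):
    """Average next-token negative log-likelihood of token_ids under a causal LM.

    `model` is assumed to map (batch, seq) token ids to (batch, seq, vocab) logits.
    """
    inputs, targets = token_ids[:, :-1], token_ids[:, 1:]
    log_probs = torch.log_softmax(model(inputs), dim=-1)
    nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return nll.mean().item()

def predict_membership(model, token_ids, threshold=2.0):
    """Flag a sample as a likely training member if its loss is unusually low.

    The fixed threshold is an assumption; in practice it would be calibrated
    on data with known membership labels.
    """
    return sequence_nll(model, token_ids) < threshold

# Toy usage: a stand-in "model" built from an embedding plus a linear layer.
vocab = 100
toy_model = torch.nn.Sequential(torch.nn.Embedding(vocab, 32), torch.nn.Linear(32, vocab))
ids = torch.randint(0, vocab, (1, 12))
print(predict_membership(toy_model, ids))

A real evaluation would calibrate the threshold on known member/non-member data and report the resulting trade-off curve rather than a single fixed cutoff.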
