Search Results for author: Steve Chien

Found 12 papers, 5 papers with code

Using Unsupervised and Supervised Learning and Digital Twin for Deep Convective Ice Storm Classification

no code implementations • 12 Sep 2023 • Jason Swope, Steve Chien, Emily Dunkel, Xavier Bosch-Lluis, Qing Yue, William Deal

Critical to the intelligent targeting is accurate identification of storm/cloud types from eight bands of radiance collected by the radiometer.

How to DP-fy ML: A Practical Guide to Machine Learning with Differential Privacy

1 code implementation • 1 Mar 2023 • Natalia Ponomareva, Hussein Hazimeh, Alex Kurakin, Zheng Xu, Carson Denison, H. Brendan McMahan, Sergei Vassilvitskii, Steve Chien, Abhradeep Thakurta

However, while some adoption of DP has happened in industry, attempts to apply DP to real world complex ML models are still few and far between.

Temporal Multimodal Multivariate Learning

no code implementations • 14 Jun 2022 • Hyoshin Park, Justice Darko, Niharika Deshpande, Venktesh Pandey, Hui Su, Masahiro Ono, Dedrick Barkely, Larkin Folsom, Derek Posselt, Steve Chien

We introduce temporal multimodal multivariate learning, a new family of decision-making models that can indirectly learn and transfer online information, from one time stage to another, from simultaneous observations of a probability distribution with more than one peak or more than one outcome variable.

Decision Making

Detecting Unintended Memorization in Language-Model-Fused ASR

no code implementations • 20 Apr 2022 • W. Ronny Huang, Steve Chien, Om Thakkar, Rajiv Mathews

End-to-end (E2E) models are often paired with language models (LMs) via shallow fusion to boost their overall quality as well as their recognition of rare words.

Language Modelling · Memorization
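The shallow fusion the abstract mentions can be sketched as a per-token log-linear interpolation of the E2E model's score and the external LM's score at decoding time. The weight name `lam`, the dict layout, and the numeric values below are illustrative assumptions, not the paper's API:

```python
def shallow_fusion_score(e2e_log_probs, lm_log_probs, lam=0.3):
    """Fuse per-token log-probabilities from the E2E ASR model and an
    external LM. lam is the LM interpolation weight, normally tuned on
    a dev set; 0.3 here is only a placeholder."""
    return [e + lam * l for e, l in zip(e2e_log_probs, lm_log_probs)]

def best_hypothesis(hypotheses):
    """Rescore beam-search hypotheses and keep the one whose total
    fused log-probability is highest. Each hypothesis is a dict with
    per-token 'e2e' and 'lm' log-probability lists (assumed layout)."""
    return max(hypotheses,
               key=lambda h: sum(shallow_fusion_score(h["e2e"], h["lm"])))
```

Memorization concerns arise because the fused LM can pull decoding toward sequences it saw verbatim during its own training.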

Toward Training at ImageNet Scale with Differential Privacy

1 code implementation • 28 Jan 2022 • Alexey Kurakin, Shuang Song, Steve Chien, Roxana Geambasu, Andreas Terzis, Abhradeep Thakurta

Despite a rich literature on how to train ML models with differential privacy, it remains extremely challenging to train real-life, large neural networks with both reasonable accuracy and privacy.

Image Classification with Differential Privacy

Membership Inference Attacks From First Principles

2 code implementations • 7 Dec 2021 • Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, Florian Tramer

A membership inference attack allows an adversary to query a trained machine learning model to predict whether or not a particular example was contained in the model's training dataset.

Inference Attack · Membership Inference Attack
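The attack the abstract defines can be illustrated by its simplest baseline: predict "member" when the model's loss on an example is below a threshold, since training examples tend to have lower loss. This is only the classic loss-threshold baseline, not the paper's stronger likelihood-ratio attack; the threshold value is an assumption:

```python
import math

def example_loss(confidence_on_true_label):
    """Per-example cross-entropy loss, computed from the model's
    predicted probability for the example's true label."""
    return -math.log(confidence_on_true_label)

def predict_member(confidence_on_true_label, threshold=0.5):
    """Loss-threshold membership inference: claim the example was in
    the training set when its loss is below the threshold. The
    threshold 0.5 is illustrative; in practice it is calibrated to a
    target false-positive rate."""
    return example_loss(confidence_on_true_label) < threshold
```

The paper argues such attacks should be evaluated at low false-positive rates rather than by average-case accuracy.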

Using Explainable Scheduling for the Mars 2020 Rover Mission

no code implementations • 17 Nov 2020 • Jagriti Agrawal, Amruta Yelamanchili, Steve Chien

In this paper, we describe such a scheduling system for NASA's Mars 2020 Perseverance Rover, as well as Crosscheck, an explainable scheduling tool that explains the scheduler behavior.

Scheduling

Tempered Sigmoid Activations for Deep Learning with Differential Privacy

1 code implementation • 28 Jul 2020 • Nicolas Papernot, Abhradeep Thakurta, Shuang Song, Steve Chien, Úlfar Erlingsson

Because learning sometimes involves sensitive data, machine learning algorithms have been extended to offer privacy for training data.

Privacy Preserving · Privacy Preserving Deep Learning
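The tempered sigmoids of the title form a bounded activation family with a scale, an inverse temperature, and an offset; with the defaults below the function reduces to tanh. The parameterization follows the family described in the paper, but treat the exact parameter names and defaults as an assumption:

```python
import math

def tempered_sigmoid(x, s=2.0, T=2.0, o=1.0):
    """Tempered sigmoid: s / (1 + exp(-T * x)) - o.
    s scales the output range, T sharpens or flattens the transition,
    and o shifts the output. The defaults (s=2, T=2, o=1) recover
    tanh(x). Bounding activations this way keeps gradient norms small,
    which interacts well with the per-example clipping used in
    differentially private (DP-SGD-style) training."""
    return s / (1.0 + math.exp(-T * x)) - o
```

For example, `tempered_sigmoid(x)` with the defaults matches `math.tanh(x)` for any `x`.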

Making the Shoe Fit: Architectures, Initializations, and Tuning for Learning with Privacy

no code implementations • 25 Sep 2019 • Nicolas Papernot, Steve Chien, Shuang Song, Abhradeep Thakurta, Ulfar Erlingsson

Because learning sometimes involves sensitive data, standard machine-learning algorithms have been extended to offer strong privacy guarantees for training data.

Privacy Preserving

A General Approach to Adding Differential Privacy to Iterative Training Procedures

4 code implementations • 15 Dec 2018 • H. Brendan McMahan, Galen Andrew, Ulfar Erlingsson, Steve Chien, Ilya Mironov, Nicolas Papernot, Peter Kairouz

In this work we address the practical challenges of training machine learning models on privacy-sensitive datasets by introducing a modular approach that minimizes changes to training algorithms, provides a variety of configuration strategies for the privacy mechanism, and then isolates and simplifies the critical logic that computes the final privacy guarantees.
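The core mechanism such modular DP training wraps around an existing optimizer is per-example gradient clipping followed by Gaussian noise. A minimal sketch, with gradients as plain lists and illustrative hyperparameter values (not the paper's implementation):

```python
import random

def dp_gradient_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
    """One differentially private gradient aggregation step:
    1. clip each example's gradient to L2 norm <= clip_norm,
    2. sum the clipped gradients,
    3. add Gaussian noise with std noise_multiplier * clip_norm,
    4. average over the batch.
    The final privacy guarantee is then computed separately by an
    accountant from noise_multiplier, batch size, and step count."""
    n = len(per_example_grads)
    dim = len(per_example_grads[0])
    summed = [0.0] * dim
    for grad in per_example_grads:
        norm = sum(v * v for v in grad) ** 0.5
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for i, v in enumerate(grad):
            summed[i] += v * scale
    sigma = noise_multiplier * clip_norm
    return [(v + random.gauss(0.0, sigma)) / n for v in summed]
```

Isolating the clip-and-noise step like this is what lets the privacy accounting be reasoned about independently of the underlying training loop.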
