Search Results for author: Daniel Kang

Found 15 papers, 4 papers with code

Proof: Accelerating Approximate Aggregation Queries with Expensive Predicates

no code implementations 27 Jul 2021 Daniel Kang, John Guibas, Peter Bailis, Tatsunori Hashimoto, Yi Sun, Matei Zaharia

Given a dataset $\mathcal{D}$, we are interested in computing the mean of a subset of $\mathcal{D}$ which matches a predicate.
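The predicate here is assumed to be expensive to evaluate (e.g., it requires running a large ML model on each record), so only a limited number of predicate calls can be afforded. A minimal sketch of the uniform-sampling baseline such a system improves upon, with hypothetical callables (`expensive_predicate`, `statistic`) and an illustrative oracle budget; the paper's contribution is to allocate this budget far more efficiently than uniform sampling:

```python
import random

def sampled_conditional_mean(dataset, expensive_predicate, statistic,
                             oracle_budget=1000, seed=0):
    """Naive baseline: estimate the mean of statistic(x) over the records that
    match an expensive predicate by spending the oracle budget on a uniform sample."""
    rng = random.Random(seed)
    sample = rng.sample(dataset, min(oracle_budget, len(dataset)))
    matched = [statistic(x) for x in sample if expensive_predicate(x)]
    return sum(matched) / len(matched) if matched else float("nan")
```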

Jointly Optimizing Preprocessing and Inference for DNN-based Visual Analytics

no code implementations 25 Jul 2020 Daniel Kang, Ankit Mathur, Teja Veeramacheneni, Peter Bailis, Matei Zaharia

This runtime engine a) efficiently pipelines preprocessing and DNN execution for inference, b) places preprocessing operations on the CPU or GPU in a hardware- and input-aware manner, and c) efficiently manages memory and threading for high throughput execution.
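As an illustration of point a) only, a minimal sketch of overlapping preprocessing with batched DNN execution through a bounded queue; `preprocess` and `run_dnn` are placeholder callables, and the batch size and queue depth are arbitrary rather than taken from the paper:

```python
import queue
import threading

def pipelined_inference(inputs, preprocess, run_dnn, batch_size=32, depth=64):
    """Overlap CPU-side preprocessing with batched DNN execution via a bounded queue."""
    q = queue.Queue(maxsize=depth)   # bounded so preprocessing cannot run far ahead
    done = object()

    def producer():
        for x in inputs:
            q.put(preprocess(x))     # decode / resize / normalize on the CPU
        q.put(done)

    threading.Thread(target=producer, daemon=True).start()

    results, batch = [], []
    while True:
        item = q.get()
        if item is not done:
            batch.append(item)
        if len(batch) == batch_size or (item is done and batch):
            results.extend(run_dnn(batch))   # batched inference (e.g., on the GPU)
            batch = []
        if item is done:
            return results
```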

Improved Natural Language Generation via Loss Truncation

no code implementations ACL 2020 Daniel Kang, Tatsunori Hashimoto

In this work, we show that the distinguishability of the models and reference serves as a principled and robust alternative for handling invalid references.

Text Generation
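A rough sketch of the loss-truncation idea behind this paper: drop the highest-loss examples in each batch so that likely-invalid references do not dominate training. The drop fraction is illustrative, and the paper's quantile-tracking and hot-swapping details are omitted:

```python
import torch

def truncated_loss(per_example_loss: torch.Tensor, drop_frac: float = 0.1) -> torch.Tensor:
    """Average only the lowest-loss (1 - drop_frac) fraction of examples in the batch."""
    keep = max(1, int(round((1.0 - drop_frac) * per_example_loss.numel())))
    kept, _ = torch.topk(per_example_loss, keep, largest=False)
    return kept.mean()
```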

Model Assertions for Monitoring and Improving ML Models

no code implementations 3 Mar 2020 Daniel Kang, Deepti Raghavan, Peter Bailis, Matei Zaharia

We propose methods of using model assertions at all stages of ML system deployment, including runtime monitoring, validating labels, and continuously improving ML models.

Active Learning
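Model assertions are black-box checks over model inputs and outputs that flag likely errors. A hypothetical assertion in the spirit of the paper's video examples: an object that disappears for a single frame between two detections is probably a missed box. The detection schema used here is made up for illustration:

```python
def flicker_assertion(frame_detections, cls="car"):
    """Hypothetical model assertion: flag frames where an object class disappears
    for exactly one frame between two detections (a likely missed box).
    Returns indices of frames to route to label validation or retraining."""
    present = [any(d["class"] == cls for d in dets) for dets in frame_detections]
    return [i for i in range(1, len(present) - 1)
            if present[i - 1] and not present[i] and present[i + 1]]
```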

LIT: Learned Intermediate Representation Training for Model Compression

1 code implementation 4 Sep 2019 Animesh Koratana, Daniel Kang, Peter Bailis, Matei Zaharia

In this work, we introduce Learned Intermediate representation Training (LIT), a novel model compression technique that outperforms a range of recent model compression techniques by leveraging the highly repetitive structure of modern DNNs (e.g., ResNet).

Image Classification, Model Compression +2
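A rough sketch of intermediate-representation training as described: the student's blocks are trained to reproduce the teacher's block outputs, with the teacher's activations fed in as block inputs. This is a simplified reading of the technique, not the released implementation:

```python
import torch
import torch.nn.functional as F

def block_wise_representation_loss(student_blocks, teacher_blocks, x):
    """Penalize each student block for deviating from the corresponding teacher
    block's output, exploiting the repeated-block structure of modern DNNs."""
    loss = 0.0
    h = x
    for s_block, t_block in zip(student_blocks, teacher_blocks):
        with torch.no_grad():
            target = t_block(h)               # teacher's output for this block
        loss = loss + F.mse_loss(s_block(h), target)
        h = target                            # next block sees teacher activations
    return loss
```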

Testing Robustness Against Unforeseen Adversaries

2 code implementations 21 Aug 2019 Daniel Kang, Yi Sun, Dan Hendrycks, Tom Brown, Jacob Steinhardt

Adversaries adapt and evolve their attacks; hence adversarial defenses must be robust to a broad range of unforeseen attacks.

Adversarial Defense

Willump: A Statistically-Aware End-to-end Optimizer for Machine Learning Inference

no code implementations 3 Jun 2019 Peter Kraft, Daniel Kang, Deepak Narayanan, Shoumik Palkar, Peter Bailis, Matei Zaharia

First, Willump automatically cascades feature computation for classification queries: Willump classifies most data inputs using only high-value, low-cost features selected through empirical observations of ML model performance, improving query performance by up to 5x without statistically significant accuracy loss.
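A minimal sketch of such a feature cascade, assuming scikit-learn-style models; the confidence threshold and the split into cheap versus full features are illustrative, whereas Willump selects them automatically from empirical observations of model performance. Inputs that exit early never pay for the expensive features, which is where the reported up-to-5x speedup comes from:

```python
def cascade_classify(x, cheap_features, all_features, approx_model, full_model,
                     confidence_threshold=0.9):
    """Score with cheap, high-value features first; only compute the remaining
    expensive features when the approximate model is not confident enough."""
    probs = approx_model.predict_proba([cheap_features(x)])[0]
    if max(probs) >= confidence_threshold:
        return int(probs.argmax())                       # early exit on confident inputs
    return full_model.predict([all_features(x)])[0]      # fall back to the full pipeline
```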

Transfer of Adversarial Robustness Between Perturbation Types

no code implementations 3 May 2019 Daniel Kang, Yi Sun, Tom Brown, Dan Hendrycks, Jacob Steinhardt

We study the transfer of adversarial robustness of deep neural networks between different perturbation types.

Adversarial Robustness

Network Offloading Policies for Cloud Robotics: a Learning-based Approach

no code implementations 15 Feb 2019 Sandeep Chinchali, Apoorva Sharma, James Harrison, Amine Elhafsi, Daniel Kang, Evgenya Pergament, Eyal Cidon, Sachin Katti, Marco Pavone

In this paper, we formulate a novel Robot Offloading Problem: how and when should robots offload sensing tasks, especially if they are uncertain, to improve accuracy while minimizing the cost of cloud communication?

Decision Making, Object Detection
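The paper learns this offloading policy; purely as a stand-in to make the decision concrete, here is a hand-written heuristic with made-up thresholds rather than the learned approach:

```python
def should_offload(onboard_confidence, last_offload_age_s,
                   conf_threshold=0.6, max_staleness_s=5.0, network_ok=True):
    """Heuristic stand-in for a learned offloading policy: query the cloud model
    when the on-robot model is uncertain or its last cloud-refreshed answer has
    gone stale, provided the network cost is currently acceptable."""
    uncertain = onboard_confidence < conf_threshold
    stale = last_offload_age_s > max_staleness_s
    return network_ok and (uncertain or stale)
```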

LIT: Block-wise Intermediate Representation Training for Model Compression

no code implementations ICLR 2019 Animesh Koratana, Daniel Kang, Peter Bailis, Matei Zaharia

Knowledge distillation (KD) is a popular method for reducing the computational overhead of deep network inference, in which the output of a teacher model is used to train a smaller, faster student model.

Knowledge Distillation, Model Compression
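For reference, the standard knowledge-distillation objective the abstract describes, with conventional (illustrative) temperature and mixing weight; LIT itself goes further by also matching block-wise intermediate representations:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Knowledge distillation: match the teacher's temperature-softened outputs,
    plus a hard-label cross-entropy term on the ground-truth labels."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```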

Analysis of DAWNBench, a Time-to-Accuracy Machine Learning Performance Benchmark

no code implementations 4 Jun 2018 Cody Coleman, Daniel Kang, Deepak Narayanan, Luigi Nardi, Tian Zhao, Jian Zhang, Peter Bailis, Kunle Olukotun, Chris Re, Matei Zaharia

In this work, we analyze the entries from DAWNBench, which received optimized submissions from multiple industrial groups, to investigate the behavior of TTA as a metric as well as trends in the best-performing entries.

BlazeIt: Optimizing Declarative Aggregation and Limit Queries for Neural Network-Based Video Analytics

no code implementations 2 May 2018 Daniel Kang, Peter Bailis, Matei Zaharia

We introduce two new query optimization techniques in BlazeIt that are not supported by prior work.

Databases

Model Specialization for Inference Via End-to-End Distillation, Pruning, and Cascades

no code implementations ICLR 2018 Daniel Kang, Karey Shi, Thao Nguyen, Stephanie Mallard, Peter Bailis, Matei Zaharia

Thus, simply fine-tuning or transfer learning from a general-purpose network inherits a large computational cost that may not be necessary for a given task.

General Classification, Image Classification

NoScope: Optimizing Neural Network Queries over Video at Scale

1 code implementation 7 Mar 2017 Daniel Kang, John Emmons, Firas Abuzaid, Peter Bailis, Matei Zaharia

Given a target video, object to detect, and reference neural network, NoScope automatically searches for and trains a sequence, or cascade, of models that preserves the accuracy of the reference network but is specialized to the target video and is therefore far less computationally expensive.
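A highly simplified sketch of such a cascade for a binary "is the object present in this frame?" query; the difference detector, specialized model, and thresholds are placeholders, whereas NoScope searches for and tunes these components automatically:

```python
def cascaded_video_query(frames, difference_detector, specialized_model,
                         reference_model, low=0.1, high=0.9):
    """Skip near-duplicate frames, answer easy frames with a cheap
    video-specialized model, and call the expensive reference network
    only when the cheap model's score is in the uncertain band."""
    hits = []
    for i, frame in enumerate(frames):
        if not difference_detector(frame):        # frame barely changed: skip it
            continue
        score = specialized_model(frame)          # cheap specialized score in [0, 1]
        if score >= high or (low < score < high and reference_model(frame)):
            hits.append(i)                        # object present in this frame
    return hits
```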
