no code implementations • 27 May 2024 • Jinjin Zhao, Ted Shaowang, Stavros Sintos, Sanjay Krishnan
Video analytics systems based on deep learning models are often opaque and brittle, and they require explanation systems to help users debug them.
no code implementations • 6 Feb 2024 • Shinan Liu, Ted Shaowang, Gerry Wan, Jeewon Chae, Jonatas Marques, Sanjay Krishnan, Nick Feamster
ServeFlow is able to make inferences on 76.3% of flows in under 16ms, which is a speed-up of 40.5x on the median end-to-end serving latency while increasing the service rate and maintaining similar accuracy.
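The fast-path numbers above suggest a two-tier serving scheme: answer most flows with a cheap model and escalate the rest. A minimal sketch of such a cascade, assuming a scikit-learn-style `predict_proba` interface and a hypothetical confidence threshold (the paper's actual model-assignment policy may differ):

```python
import numpy as np

class StubModel:
    """Stand-in for a trained flow classifier (hypothetical; the real
    system pairs a cheap model with a slower, more accurate one)."""
    def __init__(self, confidence):
        self.confidence = confidence
    def predict_proba(self, x):
        return np.array([self.confidence, 1.0 - self.confidence])

def serve(flow_features, fast_model, slow_model, threshold=0.9):
    """Answer on the fast path when the cheap model is confident;
    escalate the remaining flows to the slower model."""
    probs = fast_model.predict_proba(flow_features)
    if probs.max() >= threshold:
        return int(probs.argmax())
    return int(slow_model.predict_proba(flow_features).argmax())

print(serve([0.1, 0.2], StubModel(0.95), StubModel(0.99)))  # fast path: 0
```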
1 code implementation • 2 Jun 2023 • Edward C. Williams, Grace Su, Sandra R. Schloen, Miller C. Prosser, Susanne Paulus, Sanjay Krishnan
The end-to-end pipeline achieves a top-5 classification accuracy of 0.80.
no code implementations • 2 Mar 2023 • Ted Shaowang, Sanjay Krishnan
The relevant features for a machine learning task may arrive as one or more continuous streams of data.
no code implementations • 7 Dec 2021 • Dale Decatur, Sanjay Krishnan
Visual graphics, such as plots, charts, and figures, are widely used to communicate statistical conclusions.
no code implementations • 23 Feb 2021 • Cong Ding, Dixin Tang, Xi Liang, Aaron J. Elmore, Sanjay Krishnan
In this paper, we present CIAO, a tunable system that enables client cooperation with the server for efficient partial loading and data skipping on a given workload.
Databases
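The data skipping mentioned above can be pictured with per-block min/max metadata (zone maps): blocks whose value range cannot match a predicate are never loaded. A minimal sketch with toy blocks; CIAO's client/server cooperation and workload-driven tuning are more involved:

```python
# Illustrative zone-map data skipping, not CIAO's actual protocol.
blocks = [list(range(0, 100)), list(range(100, 200)), list(range(200, 300))]
zone_maps = [(min(b), max(b)) for b in blocks]   # per-block min/max

def scan(lo, hi):
    """Load only blocks whose [min, max] range can overlap lo <= x <= hi."""
    out, loaded = [], 0
    for block, (bmin, bmax) in zip(blocks, zone_maps):
        if bmax < lo or bmin > hi:
            continue                   # skipped without loading
        loaded += 1
        out.extend(x for x in block if lo <= x <= hi)
    return out, loaded

rows, loaded = scan(150, 160)
print(len(rows), loaded)               # 11 rows, only 1 of 3 blocks loaded
```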
no code implementations • 18 Mar 2020 • Adam Dziedzic, Vanlin Sathya, Muhammad Iqbal Rochman, Monisha Ghosh, Sanjay Krishnan
The promise of ML techniques in solving non-linear problems motivates this work, which applies known ML techniques and develops new ones for wireless spectrum sharing between Wi-Fi and LTE in the unlicensed spectrum.
no code implementations • 8 Feb 2020 • Adam Dziedzic, Sanjay Krishnan
Recent work has extensively shown that randomized perturbations of neural networks can improve robustness to adversarial attacks.
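A common template for such randomized perturbations is to classify by vote over noise-injected copies of the input. A minimal sketch, with illustrative hyperparameters that are not taken from the paper:

```python
import numpy as np

def smoothed_predict(predict_fn, x, num_classes, sigma=0.25, n=100):
    """Majority vote over Gaussian-perturbed copies of the input,
    the usual shape of a perturbation-based defense. sigma and n
    are assumed values, not the paper's."""
    votes = np.zeros(num_classes, dtype=int)
    for _ in range(n):
        votes[predict_fn(x + sigma * np.random.randn(*x.shape))] += 1
    return int(votes.argmax())

# toy base classifier: thresholds the first feature
label = smoothed_predict(lambda z: int(z[0] > 0), np.array([0.3, -0.1]), 2)
print(label)  # 1 with high probability
```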
no code implementations • 7 Feb 2020 • Rui Liu, Sanjay Krishnan, Aaron J. Elmore, Michael J. Franklin
As neural networks are increasingly employed in machine learning practice, how to efficiently share limited training resources among a diverse set of model training tasks becomes a crucial issue.
no code implementations • 21 Nov 2019 • Adam Dziedzic, John Paparrizos, Sanjay Krishnan, Aaron Elmore, Michael Franklin
The convolutional layers are core building blocks of neural network architectures.
no code implementations • 21 Nov 2019 • Vanlin Sathya, Adam Dziedzic, Monisha Ghosh, Sanjay Krishnan
This approach delivers accuracy close to 100%, outperforming auto-correlation (AC) and energy detection (ED) approaches.
no code implementations • 25 Sep 2019 • Adam Dziedzic, Sanjay Krishnan
The existence of adversarial examples, or intentional mis-predictions constructed from small changes to correctly predicted examples, is one of the most significant challenges in neural network research today.
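A tiny worked instance on a toy linear classifier (not from the paper) shows how a small, sign-aligned perturbation in the style of the fast gradient sign method flips a correct prediction:

```python
import numpy as np

w, b = np.array([1.0, -2.0]), 0.0        # toy linear model: sign(w.x + b)
x = np.array([0.5, 0.1])                 # correctly classified as +1
eps = 0.2                                # max per-feature change
x_adv = x - eps * np.sign(w)             # step against the decision score
print(np.sign(w @ x + b))                # 1.0
print(np.sign(w @ x_adv + b))            # -1.0: label flipped
```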
1 code implementation • 10 May 2019 • Zongheng Yang, Eric Liang, Amog Kamsetty, Chenggang Wu, Yan Duan, Xi Chen, Pieter Abbeel, Joseph M. Hellerstein, Sanjay Krishnan, Ion Stoica
To produce a truly usable estimator, we develop a Monte Carlo integration scheme on top of autoregressive models that can efficiently handle range queries with dozens of dimensions or more.
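The scheme can be sketched as progressive sampling: walk the columns in autoregressive order, accumulate the probability mass inside each queried range, and sample the next value from the range-restricted conditional. The `conditional(prefix)` interface and toy model below are assumptions for illustration:

```python
import numpy as np

class ToyAR:
    """Hypothetical autoregressive interface: conditional(prefix) returns
    P(next column value | prefix) as a dict. A real estimator would use
    a learned deep autoregressive model."""
    def conditional(self, prefix):
        return {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}

def estimate_selectivity(ar_model, ranges, n_samples=1000):
    """Monte Carlo (progressive sampling) estimate of a range query's
    selectivity: per column, sum the in-range probability mass, then
    sample a value from the range-restricted conditional."""
    total = 0.0
    for _ in range(n_samples):
        prefix, weight = [], 1.0
        for col_range in ranges:
            dist = ar_model.conditional(prefix)
            vals = [v for v in col_range if v in dist]
            mass = sum(dist[v] for v in vals)
            if mass == 0.0:
                weight = 0.0
                break
            weight *= mass
            probs = np.array([dist[v] for v in vals]) / mass
            prefix.append(int(np.random.choice(vals, p=probs)))
        total += weight
    return total / n_samples

# query: first column in {0,1} AND second column in {2,3} -> 0.25 exactly
print(estimate_selectivity(ToyAR(), [[0, 1], [2, 3]]))
```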
1 code implementation • 26 Apr 2019 • Sanjay Krishnan, Eugene Wu
The analyst effort in data cleaning is gradually shifting away from the design of hand-written scripts to building and tuning complex pipelines of automated data cleaning libraries.
Databases
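One way to picture the pipeline tuning described above is a greedy search that repeatedly appends whichever cleaning operator most improves a quality function. The operators and violation-count metric below are illustrative stand-ins, not the paper's search strategy:

```python
def drop_nulls(rows):
    return [r for r in rows if r is not None]

def clip_outliers(rows, lo=0, hi=100):
    return [r if r is None else min(max(r, lo), hi) for r in rows]

def greedy_pipeline(rows, operators, quality, max_steps=3):
    """At each step, append the operator that most improves the quality
    function; stop when no operator helps."""
    pipeline, best = [], quality(rows)
    for _ in range(max_steps):
        score, op = max(((quality(op(rows)), op) for op in operators),
                        key=lambda t: t[0])
        if score <= best:
            break
        rows, best = op(rows), score
        pipeline.append(op)
    return pipeline, rows

dirty = [10, None, 20, 9999, None, 30]
# quality: fewer violations (nulls or out-of-range values) is better
quality = lambda rows: -sum(1 for r in rows if r is None or not 0 <= r <= 100)
pipeline, cleaned = greedy_pipeline(dirty, [drop_nulls, clip_outliers], quality)
print([op.__name__ for op in pipeline], cleaned)
# ['drop_nulls', 'clip_outliers'] [10, 20, 100, 30]
```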
no code implementations • 9 Aug 2018 • Sanjay Krishnan, Zongheng Yang, Ken Goldberg, Joseph Hellerstein, Ion Stoica
Exhaustive enumeration of all possible join orders is often avoided, and most optimizers leverage heuristics to prune the search space.
Databases
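A classic example of such a heuristic is greedy left-deep ordering: always join the relation that is cheapest next under a simple cost model, never enumerating the full space. The cardinalities and fixed 1% selectivity below are toy values:

```python
cards = {"A": 1000, "B": 10, "C": 500}

def join_card(left_card, rel, selectivity=0.01):
    """Toy cost model: estimated cardinality of joining `rel` next."""
    return left_card * cards[rel] * selectivity

def greedy_join_order(relations):
    rels = set(relations)
    first = min(rels, key=lambda r: cards[r])      # start from the smallest
    rels.remove(first)
    order, card = [first], cards[first]
    while rels:
        nxt = min(rels, key=lambda r: join_card(card, r))
        card = join_card(card, nxt)
        order.append(nxt)
        rels.remove(nxt)
    return order

print(greedy_join_order(["A", "B", "C"]))  # ['B', 'C', 'A']
```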
no code implementations • ICLR 2018 • Roy Fox, Richard Shin, Sanjay Krishnan, Ken Goldberg, Dawn Song, Ion Stoica
Neural programs are highly accurate and structured policies that perform algorithmic tasks by controlling the behavior of a computation mechanism.
no code implementations • 4 Nov 2017 • Richard Liaw, Sanjay Krishnan, Animesh Garg, Daniel Crankshaw, Joseph E. Gonzalez, Ken Goldberg
We explore how Deep Neural Networks can represent meta-policies that switch among a set of previously learned policies, specifically in settings where the dynamics of a new scenario are composed of a mixture of previously learned dynamics and where the state observation is possibly corrupted by sensing noise.
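In its simplest form, such a meta-policy dispatches each observation to the sub-policy scored highest for it. The hand-rolled scoring functions below stand in for the learned switching network described above:

```python
import numpy as np

def meta_policy(obs, sub_policies, scores):
    """Dispatch to the sub-policy whose score for the current observation
    is highest. The paper learns the switching behavior with a deep
    network; these scoring functions are an illustrative stand-in."""
    best = int(np.argmax([s(obs) for s in scores]))
    return sub_policies[best](obs)

# toy setup: two regimes, two specialists
sub_policies = [lambda o: -1.0, lambda o: +1.0]
scores = [lambda o: -o[0], lambda o: o[0]]   # prefer policy 1 when o[0] > 0
print(meta_policy(np.array([0.7]), sub_policies, scores))  # 1.0
```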
1 code implementation • 19 Sep 2017 • Daniel Seita, Sanjay Krishnan, Roy Fox, Stephen McKinley, John Canny, Ken Goldberg
In Phase II (fine), the bias from Phase I is applied to move the end-effector toward a small set of specific target points on a printed sheet.
Robotics
no code implementations • 24 Mar 2017 • Roy Fox, Sanjay Krishnan, Ion Stoica, Ken Goldberg
Augmenting an agent's control with useful higher-level behaviors called options can greatly reduce the sample complexity of reinforcement learning, but manually designing options is infeasible in high-dimensional and abstract state spaces.
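For reference, an option in the standard formalism (Sutton et al.) is a triple of initiation set, intra-option policy, and termination condition. A generic sketch of that formalism, not this paper's option-discovery method:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Option:
    """Where the option may start, how it acts, and when it stops."""
    can_start: Callable    # initiation set I(s) -> bool
    policy: Callable       # intra-option policy pi(s) -> action
    terminate: Callable    # termination condition beta(s) -> bool

def run_option(step_fn, state, option):
    """Run one option to termination; step_fn(state, action) -> state."""
    assert option.can_start(state)
    while not option.terminate(state):
        state = step_fn(state, option.policy(state))
    return state

# toy 1-D task: "move right until position >= 5"
go_right = Option(lambda s: s < 5, lambda s: +1, lambda s: s >= 5)
print(run_option(lambda s, a: s + a, 0, go_right))  # 5
```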
no code implementations • 4 Oct 2016 • Michael Laskey, Caleb Chuck, Jonathan Lee, Jeffrey Mahler, Sanjay Krishnan, Kevin Jamieson, Anca Dragan, Ken Goldberg
Although policies learned with RC sampling can be superior to those learned with HC sampling for standard learning models such as linear SVMs, HC sampling can be comparable for highly-expressive learning models, such as deep learning and hyper-parametric decision trees, which have little model error.
no code implementations • 15 Jan 2016 • Sanjay Krishnan, Jiannan Wang, Eugene Wu, Michael J. Franklin, Ken Goldberg
Data cleaning is often an important step to ensure that predictive models, such as regression and classification, are not affected by systematic errors such as inconsistent, out-of-date, or outlier data.
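A tiny worked example (with made-up numbers, not from the paper) of how a systematic error can bias even a simple estimate until the offending rows are cleaned:

```python
import numpy as np

# A stuck sensor emits -999, a systematic error that skews the mean.
readings = np.array([10.2, 9.8, 10.1, 10.0, -999.0, -999.0])
print(readings.mean())                    # about -326.3: badly biased
clean = readings[readings > -100.0]       # rule-based cleaning step
print(clean.mean())                       # 10.025: recovered
```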
no code implementations • NeurIPS 2014 • Martin Jaggi, Virginia Smith, Martin Takáč, Jonathan Terhorst, Sanjay Krishnan, Thomas Hofmann, Michael I. Jordan
Communication remains the most significant bottleneck in the performance of distributed optimization algorithms for large-scale machine learning.