Search Results for author: Vinod Yegneswaran

Found 5 papers, 1 paper with code

Augmenting Rule-based DNS Censorship Detection at Scale with Machine Learning

1 code implementation • 3 Feb 2023 • Jacob Brown, Xi Jiang, Van Tran, Arjun Nitin Bhagoji, Nguyen Phong Hoang, Nick Feamster, Prateek Mittal, Vinod Yegneswaran

In this paper, we explore how machine learning (ML) models can (1) help streamline the detection process, (2) improve the potential of using large-scale datasets for censorship detection, and (3) discover new censorship instances and blocking signatures missed by existing heuristic methods.

Tasks: Blocking
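The abstract above describes using ML models to complement rule-based heuristics for spotting censored DNS responses. A minimal sketch of that idea follows; the feature set, weights, threshold, and blockpage IP are all illustrative assumptions, not the paper's actual model or data.

```python
# Hypothetical sketch: scoring a DNS measurement for likely censorship.
# The sentinel IP, features, and weights below are assumed for illustration.

KNOWN_BLOCKPAGE_IPS = {"10.10.34.36"}  # example blockpage IP (assumption)

def extract_features(response):
    """Turn one DNS measurement into numeric features."""
    return [
        1.0 if response["answer_ip"] in KNOWN_BLOCKPAGE_IPS else 0.0,
        1.0 if response["rcode"] == "NXDOMAIN" else 0.0,
        1.0 if response["ttl"] < 30 else 0.0,  # unusually low TTL
    ]

def censorship_score(features, weights=(0.7, 0.2, 0.1)):
    """Weighted linear score; a trained ML model would replace this rule."""
    return sum(w * f for w, f in zip(weights, features))

probe = {"answer_ip": "10.10.34.36", "rcode": "NOERROR", "ttl": 5}
print(censorship_score(extract_features(probe)) > 0.5)  # True: flagged
```

A learned classifier generalizes this by fitting the weights (and discovering new blocking signatures) from large-scale measurement data rather than hand-coding them.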

Scalable Microservice Forensics and Stability Assessment Using Variational Autoencoders

no code implementations • 23 Apr 2021 • Prakhar Sharma, Phillip Porras, Steven Cheung, James Carpenter, Vinod Yegneswaran

We present a deep learning based approach to containerized application runtime stability analysis, and an intelligent publishing algorithm that can dynamically adjust the depth of process-level forensics published to a backend incident analysis repository.
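The "dynamically adjust the depth" idea above can be sketched as a simple policy: an anomaly score for a container (for example, a variational autoencoder's reconstruction error, stubbed as an input here) selects how much process-level detail gets published. The thresholds and depth levels are assumptions for illustration, not the paper's algorithm.

```python
# Illustrative publishing policy: higher anomaly score => deeper forensics.
# Depth names and thresholds are hypothetical.

DEPTHS = ["summary", "process-tree", "full-syscall-trace"]

def publish_depth(reconstruction_error, thresholds=(0.1, 0.5)):
    """Map a VAE-style reconstruction error to a forensic depth level."""
    if reconstruction_error < thresholds[0]:
        return DEPTHS[0]          # stable container: cheap summary only
    if reconstruction_error < thresholds[1]:
        return DEPTHS[1]          # mildly unstable: publish process tree
    return DEPTHS[2]              # anomalous: publish full trace

print(publish_depth(0.05))  # summary
print(publish_depth(0.8))   # full-syscall-trace
```

The design trade-off is storage and I/O cost against investigative value: most containers stay cheap, and only anomalous ones pay for deep forensics.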

Data Masking with Privacy Guarantees

no code implementations • 8 Jan 2019 • Anh T. Pham, Shalini Ghosh, Vinod Yegneswaran

In particular, we propose a method of masking private data with privacy guarantees while ensuring that a classifier trained on the masked data is similar to one trained on the original data, so that usability is maintained.
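As a rough sketch of masking with a privacy guarantee, the snippet below perturbs feature values with Laplace noise, the standard differential-privacy mechanism. The sensitivity and epsilon values are assumptions, and the paper's method additionally constrains the downstream classifier, which this sketch does not attempt.

```python
import math
import random

# Hypothetical masking sketch: add Laplace noise to each private value.
# sensitivity/epsilon are illustrative; real use requires calibrating both.

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via inverse-transform sampling."""
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def mask(values, sensitivity=1.0, epsilon=1.0, seed=0):
    rng = random.Random(seed)          # seeded only for reproducibility here
    scale = sensitivity / epsilon
    return [v + laplace_noise(scale, rng) for v in values]

masked = mask([3.0, 1.5, 7.2])
print(len(masked))  # 3: one masked value per original
```

Smaller epsilon means larger noise and stronger privacy, at the cost of the masked data resembling the original less.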

Time Series Deinterleaving of DNS Traffic

no code implementations • 16 Jul 2018 • Amir Asiaee, Hardik Goel, Shalini Ghosh, Vinod Yegneswaran, Arindam Banerjee

Stream deinterleaving is an important problem with various applications in the cybersecurity domain.

Tasks: BIG-bench Machine Learning, Time Series, +1
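To make the deinterleaving problem concrete: given one interleaved log of DNS queries, recover the per-source substreams. The toy below cheats by using an explicit client identifier, whereas the paper's harder setting must infer the sources; it illustrates only the problem shape, not the authors' method.

```python
from collections import defaultdict

# Toy deinterleaving: split one interleaved event log into per-client
# streams. Field layout (timestamp, client, qname) is assumed.

def deinterleave(events):
    streams = defaultdict(list)
    for timestamp, client, qname in events:
        streams[client].append((timestamp, qname))
    return dict(streams)

log = [
    (1, "A", "example.com"),
    (2, "B", "mail.test"),
    (3, "A", "cdn.example.com"),
]
print(deinterleave(log)["A"])  # [(1, 'example.com'), (3, 'cdn.example.com')]
```

When the client label is missing, attribution itself becomes the learning problem, which is what makes deinterleaving relevant to cybersecurity traffic analysis.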

Trusted Neural Networks for Safety-Constrained Autonomous Control

no code implementations • 18 May 2018 • Shalini Ghosh, Amaury Mercier, Dheeraj Pichapati, Susmit Jha, Vinod Yegneswaran, Patrick Lincoln

Experiments using our first approach of a multi-headed TNN model, on a dataset generated by a customized version of TORCS, show that (1) adding safety constraints to a neural network model results in increased performance and safety, and (2) the improvement increases with increasing importance of the safety constraints.

Tasks: Self-Driving Cars
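A minimal sketch of the "adding safety constraints to a neural network" idea: augment the task loss with a weighted penalty that is zero while a safety predicate holds (here, a steering angle staying inside a bound) and grows when it is violated. The bound, weight, and loss forms are illustrative assumptions, not the TNN architecture or the TORCS setup.

```python
# Hypothetical safety-constrained loss: task loss plus a penalty term.
# bound and weight are assumed values for illustration.

def safety_penalty(steering, bound=0.5):
    """Zero when |steering| <= bound, quadratic in the excess outside it."""
    excess = max(0.0, abs(steering) - bound)
    return excess ** 2

def constrained_loss(task_loss, steering, weight=10.0):
    """Total training loss = task loss + weight * safety violation."""
    return task_loss + weight * safety_penalty(steering)

print(constrained_loss(0.2, 0.4))  # within the bound: no penalty added
print(constrained_loss(0.2, 0.7))  # outside the bound: loss is penalized
```

Raising the weight corresponds to increasing the importance of the safety constraint, which is the knob the paper's second finding varies.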
