Visual representations underlie object recognition tasks, but they often contain both robust and non-robust features.
In this paper, we introduce a new evasion attack, DIVA, that exploits the differences introduced by edge adaptation, adding adversarial noise to the input that maximizes the output difference between the original and adapted models.
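The attack can be sketched as a small projected-gradient loop over the input. The PyTorch fragment below is a minimal illustration under assumed hyperparameters (`eps`, `alpha`, `steps`) and a simplified divergence loss; it is not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def diva_style_attack(orig_model, adapted_model, x, eps=8/255, alpha=2/255, steps=20):
    """PGD-style sketch: perturb x to maximize the output gap between the
    original model and its edge-adapted (e.g., quantized) variant.
    Loss and hyperparameters are illustrative assumptions."""
    x = x.detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        p_orig = F.softmax(orig_model(x_adv), dim=1)
        p_adapt = F.softmax(adapted_model(x_adv), dim=1)
        # Drive the adapted model's prediction away from the original's.
        gap = (p_orig - p_adapt).abs().sum()
        grad = torch.autograd.grad(gap, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # stay in the L-inf ball
            x_adv = x_adv.clamp(0, 1)                 # keep valid pixel range
    return x_adv.detach()
```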
In this paper, we propose a novel defense that can dynamically adapt the input using the intrinsic structure from multiple self-supervised tasks.
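One concrete way to realize such input adaptation is to purify the incoming example by descending a self-supervised loss at test time. The sketch below uses a single rotation-prediction task for brevity; the `backbone`, `rot_head`, and step sizes are hypothetical stand-ins, not the paper's full multi-task setup.

```python
import torch
import torch.nn.functional as F

def purify(x, backbone, rot_head, steps=5, lr=0.1):
    """Test-time sketch: adapt the *input* (not the weights) by minimizing
    a self-supervised rotation-prediction loss, on the premise that fixing
    self-supervision errors also strips adversarial noise."""
    x_adapt = x.clone().detach().requires_grad_(True)
    opt = torch.optim.SGD([x_adapt], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.0
        for k in range(4):  # the four 90-degree rotations
            rotated = torch.rot90(x_adapt, k, dims=(2, 3))
            logits = rot_head(backbone(rotated))
            target = torch.full((x.size(0),), k, dtype=torch.long, device=x.device)
            loss = loss + F.cross_entropy(logits, target)
        loss.backward()
        opt.step()
        with torch.no_grad():
            x_adapt.clamp_(0, 1)  # keep a valid image
    return x_adapt.detach()
```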
The overhead of the kernel storage path accounts for half of the access latency for new NVMe storage devices.
We introduce a framework for learning robust visual representations that generalize to new viewpoints, backgrounds, and scene contexts.
We thus train the model to learn execution semantics from the functions' micro-traces, without any manual labeling effort.
We present the results of three experiments comparing representations of millions of images with exhaustively shifted objects, examining both local invariance (within a few pixels) and global invariance (across the image frame).
We present XDA, a transfer-learning-based disassembly framework that learns different contextual dependencies present in machine code and transfers this knowledge for accurate and robust disassembly.
Although deep networks achieve strong accuracy on a range of computer vision benchmarks, they remain vulnerable to adversarial attacks, where imperceptible input perturbations fool the network.
We propose a live attack on deep learning systems that patches model parameters in memory to achieve predefined malicious behavior on a certain set of inputs.
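At the framework level, the effect of such a live patch can be illustrated in a few lines. The toy sketch below biases a classifier head toward an attacker-chosen class; the `fc` attribute and `scale` are assumptions, the toy version shifts all inputs rather than a trigger set, and a real live attack would write the bytes directly into the victim process's memory rather than go through PyTorch.

```python
import torch

@torch.no_grad()
def patch_final_layer(model, target_class, scale=10.0):
    """Toy illustration of a parameter patch: inflate one logit's bias so
    predictions flip toward `target_class`. Unlike the paper's attack, this
    naive version affects every input, not a chosen trigger set."""
    head = model.fc  # assumes a torchvision-style model with a `fc` head
    head.bias[target_class] += scale
    return model
```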
Due to the inherent robustness of segmentation models, traditional norm-bounded attack methods have limited effect on such models.
Our approach can check different safety properties and find concrete counterexamples for networks that are 10$\times$ larger than the ones supported by existing analysis techniques.
In this paper, we present a new direction for formally checking security properties of DNNs without using SMT solvers.
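One solver-free route is interval arithmetic: propagate elementwise input bounds through the network and check the property on the resulting output bounds. The sketch below implements naive interval propagation for affine-plus-ReLU layers; it is illustrative of the idea rather than the exact symbolic analysis, and assumes a ReLU after every layer.

```python
import numpy as np

def interval_forward(layers, lb, ub):
    """Propagate elementwise input bounds [lb, ub] through (W, b) layers
    followed by ReLU; returns sound (possibly loose) output bounds."""
    for W, b in layers:
        pos, neg = np.maximum(W, 0), np.minimum(W, 0)
        new_lb = pos @ lb + neg @ ub + b   # lower bound of W x + b
        new_ub = pos @ ub + neg @ lb + b   # upper bound of W x + b
        lb, ub = np.maximum(new_lb, 0), np.maximum(new_ub, 0)  # ReLU
    return lb, ub
```

If the safety property holds on these sound but loose output bounds, it holds for every input in the box; otherwise the analysis must tighten the bounds or search the box for a concrete counterexample.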
Finally, we show that retraining using the safety violations detected by VeriVis can reduce the average number of violations by up to 60.2%.
We incorporate domain-specific knowledge to design our PanNet architecture by focusing on the two aims of the pan-sharpening problem: spectral and spatial preservation.
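A minimal sketch of how these two aims can be wired into a network: spatial detail is learned in the high-pass domain, while a skip connection carries the up-sampled multispectral image to the output to preserve spectral content. Module names and sizes below are hypothetical.

```python
import torch
import torch.nn as nn

class PanSharpenSketch(nn.Module):
    """Illustrative pan-sharpening net: predict high-frequency detail from
    high-pass inputs (spatial preservation) and add it to the up-sampled
    multispectral image (spectral preservation)."""
    def __init__(self, ms_bands=4, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ms_bands + 1, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, ms_bands, 3, padding=1),
        )

    def forward(self, pan_hp, ms_hp_up, ms_up):
        # pan_hp: high-pass panchromatic band; ms_hp_up: up-sampled
        # high-pass MS bands; ms_up: up-sampled low-resolution MS image.
        detail = self.body(torch.cat([pan_hp, ms_hp_up], dim=1))
        return ms_up + detail  # skip connection keeps spectral content
```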
First, we introduce neuron coverage for systematically measuring the parts of a DL system exercised by test inputs.
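As a rough illustration, neuron coverage can be computed with forward hooks: a neuron counts as covered once its scaled activation exceeds a threshold on at least one test input. The threshold, the restriction to `nn.ReLU` outputs, and the per-input min-max scaling below are illustrative choices, not the paper's exact instrumentation.

```python
import torch
import torch.nn as nn

def neuron_coverage(model, inputs, threshold=0.25):
    """Fraction of neurons whose scaled activation exceeds `threshold` on
    at least one input. Assumes each nn.ReLU module instance appears only
    once in the forward pass (no module reuse across layers)."""
    covered = {}

    def hook(name):
        def fn(module, inp, out):
            act = out.flatten(1)                      # (batch, neurons)
            a_min = act.min(dim=1, keepdim=True).values
            a_max = act.max(dim=1, keepdim=True).values
            scaled = (act - a_min) / (a_max - a_min + 1e-8)
            fired = (scaled > threshold).any(dim=0)   # covered by any input
            covered[name] = covered.get(name, torch.zeros_like(fired)) | fired
        return fn

    handles = [m.register_forward_hook(hook(n))
               for n, m in model.named_modules() if isinstance(m, nn.ReLU)]
    with torch.no_grad():
        model(inputs)
    for h in handles:
        h.remove()
    total = sum(v.numel() for v in covered.values())
    hit = sum(int(v.sum()) for v in covered.values())
    return hit / max(total, 1)
```

The returned value lies in [0, 1]; coverage-guided testing then generates new inputs that try to drive this number upward, on the intuition that exercising more neurons exercises more of the system's logic.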