1 code implementation • 17 Dec 2020 • Edward Raff, William Fleshman, Richard Zak, Hyrum S. Anderson, Bobby Filar, Mark McLean
Recent work in machine learning has tackled inputs of ever-increasing size, with cybersecurity presenting sequence classification problems of particularly extreme length.
no code implementations • 22 Oct 2020 • Edward Raff, Bobby Filar, James Holt
We propose a strategy for fixing false positives after a model has already been deployed to production.
1 code implementation • 6 Sep 2020 • Edward Raff, Richard Zak, Gary Lopez Munoz, William Fleming, Hyrum S. Anderson, Bobby Filar, Charles Nicholas, James Holt
Yara rules are a ubiquitous tool among cybersecurity practitioners and analysts.
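At its core, a Yara rule pairs a set of named byte patterns with a boolean condition over which patterns matched. A minimal pure-Python sketch of that idea (the rule contents and sample bytes here are hypothetical illustrations, not taken from the paper):

```python
import re

# Hypothetical simplification of what a Yara rule expresses: named byte
# patterns plus a boolean condition over which patterns were found.
rule = {
    "name": "demo_rule",
    "strings": {
        "$mz": rb"MZ",        # DOS header magic bytes
        "$s1": rb"http://",   # embedded URL marker
    },
    # condition: every listed pattern must appear in the sample
    "condition": lambda hits: all(hits.values()),
}

def scan(sample: bytes, rule: dict) -> bool:
    """Return True if the sample satisfies the rule's condition."""
    hits = {name: re.search(pattern, sample) is not None
            for name, pattern in rule["strings"].items()}
    return rule["condition"](hits)

sample = b"MZ\x90\x00...http://example.test..."
print(scan(sample, rule))  # both patterns present -> True
```

Real Yara rules support far richer patterns (hex wildcards, regexes, offsets, counts); the `yara-python` bindings expose the actual engine.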
no code implementations • 20 Feb 2018 • Miles Brundage, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, Hyrum Anderson, Heather Roff, Gregory C. Allen, Jacob Steinhardt, Carrick Flynn, Seán Ó hÉigeartaigh, SJ Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, Rebecca Crootof, Owain Evans, Michael Page, Joanna Bryson, Roman Yampolskiy, Dario Amodei
This report surveys the landscape of potential security threats from malicious uses of AI, and proposes ways to better forecast, prevent, and mitigate these threats.
4 code implementations • arXiv 2018 • Hyrum S. Anderson, Anant Kharkar, Bobby Filar, David Evans, Phil Roth
We show in experiments that our method can attack a gradient-boosted machine learning model, achieving evasion rates that are substantial and appear to be strongly dataset-dependent.
no code implementations • 6 Oct 2016 • Hyrum S. Anderson, Jonathan Woodbridge, Bobby Filar
We test the hypothesis that adversarially generated domains can be used to augment training sets in order to harden other machine learning models against yet-to-be-observed DGAs.
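The augmentation idea can be sketched as follows: label generator output as malicious and mix it into the training set so a downstream detector sees adversarial-style domains before they appear in the wild. The random-string generator below is a hypothetical stand-in for the paper's adversarial generator, and the example domains are invented:

```python
import random

def generate_domains(n: int, length: int = 12, seed: int = 0) -> list[str]:
    """Hypothetical stand-in for an adversarially trained domain generator."""
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    return ["".join(rng.choice(alphabet) for _ in range(length)) + ".com"
            for _ in range(n)]

# Toy training set: benign domains labeled 0, known DGA domains labeled 1.
train = [("google.com", 0), ("wikipedia.org", 0),
         ("qxzjvkpwma.com", 1), ("zzqkvjxwpt.net", 1)]

# Augment the malicious class with generated domains; a classifier fit on
# the augmented set is then evaluated against held-out, unseen DGA families.
augmented = train + [(domain, 1) for domain in generate_domains(100)]
print(len(augmented))  # 4 original + 100 generated = 104
```

The interesting empirical question the paper tests is whether this augmentation improves detection of DGA families that contributed no training data at all.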