no code implementations • 9 Apr 2024 • Shaona Ghosh, Prasoon Varshney, Erick Galinkin, Christopher Parisien
As Large Language Models (LLMs) and generative AI become more widespread, the content safety risks associated with their use also increase.
no code implementations • 8 Dec 2022 • Erick Galinkin, Emmanouil Pountourakis, John Carter, Spiros Mancoridis
In the cybersecurity setting, defenders are often at the mercy of their detection technologies and limited by the information and experience of individual analysts.
no code implementations • 7 Mar 2022 • Erick Galinkin
Explainability in machine learning has become increasingly important as machine learning-powered systems become ubiquitous and both regulation and public sentiment begin to demand an understanding of how these systems make decisions.
no code implementations • 6 Mar 2022 • Erick Galinkin
Legislation and public sentiment throughout the world have promoted fairness metrics, explainability, and interpretability as prescriptions for the responsible development of ethical artificial intelligence systems.
no code implementations • 23 Sep 2021 • Erick Galinkin, John Carter, Spiros Mancoridis
In cybersecurity, attackers range from brash, unsophisticated script kiddies and cybercriminals to stealthy, patient advanced persistent threats.
no code implementations • 30 Jul 2021 • Erick Galinkin
In many cases, neural networks perform well on test data, but tend to overestimate their confidence on out-of-distribution data.
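This overconfidence phenomenon can be reproduced even in the simplest settings. The sketch below is a hypothetical illustration (not from the paper): a one-variable logistic regression is trained on inputs in [-1, 1], then queried far outside that range, where the saturating sigmoid reports near-certain confidence.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical setup: 1-D logistic regression, labels y = 1[x > 0],
# trained only on inputs drawn from [-1, 1].
x = np.linspace(-1, 1, 200)
y = (x > 0).astype(float)

w, b = 0.0, 0.0
lr = 0.5
for _ in range(2000):              # plain gradient descent on log loss
    p = sigmoid(w * x + b)
    w -= lr * np.mean((p - y) * x)
    b -= lr * np.mean(p - y)

in_dist_conf = sigmoid(w * 0.5 + b)    # confidence near the training data
ood_conf = sigmoid(w * 100.0 + b)      # far out-of-distribution input
```

The model has never seen anything like `x = 100`, yet it reports essentially 100% confidence there — the saturation, not the data, drives the prediction.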
no code implementations • 19 May 2021 • Abhishek Gupta, Alexandrine Royer, Connor Wright, Falaah Arif Khan, Victoria Heath, Erick Galinkin, Ryan Khurana, Marianna Bergamaschi Ganapini, Muriam Fancy, Masa Sweidan, Mo Akif, Renjie Butalid
The 3rd edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in AI Ethics since October 2020.
no code implementations • 16 Mar 2021 • Erick Galinkin
Differentially private models seek to protect the privacy of the data a model is trained on, making differential privacy an important component of model security and privacy.
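Differential privacy is typically achieved by adding noise calibrated to a query's sensitivity. As a minimal sketch of the idea (a generic Laplace-mechanism illustration, not code from the paper), here is a private release of a dataset's mean:

```python
import numpy as np

def laplace_mean(data, epsilon, lower, upper):
    """Release a differentially private mean via the Laplace mechanism.

    With values clipped to [lower, upper], the sensitivity of the mean
    over n records is (upper - lower) / n; the Laplace noise scale is
    sensitivity / epsilon.
    """
    data = np.clip(data, lower, upper)
    sensitivity = (upper - lower) / len(data)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return data.mean() + noise

rng = np.random.default_rng(0)
values = rng.uniform(0.0, 1.0, size=1000)
private_mean = laplace_mean(values, epsilon=1.0, lower=0.0, upper=1.0)
```

With 1000 records the noise scale is tiny, so the private estimate stays close to the true mean; smaller epsilon (stronger privacy) would widen that gap.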
no code implementations • 5 Nov 2020 • Abhishek Gupta, Alexandrine Royer, Victoria Heath, Connor Wright, Camylle Lanteigne, Allison Cohen, Marianna Bergamaschi Ganapini, Muriam Fancy, Erick Galinkin, Ryan Khurana, Mo Akif, Renjie Butalid, Falaah Arif Khan, Masa Sweidan, Audrey Balogh
The 2nd edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in the field of AI Ethics since July 2020.
1 code implementation • 16 Sep 2020 • Erick Galinkin
Our results can guide analysis methods for machine learning engineers and suggest that neural networks exploiting the convolution theorem can match the accuracy of standard convolutional neural networks while being more computationally efficient.
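The convolution theorem underlying this claim states that circular convolution in the spatial domain equals pointwise multiplication in the frequency domain, which lets an O(n²) operation be computed in O(n log n) via the FFT. A minimal 1-D illustration (a generic demonstration, not the paper's implementation):

```python
import numpy as np

n = 64
rng = np.random.default_rng(42)
x = rng.standard_normal(n)   # input signal
k = rng.standard_normal(n)   # filter, same length as the signal

# Direct circular convolution: O(n^2)
direct = np.array(
    [sum(x[j] * k[(i - j) % n] for j in range(n)) for i in range(n)]
)

# Convolution theorem: multiply in the frequency domain, O(n log n)
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)).real
```

The two results agree to floating-point precision, which is why an FFT-based layer can be a drop-in replacement for a spatial convolution.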
no code implementations • 9 Jul 2020 • Abhishek Gupta, Erick Galinkin
In this hand-off, the engineers responsible for model deployment are often not privy to the details of the model, and thus to the potential vulnerabilities associated with its usage, exposure, or compromise.
no code implementations • 25 Jun 2020 • Abhishek Gupta, Camylle Lanteigne, Victoria Heath, Marianna Bergamaschi Ganapini, Erick Galinkin, Allison Cohen, Tania De Gasperis, Mo Akif, Renjie Butalid
These past few months have been especially challenging, and the deployment of technology in ways hitherto untested at an unrivalled pace has left the internet and technology watchers aghast.