no code implementations • 7 Sep 2024 • George Kour, Naama Zwerdling, Marcel Zalmanovici, Ateret Anaby-Tavor, Ora Nova Fandina, Eitan Farchi
Large language models (LLMs) are increasingly used in business dialogue systems, but they pose security and ethical risks.
no code implementations • 29 Jul 2024 • Marcel Zalmanovici, Orna Raz, Eitan Farchi, Iftach Freund
Large Language Models (LLMs) are used for many tasks, including those related to coding.
no code implementations • 9 Mar 2024 • Swapnaja Achintalwar, Adriana Alvarado Garcia, Ateret Anaby-Tavor, Ioana Baldini, Sara E. Berger, Bishwaranjan Bhattacharjee, Djallel Bouneffouf, Subhajit Chaudhury, Pin-Yu Chen, Lamogha Chiazor, Elizabeth M. Daly, Kirushikesh DB, Rogério Abreu de Paula, Pierre Dognin, Eitan Farchi, Soumya Ghosh, Michael Hind, Raya Horesh, George Kour, Ja Young Lee, Nishtha Madaan, Sameep Mehta, Erik Miehling, Keerthiram Murugesan, Manish Nagireddy, Inkit Padhi, David Piorkowski, Ambrish Rawat, Orna Raz, Prasanna Sattigeri, Hendrik Strobelt, Sarathkrishna Swaminathan, Christoph Tillmann, Aashka Trivedi, Kush R. Varshney, Dennis Wei, Shalisha Witherspoon, Marcel Zalmanovici
Large language models (LLMs) are susceptible to a variety of risks, from non-faithful output to biased and toxic generations.
no code implementations • 7 Nov 2023 • George Kour, Marcel Zalmanovici, Naama Zwerdling, Esther Goldbraich, Ora Nova Fandina, Ateret Anaby-Tavor, Orna Raz, Eitan Farchi
As large language models become more prevalent, their possible harmful or inappropriate responses are a cause for concern.
no code implementations • 22 Dec 2021 • George Kour, Marcel Zalmanovici, Orna Raz, Samuel Ackerman, Ateret Anaby-Tavor
Testing Machine Learning (ML) models and AI-Infused Applications (AIIAs), or systems that contain ML models, is highly challenging.
no code implementations • 10 Nov 2021 • Samuel Ackerman, Orna Raz, Marcel Zalmanovici, Aviad Zlotnick
The assumption underlying statistical ML, and the basis of its theoretical and empirical performance guarantees, is that the distribution of the training data is representative of the distribution of the production data.
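The paper's own method is not summarized here; as a minimal sketch of testing that assumption, each feature's training sample can be compared against its production sample with a two-sample Kolmogorov–Smirnov test (the data, the injected shift, and the significance threshold below are all hypothetical):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Hypothetical data: production feature 0 has drifted away from training.
train = rng.normal(0.0, 1.0, size=(5000, 3))
prod = np.column_stack([
    rng.normal(0.5, 1.0, 5000),   # shifted mean -> assumption violated
    rng.normal(0.0, 1.0, 5000),
    rng.normal(0.0, 1.0, 5000),
])

for j in range(train.shape[1]):
    res = ks_2samp(train[:, j], prod[:, j])
    flag = "DRIFT?" if res.pvalue < 0.01 else "ok"
    print(f"feature {j}: KS={res.statistic:.3f} p={res.pvalue:.2e} {flag}")
```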
no code implementations • 11 Oct 2021 • Samuel Ackerman, Eitan Farchi, Orna Raz, Marcel Zalmanovici, Maya Zohar
A user may want to know where in the feature space observations are concentrated, and where the space is sparse or empty.
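As an illustrative sketch of that idea (not the paper's method), the feature space can be binned into a grid and cells ranked by how many observations land in them; the data, grid resolution, and density cutoff are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 2-D data: one dense cluster, the rest of the space sparse.
X = np.vstack([rng.normal(0, 0.3, (900, 2)), rng.uniform(-3, 3, (100, 2))])

counts, edges = np.histogramdd(X, bins=6)      # 6x6 grid over the feature space
dense = np.argwhere(counts / counts.sum() > 0.10)  # cells holding >10% of the data
empty = np.argwhere(counts == 0)                   # cells with no observations
print(f"{len(dense)} dense cells, {len(empty)} empty cells out of {counts.size}")
for idx in dense:
    bounds = [(edges[d][idx[d]], edges[d][idx[d] + 1]) for d in range(2)]
    print("dense region:", bounds)
```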
no code implementations • 12 Aug 2021 • Samuel Ackerman, Orna Raz, Marcel Zalmanovici
In this paper we show the feasibility of automatically extracting feature models that yield explainable data slices over which the ML solution underperforms.
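The paper's extraction algorithm is not reproduced here; a common stand-in for the idea is to fit a shallow decision tree to a per-example error indicator, so that high-error leaves read off as human-readable slices (the dataset and hyperparameters below are hypothetical):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=4000, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
errors = (model.predict(X_te) != y_te).astype(int)   # 1 where the model is wrong

# Shallow tree over the error indicator: each leaf is a candidate slice,
# described by the feature thresholds on its root-to-leaf path.
slicer = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50, random_state=0)
slicer.fit(X_te, errors)
print(export_text(slicer, feature_names=[f"f{i}" for i in range(5)]))
```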
no code implementations • 11 Aug 2021 • Samuel Ackerman, Parijat Dube, Eitan Farchi, Orna Raz, Marcel Zalmanovici
Detecting drift in the performance of Machine Learning (ML) models is an acknowledged challenge.
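As a minimal sketch (assuming ground-truth labels arrive in production, which is often exactly what is missing and part of why the problem is hard), performance drift between a reference window and a recent window can be flagged with a two-proportion z-test on accuracy; the correctness streams below are simulated:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
# Hypothetical correctness streams: accuracy drops from 0.90 to 0.82.
reference = rng.binomial(1, 0.90, 2000)   # 1 = correct prediction
current = rng.binomial(1, 0.82, 2000)

p1, p2 = reference.mean(), current.mean()
n1, n2 = len(reference), len(current)
pooled = (reference.sum() + current.sum()) / (n1 + n2)
z = (p1 - p2) / np.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
p_value = 2 * norm.sf(abs(z))             # two-sided test
print(f"acc {p1:.3f} -> {p2:.3f}, z={z:.2f}, p={p_value:.2e}",
      "-> performance drift" if p_value < 0.01 else "-> stable")
```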
no code implementations • 16 Dec 2020 • Samuel Ackerman, Eitan Farchi, Orna Raz, Marcel Zalmanovici, Parijat Dube
Drift is a change in data distribution between training and deployment, which is concerning when it affects model performance.
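One common, illustrative way to quantify such distribution change (not necessarily the paper's) is the Population Stability Index between a training sample and a deployment sample; the data and the rule-of-thumb cutoffs below are assumptions, not from the paper:

```python
import numpy as np

def psi(expected, actual, bins=10, eps=1e-6):
    """Population Stability Index between two 1-D samples.
    Rule of thumb: <0.1 stable, 0.1-0.25 moderate shift, >0.25 large shift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    p = np.histogram(expected, bins=edges)[0] / len(expected) + eps
    q = np.histogram(actual, bins=edges)[0] / len(actual) + eps
    return float(np.sum((p - q) * np.log(p / q)))

rng = np.random.default_rng(3)
train_scores = rng.normal(0.0, 1.0, 10000)
deploy_scores = rng.normal(0.4, 1.2, 10000)   # hypothetical shifted deployment data
print(f"PSI = {psi(train_scores, deploy_scores):.3f}")
```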