no code implementations • 7 Sep 2024 • George Kour, Naama Zwerdling, Marcel Zalmanovici, Ateret Anaby-Tavor, Ora Nova Fandina, Eitan Farchi
Large language models (LLMs) are increasingly used in business dialogue systems, but they pose security and ethical risks.
no code implementations • 22 Aug 2024 • Ora Nova Fandina, Leshem Choshen, Eitan Farchi, George Kour, Yotam Perlitz, Orna Raz
We applied these tests in a model safety scenario to assess the reliability of harmfulness detection metrics, uncovering a number of inconsistencies.
no code implementations • 4 Aug 2024 • Samuel Ackerman, Ella Rabinovich, Eitan Farchi, Ateret Anaby-Tavor
We evaluate the robustness of several large language models on multiple datasets.
no code implementations • 29 Jul 2024 • Marcel Zalmanovici, Orna Raz, Eitan Farchi, Iftach Freund
Large Language Models (LLMs) are used for many tasks, including those related to coding.
no code implementations • 15 May 2024 • Samuel Ackerman, Eitan Farchi, Rami Katan, Orna Raz
Next, a set of interactions between the factors is defined, and combinatorial optimization is used to create a small subset $P$ that ensures all desired interactions occur in $P$.
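The abstract describes combinatorial test design: choosing a small subset of factor combinations that still exercises every desired interaction. As a minimal sketch (a simple one-pass greedy cover over pairwise interactions, not the optimization procedure used in the paper, and with hypothetical factor names):

```python
from itertools import combinations, product

def pairwise_cover(factors):
    """Select a subset of the full Cartesian product of factor levels
    that covers every pairwise interaction (each pair of levels from
    each pair of factors appears in at least one selected row)."""
    # All (factor, level) pair interactions that must be covered.
    uncovered = {
        ((i, a), (j, b))
        for i, j in combinations(range(len(factors)), 2)
        for a in factors[i] for b in factors[j]
    }
    plan = []
    for row in product(*factors):  # candidate test cases, in order
        hits = {
            ((i, row[i]), (j, row[j]))
            for i, j in combinations(range(len(factors)), 2)
        } & uncovered
        if hits:  # keep the row only if it covers something new
            plan.append(row)
            uncovered -= hits
        if not uncovered:
            break
    return plan

# Hypothetical factors: model, language, prompt length.
factors = [["gpt", "llama"], ["en", "fr", "de"], ["short", "long"]]
plan = pairwise_cover(factors)
```

A smarter greedy (picking the highest-coverage row at each step) or a constraint solver would yield an even smaller plan; the point is only that full pairwise coverage needs far fewer rows than the full product.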
no code implementations • 9 Mar 2024 • Swapnaja Achintalwar, Adriana Alvarado Garcia, Ateret Anaby-Tavor, Ioana Baldini, Sara E. Berger, Bishwaranjan Bhattacharjee, Djallel Bouneffouf, Subhajit Chaudhury, Pin-Yu Chen, Lamogha Chiazor, Elizabeth M. Daly, Kirushikesh DB, Rogério Abreu de Paula, Pierre Dognin, Eitan Farchi, Soumya Ghosh, Michael Hind, Raya Horesh, George Kour, Ja Young Lee, Nishtha Madaan, Sameep Mehta, Erik Miehling, Keerthiram Murugesan, Manish Nagireddy, Inkit Padhi, David Piorkowski, Ambrish Rawat, Orna Raz, Prasanna Sattigeri, Hendrik Strobelt, Sarathkrishna Swaminathan, Christoph Tillmann, Aashka Trivedi, Kush R. Varshney, Dennis Wei, Shalisha Witherspooon, Marcel Zalmanovici
Large language models (LLMs) are susceptible to a variety of risks, from non-faithful output to biased and toxic generations.
no code implementations • 7 Nov 2023 • George Kour, Marcel Zalmanovici, Naama Zwerdling, Esther Goldbraich, Ora Nova Fandina, Ateret Anaby-Tavor, Orna Raz, Eitan Farchi
As large language models become more prevalent, their possible harmful or inappropriate responses are a cause for concern.
no code implementations • 2 Nov 2023 • Ella Rabinovich, Samuel Ackerman, Orna Raz, Eitan Farchi, Ateret Anaby-Tavor
Semantic consistency of a language model is broadly defined as the model's ability to produce semantically equivalent outputs, given semantically equivalent inputs.
1 code implementation • 23 Oct 2023 • Samuel Ackerman, George Kour, Eitan Farchi
We quantify this quality by constructing a Known-Similarity Corpora set from two paraphrase corpora and calculating the distance between paired corpora from it.
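The abstract's idea of a Known-Similarity Corpora set can be sketched by mixing two corpora in varying proportions, so that each mixture has a known, ordered degree of similarity to the first corpus; a good distance measure should increase monotonically along these pairs. The sketch below uses Jensen-Shannon divergence over unigram distributions as a stand-in distance; the paper's actual metrics and corpora are not reproduced here:

```python
import math
from collections import Counter

def unigram_dist(corpus, vocab):
    """Unigram probability distribution of a corpus over a fixed vocabulary."""
    counts = Counter(w for doc in corpus for w in doc.split())
    total = sum(counts.values()) or 1
    return [counts[w] / total for w in vocab]

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2), a symmetric, bounded distance."""
    def kl(a, b):
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return (kl(p, m) + kl(q, m)) / 2

def known_similarity_distances(corpus_a, corpus_b, steps=5):
    """Swap in an increasing share of corpus_b documents and measure the
    distance of each mixture back to corpus_a. The true similarity
    ordering is known by construction, so the returned distances should
    be (roughly) increasing if the distance measure is well behaved."""
    n = len(corpus_a)
    vocab = sorted({w for doc in corpus_a + corpus_b for w in doc.split()})
    base = unigram_dist(corpus_a, vocab)
    dists = []
    for k in range(steps + 1):
        cut = round(n * k / steps)
        mixture = corpus_b[:cut] + corpus_a[cut:]
        dists.append(js_divergence(base, unigram_dist(mixture, vocab)))
    return dists
```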
no code implementations • 17 Oct 2023 • Dipak Wani, Samuel Ackerman, Eitan Farchi, Xiaotong Liu, Hau-wen Chang, Sarasi Lalithsena
Logs enable the monitoring of infrastructure status and the performance of associated applications.
no code implementations • 14 May 2023 • Samuel Ackerman, Axel Bendavid, Eitan Farchi, Orna Raz
The approach we propose is to separate the observations that are the most likely to be predicted incorrectly into 'attention sets'.
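One simple way to realize an 'attention set' is to flag the observations where the model is least confident. The sketch below uses the margin between the top two predicted class probabilities as an uncertainty proxy; this is an illustrative assumption, not necessarily the criterion used in the paper:

```python
def attention_set(probs, budget=0.2):
    """Separate the observations most likely to be predicted incorrectly.

    probs  -- per-observation class-probability vectors
    budget -- fraction of observations to route to the attention set
    Returns the set of indices with the smallest top-2 probability margin.
    """
    margins = []
    for i, p in enumerate(probs):
        top2 = sorted(p, reverse=True)[:2]
        margins.append((top2[0] - top2[1], i))
    margins.sort()  # smallest margin first = least confident
    k = max(1, int(len(probs) * budget))
    return {i for _, i in margins[:k]}
```

Observations in the attention set can then be handled separately, e.g. deferred to a human or to a stronger model, while the rest are trusted to the base predictor.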
1 code implementation • 3 Mar 2023 • Dennis Wei, Haoze Wu, Min Wu, Pin-Yu Chen, Clark Barrett, Eitan Farchi
The softmax function is a ubiquitous component at the output of neural networks and increasingly in intermediate layers as well.
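For reference, the softmax function maps a logit vector to a probability distribution; the standard numerically stable form subtracts the maximum logit first, which leaves the output unchanged but avoids overflow in the exponential:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)                 # shift by the max: exp() cannot overflow
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]
```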
2 code implementations • 29 Nov 2022 • George Kour, Samuel Ackerman, Orna Raz, Eitan Farchi, Boaz Carmeli, Ateret Anaby-Tavor
The ability to compare the semantic similarity between text corpora is important in a variety of natural language processing applications.
no code implementations • 2 Jan 2022 • Samuel Ackerman, Guy Barash, Eitan Farchi, Orna Raz, Onn Shehory
The crafting of machine learning (ML) based systems requires statistical control throughout their life cycle.
no code implementations • 9 Nov 2021 • Samuel Ackerman, Parijat Dube, Eitan Farchi
It is thus desirable to monitor the usage patterns and identify when the system is used in a way it has never been used before.
no code implementations • 24 Oct 2021 • Eliran Roffe, Samuel Ackerman, Orna Raz, Eitan Farchi
We thus use a set of learned strong polynomial relations to identify drift.
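The idea of using learned relations between features to flag drift can be sketched as follows: fit a polynomial relation (here degree 1, via closed-form least squares) on reference data, then score new data by the residual of that relation; a large residual suggests the relation no longer holds. The data and threshold below are illustrative, not from the paper:

```python
def fit_linear(xs, ys):
    """Least-squares fit of y ~ a*x + b (a degree-1 polynomial relation)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def drift_score(model, xs, ys):
    """Mean absolute residual of a previously learned relation on new data;
    a large value suggests the relation no longer holds, i.e. drift."""
    a, b = model
    return sum(abs(y - (a * x + b)) for x, y in zip(xs, ys)) / len(xs)

# Learn a strong relation on reference data, then score a shifted batch.
ref_x = [1.0, 2.0, 3.0, 4.0, 5.0]
ref_y = [2.1, 3.9, 6.0, 8.1, 9.9]            # roughly y = 2x
model = fit_linear(ref_x, ref_y)
baseline = drift_score(model, ref_x, ref_y)  # small: relation holds
drifted = drift_score(model, ref_x, [3.0, 5.1, 7.0, 9.2, 11.1])  # y shifted up
```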
no code implementations • 11 Oct 2021 • Samuel Ackerman, Eitan Farchi, Orna Raz, Marcel Zalmanovici, Maya Zohar
A user may want to know where in the feature space observations are concentrated and where they are sparse or absent.
no code implementations • 6 Sep 2021 • Samuel Ackerman, Sanjib Choudhury, Nirmit Desai, Eitan Farchi, Dan Gisolfi, Andrew Hicks, Saritha Route, Diptikalyan Saha
The API economy is driving the digital transformation of business applications across hybrid Cloud and edge environments.
no code implementations • 11 Aug 2021 • Samuel Ackerman, Parijat Dube, Eitan Farchi, Orna Raz, Marcel Zalmanovici
Detecting drift in performance of Machine Learning (ML) models is an acknowledged challenge.
no code implementations • 4 May 2021 • Guy Barash, Eitan Farchi, Sarit Kraus, Onn Shehory
We show that, with a low attack budget, our attack's success rate is above 80%, and in some cases 100%, for white-box learning.
no code implementations • 16 Dec 2020 • Samuel Ackerman, Eitan Farchi, Orna Raz, Marcel Zalmanovici, Parijat Dube
Drift is a change in distribution between the training and deployment data, which is a concern when it affects model performance.
no code implementations • 31 Jul 2020 • Samuel Ackerman, Parijat Dube, Eitan Farchi
We utilize neural network embeddings to detect data drift by formulating the drift detection within an appropriate sequential decision framework.
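A sequential decision framing of embedding-based drift detection can be sketched with a CUSUM-style statistic: accumulate how far each incoming embedding sits from a reference centroid beyond an allowed slack, and raise an alarm once the accumulated evidence crosses a threshold. This is a generic change-detection sketch, not the authors' exact procedure, and the slack/threshold values are illustrative:

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    d = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(d)]

def cusum_drift(reference, stream, slack=0.5, threshold=3.0):
    """CUSUM-style sequential drift test on embedding vectors.

    Accumulates the excess distance of streamed embeddings from the
    reference centroid (beyond the reference's own mean distance plus a
    slack term). Returns the 0-based index at which the accumulated
    statistic crosses the threshold, or None if no drift is detected.
    """
    c = centroid(reference)
    base = sum(math.dist(v, c) for v in reference) / len(reference)
    s = 0.0
    for t, v in enumerate(stream):
        # Evidence of drift at this step; clipped at 0 so the statistic
        # resets while the stream still matches the reference.
        s = max(0.0, s + (math.dist(v, c) - base) - slack)
        if s > threshold:
            return t
    return None
```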
no code implementations • 16 Jan 2019 • Eitan Farchi, Onn Shehory, Guy Barash
There are cases in which an adversary can strategically tamper with the input data to affect the outcome of the learning process.