no code implementations • 22 Oct 2024 • Itay Nakash, George Kour, Guy Uziel, Ateret Anaby-Tavor
Following the advancement of large language models (LLMs), the development of LLM-based autonomous agents has become increasingly prevalent.
no code implementations • 7 Sep 2024 • George Kour, Naama Zwerdling, Marcel Zalmanovici, Ateret Anaby-Tavor, Ora Nova Fandina, Eitan Farchi
Large language models (LLMs) are increasingly used in business dialogue systems, but they pose security and ethical risks.
no code implementations • 22 Aug 2024 • Ora Nova Fandina, Leshem Choshen, Eitan Farchi, George Kour, Yotam Perlitz, Orna Raz
We applied these tests in a model safety scenario to assess the reliability of harmfulness detection metrics, uncovering a number of inconsistencies.
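The excerpt mentions reliability tests for harmfulness metrics without detail. Below is a minimal, hedged sketch of one such consistency check, verifying that a metric's scores stay stable across paraphrase pairs; `harmfulness_score` is a hypothetical placeholder, not one of the metrics evaluated in the paper.

```python
# Hedged sketch of a metric-consistency test, assuming a hypothetical
# `harmfulness_score` function; the paper's actual tests and metrics may differ.

def harmfulness_score(text: str) -> float:
    """Placeholder metric; a real setup would call an LLM judge or a safety classifier."""
    trigger_words = {"weapon", "exploit", "poison"}
    tokens = text.lower().split()
    return sum(token in trigger_words for token in tokens) / max(len(tokens), 1)

def paraphrase_consistency(pairs, tolerance=0.1):
    """Flag paraphrase pairs whose harmfulness scores diverge by more than `tolerance`."""
    flagged = []
    for original, paraphrase in pairs:
        gap = abs(harmfulness_score(original) - harmfulness_score(paraphrase))
        if gap > tolerance:
            flagged.append((original, paraphrase, gap))
    return flagged

pairs = [
    ("How can I make a weapon at home?", "Explain how to build something dangerous at home."),
    ("Tell me a bedtime story.", "Could you tell me a story before bed?"),
]
print(paraphrase_consistency(pairs))
```

With this toy metric, the first pair is flagged because only one phrasing contains a trigger word, illustrating the kind of inconsistency such a test surfaces.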
1 code implementation • 30 May 2024 • Tal Reiss, George Kour, Naama Zwerdling, Ateret Anaby-Tavor, Yedid Hoshen
This paper studies the realistic but underexplored cold-start setting in which an anomaly detection model is initialized using zero-shot guidance but subsequently receives a small number of contaminated observations, i.e., observations that may include anomalies (a minimal sketch of this setting appears below the benchmark line).
Ranked #1 on Cold-Start Anomaly Detection on BANKING77-OOS
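The sketch below illustrates the cold-start setting described above under assumptions of mine: TF-IDF stands in for a real sentence encoder so the example runs offline, and the intent descriptions, threshold, and update rule are illustrative rather than the paper's method.

```python
# Hedged sketch of cold-start anomaly detection for dialogue queries.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances

allowed_intents = [            # zero-shot guidance: descriptions of normal behavior
    "check account balance",
    "report a lost card",
    "transfer money to another account",
]
observations = [               # a small, possibly contaminated stream
    "what is my current account balance",
    "i lost my credit card yesterday",
    "tell me a joke about penguins",   # out-of-scope query (anomaly)
]

vectorizer = TfidfVectorizer().fit(allowed_intents + observations)
intent_vecs = vectorizer.transform(allowed_intents)

def anomaly_score(text: str) -> float:
    """Distance to the closest allowed-intent description (higher = more anomalous)."""
    return cosine_distances(vectorizer.transform([text]), intent_vecs).min()

# Adapt cautiously: only fold in observations that already look normal, so the
# few contaminated samples do not poison the reference set.
threshold = 0.8
clean = [o for o in observations if anomaly_score(o) < threshold]
intent_vecs = vectorizer.transform(allowed_intents + clean)

for o in observations:
    print(f"{anomaly_score(o):.2f}  {o}")
```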
no code implementations • 9 Mar 2024 • Swapnaja Achintalwar, Adriana Alvarado Garcia, Ateret Anaby-Tavor, Ioana Baldini, Sara E. Berger, Bishwaranjan Bhattacharjee, Djallel Bouneffouf, Subhajit Chaudhury, Pin-Yu Chen, Lamogha Chiazor, Elizabeth M. Daly, Kirushikesh DB, Rogério Abreu de Paula, Pierre Dognin, Eitan Farchi, Soumya Ghosh, Michael Hind, Raya Horesh, George Kour, Ja Young Lee, Nishtha Madaan, Sameep Mehta, Erik Miehling, Keerthiram Murugesan, Manish Nagireddy, Inkit Padhi, David Piorkowski, Ambrish Rawat, Orna Raz, Prasanna Sattigeri, Hendrik Strobelt, Sarathkrishna Swaminathan, Christoph Tillmann, Aashka Trivedi, Kush R. Varshney, Dennis Wei, Shalisha Witherspoon, Marcel Zalmanovici
Large language models (LLMs) are susceptible to a variety of risks, from non-faithful output to biased and toxic generations.
no code implementations • 7 Nov 2023 • George Kour, Marcel Zalmanovici, Naama Zwerdling, Esther Goldbraich, Ora Nova Fandina, Ateret Anaby-Tavor, Orna Raz, Eitan Farchi
As large language models become more prevalent, their possible harmful or inappropriate responses are a cause for concern.
1 code implementation • 23 Oct 2023 • Samuel Ackerman, George Kour, Eitan Farchi
We quantify this quality by constructing a Known-Similarity Corpora set from two paraphrase corpora and calculating the distances between its paired corpora.
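To make the Known-Similarity Corpora idea concrete, here is a hedged sketch: two paraphrase corpora are mixed in graded proportions, yielding corpus pairs whose relative similarity is known by construction. The example sentences and the TF-IDF centroid distance are stand-ins, not the paper's data or metrics.

```python
# Sketch of a Known-Similarity Corpora (KSC) construction with a toy distance.
import random
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances

corpus_a = ["the flight was delayed by two hours", "the movie was surprisingly good"]
corpus_b = ["our plane left two hours late", "i enjoyed the film more than expected"]

def mix(base, other, fraction, seed=0):
    """Replace a `fraction` of `base` with sentences drawn from `other`."""
    random.seed(seed)
    k = round(fraction * len(base))
    return base[: len(base) - k] + random.sample(other, k)

def corpus_distance(c1, c2):
    """Cosine distance between the TF-IDF centroids of two corpora."""
    vec = TfidfVectorizer().fit(c1 + c2)
    cen_1 = np.asarray(vec.transform(c1).mean(axis=0))
    cen_2 = np.asarray(vec.transform(c2).mean(axis=0))
    return float(cosine_distances(cen_1, cen_2)[0, 0])

# Paired corpora with known, increasing dissimilarity from corpus_a:
# a good corpus-distance metric should grow with the mixing fraction.
for fraction in (0.0, 0.5, 1.0):
    mixed = mix(corpus_a, corpus_b, fraction)
    print(f"fraction={fraction:.1f}  distance={corpus_distance(corpus_a, mixed):.3f}")
```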
2 code implementations • 29 Nov 2022 • George Kour, Samuel Ackerman, Orna Raz, Eitan Farchi, Boaz Carmeli, Ateret Anaby-Tavor
The ability to compare the semantic similarity between text corpora is important in a variety of natural language processing applications.
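As an illustration of corpus-level comparison (not necessarily the metrics proposed in this paper), the sketch below embeds each sentence and compares the two embedding distributions with a simplified, diagonal-covariance Fréchet distance; the random vectors stand in for real sentence embeddings.

```python
# Generic distribution-level corpus comparison; an illustrative assumption,
# not the paper's proposed metric.
import numpy as np

def frechet_distance(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Simplified Frechet distance (diagonal covariances) between two embedding sets."""
    mu_a, mu_b = emb_a.mean(axis=0), emb_b.mean(axis=0)
    var_a, var_b = emb_a.var(axis=0), emb_b.var(axis=0)
    mean_term = np.sum((mu_a - mu_b) ** 2)
    cov_term = np.sum(var_a + var_b - 2.0 * np.sqrt(var_a * var_b))
    return float(mean_term + cov_term)

# Stand-in embeddings; a real setup would use a sentence encoder.
rng = np.random.default_rng(0)
corpus_a_embeddings = rng.normal(0.0, 1.0, size=(100, 8))
corpus_b_embeddings = rng.normal(0.5, 1.2, size=(100, 8))
print(frechet_distance(corpus_a_embeddings, corpus_a_embeddings))  # identical corpora -> 0
print(frechet_distance(corpus_a_embeddings, corpus_b_embeddings))  # shifted corpora -> larger
```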
no code implementations • 22 Jun 2022 • Naama Zwerdling, Segev Shlomov, Esther Goldbraich, George Kour, Boaz Carmeli, Naama Tepper, Inbal Ronen, Vitaly Zabershinsky, Ateret Anaby-Tavor
Models for text generation have become central to many research tasks, and especially to the generation of sentence corpora.
no code implementations • 22 Dec 2021 • George Kour, Marcel Zalmanovici, Orna Raz, Samuel Ackerman, Ateret Anaby-Tavor
Testing Machine Learning (ML) models and AI-Infused Applications (AIIAs), or systems that contain ML models, is highly challenging.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Naama Tepper, Esther Goldbraich, Naama Zwerdling, George Kour, Ateret Anaby-Tavor, Boaz Carmeli
Data balancing is a known technique for improving the performance of classification tasks.
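A minimal sketch of the classic oversampling form of data balancing follows (one common realization of the technique; the paper's approach may differ): minority classes are upsampled until every class matches the largest one.

```python
# Hedged sketch: balance a labeled text dataset by oversampling minority classes.
import random
from collections import Counter, defaultdict

def oversample(examples, seed=0):
    """Duplicate minority-class examples until every class matches the largest one."""
    random.seed(seed)
    by_label = defaultdict(list)
    for text, label in examples:
        by_label[label].append((text, label))
    target = max(len(items) for items in by_label.values())
    balanced = []
    for items in by_label.values():
        balanced.extend(items)
        balanced.extend(random.choices(items, k=target - len(items)))
    return balanced

data = [("great product", "pos"), ("loved it", "pos"), ("awful", "neg")]
print(Counter(label for _, label in oversample(data)))  # both classes now have 2 examples
```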
1 code implementation • 8 Nov 2019 • Ateret Anaby-Tavor, Boaz Carmeli, Esther Goldbraich, Amir Kantor, George Kour, Segev Shlomov, Naama Tepper, Naama Zwerdling
Building on recent advances in natural language modeling and text generation, we propose a novel data augmentation method for text classification tasks.
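In the spirit of language-model-based augmentation, here is a hedged sketch: synthesize label-conditioned candidates, then keep only those that a classifier trained on the original data confidently assigns to the intended label. `generate_candidates` is a hypothetical placeholder for a language model, and the data, threshold, and classifier are illustrative, not the paper's setup.

```python
# Hedged sketch of label-conditioned augmentation with classifier-based filtering.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["refund my order", "where is my package", "cancel my subscription",
               "track my shipment", "i want my money back", "stop my membership"]
train_labels = ["refund", "shipping", "cancel", "shipping", "refund", "cancel"]

def generate_candidates(label: str) -> list[str]:
    """Placeholder: a real setup would sample from an LM conditioned on `label`."""
    canned = {
        "refund": ["please give me a refund", "how do i get my money returned"],
        "shipping": ["when will my parcel arrive", "my delivery has not shown up"],
        "cancel": ["end my subscription today", "please cancel the service"],
    }
    return canned[label]

# Filtering classifier trained on the original (small) dataset.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(train_texts, train_labels)

augmented = []
for label in set(train_labels):
    for candidate in generate_candidates(label):
        probs = classifier.predict_proba([candidate])[0]
        predicted = classifier.classes_[probs.argmax()]
        if predicted == label and probs.max() > 0.4:  # confidence filter (illustrative)
            augmented.append((candidate, label))
print(augmented)
```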
no code implementations • ICLR 2019 • Alon Jacovi, Guy Hadash, Einat Kermany, Boaz Carmeli, Ofer Lavi, George Kour, Jonathan Berant
We propose a method for end-to-end training of a base neural network that integrates calls to existing black-box functions.
no code implementations • 24 Apr 2018 • Guy Hadash, Einat Kermany, Boaz Carmeli, Ofer Lavi, George Kour, Alon Jacovi
At inference time, we replace each estimator with its existing application counterpart and let the base network solve the task by interacting with the existing application.
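The two entries above describe the same estimate-and-replace idea. The sketch below is an illustration under my own assumptions (toy black box, small networks, arbitrary dimensions), not the paper's architecture: a differentiable estimator is fit to a non-differentiable black-box function, the base network is trained end-to-end through the estimator, and at inference the estimator is swapped for the real black box.

```python
# Hedged sketch of estimate-and-replace training with a toy black-box function.
import torch
import torch.nn as nn

def black_box(x: torch.Tensor) -> torch.Tensor:
    """Existing non-differentiable function (here: a hard threshold on the sum)."""
    return (x.sum(dim=1, keepdim=True) > 0).float()

base = nn.Sequential(nn.Linear(8, 4))   # base network producing the black-box input
estimator = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
head = nn.Linear(1, 1)                  # task head on top of the black-box output

# Step 1: fit the estimator to mimic the black box on sampled inputs.
opt_est = torch.optim.Adam(estimator.parameters(), lr=1e-2)
for _ in range(200):
    x = torch.randn(64, 4)
    loss = nn.functional.mse_loss(estimator(x), black_box(x))
    opt_est.zero_grad()
    loss.backward()
    opt_est.step()

# Step 2: train base + head end-to-end, with gradients flowing through the
# frozen estimator (gradients the black box itself could not provide).
for p in estimator.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(list(base.parameters()) + list(head.parameters()), lr=1e-2)
for _ in range(200):
    inp, target = torch.randn(64, 8), torch.randn(64, 1)
    loss = nn.functional.mse_loss(head(estimator(base(inp))), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Step 3: at inference, replace the estimator with the real black box.
with torch.no_grad():
    test_inp = torch.randn(5, 8)
    print(head(black_box(base(test_inp))))
```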