no code implementations • 27 Jul 2024 • Nitay Calderon, Roi Reichart
This surge in usage has led to an explosion in NLP model interpretability and analysis research, accompanied by numerous technical surveys.
no code implementations • 17 Jun 2024 • Almog Gueta, Amir Feder, Zorik Gekhman, Ariel Goldstein, Roi Reichart
This study empirically tests the $\textit{Narrative Economics}$ hypothesis, which posits that narratives (ideas that are spread virally and affect public beliefs) can influence economic fluctuations.
no code implementations • 9 May 2024 • Zorik Gekhman, Gal Yona, Roee Aharoni, Matan Eyal, Amir Feder, Roi Reichart, Jonathan Herzig
In this work, we study the impact of such exposure to new knowledge on the capability of the fine-tuned model to utilize its pre-existing knowledge.
no code implementations • 2 May 2024 • Liam Hazan, Gili Focht, Naama Gavrielov, Roi Reichart, Talar Hagopian, Mary-Louise C. Greer, Ruth Cytter Kuint, Dan Turner, Moti Freiman
Automatic conversion of free-text radiology reports into structured data using Natural Language Processing (NLP) techniques is crucial for analyzing diseases on a large scale.
no code implementations • 22 Apr 2024 • Shir Lissak, Yaakov Ophir, Refael Tikochinski, Anat Brunstein Klomek, Itay Sisso, Eyal Fruchter, Roi Reichart
This secondary dataset included responses by 1,062 participants to the same suicide scale as well as to well-validated scales measuring depression and boredom.
1 code implementation • 19 Feb 2024 • Shir Lissak, Nitay Calderon, Geva Shenkman, Yaakov Ophir, Eyal Fruchter, Anat Brunstein Klomek, Roi Reichart
Queer youth face increased mental health risks, such as depression, anxiety, and suicidal ideation.
no code implementations • 6 Feb 2024 • Amir Taubenfeld, Yaniv Dover, Roi Reichart, Ariel Goldstein
Recent advancements in natural language processing, especially the emergence of Large Language Models (LLMs), have opened exciting possibilities for constructing computational simulations designed to replicate human behavior accurately.
no code implementations • 30 Jan 2024 • Eilam Shapira, Omer Madmon, Roi Reichart, Moshe Tennenholtz
Human choice prediction in economic contexts is crucial for applications in marketing, finance, public policy, and more.
no code implementations • 25 Oct 2023 • Alon Goldstein, Miriam Havin, Roi Reichart, Ariel Goldstein
This paper investigates the problem-solving capabilities of Large Language Models (LLMs) by evaluating their performance on stumpers, unique single-step intuition problems that pose challenges for human solvers but are easily verifiable.
no code implementations • 11 Oct 2023 • Ariel Goldstein, Eric Ham, Mariano Schain, Samuel Nastase, Zaid Zada, Avigail Dabush, Bobbi Aubrey, Harshvardhan Gazula, Amir Feder, Werner K Doyle, Sasha Devore, Patricia Dugan, Daniel Friedman, Roi Reichart, Michael Brenner, Avinatan Hassidim, Orrin Devinsky, Adeen Flinker, Omer Levy, Uri Hasson
Our results reveal a connection between human language processing and DLMs, with the DLM's layer-by-layer accumulation of contextual information mirroring the timing of neural activity in high-order language areas.
1 code implementation • 3 Oct 2023 • Mor Ventura, Eyal Ben-David, Anna Korhonen, Roi Reichart
Text-To-Image (TTI) models, such as DALL-E and StableDiffusion, have demonstrated remarkable prompt-based image generation capabilities.
no code implementations • 1 Oct 2023 • Yair Gat, Nitay Calderon, Amir Feder, Alexander Chapanin, Amit Sharma, Roi Reichart
We hence present a second approach based on matching, and propose a method that is guided by an LLM at training-time and learns a dedicated embedding space.
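For illustration only, here is a minimal sketch of the matching idea under the assumption that texts are already embedded by some encoder: for each query text, retrieve the nearest neighbor whose concept label differs and treat it as an approximate counterfactual. The encoder, data, and labels below are hypothetical stand-ins, not the paper's trained embedding space or LLM-guided training procedure.

```python
import numpy as np

def nearest_counterfactual(query_vec, candidate_vecs, candidate_labels, query_label):
    """Return the index of the closest candidate whose concept label differs
    from the query's label (cosine similarity in a shared embedding space)."""
    q = query_vec / np.linalg.norm(query_vec)
    C = candidate_vecs / np.linalg.norm(candidate_vecs, axis=1, keepdims=True)
    sims = C @ q
    # Mask out candidates that share the query's concept label.
    sims[candidate_labels == query_label] = -np.inf
    return int(np.argmax(sims))

# Toy usage with random vectors standing in for learned text embeddings.
rng = np.random.default_rng(0)
cands = rng.normal(size=(100, 16))
labels = rng.integers(0, 2, size=100)   # e.g., 1 = concept present, 0 = absent
query = rng.normal(size=16)
print("approximate counterfactual index:",
      nearest_counterfactual(query, cands, labels, query_label=1))
```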
2 code implementations • 31 May 2023 • Nitay Calderon, Naveh Porat, Eyal Ben-David, Alexander Chapanin, Zorik Gekhman, Nadav Oved, Vitaly Shalumov, Roi Reichart
We then conducted a comprehensive large-scale DR study involving over 14,000 domain shifts across 21 fine-tuned models and few-shot LLMs.
1 code implementation • 17 May 2023 • Eilam Shapira, Reut Apel, Moshe Tennenholtz, Roi Reichart
Recent advances in Large Language Models (LLMs) have spurred interest in designing LLM-based agents for tasks that involve interaction with human and artificial agents.
1 code implementation • 3 May 2023 • Nitay Calderon, Subhabrata Mukherjee, Roi Reichart, Amir Kantor
In this work, we study the potential of compressing them, which is crucial for real-world applications serving millions of users.
no code implementations • 19 Feb 2023 • Yael Badian, Yaakov Ophir, Refael Tikochinski, Nitay Calderon, Anat Brunstein Klomek, Roi Reichart
Notably, the study illustrates the advantages of hybrid models in such complicated tasks and provides simple and flexible prediction strategies that could be used to develop real-life suicide-monitoring tools.
no code implementations • 27 Oct 2022 • Ohad Amosy, Tomer Volk, Eilam Shapira, Eyal Ben-David, Roi Reichart, Gal Chechik
Our approach generates non-linear classifiers and can handle rich textual descriptions.
1 code implementation • 2 Sep 2022 • Eyal Ben-David, Yftah Ziser, Roi Reichart
In this setup, we aim to efficiently annotate data from a set of source domains such that the trained model performs well on a sensitive target domain from which data is unavailable for annotation.
1 code implementation • 10 Aug 2022 • Guy Rotman, Roi Reichart
Multi-task learning, in which several tasks are jointly learned by a single model, allows NLP models to share information from multiple annotations and may facilitate better predictions when the tasks are inter-related.
1 code implementation • 29 Jun 2022 • Zorik Gekhman, Nadav Oved, Orgad Keller, Idan Szpektor, Roi Reichart
We find that high benchmark scores do not necessarily translate to strong robustness, and that various methods can perform extremely differently under different settings.
1 code implementation • 12 Jun 2022 • Itai Gat, Nitay Calderon, Roi Reichart, Tamir Hazan
This work suggests a theoretical framework for model interpretability by measuring the contribution of relevant features to the functional entropy of the network with respect to the input.
no code implementations • 1 Jun 2022 • Amir Feder, Guy Horowitz, Yoav Wald, Roi Reichart, Nir Rosenfeld
Accurately predicting the relevance of items to users is crucial to the success of many social platforms.
1 code implementation • 27 May 2022 • Eldar David Abraham, Karel D'Oosterlinck, Amir Feder, Yair Ori Gat, Atticus Geiger, Christopher Potts, Roi Reichart, Zhengxuan Wu
We introduce CEBaB, a new benchmark dataset for assessing concept-based explanation methods in Natural Language Processing (NLP).
1 code implementation • ICLR 2022 • Alon Berliner, Guy Rotman, Yossi Adi, Roi Reichart, Tamir Hazan
Discrete variational auto-encoders (VAEs) are able to represent semantic latent spaces in generative learning.
1 code implementation • 27 Mar 2022 • Tomer Volk, Eyal Ben-David, Ohad Amosy, Gal Chechik, Roi Reichart
Our innovative framework employs example-based Hypernetwork adaptation: a T5 encoder-decoder initially generates a unique signature from an input example, embedding it within the source domains' semantic space.
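As a rough, hypothetical sketch of example-based hypernetwork adaptation (a small MLP stands in for the T5 encoder-decoder signature generator described above): one network maps the input representation to a per-example signature, and a hypernetwork maps that signature to the weights of a classifier that is then applied to the same example.

```python
import torch
import torch.nn as nn

class ExampleBasedHypernet(nn.Module):
    """Toy hypernetwork: a signature encoder (stand-in for the T5 encoder-decoder)
    produces a per-example signature; the hypernetwork maps that signature to the
    weights of a linear classifier, which is then applied to the example itself."""
    def __init__(self, in_dim=64, sig_dim=16, n_classes=3):
        super().__init__()
        self.in_dim, self.n_classes = in_dim, n_classes
        self.signature = nn.Sequential(nn.Linear(in_dim, sig_dim), nn.Tanh())
        # Generates (n_classes * in_dim) weights plus n_classes biases per example.
        self.hyper = nn.Linear(sig_dim, n_classes * in_dim + n_classes)

    def forward(self, x):                                  # x: (batch, in_dim)
        sig = self.signature(x)                            # (batch, sig_dim)
        params = self.hyper(sig)
        W = params[:, :self.n_classes * self.in_dim].view(-1, self.n_classes, self.in_dim)
        b = params[:, self.n_classes * self.in_dim:]
        return torch.einsum("bci,bi->bc", W, x) + b        # per-example classifier logits

logits = ExampleBasedHypernet()(torch.randn(4, 64))
print(logits.shape)  # torch.Size([4, 3])
```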
1 code implementation • ACL 2022 • Nitay Calderon, Eyal Ben-David, Amir Feder, Roi Reichart
Natural language processing (NLP) algorithms have become very successful, but they still struggle when applied to out-of-distribution examples.
1 code implementation • 2 Sep 2021 • Amir Feder, Katherine A. Keith, Emaad Manzoor, Reid Pryzant, Dhanya Sridhar, Zach Wood-Doughty, Jacob Eisenstein, Justin Grimmer, Roi Reichart, Margaret E. Roberts, Brandon M. Stewart, Victor Veitch, Diyi Yang
A fundamental goal of scientific research is to learn about causal relationships.
1 code implementation • 1 Sep 2021 • Entony Lekhtman, Yftah Ziser, Roi Reichart
We name this scheme DILBERT: Domain Invariant Learning with BERT, and customize it for aspect extraction in the unsupervised domain adaptation setting.
no code implementations • IJCNLP 2019 • Edoardo Maria Ponti, Ivan Vulić, Ryan Cotterell, Roi Reichart, Anna Korhonen
Motivated by this question, we aim at constructing an informative prior over neural weights, in order to adapt quickly to held-out languages in the task of character-level language modeling.
no code implementations • ACL 2021 • Daniel Rosenberg, Itai Gat, Amir Feder, Roi Reichart
Deep learning algorithms have shown promising results in visual question answering (VQA) tasks, but a more careful look reveals that they often do not understand the rich signal they are being fed with.
no code implementations • 11 May 2021 • Maya Raifer, Guy Rotman, Reut Apel, Moshe Tennenholtz, Roi Reichart
Persuasion games are fundamental in economics and AI research and serve as the basis for important applications.
1 code implementation • 24 Feb 2021 • Eyal Ben-David, Nadav Oved, Roi Reichart
We address a challenging and underexplored version of this domain adaptation problem, where an algorithm is trained on several source domains, and then applied to examples from unseen domains that are unknown at training time.
1 code implementation • EACL 2021 • Yi Zhu, Ehsan Shareghi, Yingzhen Li, Roi Reichart, Anna Korhonen
Semi-supervised learning through deep generative models and multi-lingual pretraining techniques have orchestrated tremendous success across different areas of NLP.
1 code implementation • 18 Jan 2021 • Guy Rotman, Amir Feder, Roi Reichart
Recent improvements in the predictive quality of natural language processing systems are often dependent on a substantial increase in the number of model parameters.
no code implementations • ACL 2021 • Mengjie Zhao, Yi Zhu, Ehsan Shareghi, Ivan Vulić, Roi Reichart, Anna Korhonen, Hinrich Schütze
Few-shot crosslingual transfer has been shown to outperform its zero-shot counterpart with pretrained encoders like multilingual BERT.
1 code implementation • 17 Dec 2020 • Reut Apel, Ido Erev, Roi Reichart, Moshe Tennenholtz
Our results demonstrate that given a prefix of the interaction sequence, our models can predict the future decisions of the decision-maker, particularly when a sequential modeling approach and hand-crafted textual features are applied.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Eyal Ben-David, Orgad Keller, Eric Malmi, Idan Szpektor, Roi Reichart
Sentence fusion is the task of joining related sentences into coherent text.
1 code implementation • 16 Jun 2020 • Eyal Ben-David, Carmel Rabinovitz, Roi Reichart
Pivot-based neural representation models have led to significant progress in domain adaptation for NLP.
1 code implementation • CL (ACL) 2021 • Amir Feder, Nadav Oved, Uri Shalit, Roi Reichart
Concretely, we show that by carefully choosing auxiliary adversarial pre-training tasks, language representation models such as BERT can effectively learn a counterfactual representation for a given concept of interest, and be used to estimate its true causal effect on model performance.
1 code implementation • ACL 2020 • Daniela Gerz, Ivan Vulić, Marek Rei, Roi Reichart, Anna Korhonen
We present a neural framework for learning associations between interrelated groups of words such as the ones found in Subject-Verb-Object (SVO) structures.
1 code implementation • 9 May 2020 • Dor Ringel, Rotem Dror, Roi Reichart
We present the Structured Weighted Violation MIRA (SWVM), a new structured prediction algorithm based on a hybridization of MIRA (Crammer and Singer, 2003) and the Structured Weighted Violations Perceptron (SWVP) (Dror and Reichart, 2016).
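For orientation only (this is the textbook MIRA step, not the SWVM objective itself): the structured MIRA update that the hybrid builds on makes the smallest weight change that satisfies a margin constraint for a violating structure, and the weighted-violations component of SWVM changes how violated structures enter such constraints.

```latex
% Classic structured MIRA update (Crammer & Singer, 2003), shown for intuition only.
\mathbf{w}_{t+1} \;=\; \arg\min_{\mathbf{w}} \; \tfrac{1}{2}\,\lVert \mathbf{w} - \mathbf{w}_t \rVert^2
\quad \text{s.t.} \quad
\mathbf{w}^\top \phi(x_t, y_t) \;-\; \mathbf{w}^\top \phi(x_t, \hat{y}_t) \;\ge\; L(y_t, \hat{y}_t),
```

where $\phi$ is the joint feature map, $\hat{y}_t$ the predicted (violating) structure, and $L$ the structured loss.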
1 code implementation • 6 Apr 2020 • Omer Ben-Porat, Sharon Hirsch, Lital Kuchy, Guy Elad, Roi Reichart, Moshe Tennenholtz
In an ablation analysis, we demonstrate the importance of our modeling choices, namely the representation of the text with commonsensical personality attributes and our choice of classifier, to the predictive power of our model.
no code implementations • 10 Mar 2020 • Ivan Vulić, Simon Baker, Edoardo Maria Ponti, Ulla Petti, Ira Leviant, Kelly Wing, Olga Majewska, Eden Bar, Matt Malone, Thierry Poibeau, Roi Reichart, Anna Korhonen
We introduce Multi-SimLex, a large-scale lexical resource and evaluation benchmark covering datasets for 12 typologically diverse languages, including major languages (e.g., Mandarin Chinese, Spanish, Russian) as well as less-resourced ones (e.g., Welsh, Kiswahili).
no code implementations • 5 Feb 2020 • Elad Kravi, Benny Kimelfeld, Yaron Kanza, Roi Reichart
We explore two approaches to the problem: (a) a pipeline approach, where each message is first classified, and then the location associated with the message set is inferred from the individual message labels; and (b) a joint approach where the individual messages are simultaneously processed to yield the desired location type.
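As a hedged illustration of the pipeline variant only (the classifier and label set here are invented), each message is first classified independently and the location type is then inferred by aggregating the per-message labels, e.g., by majority vote.

```python
from collections import Counter

def infer_location_type(messages, classify_message):
    """Pipeline approach: classify each message independently, then aggregate
    the per-message labels into a single location-type prediction."""
    labels = [classify_message(m) for m in messages]
    return Counter(labels).most_common(1)[0][0]

# Hypothetical per-message classifier based on simple keyword cues.
def toy_classifier(text):
    if "espresso" in text or "latte" in text:
        return "cafe"
    if "treadmill" in text or "workout" in text:
        return "gym"
    return "other"

msgs = ["best latte in town", "morning workout done", "espresso before work"]
print(infer_location_type(msgs, toy_classifier))  # -> "cafe"
```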
1 code implementation • 30 Jan 2020 • Edoardo M. Ponti, Ivan Vulić, Ryan Cotterell, Marinela Parovic, Roi Reichart, Anna Korhonen
In this work, we propose a Bayesian generative model for the space of neural parameters.
no code implementations • EMNLP 2020 • Haim Dubossarsky, Ivan Vulić, Roi Reichart, Anna Korhonen
Performance in cross-lingual NLP tasks is impacted by the (dis)similarity of languages at hand: e.g., previous work has suggested there is a connection between the expected success of bilingual lexicon induction (BLI) and the assumption of (approximate) isomorphism between monolingual embedding spaces.
1 code implementation • ACL 2019 • Ofer Givoli, Roi Reichart
We consider a zero-shot semantic parsing task: parsing instructions into compositional logical forms, in domains that were not seen during training.
1 code implementation • TACL 2019 • Guy Rotman, Roi Reichart
Neural dependency parsing has proven very effective, achieving state-of-the-art results on numerous domains and languages.
no code implementations • IJCNLP 2019 • Edoardo Maria Ponti, Ivan Vulić, Goran Glavaš, Roi Reichart, Anna Korhonen
Semantic specialization integrates structured linguistic knowledge from external resources (such as lexical relations in WordNet) into pretrained distributional vectors in the form of constraints.
2 code implementations • CL (ACL) 2020 • Nadav Oved, Amir Feder, Roi Reichart
We find that our best performing textual model is most associated with topics that are intuitively related to each prediction task and that better models yield higher correlation with more informative topics.
no code implementations • CONLL 2019 • Yi Zhu, Benjamin Heinzerling, Ivan Vulić, Michael Strube, Roi Reichart, Anna Korhonen
Recent work has validated the importance of subword information for word representation learning.
1 code implementation • IJCNLP 2019 • Ivan Vulić, Goran Glavaš, Roi Reichart, Anna Korhonen
A series of bilingual lexicon induction (BLI) experiments with 15 diverse languages (210 language pairs) show that fully unsupervised CLWE methods still fail for a large number of language pairs (e.g., they yield zero BLI performance for 87/210 pairs).
1 code implementation • ACL 2019 • Rotem Dror, Segev Shlomov, Roi Reichart
Comparing between Deep Neural Network (DNN) models based on their performance on unseen data is crucial for the progress of the NLP field.
1 code implementation • ACL 2019 • Yftah Ziser, Roi Reichart
Pivot Based Language Modeling (PBLM) (Ziser and Reichart, 2018a), combining LSTMs with pivot-based methods, has yielded significant progress in unsupervised domain adaptation.
no code implementations • NAACL 2019 • Ehsan Shareghi, Yingzhen Li, Yi Zhu, Roi Reichart, Anna Korhonen
While neural dependency parsers provide state-of-the-art accuracy for several languages, they still rely on large amounts of costly labeled training data.
no code implementations • TACL 2019 • Amichay Doitch, Ram Yazdi, Tamir Hazan, Roi Reichart
In this paper we propose a perturbation-based approach where sampling from a probabilistic model is computationally efficient.
no code implementations • EMNLP 2018 • Rivka Malca, Roi Reichart
Pinter et al. (2016) formalized the grammar of these queries and proposed semi-supervised algorithms for adapting parsers originally designed for the standard dependency grammar, so that they can account for the unique forest grammar of queries.
no code implementations • EMNLP 2018 • Daniela Gerz, Ivan Vulić, Edoardo Maria Ponti, Roi Reichart, Anna Korhonen
A key challenge in cross-lingual NLP is developing general language-independent architectures that are equally applicable to any language.
1 code implementation • EMNLP 2018 • Yftah Ziser, Roi Reichart
In the full setup, the model has access to unlabeled data from both pairs, while in the lazy setup, which is more realistic for truly resource-poor languages, unlabeled data is available for both domains but only for the source language.
1 code implementation • 5 Sep 2018 • Rotem Dror, Roi Reichart
Statistical significance testing plays an important role when drawing conclusions from experimental results in NLP papers.
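As a purely illustrative sketch (not taken from the paper), a paired bootstrap test is one common way to check whether system A's advantage over system B on a shared test set is statistically significant; the per-example scores below are synthetic.

```python
import numpy as np

def paired_bootstrap_pvalue(scores_a, scores_b, n_resamples=10_000, seed=0):
    """Approximate p-value for H0: system A is not better than system B,
    using paired per-example scores (e.g., 0/1 correctness) and resampling."""
    rng = np.random.default_rng(seed)
    scores_a, scores_b = np.asarray(scores_a), np.asarray(scores_b)
    n = len(scores_a)
    wins_for_b = 0
    for _ in range(n_resamples):
        idx = rng.integers(0, n, size=n)              # resample test examples with replacement
        delta = scores_a[idx].mean() - scores_b[idx].mean()
        if delta <= 0:                                # resampled difference fails to favor A
            wins_for_b += 1
    return wins_for_b / n_resamples

# Synthetic example: A is correct on ~78% of 500 items, B on ~72% of the same items.
rng = np.random.default_rng(1)
a = rng.binomial(1, 0.78, size=500)
b = rng.binomial(1, 0.72, size=500)
print("bootstrap p-value:", paired_bootstrap_pvalue(a, b))
```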
no code implementations • CL 2019 • Edoardo Maria Ponti, Helen O'Horan, Yevgeni Berzak, Ivan Vulić, Roi Reichart, Thierry Poibeau, Ekaterina Shutova, Anna Korhonen
Linguistic typology aims to capture structural and semantic variation across the world's languages.
no code implementations • ACL 2018 • Edoardo Maria Ponti, Roi Reichart, Anna Korhonen, Ivan Vulić
The transfer or share of knowledge between languages is a potential solution to resource scarcity in NLP.
1 code implementation • ACL 2018 • Rotem Dror, Gili Baumer, Segev Shlomov, Roi Reichart
We establish the fundamental concepts of significance testing and discuss the specific aspects of NLP tasks, experimental setups and evaluation measures that affect the choice of significance tests in NLP research.
1 code implementation • ACL 2018 • Guy Rotman, Ivan Vulić, Roi Reichart
We present a deep neural network that leverages images to improve bilingual text embeddings.
no code implementations • NAACL 2018 • Yftah Ziser, Roi Reichart
In particular, our model processes the information in the text with a sequential neural network (LSTM), and its output consists of a representation vector for every input word.
no code implementations • TACL 2018 • Daniela Gerz, Ivan Vulić, Edoardo Ponti, Jason Naradowsky, Roi Reichart, Anna Korhonen
Neural architectures are prominent in the construction of language models (LMs).
1 code implementation • TACL 2017 • Rotem Dror, Gili Baumer, Marina Bogomolov, Roi Reichart
With the ever-growing amounts of textual data from a large variety of languages, domains, and genres, it has become standard to evaluate NLP algorithms on multiple datasets in order to ensure consistent performance across heterogeneous setups.
2 code implementations • 1 Jun 2017 • Nikola Mrkšić, Ivan Vulić, Diarmuid Ó Séaghdha, Ira Leviant, Roi Reichart, Milica Gašić, Anna Korhonen, Steve Young
We present Attract-Repel, an algorithm for improving the semantic quality of word vectors by injecting constraints extracted from lexical resources.
no code implementations • ACL 2017 • Ivan Vulić, Nikola Mrkšić, Roi Reichart, Diarmuid Ó Séaghdha, Steve Young, Anna Korhonen
Morphologically rich languages accentuate two properties of distributional vector space models: 1) the difficulty of inducing accurate representations for low-frequency word forms; and 2) insensitivity to distinct lexical relations that have similar distributional signatures.
1 code implementation • ACL 2017 • Lotem Peled, Roi Reichart
Sarcasm is a form of speech in which speakers say the opposite of what they truly mean in order to convey a strong sentiment.
no code implementations • COLING 2016 • Helen O'Horan, Yevgeni Berzak, Ivan Vulić, Roi Reichart, Anna Korhonen
In recent years linguistic typology, which classifies the world's languages according to their functional and structural properties, has been widely used to support multilingual NLP.
2 code implementations • CONLL 2017 • Yftah Ziser, Roi Reichart
In particular, our model is a three-layer neural network that learns to encode the non-pivot features of an input example into a low-dimensional representation, so that the presence of pivot features (features that are prominent in both domains and convey useful information for the NLP task) in the example can be decoded from that representation.
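A rough, hypothetical sketch of this idea follows (the feature dimensions, data, and single training step are invented for illustration): encode the non-pivot features of an example into a low-dimensional vector and train it to predict which pivot features appear in that example.

```python
import torch
import torch.nn as nn

# Toy dimensions standing in for real feature spaces.
N_NONPIVOT, N_PIVOT, HIDDEN = 5000, 300, 100

class PivotPredictor(nn.Module):
    """Encode non-pivot features into a low-dimensional representation from
    which the presence of pivot features is decoded (multi-label prediction)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(N_NONPIVOT, HIDDEN)
        self.decoder = nn.Linear(HIDDEN, N_PIVOT)

    def forward(self, x_nonpivot):
        h = torch.sigmoid(self.encoder(x_nonpivot))   # low-dimensional representation
        return self.decoder(h)                        # logits over pivot features

model = PivotPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Synthetic unlabeled data: sparse binary bags of non-pivot and pivot features.
x = torch.bernoulli(torch.full((32, N_NONPIVOT), 0.01))
y = torch.bernoulli(torch.full((32, N_PIVOT), 0.05))
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
print(float(loss))
```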
no code implementations • 28 Sep 2016 • Hagar Loeub, Roi Reichart
We address the problem of integrating textual and visual information in vector space models for word meaning representation.
no code implementations • 25 Sep 2016 • Lilach Edelstein, Roi Reichart
We present a factorized compositional distributional semantics model for the representation of transitive verb constructions.
no code implementations • CONLL 2017 • Ivan Vulić, Roy Schwartz, Ari Rappoport, Roi Reichart, Anna Korhonen
With our selected context configurations, we train on only 14% (A), 26.2% (V), and 33.6% (N) of all dependency-based contexts, resulting in a reduced training time.
1 code implementation • EMNLP 2016 • Daniela Gerz, Ivan Vulić, Felix Hill, Roi Reichart, Anna Korhonen
Verbs play a critical role in the meaning of sentences, but these ubiquitous words have received little attention in recent distributional semantics research.
no code implementations • 10 May 2016 • Yuval Pinter, Roi Reichart, Idan Szpektor
A description and annotation guidelines for the Yahoo Webscope release of Query Treebank, Version 1.0, May 2016.
no code implementations • CONLL 2015 • Yevgeni Berzak, Roi Reichart, Boris Katz
This work examines the impact of cross-linguistic transfer on grammatical errors in English as Second Language (ESL) texts.
no code implementations • EMNLP 2016 • Rotem Dror, Roi Reichart
We present the Structured Weighted Violations Perceptron (SWVP) algorithm, a new structured prediction algorithm that generalizes the Collins Structured Perceptron (CSP).
no code implementations • ACL 2016 • Effi Levi, Roi Reichart, Ari Rappoport
The run time complexity of state-of-the-art inference algorithms in graph-based dependency parsing is super-linear in the number of input words (n).
no code implementations • 1 Aug 2015 • Ira Leviant, Roi Reichart
A common evaluation practice in the vector space models (VSMs) literature is to measure the models' ability to predict human judgments about lexical semantic relations between word pairs.
no code implementations • TACL 2015 • Yufan Guo, Roi Reichart, Anna Korhonen
Inferring the information structure of scientific documents is useful for many NLP applications.
3 code implementations • CL 2015 • Felix Hill, Roi Reichart, Anna Korhonen
We present SimLex-999, a gold standard resource for evaluating distributional semantic models that improves on existing resources in several important ways.
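To make the evaluation protocol concrete, here is a generic (not SimLex-specific) sketch of how such a resource is typically used: compute model similarities for the rated word pairs and correlate them with the human scores via Spearman's rho. The word vectors and ratings below are placeholders.

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate_similarity(vectors, rated_pairs):
    """vectors: dict word -> np.ndarray; rated_pairs: list of (w1, w2, human_score).
    Returns the Spearman correlation between model cosine similarities and human ratings."""
    model_scores, human_scores = [], []
    for w1, w2, rating in rated_pairs:
        if w1 in vectors and w2 in vectors:
            v1, v2 = vectors[w1], vectors[w2]
            cos = float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
            model_scores.append(cos)
            human_scores.append(rating)
    rho, _ = spearmanr(model_scores, human_scores)
    return rho

# Placeholder vectors and ratings purely for demonstration.
rng = np.random.default_rng(0)
vecs = {w: rng.normal(size=50) for w in ["cup", "mug", "car", "banana"]}
pairs = [("cup", "mug", 8.5), ("car", "banana", 0.5), ("cup", "car", 1.5)]
print("Spearman rho:", evaluate_similarity(vecs, pairs))
```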
no code implementations • WS 2014 • Yevgeni Berzak, Roi Reichart, Boris Katz
Linguists and psychologists have long been studying cross-linguistic transfer, the influence of native language properties on linguistic performance in a foreign language.
no code implementations • TACL 2014 • Felix Hill, Roi Reichart, Anna Korhonen
Multi-modal models that learn semantic representations from both linguistic and perceptual input outperform language-only models on a range of evaluations, and better reflect human concept acquisition.