1 code implementation • 14 Nov 2023 • Yifu Qiu, Zheng Zhao, Yftah Ziser, Anna Korhonen, Edoardo M. Ponti, Shay B. Cohen
Instead, we provide LLMs with textual narratives and probe them with respect to their common-sense knowledge of the structure and duration of events, their ability to order events along a timeline, and self-consistency within their temporal model (e.g., temporal relations such as after and before are mutually exclusive for any pair of events).
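As a minimal illustration of the self-consistency probe described above, the sketch below checks that a model's answers to "did A happen before B?" and "did B happen before A?" are mutually exclusive for every event pair. The `asks_before` callable is a hypothetical stand-in for an LLM query wrapper, not the paper's actual interface.

```python
from itertools import combinations

def self_consistency(events, asks_before):
    """Fraction of event pairs where 'A before B' and 'B before A'
    receive mutually exclusive answers."""
    pairs = list(combinations(events, 2))
    consistent = sum(asks_before(a, b) != asks_before(b, a) for a, b in pairs)
    return consistent / len(pairs)

# Usage with a stub "model" that orders events alphabetically:
print(self_consistency(["breakfast", "dinner", "lunch"], lambda a, b: a < b))
```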
1 code implementation • 24 Oct 2023 • Zheng Zhao, Yftah Ziser, Bonnie Webber, Shay B. Cohen
Using this tool, we study to what extent and how morphosyntactic features are reflected in the representations learned by multilingual pre-trained models.
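The excerpt does not spell out the tool itself; as a generic, hedged illustration of testing whether a morphosyntactic feature is recoverable from hidden representations, one can fit a simple linear probe. This is a standard baseline, not the paper's method, and the data below is synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
reps = rng.normal(size=(1000, 768))      # stand-in for multilingual encoder states
labels = rng.integers(0, 2, size=1000)   # stand-in for a binary morphosyntactic tag

X_tr, X_te, y_tr, y_te = train_test_split(reps, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"probe accuracy: {probe.score(X_te, y_te):.3f}")  # ~0.5 on random data
```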
1 code implementation • 23 May 2023 • Yifu Qiu, Yftah Ziser, Anna Korhonen, Edoardo M. Ponti, Shay B. Cohen
With existing faithfulness metrics focusing on English, even measuring the extent of this phenomenon in cross-lingual settings is difficult.
1 code implementation • 18 Feb 2023 • Weixian Waylon Li, Yftah Ziser, Maximin Coavoux, Shay B. Cohen
While the first decoding method matches a proof to a statement without being aware of other statements or proofs, the second method treats the task as a global matching problem.
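A minimal sketch of the contrast between the two decoding regimes, assuming `scores[i, j]` is a model-assigned compatibility score between statement i and proof j (the scores here are random stand-ins):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

scores = np.random.default_rng(0).random((4, 4))  # stand-in model scores

# Local decoding: each statement independently picks its highest-scoring
# proof; two statements may end up claiming the same proof.
local = scores.argmax(axis=1)

# Global decoding: solve a one-to-one assignment that maximizes the total
# score, so every proof is matched at most once.
_, global_match = linear_sum_assignment(scores, maximize=True)
print(local, global_match)
```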
1 code implementation • 6 Feb 2023 • Shun Shao, Yftah Ziser, Shay Cohen
We present the Assignment-Maximization Spectral Attribute removaL (AMSAL) algorithm, which erases information from neural representations when the information to be erased is implicit rather than being directly aligned to each input example.
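A condensed, illustrative sketch of the alternating idea: guess an alignment between inputs and attribute vectors, re-estimate, then project out the leading cross-covariance directions. The scoring rule, iteration count, and rank below are assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def amsal_sketch(X, Z, k=2, iters=5):
    """X: (n, d) representations; Z: (n, m) unaligned attribute vectors."""
    perm = np.arange(len(Z))              # start from the given order
    for _ in range(iters):
        C = X.T @ Z[perm]                 # cross-covariance under current alignment
        sim = (X @ C) @ Z.T               # score every (x_i, z_j) pair
        _, perm = linear_sum_assignment(sim, maximize=True)  # re-assign
    # Remove the top-k cross-covariance directions (spectral removal).
    U, _, _ = np.linalg.svd(X.T @ Z[perm], full_matrices=False)
    P = np.eye(X.shape[1]) - U[:, :k] @ U[:, :k].T
    return X @ P, perm

rng = np.random.default_rng(0)
X_clean, perm = amsal_sketch(rng.normal(size=(50, 16)), rng.normal(size=(50, 4)))
```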
1 code implementation • 22 Oct 2022 • Zheng Zhao, Yftah Ziser, Shay B. Cohen
We investigate how different domains are encoded in modern neural network architectures.
1 code implementation • 2 Sep 2022 • Eyal Ben-David, Yftah Ziser, Roi Reichart
In this setup, we aim to efficiently annotate data from a set of source domains such that the trained model performs well on a sensitive target domain from which data is unavailable for annotation.
1 code implementation • 25 May 2022 • Marcio Fonseca, Yftah Ziser, Shay B. Cohen
We argue that disentangling content selection from the budget used to cover salient content improves the performance and applicability of abstractive summarizers (a toy sketch of this factorization follows below).
Ranked #1 on Text Summarization on GovReport
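A toy sketch of the factorization being argued for: selection decides what to cover, and the budget is applied as an independent knob. The word-overlap salience score is a stand-in; the paper's components are neural.

```python
def select_content(sentences, query_terms, top_k=2):
    # Content selection: rank sentences by a crude salience score.
    score = lambda s: sum(t in s.lower() for t in query_terms)
    return sorted(sentences, key=score, reverse=True)[:top_k]

def apply_budget(sentences, budget_words=12):
    # Budget is enforced independently of what was selected.
    words = " ".join(sentences).split()
    return " ".join(words[:budget_words])

doc = ["The agency reviewed its spending program.",
       "The weather was mild that week.",
       "Spending rose sharply in 2020."]
print(apply_budget(select_content(doc, ["spending"]), budget_words=10))
```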
1 code implementation • 15 Mar 2022 • Shun Shao, Yftah Ziser, Shay B. Cohen
We describe a simple and effective method (Spectral Attribute removaL; SAL) to remove private or guarded information from neural representations.
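In the spirit of spectral removal, a minimal sketch: take the SVD of the cross-covariance between representations and guarded attributes, then project out the leading left singular directions. The rank and centering choices here are illustrative.

```python
import numpy as np

def sal(X, Z, k=2):
    """X: (n, d) representations; Z: (n, m) guarded attributes (aligned)."""
    Xc = X - X.mean(axis=0)
    Zc = Z - Z.mean(axis=0)
    U, _, _ = np.linalg.svd(Xc.T @ Zc, full_matrices=False)
    # Project out the k directions of X most correlated with Z.
    P = np.eye(X.shape[1]) - U[:, :k] @ U[:, :k].T
    return X @ P
```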
1 code implementation • 1 Sep 2021 • Entony Lekhtman, Yftah Ziser, Roi Reichart
We name this scheme DILBERT: Domain Invariant Learning with BERT, and customize it for aspect extraction in the unsupervised domain adaptation setting.
no code implementations • ACL 2021 • Nachshon Cohen, Oren Kalinsky, Yftah Ziser, Alessandro Moschitti
Recent work has made significant advances on summarization tasks, facilitated by summarization datasets.
no code implementations • NAACL 2021 • Ohad Rozen, David Carmel, Avihai Mejer, Vitaly Mirkis, Yftah Ziser
In this work, we propose a novel and complementary approach for predicting the answer for such questions, based on the answers for similar questions asked on similar products.
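A nearest-neighbour sketch of the idea, with TF-IDF cosine similarity standing in for the paper's actual similarity components: answer a new product question by borrowing the answer to the most similar previously asked question.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

qa_bank = [("does it fit a 15 inch laptop", "Yes, up to 15.6 inches."),
           ("is the shoulder strap adjustable", "Yes, fully adjustable.")]
query = "will a 15 inch laptop fit inside"

questions = [q for q, _ in qa_bank]
vec = TfidfVectorizer().fit(questions + [query])
sims = cosine_similarity(vec.transform([query]), vec.transform(questions))
print(qa_bank[sims.argmax()][1])   # answer borrowed from the nearest question
```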
1 code implementation • ACL 2019 • Yftah Ziser, Roi Reichart
Pivot Based Language Modeling (PBLM) (Ziser and Reichart, 2018a), combining LSTMs with pivot-based methods, has yielded significant progress in unsupervised domain adaptation.
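A compact PyTorch sketch of the PBLM idea: an LSTM language model whose output vocabulary is the pivot inventory plus a single NONE symbol, so that at each step the model predicts the next pivot (or that none applies). Sizes and the pivot set below are placeholders.

```python
import torch
import torch.nn as nn

class PBLM(nn.Module):
    def __init__(self, vocab_size, num_pivots, emb=128, hid=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, num_pivots + 1)  # +1 class for NONE

    def forward(self, tokens):                     # tokens: (batch, seq)
        h, _ = self.lstm(self.embed(tokens))
        return self.out(h)                         # (batch, seq, pivots + 1)

model = PBLM(vocab_size=1000, num_pivots=50)
logits = model(torch.randint(0, 1000, (2, 7)))     # -> shape (2, 7, 51)
```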
1 code implementation • EMNLP 2018 • Yftah Ziser, Roi Reichart
In the full setup the model has access to unlabeled data from both (language, domain) pairs, while in the lazy setup, which is more realistic for truly resource-poor languages, unlabeled data is available for both domains but only for the source language.
no code implementations • NAACL 2018 • Yftah Ziser, Roi Reichart
Specifically, our model processes the input text with a sequential NN (an LSTM), and its output consists of a representation vector for every input word.
2 code implementations • CoNLL 2017 • Yftah Ziser, Roi Reichart
Specifically, our model is a three-layer neural network that learns to encode the non-pivot features of an input example into a low-dimensional representation, so that the presence of pivot features (features that are prominent in both domains and convey useful information for the NLP task) in the example can be decoded from that representation.
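An illustrative encoder-decoder sketch of that scheme: compress an example's non-pivot features into a low-dimensional hidden layer and decode the presence of each pivot feature from it. Dimensions and the sigmoid/BCE choices are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class PivotAE(nn.Module):
    def __init__(self, num_nonpivots, num_pivots, hidden=100):
        super().__init__()
        self.encoder = nn.Linear(num_nonpivots, hidden)
        self.decoder = nn.Linear(hidden, num_pivots)

    def forward(self, nonpivot_feats):
        h = torch.sigmoid(self.encoder(nonpivot_feats))  # low-dim encoding
        return self.decoder(h)   # logits: which pivots occur in the example

model = PivotAE(num_nonpivots=5000, num_pivots=100)
logits = model(torch.rand(4, 5000))   # trained with BCEWithLogitsLoss
```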