Search Results for author: Yftah Ziser

Found 17 papers, 14 papers with code

DILBERT: Customized Pre-Training for Domain Adaptation with Category Shift, with an Application to Aspect Extraction

1 code implementation EMNLP 2021 Entony Lekhtman, Yftah Ziser, Roi Reichart

We name this scheme DILBERT: Domain Invariant Learning with BERT, and customize it for aspect extraction in the unsupervised domain adaptation setting.

Aspect Extraction · Language Modelling · +1
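
The scheme is built around aspect categories. As a loose, hypothetical illustration of category-conditioned masking (not the authors' exact pre-training procedure), one can bias MLM masking toward tokens whose embeddings are close to the target domain's category names:

```python
# Hypothetical sketch: bias masked-LM masking toward tokens that are
# semantically close to aspect-category names, instead of masking
# uniformly at random. All names and sizes here are illustrative.
import numpy as np

def category_masking(tokens, token_vecs, category_vecs, mask_rate=0.15):
    """Return indices of the tokens to mask, preferring tokens whose
    embeddings have high cosine similarity to any category name."""
    sims = np.zeros(len(tokens))
    for i, v in enumerate(token_vecs):
        for c in category_vecs:
            cos = v @ c / (np.linalg.norm(v) * np.linalg.norm(c) + 1e-9)
            sims[i] = max(sims[i], cos)
    k = max(1, int(mask_rate * len(tokens)))
    return np.argsort(-sims)[:k]  # the k most category-like token positions
```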

Are Large Language Models Temporally Grounded?

1 code implementation 14 Nov 2023 Yifu Qiu, Zheng Zhao, Yftah Ziser, Anna Korhonen, Edoardo M. Ponti, Shay B. Cohen

Instead, we provide LLMs with textual narratives and probe them with respect to their common-sense knowledge of the structure and duration of events, their ability to order events along a timeline, and self-consistency within their temporal model (e.g., temporal relations such as after and before are mutually exclusive for any pair of events).

Common Sense Reasoning · In-Context Learning · +2
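
The mutual-exclusivity probe mentioned in the abstract is easy to operationalise. A minimal sketch, assuming a generic `ask` function that sends a prompt to some LLM and returns "yes" or "no":

```python
# Self-consistency probe: "before" and "after" are mutually exclusive for
# a strictly ordered pair of events, so the two answers must differ.
# `ask` is a stand-in for whatever LLM interface is available.
def consistent(ask, narrative, event_a, event_b):
    q1 = f"{narrative}\nDid '{event_a}' happen before '{event_b}'? Answer yes or no."
    q2 = f"{narrative}\nDid '{event_a}' happen after '{event_b}'? Answer yes or no."
    a1 = ask(q1).strip().lower()
    a2 = ask(q2).strip().lower()
    return a1 != a2  # "yes" (or "no") to both signals temporal inconsistency
```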

A Joint Matrix Factorization Analysis of Multilingual Representations

1 code implementation 24 Oct 2023 Zheng Zhao, Yftah Ziser, Bonnie Webber, Shay B. Cohen

Using this tool, we study to what extent and how morphosyntactic features are reflected in the representations learned by multilingual pre-trained models.
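
To make the idea concrete, here is a toy joint-factorization sketch in the same spirit (a shared SVD basis across languages); it illustrates the general approach only, not the paper's actual tool:

```python
# Toy sketch: stack representation matrices from several languages and
# recover a shared low-rank basis with a single SVD.
import numpy as np

def shared_basis(reps, rank):
    """reps: list of (n_tokens_i, d) representation matrices, one per language."""
    stacked = np.vstack(reps)                  # pool rows across languages
    _, _, vt = np.linalg.svd(stacked, full_matrices=False)
    basis = vt[:rank]                          # shared directions in R^d
    loadings = [r @ basis.T for r in reps]     # per-language coordinates
    return basis, loadings
```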

Detecting and Mitigating Hallucinations in Multilingual Summarisation

1 code implementation 23 May 2023 Yifu Qiu, Yftah Ziser, Anna Korhonen, Edoardo M. Ponti, Shay B. Cohen

With existing faithfulness metrics focusing on English, even measuring the extent of this phenomenon in cross-lingual settings is hard.

Cross-Lingual Transfer

BERT is not The Count: Learning to Match Mathematical Statements with Proofs

1 code implementation 18 Feb 2023 Weixian Waylon Li, Yftah Ziser, Maximin Coavoux, Shay B. Cohen

While the first decoding method matches a proof to a statement without being aware of other statements or proofs, the second method treats the task as a global matching problem.

Information Retrieval · Retrieval
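
The global view can be cast as an assignment problem: given a statement-proof score matrix (the scoring model itself is assumed here and not shown), the Hungarian algorithm yields a one-to-one, globally optimal matching:

```python
# Global matching sketch: assign each statement to a distinct proof so
# that the total similarity score is maximized.
import numpy as np
from scipy.optimize import linear_sum_assignment

scores = np.random.rand(5, 5)                 # scores[i, j]: statement i vs. proof j
rows, cols = linear_sum_assignment(scores, maximize=True)
matching = dict(zip(rows, cols))              # statement index -> proof index
```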

Erasure of Unaligned Attributes from Neural Representations

1 code implementation 6 Feb 2023 Shun Shao, Yftah Ziser, Shay Cohen

We present the Assignment-Maximization Spectral Attribute removaL (AMSAL) algorithm, which erases information from neural representations when the information to be erased is implicit rather than directly aligned to each input example.

Attribute
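
A loose sketch of the alternating idea (the paper's exact assignment and maximization updates differ): guess an alignment between examples and attribute rows, refine it, and only then erase. The `erase` callable could be a SAL-style projection such as the one sketched under the SAL paper below; everything here is illustrative.

```python
# Hypothetical AMSAL-like loop: alternate between (1) tying X to the
# currently aligned Z with CCA-like linear maps and (2) re-assigning each
# example to its nearest attribute row; erase once the alignment settles.
import numpy as np

def amsal_like(X, Z, erase, n_iters=5):
    """X: (n, d) representations; Z: (n, k) attribute rows in unknown order."""
    perm = np.arange(len(X))
    for _ in range(n_iters):
        u, _, vt = np.linalg.svd(X.T @ Z[perm], full_matrices=False)
        xs, zs = X @ u, Z @ vt.T             # project both sides into a shared space
        perm = np.argmax(xs @ zs.T, axis=1)  # re-pair examples and attributes
    return erase(X, Z[perm])                 # removal under the final alignment
```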

Understanding Domain Learning in Language Models Through Subpopulation Analysis

1 code implementation 22 Oct 2022 Zheng Zhao, Yftah Ziser, Shay B. Cohen

We investigate how different domains are encoded in modern neural network architectures.

Language Modelling

Domain Adaptation from Scratch

1 code implementation 2 Sep 2022 Eyal Ben-David, Yftah Ziser, Roi Reichart

In this setup, we aim to efficiently annotate data from a set of source domains such that the trained model performs well on a sensitive target domain from which data is unavailable for annotation.

Active Learning · Domain Adaptation · +5

Factorizing Content and Budget Decisions in Abstractive Summarization of Long Documents

1 code implementation 25 May 2022 Marcio Fonseca, Yftah Ziser, Shay B. Cohen

We argue that disentangling content selection from the budget used to cover salient content improves the performance and applicability of abstractive summarizers.

Abstractive Text Summarization · Disentanglement · +2
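
As an illustration only (not the paper's actual system), factorizing the two decisions might look like scoring candidate sentences for salience with some assumed model, then filling an explicit token budget greedily:

```python
# Content vs. budget, factorized: salience scores decide *what* to keep,
# an explicit token budget decides *how much*. Purely illustrative.
def fill_budget(candidates, salience, budget_tokens):
    chosen, used = [], 0
    for sent, _score in sorted(zip(candidates, salience), key=lambda p: -p[1]):
        n_tokens = len(sent.split())
        if used + n_tokens <= budget_tokens:
            chosen.append(sent)
            used += n_tokens
    return chosen
```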

Gold Doesn't Always Glitter: Spectral Removal of Linear and Nonlinear Guarded Attribute Information

1 code implementation 15 Mar 2022 Shun Shao, Yftah Ziser, Shay B. Cohen

We describe a simple and effective method (Spectral Attribute removaL; SAL) to remove private or guarded information from neural representations.

Attribute
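
A minimal sketch of the linear variant, assuming attribute labels aligned to examples: take the SVD of the cross-covariance between representations and the guarded attribute, and project out the most attribute-covariant directions (centering and hyperparameter details are simplified relative to the paper):

```python
# SAL-style removal sketch: delete the directions of X that covary most
# strongly with the guarded attribute Z.
import numpy as np

def sal_remove(X, Z, n_directions=2):
    """X: (n, d) representations; Z: (n, k) guarded attribute values."""
    Xc, Zc = X - X.mean(0), Z - Z.mean(0)
    u, _, _ = np.linalg.svd(Xc.T @ Zc, full_matrices=False)  # (d, r)
    u = u[:, :n_directions]           # most attribute-covariant directions
    return X - (X @ u) @ u.T          # project them out of every row
```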

Answering Product-Questions by Utilizing Questions from Other Contextually Similar Products

no code implementations NAACL 2021 Ohad Rozen, David Carmel, Avihai Mejer, Vitaly Mirkis, Yftah Ziser

In this work, we propose a novel and complementary approach for predicting the answer to such questions, based on the answers to similar questions asked about similar products.
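
A toy sketch of that transfer idea, assuming question embeddings are available from some encoder: retrieve the most similar question asked about a similar product and reuse its answer.

```python
# Illustrative nearest-neighbour answer transfer across similar products.
import numpy as np

def answer_by_transfer(q_vec, neighbor_q_vecs, neighbor_answers):
    """q_vec: (d,) query embedding; neighbor_q_vecs: (m, d) embeddings of
    questions asked about contextually similar products."""
    sims = neighbor_q_vecs @ q_vec / (
        np.linalg.norm(neighbor_q_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-9)
    return neighbor_answers[int(np.argmax(sims))]
```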

Task Refinement Learning for Improved Accuracy and Stability of Unsupervised Domain Adaptation

1 code implementation ACL 2019 Yftah Ziser, Roi Reichart

Pivot Based Language Modeling (PBLM) (Ziser and Reichart, 2018a), combining LSTMs with pivot-based methods, has yielded significant progress in unsupervised domain adaptation.

Language Modelling · Sentiment Analysis · +2

Deep Pivot-Based Modeling for Cross-language Cross-domain Transfer with Minimal Guidance

1 code implementation EMNLP 2018 Yftah Ziser, Roi Reichart

In the full setup the model has access to unlabeled data from both language-domain pairs, while in the lazy setup, which is more realistic for truly resource-poor languages, unlabeled data is available for both domains but only for the source language.

Word Embeddings

Pivot Based Language Modeling for Improved Neural Domain Adaptation

no code implementations NAACL 2018 Yftah Ziser, Roi Reichart

Specifically, our model processes the text with a sequential NN (LSTM), and its output consists of a representation vector for every input word.

Domain Adaptation · Language Modelling · +3
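
A rough PyTorch sketch of a PBLM-style model as described above: an LSTM reads the text and, at each position, predicts the next token's identity if it is a pivot and a single NONE class otherwise. The pivot inventory, sizes, and training loop are assumed, not taken from the paper's code.

```python
import torch.nn as nn

class PBLMSketch(nn.Module):
    def __init__(self, vocab_size, n_pivots, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        # n_pivots pivot classes plus one shared NONE class for non-pivots
        self.out = nn.Linear(hidden_dim, n_pivots + 1)

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))  # (batch, seq, hidden_dim)
        return self.out(h), h  # per-position logits + reusable word vectors
```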

Neural Structural Correspondence Learning for Domain Adaptation

2 code implementations CONLL 2017 Yftah Ziser, Roi Reichart

Specifically, our model is a three-layer neural network that learns to encode the non-pivot features of an input example into a low-dimensional representation, so that the presence of pivot features (features that are prominent in both domains and convey useful information for the NLP task) in the example can be decoded from that representation.

Denoising · Domain Adaptation · +4
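
A hedged PyTorch sketch of that three-layer idea: encode the non-pivot feature vector into a low-dimensional code, then decode from that code alone which pivot features appear (multi-label prediction). Dimensions and names are placeholders.

```python
import torch.nn as nn

class NeuralSCLSketch(nn.Module):
    def __init__(self, n_nonpivots, n_pivots, code_dim=100):
        super().__init__()
        self.encode = nn.Linear(n_nonpivots, code_dim)  # non-pivots -> code
        self.decode = nn.Linear(code_dim, n_pivots)     # code -> pivot logits

    def forward(self, nonpivot_feats):
        code = self.encode(nonpivot_feats).sigmoid()
        return self.decode(code)  # train with nn.BCEWithLogitsLoss on pivot presence
```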
