Search Results for author: Peter Izsak

Found 10 papers, 4 papers with code

Exploring the Boundaries of Low-Resource BERT Distillation

no code implementations • EMNLP (sustainlp) 2020 • Moshe Wasserblat, Oren Pereg, Peter Izsak

We also show that the distillation of large pre-trained models is more effective in real-life scenarios where limited amounts of labeled training data are available.

Model Compression

Optimizing Retrieval-augmented Reader Models via Token Elimination

1 code implementation • 20 Oct 2023 • Moshe Berchansky, Peter Izsak, Avi Caciularu, Ido Dagan, Moshe Wasserblat

Fusion-in-Decoder (FiD) is an effective retrieval-augmented language model applied across a variety of open-domain tasks, such as question answering, fact checking, etc.

Answer Generation • Fact Checking • +3

Transformer Language Models without Positional Encodings Still Learn Positional Information

1 code implementation • 30 Mar 2022 • Adi Haviv, Ori Ram, Ofir Press, Peter Izsak, Omer Levy

Causal transformer language models (LMs), such as GPT-3, typically require some form of positional encoding, such as positional embeddings.

Position

How to Train BERT with an Academic Budget

4 code implementations • EMNLP 2021 • Peter Izsak, Moshe Berchansky, Omer Levy

While large language models a la BERT are used ubiquitously in NLP, pretraining them is considered a luxury that only a few well-funded industry labs can afford.

Language Modelling • Linguistic Acceptability • +4

Q8BERT: Quantized 8Bit BERT

5 code implementations • 14 Oct 2019 • Ofir Zafrir, Guy Boudoukh, Peter Izsak, Moshe Wasserblat

Recently, pre-trained Transformer-based language models such as BERT and GPT have shown great improvement in many Natural Language Processing (NLP) tasks.

Linguistic Acceptability • Natural Language Inference • +3
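
To make the 8-bit idea behind the Q8BERT entry above concrete, here is a minimal NumPy sketch of symmetric per-tensor linear quantization and dequantization; it is a generic illustration with assumed function names, not the paper's quantization-aware training recipe.

```python
import numpy as np

def quantize_symmetric_int8(x: np.ndarray):
    """Map a float tensor to int8 with a single per-tensor scale (symmetric scheme)."""
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float tensor from the int8 values and their scale."""
    return q.astype(np.float32) * scale

# Round-trip a random weight matrix and report the quantization error.
w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_symmetric_int8(w)
print("max abs error:", np.abs(w - dequantize_int8(q, s)).max())
```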

Training Compact Models for Low Resource Entity Tagging using Pre-trained Language Models

no code implementations • 14 Oct 2019 • Peter Izsak, Shira Guskin, Moshe Wasserblat

In this work-in-progress we combined the effectiveness of transfer learning provided by pre-trained masked language models with a semi-supervised approach to train a fast and compact model using labeled and unlabeled examples.

Language Modelling • Low Resource Named Entity Recognition • +4
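
As a rough sketch of the teacher-student setup the compact-model entry above alludes to (pre-trained teacher, small student, labeled plus unlabeled examples), the snippet below shows a standard soft-label distillation loss in PyTorch; the function name, temperature, and mixing weight are illustrative assumptions, not the paper's exact training objective.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels=None, T=2.0, alpha=0.5):
    """KL divergence between temperature-scaled teacher and student distributions,
    optionally mixed with cross-entropy on gold labels when they exist."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    if labels is None:  # unlabeled example: learn from the teacher signal only
        return soft
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Dummy logits for a 5-class tagging decision on a batch of 8 tokens.
student = torch.randn(8, 5)
teacher = torch.randn(8, 5)
gold = torch.randint(0, 5, (8,))
print(distillation_loss(student, teacher, gold))
```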

Term Set Expansion based NLP Architect by Intel AI Lab

no code implementations • EMNLP 2018 • Jonathan Mamou, Oren Pereg, Moshe Wasserblat, Alon Eirew, Yael Green, Shira Guskin, Peter Izsak, Daniel Korat

We present SetExpander, a corpus-based system for expanding a seed set of terms into a more complete set of terms that belong to the same semantic class.

Term Set Expansion based on Multi-Context Term Embeddings: an End-to-end Workflow

no code implementations • 26 Jul 2018 • Jonathan Mamou, Oren Pereg, Moshe Wasserblat, Ido Dagan, Yoav Goldberg, Alon Eirew, Yael Green, Shira Guskin, Peter Izsak, Daniel Korat

We present SetExpander, a corpus-based system for expanding a seed set of terms into a more complete set of terms that belong to the same semantic class.
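
The SetExpander entries above expand a seed set using term embeddings; as a toy illustration of similarity-based ranking (not the multi-context embedding pipeline itself), the sketch below scores candidate terms by their average cosine similarity to the seed terms. All names and the 2-d toy vectors are assumptions for the example.

```python
import numpy as np

def expand_seed_set(seed_terms, term_embeddings, top_k=3):
    """Rank candidate terms by average cosine similarity to the seed terms
    and return the top-k highest-scoring terms outside the seed set."""
    def unit(v):
        return v / (np.linalg.norm(v) + 1e-9)

    seeds = np.stack([unit(term_embeddings[t]) for t in seed_terms])
    scored = []
    for term, vec in term_embeddings.items():
        if term in seed_terms:
            continue
        scored.append((float((seeds @ unit(vec)).mean()), term))
    return [term for _, term in sorted(scored, reverse=True)[:top_k]]

# Toy example with hand-made 2-d "embeddings".
emb = {
    "paris": np.array([1.0, 0.1]), "london": np.array([0.9, 0.2]),
    "berlin": np.array([0.95, 0.15]), "banana": np.array([0.1, 1.0]),
}
print(expand_seed_set(["paris", "london"], emb))  # "berlin" ranks first
```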
