Search Results for author: Shira Guskin

Found 6 papers, 1 paper with code

QuaLA-MiniLM: a Quantized Length Adaptive MiniLM

2 code implementations • 31 Oct 2022 • Shira Guskin, Moshe Wasserblat, Chang Wang, Haihao Shen

Our quantized length-adaptive MiniLM model (QuaLA-MiniLM) is trained only once, dynamically fits any inference scenario, and achieves an accuracy-efficiency trade-off superior to other efficient approaches for any computational budget on the SQuAD1.1 dataset (up to x8.8 speedup with <1% accuracy loss).

Computational Efficiency • Knowledge Distillation • +2
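A minimal sketch of the quantization half of this idea: post-training dynamic INT8 quantization of a MiniLM-style QA model with standard PyTorch/Transformers APIs. This is not the paper's QuaLA-MiniLM recipe (which also trains a length-adaptive transformer); the checkpoint name is an assumed placeholder and would need to be a SQuAD-finetuned model to give sensible answers.

```python
# Hedged sketch: dynamic INT8 quantization of a MiniLM-style QA model.
# The checkpoint name below is a placeholder assumption, not the paper's model.
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "your-squad-finetuned-minilm"  # hypothetical checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
model.eval()

# Quantize all Linear layers to INT8 weights with dynamic activation quantization.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

question = "Who created SQuAD?"
context = "SQuAD was created by researchers at Stanford University."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = quantized(**inputs)

# Decode the highest-scoring answer span from the start/end logits.
start = outputs.start_logits.argmax(-1).item()
end = outputs.end_logits.argmax(-1).item()
print(tokenizer.decode(inputs["input_ids"][0][start:end + 1]))
```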

Training Compact Models for Low Resource Entity Tagging using Pre-trained Language Models

no code implementations • 14 Oct 2019 • Peter Izsak, Shira Guskin, Moshe Wasserblat

In this work-in-progress, we combine the effectiveness of transfer learning from pre-trained masked language models with a semi-supervised approach to train a fast and compact model using labeled and unlabeled examples.

Language Modelling • Low Resource Named Entity Recognition • +4
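The teacher-student setup behind this kind of compact-model training can be illustrated with a generic distillation objective: the student matches the teacher's softened logits while also fitting gold (or teacher-generated pseudo) labels. This is only an illustrative sketch with random tensors, not the paper's tagging architecture.

```python
# Hedged sketch: generic knowledge-distillation loss (soft teacher targets + hard labels).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend KL divergence to the teacher's softened distribution with
    cross-entropy on the gold (or pseudo) labels."""
    soft_teacher = F.log_softmax(teacher_logits / T, dim=-1)
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, log_target=True,
                  reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Toy usage: random tensors standing in for per-token entity-tag logits.
student = torch.randn(8, 5)            # 8 tokens, 5 tag classes
teacher = torch.randn(8, 5)
labels = torch.randint(0, 5, (8,))
print(distillation_loss(student, teacher, labels).item())
```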

Term Set Expansion based NLP Architect by Intel AI Lab

no code implementations • EMNLP 2018 • Jonathan Mamou, Oren Pereg, Moshe Wasserblat, Alon Eirew, Yael Green, Shira Guskin, Peter Izsak, Daniel Korat

We present SetExpander, a corpus-based system for expanding a seed set of terms into a more complete set of terms that belong to the same semantic class.

Term Set Expansion based on Multi-Context Term Embeddings: an End-to-end Workflow

no code implementations • 26 Jul 2018 • Jonathan Mamou, Oren Pereg, Moshe Wasserblat, Ido Dagan, Yoav Goldberg, Alon Eirew, Yael Green, Shira Guskin, Peter Izsak, Daniel Korat

We present SetExpander, a corpus-based system for expanding a seed set of terms into a more complete set of terms that belong to the same semantic class.
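The core idea of seed-set expansion can be sketched as nearest-neighbour search over term embeddings: rank candidate terms by similarity to the centroid of the seed terms. This is a simplified stand-in, assuming a toy vocabulary and random vectors; SetExpander's actual multi-context embedding combination is not reproduced here.

```python
# Hedged sketch: seed-set expansion via cosine similarity to the seed centroid.
# Embeddings here are random placeholders; in practice they would be trained on a corpus.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["paris", "london", "berlin", "madrid", "banana", "guitar"]
embeddings = rng.normal(size=(len(vocab), 50))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

def expand(seed_terms, top_k=3):
    """Rank non-seed terms by cosine similarity to the mean seed vector."""
    idx = [vocab.index(t) for t in seed_terms]
    centroid = embeddings[idx].mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    scores = embeddings @ centroid
    ranked = [vocab[i] for i in np.argsort(-scores) if vocab[i] not in seed_terms]
    return ranked[:top_k]

print(expand(["paris", "london"]))
```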
