no code implementations • IJCNLP 2019 • Oren Melamud, Mihaela Bornea, Ken Barker
In this work, we combine these two approaches to improve low-shot text classification with two novel methods: a simple bag-of-words embedding approach, and a more complex context-aware method based on the BERT model.
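A minimal sketch of the bag-of-words embedding idea in the low-shot setting (the toy vectors, texts, and labels below are hypothetical; the paper's actual method and data differ): average pretrained word vectors to embed a text, then assign the label of the nearest class centroid built from the few labeled examples.

```python
import numpy as np

# Toy 2-d word vectors standing in for pretrained embeddings (hypothetical values).
EMB = {
    "great": np.array([0.9, 0.1]),
    "awful": np.array([0.1, 0.9]),
    "movie": np.array([0.5, 0.5]),
    "fine":  np.array([0.8, 0.2]),
    "bad":   np.array([0.2, 0.8]),
}

def bow_embed(text):
    """Average the embeddings of a text's words (bag-of-words encoding)."""
    vecs = [EMB[w] for w in text.split() if w in EMB]
    return np.mean(vecs, axis=0)

def classify(text, centroids):
    """Assign the label whose class centroid has the highest cosine similarity."""
    v = bow_embed(text)
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(centroids, key=lambda lbl: cos(v, centroids[lbl]))

# One labeled example per class -- the low-shot setting.
centroids = {"pos": bow_embed("great movie"), "neg": bow_embed("awful movie")}
print(classify("fine movie", centroids))  # pos
print(classify("bad movie", centroids))   # neg
```

The BERT-based variant in the paper replaces the order-insensitive average with a contextualized encoding of the full text.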
1 code implementation • WS 2019 • Oren Melamud, Chaitanya Shivade
Large-scale clinical data is invaluable for driving many computational scientific advances today.
no code implementations • COLING 2018 • Jacob Goldberger, Oren Melamud
Self-normalizing discriminative models approximate the normalized probability of a class without having to compute the partition function.
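A hedged illustration of the general idea (a toy log-linear setup of my own, not the paper's model): add a penalty alpha*(log Z)^2 to the usual negative log-likelihood so that training drives the partition function Z toward 1, after which exp(score) can be read directly as a probability without summing over all classes.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
X = np.hstack([X, np.ones((50, 1))])    # bias feature lets log Z be shifted freely
y = rng.integers(0, 3, size=50)          # 3 classes, random toy labels
W = rng.normal(size=(3, 3)) * 0.1
alpha, lr = 0.5, 0.1

for _ in range(500):
    S = X @ W                            # unnormalized class scores
    logZ = np.log(np.exp(S).sum(axis=1))
    P = np.exp(S - logZ[:, None])        # normalized probabilities (training only)
    G = P.copy()
    G[np.arange(len(y)), y] -= 1.0       # gradient of the NLL term
    G += 2 * alpha * logZ[:, None] * P   # gradient of the alpha*(log Z)^2 penalty
    W -= lr * (X.T @ G) / len(y)

S = X @ W
logZ = np.log(np.exp(S).sum(axis=1))
print(np.abs(logZ).mean())               # near 0: scores are roughly self-normalized
```

At test time such a model scores a single class in O(1) instead of O(#classes), which is the practical payoff when the label space (e.g. a vocabulary) is large.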
no code implementations • EMNLP 2017 • Oren Melamud, Ido Dagan, Jacob Goldberger
Specifically, we show that with minor modifications to word2vec's algorithm, we get principled language models that are closely related to the well-established Noise Contrastive Estimation (NCE) based language models.
no code implementations • ACL 2017 • Oren Melamud, Jacob Goldberger
In this paper we define a measure of dependency between two random variables, based on the Jensen-Shannon (JS) divergence between their joint distribution and the product of their marginal distributions.
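For discrete variables the measure can be computed directly from the joint distribution; a small sketch (the two toy distributions are my own choices): the measure is zero exactly when the joint factors into its marginals, i.e. when the variables are independent.

```python
import numpy as np

def js_dependency(joint):
    """JS divergence between a joint distribution P(X,Y) and the product
    of its marginals P(X)P(Y); zero iff X and Y are independent."""
    joint = np.asarray(joint, dtype=float)
    prod = np.outer(joint.sum(axis=1), joint.sum(axis=0))  # P(X)P(Y)
    m = 0.5 * (joint + prod)                               # mixture midpoint
    def kl(p, q):
        mask = p > 0
        return np.sum(p[mask] * np.log2(p[mask] / q[mask]))
    return 0.5 * kl(joint, m) + 0.5 * kl(prod, m)

# Independent variables -> 0; perfectly correlated -> positive (bounded by 1 bit).
indep = np.outer([0.5, 0.5], [0.5, 0.5])
dep   = np.array([[0.5, 0.0], [0.0, 0.5]])
print(js_dependency(indep))  # 0.0
print(js_dependency(dep))    # > 0
```

Unlike mutual information (the KL analogue of this construction), the JS-based measure is bounded and symmetric in its two arguments, which makes it convenient as a normalized dependency score.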
no code implementations • 5 Sep 2016 • Oren Melamud, Ido Dagan, Jacob Goldberger
The resulting language model is closely related to NCE language models but is based on a simplified objective function.
no code implementations • LREC 2016 • Vasily Konovalov, Ron Artstein, Oren Melamud, Ido Dagan
In this work, we introduce an annotated natural language human-agent dialogue corpus in the negotiation domain.
no code implementations • NAACL 2016 • Oren Melamud, David McClosky, Siddharth Patwardhan, Mohit Bansal
We provide the first extensive evaluation of how using different types of context to learn skip-gram word embeddings affects performance on a wide range of intrinsic and extrinsic NLP tasks.
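As a sketch of what "different types of context" means here (illustrative functions, not the paper's code): window-based contexts pair each target word with its linear neighbors within a fixed window, while dependency-based contexts pair it with its syntactic neighbors from a parse.

```python
def window_contexts(tokens, window=2):
    """Window-based skip-gram contexts: each word paired with nearby words."""
    pairs = []
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((w, tokens[j]))
    return pairs

def dep_contexts(arcs):
    """Dependency-based contexts: each word paired with its syntactic
    neighbors, given precomputed (head, dependent) arcs from a parser."""
    return [(h, d) for h, d in arcs] + [(d, h) for h, d in arcs]

sent = ["australian", "scientist", "discovers", "star"]
print(window_contexts(sent, window=1))

# Hypothetical parse arcs for the same sentence:
arcs = [("discovers", "scientist"), ("discovers", "star"),
        ("scientist", "australian")]
print(dep_contexts(arcs))
```

The two schemes yield different (target, context) training pairs, and hence embeddings with different flavors: window contexts tend to capture topical similarity, dependency contexts more functional similarity.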