1 code implementation • 2 Aug 2022 • Eyal Shnarch, Alon Halfon, Ariel Gera, Marina Danilevsky, Yannis Katsis, Leshem Choshen, Martin Santillan Cooper, Dina Epelboim, Zheng Zhang, Dakuo Wang, Lucy Yip, Liat Ein-Dor, Lena Dankin, Ilya Shnayderman, Ranit Aharonov, Yunyao Li, Naftali Liberman, Philip Levin Slesarev, Gwilym Newton, Shila Ofek-Koifman, Noam Slonim, Yoav Katz
Text classification is useful in many real-world scenarios and can save end users considerable time.
Furthermore, we suggest a method that, given a sentence, identifies points in the quality control space that are expected to yield optimal generated paraphrases.
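As a rough illustration of that idea, the sketch below scans a small quality control space and returns the point with the highest predicted quality for a given sentence. The three control dimensions, the grid, and the toy `predict_quality` heuristic are all illustrative assumptions, not the paper's actual method.

```python
import itertools

# Hypothetical control grid: each dimension takes values in [0, 1].
CONTROL_GRID = [round(0.1 * i, 1) for i in range(11)]

def predict_quality(sentence, semantic, syntactic, lexical):
    """Stand-in for a learned predictor estimating the quality of a
    paraphrase generated for `sentence` under this control point."""
    # Toy heuristic: favor high semantic similarity, moderate diversity.
    return semantic - abs(syntactic - 0.5) - abs(lexical - 0.5)

def best_control_point(sentence):
    """Scan the quality control space for the point expected to yield
    the best paraphrase of this sentence."""
    candidates = itertools.product(CONTROL_GRID, repeat=3)
    return max(candidates, key=lambda c: predict_quality(sentence, *c))

print(best_control_point("The cat sat on the mat."))
```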
In recent years, pretrained language models have revolutionized the NLP field, achieving state-of-the-art performance on various downstream tasks.
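To make the downstream-task claim concrete, here is a minimal example of applying a pretrained model to a task via the Hugging Face transformers pipeline API; the library choice and the sentiment-analysis task are illustrative assumptions, not something the listing specifies.

```python
from transformers import pipeline

# Load a pretrained model fine-tuned for sentiment analysis; the default
# checkpoint is downloaded from the Hugging Face Hub on first use.
classifier = pipeline("sentiment-analysis")

print(classifier("Pretrained language models transfer remarkably well."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```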
Wikification of large corpora is beneficial for various NLP applications.
We introduce a weakly supervised approach for inferring the abstractness of words and expressions in the complete absence of labeled data.
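A minimal sketch of what such weak supervision could look like, assuming small seed lists of abstract and concrete words and toy embedding vectors (all hypothetical, not the paper's procedure): each word is scored by its relative similarity to the two seed centroids, so no word-level labels are ever annotated.

```python
import numpy as np

# Toy 2-d vectors stand in for real pretrained word embeddings.
EMBEDDINGS = {
    "freedom": np.array([0.9, 0.1]), "idea":  np.array([0.8, 0.2]),
    "table":   np.array([0.1, 0.9]), "apple": np.array([0.2, 0.8]),
    "justice": np.array([0.85, 0.15]),
}
ABSTRACT_SEEDS = ["freedom", "idea"]   # weak labels, no annotation needed
CONCRETE_SEEDS = ["table", "apple"]

def centroid(words):
    return np.mean([EMBEDDINGS[w] for w in words], axis=0)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def abstractness(word):
    """Relative cosine similarity to the abstract vs. concrete centroid;
    positive values suggest the word is abstract."""
    v = EMBEDDINGS[word]
    return cos(v, centroid(ABSTRACT_SEEDS)) - cos(v, centroid(CONCRETE_SEEDS))

print(abstractness("justice"))  # positive -> predicted abstract
```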
We train a triplet network to embed sentences from the same section closer together than sentences from different sections.
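The sketch below shows the standard triplet setup this describes, in PyTorch; the encoder architecture, input dimensionality, and margin are illustrative assumptions, and random vectors stand in for real sentence features.

```python
import torch
import torch.nn as nn

# Shared encoder applied to anchor, positive, and negative sentences.
encoder = nn.Sequential(nn.Linear(300, 128), nn.ReLU(), nn.Linear(128, 64))
loss_fn = nn.TripletMarginLoss(margin=1.0)

# anchor/positive come from the same section, negative from a different one.
anchor   = encoder(torch.randn(16, 300))
positive = encoder(torch.randn(16, 300))
negative = encoder(torch.randn(16, 300))

# The loss pulls same-section embeddings together and pushes the
# different-section embedding at least `margin` farther away.
loss = loss_fn(anchor, positive, negative)
loss.backward()
```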