no code implementations • 21 Oct 2022 • Giorgio Giannone, Serhii Havrylov, Jordan Massiah, Emine Yilmaz, Yunlong Jiao
Advances in deep learning theory have revealed how average generalization relies on superficial patterns in data.
1 code implementation • ICLR 2022 • Fangyu Liu, Yunlong Jiao, Jordan Massiah, Emine Yilmaz, Serhii Havrylov
Predominantly, two formulations are used for sentence-pair tasks: bi-encoders and cross-encoders.
Ranked #1 on Semantic Textual Similarity on STS16
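The distinction between the two formulations can be sketched as follows: a bi-encoder encodes each sentence independently and compares the resulting vectors, while a cross-encoder processes the sentence pair jointly in a single forward pass. This is a minimal illustrative sketch, not the paper's method; the toy character-count encoder and the function names (`toy_encode`, `bi_encoder_score`, `cross_encoder_score`) are assumptions standing in for real Transformer models.

```python
import math

def toy_encode(text):
    # Stand-in encoder: a bag-of-letter-counts vector.
    # Real bi-/cross-encoders would use a Transformer here.
    vec = [0.0] * 26
    for ch in text.lower():
        if 'a' <= ch <= 'z':
            vec[ord(ch) - ord('a')] += 1.0
    return vec

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1.0
    nv = math.sqrt(sum(b * b for b in v)) or 1.0
    return dot / (nu * nv)

def bi_encoder_score(s1, s2):
    # Bi-encoder: encode each sentence separately, then compare the
    # two fixed vectors. Embeddings can be precomputed and cached.
    return cosine(toy_encode(s1), toy_encode(s2))

def cross_encoder_score(s1, s2):
    # Cross-encoder: the pair is concatenated and encoded jointly,
    # so the model can attend across both sentences; a scoring head
    # (here, a toy mean) maps the joint representation to a scalar.
    joint = toy_encode(s1 + " [SEP] " + s2)
    return sum(joint) / len(joint)
```

The trade-off this illustrates: bi-encoders allow fast retrieval over precomputed embeddings, while cross-encoders are typically more accurate but must re-encode every candidate pair.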
no code implementations • 30 Apr 2020 • Serhii Havrylov, Ivan Titov
Variational autoencoders (VAEs) are a standard framework for inducing latent variable models that have been shown effective in learning text representations as well as in text generation.
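The two core ingredients of the standard VAE framework mentioned above can be sketched in a few lines: the KL regularizer between the approximate posterior and a standard normal prior, and the reparameterization trick that keeps sampling differentiable. This is a generic illustration of the framework, not this paper's model; the diagonal-Gaussian parameterization and function names are assumptions.

```python
import math
import random

def kl_diag_gaussian(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), the regularizer
    # term in the VAE's evidence lower bound (ELBO).
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, logvar))

def reparameterize(mu, logvar, rng=random):
    # z = mu + sigma * eps, eps ~ N(0, I): sampling is rewritten so
    # gradients can flow to mu and logvar through the sample.
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, logvar)]
```

When the posterior equals the prior (mu = 0, logvar = 0) the KL term vanishes, which is the usual sanity check for this formula.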
no code implementations • 11 Oct 2019 • Shangmin Guo, Yi Ren, Serhii Havrylov, Stella Frank, Ivan Titov, Kenny Smith
Since it was first introduced, computer simulation has been an increasingly important tool in evolutionary linguistics.
1 code implementation • WS 2020 • Zhifeng Hu, Serhii Havrylov, Ivan Titov, Shay B. Cohen
We introduce an idea for a privacy-preserving transformation on natural language data, inspired by homomorphic encryption.
1 code implementation • NAACL 2019 • Serhii Havrylov, Germán Kruszewski, Armand Joulin
There has been considerable attention devoted to models that learn to jointly infer an expression's syntactic structure and its semantics.
1 code implementation • COLING 2018 • Arthur Bražinskas, Serhii Havrylov, Ivan Titov
Rather than assuming that a word embedding is fixed across the entire text collection, as in standard word embedding methods, our Bayesian model generates an embedding from a word-specific prior density for each occurrence of a given word.
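The generative step described above can be sketched as drawing a fresh embedding per occurrence from that word's prior, rather than looking up one fixed vector. This is a hedged illustration, assuming a diagonal-Gaussian prior per word; the names (`sample_occurrence_embedding`, the prior parameters) are illustrative, not the paper's.

```python
import random

def sample_occurrence_embedding(prior_mu, prior_sigma, rng=random):
    # One word-specific prior (mean and scale per dimension); each
    # occurrence of the word gets its own embedding sampled from it,
    # so token-level representations can vary with context.
    return [rng.gauss(m, s) for m, s in zip(prior_mu, prior_sigma)]

# Illustrative usage: two occurrences of the same word yield two
# different embeddings drawn from the same word-specific prior.
rng = random.Random(0)
prior_mu, prior_sigma = [0.0] * 4, [1.0] * 4
occ1 = sample_occurrence_embedding(prior_mu, prior_sigma, rng)
occ2 = sample_occurrence_embedding(prior_mu, prior_sigma, rng)
```

Contrast with standard embeddings, where both occurrences would map to the identical vector.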
no code implementations • NeurIPS 2017 • Serhii Havrylov, Ivan Titov
Learning to communicate through interaction, rather than relying on explicit supervision, is often considered a prerequisite for developing a general AI.