no code implementations • 20 Dec 2022 • Brian D. Zimmerman, Gaurav Sahu, Olga Vechtomova
In this work, we propose Future Sight, a method for finetuning a pretrained generative transformer on the task of future conditioning.
no code implementations • 27 Oct 2022 • Olga Vechtomova, Gaurav Sahu
As a result, it is difficult for artists to rediscover audio segments that might be suitable for use in their compositions from thousands of hours of recordings.
no code implementations • 3 Jun 2021 • Olga Vechtomova, Gaurav Sahu, Dhruv Kumar
We describe a real-time system that receives a live audio stream from a jam session and generates lyric lines that are congruent with the live music being played.
no code implementations • 3 May 2021 • Gaurav Sahu, Robin Cohen, Olga Vechtomova
This paper envisions a multi-agent system for detecting the presence of hate speech in online social media platforms such as Twitter and Facebook.
2 code implementations • CL (ACL) 2022 • Di Jin, Zhijing Jin, Zhiting Hu, Olga Vechtomova, Rada Mihalcea
Text style transfer is an important task in natural language generation, which aims to control certain attributes in the generated text, such as politeness, emotion, humor, and many others.
no code implementations • NLP4MusA 2020 • Olga Vechtomova, Gaurav Sahu, Dhruv Kumar
We present a system for generating novel lyrics lines conditioned on music audio.
no code implementations • ACL 2020 • Lili Mou, Olga Vechtomova
We start from the definition of style and different settings of stylized text generation, illustrated with various applications.
1 code implementation • ACL 2020 • Dhruv Kumar, Lili Mou, Lukasz Golab, Olga Vechtomova
We present a novel iterative, edit-based approach to unsupervised sentence simplification.
Ranked #5 on Text Simplification on Newsela
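The abstract above states the approach only at a high level; the following is a minimal, hypothetical sketch of the general iterative, edit-based search pattern it refers to. The edit operation (single-word deletion) and the scoring function here are toy placeholders, not the operations or objective used in the paper.

```python
# Toy sketch of an iterative edit-based search for sentence simplification;
# the edit operations and scoring function are placeholders, not the paper's.

def candidate_edits(words):
    """Propose toy candidate edits: drop one word at a time."""
    for i in range(len(words)):
        yield words[:i] + words[i + 1:]

def score(words):
    """Toy score favouring shorter sentences with shorter words."""
    if not words:
        return float("-inf")
    avg_word_len = sum(len(w) for w in words) / len(words)
    return -(len(words) + avg_word_len)

def simplify(sentence, max_iters=10):
    words = sentence.split()
    for _ in range(max_iters):
        best = max(candidate_edits(words), key=score, default=None)
        if best is None or score(best) <= score(words):
            break  # stop when no candidate edit improves the score
        words = best
    return " ".join(words)

print(simplify("the extremely large and very old mansion was completely destroyed by fire"))
```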
2 code implementations • ACL 2020 • Raphael Schumann, Lili Mou, Yao Lu, Olga Vechtomova, Katja Markert
Automatic sentence summarization produces a shorter version of a sentence, while preserving its most important information.
no code implementations • EACL 2021 • Vikash Balasubramanian, Ivan Kobyzev, Hareesh Bahuleyan, Ilya Shapiro, Olga Vechtomova
Learning disentangled representations of real-world data is a challenging open problem.
no code implementations • 10 Nov 2019 • Amirpasha Ghabussi, Lili Mou, Olga Vechtomova
Moreover, we can train our model on relatively small datasets and learn the latent representation of a specified class by adding external data with other styles/classes to our dataset.
1 code implementation • COLING 2020 • Kashif Khan, Gaurav Sahu, Vikash Balasubramanian, Lili Mou, Olga Vechtomova
Generating relevant responses in a dialog is challenging, and requires not only proper modeling of the conversational context but also the ability to generate fluent sentences during inference.
no code implementations • EACL 2021 • Gaurav Sahu, Olga Vechtomova
Effective fusion of data from multiple modalities, such as video, speech, and text, is challenging due to the heterogeneous nature of multimodal data.
1 code implementation • ACL 2019 • Yu Bao, Hao Zhou, Shu-Jian Huang, Lei Li, Lili Mou, Olga Vechtomova, Xin-yu Dai, Jia-Jun Chen
In this paper, we propose to generate sentences from disentangled syntactic and semantic spaces.
4 code implementations • 28 Mar 2019 • Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, Jimmy Lin
In the natural language processing literature, neural networks are becoming increasingly deep and complex.
Ranked #55 on Sentiment Analysis on SST-2 Binary classification
no code implementations • 20 Dec 2018 • Olga Vechtomova, Hareesh Bahuleyan, Amirpasha Ghabussi, Vineet John
We present a system for generating song lyrics lines conditioned on the style of a specified artist.
3 code implementations • ACL 2019 • Vineet John, Lili Mou, Hareesh Bahuleyan, Olga Vechtomova
This paper tackles the problem of disentangling the latent variables of style and content in language models.
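As a purely illustrative sketch of the disentanglement idea in the abstract above (not this paper's architecture or losses), a sentence encoding can be split into a style vector and a content vector, with an auxiliary classifier attached only to the style part; all layer sizes and module names below are assumptions.

```python
# Illustrative sketch (not the paper's model): split a sentence encoding into
# a style part and a content part, with an auxiliary style classifier.
import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, hid_dim=256,
                 style_dim=32, content_dim=224, num_styles=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.to_style = nn.Linear(hid_dim, style_dim)
        self.to_content = nn.Linear(hid_dim, content_dim)
        self.style_clf = nn.Linear(style_dim, num_styles)  # auxiliary classifier

    def forward(self, token_ids):
        _, h = self.rnn(self.embed(token_ids))   # h: (1, batch, hid_dim)
        h = h.squeeze(0)
        style = self.to_style(h)                 # style portion of the latent
        content = self.to_content(h)             # content portion of the latent
        style_logits = self.style_clf(style)     # supervises only the style space
        return style, content, style_logits

enc = DisentangledEncoder()
tokens = torch.randint(0, 10000, (4, 12))        # batch of 4 toy sentences, length 12
style, content, logits = enc(tokens)
print(style.shape, content.shape, logits.shape)
```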
1 code implementation • NAACL 2019 • Hareesh Bahuleyan, Lili Mou, Hao Zhou, Olga Vechtomova
The variational autoencoder (VAE) imposes a probabilistic distribution (typically Gaussian) on the latent space and penalizes the Kullback-Leibler (KL) divergence between the posterior and prior.
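For the diagonal-Gaussian case this abstract refers to, the KL term between the posterior N(mu, sigma^2) and the standard normal prior has a closed form; below is a small sketch (assuming NumPy) of how that term is typically computed from an encoder's mean and log-variance outputs.

```python
# Closed-form KL divergence between a diagonal Gaussian posterior N(mu, sigma^2)
# and the standard normal prior N(0, I), as commonly used in VAE training;
# mu and log_var stand for an encoder's outputs.
import numpy as np

def gaussian_kl(mu, log_var):
    # KL = -0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2), summed over latent dims
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=-1)

mu = np.array([[0.0, 0.5], [1.0, -1.0]])
log_var = np.array([[0.0, 0.0], [0.1, -0.2]])
print(gaussian_kl(mu, log_var))  # one KL value per example in the batch
```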
2 code implementations • COLING 2018 • Hareesh Bahuleyan, Lili Mou, Olga Vechtomova, Pascal Poupart
The variational encoder-decoder (VED) encodes source information as a set of random variables using a neural network, which in turn is decoded into target data using another neural network.
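A minimal sketch of the encode-sample-decode pattern the abstract describes, using the standard reparameterization trick; the toy linear maps below stand in for the neural encoder and decoder networks and are not this paper's model.

```python
# Illustrative encode-sample-decode pattern for a variational encoder-decoder;
# the linear "networks" are placeholders for the neural encoder and decoder.
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(8, 2 * 4))   # toy encoder: 8-d input -> mu and log_var (4-d each)
W_dec = rng.normal(size=(4, 8))       # toy decoder: 4-d latent -> 8-d output

def encode(x):
    h = x @ W_enc
    mu, log_var = h[..., :4], h[..., 4:]     # source encoded as random variables
    return mu, log_var

def sample(mu, log_var):
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps  # reparameterization trick

def decode(z):
    return z @ W_dec                          # decoder maps the latent sample to target space

x = rng.normal(size=(3, 8))                   # batch of 3 toy source vectors
mu, log_var = encode(x)
y = decode(sample(mu, log_var))
print(y.shape)  # (3, 8)
```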
no code implementations • WS 2017 • Vineet John, Olga Vechtomova
This paper describes the UWaterloo affect prediction system developed for EmoInt-2017.
no code implementations • SEMEVAL 2017 • Hareesh Bahuleyan, Olga Vechtomova
This paper describes our system for subtask A (SDQC: support, deny, query, comment) of RumourEval, Task 8 of SemEval-2017.
Ranked #2 on Stance Detection on RumourEval
1 code implementation • SEMEVAL 2017 • Vineet John, Olga Vechtomova
The system uses text vectorization models, such as N-gram, TF-IDF and paragraph embeddings, coupled with regression model variants to predict the sentiment scores.
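A minimal sketch of the kind of TF-IDF-plus-regression pipeline the abstract mentions, assuming scikit-learn; the submitted system also used N-gram and paragraph-embedding features and several regressor variants, which are not reproduced here.

```python
# Hedged sketch of a TF-IDF + regression pipeline for sentiment-intensity
# scoring, assuming scikit-learn; the toy training data is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

train_texts = ["I absolutely love this", "this is terrible", "not bad at all"]
train_scores = [0.9, 0.1, 0.6]   # toy sentiment-intensity labels in [0, 1]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word unigrams and bigrams
    Ridge(alpha=1.0),                     # simple regressor standing in for the variants used
)
model.fit(train_texts, train_scores)
print(model.predict(["I love it, not terrible at all"]))
```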
no code implementations • SEMEVAL 2017 • Olga Vechtomova
The method achieved the best performance in the Heterographic category and the second best in the Homographic category.
1 code implementation • 29 Jul 2017 • Vineet John, Olga Vechtomova
The system uses text vectorization models, such as N-gram, TF-IDF and paragraph embeddings, coupled with regression model variants to predict the sentiment scores.