1 code implementation • NAACL 2021 • Artidoro Pagnoni, Vidhisha Balachandran, Yulia Tsvetkov
Modern summarization models generate highly fluent but often factually unreliable outputs.
1 code implementation • EACL 2021 • Vidhisha Balachandran, Artidoro Pagnoni, Jay Yoon Lee, Dheeraj Rajagopal, Jaime Carbonell, Yulia Tsvetkov
We propose incorporating latent and explicit dependencies across sentences in the source document into end-to-end single-document summarization models.
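The listing does not reproduce the model details, but as a rough sketch of what incorporating sentence-level structure could look like, the toy PyTorch module below mixes a latent, learned attention structure with an explicit dependency graph (e.g., built from coreference links) to produce structure-aware sentence representations; all names, dimensions, and the mixing scheme are illustrative assumptions, not the paper's implementation.

```python
# Toy sketch: combine a latent attention structure with an explicit
# sentence-dependency graph; illustrative only, not the paper's model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructureAwareSentenceEncoder(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.mix = nn.Parameter(torch.tensor(0.5))  # balance latent vs. explicit structure

    def forward(self, sent_emb: torch.Tensor, explicit_adj: torch.Tensor) -> torch.Tensor:
        # sent_emb: (num_sentences, dim) embeddings from any sentence encoder
        # explicit_adj: (num_sentences, num_sentences) graph, e.g. from coreference links
        scores = self.query(sent_emb) @ self.key(sent_emb).T
        latent_attn = F.softmax(scores / sent_emb.size(-1) ** 0.5, dim=-1)
        explicit_attn = explicit_adj / explicit_adj.sum(-1, keepdim=True).clamp(min=1e-9)
        attn = self.mix * latent_attn + (1 - self.mix) * explicit_attn
        # Each sentence aggregates information from its structural neighbours.
        return sent_emb + attn @ sent_emb

# Usage: the returned representations would feed a standard summarization decoder.
enc = StructureAwareSentenceEncoder(dim=128)
sents = torch.randn(6, 128)                 # 6 toy sentence embeddings
adj = (torch.rand(6, 6) > 0.7).float()      # toy explicit dependency graph
print(enc(sents, adj).shape)                # torch.Size([6, 128])
```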
1 code implementation • COLING 2020 • Evangelia Spiliopoulou, Artidoro Pagnoni, Eduard Hovy
Advances in word representations have led to tremendous improvements in downstream NLP tasks, but the resulting representations lack semantic interpretability.
1 code implementation • 10 Jun 2019 • Gyeong-In Yu, Saeed Amizadeh, Sehoon Kim, Artidoro Pagnoni, Byung-Gon Chun, Markus Weimer, Matteo Interlandi
We propose a framework that translates a pre-trained ML pipeline into a neural network and fine-tunes the ML models within the pipeline jointly using backpropagation.
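To illustrate the general idea (this is not the paper's framework, and the pipeline, data, and hyperparameters are assumptions for the example), the sketch below translates a fitted scikit-learn scaler-plus-logistic-regression pipeline into PyTorch linear layers initialized from the learned parameters and then fine-tunes both stages jointly with backpropagation.

```python
# Illustrative sketch: translate a fitted scikit-learn pipeline into PyTorch
# modules with the same parameters, then fine-tune the whole thing end to end.
import numpy as np
import torch
import torch.nn as nn
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X = np.random.randn(200, 4).astype(np.float32)
y = (X[:, 0] + X[:, 1] > 0).astype(np.int64)
pipe = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
scaler = pipe.named_steps["standardscaler"]
logreg = pipe.named_steps["logisticregression"]

# Translate each pipeline operator into a differentiable module.
scale = nn.Linear(4, 4)   # standardization as an affine map
clf = nn.Linear(4, 1)     # logistic regression as a linear layer + BCE loss
with torch.no_grad():
    scale.weight.copy_(torch.diag(torch.tensor(1.0 / scaler.scale_, dtype=torch.float32)))
    scale.bias.copy_(torch.tensor(-scaler.mean_ / scaler.scale_, dtype=torch.float32))
    clf.weight.copy_(torch.tensor(logreg.coef_, dtype=torch.float32))
    clf.bias.copy_(torch.tensor(logreg.intercept_, dtype=torch.float32))

model = nn.Sequential(scale, clf)  # the whole pipeline is now one network
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(50):  # joint fine-tuning of all stages by backpropagation
    opt.zero_grad()
    loss = loss_fn(model(torch.from_numpy(X)).squeeze(1),
                   torch.tensor(y, dtype=torch.float32))
    loss.backward()
    opt.step()
```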
no code implementations • 14 May 2019 • Zeeshan Ahmed, Saeed Amizadeh, Mikhail Bilenko, Rogan Carr, Wei-Sheng Chin, Yael Dekel, Xavier Dupre, Vadim Eksarevskiy, Eric Erhardt, Costin Eseanu, Senja Filipi, Tom Finley, Abhishek Goswami, Monte Hoover, Scott Inglis, Matteo Interlandi, Shon Katzenberger, Najeeb Kazmi, Gleb Krivosheev, Pete Luferenko, Ivan Matantsev, Sergiy Matusevych, Shahab Moradi, Gani Nazirov, Justin Ormont, Gal Oshri, Artidoro Pagnoni, Jignesh Parmar, Prabhat Roy, Sarthak Shah, Mohammad Zeeshan Siddiqui, Markus Weimer, Shauheen Zahirazami, Yiwen Zhu
Machine Learning is transitioning from an art and science into a technology available to every developer.
no code implementations • 16 Dec 2018 • Artidoro Pagnoni, Stefan Gramatovici, Samuel Liu
We consider the Domain Adaptation problem, also known as the covariate shift problem, where the distributions that generate the training and test data differ while retaining the same labeling function.
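One standard recipe that makes this setting concrete is importance weighting with a domain classifier: estimate w(x) ≈ p_test(x)/p_train(x) and reweight the source-domain loss. The sketch below implements that generic recipe with scikit-learn on synthetic data; it illustrates the problem setup, not necessarily the method developed in the paper.

```python
# Generic importance-weighting sketch for covariate shift: p_train(x) != p_test(x)
# while the labeling function p(y|x) is shared across domains. Illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(loc=-1.0, size=(500, 1))   # source inputs
X_test = rng.normal(loc=+1.0, size=(500, 1))    # shifted target inputs

def label(X):                                   # same labeling function in both domains
    return (X[:, 0] > 0).astype(int)

y_train = label(X_train)

# Domain classifier gives P(domain = test | x); then w(x) = P(test|x) / P(train|x)
# is proportional to the density ratio p_test(x) / p_train(x).
domain_clf = LogisticRegression().fit(
    np.vstack([X_train, X_test]),
    np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))]),
)
p_test = domain_clf.predict_proba(X_train)[:, 1]
weights = p_test / (1.0 - p_test)

# Train the task model on importance-weighted source data, evaluate on the target.
task_clf = LogisticRegression().fit(X_train, y_train, sample_weight=weights)
print(task_clf.score(X_test, label(X_test)))
```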
no code implementations • 11 Dec 2018 • Artidoro Pagnoni, Kevin Liu, Shangyan Li
We explore the performance of latent variable models for conditional text generation in the context of neural machine translation (NMT).
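A common latent-variable formulation for conditional generation is a conditional VAE trained on an evidence lower bound, ELBO = E_{q(z|x,y)}[log p(y|x,z)] − KL(q(z|x,y) ‖ p(z|x)); the snippet below sketches only that objective in PyTorch with diagonal Gaussian prior and posterior, as an assumed illustration rather than the specific models evaluated in the paper.

```python
# Schematic conditional-VAE objective for conditional text generation.
# Tensor names and dimensions are illustrative, not the paper's model.
import torch

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL between two diagonal Gaussians, summed over the latent dimension.
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0,
        dim=-1,
    )

def elbo(recon_log_prob, mu_q, logvar_q, mu_p, logvar_p):
    # recon_log_prob: log p(y | x, z) for z ~ q(z | x, y), sampled via the
    # reparameterization trick z = mu_q + exp(0.5 * logvar_q) * eps.
    return recon_log_prob - gaussian_kl(mu_q, logvar_q, mu_p, logvar_p)

# Toy check with batch size 2 and an 8-dimensional latent variable:
# identical prior and posterior give zero KL, so ELBO equals the reconstruction term.
mu_q, logvar_q = torch.zeros(2, 8), torch.zeros(2, 8)
mu_p, logvar_p = torch.zeros(2, 8), torch.zeros(2, 8)
print(elbo(torch.tensor([-3.2, -2.9]), mu_q, logvar_q, mu_p, logvar_p))
```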