no code implementations • ACL (WebNLG, INLG) 2020 • Giulio Zhou, Gerasimos Lampouras
This paper presents our submission to the WebNLG Challenge 2020 for the English and Russian RDF-to-text generation tasks.
no code implementations • Findings (ACL) 2022 • Giulio Zhou, Gerasimos Lampouras, Ignacio Iacobacci
Large pretrained models enable transfer learning to low-resource domains for language generation tasks.
no code implementations • EMNLP 2021 • Giulio Zhou, Jacob Devlin
Large-scale document retrieval systems often utilize two styles of neural network models, which sit at opposite ends of the computation-cost vs. accuracy spectrum.
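The two styles contrasted above are commonly dual-encoders (cheap, documents encoded offline) and cross-encoders (expensive, each query–document pair scored jointly). A minimal sketch of that trade-off, using hypothetical toy encoder functions (not the paper's models):

```python
def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def dual_encoder_score(encode_q, encode_d, query, doc):
    # Fast end of the spectrum: query and document are encoded
    # independently, so document vectors can be precomputed and
    # indexed offline; scoring is a single dot product.
    return dot(encode_q(query), encode_d(doc))

def cross_encoder_score(encode_joint, query, doc):
    # Accurate end of the spectrum: the model sees the (query, doc)
    # pair jointly, so every pair needs a fresh forward pass.
    return encode_joint(query, doc)
```

Here `encode_q`, `encode_d`, and `encode_joint` stand in for learned neural encoders; any real system would replace them with trained models.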
no code implementations • 1 Feb 2024 • Giulio Zhou, Tsz Kin Lam, Alexandra Birch, Barry Haddow
Despite growing interest in direct speech-to-text translation (S2TT) systems, which avoid propagating transcription errors and losing non-verbal content, prior work in direct S2TT has struggled to conclusively establish the advantages of integrating the acoustic signal directly into the translation process.
1 code implementation • 16 Apr 2023 • Liu Leqi, Giulio Zhou, Fatma Kılınç-Karzan, Zachary C. Lipton, Alan L. Montgomery
While answering these questions, we provide a flexible experimental framework for understanding human preference dynamics and for testing MAB algorithms with human users.
no code implementations • ACL 2021 • Giulio Zhou, Gerasimos Lampouras
In this paper, we explore the application of multilingual models in concept-to-text and propose Language Agnostic Delexicalisation, a novel delexicalisation method that uses multilingual pretrained embeddings, and employs a character-level post-editing model to inflect words in their correct form during relexicalisation.
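Delexicalisation replaces attribute values with slot placeholders before generation and restores them afterwards. A minimal sketch of that round trip, assuming exact string matching (the paper's method instead matches values via multilingual pretrained embeddings and inflects words with a character-level post-editing model):

```python
def delexicalise(text, values):
    # Replace each attribute value with a slot placeholder so the
    # generator learns patterns independent of surface forms.
    for slot, value in values.items():
        text = text.replace(value, f"[{slot}]")
    return text

def relexicalise(template, values):
    # Restore the original values into the generated template.
    for slot, value in values.items():
        template = template.replace(f"[{slot}]", value)
    return template
```

For example, `delexicalise("Aromi is a coffee shop", {"name": "Aromi"})` yields `"[name] is a coffee shop"`, and `relexicalise` inverts it; the hard cases the paper targets (morphological inflection across languages) are exactly where this naive exact-match version breaks down.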
no code implementations • Findings (EMNLP) 2021 • Giulio Zhou, Gerasimos Lampouras
In this work, we propose to ameliorate this cost by using an Imitation Learning approach to explore the level of diversity that a language generation model can reliably produce.
2 code implementations • 2 Oct 2019 • Angela H. Jiang, Daniel L. -K. Wong, Giulio Zhou, David G. Andersen, Jeffrey Dean, Gregory R. Ganger, Gauri Joshi, Michael Kaminsky, Michael Kozuch, Zachary C. Lipton, Padmanabhan Pillai
This paper introduces Selective-Backprop, a technique that accelerates the training of deep neural networks (DNNs) by prioritizing examples with high loss at each iteration.
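The core idea can be sketched framework-free: run the cheap forward pass on every example, then spend the expensive backward pass only on the hardest ones. This is a simplified deterministic top-k variant (the paper selects examples probabilistically based on their loss), with a hypothetical `loss_fn` standing in for the model's forward pass:

```python
def selective_backprop_batch(examples, loss_fn, keep_frac=0.5):
    """Keep only the highest-loss fraction of a batch for backprop.

    Simplified sketch of Selective-Backprop: the forward pass prices
    every example, but gradients are computed only for the examples
    the model currently gets most wrong.
    """
    scored = [(loss_fn(x), x) for x in examples]       # cheap forward pass
    scored.sort(key=lambda pair: pair[0], reverse=True)  # hardest first
    n_keep = max(1, int(len(examples) * keep_frac))
    return [x for _, x in scored[:n_keep]]             # backprop on these only
```

With `loss_fn=abs` and batch `[1, -4, 2, -3]`, half the batch survives: the two examples with the largest losses, `[-4, -3]`.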
1 code implementation • 24 May 2019 • Christopher Canel, Thomas Kim, Giulio Zhou, Conglong Li, Hyeontaek Lim, David G. Andersen, Michael Kaminsky, Subramanya R. Dulloor
As video camera deployments continue to grow, the need to process large volumes of real-time data strains wide area network infrastructure.
no code implementations • 10 Dec 2018 • Giulio Zhou, Subramanya Dulloor, David G. Andersen, Michael Kaminsky
We present a way to rapidly bootstrap object detection on unseen videos using minimal human annotations.
no code implementations • 9 Dec 2016 • Daniel Crankshaw, Xin Wang, Giulio Zhou, Michael J. Franklin, Joseph E. Gonzalez, Ion Stoica
In this paper, we introduce Clipper, a general-purpose low-latency prediction serving system.