no code implementations • 22 Jun 2023 • Paul K. Rubenstein, Chulayuth Asawaroengchai, Duc Dung Nguyen, Ankur Bapna, Zalán Borsos, Félix de Chaumont Quitry, Peter Chen, Dalia El Badawy, Wei Han, Eugene Kharitonov, Hannah Muckenhirn, Dirk Padfield, James Qin, Danny Rozenberg, Tara Sainath, Johan Schalkwyk, Matt Sharifi, Michelle Tadmor Ramanovich, Marco Tagliasacchi, Alexandru Tudor, Mihajlo Velimirović, Damien Vincent, Jiahui Yu, Yongqiang Wang, Vicky Zayats, Neil Zeghidour, Yu Zhang, Zhishuai Zhang, Lukas Zilka, Christian Frank
AudioPaLM inherits from AudioLM the capability to preserve paralinguistic information such as speaker identity and intonation, and from text-only large language models such as PaLM-2 their linguistic knowledge.
no code implementations • 7 Feb 2023 • Amirkeivan Mohtashami, Mauro Verzetti, Paul K. Rubenstein
In recent years, learned metrics such as BLEURT have become widely used to evaluate the quality of machine translation systems.
no code implementations • 11 Mar 2022 • Thomas Verelst, Paul K. Rubenstein, Marcin Eichner, Tinne Tuytelaars, Maxim Berman
We show that adding a consistency loss, which encourages the network's predictions to remain consistent across consecutive training epochs, is a simple yet effective way to train multi-label classifiers in a weakly supervised setting.
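As a rough illustration of the idea in this entry, a consistency penalty can be as simple as the squared change in predicted label probabilities between epochs. The sketch below is a minimal NumPy version with a hypothetical function name, not the paper's implementation:

```python
import numpy as np

def consistency_loss(probs_now, probs_prev):
    """Mean squared difference between this epoch's and the previous
    epoch's predicted label probabilities. Adding this term to the
    classification loss penalizes unstable predictions, which is the
    kind of consistency constraint the entry above describes."""
    probs_now = np.asarray(probs_now, dtype=float)
    probs_prev = np.asarray(probs_prev, dtype=float)
    return float(np.mean((probs_now - probs_prev) ** 2))
```

In practice the previous-epoch predictions would be cached (or tracked with an exponential moving average) per training example.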
no code implementations • 9 Oct 2019 • Julius von Kügelgen, Paul K. Rubenstein, Bernhard Schölkopf, Adrian Weller
We study the problem of causal discovery through targeted interventions.
2 code implementations • ICLR 2020 • Michael Tschannen, Josip Djolonga, Paul K. Rubenstein, Sylvain Gelly, Mario Lucic
Many recent methods for unsupervised or self-supervised representation learning train feature extractors by maximizing an estimate of the mutual information (MI) between different views of the data.
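The MI estimates this entry refers to are typically variational lower bounds such as InfoNCE. A minimal sketch, assuming a batch of K paired views has already been scored by a critic (the `scores` matrix and function name are placeholders, not from the paper):

```python
import numpy as np

def infonce_bound(scores):
    """InfoNCE lower bound on mutual information from a K x K matrix of
    critic scores, where scores[i, j] scores the pair (x_i, y_j) and the
    diagonal holds the true (positive) pairs. The bound is the mean
    diagonal log-softmax plus log K, and is capped at log K."""
    scores = np.asarray(scores, dtype=float)
    K = scores.shape[0]
    # Row-wise log-softmax; the diagonal entry is the positive pair.
    log_probs = scores - np.log(np.sum(np.exp(scores), axis=1, keepdims=True))
    return float(np.mean(np.diag(log_probs)) + np.log(K))
```

The log K cap is one of the estimator properties that motivates the paper's caution about equating tighter MI bounds with better representations.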
1 code implementation • NeurIPS 2019 • Paul K. Rubenstein, Olivier Bousquet, Josip Djolonga, Carlos Riquelme, Ilya Tolstikhin
The estimation of an f-divergence between two probability distributions based on samples is a fundamental problem in statistics and machine learning.
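For intuition, the simplest sample-based estimator of an f-divergence is a histogram plug-in; taking f(t) = t log t gives the KL divergence. A naive sketch (illustrative only, with a hypothetical helper name; not one of the estimators analyzed in the paper):

```python
import numpy as np

def kl_plugin(samples_p, samples_q, bins):
    """Naive histogram plug-in estimate of KL(P || Q), the f-divergence
    with f(t) = t * log(t): bin both samples on shared edges, normalize,
    and evaluate the divergence between the empirical histograms."""
    p, edges = np.histogram(samples_p, bins=bins)
    q, _ = np.histogram(samples_q, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    mask = p > 0
    if np.any(q[mask] == 0):
        return np.inf  # empirical support mismatch
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
```

Such plug-in estimates degrade quickly in high dimensions, which is part of what makes the general sample-based estimation problem studied in this paper hard.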
no code implementations • 16 May 2019 • Luigi Gresele, Paul K. Rubenstein, Arash Mehrjou, Francesco Locatello, Bernhard Schölkopf
In contrast to known identifiability results for nonlinear ICA, we prove that independent latent sources with arbitrary mixing can be recovered as long as multiple, sufficiently different noisy views are available.
no code implementations • 19 Dec 2018 • Paul K. Rubenstein, Yunpeng Li, Dominik Roblek
Generative adversarial networks (GANs) are capable of producing high quality image samples.
no code implementations • 11 Feb 2018 • Paul K. Rubenstein, Bernhard Schölkopf, Ilya Tolstikhin
We study the role of latent space dimensionality in Wasserstein auto-encoders (WAEs).
no code implementations • 4 Jul 2017 • Paul K. Rubenstein, Sebastian Weichwald, Stephan Bongers, Joris M. Mooij, Dominik Janzing, Moritz Grosse-Wentrup, Bernhard Schölkopf
Complex systems can be modelled at various levels of detail.
no code implementations • 30 Jun 2017 • Paul K. Rubenstein, Ilya Tolstikhin, Philipp Hennig, Bernhard Schölkopf
We consider the problem of learning the functions computing children from parents in a Structural Causal Model once the underlying causal graph has been identified.
no code implementations • 29 Aug 2016 • Paul K. Rubenstein, Stephan Bongers, Bernhard Schölkopf, Joris M. Mooij
Structural Causal Models are widely used in causal modelling, but how they relate to other modelling tools is poorly understood.
no code implementations • 2 Mar 2016 • Paul K. Rubenstein, Kacper P. Chwialkowski, Arthur Gretton
The main contributions of this paper are twofold: first, we prove that the Lancaster statistic satisfies the conditions required to estimate the quantiles of the null distribution using the wild bootstrap; second, our proof technique is novel, simpler than existing approaches, and can be applied to other statistics as well.
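For context, the wild bootstrap approximates the null distribution of a degenerate test statistic by recomputing it many times under random, autocorrelated multiplier processes that mimic the null while respecting time dependence in the data. A toy sketch (hypothetical helper, not the authors' code), assuming a precomputed kernel-style matrix K:

```python
import numpy as np

def wild_bootstrap_quantile(K, n_draws=500, level=0.95, rho=0.9, seed=0):
    """Estimate a null-distribution quantile of the degenerate
    V-statistic (1/n^2) * sum_ij w_i w_j K[i, j] by redrawing an
    AR(1)-style multiplier process w many times and taking the
    empirical quantile of the resampled statistics."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_draws):
        w = np.empty(n)
        w[0] = rng.standard_normal()
        for t in range(1, n):
            # Autocorrelated multipliers preserve the dependence structure.
            w[t] = rho * w[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()
        stats.append((w @ K @ w) / n**2)
    return float(np.quantile(stats, level))
```

A test would then reject when the observed statistic exceeds the returned quantile; the paper's contribution is proving that this procedure is valid for the Lancaster statistic.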