no code implementations • 15 Nov 2023 • Muhammad Waleed Gondal, Jochen Gast, Inigo Alonso Ruiz, Richard Droste, Tommaso Macri, Suren Kumar, Luitpold Staudigl
Large vision-language representation learning models like CLIP have demonstrated impressive performance for zero-shot transfer to downstream tasks while largely benefiting from inter-modal (image-text) alignment via contrastive objectives.
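To make the inter-modal alignment concrete, here is a minimal NumPy sketch of a symmetric contrastive (InfoNCE-style) objective of the kind CLIP trains with; the function name and temperature value are illustrative, not taken from the paper.

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired image/text embeddings.

    img_emb, txt_emb: (batch, dim) arrays; row i of each is a matched pair.
    """
    # L2-normalise embeddings so the dot product is cosine similarity.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)

    # Pairwise similarity logits, scaled by temperature.
    logits = img @ txt.T / temperature          # (batch, batch)

    # Matched image-text pairs lie on the diagonal.
    labels = np.arange(len(logits))

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)    # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # Average the image->text and text->image directions.
    return 0.5 * (cross_entropy(logits, labels)
                  + cross_entropy(logits.T, labels))
```

The loss pulls matched pairs together and pushes mismatched pairs apart, which is what makes the learned embedding space usable for zero-shot transfer.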
no code implementations • NeurIPS 2021 • Nasim Rahaman, Muhammad Waleed Gondal, Shruti Joshi, Peter Gehler, Yoshua Bengio, Francesco Locatello, Bernhard Schölkopf
Modern neural network architectures can leverage large amounts of data to generalize well within the training distribution.
no code implementations • ICLR 2021 • Nasim Rahaman, Anirudh Goyal, Muhammad Waleed Gondal, Manuel Wüthrich, Stefan Bauer, Yash Sharma, Yoshua Bengio, Bernhard Schölkopf
Capturing the structure of a data-generating process by means of appropriate inductive biases can help in learning models that generalise well and are robust to changes in the input distribution.
no code implementations • 14 Oct 2020 • Muhammad Waleed Gondal, Shruti Joshi, Nasim Rahaman, Stefan Bauer, Manuel Wüthrich, Bernhard Schölkopf
This "meta-representation", which is computed from a few observed examples of the underlying function, is learned jointly with the predictive model.
no code implementations • 28 Sep 2020 • Muhammad Waleed Gondal, Shruti Joshi, Nasim Rahaman, Stefan Bauer, Manuel Wüthrich, Bernhard Schölkopf
Few-shot learning seeks to find models that are capable of fast adaptation to novel tasks that are not encountered during training.
no code implementations • 7 Jun 2019 • Đorđe Miladinović, Muhammad Waleed Gondal, Bernhard Schölkopf, Joachim M. Buhmann, Stefan Bauer
Sequential data often originates from diverse domains across which statistical regularities and domain specifics exist.
4 code implementations • NeurIPS 2019 • Muhammad Waleed Gondal, Manuel Wüthrich, Đorđe Miladinović, Francesco Locatello, Martin Breidt, Valentin Volchkov, Joel Akpo, Olivier Bachem, Bernhard Schölkopf, Stefan Bauer
Learning meaningful and compact representations with disentangled semantic aspects is considered to be of key importance in representation learning.
1 code implementation • 14 May 2019 • Wittawat Jitkrittum, Patsorn Sangkloy, Muhammad Waleed Gondal, Amit Raj, James Hays, Bernhard Schölkopf
We propose a novel procedure which adds "content-addressability" to any given unconditional implicit model, e.g., a generative adversarial network (GAN).
1 code implementation • 31 Jul 2018 • Muhammad Waleed Gondal, Bernhard Schölkopf, Michael Hirsch
Moreover, we show that a texture representation of those deep features better captures the perceptual quality of an image than the original deep features.