Search Results for author: Samarth Bhargav

Found 6 papers, 1 paper with code

Towards Reproducible Machine Learning Research in Natural Language Processing

no code implementations · ACL 2022 · Ana Lucic, Maurits Bleeker, Samarth Bhargav, Jessica Forde, Koustuv Sinha, Jesse Dodge, Sasha Luccioni, Robert Stojnic

While recent progress in the field of ML has been significant, the reproducibility of these cutting-edge results is often lacking: many submissions omit the information needed for others to reproduce their results.

Reproducibility as a Mechanism for Teaching Fairness, Accountability, Confidentiality, and Transparency in Artificial Intelligence

no code implementations · 1 Nov 2021 · Ana Lucic, Maurits Bleeker, Sami Jullien, Samarth Bhargav, Maarten de Rijke

In this work, we explain the setup for a technical, graduate-level course on Fairness, Accountability, Confidentiality, and Transparency in Artificial Intelligence (FACT-AI) at the University of Amsterdam, which teaches FACT-AI concepts through the lens of reproducibility.

Fairness

Controllable Recommenders using Deep Generative Models and Disentanglement

no code implementations · 11 Oct 2021 · Samarth Bhargav, Evangelos Kanoulas

We show that by updating the disentangled latent space based on user feedback, and by exploiting the generative nature of the recommender, controlled and personalized recommendations can be produced.
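The abstract describes steering recommendations by editing individual dimensions of a disentangled latent space in response to user feedback. A minimal toy sketch of that idea, where the linear "encoder"/"decoder", the feedback rule, and all names are illustrative assumptions standing in for the paper's trained generative recommender:

```python
import numpy as np

# Toy stand-in for a trained generative recommender with a
# disentangled latent space. The linear encoder/decoder weights
# below are random placeholders, not a learned model.
rng = np.random.default_rng(0)
n_items, latent_dim = 50, 4

enc = rng.normal(size=(n_items, latent_dim))  # "encoder" weights
dec = rng.normal(size=(latent_dim, n_items))  # "decoder" weights

def encode(history):
    # history: binary vector of the items a user interacted with
    return history @ enc

def recommend(z, k=5):
    # Decode the latent representation into item scores, return top-k
    scores = z @ dec
    return np.argsort(-scores)[:k]

history = np.zeros(n_items)
history[[1, 7, 12]] = 1.0
z = encode(history)
baseline = recommend(z)

# User feedback ("more of factor 2"): nudge only that disentangled
# dimension, leaving the rest of the representation intact, then
# re-decode to obtain controlled, personalized recommendations.
z_controlled = z.copy()
z_controlled[2] += 2.0
controlled = recommend(z_controlled)
```

The key point the sketch illustrates is that control is applied in latent space, one factor at a time, while the generative decoder turns the edited representation back into a ranked list.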

Collaborative Filtering · Disentanglement

Sinkhorn AutoEncoders

2 code implementations · ICLR 2019 · Giorgio Patrini, Rianne van den Berg, Patrick Forré, Marcello Carioni, Samarth Bhargav, Max Welling, Tim Genewein, Frank Nielsen

We show that minimizing the p-Wasserstein distance between the generator and the true data distribution is equivalent to the unconstrained min-min optimization of the p-Wasserstein distance between the encoder aggregated posterior and the prior in latent space, plus a reconstruction error.
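A hedged sketch of the stated equivalence in symbols (the notation here is my own, not quoted from the paper): writing $W_p$ for the $p$-Wasserstein distance, $p_X$ for the data distribution, $p_Z$ for the latent prior, $G$ for the generator, $Q$ for a deterministic encoder, and $Q_\# p_X$ for the aggregated posterior, the claim reads roughly as

$$
\min_{G}\, W_p\!\left(G_\# p_Z,\; p_X\right)
\;\;\longleftrightarrow\;\;
\min_{G}\,\min_{Q}\;
\underbrace{\mathbb{E}_{x \sim p_X}\!\left[\, \lVert x - G(Q(x)) \rVert^p \,\right]^{1/p}}_{\text{reconstruction error}}
\;+\;
\underbrace{W_p\!\left(Q_\# p_X,\; p_Z\right)}_{\text{latent-space Wasserstein term}}
$$

i.e. the intractable Wasserstein matching in data space is replaced by an unconstrained min-min problem: match the aggregated posterior to the prior in latent space and pay a reconstruction cost.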

Probabilistic Programming
