no code implementations • CVPR 2023 • Michael Bernasconi, Abdelaziz Djelouah, Farnood Salehi, Markus Gross, Christopher Schroers
This renders our model applicable to types of data not seen during training, such as normals.
1 code implementation • 30 Jul 2020 • Mahsa Forouzesh, Farnood Salehi, Patrick Thiran
We find a rather strong empirical relation between the output sensitivity and the variance in the bias-variance decomposition of the loss function, which hints at using sensitivity as a metric for comparing the generalization performance of networks without requiring labeled data.
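The sensitivity notion referenced above can be sketched as an unlabeled-data quantity: perturb the input slightly and measure how far the network's output moves, with no labels involved. The toy two-layer network and all names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained classifier (tiny two-layer net).
W1 = rng.normal(size=(10, 4)) * 0.5
W2 = rng.normal(size=(4, 3)) * 0.5

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x):
    return softmax(W2.T @ np.tanh(W1.T @ x))

def output_sensitivity(xs, sigma=0.01, n_draws=20):
    """Mean output change under small Gaussian input perturbations.

    Note: no labels are used anywhere in this computation.
    """
    total = 0.0
    for x in xs:
        base = forward(x)
        for _ in range(n_draws):
            pert = forward(x + sigma * rng.normal(size=x.shape))
            total += np.linalg.norm(pert - base)
    return total / (len(xs) * n_draws)

xs = [rng.normal(size=10) for _ in range(50)]
print(output_sensitivity(xs))
```

A lower average sensitivity would, under the paper's observed correlation with variance, suggest better generalization when comparing networks on the same unlabeled inputs.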
1 code implementation • NeurIPS 2019 • Farnood Salehi, William Trouleau, Matthias Grossglauser, Patrick Thiran
It is also able to take into account the uncertainty in the model parameters by learning a posterior distribution over them.
1 code implementation • 1 Jul 2019 • Robert Bamler, Farnood Salehi, Stephan Mandt
Knowledge graph embeddings rank among the most successful methods for link prediction in knowledge graphs, i.e., the task of completing an incomplete collection of relational facts.
Ranked #5 on Link Prediction on FB15k
no code implementations • 27 Sep 2018 • Farnood Salehi, Robert Bamler, Stephan Mandt
We develop a probabilistic extension of state-of-the-art embedding models for link prediction in relational knowledge graphs.
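As a rough sketch of the link-prediction setting shared by the two entries above, the snippet below scores triples with a plain DistMult point estimate and ranks candidate tails; the probabilistic extension described in the paper would instead learn a posterior distribution over the embeddings. All sizes and names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tiny knowledge graph vocabulary.
n_entities, n_relations, dim = 5, 2, 8

# Point-estimate embeddings; a probabilistic model would place
# a distribution over these parameters instead.
E = rng.normal(scale=0.1, size=(n_entities, dim))
R = rng.normal(scale=0.1, size=(n_relations, dim))

def distmult_score(h, r, t):
    """DistMult score <e_h, w_r, e_t> for a triple (head, relation, tail)."""
    return float(np.sum(E[h] * R[r] * E[t]))

def rank_tails(h, r):
    """Rank all entities as candidate tails for the query (h, r, ?)."""
    scores = [distmult_score(h, r, t) for t in range(n_entities)]
    return sorted(range(n_entities), key=lambda t: -scores[t])

print(rank_tails(0, 1))
```

Link-prediction benchmarks such as FB15k then evaluate where the true tail entity lands in this ranking.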
no code implementations • 23 Feb 2018 • L. Elisa Celis, Sayash Kapoor, Farnood Salehi, Nisheeth K. Vishnoi
Personalization is pervasive in the online space as it leads to higher efficiency and revenue by allowing the most relevant content to be served to each user.
no code implementations • NeurIPS 2018 • Farnood Salehi, Patrick Thiran, L. Elisa Celis
Ideally, we would update the decision variable that yields the largest decrease in the cost function.
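The greedy rule described above (update the coordinate that yields the largest decrease in the cost) can be illustrated on a toy quadratic. This is a minimal sketch under assumed data, not the paper's algorithm or setting.

```python
import numpy as np

# Greedy coordinate descent on f(x) = 0.5 * x^T A x - b^T x.
A = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, 1.0])

def grad(x):
    return A @ x - b

x = np.zeros(2)
for _ in range(100):
    g = grad(x)
    i = int(np.argmax(np.abs(g)))  # coordinate promising the steepest decrease
    x[i] -= g[i] / A[i, i]         # exact minimization along coordinate i
print(x)  # approaches the solution of A x = b
```

For a quadratic, picking the coordinate with the largest gradient magnitude and minimizing exactly along it is cheap; the harder question the paper addresses is doing something comparable when evaluating all coordinates is too expensive.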
no code implementations • 8 Aug 2017 • Farnood Salehi, L. Elisa Celis, Patrick Thiran
This approach to sampling data points is general, and can be used in conjunction with any algorithm that uses an unbiased gradient estimate; we expect it to have broad applicability beyond the specific examples explored in this work.
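The key mechanism behind such schemes is importance weighting: sample data points non-uniformly, then divide by the sampling probability so the gradient estimate stays unbiased. The toy per-example "gradients" and the norm-proportional probabilities below are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy per-example gradients; in practice these come from a model.
grads = rng.normal(size=(100, 3))
full_grad = grads.mean(axis=0)

# Non-uniform sampling probabilities, e.g. proportional to gradient norms.
p = np.linalg.norm(grads, axis=1)
p /= p.sum()

def sampled_estimate(n_samples=5):
    """Importance-weighted mini-batch gradient.

    Reweighting each sampled gradient by 1 / (N * p_i) makes the
    estimator unbiased for the full gradient, for any valid p > 0.
    """
    idx = rng.choice(len(grads), size=n_samples, p=p)
    return np.mean(grads[idx] / (len(grads) * p[idx, None]), axis=0)

# Averaging many estimates should approach the true full gradient.
est = np.mean([sampled_estimate() for _ in range(20000)], axis=0)
print(np.abs(est - full_grad).max())
```

Any sampling distribution with positive probabilities keeps the estimate unbiased; a good choice of p additionally reduces its variance.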
no code implementations • ICML 2017 • Pedram Pad, Farnood Salehi, Elisa Celis, Patrick Thiran, Michael Unser
We propose a new statistical dictionary learning algorithm for sparse signals that is based on an $\alpha$-stable innovation model.
no code implementations • 14 Apr 2017 • L. Elisa Celis, Farnood Salehi
We provide algorithms for this setting, both for stochastic and adversarial bandits, and show that their regret smoothly interpolates between the regret in the classical bandit setting and that of the full-information setting as a function of the neighbors' exploration.