Search Results for author: Touqir Sajed

Found 4 papers, 1 paper with code

RECipe: Does a Multi-Modal Recipe Knowledge Graph Fit a Multi-Purpose Recommendation System?

no code implementations · 8 Aug 2023 · Ali Pesaranghader, Touqir Sajed

We initialize the weights of the entities with these embeddings to train our knowledge graph embedding (KGE) model.

Benchmarking Collaborative Filtering +4
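The snippet above mentions initializing entity weights from precomputed embeddings before training the KGE model. A minimal sketch of that warm-start idea, assuming a hypothetical `init_entity_weights` helper and a plain dict of pretrained vectors (the paper's actual pipeline and names are not shown here):

```python
import numpy as np

def init_entity_weights(entities, pretrained, dim=64, rng=None):
    """Build an entity-embedding matrix, seeding rows from pretrained
    vectors where available and sampling the rest randomly.
    Hypothetical helper, not the paper's actual implementation."""
    rng = rng or np.random.default_rng(0)
    # Random init for entities without a pretrained vector.
    mat = rng.normal(scale=0.1, size=(len(entities), dim))
    for i, entity in enumerate(entities):
        if entity in pretrained:
            mat[i] = pretrained[entity]  # warm-start this row
    return mat
```

The resulting matrix would then be passed to the KGE model as its initial entity-embedding table instead of a purely random one.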

Self-Supervised Contrastive BERT Fine-tuning for Fusion-based Reviewed-Item Retrieval

2 code implementations · 1 Aug 2023 · Mohammad Mahdi Abdollah Pour, Parsa Farinneya, Armin Toroghi, Anton Korikov, Ali Pesaranghader, Touqir Sajed, Manasa Bharadwaj, Borislav Mavrin, Scott Sanner

Experimental results show that Late Fusion contrastive learning for Neural RIR outperforms all other contrastive IR configurations, Neural IR, and sparse retrieval baselines, thus demonstrating the power of exploiting the two-level structure in Neural RIR approaches as well as the importance of preserving the nuance of individual review content via Late Fusion methods.

Contrastive Learning Information Retrieval +2
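The abstract above contrasts Late Fusion, which scores each review individually and aggregates at the score level, with approaches that pool review content first. A minimal sketch of score-level fusion under the assumption that review and query embeddings (e.g. from a fine-tuned BERT encoder) are already available; the function name and aggregation choice are illustrative, not the paper's exact method:

```python
import numpy as np

def late_fusion_scores(query_emb, reviews_by_item, agg=np.max):
    """Rank items by fusing query-review cosine similarities at the
    score level (Late Fusion), preserving each individual review's
    contribution. Hypothetical sketch, not the authors' code."""
    scores = {}
    q_norm = np.linalg.norm(query_emb)
    for item, embs in reviews_by_item.items():
        # Cosine similarity between the query and every review.
        sims = embs @ query_emb / (
            np.linalg.norm(embs, axis=1) * q_norm + 1e-9)
        scores[item] = float(agg(sims))  # aggregate per item
    return scores
```

Aggregating scores (rather than averaging embeddings) is what lets a single highly relevant review surface its item, which matches the abstract's point about preserving the nuance of individual review content.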

An Optimal Private Stochastic-MAB Algorithm Based on an Optimal Private Stopping Rule

no code implementations · 22 May 2019 · Touqir Sajed, Or Sheffet

We present a provably optimal differentially private algorithm for the stochastic multi-arm bandit problem, as opposed to the private analogue of the UCB-algorithm [Mishra and Thakurta, 2015; Tossou and Dimitrakakis, 2016], which does not meet the recently discovered lower bound of $\Omega \left(\frac{K\log(T)}{\epsilon} \right)$ [Shariff and Sheffet, 2018].
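Private bandit algorithms of this kind rest on differentially private estimates of each arm's mean reward. A minimal sketch of that primitive via the Laplace mechanism, assuming rewards in $[0, 1]$; this illustrates the building block only, not the paper's optimal private stopping rule:

```python
import numpy as np

def private_mean(rewards, epsilon, rng=None):
    """Epsilon-differentially-private estimate of an arm's mean reward
    (rewards assumed in [0, 1]) via the Laplace mechanism.
    Illustrative primitive, not the paper's algorithm."""
    rng = rng or np.random.default_rng(0)
    n = len(rewards)
    # The mean of n values in [0, 1] has sensitivity 1/n, so Laplace
    # noise with scale 1/(n * epsilon) gives epsilon-DP.
    noise = rng.laplace(scale=1.0 / (n * epsilon))
    return float(np.mean(rewards)) + noise
```

An arm-elimination scheme would compare such noisy means with confidence intervals widened to absorb the Laplace noise, which is where the extra $K\log(T)/\epsilon$ regret term in the lower bound comes from.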

High-confidence error estimates for learned value functions

no code implementations · 28 Aug 2018 · Touqir Sajed, Wesley Chung, Martha White

We provide experiments investigating the number of samples required by this offline algorithm in simple benchmark reinforcement learning domains, and highlight that there are still many open questions to be solved for this important problem.

Reinforcement Learning (RL) +1
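The snippet above studies how many samples are needed for high-confidence error estimates of a learned value function. A generic sketch of the underlying idea, bounding the mean absolute error from sampled rollouts with Hoeffding's inequality; the function and its inputs are illustrative assumptions, not the paper's specific offline algorithm:

```python
import math

def hoeffding_error_bound(abs_errors, delta, value_range):
    """Upper bound, holding with probability at least 1 - delta, on the
    expected absolute error of a learned value function, computed from
    sampled per-state absolute errors bounded by value_range.
    Generic Hoeffding sketch, not the paper's estimator."""
    n = len(abs_errors)
    mean_err = sum(abs_errors) / n
    # Hoeffding slack for n i.i.d. samples in [0, value_range].
    slack = value_range * math.sqrt(math.log(1.0 / delta) / (2 * n))
    return mean_err + slack
```

The slack term shrinks as $O(1/\sqrt{n})$, which is why sample-size requirements are the natural quantity to benchmark in the domains the abstract mentions.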
