Search Results for author: Yusuke Narita

Found 14 papers, 6 papers with code

Off-Policy Evaluation of Ranking Policies under Diverse User Behavior

1 code implementation • 26 Jun 2023 • Haruka Kiyohara, Masatoshi Uehara, Yusuke Narita, Nobuyuki Shimizu, Yasuo Yamamoto, Yuta Saito

We show that the resulting estimator, which we call Adaptive IPS (AIPS), can be unbiased under any complex user behavior.

Off-policy evaluation
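
As a quick illustration of the baseline this paper generalizes, here is a minimal NumPy sketch of item-position IPS for ranking OPE under an assumed "independent" user-behavior model (each slot's reward depends only on the item shown in that slot). The function name, the per-slot propensity inputs, and the toy numbers are illustrative assumptions; this is not the paper's AIPS estimator, which adapts its weights to the user behavior.

import numpy as np

def iips_ranking(rewards, logging_prop, eval_prop):
    # Item-position IPS: reweight each slot's logged reward by the
    # ratio of evaluation- to logging-policy propensities for the
    # item shown in that slot, then sum over slots and average.
    # rewards, logging_prop, eval_prop: arrays of shape (n, L).
    w = eval_prop / logging_prop
    return float(np.mean(np.sum(w * rewards, axis=1)))

# Toy usage with fabricated propensities, for illustration only.
rng = np.random.default_rng(0)
n, L = 1000, 3
rewards = rng.binomial(1, 0.1, size=(n, L)).astype(float)
print(iips_ranking(rewards, np.full((n, L), 0.2), np.full((n, L), 0.25)))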

Counterfactual Learning with General Data-generating Policies

no code implementations • 4 Dec 2022 • Yusuke Narita, Kyohei Okumura, Akihiro Shimizu, Kohei Yata

Off-policy evaluation (OPE) attempts to predict the performance of counterfactual policies using log data from a different policy.

counterfactual, Decision Making, +1
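
Since this entry defines OPE itself, a textbook inverse-propensity-scoring (IPS) estimate may help fix ideas: reweight each logged reward by the ratio of evaluation- to behavior-policy probabilities. This is the generic estimator, not this paper's method; all names are illustrative.

import numpy as np

def ips_value(rewards, pi_e, pi_b):
    # Unbiased when pi_b > 0 wherever pi_e > 0:
    # E[(pi_e / pi_b) * r] equals the evaluation policy's value.
    # rewards, pi_e, pi_b: arrays of shape (n,), with pi_e and pi_b
    # evaluated at the logged actions.
    return float(np.mean(rewards * pi_e / pi_b))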

Policy-Adaptive Estimator Selection for Off-Policy Evaluation

1 code implementation • 25 Nov 2022 • Takuma Udagawa, Haruka Kiyohara, Yusuke Narita, Yuta Saito, Kei Tateno

Although many estimators have been developed, no single estimator dominates the others, because an estimator's accuracy can vary greatly with the characteristics of a given OPE task, such as the evaluation policy, the number of actions, and the noise level.

counterfactual, Off-policy evaluation
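
A generic way to make estimator selection concrete (not the paper's policy-adaptive procedure): score candidate estimators by mean squared error on synthetic OPE tasks where the true policy value is known, and pick the best. All names below are illustrative.

import numpy as np

def select_estimator(estimators, tasks, true_values):
    # estimators:  dict mapping name -> callable(task) -> float estimate
    # tasks:       synthetic OPE tasks the callables accept
    # true_values: ground-truth policy value for each task
    mse = {
        name: float(np.mean([(est(t) - v) ** 2
                             for t, v in zip(tasks, true_values)]))
        for name, est in estimators.items()
    }
    return min(mse, key=mse.get), mse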

Approximating Choice Data by Discrete Choice Models

no code implementations • 4 May 2022 • Haoge Chang, Yusuke Narita, Kota Saito

We obtain a necessary and sufficient condition under which random-coefficient discrete choice models, such as mixed-logit models, are rich enough to approximate any nonparametric random utility models arbitrarily well across choice sets.

Discrete Choice Models
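
To make the model class concrete, here is a Monte Carlo sketch of mixed-logit choice probabilities, P(j | C) = E_beta[softmax(X beta)_j] with beta drawn from a normal distribution; the names and the Gaussian mixing assumption are illustrative, not taken from the paper.

import numpy as np

def mixed_logit_probs(X, mean, cov, n_draws=10_000, seed=0):
    # X: (J, d) attribute matrix for the J alternatives in choice set C.
    # Averages logit choice probabilities over random-coefficient draws.
    rng = np.random.default_rng(seed)
    betas = rng.multivariate_normal(mean, cov, size=n_draws)  # (n_draws, d)
    u = X @ betas.T                                           # (J, n_draws)
    u -= u.max(axis=0, keepdims=True)                         # numerical stability
    p = np.exp(u)
    p /= p.sum(axis=0, keepdims=True)
    return p.mean(axis=1)                                     # (J,)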

Evaluating the Robustness of Off-Policy Evaluation

2 code implementations • 31 Aug 2021 • Yuta Saito, Takuma Udagawa, Haruka Kiyohara, Kazuki Mogi, Yusuke Narita, Kei Tateno

Unfortunately, identifying a reliable estimator from results reported in research papers is often difficult because the current experimental procedure evaluates and compares the estimators' performance on a narrow set of hyperparameters and evaluation policies.

Off-policy evaluation, Recommendation Systems
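
In the spirit of the paper's point (the exact protocol differs), here is a sketch of stress-testing an estimator across many randomly drawn configurations rather than a single hyperparameter setting; sample_config and run_task are hypothetical callables.

import numpy as np

def error_profile(sample_config, run_task, n_trials=100, seed=0):
    # sample_config: callable(rng) -> config (evaluation policy,
    #                hyperparameters, noise level, ...)
    # run_task:      callable(config) -> (estimate, true_value)
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_trials):
        est, truth = run_task(sample_config(rng))
        errs.append((est - truth) ** 2)
    errs = np.asarray(errs)
    return {"mean_se": float(errs.mean()),
            "p90_se": float(np.quantile(errs, 0.9))}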

Curse of Democracy: Evidence from the 21st Century

no code implementations • 15 Apr 2021 • Yusuke Narita, Ayumi Sudo

Democracy is widely believed to have contributed to economic growth and public health in the 20th and earlier centuries.

Breaking Ties: Regression Discontinuity Design Meets Market Design

no code implementations • 31 Dec 2020 • Atila Abdulkadiroglu, Joshua D. Angrist, Yusuke Narita, Parag Pathak

The New York City public high school match illustrates the latter, using test scores and other criteria to rank applicants at "screened" schools, combined with lottery tie-breaking at unscreened "lottery" schools.

Math, regression, +1
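
For readers new to the design, here is a textbook sharp regression-discontinuity estimate via separate local linear fits on each side of the cutoff (uniform kernel). This is a generic sketch, not the paper's hybrid RD/lottery research design, and the names are illustrative.

import numpy as np

def rdd_effect(running, outcome, cutoff=0.0, bandwidth=1.0):
    # Fit y ~ 1 + x separately above and below the cutoff within the
    # bandwidth; the treatment effect is the jump in intercepts at x = 0.
    x = np.asarray(running, float) - cutoff
    y = np.asarray(outcome, float)
    keep = np.abs(x) <= bandwidth
    x, y = x[keep], y[keep]

    def intercept_at_cutoff(side):
        X = np.column_stack([np.ones(side.sum()), x[side]])
        coef, *_ = np.linalg.lstsq(X, y[side], rcond=None)
        return coef[0]

    return float(intercept_at_cutoff(x >= 0) - intercept_at_cutoff(x < 0))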

Open Bandit Dataset and Pipeline: Towards Realistic and Reproducible Off-Policy Evaluation

3 code implementations • 17 Aug 2020 • Yuta Saito, Shunsuke Aihara, Megumi Matsutani, Yusuke Narita

Our dataset is unique in that it contains a set of multiple logged bandit datasets collected by running different policies on the same platform.

Off-policy evaluation
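
The dataset ships with the authors' Open Bandit Pipeline (obp). The sketch below follows the pipeline's documented quick-start pattern; class and attribute names are taken from the obp package and may differ across versions, and the uniform evaluation policy is purely illustrative.

import numpy as np
from obp.dataset import OpenBanditDataset
from obp.ope import OffPolicyEvaluation, InverseProbabilityWeighting

# Logged bandit feedback collected by the 'random' behavior policy.
dataset = OpenBanditDataset(behavior_policy="random", campaign="all")
feedback = dataset.obtain_batch_bandit_feedback()

# A uniform-random evaluation policy over actions, illustration only.
action_dist = np.full(
    (feedback["n_rounds"], dataset.n_actions, dataset.len_list),
    1.0 / dataset.n_actions,
)

ope = OffPolicyEvaluation(
    bandit_feedback=feedback,
    ope_estimators=[InverseProbabilityWeighting()],
)
print(ope.estimate_policy_values(action_dist=action_dist))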

Debiased Off-Policy Evaluation for Recommendation Systems

no code implementations • 20 Feb 2020 • Yusuke Narita, Shota Yasui, Kohei Yata

Efficient methods to evaluate new algorithms are critical for improving interactive bandit and reinforcement learning systems such as recommendation systems.

counterfactual, Off-policy evaluation, +2

Efficient Adaptive Experimental Design for Average Treatment Effect Estimation

no code implementations • 13 Feb 2020 • Masahiro Kato, Takuya Ishihara, Junya Honda, Yusuke Narita

In adaptive experimental design, the experimenter may change the probability of assigning a treatment based on past observations in order to estimate the average treatment effect (ATE) efficiently.

Experimental Design
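
A generic two-arm sketch of this idea (not the paper's estimator): after a 50/50 burn-in, steer the treatment probability toward the Neyman allocation, which assigns arms in proportion to their outcome standard deviations, then estimate the ATE by inverse-probability weighting with the per-round assignment probabilities. draw_outcome is a hypothetical sampler.

import numpy as np

def adaptive_ate(draw_outcome, n=2000, burn_in=200, seed=0):
    # draw_outcome: callable(arm) -> float outcome, arm in {0, 1}.
    rng = np.random.default_rng(seed)
    arms, probs, ys = [], [], []
    for t in range(n):
        if t < burn_in:
            p = 0.5  # uniform assignment while learning the variances
        else:
            sd = [np.std([y for a, y in zip(arms, ys) if a == k]) + 1e-6
                  for k in (0, 1)]
            p = float(np.clip(sd[1] / (sd[0] + sd[1]), 0.1, 0.9))
        a = int(rng.random() < p)  # a = 1 means treatment
        arms.append(a); probs.append(p); ys.append(draw_outcome(a))
    arms, probs, ys = map(np.asarray, (arms, probs, ys))
    # IPW: E[Y(1)] - E[Y(0)] using the realized assignment probabilities.
    w = np.where(arms == 1, 1.0 / probs, -1.0 / (1.0 - probs))
    return float(np.mean(w * ys))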

Efficient Counterfactual Learning from Bandit Feedback

no code implementations • 10 Sep 2018 • Yusuke Narita, Shota Yasui, Kohei Yata

What is the most statistically efficient way to do off-policy evaluation and optimization with batch data from bandit feedback?

Causal Inference, counterfactual, +2
