Search Results for author: Yusuke Narita

Found 10 papers, 4 papers with code

Approximating Choice Data by Discrete Choice Models

no code implementations · 4 May 2022 · Haoge Chang, Yusuke Narita, Kota Saito

We obtain a necessary and sufficient condition under which parametric random-coefficient discrete choice models can approximate the choice behavior generated by nonparametric random utility models.
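The random-coefficient models in question include the mixed logit, whose choice probabilities average standard logit probabilities over a distribution of taste coefficients. The following is a minimal numpy sketch of that simulation step only (all names and the normal coefficient distribution are illustrative assumptions, not the paper's construction):

```python
import numpy as np

def mixed_logit_choice_probs(X, coef_mean, coef_cov, n_draws=5000, seed=0):
    """Simulate mixed logit choice probabilities for one choice set.

    X: (n_alternatives, n_features) attribute matrix.
    Coefficients are drawn from a multivariate normal (an illustrative
    choice of mixing distribution); logit probabilities are averaged
    over the draws.
    """
    rng = np.random.default_rng(seed)
    betas = rng.multivariate_normal(coef_mean, coef_cov, size=n_draws)
    utils = betas @ X.T                        # (n_draws, n_alternatives)
    utils -= utils.max(axis=1, keepdims=True)  # numerical stability
    expu = np.exp(utils)
    probs = expu / expu.sum(axis=1, keepdims=True)
    return probs.mean(axis=0)                  # average over coefficient draws

# Two hypothetical alternatives described by (price, quality)
X = np.array([[1.0, 2.0],
              [2.0, 1.0]])
p = mixed_logit_choice_probs(X, coef_mean=[-0.5, 1.0], coef_cov=np.eye(2) * 0.25)
```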

Evaluating the Robustness of Off-Policy Evaluation

2 code implementations · 31 Aug 2021 · Yuta Saito, Takuma Udagawa, Haruka Kiyohara, Kazuki Mogi, Yusuke Narita, Kei Tateno

Unfortunately, identifying a reliable estimator from results reported in research papers is often difficult because the current experimental procedure evaluates and compares the estimators' performance on a narrow set of hyperparameters and evaluation policies.

Tasks: Recommendation Systems

Algorithm is Experiment: Machine Learning, Market Design, and Policy Eligibility Rules

2 code implementations · 26 Apr 2021 · Yusuke Narita, Kohei Yata

We use this observation to develop a treatment-effect estimator for a class of stochastic and deterministic decision-making algorithms.

Tasks: BIG-bench Machine Learning, Decision Making, +1
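The key observation is that when an algorithm's assignment probabilities are known exactly from its code, they can serve directly as propensity scores. A minimal inverse-probability-weighting sketch of that general idea (not the paper's estimator; the overlap trimming and synthetic data are illustrative assumptions):

```python
import numpy as np

def ipw_ate(y, d, p_treat):
    """IPW estimate of the ATE when the assignment probabilities
    p_treat = P(D=1 | inputs) are known from the algorithm's code."""
    y, d, p = map(np.asarray, (y, d, p_treat))
    ok = (p > 0) & (p < 1)  # keep units with overlap; deterministic rules give p in {0, 1}
    y, d, p = y[ok], d[ok], p[ok]
    return np.mean(d * y / p - (1 - d) * y / (1 - p))

# Synthetic example with a true ATE of 2
rng = np.random.default_rng(1)
n = 10_000
p = rng.uniform(0.2, 0.8, n)       # algorithmic assignment probabilities
d = rng.binomial(1, p)             # realized treatment decisions
y = 2.0 * d + rng.normal(size=n)   # outcomes
est = ipw_ate(y, d, p)
```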

Curse of Democracy: Evidence from the 21st Century

no code implementations · 15 Apr 2021 · Yusuke Narita, Ayumi Sudo

Democracy is widely believed to contribute to economic growth and public health in the 20th and earlier centuries.

Breaking Ties: Regression Discontinuity Design Meets Market Design

no code implementations · 31 Dec 2020 · Atila Abdulkadiroglu, Joshua D. Angrist, Yusuke Narita, Parag Pathak

The New York City public high school match illustrates the latter, using test scores and other criteria to rank applicants at "screened" schools, combined with lottery tie-breaking at unscreened "lottery" schools.

Tasks: Selection Bias

Open Bandit Dataset and Pipeline: Towards Realistic and Reproducible Off-Policy Evaluation

3 code implementations · 17 Aug 2020 · Yuta Saito, Shunsuke Aihara, Megumi Matsutani, Yusuke Narita

Our dataset is unique in that it contains a set of multiple logged bandit datasets collected by running different policies on the same platform.
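Logged bandit feedback of this kind (chosen action, observed reward, and the logging policy's action probability) is what off-policy evaluation consumes. A minimal sketch of the standard Inverse Propensity Scoring (IPS) estimator on synthetic logs (generic textbook IPS, not the paper's pipeline; the synthetic setup is an assumption):

```python
import numpy as np

def ips_policy_value(actions, rewards, logging_probs, eval_probs):
    """IPS estimate of a new policy's value from logged bandit feedback.

    eval_probs: the evaluation policy's probability of the logged action;
    the estimator reweights rewards by pi_e(a|x) / pi_b(a|x).
    """
    w = eval_probs / logging_probs
    return np.mean(w * rewards)

# Synthetic log: 3 actions, uniform logging policy
rng = np.random.default_rng(0)
n, n_actions = 50_000, 3
true_reward = np.array([0.1, 0.5, 0.3])    # per-action expected reward
actions = rng.integers(n_actions, size=n)  # uniform logging policy
rewards = rng.binomial(1, true_reward[actions]).astype(float)
logging_probs = np.full(n, 1 / n_actions)
# Evaluation policy: always play action 1 (true value = 0.5)
eval_probs = (actions == 1).astype(float)
v_hat = ips_policy_value(actions, rewards, logging_probs, eval_probs)
```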

Debiased Off-Policy Evaluation for Recommendation Systems

no code implementations · 20 Feb 2020 · Yusuke Narita, Shota Yasui, Kohei Yata

Efficient methods to evaluate new algorithms are critical for improving interactive bandit and reinforcement learning systems such as recommendation systems.

Tasks: Recommendation Systems, Reinforcement Learning

Efficient Adaptive Experimental Design for Average Treatment Effect Estimation

no code implementations · 13 Feb 2020 · Masahiro Kato, Takuya Ishihara, Junya Honda, Yusuke Narita

In adaptive experimental design, the experimenter is allowed to change the probability of assigning a treatment using past observations for estimating the ATE efficiently.

Tasks: Experimental Design
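A classic instance of such adaptive assignment is batchwise Neyman allocation: after each batch, set the treatment probability to the estimated ratio of outcome standard deviations. The toy sketch below illustrates that general idea only (it is not the authors' estimator; the clipping bounds, batch size, and difference-in-means estimate are illustrative assumptions):

```python
import numpy as np

def adaptive_neyman_experiment(mu, sigma, n, batch=100, seed=0):
    """Toy adaptive design: after each batch, move the treatment
    probability toward the Neyman allocation s1 / (s0 + s1) estimated
    from past data. mu, sigma: true outcome means/sds for arms
    (index 0 = control, 1 = treated)."""
    rng = np.random.default_rng(seed)
    d_all, y_all = [], []
    p = 0.5  # start balanced
    for start in range(0, n, batch):
        m = min(batch, n - start)
        d = rng.binomial(1, p, size=m)          # assign using current p
        y = rng.normal(mu[d], sigma[d])         # observe outcomes
        d_all.append(d); y_all.append(y)
        d_hist = np.concatenate(d_all); y_hist = np.concatenate(y_all)
        if d_hist.min() < d_hist.max():         # both arms observed so far
            s0 = y_hist[d_hist == 0].std() + 1e-6
            s1 = y_hist[d_hist == 1].std() + 1e-6
            p = float(np.clip(s1 / (s0 + s1), 0.1, 0.9))
    d = np.concatenate(d_all); y = np.concatenate(y_all)
    ate_hat = y[d == 1].mean() - y[d == 0].mean()
    return ate_hat, p

# Treated arm is noisier, so the design should oversample it (p -> 0.75)
ate_hat, p_final = adaptive_neyman_experiment(mu=np.array([0.0, 1.0]),
                                              sigma=np.array([1.0, 3.0]),
                                              n=20_000)
```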

Efficient Counterfactual Learning from Bandit Feedback

no code implementations · 10 Sep 2018 · Yusuke Narita, Shota Yasui, Kohei Yata

What is the most statistically efficient way to do off-policy evaluation and optimization with batch data from bandit feedback?
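A standard answer in this literature combines a reward model with importance weighting: the doubly robust estimator. A minimal sketch of that generic estimator (not necessarily the paper's proposal; the deliberately biased reward model in the example is an assumption chosen to show the correction at work):

```python
import numpy as np

def doubly_robust_value(actions, rewards, logging_probs, eval_probs, q_hat):
    """Doubly robust off-policy value estimate: a reward-model baseline
    plus an importance-weighted correction on the logged actions.

    q_hat:      (n, n_actions) predicted rewards for every action.
    eval_probs: (n, n_actions) evaluation policy's action probabilities.
    """
    idx = np.arange(len(actions))
    baseline = (eval_probs * q_hat).sum(axis=1)       # E_{pi_e}[q_hat]
    w = eval_probs[idx, actions] / logging_probs      # importance weights
    correction = w * (rewards - q_hat[idx, actions])  # residual term
    return np.mean(baseline + correction)

# Synthetic log: 2 actions, uniform logging policy
rng = np.random.default_rng(0)
n, k = 50_000, 2
true_q = np.array([0.2, 0.6])
actions = rng.integers(k, size=n)
rewards = rng.binomial(1, true_q[actions]).astype(float)
logging_probs = np.full(n, 0.5)
eval_probs = np.tile([0.0, 1.0], (n, 1))  # always play action 1 (value = 0.6)
q_hat = np.tile(true_q + 0.05, (n, 1))    # deliberately biased reward model
v_dr = doubly_robust_value(actions, rewards, logging_probs, eval_probs, q_hat)
```

Because the importance-weighted residual corrects the biased baseline, the estimate stays near the true value 0.6 even though the reward model is off by 0.05.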
