no code implementations • ICML 2020 • Hanrui Zhang, Vincent Conitzer
We study problems where a learner aims to learn the valuations of an agent by observing which goods he buys under varying price vectors.
no code implementations • ICML 2020 • Vincent Conitzer, Debmalya Panigrahi, Hanrui Zhang
We study the problem of learning opinions in social networks.
no code implementations • 16 Jul 2024 • Saba Ahmadi, Kunhe Yang, Hanrui Zhang
We derive regret bounds in both the realizable setting where all agents manipulate according to the same graph within the graph family, and the agnostic setting where the manipulation graphs are chosen adversarially and not consistently modeled by a single graph in the family.
no code implementations • 16 May 2022 • Hanrui Zhang, Yu Cheng, Vincent Conitzer
Our approach can also be extended to the (discounted) infinite-horizon case, for which we give an algorithm that runs in time polynomial in the size of the input and $\log(1/\varepsilon)$, and returns a policy that is optimal up to an additive error of $\varepsilon$.
no code implementations • NeurIPS 2021 • Yuan Deng, Hanrui Zhang
We study prior-independent dynamic auction design with production costs for a value-maximizing buyer, a paradigm that has recently become prevalent with the development of automated bidding algorithms on advertising platforms.
1 code implementation • 13 Aug 2021 • Steven Jecmen, Hanrui Zhang, Ryan Liu, Fei Fang, Vincent Conitzer, Nihar B. Shah
Many scientific conferences employ a two-phase paper review process, where some papers are assigned additional reviewers after the initial reviews are submitted.
no code implementations • Proceedings of the AAAI Conference on Artificial Intelligence 2021 • Hanrui Zhang, Vincent Conitzer
We give a sample complexity bound that is, curiously, independent of the hypothesis class, for the ERM principle restricted to incentive-compatible classifiers.
no code implementations • 12 Apr 2021 • Hanrui Zhang, Yu Cheng, Vincent Conitzer
We study the problem of automated mechanism design with partial verification, where each type can (mis)report only a restricted set of types (rather than any other type), induced by the principal's limited verification power.
1 code implementation • 18 Dec 2020 • Anilesh K. Krishnaswamy, Haoming Li, David Rein, Hanrui Zhang, Vincent Conitzer
To this end, we present {\sc IC-LR}, a modification of Logistic Regression that removes the incentive to strategically drop features.
no code implementations • 22 Oct 2020 • Ruosong Wang, Hanrui Zhang, Devendra Singh Chaplot, Denis Garagić, Ruslan Salakhutdinov
We study planning with submodular objective functions, where instead of maximizing the cumulative reward, the goal is to maximize the objective value induced by a submodular function.
2 code implementations • NeurIPS 2020 • Steven Jecmen, Hanrui Zhang, Ryan Liu, Nihar B. Shah, Vincent Conitzer, Fei Fang
We further consider the problem of restricting the joint probability that certain suspect pairs of reviewers are assigned to certain papers, and show that this problem is NP-hard for arbitrary constraints on these joint probabilities but efficiently solvable for a practical special case.
no code implementations • ICML 2020 • Yi Li, Ruosong Wang, Lin Yang, Hanrui Zhang
We give a row sampling algorithm for the quantile loss function with sample complexity nearly linear in the dimensionality of the data, improving upon the previous best algorithm, whose sample complexity has at least cubic dependence on the dimensionality.
no code implementations • NeurIPS 2019 • Hanrui Zhang, Yu Cheng, Vincent Conitzer
In other settings, the principal may not even be able to observe samples directly; instead, she must rely on signals that the agent is able to send based on the samples that he obtains, and he will choose these signals strategically.
no code implementations • NeurIPS 2019 • Simon S. Du, Yuping Luo, Ruosong Wang, Hanrui Zhang
Though the idea of using function approximation was proposed at least 60 years ago, even in the simplest setup, i.e., approximating Q-functions with linear functions, it is still an open problem how to design a provably efficient algorithm that learns a near-optimal policy.
no code implementations • 14 Jun 2019 • Simon S. Du, Yuping Luo, Ruosong Wang, Hanrui Zhang
Though the idea of using function approximation was proposed at least 60 years ago, even in the simplest setup, i.e., approximating $Q$-functions with linear functions, it is still an open problem how to design a provably efficient algorithm that learns a near-optimal policy.