no code implementations • 4 Apr 2024 • Xinmeng Huang, Shuo Li, Mengxin Yu, Matteo Sesia, Hamed Hassani, Insup Lee, Osbert Bastani, Edgar Dobriban
Language Models (LMs) have shown promising performance in natural language generation.
1 code implementation • 26 Dec 2023 • Edgar Dobriban, Mengxin Yu
Methods for predictive inference have been developed under a variety of assumptions, often -- for instance, in standard conformal prediction -- relying on the invariance of the distribution of the data under special groups of transformations such as permutation groups.
no code implementations • 5 Aug 2023 • Jianqing Fan, Zhipeng Lou, Weichen Wang, Mengxin Yu
This paper studies the performance of the spectral method for estimation and uncertainty quantification of the unobserved preference scores of compared entities, in a more general and realistic setup.
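To make the setting concrete, here is a minimal sketch of a spectral method for pairwise comparisons in the style of Rank Centrality: build a Markov chain from win counts and read preference scores off its stationary distribution. The win-count matrix and all constants are hypothetical illustrations, not the paper's actual model or estimator.

```python
import numpy as np

# Hypothetical pairwise-comparison data among 3 items:
# wins[i, j] = number of times item j beat item i.
wins = np.array([[0, 6, 8],
                 [4, 0, 7],
                 [2, 3, 0]], dtype=float)

# Empirical probability that j beats i in their head-to-head matches.
totals = wins + wins.T
with np.errstate(divide="ignore", invalid="ignore"):
    P = np.where(totals > 0, wins / totals, 0.0)

# Normalize so each row sums to at most 1, then add self-loops to
# make P a proper transition matrix.
d = P.sum(axis=1).max()
P = P / d
np.fill_diagonal(P, 1.0 - P.sum(axis=1))

# Power iteration for the stationary distribution pi (pi = pi P);
# items that win more often accumulate more stationary mass.
pi = np.full(3, 1.0 / 3)
for _ in range(500):
    pi = pi @ P
scores = pi / pi.sum()

ranking = np.argsort(-scores)  # strongest item first
```

With these win counts, item 2 beats both rivals most of the time and ends up ranked first, while item 0 ends up last.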
no code implementations • 20 Dec 2022 • Jianqing Fan, Jikai Hou, Mengxin Yu
This paper concerns statistical estimation and inference for ranking problems based on pairwise comparisons with additional covariate information, such as the attributes of the compared items.
no code implementations • 22 Nov 2022 • Jianqing Fan, Zhipeng Lou, Weichen Wang, Mengxin Yu
The estimated distribution is then used to construct simultaneous confidence intervals for the differences in the preference scores and the ranks of individual items.
no code implementations • 22 Nov 2022 • Jianqing Fan, Zhipeng Lou, Mengxin Yu
A stylized feature of high-dimensional data is that many variables have heavy tails, so robust statistical methods are critical for valid large-scale inference.
no code implementations • 23 Aug 2022 • Mengxin Yu, Zhuoran Yang, Jianqing Fan
We study offline reinforcement learning under a novel model called strategic MDP, which characterizes the strategic interactions between a principal and a sequence of myopic agents with private types.
no code implementations • 2 Mar 2022 • Jianqing Fan, Zhipeng Lou, Mengxin Yu
To fill in such an important gap, we also leverage our model as the alternative model to test the sufficiency of the latent factor regression and the sparse linear regression models.
no code implementations • 13 Sep 2021 • Jianqing Fan, Yongyi Guo, Mengxin Yu
$F(\cdot)$ with $m$-th order derivative ($m\geq 2$), our policy achieves a regret upper bound of $\tilde{O}_{d}(T^{\frac{2m+1}{4m-1}})$, where $T$ is the time horizon and $\tilde{O}_{d}$ hides logarithmic terms and the dimensionality $d$ of the features.
no code implementations • 16 Jul 2020 • Jianqing Fan, Zhuoran Yang, Mengxin Yu
For both the vector and matrix settings, we construct an over-parameterized least-squares loss function by employing the score function transform and a robust truncation step designed specifically for heavy-tailed data.
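The truncation step mentioned above can be illustrated with a minimal sketch: clip each observation at a threshold so that extreme heavy-tailed values cannot dominate downstream estimates. The Student-t data, the threshold `tau`, and the function name `truncate` are all hypothetical illustrations, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical heavy-tailed observations: Student-t with 2.5 degrees
# of freedom (finite variance, but extreme outliers are common).
x = rng.standard_t(df=2.5, size=10_000)

def truncate(v, tau):
    """Element-wise truncation at level tau: values with |v| > tau
    are clipped to sign(v) * tau, taming the heavy tails."""
    return np.clip(v, -tau, tau)

tau = 5.0  # hypothetical threshold; theory-driven choices scale with n
x_trunc = truncate(x, tau)

# The mean of the truncated sample is a robust estimate of the
# (zero) population mean, insensitive to the rare extreme draws.
robust_mean = x_trunc.mean()
```

In the paper's setting this kind of truncation is combined with a score function transform inside an over-parameterized least-squares loss; the sketch only shows the clipping idea in isolation.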