no code implementations • 8 Mar 2024 • Xun Tang, Holakou Rahmanian, Michael Shavlovsky, Kiran Koshy Thekumparampil, Tesi Xiao, Lexing Ying
We derive the corresponding entropy regularization formulation and introduce a Sinkhorn-type algorithm for such constrained OT problems, supported by theoretical guarantees.
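The paper's constrained variant is not reproduced here, but it builds on the standard Sinkhorn iteration for entropy-regularized optimal transport, which can be sketched as alternating row/column rescalings of a Gibbs kernel (a minimal illustration; `eps` and `n_iter` are illustrative defaults, not the paper's settings):

```python
import numpy as np

def sinkhorn(C, r, c, eps=0.1, n_iter=500):
    """Vanilla Sinkhorn iteration for entropy-regularized optimal transport.

    C : (n, m) cost matrix; r, c : source/target marginals (each sums to 1).
    Returns an approximate transport plan P with row sums ~ r, column sums ~ c.
    """
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones_like(r)
    v = np.ones_like(c)
    for _ in range(n_iter):
        u = r / (K @ v)               # rescale rows toward marginal r
        v = c / (K.T @ u)             # rescale columns toward marginal c
    return u[:, None] * K * v[None, :]
```

Each pass enforces one marginal exactly and the iterates converge linearly to the entropy-regularized optimal plan.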
no code implementations • 20 Jan 2024 • Xun Tang, Michael Shavlovsky, Holakou Rahmanian, Elisa Tardini, Kiran Koshy Thekumparampil, Tesi Xiao, Lexing Ying
We present Sinkhorn-Newton-Sparse (SNS), an extension of the Sinkhorn algorithm designed to achieve possibly super-exponential convergence: it introduces early stopping for the matrix-scaling steps and a second stage featuring a Newton-type subroutine.
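The two-stage structure can be sketched as follows. This is an illustration only: stage 2 here uses SciPy's generic Newton-CG solver on the entropic dual as a stand-in for the paper's sparsified Newton subroutine, and the hyperparameters are assumptions:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def sns_sketch(C, r, c, eps=0.05, warmup_iters=20):
    """Two-stage sketch in the spirit of Sinkhorn-Newton-Sparse:
    (1) early-stopped Sinkhorn matrix-scaling steps (log domain),
    (2) a Newton-type refinement of the entropic dual potentials.
    """
    n, m = C.shape
    alpha, beta = np.zeros(n), np.zeros(m)

    # Stage 1: early-stopped log-domain Sinkhorn updates of the potentials.
    for _ in range(warmup_iters):
        alpha = eps * (np.log(r) - logsumexp((beta[None, :] - C) / eps, axis=1))
        beta = eps * (np.log(c) - logsumexp((alpha[:, None] - C) / eps, axis=0))

    # Stage 2: Newton-type minimization of the negated entropic dual.
    def neg_dual(x):
        a, b = x[:n], x[n:]
        P = np.exp((a[:, None] + b[None, :] - C) / eps)
        f = eps * P.sum() - a @ r - b @ c
        # Gradient = marginal violations of the current plan.
        g = np.concatenate([P.sum(axis=1) - r, P.sum(axis=0) - c])
        return f, g

    res = minimize(neg_dual, np.concatenate([alpha, beta]),
                   jac=True, method="Newton-CG")
    a, b = res.x[:n], res.x[n:]
    return np.exp((a[:, None] + b[None, :] - C) / eps)
```

The warm start from stage 1 places the potentials in the basin where Newton-type steps converge rapidly, which is the motivation for early stopping.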
no code implementations • 3 Sep 2022 • Xun Tang, YoonHaeng Hur, Yuehaw Khoo, Lexing Ying
In this paper, we present a density estimation framework based on tree tensor-network states.
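As an illustrative special case (not the paper's method), a path-shaped tree tensor network is a tensor train, and a discrete density can be represented by contracting nonnegative cores and normalizing. The core shapes and the nonnegativity assumption below are illustrative assumptions:

```python
import numpy as np

def tt_eval(cores, x):
    """Contract a tensor train (a path-shaped tree tensor network) at index x.
    Each core has shape (r_prev, d, r_next); boundary ranks are 1."""
    v = np.ones(1)
    for core, xi in zip(cores, x):
        v = v @ core[:, xi, :]
    return v.item()

def tt_density(cores):
    """Normalize a nonnegative tensor train into a discrete density p(x)."""
    v = np.ones(1)
    for core in cores:
        v = v @ core.sum(axis=1)          # marginalize each physical index
    Z = v.item()                          # normalizing constant
    return lambda x: tt_eval(cores, x) / Z
```

The appeal of such low-rank formats is that evaluation and normalization cost is linear in the number of variables rather than exponential.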
no code implementations • 9 Aug 2022 • Songnian Chen, Shakeeb Khan, Xun Tang
We identify and estimate treatment effects when potential outcomes are weakly separable with a binary endogenous treatment.
no code implementations • 14 Jul 2022 • Philip Marx, Elie Tamer, Xun Tang
Difference-in-differences is a common method for estimating treatment effects, and the parallel trends condition is its main identifying assumption: the trend in mean untreated outcomes is independent of the observed treatment status.
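The canonical two-group, two-period version of this logic can be sketched directly (a minimal illustration of the standard estimator, not the paper's contribution):

```python
import numpy as np

def did_estimate(y, treated, post):
    """Canonical 2x2 difference-in-differences estimator.

    Under parallel trends, the control group's pre/post change identifies the
    counterfactual trend for the treated group, so the difference of the two
    group-level differences recovers the average treatment effect on the
    treated (ATT).
    """
    y, treated, post = map(np.asarray, (y, treated, post))
    cell = lambda t, p: y[(treated == t) & (post == p)].mean()
    return (cell(1, 1) - cell(1, 0)) - (cell(0, 1) - cell(0, 0))
```

For example, if the control group's mean outcome rises from 1 to 2 while the treated group's rises from 3 to 6, the common trend of +1 is differenced out and the estimated effect is 2.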
no code implementations • 25 Oct 2021 • Xun Tang, Lexing Ying, Yuhua Zhu
When the error is in the residual norm, we prove that the shifting factor is always positive and upper bounded by $1+O\left(1/n\right)$, where $n$ is the number of samples used in learning each row of the transition matrix.
Model-based Reinforcement Learning • Reinforcement Learning +1