no code implementations • 6 Nov 2023 • Wei-Fu Tseng, Kai-Chun Chen, Zi-Hong Xiao, Yen-Huan Li
Online learning of quantum states with the logarithmic loss (LL-OLQS) is a quantum generalization of online portfolio selection, a classic problem in online learning that has remained open for over three decades.
2 code implementations • 5 Nov 2023 • Chung-En Tsai, Hao-Chung Cheng, Yen-Huan Li
For the Poisson inverse problem, our algorithm attains an $\varepsilon$-optimal solution in $\smash{\tilde{O}}(d^2/\varepsilon^2)$ time, matching the state of the art, where $d$ denotes the dimension.
1 code implementation • 23 Nov 2022 • Chung-En Tsai, Hao-Chung Cheng, Yen-Huan Li
In maximum-likelihood quantum state tomography, both the sample size and dimension grow exponentially with the number of qubits.
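The paper's contribution concerns taming this exponential scaling; as background, here is a minimal single-qubit sketch of the classic $R\rho R$ fixed-point iteration for maximum-likelihood tomography (a standard textbook method, not the paper's algorithm). For simplicity it uses exact outcome probabilities of a known state in place of empirical frequencies:

```python
import numpy as np

# Six-outcome Pauli POVM on one qubit: eigenstates of X, Y, Z,
# each projector weighted by 1/3 so the elements sum to the identity.
s2 = np.sqrt(2)
kets = [
    np.array([1, 1]) / s2, np.array([1, -1]) / s2,     # X eigenstates
    np.array([1, 1j]) / s2, np.array([1, -1j]) / s2,   # Y eigenstates
    np.array([1, 0]), np.array([0, 1]),                # Z eigenstates
]
povm = [np.outer(k, k.conj()) / 3 for k in kets]

# A full-rank "unknown" state; its exact outcome probabilities stand in
# for the measured frequencies.
true_rho = np.array([[0.85, 0.3], [0.3, 0.15]], dtype=complex)
freqs = [np.trace(E @ true_rho).real for E in povm]

# R*rho*R fixed-point iteration for the maximum-likelihood estimate:
# R(rho) = sum_j (f_j / p_j(rho)) E_j, then rho <- R rho R, renormalized.
rho = np.eye(2, dtype=complex) / 2   # start at the maximally mixed state
for _ in range(500):
    probs = [np.trace(E @ rho).real for E in povm]
    R = sum(f / p * E for f, p, E in zip(freqs, probs, povm))
    rho = R @ rho @ R
    rho /= np.trace(rho).real
```

Because the six-outcome POVM is informationally complete and the frequencies are exact, the maximum-likelihood estimate coincides with `true_rho`, and the iteration recovers it.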
no code implementations • 3 Oct 2022 • Chung-En Tsai, Hao-Chung Cheng, Yen-Huan Li
For online portfolio selection, the regret of online mirror descent with the logarithmic barrier is $\tilde{O}(\sqrt{T d})$.
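A minimal sketch of the algorithm analyzed here — online mirror descent with the log-barrier mirror map $R(x) = -\sum_i \log x_i$ — for online portfolio selection, where the round-$t$ loss is $-\log\langle r_t, x\rangle$ for a vector $r_t$ of price relatives. The step size `eta` and the synthetic data are illustrative choices, not values from the paper:

```python
import numpy as np

def bregman_project_simplex(y, iters=200):
    """Bregman projection of y (entrywise positive) onto the probability
    simplex under the log barrier R(x) = -sum_i log x_i.
    Optimality gives x_i = 1/(1/y_i - lam); solve sum_i x_i = 1 for lam
    by bisection (the sum is increasing in lam on the feasible range)."""
    inv = 1.0 / y
    lo, hi = inv.min() - len(y), inv.min() - 1e-15
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        if np.sum(1.0 / (inv - lam)) < 1.0:
            lo = lam
        else:
            hi = lam
    return 1.0 / (inv - lam)

def omd_log_barrier(price_relatives, eta=0.1):
    """Online mirror descent with the log-barrier mirror map for
    online portfolio selection (loss at round t: -log <r_t, x>)."""
    T, d = price_relatives.shape
    x = np.full(d, 1.0 / d)              # start at the uniform portfolio
    log_wealth = 0.0
    for t in range(T):
        r = price_relatives[t]
        log_wealth += np.log(r @ x)
        g = -r / (r @ x)                 # gradient of -log <r, x>
        y = 1.0 / (1.0 / x + eta * g)    # mirror step: 1/y_i = 1/x_i + eta g_i
        x = bregman_project_simplex(y)   # Bregman projection back to the simplex
    return x, log_wealth

rng = np.random.default_rng(0)
returns = rng.uniform(0.5, 1.5, size=(100, 4))  # synthetic price relatives
x, log_wealth = omd_log_barrier(returns)
```

Note that for `eta < 1` the mirror step stays entrywise positive, since $r_i/\langle r, x\rangle \le 1/x_i$.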
no code implementations • 31 Dec 2020 • Chien-Ming Lin, Yu-Ming Hsu, Yen-Huan Li
Quantum state tomography (QST), the task of estimating an unknown quantum state given measurement outcomes, is essential to building reliable quantum computing devices.
no code implementations • 3 May 2018 • Baran Gözcü, Rabeeh Karimi Mahabadi, Yen-Huan Li, Efe Ilıcak, Tolga Çukur, Jonathan Scarlett, Volkan Cevher
In the area of magnetic resonance imaging (MRI), an extensive range of non-linear reconstruction algorithms have been proposed that can be used with general Fourier subsampling patterns.
no code implementations • 1 Feb 2016 • Yen-Huan Li, Volkan Cevher
The standard approach to compressive sampling recovers an unknown deterministic signal with known structure, and designs the sub-sampling pattern and recovery algorithm based on that structure.
no code implementations • 21 Oct 2015 • Luca Baldassarre, Yen-Huan Li, Jonathan Scarlett, Baran Gözcü, Ilija Bogunovic, Volkan Cevher
In this paper, we instead take a principled learning-based approach in which a \emph{fixed} index set is chosen based on a set of training signals $\mathbf{x}_1,\dotsc,\mathbf{x}_m$.
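To illustrate the idea of committing to a fixed index set learned from training data, here is a deliberately simplified stand-in for the paper's procedure: rank DFT indices by their average spectral energy over the training signals, keep the top $k$, and reconstruct by zero-filling the unobserved coefficients. (The paper's actual learning criterion is more principled; this average-energy heuristic only sketches the pipeline.)

```python
import numpy as np

def learn_fixed_mask(training_signals, k):
    """Choose a fixed set of k DFT indices by average spectral energy
    across the training signals (simplified stand-in for learning)."""
    spectra = np.abs(np.fft.fft(training_signals, axis=1)) ** 2
    avg_energy = spectra.mean(axis=0)
    return np.sort(np.argsort(avg_energy)[-k:])

def zero_filled_recovery(signal, mask):
    """Subsample the DFT at the learned indices and reconstruct by
    zero-filling the unobserved coefficients."""
    coeffs = np.zeros(len(signal), dtype=complex)
    coeffs[mask] = np.fft.fft(signal)[mask]
    return np.fft.ifft(coeffs).real

# Toy training set: two low-frequency signals on 64 samples.
t = np.linspace(0, 1, 64, endpoint=False)
train = np.stack([np.cos(2 * np.pi * f * t) + 1 for f in (1, 2)])
mask = learn_fixed_mask(train, k=5)
rec = zero_filled_recovery(train[0], mask)
```

On this toy set the five indices carrying all the training energy are the DC bin and the $\pm 1, \pm 2$ frequency bins, so the first training signal is recovered exactly.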
no code implementations • 4 Feb 2015 • Quoc Tran-Dinh, Yen-Huan Li, Volkan Cevher
The self-concordant-like property of a smooth convex function is a new analytical structure that generalizes the notion of self-concordance.
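For reference, the standard notion being generalized: with $\varphi(t) = f(x + tu)$ the restriction of $f$ to a line, $f$ is (standard) self-concordant when

```latex
|\varphi'''(t)| \le 2\,\varphi''(t)^{3/2} \quad \text{for all } x, u, t.
```

The self-concordant-like condition, as I understand it from this line of work, instead bounds the third derivative linearly in the second, up to a norm factor in the direction $u$ (the exact constants and norm are specified in the paper):

```latex
|\varphi'''(t)| \le M_f\,\|u\|_2\,\varphi''(t),
```

a condition satisfied by losses such as the logistic loss that are not self-concordant in the standard sense.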