no code implementations • 9 Apr 2025 • Ling Team, Caizhi Tang, Chilin Fu, Chunwei Wu, Jia Guo, Jianwen Wang, Jingyu Hu, Liang Jiang, Meng Li, Peng Jiao, Pingping Liu, Shaomian Zheng, Shiwei Liang, Shuaicheng Li, YaLin Zhang, Yingting Wu, Yongkang Liu, Zhenyu Huang
This technical report presents Ring-Lite-Distill, a lightweight reasoning model derived from Ling-Lite, our open-source Mixture-of-Experts (MoE) large language model (LLM).
no code implementations • 22 Mar 2025 • Liang Jiang, Yuzhou Cheng, Kun Luo, Jianren Fan
Physics-informed neural networks (PINNs) demonstrate promising potential for parameterized engineering turbulence optimization, but face challenges such as high data requirements and low computational accuracy when applied to such problems.
no code implementations • 2 Nov 2024 • Pei Zeng, Debayan Bandyopadhyay, José A. Méndez Méndez, Nolan Bitner, Alexander Kolar, Michael T. Solomon, Ziyu Ye, Filip Rozpędek, Tian Zhong, F. Joseph Heremans, David D. Awschalom, Liang Jiang, Junyu Liu
Quantum resistance is vital for emerging cryptographic systems as quantum technologies continue to advance towards large-scale, fault-tolerant quantum computers.
no code implementations • 1 Nov 2024 • Pei Zeng, Debayan Bandyopadhyay, José A. Méndez Méndez, Nolan Bitner, Alexander Kolar, Michael T. Solomon, F. Joseph Heremans, David D. Awschalom, Liang Jiang, Junyu Liu
The rapid advancement of quantum technologies calls for the design and deployment of quantum-safe cryptographic protocols and communication networks.
no code implementations • 2 Oct 2024 • Bingzhi Zhang, Junyu Liu, Liang Jiang, Quntao Zhuang
We reveal a quantum-data-driven dynamical transition, where the target value and data determine the polynomial or exponential convergence of the training.
no code implementations • 19 Aug 2024 • Kaining Zhang, Junyu Liu, Liu Liu, Liang Jiang, Min-Hsiu Hsieh, DaCheng Tao
Provided that the encoding of quantum data is sufficiently random, we find that the training efficiency and generalization capabilities in quantum machine learning are exponentially suppressed as the number of qubits increases, a phenomenon we call "the curse of random quantum data".
no code implementations • 29 Nov 2023 • Bingzhi Zhang, Junyu Liu, Xiao-Chuan Wu, Liang Jiang, Quntao Zhuang
In this work, we show that the late-time training dynamics of quantum neural networks with a quadratic loss function can be described by the generalized Lotka-Volterra equations, which lead to a transcritical bifurcation transition in the dynamics.
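As a purely illustrative numerical sketch (a one-dimensional toy model, not the paper's actual equations for quantum neural networks), the simplest Lotka-Volterra-type ODE dx/dt = x(r − x) already exhibits a transcritical bifurcation at r = 0, where the fixed points x = 0 and x = r exchange stability:

```python
# Toy sketch: dx/dt = x * (r - x) undergoes a transcritical bifurcation
# at r = 0 (the fixed points x = 0 and x = r exchange stability).
# This is a hypothetical 1-D illustration, not the paper's derivation.

def integrate(r, x0, dt=1e-3, steps=200_000):
    """Forward-Euler integration of dx/dt = x * (r - x)."""
    x = x0
    for _ in range(steps):
        x += dt * x * (r - x)
    return x

# For r > 0 the trajectory converges to x = r; for r < 0 it decays to 0.
print(round(integrate(r=1.0, x0=0.1), 3))   # 1.0
print(round(integrate(r=-1.0, x0=0.1), 3))  # 0.0
```

The two regimes loosely mirror the convergent versus stagnant phases that a bifurcation in training dynamics would separate.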
no code implementations • 23 Sep 2023 • Senrui Chen, Changhun Oh, Sisi Zhou, Hsin-Yuan Huang, Liang Jiang
In this work, we consider learning algorithms without entanglement to be those that only utilize states, measurements, and operations that are separable between the main system of interest and an ancillary system.
no code implementations • 12 Sep 2023 • Junyu Liu, Liang Jiang
A quantum version of data centers might be significant in the quantum era.
no code implementations • 25 Jul 2023 • Yunfei Wang, Yuri Alexeev, Liang Jiang, Frederic T. Chong, Junyu Liu
Quantum random access memory (QRAM), a fundamental component of many essential quantum algorithms for tasks such as linear algebra, data search, and machine learning, is often proposed to offer $\mathcal{O}(\log N)$ circuit depth for $\mathcal{O}(N)$ data size, given $N$ qubits.
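The depth claim can be pictured with a purely classical analogue (our sketch, not the quantum construction): routing one query through a binary tree over $N = 2^k$ memory cells touches one node per level, so the routing depth is $\log_2 N$ even though the memory holds $N$ cells.

```python
import math

# Classical binary-tree routing analogue (assumption-laden sketch):
# each address bit selects a child, so reaching any of N leaves takes
# log2(N) routing steps. This mimics the depth scaling only, not QRAM.

def route(address_bits, data):
    """Follow address bits root-to-leaf; returns (value, depth)."""
    lo, hi = 0, len(data)
    depth = 0
    for bit in address_bits:
        mid = (lo + hi) // 2
        lo, hi = (mid, hi) if bit else (lo, mid)
        depth += 1
    return data[lo], depth

data = list(range(8))                      # N = 8 memory cells
value, depth = route([1, 0, 1], data)      # address 0b101 = 5
print(value, depth, math.log2(len(data)))  # 5 3 3.0
```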
no code implementations • 17 Apr 2023 • Liang Jiang, Liyao Li, Ke Miao, Yichong Zhang
On the other hand, RAs can degrade estimation efficiency due to their estimation errors, which are not asymptotically negligible when the number of regressors is of the same order as the sample size.
no code implementations • 6 Mar 2023 • Junyu Liu, Minzhao Liu, Jin-Peng Liu, Ziyu Ye, Yunfei Wang, Yuri Alexeev, Jens Eisert, Liang Jiang
Large machine learning models are revolutionary artificial intelligence technologies whose bottlenecks include the huge computational expense, power, and time consumed in both pre-training and fine-tuning.
no code implementations • 9 Feb 2023 • Yuehao Bai, Liang Jiang, Joseph P. Romano, Azeem M. Shaikh, Yichong Zhang
This paper studies inference on the average treatment effect in experiments in which treatment status is determined according to "matched pairs" and it is additionally desired to adjust for observed, baseline covariates to gain further precision.
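A minimal simulation sketch of the idea (our hypothetical data-generating process and weights, not the paper's estimator): even after matching, an observed baseline covariate w may still differ within pairs, and regressing the within-pair outcome differences on the within-pair differences in w tightens the ATE estimate without moving its center.

```python
import random
import statistics

# Hypothetical matched-pairs simulation: pairs share a matching covariate,
# but a further observed baseline covariate w differs within pairs.
# Adjusting the pair differences for the w-differences reduces variance.

random.seed(1)
TRUE_ATE = 2.0

pairs = []
for _ in range(4000):
    dw = random.gauss(0, 1)                          # within-pair diff in w
    dy = TRUE_ATE + 1.5 * dw + random.gauss(0, 0.2)  # within-pair outcome diff
    pairs.append((dy, dw))

dys = [p[0] for p in pairs]
dws = [p[1] for p in pairs]
m_dy, m_dw = statistics.mean(dys), statistics.mean(dws)
beta = (sum((a - m_dy) * (b - m_dw) for a, b in pairs)
        / sum((b - m_dw) ** 2 for b in dws))         # OLS slope on dw
adjusted = [dy - beta * dw for dy, dw in pairs]

print(round(statistics.mean(adjusted), 1))                  # 2.0
print(statistics.stdev(adjusted) < statistics.stdev(dys))   # True
```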
no code implementations • 13 Oct 2022 • Junyu Liu, Frederik Wilde, Antonio Anna Mele, Xin Jin, Liang Jiang, Jens Eisert
Saddle points constitute a crucial challenge for first-order gradient descent algorithms.
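A standard two-dimensional example (our illustration, not taken from the paper): for f(x, y) = x² − y², the origin is a saddle, and plain gradient descent initialized on the stable manifold y = 0 converges to the saddle and never escapes.

```python
# Toy saddle-point illustration: f(x, y) = x^2 - y^2 has a saddle at the
# origin. Gradient descent started exactly on the stable manifold (y = 0)
# converges to the saddle, the failure mode first-order methods face.

def gradient_descent(x, y, lr=0.1, steps=500):
    for _ in range(steps):
        x -= lr * 2 * x      # df/dx = 2x
        y -= lr * (-2 * y)   # df/dy = -2y
    return x, y

x, y = gradient_descent(1.0, 0.0)
print(abs(x) < 1e-6, y == 0.0)  # True True: stuck at the saddle (0, 0)
```

Any perturbation in y would grow geometrically, which is why escape analyses focus on noise or curvature information.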
no code implementations • 23 Sep 2022 • Liang Jiang, Zhenyu Huang, Jia Liu, Zujie Wen, Xi Peng
Such a process inevitably introduces mismatched pairs (i.e., noisy correspondence) due to i) the unavailability of QA pairs in target documents, and ii) the domain shift when applying the QA construction model to the target domain.
no code implementations • 28 Jul 2022 • Junyu Liu, Connor T. Hann, Liang Jiang
In this paper, we propose the Quantum Data Center (QDC), an architecture combining Quantum Random Access Memory (QRAM) and quantum networks.
no code implementations • 19 Jun 2022 • Junyu Liu, Zexi Lin, Liang Jiang
We discuss the difference between laziness and the \emph{barren plateau} phenomenon in quantum machine learning, a term coined by quantum physicists in \cite{mcclean2018barren} to describe the flatness of the loss-function landscape during gradient descent.
no code implementations • 19 May 2022 • Minzhao Liu, Junyu Liu, Yuri Alexeev, Liang Jiang
Random quantum circuits have been utilized in the contexts of quantum supremacy demonstrations, variational quantum algorithms for chemistry and machine learning, and black hole information.
no code implementations • 30 Mar 2022 • Junyu Liu, Khadijeh Najafi, Kunal Sharma, Francesco Tacchino, Liang Jiang, Antonio Mezzacapo
We define wide quantum neural networks as parameterized quantum circuits in the limit of a large number of qubits and variational parameters.
no code implementations • 8 Mar 2022 • Ruijie Yan, Shuang Peng, Haitao Mi, Liang Jiang, Shihui Yang, Yuchi Zhang, Jiajun Li, Liangrui Peng, Yongliang Wang, Zujie Wen
Building robust and general dialogue models for spoken conversations is challenging due to the gap in distributions of spoken and written data.
no code implementations • 31 Jan 2022 • Liang Jiang, Oliver B. Linton, Haihan Tang, Yichong Zhang
We investigate how to improve efficiency using regression adjustments with covariates in covariate-adaptive randomizations (CARs) with imperfect subject compliance.
no code implementations • 8 Nov 2021 • Junyu Liu, Francesco Tacchino, Jennifer R. Glick, Liang Jiang, Antonio Mezzacapo
We analytically solve the dynamics in the frozen limit, or lazy training regime, where variational angles change slowly and a linear perturbation suffices.
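A classical toy sketch of the lazy regime (our illustration, not the paper's quantum model): when parameters barely move, the model output is well approximated by its first-order Taylor expansion around the initialization, so training is effectively linear in the parameters.

```python
# Toy "lazy training" sketch: a nonlinear model is well approximated by
# its linearization around theta0 only when the parameter moves little.
# (Hypothetical scalar model chosen for illustration.)

def model(theta, x):
    return (theta ** 3) * x          # deliberately nonlinear in theta

def linearized(theta0, theta, x):
    # f(theta) ~ f(theta0) + f'(theta0) * (theta - theta0)
    return (theta0 ** 3) * x + 3 * (theta0 ** 2) * x * (theta - theta0)

theta0, x = 1.0, 2.0
small, large = theta0 + 1e-3, theta0 + 1.0
print(abs(model(small, x) - linearized(theta0, small, x)) < 1e-5)   # True
print(abs(model(large, x) - linearized(theta0, large, x)) < 1e-5)   # False
```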
no code implementations • 31 May 2021 • Liang Jiang, Peter C. B. Phillips, Yubo Tao, Yichong Zhang
We establish the consistency and limit distribution of the regression-adjusted QTE estimator and prove that the use of multiplier bootstrap inference is non-conservative under CARs.
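For intuition, here is a minimal sketch of a multiplier bootstrap for a sample mean, the simplest instance of the inference scheme named above (the paper applies it to regression-adjusted QTE estimators under CARs, which we do not reproduce):

```python
import random
import statistics

# Multiplier bootstrap sketch: perturb the centered observations with
# i.i.d. N(0, 1) multipliers to approximate the sampling distribution
# of the mean. Illustrative only; not the paper's QTE procedure.

random.seed(2)
data = [random.gauss(5.0, 1.0) for _ in range(400)]
mean = statistics.mean(data)

def multiplier_draw():
    xi = [random.gauss(0, 1) for _ in data]
    return mean + sum(w * (x - mean) for w, x in zip(xi, data)) / len(data)

draws = sorted(multiplier_draw() for _ in range(2000))
lo, hi = draws[49], draws[1949]       # pointwise 95% interval for the mean
print(round(hi - lo, 1))              # ~2 * 1.96 * sd / sqrt(n) = 0.2
```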
no code implementations • 19 Feb 2021 • Changhun Oh, Youngrong Lim, Bill Fefferman, Liang Jiang
Sampling from probability distributions of quantum circuits is a fundamentally and practically important task which can be used to demonstrate quantum supremacy using noisy intermediate-scale quantum devices.
no code implementations • 9 Oct 2020 • Jacob C. Curtis, Connor T. Hann, Salvatore S. Elder, Christopher S. Wang, Luigi Frunzio, Liang Jiang, Robert J. Schoelkopf
This detector functions by measuring a series of generalized parity operators which make up the bits in the binary decomposition of the photon number.
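The classical logic behind that readout can be sketched as follows (hardware and measurement physics omitted): the k-th generalized parity asks whether the k-th binary digit of the photon number n is set, so ceil(log2(n_max + 1)) parity bits suffice to reconstruct n exactly.

```python
import math

# Sketch of binary-decomposition readout: the k-th generalized parity
# is the parity of floor(n / 2^k), i.e. the k-th bit of n, so log2-many
# parity measurements reconstruct the photon number.

def generalized_parity(n, k):
    """Parity of floor(n / 2^k): the k-th bit of n."""
    return (n >> k) & 1

def read_out(n, n_max):
    bits = math.ceil(math.log2(n_max + 1))
    measured = [generalized_parity(n, k) for k in range(bits)]
    return sum(b << k for k, b in enumerate(measured))

print([read_out(n, n_max=15) for n in (0, 5, 13, 15)])  # [0, 5, 13, 15]
```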
no code implementations • 25 May 2020 • Liang Jiang, Xiaobin Liu, Peter C. B. Phillips, Yichong Zhang
This paper examines methods of inference concerning quantile treatment effects (QTEs) in randomized experiments with matched-pairs designs (MPDs).
no code implementations • 2 Mar 2020 • Liang Jiang, Zujie Wen, Zhongping Liang, Yafang Wang, Gerard de Melo, Zhe Li, Liangzhuang Ma, Jiaxing Zhang, Xiaolong Li, Yuan Qi
The long-term teacher draws on snapshots from several epochs ago in order to provide steadfast guidance and to guarantee teacher–student differences, while the short-term one yields more up-to-date cues with the goal of enabling higher-quality updates.
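A hypothetical sketch of that two-teacher scheme (scalar "parameters" stand in for a network; the snapshot lag and blend weights are our assumptions, not the paper's settings):

```python
from collections import deque

# Two-teacher self-distillation sketch: the long-term teacher is a
# parameter snapshot from several epochs back, the short-term teacher is
# the most recent snapshot, and the student is pulled toward a blend of
# both while also taking its own training step.

LAG = 5                                  # epochs of long-term teacher lag
history = deque([0.0], maxlen=LAG)       # snapshot buffer

def distill_target(long_term, short_term, w_long=0.5):
    return w_long * long_term + (1 - w_long) * short_term

student = 0.0
for epoch in range(20):
    long_term, short_term = history[0], history[-1]
    target = distill_target(long_term, short_term)
    # one "training" step toward the task optimum (1.0) plus distillation pull
    student += 0.5 * (1.0 - student) + 0.1 * (target - student)
    history.append(student)

print(0.9 < student < 1.0)  # True: converges while both teachers lag behind
```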
no code implementations • 21 Dec 2018 • Chuan Qin, HengShu Zhu, Tong Xu, Chen Zhu, Liang Jiang, Enhong Chen, Hui Xiong
The widespread use of online recruitment services has led to information explosion in the job market.
no code implementations • 8 Mar 2018 • Linli Xu, Liang Jiang, Chuan Qin, Zhe Wang, Dongfang Du
Generating poetry from images is much more challenging than generating poetry from text, since images contain rich visual information that cannot be described completely with a few keywords, and a good poem should convey the image accurately.