26 Feb 2024 • Naixu Guo, Zhan Yu, Matthew Choi, Aman Agrawal, Kouhei Nakaji, Alán Aspuru-Guzik, Patrick Rebentrost
Generative machine learning methods such as large language models are revolutionizing the creation of text and images.
11 Oct 2023 • Zhan Yu, Qiuhao Chen, Yuling Jiao, Yinan Li, Xiliang Lu, Xin Wang, Jerry Zhijian Yang
To achieve this, we utilize techniques from quantum signal processing and linear combinations of unitaries to construct PQCs that implement multivariate polynomials.
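The core idea behind a linear combination of unitaries (LCU) can be illustrated classically: a polynomial p(U) = Σ_k c_k U^k is a weighted sum of powers of U, each power itself unitary. The sketch below is a minimal numerical illustration of that identity, not the paper's quantum-circuit construction; the function name and the toy polynomial are assumptions for illustration.

```python
import numpy as np

def lcu_polynomial(U, coeffs):
    """Evaluate p(U) = sum_k coeffs[k] * U^k as a weighted sum of unitary powers."""
    dim = U.shape[0]
    result = np.zeros((dim, dim), dtype=complex)
    power = np.eye(dim, dtype=complex)  # U^0
    for c in coeffs:
        result += c * power
        power = power @ U
    return result

# Example: a single-qubit rotation as the unitary
theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

coeffs = [0.5, 0.3, 0.2]  # p(x) = 0.5 + 0.3 x + 0.2 x^2
P = lcu_polynomial(U, coeffs)

# Sanity check: eigenvalues of p(U) are p(lambda) for eigenvalues lambda of U
lam = np.linalg.eigvals(U)
p_lam = 0.5 + 0.3 * lam + 0.2 * lam**2
assert np.allclose(np.sort_complex(np.linalg.eigvals(P)), np.sort_complex(p_lam))
```

On a quantum device, each term's coefficient is encoded in ancilla amplitudes rather than summed directly, but the resulting block-encoded operator is the same polynomial of U.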
7 Jul 2023 • Zhongjie Shi, Zhan Yu, Ding-Xuan Zhou
In contrast to classical regression methods, the input variables of distribution regression are probability measures.
12 May 2023 • Zhan Yu, Jun Fan, Zhongjie Shi, Ding-Xuan Zhou
To address the big data challenges that have recently arisen in functional data analysis, we propose a novel distributed gradient descent functional learning (DGDFL) algorithm that tackles functional data across numerous local machines (processors) in the framework of reproducing kernel Hilbert spaces.
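The divide-and-conquer idea behind distributed learning can be sketched in a few lines: each "machine" fits an estimator on its own data chunk, and the final prediction averages the local estimators. The toy below uses kernel ridge regression with a Gaussian kernel as a stand-in for the functional-data setting; all names, kernel choices, and parameters are illustrative assumptions, not the DGDFL algorithm itself.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=0.5):
    """Gaussian (RBF) kernel matrix between two sets of points."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def local_fit(X, y, lam=0.01):
    """One machine solves (K + lam*n*I) alpha = y on its own chunk."""
    n = len(X)
    K = gaussian_kernel(X, X)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return X, alpha

def distributed_predict(chunks, X_test):
    """Average the local estimators' predictions at the test points."""
    preds = [gaussian_kernel(X_test, X) @ alpha for X, alpha in chunks]
    return np.mean(preds, axis=0)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=300)

# Split the data across 3 "local machines", fit locally, then combine
chunks = [local_fit(X[i::3], y[i::3]) for i in range(3)]
X_test = np.linspace(-1, 1, 50)[:, None]
y_hat = distributed_predict(chunks, X_test)
```

The communication cost is a single round of averaging, which is what makes such schemes attractive when the full dataset cannot sit on one processor.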
6 May 2023 • Yifei Chen, Zhan Yu, Chenghong Zhu, Xin Wang
The rapid advancement of quantum computing has led to an extensive demand for effective techniques to extract classical information from quantum systems, particularly in fields like quantum machine learning and quantum chemistry.
16 May 2022 • Zhan Yu, Hongshun Yao, Mujin Li, Xin Wang
Quantum neural networks (QNNs) have emerged as a leading strategy to establish applications in machine learning, chemistry, and optimization.
1 Mar 2022 • Zhan Yu, Xuanqiang Zhao, Benchi Zhao, Xin Wang
In this work, we determine the minimum size of a quantum dataset sufficient for exactly learning a unitary transformation, which reveals the power and limitations of quantum data.
21 Apr 2021 • Zhan Yu, Daniel W. C. Ho, Ding-Xuan Zhou
Regularization schemes for regression have been widely studied in learning theory and inverse problems.
16 Jun 2020 • Zhan Yu, Daniel W. C. Ho
The main contribution of the paper is a novel multi-penalty regularization algorithm that captures more features of distribution regression, together with optimal learning rates for the algorithm.
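Multi-penalty regularization combines several regularizers in one least-squares objective, e.g. minimizing ||Ax − y||² + λ₁||x||² + λ₂||Lx||² for a ridge penalty plus a smoothness penalty, which has the closed form x = (AᵀA + λ₁I + λ₂LᵀL)⁻¹Aᵀy. The sketch below illustrates that finite-dimensional form; the operators, weights, and toy data are assumptions for illustration, not the paper's distribution-regression algorithm.

```python
import numpy as np

def multi_penalty_solve(A, y, L, lam1, lam2):
    """argmin_x ||A x - y||^2 + lam1 ||x||^2 + lam2 ||L x||^2 (closed form)."""
    n = A.shape[1]
    M = A.T @ A + lam1 * np.eye(n) + lam2 * (L.T @ L)
    return np.linalg.solve(M, A.T @ y)

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 20))
x_true = np.sin(np.linspace(0, np.pi, 20))       # a smooth ground-truth signal
y = A @ x_true + 0.1 * rng.normal(size=40)

# First-difference operator: penalizes jumps between neighboring coefficients
L = np.diff(np.eye(20), axis=0)
x_hat = multi_penalty_solve(A, y, L, lam1=0.1, lam2=1.0)
```

Tuning λ₁ and λ₂ separately is what lets the estimator trade off magnitude shrinkage against smoothness, rather than forcing one penalty to do both jobs.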
CVPR 2014 • Can Chen, Haiting Lin, Zhan Yu, Sing Bing Kang, Jingyi Yu
Our bilateral consistency metric is used to indicate the probability of occlusions by analyzing the SCams.