Search Results for author: Xiliang Lu

Found 18 papers, 1 paper with code

Take Care of Your Prompt Bias! Investigating and Mitigating Prompt Bias in Factual Knowledge Extraction

1 code implementation • 15 Mar 2024 • Ziyang Xu, Keqin Peng, Liang Ding, DaCheng Tao, Xiliang Lu

Experiments across various prompts, PLMs, and benchmarks show that our approach can not only correct the overfitted performance caused by prompt bias, but also significantly improve the prompt retrieval capability (up to 10% absolute performance gain).

Neural Network Approximation for Pessimistic Offline Reinforcement Learning

no code implementations • 19 Dec 2023 • Di Wu, Yuling Jiao, Li Shen, Haizhao Yang, Xiliang Lu

In this paper, we establish a non-asymptotic estimation error bound for pessimistic offline RL using general neural network approximation with $\mathcal{C}$-mixing data, expressed in terms of the network structure, the dimension of the datasets, and the concentrability of the data coverage, under mild assumptions.

Offline RL reinforcement-learning +1

Provable Advantage of Parameterized Quantum Circuit in Function Approximation

no code implementations • 11 Oct 2023 • Zhan Yu, Qiuhao Chen, Yuling Jiao, Yinan Li, Xiliang Lu, Xin Wang, Jerry Zhijian Yang

To achieve this, we utilize techniques from quantum signal processing and linear combinations of unitaries to construct PQCs that implement multivariate polynomials.

Quantum Machine Learning

Current density impedance imaging with PINNs

no code implementations • 24 Jun 2023 • Chenguang Duan, Yuling Jiao, Xiliang Lu, Jerry Zhijian Yang

In this paper, we introduce CDII-PINNs, a computationally efficient method for solving CDII using PINNs in the framework of Tikhonov regularization.
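
No code is linked for this entry. As a hedged illustration only, the sketch below shows what a PINN-style loss for current density impedance imaging (CDII) with a Tikhonov term might look like: one network for the potential u, one for the conductivity σ, a residual term for ∇·(σ∇u)=0, a data-fit term against the measured current density magnitude a = |σ∇u|, and a simple Tikhonov penalty. The network sizes, weights, and omission of boundary terms are assumptions, not the authors' CDII-PINNs implementation.

```python
# Hypothetical sketch of a PINN-style loss for CDII: recover sigma from interior
# data a = |sigma * grad u| with div(sigma * grad u) = 0 and a Tikhonov penalty.
# Architectures and weights are illustrative only.
import torch
import torch.nn as nn

def mlp():
    return nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                         nn.Linear(64, 64), nn.Tanh(),
                         nn.Linear(64, 1))

u_net = mlp()          # electric potential u(x, y)
log_sigma_net = mlp()  # log-conductivity, so that sigma = exp(.) > 0

def grad(f, x):
    return torch.autograd.grad(f, x, torch.ones_like(f), create_graph=True)[0]

def cdii_loss(x, a_data, alpha=1e-4):
    """x: (N, 2) interior points; a_data: (N, 1) measured current density magnitude."""
    x = x.requires_grad_(True)
    u = u_net(x)
    sigma = torch.exp(log_sigma_net(x))
    grad_u = grad(u, x)            # (N, 2)
    flux = sigma * grad_u          # J = sigma * grad u
    # Divergence of the flux via one extra autograd pass per component.
    div_flux = grad(flux[:, :1], x)[:, :1] + grad(flux[:, 1:2], x)[:, 1:2]
    pde_res = (div_flux ** 2).mean()                              # div(sigma grad u) = 0
    data_fit = ((flux.norm(dim=1, keepdim=True) - a_data) ** 2).mean()
    tikhonov = alpha * (sigma ** 2).mean()                        # Tikhonov-type penalty
    # Boundary/voltage terms are omitted here for brevity.
    return pde_res + data_fit + tikhonov
```

In a full setup one would train both networks jointly (e.g. with Adam or L-BFGS) on this coupled loss; the joint treatment of σ and u in one regularized objective is the idea suggested by the abstract.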

GAS: A Gaussian Mixture Distribution-Based Adaptive Sampling Method for PINNs

no code implementations • 28 Mar 2023 • Yuling Jiao, Di Li, Xiliang Lu, Jerry Zhijian Yang, Cheng Yuan

With the recent study of deep learning in scientific computation, the Physics-Informed Neural Networks (PINNs) method has drawn widespread attention for solving Partial Differential Equations (PDEs).

Incremental Learning
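
No code is linked for this entry. Below is a small, hypothetical sketch of what a Gaussian-mixture-based adaptive sampling loop for PINN collocation points could look like: draw a residual-weighted subsample of the current points, fit a Gaussian mixture to it, and sample new collocation points from the mixture (mixed with some uniform points to preserve coverage). The component count, mixing fraction, and the `residual_fn` interface are assumptions, not the GAS algorithm as published.

```python
# Hypothetical residual-driven resampling of PINN collocation points with a
# Gaussian mixture model; all parameters here are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

def resample_collocation(points, residual_fn, n_new, n_components=8,
                         uniform_frac=0.2, domain=(0.0, 1.0), rng=None):
    """points: (N, d) current collocation points; residual_fn: points -> (N,) |PDE residual|."""
    rng = np.random.default_rng(rng)
    res = np.abs(residual_fn(points))
    weights = res / res.sum()
    # Residual-weighted bootstrap, then fit a GMM to the resampled points.
    idx = rng.choice(len(points), size=len(points), p=weights)
    gmm = GaussianMixture(n_components=n_components).fit(points[idx])
    n_gmm = int((1.0 - uniform_frac) * n_new)
    new_pts, _ = gmm.sample(n_gmm)
    # Keep some uniform samples so low-residual regions are never starved.
    lo, hi = domain
    uniform_pts = rng.uniform(lo, hi, size=(n_new - n_gmm, points.shape[1]))
    return np.vstack([np.clip(new_pts, lo, hi), uniform_pts])
```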

Efficient and practical quantum compiler towards multi-qubit systems with deep reinforcement learning

no code implementations • 14 Apr 2022 • Qiuhao Chen, Yuxuan Du, Qi Zhao, Yuling Jiao, Xiliang Lu, Xingyao Wu

We systematically evaluate the performance of our proposal in compiling quantum operators with both inverse-closed and inverse-free universal basis sets.

Q-Learning reinforcement-learning +1

Imaging Conductivity from Current Density Magnitude using Neural Networks

no code implementations • 5 Apr 2022 • Bangti Jin, Xiyao Li, Xiliang Lu

Conductivity imaging represents one of the most important tasks in medical imaging.

A Data-Driven Line Search Rule for Support Recovery in High-dimensional Data Analysis

no code implementations • 21 Nov 2021 • Peili Li, Yuling Jiao, Xiliang Lu, Lican Kang

In this work, we consider an algorithm for (nonlinear) regression problems with an $\ell_0$ penalty.

regression

Coordinate Descent for MCP/SCAD Penalized Least Squares Converges Linearly

no code implementations • 18 Sep 2021 • Yuling Jiao, Dingwei Li, Min Liu, Xiliang Lu

Recovering sparse signals from observed data is an important topic in signal/imaging processing, statistics and machine learning.
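
As a hedged illustration of the setting (not the authors' analysis or code), here is a standard coordinate descent loop for MCP-penalized least squares with standardized columns; the coordinate update uses the usual MCP thresholding rule, and the linear convergence studied in the paper is not reproduced here.

```python
# Illustrative coordinate descent for MCP-penalized least squares with columns
# standardized so that x_j^T x_j / n = 1; lam and gamma (> 1) are user-chosen.
import numpy as np

def mcp_threshold(z, lam, gamma):
    """Univariate MCP solution for a standardized column."""
    if abs(z) <= gamma * lam:
        return np.sign(z) * max(abs(z) - lam, 0.0) / (1.0 - 1.0 / gamma)
    return z  # beyond gamma*lam the MCP penalty is flat, so no shrinkage

def cd_mcp(X, y, lam, gamma=3.0, n_iter=100):
    n, p = X.shape
    beta = np.zeros(p)
    r = y - X @ beta                           # residual kept up to date
    for _ in range(n_iter):
        for j in range(p):
            zj = X[:, j] @ r / n + beta[j]     # partial residual correlation
            new_bj = mcp_threshold(zj, lam, gamma)
            r -= X[:, j] * (new_bj - beta[j])  # incremental residual update
            beta[j] = new_bj
    return beta
```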

Generative Learning With Euler Particle Transport

no code implementations • 11 Dec 2020 • Yuan Gao, Jian Huang, Yuling Jiao, Jin Liu, Xiliang Lu, Zhijian Yang

The key task in training is the estimation of the density ratios or differences that determine the residual maps.
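
The abstract highlights density-ratio estimation as the key training step. As a hedged aside, one standard way to estimate a density ratio p(x)/q(x) is via a probabilistic classifier trained to distinguish samples from p and q, as in the sketch below; this is a generic technique, not necessarily the estimator used in the Euler Particle Transport paper.

```python
# Generic density-ratio estimation via logistic regression: label samples from p
# as 1 and samples from q as 0; with equally sized samples, p(x)/q(x) ~= exp(logit(x)).
import numpy as np
from sklearn.linear_model import LogisticRegression

def density_ratio_estimator(samples_p, samples_q):
    X = np.vstack([samples_p, samples_q])
    y = np.concatenate([np.ones(len(samples_p)), np.zeros(len(samples_q))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    def ratio(x):
        # decision_function returns the logit log(P(y=1|x)/P(y=0|x)),
        # which equals log(p(x)/q(x)) when the two classes are balanced.
        return np.exp(clf.decision_function(x))
    return ratio
```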

On Newton Screening

no code implementations • 27 Jan 2020 • Jian Huang, Yuling Jiao, Lican Kang, Jin Liu, Yanyan Liu, Xiliang Lu, Yuanyuan Yang

Based on this KKT system, a built-in working set of relatively small size is first determined from the sum of the primal and dual variables generated at the previous iteration; the primal variable is then updated by solving a least-squares problem on the working set, and the dual variable is updated via a closed-form expression.

Sparse Learning
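
No code is linked here. The following is a small illustrative sketch of the iteration described in the abstract, written for a LASSO-type problem: form a working set from the sum of the previous primal and dual variables, solve a least-squares problem on that set, and update the dual in closed form. The threshold, stopping rule, and restriction to LASSO are assumptions, not the published Newton screening algorithm.

```python
# Illustrative working-set iteration in the spirit of the abstract, for
# (1/2n)||y - X beta||^2 + lam * ||beta||_1; details are assumptions.
import numpy as np

def newton_screening_sketch(X, y, lam, n_iter=20):
    n, p = X.shape
    beta = np.zeros(p)
    d = X.T @ y / n                            # dual-like variable X^T r / n
    for _ in range(n_iter):
        # Working set from the sum of primal and dual variables of the last iterate.
        A = np.flatnonzero(np.abs(beta + d) > lam)
        if A.size == 0:
            break
        # Primal update: penalized least squares restricted to the working set.
        XA = X[:, A]
        rhs = XA.T @ y / n - lam * np.sign((beta + d)[A])
        sol, *_ = np.linalg.lstsq(XA.T @ XA / n, rhs, rcond=None)
        beta = np.zeros(p)
        beta[A] = sol
        # Dual update in closed form from the residual.
        d = X.T @ (y - X @ beta) / n
    return beta
```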

A Support Detection and Root Finding Approach for Learning High-dimensional Generalized Linear Models

no code implementations • 16 Jan 2020 • Jian Huang, Yuling Jiao, Lican Kang, Jin Liu, Yanyan Liu, Xiliang Lu

Feature selection is important for modeling high-dimensional data, where the number of variables can be much larger than the sample size.

feature selection

A stochastic alternating minimizing method for sparse phase retrieval

no code implementations • 14 Jun 2019 • Jian-Feng Cai, Yuling Jiao, Xiliang Lu, Juntao You

Sparse phase retrieval plays an important role in many fields of applied science and thus attracts lots of attention.

Retrieval
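
As a hedged illustration of the problem setup (not the stochastic algorithm proposed in the paper), a simple deterministic alternating-minimization baseline for real sparse phase retrieval from y = |Ax| alternates between fixing the measurement signs and solving a hard-thresholded least-squares problem:

```python
# Toy alternating minimization for real sparse phase retrieval from y = |A x|:
# alternate a sign-estimation step with a hard-thresholded least-squares update.
# This is a simple baseline, not the stochastic method proposed in the paper.
import numpy as np

def hard_threshold(x, s):
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    out[keep] = x[keep]
    return out

def sparse_phase_retrieval_am(A, y, s, n_iter=50, seed=0):
    m, n = A.shape
    x = hard_threshold(np.random.default_rng(seed).standard_normal(n), s)
    for _ in range(n_iter):
        signs = np.sign(A @ x)                                  # sign (phase) step
        signs[signs == 0] = 1.0
        z, *_ = np.linalg.lstsq(A, signs * y, rcond=None)       # signal step
        x = hard_threshold(z, s)                                # keep s largest entries
    return x
```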

SNAP: A semismooth Newton algorithm for pathwise optimization with optimal local convergence rate and oracle properties

no code implementations • 9 Oct 2018 • Jian Huang, Yuling Jiao, Xiliang Lu, Yueyong Shi, Qinglong Yang

We propose a semismooth Newton algorithm for pathwise optimization (SNAP) for the LASSO and Enet in sparse, high-dimensional linear regression.

regression

A Primal Dual Active Set with Continuation Algorithm for the $\ell^0$-Regularized Optimization Problem

no code implementations • 3 Mar 2014 • Yuling Jiao, Bangti Jin, Xiliang Lu

We develop a primal dual active set with continuation algorithm for solving the $\ell^0$-regularized least-squares problem that frequently arises in compressed sensing.
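
No code accompanies this listing. The sketch below shows one plausible form of a primal-dual active set step with continuation for the $\ell^0$-regularized least-squares problem: the active set is chosen by comparing $|x_i + d_i|$ against a $\sqrt{2\lambda}$ threshold (assuming unit-norm columns), and $\lambda$ is decreased along a warm-started path. The threshold and the continuation schedule are illustrative assumptions, not the published algorithm.

```python
# Illustrative primal-dual active set with continuation (PDASC-style) sketch for
# min_x (1/2)||y - Psi x||^2 + lam * ||x||_0, assuming unit-norm columns of Psi.
import numpy as np

def pdas_step(Psi, y, x, d, lam, inner_iter=5):
    p = Psi.shape[1]
    for _ in range(inner_iter):
        # Active set from primal + dual against the l0 threshold sqrt(2*lam).
        A = np.flatnonzero(np.abs(x + d) > np.sqrt(2.0 * lam))
        x = np.zeros(p)
        if A.size:
            sol, *_ = np.linalg.lstsq(Psi[:, A], y, rcond=None)
            x[A] = sol
        d = Psi.T @ (y - Psi @ x)          # closed-form dual update
        d[A] = 0.0
    return x, d

def pdasc(Psi, y, lam_min, n_grid=50):
    p = Psi.shape[1]
    lam_max = np.max(np.abs(Psi.T @ y)) ** 2 / 2.0   # largest lambda: empty active set
    x, d = np.zeros(p), Psi.T @ y
    for lam in np.geomspace(lam_max, lam_min, n_grid):   # warm-started continuation path
        x, d = pdas_step(Psi, y, x, d, lam)
    return x
```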

A Unified Primal Dual Active Set Algorithm for Nonconvex Sparse Recovery

no code implementations • 4 Oct 2013 • Jian Huang, Yuling Jiao, Bangti Jin, Jin Liu, Xiliang Lu, Can Yang

In this paper, we consider the problem of recovering a sparse signal based on penalized least squares formulations.
