Search Results for author: Qianxiao Li

Found 51 papers, 17 papers with code

Computing committor functions for the study of rare events using deep learning with importance sampling

no code implementations ICLR 2019 Qianxiao Li, Bo Lin, Weiqing Ren

The committor function is a central object of study in understanding transitions between metastable states in complex systems.

Feature Engineering
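For background (standard transition path theory, not taken from the abstract above): for overdamped Langevin dynamics in a potential V at inverse temperature β, the committor q between metastable sets A and B solves the boundary-value problem

```latex
\beta^{-1}\,\Delta q(x) - \nabla V(x)\cdot\nabla q(x) = 0, \quad x \notin A \cup B,
\qquad q\rvert_{\partial A} = 0, \quad q\rvert_{\partial B} = 1,
```

whose high dimensionality is what makes neural-network parameterizations of q attractive.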

Accelerating Legacy Numerical Solvers by Non-intrusive Gradient-based Meta-solving

1 code implementation 5 May 2024 Sohei Arisaka, Qianxiao Li

Scientific computing is an essential tool for scientific discovery and engineering design, and its computational cost is always a main concern in practice.

Meta-Learning

From Generalization Analysis to Optimization Designs for State Space Models

no code implementations 4 May 2024 Fusheng Liu, Qianxiao Li

A State Space Model (SSM) is a foundation model in time series analysis, which has recently been shown as an alternative to transformers in sequence modeling.

Time Series · Time Series Analysis

PID Control-Based Self-Healing to Improve the Robustness of Large Language Models

1 code implementation 31 Mar 2024 Zhuotong Chen, Zihu Wang, Yifan Yang, Qianxiao Li, Zheng Zhang

This approach reduces the computational cost to that of using just the P controller, instead of the full PID control.
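For context, a discrete PID controller looks as follows (a generic sketch with my own naming, not the authors' implementation); setting the integral and derivative gains to zero reduces each step to a single multiplication, which is the cost saving the sentence above refers to:

```python
class PID:
    """Minimal discrete PID controller (generic sketch)."""

    def __init__(self, kp, ki=0.0, kd=0.0, dt=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0     # running sum of past errors
        self.prev_error = 0.0   # last error, for the finite difference

    def step(self, error):
        # Proportional term: one multiply -- the only cost if ki = kd = 0.
        p = self.kp * error
        # Integral term: accumulate the error over time.
        self.integral += error * self.dt
        i = self.ki * self.integral
        # Derivative term: finite difference of the error.
        d = self.kd * (error - self.prev_error) / self.dt
        self.prev_error = error
        return p + i + d
```

With `ki = kd = 0` the controller degenerates to pure P control, which is the cheap regime the abstract mentions.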

DynGMA: a robust approach for learning stochastic differential equations from data

1 code implementation 22 Feb 2024 Aiqing Zhu, Qianxiao Li

Benefiting from the robust density approximation, our method exhibits superior accuracy compared to baseline methods in learning the fully unknown drift and diffusion functions and computing the invariant distribution from trajectory data.
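As a minimal illustration (a generic Euler–Maruyama simulator, not the DynGMA method itself): once the drift f and diffusion g of an SDE dX = f(X) dt + g(X) dW have been learned, trajectories can be sampled as

```python
import numpy as np

def euler_maruyama(f, g, x0, dt, n_steps, rng=None):
    """Simulate dX = f(X) dt + g(X) dW with the Euler-Maruyama scheme."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = float(x0)
    path = [x]
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment ~ N(0, dt)
        x = x + f(x) * dt + g(x) * dw
        path.append(x)
    return np.array(path)
```

Setting g to zero recovers explicit Euler for the deterministic drift, which is a quick sanity check for learned models.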

Mitigating distribution shift in machine learning-augmented hybrid simulation

1 code implementation 17 Jan 2024 Jiaxi Zhao, Qianxiao Li

We study the problem of distribution shift generally arising in machine-learning augmented hybrid simulation, where parts of simulation algorithms are replaced by data-driven surrogates.

StableSSM: Alleviating the Curse of Memory in State-space Models through Stable Reparameterization

1 code implementation 24 Nov 2023 Shida Wang, Qianxiao Li

In this paper, we investigate the long-term memory learning capabilities of state-space models (SSMs) from the perspective of parameterization.

Asymptotically Fair Participation in Machine Learning Models: an Optimal Control Perspective

no code implementations 16 Nov 2023 Zhuotong Chen, Qianxiao Li, Zheng Zhang

Moreover, we design a surrogate retention system based on existing literature on evolutionary population dynamics to approximate the dynamics of distribution shifts on active user counts, from which the objective of achieving asymptotically fair participation is formulated as an optimal control problem, and the control variables are considered as the model parameters.

Interpolation, Approximation and Controllability of Deep Neural Networks

no code implementations 12 Sep 2023 Jingpu Cheng, Qianxiao Li, Ting Lin, Zuowei Shen

We investigate the expressive power of deep residual neural networks idealized as continuous dynamical systems through control theory.

Constructing Custom Thermodynamics Using Deep Learning

1 code implementation 8 Aug 2023 Xiaoli Chen, Beatrice W. Soh, Zi-En Ooi, Eleonore Vissol-Gaudin, Haijun Yu, Kostya S. Novoselov, Kedar Hippalgaonkar, Qianxiao Li

Specifically, we learn three interpretable thermodynamic coordinates and build a dynamical landscape of polymer stretching, including the identification of stable and transition states and the control of the stretching rate.

Physical Intuition

Inverse Approximation Theory for Nonlinear Recurrent Neural Networks

1 code implementation 30 May 2023 Shida Wang, Zhong Li, Qianxiao Li

We prove an inverse approximation theorem for the approximation of nonlinear sequence-to-sequence relationships using recurrent neural networks (RNNs).

Approximation Rate of the Transformer Architecture for Sequence Modeling

no code implementations 29 May 2023 Haotian Jiang, Qianxiao Li

The Transformer architecture is widely applied in sequence modeling applications, yet the theoretical understanding of its working principles remains limited.

Forward and Inverse Approximation Theory for Linear Temporal Convolutional Networks

no code implementations 29 May 2023 Haotian Jiang, Qianxiao Li

We present a theoretical analysis of the approximation properties of convolutional architectures when applied to the modeling of temporal sequences.

Temporal Sequences

A Brief Survey on the Approximation Theory for Sequence Modelling

no code implementations 27 Feb 2023 Haotian Jiang, Qianxiao Li, Zhong Li, Shida Wang

We survey current developments in the approximation theory of sequence modelling in machine learning.

On the Universal Approximation Property of Deep Fully Convolutional Neural Networks

no code implementations 25 Nov 2022 Ting Lin, Zuowei Shen, Qianxiao Li

We study the approximation of shift-invariant or equivariant functions by deep fully convolutional networks from the dynamical systems perspective.

A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms

no code implementations 22 Nov 2022 Danimir T. Doncevic, Alexander Mitsos, Yue Guo, Qianxiao Li, Felix Dietrich, Manuel Dahmen, Ioannis G. Kevrekidis

Meta-learning of numerical algorithms for a given task consists of the data-driven identification and adaptation of an algorithmic structure and the associated hyperparameters.

Inductive Bias · Meta-Learning

Fast Bayesian Optimization of Needle-in-a-Haystack Problems using Zooming Memory-Based Initialization (ZoMBI)

1 code implementation 26 Aug 2022 Alexander E. Siemenn, Zekun Ren, Qianxiao Li, Tonio Buonassisi

Needle-in-a-Haystack problems exist across a wide range of applications including rare disease prediction, ecological resource management, fraud detection, and material property optimization.

Bayesian Optimization · Disease Prediction +2

Deep Neural Network Approximation of Invariant Functions through Dynamical Systems

no code implementations 18 Aug 2022 Qianxiao Li, Ting Lin, Zuowei Shen

We study the approximation of functions which are invariant with respect to certain permutations of the input indices using flow maps of dynamical systems.

Translation

Self-Healing Robust Neural Networks via Closed-Loop Control

1 code implementation 26 Jun 2022 Zhuotong Chen, Qianxiao Li, Zheng Zhang

While numerous attack and defense techniques have been developed, this work investigates the robustness issue from a new angle: can we design a self-healing neural network that can automatically detect and fix the vulnerability issue by itself?

Principled Acceleration of Iterative Numerical Methods Using Machine Learning

no code implementations 17 Jun 2022 Sohei Arisaka, Qianxiao Li

Iterative methods are ubiquitous in large-scale scientific computing applications, and a number of approaches based on meta-learning have been recently proposed to accelerate them.

Meta-Learning

Short optimization paths lead to good generalization

no code implementations 29 Sep 2021 Fusheng Liu, Haizhao Yang, Qianxiao Li

Through our approach, we show that, with a proper initialization, gradient flow converges following a short path with an explicit length estimate.

BIG-bench Machine Learning · regression

On the approximation properties of recurrent encoder-decoder architectures

no code implementations ICLR 2022 Zhong Li, Haotian Jiang, Qianxiao Li

Our results provide the theoretical understanding of approximation properties of the recurrent encoder-decoder architecture, which characterises, in the considered setting, the types of temporal relationships that can be efficiently learned.

Decoder

Unraveling Model-Agnostic Meta-Learning via The Adaptation Learning Rate

no code implementations ICLR 2022 Yingtian Zou, Fusheng Liu, Qianxiao Li

In this paper, we study the effect of the adaptation learning rate in meta-learning with mixed linear regression.

Meta-Learning

Gradient-based Meta-solving and Its Applications to Iterative Methods for Solving Differential Equations

no code implementations 29 Sep 2021 Sohei Arisaka, Qianxiao Li

In science and engineering applications, it is often required to solve similar computational problems repeatedly.

Meta-Learning

Approximation Theory of Convolutional Architectures for Time Series Modelling

no code implementations 20 Jul 2021 Haotian Jiang, Zhong Li, Qianxiao Li

We study the approximation properties of convolutional architectures applied to time series modelling, which can be formulated mathematically as a functional approximation problem.

Time Series · Time Series Analysis

Adversarial Invariant Learning

1 code implementation CVPR 2021 Nanyang Ye, Jingxuan Tang, Huayu Deng, Xiao-Yun Zhou, Qianxiao Li, Zhenguo Li, Guang-Zhong Yang, Zhanxing Zhu

To the best of our knowledge, this is one of the first works to adopt a differentiable environment-splitting method to enable stable predictions across environments without environment index information, achieving state-of-the-art performance on datasets with strong spurious correlations, such as Colored MNIST.

Domain Generalization · Out-of-Distribution Generalization

Personalized Algorithm Generation: A Case Study in Learning ODE Integrators

2 code implementations 4 May 2021 Yue Guo, Felix Dietrich, Tom Bertalan, Danimir T. Doncevic, Manuel Dahmen, Ioannis G. Kevrekidis, Qianxiao Li

As a case study, we develop a machine learning approach that automatically learns effective solvers for initial value problems in the form of ordinary differential equations (ODEs), based on the Runge-Kutta (RK) integrator architecture.

Meta-Learning
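A minimal sketch of the idea (a hypothetical parameterization, not the authors' code): an explicit two-stage Runge–Kutta step whose tableau coefficient could be exposed as a learnable parameter in a meta-learning loop:

```python
def rk2_step(f, t, y, h, alpha=0.5):
    """One explicit two-stage Runge-Kutta step for y' = f(t, y).

    alpha parameterizes the Butcher tableau: alpha = 0.5 gives the
    midpoint method, alpha = 1.0 gives Heun's method.  In a learned-solver
    setting, alpha (or the stage weights directly) would be trainable.
    """
    k1 = f(t, y)
    k2 = f(t + alpha * h, y + alpha * h * k1)
    # Weights chosen so the step is second-order accurate for any alpha > 0.
    b1 = 1.0 - 1.0 / (2.0 * alpha)
    b2 = 1.0 / (2.0 * alpha)
    return y + h * (b1 * k1 + b2 * k2)
```

Learning then amounts to tuning such coefficients so the solver performs well on a distribution of problem instances rather than in the classical worst case.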

QROSS: QUBO Relaxation Parameter Optimisation via Learning Solver Surrogates

no code implementations 19 Mar 2021 Tian Huang, Siong Thye Goh, Sabrish Gopalakrishnan, Tao Luo, Qianxiao Li, Hoong Chuin Lau

In this way, we are able to capture the common structure of the instances and their interactions with the solver, and produce good choices of penalty parameters with fewer calls to the QUBO solver.

Traveling Salesman Problem

Towards Robust Neural Networks via Close-loop Control

1 code implementation ICLR 2021 Zhuotong Chen, Qianxiao Li, Zheng Zhang

We connect the robustness of neural networks with optimal control using the geometrical information of underlying data to design the control objective.

Amata: An Annealing Mechanism for Adversarial Training Acceleration

no code implementations 15 Dec 2020 Nanyang Ye, Qianxiao Li, Xiao-Yun Zhou, Zhanxing Zhu

However, conducting adversarial training brings much computational overhead compared with standard training.

A Data Driven Method for Computing Quasipotentials

no code implementations 13 Dec 2020 Bo Lin, Qianxiao Li, Weiqing Ren

The quasipotential is a natural generalization of the concept of energy functions to non-equilibrium systems.

Optimising Stochastic Routing for Taxi Fleets with Model Enhanced Reinforcement Learning

no code implementations 22 Oct 2020 Shen Ren, Qianxiao Li, Liye Zhang, Zheng Qin, Bo Yang

The future of Mobility-as-a-Service (MaaS) should embrace an integrated system of ride-hailing, street-hailing and ride-sharing with optimised intelligent vehicle routing in response to a real-time, stochastic demand pattern.

reinforcement-learning · Reinforcement Learning (RL)

On the Curse of Memory in Recurrent Neural Networks: Approximation and Optimization Analysis

no code implementations ICLR 2021 Zhong Li, Jiequn Han, Weinan E, Qianxiao Li

We study the approximation properties and optimization dynamics of recurrent neural networks (RNNs) when applied to learn input-output relationships in temporal data.

OnsagerNet: Learning Stable and Interpretable Dynamics using a Generalized Onsager Principle

1 code implementation 6 Sep 2020 Haijun Yu, Xinyuan Tian, Weinan E, Qianxiao Li

We further apply this method to study Rayleigh-Benard convection and learn Lorenz-like low dimensional autonomous reduced order models that capture both qualitative and quantitative properties of the underlying dynamics.

Optimization in Machine Learning: A Distribution Space Approach

no code implementations 18 Apr 2020 Yongqiang Cai, Qianxiao Li, Zuowei Shen

We present the viewpoint that optimization problems encountered in machine learning can often be interpreted as minimizing a convex functional over a function space, but with a non-convex constraint set introduced by model parameterization.

BIG-bench Machine Learning

Collaborative Inference for Efficient Remote Monitoring

no code implementations 12 Feb 2020 Chi Zhang, Yong Sheng Soh, Ling Feng, Tianyi Zhou, Qianxiao Li

While current machine learning models have impressive performance over a wide range of applications, their large size and complexity render them unsuitable for tasks such as remote monitoring on edge devices with limited storage and computational power.

Collaborative Inference

Deep Learning via Dynamical Systems: An Approximation Perspective

no code implementations 22 Dec 2019 Qianxiao Li, Ting Lin, Zuowei Shen

We build on the dynamical systems approach to deep learning, where deep residual networks are idealized as continuous-time dynamical systems, from the approximation perspective.

Distributed Optimization for Over-Parameterized Learning

no code implementations 14 Jun 2019 Chi Zhang, Qianxiao Li

Moreover, we show that more local updating can reduce the overall communication, even for an infinite number of steps where each node is free to update its local model to near-optimality before exchanging information.

Distributed Optimization

Computing Committor Functions for the Study of Rare Events Using Deep Learning

no code implementations 14 Jun 2019 Qianxiao Li, Bo Lin, Weiqing Ren

The committor function is a central object of study in understanding transitions between metastable states in complex systems.

Feature Engineering

On the Convergence and Robustness of Batch Normalization

no code implementations ICLR 2019 Yongqiang Cai, Qianxiao Li, Zuowei Shen

Despite its empirical success, the theoretical underpinnings of the stability, convergence and acceleration properties of batch normalization (BN) remain elusive.

Stochastic Modified Equations and Dynamics of Stochastic Gradient Algorithms I: Mathematical Foundations

no code implementations 5 Nov 2018 Qianxiao Li, Cheng Tai, Weinan E

We develop the mathematical foundations of the stochastic modified equations (SME) framework for analyzing the dynamics of stochastic gradient algorithms, where the latter is approximated by a class of stochastic differential equations with small noise parameters.
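To first order, the SME approximation (the form below is standard in this line of work and uses my own notation; the paper's precise statement is a weak-approximation theorem) models SGD with learning rate η on an expected loss f by the SDE

```latex
d\Theta_t = -\nabla f(\Theta_t)\, dt + \sqrt{\eta}\,\Sigma(\Theta_t)^{1/2}\, dW_t,
```

where Σ is the covariance of the stochastic gradients; the SGD iterates are then close in distribution to Θ sampled at times t = kη.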

A Quantitative Analysis of the Effect of Batch Normalization on Gradient Descent

no code implementations ICLR 2019 Yongqiang Cai, Qianxiao Li, Zuowei Shen

Despite its empirical success and recent theoretical progress, there generally lacks a quantitative analysis of the effect of batch normalization (BN) on the convergence and stability of gradient descent.

A Mean-Field Optimal Control Formulation of Deep Learning

no code implementations 3 Jul 2018 Weinan E, Jiequn Han, Qianxiao Li

This paper introduces the mathematical formulation of the population risk minimization problem in deep learning as a mean-field optimal control problem.

Maximum Principle Based Algorithms for Deep Learning

2 code implementations 26 Oct 2017 Qianxiao Li, Long Chen, Cheng Tai, Weinan E

The continuous dynamical system approach to deep learning is explored in order to devise alternative frameworks for training algorithms.

Stochastic modified equations and adaptive stochastic gradient algorithms

no code implementations ICML 2017 Qianxiao Li, Cheng Tai, Weinan E

We develop the method of stochastic modified equations (SME), in which stochastic gradient algorithms are approximated in the weak sense by continuous-time stochastic differential equations.
