Search Results for author: Qianxiao Li

Found 39 papers, 11 papers with code

Computing committor functions for the study of rare events using deep learning with importance sampling

no code implementations ICLR 2019 Qianxiao Li, Bo Lin, Weiqing Ren

The committor function is a central object of study in understanding transitions between metastable states in complex systems.

Feature Engineering
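For context, the committor function mentioned in the abstract has a standard PDE characterisation (textbook formulation, not quoted from the paper): for overdamped Langevin dynamics between metastable sets $A$ and $B$, the committor $q$ solves the backward Kolmogorov equation

```latex
% Committor q(x) between metastable sets A and B for the dynamics
% dX_t = -\nabla V(X_t)\,dt + \sqrt{2\beta^{-1}}\,dW_t:
\beta^{-1}\,\Delta q(x) - \nabla V(x)\cdot\nabla q(x) = 0,
\quad x \in \Omega \setminus (A \cup B),
\qquad q|_{\partial A} = 0, \quad q|_{\partial B} = 1.
```

Deep-learning approaches such as this one parameterise $q$ by a neural network and minimise a variational form of this boundary-value problem.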

A Brief Survey on the Approximation Theory for Sequence Modelling

no code implementations 27 Feb 2023 Haotian Jiang, Qianxiao Li, Zhong Li, Shida Wang

We survey current developments in the approximation theory of sequence modelling in machine learning.

On the Universal Approximation Property of Deep Fully Convolutional Neural Networks

no code implementations 25 Nov 2022 Ting Lin, Zuowei Shen, Qianxiao Li

We study the approximation of shift-invariant or equivariant functions by deep fully convolutional networks from the dynamical systems perspective.

A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms

no code implementations 22 Nov 2022 Danimir T. Doncevic, Alexander Mitsos, Yue Guo, Qianxiao Li, Felix Dietrich, Manuel Dahmen, Ioannis G. Kevrekidis

Meta-learning of numerical algorithms for a given task consists of the data-driven identification and adaptation of an algorithmic structure and the associated hyperparameters.

Inductive Bias Meta-Learning

Fast Bayesian Optimization of Needle-in-a-Haystack Problems using Zooming Memory-Based Initialization (ZoMBI)

1 code implementation 26 Aug 2022 Alexander E. Siemenn, Zekun Ren, Qianxiao Li, Tonio Buonassisi

Needle-in-a-Haystack problems exist across a wide range of applications including rare disease prediction, ecological resource management, fraud detection, and material property optimization.

Disease Prediction Fraud Detection +1
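A minimal sketch of the memory-based zooming idea behind ZoMBI, with plain random sampling standing in for the paper's Bayesian acquisition step; the function name and parameters below are our own illustration, not the authors' code.

```python
import numpy as np

def zoom_search(f, bounds, n_zooms=3, n_samples=50, m=5, rng=None):
    """Illustrative zooming search: repeatedly sample, remember the m best
    points seen so far, and shrink the search box to the region they span."""
    rng = np.random.default_rng(rng)
    lo = np.array(bounds[0], dtype=float)
    hi = np.array(bounds[1], dtype=float)
    memory_x, memory_y = [], []
    for _ in range(n_zooms):
        X = rng.uniform(lo, hi, size=(n_samples, lo.size))
        y = np.array([f(x) for x in X])
        memory_x.extend(X)
        memory_y.extend(y)
        order = np.argsort(memory_y)[:m]          # m best points (minimisation)
        best = np.array(memory_x)[order]
        lo, hi = best.min(axis=0), best.max(axis=0)  # zoom bounds onto memory
    i = int(np.argmin(memory_y))
    return np.array(memory_x)[i], memory_y[i]
```

The zooming concentrates later samples near the needle, which is the property the paper exploits for needle-in-a-haystack objectives.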

Deep Neural Network Approximation of Invariant Functions through Dynamical Systems

no code implementations 18 Aug 2022 Qianxiao Li, Ting Lin, Zuowei Shen

We study the approximation of functions which are invariant with respect to certain permutations of the input indices using flow maps of dynamical systems.

Translation

Self-Healing Robust Neural Networks via Closed-Loop Control

1 code implementation 26 Jun 2022 Zhuotong Chen, Qianxiao Li, Zheng Zhang

While numerous attack and defense techniques have been developed, this work investigates the robustness issue from a new angle: can we design a self-healing neural network that can automatically detect and fix the vulnerability issue by itself?

Principled Acceleration of Iterative Numerical Methods Using Machine Learning

1 code implementation 17 Jun 2022 Sohei Arisaka, Qianxiao Li

Iterative methods are ubiquitous in large-scale scientific computing applications, and a number of approaches based on meta-learning have been recently proposed to accelerate them.

Meta-Learning
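One simple instance of this idea is learning a warm start for an iterative solver from previously solved problems. The toy setup below (our illustration, not the paper's method) fits a linear map from right-hand sides to solutions and uses it to initialise Jacobi iteration:

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-8, max_iter=10_000):
    """Plain Jacobi iteration; returns the solution and the iteration count."""
    D = np.diag(A)
    R = A - np.diag(D)
    x, k = x0.astype(float), 0
    while np.linalg.norm(A @ x - b) > tol and k < max_iter:
        x = (b - R @ x) / D
        k += 1
    return x, k

# Toy "meta-solving" step: fit a linear map b -> x0 on solved training
# instances, then warm-start the solver on a new instance.
rng = np.random.default_rng(0)
n = 20
A = 4 * np.eye(n) + 0.3 * rng.uniform(-1, 1, (n, n))
A = (A + A.T) / 2                                 # symmetric, Jacobi-friendly
train_b = rng.normal(size=(50, n))
train_x = np.linalg.solve(A, train_b.T).T         # "expensive" training labels
W, *_ = np.linalg.lstsq(train_b, train_x, rcond=None)   # x0 ~ b @ W

b_new = rng.normal(size=n)
x_cold, k_cold = jacobi(A, b_new, np.zeros(n))
x_warm, k_warm = jacobi(A, b_new, b_new @ W)      # learned warm start
```

Because the map from b to x is linear here, the learned warm start is nearly exact and the warm-started solver needs far fewer iterations; the paper develops principled versions of this acceleration.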

On the approximation properties of recurrent encoder-decoder architectures

no code implementations ICLR 2022 Zhong Li, Haotian Jiang, Qianxiao Li

Our results provide the theoretical understanding of approximation properties of the recurrent encoder-decoder architecture, which characterises, in the considered setting, the types of temporal relationships that can be efficiently learned.

Gradient-based Meta-solving and Its Applications to Iterative Methods for Solving Differential Equations

no code implementations 29 Sep 2021 Sohei Arisaka, Qianxiao Li

In science and engineering applications, it is often required to solve similar computational problems repeatedly.

Meta-Learning

Short optimization paths lead to good generalization

no code implementations 29 Sep 2021 Fusheng Liu, Haizhao Yang, Qianxiao Li

Through our approach, we show that, with a proper initialization, gradient flow converges following a short path with an explicit length estimate.

BIG-bench Machine Learning regression

Unraveling Model-Agnostic Meta-Learning via The Adaptation Learning Rate

no code implementations ICLR 2022 Yingtian Zou, Fusheng Liu, Qianxiao Li

In this paper, we study the effect of the adaptation learning rate in meta-learning with mixed linear regression.

Meta-Learning
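As a reminder of the quantity under study (standard MAML notation, our paraphrase rather than the paper's): the adaptation learning rate $\alpha$ enters through the inner-loop update, with $\beta$ the outer meta-learning rate,

```latex
% MAML: inner-loop adaptation (rate \alpha), outer meta-update (rate \beta)
\theta_i' = \theta - \alpha \,\nabla_\theta \mathcal{L}_{\mathcal{T}_i}(\theta),
\qquad
\theta \leftarrow \theta - \beta \,\nabla_\theta \sum_i \mathcal{L}_{\mathcal{T}_i}(\theta_i').
```

The paper analyses how the choice of $\alpha$ shapes what is learned in the mixed-linear-regression setting.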

Approximation Theory of Convolutional Architectures for Time Series Modelling

no code implementations 20 Jul 2021 Haotian Jiang, Zhong Li, Qianxiao Li

We study the approximation properties of convolutional architectures applied to time series modelling, which can be formulated mathematically as a functional approximation problem.

Time Series Analysis

Adversarial Invariant Learning

1 code implementation CVPR 2021 Nanyang Ye, Jingxuan Tang, Huayu Deng, Xiao-Yun Zhou, Qianxiao Li, Zhenguo Li, Guang-Zhong Yang, Zhanxing Zhu

To the best of our knowledge, this is one of the first works to adopt a differentiable environment splitting method to enable stable predictions across environments without environment index information, which achieves state-of-the-art performance on datasets with strong spurious correlation, such as Colored MNIST.

Domain Generalization Out-of-Distribution Generalization

Personalized Algorithm Generation: A Case Study in Learning ODE Integrators

2 code implementations 4 May 2021 Yue Guo, Felix Dietrich, Tom Bertalan, Danimir T. Doncevic, Manuel Dahmen, Ioannis G. Kevrekidis, Qianxiao Li

As a case study, we develop a machine learning approach that automatically learns effective solvers for initial value problems in the form of ordinary differential equations (ODEs), based on the Runge-Kutta (RK) integrator architecture.

Meta-Learning
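A minimal sketch of the underlying idea: an explicit two-stage Runge-Kutta step whose Butcher-tableau entries are exposed as free parameters. In the paper these would be learned from data; here we just plug in the classical midpoint values to check the machinery. Function names are ours.

```python
import math

def rk2_step(f, t, y, h, a21, b1, b2):
    """Generic explicit 2-stage Runge-Kutta step; (a21, b1, b2) are the
    tableau entries that a learned integrator would adapt."""
    k1 = f(t, y)
    k2 = f(t + a21 * h, y + h * a21 * k1)
    return y + h * (b1 * k1 + b2 * k2)

def integrate(f, y0, t0, t1, n, params):
    """Integrate y' = f(t, y) from t0 to t1 in n steps with fixed tableau."""
    h, t, y = (t1 - t0) / n, t0, y0
    for _ in range(n):
        y = rk2_step(f, t, y, h, *params)
        t += h
    return y

# Midpoint method (a21=1/2, b1=0, b2=1) satisfies the order-2 conditions,
# so it integrates y' = -y on [0, 1] to second-order accuracy.
y1 = integrate(lambda t, y: -y, 1.0, 0.0, 1.0, 100, (0.5, 0.0, 1.0))
```

Learning the tableau entries (rather than fixing them to a classical scheme) is what lets the approach tailor the solver to a family of ODEs.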

QROSS: QUBO Relaxation Parameter Optimisation via Learning Solver Surrogates

no code implementations 19 Mar 2021 Tian Huang, Siong Thye Goh, Sabrish Gopalakrishnan, Tao Luo, Qianxiao Li, Hoong Chuin Lau

In this way, we are able to capture the common structure of the instances and their interactions with the solver, and produce good choices of penalty parameters with fewer calls to the QUBO solver.

Traveling Salesman Problem
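For reference, the relaxation step whose parameter is being tuned has the generic form (standard QUBO notation, not the paper's exact formulation): a constrained problem is folded into an unconstrained binary quadratic objective via a penalty weight $\lambda$,

```latex
\min_{x \in \{0,1\}^n} \; x^{\top} Q x \;+\; \lambda \,\lVert A x - b \rVert^2 ,
```

and QROSS learns a solver surrogate that predicts good values of $\lambda$ for unseen instances.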

Towards Robust Neural Networks via Close-loop Control

1 code implementation ICLR 2021 Zhuotong Chen, Qianxiao Li, Zheng Zhang

We connect the robustness of neural networks with optimal control using the geometrical information of underlying data to design the control objective.

Amata: An Annealing Mechanism for Adversarial Training Acceleration

no code implementations 15 Dec 2020 Nanyang Ye, Qianxiao Li, Xiao-Yun Zhou, Zhanxing Zhu

However, conducting adversarial training brings much computational overhead compared with standard training.

A Data Driven Method for Computing Quasipotentials

no code implementations 13 Dec 2020 Bo Lin, Qianxiao Li, Weiqing Ren

The quasipotential is a natural generalization of the concept of energy functions to non-equilibrium systems.
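In the standard Freidlin-Wentzell formulation (our summary, with identity noise for simplicity), the quasipotential of a state $x$ relative to an attractor $x^{*}$ of $\dot{X} = b(X)$ is the minimal action needed to reach $x$ against the drift:

```latex
V(x) \;=\; \inf_{T>0}\;\inf_{\substack{\varphi(0)=x^{*}\\ \varphi(T)=x}}\;
\frac{1}{2}\int_0^T \bigl\lvert \dot{\varphi}(t) - b(\varphi(t)) \bigr\rvert^2\, dt .
```

Data-driven methods such as this one learn $V$ without solving the variational problem path by path.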

Optimising Stochastic Routing for Taxi Fleets with Model Enhanced Reinforcement Learning

no code implementations 22 Oct 2020 Shen Ren, Qianxiao Li, Liye Zhang, Zheng Qin, Bo Yang

The future of Mobility-as-a-Service (MaaS) should embrace an integrated system of ride-hailing, street-hailing and ride-sharing with optimised intelligent vehicle routing in response to a real-time, stochastic demand pattern.

Reinforcement Learning (RL)

On the Curse of Memory in Recurrent Neural Networks: Approximation and Optimization Analysis

no code implementations ICLR 2021 Zhong Li, Jiequn Han, Weinan E, Qianxiao Li

We study the approximation properties and optimization dynamics of recurrent neural networks (RNNs) when applied to learn input-output relationships in temporal data.
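The linear setting analysed there can be summarised as follows (our paraphrase in standard notation): a linear continuous-time RNN $\dot{h}(t) = W h(t) + U x(t)$, $y(t) = c^{\top} h(t)$ acts on the input signal as a convolution,

```latex
y(t) \;=\; \int_0^{\infty} c^{\top} e^{W s}\, U \, x(t-s)\, ds ,
```

so the realisable memory kernels $\rho(s) = U^{\top} e^{W^{\top} s} c$ decay exponentially; targets with slowly decaying memory are then hard to approximate and optimise, which is the "curse of memory" in the title.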

OnsagerNet: Learning Stable and Interpretable Dynamics using a Generalized Onsager Principle

1 code implementation 6 Sep 2020 Haijun Yu, Xinyuan Tian, Weinan E, Qianxiao Li

We further apply this method to study Rayleigh-Benard convection and learn Lorenz-like low dimensional autonomous reduced order models that capture both qualitative and quantitative properties of the underlying dynamics.

Optimization in Machine Learning: A Distribution Space Approach

no code implementations 18 Apr 2020 Yongqiang Cai, Qianxiao Li, Zuowei Shen

We present the viewpoint that optimization problems encountered in machine learning can often be interpreted as minimizing a convex functional over a function space, but with a non-convex constraint set introduced by model parameterization.

BIG-bench Machine Learning

Collaborative Inference for Efficient Remote Monitoring

no code implementations 12 Feb 2020 Chi Zhang, Yong Sheng Soh, Ling Feng, Tianyi Zhou, Qianxiao Li

While current machine learning models have impressive performance over a wide range of applications, their large size and complexity render them unsuitable for tasks such as remote monitoring on edge devices with limited storage and computational power.

Deep Learning via Dynamical Systems: An Approximation Perspective

no code implementations 22 Dec 2019 Qianxiao Li, Ting Lin, Zuowei Shen

We build on the dynamical systems approach to deep learning, where deep residual networks are idealized as continuous-time dynamical systems, from the approximation perspective.
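The idealisation referred to here is the standard one: a residual block is the explicit Euler discretisation (step size $h$) of a continuous-time dynamical system,

```latex
x_{k+1} = x_k + h\, f(x_k, \theta_k)
\quad\longleftrightarrow\quad
\dot{x}(t) = f\bigl(x(t), \theta(t)\bigr),
```

and approximation questions about deep residual networks become questions about which maps the flow of the ODE can realise.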

Computing Committor Functions for the Study of Rare Events Using Deep Learning

no code implementations 14 Jun 2019 Qianxiao Li, Bo Lin, Weiqing Ren

The committor function is a central object of study in understanding transitions between metastable states in complex systems.

Feature Engineering

Distributed Optimization for Over-Parameterized Learning

no code implementations 14 Jun 2019 Chi Zhang, Qianxiao Li

Moreover, we show that more local updating can reduce the overall communication, even for an infinite number of steps where each node is free to update its local model to near-optimality before exchanging information.

Distributed Optimization

On the Convergence and Robustness of Batch Normalization

no code implementations ICLR 2019 Yongqiang Cai, Qianxiao Li, Zuowei Shen

Despite its empirical success, the theoretical underpinnings of the stability, convergence and acceleration properties of batch normalization (BN) remain elusive.

Stochastic Modified Equations and Dynamics of Stochastic Gradient Algorithms I: Mathematical Foundations

no code implementations 5 Nov 2018 Qianxiao Li, Cheng Tai, Weinan E

We develop the mathematical foundations of the stochastic modified equations (SME) framework for analyzing the dynamics of stochastic gradient algorithms, where the latter is approximated by a class of stochastic differential equations with small noise parameters.
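In brief (our summary of the framework): SGD with learning rate $\eta$, $x_{k+1} = x_k - \eta \nabla f_{\gamma_k}(x_k)$, is approximated in the weak sense by an SDE with small noise,

```latex
dX_t = -\nabla f(X_t)\, dt + \sqrt{\eta}\; \Sigma(X_t)^{1/2}\, dW_t ,
```

where $\Sigma$ is the covariance of the stochastic gradients; the paper establishes the precise sense and order of this approximation.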

A Quantitative Analysis of the Effect of Batch Normalization on Gradient Descent

no code implementations ICLR 2019 Yongqiang Cai, Qianxiao Li, Zuowei Shen

Despite its empirical success and recent theoretical progress, there generally lacks a quantitative analysis of the effect of batch normalization (BN) on the convergence and stability of gradient descent.

A Mean-Field Optimal Control Formulation of Deep Learning

no code implementations 3 Jul 2018 Weinan E, Jiequn Han, Qianxiao Li

This paper introduces the mathematical formulation of the population risk minimization problem in deep learning as a mean-field optimal control problem.

Maximum Principle Based Algorithms for Deep Learning

2 code implementations 26 Oct 2017 Qianxiao Li, Long Chen, Cheng Tai, Weinan E

The continuous dynamical system approach to deep learning is explored in order to devise alternative frameworks for training algorithms.
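Schematically (our summary), training algorithms of this kind derive from Pontryagin's maximum principle: with Hamiltonian $H(x, p, \theta) = p \cdot f(x, \theta)$, one alternates a forward state pass, a backward costate pass, and a layerwise Hamiltonian maximisation,

```latex
\dot{x}_t = f(x_t, \theta_t), \qquad
\dot{p}_t = -\nabla_x H(x_t, p_t, \theta_t), \qquad
\theta_t \in \arg\max_{\theta} H(x_t, p_t, \theta),
```

which replaces the single gradient step of backpropagation with an optimisation over each layer's parameters.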

Stochastic modified equations and adaptive stochastic gradient algorithms

no code implementations ICML 2017 Qianxiao Li, Cheng Tai, Weinan E

We develop the method of stochastic modified equations (SME), in which stochastic gradient algorithms are approximated in the weak sense by continuous-time stochastic differential equations.
