Search Results for author: Zhiyuan Li

Found 45 papers, 12 papers with code

Few-shot Generation of Personalized Neural Surrogates for Cardiac Simulation via Bayesian Meta-Learning

no code implementations 6 Oct 2022 Xiajun Jiang, Zhiyuan Li, Ryan Missel, Md Shakil Zaman, Brian Zenger, Wilson W. Good, Rob S. MacLeod, John L. Sapp, Linwei Wang

At test time, metaPNS delivers a personalized neural surrogate by fast feed-forward embedding of a small and flexible number of data points available from an individual, achieving -- for the first time -- personalization and surrogate construction for expensive simulations in one end-to-end learning framework.

Meta-Learning Variational Inference
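
As a rough picture of how such a few-shot surrogate can be conditioned on an individual's data, here is a hypothetical neural-process-style sketch in PyTorch: a permutation-invariant encoder embeds the small context set, and the surrogate decoder is conditioned on the resulting embedding. All names and layer sizes are illustrative; metaPNS's actual architecture and its Bayesian meta-learning objective are in the paper itself.

```python
import torch
import torch.nn as nn

class SetConditionedSurrogate(nn.Module):
    """Hypothetical sketch: embed a small context set with a permutation-
    invariant encoder, then condition the surrogate on the mean embedding."""
    def __init__(self, x_dim, y_dim, z_dim=64):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(x_dim + y_dim, z_dim), nn.ReLU(),
                                    nn.Linear(z_dim, z_dim))
        self.decode = nn.Sequential(nn.Linear(x_dim + z_dim, z_dim), nn.ReLU(),
                                    nn.Linear(z_dim, y_dim))

    def forward(self, context_x, context_y, target_x):
        # Mean over the context set makes the embedding size-agnostic,
        # matching the "small and flexible number of data" setting.
        z = self.encode(torch.cat([context_x, context_y], -1)).mean(0)
        z = z.expand(target_x.size(0), -1)
        return self.decode(torch.cat([target_x, z], -1))
```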

Implicit Bias of Gradient Descent on Reparametrized Models: On Equivalence to Mirror Descent

no code implementations 8 Jul 2022 Zhiyuan Li, Tianhao Wang, Jason D. Lee, Sanjeev Arora

Conversely, continuous mirror descent with any Legendre function can be viewed as gradient flow with a related commuting parametrization.
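
For orientation, the correspondence can be stated schematically (notation illustrative; the paper gives the precise conditions on the parametrization $g$):

```latex
% Gradient flow on the reparametrized model \theta = g(w):
\dot{w}_t = -\nabla_w L(g(w_t))
\quad\Longrightarrow\quad
\dot{\theta}_t = -\,\partial g(w_t)\,\partial g(w_t)^{\top}\,\nabla L(\theta_t),
% which, for a commuting parametrization g, coincides with the mirror flow
\frac{\mathrm{d}}{\mathrm{d}t}\,\nabla R(\theta_t) = -\nabla L(\theta_t)
% for a related Legendre function R, and conversely.
```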

Understanding the Generalization Benefit of Normalization Layers: Sharpness Reduction

no code implementations 14 Jun 2022 Kaifeng Lyu, Zhiyuan Li, Sanjeev Arora

Normalization layers (e.g., Batch Normalization, Layer Normalization) were introduced to help with optimization difficulties in very deep nets, but they clearly also help generalization, even in not-so-deep nets.

Understanding Gradient Descent on Edge of Stability in Deep Learning

no code implementations 19 May 2022 Sanjeev Arora, Zhiyuan Li, Abhishek Panigrahi

The current paper mathematically analyzes a new mechanism of implicit regularization in the EoS phase, whereby GD updates arising from the non-smooth loss landscape turn out to evolve along a deterministic flow on the manifold of minimum loss.

Interpretability of Neural Network With Physiological Mechanisms

no code implementations 24 Mar 2022 Anna Zou, Zhiyuan Li

Deep learning continues to serve as a powerful state-of-the-art technique that has achieved extraordinary accuracy in various domains of regression and classification tasks, including image, video, signal, and natural language data.

Robust Training of Neural Networks Using Scale Invariant Architectures

no code implementations 2 Feb 2022 Zhiyuan Li, Srinadh Bhojanapalli, Manzil Zaheer, Sashank J. Reddi, Sanjiv Kumar

In contrast to SGD, adaptive gradient methods like Adam allow robust training of modern deep networks, especially large language models.

Gradient Descent on Two-layer Nets: Margin Maximization and Simplicity Bias

no code implementations NeurIPS 2021 Kaifeng Lyu, Zhiyuan Li, Runzhe Wang, Sanjeev Arora

The current paper is able to establish this global optimality for two-layer Leaky ReLU nets trained with gradient flow on linearly separable and symmetric data, regardless of the width.
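
The implicit bias at play here: for homogeneous nets trained on logistic-type losses, gradient flow is known to converge in direction to a KKT point of the max-margin program below; the paper's contribution is showing that, in its setting, this limit is in fact globally optimal. A schematic statement:

```latex
\min_{\theta}\ \tfrac{1}{2}\,\lVert \theta \rVert_2^2
\qquad \text{s.t.} \qquad
y_i\, f(\theta;\, x_i) \ \ge\ 1, \quad i = 1, \dots, n .
```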

What Happens after SGD Reaches Zero Loss? --A Mathematical Framework

no code implementations ICLR 2022 Zhiyuan Li, Tianhao Wang, Sanjeev Arora

Understanding the implicit bias of Stochastic Gradient Descent (SGD) is one of the key challenges in deep learning, especially for overparametrized models, where the local minimizers of the loss function $L$ can form a manifold.

DeFRCN: Decoupled Faster R-CNN for Few-Shot Object Detection

1 code implementation ICCV 2021 Limeng Qiao, Yuxuan Zhao, Zhiyuan Li, Xi Qiu, Jianan Wu, Chi Zhang

Few-shot object detection, which aims at detecting novel objects rapidly from extremely few annotated examples of previously unseen classes, has attracted significant research interest in the community.

Classification Few-Shot Object Detection +1
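
The decoupling idea can be illustrated with a gradient-scaling layer: identity in the forward pass, scaled gradients in the backward pass, so a detection head can be partially cut off from the shared backbone. A minimal PyTorch sketch in this spirit (class name and default scale are illustrative, not DeFRCN's exact implementation):

```python
import torch

class GradientScale(torch.autograd.Function):
    """Identity in the forward pass; scales gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, scale):
        ctx.scale = scale
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output * ctx.scale, None

def decouple(features, scale=0.1):
    # scale < 1 weakens the gradient a head sends back into the shared
    # backbone; scale = 0 is a full stop-gradient.
    return GradientScale.apply(features, scale)
```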

Risk Bounds and Rademacher Complexity in Batch Reinforcement Learning

no code implementations 25 Mar 2021 Yaqi Duan, Chi Jin, Zhiyuan Li

Concretely, we view the Bellman error as a surrogate loss for the optimality gap, and prove the following: (1) in the double sampling regime, the excess risk of the Empirical Risk Minimizer (ERM) is bounded by the Rademacher complexity of the function class.

Learning Theory reinforcement-learning
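
The double sampling regime assumes two independent next states per state-action pair, which makes the squared Bellman error estimable without bias. A schematic version of this standard trick (notation illustrative):

```latex
\widehat{\mathcal{E}}(f)
= \Big( f(s,a) - r - \gamma \max_{a'} f(s'_1, a') \Big)
  \Big( f(s,a) - r - \gamma \max_{a'} f(s'_2, a') \Big),
\qquad
\mathbb{E}\big[ \widehat{\mathcal{E}}(f) \big]
= \big( f(s,a) - (\mathcal{T}f)(s,a) \big)^2 ,
```

since the two residuals are conditionally independent given $(s,a)$.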

A Two-Stage Variable Selection Approach for Correlated High Dimensional Predictors

no code implementations 24 Mar 2021 Zhiyuan Li

To address this challenge, we propose a two-stage approach that combines a variable clustering stage with a group variable selection stage for the group variable selection problem.

Variable Selection
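
Once the first stage has grouped correlated predictors into clusters $G_1, \dots, G_K$, the second stage can select whole groups at once. A standard instantiation of such a group selection step (not necessarily the paper's exact estimator) is the group lasso:

```latex
\hat{\beta} = \arg\min_{\beta}\ \tfrac{1}{2}\,\lVert y - X\beta \rVert_2^2
+ \lambda \sum_{k=1}^{K} \sqrt{|G_k|}\;\lVert \beta_{G_k} \rVert_2 .
```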

On the Validity of Modeling SGD with Stochastic Differential Equations (SDEs)

1 code implementation NeurIPS 2021 Zhiyuan Li, Sadhika Malladi, Sanjeev Arora

It is generally recognized that finite learning rate (LR), in contrast to infinitesimal LR, is important for good generalization in real-life deep nets.
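
The SDE in question is the standard diffusion approximation of SGD with learning rate $\eta$, where $\Sigma(X)$ denotes the covariance of the minibatch gradient noise:

```latex
\mathrm{d}X_t = -\nabla L(X_t)\,\mathrm{d}t
+ \big( \eta\, \Sigma(X_t) \big)^{1/2}\, \mathrm{d}W_t .
```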

A Supernova-driven, Magnetically-collimated Outflow as the Origin of the Galactic Center Radio Bubbles

no code implementations 26 Jan 2021 Mengfei Zhang, Zhiyuan Li, Mark R. Morris

Our simulations are run with different combinations of two main parameters: the supernova birth rate and the strength of a global magnetic field oriented vertically with respect to the disk.

Astrophysics of Galaxies

Towards Resolving the Implicit Bias of Gradient Descent for Matrix Factorization: Greedy Low-Rank Learning

no code implementations ICLR 2021 Zhiyuan Li, Yuping Luo, Kaifeng Lyu

Matrix factorization is a simple and natural test-bed to investigate the implicit regularization of gradient descent.

Why Are Convolutional Nets More Sample-Efficient than Fully-Connected Nets?

no code implementations ICLR 2021 Zhiyuan Li, Yi Zhang, Sanjeev Arora

However, this has not been made mathematically rigorous, and the hurdle is that the fully connected net can always simulate the convolutional net (for a fixed task).

Image Classification Inductive Bias

Reconciling Modern Deep Learning with Traditional Optimization Analyses: The Intrinsic Learning Rate

no code implementations NeurIPS 2020 Zhiyuan Li, Kaifeng Lyu, Sanjeev Arora

Recent works (e.g., Li and Arora, 2020) suggest that the use of popular normalization schemes (including Batch Normalization) in today's deep learning can move it far from a traditional optimization viewpoint, e.g., through the use of exponentially increasing learning rates.

Learning Geometry-Dependent and Physics-Based Inverse Image Reconstruction

no code implementations 18 Jul 2020 Xiajun Jiang, Sandesh Ghimire, Jwala Dhamala, Zhiyuan Li, Prashnna Kumar Gyawali, Linwei Wang

However, many reconstruction problems involve imaging physics that are dependent on the underlying non-Euclidean geometry.

Image Reconstruction

Chemical abundances in Sgr A East: evidence for a type Iax supernova remnant

no code implementations 26 Jun 2020 Ping Zhou, Shing-Chi Leung, Zhiyuan Li, Ken'ichi Nomoto, Jacco Vink, Yang Chen

We report evidence that SNR Sgr A East in the Galactic center resulted from a pure turbulent deflagration of a Chandrasekhar-mass carbon-oxygen white dwarf (WD), an explosion mechanism used for type Iax SNe.

High Energy Astrophysical Phenomena

When is Particle Filtering Efficient for Planning in Partially Observed Linear Dynamical Systems?

no code implementations 10 Jun 2020 Simon S. Du, Wei Hu, Zhiyuan Li, Ruoqi Shen, Zhao Song, Jiajun Wu

Though errors in past actions may affect the future, we are able to bound the number of particles needed so that the long-run reward of the policy based on particle filtering is close to that based on exact inference.

Decision Making
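
The algorithm under analysis is the vanilla bootstrap particle filter. The paper bounds how many particles suffice for planning; the NumPy sketch below is just the standard filter for a linear-Gaussian system, not code from the paper:

```python
import numpy as np

def particle_filter(y, A, C, Q, R, num_particles=1000, seed=0):
    """Bootstrap particle filter for x_{t+1} = A x_t + w_t, y_t = C x_t + v_t,
    with w_t ~ N(0, Q) and v_t ~ N(0, R). Returns the filtered state means."""
    rng = np.random.default_rng(seed)
    T, d = len(y), A.shape[0]
    R_inv = np.linalg.inv(R)
    particles = rng.multivariate_normal(np.zeros(d), Q, size=num_particles)
    means = []
    for t in range(T):
        # Propagate each particle through the dynamics.
        particles = particles @ A.T + rng.multivariate_normal(
            np.zeros(d), Q, size=num_particles)
        # Weight by the observation likelihood N(y_t; C x, R).
        resid = y[t] - particles @ C.T
        logw = -0.5 * np.einsum('ni,ij,nj->n', resid, R_inv, resid)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(w @ particles)
        # Multinomial resampling to avoid weight degeneracy.
        particles = particles[rng.choice(num_particles, num_particles, p=w)]
    return np.array(means)
```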

Semi-supervised Medical Image Classification with Global Latent Mixing

1 code implementation 22 May 2020 Prashnna Kumar Gyawali, Sandesh Ghimire, Pradeep Bajracharya, Zhiyuan Li, Linwei Wang

In this work, we argue that regularizing the global smoothness of neural functions by filling the void in between data points can further improve SSL.

General Classification Image Classification +1
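
The "filling the void" idea is mixup-style interpolation applied in representation space. A generic PyTorch sketch in that spirit (not the paper's exact procedure; `y` is assumed to hold one-hot or soft targets):

```python
import torch

def latent_mixup(h, y, alpha=1.0):
    """Convexly mix hidden representations h (and their soft targets y)
    with a Beta(alpha, alpha) coefficient, smoothing the learned function
    in between data points."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(h.size(0))
    return lam * h + (1 - lam) * h[perm], lam * y + (1 - lam) * y[perm]
```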

Progressive Learning and Disentanglement of Hierarchical Representations

1 code implementation ICLR 2020 Zhiyuan Li, Jaideep Vitthal Murkute, Prashnna Kumar Gyawali, Linwei Wang

By drawing on the respective advantages of hierarchical representation learning and progressive learning, this is, to our knowledge, the first attempt to improve disentanglement by progressively growing the capacity of a VAE to learn hierarchical representations.

Disentanglement

Enhanced Convolutional Neural Tangent Kernels

no code implementations 3 Nov 2019 Zhiyuan Li, Ruosong Wang, Dingli Yu, Simon S. Du, Wei Hu, Ruslan Salakhutdinov, Sanjeev Arora

An exact algorithm to compute the CNTK (Arora et al., 2019) yielded the finding that the classification accuracy of the CNTK on CIFAR-10 is within 6-7% of that of the corresponding CNN architecture (the best figure being around 78%), which is interesting performance for a fixed kernel.

Data Augmentation

An Exponential Learning Rate Schedule for Deep Learning

no code implementations ICLR 2020 Zhiyuan Li, Sanjeev Arora

This paper suggests that the phenomenon may be due to Batch Normalization or BN, which is ubiquitous and provides benefits in optimization and generalization across all standard architectures.
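
Schematically (exact constants and the momentum-aware statement are in the paper): because BN makes the loss scale-invariant in the weights, SGD with a fixed learning rate $\eta$ and weight decay $\lambda$ matches, up to a rescaling of the iterates, weight-decay-free SGD with an exponentially growing schedule of roughly

```latex
\tilde{\eta}_t \;\propto\; (1 - \lambda \eta)^{-2t}\, \eta .
```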

Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks

3 code implementations ICLR 2020 Sanjeev Arora, Simon S. Du, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang, Dingli Yu

On the VOC07 testbed for few-shot image classification tasks on ImageNet with transfer learning (Goyal et al., 2019), replacing the linear SVM currently used with a Convolutional NTK SVM consistently improves performance.

Few-Shot Image Classification General Classification +2

Improving Disentangled Representation Learning with the Beta Bernoulli Process

1 code implementation 3 Sep 2019 Prashnna Kumar Gyawali, Zhiyuan Li, Cameron Knight, Sandesh Ghimire, B. Milan Horacek, John Sapp, Linwei Wang

We note that the independence within and the complexity of the latent density are two different properties we constrain when regularizing the posterior density: while the former promotes the disentangling ability of VAE, the latter -- if overly limited -- creates an unnecessary competition with the data reconstruction objective in VAE.

Decision Making Representation Learning

Semi-Supervised Learning by Disentangling and Self-Ensembling Over Stochastic Latent Space

1 code implementation 22 Jul 2019 Prashnna Kumar Gyawali, Zhiyuan Li, Sandesh Ghimire, Linwei Wang

In this work, we hypothesize -- from the generalization perspective -- that self-ensembling can be improved by exploiting the stochasticity of a disentangled latent space.

Data Augmentation Multi-Label Classification +1

Feature-level and Model-level Audiovisual Fusion for Emotion Recognition in the Wild

no code implementations 6 Jun 2019 Jie Cai, Zibo Meng, Ahmed Shehab Khan, Zhiyuan Li, James O'Reilly, Shizhong Han, Ping Liu, Min Chen, Yan Tong

In this paper, we propose two strategies to fuse information extracted from different modalities, i.e., audio and visual.

Emotion Recognition

Simple and Effective Regularization Methods for Training on Noisily Labeled Data with Generalization Guarantee

no code implementations ICLR 2020 Wei Hu, Zhiyuan Li, Dingli Yu

Over-parameterized deep neural networks trained by simple first-order methods are known to be able to fit any labeling of data.
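
One simple regularizer in this vein is an L2 penalty toward the initialization, which limits how far the network can drift in order to fit noisy labels. A minimal PyTorch sketch (illustrative; see the paper for its exact methods and guarantees):

```python
import copy
import torch
import torch.nn.functional as F

def train_step(model, init_model, x, y_noisy, optimizer, lam=1e-3):
    """One optimization step with an L2 penalty toward the initialization."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y_noisy)
    reg = sum((p - p0.detach()).pow(2).sum()
              for p, p0 in zip(model.parameters(), init_model.parameters()))
    (loss + lam * reg).backward()
    optimizer.step()

# Snapshot the initialization once, before any training:
# init_model = copy.deepcopy(model)
```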

The role of over-parametrization in generalization of neural networks

1 code implementation ICLR 2019 Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, Nathan Srebro

Despite existing work on ensuring generalization of neural networks in terms of scale sensitive complexity measures, such as norms, margin and sharpness, these complexity measures do not offer an explanation of why neural networks generalize better with over-parametrization.

On Exact Computation with an Infinitely Wide Neural Net

2 code implementations NeurIPS 2019 Sanjeev Arora, Simon S. Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang

An attraction of such ideas is that a pure kernel-based method is used to capture the power of a fully-trained deep net of infinite width.

Gaussian Processes
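
The kernel in question is the Neural Tangent Kernel (NTK). For a network $f(x;\theta)$, training by gradient flow in the infinite-width limit reduces to kernel regression with

```latex
\Theta(x, x') = \mathbb{E}_{\theta \sim \mathrm{init}}
\left[ \left\langle \frac{\partial f(x;\theta)}{\partial \theta},\;
\frac{\partial f(x';\theta)}{\partial \theta} \right\rangle \right],
\qquad
f_{\mathrm{out}}(x) = \Theta(x, X)\, \Theta(X, X)^{-1}\, y ,
```

and the paper's contribution is an exact, efficient computation of this kernel for convolutional architectures (the CNTK).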

Identity-Free Facial Expression Recognition using conditional Generative Adversarial Network

no code implementations 19 Mar 2019 Jie Cai, Zibo Meng, Ahmed Shehab Khan, Zhiyuan Li, James O'Reilly, Shizhong Han, Yan Tong

A novel Identity-Free conditional Generative Adversarial Network (IF-GAN) was proposed for Facial Expression Recognition (FER) to explicitly reduce high inter-subject variations caused by identity-related facial attributes, e.g., age, race, and gender.

Facial Expression Recognition

Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks

no code implementations 24 Jan 2019 Sanjeev Arora, Simon S. Du, Wei Hu, Zhiyuan Li, Ruosong Wang

This paper analyzes training and generalization for a simple 2-layer ReLU net with random initialization, and provides the following improvements over recent works: (i) using a tighter characterization of training speed than recent papers, an explanation of why training a neural net with random labels leads to slower training, as originally observed in [Zhang et al., ICLR'17].
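
The analysis runs through the Gram matrix $H^\infty$ of the infinite-width kernel, which for unit-norm inputs has a closed form; the paper's data-dependent generalization bound then scales as $\sqrt{2\, y^\top (H^\infty)^{-1} y / n}$:

```latex
H^{\infty}_{ij}
= \mathbb{E}_{w \sim \mathcal{N}(0, I)}
\Big[ x_i^{\top} x_j\; \mathbb{1}\big\{ w^{\top} x_i \ge 0,\; w^{\top} x_j \ge 0 \big\} \Big]
= \frac{x_i^{\top} x_j \big( \pi - \arccos(x_i^{\top} x_j) \big)}{2\pi} .
```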

Probabilistic Attribute Tree in Convolutional Neural Networks for Facial Expression Recognition

no code implementations 17 Dec 2018 Jie Cai, Zibo Meng, Ahmed Shehab Khan, Zhiyuan Li, James O'Reilly, Yan Tong

In this paper, we propose a novel Probabilistic Attribute Tree-CNN (PAT-CNN) to explicitly deal with the large intra-class variations caused by identity-related attributes, e.g., age, race, and gender.

Facial Expression Recognition

Theoretical Analysis of Auto Rate-Tuning by Batch Normalization

no code implementations ICLR 2019 Sanjeev Arora, Zhiyuan Li, Kaifeng Lyu

Batch Normalization (BN) has become a cornerstone of deep learning across diverse architectures, appearing to help optimization as well as generalization.
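
The mechanism rests on scale invariance: when BN makes the loss invariant to the scale of a weight vector, i.e. $L(c\,w) = L(w)$ for all $c > 0$, two standard consequences follow,

```latex
\langle \nabla_w L(w),\, w \rangle = 0,
\qquad
\nabla_w L(c\,w) = \tfrac{1}{c}\, \nabla_w L(w),
```

so the effective step size behaves like $\eta / \lVert w \rVert^2$ and adjusts itself automatically as the weight norm grows.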

Deep Template Matching for Offline Handwritten Chinese Character Recognition

no code implementations 15 Nov 2018 Zhiyuan Li, Min Jin, Qi Wu, Huaxiang Lu

Mirroring their remarkable achievements in many computer vision tasks, convolutional neural networks (CNNs) provide a highly successful end-to-end solution for handwritten Chinese character recognition (HCCR).

Offline Handwritten Chinese Character Recognition Template Matching

Towards Understanding the Role of Over-Parametrization in Generalization of Neural Networks

2 code implementations 30 May 2018 Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, Nathan Srebro

Despite existing work on ensuring generalization of neural networks in terms of scale sensitive complexity measures, such as norms, margin and sharpness, these complexity measures do not offer an explanation of why neural networks generalize better with over-parametrization.

Online Improper Learning with an Approximation Oracle

no code implementations NeurIPS 2018 Elad Hazan, Wei Hu, Yuanzhi Li, Zhiyuan Li

We revisit the question of reducing online learning to approximate optimization of the offline problem.

online learning

Building Efficient CNN Architecture for Offline Handwritten Chinese Character Recognition

no code implementations 4 Apr 2018 Zhiyuan Li, Nanjun Teng, Min Jin, Huaxiang Lu

Methods based on deep convolutional networks have brought great breakthroughs in image classification, providing an end-to-end solution to the handwritten Chinese character recognition (HCCR) problem by learning discriminative features automatically.

Offline Handwritten Chinese Character Recognition

Optimizing Filter Size in Convolutional Neural Networks for Facial Action Unit Recognition

no code implementations CVPR 2018 Shizhong Han, Zibo Meng, Zhiyuan Li, James O'Reilly, Jie Cai, Xiao-Feng Wang, Yan Tong

Most recently, Convolutional Neural Networks (CNNs) have shown promise for facial AU recognition, where predefined and fixed convolution filter sizes are employed.

Facial Action Unit Detection

Solving Marginal MAP Problems with NP Oracles and Parity Constraints

no code implementations NeurIPS 2016 Yexiang Xue, Zhiyuan Li, Stefano Ermon, Carla P. Gomes, Bart Selman

Arising from many applications at the intersection of decision making and machine learning, Marginal Maximum A Posteriori (Marginal MAP) Problems unify the two main classes of inference, namely maximization (optimization) and marginal inference (counting), and are believed to have higher complexity than both of them.

BIG-bench Machine Learning Decision Making
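
Marginal MAP mixes the two inference modes, maximizing over decision variables while summing out the rest:

```latex
x^{*} = \arg\max_{x} \sum_{y} P(x, y) .
```

The paper's approach replaces the intractable inner sum with optimization queries (NP oracles) subject to random parity (XOR) constraints, which yields counting estimates with probabilistic guarantees.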

Learning in Games: Robustness of Fast Convergence

no code implementations NeurIPS 2016 Dylan J. Foster, Zhiyuan Li, Thodoris Lykouris, Karthik Sridharan, Eva Tardos

We show that learning algorithms satisfying a $\textit{low approximate regret}$ property experience fast convergence to approximate optimality in a large class of repeated games.
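
Stated schematically for an algorithm over $N$ actions (constants and the exact form are in the paper): low approximate regret requires, for every $\epsilon > 0$,

```latex
\sum_{t=1}^{T} \langle \ell_t,\, p_t \rangle
\;-\; (1 + \epsilon) \min_{i} \sum_{t=1}^{T} \ell_{t,i}
\;\le\; O\!\left( \frac{\log N}{\epsilon} \right) .
```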
