Search Results for author: Peizhong Ju

Found 9 papers, 1 paper with code

Non-asymptotic Convergence of Discrete-time Diffusion Models: New Approach and Improved Rate

no code implementations • 21 Feb 2024 • Yuchen Liang, Peizhong Ju, Yingbin Liang, Ness Shroff

In this paper, we establish the convergence guarantee for substantially larger classes of distributions under discrete-time diffusion models and further improve the convergence rate for distributions with bounded support.

Denoising
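For context on the discrete-time setting referenced in the excerpt above, the sketch below runs a standard DDPM-style reverse (denoising) pass with a placeholder noise predictor. The linear beta schedule, the `predict_noise` stub, and all parameter values are illustrative assumptions, not the construction or analysis from the paper.

```python
import numpy as np

def predict_noise(x_t, t):
    # Hypothetical stand-in for a trained noise-prediction network eps_theta(x_t, t).
    return np.zeros_like(x_t)

def ddpm_reverse_sample(dim=2, T=1000, seed=0):
    """One full reverse pass of a discrete-time (DDPM-style) diffusion model.

    Illustrative only: a linear beta schedule and a zero noise predictor,
    not the setting analyzed in the paper.
    """
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, T)          # forward noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal(dim)                # start from pure Gaussian noise
    for t in range(T - 1, -1, -1):
        eps = predict_noise(x, t)
        # Posterior mean of x_{t-1} given x_t under the DDPM parameterization.
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(dim) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x

print(ddpm_reverse_sample())
```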

Achieving Sample and Computational Efficient Reinforcement Learning by Action Space Reduction via Grouping

no code implementations • 22 Jun 2023 • Yining Li, Peizhong Ju, Ness Shroff

To address this issue, we formulate a general optimization problem for determining the optimal grouping strategy, which strikes a balance between performance loss and sample/computational complexity.
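The grouping idea in the excerpt above can be pictured concretely: partition a large action space into groups, act only through one representative per group, and accept some performance loss in exchange for a much smaller effective action space. The toy bandit-style sketch below is a hypothetical illustration of that trade-off, not the optimization problem formulated in the paper; the binning rule and representative selection are arbitrary choices.

```python
import numpy as np

def group_actions(action_values, num_groups, seed=0):
    """Reduce a large action space by grouping actions and keeping one
    representative per group. Illustrative sketch only: equal-width binning
    and random representatives, not the paper's optimal grouping strategy."""
    rng = np.random.default_rng(seed)
    # Assign each action to a group (equal-width bins over the value range).
    bins = np.linspace(action_values.min(), action_values.max(), num_groups + 1)
    groups = np.clip(np.digitize(action_values, bins) - 1, 0, num_groups - 1)
    # Pick one representative action per non-empty group.
    reps = [rng.choice(np.flatnonzero(groups == g))
            for g in range(num_groups) if np.any(groups == g)]
    # Performance loss: best value overall vs. best value among representatives.
    loss = action_values.max() - action_values[reps].max()
    return reps, loss

values = np.random.default_rng(1).normal(size=10_000)   # toy per-action values
reps, loss = group_actions(values, num_groups=50)
print(f"{len(reps)} representative actions, performance loss {loss:.4f}")
```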

Generalization Performance of Transfer Learning: Overparameterized and Underparameterized Regimes

no code implementations • 8 Jun 2023 • Peizhong Ju, Sen Lin, Mark S. Squillante, Yingbin Liang, Ness B. Shroff

For example, when the total number of features in the source task's learning model is fixed, we show that it is more advantageous to allocate a greater number of redundant features to the task-specific part rather than the common part.

Transfer Learning
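The excerpt above concerns how a fixed feature budget is split between a common block (shared with the source task) and a task-specific block. The sketch below sets up a toy version of that split: the common coefficients are estimated on source data and reused, and the task-specific coefficients are then fit on the target task by a minimum-norm solution. All dimensions and the Gaussian data model are assumptions for illustration; it is not claimed to reproduce the paper's result.

```python
import numpy as np

def transfer_test_error(p_common, p_specific, p_signal=10,
                        n_source=200, n_target=40, n_test=1000, seed=0):
    """Toy common / task-specific feature split for transfer learning (sketch)."""
    rng = np.random.default_rng(seed)
    # True coefficients: only the first p_signal entries of each block carry
    # signal; the remaining features are redundant.
    w_common = np.zeros(p_common)
    w_common[:min(p_signal, p_common)] = 1.0 / np.sqrt(p_signal)
    w_spec = np.zeros(p_specific)
    w_spec[:min(p_signal, p_specific)] = 1.0 / np.sqrt(p_signal)

    def sample(n):
        Xc = rng.standard_normal((n, p_common))
        Xs = rng.standard_normal((n, p_specific))
        y = Xc @ w_common + Xs @ w_spec + 0.1 * rng.standard_normal(n)
        return Xc, Xs, y

    # Source task: estimate the common block (the source has no specific part here).
    Xc_src = rng.standard_normal((n_source, p_common))
    y_src = Xc_src @ w_common + 0.1 * rng.standard_normal(n_source)
    w_common_hat = np.linalg.pinv(Xc_src) @ y_src

    # Target task: fit the task-specific block on the residual, minimum-norm.
    Xc_tgt, Xs_tgt, y_tgt = sample(n_target)
    w_spec_hat = np.linalg.pinv(Xs_tgt) @ (y_tgt - Xc_tgt @ w_common_hat)

    Xc_te, Xs_te, y_te = sample(n_test)
    pred = Xc_te @ w_common_hat + Xs_te @ w_spec_hat
    return np.mean((pred - y_te) ** 2)

# Same total number of features, redundant capacity allocated differently.
for p_common, p_specific in [(150, 50), (50, 150)]:
    err = transfer_test_error(p_common, p_specific)
    print(f"common={p_common:3d}  specific={p_specific:3d}  test MSE={err:.4f}")
```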

Achieving Fairness in Multi-Agent Markov Decision Processes Using Reinforcement Learning

no code implementations • 1 Jun 2023 • Peizhong Ju, Arnob Ghosh, Ness B. Shroff

Fairness plays a crucial role in various multi-agent systems (e.g., communication networks, financial markets, etc.).

Fairness • Offline RL • +2

Theoretical Characterization of the Generalization Performance of Overfitted Meta-Learning

no code implementations • 9 Apr 2023 • Peizhong Ju, Yingbin Liang, Ness B. Shroff

However, due to features unique to meta-learning, such as task-specific gradient-descent inner training and the diversity/fluctuation of the ground-truth signals across training tasks, we find new and interesting properties that do not exist in single-task linear regression.

Meta-Learning • regression
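The task-specific inner training mentioned in the excerpt above can be sketched as a few gradient-descent steps per task starting from a shared meta-initialization, followed by evaluation on that task. The MAML-style structure, linear-regression tasks, and all hyperparameters below are assumptions for illustration, not the model analyzed in the paper.

```python
import numpy as np

def inner_adapt(w_meta, X, y, steps=5, lr=0.1):
    """Task-specific inner training: a few gradient-descent steps on one task,
    starting from the shared meta-initialization (toy linear-regression version)."""
    w = w_meta.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)        # squared-loss gradient
        w -= lr * grad
    return w

def meta_eval(num_tasks=20, dim=10, n_per_task=5, seed=0):
    """Evaluate one fixed meta-initialization across tasks whose ground-truth
    signals fluctuate around a shared center (a toy stand-in for task diversity)."""
    rng = np.random.default_rng(seed)
    w_meta = np.zeros(dim)                        # toy meta-initialization
    center = rng.standard_normal(dim)
    losses = []
    for _ in range(num_tasks):
        w_true = center + 0.5 * rng.standard_normal(dim)   # task-level fluctuation
        X = rng.standard_normal((n_per_task, dim))
        y = X @ w_true + 0.1 * rng.standard_normal(n_per_task)
        w_task = inner_adapt(w_meta, X, y)
        X_test = rng.standard_normal((200, dim))
        losses.append(np.mean((X_test @ w_task - X_test @ w_true) ** 2))
    return float(np.mean(losses))

print(f"average post-adaptation test MSE: {meta_eval():.4f}")
```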

Theory on Forgetting and Generalization of Continual Learning

no code implementations • 12 Feb 2023 • Sen Lin, Peizhong Ju, Yingbin Liang, Ness Shroff

In particular, there is a lack of understanding of which factors are important and how they affect "catastrophic forgetting" and generalization performance.

Continual Learning
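Catastrophic forgetting, mentioned in the excerpt above, is commonly quantified as the drop in a task's performance between the moment it was learned and the end of training on later tasks. The sketch below computes that gap for a sequence of toy linear-regression tasks trained one after another; the tasks, model, and hyperparameters are assumptions for illustration, not the setting analyzed in the paper.

```python
import numpy as np

def continual_forgetting(num_tasks=3, dim=20, n_per_task=15, epochs=200,
                         lr=0.05, seed=0):
    """Train one linear model sequentially on several tasks and measure
    forgetting: each task's error at the end of the sequence minus its error
    right after it was learned. Toy illustration only."""
    rng = np.random.default_rng(seed)
    tasks = []
    for _ in range(num_tasks):
        w_true = rng.standard_normal(dim) / np.sqrt(dim)
        X = rng.standard_normal((n_per_task, dim))
        y = X @ w_true + 0.05 * rng.standard_normal(n_per_task)
        tasks.append((X, y))

    def mse(w, task):
        X, y = task
        return float(np.mean((X @ w - y) ** 2))

    w = np.zeros(dim)
    errs_after_learning = []
    for X, y in tasks:                       # sequential training, no replay
        for _ in range(epochs):
            w -= lr * X.T @ (X @ w - y) / len(y)
        errs_after_learning.append(mse(w, (X, y)))

    final_errs = [mse(w, t) for t in tasks]
    return [f - a for a, f in zip(errs_after_learning, final_errs)]

print("per-task forgetting:", [round(f, 4) for f in continual_forgetting()])
```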

On the Generalization Power of the Overfitted Three-Layer Neural Tangent Kernel Model

no code implementations • 4 Jun 2022 • Peizhong Ju, Xiaojun Lin, Ness B. Shroff

Our upper bound reveals that, between the two hidden layers, the test error descends faster with respect to the number of neurons in the second hidden layer (the one closer to the output) than with respect to that in the first hidden layer (the one closer to the input).

On the Generalization Power of Overfitted Two-Layer Neural Tangent Kernel Models

no code implementations • 9 Mar 2021 • Peizhong Ju, Xiaojun Lin, Ness B. Shroff

Specifically, for a class of learnable functions, we provide a new upper bound of the generalization error that approaches a small limiting value, even when the number of neurons $p$ approaches infinity.
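As a rough picture of the overfitted regime described above, the sketch below uses a random ReLU feature model as a crude proxy for a two-layer NTK-style model: the first layer is fixed at random, and the output weights are the minimum-norm interpolator of the training data, evaluated as the number of neurons p grows. The ReLU random-feature construction, target function, and dimensions are assumptions for illustration; this is not the paper's exact NTK model or its bound.

```python
import numpy as np

def overfitted_rf_test_error(p, d=5, n_train=40, n_test=2000, seed=0):
    """Minimum-norm interpolation with p random ReLU features, a toy proxy
    for an overfitted two-layer NTK-style model (illustrative assumptions)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((d, p)) / np.sqrt(d)       # fixed random first layer

    def features(X):
        return np.maximum(X @ W, 0.0) / np.sqrt(p)     # ReLU random features

    def target(X):
        return np.sin(X[:, 0]) + 0.5 * X[:, 1]          # a simple learnable target

    X_tr = rng.standard_normal((n_train, d))
    y_tr = target(X_tr) + 0.1 * rng.standard_normal(n_train)
    X_te = rng.standard_normal((n_test, d))

    # Minimum-norm output weights that exactly fit (overfit) the training data.
    a = np.linalg.pinv(features(X_tr)) @ y_tr
    return float(np.mean((features(X_te) @ a - target(X_te)) ** 2))

for p in [100, 1000, 10000]:
    print(f"p={p:6d}  test MSE={overfitted_rf_test_error(p):.4f}")
```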
