Search Results for author: Jialin Mao

Found 6 papers, 4 papers with code

A picture of the space of typical learnable tasks

2 code implementations • 31 Oct 2022 • Rahul Ramesh, Jialin Mao, Itay Griniasty, Rubing Yang, Han Kheng Teoh, Mark Transtrum, James P. Sethna, Pratik Chaudhari

We develop information geometric techniques to understand the representations learned by deep networks when they are trained on different tasks using supervised, meta-, semi-supervised and contrastive learning.

Contrastive Learning • Meta-Learning • +1

Scalable and Efficient Training of Large Convolutional Neural Networks with Differential Privacy

1 code implementation • 21 May 2022 • Zhiqi Bu, Jialin Mao, Shiyun Xu

Large convolutional neural networks (CNNs) can be difficult to train in the differentially private (DP) regime, since the optimization algorithms require a computationally expensive operation known as per-sample gradient clipping.
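The per-sample clipping mentioned above is the standard bottleneck in DP-SGD-style training: every example's gradient must be computed and clipped individually before aggregation. Below is a minimal, illustrative sketch of that operation in plain PyTorch; it is not the paper's optimized algorithm, and the model, clipping norm C, and noise multiplier sigma are arbitrary choices for the example.

```python
# Hedged sketch of per-sample gradient clipping as in DP-SGD (illustration only).
import torch

model = torch.nn.Linear(10, 2)
loss_fn = torch.nn.CrossEntropyLoss()
x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
C, sigma = 1.0, 0.5  # clipping norm and noise multiplier (hypothetical values)

params = [p for p in model.parameters() if p.requires_grad]
summed = [torch.zeros_like(p) for p in params]

# Naive per-sample loop: one backward pass per example, which is what makes
# the operation expensive for large CNNs.
for i in range(x.shape[0]):
    model.zero_grad()
    loss_fn(model(x[i:i + 1]), y[i:i + 1]).backward()
    grads = [p.grad.detach() for p in params]
    norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = min(1.0, C / (norm + 1e-12))  # clip each sample's gradient to norm C
    for s, g in zip(summed, grads):
        s.add_(g * scale)

# Add Gaussian noise and average to obtain the private gradient estimate.
private_grads = [(s + sigma * C * torch.randn_like(s)) / x.shape[0] for s in summed]
```

The per-example backward loop here is the expensive step; scalable implementations avoid materializing per-sample gradients in this naive way.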

Does the Data Induce Capacity Control in Deep Learning?

1 code implementation • 27 Oct 2021 • Rubing Yang, Jialin Mao, Pratik Chaudhari

This structure is mirrored in a network trained on this data: we show that the Hessian and the Fisher Information Matrix (FIM) have eigenvalues that are spread uniformly over exponentially large ranges.

Generalization Bounds
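As a toy illustration of the eigenvalue spread described in the abstract above, the snippet below computes the Fisher Information Matrix (FIM) of a small logistic-regression model on synthetic, anisotropic data and reports how many orders of magnitude its spectrum spans. This is a sketch for intuition only, not the paper's experimental setup; the data scaling and dimensions are arbitrary assumptions.

```python
# Toy check of a "wide" FIM eigenspectrum (illustration only).
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 30
X = rng.standard_normal((n, d)) * np.logspace(0, -3, d)  # anisotropic inputs (arbitrary choice)
w = rng.standard_normal(d)

p = 1.0 / (1.0 + np.exp(-X @ w))            # model probabilities
F = (X * (p * (1 - p))[:, None]).T @ X / n  # FIM for binary logistic regression

eigvals = np.linalg.eigvalsh(F)
eigvals = eigvals[eigvals > 0]              # guard against numerical round-off
print(f"FIM eigenvalues span ~{np.log10(eigvals.max() / eigvals.min()):.1f} orders of magnitude")
```

Eigenvalues that are roughly uniform on a log scale across many decades match the spread the abstract describes for trained networks.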

Loss Functions for Multiset Prediction

no code implementations • ICLR 2018 • Sean Welleck, Zixin Yao, Yu Gai, Jialin Mao, Zheng Zhang, Kyunghyun Cho

In this paper, we propose a novel multiset loss function by viewing this problem from the perspective of sequential decision making.

Decision Making • Reinforcement Learning (RL)
