Search Results for author: Zhuoning Yuan

Found 10 papers, 4 papers with code

Accelerating Deep Learning with Millions of Classes

no code implementations ECCV 2020 Zhuoning Yuan, Zhishuai Guo, Xiaotian Yu, Xiaoyu Wang, Tianbao Yang

In our experiments, we demonstrate that the proposed framework is able to train deep learning models with millions of classes and achieves above a 10× speedup compared to existing approaches.

Classification General Classification +1

Memory-based Optimization Methods for Model-Agnostic Meta-Learning

no code implementations 9 Jun 2021 Bokun Wang, Zhuoning Yuan, Yiming Ying, Tianbao Yang

Existing algorithms for MAML are based on the "episode" idea by sampling a number of tasks and a number of data points for each sampled task at each iteration for updating the meta-model.
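The "episode" idea can be sketched with a toy first-order example (illustrative only: the per-task loss, the learning rates, and the averaged first-order outer update are my assumptions, not the paper's method):

```python
import random

random.seed(0)

def inner_update(meta_w, task_target, lr=0.1):
    # One adaptation step on a toy per-task loss 0.5 * (w - target)^2.
    grad = meta_w - task_target
    return meta_w - lr * grad

meta_w = 0.0
for _ in range(100):
    # "Episode": sample a batch of tasks, then data (here, one target) per task.
    tasks = [random.gauss(1.0, 0.1) for _ in range(5)]
    adapted = [inner_update(meta_w, t) for t in tasks]
    # First-order meta-update: move the meta-model toward the average adapted solution.
    meta_w += 0.5 * (sum(adapted) / len(adapted) - meta_w)
```

Because every outer iteration re-samples fresh tasks and fresh data, each meta-update touches only the current episode; the memory-based methods in this paper target exactly that per-iteration sampling structure.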

Continual Learning Meta-Learning +2

Federated Deep AUC Maximization for Heterogeneous Data with a Constant Communication Complexity

1 code implementation 9 Feb 2021 Zhuoning Yuan, Zhishuai Guo, Yi Xu, Yiming Ying, Tianbao Yang

Deep AUC (area under the ROC curve) Maximization (DAM) has attracted much attention recently due to its great potential for imbalanced data classification.
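The quantity being maximized is the empirical AUC, i.e., the fraction of positive–negative pairs that the model ranks correctly (a minimal stdlib sketch; the function name and tie-handling convention are mine):

```python
def auc(scores_pos, scores_neg):
    # Empirical AUC: correctly ordered positive-negative pairs,
    # with ties counted as half a correct ordering.
    pairs = [(p, n) for p in scores_pos for n in scores_neg]
    wins = sum(p > n for p, n in pairs)
    ties = sum(p == n for p, n in pairs)
    return (wins + 0.5 * ties) / len(pairs)

print(auc([0.9, 0.8], [0.1, 0.2]))  # -> 1.0 (perfect ranking)
```

Because AUC depends only on the relative ordering of positives versus negatives, it is insensitive to class imbalance, which is why DAM is attractive for imbalanced classification.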

Federated Learning

Large-scale Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification

3 code implementations ICCV 2021 Zhuoning Yuan, Yan Yan, Milan Sonka, Tianbao Yang

Our studies demonstrate that the proposed DAM method improves upon optimizing the cross-entropy loss by a large margin, and also outperforms the existing AUC square loss on these medical image classification tasks.
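For context, the existing AUC square loss that the paper compares against penalizes positive–negative score gaps that fall short of a margin (a minimal sketch of that baseline surrogate, not the paper's proposed loss; the margin default is my assumption):

```python
def auc_square_loss(scores_pos, scores_neg, margin=1.0):
    # Pairwise square surrogate for AUC: for each positive-negative pair,
    # penalize (margin - (score_pos - score_neg))^2.
    pairs = [(p, n) for p in scores_pos for n in scores_neg]
    return sum((margin - (p - n)) ** 2 for p, n in pairs) / len(pairs)

print(auc_square_loss([1.0], [0.0]))  # -> 0.0 (gap exactly meets the margin)
```

A known weakness of this square form is that it keeps penalizing pairs even after they are separated by more than the margin, which motivates the paper's more robust surrogate.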

Classification General Classification +3

Advanced Graph and Sequence Neural Networks for Molecular Property Prediction and Drug Discovery

1 code implementation 2 Dec 2020 Zhengyang Wang, Meng Liu, Youzhi Luo, Zhao Xu, Yaochen Xie, Limei Wang, Lei Cai, Qi Qi, Zhuoning Yuan, Tianbao Yang, Shuiwang Ji

Here we develop a suite of comprehensive machine learning methods and tools spanning different computational models, molecular representations, and loss functions for molecular property prediction and drug discovery.

Drug Discovery Molecular Property Prediction

Fast Objective & Duality Gap Convergence for Nonconvex-Strongly-Concave Min-Max Problems

no code implementations 12 Jun 2020 Zhishuai Guo, Zhuoning Yuan, Yan Yan, Tianbao Yang

This paper focuses on stochastic methods for solving smooth non-convex strongly-concave min-max problems, which have received increasing attention due to their potential applications in deep learning (e.g., deep AUC maximization).
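The problem class in the abstract can be written in the standard form (notation mine):

```latex
\min_{x \in \mathbb{R}^d} \; \max_{y \in \mathcal{Y}} \; f(x, y),
```

where $f$ is smooth, possibly non-convex in $x$, and strongly concave in $y$. Deep AUC maximization fits this template once the AUC surrogate loss is rewritten in its min-max form with auxiliary variables playing the role of $y$.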

Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks

1 code implementation ICML 2020 Zhishuai Guo, Mingrui Liu, Zhuoning Yuan, Li Shen, Wei Liu, Tianbao Yang

In this paper, we study distributed algorithms for large-scale AUC maximization with a deep neural network as a predictive model.

Distributed Optimization

Stochastic AUC Maximization with Deep Neural Networks

no code implementations ICLR 2020 Mingrui Liu, Zhuoning Yuan, Yiming Ying, Tianbao Yang

In this paper, we consider stochastic AUC maximization problem with a deep neural network as the predictive model.

Stagewise Training Accelerates Convergence of Testing Error Over SGD

no code implementations NeurIPS 2019 Zhuoning Yuan, Yan Yan, Rong Jin, Tianbao Yang

For convex loss functions and two classes of "nicely-behaved" non-convex objectives that are close to a convex function, we establish faster convergence of stagewise training than vanilla SGD under the PL condition, on both training error and testing error.
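The PL (Polyak–Łojasiewicz) condition referenced here is the standard inequality (my paraphrase of the usual definition): an objective $F$ with minimum value $F^*$ satisfies PL with parameter $\mu > 0$ if

```latex
\frac{1}{2}\,\big\|\nabla F(w)\big\|^2 \;\ge\; \mu \left( F(w) - F^* \right) \quad \text{for all } w,
```

which yields linear convergence of gradient descent without requiring convexity, making it a natural assumption for the near-convex non-convex objectives considered in this paper.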

Universal Stagewise Learning for Non-Convex Problems with Convergence on Averaged Solutions

no code implementations ICLR 2019 Zaiyi Chen, Zhuoning Yuan, Jin-Feng Yi, Bo-Wen Zhou, Enhong Chen, Tianbao Yang

For example, there is still a lack of theories of convergence for SGD and its variants that use stagewise step size and return an averaged solution in practice.
