Search Results for author: Zhenhuan Yang

Found 10 papers, 6 papers with code

Robust COVID-19 Detection in CT Images with CLIP

1 code implementation • 13 Mar 2024 • Li Lin, Yamini Sri Krubha, Zhenhuan Yang, Cheng Ren, Thuc Duy Le, Irene Amerini, Xin Wang, Shu Hu

In the realm of medical imaging, particularly for COVID-19 detection, deep learning models face substantial challenges such as the necessity for extensive computational resources, the paucity of well-annotated datasets, and a significant amount of unlabeled data.

Outlier Robust Adversarial Training

1 code implementation • 10 Sep 2023 • Shu Hu, Zhenhuan Yang, Xin Wang, Yiming Ying, Siwei Lyu

Theoretically, we show that the learning objective of ORAT satisfies the $\mathcal{H}$-consistency in binary classification, which establishes it as a proper surrogate to adversarial 0/1 loss.

Adversarial Attack · Binary Classification
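The outlier-robust idea behind objectives like ORAT can be loosely illustrated with a trimmed, rank-based aggregate loss. This is a sketch for intuition only, not the paper's objective: the function name `ranked_range_loss` and the exact trimming scheme are assumptions.

```python
import numpy as np

def ranked_range_loss(losses, k, m):
    # Sort per-sample losses in decreasing order, discard the k largest
    # (treated as potential outliers), and average the next m - k values.
    s = np.sort(np.asarray(losses, dtype=float))[::-1]
    return s[k:m].mean()
```

Setting k = 0 recovers an average top-m loss; increasing k discards the most extreme per-sample losses, which is the intuition behind robustness to outliers in adversarial training data.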

Fairness-aware Differentially Private Collaborative Filtering

no code implementations • 16 Mar 2023 • Zhenhuan Yang, Yingqiang Ge, Congzhe Su, Dingxian Wang, Xiaoting Zhao, Yiming Ying

Recently, there has been an increasing adoption of differential privacy guided algorithms for privacy-preserving machine learning tasks.

Collaborative Filtering · Fairness +1

Minimax AUC Fairness: Efficient Algorithm with Provable Convergence

1 code implementation • 22 Aug 2022 • Zhenhuan Yang, Yan Lok Ko, Kush R. Varshney, Yiming Ying

We conduct numerical experiments on both synthetic and real-world datasets to validate the effectiveness of the minimax framework and the proposed optimization algorithm.

Decision Making · Fairness +1

Differentially Private SGDA for Minimax Problems

no code implementations • 22 Jan 2022 • Zhenhuan Yang, Shu Hu, Yunwen Lei, Kush R. Varshney, Siwei Lyu, Yiming Ying

We further provide its utility analysis in the nonconvex-strongly-concave setting which is the first-ever-known result in terms of the primal population risk.
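The core update that differentially private SGDA methods build on can be sketched as a noisy descent-ascent step. The clip-then-add-Gaussian-noise recipe shown is the standard Gaussian-mechanism pattern, and every name and signature here is an illustrative assumption, not the paper's algorithm.

```python
import numpy as np

def dp_sgda_step(w, v, grad_w, grad_v, lr_w, lr_v, clip, sigma, rng):
    # One noisy stochastic gradient descent-ascent step: clip each
    # gradient to norm at most `clip`, add Gaussian noise with standard
    # deviation sigma * clip, then descend on the primal variable w and
    # ascend on the dual variable v.
    def privatize(g):
        g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # clip
        return g + rng.normal(0.0, sigma * clip, size=g.shape)  # noise
    w_new = w - lr_w * privatize(grad_w)  # descent on the min player
    v_new = v + lr_v * privatize(grad_v)  # ascent on the max player
    return w_new, v_new
```

The primal population risk analyzed in the paper concerns the w iterates produced by repeating such steps; the noise scale sigma is what a privacy accountant would calibrate to a target (ε, δ).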

Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning

no code implementations • NeurIPS 2021 • Zhenhuan Yang, Yunwen Lei, Puyu Wang, Tianbao Yang, Yiming Ying

A popular approach to handling streaming data in pairwise learning is the online gradient descent (OGD) algorithm, which pairs the current instance with a buffer of previous instances; the buffer must be sufficiently large, so the approach suffers from a scalability issue.

Generalization Bounds · Metric Learning +1
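The buffered OGD scheme described above can be sketched for one concrete choice of pairwise objective. The squared pairwise loss, the FIFO buffering policy, and the function name below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def buffered_ogd(stream, lr, buffer_size, dim):
    # Online gradient descent for pairwise learning: each incoming
    # (x_t, y_t) is paired with every buffered instance and one gradient
    # step is taken on the averaged pairwise squared loss
    # (w @ (x - x') - (y - y'))**2.
    w = np.zeros(dim)
    buf = []
    for x, y in stream:
        if buf:
            grad = np.zeros(dim)
            for xb, yb in buf:
                dx, dy = x - xb, y - yb
                grad += 2.0 * (w @ dx - dy) * dx
            w -= lr * grad / len(buf)
        buf.append((x, y))
        if len(buf) > buffer_size:
            buf.pop(0)  # FIFO eviction; per-step cost grows with the
                        # buffer size, which is the scalability issue
    return w
```

Each step costs O(buffer_size) gradient evaluations, which makes the buffer size the bottleneck the paper's simpler algorithms are designed to avoid.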

Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning

1 code implementation • 23 Nov 2021 • Zhenhuan Yang, Yunwen Lei, Puyu Wang, Tianbao Yang, Yiming Ying

A popular approach to handling streaming data in pairwise learning is the online gradient descent (OGD) algorithm, which pairs the current instance with a buffer of previous instances; the buffer must be sufficiently large, so the approach suffers from a scalability issue.

Generalization Bounds · Metric Learning +1

Stability and Generalization of Stochastic Gradient Methods for Minimax Problems

1 code implementation • 8 May 2021 • Yunwen Lei, Zhenhuan Yang, Tianbao Yang, Yiming Ying

In this paper, we provide a comprehensive generalization analysis of stochastic gradient methods for minimax problems under both convex-concave and nonconvex-nonconcave cases through the lens of algorithmic stability.

Generalization Bounds

Stochastic Hard Thresholding Algorithms for AUC Maximization

1 code implementation • 4 Nov 2020 • Zhenhuan Yang, Baojian Zhou, Yunwen Lei, Yiming Ying

In this paper, we aim to develop stochastic hard thresholding algorithms for the important problem of AUC maximization in imbalanced classification.

Imbalanced Classification
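The hard-thresholding projection at the core of such algorithms keeps only the k largest-magnitude coordinates of the iterate. Below is a minimal sketch of that operator alone; the AUC surrogate loss and the stochastic gradient step it would alternate with are left abstract, and the function name is an assumption.

```python
import numpy as np

def hard_threshold(w, k):
    # Projection onto the set of k-sparse vectors: keep the k
    # largest-magnitude entries of w and zero out the rest.
    out = np.zeros_like(w)
    keep = np.argsort(np.abs(w))[-k:]
    out[keep] = w[keep]
    return out
```

A stochastic hard-thresholding iteration would then repeat: take a stochastic gradient step on the chosen AUC surrogate, and apply `hard_threshold` to re-sparsify the iterate.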

Stability and Optimization Error of Stochastic Gradient Descent for Pairwise Learning

no code implementations • 25 Apr 2019 • Wei Shen, Zhenhuan Yang, Yiming Ying, Xiaoming Yuan

From this fundamental trade-off, we obtain lower bounds for the optimization error of SGD algorithms and the excess expected risk over a class of pairwise losses.

Generalization Bounds · Metric Learning
