Search Results for author: Jacky Y. Zhang

Found 6 papers, 2 papers with code

SABAL: Sparse Approximation-based Batch Active Learning

no code implementations · 29 Sep 2021 · Maohao Shen, Bowen Jiang, Jacky Y. Zhang, Oluwasanmi O Koyejo

We propose a novel and general framework (i.e., SABAL) that formulates batch active learning as a sparse approximation problem.

Active Learning
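
This listing doesn't spell out SABAL's formulation, but one common way to cast batch selection as sparse approximation is to greedily pick a small set of pool points whose features best approximate a target (e.g., the pool mean), matching-pursuit style. The sketch below is a hypothetical illustration of that general idea, not SABAL's actual algorithm; the `features` array (e.g., network embeddings) is an assumed input.

```python
import numpy as np

def greedy_sparse_batch(features, batch_size):
    """Hypothetical sketch: choose a batch whose features best approximate
    the mean feature of the unlabeled pool, via a matching-pursuit-style
    greedy sparse approximation. Not SABAL itself.

    features: (n_pool, d) array, e.g. penultimate-layer embeddings.
    """
    target = features.mean(axis=0)           # "signal" to approximate
    residual = target.copy()
    selected = []
    for _ in range(batch_size):
        scores = features @ residual          # alignment with the residual
        scores[selected] = -np.inf            # never pick a point twice
        i = int(np.argmax(scores))
        selected.append(i)
        atom = features[i]                    # remove the chosen atom's
        coef = atom @ residual / (atom @ atom + 1e-12)  # contribution
        residual = residual - coef * atom
    return selected

# Usage: indices = greedy_sparse_batch(embeddings, batch_size=32)
```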

Scalable Robust Federated Learning with Provable Security Guarantees

no code implementations · 29 Sep 2021 · Andrew Liu, Jacky Y. Zhang, Nishant Kumar, Dakshita Khurana, Oluwasanmi O Koyejo

Federated averaging, the most popular aggregation approach in federated learning, is known to be vulnerable to failures and adversarial updates from clients that wish to disrupt training.

Federated Learning
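
For context, federated averaging is just a data-size-weighted mean of client updates, which is why a single malicious client can shift the aggregate arbitrarily. The numpy sketch below shows the vanilla aggregation next to a classic robust alternative (coordinate-wise median); it is a toy illustration with made-up client arrays, not the paper's secure aggregation scheme.

```python
import numpy as np

def fedavg(updates, weights):
    """Vanilla federated averaging: weighted mean of client updates.
    One client sending a huge update can dominate the result."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, np.stack(updates), axes=1)

def coordinate_median(updates):
    """A classic robust aggregator (not this paper's method): the
    coordinate-wise median tolerates a minority of adversarial clients."""
    return np.median(np.stack(updates), axis=0)

# Toy example: one adversarial client tries to disrupt training.
honest = [np.ones(4), 1.1 * np.ones(4), 0.9 * np.ones(4)]
attack = [1e6 * np.ones(4)]
print(fedavg(honest + attack, [1, 1, 1, 1]))   # blown up by the attacker
print(coordinate_median(honest + attack))      # stays near 1.0
```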

Does Adversarial Transferability Indicate Knowledge Transferability?

no code implementations · 28 Sep 2020 · Kaizhao Liang, Jacky Y. Zhang, Oluwasanmi O Koyejo, Bo Li

Despite the immense success that deep neural networks (DNNs) have achieved, adversarial examples, which are perturbed inputs crafted to mislead DNNs into making mistakes, have recently raised serious concerns.

Transfer Learning
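
As a concrete instance of the "perturbed inputs" the abstract refers to, the fast gradient sign method (FGSM) is the standard textbook construction; it is not specific to this paper. A minimal PyTorch sketch, assuming a classifier `model` and a labeled batch `x, y`:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """Fast gradient sign method: perturb x in the direction that
    increases the loss, within an L-infinity ball of radius eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep a valid image range
```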

Bayesian Coresets: Revisiting the Nonconvex Optimization Perspective

1 code implementation · 1 Jul 2020 · Jacky Y. Zhang, Rajiv Khanna, Anastasios Kyrillidis, Oluwasanmi Koyejo

Bayesian coresets have emerged as a promising approach for implementing scalable Bayesian inference.

Bayesian Inference

Uncovering the Connections Between Adversarial Transferability and Knowledge Transferability

2 code implementations · 25 Jun 2020 · Kaizhao Liang, Jacky Y. Zhang, Boxin Wang, Zhuolin Yang, Oluwasanmi Koyejo, Bo Li

Knowledge transferability, or transfer learning, has been widely adopted to allow a pre-trained model in the source domain to be effectively adapted to downstream tasks in the target domain.

Transfer Learning

Learning Sparse Distributions using Iterative Hard Thresholding

no code implementations · NeurIPS 2019 · Jacky Y. Zhang, Rajiv Khanna, Anastasios Kyrillidis, Oluwasanmi Koyejo

Iterative hard thresholding (IHT) is a projected gradient descent algorithm known to achieve state-of-the-art performance for a wide range of structured estimation problems, such as sparse inference.
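
Since the abstract describes IHT as projected gradient descent, a minimal sketch is easy to give: take a gradient step, then project onto the sparsity constraint by keeping the k largest-magnitude coordinates. The objective below is a placeholder least-squares instance with a fixed step size, not the paper's sparse-distribution setting.

```python
import numpy as np

def hard_threshold(x, k):
    """Projection onto k-sparse vectors: zero out all but the
    k largest-magnitude entries."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def iht(A, b, k, step=None, iters=200):
    """Iterative hard thresholding for min ||Ax - b||^2 s.t. ||x||_0 <= k.
    Plain least-squares instance; the paper applies the idea to learning
    sparse distributions, which is not reproduced here."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # safe step from spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = hard_threshold(x - step * grad, k)  # gradient step + projection
    return x
```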
