Search Results for author: Yaoqing Yang

Found 18 papers, 13 papers with code

Coded Distributed Computing for Inverse Problems

no code implementations NeurIPS 2017 Yaoqing Yang, Pulkit Grover, Soummya Kar

Our experiments with personalized PageRank, performed on real systems and real social networks, show that this ratio can be as large as $10^4$.

Distributed Computing
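The entry above concerns coded distributed computation of inverse problems, with personalized PageRank as the running example. As a point of reference (this is the underlying uncoded computation, not the paper's coding scheme), personalized PageRank can be sketched as a power iteration with teleportation back to a seed node:

```python
import numpy as np

def personalized_pagerank(adj, seed, alpha=0.15, tol=1e-8, max_iter=1000):
    """Power iteration for personalized PageRank.

    adj:   (n, n) adjacency matrix (entry [i, j] = edge i -> j)
    seed:  index of the personalization node
    alpha: teleport probability back to the seed
    """
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    out_deg[out_deg == 0] = 1.0            # avoid divide-by-zero on dangling nodes
    P = adj / out_deg                      # row-stochastic transition matrix
    s = np.zeros(n); s[seed] = 1.0         # teleport mass concentrated on the seed
    r = s.copy()
    for _ in range(max_iter):
        r_next = (1 - alpha) * P.T @ r + alpha * s
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next
    return r
```

Distributing the matrix-vector products in this iteration across unreliable workers is exactly the setting where the paper's coded-computing ideas apply.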

Mining Point Cloud Local Structures by Kernel Correlation and Graph Pooling

1 code implementation CVPR 2018 Yiru Shen, Chen Feng, Yaoqing Yang, Dong Tian

Unlike on images, semantic learning on 3D point clouds using a deep network is challenging due to the naturally unordered data structure.

Point Cloud Registration

FoldingNet: Point Cloud Auto-encoder via Deep Grid Deformation

3 code implementations CVPR 2018 Yaoqing Yang, Chen Feng, Yiru Shen, Dong Tian

Recent deep networks that directly handle points in a point set, e.g., PointNet, have been state-of-the-art for supervised learning tasks on point clouds such as classification and segmentation.

3D Point Cloud Linear Classification General Classification +1

Deep Unsupervised Learning of 3D Point Clouds via Graph Topology Inference and Filtering

no code implementations 11 May 2019 Siheng Chen, Chaojing Duan, Yaoqing Yang, Duanshun Li, Chen Feng, Dong Tian

The experimental results show that (1) the proposed networks outperform the state-of-the-art methods in various tasks; (2) a graph topology can be inferred as auxiliary information without specific supervision on graph topology inference; and (3) graph filtering refines the reconstruction, leading to better performances.

3D Point Cloud Reconstruction General Classification +1

Serverless Straggler Mitigation using Local Error-Correcting Codes

1 code implementation 21 Jan 2020 Vipul Gupta, Dominic Carrano, Yaoqing Yang, Vaishaal Shankar, Thomas Courtade, Kannan Ramchandran

Inexpensive cloud services, such as serverless computing, are often vulnerable to straggling nodes that increase end-to-end latency for distributed computation.

Distributed, Parallel, and Cluster Computing Information Theory

Boundary thickness and robustness in learning models

1 code implementation NeurIPS 2020 Yaoqing Yang, Rajiv Khanna, Yaodong Yu, Amir Gholami, Kurt Keutzer, Joseph E. Gonzalez, Kannan Ramchandran, Michael W. Mahoney

Using these observations, we show that noise-augmentation on mixup training further increases boundary thickness, thereby combating vulnerability to various forms of adversarial attacks and OOD transforms.

Adversarial Defense Data Augmentation
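The boundary-thickness snippet above mentions noise-augmented mixup training. A minimal sketch of that augmentation (illustrative only; the exact noise model and schedule in the paper may differ) combines the standard mixup convex combination with additive Gaussian noise on the mixed input:

```python
import numpy as np

def noisy_mixup(x1, y1, x2, y2, alpha=1.0, noise_std=0.1, rng=None):
    """Mixup of two labeled examples, with Gaussian noise on the mixed input.

    Plain mixup draws lam ~ Beta(alpha, alpha) and forms convex combinations
    of both inputs and their (one-hot) labels; adding input noise is a simple
    way to place training points between and around the classes, which the
    paper argues thickens the decision boundary.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1 - lam) * x2
    x = x + rng.normal(0.0, noise_std, size=x.shape)   # noise augmentation
    y = lam * y1 + (1 - lam) * y2
    return x, y
```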

Improving Semi-supervised Federated Learning by Reducing the Gradient Diversity of Models

1 code implementation 26 Aug 2020 Zhengming Zhang, Yaoqing Yang, Zhewei Yao, Yujun Yan, Joseph E. Gonzalez, Michael W. Mahoney

Replacing BN with the recently proposed Group Normalization (GN) can reduce gradient diversity and improve test accuracy.

Federated Learning
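The entry above proposes replacing BatchNorm with GroupNorm in federated learning. The key property GN provides is that normalization statistics are computed per sample, not per batch, so a client's outputs do not depend on its local batch composition. A minimal numpy sketch of group normalization (no learned scale/shift, which real implementations include):

```python
import numpy as np

def group_norm(x, num_groups, eps=1e-5):
    """Group Normalization (Wu & He, 2018) on an (N, C, H, W) array.

    Statistics are computed per sample within channel groups, so the output
    for one example is independent of the rest of the batch -- unlike
    BatchNorm, whose per-client statistics diverge across non-IID federated
    clients and, the paper argues, inflate gradient diversity.
    """
    n, c, h, w = x.shape
    g = num_groups
    xg = x.reshape(n, g, c // g, h, w)
    mean = xg.mean(axis=(2, 3, 4), keepdims=True)
    var = xg.var(axis=(2, 3, 4), keepdims=True)
    xg = (xg - mean) / np.sqrt(var + eps)
    return xg.reshape(n, c, h, w)
```

Because each sample is normalized independently, normalizing a single example in isolation gives the same result as normalizing it inside a larger batch, which is exactly the batch-independence that makes GN attractive for non-IID clients.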

Self-supervised Spatial Reasoning on Multi-View Line Drawings

1 code implementation CVPR 2022 Siyuan Xiang, Anbang Yang, Yanfei Xue, Yaoqing Yang, Chen Feng

Because self-supervised learning is helpful when large amounts of data are available, we propose two self-supervised learning approaches to improve the baseline performance on the view consistency reasoning and camera pose reasoning tasks of the SPARE3D dataset.

Binary Classification Contrastive Learning +2

Taxonomizing local versus global structure in neural network loss landscapes

1 code implementation NeurIPS 2021 Yaoqing Yang, Liam Hodgkinson, Ryan Theisen, Joe Zou, Joseph E. Gonzalez, Kannan Ramchandran, Michael W. Mahoney

Viewing neural network models in terms of their loss landscapes has a long history in the statistical mechanics approach to learning, and in recent years it has received attention within machine learning proper.

Augmentations in Graph Contrastive Learning: Current Methodological Flaws & Towards Better Practices

no code implementations 5 Nov 2021 Puja Trivedi, Ekdeep Singh Lubana, Yujun Yan, Yaoqing Yang, Danai Koutra

Unsupervised graph representation learning is critical to a wide range of applications where labels may be scarce or expensive to procure.

Contrastive Learning Data Augmentation +5

The Effect of Model Size on Worst-Group Generalization

no code implementations 8 Dec 2021 Alan Pham, Eunice Chan, Vikranth Srivatsa, Dhruba Ghosh, Yaoqing Yang, Yaodong Yu, Ruiqi Zhong, Joseph E. Gonzalez, Jacob Steinhardt

Overparameterization has been shown to result in poor test accuracy on rare subgroups under a variety of settings where subgroup information is known.

Evaluating natural language processing models with generalization metrics that do not need access to any training or testing data

1 code implementation 6 Feb 2022 Yaoqing Yang, Ryan Theisen, Liam Hodgkinson, Joseph E. Gonzalez, Kannan Ramchandran, Charles H. Martin, Michael W. Mahoney

Our analyses consider (I) hundreds of Transformers trained in different settings, in which we systematically vary the amount of data, the model size and the optimization hyperparameters, (II) a total of 51 pretrained Transformers from eight families of Huggingface NLP models, including GPT2, BERT, etc., and (III) a total of 28 existing and novel generalization metrics.

Model Selection

Neurotoxin: Durable Backdoors in Federated Learning

2 code implementations 12 Jun 2022 Zhengming Zhang, Ashwinee Panda, Linyue Song, Yaoqing Yang, Michael W. Mahoney, Joseph E. Gonzalez, Kannan Ramchandran, Prateek Mittal

In this type of attack, the goal of the attacker is to use poisoned updates to implant so-called backdoors into the learned model such that, at test time, the model's outputs can be fixed to a given target for certain inputs.

Backdoor Attack Federated Learning +1

A Three-regime Model of Network Pruning

1 code implementation 28 May 2023 Yefan Zhou, Yaoqing Yang, Arin Chang, Michael W. Mahoney

Our approach uses temperature-like and load-like parameters to model the impact of neural network (NN) training hyperparameters on pruning performance.

Efficient Neural Network Hyperparameter Optimization +1

Temperature Balancing, Layer-wise Weight Analysis, and Neural Network Training

1 code implementation NeurIPS 2023 Yefan Zhou, Tianyu Pang, Keqin Liu, Charles H. Martin, Michael W. Mahoney, Yaoqing Yang

In particular, the learning rate, which can be interpreted as a temperature-like parameter within the statistical mechanics of learning, plays a crucial role in neural network training.

Scheduling
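The entry above treats the learning rate as a temperature-like parameter and uses layer-wise weight analysis to guide training. A rough, illustrative sketch of that idea (the power-law exponent here is estimated with a simple Hill estimator, which is not necessarily the authors' exact procedure) computes a heavy-tail exponent per layer and rescales each layer's learning rate accordingly:

```python
import numpy as np

def esd_alpha(weight, k_frac=0.5):
    """Rough heavy-tail exponent of a layer's empirical spectral density.

    The top eigenvalues of W^T W are fit with a Hill estimator; in heavy-tail
    self-regularization theory, a smaller alpha (heavier tail) suggests a
    more "well-trained" layer. Illustrative sketch only.
    """
    eig = np.linalg.eigvalsh(weight.T @ weight)
    eig = np.sort(eig)[::-1]
    eig = eig[eig > 1e-12]
    k = max(2, int(len(eig) * k_frac))
    tail = eig[:k]
    return 1.0 + k / np.sum(np.log(tail / tail[-1]))

def layerwise_lrs(alphas, base_lr=0.1):
    """Scale each layer's learning rate by its alpha relative to the mean,
    so heavier-tailed (smaller-alpha) layers take smaller steps."""
    alphas = np.asarray(alphas)
    return base_lr * alphas / alphas.mean()
```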

Teach LLMs to Phish: Stealing Private Information from Language Models

no code implementations 1 Mar 2024 Ashwinee Panda, Christopher A. Choquette-Choo, Zhengming Zhang, Yaoqing Yang, Prateek Mittal

When large language models are trained on private data, it can be a significant privacy risk for them to memorize and regurgitate sensitive information.
