1 code implementation • 28 May 2023 • Yefan Zhou, Yaoqing Yang, Arin Chang, Michael W. Mahoney
Our approach uses temperature-like and load-like parameters to model the impact of neural network (NN) training hyperparameters on pruning performance.
no code implementations • 21 May 2023 • Ryan Theisen, Hyunsuk Kim, Yaoqing Yang, Liam Hodgkinson, Michael W. Mahoney
Ensembling has a long history in statistical data analysis, with many impactful applications.
2 code implementations • 12 Jun 2022 • Zhengming Zhang, Ashwinee Panda, Linyue Song, Yaoqing Yang, Michael W. Mahoney, Joseph E. Gonzalez, Kannan Ramchandran, Prateek Mittal
In this type of attack, the goal of the attacker is to use poisoned updates to implant so-called backdoors into the learned model such that, at test time, the model's outputs can be fixed to a given target for certain inputs.
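For illustration, here is a minimal sketch of the kind of poisoned data such an attacker might craft before computing its local update: a small pixel-pattern trigger is stamped onto training images and their labels are flipped to the attacker's target class. The trigger shape, target label, and array layout below are illustrative assumptions, not this paper's exact setup.

```python
import numpy as np

def poison_batch(images, labels, target_class=0, trigger_value=1.0):
    """Stamp a 3x3 trigger patch into the corner of each image and
    relabel it to the attacker's target class (illustrative only)."""
    poisoned = images.copy()
    poisoned[:, -3:, -3:] = trigger_value          # bottom-right pixel-pattern trigger
    poisoned_labels = np.full(labels.shape, target_class)
    return poisoned, poisoned_labels

# Toy usage: a "client" poisons 10% of its local batch before training on it.
rng = np.random.default_rng(0)
imgs = rng.random((32, 28, 28))                    # fake 28x28 grayscale batch
labels = rng.integers(0, 10, size=32)
n_poison = int(0.1 * len(imgs))
imgs[:n_poison], labels[:n_poison] = poison_batch(imgs[:n_poison], labels[:n_poison])
```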
1 code implementation • 6 Feb 2022 • Yaoqing Yang, Ryan Theisen, Liam Hodgkinson, Joseph E. Gonzalez, Kannan Ramchandran, Charles H. Martin, Michael W. Mahoney
Our analyses consider (I) hundreds of Transformers trained in different settings, in which we systematically vary the amount of data, the model size and the optimization hyperparameters, (II) a total of 51 pretrained Transformers from eight families of Huggingface NLP models, including GPT2, BERT, etc., and (III) a total of 28 existing and novel generalization metrics.
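Many metrics in this line of work are "shape" metrics computed from the spectra of layer weight matrices, e.g., the exponent of a power-law fit to the tail of the eigenvalues of W^T W. The sketch below is a crude, illustrative Hill-style estimate of such an exponent, not the paper's exact estimator or thresholds.

```python
import numpy as np

def powerlaw_alpha(weight, k_frac=0.5):
    """Crude Hill-style estimate of the power-law exponent of the
    eigenvalue tail of W^T W (illustrative shape metric only)."""
    evals = np.linalg.svd(weight, compute_uv=False) ** 2   # eigenvalues of W^T W
    evals = np.sort(evals)[::-1]
    k = max(2, int(k_frac * len(evals)))                   # size of the tail to fit
    tail = evals[:k]
    x_min = tail[-1]
    return 1.0 + k / np.sum(np.log(tail / x_min))

# Toy usage on a random "layer": heavier-tailed spectra give smaller alpha.
rng = np.random.default_rng(0)
print(powerlaw_alpha(rng.standard_normal((512, 256))))
```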
no code implementations • 8 Dec 2021 • Alan Pham, Eunice Chan, Vikranth Srivatsa, Dhruba Ghosh, Yaoqing Yang, Yaodong Yu, Ruiqi Zhong, Joseph E. Gonzalez, Jacob Steinhardt
Overparameterization is shown to result in poor test accuracy on rare subgroups under a variety of settings where subgroup information is known.
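As a concrete example of the quantity being measured here, worst-group test accuracy is simply the minimum per-subgroup accuracy; a small sketch with made-up predictions and group labels:

```python
import numpy as np

def worst_group_accuracy(y_true, y_pred, groups):
    """Accuracy of the worst-performing subgroup; rare subgroups often set this value."""
    accs = {}
    for g in np.unique(groups):
        mask = groups == g
        accs[g] = np.mean(y_true[mask] == y_pred[mask])
    return min(accs.values()), accs

# Toy usage: group 2 is rare and poorly predicted, so it determines the worst-group accuracy.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 0, 1, 1, 0, 0, 1, 1])
groups = np.array([0, 0, 0, 1, 1, 1, 2, 2])
print(worst_group_accuracy(y_true, y_pred, groups))
```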
1 code implementation • 30 Nov 2021 • Yefan Zhou, Yiru Shen, Yujun Yan, Chen Feng, Yaoqing Yang
Our findings show that a leading factor in determining recognition versus reconstruction is how dispersed the training data is.
no code implementations • 5 Nov 2021 • Puja Trivedi, Ekdeep Singh Lubana, Yujun Yan, Yaoqing Yang, Danai Koutra
Unsupervised graph representation learning is critical to a wide range of applications where labels may be scarce or expensive to procure.
1 code implementation • NeurIPS 2021 • Yaoqing Yang, Liam Hodgkinson, Ryan Theisen, Joe Zou, Joseph E. Gonzalez, Kannan Ramchandran, Michael W. Mahoney
Viewing neural network models in terms of their loss landscapes has a long history in the statistical mechanics approach to learning, and in recent years it has received attention within machine learning proper.
1 code implementation • CVPR 2022 • Siyuan Xiang, Anbang Yang, Yanfei Xue, Yaoqing Yang, Chen Feng
Since self-supervised learning is most helpful when a large amount of data is available, we propose two self-supervised learning approaches to improve the baseline performance on the view consistency reasoning and camera pose reasoning tasks of the SPARE3D dataset.
1 code implementation • 12 Feb 2021 • Yujun Yan, Milad Hashemi, Kevin Swersky, Yaoqing Yang, Danai Koutra
We are the first to take a unified perspective to jointly explain the oversmoothing and heterophily problems at the node level.
1 code implementation • 26 Aug 2020 • Zhengming Zhang, Yaoqing Yang, Zhewei Yao, Yujun Yan, Joseph E. Gonzalez, Michael W. Mahoney
Replacing BN with the recently proposed Group Normalization (GN) can reduce gradient diversity and improve test accuracy.
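A rough sketch of the kind of quantity meant by gradient diversity, taken here as the ratio between the sum of squared per-client gradient norms and the squared norm of their sum (the per-client gradients below are synthetic, and the exact definition used in the paper may differ):

```python
import numpy as np

def gradient_diversity(client_grads):
    """sum_i ||g_i||^2 / ||sum_i g_i||^2: roughly 1/n for identical client
    gradients, close to 1 when the gradients are nearly orthogonal."""
    grads = np.stack(client_grads)
    num = np.sum(np.linalg.norm(grads, axis=1) ** 2)
    den = np.linalg.norm(grads.sum(axis=0)) ** 2
    return num / den

# Toy usage with 4 "clients" and 100-dimensional gradients.
rng = np.random.default_rng(0)
g = rng.standard_normal(100)
print(gradient_diversity([g, g, g, g]))                          # ~0.25: aligned updates
print(gradient_diversity(list(rng.standard_normal((4, 100)))))   # ~1: nearly orthogonal updates
```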
1 code implementation • NeurIPS 2020 • Yaoqing Yang, Rajiv Khanna, Yaodong Yu, Amir Gholami, Kurt Keutzer, Joseph E. Gonzalez, Kannan Ramchandran, Michael W. Mahoney
Using these observations, we show that noise-augmentation on mixup training further increases boundary thickness, thereby combating vulnerability to various forms of adversarial attacks and OOD transforms.
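A minimal sketch of mixup with an extra noise term, as a generic illustration of noise-augmented mixup training; the Beta parameter, noise scale, and label handling are illustrative assumptions, not the paper's exact recipe:

```python
import numpy as np

def noisy_mixup(x, y_onehot, alpha=1.0, noise_std=0.1, rng=None):
    """Mix random pairs of examples with a Beta-distributed weight,
    then add small Gaussian noise to the mixed inputs."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1 - lam) * x[perm]          # convex combination of inputs
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]  # matching soft labels
    x_mix = x_mix + noise_std * rng.standard_normal(x_mix.shape)
    return x_mix, y_mix

# Toy usage on a fake batch of 8 flattened images with 10 classes.
rng = np.random.default_rng(0)
x = rng.random((8, 784))
y = np.eye(10)[rng.integers(0, 10, size=8)]
x_mix, y_mix = noisy_mixup(x, y, rng=rng)
```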
1 code implementation • 21 Jan 2020 • Vipul Gupta, Dominic Carrano, Yaoqing Yang, Vaishaal Shankar, Thomas Courtade, Kannan Ramchandran
Inexpensive cloud services, such as serverless computing, are often vulnerable to straggling nodes that increase end-to-end latency for distributed computation.
Distributed, Parallel, and Cluster Computing • Information Theory
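As a toy illustration of the coding idea behind straggler mitigation, one can add a parity block so that a matrix product is recoverable even if one worker's result never arrives. The sketch below is a generic (2+1) sum-parity code, not the paper's specific local error-correcting scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((4, 6))
x = rng.random(6)

# Split A into two row blocks and add a parity block A1 + A2.
A1, A2 = A[:2], A[2:]
tasks = {"w1": A1, "w2": A2, "parity": A1 + A2}

# Each "worker" computes its block-vector product; suppose w2 straggles and never returns.
results = {name: blk @ x for name, blk in tasks.items() if name != "w2"}

# Decode the missing block from the parity result instead of waiting for w2.
y1 = results["w1"]
y2 = results["parity"] - y1
y = np.concatenate([y1, y2])
assert np.allclose(y, A @ x)
```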
no code implementations • 11 May 2019 • Siheng Chen, Chaojing Duan, Yaoqing Yang, Duanshun Li, Chen Feng, Dong Tian
The experimental results show that (1) the proposed networks outperform the state-of-the-art methods in various tasks; (2) a graph topology can be inferred as auxiliary information without specific supervision on graph topology inference; and (3) graph filtering refines the reconstruction, leading to better performances.
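As a generic illustration of the graph-filtering step mentioned above, the sketch below builds a k-nearest-neighbor graph over a point cloud and applies one step of neighborhood averaging (a simple low-pass graph filter). The k value and filter are illustrative, not the learned graph topology or filters from the paper:

```python
import numpy as np

def knn_graph_filter(points, k=8):
    """Build a kNN neighborhood for each point and apply one step of
    neighborhood averaging, a simple low-pass graph filter."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    nbrs = np.argsort(d2, axis=1)[:, 1:k + 1]     # k nearest neighbors, excluding the point itself
    smoothed = points[nbrs].mean(axis=1)          # average each point's neighborhood
    return 0.5 * points + 0.5 * smoothed          # mild smoothing of the cloud

# Toy usage: smooth a noisy sampling of points on a circle.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=200)
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1) + 0.05 * rng.standard_normal((200, 2))
pts_filtered = knn_graph_filter(pts)
```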
3 code implementations • CVPR 2018 • Yaoqing Yang, Chen Feng, Yiru Shen, Dong Tian
Recent deep networks that directly handle points in a point set, e.g., PointNet, have been state-of-the-art for supervised learning tasks on point clouds such as classification and segmentation.
Ranked #14 on 3D Point Cloud Linear Classification on ModelNet40
3D Point Cloud Linear Classification • General Classification • +1
1 code implementation • CVPR 2018 • Yiru Shen, Chen Feng, Yaoqing Yang, Dong Tian
Unlike on images, semantic learning on 3D point clouds using a deep network is challenging due to the naturally unordered data structure.
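To make the "unordered" point concrete: a permutation-invariant (symmetric) aggregation, such as max pooling over per-point features, is the standard way deep networks cope with point sets whose order carries no meaning. The sketch below is a generic illustration of that idea, not this paper's specific architecture:

```python
import numpy as np

def permutation_invariant_features(points, weight, bias):
    """Apply a shared per-point linear map (with ReLU), then max-pool over the
    point axis; the max makes the output identical under any reordering of points."""
    per_point = np.maximum(points @ weight + bias, 0.0)   # shared per-point layer
    return per_point.max(axis=0)                          # symmetric pooling

# Toy check: shuffling the points leaves the pooled feature unchanged.
rng = np.random.default_rng(0)
pts = rng.random((128, 3))
W, b = rng.standard_normal((3, 16)), rng.standard_normal(16)
f1 = permutation_invariant_features(pts, W, b)
f2 = permutation_invariant_features(pts[rng.permutation(128)], W, b)
assert np.allclose(f1, f2)
```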
no code implementations • NeurIPS 2017 • Yaoqing Yang, Pulkit Grover, Soummya Kar
Our experiments for personalized PageRank performed on real systems and real social networks show that this ratio can be as large as $10^4$.
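For context, personalized PageRank itself is a power iteration whose teleport distribution is concentrated on a seed node; a small sketch on a toy graph (the damping factor and graph below are illustrative, and this is the centralized computation, not the paper's distributed scheme):

```python
import numpy as np

def personalized_pagerank(adj, seed, alpha=0.85, n_iter=100):
    """Power iteration for personalized PageRank: teleport back to `seed`
    with probability 1 - alpha at every step."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    P = np.divide(adj, out_deg, out=np.zeros_like(adj), where=out_deg > 0)  # row-stochastic
    e = np.zeros(n)
    e[seed] = 1.0                                  # teleport distribution on the seed node
    r = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        r = alpha * r @ P + (1 - alpha) * e
    return r

# Toy usage on a 4-node ring graph, personalized to node 0.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
print(personalized_pagerank(adj, seed=0))
```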