Search Results for author: YaoLiang Yu

Found 12 papers, 6 papers with code

Are My Deep Learning Systems Fair? An Empirical Study of Fixed-Seed Training

no code implementations NeurIPS 2021 Shangshu Qian, Hung Pham, Thibaud Lutellier, Zeou Hu, Jungwon Kim, Lin Tan, YaoLiang Yu, Jiahao Chen, Sameena Shah

Our study of 22 mitigation techniques and five baselines reveals up to 12.6% variance in fairness across training runs with identical seeds.

Crime Prediction, Fairness
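
The spread is straightforward to measure in spirit: retrain the same model several times under a fixed seed and record how much a group-fairness metric moves. A minimal sketch, assuming NumPy-array inputs and a hypothetical train_model helper; the metric here (demographic parity gap) is an illustrative choice, not the paper's protocol:

def demographic_parity_gap(y_pred, group):
    # |P(y_hat = 1 | group = 0) - P(y_hat = 1 | group = 1)|, NumPy arrays assumed.
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def fairness_spread(train_model, X, y, group, runs=10, seed=42):
    # Retrain with an identical seed; nondeterministic GPU kernels and thread
    # scheduling can still produce a different model on every run.
    gaps = []
    for _ in range(runs):
        model = train_model(X, y, seed=seed)   # hypothetical trainer
        gaps.append(demographic_parity_gap(model.predict(X), group))
    return max(gaps) - min(gaps)               # fairness variance across runs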

Demystifying and Generalizing BinaryConnect

no code implementations NeurIPS 2021 Tim Dockhorn, YaoLiang Yu, Eyyüb Sari, Mahdi Zolnouri, Vahid Partovi Nia

BinaryConnect (BC) and its many variations have become the de facto standard for neural network quantization.

Quantization
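
For context, the core BC recipe keeps full-precision latent weights, binarizes them in the forward pass, and uses a straight-through estimator in the backward pass. A minimal PyTorch sketch of that mechanism (not the paper's generalization):

import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)                   # binary weights in {-1, +1}

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Straight-through estimator: pass the gradient where |w| <= 1.
        return grad_out * (w.abs() <= 1).float()

def binary_linear(x, latent_w):
    # Forward with binarized weights; the optimizer updates latent_w.
    return x @ BinarizeSTE.apply(latent_w).t()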

An Operator Splitting View of Federated Learning

no code implementations 12 Aug 2021 Saber Malekmohammadi, Kiarash Shaloudegi, Zeou Hu, YaoLiang Yu

Over the past few years, the federated learning (FL) community has witnessed a proliferation of new FL algorithms.

Federated Learning
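
Most of those algorithms share one skeleton, which the paper reinterprets through operator splitting: clients take local steps, then the server forms a consensus average. A bare-bones FedAvg-style round for orientation (grad_fn and the uniform client weighting are simplifying assumptions):

import numpy as np

def fedavg_round(global_w, client_datasets, grad_fn, lr=0.1, local_steps=5):
    # One communication round: local SGD on each client, then a server average.
    client_ws = []
    for data in client_datasets:
        w = global_w.copy()
        for _ in range(local_steps):
            w = w - lr * grad_fn(w, data)      # grad_fn: hypothetical local gradient
        client_ws.append(w)
    return np.mean(client_ws, axis=0)          # uniform client weighting assumed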

The Art of Abstention: Selective Prediction and Error Regularization for Natural Language Processing

1 code implementation ACL 2021 Ji Xin, Raphael Tang, YaoLiang Yu, Jimmy Lin

To fill this void in the literature, this paper studies selective prediction for NLP, comparing different models and confidence estimators.
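
Mechanically, selective prediction attaches a confidence estimator to a classifier and abstains below a threshold; the comparison in the paper is over which estimator to use. A sketch with max softmax probability as the (assumed) estimator:

import numpy as np

def selective_predict(probs, threshold=0.9):
    # probs: (n, num_classes) softmax outputs from any classifier.
    conf = probs.max(axis=1)                   # max-probability confidence
    preds = probs.argmax(axis=1)
    return np.where(conf >= threshold, preds, -1)   # -1 marks an abstention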

$S^3$: Sign-Sparse-Shift Reparametrization for Effective Training of Low-bit Shift Networks

no code implementations NeurIPS 2021 Xinlin Li, Bang Liu, YaoLiang Yu, Wulong Liu, Chunjing Xu, Vahid Partovi Nia

Shift neural networks reduce computational complexity by removing expensive multiplication operations and quantizing continuous weights into low-bit discrete values, making them fast and energy-efficient compared to conventional neural networks.
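
Concretely, "low-bit discrete values" here means signed powers of two, so each multiplication becomes a bit shift. A sketch of that projection (the exponent clipping range is an illustrative assumption):

import numpy as np

def quantize_to_shift(w, min_exp=-7, max_exp=0):
    # Round each weight to sign(w) * 2^k, so multiplying by it is a bit shift.
    sign = np.sign(w)
    exp = np.clip(np.round(np.log2(np.abs(w) + 1e-12)), min_exp, max_exp)
    return sign * np.exp2(exp)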

Quantifying and Improving Transferability in Domain Generalization

1 code implementation NeurIPS 2021 Guojun Zhang, Han Zhao, YaoLiang Yu, Pascal Poupart

We then prove that our transferability can be estimated with enough samples, and we give a new upper bound on the target error based on our transferability measure.

Domain Generalization
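
The bound follows the classical domain-generalization template, sketched here in its generic shape (this is the standard form, not the paper's exact statement; there, the discrepancy term is played by the proposed transferability measure):

\epsilon_T(h) \;\le\; \epsilon_S(h) \;+\; d(\mathcal{D}_S, \mathcal{D}_T) \;+\; \lambda^*

where $\epsilon_S$ and $\epsilon_T$ are the source and target risks, $d$ is a distribution discrepancy, and $\lambda^*$ is the risk of the best hypothesis on both domains.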

BERxiT: Early Exiting for BERT with Better Fine-Tuning and Extension to Regression

1 code implementation EACL 2021 Ji Xin, Raphael Tang, YaoLiang Yu, Jimmy Lin

The slow speed of BERT has motivated much research on accelerating its inference, and the early exiting idea has been proposed to make trade-offs between model quality and efficiency.
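
The mechanism attaches a small classifier to every layer and returns as soon as one is confident enough; BERxiT's contributions are a better fine-tuning scheme for those exits and a learning-to-exit module that extends the idea to regression. A schematic inference loop (entropy thresholding is one common exit criterion, assumed here for illustration):

import torch

def early_exit_forward(x, layers, exit_heads, threshold=0.2):
    # layers[i]: transformer block; exit_heads[i]: small per-layer classifier.
    h = x
    probs = None
    for layer, head in zip(layers, exit_heads):
        h = layer(h)
        probs = torch.softmax(head(h), dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
        if entropy.max().item() < threshold:   # whole batch confident: exit
            return probs
    return probs                               # no early exit: use final layer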

Posterior Differential Regularization with f-divergence for Improving Model Robustness

1 code implementation NAACL 2021 Hao Cheng, Xiaodong Liu, Lis Pereira, YaoLiang Yu, Jianfeng Gao

Theoretically, we provide a connection of two recent methods, Jacobian Regularization and Virtual Adversarial Training, under this framework.

Domain Generalization
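
The framework penalizes the f-divergence between the model's posterior on a clean input and on a perturbed one; particular choices of divergence and perturbation recover the two earlier methods. A KL-based sketch with random noise (VAT would instead pick the perturbation adversarially):

import torch
import torch.nn.functional as F

def posterior_differential_loss(model, x, noise_std=0.01):
    # KL instance of the f-divergence between clean and perturbed posteriors.
    logp_clean = F.log_softmax(model(x), dim=-1)
    logp_noisy = F.log_softmax(model(x + noise_std * torch.randn_like(x)), dim=-1)
    # F.kl_div(input, target) computes KL(target || exp(input)).
    return F.kl_div(logp_noisy, logp_clean.exp(), reduction='batchmean')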

OLALA: Object-Level Active Learning for Efficient Document Layout Annotation

1 code implementation 5 Oct 2020 Zejiang Shen, Jian Zhao, Melissa Dell, YaoLiang Yu, Weining Li

Document images often have intricate layout structures, with numerous content regions (e.g., texts, figures, tables) densely arranged on each page.

Active Learning, Object Detection
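
Object-level selection makes the query unit a predicted region rather than a whole page, so annotators only inspect the boxes the detector is least sure about. A sketch of the selection step (the scoring rule, one minus the top class probability, is an assumed stand-in for the paper's scoring function):

def select_regions(detections, budget=50):
    # detections: iterable of (page_id, box, class_probs) from a layout detector.
    scored = [(1.0 - max(probs), page, box)    # assumed score: 1 - confidence
              for page, box, probs in detections]
    scored.sort(key=lambda t: t[0], reverse=True)   # most uncertain first
    return [(page, box) for _, page, box in scored[:budget]]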

Efficient Structured Matrix Rank Minimization

no code implementations NeurIPS 2014 Adams Wei Yu, Wanli Ma, YaoLiang Yu, Jaime Carbonell, Suvrit Sra

We study the problem of finding structured low-rank matrices using nuclear norm regularization where the structure is encoded by a linear map.
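
In symbols, with $\mathcal{A}$ the structure-encoding linear map (e.g., onto Hankel matrices) and $b$ the observations, problems in this family take the standard form (a generic formulation consistent with the abstract, not necessarily the paper's exact objective):

\min_{X}\; \tfrac{1}{2}\,\|\mathcal{A}(X) - b\|_2^2 \;+\; \lambda\,\|X\|_{*}

where $\|X\|_{*}$ is the nuclear norm (the sum of singular values), the usual convex surrogate for rank.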
