1 code implementation • EMNLP (SustaiNLP) 2020 • Ji Xin, Rodrigo Nogueira, YaoLiang Yu, Jimmy Lin
Pre-trained language models such as BERT have shown their effectiveness in various tasks.
no code implementations • 10 Apr 2024 • Yiwei Lu, Matthew Y. R. Yang, Zuoqiu Liu, Gautam Kamath, YaoLiang Yu
Copyright infringement may occur when a generative model produces samples substantially similar to some copyrighted data that it had access to during the training phase.
no code implementations • 4 Apr 2024 • Jing Dong, Baoxiang Wang, YaoLiang Yu
Our algorithm simultaneously achieves a Nash regret and a regret bound of $O(T^{4/5})$ for potential games, which matches the best available result, without using additional projection steps.
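For context, the standard external-regret notion that such bounds refer to is sketched below in LaTeX; the paper's Nash regret is a game-theoretic analogue whose precise definition is assumed here, so consult the paper for the exact form.

```latex
% Standard external regret for player i over T rounds, with action set A_i,
% utility u_i, played actions a_i^t, and opponents' actions a_{-i}^t
% (a standard definition, not necessarily the paper's Nash regret):
\mathrm{Reg}_i(T) \;=\; \max_{a \in A_i} \sum_{t=1}^{T} u_i\bigl(a, a_{-i}^{t}\bigr)
\;-\; \sum_{t=1}^{T} u_i\bigl(a_i^{t}, a_{-i}^{t}\bigr)
```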
no code implementations • 29 Feb 2024 • Haoye Lu, Spencer Szabados, YaoLiang Yu
Diffusion models have become the leading distribution-learning method in recent years.
no code implementations • NeurIPS 2023 • Yiwei Lu, YaoLiang Yu, Xinlin Li, Vahid Partovi Nia
In neural network binarization, BinaryConnect (BC) and its variants are considered the standard.
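For readers unfamiliar with BC, here is a minimal PyTorch sketch of the standard BinaryConnect idea (latent real-valued weights, sign binarization in the forward pass, straight-through gradients); it illustrates the generic technique, not this paper's specific variant:

```python
import torch

class BinaryLinear(torch.nn.Module):
    """Minimal BinaryConnect-style layer: keep latent real weights, apply
    sign() in the forward pass, and route gradients to the latent weights
    via the straight-through estimator."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = torch.nn.Parameter(0.01 * torch.randn(out_features, in_features))

    def forward(self, x):
        w_bin = torch.sign(self.weight)  # binarized weights (+-1; 0 only at exactly 0)
        # forward uses w_bin, backward sees the identity w.r.t. self.weight
        w_ste = self.weight + (w_bin - self.weight).detach()
        return x @ w_ste.t()
```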
no code implementations • 20 Feb 2024 • Yiwei Lu, Matthew Y. R. Yang, Gautam Kamath, YaoLiang Yu
In this paper, we extend the exploration of the threat of indiscriminate attacks on downstream tasks that apply pre-trained feature extractors.
no code implementations • 15 Feb 2024 • Yiwei Lu, Guojun Zhang, Sun Sun, Hongyu Guo, YaoLiang Yu
In self-supervised contrastive learning, a widely adopted objective function is InfoNCE, which uses the heuristic cosine similarity to compare representations and is closely related to maximizing Kullback-Leibler (KL)-based mutual information.
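A minimal sketch of the standard InfoNCE objective mentioned here (cosine similarity followed by a softmax cross-entropy over in-batch negatives); the function name and temperature value are illustrative:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Standard InfoNCE: z1[i] and z2[i] are embeddings of two views of
    sample i; every other in-batch pair serves as a negative."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                    # pairwise cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)   # positives on the diagonal
    return F.cross_entropy(logits, labels)
```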
1 code implementation • 7 Mar 2023 • Yiwei Lu, Gautam Kamath, YaoLiang Yu
Building on existing parameter corruption attacks and refining the Gradient Canceling attack, we perform extensive experiments to confirm our theoretical findings, test the predictability of our transition threshold, and significantly improve existing indiscriminate data poisoning baselines over a range of datasets and models.
no code implementations • 5 Aug 2022 • Dihong Jiang, Guojun Zhang, Mahdi Karami, Xi Chen, Yunfeng Shao, YaoLiang Yu
Similar to other differentially private (DP) learners, the major challenge for DPGM is also how to achieve a subtle balance between utility and privacy.
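One generic building block behind this utility-privacy trade-off is gradient clipping plus Gaussian noise, as in DP-SGD; the sketch below shows that standard mechanism for illustration, not necessarily the paper's DPGM construction (noise_multiplier is the knob trading utility for privacy):

```python
import numpy as np

def dp_gaussian_step(grad, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip the gradient's L2 norm, then add Gaussian noise; a larger
    noise_multiplier gives stronger privacy but lower utility."""
    rng = rng or np.random.default_rng()
    scale = min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return grad * scale + noise
```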
no code implementations • 31 Jul 2022 • Ji Xin, Raphael Tang, Zhiying Jiang, YaoLiang Yu, Jimmy Lin
There exists a wide variety of efficiency methods for natural language processing (NLP) tasks, such as pruning, distillation, dynamic inference, quantization, etc.
1 code implementation • 20 Jun 2022 • Artur Back de Luca, Guojun Zhang, Xi Chen, YaoLiang Yu
Federated Learning (FL) is a prominent framework that enables training a centralized model while securing user privacy by fusing local, decentralized models.
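The fusion of local, decentralized models is typically a FedAvg-style weighted parameter average; a minimal sketch, assuming each client model is flattened into a NumPy vector (this is the generic FL baseline, not this paper's specific method):

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    """Fuse local models by averaging their parameter vectors, weighted by
    each client's local dataset size."""
    total = float(sum(client_sizes))
    return sum(p * (n / total) for p, n in zip(client_params, client_sizes))

# toy usage with flattened parameter vectors:
w1, w2 = np.ones(4), np.zeros(4)
global_w = fed_avg([w1, w2], [300, 100])  # -> array of 0.75s
```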
1 code implementation • 20 May 2022 • Qinghua Zheng, Jihong Wang, Minnan Luo, YaoLiang Yu, Jundong Li, Lina Yao, Xiaojun Chang
Due to the superior performance of Graph Neural Networks (GNNs) in various domains, there is an increasing interest in the GNN explanation problem: "which fraction of the input graph is most crucial for the model's decision?"
1 code implementation • 19 Apr 2022 • Yiwei Lu, Gautam Kamath, YaoLiang Yu
Data poisoning attacks, in which a malicious adversary aims to influence a model by injecting "poisoned" data into the training process, have attracted significant recent attention.
1 code implementation • 3 Feb 2022 • Guojun Zhang, Saber Malekmohammadi, Xi Chen, YaoLiang Yu
With the increasingly broad deployment of federated learning (FL) systems in the real world, it is critical but challenging to ensure fairness in FL, i.e., reasonably satisfactory performance for each of the numerous, diverse clients.
no code implementations • NeurIPS 2021 • Shangshu Qian, Hung Pham, Thibaud Lutellier, Zeou Hu, Jungwon Kim, Lin Tan, YaoLiang Yu, Jiahao Chen, Sameena Shah
Our study of 22 mitigation techniques and five baselines reveals up to 12.6% fairness variance across identical training runs with identical seeds.
no code implementations • NeurIPS 2021 • Tim Dockhorn, YaoLiang Yu, Eyyüb Sari, Mahdi Zolnouri, Vahid Partovi Nia
BinaryConnect (BC) and its many variations have become the de facto standard for neural network quantization.
no code implementations • ICLR 2022 • Dihong Jiang, Sun Sun, YaoLiang Yu
Deep generative models have been widely used in practical applications such as the detection of out-of-distribution (OOD) data.
no code implementations • 29 Sep 2021 • Jesse Sun, Dihong Jiang, YaoLiang Yu
Quantile regression has a natural extension to generative modelling by leveraging pointwise convergence, which is stronger than convergence in distribution.
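The quantile-regression building block referenced here is the pinball loss; a minimal NumPy sketch of its standard definition (the generative-model extension in the paper builds on this, with details not shown here):

```python
import numpy as np

def pinball_loss(y, y_hat, tau):
    """Pinball (quantile) loss at level tau in (0, 1); minimizing it over
    y_hat recovers the tau-quantile of y."""
    diff = y - y_hat
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))
```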
no code implementations • 29 Sep 2021 • Guojun Zhang, Yiwei Lu, Sun Sun, Hongyu Guo, YaoLiang Yu
Self-supervised contrastive learning is an emerging field thanks to its ability to learn good data representations.
no code implementations • 12 Aug 2021 • Saber Malekmohammadi, Kiarash Shaloudegi, Zeou Hu, YaoLiang Yu
Over the past few years, the federated learning (FL) community has witnessed a proliferation of new FL algorithms.
1 code implementation • ACL 2021 • Ji Xin, Raphael Tang, YaoLiang Yu, Jimmy Lin
To fill this void in the literature, we study selective prediction for NLP in this paper, comparing different models and confidence estimators.
1 code implementation • NeurIPS 2021 • Xinlin Li, Bang Liu, YaoLiang Yu, Wulong Liu, Chunjing Xu, Vahid Partovi Nia
Shift neural networks reduce computation complexity by removing expensive multiplication operations and quantizing continuous weights into low-bit discrete values, making them fast and energy-efficient compared to conventional neural networks.
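A minimal sketch of the power-of-two weight quantization that lets each multiplication become a sign flip and a bit shift; the exponent range is an illustrative assumption, not the paper's configuration:

```python
import numpy as np

def round_to_power_of_two(w, min_exp=-7, max_exp=0):
    """Round each weight to a signed power of two, so multiplying an
    activation by it reduces to a sign flip plus a bit shift."""
    sign = np.sign(w)
    exp = np.clip(np.round(np.log2(np.abs(w) + 1e-12)), min_exp, max_exp)
    return sign * np.power(2.0, exp)
```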
2 code implementations • NeurIPS 2021 • Guojun Zhang, Han Zhao, YaoLiang Yu, Pascal Poupart
We then prove that our transferability can be estimated with enough samples and give a new upper bound for the target error based on our transferability.
1 code implementation • EACL 2021 • Ji Xin, Raphael Tang, YaoLiang Yu, Jimmy Lin
The slow speed of BERT has motivated much research on accelerating its inference, and the early exiting idea has been proposed to make trade-offs between model quality and efficiency.
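A minimal sketch of the early-exit idea for a layered encoder like BERT: attach a lightweight classifier after each layer and stop as soon as its confidence clears a threshold (the threshold and the max-softmax confidence measure are illustrative assumptions, not this paper's exact criterion):

```python
import torch

def early_exit_forward(layers, classifiers, h, threshold=0.9):
    """Run encoder layers in order; return as soon as an intermediate
    classifier's maximum softmax probability clears the threshold."""
    for layer, clf in zip(layers, classifiers):
        h = layer(h)
        probs = torch.softmax(clf(h), dim=-1)
        if probs.max().item() >= threshold:
            return probs   # early exit: skip the remaining layers
    return probs           # fell through: use the final classifier
```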
2 code implementations • NAACL 2021 • Hao Cheng, Xiaodong Liu, Lis Pereira, YaoLiang Yu, Jianfeng Gao
Theoretically, we provide a connection of two recent methods, Jacobian Regularization and Virtual Adversarial Training, under this framework.
1 code implementation • 5 Oct 2020 • Zejiang Shen, Jian Zhao, Melissa Dell, YaoLiang Yu, Weining Li
Document images often have intricate layout structures, with numerous content regions (e.g., text, figures, tables) densely arranged on each page.
no code implementations • NeurIPS 2014 • Adams Wei Yu, Wanli Ma, YaoLiang Yu, Jaime Carbonell, Suvrit Sra
We study the problem of finding structured low-rank matrices using nuclear norm regularization where the structure is encoded by a linear map.
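Nuclear norm regularization is typically handled via its proximal operator, singular value soft-thresholding; a minimal NumPy sketch of that standard step (the paper's structured setting with a linear map adds machinery beyond this):

```python
import numpy as np

def prox_nuclear(M, lam):
    """Proximal operator of lam * ||.||_*: soft-threshold the singular
    values of M and reconstruct."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - lam, 0.0)) @ Vt   # columns of U scaled by shrunken s
```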