Search Results for author: Weiwei Liu

Found 21 papers, 6 papers with code

Adaptive Adversarial Multi-task Representation Learning

no code implementations ICML 2020 Yuren Mao, Weiwei Liu, Xuemin Lin

Adversarial Multi-task Representation Learning (AMTRL) methods are able to boost the performance of Multi-task Representation Learning (MTRL) models.

Representation Learning

MetaWeighting: Learning to Weight Tasks in Multi-Task Learning

no code implementations Findings (ACL) 2022 Yuren Mao, Zekai Wang, Weiwei Liu, Xuemin Lin, Pengtao Xie

Task weighting, which assigns weights to the constituent tasks during training, significantly affects the performance of Multi-task Learning (MTL); thus, it has recently attracted an explosion of interest.

Multi-Task Learning text-classification +1
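
As a rough illustration of the task-weighting idea above (a generic weighted multi-task loss, not the MetaWeighting algorithm itself; the weights below are fixed and hypothetical, whereas such methods learn to adjust them during training):

    import torch

    def weighted_multitask_loss(task_losses, task_weights):
        # task_losses: one scalar loss tensor per task
        # task_weights: non-negative weights, one per task
        losses = torch.stack(task_losses)
        weights = task_weights / task_weights.sum()  # normalize to sum to 1
        return (weights * losses).sum()

    # hypothetical usage with three tasks
    losses = [torch.tensor(0.9), torch.tensor(0.4), torch.tensor(1.3)]
    total = weighted_multitask_loss(losses, torch.tensor([1.0, 2.0, 0.5]))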

Coverage-Guaranteed Prediction Sets for Out-of-Distribution Data

no code implementations 29 Mar 2024 Xin Zou, Weiwei Liu

In this paper, we study the confidence set prediction problem in the OOD generalization setting.

Conformal Prediction
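
For context on confidence (prediction) sets, a minimal split-conformal sketch in plain NumPy, covering only the standard in-distribution construction and not the paper's OOD-specific analysis; the array names are hypothetical:

    import numpy as np

    def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
        # cal_probs: (n, K) softmax scores on a held-out calibration set
        # cal_labels: (n,) true labels; test_probs: (m, K) scores on test points
        n = len(cal_labels)
        scores = 1.0 - cal_probs[np.arange(n), cal_labels]  # nonconformity of true labels
        # conformal quantile with the finite-sample correction
        q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
        # a candidate label enters the set when its nonconformity is small enough
        return [np.where(1.0 - p <= q)[0] for p in test_probs]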

LASIL: Learner-Aware Supervised Imitation Learning For Long-term Microscopic Traffic Simulation

no code implementations 26 Mar 2024 Ke Guo, Zhenwei Miao, Wei Jing, Weiwei Liu, Weizi Li, Dayang Hao, Jia Pan

Due to the covariate shift issue, existing imitation learning-based simulators often fail to generate stable long-term simulations.

Imitation Learning

Deep Partial Multi-Label Learning with Graph Disambiguation

no code implementations 10 May 2023 Haobo Wang, Shisong Yang, Gengyu Lyu, Weiwei Liu, Tianlei Hu, Ke Chen, Songhe Feng, Gang Chen

In partial multi-label learning (PML), each data example is equipped with a candidate label set, which consists of multiple ground-truth labels and other false-positive labels.

Multi-Label Learning

Generalization Bounds for Adversarial Contrastive Learning

no code implementations 21 Feb 2023 Xin Zou, Weiwei Liu

Deep networks are well-known to be fragile to adversarial attacks, and adversarial training is one of the most popular methods used to train a robust model.

Contrastive Learning Generalization Bounds
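
As background on the adversarial training mentioned above, a generic PGD sketch under the $\ell_\infty$ threat model (PyTorch); this is the standard attack used inside adversarial training, not the paper's contrastive analysis, and the model is assumed to map image batches in [0, 1] to logits:

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
        # start from a random point inside the epsilon-ball around x
        x_adv = torch.clamp(x + torch.empty_like(x).uniform_(-eps, eps), 0, 1)
        for _ in range(steps):
            x_adv = x_adv.detach().requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            # ascend the loss, then project back onto the epsilon-ball and valid pixel range
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
        return x_adv.detach()

    # adversarial training minimizes the loss on pgd_attack(model, x, y) instead of on x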

Better Diffusion Models Further Improve Adversarial Training

2 code implementations 9 Feb 2023 Zekai Wang, Tianyu Pang, Chao Du, Min Lin, Weiwei Liu, Shuicheng Yan

Under the $\ell_\infty$-norm threat model with $\epsilon=8/255$, our models achieve $70.69\%$ and $42.67\%$ robust accuracy on CIFAR-10 and CIFAR-100, respectively, i.e., improving upon the previous state-of-the-art models by $+4.58\%$ and $+8.03\%$.

Denoising

WAT: Improve the Worst-class Robustness in Adversarial Training

1 code implementation 8 Feb 2023 Boqi Li, Weiwei Liu

Furthermore, we propose a measurement to evaluate the proposed method in terms of both the average and worst-class accuracies.
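
A minimal sketch of the kind of evaluation described above, reporting both the class-balanced average accuracy and the worst-class accuracy (plain NumPy; not necessarily the paper's exact measurement):

    import numpy as np

    def average_and_worst_class_accuracy(y_true, y_pred):
        classes = np.unique(y_true)
        per_class = np.array([np.mean(y_pred[y_true == c] == c) for c in classes])
        return per_class.mean(), per_class.min()  # (average over classes, worst class)

    # hypothetical usage
    y_true = np.array([0, 0, 1, 1, 2, 2])
    y_pred = np.array([0, 1, 1, 1, 2, 0])
    avg_acc, worst_acc = average_and_worst_class_accuracy(y_true, y_pred)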

PP-OCRv3: More Attempts for the Improvement of Ultra Lightweight OCR System

1 code implementation 7 Jun 2022 Chenxia Li, Weiwei Liu, Ruoyu Guo, Xiaoting Yin, Kaitao Jiang, Yongkun Du, Yuning Du, Lingfeng Zhu, Baohua Lai, Xiaoguang Hu, Dianhai Yu, Yanjun Ma

For the text recognizer, the base model is switched from CRNN to SVTR, and we introduce the lightweight text recognition network SVTR LCNet, attention-guided CTC training, the data augmentation strategy TextConAug, a better pre-trained model obtained via self-supervised TextRotNet, UDML, and UIM to accelerate the model and improve its effectiveness.

Data Augmentation Optical Character Recognition +2

EPPAC: Entity Pre-typing Relation Classification with Prompt Answer Centralizing

no code implementations 1 Mar 2022 Jiejun Tan, Wenbin Hu, Weiwei Liu

To address these issues, a novel paradigm, Entity Pre-typing Relation Classification with Prompt Answer Centralizing (EPPAC), is proposed in this paper.

Classification Relation +1

BanditMTL: Bandit-based Multi-task Learning for Text Classification

no code implementations ACL 2021 Yuren Mao, Zekai Wang, Weiwei Liu, Xuemin Lin, Wenbin Hu

Task variance regularization, which can be used to improve the generalization of Multi-task Learning (MTL) models, remains unexplored in multi-task text classification.

Multi-Task Learning text-classification +1
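
As a rough sketch of task-variance regularization in general (not BanditMTL's bandit-based formulation), one can penalize the spread of per-task losses alongside their mean; lam is a hypothetical trade-off parameter:

    import torch

    def variance_regularized_loss(task_losses, lam=0.1):
        losses = torch.stack(task_losses)      # one scalar loss per task
        mean_loss = losses.mean()
        variance = losses.var(unbiased=False)  # spread of the losses across tasks
        return mean_loss + lam * variance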

The Emerging Trends of Multi-Label Learning

no code implementations 23 Nov 2020 Weiwei Liu, Haobo Wang, Xiaobo Shen, Ivor W. Tsang

Exabytes of data are generated daily by humans, leading to the growing need for new efforts in dealing with the grand challenges for multi-label learning brought by big data.

Classification Extreme Multi-Label Classification +2

Leveraged Matrix Completion with Noise

no code implementations 11 Nov 2020 Xinjian Huang, Weiwei Liu, Bo Du, Dacheng Tao

In this paper, we employ leverage scores to characterize the importance of each element and significantly relax the assumptions so that: (1) no other structural assumptions are imposed on the underlying low-rank matrix; (2) the elements being observed depend appropriately on their importance via the leverage scores.

Matrix Completion
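
For context on leverage scores, a minimal sketch that computes the standard row and column leverage scores of a rank-$r$ matrix from its SVD (NumPy); the leverage-dependent sampling model and recovery guarantees are the paper's contribution and are not reproduced here:

    import numpy as np

    def leverage_scores(M, r):
        # thin SVD; keep the top-r left and right singular subspaces
        U, _, Vt = np.linalg.svd(M, full_matrices=False)
        U_r, V_r = U[:, :r], Vt[:r, :].T
        n, d = M.shape
        # normalized leverage scores of rows and columns
        row_scores = (n / r) * np.sum(U_r**2, axis=1)
        col_scores = (d / r) * np.sum(V_r**2, axis=1)
        return row_scores, col_scores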

PP-OCR: A Practical Ultra Lightweight OCR System

9 code implementations 21 Sep 2020 Yuning Du, Chenxia Li, Ruoyu Guo, Xiaoting Yin, Weiwei Liu, Jun Zhou, Yifan Bai, Zilin Yu, Yehua Yang, Qingqing Dang, Haoshuang Wang

Meanwhile, several pre-trained models for Chinese and English recognition are released, including a text detector (97K images are used), a direction classifier (600K images are used), as well as a text recognizer (17.9M images are used).

Computational Efficiency Optical Character Recognition +1

Opinion Maximization in Social Trust Networks

1 code implementation 19 Jun 2020 Pinghua Xu, Wenbin Hu, Jia Wu, Weiwei Liu

However, the practical significance of the existing studies on this subject is limited for two reasons.

Social and Information Networks Computer Science and Game Theory J.4

Copula Multi-label Learning

no code implementations NeurIPS 2019 Weiwei Liu

This inspires us to develop a novel copula multi-label learning paradigm for modeling label and feature dependencies.

Econometrics Multi-Label Learning

Sparse Embedded k-Means Clustering

no code implementations NeurIPS 2017 Weiwei Liu, Xiaobo Shen, Ivor Tsang

For example, compared to the advanced singular value decomposition based feature extraction approach, [1] reduces the running time by a factor of $\min\{n, d\}\,\epsilon^2 \log(d)/k$ for a data matrix $X \in \mathbb{R}^{n\times d}$ with $n$ data points and $d$ features, while losing only a factor of one in approximation accuracy.

Clustering Dimensionality Reduction
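
As a rough illustration of the sparse-embedding idea (a CountSketch-style projection followed by ordinary k-means; a generic sketch under these assumptions, not the paper's algorithm or its approximation guarantee):

    import numpy as np
    from sklearn.cluster import KMeans

    def sparse_embed(X, k_dim, seed=0):
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        # each original feature is hashed to one target dimension with a random sign
        buckets = rng.integers(0, k_dim, size=d)
        signs = rng.choice([-1.0, 1.0], size=d)
        S = np.zeros((d, k_dim))
        S[np.arange(d), buckets] = signs
        return X @ S

    X = np.random.rand(100, 50)            # hypothetical data: 100 points, 50 features
    X_small = sparse_embed(X, k_dim=10)    # project down to 10 dimensions
    labels = KMeans(n_clusters=5, n_init=10).fit_predict(X_small)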

On the Optimality of Classifier Chain for Multi-label Classification

no code implementations NeurIPS 2015 Weiwei Liu, Ivor Tsang

Based on our results, we propose a dynamic programming based classifier chain (CC-DP) algorithm to search the globally optimal label order for CC and a greedy classifier chain (CC-Greedy) algorithm to find a locally optimal CC.

Classification General Classification +1
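
For background, the vanilla classifier chain whose label order CC-DP and CC-Greedy optimize can be sketched with scikit-learn; the fixed order below is arbitrary, whereas the paper searches for a globally or locally optimal order:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.multioutput import ClassifierChain

    # hypothetical multi-label data: 200 examples, 10 features, 4 labels
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    Y = (rng.random((200, 4)) < 0.3).astype(int)

    # each label's classifier also sees the predictions for the labels earlier in the chain
    chain = ClassifierChain(LogisticRegression(max_iter=1000), order=[0, 1, 2, 3])
    chain.fit(X, Y)
    Y_pred = chain.predict(X)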
