no code implementations • Findings (EMNLP) 2021 • Yiming Wang, Ximing Li, Xiaotang Zhou, Jihong Ouyang
Short text has become an increasingly popular form of text data, e.g., Twitter posts, news titles, and product reviews.
no code implementations • 21 Jan 2023 • Ximing Li, Chendi Wang, Guang Cheng
To complete the picture, we establish a lower bound for TV accuracy that holds for every $\epsilon$-DP synthetic data generator.
1 code implementation • 24 Nov 2022 • Ximing Li, Yuanzhi Jiang, Changchun Li, Yiyuan Wang, Jihong Ouyang
Inspired by the impressive success of deep Semi-Supervised (SS) learning, we transform the PL learning problem into an SS learning problem, and propose a novel PL learning method, namely Partial Label learning with Semi-supervised Perspective (PLSP).
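The core transformation can be sketched as follows: instances whose candidate-label set contains one clearly dominant label become pseudo-labeled data, while ambiguous instances are treated as unlabeled. This is a minimal illustrative sketch, not the exact PLSP criterion; the confidence scores, `threshold`, and the renormalization rule are all assumptions.

```python
import numpy as np

def partial_to_semisupervised(candidate_sets, scores, threshold=0.9):
    """Split partially labeled instances into pseudo-labeled and unlabeled sets.

    candidate_sets: list of candidate-label sets, one per instance.
    scores: (n, k) array of model confidences over all k labels.
    An instance becomes pseudo-labeled when its top candidate label
    dominates the candidate-set probability mass (hypothetical rule,
    not the paper's exact selection criterion).
    """
    labeled, unlabeled = [], []
    for i, cands in enumerate(candidate_sets):
        cand = sorted(cands)
        probs = scores[i, cand]
        probs = probs / probs.sum()  # renormalize over the candidate set
        j = int(np.argmax(probs))
        if probs[j] >= threshold:
            labeled.append((i, cand[j]))  # confident: keep as pseudo-label
        else:
            unlabeled.append(i)           # ambiguous: treat as unlabeled
    return labeled, unlabeled
```

The resulting pseudo-labeled and unlabeled splits can then be fed to any off-the-shelf semi-supervised learner.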
1 code implementation • COLING 2022 • Bing Wang, Liang Ding, Qihuang Zhong, Ximing Li, DaCheng Tao
Aspect-based sentiment analysis (ABSA) is a fine-grained sentiment analysis task that focuses on detecting the sentiment polarity towards a given aspect in a sentence.
no code implementations • 20 Nov 2021 • Bing Wang, Yue Wang, Ximing Li, Jihong Ouyang
Recent generative dataless methods construct document-specific category priors from seed word occurrences only; however, such category priors often carry very limited and even noisy supervision signals.
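A document-specific category prior built from seed word occurrences can be illustrated with a toy estimator. The count-plus-smoothing normalization below is a hypothetical choice for illustration, not the paper's exact estimator.

```python
from collections import Counter

def seed_word_prior(doc_tokens, seed_words, smoothing=1.0):
    """Illustrative document-specific category prior from seed word counts.

    doc_tokens: list of tokens in one document.
    seed_words: dict mapping category -> list of seed words.
    Returns a normalized distribution over categories (hypothetical
    additive-smoothing estimator for illustration only).
    """
    counts = Counter(doc_tokens)
    raw = {c: sum(counts[w] for w in ws) + smoothing
           for c, ws in seed_words.items()}
    z = sum(raw.values())
    return {c: v / z for c, v in raw.items()}
```

A document mentioning "game" and "team" would receive most of its prior mass on a sports category, which is exactly the kind of weak, occurrence-based signal the abstract describes as limited and potentially noisy.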
no code implementations • 22 Oct 2021 • Jinjin Chi, Zhiyao Yang, Jihong Ouyang, Ximing Li
The basic idea is to introduce a variational distribution as an approximation of the true continuous barycenter, framing barycenter computation as an optimization problem in which the parameters of the variational distribution are adjusted so that the proxy distribution approaches the barycenter.
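The optimization view can be illustrated in the simplest possible setting: a variational 1-D Gaussian fit by gradient descent on the weighted sum of closed-form W2 distances to Gaussian inputs. This toy (where the optimum is actually known in closed form) only demonstrates the "adjust variational parameters to approach the barycenter" idea, not the paper's method for general continuous distributions.

```python
def variational_barycenter_1d(means, stds, weights, lr=0.1, steps=500):
    """Fit a variational Gaussian N(m, s^2) approximating the Wasserstein
    barycenter of 1-D Gaussians N(means[i], stds[i]^2) with the given
    weights, by gradient descent on sum_i w_i * W2^2, where
    W2^2(N(m, s^2), N(mi, si^2)) = (m - mi)^2 + (s - si)^2.
    Toy illustration: in 1-D the optimum is m* = sum(w_i m_i),
    s* = sum(w_i s_i)."""
    m, s = 0.0, 1.0  # initial variational parameters
    for _ in range(steps):
        grad_m = sum(w * 2 * (m - mi) for w, mi in zip(weights, means))
        grad_s = sum(w * 2 * (s - si) for w, si in zip(weights, stds))
        m -= lr * grad_m
        s -= lr * grad_s
    return m, s
```

For two equally weighted inputs N(0, 1) and N(2, 9), the iterates converge to the known barycenter N(1, 4).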
no code implementations • ICLR 2022 • Changchun Li, Ximing Li, Lei Feng, Jihong Ouyang
In this paper, we propose a novel PU learning method, namely Positive and unlabeled learning with Partially Positive Mixup (P3Mix), which simultaneously benefits from data augmentation and supervision correction with a heuristic mixup technique.
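The mixup component can be sketched as interpolating an unlabeled example toward a labeled positive, yielding a "partially positive" sample with a soft label. This is a hypothetical sketch of such a step, not the exact P3Mix heuristic, which also performs supervision correction; the Beta parameter and the anchoring rule are assumptions.

```python
import numpy as np

def mixup_pu(x_unlabeled, x_positive, alpha=0.5, rng=None):
    """Mix an unlabeled example toward a positive one (illustrative only).

    Draws lam ~ Beta(alpha, alpha) and keeps the larger of lam and
    1 - lam so the mixture stays closer to the positive anchor,
    returning the mixed feature vector and a soft positive label.
    """
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1 - lam)  # bias the mixture toward the positive anchor
    x_mix = lam * x_positive + (1 - lam) * x_unlabeled
    y_mix = lam              # soft label in [0.5, 1]
    return x_mix, y_mix
```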
no code implementations • ACL 2021 • Changchun Li, Ximing Li, Jihong Ouyang
They initialize the deep classifier by training on the labeled texts, and then alternately predict pseudo-labels for the unlabeled texts and retrain the deep classifier on the mixture of labeled and pseudo-labeled texts.
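The alternating procedure described above is a standard self-training loop, sketched below with a tiny nearest-centroid classifier standing in for the deep model. The confidence threshold and number of rounds are assumptions for illustration.

```python
import numpy as np

class CentroidClassifier:
    """Toy binary classifier: stands in for the deep classifier."""
    def fit(self, X, y):
        self.centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
        return self

    def predict_proba(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids[None], axis=2)
        e = np.exp(-d)  # softmax over negative distances
        return e / e.sum(axis=1, keepdims=True)

    def predict(self, X):
        return self.predict_proba(X).argmax(axis=1)

def self_train(X_lab, y_lab, X_unlab, rounds=3, threshold=0.6):
    """Self-training loop: initialize on labeled data, then alternately
    pseudo-label confident unlabeled examples and retrain on the union
    (hypothetical confidence threshold)."""
    clf = CentroidClassifier().fit(X_lab, y_lab)
    for _ in range(rounds):
        proba = clf.predict_proba(X_unlab)
        mask = proba.max(axis=1) >= threshold  # keep confident predictions
        if not mask.any():
            break
        X_aug = np.vstack([X_lab, X_unlab[mask]])
        y_aug = np.concatenate([y_lab, proba.argmax(axis=1)[mask]])
        clf = CentroidClassifier().fit(X_aug, y_aug)
    return clf
```

Swapping the centroid model for a neural text classifier recovers the pattern the abstract describes.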