no code implementations • Findings (EMNLP) 2021 • Haoyu Wang, Fenglong Ma, Yaqing Wang, Jing Gao
We propose to mine outline knowledge of concepts related to given sentences from Wikipedia via the BM25 model.
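For reference, BM25 ranks documents against a query with a term-frequency/inverse-document-frequency weighting; a minimal pure-Python sketch is below (an illustrative toy retriever with a made-up corpus and the common defaults k1=1.5, b=0.75, not the authors' actual Wikipedia pipeline).

```python
# Minimal BM25 retriever sketch (illustrative only, not the authors' pipeline).
import math
from collections import Counter

def bm25_scores(query, corpus, k1=1.5, b=0.75):
    """Score each document in `corpus` (list of token lists) against `query` tokens."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    df = Counter(t for d in corpus for t in set(d))   # document frequency per term
    scores = []
    for doc in corpus:
        tf = Counter(doc)
        s = 0.0
        for t in query:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(s)
    return scores

corpus = [["deep", "learning", "concepts"], ["wikipedia", "outline", "of", "concepts"]]
print(bm25_scores(["outline", "concepts"], corpus))   # ranks the second document higher
```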
no code implementations • EMNLP 2021 • Hengtong Zhang, Tianhang Zheng, Yaliang Li, Jing Gao, Lu Su, Bo Li
To address this problem, we propose a training framework with certified robustness to eliminate the causes that trigger the generation of profanity.
no code implementations • 28 Sep 2023 • Tianci Liu, Haoyu Wang, Feijie Wu, Hengtong Zhang, Pan Li, Lu Su, Jing Gao
Fair machine learning seeks to mitigate model prediction bias against certain demographic subgroups, such as the elderly and women.
no code implementations • 2 Jun 2023 • Zhuo Wang, Rongzhen Li, Bowen Dong, Jie Wang, Xiuxing Li, Ning Liu, Chenhui Mao, Wei zhang, Liling Dong, Jing Gao, Jianyong Wang
In this paper, we explore the potential of LLMs such as GPT-4 to outperform traditional AI tools in dementia diagnosis.
no code implementations • 25 Mar 2023 • Murray Z. Frank, Jing Gao, Keer Yang
There is considerable evidence that machine learning algorithms have better predictive abilities than humans in various financial settings.
no code implementations • 19 Feb 2023 • Tianci Liu, Haoyu Wang, Yaqing Wang, Xiaoqian Wang, Lu Su, Jing Gao
This new framework utilizes data with similar labels when estimating fairness on a particular label group, which improves stability, and can unify DP and EOp.
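For context, the two group-fairness criteria unified here are commonly defined as follows (standard definitions, with sensitive attribute $A$, true label $Y$, and prediction $\hat{Y}$; the paper's own notation may differ):

```latex
% Demographic parity (DP): predictions independent of the sensitive attribute
\Pr(\hat{Y}=1 \mid A=0) = \Pr(\hat{Y}=1 \mid A=1)
% Equal opportunity (EOp): equal true positive rates across groups
\Pr(\hat{Y}=1 \mid A=0, Y=1) = \Pr(\hat{Y}=1 \mid A=1, Y=1)
```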
no code implementations • 1 Dec 2022 • Junde Wu, Huihui Fang, Yehui Yang, Yuanpei Liu, Jing Gao, Lixin Duan, Weihua Yang, Yanwu Xu
In this paper, we propose a novel neural network framework, called Multi-Rater Prism (MrPrism) to learn the medical image segmentation from multiple labels.
no code implementations • 28 Nov 2022 • Dong Li, Ruoming Jin, Zhenming Liu, Bin Ren, Jing Gao, Zhi Liu
Since Rendle and Krichene argued that commonly used sampling-based evaluation metrics are "inconsistent" with respect to the global metrics (even in expectation), there have been a few studies on sampling-based recommender system evaluation.
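As a concrete illustration of the gap between sampled and global metrics, the toy NumPy sketch below compares Recall@k computed against the full item catalog with Recall@k computed against a small random sample of negatives (all sizes and scores are synthetic assumptions, not taken from the cited studies):

```python
# Toy comparison of global vs. sampled Recall@k (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k, n_sampled = 1000, 5000, 10, 100
scores = rng.normal(size=(n_users, n_items))           # model scores for every item
pos = rng.integers(0, n_items, size=n_users)           # one held-out positive per user
pos_scores = scores[np.arange(n_users), pos][:, None]  # (n_users, 1)

# Global Recall@k: rank the positive against all items.
rank_global = (scores > pos_scores).sum(axis=1)
recall_global = (rank_global < k).mean()

# Sampled Recall@k: rank the positive against n_sampled random negatives.
neg = rng.integers(0, n_items, size=(n_users, n_sampled))
neg_scores = scores[np.arange(n_users)[:, None], neg]  # (n_users, n_sampled)
rank_sampled = (neg_scores > pos_scores).sum(axis=1)
recall_sampled = (rank_sampled < k).mean()

print(recall_global, recall_sampled)  # the sampled recall is typically far higher
```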
1 code implementation • 31 Oct 2022 • Yaqing Wang, Sahaj Agarwal, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao
Standard fine-tuning of large pre-trained language models (PLMs) for downstream tasks requires updating hundreds of millions to billions of parameters, and storing a large copy of the PLM weights for every task, resulting in increased costs for storing, sharing, and serving the models.
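Parameter-efficient alternatives typically freeze the PLM and train only small inserted modules; a minimal bottleneck-adapter sketch in PyTorch is shown below (a generic illustration with arbitrary dimensions, not the specific architecture proposed in this paper):

```python
# Generic bottleneck adapter (illustrative; not this paper's exact module).
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Down-project -> nonlinearity -> up-project, with a residual connection."""
    def __init__(self, hidden_dim=768, bottleneck_dim=16):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))  # residual keeps the frozen PLM path intact

# Only the adapter's ~2 * hidden_dim * bottleneck_dim parameters are trained per task.
x = torch.randn(2, 10, 768)    # (batch, sequence, hidden)
print(Adapter()(x).shape)      # torch.Size([2, 10, 768])
```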
no code implementations • 6 Oct 2022 • Liwang Zhou, Jing Gao
A single decomposition approach often cannot serve numerous forecasting tasks, since standard time series decomposition lacks flexibility and robustness.
1 code implementation • 5 Aug 2022 • Junde Wu, Yu Zhang, Rao Fu, Yuanpei Liu, Jing Gao
Then, to ensure that the method adapts to the dynamic and unseen person flow, we propose a Graph Convolutional Network (GCN) with a simple Nearest Neighbor (NN) strategy to accurately cluster the instances of CSG.
1 code implementation • 13 Jun 2022 • Feijie Wu, Song Guo, Zhihao Qu, Shiqi He, Ziming Liu, Jing Gao
The lack of updates from inactive clients under partial client participation makes the model aggregation more likely to deviate from the aggregation based on full client participation.
no code implementations • 12 Jun 2022 • Junde Wu, Huihui Fang, Fangxin Shang, Dalu Yang, Zhaowei Wang, Jing Gao, Yehui Yang, Yanwu Xu
To model the segmentation-diagnosis interaction, SeA-block first embeds the diagnosis feature based on the segmentation information via the encoder, and then transfers the embedding back to the diagnosis feature space by a decoder.
1 code implementation • 24 May 2022 • Yaqing Wang, Sahaj Agarwal, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao
Standard fine-tuning of large pre-trained language models (PLMs) for downstream tasks requires updating hundreds of millions to billions of parameters, and storing a large copy of the PLM weights for every task, resulting in increased costs for storing, sharing, and serving the models.
1 code implementation • 22 Apr 2022 • Jing Gao, Tilo Burghardt, Neill W. Campbell
In particular, for the task of automatic identification of individual Holstein-Friesians in real-world farm CCTV, we show that self-supervision, metric learning, cluster analysis, and active learning can complement each other to significantly reduce the annotation requirements usually needed to train cattle identification frameworks.
no code implementations • 25 Mar 2022 • Jiacong Hu, Jing Gao, Zunlei Feng, Lechao Cheng, Jie Lei, Hujun Bao, Mingli Song
The feature maps are used to locate the critical features in each layer.
no code implementations • 17 Dec 2021 • Tang Li, Jing Gao, Xi Peng
Here we explore the capacity of deep spatial learning for the predictive modeling of urbanization.
1 code implementation • Findings (NAACL) 2022 • Yaqing Wang, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao
The first is the use of self-training to leverage large amounts of unlabeled data for prompt-based fine-tuning in few-shot settings.
no code implementations • 29 Sep 2021 • Dong Li, Zhenming Liu, Ruoming Jin, Zhi Liu, Jing Gao, Bin Ren
Recently, a wide range of recommendation algorithms inspired by deep learning techniques have emerged as the performance leaders on several standard recommendation benchmarks.
no code implementations • 29 Sep 2021 • Liuyi Yao, Yaliang Li, Bolin Ding, Jingren Zhou, Jinduo Liu, Mengdi Huai, Jing Gao
To tackle these challenges, we propose a novel causal-graph-based fair prediction framework, which integrates graph structure learning into fair prediction to ensure that unfair pathways are excluded from the causal graph.
no code implementations • Findings (EMNLP) 2021 • Yaqing Wang, Haoda Chu, Chao Zhang, Jing Gao
In this work, we study the problem of named entity recognition (NER) in a low resource scenario, focusing on few-shot and zero-shot settings.
no code implementations • 22 Jun 2021 • Yaqing Wang, Fenglong Ma, Haoyu Wang, Kishlay Jha, Jing Gao
The experimental results show our proposed MetaFEND model can detect fake news on never-seen events effectively and outperform the state-of-the-art methods.
no code implementations • 20 Jun 2021 • Dong Li, Ruoming Jin, Jing Gao, Zhi Liu
Recently, Rendle has warned that the use of sampling-based top-$k$ metrics might not suffice.
no code implementations • 27 May 2021 • Ruoming Jin, Dong Li, Jing Gao, Zhi Liu, Li Chen, Yang Zhou
Through the derivation and analysis of the closed-form solutions for two basic regression and matrix factorization approaches, we found that these two approaches are indeed inherently related but also diverge in how they "scale down" the singular values of the original user-item interaction matrix.
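The standard contrast behind this observation can be written as follows (a textbook illustration of "scaling down" versus truncating singular values, not the paper's exact derivation): with the interaction matrix factored as $R = U \Sigma V^\top$,

```latex
\hat{R}_{\mathrm{ridge}} = U\,\mathrm{diag}\!\left(\tfrac{\sigma_i^{2}}{\sigma_i^{2}+\lambda}\right)\Sigma V^\top,
\qquad
\hat{R}_{\mathrm{MF}} = U\,\mathrm{diag}(\underbrace{1,\dots,1}_{k},\,0,\dots,0)\,\Sigma V^\top .
```

A ridge-regularized linear model shrinks every singular value smoothly by the factor $\sigma_i^2/(\sigma_i^2+\lambda)$, whereas rank-$k$ matrix factorization keeps the top $k$ singular values and discards the rest.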
2 code implementations • 5 May 2021 • Jing Gao, Tilo Burghardt, William Andrew, Andrew W. Dowsey, Neill W. Campbell
Motivated by the labelling burden involved in constructing visual cattle identification systems, we propose exploiting the temporal coat pattern appearance across videos as a self-supervision signal for animal identity learning.
no code implementations • 17 Mar 2021 • Haoyu Liu, Fenglong Ma, Shibo He, Jiming Chen, Jing Gao
Meanwhile, we propose a post-processing framework to tune the original ensemble results through a stacking process so that we can achieve a trade-off between fairness and detection performance.
no code implementations • 2 Mar 2021 • Ruoming Jin, Dong Li, Benjamin Mudrak, Jing Gao, Zhi Liu
The proposed approaches are either rather uninformative (linking sampling to metric evaluation) or only work on simple metrics, such as Recall/Precision (Krichene and Rendle 2020; Li et al. 2020).
no code implementations • 1 Jan 2021 • Yaqing Wang, Subhabrata Mukherjee, Haoda Chu, Yuancheng Tu, Ming Wu, Jing Gao, Ahmed Hassan Awadallah
Neural sequence labeling is an important technique employed for many Natural Language Processing (NLP) tasks, such as Named Entity Recognition (NER), slot tagging for dialog systems and semantic parsing.
no code implementations • 7 Oct 2020 • Yaqing Wang, Subhabrata Mukherjee, Haoda Chu, Yuancheng Tu, Ming Wu, Jing Gao, Ahmed Hassan Awadallah
While self-training serves as an effective mechanism to learn from large amounts of unlabeled data, meta-learning helps with adaptive sample re-weighting to mitigate error propagation from noisy pseudo-labels.
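A highly simplified version of this loop is sketched below; for illustration it re-weights pseudo-labeled samples by prediction confidence, standing in for the meta-learned weights described above (toy data and a scikit-learn model, all assumptions made purely for illustration):

```python
# Toy self-training loop with sample re-weighting (confidence stands in for meta-learned weights).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(40, 5))
y_lab = (X_lab[:, 0] > 0).astype(int)          # toy labeling rule
X_unlab = rng.normal(size=(400, 5))

model = LogisticRegression().fit(X_lab, y_lab)
for _ in range(3):                              # self-training rounds
    proba = model.predict_proba(X_unlab)
    pseudo = proba.argmax(axis=1)               # pseudo-labels from the current model
    conf = proba.max(axis=1)                    # down-weight low-confidence pseudo-labels
    X_all = np.vstack([X_lab, X_unlab])
    y_all = np.concatenate([y_lab, pseudo])
    weights = np.concatenate([np.ones(len(y_lab)), conf])
    model = LogisticRegression().fit(X_all, y_all, sample_weight=weights)
```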
no code implementations • 16 Aug 2020 • Yaqing Wang, Fenglong Ma, Jing Gao
To tackle this challenging task, we propose a cross-graph representation learning framework, i.e., CrossVal, which can leverage an external KG to validate the facts in the target KG efficiently.
2 code implementations • 16 Jun 2020 • William Andrew, Jing Gao, Siobhan Mullan, Neill Campbell, Andrew W Dowsey, Tilo Burghardt
Holstein-Friesian cattle exhibit individually-characteristic black and white coat patterns visually akin to those arising from Turing's reaction-diffusion systems.
no code implementations • 15 Jun 2020 • Yaqing Wang, Yifan Ethan Xu, Xi-An Li, Xin Luna Dong, Jing Gao
(1) We formalize the problem of validating the textual attribute values of products from a variety of categories as a natural language inference task in the few-shot learning setting, and propose a meta-learning latent variable model to jointly process the signals obtained from product profiles and textual attribute values.
no code implementations • 21 Apr 2020 • Alexander Hanbo Li, Yaqing Wang, Changyou Chen, Jing Gao
Effective inference for a generative adversarial model remains an important and challenging problem.
no code implementations • 12 Apr 2020 • Zhi Liu, Yan Huang, Jing Gao, Li Chen, Dong Li
Similar product recommendation is one of the most common scenarios in e-commerce.
no code implementations • 7 Apr 2020 • Hengtong Zhang, Yaliang Li, Bolin Ding, Jing Gao
In real-world recommendation systems, the cost of retraining recommendation models is high, and the interaction frequency between users and a recommendation system is restricted. Given these real-world restrictions, we propose to let the agent interact with a recommender simulator instead of the target recommendation system and leverage the transferability of the generated adversarial samples to poison the target system.
1 code implementation • 5 Feb 2020 • Liuyi Yao, Zhixuan Chu, Sheng Li, Yaliang Li, Jing Gao, Aidong Zhang
Propelled by the rapid development of machine learning, various causal effect estimation methods for observational data have sprung up.
1 code implementation • 28 Dec 2019 • Yaqing Wang, Weifeng Yang, Fenglong Ma, Jin Xu, Bin Zhong, Qiang Deng, Jing Gao
In order to tackle this challenge, we propose a reinforced weakly-supervised fake news detection framework, i.e., WeFEND, which can leverage users' reports as weak supervision to enlarge the amount of training data for fake news detection.
no code implementations • 22 Dec 2019 • Jing Gao, N. Anantrasirichai, David Bull
This paper describes a novel deep learning-based method for mitigating the effects of atmospheric distortion.
no code implementations • 26 Apr 2019 • Hengtong Zhang, Tianhang Zheng, Jing Gao, Chenglin Miao, Lu Su, Yaliang Li, Kui Ren
Knowledge graph embedding (KGE) is a technique for learning continuous embeddings for entities and relations in the knowledge graph. Due to its benefit to a variety of downstream tasks such as knowledge graph completion, question answering and recommendation, KGE has gained significant attention recently.
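As a concrete example of such an embedding, the classic TransE scoring function (one common KGE model, not necessarily the one studied in this paper) treats the relation vector as a translation from the head entity to the tail entity, so a plausible triple $(h, r, t)$ should have a small distance and hence a high score:

```latex
f(h, r, t) = -\left\lVert \mathbf{e}_h + \mathbf{e}_r - \mathbf{e}_t \right\rVert_2
```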
1 code implementation • NeurIPS 2018 • Liuyi Yao, Sheng Li, Yaliang Li, Mengdi Huai, Jing Gao, Aidong Zhang
Estimating individual treatment effect (ITE) is a challenging problem in causal inference, due to the missing counterfactuals and the selection bias.
no code implementations • 14 Oct 2018 • Yaliang Li, Liuyi Yao, Nan Du, Jing Gao, Qi Li, Chuishi Meng, Chenwei Zhang, Wei Fan
Patients who have medical information demands tend to post questions about their health conditions on these crowdsourced Q&A websites and get answers from other users.
no code implementations • 10 Oct 2018 • Yaliang Li, Houping Xiao, Zhan Qin, Chenglin Miao, Lu Su, Jing Gao, Kui Ren, Bolin Ding
To better utilize sensory data, the problem of truth discovery, whose goal is to estimate user quality and infer reliable aggregated results through quality-aware data aggregation, has emerged as a hot topic.
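A minimal sketch of the alternating computation such methods perform is below (a generic CRH-style toy for continuous observations with a simulated ground truth; the specific weight update shown is one common choice, not necessarily this paper's):

```python
# Toy truth discovery: alternate between quality-aware aggregation and source-weight updates.
import numpy as np

rng = np.random.default_rng(0)
truth = rng.normal(size=20)                       # unknown ground truth (simulated here)
noise = np.array([0.1, 0.5, 2.0])                 # three sources with different reliability
obs = truth + rng.normal(size=(3, 20)) * noise[:, None]

weights = np.ones(3) / 3
for _ in range(10):
    est = (weights[:, None] * obs).sum(axis=0) / weights.sum()  # quality-aware aggregation
    err = ((obs - est) ** 2).sum(axis=1)                         # each source's total deviation
    weights = -np.log(err / err.sum())                           # more reliable -> larger weight
print(weights.round(2))  # the low-noise source ends up with the largest weight
```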
no code implementations • 27 Sep 2018 • Hanbo Li, Yaqing Wang, Changyou Chen, Jing Gao
We propose a novel approach, Adversarial Inference by Matching priors and conditionals (AIM), which explicitly matches prior and conditional distributions in both data and code spaces, and puts a direct constraint on the dependency structure of the generative model.
1 code implementation • Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining 2018 • Yaqing Wang, Fenglong Ma, Zhiwei Jin, Ye Yuan, Guangxu Xun, Kishlay Jha, Lu Su, Jing Gao
One of the unique challenges for fake news detection on social media is how to identify fake news on newly emerged events.
no code implementations • 6 Jul 2017 • Fenglong Ma, Radha Chitta, Saurabh Kataria, Jing Zhou, Palghat Ramesh, Tong Sun, Jing Gao
Question answering is an important and difficult task in the natural language processing domain, because many basic natural language processing tasks can be cast as question answering tasks.
no code implementations • 19 Jun 2017 • Fenglong Ma, Radha Chitta, Jing Zhou, Quanzeng You, Tong Sun, Jing Gao
Existing work solves this problem by employing recurrent neural networks (RNNs) to model EHR data and utilizing a simple attention mechanism to interpret the results.
no code implementations • 11 Aug 2016 • Chenwei Zhang, Sihong Xie, Yaliang Li, Jing Gao, Wei Fan, Philip S. Yu
We propose a novel multi-source hierarchical prediction consolidation method to effectively exploit the complicated hierarchical label structures to resolve the noisy and conflicting information that inherently originates from multiple imperfect sources.
no code implementations • 16 Oct 2013 • Sihong Xie, Xiangnan Kong, Jing Gao, Wei Fan, Philip S. Yu
Nonetheless, data nowadays are usually multi-labeled, such that more than one label has to be predicted at the same time.
no code implementations • NeurIPS 2009 • Jing Gao, Feng Liang, Wei Fan, Yizhou Sun, Jiawei Han
First, we can boost the diversity of classification ensemble by incorporating multiple clustering outputs, each of which provides grouping constraints for the joint label predictions of a set of related objects.