no code implementations • EMNLP 2021 • Hengtong Zhang, Tianhang Zheng, Yaliang Li, Jing Gao, Lu Su, Bo Li
To address this problem, we propose a training framework with certified robustness to eliminate the causes that trigger the generation of profanity.
no code implementations • Findings (EMNLP) 2021 • Haoyu Wang, Fenglong Ma, Yaqing Wang, Jing Gao
We propose to mine outline knowledge of concepts related to given sentences from Wikipedia via BM25 model.
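For illustration, a minimal sketch of BM25-style snippet retrieval using the rank_bm25 package; the toy corpus, tokenizer, and query are assumptions, not the paper's actual Wikipedia pipeline.

```python
# Hedged sketch: score Wikipedia-style snippets against a sentence with BM25.
# Corpus, tokenization, and query here are illustrative assumptions.
from rank_bm25 import BM25Okapi  # pip install rank-bm25

corpus = [
    "machine learning studies algorithms that improve with experience",
    "wikipedia is a free online encyclopedia of general knowledge",
    "bm25 is a bag-of-words ranking function used by search engines",
]
tokenized = [doc.split() for doc in corpus]   # naive whitespace tokenization
bm25 = BM25Okapi(tokenized)

query = "rank wikipedia concepts for a sentence".split()
scores = bm25.get_scores(query)               # one relevance score per snippet
print(corpus[scores.argmax()])                # best-matching outline snippet
```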
no code implementations • 3 Apr 2025 • Jing Gao, Ce Zheng, Laszlo A. Jeni, Zackory Erickson
In-bed human mesh recovery can be crucial in enabling several healthcare applications, including sleep pattern monitoring, rehabilitation support, and pressure ulcer prevention.
no code implementations • 29 Mar 2025 • Tianyang Xu, Xiaoze Liu, Feijie Wu, Xiaoqian Wang, Jing Gao
Large Language Models (LLMs) have transformed natural language processing by learning from massive datasets, yet this rapid progress has also drawn legal scrutiny, as the ability to unintentionally generate copyrighted content has already prompted several prominent lawsuits.
no code implementations • 3 Mar 2025 • Elizabeth G. Campolongo, Yuan-Tang Chou, Ekaterina Govorkova, Wahid Bhimji, Wei-Lun Chao, Chris Harris, Shih-Chieh Hsu, Hilmar Lapp, Mark S. Neubauer, Josephine Namayanja, Aneesh Subramanian, Philip Harris, Advaith Anand, David E. Carlyn, Subhankar Ghosh, Christopher Lawrence, Eric Moreno, Ryan Raikman, Jiaman Wu, Ziheng Zhang, Bayu Adhi, Mohammad Ahmadi Gharehtoragh, Saúl Alonso Monsalve, Marta Babicz, Furqan Baig, Namrata Banerji, William Bardon, Tyler Barna, Tanya Berger-Wolf, Adji Bousso Dieng, Micah Brachman, Quentin Buat, David C. Y. Hui, Phuong Cao, Franco Cerino, Yi-Chun Chang, Shivaji Chaulagain, An-Kai Chen, Deming Chen, Eric Chen, Chia-Jui Chou, Zih-Chen Ciou, Miles Cochran-Branson, Artur Cordeiro Oudot Choi, Michael Coughlin, Matteo Cremonesi, Maria Dadarlat, Peter Darch, Malina Desai, Daniel Diaz, Steven Dillmann, Javier Duarte, Isla Duporge, Urbas Ekka, Saba Entezari Heravi, Hao Fang, Rian Flynn, Geoffrey Fox, Emily Freed, Hang Gao, Jing Gao, Julia Gonski, Matthew Graham, Abolfazl Hashemi, Scott Hauck, James Hazelden, Joshua Henry Peterson, Duc Hoang, Wei Hu, Mirco Huennefeld, David Hyde, Vandana Janeja, Nattapon Jaroenchai, Haoyi Jia, Yunfan Kang, Maksim Kholiavchenko, Elham E. Khoda, Sangin Kim, Aditya Kumar, Bo-Cheng Lai, Trung Le, Chi-Wei Lee, Janghyeon Lee, Shaocheng Lee, Suzan van der Lee, Charles Lewis, Haitong Li, Haoyang Li, Henry Liao, Mia Liu, Xiaolin Liu, Xiulong Liu, Vladimir Loncar, Fangzheng Lyu, Ilya Makarov, Abhishikth Mallampalli Chen-Yu Mao, Alexander Michels, Alexander Migala, Farouk Mokhtar, Mathieu Morlighem, Min Namgung, Andrzej Novak, Andrew Novick, Amy Orsborn, Anand Padmanabhan, Jia-Cheng Pan, Sneh Pandya, Zhiyuan Pei, Ana Peixoto, George Percivall, Alex Po Leung, Sanjay Purushotham, Zhiqiang Que, Melissa Quinnan, Arghya Ranjan, Dylan Rankin, Christina Reissel, Benedikt Riedel, Dan Rubenstein, Argyro Sasli, Eli Shlizerman, Arushi Singh, Kim Singh, Eric R. Sokol, Arturo Sorensen, Yu Su, Mitra Taheri, Vaibhav Thakkar, Ann Mariam Thomas, Eric Toberer, Chenghan Tsai, Rebecca Vandewalle, Arjun Verma, Ricco C. Venterea, He Wang, Jianwu Wang, Sam Wang, Shaowen Wang, Gordon Watts, Jason Weitz, Andrew Wildridge, Rebecca Williams, Scott Wolf, Yue Xu, Jianqi Yan, Jai Yu, Yulei Zhang, Haoran Zhao, Ying Zhao, Yibo Zhong
We present the different datasets along with a scheme to make machine learning challenges around the three datasets findable, accessible, interoperable, and reusable (FAIR).
no code implementations • 1 Mar 2025 • Tianci Liu, Ruirui Li, Yunzhe Qi, Hui Liu, Xianfeng Tang, Tianqi Zheng, Qingyu Yin, Monica Xiao Cheng, Jun Huan, Haoyu Wang, Jing Gao
In light of this, we explore the feasibility of representation fine-tuning, which applies a linear update to a small number of representations in a learned subspace, for knowledge editing.
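As a rough illustration of the idea, here is a hypothetical PyTorch sketch of a linear intervention in a learned low-rank subspace; the module name, rank, and initialization are assumptions, and the paper's exact formulation may differ.

```python
# Hypothetical sketch of a linear edit applied in a learned subspace, in the
# spirit of representation fine-tuning; shapes, names, and init are assumptions.
import torch
import torch.nn as nn

class SubspaceEdit(nn.Module):
    def __init__(self, hidden: int, rank: int):
        super().__init__()
        self.R = nn.Parameter(0.02 * torch.randn(rank, hidden))  # subspace basis
        self.W = nn.Linear(hidden, rank)                         # learned update source

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Swap the component of h lying in span(R) for the learned value W(h)
        # (exactly so when R is row-orthonormal), leaving the rest of the
        # frozen representation largely untouched.
        return h + (self.W(h) - h @ self.R.T) @ self.R

edit = SubspaceEdit(hidden=768, rank=4)
h = torch.randn(2, 768)            # a couple of frozen-model representations
print(edit(h).shape)               # torch.Size([2, 768])
```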
no code implementations • 16 Feb 2025 • David Yin, Jing Gao
We developed LeanNavigator, a novel method for generating a large-scale dataset of Lean theorems and proofs by finding new ways to prove existing Lean theorems.
no code implementations • 14 Jan 2025 • Feijie Wu, Zitao Li, Fei Wei, Yaliang Li, Bolin Ding, Jing Gao
Experimental results demonstrate that RopMura effectively handles both single-hop and multi-hop queries, with the routing mechanism enabling precise answers for single-hop queries and the combined routing and planning mechanisms achieving accurate, multi-step resolutions for complex queries.
1 code implementation • 3 Sep 2024 • Zeyu Zhou, Tianci Liu, Ruqi Bai, Jing Gao, Murat Kocaoglu, David I. Inouye
To fill in this gap, we provide a theoretical study on the inherent trade-off between CF and predictive performance in a model-agnostic manner.
no code implementations • 31 Jul 2024 • Lianghao Tan, Shubing Liu, Jing Gao, Xiaoyi Liu, Linyue Chu, Huangqi Jiang
With the rapid advancement of deep learning technologies, computer vision has shown immense potential in retail automation.
1 code implementation • 28 Jul 2024 • Feijie Wu, Xingchen Wang, Yaqing Wang, Tianci Liu, Lu Su, Jing Gao
In federated learning (FL), accommodating clients' varied computational capacities poses a challenge, often limiting the participation of those with constrained resources in global model training.
no code implementations • 3 Jul 2024 • Feijie Wu, Xiaoze Liu, Haoyu Wang, Xingchen Wang, Lu Su, Jing Gao
Our federated RLHF methods (i.e., FedBis and FedBiscuit) encode each client's preferences into binary selectors and aggregate them to capture common preferences.
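A minimal sketch of the aggregation step, assuming each client's binary selector is a linear scorer over response-pair features; the FedAvg-style weighted average is illustrative, not the exact FedBis/FedBiscuit procedure.

```python
# Minimal sketch, assuming each client's binary selector is a linear scorer
# over response-pair features; aggregate() is a FedAvg-style weighted average,
# not the exact FedBis/FedBiscuit procedure.
import numpy as np

def aggregate(selector_weights, client_sizes):
    """Size-weighted average of per-client binary-selector parameters."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(selector_weights)                  # (clients, dim)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

clients = [np.random.randn(16) for _ in range(5)]         # local selectors
global_selector = aggregate(clients, [100, 80, 120, 60, 90])

def prefers_first(w, pair_features):
    # Positive score: the first candidate response is preferred over the second.
    return float(w @ pair_features) > 0.0

print(prefers_first(global_selector, np.random.randn(16)))
```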
1 code implementation • 25 Jun 2024 • Feijie Wu, Zitao Li, Yaliang Li, Bolin Ding, Jing Gao
Specifically, our method involves the server generating a compressed LLM and aligning its performance with the full model.
1 code implementation • 18 Jun 2024 • Xiaoze Liu, Ting Sun, Tianyang Xu, Feijie Wu, Cunxiang Wang, Xiaoqian Wang, Jing Gao
Large Language Models (LLMs) have transformed machine learning but raised significant legal concerns due to their potential to produce text that infringes on copyrights, resulting in several high-profile lawsuits.
1 code implementation • 16 Jun 2024 • Haoyu Wang, Tianci Liu, Ruirui Li, Monica Cheng, Tuo Zhao, Jing Gao
By adding a sparsity constraint on the product of low-rank matrices and converting it to row and column-wise sparsity, we ensure efficient and precise model updates.
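The toy NumPy sketch below illustrates the general mechanism of making a low-rank update BA row- and column-sparse by pruning rows of B and columns of A; the norm-based selection rule is an assumption, not the paper's exact constraint.

```python
# Toy NumPy sketch: make the low-rank update B @ A row- and column-sparse by
# keeping only the largest-norm rows of B and columns of A (selection rule is
# an illustrative assumption).
import numpy as np

d, k, r = 8, 8, 2
B, A = np.random.randn(d, r), np.random.randn(r, k)

keep_rows = np.argsort(np.linalg.norm(B, axis=1))[-2:]    # 2 strongest rows of B
keep_cols = np.argsort(np.linalg.norm(A, axis=0))[-2:]    # 2 strongest columns of A

B_s = np.zeros_like(B)
B_s[keep_rows] = B[keep_rows]
A_s = np.zeros_like(A)
A_s[:, keep_cols] = A[:, keep_cols]

delta_W = B_s @ A_s        # nonzero entries confined to 2 rows x 2 columns
print(np.count_nonzero(delta_W.any(axis=1)), "rows updated,",
      np.count_nonzero(delta_W.any(axis=0)), "columns updated")
```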
no code implementations • 1 Jun 2024 • Tianci Liu, Haoyu Wang, Shiyang Wang, Yu Cheng, Jing Gao
Large language models (LLMs) have achieved impressive performance on various natural language generation tasks.
1 code implementation • 31 May 2024 • Tianyang Xu, Shujin Wu, Shizhe Diao, Xiaoze Liu, Xingyao Wang, Yangyi Chen, Jing Gao
Large language models (LLMs) often generate inaccurate or fabricated information and generally fail to indicate their confidence, which limits their broader applications.
no code implementations • 21 May 2024 • Jing Gao, Ning Cheng, Bin Fang, Wenjuan Han
The Transformer model, which initially achieved significant success in natural language processing, has recently shown great potential in tactile perception applications.
no code implementations • 9 May 2024 • Yangyang Wang, Xu Zhan, Jing Gao, Jinjie Yao, Shunjun Wei, JianSheng Bai
However, sparse imaging based on handcrafted regularization functions suffers from target information loss when only few SAR observations are available.
1 code implementation • 1 Apr 2024 • Xiaoze Liu, Feijie Wu, Tianyang Xu, Zhuo Chen, Yichi Zhang, Xiaoqian Wang, Jing Gao
In this paper, we propose GraphEval to evaluate an LLM's performance using a substantially large test dataset.
no code implementations • 14 Mar 2024 • Ning Cheng, You Li, Jing Gao, Bin Fang, Jinan Xu, Wenjuan Han
Tactility provides crucial support and enhancement for the perception and interaction capabilities of both humans and robots.
no code implementations • 2 Mar 2024 • An Chen, Zhilong Wang, Karl Luigi Loza Vidaurre, Yanqiang Han, Simin Ye, Kehao Tao, Shiwei Wang, Jing Gao, Jinjin Li
We focus on the application of transfer learning methods for the discovery of advanced molecules/materials, particularly, the construction of transfer learning frameworks for different systems, and how transfer learning can enhance the performance of models.
no code implementations • 25 Feb 2024 • Taixi Lu, Haoyu Wang, Huajie Shao, Jing Gao, Huaxiu Yao
Existing model cascade methods seek to enhance inference efficiency by greedily selecting, from a variety of models, the lightest model whose confidence score indicates it can process the current input.
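A minimal sketch of a generic confidence-thresholded cascade of this kind; the model interfaces and thresholds are assumptions for illustration.

```python
# Generic sketch of a confidence-thresholded model cascade; model interfaces
# and thresholds are assumptions for illustration.
def cascade_predict(x, models, thresholds):
    """models: ordered light -> heavy; each returns (label, confidence)."""
    for model, tau in zip(models, thresholds):
        label, confidence = model(x)
        if confidence >= tau:          # confident enough: stop early, save compute
            return label
    return models[-1](x)[0]            # otherwise fall back to the heaviest model

light = lambda x: ("cat", 0.95 if len(x) < 5 else 0.40)   # toy light model
heavy = lambda x: ("dog", 0.99)                           # toy heavy model
print(cascade_predict("hi", [light, heavy], thresholds=[0.8, 0.0]))  # -> "cat"
```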
no code implementations • 16 Feb 2024 • Haoyu Wang, Ruirui Li, Haoming Jiang, Jinjin Tian, Zhengyang Wang, Chen Luo, Xianfeng Tang, Monica Cheng, Tuo Zhao, Jing Gao
Retrieval-augmented Large Language Models (LLMs) offer substantial benefits in enhancing performance across knowledge-intensive scenarios.
1 code implementation • 4 Jan 2024 • Jiahui Peng, Jing Gao, Xin Tong, Jing Guo, Hang Yang, Jianchuan Qi, Ruiqiao Li, Nan Li, Ming Xu
In the evolving field of corporate sustainability, analyzing unstructured Environmental, Social, and Governance (ESG) reports is a complex challenge due to their varied formats and intricate content.
no code implementations • 28 Sep 2023 • Tianci Liu, Haoyu Wang, Feijie Wu, Hengtong Zhang, Pan Li, Lu Su, Jing Gao
Fair machine learning seeks to mitigate model prediction bias against certain demographic subgroups, such as the elderly and women.
no code implementations • 2 Jun 2023 • Zhuo Wang, Rongzhen Li, Bowen Dong, Jie Wang, Xiuxing Li, Ning Liu, Chenhui Mao, Wei zhang, Liling Dong, Jing Gao, Jianyong Wang
In this paper, we explore the potential of LLMs such as GPT-4 to outperform traditional AI tools in dementia diagnosis.
no code implementations • 25 Mar 2023 • Murray Z. Frank, Jing Gao, Keer Yang
Machine learning algorithms are known to outperform human analysts in predicting corporate earnings, leading to their rapid adoption.
no code implementations • 19 Feb 2023 • Tianci Liu, Haoyu Wang, Yaqing Wang, Xiaoqian Wang, Lu Su, Jing Gao
This new framework utilizes data that have similar labels when estimating fairness on a particular label group for better stability, and can unify DP and EOp.
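For reference, the two group-fairness notions being unified can be computed as simple gap statistics; below is a toy NumPy sketch with assumed binary labels and groups.

```python
# Toy sketch of the two fairness notions being unified: demographic parity (DP)
# compares positive prediction rates across groups; equalized opportunity (EOp)
# compares true-positive rates. Data here are assumed for illustration.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

def dp_gap(pred, g):
    return abs(pred[g == 0].mean() - pred[g == 1].mean())

def eop_gap(true, pred, g):
    tpr = lambda mask: pred[mask & (true == 1)].mean()   # true-positive rate
    return abs(tpr(g == 0) - tpr(g == 1))

print(dp_gap(y_pred, group), eop_gap(y_true, y_pred, group))
```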
1 code implementation • 1 Dec 2022 • Junde Wu, Huihui Fang, Yehui Yang, Yuanpei Liu, Jing Gao, Lixin Duan, Weihua Yang, Yanwu Xu
In this paper, we propose a novel neural network framework, called Multi-Rater Prism (MrPrism) to learn the medical image segmentation from multiple labels.
no code implementations • 28 Nov 2022 • Dong Li, Ruoming Jin, Zhenming Liu, Bin Ren, Jing Gao, Zhi Liu
Since Rendle and Krichene argued that commonly used sampling-based evaluation metrics are "inconsistent" with respect to the global metrics (even in expectation), there have been a few studies on sampling-based recommender system evaluation.
1 code implementation • 31 Oct 2022 • Yaqing Wang, Sahaj Agarwal, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao
Standard fine-tuning of large pre-trained language models (PLMs) for downstream tasks requires updating hundreds of millions to billions of parameters and storing a large copy of the PLM weights for every task, resulting in increased costs for storing, sharing, and serving the models.
no code implementations • 6 Oct 2022 • Liwang Zhou, Jing Gao
A single decomposition approach often cannot serve numerous forecasting tasks, since standard time series decomposition lacks flexibility and robustness.
1 code implementation • 5 Aug 2022 • Junde Wu, Yu Zhang, Rao Fu, Yuanpei Liu, Jing Gao
Then, to ensure that the method adapts to dynamic and unseen person flow, we propose a Graph Convolutional Network (GCN) with a simple Nearest Neighbor (NN) strategy to accurately cluster the instances of CSG.
1 code implementation • 13 Jun 2022 • Feijie Wu, Song Guo, Zhihao Qu, Shiqi He, Ziming Liu, Jing Gao
The lack of inactive clients' updates in partial client participation makes it more likely for the model aggregation to deviate from the aggregation based on full client participation.
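A toy numerical illustration of this deviation, assuming each client contributes one update vector and clients are sampled uniformly:

```python
# Toy illustration: averaging only a sampled subset of client updates deviates
# from the full-participation average (uniform sampling is an assumption).
import numpy as np

rng = np.random.default_rng(0)
updates = rng.normal(size=(10, 4))                # one update vector per client

full_avg = updates.mean(axis=0)                   # full client participation
sampled = rng.choice(10, size=3, replace=False)   # partial participation
partial_avg = updates[sampled].mean(axis=0)

print(np.linalg.norm(partial_avg - full_avg))     # aggregation deviation
```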
no code implementations • 12 Jun 2022 • Junde Wu, Huihui Fang, Fangxin Shang, Dalu Yang, Zhaowei Wang, Jing Gao, Yehui Yang, Yanwu Xu
To model the segmentation-diagnosis interaction, SeA-block first embeds the diagnosis feature based on the segmentation information via the encoder, and then transfers the embedding back to the diagnosis feature space by a decoder.
1 code implementation • 24 May 2022 • Yaqing Wang, Sahaj Agarwal, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao
Standard fine-tuning of large pre-trained language models (PLMs) for downstream tasks requires updating hundreds of millions to billions of parameters and storing a large copy of the PLM weights for every task, resulting in increased costs for storing, sharing, and serving the models.
1 code implementation • 22 Apr 2022 • Jing Gao, Tilo Burghardt, Neill W. Campbell
In particular, for the task of automatic identification of individual Holstein-Friesians in real-world farm CCTV, we show that self-supervision, metric learning, cluster analysis, and active learning can complement each other to significantly reduce the annotation requirements usually needed to train cattle identification frameworks.
1 code implementation • 25 Mar 2022 • Jiacong Hu, Jing Gao, Jingwen Ye, Yang Gao, Xingen Wang, Zunlei Feng, Mingli Song
With the rapid development of deep learning, the increasing complexity and scale of parameters make training a new model increasingly resource-intensive.
no code implementations • 17 Dec 2021 • Tang Li, Jing Gao, Xi Peng
Here we explore the capacity of deep spatial learning for the predictive modeling of urbanization.
1 code implementation • Findings (NAACL) 2022 • Yaqing Wang, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao
The first is the use of self-training to leverage large amounts of unlabeled data for prompt-based fine-tuning (FN) in few-shot settings.
no code implementations • 29 Sep 2021 • Liuyi Yao, Yaliang Li, Bolin Ding, Jingren Zhou, Jinduo Liu, Mengdi Huai, Jing Gao
To tackle these challenges, we propose a novel causal-graph-based fair prediction framework, which integrates graph structure learning into fair prediction to ensure that unfair pathways are excluded in the causal graph.
no code implementations • 29 Sep 2021 • Dong Li, Zhenming Liu, Ruoming Jin, Zhi Liu, Jing Gao, Bin Ren
Recently, a wide range of recommendation algorithms inspired by deep learning techniques have emerged as the performance leaders on several standard recommendation benchmarks.
no code implementations • Findings (EMNLP) 2021 • Yaqing Wang, Haoda Chu, Chao Zhang, Jing Gao
In this work, we study the problem of named entity recognition (NER) in a low resource scenario, focusing on few-shot and zero-shot settings.
no code implementations • 22 Jun 2021 • Yaqing Wang, Fenglong Ma, Haoyu Wang, Kishlay Jha, Jing Gao
The experimental results show our proposed MetaFEND model can detect fake news on never-seen events effectively and outperform the state-of-the-art methods.
no code implementations • 20 Jun 2021 • Dong Li, Ruoming Jin, Jing Gao, Zhi Liu
Recently, Rendle has warned that the use of sampling-based top-$k$ metrics might not suffice.
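A quick numerical sketch of the concern: Recall@1 judged against a small sample of negatives is typically far more optimistic than against the full catalog (the scores and sampling here are assumptions).

```python
# Sketch: sampled Recall@1 (target vs. 10 random negatives) is typically far
# more optimistic than global Recall@1 (target vs. the whole catalog). Scores
# and sampling are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_items, target = 10_000, 42
scores = rng.normal(size=n_items)
scores[target] += 2.0                              # model somewhat likes the target

global_hit = scores[target] == scores.max()        # global Recall@1
candidates = np.delete(np.arange(n_items), target)
negatives = rng.choice(candidates, size=10, replace=False)
sampled_hit = scores[target] > scores[negatives].max()   # sampled Recall@1
print(bool(global_hit), bool(sampled_hit))         # sampled hit is far likelier
```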
no code implementations • 27 May 2021 • Ruoming Jin, Dong Li, Jing Gao, Zhi Liu, Li Chen, Yang Zhou
Through the derivation and analysis of the closed-form solutions for two basic regression and matrix factorization approaches, we found these two approaches are indeed inherently related but also diverge in how they "scale-down" the singular values of the original user-item interaction matrix.
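As a rough illustration of this "scale-down" view, the sketch below shrinks the singular values of a toy user-item matrix in two different ways; the shrinkage rules are stand-ins, not the paper's closed-form solutions.

```python
# Illustrative "scale-down" view: both approaches can be read as shrinking the
# singular values of the user-item matrix, just differently. The two shrinkage
# rules below are stand-ins, not the paper's closed-form solutions.
import numpy as np

R = np.random.rand(6, 5)                      # toy user-item interaction matrix
U, s, Vt = np.linalg.svd(R, full_matrices=False)

ridge_scale = s**2 / (s**2 + 1.0)             # smooth, regression-style shrinkage
rank2_scale = (np.arange(len(s)) < 2).astype(float)   # hard MF-style truncation

R_ridge = U @ np.diag(s * ridge_scale) @ Vt
R_mf = U @ np.diag(s * rank2_scale) @ Vt
```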
2 code implementations • 5 May 2021 • Jing Gao, Tilo Burghardt, William Andrew, Andrew W. Dowsey, Neill W. Campbell
Motivated by the labelling burden involved in constructing visual cattle identification systems, we propose exploiting the temporal coat pattern appearance across videos as a self-supervision signal for animal identity learning.
no code implementations • 17 Mar 2021 • Haoyu Liu, Fenglong Ma, Shibo He, Jiming Chen, Jing Gao
Meanwhile, we propose a post-processing framework to tune the original ensemble results through a stacking process, so that we can achieve a trade-off between fairness and detection performance.
no code implementations • 2 Mar 2021 • Ruoming Jin, Dong Li, Benjamin Mudrak, Jing Gao, Zhi Liu
The proposed approaches are either rather uninformative (linking sampling to metric evaluation) or can only work on simple metrics, such as Recall/Precision (Krichene and Rendle 2020; Li et al. 2020).
no code implementations • 1 Jan 2021 • Yaqing Wang, Subhabrata Mukherjee, Haoda Chu, Yuancheng Tu, Ming Wu, Jing Gao, Ahmed Hassan Awadallah
Neural sequence labeling is an important technique employed for many Natural Language Processing (NLP) tasks, such as Named Entity Recognition (NER), slot tagging for dialog systems and semantic parsing.
no code implementations • 7 Oct 2020 • Yaqing Wang, Subhabrata Mukherjee, Haoda Chu, Yuancheng Tu, Ming Wu, Jing Gao, Ahmed Hassan Awadallah
While self-training serves as an effective mechanism to learn from large amounts of unlabeled data, meta-learning helps in adaptive sample re-weighting to mitigate error propagation from noisy pseudo-labels.
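An illustrative sketch of confidence-based pseudo-label re-weighting, with an entropy heuristic standing in for the meta-learned weighting:

```python
# Illustrative entropy-based re-weighting of pseudo-labels, standing in for the
# meta-learned weighting: confident predictions get weights near 1, uncertain
# ones near 0, damping error propagation.
import numpy as np

def pseudo_label_weights(probs):
    """Weight = 1 - normalized predictive entropy, per unlabeled example."""
    probs = np.clip(probs, 1e-8, 1.0)
    entropy = -(probs * np.log(probs)).sum(axis=1)
    return 1.0 - entropy / np.log(probs.shape[1])

unlabeled_probs = np.array([[0.90, 0.10],      # confident pseudo-label
                            [0.55, 0.45]])     # uncertain pseudo-label
pseudo_labels = unlabeled_probs.argmax(axis=1)
weights = pseudo_label_weights(unlabeled_probs)   # larger weight for the first
print(pseudo_labels, weights)
```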
no code implementations • 16 Aug 2020 • Yaqing Wang, Fenglong Ma, Jing Gao
To tackle this challenging task, we propose a cross-graph representation learning framework, i.e., CrossVal, which can leverage an external KG to validate the facts in the target KG efficiently.
2 code implementations • 16 Jun 2020 • William Andrew, Jing Gao, Siobhan Mullan, Neill Campbell, Andrew W Dowsey, Tilo Burghardt
Holstein-Friesian cattle exhibit individually-characteristic black and white coat patterns visually akin to those arising from Turing's reaction-diffusion systems.
no code implementations • 15 Jun 2020 • Yaqing Wang, Yifan Ethan Xu, Xi-An Li, Xin Luna Dong, Jing Gao
(1) We formalize the problem of validating the textual attribute values of products from a variety of categories as a natural language inference task in the few-shot learning setting, and propose a meta-learning latent variable model to jointly process the signals obtained from product profiles and textual attribute values.
no code implementations • 21 Apr 2020 • Alexander Hanbo Li, Yaqing Wang, Changyou Chen, Jing Gao
Effective inference for a generative adversarial model remains an important and challenging problem.
no code implementations • 12 Apr 2020 • Zhi Liu, Yan Huang, Jing Gao, Li Chen, Dong Li
Similar product recommendation is one of the most common scenarios in e-commerce.
no code implementations • 7 Apr 2020 • Hengtong Zhang, Yaliang Li, Bolin Ding, Jing Gao
In real-world recommendation systems, the cost of retraining recommendation models is high, and the interaction frequency between users and a recommendation system is restricted. Given these real-world restrictions, we propose to let the agent interact with a recommender simulator instead of the target recommendation system and leverage the transferability of the generated adversarial samples to poison the target system.
1 code implementation • 5 Feb 2020 • Liuyi Yao, Zhixuan Chu, Sheng Li, Yaliang Li, Jing Gao, Aidong Zhang
Embracing the rapid development of the machine learning area, various causal effect estimation methods for observational data have sprung up.
1 code implementation • 28 Dec 2019 • Yaqing Wang, Weifeng Yang, Fenglong Ma, Jin Xu, Bin Zhong, Qiang Deng, Jing Gao
In order to tackle this challenge, we propose a reinforced weakly-supervised fake news detection framework, i.e., WeFEND, which can leverage users' reports as weak supervision to enlarge the amount of training data for fake news detection.
no code implementations • 22 Dec 2019 • Jing Gao, N. Anantrasirichai, David Bull
This paper describes a novel deep learning-based method for mitigating the effects of atmospheric distortion.
no code implementations • 26 Apr 2019 • Hengtong Zhang, Tianhang Zheng, Jing Gao, Chenglin Miao, Lu Su, Yaliang Li, Kui Ren
Knowledge graph embedding (KGE) is a technique for learning continuous embeddings for entities and relations in the knowledge graph. Due to its benefits for a variety of downstream tasks such as knowledge graph completion, question answering, and recommendation, KGE has gained significant attention recently.
1 code implementation • NeurIPS 2018 • Liuyi Yao, Sheng Li, Yaliang Li, Mengdi Huai, Jing Gao, Aidong Zhang
Estimating individual treatment effect (ITE) is a challenging problem in causal inference, due to the missing counterfactuals and the selection bias.
no code implementations • 14 Oct 2018 • Yaliang Li, Liuyi Yao, Nan Du, Jing Gao, Qi Li, Chuishi Meng, Chenwei Zhang, Wei Fan
Patients with medical information needs tend to post questions about their health conditions on these crowdsourced Q&A websites and get answers from other users.
no code implementations • 10 Oct 2018 • Yaliang Li, Houping Xiao, Zhan Qin, Chenglin Miao, Lu Su, Jing Gao, Kui Ren, Bolin Ding
To better utilize sensory data, the problem of truth discovery, whose goal is to estimate user quality and infer reliable aggregated results through quality-aware data aggregation, has emerged as a hot topic.
no code implementations • 27 Sep 2018 • Hanbo Li, Yaqing Wang, Changyou Chen, Jing Gao
We propose a novel approach, Adversarial Inference by Matching priors and conditionals (AIM), which explicitly matches prior and conditional distributions in both data and code spaces, and puts a direct constraint on the dependency structure of the generative model.
1 code implementation • Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining 2018 • Yaqing Wang, Fenglong Ma, Zhiwei Jin, Ye Yuan, Guangxu Xun, Kishlay Jha, Lu Su, Jing Gao
One of the unique challenges for fake news detection on social media is how to identify fake news on newly emerged events.
no code implementations • 6 Jul 2017 • Fenglong Ma, Radha Chitta, Saurabh Kataria, Jing Zhou, Palghat Ramesh, Tong Sun, Jing Gao
Question answering is an important and difficult task in the natural language processing domain, because many basic natural language processing tasks can be cast as question answering tasks.
no code implementations • 19 Jun 2017 • Fenglong Ma, Radha Chitta, Jing Zhou, Quanzeng You, Tong Sun, Jing Gao
Existing work solves this problem by employing recurrent neural networks (RNNs) to model EHR data and utilizing simple attention mechanism to interpret the results.
no code implementations • 11 Aug 2016 • Chenwei Zhang, Sihong Xie, Yaliang Li, Jing Gao, Wei Fan, Philip S. Yu
We propose a novel multi-source hierarchical prediction consolidation method that effectively exploits the complicated hierarchical label structures to resolve the noisy and conflicting information that inherently originates from multiple imperfect sources.
no code implementations • 16 Oct 2013 • Sihong Xie, Xiangnan Kong, Jing Gao, Wei Fan, Philip S. Yu
Nonetheless, data nowadays are usually multilabeled, such that more than one label has to be predicted at the same time.
no code implementations • NeurIPS 2009 • Jing Gao, Feng Liang, Wei Fan, Yizhou Sun, Jiawei Han
First, we can boost the diversity of classification ensemble by incorporating multiple clustering outputs, each of which provides grouping constraints for the joint label predictions of a set of related objects.