no code implementations • ICML 2020 • Mao Ye, Chengyue Gong, Lizhen Nie, Denny Zhou, Adam Klivans, Qiang Liu
Theoretically, we show that small networks pruned with our method achieve a provably lower loss than networks of the same size trained from scratch.
1 code implementation • 2 May 2025 • Kaixuan Zhang, Hu Wang, Minxian Li, Mingwu Ren, Mao Ye, Xiatian Zhu
Typically, multiple-exposure LDR images are employed to capture a wider range of brightness levels in a scene, as a single LDR image cannot represent both the brightest and darkest regions simultaneously.
1 code implementation • Proceedings of the AAAI Conference on Artificial Intelligence 2025 • Nianxin Li, Mao Ye, Lihua Zhou, Song Tang, Yan Gan, Zizhuo Liang, Xiatian Zhu
For the analogical reasoning module, graph nodes consist of category-level prompt nodes and pixel-level image feature nodes, and analogical inference is performed by graph convolution.
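To make the graph structure above concrete, the toy sketch below stacks category-level prompt nodes and pixel-level feature nodes into one graph and applies a single graph-convolution step; the similarity-based adjacency, the shapes, and the weight are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn.functional as F

def analogical_reasoning_step(prompt_nodes, pixel_nodes, weight):
    """One toy graph-convolution step over prompt + pixel feature nodes.

    prompt_nodes: (C, D) category-level prompt embeddings
    pixel_nodes:  (N, D) pixel-level image features
    weight:       (D, D) learnable graph-convolution weight (assumed)
    """
    nodes = torch.cat([prompt_nodes, pixel_nodes], dim=0)             # (C+N, D)
    # Hypothetical adjacency: cosine similarity between all nodes,
    # softmax-normalised so each node aggregates from its neighbours.
    unit = F.normalize(nodes, dim=1)
    adj = F.softmax(unit @ unit.t(), dim=1)                           # (C+N, C+N)
    return F.relu(adj @ nodes @ weight)                               # aggregate, transform, activate

# Illustrative shapes only: 5 categories, 100 pixel features, 64-d embeddings.
out = analogical_reasoning_step(torch.randn(5, 64), torch.randn(100, 64), torch.randn(64, 64))
```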
no code implementations • 22 Mar 2025 • Junli Wang, Wanyue Cao, Jinyan Huang, Yu Zhou, Rujia Zheng, Yu Lou, Jiaqi Yang, Jianghui Tang, Mao Ye, Zhengtao Hong, Jiangchao Wu, Haonan Ding, Yuquan Zhang, Jianpeng Sheng, Xinjiang Lu, Pinglong Xu, Xiongbin Lu, Xueli Bai, Tingbo Liang, Qi Zhang
The CD19$^+$ macrophages exhibit increased levels of PD-L1 and CD73, enhanced mitochondrial oxidation, and compromised phagocytosis, indicating their immunosuppressive functions.
no code implementations • 12 Mar 2025 • Lihua Zhou, Mao Ye, Shuaifeng Li, Nianxin Li, Xiatian Zhu, Lei Deng, Hongbin Liu, Zhen Lei
Test-time adaptation with pre-trained vision-language models, such as CLIP, aims to adapt the model to new, potentially out-of-distribution test data.
no code implementations • 4 Feb 2025 • David S. Hayden, Mao Ye, Timur Garipov, Gregory P. Meyer, Carl Vondrick, Zhao Chen, Yuning Chai, Eric Wolff, Siddhartha S. Srinivasa
It is difficult to anticipate the myriad challenges that a predictive model will encounter once deployed.
1 code implementation • Advances in Neural Information Processing Systems 37 (NeurIPS 2024) 2024 • Shuaifeng Li, Mao Ye, Lihua Zhou, Nianxin Li, Siying Xiao, Song Tang, Xiatian Zhu
Knowledge dissemination combines knowledge from the cloud detector and the CLIP model to initialize a target detector and a CLIP detector in the target domain.
no code implementations • 4 Dec 2024 • Song Tang, Chunxiao Zu, Wenxin Su, Yuan Dong, Mao Ye, Yan Gan, Xiatian Zhu
However, this paradigm is not applicable to medical images, where the foreground and background share numerous visual features, necessitating a more detailed description of the background.
no code implementations • 2 Dec 2024 • Wenxin Su, Song Tang, Xiaofeng Liu, Xiaojing Yi, Mao Ye, Chunxiao Zu, Jiahao Li, Xiatian Zhu
Specifically, we first theoretically reformulate conventional perturbation optimization in a generative way--learning a perturbation generation function with a latent input variable.
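As a sketch of what "learning a perturbation generation function with a latent input variable" can look like, the toy module below maps a latent code to a bounded input-space perturbation; the MLP architecture, the tanh squashing, and the epsilon bound are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    """Maps a latent code z to a bounded perturbation delta (illustrative only)."""

    def __init__(self, latent_dim, out_dim, eps=0.1):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z):
        # tanh keeps the output in [-1, 1]; eps bounds the perturbation magnitude.
        return self.eps * self.net(z)

gen = PerturbationGenerator(latent_dim=16, out_dim=3 * 32 * 32)
delta = gen(torch.randn(8, 16))   # 8 sampled perturbations, one per latent draw
```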
no code implementations • 28 Nov 2024 • Hui Li, Mingwang Xu, Yun Zhan, Shan Mu, Jiaye Li, Kaihui Cheng, Yuxuan Chen, Tan Chen, Mao Ye, Jingdong Wang, Siyu Zhu
Recent advancements in visual generation technologies have markedly increased the scale and availability of video datasets, which are crucial for training effective video generation models.
no code implementations • 7 Nov 2024 • Aoru Xue, Yiming Ren, Zining Song, Mao Ye, Xinge Zhu, Yuexin Ma
We propose a novel hybrid calibration-free method FreeCap to accurately capture global multi-person motions in open environments.
1 code implementation • 30 Oct 2024 • YuCheng Huang, Luping Ji, Hudong Liu, Mao Ye
In general, they heavily rely on global visual similarity matching.
1 code implementation • 14 Oct 2024 • Jiaxiang Gou, Luping Ji, Pei Liu, Mao Ye
Whole Slide Image (WSI) classification has very significant applications in clinical pathology, e.g., tumor identification and cancer diagnosis.
no code implementations • 7 Oct 2024 • Jiuzheng Yang, Song Tang, Yangkuiyi Zhang, Shuaifeng Li, Mao Ye, Jianwei Zhang, Xiatian Zhu
The core idea is to distill semantics-lossless knowledge from the weak features (the weak/teacher branch) to guide representation learning on the strong features (the strong/student branch).
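A generic form of this weak-to-strong guidance is a stop-gradient feature-distillation loss, as in the minimal sketch below; the cosine objective is an illustrative stand-in, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def weak_to_strong_distill_loss(weak_feat, strong_feat):
    """Pull strong/student features toward weak/teacher features.

    Teacher features are detached (stop-gradient); cosine distance is
    computed on L2-normalised feature vectors.
    """
    teacher = F.normalize(weak_feat.detach(), dim=-1)
    student = F.normalize(strong_feat, dim=-1)
    return (1.0 - (teacher * student).sum(dim=-1)).mean()

loss = weak_to_strong_distill_loss(torch.randn(32, 256), torch.randn(32, 256))
```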
no code implementations • 2 Oct 2024 • Yunhao Yang, Yuxin Hu, Mao Ye, Zaiwei Zhang, Zhichao Lu, Yi Xu, Ufuk Topcu, Ben Snyder
Multimodal foundation models offer promising advancements for enhancing driving perception systems, but their high computational and financial costs pose challenges.
no code implementations • 1 Oct 2024 • Bo Liu, Mao Ye, Peter Stone, Qiang Liu
A fundamental challenge in continual learning is to balance the trade-off between learning new tasks and remembering the previously acquired knowledge.
no code implementations • 23 Sep 2024 • Mao Ye, Gregory P. Meyer, Zaiwei Zhang, Dennis Park, Siva Karthik Mustikovela, Yuning Chai, Eric M Wolff
We propose a simple and scalable data mining approach that leverages the knowledge contained within a large vision language model (VLM).
1 code implementation • 14 Sep 2024 • Pei Liu, Luping Ji, Jiaxiang Gou, Bo Fu, Mao Ye
Our VLSA could pave a new way for SA in CPATH by offering weakly-supervised MIL an effective means to learn valuable prognostic clues from gigapixel WSIs.
no code implementations • 30 Aug 2024 • Yanbo Gao, Meng Fu, Shuai Li, Chong Lv, Xun Cai, Hui Yuan, Mao Ye
The analysis transform and synthesis transform are used to encode an image into a latent feature and to decode the quantized feature to reconstruct the image, and they can be regarded as coupled transforms.
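For readers unfamiliar with this coupled-transform view, here is a minimal encode/quantize/decode sketch with straight-through rounding and no entropy model; the layer sizes are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CoupledTransforms(nn.Module):
    """Minimal analysis/synthesis transform pair with rounding in between."""

    def __init__(self, channels=64):
        super().__init__()
        self.analysis = nn.Sequential(
            nn.Conv2d(3, channels, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(channels, channels, 5, stride=2, padding=2),
        )
        self.synthesis = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 3, 5, stride=2, padding=2, output_padding=1),
        )

    def forward(self, x):
        y = self.analysis(x)                       # analysis transform: image -> latent
        # Straight-through rounding: quantise in the forward pass,
        # pass gradients through unchanged in the backward pass.
        y_hat = y + (torch.round(y) - y).detach()
        return self.synthesis(y_hat)               # synthesis transform: latent -> image

x_hat = CoupledTransforms()(torch.randn(1, 3, 64, 64))
```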
no code implementations • 10 Jul 2024 • Dengyan Luo, Yanping Xiang, Hu Wang, Luping Ji, Shuai Li, Mao Ye
Specifically, a Temporal Deformable Alignment (TDA) module based on the designed Dilated Convolution Attention Fusion (DCAF) block is developed to explicitly align the adjacent frames with the current frame at the feature level.
1 code implementation • 26 Jun 2024 • Song Tang, Shaxu Yan, Xiaozhi Qi, Jianxin Gao, Mao Ye, Jianwei Zhang, Xiatian Zhu
Few-shot Semantic Segmentation (FSS) aims to adapt a pretrained model to new classes with as few as a single labelled training sample per class.
1 code implementation • 11 Jun 2024 • Weiwei Duan, Luping Ji, Shengjia Chen, Sicheng Zhu, Mao Ye
To extend feature source domains and enhance feature representation, we propose a new Triple-domain Strategy (Tridos) with the frequency-aware memory enhancement on spatio-temporal domain for infrared small target detection.
1 code implementation • 3 Jun 2024 • Song Tang, Wenxin Su, Yan Gan, Mao Ye, Jianwei Zhang, Xiatian Zhu
We design a proxy denoising mechanism to correct ViL's predictions, grounded on a proxy confidence theory that models the dynamic effect of the proxy's divergence against the domain-invariant space during adaptation.
Ranked #9 on Unsupervised Domain Adaptation on Office-Home
1 code implementation • 12 Mar 2024 • Song Tang, Wenxin Su, Mao Ye, Jianwei Zhang, Xiatian Zhu
To tackle this unified SFDA problem, we propose a novel approach called Latent Causal Factors Discovery (LCFD).
1 code implementation • CVPR 2024 • Song Tang, Wenxin Su, Mao Ye, Xiatian Zhu
We find that directly applying the ViL model to the target domain in a zero-shot fashion is unsatisfactory, as it is not specialized for this particular task but largely generic.
no code implementations • 21 Sep 2023 • Yanbo Gao, Wenjia Huang, Shuai Li, Hui Yuan, Mao Ye, Siwei Ma
Similar to traditional video coding, LVC inherits motion estimation/compensation, residual coding and other modules, all of which are implemented with neural networks (NNs).
no code implementations • 31 May 2023 • Mao Ye, Haitao Wang, Zheqian Chen
To solve the problem of poor performance of deep neural network models due to insufficient data, a simple yet effective interpolation-based data augmentation method is proposed: MSMix (Manifold Swap Mixup).
1 code implementation • IJCAI 2023 • Qichen He, Siying Xiao, Mao Ye, Xiatian Zhu, Ferrante Neri and Dongde Hou
Existing Unsupervised Domain Adaptation (UDA) methods typically attempt to perform knowledge transfer in a domain-invariant space explicitly or implicitly.
no code implementations • ICCV 2023 • Mao Ye, Gregory P. Meyer, Yuning Chai, Qiang Liu
Although halting a token is a non-differentiable operation, our method allows for differentiable end-to-end learning by leveraging an equivalent differentiable forward-pass.
1 code implementation • ICCV 2023 • Lihua Zhou, Mao Ye, Xiatian Zhu, Siying Xiao, Xu-Qian Fan, Ferrante Neri
With distribution alignment, it is challenging to acquire a common space which maintains fully the discriminative structure of both domains.
1 code implementation • 20 Sep 2022 • Tongda Xu, Han Gao, Chenjian Gao, Yuanyuan Wang, Dailan He, Jinyong Pi, Jixiang Luo, Ziyu Zhu, Mao Ye, Hongwei Qin, Yan Wang, Jingjing Liu, Ya-Qin Zhang
In this paper, we consider the problem of bit allocation in Neural Video Compression (NVC).
1 code implementation • 19 Sep 2022 • Mao Ye, Bo Liu, Stephen Wright, Peter Stone, Qiang Liu
Bilevel optimization (BO) is useful for solving a variety of important machine learning problems including but not limited to hyperparameter optimization, meta-learning, continual learning, and reinforcement learning.
no code implementations • 2 Sep 2022 • Lemeng Wu, Chengyue Gong, Xingchao Liu, Mao Ye, Qiang Liu
AI-based molecule generation provides a promising approach to a large area of biomedical sciences and engineering, such as antibody design, hydrolase engineering, or vaccine development.
no code implementations • 2 Sep 2022 • Mao Ye, Lemeng Wu, Qiang Liu
We propose a family of First Hitting Diffusion Models (FHDM), deep generative models that generate data with a diffusion process that terminates at a random first hitting time.
no code implementations • 2 Sep 2022 • Mao Ye, Ruichen Jiang, Haoxiang Wang, Dhruv Choudhary, Xiaocong Du, Bhargav Bhushanam, Aryan Mokhtari, Arun Kejariwal, Qiang Liu
One of the key challenges of learning an online recommendation model is the temporal domain shift, which causes the mismatch between the training and testing data distribution and hence domain generalization error.
no code implementations • 31 Aug 2022 • Xingchao Liu, Lemeng Wu, Mao Ye, Qiang Liu
Diffusion-based generative models have achieved promising results recently, but raise an array of open questions in terms of conceptual understanding, theoretical analysis, algorithm improvement and extensions to discrete, structured, non-Euclidean domains.
3 code implementations • 23 Aug 2022 • Ren Yang, Radu Timofte, Qi Zhang, Lin Zhang, Fanglong Liu, Dongliang He, Fu Li, He Zheng, Weihang Yuan, Pavel Ostyakov, Dmitry Vyal, Magauiya Zhussip, Xueyi Zou, Youliang Yan, Lei LI, Jingzhu Tang, Ming Chen, Shijie Zhao, Yu Zhu, Xiaoran Qin, Chenghua Li, Cong Leng, Jian Cheng, Claudio Rota, Marco Buzzelli, Simone Bianco, Raimondo Schettini, Dafeng Zhang, Feiyu Huang, Shizhuo Liu, Xiaobing Wang, Zhezhu Jin, Bingchen Li, Xin Li, Mingxi Li, Ding Liu, Wenbin Zou, Peijie Dong, Tian Ye, Yunchen Zhang, Ming Tan, Xin Niu, Mustafa Ayazoglu, Marcos Conde, Ui-Jin Choi, Zhuang Jia, Tianyu Xu, Yijian Zhang, Mao Ye, Dengyan Luo, Xiaofeng Pan, Liuhan Peng
The homepage of this challenge is at https://github.com/RenYang-home/AIM22_CompressSR.
1 code implementation • The 31st International Joint Conference On Artificial Intelligence 2022 • Hu Wang, Mao Ye, Xiatian Zhu, Shuai Li, Ce Zhu, Xue Li
Recently, with the rise of high dynamic range (HDR) display devices, there is a great demand to transfer traditional low dynamic range (LDR) images into HDR versions.
no code implementations • 11 May 2022 • Mao Ye, Chenxi Liu, Maoqing Yao, Weiyue Wang, Zhaoqi Leng, Charles R. Qi, Dragomir Anguelov
While multi-class 3D detectors are needed in many robotics applications, training them with fully labeled datasets can be expensive in labeling cost.
no code implementations • 29 Apr 2022 • Long Chen, Mao Ye, Alistair Milne, John Hillier, Frances Oglesby
This report, commissioned by the WTW research network, investigates the use of AI in property risk assessment.
2 code implementations • 20 Apr 2022 • Ren Yang, Radu Timofte, Meisong Zheng, Qunliang Xing, Minglang Qiao, Mai Xu, Lai Jiang, Huaida Liu, Ying Chen, Youcheng Ben, Xiao Zhou, Chen Fu, Pei Cheng, Gang Yu, Junyi Li, Renlong Wu, Zhilu Zhang, Wei Shang, Zhengyao Lv, Yunjin Chen, Mingcai Zhou, Dongwei Ren, Kai Zhang, WangMeng Zuo, Pavel Ostyakov, Vyal Dmitry, Shakarim Soltanayev, Chervontsev Sergey, Zhussip Magauiya, Xueyi Zou, Youliang Yan, Pablo Navarrete Michelini, Yunhua Lu, Diankai Zhang, Shaoli Liu, Si Gao, Biao Wu, Chengjian Zheng, Xiaofeng Zhang, Kaidi Lu, Ning Wang, Thuong Nguyen Canh, Thong Bach, Qing Wang, Xiaopeng Sun, Haoyu Ma, Shijie Zhao, Junlin Li, Liangbin Xie, Shuwei Shi, Yujiu Yang, Xintao Wang, Jinjin Gu, Chao Dong, Xiaodi Shi, Chunmei Nian, Dong Jiang, Jucai Lin, Zhihuai Xie, Mao Ye, Dengyan Luo, Liuhan Peng, Shengjie Chen, Qian Wang, Xin Liu, Boyang Liang, Hang Dong, Yuhao Huang, Kai Chen, Xingbei Guo, Yujing Sun, Huilei Wu, Pengxu Wei, Yulin Huang, Junying Chen, Ik Hyun Lee, Sunder Ali Khowaja, Jiseok Yoon
This challenge includes three tracks.
1 code implementation • CVPR 2022 • Shuaifeng Li, Mao Ye, Xiatian Zhu, Lihua Zhou, Lin Xiong
This approach suffers from both unsatisfactory accuracy of pseudo labels due to the presence of domain shift and limited use of target domain training data.
no code implementations • NeurIPS 2021 • Chengyue Gong, Mao Ye, Qiang Liu
We propose a general method to construct centroid approximation for the distribution of maximum points of a random function (a.k.a.
no code implementations • 17 Oct 2021 • Mao Ye, Qiang Liu
In this work, we propose an efficient method to explicitly optimize a small set of high-quality "centroid" points to better approximate the ideal bootstrap distribution.
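One crude way to picture "a few centroid points summarising the bootstrap distribution" is to draw many bootstrap replicates of an estimator and compress them with plain k-means, as sketched below; this only illustrates the goal and is not the optimization procedure proposed in the paper.

```python
import numpy as np

def bootstrap_centroids(data, estimator, n_centroids=8, n_boot=2000, n_iter=50, seed=0):
    """Summarise the bootstrap distribution of `estimator(data)` with a few centroids."""
    rng = np.random.default_rng(seed)
    n = len(data)
    # Bootstrap replicates: resample rows with replacement and re-estimate.
    reps = np.array([estimator(data[rng.integers(0, n, n)]) for _ in range(n_boot)])
    reps = reps.reshape(n_boot, -1)
    centroids = reps[rng.choice(n_boot, n_centroids, replace=False)]
    for _ in range(n_iter):                      # plain Lloyd (k-means) iterations
        assign = np.argmin(((reps[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
        for k in range(n_centroids):
            if np.any(assign == k):
                centroids[k] = reps[assign == k].mean(axis=0)
    return centroids

cents = bootstrap_centroids(np.random.randn(200, 3), lambda d: d.mean(axis=0))
```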
no code implementations • 17 Oct 2021 • Mao Ye, Qiang Liu
The notion of the Pareto set allows us to focus on the set of models (often infinitely many) that cannot be strictly improved.
no code implementations • 29 Sep 2021 • Mao Ye, Qiang Liu
The notion of the Pareto set allows us to focus on the set of models (often infinitely many) that cannot be strictly improved.
no code implementations • CVPR 2021 • Chengyue Gong, Tongzheng Ren, Mao Ye, Qiang Liu
The idea is to generate a set of augmented data with some random perturbations or transforms, and minimize the maximum, or worst case loss over the augmented data.
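The worst-case-over-augmentations objective described above reads directly as code; the sketch below takes a per-example maximum over m random augmentations, with `augment` and the cross-entropy loss as illustrative assumptions rather than the paper's exact recipe.

```python
import torch

def worst_case_loss(model, x, y, augment, m=4):
    """Minimise the maximum (worst-case) loss over m augmented copies of each input.

    `augment` is any stochastic transform (noise, crops, ...); reduction='none'
    keeps per-example losses so the max is taken for every sample separately.
    """
    loss_fn = torch.nn.CrossEntropyLoss(reduction="none")
    per_copy = torch.stack([loss_fn(model(augment(x)), y) for _ in range(m)])  # (m, batch)
    return per_copy.max(dim=0).values.mean()   # max over copies, mean over the batch

# Example: augment = lambda x: x + 0.05 * torch.randn_like(x), model = any classifier.
```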
1 code implementation • 14 Mar 2021 • Lizhen Nie, Mao Ye, Qiang Liu, Dan Nicolae
Motivated by the rising abundance of observational data with continuous treatments, we investigate the problem of estimating the average dose-response curve (ADRF).
no code implementations • 1 Feb 2021 • Yong Zhang, Mao Ye, Lin Guan
The original contributions of this paper are summarized as follows: (1) Model the packet collision probability of broadcast or NACK transmission in VANET with combinatorial theory and investigate the potential influence of the miss-my-packets (MMP) problem.
Networking and Internet Architecture
no code implementations • ICLR 2021 • Lizhen Nie, Mao Ye, Qiang Liu, Dan Nicolae
With the rising abundance of observational data with continuous treatments, we investigate the problem of estimating average dose-response curve (ADRF).
1 code implementation • NeurIPS 2020 • Mao Ye, Lemeng Wu, Qiang Liu
Despite the great success of deep learning, recent works show that large deep neural networks are often highly redundant and can be significantly reduced in size.
no code implementations • 16 Oct 2020 • Mao Ye, Dhruv Choudhary, Jiecao Yu, Ellie Wen, Zeliang Chen, Jiyan Yang, Jongsoo Park, Qiang Liu, Arun Kejariwal
To the best of our knowledge, this is the first work to provide in-depth analysis and discussion of applying pruning to online recommendation systems with non-stationary data distribution.
no code implementations • ICML 2020 • Denny Zhou, Mao Ye, Chen Chen, Tianjian Meng, Mingxing Tan, Xiaodan Song, Quoc Le, Qiang Liu, Dale Schuurmans
This is achieved by layerwise imitation, that is, forcing the thin network to mimic the intermediate outputs of the wide network from layer to layer.
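A minimal rendering of layerwise imitation: push the thin network's intermediate features toward the wide network's, layer by layer. The per-layer adapters that match feature widths are an assumption needed because the two networks differ in width, not necessarily the paper's construction.

```python
import torch

def layerwise_imitation_loss(thin_feats, wide_feats, adapters):
    """Mean-squared imitation loss between matched intermediate layers.

    thin_feats / wide_feats: lists of intermediate features, one per layer.
    adapters: per-layer modules (e.g. 1x1 convs) mapping thin widths to wide widths.
    """
    loss = 0.0
    for f_thin, f_wide, adapt in zip(thin_feats, wide_feats, adapters):
        loss = loss + torch.mean((adapt(f_thin) - f_wide.detach()) ** 2)  # teacher side is frozen
    return loss / len(adapters)
```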
1 code implementation • ACL 2020 • Mao Ye, Chengyue Gong, Qiang Liu
For security reasons, it is of critical importance to develop models with certified robustness that can provably guarantee that the prediction cannot be altered by any possible synonymous word substitution.
no code implementations • 29 May 2020 • Yan Min, Mao Ye, Liang Tian, Yulin Jian, Ce Zhu, Shangming Yang
Our main contributions are a novel feature selection approach that uses multi-step transition probability to characterize the data structure, and three algorithms, proposed from positive and negative perspectives, for preserving the data structure.
no code implementations • 28 May 2020 • Chenpeng Zhang, Shuai Li, Mao Ye, Ce Zhu, Xue Li
Many variants of RNN have been proposed to solve the gradient problems of training RNNs and process long sequences.
no code implementations • 28 May 2020 • Lihua Zhou, Mao Ye, Xinpeng Li, Ce Zhu, Yiguang Liu, Xue Li
With this reconstructor, we can construct prototypes for the original features from class prototypes and domain prototypes, respectively.
no code implementations • 23 Mar 2020 • Lemeng Wu, Mao Ye, Qi Lei, Jason D. Lee, Qiang Liu
Recently, Liu et al. [19] proposed a splitting steepest descent (S2D) method that jointly optimizes the neural parameters and architectures based on progressively growing network structures by splitting neurons into multiple copies in a steepest descent fashion.
no code implementations • 13 Mar 2020 • Hanbin Dai, Liangbo Zhou, Feng Zhang, Zhengyu Zhang, Hong Hu, Xiatian Zhu, Mao Ye
Taking them together, we formulate a novel Distribution-Aware coordinate Representation for Keypoint (DARK) method.
1 code implementation • 3 Mar 2020 • Mao Ye, Chengyue Gong, Lizhen Nie, Denny Zhou, Adam Klivans, Qiang Liu
This differs from the existing methods based on backward elimination, which remove redundant neurons from the large network.
no code implementations • NeurIPS 2020 • Dinghuai Zhang, Mao Ye, Chengyue Gong, Zhanxing Zhu, Qiang Liu
Randomized classifiers have been shown to provide a promising approach for achieving certified robustness against adversarial attacks in deep learning.
1 code implementation • NeurIPS 2020 • Mao Ye, Tongzheng Ren, Qiang Liu
Our idea is to introduce Stein variational gradient as a repulsive force to push the samples of Langevin dynamics away from the past trajectories.
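Roughly, each sampler update adds a kernel-based repulsive term, in the spirit of a Stein variational gradient, that pushes the current samples away from stored past samples; the RBF kernel, bandwidth, and weight `alpha` below are illustrative assumptions, not the paper's exact scheme.

```python
import torch

def rbf_repulsion(x, history, bandwidth=1.0):
    """Direction pushing each sample in x away from every stored past sample."""
    diff = x.unsqueeze(1) - history.unsqueeze(0)                     # (n, m, d)
    k = torch.exp(-(diff ** 2).sum(-1) / (2 * bandwidth ** 2))       # (n, m) kernel weights
    return (k.unsqueeze(-1) * diff).mean(dim=1) / bandwidth ** 2     # kernel-weighted "away" vectors

def repulsive_langevin_step(x, grad_log_p, history, step=1e-2, alpha=1.0):
    """One Langevin step whose drift also repels x from past trajectory points."""
    drift = grad_log_p(x) + alpha * rbf_repulsion(x, history)
    return x + step * drift + (2 * step) ** 0.5 * torch.randn_like(x)

# Example: sample from a standard Gaussian while avoiding earlier samples.
grad_log_p = lambda x: -x
x, history = torch.randn(16, 2), torch.randn(64, 2)
x = repulsive_langevin_step(x, grad_log_p, history)
```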
1 code implementation • 20 Feb 2020 • Chengyue Gong, Tongzheng Ren, Mao Ye, Qiang Liu
The idea is to generate a set of augmented data with some random perturbations or transforms and minimize the maximum, or worst case loss over the augmented data.
Ranked #1 on Image Classification on ImageNet (Hardware Burden metric)
no code implementations • 20 Feb 2020 • Xingchao Liu, Mao Ye, Dengyong Zhou, Qiang Liu
We propose multipoint quantization, a quantization method that approximates a full-precision weight vector using a linear combination of multiple vectors of low-bit numbers; this is in contrast to typical quantization methods that approximate each weight using a single low precision number.
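As a rough illustration of "a linear combination of multiple low-bit vectors", the greedy procedure below quantizes the residual repeatedly and adds one scaled low-bit vector per round; the greedy construction and the uniform grid are assumptions, not the paper's optimization.

```python
import numpy as np

def multipoint_quantize(w, num_vectors=3, num_bits=2):
    """Approximate w by a sum of scaled low-bit vectors (greedy on the residual)."""
    levels = 2 ** num_bits
    residual = w.astype(np.float64).copy()
    approx = np.zeros_like(residual)
    for _ in range(num_vectors):
        scale = np.abs(residual).max() / (levels / 2) + 1e-12
        q = np.clip(np.round(residual / scale), -levels // 2, levels // 2 - 1)
        approx += scale * q          # add one scaled low-bit vector
        residual = w - approx        # quantise what is still missing
    return approx

w = np.random.randn(128)
print(np.abs(w - multipoint_quantize(w)).max())   # error shrinks as num_vectors grows
```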
1 code implementation • 7 Feb 2020 • Qifan Song, Yan Sun, Mao Ye, Faming Liang
Stochastic gradient Markov chain Monte Carlo (MCMC) algorithms have received much attention in Bayesian computing for big data problems, but they are only applicable to a small class of problems for which the parameter space has a fixed dimension and the log-posterior density is differentiable with respect to the parameters.
6 code implementations • CVPR 2020 • Feng Zhang, Xiatian Zhu, Hanbin Dai, Mao Ye, Ce Zhu
Interestingly, we found that the process of decoding the predicted heatmaps into the final joint coordinates in the original image space is surprisingly significant for human pose estimation performance, which nevertheless was not recognised before.
Ranked #2 on Multi-Person Pose Estimation on MS COCO (using extra training data)
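To illustrate why the decoding step highlighted above matters, the sketch below refines the integer argmax of a single-keypoint heatmap with a second-order Taylor expansion of the log-heatmap, giving sub-pixel coordinates; it is an illustrative decoder written in the spirit of distribution-aware decoding, not the paper's exact procedure.

```python
import numpy as np

def decode_heatmap_subpixel(heatmap):
    """Sub-pixel keypoint decoding from a single-keypoint heatmap."""
    h, w = heatmap.shape
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    y, x = int(y), int(x)
    if 1 <= y < h - 1 and 1 <= x < w - 1:
        logh = np.log(np.maximum(heatmap, 1e-10))
        # First derivatives (central differences) and Hessian at the peak.
        dx = 0.5 * (logh[y, x + 1] - logh[y, x - 1])
        dy = 0.5 * (logh[y + 1, x] - logh[y - 1, x])
        dxx = logh[y, x + 1] - 2 * logh[y, x] + logh[y, x - 1]
        dyy = logh[y + 1, x] - 2 * logh[y, x] + logh[y - 1, x]
        dxy = 0.25 * (logh[y + 1, x + 1] - logh[y + 1, x - 1]
                      - logh[y - 1, x + 1] + logh[y - 1, x - 1])
        hess = np.array([[dxx, dxy], [dxy, dyy]])
        if np.linalg.det(hess) != 0:
            offset = -np.linalg.solve(hess, np.array([dx, dy]))  # Newton step to the mode
            x, y = x + float(offset[0]), y + float(offset[1])
    return x, y

x, y = decode_heatmap_subpixel(np.random.rand(64, 48))
```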
no code implementations • 3 May 2019 • Xiong Deng, Chao Chen, Deyang Chen, Xiangbin Cai, Xiaozhe Yin, Chao Xu, Fei Sun, Caiwen Li, Yan Li, Han Xu, Mao Ye, Guo Tian, Zhen Fan, Zhipeng Hou, Minghui Qin, Yu Chen, Zhenlin Luo, Xubing Lu, Guofu Zhou, Lang Chen, Ning Wang, Ye Zhu, Xingsen Gao, Jun-Ming Liu
The limitation of commercially available single-crystal substrates and the lack of continuous strain tunability preclude the ability to take full advantage of strain engineering for further exploring novel properties and exhaustively studying fundamental physics in complex oxides.
Materials Science
1 code implementation • CVPR 2019 • Feng Zhang, Xiatian Zhu, Mao Ye
In this work, we investigate the under-studied but practically critical pose model efficiency problem.
Ranked #9 on Pose Estimation on Leeds Sports Poses
1 code implementation • 8 Oct 2018 • Tianyang Hu, Zixiang Chen, Hanxi Sun, Jincheng Bai, Mao Ye, Guang Cheng
We propose two novel samplers to generate high-quality samples from a given (un-normalized) probability density.
no code implementations • ICML 2018 • Mao Ye, Yan Sun
We propose a variable selection method for high dimensional regression models, which allows for complex, nonlinear, and high-order interactions among variables.
no code implementations • 17 Oct 2017 • Bilal Alsallakh, Amin Jourabloo, Mao Ye, Xiaoming Liu, Liu Ren
We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation with CNN-internal data.
no code implementations • ICCV 2017 • Amin Jourabloo, Mao Ye, Xiaoming Liu, Liu Ren
Face alignment has witnessed substantial progress in the last decade.
Ranked #13 on Facial Landmark Detection on 300W
no code implementations • CVPR 2015 • Mao Ye, Yu Zhang, Ruigang Yang, Dinesh Manocha
We present a novel sensor fusion algorithm that first segments the depth map into different categories such as opaque/transparent/infinity (e.g., too far to measure) and then updates the depth map based on the segmentation outcome.
no code implementations • CVPR 2014 • Mao Ye, Ruigang Yang
In this paper we present a novel real-time algorithm for simultaneous pose and shape estimation for articulated objects, such as human beings and animals.
no code implementations • CVPR 2014 • Chenxi Zhang, Mao Ye, Bo Fu, Ruigang Yang
Each segmented petal is then fitted with a scale-invariant morphable petal shape model, which is constructed from individually scanned exemplar petals.
no code implementations • CVPR 2014 • Qing Zhang, Bo Fu, Mao Ye, Ruigang Yang
In this paper we present a novel autonomous pipeline to build a personalized parametric model (pose-driven avatar) using a single depth sensor.
no code implementations • CVPR 2013 • Mao Ye, Cha Zhang, Ruigang Yang
With the widespread adoption of consumer 3D-TV technology, stereoscopic videoconferencing systems are emerging.
no code implementations • 4 Sep 2011 • Mao Ye, Xingjie Liu, Wang-Chien Lee
The experimental results also confirm that our social influence based group recommendation algorithm outperforms the state-of-the-art algorithms for group recommendation.