no code implementations • COLING 2022 • Junyu Luo, Junxian Lin, Chi Lin, Cao Xiao, Xinning Gui, Fenglong Ma
To evaluate performance fairly, we also propose three specific evaluation metrics.
1 code implementation • CVPR 2022 • Junyu Luo, Jiahui Fu, Xianghao Kong, Chen Gao, Haibing Ren, Hao Shen, Huaxia Xia, Si Liu
3D visual grounding aims to locate the referred target object in 3D point cloud scenes according to a free-form language description.
no code implementations • 11 Dec 2021 • Muchao Ye, Junyu Luo, Guanjie Zheng, Cao Xiao, Ting Wang, Fenglong Ma
Deep neural networks (DNNs) have been broadly adopted in health risk prediction to provide healthcare diagnoses and treatments.
1 code implementation • 20 Aug 2021 • Junyu Luo, Jianlei Yang, Xucheng Ye, Xin Guo, Weisheng Zhao
Federated learning aims to protect users' privacy while performing data analysis from different participants.
no code implementations • 5 Aug 2021 • Dailan He, Yusheng Zhao, Junyu Luo, Tianrui Hui, Shaofei Huang, Aixi Zhang, Si Liu
Existing works usually adopt dynamic graph networks to indirectly model the intra/inter-modal interactions, making the model difficult to distinguish the referred object from distractors due to the monolithic representations of visual and linguistic contents.
no code implementations • 6 Dec 2020 • Zewei Long, Liwei Che, Yaqing Wang, Muchao Ye, Junyu Luo, Jinze Wu, Houping Xiao, Fenglong Ma
In this paper, we focus on designing a general framework, FedSiam, to tackle different scenarios of federated semi-supervised learning, including four settings in the labels-at-client scenario and two settings in the labels-at-server scenario.
no code implementations • 4 Dec 2020 • Junyu Luo, Zifei Zheng, Hanzhong Ye, Muchao Ye, Yaqing Wang, Quanzeng You, Cao Xiao, Fenglong Ma
In this paper, we introduce MedLane -- a new human-annotated Medical Language translation dataset, to align professional medical sentences with layperson-understandable expressions.
no code implementations • 21 Jul 2020 • Pengcheng Dai, Jianlei Yang, Xucheng Ye, Xingzhou Cheng, Junyu Luo, Linghao Song, Yiran Chen, Weisheng Zhao
In this paper, \textit{SparseTrain} is proposed to accelerate CNN training by fully exploiting the sparsity.
no code implementations • ECCV 2020 • Xucheng Ye, Pengcheng Dai, Junyu Luo, Xin Guo, Yingjie Qi, Jianlei Yang, Yiran Chen
Sparsification is an efficient approach to accelerate CNN inference, but it is challenging to take advantage of sparsity in the training procedure because the involved gradients change dynamically.
no code implementations • 29 Mar 2017 • Junyu Luo, Yong Xu, Chenwei Tang, Jiancheng Lv
The inverse mapping of the generator of GANs (Generative Adversarial Nets) has great potential value, and several works have been developed to construct the inverse function of the generator by direct learning or adversarial learning. While the results are encouraging, the problem is highly challenging, and the existing ways of training inverse models of GANs have many disadvantages, such as difficulty of training or poor performance. For these reasons, we propose a new approach that uses an inverse generator ($IG$) model as the encoder and the pre-trained generator ($G$) as the decoder of an AutoEncoder network to train the $IG$ model.
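A minimal sketch of that training setup, with assumed details: both the frozen generator $G$ and the learnable inverse generator $IG$ are reduced here to toy linear maps so the loop stays self-contained (the paper's models are deep networks), and only the $IG$ weights receive gradient updates while $G$ acts as a fixed decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen, "pre-trained" generator G: maps a 2-d latent z to a 3-d sample x.
# (A hand-picked well-conditioned matrix stands in for a trained network.)
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

Z = rng.standard_normal((2, 500))   # latent samples z
X = W @ Z                           # generated data x = G(z)

# Learnable inverse generator IG: maps x back to a latent estimate z_hat.
V = np.zeros((2, 3))

lr = 0.05
for _ in range(500):
    R = W @ V @ X - X                          # residual G(IG(x)) - x
    grad = 2.0 * W.T @ R @ X.T / X.shape[1]    # d(loss)/dV; G stays fixed
    V -= lr * grad                             # gradient step on IG only

loss = np.mean(np.sum((W @ V @ X - X) ** 2, axis=0))
print(f"final reconstruction loss: {loss:.2e}")
```

Because generated samples lie in the range of $G$, the reconstruction loss can be driven essentially to zero here; the point of the construction is that the decoder's fixed weights force the encoder to learn $G$'s inverse rather than an arbitrary code.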