Search Results for author: Junyu Luo

Found 15 papers, 5 papers with code

CoRelation: Boosting Automatic ICD Coding Through Contextualized Code Relation Learning

no code implementations24 Feb 2024 Junyu Luo, Xiaochen Wang, Jiaqi Wang, Aofei Chang, Yaqing Wang, Fenglong Ma

Automatic International Classification of Diseases (ICD) coding plays a crucial role in the extraction of relevant information from clinical notes for proper recording and billing.

Relation
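
As a rough illustration of the task, automatic ICD coding is often framed as multi-label classification over a clinical note. The sketch below is a generic baseline, not the CoRelation model; the bag-of-words encoder, vocabulary size, and code count are illustrative assumptions.

    import torch
    import torch.nn as nn

    class ICDCoder(nn.Module):
        """Toy multi-label ICD coder: pooled note embedding -> one logit per code."""
        def __init__(self, vocab_size=30000, embed_dim=128, num_codes=50):
            super().__init__()
            self.note_encoder = nn.EmbeddingBag(vocab_size, embed_dim)  # mean-pooled bag of tokens
            self.classifier = nn.Linear(embed_dim, num_codes)           # one logit per ICD code

        def forward(self, token_ids, offsets):
            # token_ids: concatenated token ids of all notes in the batch
            # offsets: start index of each note within token_ids
            note_vec = self.note_encoder(token_ids, offsets)
            return self.classifier(note_vec)

    model = ICDCoder()
    loss_fn = nn.BCEWithLogitsLoss()  # codes are treated as independent binary labels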

Recent Advances in Predictive Modeling with Electronic Health Records

no code implementations2 Feb 2024 Jiaqi Wang, Junyu Luo, Muchao Ye, Xiaochen Wang, Yuan Zhong, Aofei Chang, Guanjie Huang, Ziyi Yin, Cao Xiao, Jimeng Sun, Fenglong Ma

This survey systematically reviews recent advances in deep learning-based predictive models using EHR data.
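
To make the object of study concrete, a typical deep predictive model of the kind surveyed here encodes a patient's sequence of visits (each a set of medical codes) and outputs a risk score. The sketch below is a generic illustration rather than any specific model from the survey; the pooling scheme and layer sizes are assumptions.

    import torch
    import torch.nn as nn

    class VisitRiskModel(nn.Module):
        def __init__(self, num_codes, embed_dim=64, hidden_dim=128):
            super().__init__()
            self.code_embed = nn.Embedding(num_codes, embed_dim, padding_idx=0)
            self.visit_rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, 1)  # e.g., risk of a future diagnosis

        def forward(self, visit_codes):
            # visit_codes: (batch, num_visits, codes_per_visit) tensor of code ids
            visit_vec = self.code_embed(visit_codes).sum(dim=2)  # pool codes within each visit
            _, h = self.visit_rnn(visit_vec)                     # encode the visit sequence
            return self.out(h[-1])                               # one risk logit per patient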

A Survey of Data-Efficient Graph Learning

no code implementations1 Feb 2024 Wei Ju, Siyu Yi, Yifan Wang, Qingqing Long, Junyu Luo, Zhiping Xiao, Ming Zhang

Graph-structured data, prevalent in domains ranging from social networks to biochemical analysis, serve as the foundation for diverse real-world systems.

Graph Learning

Hierarchical Pretraining on Multimodal Electronic Health Records

1 code implementation11 Oct 2023 Xiaochen Wang, Junyu Luo, Jiaqi Wang, Ziyi Yin, Suhan Cui, Yuan Zhong, Yaqing Wang, Fenglong Ma

Pretraining has proven to be a powerful technique in natural language processing (NLP), exhibiting remarkable success in various NLP downstream tasks.

Zero-Resource Hallucination Prevention for Large Language Models

1 code implementation6 Sep 2023 Junyu Luo, Cao Xiao, Fenglong Ma

Existing techniques for hallucination detection in language assistants rely on intricate, fuzzy, free-language-based chain-of-thought (CoT) techniques, or on parameter-based methods that suffer from interpretability issues.

Hallucination

3D-SPS: Single-Stage 3D Visual Grounding via Referred Point Progressive Selection

1 code implementation CVPR 2022 Junyu Luo, Jiahui Fu, Xianghao Kong, Chen Gao, Haibing Ren, Hao Shen, Huaxia Xia, Si Liu

3D visual grounding aims to locate the referred target object in 3D point cloud scenes according to a free-form language description.

Visual Grounding
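
To illustrate the grounding task itself (not the 3D-SPS architecture), the toy sketch below scores per-point features against a sentence embedding and returns the best-matching point; the feature dimensions and the dot-product matcher are assumptions.

    import torch
    import torch.nn as nn

    class PointTextMatcher(nn.Module):
        def __init__(self, point_dim=256, text_dim=256):
            super().__init__()
            self.proj = nn.Linear(text_dim, point_dim)  # map the sentence into point-feature space

        def forward(self, point_feats, text_feat):
            # point_feats: (num_points, point_dim); text_feat: (text_dim,)
            query = self.proj(text_feat)
            scores = point_feats @ query   # similarity of each point to the description
            return scores.argmax()         # index of the point best matching the query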

MedAttacker: Exploring Black-Box Adversarial Attacks on Risk Prediction Models in Healthcare

no code implementations11 Dec 2021 Muchao Ye, Junyu Luo, Guanjie Zheng, Cao Xiao, Ting Wang, Fenglong Ma

Deep neural networks (DNNs) have been broadly adopted in health risk prediction to provide healthcare diagnoses and treatments.

Adversarial Attack, Position, +1
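
The black-box setting means the attacker can only query the model's output scores. The sketch below is a generic greedy, score-based attack to illustrate that setting; the objective (pushing the predicted risk down) and the search strategy are assumptions, not MedAttacker's algorithm.

    import copy

    def greedy_blackbox_attack(predict_risk, record, candidate_codes, budget=3):
        # predict_risk: black-box callable returning the model's risk score for a code list
        adv = copy.deepcopy(record)
        for _ in range(budget):
            best_score, best_edit = predict_risk(adv), None
            for i in range(len(adv)):
                for code in candidate_codes:
                    trial = adv[:i] + [code] + adv[i + 1:]  # substitute one medical code
                    score = predict_risk(trial)             # only output scores are observed
                    if score < best_score:
                        best_score, best_edit = score, trial
            if best_edit is None:
                break
            adv = best_edit
        return adv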

FedSkel: Efficient Federated Learning on Heterogeneous Systems with Skeleton Gradients Update

1 code implementation20 Aug 2021 Junyu Luo, Jianlei Yang, Xucheng Ye, Xin Guo, Weisheng Zhao

Federated learning aims to protect users' privacy while enabling data analysis across different participants.

Federated Learning
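
A rough sketch of the communication pattern: in standard federated averaging the server aggregates full client updates, whereas a skeleton-style scheme syncs only a selected subset of parameters. The selection rule below is an assumption for illustration, not FedSkel's actual criterion.

    import torch

    def federated_round(global_params, client_updates, skeleton_keys):
        # global_params: dict of (detached) parameter tensors
        # client_updates: list of dicts mapping parameter name -> local update (delta)
        new_params = {k: v.clone() for k, v in global_params.items()}
        for key in skeleton_keys:  # only the "skeleton" subset is communicated and averaged
            avg_delta = torch.stack([u[key] for u in client_updates]).mean(dim=0)
            new_params[key] = new_params[key] + avg_delta
        return new_params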

TransRefer3D: Entity-and-Relation Aware Transformer for Fine-Grained 3D Visual Grounding

no code implementations5 Aug 2021 Dailan He, Yusheng Zhao, Junyu Luo, Tianrui Hui, Shaofei Huang, Aixi Zhang, Si Liu

Existing works usually adopt dynamic graph networks to indirectly model the intra-/inter-modal interactions, making it difficult for the model to distinguish the referred object from distractors due to the monolithic representations of visual and linguistic contents.

Relation, Sentence, +1

FedSiam: Towards Adaptive Federated Semi-Supervised Learning

no code implementations6 Dec 2020 Zewei Long, Liwei Che, Yaqing Wang, Muchao Ye, Junyu Luo, Jinze Wu, Houping Xiao, Fenglong Ma

In this paper, we focus on designing a general framework, FedSiam, to tackle different scenarios of federated semi-supervised learning, including four settings in the labels-at-client scenario and two settings in the labels-at-server scenario.

Federated Learning
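
For intuition about the labels-at-client setting, the sketch below mixes a supervised loss on the few labeled examples with a pseudo-label loss on unlabeled ones during a local client step. The confidence-threshold pseudo-labeling rule is an illustrative assumption, not FedSiam's Siamese design.

    import torch
    import torch.nn.functional as F

    def client_step(model, labeled_batch, unlabeled_x, optimizer, threshold=0.9):
        x_l, y_l = labeled_batch
        loss = F.cross_entropy(model(x_l), y_l)            # supervised part
        with torch.no_grad():
            probs = F.softmax(model(unlabeled_x), dim=1)
            conf, pseudo = probs.max(dim=1)                # confidence-filtered pseudo-labels
        mask = conf > threshold
        if mask.any():
            loss = loss + F.cross_entropy(model(unlabeled_x[mask]), pseudo[mask])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()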

Accelerating CNN Training by Pruning Activation Gradients

no code implementations ECCV 2020 Xucheng Ye, Pengcheng Dai, Junyu Luo, Xin Guo, Yingjie Qi, Jianlei Yang, Yiran Chen

Sparsification is an efficient approach to accelerate CNN inference, but it is challenging to take advantage of sparsity in the training procedure because the involved gradients change dynamically.
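
One way to exploit this kind of sparsity during training is to zero out small activation gradients in the backward pass, e.g. via a tensor hook as sketched below. The fixed threshold is an assumption; the paper's pruning scheme may differ.

    import torch

    def prune_grad(grad, threshold=1e-3):
        # zero out near-zero gradient entries so downstream backprop can exploit sparsity
        return torch.where(grad.abs() < threshold, torch.zeros_like(grad), grad)

    x = torch.randn(8, 16, requires_grad=True)
    h = torch.relu(x)            # an intermediate activation
    h.register_hook(prune_grad)  # sparsify the gradient w.r.t. this activation in backward
    h.sum().backward()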

Learning Inverse Mapping by Autoencoder based Generative Adversarial Nets

no code implementations29 Mar 2017 Junyu Luo, Yong Xu, Chenwei Tang, Jiancheng Lv

The inverse mapping of a GAN's (Generative Adversarial Nets) generator has great potential value. Hence, some works have been developed to construct the inverse function of the generator by direct learning or adversarial learning. While the results are encouraging, the problem is highly challenging, and the existing ways of training inverse models of GANs have many disadvantages, such as being hard to train or performing poorly. For these reasons, we propose a new approach that uses an inverse generator ($IG$) model as the encoder and a pre-trained generator ($G$) as the decoder of an AutoEncoder network to train the $IG$ model.
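
A minimal sketch of the training loop described above: the trainable inverse generator $IG$ encodes an image to a latent code, the frozen pre-trained generator $G$ decodes it back, and a reconstruction loss is minimized. The specific loss (MSE) and optimizer setup are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def train_step(IG, G, images, optimizer):
        for p in G.parameters():
            p.requires_grad_(False)       # G stays fixed as the decoder
        z_hat = IG(images)                # encode: image -> latent code
        recon = G(z_hat)                  # decode with the pre-trained generator
        loss = F.mse_loss(recon, images)  # reconstruction objective trains IG only
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()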
