Search Results for author: Jiahang Xu

Found 10 papers, 4 papers with code

A Fully-Automatic Framework for Parkinson's Disease Diagnosis by Multi-Modality Images

no code implementations • 26 Feb 2019 • Jiahang Xu, Fangyang Jiao, Yechong Huang, Xinzhe Luo, Qian Xu, Ling Li, Xueling Liu, Chuantao Zuo, Ping Wu, Xiahai Zhuang

Methods: In this paper, we propose an automatic, end-to-end, multi-modality diagnosis framework, comprising segmentation, registration, feature generation and machine learning, to process striatal information for the diagnosis of PD.
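
The paper ships no code; purely as an illustration, here is a minimal scikit-learn sketch of the final feature-to-classifier stage of such a pipeline, with synthetic stand-ins for the striatal features that segmentation and registration would produce:

```python
# Hypothetical sketch: the feature matrix stands in for striatal features
# extracted after segmentation and registration (not the authors' features).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 16))    # 60 subjects, 16 striatal features (synthetic)
y = rng.integers(0, 2, size=60)  # 0 = control, 1 = PD (synthetic labels)

# scale features, then classify; cross-validation estimates accuracy
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```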

General Classification, Segmentation

Diagnosis of Alzheimer's Disease via Multi-modality 3D Convolutional Neural Network

no code implementations • 26 Feb 2019 • Yechong Huang, Jiahang Xu, Yuncheng Zhou, Tong Tong, Xiahai Zhuang, the Alzheimer's Disease Neuroimaging Initiative

In this paper, we propose a novel convolutional neural network (CNN) to fuse the multi-modality information, including T1-MRI and FDG-PET images around the hippocampal area, for the diagnosis of AD.
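
No implementation is released; a minimal PyTorch sketch of the general two-branch, late-fusion pattern the abstract describes (layer sizes, patch sizes, and names are illustrative assumptions, not the paper's architecture):

```python
# Illustrative only: each modality gets its own 3D-CNN branch; features are
# fused by concatenation before a shared classifier.
import torch
import torch.nn as nn

class TwoBranchCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            )
        self.mri_branch = branch()   # processes a T1-MRI hippocampal patch
        self.pet_branch = branch()   # processes the matching FDG-PET patch
        self.classifier = nn.Linear(32, num_classes)  # fuse by concatenation

    def forward(self, mri, pet):
        fused = torch.cat([self.mri_branch(mri), self.pet_branch(pet)], dim=1)
        return self.classifier(fused)

model = TwoBranchCNN()
mri = torch.randn(2, 1, 32, 32, 32)  # batch of 3D patches (made-up size)
pet = torch.randn(2, 1, 32, 32, 32)
print(model(mri, pet).shape)         # torch.Size([2, 2])
```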

Image Classification

KLDivNet: An unsupervised neural network for multi-modality image registration

no code implementations • 23 Aug 2019 • Yechong Huang, Tao Song, Jiahang Xu, Yinan Chen, Xiahai Zhuang

We then embed the KLDivNet into a registration network to achieve unsupervised deformable registration of multi-modality images.
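
No code accompanies the paper; as a hedged sketch only, the overall pattern of driving deformable registration with a learned similarity might look as follows, where `similarity_net` is a placeholder for KLDivNet and both networks are toy stand-ins:

```python
# Illustrative only: a registration net predicts a dense displacement field,
# the moving image is warped, and a learned (dis)similarity plus a smoothness
# term form the unsupervised loss.
import torch
import torch.nn.functional as F

def warp(moving, flow):
    # moving: (B,1,H,W); flow: (B,2,H,W) displacement in normalized coords
    B, _, H, W = moving.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(B, -1, -1, -1)
    grid = base + flow.permute(0, 2, 3, 1)
    return F.grid_sample(moving, grid, align_corners=True)

reg_net = torch.nn.Conv2d(2, 2, 3, padding=1)          # placeholder registration net
similarity_net = torch.nn.Conv2d(2, 1, 3, padding=1)   # placeholder for KLDivNet

fixed = torch.randn(4, 1, 64, 64)
moving = torch.randn(4, 1, 64, 64)
flow = reg_net(torch.cat([fixed, moving], dim=1))
warped = warp(moving, flow)
loss = similarity_net(torch.cat([fixed, warped], dim=1)).mean() \
       + 0.1 * flow.diff(dim=-1).abs().mean()          # smoothness penalty
loss.backward()
```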

Image Registration, Medical Image Registration

Improving Hypernasality Estimation with Automatic Speech Recognition in Cleft Palate Speech

no code implementations • 10 Aug 2022 • Kaitao Song, Teng Wan, Bixia Wang, Huiqiang Jiang, Luna Qiu, Jiahang Xu, Liping Jiang, Qun Lou, Yuqing Yang, Dongsheng Li, Xudong Wang, Lili Qiu

Specifically, we first pre-train an encoder-decoder framework with an automatic speech recognition (ASR) objective on a speech-to-text dataset, and then fine-tune the ASR encoder on the cleft palate dataset for hypernasality estimation.
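
No implementation is available; a minimal PyTorch sketch of the pre-trained-encoder-plus-estimation-head pattern the abstract describes (the encoder here is a hypothetical stand-in for the ASR encoder, and all sizes are made up):

```python
# Hedged sketch: reuse a pre-trained ASR encoder, attach a small regression
# head for hypernasality estimation, and fine-tune on the target data.
import torch
import torch.nn as nn

class HypernasalityEstimator(nn.Module):
    def __init__(self, encoder: nn.Module, hidden_dim: int):
        super().__init__()
        self.encoder = encoder                # initialized from ASR pre-training
        self.head = nn.Linear(hidden_dim, 1)  # regression head, trained anew

    def forward(self, features):
        h = self.encoder(features)            # (B, T, hidden_dim)
        return self.head(h.mean(dim=1))       # pool over time, predict a score

# stand-in encoder: any module mapping (B, T, D) -> (B, T, hidden_dim)
encoder = nn.Sequential(nn.Linear(80, 256), nn.ReLU(), nn.Linear(256, 256))
model = HypernasalityEstimator(encoder, hidden_dim=256)
mel = torch.randn(2, 100, 80)                 # batch of log-mel spectrograms
print(model(mel).shape)                       # torch.Size([2, 1])
```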

Automatic Speech Recognition (ASR)

SpaceEvo: Hardware-Friendly Search Space Design for Efficient INT8 Inference

1 code implementation • ICCV 2023 • Li Lyna Zhang, Xudong Wang, Jiahang Xu, Quanlu Zhang, Yujing Wang, Yuqing Yang, Ningxin Zheng, Ting Cao, Mao Yang

The combination of Neural Architecture Search (NAS) and quantization has proven successful in automatically designing low-FLOPs INT8 quantized neural networks (QNNs).
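
This is not SpaceEvo itself, but to make "INT8 quantized" concrete, PyTorch's built-in dynamic quantization shows the basic idea of storing weights as 8-bit integers:

```python
# Minimal example of INT8 quantization (unrelated to SpaceEvo's search):
# the linear layers of a toy network get INT8 weights via dynamic quantization.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)  # weights stored as INT8

x = torch.randn(1, 128)
print(qmodel(x).shape)  # torch.Size([1, 10])
print(type(qmodel[0]))  # a dynamically quantized Linear module
```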

Neural Architecture Search, Quantization

ElasticViT: Conflict-aware Supernet Training for Deploying Fast Vision Transformer on Diverse Mobile Devices

1 code implementation • ICCV 2023 • Chen Tang, Li Lyna Zhang, Huiqiang Jiang, Jiahang Xu, Ting Cao, Quanlu Zhang, Yuqing Yang, Zhi Wang, Mao Yang

However, prior supernet training methods that rely on uniform sampling suffer from the gradient conflict issue: the sampled subnets can have vastly different model sizes (e.g., 50M vs. 2G FLOPs), leading to different optimization directions and inferior performance.
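
Not ElasticViT's actual sampler; a toy sketch of one conflict-aware idea, rejection-sampling subnets inside a narrow FLOPs band so that jointly trained subnets stay comparable in size (the search space and cost model below are made up):

```python
# Illustrative sketch: restrict sampled subnets to a FLOPs band so subnets
# trained in the same step do not pull shared weights in opposite directions.
import random

def sample_subnet(space, flops_of, lo, hi, max_tries=1000):
    """Rejection-sample a subnet config whose FLOPs fall in [lo, hi]."""
    for _ in range(max_tries):
        cfg = {k: random.choice(v) for k, v in space.items()}
        if lo <= flops_of(cfg) <= hi:
            return cfg
    raise RuntimeError("no subnet found in the FLOPs band")

# toy search space: depth and width choices, with a made-up FLOPs model
space = {"depth": [4, 8, 12], "width": [64, 128, 256]}
flops_of = lambda c: c["depth"] * c["width"] ** 2  # hypothetical cost model
print(sample_subnet(space, flops_of, lo=100_000, hi=400_000))
```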

Neural Architecture Search

Constraint-aware and Ranking-distilled Token Pruning for Efficient Transformer Inference

1 code implementation • 26 Jun 2023 • Junyan Li, Li Lyna Zhang, Jiahang Xu, Yujing Wang, Shaoguang Yan, Yunqing Xia, Yuqing Yang, Ting Cao, Hao Sun, Weiwei Deng, Qi Zhang, Mao Yang

Deploying pre-trained transformer models like BERT on downstream tasks in resource-constrained scenarios is challenging due to their high inference cost, which grows rapidly with input sequence length.
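The released code is the reference; independently of it, here is a minimal illustration of token pruning as a mechanism, keeping only the top-k tokens by an importance score (here, hypothetical attention received from the [CLS] token) so later layers process shorter sequences:

```python
# Illustrative only (not the paper's constraint-aware method): drop
# low-importance tokens so subsequent layers see a shorter sequence.
import torch

def prune_tokens(hidden, cls_attn, keep: int):
    # hidden: (B, T, D) token states; cls_attn: (B, T) attention from [CLS]
    idx = cls_attn.topk(keep, dim=1).indices                 # (B, keep)
    idx = idx.unsqueeze(-1).expand(-1, -1, hidden.size(-1))  # (B, keep, D)
    return hidden.gather(1, idx)

hidden = torch.randn(2, 128, 768)
cls_attn = torch.rand(2, 128)
pruned = prune_tokens(hidden, cls_attn, keep=64)
print(pruned.shape)  # torch.Size([2, 64, 768])
```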

Model Compression

Compresso: Structured Pruning with Collaborative Prompting Learns Compact Large Language Models

1 code implementation • 8 Oct 2023 • Song Guo, Jiahang Xu, Li Lyna Zhang, Mao Yang

To this end, Compresso prunes LLaMA-7B to 5.4B, maintaining original performance and even surpassing LLaMA-7B in reading comprehension by 2.62%.

Natural Language Understanding, Reading Comprehension

LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens

no code implementations • 21 Feb 2024 • Yiran Ding, Li Lyna Zhang, Chengruidong Zhang, Yuanyuan Xu, Ning Shang, Jiahang Xu, Fan Yang, Mao Yang

This is achieved by three key innovations: (i) we identify and exploit two forms of non-uniformities in positional interpolation through an efficient search, providing a better initialization for fine-tuning and enabling an 8x extension in non-fine-tuning scenarios; (ii) we introduce a progressive extension strategy that first fine-tunes a 256k length LLM and then conducts a second positional interpolation on the fine-tuned extended LLM to achieve a 2048k context window; (iii) we readjust LongRoPE on 8k length to recover the short context window performance.
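
No code is released; here is a hedged sketch of the RoPE positional-interpolation mechanism LongRoPE builds on, where the per-dimension rescale factors are placeholders for those LongRoPE finds by search (uniform interpolation corresponds to a single shared factor):

```python
# Hedged sketch: rescale rotary frequencies so longer positions map into the
# angle range seen at training time; per-dimension factors give the
# non-uniform interpolation, uniform interpolation uses one shared factor.
import torch

def rope_angles(positions, dim=64, base=10000.0, rescale=None):
    inv_freq = base ** (-torch.arange(0, dim, 2).float() / dim)  # (dim/2,)
    if rescale is not None:
        inv_freq = inv_freq / rescale  # per-dimension interpolation factors
    return torch.outer(positions.float(), inv_freq)  # (T, dim/2) angles

pos = torch.arange(8192)  # 4x beyond a hypothetical 2048-token training window
uniform = rope_angles(pos, rescale=torch.full((32,), 4.0))      # one factor
nonuniform = rope_angles(pos, rescale=torch.linspace(1.0, 4.0, 32))  # searched
print(uniform.shape, nonuniform.shape)  # torch.Size([8192, 32]) for both
```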

