Search Results for author: Shuo Jin

Found 6 papers, 2 papers with code

Temporal Consistency Learning of inter-frames for Video Super-Resolution

1 code implementation • 3 Nov 2022 • Meiqin Liu, Shuo Jin, Chao Yao, Chunyu Lin, Yao Zhao

A spatio-temporal stability module is designed to learn the self-alignment from inter-frames.

Video Super-Resolution

Dense residual Transformer for image denoising

no code implementations • 14 May 2022 • Chao Yao, Shuo Jin, Meiqin Liu, Xiaojuan Ban

In this paper, we propose a Transformer-based image denoising network named DenSformer.

Image Compression · Image Denoising +1

RefineCap: Concept-Aware Refinement for Image Captioning

no code implementations • 8 Sep 2021 • Yekun Chai, Shuo Jin, Junliang Xing

Automatically translating images to texts involves image scene understanding and language modeling.

Descriptive · Image Captioning +3

Neural Text Classification by Jointly Learning to Cluster and Align

no code implementations • 24 Nov 2020 • Yekun Chai, Haidong Zhang, Shuo Jin

Distributional text clustering delivers semantically informative representations and captures the relevance between each word and semantic clustering centroids.

Clustering · General Classification +4

COVID-19 Chest CT Image Segmentation -- A Deep Convolutional Neural Network Solution

no code implementations • 23 Apr 2020 • Qingsen Yan, Bo Wang, Dong Gong, Chuan Luo, Wei Zhao, Jianhu Shen, Qinfeng Shi, Shuo Jin, Liang Zhang, Zheng You

Inspired by the observation that the boundary of the infected lung region can be enhanced by adjusting the global intensity, the proposed deep CNN introduces a feature variation block that adaptively adjusts the global properties of the features for segmenting COVID-19 infection.

Computed Tomography (CT) · Image Segmentation +3

Highway Transformer: Self-Gating Enhanced Self-Attentive Networks

1 code implementation • ACL 2020 • Yekun Chai, Shuo Jin, Xinwen Hou

Self-attention mechanisms have achieved striking state-of-the-art (SOTA) progress in various sequence learning tasks, building on multi-headed dot-product attention that attends to all global contexts at different positions.
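The multi-headed dot-product attention this abstract refers to can be sketched minimally as follows. This is a generic illustration of standard scaled dot-product self-attention, not the paper's self-gating mechanism; for simplicity the same tensor serves as queries, keys, and values, and the learned projection matrices of a real Transformer layer are omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, num_heads):
    """Scaled dot-product self-attention over all positions, split into heads.

    x: (seq_len, d_model) input sequence. Queries, keys, and values are all
    taken to be x itself (no learned W_q, W_k, W_v projections).
    """
    seq_len, d_model = x.shape
    assert d_model % num_heads == 0
    d_head = d_model // num_heads

    # split the model dimension into heads: (num_heads, seq_len, d_head)
    heads = x.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    # every position attends to every other position (global context)
    scores = heads @ heads.transpose(0, 2, 1) / np.sqrt(d_head)
    weights = softmax(scores, axis=-1)   # (num_heads, seq_len, seq_len)

    # weighted sum of values, then merge heads back to d_model
    out = weights @ heads                # (num_heads, seq_len, d_head)
    return out.transpose(1, 0, 2).reshape(seq_len, d_model)

x = np.random.randn(5, 8)
y = multi_head_self_attention(x, num_heads=2)
print(y.shape)  # (5, 8)
```

Each head attends over the full sequence independently, and the per-head outputs are concatenated back to the model dimension.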
