This paper presents an in-depth study of multimodal machine translation (MMT), examining the prevailing understanding that MMT systems exhibit decreased sensitivity to visual information when text inputs are complete.
The design choices in Transformer feed-forward neural networks have resulted in significant computational and parameter overhead.
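The overhead claim can be made concrete with a parameter count for the standard base Transformer configuration (d_model = 512, d_ff = 2048 from the original Transformer paper); the helper names below are illustrative, not from the abstract:

```python
# Parameter count of a standard position-wise Transformer FFN:
# two linear layers, d_model -> d_ff -> d_model, with biases.
def ffn_params(d_model: int, d_ff: int) -> int:
    return d_model * d_ff + d_ff + d_ff * d_model + d_model

def attn_params(d_model: int) -> int:
    # Q, K, V and output projections, each d_model x d_model (+ bias).
    return 4 * (d_model * d_model + d_model)

# In the base config the FFN sublayer holds roughly twice the
# parameters of the self-attention sublayer.
p_ffn = ffn_params(512, 2048)   # 2,099,712 parameters
p_att = attn_params(512)        # 1,050,624 parameters
```

This back-of-the-envelope count is one common way to illustrate why the feed-forward sublayer dominates the per-layer parameter budget.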
In this paper, we propose a novel architecture, the Enhanced Interactive Transformer (EIT), to address the issue of head degradation in self-attention mechanisms.
We exploit the scale uncertainty across different receptive-field sizes of a segmentation FCN to obtain infection regions.
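One plausible reading of this idea can be sketched as follows: run the segmentation network at several receptive-field sizes, treat the per-pixel disagreement between the resulting probability maps as the scale uncertainty, and combine consensus with uncertainty to select region pixels. The function name and thresholds are hypothetical, not the paper's exact procedure:

```python
import numpy as np

def infection_regions(prob_maps, tau=0.5, sigma=0.1):
    """Hedged sketch: prob_maps is a list of per-pixel foreground
    probability maps (H, W), one per receptive-field size.
    Keeps pixels the scales agree are infection."""
    stack = np.stack(prob_maps)   # (num_scales, H, W)
    mean = stack.mean(axis=0)     # consensus prediction across scales
    std = stack.std(axis=0)       # scale uncertainty (disagreement)
    return (mean > tau) & (std < sigma)

# Toy 2x2 example with two scales.
maps = [np.array([[0.9, 0.2], [0.8, 0.4]]),
        np.array([[0.8, 0.3], [0.9, 0.6]])]
mask = infection_regions(maps)
```

In this toy example the left column is kept (high consensus, low disagreement) while the right column is rejected.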
Unsupervised SR methods that do not require paired low-resolution (LR) and high-resolution (HR) images are therefore needed.
This paper presents a super-resolution (SR) method trained on an unpaired dataset of clinical CT and micro-CT volumes.
This paper introduces a new multi-modality loss function for GAN-based super-resolution that maintains image structure and intensity when trained on unpaired clinical CT and micro-CT volumes.
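A loss that jointly constrains intensity and structure can be sketched with two standard terms, an L1 intensity term and a gradient-based structure term; the specific term choices and weights here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def multimodality_loss(sr, hr, w_int=1.0, w_struct=1.0):
    """Hedged sketch of a combined intensity + structure loss.
    sr, hr: 2D arrays (generated and reference slices)."""
    intensity = np.abs(sr - hr).mean()        # intensity fidelity (L1)
    gy_s, gx_s = np.gradient(sr)              # edge/structure maps
    gy_h, gx_h = np.gradient(hr)
    structure = (np.abs(gx_s - gx_h) + np.abs(gy_s - gy_h)).mean()
    return w_int * intensity + w_struct * structure
```

The structure term penalizes edge mismatches even when mean intensities agree, which is the kind of behavior the abstract's "maintain image structure and intensity" phrasing suggests.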
First, the contextual net with a center-surround architecture extracts local contextual features from image patches and generates initial illuminant estimates together with the corresponding color-corrected patches.
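The center-surround step can be illustrated with a minimal non-learned stand-in: pool a center region and the full patch (its surround) separately, combine the two views into a unit-norm illuminant estimate, and apply a von Kries-style correction. Every name and design choice below is a hypothetical sketch, not the paper's network:

```python
import numpy as np

def center_surround_estimate(patch, inner=8):
    """Illustrative sketch of a center-surround illuminant estimator.
    patch: (H, W, 3) RGB image patch."""
    h, w, _ = patch.shape
    cy, cx = h // 2, w // 2
    center = patch[cy - inner // 2:cy + inner // 2,
                   cx - inner // 2:cx + inner // 2]
    c = center.reshape(-1, 3).mean(axis=0)   # center statistics
    s = patch.reshape(-1, 3).mean(axis=0)    # surround statistics
    est = 0.5 * (c + s)                      # combine the two views
    est = est / (np.linalg.norm(est) + 1e-8) # unit-norm illuminant
    # von Kries-style correction; sqrt(3) makes a neutral
    # illuminant (1/sqrt(3) per channel) leave the patch unchanged.
    corrected = patch / (est * np.sqrt(3) + 1e-8)
    return est, np.clip(corrected, 0.0, None)

# A patch with a uniform reddish cast yields a red-dominated estimate.
patch = np.ones((16, 16, 3)) * np.array([2.0, 1.0, 1.0])
est, corrected = center_surround_estimate(patch)
```

In the actual method a learned network replaces this pooling, but the data flow (patch in, illuminant estimate and corrected patch out) is the same.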