Search Results for author: Heng Yu

Found 34 papers, 6 papers with code

Towards Enhancing Faithfulness for Neural Machine Translation

no code implementations · EMNLP 2020 · Rongxiang Weng, Heng Yu, Xiangpeng Wei, Weihua Luo

Neural machine translation (NMT) has achieved great success due to its ability to generate high-quality sentences.

Decoder · Machine Translation · +3

CoGS: Controllable Gaussian Splatting

no code implementations · 9 Dec 2023 · Heng Yu, Joel Julin, Zoltán Á. Milacski, Koichiro Niinuma, László A. Jeni

We present CoGS, a method for Controllable Gaussian Splatting that enables the direct manipulation of scene elements, offering real-time control of dynamic scenes without the prerequisite of pre-computing control signals.

SubZero: Subspace Zero-Shot MRI Reconstruction

1 code implementation · 28 Nov 2023 · Heng Yu, Yamin Arefeen, Berkin Bilgic

Recently introduced zero-shot self-supervised learning (ZS-SSL) has shown potential in accelerated MRI in a scan-specific scenario, enabling high-quality reconstructions without access to a large training dataset.

MRI Reconstruction · Self-Supervised Learning
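The core of scan-specific self-supervision as described above is that the acquired k-space samples themselves are split into disjoint sets, one used for training and one held out for self-validation. A minimal sketch of that partition step, with all names illustrative rather than taken from the SubZero code:

```python
import random

def split_kspace_indices(acquired, val_fraction=0.2, seed=0):
    """Partition acquired k-space sample indices into disjoint
    training and self-validation sets, as in zero-shot
    self-supervised reconstruction (a toy sketch, not the
    authors' implementation)."""
    rng = random.Random(seed)
    shuffled = acquired[:]
    rng.shuffle(shuffled)
    n_val = max(1, int(len(shuffled) * val_fraction))
    val = set(shuffled[:n_val])      # held out to monitor overfitting
    train = set(shuffled[n_val:])    # used as the data-consistency loss target
    return train, val

acquired = list(range(64))           # indices of sampled k-space lines
train, val = split_kspace_indices(acquired)
```

Because the split comes from a single scan, no external training dataset is required; the validation subset stands in for unseen data when deciding when to stop training.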

Instruct2Attack: Language-Guided Semantic Adversarial Attacks

no code implementations · 27 Nov 2023 · Jiang Liu, Chen Wei, Yuxiang Guo, Heng Yu, Alan Yuille, Soheil Feizi, Chun Pong Lau, Rama Chellappa

We propose Instruct2Attack (I2A), a language-guided semantic attack that generates semantically meaningful perturbations according to free-form language instructions.

TFDet: Target-Aware Fusion for RGB-T Pedestrian Detection

1 code implementation · 26 May 2023 · Xue Zhang, Xiao-Han Zhang, Jiacheng Ying, Zehua Sheng, Heng Yu, Chunguang Li, Hui-Liang Shen

In this paper, we propose a novel target-aware fusion strategy for multispectral pedestrian detection, named TFDet.

Pedestrian Detection

Unsupervised Style-based Explicit 3D Face Reconstruction from Single Image

no code implementations · 24 Apr 2023 · Heng Yu, Zoltan A. Milacski, Laszlo A. Jeni

Inferring 3D object structures from a single image is an ill-posed task due to depth ambiguity and occlusion.

3D Face Reconstruction · 3D Reconstruction · +3

DyLiN: Making Light Field Networks Dynamic

no code implementations · CVPR 2023 · Heng Yu, Joel Julin, Zoltan A. Milacski, Koichiro Niinuma, Laszlo A. Jeni

Light Field Networks, the re-formulations of radiance fields to oriented rays, are orders of magnitude faster than their coordinate-network counterparts, and provide higher fidelity with respect to representing 3D structures from 2D observations.

Attribute · Knowledge Distillation
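The speed-up mentioned above comes from querying the network once per oriented ray instead of integrating many samples along it. One common ray parameterization for such networks is 6-D Plücker coordinates, the direction together with the moment origin × direction; a small sketch (illustrative, not the DyLiN implementation):

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def pluecker_ray(origin, direction):
    """Map an oriented ray to 6-D Plücker coordinates (d, o × d).
    The moment o × d is unchanged when the origin slides along the
    ray, so the encoding depends only on the ray itself -- exactly
    the input a light field network consumes in a single query."""
    moment = cross(origin, direction)
    return (*direction, *moment)

ray = pluecker_ray((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))  # → (0.0, 1.0, 0.0, 0.0, 0.0, 1.0)
```

Shifting the origin along the direction, e.g. from (1, 0, 0) to (1, 2, 0), yields the same six numbers, which is what makes the encoding a function of rays rather than of points.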

Patch Network for medical image Segmentation

no code implementations · 23 Feb 2023 · Weihu Song, Heng Yu, Jianhua Wu

Accurate and fast segmentation of medical images is clinically essential, yet current methods face a trade-off: convolutional neural networks offer fast inference but struggle to learn image contextual features, while transformers perform well but have high hardware requirements.

Image Segmentation · Lesion Segmentation · +3

Non-pooling Network for medical image segmentation

no code implementations · 21 Feb 2023 · Weihu Song, Heng Yu

Existing studies tend to focus on model modifications and integration with higher accuracy, which improves performance but also carries huge computational costs, resulting in longer detection times.

Decoder · Image Segmentation · +2

CoNFies: Controllable Neural Face Avatars

no code implementations · 16 Nov 2022 · Heng Yu, Koichiro Niinuma, Laszlo A. Jeni

Neural Radiance Fields (NeRF) are compelling techniques for modeling dynamic 3D scenes from 2D image collections.

Action Recognition

Learning to Generalize to More: Continuous Semantic Augmentation for Neural Machine Translation

2 code implementations · ACL 2022 · Xiangpeng Wei, Heng Yu, Yue Hu, Rongxiang Weng, Weihua Luo, Jun Xie, Rong Jin

Although data augmentation is widely used to enrich the training data, conventional methods with discrete manipulations fail to generate diverse and faithful training samples.

Data Augmentation · Machine Translation · +3

GPU-Net: Lightweight U-Net with more diverse features

1 code implementation · 7 Jan 2022 · Heng Yu, Di Fan, Weihu Song

Image segmentation is an important task in the medical image field and many convolutional neural networks (CNNs) based methods have been proposed, among which U-Net and its variants show promising performance.

Image Segmentation · Segmentation · +1
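The defining idea of U-Net and its variants, mentioned above, is an encoder that reduces resolution paired with a decoder that restores it, with skip connections carrying full-resolution features across. A toy 1-D sketch of that structure (illustrative only; a real U-Net uses learned convolutions, not fixed pooling):

```python
def down(x):
    """Encoder step: 2x average pooling halves the resolution."""
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]

def up(x):
    """Decoder step: nearest-neighbour upsampling restores resolution."""
    return [v for v in x for _ in range(2)]

def toy_unet(x):
    """Toy 1-D encoder-decoder with a U-Net-style skip connection:
    full-resolution features are saved before downsampling and fused
    back in after upsampling."""
    skip = x                    # features kept at full resolution
    bottleneck = down(x)        # coarse, context-rich features
    decoded = up(bottleneck)    # back to the input resolution
    return [s + d for s, d in zip(skip, decoded)]

out = toy_unet([1.0, 3.0, 5.0, 7.0])  # → [3.0, 5.0, 11.0, 13.0]
```

The skip path is what lets the output keep sharp, per-position detail even though the bottleneck has discarded it, which is why the pattern dominates medical image segmentation.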

Dynamic Texture Recognition using PDV Hashing and Dictionary Learning on Multi-scale Volume Local Binary Pattern

no code implementations · 24 Nov 2021 · Ruxin Ding, Jianfeng Ren, Heng Yu, Jiawei Li

To tackle this problem, we propose a method for dynamic texture recognition using PDV hashing and dictionary learning on multi-scale volume local binary pattern (PHD-MVLBP).

Dictionary Learning · Dynamic Texture Recognition
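The building block underneath volume local binary patterns is the ordinary LBP code: each neighbour is thresholded against the centre value and the resulting bits are packed into an integer; the volume variant applies the same comparison to a 3-D spatio-temporal neighbourhood spanning consecutive frames. A minimal sketch of the 2-D code (illustrative, not the PHD-MVLBP implementation):

```python
def lbp_code(center, neighbours):
    """Basic local binary pattern: threshold each neighbour against
    the centre value and pack the resulting bits into an integer.
    With 8 neighbours this yields a code in [0, 255]."""
    code = 0
    for i, n in enumerate(neighbours):
        if n >= center:
            code |= 1 << i
    return code

# Centre pixel 5, its 8 neighbours in a fixed circular order:
code = lbp_code(5, [7, 3, 5, 1, 9, 2, 6, 4])  # bits 0,2,4,6 set → 85
```

Histograms of such codes over a region give a texture descriptor that is invariant to monotonic illumination changes, since only the sign of each comparison matters.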

Scan Specific Artifact Reduction in K-space (SPARK) Neural Networks Synergize with Physics-based Reconstruction to Accelerate MRI

no code implementations · 2 Apr 2021 · Yamin Arefeen, Onur Beker, Jaejin Cho, Heng Yu, Elfar Adalsteinsson, Berkin Bilgic

Conclusion: SPARK synergizes with physics-based acquisition and reconstruction techniques to improve accelerated MRI by training scan-specific models to estimate and correct reconstruction errors in k-space.

Translation Memory Guided Neural Machine Translation

no code implementations · 1 Jan 2021 · Shaohui Kuang, Heng Yu, Weihua Luo, Qiang Wang

Existing approaches either employ an extra encoder to encode information from the TM or concatenate the source sentence and TM sentences as the encoder's input.

Decoder · Language Modelling · +5

Uncertainty-Aware Semantic Augmentation for Neural Machine Translation

no code implementations · EMNLP 2020 · Xiangpeng Wei, Heng Yu, Yue Hu, Rongxiang Weng, Luxi Xing, Weihua Luo

As a sequence-to-sequence generation task, neural machine translation (NMT) naturally contains intrinsic uncertainty, where a single sentence in one language has multiple valid counterparts in the other.

Machine Translation · NMT · +3

On Learning Universal Representations Across Languages

no code implementations · ICLR 2021 · Xiangpeng Wei, Rongxiang Weng, Yue Hu, Luxi Xing, Heng Yu, Weihua Luo

Recent studies have demonstrated the overwhelming advantage of cross-lingual pre-trained models (PTMs), such as multilingual BERT and XLM, on cross-lingual NLP tasks.

Contrastive Learning · Cross-Lingual Natural Language Inference · +4

Language-aware Interlingua for Multilingual Neural Machine Translation

no code implementations · ACL 2020 · Changfeng Zhu, Heng Yu, Shanbo Cheng, Weihua Luo

However, the traditional multilingual model fails to capture the diversity and specificity of different languages, resulting in inferior performance compared with individual models that are sufficiently trained.

Decoder · Machine Translation · +3

Multiscale Collaborative Deep Models for Neural Machine Translation

1 code implementation · ACL 2020 · Xiangpeng Wei, Heng Yu, Yue Hu, Yue Zhang, Rongxiang Weng, Weihua Luo

Recent evidence reveals that Neural Machine Translation (NMT) models with deeper neural networks can be more effective but are difficult to train.

Machine Translation · NMT · +1

AR: Auto-Repair the Synthetic Data for Neural Machine Translation

no code implementations · 5 Apr 2020 · Shanbo Cheng, Shaohui Kuang, Rongxiang Weng, Heng Yu, Changfeng Zhu, Weihua Luo

Compared with using only limited authentic parallel data as the training corpus, many studies have shown that incorporating synthetic parallel data, generated by back translation (BT) or forward translation (FT, or self-training), into the NMT training process can significantly improve translation quality.

Machine Translation · NMT · +2
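Back translation, as used above to generate the synthetic data being repaired, pairs monolingual target-side sentences with machine translations of them back into the source language. A minimal sketch of that pipeline, where `reverse_model` stands in for a trained target→source NMT system (here replaced by a toy word-reversal function purely for illustration):

```python
def back_translate(monolingual_target, reverse_model):
    """Build synthetic parallel (source, target) pairs by translating
    monolingual target-side sentences back into the source language.
    The target side is authentic text; only the source side is synthetic."""
    return [(reverse_model(t), t) for t in monolingual_target]

# Toy stand-in for a target->source translator: reverses word order.
toy_reverse = lambda s: " ".join(reversed(s.split()))

pairs = back_translate(["guten morgen welt"], toy_reverse)
```

Because the synthetic side is the model input rather than the training target, noise there is more tolerable than on the output side, which is part of why BT works and also why repairing the synthetic sentences, as this paper proposes, can help further.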

GRET: Global Representation Enhanced Transformer

no code implementations · 24 Feb 2020 · Rongxiang Weng, Hao-Ran Wei, Shu-Jian Huang, Heng Yu, Lidong Bing, Weihua Luo, Jia-Jun Chen

The encoder maps the words in the input sentence into a sequence of hidden states, which are then fed into the decoder to generate the output sentence.

Decoder · Machine Translation · +4

Acquiring Knowledge from Pre-trained Model to Neural Machine Translation

no code implementations · 4 Dec 2019 · Rongxiang Weng, Heng Yu, Shu-Jian Huang, Shanbo Cheng, Weihua Luo

The standard paradigm for exploiting them includes two steps: first, pre-training a model, e.g. BERT, with large-scale unlabeled monolingual data.

General Knowledge · Knowledge Distillation · +3

Improving Neural Machine Translation with Pre-trained Representation

no code implementations · 21 Aug 2019 · Rongxiang Weng, Heng Yu, Shu-Jian Huang, Weihua Luo, Jia-Jun Chen

Then, we design a framework for integrating both source and target sentence-level representations into NMT model to improve the translation quality.

Machine Translation · NMT · +3

Sequence Generation: From Both Sides to the Middle

no code implementations · 23 Jun 2019 · Long Zhou, Jiajun Zhang, Cheng-qing Zong, Heng Yu

The encoder-decoder framework has achieved promising progress for many sequence generation tasks, such as neural machine translation and text summarization.

Decoder · Machine Translation · +3

Improving Multilingual Semantic Textual Similarity with Shared Sentence Encoder for Low-resource Languages

no code implementations · 20 Oct 2018 · Xin Tang, Shanbo Cheng, Loc Do, Zhiyu Min, Feng Ji, Heng Yu, Ji Zhang, Haiqin Chen

Our approach extends a basic monolingual STS framework to a shared multilingual encoder pretrained with a translation task to incorporate rich-resource language data.

Machine Translation · Semantic Similarity · +4
