no code implementations • EMNLP 2020 • Rongxiang Weng, Heng Yu, Xiangpeng Wei, Weihua Luo
Neural machine translation (NMT) has achieved great success due to the ability to generate high-quality sentences.
no code implementations • 9 Dec 2023 • Heng Yu, Joel Julin, Zoltán Á. Milacski, Koichiro Niinuma, László A. Jeni
We present CoGS, a method for Controllable Gaussian Splatting that enables direct manipulation of scene elements, offering real-time control of dynamic scenes without the prerequisite of pre-computing control signals.
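The controllability can be pictured, very roughly, as direct edits to per-Gaussian parameters. Below is a minimal sketch of that general idea (not the paper's code); `apply_control` and the region mask are hypothetical:

```python
# Illustrative sketch: a scene as a set of 3D Gaussians whose centers can
# be displaced directly by a control signal, without precomputed signals.
import numpy as np

rng = np.random.default_rng(0)
means = rng.normal(size=(1000, 3))        # Gaussian centers
scales = np.full((1000, 3), 0.05)         # per-axis extents
colors = rng.uniform(size=(1000, 3))      # RGB per Gaussian

def apply_control(means, mask, translation):
    """Move a selected subset of Gaussians rigidly -- the kind of direct,
    per-element edit that controllable splatting aims to expose."""
    out = means.copy()
    out[mask] += translation
    return out

region = means[:, 1] > 1.0                # hypothetical selection of a region
means_edited = apply_control(means, region, np.array([0.1, 0.0, 0.0]))
```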
1 code implementation • 28 Nov 2023 • Heng Yu, Yamin Arefeen, Berkin Bilgic
Recently introduced zero-shot self-supervised learning (ZS-SSL) has shown potential for accelerated MRI in a scan-specific scenario, enabling high-quality reconstructions without access to a large training dataset.
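For intuition, here is a minimal sketch of the scan-specific data split that zero-shot self-supervised schemes rely on: the acquired k-space samples of a single scan are partitioned into disjoint training, loss, and self-validation sets. The split ratios and shapes are assumptions, not the paper's settings:

```python
# Partition one scan's acquired k-space locations into disjoint sets.
import numpy as np

rng = np.random.default_rng(0)
acq_mask = rng.uniform(size=(256, 256)) < 0.3   # acquired k-space locations
idx = np.flatnonzero(acq_mask)
rng.shuffle(idx)

n = idx.size
train_idx, loss_idx, val_idx = np.split(idx, [int(0.6 * n), int(0.85 * n)])

def to_mask(indices, shape):
    m = np.zeros(shape, dtype=bool)
    m.flat[indices] = True
    return m

train_mask = to_mask(train_idx, acq_mask.shape)  # enforced in the network
loss_mask = to_mask(loss_idx, acq_mask.shape)    # defines the training loss
val_mask = to_mask(val_idx, acq_mask.shape)      # monitors early stopping
```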
no code implementations • 27 Nov 2023 • Jiang Liu, Chen Wei, Yuxiang Guo, Heng Yu, Alan Yuille, Soheil Feizi, Chun Pong Lau, Rama Chellappa
We propose Instruct2Attack (I2A), a language-guided semantic attack that generates semantically meaningful perturbations according to free-form language instructions.
1 code implementation • 26 May 2023 • Xue Zhang, Xiao-Han Zhang, Jiacheng Ying, Zehua Sheng, Heng Yu, Chunguang Li, Hui-Liang Shen
In this paper, we propose a novel target-aware fusion strategy for multispectral pedestrian detection, named TFDet.
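Since the snippet does not spell out the fusion design, here is a generic illustration of two-modality feature fusion with a learned spatial weight, as a stand-in for the kind of module multispectral detectors use (not TFDet's actual architecture):

```python
# Generic two-stream fusion: a per-pixel weight decides how much the RGB
# versus the thermal features contribute at each location.
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.weight = nn.Sequential(nn.Conv2d(2 * c, 1, 1), nn.Sigmoid())
        self.merge = nn.Conv2d(2 * c, c, 3, padding=1)

    def forward(self, rgb_feat, thermal_feat):
        both = torch.cat([rgb_feat, thermal_feat], dim=1)
        w = self.weight(both)                  # per-pixel modality weight
        mixed = torch.cat([w * rgb_feat, (1 - w) * thermal_feat], dim=1)
        return self.merge(mixed)

fused = FusionBlock(16)(torch.randn(1, 16, 32, 32), torch.randn(1, 16, 32, 32))
```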
no code implementations • 24 Apr 2023 • Heng Yu, Zoltan A. Milacski, Laszlo A. Jeni
Inferring 3D object structures from a single image is an ill-posed task due to depth ambiguity and occlusion.
no code implementations • CVPR 2023 • Heng Yu, Joel Julin, Zoltan A. Milacski, Koichiro Niinuma, Laszlo A. Jeni
Light Field Networks, the re-formulation of radiance fields to oriented rays, are orders of magnitude faster than their coordinate-network counterparts, and provide higher fidelity with respect to representing 3D structures from 2D observations.
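A toy sketch of why this is fast: a light field network answers one MLP query per oriented ray (here parameterized via Plücker coordinates), instead of integrating many point samples along each ray. Layer sizes are illustrative assumptions:

```python
# One network call per ray -> RGB directly, no volume rendering.
import torch
import torch.nn as nn

class LightFieldNet(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB for the whole ray
        )

    def forward(self, origins, directions):
        # Plücker coordinates give a unique, oriented-ray parameterization.
        moment = torch.cross(origins, directions, dim=-1)
        return self.mlp(torch.cat([directions, moment], dim=-1))

rays_o = torch.zeros(4, 3)
rays_d = torch.nn.functional.normalize(torch.randn(4, 3), dim=-1)
rgb = LightFieldNet()(rays_o, rays_d)   # one query per ray
```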
no code implementations • 23 Feb 2023 • Weihu Song, Heng Yu, Jianhua Wu
Accurate and fast segmentation of medical images is clinically essential, yet current methods face a trade-off: convolutional neural networks offer fast inference but struggle to learn image contextual features, while transformers perform well but have high hardware requirements.
no code implementations • 21 Feb 2023 • Weihu Song, Heng Yu
Existing studies tend to focus on model modifications and integration for higher accuracy, which improve performance but also carry huge computational costs, resulting in longer detection times.
no code implementations • 16 Nov 2022 • Heng Yu, Koichiro Niinuma, Laszlo A. Jeni
Neural Radiance Fields (NeRF) are compelling techniques for modeling dynamic 3D scenes from 2D image collections.
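For orientation, here is a minimal time-conditioned radiance field query, mapping (x, y, z, t) to (density, color); the encoding and layer sizes are illustrative and not tied to any specific paper's configuration:

```python
# Minimal dynamic-scene radiance field query: space-time point -> sigma, RGB.
import torch
import torch.nn as nn

def positional_encoding(x, n_freqs=6):
    freqs = 2.0 ** torch.arange(n_freqs)
    angles = x[..., None] * freqs           # (..., dims, n_freqs)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.flatten(-2)

class DynamicNeRF(nn.Module):
    def __init__(self, n_freqs=6, hidden=128):
        super().__init__()
        in_dim = 4 * 2 * n_freqs            # (x, y, z, t), sin+cos per freq
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),           # density + RGB
        )

    def forward(self, xyzt):
        out = self.mlp(positional_encoding(xyzt))
        return torch.relu(out[..., :1]), torch.sigmoid(out[..., 1:])

sigma, rgb = DynamicNeRF()(torch.rand(1024, 4))  # random space-time samples
```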
2 code implementations • ACL 2022 • Xiangpeng Wei, Heng Yu, Yue Hu, Rongxiang Weng, Weihua Luo, Jun Xie, Rong Jin
Although data augmentation is widely used to enrich the training data, conventional methods with discrete manipulations fail to generate diverse and faithful training samples.
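To make the contrast concrete: discrete augmentation edits tokens (swap, drop, replace), while a continuous alternative samples new training points around each sentence representation in embedding space. The sampling scheme below is a simple illustration, not the paper's method:

```python
# Sample virtual training points in the continuous embedding space
# around a sentence representation, instead of editing discrete tokens.
import torch

def continuous_augment(sent_emb, n_samples=4, radius=0.1):
    """Draw augmented representations at a fixed radius around the embedding."""
    noise = torch.randn(n_samples, *sent_emb.shape)
    noise = radius * noise / noise.norm(dim=-1, keepdim=True)
    return sent_emb.unsqueeze(0) + noise   # (n_samples, dim) virtual samples

aug = continuous_augment(torch.randn(512))
```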
1 code implementation • 7 Jan 2022 • Heng Yu, Di Fan, Weihu Song
Image segmentation is an important task in the medical imaging field, and many convolutional neural network (CNN)-based methods have been proposed, among which U-Net and its variants show promising performance.
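A deliberately tiny U-Net sketch showing the encoder-decoder structure with a skip connection that these variants build on; real models use more depth and channels:

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc = block(1, 16)
        self.down = nn.MaxPool2d(2)
        self.mid = block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)            # 32 = upsampled + skip channels
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))  # skip connection
        return self.head(d)

logits = TinyUNet()(torch.randn(1, 1, 64, 64))  # (1, n_classes, 64, 64)
```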
no code implementations • 24 Nov 2021 • Ruxin Ding, Jianfeng Ren, Heng Yu, Jiawei Li
To tackle this problem, we propose a method for dynamic texture recognition using PDV hashing and dictionary learning on multi-scale volume local binary pattern (PHD-MVLBP).
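As a hedged sketch of just the building block, a volume local binary pattern compares each voxel against spatio-temporal neighbors to form an integer code; the PDV hashing and dictionary-learning stages of the method are not shown:

```python
# Simplified volume LBP: one bit per neighbor comparison, here the two
# temporal and four spatial neighbors of each interior voxel.
import numpy as np

def volume_lbp(video):
    """video: (T, H, W) grayscale volume -> integer code per interior voxel."""
    c = video[1:-1, 1:-1, 1:-1]
    neighbors = [video[:-2, 1:-1, 1:-1], video[2:, 1:-1, 1:-1],
                 video[1:-1, :-2, 1:-1], video[1:-1, 2:, 1:-1],
                 video[1:-1, 1:-1, :-2], video[1:-1, 1:-1, 2:]]
    bits = [(n >= c).astype(np.uint8) << i for i, n in enumerate(neighbors)]
    return sum(bits)

codes = volume_lbp(np.random.rand(8, 32, 32))
```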
no code implementations • 7 Jul 2021 • Heng Yu, Zijing Dong, Yamin Arefeen, Congyu Liao, Kawin Setsompop, Berkin Bilgic
RAKI can perform database-free MRI reconstruction by training models using only auto-calibration signal (ACS) from each specific scan.
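Conceptually, scan-specific training looks like the following: a small CNN is fit to the fully sampled ACS block of the same scan, then applied to the rest of the undersampled k-space. The shapes, mask, and training target below are simplified assumptions, not RAKI's exact formulation:

```python
# Fit a small CNN on this scan's ACS only -- no external database.
import torch
import torch.nn as nn

acs = torch.randn(1, 2, 32, 64)             # real/imag ACS k-space block
mask = torch.zeros_like(acs)
mask[..., ::2] = 1                          # keep every other line
undersampled = acs * mask

net = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 2, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(200):                         # train on this scan's ACS only
    loss = ((net(undersampled) - acs) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
# The trained net is then applied to the rest of the undersampled k-space.
```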
no code implementations • 2 Apr 2021 • Yamin Arefeen, Onur Beker, Jaejin Cho, Heng Yu, Elfar Adalsteinsson, Berkin Bilgic
Conclusion: SPARK synergizes with physics-based acquisition and reconstruction techniques to improve accelerated MRI by training scan-specific models to estimate and correct reconstruction errors in k-space.
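The error-correction idea can be sketched in the same spirit: within the ACS region, the gap between a base reconstruction and the ground truth defines a residual that a small scan-specific model learns to predict. Details below are simplified assumptions, not SPARK's implementation:

```python
# Learn a k-space residual on the ACS region, then apply the correction.
import torch
import torch.nn as nn

acs_truth = torch.randn(1, 2, 32, 64)       # fully sampled ACS k-space
acs_recon = acs_truth + 0.1 * torch.randn_like(acs_truth)  # base recon

net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 2, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):
    loss = ((net(acs_recon) - (acs_truth - acs_recon)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

corrected = acs_recon + net(acs_recon)      # apply the learned correction
```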
no code implementations • 1 Jan 2021 • Shaohui Kuang, Heng Yu, Weihua Luo, Qiang Wang
Existing approaches either employ an extra encoder to encode information from the TM or concatenate the source sentence and TM sentences as the encoder's input.
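The second of those strategies is easy to picture; the sketch below concatenates the source with retrieved TM matches using a separator token (token names and formatting are illustrative):

```python
# Build a single encoder input from the source and retrieved TM matches.
def build_encoder_input(source, tm_pairs, sep="[SEP]"):
    """tm_pairs: list of (tm_source, tm_target) retrieved matches."""
    parts = [source]
    for tm_src, tm_tgt in tm_pairs:
        parts += [sep, tm_src, sep, tm_tgt]
    return " ".join(parts)

inp = build_encoder_input(
    "how do I reset my password ?",
    [("how to reset the password ?", "comment réinitialiser le mot de passe ?")],
)
```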
no code implementations • Findings of the Association for Computational Linguistics 2020 • Yongchao Deng, Hongfei Yu, Heng Yu, Xiangyu Duan, Weihua Luo
Multi-Domain Neural Machine Translation (NMT) aims at building a single system that performs well on a range of target domains.
no code implementations • EMNLP 2020 • Xiangpeng Wei, Heng Yu, Yue Hu, Rongxiang Weng, Luxi Xing, Weihua Luo
As a sequence-to-sequence generation task, neural machine translation (NMT) naturally contains intrinsic uncertainty, where a single sentence in one language has multiple valid counterparts in the other.
no code implementations • ICLR 2021 • Xiangpeng Wei, Rongxiang Weng, Yue Hu, Luxi Xing, Heng Yu, Weihua Luo
Recent studies have demonstrated the overwhelming advantage of cross-lingual pre-trained models (PTMs), such as multilingual BERT and XLM, on cross-lingual NLP tasks.
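A generic sketch of the kind of cross-lingual contrastive objective such work builds on: embeddings of parallel sentences are pulled together while the other sentences in the batch serve as negatives (an InfoNCE-style loss, shown schematically):

```python
import torch
import torch.nn.functional as F

def contrastive_loss(src_emb, tgt_emb, temperature=0.05):
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    logits = src @ tgt.t() / temperature        # pairwise similarities
    labels = torch.arange(src.size(0))          # i-th source <-> i-th target
    return F.cross_entropy(logits, labels)

loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```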
no code implementations • ACL 2020 • Changfeng Zhu, Heng Yu, Shanbo Cheng, Weihua Luo
However, the traditional multilingual model fails to capture the diversity and specificity of different languages, resulting in inferior performance compared with individual models that are sufficiently trained.
1 code implementation • ACL 2020 • Xiangpeng Wei, Heng Yu, Yue Hu, Yue Zhang, Rongxiang Weng, Weihua Luo
Recent evidence reveals that Neural Machine Translation (NMT) models with deeper neural networks can be more effective but are difficult to train.
no code implementations • 5 Apr 2020 • Shanbo Cheng, Shaohui Kuang, Rongxiang Weng, Heng Yu, Changfeng Zhu, Weihua Luo
Compared with using only limited authentic parallel data as the training corpus, many studies have shown that incorporating synthetic parallel data, generated by back translation (BT) or forward translation (FT, i.e., self-training), into the NMT training process can significantly improve translation quality.
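Schematically, the data-synthesis step looks like this; the model objects and their `translate()` methods are hypothetical placeholders for whatever toolkit is in use:

```python
# Combine authentic pairs with BT and FT synthetic pairs for NMT training.
def synthesize_parallel(authentic_pairs, mono_src, mono_tgt,
                        fwd_model, bwd_model):
    # Back translation: target monolingual text -> synthetic sources.
    bt_pairs = [(bwd_model.translate(t), t) for t in mono_tgt]
    # Forward translation (self-training): source monolingual text ->
    # synthetic targets.
    ft_pairs = [(s, fwd_model.translate(s)) for s in mono_src]
    return authentic_pairs + bt_pairs + ft_pairs
```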
no code implementations • 24 Feb 2020 • Rongxiang Weng, Hao-Ran Wei, Shu-Jian Huang, Heng Yu, Lidong Bing, Weihua Luo, Jia-Jun Chen
The encoder maps the words in the input sentence into a sequence of hidden states, which are then fed into the decoder to generate the output sentence.
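That description maps directly onto a minimal seq2seq forward pass; sizes are illustrative:

```python
import torch
import torch.nn as nn

vocab, dim = 1000, 64
embed = nn.Embedding(vocab, dim)
encoder = nn.GRU(dim, dim, batch_first=True)
decoder = nn.GRU(dim, dim, batch_first=True)
project = nn.Linear(dim, vocab)

src = torch.randint(0, vocab, (1, 7))          # input sentence (token ids)
enc_states, enc_last = encoder(embed(src))     # sequence of hidden states
# enc_states would feed an attention mechanism; here the final state
# conditions the decoder directly.
tgt_in = torch.randint(0, vocab, (1, 5))       # shifted target tokens
dec_states, _ = decoder(embed(tgt_in), enc_last)
logits = project(dec_states)                   # next-token distributions
```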
no code implementations • 4 Dec 2019 • Rongxiang Weng, Heng Yu, Shu-Jian Huang, Shanbo Cheng, Weihua Luo
The standard paradigm for exploiting them includes two steps: first, pre-training a model, e.g., BERT, on large-scale unlabeled monolingual data.
no code implementations • 21 Aug 2019 • Rongxiang Weng, Heng Yu, Shu-Jian Huang, Weihua Luo, Jia-Jun Chen
Then, we design a framework for integrating both source and target sentence-level representations into the NMT model to improve translation quality.
no code implementations • 23 Jun 2019 • Long Zhou, Jiajun Zhang, Cheng-qing Zong, Heng Yu
The encoder-decoder framework has achieved promising progress for many sequence generation tasks, such as neural machine translation and text summarization.
1 code implementation • NAACL 2019 • Kai Song, Yue Zhang, Heng Yu, Weihua Luo, Kun Wang, Min Zhang
Leveraging user-provided translations to constrain NMT has practical significance.
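One hedged illustration of imposing such constraints is a code-switching style construction that splices the required target phrase into the source so the model learns to copy it through; the details below are illustrative, not the paper's exact procedure:

```python
# Replace a source span with the user's target translation before encoding.
def apply_constraint(src_tokens, span, target_phrase):
    """Replace src_tokens[span[0]:span[1]] with the user's translation."""
    start, end = span
    return src_tokens[:start] + target_phrase + src_tokens[end:]

src = "the cat sat on the mat".split()
constrained = apply_constraint(src, (1, 2), ["die", "Katze"])
# -> ['the', 'die', 'Katze', 'sat', 'on', 'the', 'mat']
```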
no code implementations • 20 Oct 2018 • Xin Tang, Shanbo Cheng, Loc Do, Zhiyu Min, Feng Ji, Heng Yu, Ji Zhang, Haiqin Chen
Our approach extends a basic monolingual STS framework to a shared multilingual encoder pretrained on a translation task, in order to incorporate rich-resource language data.
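The shared-encoder STS setup can be sketched as embedding sentences from different languages with one encoder and scoring them by cosine similarity; `encode` below is a placeholder for the real multilingual encoder:

```python
import torch
import torch.nn.functional as F

def sts_score(encode, sent_a, sent_b):
    ea, eb = encode(sent_a), encode(sent_b)
    return F.cosine_similarity(ea, eb, dim=-1)

encode = lambda s: torch.randn(256)        # stand-in for the shared encoder
score = sts_score(encode, "hello world", "bonjour le monde")
```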
no code implementations • ACL 2016 • Chunyang Liu, Yang Liu, Huanbo Luan, Maosong Sun, Heng Yu
We introduce an agreement-based approach to learning parallel lexicons and phrases from non-parallel corpora.