Search Results for author: Jiawei Hu

Found 6 papers, 2 papers with code

I&S-ViT: An Inclusive & Stable Method for Pushing the Limit of Post-Training ViTs Quantization

1 code implementation • 16 Nov 2023 • Yunshan Zhong, Jiawei Hu, Mingbao Lin, Mengzhao Chen, Rongrong Ji

Despite the scalable performance of vision transformers (ViTs), their dense computational costs (training & inference) undermine their position in industrial applications.

Quantization
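
For context, here is a minimal sketch of symmetric uniform post-training quantization in PyTorch. It is a generic PTQ baseline for illustration only, not the I&S-ViT method; the function name uniform_ptq and the 4-bit example setting are assumptions.

    import torch

    def uniform_ptq(weight: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
        # Generic symmetric uniform PTQ (illustrative baseline, not I&S-ViT):
        # pick a per-tensor scale from the max absolute weight, round to the
        # integer grid, then dequantize ("fake quantization").
        qmax = 2 ** (n_bits - 1) - 1                   # e.g. 7 for 4-bit signed
        scale = weight.abs().max() / qmax              # per-tensor scale factor
        q = torch.clamp(torch.round(weight / scale), -qmax - 1, qmax)
        return q * scale                               # dequantized weights

    # Example: quantize a random 768x768 linear weight to 4 bits
    w = torch.randn(768, 768)
    w_q = uniform_ptq(w, n_bits=4)
    print((w - w_q).abs().mean())                      # mean quantization error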

Refining Source Representations with Relation Networks for Neural Machine Translation

no code implementations • COLING 2018 • Wen Zhang, Jiawei Hu, Yang Feng, Qun Liu

Although neural machine translation with the encoder-decoder framework has achieved great success recently, it still suffers from two drawbacks: it forgets distant information, an inherent disadvantage of the recurrent neural network structure, and it disregards relationships between source words during the encoding step.

Machine Translation • Memorization • +2
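
For context, a hedged sketch of a relation-network-style layer that refines RNN encoder states with pairwise interactions, in the spirit the abstract describes. The class name RelationRefiner and the architectural details (pair MLP, mean aggregation, residual connection) are illustrative assumptions, not the paper's exact model.

    import torch
    import torch.nn as nn

    class RelationRefiner(nn.Module):
        # Refines each source state by aggregating an MLP applied to all
        # (state_i, state_j) pairs, injecting pairwise word relationships
        # that a plain RNN encoder tends to miss (illustrative sketch).
        def __init__(self, d_model: int):
            super().__init__()
            self.pair_mlp = nn.Sequential(
                nn.Linear(2 * d_model, d_model), nn.ReLU(),
                nn.Linear(d_model, d_model),
            )

        def forward(self, h: torch.Tensor) -> torch.Tensor:
            # h: (seq_len, d_model) encoder hidden states
            seq_len, d = h.shape
            hi = h.unsqueeze(1).expand(seq_len, seq_len, d)   # state_i
            hj = h.unsqueeze(0).expand(seq_len, seq_len, d)   # state_j
            pairs = torch.cat([hi, hj], dim=-1)               # all (i, j) pairs
            rel = self.pair_mlp(pairs).mean(dim=1)            # aggregate over j
            return h + rel                                    # residual refinement

    # Example: refine a 10-token source sentence encoding
    states = torch.randn(10, 256)
    refined = RelationRefiner(256)(states)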

CASICT Tibetan Word Segmentation System for MLWS2017

1 code implementation • 17 Oct 2017 • Jiawei Hu, Qun Liu

We participated in the MLWS 2017 Tibetan word segmentation task; our system is trained in an unrestricted way, building on a baseline system and 760,000 Tibetan segmented sentences of our own.

Segmentation
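
For context, word segmentation is commonly cast as character-level BMES sequence labeling (B = begin, M = middle, E = end, S = single-character word). The decoder below recovers words from such tags; it is a generic formulation, not necessarily the CASICT system's exact scheme, and segment_from_tags is a hypothetical helper.

    def segment_from_tags(chars, tags):
        # Recover words from character-level BMES tags (generic scheme).
        words, buf = [], []
        for ch, tag in zip(chars, tags):
            buf.append(ch)
            if tag in ("E", "S"):             # word boundary reached
                words.append("".join(buf))
                buf = []
        if buf:                                # flush a dangling partial word
            words.append("".join(buf))
        return words

    # Example with placeholder characters standing in for Tibetan syllables
    print(segment_from_tags(list("abcde"), ["B", "E", "S", "B", "E"]))
    # -> ['ab', 'c', 'de']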

Refining Source Representations with Relation Networks for Neural Machine Translation

no code implementations • 12 Sep 2017 • Wen Zhang, Jiawei Hu, Yang Feng, Qun Liu

Although neural machine translation (NMT) with the encoder-decoder framework has achieved great success in recent times, it still suffers from some drawbacks: RNNs tend to forget old information that is often useful, and the encoder operates on words without considering the relationships between them.

Machine Translation • NMT • +2

Information-Propagation-Enhanced Neural Machine Translation by Relation Model

no code implementations • 6 Sep 2017 • Wen Zhang, Jiawei Hu, Yang Feng, Qun Liu

Even though sequence-to-sequence neural machine translation (NMT) models have achieved state-of-the-art performance in recent years, it is a widespread concern that recurrent neural network (RNN) units struggle to capture long-distance state information, meaning an RNN can hardly extract features with long-term dependencies as the sequence grows longer.

Machine Translation • NMT • +4
