Search Results for author: Hu Liu

Found 8 papers, 2 papers with code

Efficient selective attention LSTM for well log curve synthesis

no code implementations 17 Jul 2023 Yuankai Zhou, Huanyu Li, Hu Liu

Non-core drilling has gradually become the primary exploration method in geological engineering, and well log curves have become increasingly important as the main carriers of geological information.

Kalman Filtering Attention for User Behavior Modeling in CTR Prediction

no code implementations NeurIPS 2020 Hu Liu, Jing Lu, Xiwei Zhao, Sulong Xu, Hao Peng, Yutong Liu, Zehua Zhang, Jian Li, Junsheng Jin, Yongjun Bao, Weipeng Yan

First, conventional attention mechanisms mostly restrict the attention field to a single user's own behaviors, which is unsuitable in e-commerce, where users often pursue new demands that are unrelated to any of their historical behaviors.
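For context, the "conventional attention" this snippet critiques is ordinary target attention: the candidate ad's embedding queries only that one user's behavior embeddings. A minimal sketch, where the scaled dot-product scoring and all names are illustrative assumptions, not the paper's Kalman filtering formulation:

```python
import numpy as np

def behavior_attention(query, behaviors):
    """Conventional target attention over a single user's behaviors.

    query:     (d,)  embedding of the candidate item/ad
    behaviors: (n, d) embeddings of that user's historical behaviors
    Returns a weighted sum of the behaviors as the user-interest vector;
    note the attention field never leaves this one user's history.
    """
    scores = behaviors @ query / np.sqrt(len(query))  # scaled dot-product
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                          # softmax weights
    return weights @ behaviors

rng = np.random.default_rng(0)
interest = behavior_attention(rng.normal(size=4), rng.normal(size=(5, 4)))
```

Because the output is a convex combination of one user's behavior vectors, it cannot represent demands absent from that history, which is the limitation the abstract points out.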

Click-Through Rate Prediction

Category-Specific CNN for Visual-aware CTR Prediction at JD.com

no code implementations 18 Jun 2020 Hu Liu, Jing Lu, Hao Yang, Xiwei Zhao, Sulong Xu, Hao Peng, Zehua Zhang, Wenjie Niu, Xiaokun Zhu, Yongjun Bao, Weipeng Yan

Existing algorithms usually extract visual features with off-the-shelf Convolutional Neural Networks (CNNs) and fuse the visual and non-visual features only at a late stage to produce the final CTR prediction.
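The "late fusion" baseline described here can be sketched as concatenating the two feature groups just before a single prediction layer; all weights and names below are illustrative placeholders, not the paper's model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def late_fusion_ctr(visual_feat, nonvisual_feat, w_v, w_n, b):
    """Late fusion: CNN visual features and non-visual features meet
    only at the final linear + sigmoid layer that outputs the CTR.
    The CNN itself never sees the non-visual (e.g. category) signal.
    """
    fused = np.concatenate([visual_feat, nonvisual_feat])
    return sigmoid(np.concatenate([w_v, w_n]) @ fused + b)

# Zero weights give sigmoid(0) = 0.5, a sanity check of the wiring.
ctr = late_fusion_ctr(np.ones(3), np.ones(2), np.zeros(3), np.zeros(2), 0.0)
```

The design weakness this exposes is exactly what the title hints at: nothing category-specific conditions the visual feature extraction itself.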

Click-Through Rate Prediction

A Deep Neural Framework for Continuous Sign Language Recognition by Iterative Training

1 code implementation IEEE Transactions on Multimedia 2019 Runpeng Cui, Hu Liu, Chang-Shui Zhang

In contrast, our proposed architecture adopts deep convolutional neural networks with stacked temporal fusion layers as the feature extraction module, and bi-directional recurrent neural networks as the sequence learning module.
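The sequence learning module mentioned above, a bi-directional recurrent network over frame-wise CNN features, can be sketched with a toy vanilla RNN; the weight names, shapes, and tanh cell are illustrative assumptions rather than the paper's exact architecture:

```python
import numpy as np

def bi_rnn(features, Wx_f, Wh_f, Wx_b, Wh_b):
    """Toy bi-directional vanilla RNN as a sequence learning module.

    features: (T, d_in) frame-wise features from a CNN front end.
    One recurrence runs left-to-right, one right-to-left, and the
    hidden states are concatenated so every frame sees both contexts.
    """
    T = features.shape[0]
    h_dim = Wh_f.shape[0]
    fwd = np.zeros((T, h_dim))
    bwd = np.zeros((T, h_dim))
    h = np.zeros(h_dim)
    for t in range(T):                      # forward pass
        h = np.tanh(features[t] @ Wx_f + h @ Wh_f)
        fwd[t] = h
    h = np.zeros(h_dim)
    for t in reversed(range(T)):            # backward pass
        h = np.tanh(features[t] @ Wx_b + h @ Wh_b)
        bwd[t] = h
    return np.concatenate([fwd, bwd], axis=1)   # (T, 2 * h_dim)

rng = np.random.default_rng(0)
out = bi_rnn(rng.normal(size=(6, 4)),
             rng.normal(size=(4, 8)), rng.normal(size=(8, 8)),
             rng.normal(size=(4, 8)), rng.normal(size=(8, 8)))
```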

Optical Flow Estimation · Sign Language Recognition

Connectionist Temporal Classification with Maximum Entropy Regularization

1 code implementation NeurIPS 2018 Hu Liu, Sheng Jin, Chang-Shui Zhang

Connectionist Temporal Classification (CTC) is an objective function for end-to-end sequence learning, which adopts dynamic programming algorithms to directly learn the mapping between sequences.
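The dynamic programming the snippet refers to is the standard CTC forward (alpha) recursion over labels interleaved with blanks. A minimal NumPy sketch of that textbook recursion, not of the paper's maximum entropy regularized objective:

```python
import numpy as np

def ctc_forward(probs, labels, blank=0):
    """CTC forward algorithm: P(labels | probs) over all alignments.

    probs:  (T, K) per-frame label probabilities (each row sums to 1)
    labels: target label sequence, without blanks
    """
    T = probs.shape[0]
    # Interleave blanks: l' = [blank, l1, blank, l2, ..., blank]
    ext = [blank]
    for l in labels:
        ext += [l, blank]
    S = len(ext)

    alpha = np.zeros((T, S))
    alpha[0, 0] = probs[0, blank]
    if S > 1:
        alpha[0, 1] = probs[0, ext[1]]
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1, s]
            if s >= 1:
                a += alpha[t - 1, s - 1]
            # The skip transition is allowed only when the current
            # symbol is not blank and differs from the one two back.
            if s >= 2 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1, s - 2]
            alpha[t, s] = a * probs[t, ext[s]]
    return alpha[T - 1, S - 1] + (alpha[T - 1, S - 2] if S > 1 else 0.0)

# Toy check: T=2 frames, alphabet {blank=0, a=1}, target "a".
# Valid alignments are (a,-), (-,a), (a,a):
# 0.4*0.3 + 0.6*0.7 + 0.4*0.7 = 0.82
y = np.array([[0.6, 0.4],
              [0.3, 0.7]])
p = ctc_forward(y, [1])
```

Summing probabilities over many alignments this way is what makes CTC prone to the peaky, overconfident distributions that entropy regularization is meant to counteract.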

Classification · General Classification · +3

Recurrent Convolutional Neural Networks for Continuous Sign Language Recognition by Staged Optimization

no code implementations CVPR 2017 Runpeng Cui, Hu Liu, Chang-Shui Zhang

This work presents a weakly supervised deep neural network framework for vision-based continuous sign language recognition, in which only the ordered gloss labels, without exact temporal locations, are available for each sign-sentence video, and the number of labeled sentences for training is limited.

Sign Language Recognition
