Search Results for author: Mingnan Luo

Found 8 papers, 3 papers with code

Persia: An Open, Hybrid System Scaling Deep Learning-based Recommenders up to 100 Trillion Parameters

1 code implementation • 10 Nov 2021 • Xiangru Lian, Binhang Yuan, XueFeng Zhu, Yulong Wang, Yongjun He, Honghuan Wu, Lei Sun, Haodong Lyu, Chengjun Liu, Xing Dong, Yiqiao Liao, Mingnan Luo, Congfei Zhang, Jingru Xie, Haonan Li, Lei Chen, Renjie Huang, Jianying Lin, Chengchun Shu, Xuezhong Qiu, Zhishan Liu, Dongying Kong, Lei Yuan, Hai Yu, Sen yang, Ce Zhang, Ji Liu

Specifically, in order to ensure both the training efficiency and the training accuracy, we design a novel hybrid training algorithm, where the embedding layer and the dense neural network are handled by different synchronization mechanisms; then we build a system called Persia (short for parallel recommendation training system with hybrid acceleration) to support this hybrid training algorithm.
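The hybrid idea described above — asynchronous updates for the sparse embedding layer, synchronous gradient averaging for the dense network — can be sketched as follows. This is an illustrative toy, not Persia's actual implementation; the function names and the simple SGD updates are assumptions for the sketch.

```python
import numpy as np

def sync_dense_step(dense_params, worker_grads, lr=0.1):
    """Dense part: gradients from all workers are averaged before the
    update (synchronous mechanism)."""
    avg_grad = np.mean(worker_grads, axis=0)
    return dense_params - lr * avg_grad

def async_embedding_step(embedding_table, row, grad, lr=0.1):
    """Embedding part: each worker applies its sparse row update
    immediately, without waiting for the others (asynchronous mechanism)."""
    embedding_table[row] -= lr * grad
    return embedding_table

# Two workers contribute dense gradients; their average drives one step.
dense = np.zeros(4)
dense = sync_dense_step(dense, [np.ones(4), 3 * np.ones(4)])  # avg grad = 2

# Meanwhile an embedding row is updated independently by one worker.
emb = np.zeros((10, 4))
emb = async_embedding_step(emb, row=2, grad=np.ones(4))
```

The split reflects the workload: embedding tables are huge but sparsely touched, so stale updates are tolerable, while the comparatively small dense network benefits from exact synchronous averaging.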

Recommendation Systems

Inner-Imaging Networks: Put Lenses into Convolutional Structure

1 code implementation • 22 Apr 2019 • Yang Hu, Guihua Wen, Mingnan Luo, Dan Dai, Wenming Cao, Zhiwen Yu, Wendy Hall

To deal with these problems, a novel Inner-Imaging architecture is proposed in this paper, which allows relationships between channels to meet the above requirement.

Stochastic Region Pooling: Make Attention More Expressive

no code implementations • 22 Apr 2019 • Mingnan Luo, Guihua Wen, Yang Hu, Dan Dai, Yingxue Xu

Global Average Pooling (GAP) is used by default on the channel-wise attention mechanism to extract channel descriptors.
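GAP, as described above, reduces each channel's spatial map to a single scalar descriptor. A minimal sketch (the function name and the (C, H, W) layout are assumptions for illustration):

```python
import numpy as np

def global_average_pool(x):
    """Global Average Pooling: collapse each channel's H x W feature map
    to one scalar, yielding a channel descriptor vector.

    x: feature maps of shape (C, H, W); returns descriptors of shape (C,).
    """
    return x.mean(axis=(1, 2))

# Example: 3 channels of 4x4 feature maps.
x = np.arange(48, dtype=float).reshape(3, 4, 4)
z = global_average_pool(x)  # one descriptor per channel
```

Because the entire spatial map is averaged into one number, GAP can wash out local structure, which is the limitation that motivates more expressive pooling such as the stochastic region pooling this paper proposes.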

Chinese Herbal Recognition based on Competitive Attentional Fusion of Multi-hierarchies Pyramid Features

no code implementations • 23 Dec 2018 • Yingxue Xu, Guihua Wen, Yang Hu, Mingnan Luo, Dan Dai, Yishan Zhuang

Based on the characteristics of herbal images, we propose competitive attentional fusion pyramid networks to model the features of herbal images; the networks model the relationships among feature maps from different levels and re-weight multi-level channels with a channel-wise attention mechanism.

Metabolize Neural Network

no code implementations • 4 Sep 2018 • Dan Dai, Zhiwen Yu, Yang Hu, Wenming Cao, Mingnan Luo

The significance of the metabolize neural network (MetaNet) in model construction is self-evident.

Competitive Inner-Imaging Squeeze and Excitation for Residual Network

1 code implementation • 24 Jul 2018 • Yang Hu, Guihua Wen, Mingnan Luo, Dan Dai, Jiajiong Ma, Zhiwen Yu

In this work, we propose a competitive squeeze-excitation (SE) mechanism for the residual network.
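A standard SE block, which this work builds its competitive variant on, squeezes each channel with global average pooling, excites through a small two-layer bottleneck ending in a sigmoid, and rescales the channels by the resulting gates. A minimal sketch, assuming a (C, H, W) layout and illustrative random weights (not the paper's competitive mechanism):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def se_block(x, w1, w2):
    """Squeeze-and-Excitation sketch.

    x: feature maps (C, H, W); w1: (C // r, C) and w2: (C, C // r) form the
    bottleneck with reduction ratio r. Returns channel-rescaled features.
    """
    z = x.mean(axis=(1, 2))                    # squeeze: (C,) descriptors
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))  # excite: gates in (0, 1)
    return x * s[:, None, None]                # rescale each channel

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8))   # reduction ratio r = 4
w2 = rng.standard_normal((8, 2))
y = se_block(x, w1, w2)
```

Since every gate lies in (0, 1), the block can only attenuate channels, never amplify them; the competitive variant proposed here changes how the gates are produced, not this rescaling step.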
