Deep Interest with Hierarchical Attention Network for Click-Through Rate Prediction

22 May 2020  ·  Weinan Xu, Hengxu He, Minshi Tan, Yunming Li, Jun Lang, Dongbai Guo

Deep Interest Network (DIN) is a state-of-the-art model that uses an attention mechanism to capture user interests from historical behaviors. User interests intuitively follow a hierarchical pattern: users generally form interests at a higher level of abstraction before narrowing to a lower one. Modeling such an interest hierarchy in an attention network can fundamentally improve the representation of user behaviors. We therefore propose an improvement over DIN that models an arbitrary interest hierarchy: the Deep Interest with Hierarchical Attention Network (DHAN). In this model, a multi-dimensional hierarchical structure is introduced in which the first attention layer attends to individual items, and each subsequent attention layer in the same dimension attends to higher-level hierarchy nodes built on top of the corresponding lower layer. To model multiple hierarchy dimensions, an expanding mechanism is introduced to capture one-to-many hierarchies. This design enables DHAN to assign different importance to different hierarchical abstractions and thus fully capture user interests along different dimensions (e.g., category, price, or brand). To validate our model, we applied a simplified DHAN, using a two-level, one-dimensional hierarchy built on category only, to Click-Through Rate (CTR) prediction on three public datasets. The results show the superiority of DHAN, with a significant AUC uplift of 12% to 21% over DIN. We also compare DHAN with another state-of-the-art model, the Deep Interest Evolution Network (DIEN), which models temporal interest; the simplified DHAN achieves a slight AUC uplift of 1.0% to 1.7% over DIEN. A potential direction for future work is to combine DHAN and DIEN to model both temporal and hierarchical interests.
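To make the hierarchical attention concrete, below is a minimal PyTorch sketch of the simplified two-level, category-only variant described in the abstract: a DIN-style attention layer pools individual items within each category into a higher-level node, and a second attention layer attends over those category nodes. All names here (AttentionUnit, TwoLevelHierarchicalAttention) and the exact scoring MLP are hypothetical assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionUnit(nn.Module):
    """DIN-style attention: scores each behavior against the candidate ad."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        # The scoring MLP sees the behavior, the candidate, and their
        # element-wise product (a common DIN-style interaction feature).
        self.mlp = nn.Sequential(
            nn.Linear(dim * 3, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, behaviors, candidate, mask=None):
        # behaviors: (B, T, D), candidate: (B, D), mask: (B, T) bool or None
        cand = candidate.unsqueeze(1).expand_as(behaviors)
        scores = self.mlp(
            torch.cat([behaviors, cand, behaviors * cand], dim=-1)
        ).squeeze(-1)                                        # (B, T)
        if mask is not None:
            scores = scores.masked_fill(~mask, float('-inf'))
        # nan_to_num zeroes out groups where every position is masked.
        weights = torch.nan_to_num(F.softmax(scores, dim=1))
        return torch.einsum('bt,btd->bd', weights, behaviors)

class TwoLevelHierarchicalAttention(nn.Module):
    """Item-level attention within each category, then attention over categories."""
    def __init__(self, dim, num_categories):
        super().__init__()
        self.num_categories = num_categories
        self.item_attn = AttentionUnit(dim)  # layer 1: individual items
        self.cat_attn = AttentionUnit(dim)   # layer 2: category-level nodes

    def forward(self, item_emb, cat_ids, candidate):
        # item_emb: (B, T, D), cat_ids: (B, T) int, candidate: (B, D)
        cat_nodes = []
        for c in range(self.num_categories):
            in_cat = cat_ids == c                          # (B, T)
            # Pool the behaviors of category c into one higher-level node.
            cat_nodes.append(self.item_attn(item_emb, candidate, mask=in_cat))
        cat_nodes = torch.stack(cat_nodes, dim=1)          # (B, C, D)
        # Attend over category nodes to weigh higher-level abstractions.
        return self.cat_attn(cat_nodes, candidate)         # (B, D)
```

In a full CTR model, the returned interest vector would be concatenated with the candidate-ad and user-profile embeddings and fed to the final prediction MLP, as in DIN. The paper's expanding mechanism would presumably replicate this per-dimension structure (e.g., for price or brand) and fuse the per-dimension outputs.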
