no code implementations • COLING 2022 • Siyu Wang, Jianhui Jiang, Yao Huang, Yin Wang
However, we observed that most keyphrases are composed of important words (seed words) from the source text; if these words can be identified accurately and copied to create more keyphrases, the model's performance might improve.
no code implementations • 17 Apr 2023 • Siyu Wang, Xiaocong Chen, Quan Z. Sheng, Yihong Zhang, Lina Yao
This paper introduces the Causal Disentangled Variational Auto-Encoder (CaD-VAE), a novel approach for learning causal disentangled representations from interaction data in recommender systems.
no code implementations • 17 Apr 2023 • Siyu Wang, Xiaocong Chen, Dietmar Jannach, Lina Yao
Reinforcement learning-based recommender systems have recently gained popularity.
no code implementations • 16 Feb 2023 • Shiwei Zhang, Lansong Diao, Siyu Wang, Zongyan Cao, Yiliang Gu, Chang Si, Ziji Shi, Zhen Zheng, Chuan Wu, Wei Lin
We present Rhino, a system for accelerating tensor programs with automatic parallelization on an AI platform for real production environments.
no code implementations • 13 Feb 2023 • Shiwei Zhang, Xiaodong Yi, Lansong Diao, Chuan Wu, Siyu Wang, Wei Lin
This paper presents TAG, an automatic system that derives an optimized DNN training graph and its deployment onto any device topology, for expedited training in device- and topology-heterogeneous ML clusters.
no code implementations • 17 Sep 2022 • Xiaocong Chen, Siyu Wang, Lina Yao, Lianyong Qi, Yong Li
Balancing exploration and exploitation is more challenging in DRL-based RS, where the RS agent needs to deeply explore informative trajectories and exploit them efficiently in the context of recommender systems.
no code implementations • 10 Aug 2022 • Siyu Wang, Xiaocong Chen, Lina Yao, Sally Cripps, Julian McAuley
Recent advances in recommender systems have demonstrated the potential of Reinforcement Learning (RL) to handle the dynamic evolution process between users and recommender systems.
1 code implementation • 17 Jun 2022 • Siyu Wang, Jianfei Chen, Chongxuan Li, Jun Zhu, Bo Zhang
In this work, we propose Integer-only Discrete Flows (IODF), an efficient neural compressor with integer-only arithmetic.
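The exact invertibility that makes integer-only flows work can be illustrated with an additive coupling layer that shifts half of an integer vector by an integer-valued function of the other half. This is a minimal sketch of the general integer-coupling idea, not the paper's actual IODF architecture; `int_shift` stands in for a learned integer-valued network.

```python
import numpy as np

def int_shift(x_cond):
    # Stand-in integer-valued "network": any deterministic map to integers works.
    return (3 * x_cond + 1) // 2

def coupling_forward(x):
    # Split the integer vector; shift the second half by an integer
    # function of the first half. All arithmetic stays in int64.
    x1, x2 = x[: len(x) // 2], x[len(x) // 2 :]
    return np.concatenate([x1, x2 + int_shift(x1)])

def coupling_inverse(y):
    # Inversion subtracts the same integer shift, so the round trip is
    # exact -- no floating-point rounding error, which is what makes
    # such layers usable for lossless compression.
    y1, y2 = y[: len(y) // 2], y[len(y) // 2 :]
    return np.concatenate([y1, y2 - int_shift(y1)])

x = np.array([4, -2, 7, 0], dtype=np.int64)
assert np.array_equal(coupling_inverse(coupling_forward(x)), x)
```

Because forward and inverse use only integer addition and subtraction, encoding and decoding reproduce the data bit-exactly on any hardware.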
no code implementations • 1 Apr 2022 • Siyu Wang, Xiaocong Chen, Lina Yao
Recent advances have shown that reinforcement learning's ability to handle dynamic processes can be applied effectively to interactive recommendation.
no code implementations • 2 Dec 2021 • Siyu Wang, Yuanjiang Cao, Xiaocong Chen, Lina Yao, Xianzhi Wang, Quan Z. Sheng
Finally, we study the attack strength and frequency of adversarial examples and evaluate our model on standard datasets with multiple crafting methods.
no code implementations • 8 Jul 2020 • Siyu Wang, Yi Rong, Shiqing Fan, Zhen Zheng, Lansong Diao, Guoping Long, Jun Yang, Xiaoyong Liu, Wei Lin
The last decade has witnessed growth in the computational requirements for training deep neural networks.
no code implementations • 1 Jun 2020 • Chander Chandak, Zeynab Raeesy, Ariya Rastrow, Yuzong Liu, Xiangyang Huang, Siyu Wang, Dong Kwon Joo, Roland Maas
A common approach to solve multilingual speech recognition is to run multiple monolingual ASR systems in parallel and rely on a language identification (LID) component that detects the input language.
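That run-in-parallel-and-gate-by-LID approach can be sketched as follows; the language codes, the `lid` classifier, and the per-language recognizers are all illustrative stand-ins, not any specific production system.

```python
# Hypothetical LID-gated routing over parallel monolingual recognizers.
def lid(audio):
    # Stand-in language identifier: here we just read a tag from the input;
    # a real LID component would classify the audio signal itself.
    return audio["lang"]

RECOGNIZERS = {
    "en": lambda audio: "english transcript",
    "de": lambda audio: "german transcript",
}

def transcribe(audio):
    # Run LID first, then dispatch to the matching monolingual ASR system.
    lang = lid(audio)
    if lang not in RECOGNIZERS:
        raise ValueError(f"unsupported language: {lang}")
    return RECOGNIZERS[lang](audio)

print(transcribe({"lang": "en"}))  # -> english transcript
```

The weakness the entry alludes to is visible even in this toy: a wrong LID decision routes the audio to the wrong recognizer, and every supported language costs a full extra ASR system.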
1 code implementation • CVPR 2019 • Yucheng Shi, Siyu Wang, Yahong Han
On the one hand, existing iterative attacks add noises monotonically along the direction of gradient ascent, resulting in a lack of diversity and adaptability of the generated iterative trajectories.
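The monotone behaviour being criticised is exactly what a basic iterative sign-gradient attack does: every step moves in the gradient-ascent direction of the loss, so trajectories lack diversity. A toy numpy version on a scalar squared-error loss (illustrative only, not the paper's attack or a real model):

```python
import numpy as np

def loss_grad(x, target):
    # Gradient of a toy squared-error loss; stands in for a model's
    # loss gradient w.r.t. the input.
    return 2 * (x - target)

def iterative_attack(x, target, step=0.1, n_steps=10):
    # Basic iterative attack: each step adds noise monotonically along
    # sign(grad), i.e. straight gradient ascent with no adaptivity.
    x_adv = x.copy()
    for _ in range(n_steps):
        x_adv = x_adv + step * np.sign(loss_grad(x_adv, target))
    return x_adv

# Starting at 0 with target -1, every coordinate climbs by +0.1 per step,
# reaching 1.0 after 10 steps -- one fixed direction for the whole trajectory.
adv = iterative_attack(np.zeros(3), target=np.array([-1.0, -1.0, -1.0]))
```

Diversifying these trajectories (rather than always following one monotone direction) is the gap this entry's method targets.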
1 code implementation • CVPR 2018 • Juzheng Li, Hang Su, Jun Zhu, Siyu Wang, Bo Zhang
The machine thus acts as an instructor, extracting essay-level contradictions as the Guidance.
no code implementations • ACL 2017 • Minghui Qiu, Feng-Lin Li, Siyu Wang, Xing Gao, Yan Chen, Weipeng Zhao, Haiqing Chen, Jun Huang, Wei Chu
We propose AliMe Chat, an open-domain chatbot engine that integrates the joint results of Information Retrieval (IR) and Sequence to Sequence (Seq2Seq) based generation models.
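A hybrid pipeline of this kind can be sketched as: retrieve candidate answers, score them, and fall back to generation when no candidate clears a confidence threshold. Everything below (the tiny knowledge base, the overlap scorer, the generator) is a hypothetical stand-in, not AliMe Chat's actual modules.

```python
# Hypothetical IR + Seq2Seq hybrid in the spirit of retrieve-then-generate
# chat engines: answer from the knowledge base when confident, otherwise
# fall back to a generative model.
def retrieve(query):
    # Stand-in IR module returning (answer, score) pairs; a real system
    # would use lexical/semantic matching over a large QA knowledge base.
    kb = {"hello": "hi there!", "price": "it costs $10"}
    return [(ans, 1.0 if key in query else 0.0) for key, ans in kb.items()]

def generate(query):
    # Stand-in Seq2Seq generator.
    return f"generated reply to: {query}"

def chat(query, threshold=0.5):
    # Pick the best-scoring retrieved answer; generate if none is confident.
    best_answer, best_score = max(retrieve(query), key=lambda c: c[1])
    return best_answer if best_score >= threshold else generate(query)

print(chat("what is the price?"))  # -> it costs $10
print(chat("tell me a joke"))      # -> generated reply to: tell me a joke
```

The threshold is the key design knob: it trades the precision of curated retrieved answers against the coverage of open-ended generation.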