1 code implementation • ACL 2022 • Jinyu Guo, Kai Shuang, Jijie Li, Zihan Wang, Yixuan Liu
However, no matter how the dialogue history is used, each existing model relies on a single, fixed dialogue history throughout the entire state tracking process, regardless of which slot is being updated.
no code implementations • 24 Sep 2021 • Lei Shi, Kai Shuang, Shijie Geng, Peng Gao, Zuohui Fu, Gerard de Melo, Yunpeng Chen, Sen Su
To overcome these issues, we propose unbiased Dense Contrastive Visual-Linguistic Pretraining (DCVLP), which replaces the region regression and classification with cross-modality region contrastive learning that requires no annotations.
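As a rough illustration of what a cross-modality region contrastive objective can look like, here is a minimal InfoNCE-style sketch over region features, where `query` regions come from one view and `key` regions from a second (e.g. augmented) view; the names, shapes, and loss form are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def region_contrastive_loss(query, key, temperature=0.07):
    """InfoNCE-style loss over visual region features.

    query, key: (num_regions, dim) features of the same regions under two
    different views. Each region's positive is its counterpart in the other
    view; all other regions in the batch act as negatives. No region-level
    annotations are needed, only the pairing between views.
    """
    q = F.normalize(query, dim=-1)
    k = F.normalize(key, dim=-1)
    logits = q @ k.t() / temperature                     # (N, N) similarities
    targets = torch.arange(q.size(0), device=q.device)   # diagonal = positives
    return F.cross_entropy(logits, targets)

# Illustrative usage with random features standing in for region embeddings.
q = torch.randn(36, 256)
k = q + 0.1 * torch.randn(36, 256)  # a perturbed "second view"
loss = region_contrastive_loss(q, k)
```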
no code implementations • ACL 2021 • Jinyu Guo, Kai Shuang, Jijie Li, Zihan Wang
However, the overwhelming majority of the slots in each turn should simply inherit the slot values from the previous turn.
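A hedged sketch of the inherit-or-update idea this points at: make a lightweight per-slot decision so most slots simply carry over their previous value and only the flagged slots are re-predicted. The gating and prediction functions below are illustrative placeholders, not the paper's actual architecture.

```python
from typing import Callable, Dict

def track_state(prev_state: Dict[str, str],
                turn_utterance: str,
                should_update: Callable[[str, str], bool],
                predict_value: Callable[[str, str], str]) -> Dict[str, str]:
    """Re-predict only the slots flagged by `should_update`;
    every other slot inherits its value from the previous turn."""
    new_state = dict(prev_state)  # inherit everything by default
    for slot in prev_state:
        if should_update(slot, turn_utterance):
            new_state[slot] = predict_value(slot, turn_utterance)
    return new_state

# Toy example: only 'hotel-area' is judged relevant to this turn.
prev = {"hotel-area": "north", "hotel-price": "cheap"}
state = track_state(
    prev, "actually make it the south side",
    should_update=lambda slot, utt: slot == "hotel-area",
    predict_value=lambda slot, utt: "south",
)
```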
1 code implementation • NeurIPS 2020 • Tao Zhuang, Zhixuan Zhang, Yuheng Huang, Xiaoyi Zeng, Kai Shuang, Xiang Li
Experimentally, we show that structured pruning with the polarization regularizer achieves much better results than with the L1 regularizer.
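To make the comparison concrete: an L1 penalty t·||γ||₁ on scale factors pushes every factor toward zero, while a polarization-style term of the form t·||γ||₁ − ||γ − γ̄·1||₁ additionally rewards factors for moving away from their mean, so they separate into a near-zero (prunable) group and a clearly non-zero (kept) group. The sketch below is an illustrative implementation of that idea on, e.g., batch-norm scale factors, not the authors' released code.

```python
import torch

def l1_regularizer(gamma: torch.Tensor, t: float = 1.0) -> torch.Tensor:
    # Standard L1 penalty: drives all scale factors toward zero.
    return t * gamma.abs().sum()

def polarization_regularizer(gamma: torch.Tensor, t: float = 1.0) -> torch.Tensor:
    # Polarization penalty: t*||γ||₁ − ||γ − mean(γ)||₁.
    # The subtracted term rewards deviation from the mean, so factors
    # split ("polarize") into near-zero ones, which can be pruned,
    # and clearly non-zero ones, which are kept.
    return t * gamma.abs().sum() - (gamma - gamma.mean()).abs().sum()

gamma = torch.randn(64, requires_grad=True)  # e.g. BN scale factors
loss = polarization_regularizer(gamma, t=1.2)
loss.backward()
```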
no code implementations • 18 Aug 2020 • Hao Guo, Xintao Ren, Rongrong Wang, Zhun Cai, Kai Shuang, Yue Sun
In this paper, we propose a model named HUIHEN (Hierarchical User Intention-Habit Extract Network) that leverages users' behavior information in a mobile banking app.
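The "hierarchical" in the name suggests encoding behavior at more than one granularity. Purely for exposition, here is a two-level sketch in which a lower GRU summarizes each session of in-app actions and an upper GRU summarizes the sequence of session summaries; the two-level structure and all module names are assumptions, not HUIHEN's actual design.

```python
import torch
import torch.nn as nn

class HierarchicalBehaviorEncoder(nn.Module):
    """Illustrative two-level encoder: a session-level GRU summarizes
    each session of actions; a user-level GRU summarizes the sequence
    of session summaries into a single user representation."""
    def __init__(self, action_dim: int, hidden: int = 64):
        super().__init__()
        self.session_gru = nn.GRU(action_dim, hidden, batch_first=True)
        self.user_gru = nn.GRU(hidden, hidden, batch_first=True)

    def forward(self, sessions: torch.Tensor) -> torch.Tensor:
        # sessions: (batch, num_sessions, actions_per_session, action_dim)
        b, s, a, d = sessions.shape
        _, h_sess = self.session_gru(sessions.view(b * s, a, d))
        session_repr = h_sess[-1].view(b, s, -1)  # (batch, sessions, hidden)
        _, h_user = self.user_gru(session_repr)
        return h_user[-1]                         # (batch, hidden)

enc = HierarchicalBehaviorEncoder(action_dim=16)
user_vec = enc(torch.randn(8, 5, 20, 16))  # 8 users, 5 sessions of 20 actions
```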
no code implementations • 26 Jul 2020 • Lei Shi, Kai Shuang, Shijie Geng, Peng Su, Zhengkai Jiang, Peng Gao, Zuohui Fu, Gerard de Melo, Sen Su
We evaluate CVLP on several downstream tasks, including VQA, GQA, and NLVR2, to validate the superiority of contrastive learning for multi-modality representation learning.
no code implementations • 3 Jan 2020 • Lei Shi, Shijie Geng, Kai Shuang, Chiori Hori, Songxiang Liu, Peng Gao, Sen Su
To address this issue for the intermediate layers, we propose an efficient Quaternion Block Network (QBN) that learns interactions not only in the last layer but in all intermediate layers simultaneously.
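Quaternion networks build on the Hamilton product, in which every output component mixes all four input components of each quaternion; this is the core operation such blocks exploit for compact feature interaction. The snippet below shows only that standard product, not the QBN architecture itself.

```python
import torch

def hamilton_product(q: torch.Tensor, p: torch.Tensor) -> torch.Tensor:
    """Hamilton product of quaternions q = (r, x, y, z) and p = (r', x', y', z'),
    stored in the last dimension. Each output component mixes all four
    components of both inputs."""
    r1, x1, y1, z1 = q.unbind(-1)
    r2, x2, y2, z2 = p.unbind(-1)
    return torch.stack([
        r1 * r2 - x1 * x2 - y1 * y2 - z1 * z2,   # real part
        r1 * x2 + x1 * r2 + y1 * z2 - z1 * y2,   # i
        r1 * y2 - x1 * z2 + y1 * r2 + z1 * x2,   # j
        r1 * z2 + x1 * y2 - y1 * x2 + z1 * r2,   # k
    ], dim=-1)

q = torch.randn(10, 4)
p = torch.randn(10, 4)
out = hamilton_product(q, p)  # (10, 4)
```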
no code implementations • 25 Jul 2019 • Rui Li, Kai Shuang, Mengyu Gu, Sen Su
Because the adaptive noise improves as training progresses, its negative effects can be weakened and even turned into a positive effect that further improves the expressiveness of the main-branch RNN.
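As a toy illustration of noise whose scale adapts during training, one can make the standard deviation of noise injected into the hidden states a learnable parameter, so gradients tune the perturbation rather than leaving it fixed. This is a sketch of the general idea only, not the paper's actual noise scheme or architecture.

```python
import torch
import torch.nn as nn

class NoisyGRU(nn.Module):
    """GRU whose hidden states receive Gaussian noise with a learnable,
    hence training-adaptive, standard deviation. Gradients can shrink
    or grow the noise scale as training proceeds."""
    def __init__(self, input_dim: int, hidden: int):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden, batch_first=True)
        self.log_sigma = nn.Parameter(torch.tensor(-2.0))  # learnable noise scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.gru(x)
        if self.training:  # inject noise only during training
            out = out + torch.exp(self.log_sigma) * torch.randn_like(out)
        return out

model = NoisyGRU(input_dim=32, hidden=64)
y = model(torch.randn(4, 10, 32))
```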