no code implementations • NAACL 2022 • Jiangang Bai, Yujing Wang, Hong Sun, Ruonan Wu, Tianmeng Yang, Pengfei Tang, Defu Cao, Mingliang Zhang, Yunhai Tong, Yaming Yang, Jing Bai, Ruofei Zhang, Hao Sun, Wei Shen
Large-scale pre-trained language models have attracted extensive attention in the research community and shown promising results on various natural language processing tasks.
no code implementations • 16 Jun 2023 • Dongshuo Yin, Xueting Han, Bin Li, Hao Feng, Jing Bai
Despite their success, existing PETL methods in CV can be computationally expensive and incur large memory and time costs during training, which limits low-resource users from conducting research and applications on large models.
1 code implementation • 13 Jun 2023 • Haozhen Zhang, Xueting Han, Xi Xiao, Jing Bai
To address these issues, we propose a Time-aware Graph Structure Learning (TGSL) approach via sequence prediction on temporal graphs, which learns better graph structures for downstream tasks through adding potential temporal edges.
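The excerpt only hints at how the temporal edges are added, so here is a minimal sketch of that step, assuming candidate edges are scored by endpoint-embedding similarity; the function name, the dot-product score, and `k` are illustrative assumptions, while TGSL itself learns the scoring jointly with a sequence-prediction objective.

```python
import torch

def add_temporal_edges(node_emb: torch.Tensor,
                       candidates: torch.Tensor,
                       k: int) -> torch.Tensor:
    """node_emb: (N, d) node embeddings; candidates: (M, 2) long tensor of
    candidate temporal edges. Returns the k highest-scoring candidates."""
    src, dst = candidates[:, 0], candidates[:, 1]
    # Score each candidate edge by endpoint similarity; the paper learns
    # this score jointly with sequence prediction on the temporal graph.
    scores = (node_emb[src] * node_emb[dst]).sum(dim=-1)
    topk = scores.topk(min(k, scores.numel())).indices
    return candidates[topk]  # potential edges to add for the downstream task
```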
no code implementations • 19 Apr 2023 • Shengrui Li, Xueting Han, Jing Bai
We also provide a theoretical justification, via generalization bounds, that delta tuning can improve the generalization ability of GNNs.
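For readers unfamiliar with delta tuning, the sketch below shows the general recipe on a toy GNN layer: freeze the pre-trained weights and train only a small residual adapter. The layer definition, adapter shape, and bottleneck size are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """Toy graph convolution: aggregate neighbors via the adjacency matrix."""
    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x, adj):
        return torch.relu(adj @ self.lin(x))

class Adapter(nn.Module):
    """Small residual bottleneck; its parameters are the only ones tuned."""
    def __init__(self, dim: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))  # residual "delta"

layer, adapter = GCNLayer(64), Adapter(64)
for p in layer.parameters():      # freeze the pre-trained GNN weights
    p.requires_grad = False
# during fine-tuning, only adapter.parameters() go to the optimizer
```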
no code implementations • Remote Sensing 2022 • Jing Bai, Jiawei Lu, Zhu Xiao, Zheng Chen, Licheng Jiao
Nowadays, HSI classification can reach high classification accuracy when given sufficient labeled samples as the training set.
no code implementations • 26 Jan 2022 • Sinan Tan, Hui Xue, Qiyu Ren, Huaping Liu, Jing Bai
Our framework is based on an innovative evolution algorithm, which is stable and suitable for multi-dataset scenarios.
1 code implementation • 25 Dec 2021 • Jiayan Guo, Yaming Yang, Xiangchen Song, Yuan Zhang, Yujing Wang, Jing Bai, Yan Zhang
Specifically, we propose a Multi-granularity Intent Heterogeneous Session Graph, which captures the interactions between intent units of different granularities and relieves the burden of long-range dependencies.
no code implementations • ICLR 2022 • Jieyu Zhang, Bohan Wang, Xiangchen Song, Yujing Wang, Yaming Yang, Jing Bai, Alexander Ratner
Creating labeled training sets has become one of the major roadblocks in machine learning.
no code implementations • 3 Oct 2021 • Tianmeng Yang, Yujing Wang, Zhihan Yue, Yaming Yang, Yunhai Tong, Jing Bai
On the one hand, multi-hop-based approaches do not explicitly distinguish relevant nodes from a large number of multi-hop neighborhoods, leading to a severe over-smoothing problem.
no code implementations • 5 Sep 2021 • Yankai Chen, Yaming Yang, Yujing Wang, Jing Bai, Xiangchen Song, Irwin King
However, simply integrating KGs into current KG-based RS models does not necessarily guarantee improved recommendation performance and may even weaken the holistic model capability.
Tasks: Click-Through Rate Prediction, Knowledge-Aware Recommendation, +1
1 code implementation • 19 Jul 2021 • Xueting Han, Zhenhuan Huang, Bang An, Jing Bai
We design an adaptive auxiliary loss weighting model to learn the weights of auxiliary tasks by quantifying the consistency between auxiliary tasks and the target task.
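The paper learns the auxiliary-task weights with a dedicated weighting model; as a hedged illustration, the sketch below quantifies consistency with gradient cosine similarity between each auxiliary loss and the target loss on shared parameters, which is one common proxy, not necessarily the paper's exact measure.

```python
import torch

def consistency_weights(target_loss, aux_losses, shared_params):
    """Weight each auxiliary loss by how well its gradient agrees with the
    target-task gradient on the shared parameters."""
    g_t = torch.autograd.grad(target_loss, shared_params, retain_graph=True)
    g_t = torch.cat([g.flatten() for g in g_t])
    weights = []
    for aux in aux_losses:
        g_a = torch.autograd.grad(aux, shared_params, retain_graph=True)
        g_a = torch.cat([g.flatten() for g in g_a])
        cos = torch.nn.functional.cosine_similarity(g_t, g_a, dim=0)
        weights.append(torch.clamp(cos, min=0.0))  # ignore conflicting tasks
    return weights

# total = target_loss + sum(w * l for w, l in zip(weights, aux_losses))
```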
2 code implementations • NeurIPS 2020 • Defu Cao, Yujing Wang, Juanyong Duan, Ce Zhang, Xia Zhu, Congrui Huang, Yunhai Tong, Bixiong Xu, Jing Bai, Jie Tong, Qi Zhang
In this paper, we propose Spectral Temporal Graph Neural Network (StemGNN) to further improve the accuracy of multivariate time-series forecasting.
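As a rough, simplified sketch of the spectral-temporal idea (not the full StemGNN architecture, which also uses a Discrete Fourier Transform, gated convolutions, and learned graph structure), the block below applies a Graph Fourier Transform, models each spectral component over time with a 1D convolution, and transforms back.

```python
import torch
import torch.nn as nn

class SpectralTemporalBlock(nn.Module):
    def __init__(self, kernel: int = 3):
        super().__init__()
        # simple per-component temporal filter; StemGNN uses richer modules
        self.temporal = nn.Conv1d(1, 1, kernel, padding=kernel // 2)

    def forward(self, x, laplacian):
        """x: (N, T) series for N nodes; laplacian: (N, N) graph Laplacian."""
        eigvals, eigvecs = torch.linalg.eigh(laplacian)        # GFT basis
        x_spec = eigvecs.T @ x                                 # to spectral domain
        y_spec = self.temporal(x_spec.unsqueeze(1)).squeeze(1) # conv over time
        return eigvecs @ y_spec                                # inverse GFT
```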
1 code implementation • EACL 2021 • Jiangang Bai, Yujing Wang, Yiren Chen, Yaming Yang, Jing Bai, Jing Yu, Yunhai Tong
Pre-trained language models like BERT achieve superior performance on various NLP tasks without explicit consideration of syntactic information.
2 code implementations • 20 Feb 2021 • Yujing Wang, Yaming Yang, Jiangang Bai, Mingliang Zhang, Jing Bai, Jing Yu, Ce Zhang, Gao Huang, Yunhai Tong
In this paper, we propose a novel and generic mechanism based on evolving attention to improve the performance of transformers.
no code implementations • 1 Jan 2021 • Yujing Wang, Yaming Yang, Jiangang Bai, Mingliang Zhang, Jing Bai, Jing Yu, Ce Zhang, Yunhai Tong
Instead, we model their dependencies via a chain of prediction models that take previous attention maps as input to predict the attention maps of a new layer through convolutional neural networks.
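A hedged sketch of that chain for one layer follows: a small CNN predicts the current layer's attention map from the previous layer's maps, and the prediction is mixed with the standard dot-product scores. The mixing weight `alpha` and the single 3x3 convolution are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class EvolvingAttention(nn.Module):
    def __init__(self, num_heads: int, alpha: float = 0.5):
        super().__init__()
        # convolve across heads, treating each (seq, seq) map as an image
        self.conv = nn.Conv2d(num_heads, num_heads, kernel_size=3, padding=1)
        self.alpha = alpha

    def forward(self, scores, prev_maps):
        """scores, prev_maps: (batch, heads, seq, seq) attention logits/maps."""
        evolved = self.conv(prev_maps)                 # predict this layer's map
        mixed = self.alpha * scores + (1 - self.alpha) * evolved
        return torch.softmax(mixed, dim=-1)            # new attention maps
```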
no code implementations • 14 Oct 2020 • Yiren Chen, Yaming Yang, Hong Sun, Yujing Wang, Yu Xu, Wei Shen, Rong Zhou, Yunhai Tong, Jing Bai, Ruofei Zhang
We add the model designed by AutoADR as a sub-model into the production Ad Relevance model.
2 code implementations • 4 Sep 2020 • Hang Zhao, Yujing Wang, Juanyong Duan, Congrui Huang, Defu Cao, Yunhai Tong, Bixiong Xu, Jing Bai, Jie Tong, Qi Zhang
Anomaly detection on multivariate time-series is of great importance in both data mining research and industrial applications.
no code implementations • COLING 2020 • Yihuan Mao, Yujing Wang, Chufan Wu, Chen Zhang, Yang Wang, Yaming Yang, Quanlu Zhang, Yunhai Tong, Jing Bai
BERT is a cutting-edge language representation model pre-trained on a large corpus, which achieves superior performance on various natural language understanding tasks.
no code implementations • 23 Dec 2019 • Yujing Wang, Yaming Yang, Yiren Chen, Jing Bai, Ce Zhang, Guinan Su, Xiaoyu Kou, Yunhai Tong, Mao Yang, Lidong Zhou
Learning text representations is crucial for text classification and other language-related tasks.
no code implementations • 21 Nov 2019 • Bitan Hou, Yujing Wang, Ming Zeng, Shan Jiang, Ole J. Mengshoel, Yunhai Tong, Jing Bai
For these applications, graph embedding is crucial as it provides vector representations of the graph.
no code implementations • 17 May 2013 • Fu-qiang Chen, Yan Wu, Guo-dong Zhao, Jun-ming Zhang, Ming Zhu, Jing Bai
An auto-encoder is a special kind of neural network based on reconstruction.
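For reference, a minimal reconstruction-based auto-encoder looks like the sketch below; the layer sizes are arbitrary illustrative choices.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, dim: int = 784, hidden: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
x = torch.rand(8, 784)
loss = nn.functional.mse_loss(model(x), x)  # reconstruction objective
```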