no code implementations • 6 Jan 2025 • Shengming Zhang, Le Zhang, Jingbo Zhou, Hui Xiong
These findings substantiate the efficacy of CHAT in addressing the complex problem of link prediction in heterogeneous networks.
no code implementations • 25 Jun 2023 • Shengming Zhang, Yizhou Sun
Drug-target interaction (DTI) prediction, which aims to predict whether a drug will bind to a target, has received wide attention recently, with the goal of automating and accelerating the costly process of drug design.
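As a rough illustration, DTI prediction can be cast as binary classification over (drug, target) pairs. The sketch below is hypothetical; the encoders, feature sizes, and scoring head are stand-ins, not the paper's model:

```python
import torch
import torch.nn as nn

drug_enc = nn.Linear(128, 64)     # stand-in for a molecular fingerprint encoder
target_enc = nn.Linear(256, 64)   # stand-in for a protein sequence encoder
scorer = nn.Linear(128, 1)        # scores the concatenated pair representation

drug = torch.randn(32, 128)       # e.g. precomputed drug features (assumed)
target = torch.randn(32, 256)     # e.g. pooled protein features (assumed)
pair = torch.cat([drug_enc(drug), target_enc(target)], dim=-1)
prob_bind = torch.sigmoid(scorer(pair)).squeeze(-1)   # P(drug binds target)
```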
1 code implementation • 9 Nov 2022 • Jie Wu, Ying Peng, Shengming Zhang, Weigang Qi, Jian Zhang
MVLT is trained in two stages: in the first stage, we design a STR-tailored pretraining method based on a masking strategy; in the second stage, we fine-tune our model and adopt an iterative correction method to improve the performance.
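To make the two-stage schedule concrete, here is a minimal PyTorch sketch; the module structure, mask ratio, and correction-loop count are illustrative assumptions, not the authors' actual design:

```python
import torch
import torch.nn as nn

class ToyMVLT(nn.Module):
    """Illustrative stand-in for a masked vision-language transformer."""
    def __init__(self, vocab=100, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
        self.head = nn.Linear(dim, vocab)

    def forward(self, tokens):
        return self.head(self.encoder(self.embed(tokens)))

MASK_ID = 0
model = ToyMVLT()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
tokens = torch.randint(1, 100, (8, 16))    # stand-in for tokenized inputs
labels = torch.randint(1, 100, (8, 16))    # stand-in for character labels

# Stage 1: masked pretraining -- hide a random subset of tokens, predict them.
mask = torch.rand(tokens.shape) < 0.3
logits = model(tokens.masked_fill(mask, MASK_ID))
loss1 = nn.functional.cross_entropy(logits[mask], tokens[mask])
loss1.backward(); opt.step(); opt.zero_grad()

# Stage 2: fine-tune for recognition, then refine outputs by feeding predictions
# back through the model ("iterative correction"; the loop count is assumed).
loss2 = nn.functional.cross_entropy(model(tokens).transpose(1, 2), labels)
loss2.backward(); opt.step(); opt.zero_grad()
preds = model(tokens).argmax(-1)
for _ in range(3):
    preds = model(preds).argmax(-1)
```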
1 code implementation • ACM SIGKDD Conference on Knowledge Discovery and Data Mining 2022 • Shengming Zhang, Yanchi Liu, Xuchao Zhang, Wei Cheng, Haifeng Chen, Hui Xiong
It is critical to detect anomalies in event sequences, which have become widely available in many application domains. Indeed, various efforts have been made to capture abnormal patterns from event sequences through sequential pattern analysis or event representation learning. However, existing approaches usually ignore the semantic information of event content. To this end, in this paper, we propose a self-attentive encoder-decoder transformer framework, Content-Aware Transformer (CAT), for anomaly detection in event sequences. In CAT, the encoder learns preamble event sequence representations with content awareness, and the decoder embeds sequences under detection into a latent space, where anomalies are distinguishable. Specifically, the event content is first fed to a content-awareness layer, generating representations of each event. The encoder accepts the preamble event representation sequence, generating feature maps. In the decoder, an additional token is added at the beginning of the sequence under detection, denoting the sequence status. A one-class objective, together with a sequence reconstruction loss, is applied to train our framework in a label-efficient manner. Furthermore, CAT is optimized under a scalable and efficient setting. Finally, extensive experiments on three real-world datasets demonstrate the superiority of CAT.
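The abstract's encoder-decoder layout and combined objective can be sketched as follows; the dimensions, the embedding-based content-awareness layer, and the fixed one-class center are assumptions for illustration, not the paper's exact design:

```python
import torch
import torch.nn as nn

dim, vocab, B, Lp, Ld = 64, 500, 4, 20, 10
content_layer = nn.Embedding(vocab, dim)        # stand-in for content awareness
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(dim, 4, batch_first=True), 2)
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(dim, 4, batch_first=True), 2)
recon_head = nn.Linear(dim, vocab)
status = nn.Parameter(torch.zeros(1, 1, dim))   # extra token marking sequence status
center = torch.zeros(dim)                       # one-class center (assumed fixed)

preamble = torch.randint(0, vocab, (B, Lp))     # preamble event sequence
detect = torch.randint(0, vocab, (B, Ld))       # sequence under detection

memory = encoder(content_layer(preamble))       # content-aware feature maps
tgt = torch.cat([status.expand(B, -1, -1), content_layer(detect)], dim=1)
out = decoder(tgt, memory)

# One-class loss pulls the status-token embedding toward the center;
# reconstruction loss makes the remaining positions predict the observed events.
one_class = ((out[:, 0] - center) ** 2).sum(-1).mean()
recon = nn.functional.cross_entropy(
    recon_head(out[:, 1:]).reshape(-1, vocab), detect.reshape(-1))
loss = one_class + recon
```

At inference time, the distance between the status-token embedding and the center would then serve as the anomaly score for the sequence under detection.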
no code implementations • ACL 2020 • Fan Zhou, Shengming Zhang, Yi Yang
To tackle these challenges, we present a semi-supervised text classification framework that integrates a multi-head attention mechanism with Semi-supervised variational inference for Operational Risk Classification (SemiORC).
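A hedged sketch of such an objective, assuming a generic multi-head attention encoder, a VAE-style branch for unlabeled text, and unit loss weights (none of which are confirmed by the paper):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, vocab, n_cls = 64, 1000, 5
embed = nn.Embedding(vocab, dim)
attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
clf = nn.Linear(dim, n_cls)
to_mu, to_logvar = nn.Linear(dim, dim), nn.Linear(dim, dim)
dec = nn.Linear(dim, vocab)

def encode(tokens):
    h = embed(tokens)
    h, _ = attn(h, h, h)          # multi-head self-attention over the text
    return h.mean(dim=1)          # pooled document representation

# Supervised branch: cross-entropy on the small labeled set.
x_lab = torch.randint(0, vocab, (4, 30)); y_lab = torch.randint(0, n_cls, (4,))
sup = F.cross_entropy(clf(encode(x_lab)), y_lab)

# Unsupervised branch: a variational (ELBO-style) term on unlabeled text,
# here with a crude bag-of-words decoder purely for illustration.
x_unl = torch.randint(0, vocab, (8, 30))
h = encode(x_unl)
mu, logvar = to_mu(h), to_logvar(h)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
recon = F.cross_entropy(
    dec(z).unsqueeze(1).expand(-1, 30, -1).reshape(-1, vocab),
    x_unl.reshape(-1))
kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
loss = sup + recon + kl
```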
no code implementations • 5 Aug 2017 • Quanshi Zhang, Ruiming Cao, Shengming Zhang, Mark Redmonds, Ying Nian Wu, Song-Chun Zhu
In one-shot or multi-shot learning scenarios, conventional end-to-end learning strategies without sufficient supervision are usually not powerful enough to learn correct patterns from noisy signals.
1 code implementation • 20 Apr 2016 • Hongyang Xue, Shengming Zhang, Deng Cai
The proposed low-gradient regularization is integrated with low-rank regularization to form the low rank low gradient approach for depth image inpainting.
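One way to picture the combined objective, under the assumption of a standard nuclear-norm term and an L1 gradient penalty (the paper's actual gradient regularizer and solver may differ), is the alternating scheme below:

```python
# Illustrative objective:  min_X ||X||_* + lam * ||grad X||_1,
# subject to X agreeing with the depth image D on observed pixels.
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def grad_step(X, lam):
    """Crude subgradient-style step on horizontal/vertical differences
    (an approximation purely for illustration, not the paper's update)."""
    gx = np.diff(X, axis=1, append=X[:, -1:])
    gy = np.diff(X, axis=0, append=X[-1:, :])
    return X - lam * (np.sign(gx) + np.sign(gy))

D = np.random.rand(64, 64)                   # depth image with missing pixels
mask = np.random.rand(64, 64) > 0.3          # observed-pixel indicator
X = D * mask
for _ in range(50):                          # alternate the two regularization steps
    X = grad_step(svt(X, tau=0.1), lam=0.01)
    X[mask] = D[mask]                        # enforce fidelity on observed pixels
```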