no code implementations • ICML 2020 • Fangcheng Fu, Yuzheng Hu, Yihan He, Jiawei Jiang, Yingxia Shao, Ce Zhang, Bin Cui
Recent years have witnessed intensive research interest in training deep neural networks (DNNs) more efficiently via quantization-based compression methods, which facilitate DNN training in two ways: (1) activations are quantized to shrink memory consumption, and (2) gradients are quantized to decrease communication cost.
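The snippet above mentions quantizing gradients to cut communication cost. A minimal sketch of the general idea (uniform quantization with stochastic rounding, a standard unbiased scheme — not necessarily the exact method of this paper):

```python
import numpy as np

def quantize_stochastic(x, bits=8, rng=None):
    """Uniform quantization with stochastic rounding, so the quantized
    value is unbiased in expectation (E[dequantize(q)] == x)."""
    rng = rng or np.random.default_rng(0)
    levels = 2 ** bits - 1
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / levels if hi > lo else 1.0
    t = (x - lo) / scale                       # map values into [0, levels]
    floor = np.floor(t)
    # Round up with probability equal to the fractional part.
    q = floor + (rng.random(x.shape) < (t - floor))
    return q.astype(np.uint8), lo, scale

def dequantize(q, lo, scale):
    return q.astype(np.float64) * scale + lo

# A 4-bit quantized "gradient": 16 levels instead of 64-bit floats.
g = np.random.default_rng(1).normal(size=1000)
q, lo, scale = quantize_stochastic(g, bits=4)
g_hat = dequantize(q, lo, scale)
```

The per-element error is bounded by one quantization step, and stochastic rounding keeps the reconstruction unbiased, which is what makes such gradients usable for SGD-style training.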
1 code implementation • EMNLP 2021 • Shitao Xiao, Zheng Liu, Yingxia Shao, Defu Lian, Xing Xie
In this work, we propose the Matching-oriented Product Quantization (MoPQ), where a novel objective Multinoulli Contrastive Loss (MCL) is formulated.
no code implementations • 20 Apr 2022 • Bowen Yu, Yingxia Shao, Ang Li
In recent years, with the rapid growth of Internet data, the number and types of scientific and technological resources are also rapidly expanding.
no code implementations • 13 Apr 2022 • Yuhui Wang, Yingxia Shao, Ang Li
In the era of big data, intellectual-property-oriented scientific and technological resources exhibit large data scale, high information density, and low value density. This poses severe challenges to the effective use of intellectual property resources, and the demand for mining the hidden information in intellectual property is increasing.
no code implementations • 13 Apr 2022 • Suyu Ouyang, Yingxia Shao, Ang Li
The scientific and technological resources of experts and scholars are mainly composed of basic attributes and scientific research achievements.
1 code implementation • 1 Apr 2022 • Shitao Xiao, Zheng Liu, Weihao Han, Jianjin Zhang, Defu Lian, Yeyun Gong, Qi Chen, Fan Yang, Hao Sun, Yingxia Shao, Denvy Deng, Qi Zhang, Xing Xie
We perform comprehensive explorations of the optimal conduct of knowledge distillation, which may provide useful insights for learning VQ-based ANN indexes.
no code implementations • 31 Mar 2022 • Suyu Ouyang, Yingxia Shao, Junping Du, Ang Li
The knowledge extraction task is to extract triple relations (head entity-relation-tail entity) from unstructured text data.
no code implementations • 21 Mar 2022 • Yuhui Wang, Junping Du, Yingxia Shao
This paper proposes a method for extracting intellectual property entities based on Transformer and technical word information, and provides accurate word vector representations in combination with the BERT language model.
no code implementations • 21 Mar 2022 • Bowen Yu, Junping Du, Yingxia Shao
With the rapid growth in the number and types of web resources, using a single strategy to extract the text information of different pages still leaves problems to be solved.
1 code implementation • 18 Feb 2022 • Tianyu Zhao, Cheng Yang, Yibo Li, Quan Gan, Zhenyi Wang, Fengqi Liang, Huan Zhao, Yingxia Shao, Xiao Wang, Chuan Shi
Heterogeneous Graph Neural Network (HGNN) has been successfully employed in various tasks, but the importance of its different design dimensions remains unclear due to the diversity of architectures and application scenarios.
no code implementations • 13 Feb 2022 • Jianjin Zhang, Zheng Liu, Weihao Han, Shitao Xiao, Ruicheng Zheng, Yingxia Shao, Hao Sun, Hanqing Zhu, Premkumar Srinivasan, Denvy Deng, Qi Zhang, Xing Xie
On the other hand, the capability of making high-CTR retrieval is optimized by learning to discriminate user's clicked ads from the entire corpus.
2 code implementations • 14 Jan 2022 • Shitao Xiao, Zheng Liu, Weihao Han, Jianjin Zhang, Yingxia Shao, Defu Lian, Chaozhuo Li, Hao Sun, Denvy Deng, Liangjie Zhang, Qi Zhang, Xing Xie
In this work, we tackle this problem with Bi-Granular Document Representation, where the lightweight sparse embeddings are indexed and standby in memory for coarse-grained candidate search, and the heavyweight dense embeddings are hosted in disk for fine-grained post verification.
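The two-stage scheme described above (lightweight codes in memory for coarse candidate search, heavyweight dense embeddings on disk for fine-grained post-verification) can be sketched roughly as follows. This is an illustrative toy with sign-based coarse codes, not the paper's actual bi-granular representation:

```python
import numpy as np

rng = np.random.default_rng(0)
docs = rng.normal(size=(1000, 64))     # heavyweight dense embeddings ("on disk")
codes = np.sign(docs[:, :8])           # lightweight coarse codes ("in memory")

def search(query, k=5, shortlist=50):
    # Stage 1: coarse candidate search over the in-memory codes.
    coarse = codes @ np.sign(query[:8])
    cand = np.argpartition(-coarse, shortlist)[:shortlist]
    # Stage 2: fine-grained post-verification with the dense embeddings
    # (loaded from disk only for the shortlisted candidates).
    fine = docs[cand] @ query
    return cand[np.argsort(-fine)[:k]]

res = search(docs[0])
```

Only the shortlist ever touches the expensive dense vectors, which is what makes the memory/disk split practical for billion-scale corpora.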
1 code implementation • Science China Information Sciences 2021 • Shitao Xiao, Yingxia Shao, Yawen Li, Hongzhi Yin, Yanyan Shen, Bin Cui
In this paper, we model an interaction between user and item as an edge and propose a novel CF framework, called learnable edge collaborative filtering (LECF).
1 code implementation • 24 Aug 2021 • Xin Xia, Hongzhi Yin, Junliang Yu, Yingxia Shao, Lizhen Cui
In this paper, for informative session-based data augmentation, we combine self-supervised learning with co-training, and then develop a framework to enhance session-based recommendation.
no code implementations • The VLDB Journal 2021 • Yingxia Shao, Shiyue Huang, Yawen Li, Xupeng Miao, Bin Cui, Lei Chen
In this paper, to clearly compare the efficiency of various node sampling methods, we first design a cost model and propose two new node sampling methods: one follows the acceptance-rejection paradigm to achieve a better balance between memory and time cost, and the other is optimized for fast sampling of the skewed probability distributions that exist in natural graphs.
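The acceptance-rejection paradigm mentioned above can be sketched in its simplest form: propose an index uniformly, then accept it with probability proportional to its weight. This is the generic technique, not the paper's specific sampler:

```python
import random

def sample_index(weights, rng):
    """Acceptance-rejection sampling of an index i with probability
    proportional to weights[i]: propose uniformly, accept with
    probability weights[i] / max(weights). Uses O(1) extra memory, at
    the cost of more rejections when the distribution is skewed."""
    n, w_max = len(weights), max(weights)
    while True:
        i = rng.randrange(n)
        if rng.random() < weights[i] / w_max:
            return i

rng = random.Random(0)
counts = [0, 0, 0, 0]
for _ in range(40_000):
    counts[sample_index([1.0, 2.0, 3.0, 4.0], rng)] += 1
```

The memory/time trade-off is visible here: unlike alias tables, no precomputed structure is stored, but heavily skewed weights raise the expected number of rejected proposals.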
2 code implementations • 16 Apr 2021 • Shitao Xiao, Zheng Liu, Yingxia Shao, Defu Lian, Xing Xie
In this work, we propose the Matching-oriented Product Quantization (MoPQ), where a novel objective Multinoulli Contrastive Loss (MCL) is formulated.
no code implementations • 18 Feb 2021 • Shitao Xiao, Zheng Liu, Yingxia Shao, Tao Di, Xing Xie
Secondly, it improves the data efficiency of the training workflow, where non-informative data can be eliminated from encoding.
no code implementations • 8 Dec 2020 • Yang Li, Jiawei Jiang, Jinyang Gao, Yingxia Shao, Ce Zhang, Bin Cui
In this framework, the BO methods solve the HPO problem for each ML algorithm separately, so each BO instance works with a much smaller hyperparameter space.
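The decomposition described above (one optimizer per algorithm, each over its own small space) can be sketched as below. Random search stands in for per-algorithm BO and round-robin for trial allocation; both are illustrative simplifications, not the paper's method:

```python
import random

def optimize_per_algorithm(algorithms, budget, rng=None):
    """Decomposed HPO: every ML algorithm keeps its own hyperparameter
    space and its own optimizer, so no single optimizer has to model
    the joint (algorithm x hyperparameter) space.
    `algorithms` maps name -> (sample_config, evaluate) callables."""
    rng = rng or random.Random(0)
    best = (float("inf"), None, None)
    names = list(algorithms)
    for t in range(budget):
        name = names[t % len(names)]           # round-robin trial allocation
        sample, evaluate = algorithms[name]
        cfg = sample(rng)                      # draw from this algorithm's space only
        best = min(best, (evaluate(cfg), name, cfg))
    return best

# Two toy "algorithms", each with a single hyperparameter in [0, 1].
algos = {
    "a": (lambda r: r.uniform(0, 1), lambda x: (x - 0.3) ** 2),
    "b": (lambda r: r.uniform(0, 1), lambda x: (x - 0.9) ** 2 + 0.5),
}
loss, name, cfg = optimize_per_algorithm(algos, budget=100)
```

Replacing the round-robin line with a bandit-style allocation (spend more budget on the algorithm whose trials look most promising) recovers the spirit of the framework's resource allocation.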
1 code implementation • 10 Oct 2020 • Xingyu Yao, Yingxia Shao, Bin Cui, Lei Chen
Finally, with the new edge sampler and random walk model abstraction, we carefully implement a scalable NRL framework called UniNet.
no code implementations • 10 Oct 2019 • Xupeng Miao, Nezihe Merve Gürel, Wentao Zhang, Zhichao Han, Bo Li, Wei Min, Xi Rao, Hansheng Ren, Yinan Shan, Yingxia Shao, Yujie Wang, Fan Wu, Hui Xue, Yaming Yang, Zitao Zhang, Yang Zhao, Shuai Zhang, Yujing Wang, Bin Cui, Ce Zhang
Despite the wide application of Graph Convolutional Network (GCN), one major limitation is that it does not benefit from the increasing depth and suffers from the oversmoothing problem.
no code implementations • 3 Jul 2019 • Fangcheng Fu, Jiawei Jiang, Yingxia Shao, Bin Cui
Gradient boosting decision tree (GBDT) is a widely-used machine learning algorithm in both data analytic competitions and real-world industrial applications.
5 code implementations • 16 Dec 2018 • Yongqi Zhang, Quanming Yao, Yingxia Shao, Lei Chen
Negative sampling, which samples negative triplets from non-observed ones in the training data, is an important step in KG embedding.
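The negative sampling step described above is, in its most basic (uniform) form, corrupting one side of an observed triplet — shown here as a generic sketch rather than the paper's proposed sampler:

```python
import random

def corrupt(triplet, entities, observed, rng):
    """Uniform negative sampling for KG embedding: replace the head or
    the tail of an observed (h, r, t) triplet with a random entity,
    rejecting corruptions that are themselves observed triplets
    (i.e., avoiding false negatives)."""
    h, r, t = triplet
    while True:
        e = rng.choice(entities)
        neg = (e, r, t) if rng.random() < 0.5 else (h, r, e)
        if neg != triplet and neg not in observed:
            return neg

# Hypothetical toy knowledge graph for illustration.
entities = ["alice", "bob", "carol", "dave"]
observed = {("alice", "knows", "bob"), ("bob", "knows", "carol")}
rng = random.Random(0)
neg = corrupt(("alice", "knows", "bob"), entities, observed, rng)
```

Uniform corruption tends to produce easy, uninformative negatives as training progresses, which is exactly the weakness that learned negative samplers aim to address.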
Ranked #4 on Link Prediction on FB15k
no code implementations • 6 Nov 2018 • Yang Li, Jiawei Jiang, Yingxia Shao, Bin Cui
The performance of deep neural networks crucially depends on good hyperparameter configurations.