no code implementations • 14 Mar 2023 • Jikun Kang, Di Wu, Ju Wang, Ekram Hossain, Xue Liu, Gregory Dudek
In cellular networks, User Equipment (UE) hands off from one Base Station (BS) to another, giving rise to the load-balancing problem among the BSs.
no code implementations • 4 Mar 2023 • Junliang Luo, Farimah Poursafaei, Xue Liu
Detecting illicit nodes on blockchain networks is a valuable task for strengthening future regulation.
no code implementations • 2 Mar 2023 • Dan Liu, Xue Liu
Numerous research efforts have been made to compress neural network models for faster inference and higher accuracy.
no code implementations • 4 Feb 2023 • Fuyuan Lyu, Xing Tang, Dugang Liu, Haolun Wu, Chen Ma, Xiuqiang He, Xue Liu
Representation learning has been a critical topic in machine learning.
no code implementations • 3 Feb 2023 • Igor Kozlov, Dmitriy Rivkin, Wei-Di Chang, Di Wu, Xue Liu, Gregory Dudek
Such networks undergo frequent and often heterogeneous changes caused by network operators, who are seeking to tune their system parameters for optimal performance.
no code implementations • 26 Jan 2023 • Fuyuan Lyu, Xing Tang, Dugang Liu, Liang Chen, Xiuqiang He, Xue Liu
Because of the large-scale search space, we develop a learning-by-continuation training scheme to learn such gates.
no code implementations • 29 Dec 2022 • Haolun Wu, Yansen Zhang, Chen Ma, Fuyuan Lyu, Fernando Diaz, Xue Liu
Diversifying search results is an important research topic in retrieval systems, as it serves both the varied interests of customers and equitable market exposure for providers.
no code implementations • 24 Dec 2022 • Dan Liu, Xue Liu
Most of the existing works use projection functions for ternary quantization in discrete space.
no code implementations • 24 Dec 2022 • Dan Liu, Xue Liu
Most existing pruning works are resource-intensive, requiring retraining or fine-tuning of the pruned models for accuracy.
no code implementations • 24 Dec 2022 • Dan Liu, Xi Chen, Chen Ma, Xue Liu
Model quantization enables the deployment of deep neural networks on resource-constrained devices.
no code implementations • 24 Nov 2022 • Xue Liu, Juan Zou, Xiawu Zheng, Cheng Li, Hairong Zheng, Shanshan Wang
Then, we design an effective self-supervised training data refinement method to reduce this data bias.
no code implementations • 21 Nov 2022 • Tim Tianyi Yang, Tom Tianze Yang, Andrew Liu, Jie Tang, Na An, Shaoshan Liu, Xue Liu
Also, through the AICOM-MP project, we have generalized a methodology for developing health AI technologies for AMCs that allows universal access even in resource-constrained environments.
no code implementations • 11 Nov 2022 • Haolun Wu, Yingxue Zhang, Chen Ma, Wei Guo, Ruiming Tang, Xue Liu, Mark Coates
To offer accurate and diverse recommendation services, recent methods use auxiliary information to foster the learning process of user and item representations.
no code implementations • 28 Oct 2022 • Chengming Hu, Xuan Li, Dan Liu, Xi Chen, Ju Wang, Xue Liu
To tackle this issue, Teacher-Student architectures were first utilized in knowledge distillation, where simple student networks can achieve comparable performance to deep teacher networks.
no code implementations • 2 Oct 2022 • Xue Liu, Dan Sun, Xiaobo Cao, Hao Ye, Wei Wei
Graph embedding provides a feasible methodology for pattern classification on graph-structured data by mapping each graph into a vector space.
1 code implementation • 9 Aug 2022 • Fuyuan Lyu, Xing Tang, Hong Zhu, Huifeng Guo, Yingxue Zhang, Ruiming Tang, Xue Liu
To this end, we propose an optimal embedding table learning framework OptEmbed, which provides a practical and general method to find an optimal embedding table for various base CTR models.
1 code implementation • 2 Aug 2022 • Haolun Wu, Chen Ma, Yingxue Zhang, Xue Liu, Ruiming Tang, Mark Coates
In order to effectively utilize such information, most research adopts the pairwise ranking method on constructed training triplets (user, positive item, negative item) and aims to distinguish between positive items and negative items for each user.
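The pairwise ranking objective on (user, positive item, negative item) triplets can be sketched as a BPR-style loss; the embedding sizes and random toy data below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 5 users, 10 items, 8-dim embeddings.
n_users, n_items, dim = 5, 10, 8
U = rng.normal(scale=0.1, size=(n_users, dim))   # user embeddings
V = rng.normal(scale=0.1, size=(n_items, dim))   # item embeddings

def bpr_loss(u, pos, neg):
    """Pairwise loss for one (user, positive, negative) triplet:
    push score(u, pos) above score(u, neg)."""
    x = U[u] @ V[pos] - U[u] @ V[neg]            # score difference
    return -np.log(1.0 / (1.0 + np.exp(-x)))     # -log sigmoid(x)

loss = bpr_loss(0, 2, 7)
```

Minimizing this loss over many sampled triplets is what lets the model rank each user's positive items above negative ones.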
no code implementations • 24 Jul 2022 • Can Chen, Xi Chen, Chen Ma, Zixuan Liu, Xue Liu
Bi-level optimization, especially the gradient-based category, has been widely used in the deep learning community including hyperparameter optimization and meta knowledge extraction.
no code implementations • 14 Jun 2022 • Xuan Li, Paule-J Toussaint, Alan Evans, Xue Liu
To dispense with the manual annotation requirement, we propose to train a model to adaptively transfer the annotation from the cerebellum on the Allen Brain Human Brain Atlas to the BigBrain in an unsupervised manner, taking into account the different staining and spacing between sections.
no code implementations • 31 May 2022 • Can Chen, Chen Ma, Xi Chen, Sirui Song, Hao Liu, Xue Liu
Recent works reveal a huge gap between the implicit feedback and user-item relevance due to the fact that implicit feedback is also closely related to the item exposure.
1 code implementation • 29 Apr 2022 • Haolun Wu, Bhaskar Mitra, Chen Ma, Fernando Diaz, Xue Liu
Prior research on exposure fairness in the context of recommender systems has focused mostly on disparities in the exposure of individual or groups of items to individual users of the system.
no code implementations • 6 Apr 2022 • Can Chen, Jingbo Zhou, Fan Wang, Xue Liu, Dejing Dou
Furthermore, we propose to leverage an available protein language model pretrained on protein sequences to enhance self-supervised learning.
no code implementations • 20 Mar 2022 • Yuecai Zhu, Fuyuan Lyu, Chengming Hu, Xi Chen, Xue Liu
However, the temporal information embedded in the dynamic graphs brings new challenges in analyzing and deploying them.
no code implementations • 20 Nov 2021 • Yang Hu, Zhui Zhu, Sirui Song, Xue Liu, Yang Yu
Experimental results in an exemplary environment show that our MARL approach can demonstrate the effectiveness and necessity of restricting individual liberty for the collaborative supply of public goods.
1 code implementation • 29 Oct 2021 • Can Chen, Shuhao Zheng, Xi Chen, Erqun Dong, Xue Liu, Hao Liu, Dejing Dou
To be specific, GDW unrolls the loss gradient to class-level gradients by the chain rule and reweights the flow of each gradient separately.
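A minimal sketch in this spirit (not the paper's exact GDW): the cross-entropy gradient with respect to the logits decomposes into one component per class, and each class's component can be reweighted separately before it flows onward. The weights here are hypothetical placeholders.

```python
import numpy as np

def reweighted_logit_grad(logits, label, class_weights):
    """Decompose the softmax cross-entropy gradient w.r.t. the logits
    into per-class components (p - onehot) and reweight each class's
    gradient flow separately."""
    p = np.exp(logits - logits.max())
    p /= p.sum()                          # softmax probabilities
    onehot = np.eye(len(logits))[label]
    per_class = p - onehot                # one gradient entry per class
    return class_weights * per_class      # separate weight per class

g = reweighted_logit_grad(np.array([1.0, 2.0, 0.5]), 1, np.ones(3))
```

With uniform unit weights this reduces to the standard cross-entropy gradient; non-uniform weights rescale each class's contribution independently.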
1 code implementation • 6 Oct 2021 • Jikun Kang, Miao Liu, Abhinav Gupta, Chris Pal, Xue Liu, Jie Fu
Various automatic curriculum learning (ACL) methods have been proposed to improve the sample efficiency and final performance of deep reinforcement learning (DRL).
no code implementations • 29 Sep 2021 • Di Wu, Tianyu Li, David Meger, Michael Jenkin, Xue Liu, Gregory Dudek
Unfortunately, most online reinforcement learning algorithms require a large number of interactions with the environment to learn a reliable control policy.
no code implementations • 29 Sep 2021 • Jikun Kang, Xi Chen, Ju Wang, Chengming Hu, Xue Liu, Gregory Dudek
Results show that, compared with SOTA model-free methods, our method can improve the data efficiency and system performance by up to 75% and 10%, respectively.
no code implementations • 23 Aug 2021 • Xuan Li, Liqiong Chang, Xue Liu
To this end, this paper proposes a framework, called CE-Dedup, to assess the impact of near-duplicate images on CNN training performance.
1 code implementation • 3 Aug 2021 • Fuyuan Lyu, Xing Tang, Huifeng Guo, Ruiming Tang, Xiuqiang He, Rui Zhang, Xue Liu
As feature interactions bring in non-linearity, they are widely adopted to improve the performance of CTR prediction models.
Ranked #2 on Click-Through Rate Prediction on Avazu
no code implementations • 23 Jul 2021 • Dan Liu, Xi Chen, Jie Fu, Chen Ma, Xue Liu
To simultaneously optimize bit-width, model size, and accuracy, we propose pruning ternary quantization (PTQ): a simple, effective, symmetric ternary quantization method.
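A simple symmetric ternary quantizer gives the flavor of such methods (the threshold and shared scale below are illustrative assumptions, not the paper's PTQ): small-magnitude weights are pruned to 0 and the rest snap to ±alpha.

```python
import numpy as np

def ternarize(w, t=0.05):
    """Map weights to {-alpha, 0, +alpha}: prune |w| <= t to zero,
    scale the survivors by their mean magnitude."""
    mask = np.abs(w) > t                                   # keep large weights
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0  # shared scale
    return alpha * np.sign(w) * mask

w = np.array([0.3, -0.02, -0.4, 0.01, 0.25])
q = ternarize(w)
```

The result needs only 2 bits per weight plus one shared float, which is the source of the compression.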
no code implementations • 13 Jul 2021 • Xue Liu, Dan Sun, Wei Wei
Considering the preservation of graph entropy, we propose an effective strategy to generate randomly perturbed training data while maintaining both graph topology and graph entropy.
no code implementations • 6 May 2021 • Haolun Wu, Chen Ma, Bhaskar Mitra, Fernando Diaz, Xue Liu
To address these limitations, we propose a multi-objective optimization framework for fairness-aware recommendation, Multi-FR, that adaptively balances accuracy and fairness for various stakeholders with a Pareto optimality guarantee.
no code implementations • 2 Feb 2021 • Xue Liu, Wei Wei, Xiangnan Feng, Xiaobo Cao, Dan Sun
Most existing popular methods for learning graph embeddings consider only fixed-order global structural features and lack a hierarchical representation of structure.
1 code implementation • CVPR 2021 • Yufei Cui, Yu Mao, Ziquan Liu, Qiao Li, Antoni B. Chan, Xue Liu, Tei-Wei Kuo, Chun Jason Xue
Nested dropout is a variant of dropout operation that is able to order network parameters or features based on the pre-defined importance during training.
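The ordering effect can be sketched as follows: a truncation index is sampled from a prior (a geometric distribution here, as an assumption) and every unit after it is zeroed, so earlier units survive more often and must learn the most important features.

```python
import numpy as np

rng = np.random.default_rng(0)

def nested_dropout(h, p=0.3):
    """Zero out all units after a sampled truncation index b, keeping
    the prefix h[:b] intact. Earlier units are dropped least often."""
    b = min(int(rng.geometric(p)), h.shape[-1])  # 1-based truncation index
    out = h.copy()
    out[..., b:] = 0.0                           # drop the tail
    return out

h = np.ones(8)
masked = nested_dropout(h)
```

Unlike standard dropout, which zeroes units independently, the mask is always a prefix, which induces the importance ordering.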
no code implementations • 13 Jan 2021 • Chen Ma, Liheng Ma, Yingxue Zhang, Ruiming Tang, Xue Liu, Mark Coates
Personalized recommender systems are playing an increasingly important role as more content and services become available and users struggle to identify what might interest them.
no code implementations • 13 Jan 2021 • Chen Ma, Liheng Ma, Yingxue Zhang, Haolun Wu, Xue Liu, Mark Coates
To effectively make use of the knowledge graph, we propose a recommendation model in the hyperbolic space, which facilitates the learning of the hierarchical structure of knowledge graphs.
1 code implementation • 4 Aug 2020 • Sirui Song, Zefang Zong, Yong Li, Xue Liu, Yang Yu
Saving lives or the economy is a dilemma for epidemic control in most cities, while smart-tracing technology raises people's privacy concerns.
no code implementations • 31 Jul 2020 • Xing Li, Wei Wei, Xiangnan Feng, Xue Liu, Zhiming Zheng
The graph structure is a commonly used data storage mode, and the low-dimensional embedded representation of nodes in a graph turns out to be extremely useful in various typical tasks, such as node classification and link prediction.
no code implementations • 21 May 2020 • Hang Li, Chen Ma, Wei Xu, Xue Liu
Building compact convolutional neural networks (CNNs) with reliable performance is a critical but challenging task, especially when deploying them in real-world applications.
no code implementations • 20 Mar 2020 • Xuan Li, Yuchen Lu, Christian Desrosiers, Xue Liu
In this paper, we study the problem of out-of-distribution detection in skin disease images.
1 code implementation • 26 Dec 2019 • Chen Ma, Liheng Ma, Yingxue Zhang, Jianing Sun, Xue Liu, Mark Coates
In addition to the modeling of user interests, we employ a bilinear function to capture the co-occurrence patterns of related items.
1 code implementation • 12 Nov 2019 • Qinglong Wang, Kaixuan Zhang, Xue Liu, C. Lee Giles
We propose an approach that connects recurrent networks with different orders of hidden interaction with regular grammars of different levels of complexity.
no code implementations • 15 Oct 2019 • Kaixuan Zhang, Qinglong Wang, Xue Liu, C. Lee Giles
This has motivated different research areas such as data poisoning, model improvement, and explanation of machine learning models.
no code implementations • 2 Oct 2019 • Xuan Li, Yuchen Lu, Peng Xu, Jizong Peng, Christian Desrosiers, Xue Liu
In this paper, we study the problem of image recognition with non-differentiable constraints.
2 code implementations • 21 Jun 2019 • Chen Ma, Peng Kang, Xue Liu
However, with the tremendous increase of users and items, sequential recommender systems still face several challenging problems: (1) the hardness of modeling the long-term user interests from sparse implicit feedback; (2) the difficulty of capturing the short-term user interests given several items the user just accessed.
Ranked #1 on Recommendation Systems on Amazon-CDs
1 code implementation • 7 Dec 2018 • Chen Ma, Peng Kang, Bin Wu, Qinglong Wang, Xue Liu
In particular, a word-level and a neighbor-level attention module are integrated with the autoencoder.
no code implementations • 14 Nov 2018 • Qinglong Wang, Kaixuan Zhang, Xue Liu, C. Lee Giles
The verification problem for neural networks is to verify whether a neural network will suffer from adversarial samples, or to approximate the maximal allowed scale of adversarial perturbation that can be endured.
1 code implementation • 27 Sep 2018 • Chen Ma, Yingxue Zhang, Qinglong Wang, Xue Liu
To incorporate geographical context information, we propose a neighbor-aware decoder that raises users' reachability on the similar and nearby neighbors of checked-in POIs, achieved by the inner product of POI embeddings together with the radial basis function (RBF) kernel.
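A hedged sketch of such a neighbor-aware score: an embedding inner product is modulated by an RBF kernel over geographic distance, so nearby POIs score higher. The names, toy embeddings, and the exact combination below are illustrative, not the paper's precise formulation.

```python
import numpy as np

def neighbor_score(e_user, e_poi, dist_km, gamma=0.1):
    """Combine an embedding inner product with an RBF kernel on distance,
    down-weighting far-away POIs."""
    rbf = np.exp(-gamma * dist_km ** 2)   # RBF kernel on geographic distance
    return (e_user @ e_poi) * rbf

e_u = np.array([0.5, 1.0, -0.2])
e_p = np.array([0.4, 0.6, 0.1])
near = neighbor_score(e_u, e_p, dist_km=1.0)
far = neighbor_score(e_u, e_p, dist_km=20.0)
```

The kernel bandwidth `gamma` controls how sharply reachability decays with distance.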
no code implementations • 16 Jan 2018 • Qinglong Wang, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, C. Lee Giles
Then we empirically evaluate different recurrent networks for their performance of DFA extraction on all Tomita grammars.
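For context, the Tomita grammars are a standard benchmark for DFA extraction from recurrent networks; for example, Tomita grammar 1 accepts exactly the strings over {0, 1} that contain no 0, and its DFA is tiny:

```python
def tomita1(s):
    """DFA for Tomita grammar 1 (strings of only 1s): a single 0 moves
    the machine into an absorbing reject state."""
    state = "accept"
    for ch in s:
        if ch == "0":
            state = "reject"   # absorbing: no transition leaves reject
    return state == "accept"
```

DFA extraction methods aim to recover automata like this one from a trained recurrent network's state dynamics.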
no code implementations • 29 Sep 2017 • Qinglong Wang, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, C. Lee Giles
Rule extraction from black-box models is critical in domains that require model validation before implementation, as can be the case in credit scoring and medical diagnosis.
no code implementations • 5 Dec 2016 • Qinglong Wang, Wenbo Guo, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, C. Lee Giles
Despite the superior performance of DNNs in these applications, it has been recently shown that these models are susceptible to a particular type of attack that exploits a fundamental flaw in their design.
no code implementations • 6 Oct 2016 • Qinglong Wang, Wenbo Guo, Alexander G. Ororbia II, Xinyu Xing, Lin Lin, C. Lee Giles, Xue Liu, Peng Liu, Gang Xiong
Deep neural networks have proven to be quite effective in a wide variety of machine learning tasks, ranging from improved speech recognition systems to advancing the development of autonomous vehicles.
no code implementations • 5 Oct 2016 • Qinglong Wang, Wenbo Guo, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, C. Lee Giles, Xue Liu
However, after a thorough analysis of the fundamental flaw in DNNs, we discover that the effectiveness of current defenses is limited and, more importantly, cannot provide theoretical guarantees as to their robustness against adversarial sample-based attacks.