Search Results for author: Xingen Wang

Found 7 papers, 6 papers with code

Transition Propagation Graph Neural Networks for Temporal Networks

1 code implementation • 15 Apr 2023 • Tongya Zheng, Zunlei Feng, Tianli Zhang, Yunzhi Hao, Mingli Song, Xingen Wang, Xinyu Wang, Ji Zhao, Chun Chen

The proposed TIP-GNN focuses on the bilevel graph structure in temporal networks: besides the explicit interaction graph, a node's sequential interactions can also be constructed as a transition graph.

Graph Mining • Link Prediction +1
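
The bilevel idea above is easy to illustrate. Below is a minimal sketch (not the authors' released code; the `interactions` data and the construction policy are illustrative) of turning a node's time-ordered interactions into a weighted transition graph:

```python
# Minimal sketch: build a per-node "transition graph" by connecting
# consecutively contacted neighbors, alongside the explicit interaction graph.
from collections import defaultdict

# (source, neighbor, timestamp) interaction events, e.g. a user's contacts
interactions = [
    ("u", "a", 1.0), ("u", "b", 2.0), ("u", "a", 3.0), ("u", "c", 4.0),
]

def transition_graph(events):
    """Connect each node's consecutively contacted neighbors."""
    by_source = defaultdict(list)
    for src, nbr, t in sorted(events, key=lambda e: e[2]):
        by_source[src].append(nbr)
    edges = defaultdict(int)  # weighted edges of the transition graph
    for src, seq in by_source.items():
        for prev, nxt in zip(seq, seq[1:]):
            edges[(src, prev, nxt)] += 1
    return dict(edges)

print(transition_graph(interactions))
# {('u', 'a', 'b'): 1, ('u', 'b', 'a'): 1, ('u', 'a', 'c'): 1}
```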

Learning Dynamic Preference Structure Embedding From Temporal Networks

1 code implementation • 23 Nov 2021 • Tongya Zheng, Zunlei Feng, Yu Wang, Chengchao Shen, Mingli Song, Xingen Wang, Xinyu Wang, Chun Chen, Hao Xu

Our proposed Dynamic Preference Structure (DPS) framework consists of two stages: structure sampling and graph fusion.

Graph Sampling
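
A minimal sketch of the two-stage shape described above. The recency-biased sampling policy and mean-style fusion below are illustrative stand-ins, not the DPS paper's learned components:

```python
# Stage 1: sample a node's temporal neighbors; Stage 2: fuse their features.
import numpy as np

rng = np.random.default_rng(0)

def sample_neighbors(neighbors, times, k, now):
    """Stage 1: structure sampling with an illustrative recency bias."""
    weights = np.exp(-(now - np.asarray(times)))  # newer => likelier
    probs = weights / weights.sum()
    idx = rng.choice(len(neighbors), size=min(k, len(neighbors)),
                     replace=False, p=probs)
    return [neighbors[i] for i in idx]

def fuse(node_feat, neighbor_feats):
    """Stage 2: graph fusion, here a simple mean-aggregation stand-in."""
    if not neighbor_feats:
        return node_feat
    return 0.5 * (node_feat + np.mean(neighbor_feats, axis=0))

feats = {n: rng.normal(size=4) for n in "uabc"}
sampled = sample_neighbors(["a", "b", "c"], times=[1.0, 2.0, 4.0], k=2, now=5.0)
print(sampled, fuse(feats["u"], [feats[n] for n in sampled]))
```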

Automatic Fairness Testing of Neural Classifiers through Adversarial Sampling

no code implementations • 17 Jul 2021 • Peixin Zhang, Jingyi Wang, Jun Sun, Xinyu Wang, Guoliang Dong, Xingen Wang, Ting Dai, Jin Song Dong

In this work, we bridge the gap by proposing a scalable and effective approach for systematically searching for discriminatory samples while extending existing fairness testing approaches to address a more challenging domain, i.e., text classification.

Fairness • Text Classification +1
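
The core notion being searched for, an individual discriminatory sample, can be sketched independently of the paper's adversarial search strategy. Below, the naive random probing and the `model` stub are assumptions for illustration only:

```python
# A sample is discriminatory if flipping ONLY a protected attribute
# changes the classifier's prediction.
import random

PROTECTED = 0  # index of the protected attribute (e.g. a binary group flag)

def model(x):  # illustrative classifier stub, not a trained network
    return int(x[1] + 0.8 * x[PROTECTED] > 1.0)

def is_discriminatory(x):
    flipped = list(x)
    flipped[PROTECTED] = 1 - flipped[PROTECTED]  # perturb only the attribute
    return model(x) != model(flipped)

random.seed(0)
found = [x for x in ([random.randint(0, 1), random.random()]
                     for _ in range(1000)) if is_discriminatory(x)]
print(f"{len(found)} discriminatory samples out of 1000 probes")
```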

Contrastive Model Inversion for Data-Free Knowledge Distillation

3 code implementations • 18 May 2021 • Gongfan Fang, Jie Song, Xinchao Wang, Chengchao Shen, Xingen Wang, Mingli Song

In this paper, we propose Contrastive Model Inversion (CMI), where the data diversity is explicitly modeled as an optimizable objective, to alleviate the mode collapse issue.

Contrastive Learning • Data-free Knowledge Distillation
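
A minimal sketch of treating batch diversity as an optimizable contrastive objective; the InfoNCE-style loss and toy embeddings below are assumptions for illustration, not the released CMI code:

```python
# Each synthesized sample should be distinguishable from the others in an
# embedding space; optimizing this distinguishability fights mode collapse.
import torch
import torch.nn.functional as F

def contrastive_diversity_loss(emb_a, emb_b, tau=0.1):
    """InfoNCE between two views: view a of sample k must match view b of
    the same k against all other samples in the batch."""
    a = F.normalize(emb_a, dim=1)
    b = F.normalize(emb_b, dim=1)
    logits = a @ b.t() / tau           # (N, N) similarity matrix
    targets = torch.arange(a.size(0))  # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: embeddings of 8 synthesized samples under two "augmentations".
torch.manual_seed(0)
z = torch.randn(8, 32, requires_grad=True)
loss = contrastive_diversity_loss(z + 0.01 * torch.randn(8, 32), z)
loss.backward()  # gradients would steer a generator toward diverse samples
print(float(loss))
```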

KDExplainer: A Task-oriented Attention Model for Explaining Knowledge Distillation

1 code implementation • 10 May 2021 • Mengqi Xue, Jie Song, Xinchao Wang, Ying Chen, Xingen Wang, Mingli Song

Knowledge distillation (KD) has recently emerged as an efficacious scheme for learning compact deep neural networks (DNNs).

Knowledge Distillation • Multi-class Classification
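
For context, the standard soft-target KD objective (Hinton et al.) that such explanation work analyzes can be sketched as follows; the attention model of KDExplainer itself is not shown, and `T` and `alpha` are illustrative defaults:

```python
# Vanilla knowledge distillation: match softened teacher probabilities,
# blended with the usual hard-label cross-entropy.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients to match the hard-label term
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

torch.manual_seed(0)
print(float(kd_loss(torch.randn(4, 10), torch.randn(4, 10),
                    torch.randint(0, 10, (4,)))))
```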

Towards Interpreting Recurrent Neural Networks through Probabilistic Abstraction

1 code implementation • 22 Sep 2019 • Guoliang Dong, Jingyi Wang, Jun Sun, Yang Zhang, Xinyu Wang, Ting Dai, Jin Song Dong, Xingen Wang

In this work, we propose an approach to extract probabilistic automata for interpreting an important class of neural networks, i.e., recurrent neural networks.

Machine Translation • Object Recognition
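
A minimal sketch of the general shape of such an extraction pipeline: abstract the RNN's hidden states into discrete states, then estimate transition probabilities from observed frequencies. The sign-based quantization below is an illustrative stand-in for the paper's probabilistic abstraction:

```python
# Extract a probabilistic automaton from RNN hidden-state traces.
from collections import Counter, defaultdict
import numpy as np

rng = np.random.default_rng(0)

def abstract_state(h):
    """Map a continuous hidden state to a discrete abstract state."""
    return tuple((h > 0).astype(int))  # illustrative: sign pattern

# Fake hidden-state traces; in practice these come from running the RNN.
traces = [rng.normal(size=(6, 3)) for _ in range(50)]

counts = defaultdict(Counter)
for trace in traces:
    states = [abstract_state(h) for h in trace]
    for s, s_next in zip(states, states[1:]):
        counts[s][s_next] += 1

# Normalize counts into transition probabilities P(s' | s).
automaton = {s: {t: c / sum(nxt.values()) for t, c in nxt.items()}
             for s, nxt in counts.items()}
print(len(automaton), "abstract states extracted")
```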
