no code implementations • 2 Dec 2024 • Kaiyuan Gao, Yusong Wang, Haoxiang Guan, Zun Wang, Qizhi Pei, John E. Hopcroft, Kun He, Lijun Wu
Two primary obstacles emerge: (1) the difficulty in designing a 3D line notation that ensures SE(3)-invariant atomic coordinates, and (2) the non-trivial task of tokenizing continuous coordinates for use in LMs, which inherently require discrete inputs.
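The second obstacle can be illustrated with a generic coordinate-binning tokenizer. This is only a minimal sketch of one common way to map continuous 3D coordinates to discrete tokens; the value range, bin count, and function names are assumptions for illustration and not the paper's method.

```python
# Minimal sketch: uniform binning of continuous atomic coordinates into discrete tokens.
# COORD_MIN/COORD_MAX/N_BINS are illustrative assumptions, not the paper's settings.
import numpy as np

COORD_MIN, COORD_MAX, N_BINS = -20.0, 20.0, 256  # assumed coordinate range and vocabulary size

def tokenize_coords(xyz: np.ndarray) -> np.ndarray:
    """Map an (N, 3) array of coordinates to integer tokens via uniform binning."""
    clipped = np.clip(xyz, COORD_MIN, COORD_MAX)
    scaled = (clipped - COORD_MIN) / (COORD_MAX - COORD_MIN)  # scale to [0, 1]
    return np.minimum((scaled * N_BINS).astype(int), N_BINS - 1)

def detokenize(tokens: np.ndarray) -> np.ndarray:
    """Invert the binning to bin centers (lossy)."""
    return COORD_MIN + (tokens + 0.5) / N_BINS * (COORD_MAX - COORD_MIN)
```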
no code implementations • 27 Jun 2024 • Jinsong Chen, Hanpeng Liu, John E. Hopcroft, Kun He
While tokenized graph Transformers have demonstrated strong performance in node classification tasks, they rely on a limited subset of nodes with high similarity scores to construct token sequences; this overlooks valuable information from other nodes and hinders their ability to fully harness graph information for learning optimal node representations.
no code implementations • 31 Oct 2023 • Gaichao Li, Jinsong Chen, John E. Hopcroft, Kun He
Graph pooling methods are widely used for downsampling graphs and have achieved impressive results on multiple graph-level tasks such as graph classification and graph generation.
no code implementations • 17 Oct 2023 • Jinsong Chen, Gaichao Li, John E. Hopcroft, Kun He
In this way, SignGT can learn informative node representations from both long-range dependencies and local topology information.
Ranked #7 on Node Classification on Actor
no code implementations • 24 Mar 2023 • Kun He, Xin Liu, Yichen Yang, Zhou Qin, Weigao Wen, Hui Xue, John E. Hopcroft
In addition, we propose using the Normalized Mean Square Error (NMSE) to further improve robustness by aligning clean and adversarial examples.
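The following is a minimal sketch (PyTorch assumed) of a normalized mean-square-error term that aligns clean and adversarial representations; tensor shapes and function names are illustrative, not the paper's exact formulation.

```python
# Minimal sketch: NMSE between clean and adversarial features, normalized by clean feature energy.
import torch

def nmse_loss(feat_clean: torch.Tensor, feat_adv: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Mean over the batch of ||f_adv - f_clean||^2 / ||f_clean||^2."""
    diff = (feat_adv - feat_clean).pow(2).sum(dim=-1)
    norm = feat_clean.pow(2).sum(dim=-1).clamp_min(eps)
    return (diff / norm).mean()
```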
no code implementations • 27 Nov 2022 • Shuoxi Zhang, Hanpeng Liu, John E. Hopcroft, Kun He
Knowledge distillation aims to transfer knowledge to the student model by utilizing the predictions/features of the teacher model, and feature-based distillation has recently shown its superiority over logit-based distillation.
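A minimal sketch of what feature-based distillation typically looks like (PyTorch assumed): the student's intermediate feature is projected to the teacher's dimension and matched with an MSE loss. The projection layer and dimensions are illustrative assumptions, not the paper's design.

```python
# Minimal sketch of a feature-based knowledge distillation loss.
import torch
import torch.nn as nn

class FeatureDistillLoss(nn.Module):
    def __init__(self, student_dim: int, teacher_dim: int):
        super().__init__()
        self.proj = nn.Linear(student_dim, teacher_dim)  # align feature dimensions

    def forward(self, f_student: torch.Tensor, f_teacher: torch.Tensor) -> torch.Tensor:
        # Match projected student features to (detached) teacher features.
        return nn.functional.mse_loss(self.proj(f_student), f_teacher.detach())
```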
1 code implementation • 20 Nov 2022 • Yu-Zhe Shi, Manjie Xu, John E. Hopcroft, Kun He, Joshua B. Tenenbaum, Song-Chun Zhu, Ying Nian Wu, Wenjuan Han, Yixin Zhu
Specifically, at the representational level, we seek to answer how the complexity varies when a visual concept is mapped to the representation space.
no code implementations • 15 Nov 2022 • Kun He, Chang Liu, Stephen Lin, John E. Hopcroft
Further combined with our feature augmentation techniques, termed LOMA_IF&FO, the method continues to strengthen the model and outperforms advanced intensity-transformation methods for data augmentation.
no code implementations • 27 May 2022 • Binghui Li, Jikai Jin, Han Zhong, John E. Hopcroft, LiWei Wang
Moreover, we establish an improved upper bound of $\exp({\mathcal{O}}(k))$ for the network size to achieve low robust generalization error when the data lies on a manifold with intrinsic dimension $k$ ($k \ll d$).
no code implementations • 8 Dec 2021 • Meng Wang, Boyu Li, Kun He, John E. Hopcroft
We theoretically show that our method avoids situations in which a broken community and the local community are treated as a single community in the subgraph, an inaccuracy that global hidden community detection methods can introduce.
1 code implementation • CVPR 2022 • Yifeng Xiong, Jiadong Lin, Min Zhang, John E. Hopcroft, Kun He
Black-box adversarial attacks have attracted considerable attention for their practical relevance to deep learning security.
no code implementations • 29 Sep 2021 • Jiadong Lin, Yifeng Xiong, Min Zhang, John E. Hopcroft, Kun He
Black-box adversarial attacks have attracted much attention for their practical use in deep learning applications, and they are very challenging because the attacker has no access to the architecture or weights of the target model.
no code implementations • 31 Jul 2021 • Xiaodong Xin, Kun He, Jialu Bao, Bart Selman, John E. Hopcroft
Our previous work proposes a general structure amplification technique called HICODE that uncovers many layers of functional hidden structure in complex networks.
1 code implementation • 6 Jul 2021 • Kun He, Chao Li, Yixiao Yang, Gao Huang, John E. Hopcroft
We first propose a simple yet efficient implementation of the convolution using circular kernels, and empirically show the significant advantages of large circular kernels over the counterpart square kernels.
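One simple way to realize a circular kernel, sketched below as an assumption about the general idea rather than the paper's implementation, is to mask out the corners of a square kernel so that only weights inside the inscribed circle contribute.

```python
# Minimal sketch: circular convolution kernel obtained by masking a square kernel.
import torch
import torch.nn as nn

class CircularConv2d(nn.Conv2d):
    def __init__(self, in_ch, out_ch, kernel_size, **kwargs):
        super().__init__(in_ch, out_ch, kernel_size, **kwargs)
        k = kernel_size
        yy, xx = torch.meshgrid(torch.arange(k), torch.arange(k), indexing="ij")
        c = (k - 1) / 2.0
        mask = ((yy - c) ** 2 + (xx - c) ** 2 <= c ** 2 + 1e-6).float()
        self.register_buffer("mask", mask)  # 1 inside the circle, 0 in the corners

    def forward(self, x):
        return nn.functional.conv2d(x, self.weight * self.mask, self.bias,
                                    self.stride, self.padding, self.dilation, self.groups)
```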
no code implementations • 1 Jan 2021 • Xiaosen Wang, Kun He, Chuanbiao Song, LiWei Wang, John E. Hopcroft
A recent work targets unrestricted adversarial examples using a generative model, but its method searches in the neighborhood of the input noise, so its output is still effectively constrained by the input.
3 code implementations • CVPR 2020 • Chao Li, Yixiao Yang, Kun He, Stephen Lin, John E. Hopcroft
IBCLN is a cascaded network that iteratively refines its estimates of the transmission and reflection layers so that they boost each other's prediction quality, with information transferred across steps of the cascade using an LSTM.
Ranked #3 on Reflection Removal on SIR^2(Postcard)
1 code implementation • 8 Oct 2019 • Chao Li, Kun He, Guangshuai Liu, John E. Hopcroft
Results: We propose a method called HirHide (Hierarchical Hidden Community Detection), which can be combined with traditional community detection methods to enable them to discover hierarchical hidden communities.
Molecular Networks
1 code implementation • ICLR 2020 • Chuanbiao Song, Kun He, Jiadong Lin, Li-Wei Wang, John E. Hopcroft
We then propose a new approach, Robust Local Features for Adversarial Training (RLFAT), which first learns robust local features by adversarial training on RBS-transformed adversarial examples, and then transfers these robust local features into the training on normal adversarial examples; a sketch of such a block-shuffle transform follows.
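Below is a minimal sketch of a random block-shuffle transform of the kind RBS suggests: the image is split into k x k spatial blocks that are then permuted. The block count and implementation details are assumptions for illustration, not the paper's exact procedure.

```python
# Minimal sketch: randomly shuffle k*k spatial blocks of an image batch.
import torch

def random_block_shuffle(images: torch.Tensor, k: int = 2) -> torch.Tensor:
    """Shuffle k*k blocks of a (B, C, H, W) batch; H and W must be divisible by k."""
    b, c, h, w = images.shape
    bh, bw = h // k, w // k
    blocks = images.unfold(2, bh, bh).unfold(3, bw, bw)   # (B, C, k, k, bh, bw)
    blocks = blocks.reshape(b, c, k * k, bh, bw)
    perm = torch.randperm(k * k)
    blocks = blocks[:, :, perm]                            # permute the blocks
    blocks = blocks.reshape(b, c, k, k, bh, bw)
    rows = [torch.cat([blocks[:, :, i, j] for j in range(k)], dim=3) for i in range(k)]
    return torch.cat(rows, dim=2)
```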
3 code implementations • ICLR 2020 • Jiadong Lin, Chuanbiao Song, Kun He, Li-Wei Wang, John E. Hopcroft
SIM is based on our discovery of the scale-invariant property of deep learning models, which we leverage to optimize the adversarial perturbations over scale copies of the input images, so as to avoid "overfitting" to the white-box model being attacked and to generate more transferable adversarial examples.
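The core of the scale-invariant idea can be sketched as averaging the loss gradient over scale copies x / 2^i of the input before taking the attack step. This is only a minimal illustration (PyTorch assumed); the number of copies m and the surrounding iterative attack schedule are not shown and are assumptions here.

```python
# Minimal sketch: average gradients over scale copies of the input.
import torch

def scale_invariant_grad(model, x, y, loss_fn, m: int = 5) -> torch.Tensor:
    grad = torch.zeros_like(x)
    for i in range(m):
        x_scaled = (x / 2 ** i).detach().requires_grad_(True)  # i-th scale copy
        loss = loss_fn(model(x_scaled), y)
        grad += torch.autograd.grad(loss, x_scaled)[0]
    return grad / m
```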
no code implementations • 10 May 2019 • Kun He, Wu Wang, Xiaosen Wang, John E. Hopcroft
In this work, we propose a new method for anchor word selection that associates word co-occurrence probability with word similarity and assumes that the most semantically distinct words are potential candidates for anchor words.
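As a purely illustrative sketch of the general idea of picking semantically distant anchor candidates from co-occurrence statistics, one could greedily select words whose co-occurrence profiles are least similar to those already chosen. The procedure and names below are assumptions, not the paper's actual selection algorithm.

```python
# Illustrative sketch: greedily pick mutually dissimilar words from a co-occurrence matrix.
import numpy as np

def greedy_anchor_candidates(cooccur: np.ndarray, k: int) -> list:
    """cooccur: (V, V) word co-occurrence counts; returns k mutually dissimilar word indices."""
    rows = cooccur / np.maximum(cooccur.sum(axis=1, keepdims=True), 1e-12)   # P(w2 | w1)
    rows = rows / np.maximum(np.linalg.norm(rows, axis=1, keepdims=True), 1e-12)
    anchors = [int(np.argmax(cooccur.sum(axis=1)))]          # start from the most frequent word
    while len(anchors) < k:
        sim_to_anchors = rows @ rows[anchors].T               # cosine similarity to chosen anchors
        anchors.append(int(np.argmin(sim_to_anchors.max(axis=1))))
    return anchors
```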
no code implementations • 16 Apr 2019 • Xiaosen Wang, Kun He, Chuanbiao Song, Li-Wei Wang, John E. Hopcroft
In this way, AT-GAN can learn the distribution of adversarial examples that is very close to the distribution of real data.
2 code implementations • ICLR 2019 • Chuanbiao Song, Kun He, Li-Wei Wang, John E. Hopcroft
Our intuition is to regard adversarial training on the FGSM adversary as a domain adaptation task with a limited number of target-domain samples.
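For reference, generating the FGSM adversary used in such adversarial training can be sketched as a single signed-gradient step. The epsilon value and clamping range below are illustrative assumptions.

```python
# Minimal sketch: single-step FGSM adversarial example generation.
import torch

def fgsm(model, x, y, loss_fn, eps: float = 8 / 255) -> torch.Tensor:
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()
```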
no code implementations • 21 Jan 2018 • Tao Yu, Huan Long, John E. Hopcroft
In this paper, we show the similarities and differences between two deep neural networks by comparing the manifolds formed by the activation vectors in each of their fully connected layers.
2 code implementations • 13 Dec 2017 • Kun He, Pan Shi, David Bindel, John E. Hopcroft
Community detection is an important information mining task in many fields including computer science, social sciences, biology and physics.
Social and Information Networks
no code implementations • 5 Nov 2017 • Mengxiao Zhang, Wangquan Wu, Yanren Zhang, Kun He, Tao Yu, Huan Long, John E. Hopcroft
Our results show that the dimensions of different categories are close to each other and decline quickly along the convolutional layers and fully connected layers.
no code implementations • 2 Apr 2017 • Kun He, Jingbo Wang, Haochuan Li, Yao Shu, Mengxiao Zhang, Man Zhu, Li-Wei Wang, John E. Hopcroft
Toward a deeper understanding of the inner workings of deep neural networks, we investigate CNNs (convolutional neural networks) using DCNs (deconvolutional networks) and randomization techniques, and gain new insights into the intrinsic properties of this network architecture.
12 code implementations • 1 Apr 2017 • Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger
In this paper, we propose a method to achieve the seemingly contradictory goal of ensembling multiple neural networks at no additional training cost.
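The snapshot idea can be sketched as a cyclic cosine learning-rate schedule with restarts, saving a copy of the model at the end of each cycle and averaging the saved models' predictions at test time. The hyperparameters and training-loop integration below are illustrative assumptions.

```python
# Minimal sketch: cyclic cosine LR with restarts, plus per-cycle model snapshots.
import copy
import math

def cyclic_cosine_lr(base_lr: float, step: int, steps_per_cycle: int) -> float:
    """Cosine-annealed learning rate that restarts to base_lr at each cycle boundary."""
    t = (step % steps_per_cycle) / steps_per_cycle
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * t))

def maybe_snapshot(model, step: int, steps_per_cycle: int, snapshots: list) -> None:
    """At the end of each cycle the LR is near zero; store a copy of the converged model."""
    if (step + 1) % steps_per_cycle == 0:
        snapshots.append(copy.deepcopy(model))
```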
4 code implementations • 24 Feb 2017 • Kun He, Yingru Li, Sucheta Soundarajan, John E. Hopcroft
We introduce a new paradigm that is important for community detection in the realm of network analysis.
no code implementations • 19 Nov 2015 • Jacob R. Gardner, Paul Upchurch, Matt J. Kusner, Yixuan Li, Kilian Q. Weinberger, Kavita Bala, John E. Hopcroft
Many tasks in computer vision can be cast as a "label changing" problem, where the goal is to make a semantic change to the appearance of an image or some subject in an image in order to alter the class membership.