no code implementations • 2 Dec 2024 • Kaiyuan Gao, Yusong Wang, Haoxiang Guan, Zun Wang, Qizhi Pei, John E. Hopcroft, Kun He, Lijun Wu
Two primary obstacles emerge: (1) the difficulty in designing a 3D line notation that ensures SE(3)-invariant atomic coordinates, and (2) the non-trivial task of tokenizing continuous coordinates for use in LMs, which inherently require discrete inputs.
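A common generic way to make continuous coordinates digestible by a discrete-input LM is uniform binning into a fixed token vocabulary. The sketch below only illustrates that general idea; the bounds, bin count, and function names are hypothetical and are not the tokenizer proposed in the paper:

```python
import numpy as np

def tokenize_coords(coords, lo=-10.0, hi=10.0, n_bins=256):
    """Map continuous coordinates to discrete token ids by uniform binning.
    A generic illustration, not the paper's tokenizer."""
    coords = np.clip(np.asarray(coords, dtype=float), lo, hi)
    # scale to [0, 1], then to integer bin ids in [0, n_bins - 1]
    return ((coords - lo) / (hi - lo) * (n_bins - 1)).round().astype(int)

def detokenize(ids, lo=-10.0, hi=10.0, n_bins=256):
    """Invert the binning up to quantization error (half a bin width)."""
    return lo + np.asarray(ids) / (n_bins - 1) * (hi - lo)

ids = tokenize_coords([[0.0, 1.5, -2.3]])
recon = detokenize(ids)
```

The round-trip error is bounded by half a bin width, here (hi − lo)/(2 · (n_bins − 1)) ≈ 0.039, which is the basic trade-off any such discretization makes.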
no code implementations • 20 Aug 2024 • Qiao Li, Cong Wu, Jing Chen, Zijun Zhang, Kun He, Ruiying Du, Xinxin Wang, Qingchuang Zhao, Yang Liu
Comparative evaluations between the certified defenses of the surrogate and target models demonstrate the effectiveness of our approach.
no code implementations • 6 Aug 2024 • Sen Nie, Zhuo Wang, Xinxin Wang, Kun He
Recent studies emphasize the crucial role of data augmentation in enhancing the performance of object detection models.
no code implementations • 16 Jul 2024 • Weihao Jiang, Shuoxi Zhang, Kun He
This strategy leverages the advantages of both global and local features while ensuring their complementary benefits.
no code implementations • 27 Jun 2024 • Jinsong Chen, Siyu Jiang, Kun He
In most graph Transformers, a crucial step involves transforming the input graph into token sequences as the model input, enabling Transformer to effectively learn the node representations.
no code implementations • 27 Jun 2024 • Jinsong Chen, Hanpeng Liu, John E. Hopcroft, Kun He
While tokenized graph Transformers have demonstrated strong performance in node classification tasks, their reliance on a limited subset of nodes with high similarity scores for constructing token sequences overlooks valuable information from other nodes, hindering their ability to fully harness graph information for learning optimal node representations.
no code implementations • CVPR 2024 • Shuoxi Zhang, Hanpeng Liu, Stephen Lin, Kun He
While self-attention modules empower ViTs to capture long-range dependencies, the computational complexity grows quadratically with the number of tokens, which is a major hindrance to the practical application of ViTs.
no code implementations • 6 May 2024 • Weihao Jiang, Haoyang Cui, Kun He
Subsequently, we filter patch embeddings using class embeddings to retain only the class-relevant ones.
no code implementations • 6 May 2024 • Weihao Jiang, Chang Liu, Kun He
Specifically, we swap the class (CLS) token and patch tokens between the support and query sets to have the mutual attention, which enables each set to focus on the most useful information.
no code implementations • 6 May 2024 • Yuxuan Wang, Jiongzhi Zheng, Jinyao Xie, Kun He
Similar to MP$_{\text{LS}}$, FIMP-HGA divides the solving process into match and partition stages, iteratively refining the solution.
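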
1 code implementation • 24 Apr 2024 • Haolin Wu, Jing Chen, Ruiying Du, Cong Wu, Kun He, Xingcan Shang, Hao Ren, Guowen Xu
The detection models exhibited vulnerabilities, with FAR rising to 36.69%, 31.23%, and 51.28% under volume control, fading, and noise injection, respectively.
1 code implementation • 29 Mar 2024 • Kaiyuan Gao, Qizhi Pei, Jinhua Zhu, Kun He, Lijun Wu
Molecular docking is a pivotal process in drug discovery.
Ranked #1 on Blind Docking on PDBbind
2 code implementations • 25 Mar 2024 • Zicong Fan, Takehiko Ohkawa, Linlin Yang, Nie Lin, Zhishan Zhou, Shihao Zhou, Jiajun Liang, Zhong Gao, Xuanyang Zhang, Xue Zhang, Fei Li, Zheng Liu, Feng Lu, Karim Abou Zeid, Bastian Leibe, Jeongwan On, Seungryul Baek, Aditya Prakash, Saurabh Gupta, Kun He, Yoichi Sato, Otmar Hilliges, Hyung Jin Chang, Angela Yao
A holistic 3D understanding of such interactions from egocentric views is important for tasks in robotics, AR/VR, action recognition and motion generation.
no code implementations • 21 Mar 2024 • Maoxuan Zhou, Wei Kang, Kun He
First, the Gramian angular field coding technique is used to encode the time-domain signal of the rolling bearing and generate a feature map that retains the complete information of the vibration signal.
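For reference, the standard Gramian angular (summation) field rescales the signal, maps each sample to a polar angle, and takes pairwise cosines. A minimal sketch of that textbook construction, not the paper's full pipeline:

```python
import numpy as np

def gramian_angular_field(x):
    """Gramian Angular Summation Field of a 1-D signal.
    Standard GASF formulation; a sketch, not the paper's exact method."""
    x = np.asarray(x, dtype=float)
    # rescale the signal to [-1, 1] so arccos is defined
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x, -1, 1))           # polar angle of each sample
    return np.cos(phi[:, None] + phi[None, :])   # G[i, j] = cos(phi_i + phi_j)

g = gramian_angular_field(np.sin(np.linspace(0, 4 * np.pi, 64)))
```

The resulting matrix is symmetric and bounded in [−1, 1], which is what makes it usable as an image-like feature map for a CNN.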
no code implementations • 10 Mar 2024 • Xin Liu, Yuxiang Zhang, Meng Wu, Mingyu Yan, Kun He, Wei Yan, Shirui Pan, Xiaochun Ye, Dongrui Fan
It can be categorized into two veins based on their effects on the performance of graph neural networks (GNNs), i.e., graph data augmentation and attack.
no code implementations • 6 Mar 2024 • Weihao Jiang, Guodong Liu, Di He, Kun He
However, as a non-end-to-end training method, in which the meta-training stage can only begin after pre-training is complete, Meta-Baseline suffers from higher training cost and suboptimal performance due to the inherent conflict between the two training stages.
no code implementations • 6 Feb 2024 • Jinghui Xue, Jiongzhi Zheng, Mingming Jin, Kun He
Exact algorithms for MBP mainly follow the branch-and-bound (BnB) framework, whose performance heavily depends on the quality of the upper bound on the cardinality of a maximum s-bundle and the initial lower bound with graph reduction.
1 code implementation • 2 Feb 2024 • Jianshu Zhang, Yankai Fu, Ziheng Peng, Dongyu Yao, Kun He
The former adaptively modulates the replay buffer allocation for each task based on its forgetting rate, while the latter guarantees the inclusion of representative data that best encapsulates the characteristics of each task within the buffer.
no code implementations • 23 Jan 2024 • Yichen Yang, Xin Liu, Kun He
Based on the observation that the adversarial perturbations crafted by single-step and multi-step gradient ascent are similar, FAT uses single-step gradient ascent to craft adversarial examples in the embedding space to expedite the training process.
no code implementations • 19 Jan 2024 • Jiongzhi Zheng, Zhuo Chen, Chu-min Li, Kun He
In this paper, we propose to transfer the SPB constraint into the clause weighting system of the local search method, leading the algorithm to better solutions.
no code implementations • 21 Dec 2023 • Haobo Lu, Xin Liu, Kun He
However, few of them are dedicated to input transformation. In this work, we observe a positive correlation between the logit/probability of the target class and diverse input transformation methods in targeted attacks.
1 code implementation • 1 Dec 2023 • Afifa Khaled, Chao Li, Jia Ning, Kun He
Normalization techniques have been widely used in the field of deep learning due to their capability of enabling higher learning rates and reducing the sensitivity to initialization.
no code implementations • 31 Oct 2023 • Gaichao Li, Jinsong Chen, John E. Hopcroft, Kun He
Graph pooling methods have been widely used on downsampling graphs, achieving impressive results on multiple graph-level tasks like graph classification and graph generation.
no code implementations • 17 Oct 2023 • Jinsong Chen, Gaichao Li, John E. Hopcroft, Kun He
In this way, SignGT could learn informative node representations from both long-range dependencies and local topology information.
Ranked #7 on Node Classification on Actor
1 code implementation • NeurIPS 2023 • Qizhi Pei, Kaiyuan Gao, Lijun Wu, Jinhua Zhu, Yingce Xia, Shufang Xie, Tao Qin, Kun He, Tie-Yan Liu, Rui Yan
In this work, we propose $\mathbf{FABind}$, an end-to-end model that combines pocket prediction and docking to achieve accurate and fast protein-ligand binding.
Ranked #4 on Blind Docking on PDBBind
4 code implementations • 17 Jul 2023 • Chao Li, Zijie Guo, Qiuting He, Hao Xu, Kun He
Utilizing long-range dependency, a concept extensively studied in homogeneous graphs, remains underexplored in heterogeneous graphs, especially on large ones, posing two significant challenges: Reducing computational costs while maximizing effective information utilization in the presence of heterogeneity, and overcoming the over-smoothing issue in graph neural networks.
Ranked #3 on Node Property Prediction on ogbn-mag
1 code implementation • 6 Jul 2023 • Xu Han, Anmin Liu, Chenxuan Yao, Yanbo Fan, Kun He
In either case, the common gradient-based methods generally use the sign function to generate perturbations in the gradient update, which offers a roughly correct direction and has achieved great success.
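The sign-based update these methods share can be illustrated on a toy analytic loss. This is a hedged sketch: `fgsm_step` and the quadratic loss are illustrative, not code from the paper:

```python
import numpy as np

def fgsm_step(x, grad, eps=0.03):
    """One sign-based gradient ascent step on the loss (FGSM-style update).
    Generic illustration with an analytic gradient, not a specific attack codebase."""
    return x + eps * np.sign(grad)

# toy loss L(x) = 0.5 * ||x - t||^2 with gradient (x - t)
x = np.zeros(4)
t = np.array([1.0, -1.0, 2.0, -2.0])
grad = x - t                  # analytic gradient of the toy loss at x
x_adv = fgsm_step(x, grad)
# every coordinate moves by exactly eps, in the direction that increases the loss
```

The sign function discards the gradient's magnitude and keeps only its direction per coordinate, which is precisely the "roughly correct direction" the snippet above refers to.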
1 code implementation • NeurIPS 2023 • Xiaosen Wang, Kangheng Tong, Kun He
input image and loss function so as to generate adversarial examples with higher transferability.
no code implementations • 20 Jun 2023 • Shuoxi Zhang, Hanpeng Liu, Kun He
To address the above limitations, we propose a novel method called Knowledge Distillation with Token-level Relationship Graph (TRG) that leverages the token-wise relational knowledge to enhance the performance of knowledge distillation.
1 code implementation • 9 Jun 2023 • Afifa Khaled, Ahmed A. Mubarak, Kun He
In this work, we address the above limitations by designing a new deep-learning model, called 3D-DenseUNet, which works as adaptable global aggregation blocks in down-sampling to solve the issue of spatial information loss.
no code implementations • 22 May 2023 • Jinsong Chen, Chang Liu, Kaiyuan Gao, Gaichao Li, Kun He
Graph Transformers, emerging as a new architecture for graph representation learning, suffer from the quadratic complexity on the number of nodes when handling large graphs.
no code implementations • CVPR 2023 • Takehiko Ohkawa, Kun He, Fadime Sener, Tomas Hodan, Luan Tran, Cem Keskin
To obtain high-quality 3D hand pose annotations for the egocentric images, we develop an efficient pipeline, where we use an initial set of manual annotations to train a model to automatically annotate a much larger dataset.
no code implementations • 23 Apr 2023 • Chao Li, Hao Xu, Kun He
Meta-structures are widely used to define which subset of neighbors to aggregate information in heterogeneous information networks (HINs).
2 code implementations • 19 Apr 2023 • Yan Jin, Yuandong Ding, Xuanhao Pan, Kun He, Li Zhao, Tao Qin, Lei Song, Jiang Bian
Traveling Salesman Problem (TSP), as a classic routing optimization problem originally arising in the domain of transportation and logistics, has become a critical task in broader domains, such as manufacturing and biology.
no code implementations • 24 Mar 2023 • Kun He, Xin Liu, Yichen Yang, Zhou Qin, Weigao Wen, Hui Xue, John E. Hopcroft
Besides, we suggest to use the Normalized Mean Square Error (NMSE) to further improve the robustness by aligning the clean and adversarial examples.
no code implementations • 9 Feb 2023 • Qi Chen, Chao Li, Jia Ning, Stephen Lin, Kun He
Inspired by the property that ERFs typically exhibit a Gaussian distribution, we propose a Gaussian Mask convolutional kernel (GMConv) in this work.
no code implementations • 28 Jan 2023 • Yasmeen M. Khedr, Yifeng Xiong, Kun He
The probability score method is based on training a Face Verification model for an attribute prediction task to obtain a class probability score for each attribute.
1 code implementation • ICCV 2023 • Jia Ning, Chen Li, Zheng Zhang, Zigang Geng, Qi Dai, Kun He, Han Hu
With these new techniques and other designs, we show that the proposed general-purpose task-solver can perform both instance segmentation and depth estimation well.
Ranked #23 on Monocular Depth Estimation on NYU-Depth V2
1 code implementation • 29 Nov 2022 • Jiongzhi Zheng, Kun He, Jianrong Zhou, Yan Jin, Chu-min Li, Felip Manyà
In this paper, we propose a local search algorithm for these problems, called BandHS, which applies two multi-armed bandits to guide the search directions when escaping local optima.
no code implementations • 27 Nov 2022 • Chao Li, Hao Xu, Kun He
To address these issues, we propose a novel method called Partial Message Meta Multigraph search (PMMM) to automatically optimize the neural architecture design on HINs.
no code implementations • 27 Nov 2022 • Shuoxi Zhang, Hanpeng Liu, John E. Hopcroft, Kun He
Knowledge distillation aims to transfer knowledge to the student model by utilizing the predictions/features of the teacher model, and feature-based distillation has recently shown its superiority over logit-based distillation.
1 code implementation • 20 Nov 2022 • Yu-Zhe Shi, Manjie Xu, John E. Hopcroft, Kun He, Joshua B. Tenenbaum, Song-Chun Zhu, Ying Nian Wu, Wenjuan Han, Yixin Zhu
Specifically, at the $representational \ level$, we seek to answer how the complexity varies when a visual concept is mapped to the representation space.
no code implementations • 15 Nov 2022 • Gaichao Li, Jinsong Chen, Kun He
MNA-GT further employs an attention layer to learn the importance of different attention kernels to enable the model to adaptively capture the graph structural information for different nodes.
no code implementations • 15 Nov 2022 • Kun He, Chang Liu, Stephen Lin, John E. Hopcroft
Further combined with our feature augmentation techniques, termed LOMA_IF&FO, it can continue to strengthen the model and outperform advanced intensity transformation methods for data augmentation.
no code implementations • 15 Nov 2022 • Jinsong Chen, Boyu Li, Kun He
The decoupled Graph Convolutional Network (GCN), a recent development of GCN that decouples the neighborhood aggregation and feature transformation in each convolutional layer, has shown promising performance for graph representation learning.
no code implementations • 26 Oct 2022 • Kaiyuan Gao, Lijun Wu, Jinhua Zhu, Tianbo Peng, Yingce Xia, Liang He, Shufang Xie, Tao Qin, Haiguang Liu, Kun He, Tie-Yan Liu
Specifically, we first pre-train an antibody language model based on the sequence data, then propose a one-shot way for sequence and structure generation of CDR to avoid the heavy cost and error propagation from an autoregressive manner, and finally leverage the pre-trained antibody model for the antigen-specific antibody generation model with some carefully designed modules.
no code implementations • 18 Aug 2022 • Yanli Liu, Jiming Zhao, Chu-min Li, Hua Jiang, Kun He
Branch-and-Bound (BnB) is the basis of a class of efficient algorithms for MCS, which consist of successively selecting vertices to match and pruning when it is discovered that a solution better than the best solution found so far does not exist.
no code implementations • 9 Jul 2022 • Guodong Liu, Tongling Wang, Shuoxi Zhang, Kun He
Model-Agnostic Meta-Learning (MAML) is a famous few-shot learning method that has inspired many follow-up efforts, such as ANIL and BOIL.
1 code implementation • 8 Jul 2022 • Jiongzhi Zheng, Kun He, Jianrong Zhou, Yan Jin, Chu-min Li
LKH-3 is a powerful extension of LKH that can solve many TSP variants.
no code implementations • 21 Jun 2022 • Jinsong Chen, Boyu Li, Qiuting He, Kun He
However, they follow the traditional structure-aware propagation strategy of GCNs, making it hard to capture the attribute correlation of nodes and sensitive to the structure noise described by edges whose two endpoints belong to different categories.
1 code implementation • 10 Jun 2022 • Jinsong Chen, Kaiyuan Gao, Gaichao Li, Kun He
In this work, we observe that existing graph Transformers treat nodes as independent tokens and construct a single long sequence composed of all node tokens to train the Transformer model, making it hard to scale to large graphs due to the quadratic complexity of the self-attention computation in the number of nodes.
no code implementations • 10 Apr 2022 • Chao Li, Jia Ning, Han Hu, Kun He
Differentiable architecture search (DARTS) has attracted much attention due to its simplicity and significant improvement in efficiency.
1 code implementation • 6 Apr 2022 • Xu Han, Anmin Liu, Yifeng Xiong, Yanbo Fan, Kun He
Deviation between the original gradient and the generated noises may lead to inaccurate gradient update estimation and suboptimal solutions for adversarial transferability, which is crucial for black-box attacks.
1 code implementation • CVPR 2022 • Fadime Sener, Dibyadip Chatterjee, Daniel Shelepov, Kun He, Dipika Singhania, Robert Wang, Angela Yao
Assembly101 is a new procedural activity dataset featuring 4321 videos of people assembling and disassembling 101 "take-apart" toy vehicles.
1 code implementation • 28 Feb 2022 • Yichen Yang, Xiaosen Wang, Kun He
We attribute the vulnerability of natural language processing models to the fact that similar inputs are converted to dissimilar representations in the embedding space, leading to inconsistent outputs, and we propose a novel robust training method, termed Fast Triplet Metric Learning (FTML).
1 code implementation • 20 Jan 2022 • Zhen Yu, Xiaosen Wang, Wanxiang Che, Kun He
Existing textual adversarial attacks usually utilize the gradient or prediction confidence to generate adversarial examples, making them hard to deploy in real-world applications.
no code implementations • 14 Jan 2022 • Jiongzhi Zheng, Kun He, Jianrong Zhou, Yan Jin, Chu-min Li, Felip Manya
We address Partial MaxSAT (PMS) and Weighted PMS (WPMS), two practical generalizations of the MaxSAT problem, and propose a local search algorithm for these problems, called BandMaxSAT, that applies a multi-armed bandit model to guide the search direction.
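As a rough illustration of how a multi-armed bandit can guide search-direction selection, here is a textbook UCB1 sketch where each arm stands for a candidate direction. The class name and Bernoulli reward model are hypothetical stand-ins, not BandMaxSAT's actual bandit:

```python
import math
import random

class UCB1:
    """Minimal UCB1 bandit: each arm could represent a candidate search
    direction; the reward would reflect solution-quality improvement."""
    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms   # running mean reward per arm
        self.t = 0

    def select(self):
        self.t += 1
        for arm, c in enumerate(self.counts):
            if c == 0:                 # play every arm once first
                return arm
        # pick the arm maximizing mean reward plus an exploration bonus
        return max(range(len(self.counts)),
                   key=lambda a: self.values[a]
                   + math.sqrt(2 * math.log(self.t) / self.counts[a]))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

random.seed(0)
bandit = UCB1(3)
true_means = [0.2, 0.8, 0.5]          # hypothetical payoff of each direction
for _ in range(2000):
    arm = bandit.select()
    bandit.update(arm, 1.0 if random.random() < true_means[arm] else 0.0)
```

Over time the bandit concentrates its pulls on the most rewarding arm while still occasionally exploring the others, which is the behavior that makes it useful for steering a local search out of unproductive directions.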
1 code implementation • 27 Dec 2021 • Qi Feng, Kun He, He Wen, Cem Keskin, Yuting Ye
Notably, on CMU Panoptic Studio, we are able to reduce the turn-around time by 60% and annotation cost by 80% when compared to the conventional annotation process.
1 code implementation • 13 Dec 2021 • Xiaosen Wang, Zeliang Zhang, Kangheng Tong, Dihong Gong, Kun He, Zhifeng Li, Wei Liu
Decision-based attack poses a severe threat to real-world applications since it regards the target model as a black box and only accesses the hard prediction label.
no code implementations • 8 Dec 2021 • Meng Wang, Boyu Li, Kun He, John E. Hopcroft
We theoretically show that our method can avoid situations in which a broken community and the local community are regarded as one community in the subgraph, an inaccuracy in detection that can be caused by global hidden community detection methods.
1 code implementation • CVPR 2022 • Yifeng Xiong, Jiadong Lin, Min Zhang, John E. Hopcroft, Kun He
The black-box adversarial attack has attracted impressive attention for its practical use in the field of deep learning security.
5 code implementations • 17 Nov 2021 • Delv Lin, Qi Chen, Chengyu Zhou, Kun He
Multi-Object Tracking (MOT) has achieved aggressive progress and derived many excellent deep learning trackers.
no code implementations • 29 Sep 2021 • Jiadong Lin, Yifeng Xiong, Min Zhang, John E. Hopcroft, Kun He
Black-box adversarial attack has attracted much attention for its practical use in deep learning applications, and it is very challenging as there is no access to the architecture and weights of the target model.
1 code implementation • 13 Sep 2021 • Xiaosen Wang, Yifeng Xiong, Kun He
Based on this observation, we propose a novel textual adversarial example detection method, termed Randomized Substitution and Vote (RS&V), which votes the prediction label by accumulating the logits of k samples generated by randomly substituting the words in the input text with synonyms.
1 code implementation • EMNLP 2021 • Haohai Sun, Jialun Zhong, Yunpu Ma, Zhen Han, Kun He
Compared with the completion task, the forecasting task is more difficult, as it faces two main challenges: (1) how to effectively model the time information to handle future timestamps?
no code implementations • 2 Sep 2021 • Chuanbiao Song, Yanbo Fan, Yichen Yang, Baoyuan Wu, Yiming Li, Zhifeng Li, Kun He
Adversarial training (AT) has been demonstrated as one of the most promising defense methods against various adversarial attacks.
1 code implementation • 23 Aug 2021 • Jiongzhi Zheng, Kun He, Jianrong Zhou
In this work, we observe that most local search (W)PMS solvers usually flip a single variable per iteration.
no code implementations • 14 Aug 2021 • Xiaobo Jiang, Kun He, Jiajun He, Guangyu Yan
Entity extraction is a key technology for obtaining information from massive texts in natural language processing.
no code implementations • ACL 2021 • Xinze Zhang, Junzhe Zhang, Zhenhua Chen, Kun He
We first show the current NMT adversarial attacks may be improperly estimated by the commonly used mono-directional translation, and we propose to leverage the round-trip translation technique to build valid metrics for evaluating NMT adversarial attacks.
no code implementations • 31 Jul 2021 • Xiaodong Xin, Kun He, Jialu Bao, Bart Selman, John E. Hopcroft
Our previous work proposes a general structure amplification technique called HICODE that uncovers many layers of functional hidden structure in complex networks.
no code implementations • 9 Jul 2021 • Jiongzhi Zheng, Jialun Zhong, Menglei Chen, Kun He
In the hybrid algorithm, LKH can help EAX-GA improve the population by its effective local search, and EAX-GA can help LKH escape from local optima by providing high-quality and diverse initial solutions.
1 code implementation • 6 Jul 2021 • Kun He, Chao Li, Yixiao Yang, Gao Huang, John E. Hopcroft
We first propose a simple yet efficient implementation of the convolution using circular kernels, and empirically show the significant advantages of large circular kernels over the counterpart square kernels.
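One simple way to realize a circular kernel is to mask the disc inscribed in a square kernel. The sketch below illustrates that idea only and is not necessarily the paper's implementation:

```python
import numpy as np

def circular_mask(k):
    """Boolean mask selecting the disc inscribed in a k x k square kernel.
    One way to realize a circular kernel by masking a square one;
    a sketch, not necessarily the paper's exact construction."""
    r = (k - 1) / 2
    yy, xx = np.mgrid[:k, :k]
    return (yy - r) ** 2 + (xx - r) ** 2 <= r ** 2 + 1e-9

mask = circular_mask(5)
# corners of the square fall outside the disc; the center cross is kept
```

Multiplying a learned square kernel elementwise by such a mask (and keeping the mask fixed) is the usual cheap trick for emulating a circular receptive field on top of standard convolution routines.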
no code implementations • 26 Jun 2021 • Xiaosen Wang, Chuanbiao Song, LiWei Wang, Kun He
In this work, we aim to avoid the catastrophic overfitting by introducing multi-step adversarial examples during the single-step adversarial training.
2 code implementations • CVPR 2021 • Xiaosen Wang, Kun He
Incorporating variance tuning with input transformations on iterative gradient-based attacks in the multi-model setting, the integrated method could achieve an average success rate of 90.1% against nine advanced defense methods, improving the current best attack performance significantly by 85.1%.
1 code implementation • 19 Mar 2021 • Xiaosen Wang, Jiadong Lin, Han Hu, Jingdong Wang, Kun He
Various momentum iterative gradient-based methods are shown to be effective to improve the adversarial transferability.
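The momentum-accumulation idea behind such methods can be sketched generically, in the spirit of MI-FGSM; the function signature and toy loss below are illustrative, not a library API:

```python
import numpy as np

def momentum_attack(x, grad_fn, eps=0.3, steps=10, mu=1.0):
    """Momentum iterative sign-gradient ascent (in the spirit of MI-FGSM).
    `grad_fn` returns the loss gradient; a generic sketch, not a real API."""
    alpha = eps / steps
    g = np.zeros_like(x)
    x_adv = x.copy()
    for _ in range(steps):
        grad = grad_fn(x_adv)
        # accumulate the L1-normalized gradient into the momentum buffer
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)   # stay inside the L-inf ball
    return x_adv

x = np.zeros(3)
t = np.array([1.0, -1.0, 0.5])
x_adv = momentum_attack(x, lambda z: z - t)   # toy loss 0.5 * ||z - t||^2
```

The momentum buffer smooths the update direction across iterations, which is the mechanism credited with stabilizing the direction and improving transferability.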
2 code implementations • ICCV 2021 • Xiaosen Wang, Xuanran He, Jingdong Wang, Kun He
We investigate in this direction and observe that existing transformations are all applied on a single image, which might limit the adversarial transferability.
no code implementations • 1 Jan 2021 • Xiaosen Wang, Kun He, Chuanbiao Song, LiWei Wang, John E. Hopcroft
A recent work targets unrestricted adversarial examples using a generative model, but its method is based on a search in the neighborhood of the input noise, so its output is actually still constrained by the input.
1 code implementation • 8 Dec 2020 • Jiongzhi Zheng, Kun He, Jianrong Zhou, Yan Jin, Chu-min Li
We address the Traveling Salesman Problem (TSP), a famous NP-hard combinatorial optimization problem.
no code implementations • 17 Oct 2020 • Min Zhang, Yao Shu, Kun He
Finite-sum optimization plays an important role in the area of machine learning, and hence has triggered a surge of interest in recent years.
1 code implementation • 9 Aug 2020 • Xiaosen Wang, Yichen Yang, Yihe Deng, Kun He
Adversarial training is the most empirically successful approach in improving the robustness of deep neural networks for image classification. For text classification, however, existing synonym-substitution-based adversarial attacks are effective but not efficient enough to be incorporated into practical text adversarial training.
no code implementations • 12 Jul 2020 • Liwen Li, Zequn Wei, Jin-Kao Hao, Kun He
As the counterpart problem of SUKP, however, BMCP was introduced as early as 1999 but has rarely been studied since; in particular, no practical algorithm has been proposed.
no code implementations • 26 Feb 2020 • Daizong Liu, Shuangjie Xu, Pan Zhou, Kun He, Wei Wei, Zichuan Xu
In this work, we propose a Disease Diagnosis Graph Convolutional Network (DD-GCN) that presents a novel view of investigating the inter-dependency among different diseases by using a dynamic learnable adjacency matrix in graph structure to improve the diagnosis accuracy.
no code implementations • 3 Feb 2020 • Xinze Zhang, Kun He, Yukun Bao
Despite the superiority of convolutional neural networks demonstrated in time series modeling and forecasting, the design of the neural network architecture and the tuning of the hyper-parameters have not been fully explored.
no code implementations • 22 Jan 2020 • Kun He, Min Zhang, Jianrong Zhou, Yan Jin, Chu-min Li
Inspired by its success in deep learning, we apply the idea of SGD with batch selection of samples to a classic optimization problem in its decision version.
no code implementations • 20 Jan 2020 • Kun He, Kevin Tole, Fei Ni, Yong Yuan, Linyun Liao
We address a new variant of packing problem called the circle bin packing problem (CBPP), which is to find a dense packing of circle items to multiple square bins so as to minimize the number of used bins.
2 code implementations • CVPR 2020 • Chao Li, Yixiao Yang, Kun He, Stephen Lin, John E. Hopcroft
IBCLN is a cascaded network that iteratively refines the estimates of transmission and reflection layers in a manner that they can boost each other's prediction quality, and information across steps of the cascade is transferred using an LSTM.
Ranked #2 on Reflection Removal on SIR^2(Postcard)
1 code implementation • 8 Oct 2019 • Chao Li, Kun He, Guangshuai Liu, John E. Hopcroft
Results: We propose a method called HirHide (Hierarchical Hidden Community Detection), which can be combined with traditional community detection methods to enable them to discover hierarchical hidden communities.
Molecular Networks
1 code implementation • ICLR 2020 • Chuanbiao Song, Kun He, Jiadong Lin, Li-Wei Wang, John E. Hopcroft
We continue to propose a new approach called Robust Local Features for Adversarial Training (RLFAT), which first learns the robust local features by adversarial training on the RBS-transformed adversarial examples, and then transfers the robust local features into the training of normal adversarial examples.
1 code implementation • 15 Sep 2019 • Xiaosen Wang, Hao Jin, Yichen Yang, Kun He
In the area of natural language processing, deep learning models are recently known to be vulnerable to various types of adversarial perturbations, but relatively little work has been done on the defense side.
3 code implementations • ICLR 2020 • Jiadong Lin, Chuanbiao Song, Kun He, Li-Wei Wang, John E. Hopcroft
SIM is based on our discovery of the scale-invariant property of deep learning models, which we leverage to optimize the adversarial perturbations over the scale copies of the input images so as to avoid "overfitting" on the white-box model being attacked and generate more transferable adversarial examples.
1 code implementation • ACL 2019 • Shuhuai Ren, Yihe Deng, Kun He, Wanxiang Che
Experiments on three popular datasets using convolutional as well as LSTM models show that PWWS reduces the classification accuracy to the most extent, and keeps a very low word substitution rate.
1 code implementation • 3 Jun 2019 • Runtian Zhai, Tianle Cai, Di He, Chen Dan, Kun He, John Hopcroft, Li-Wei Wang
Neural network robustness has recently been highlighted by the existence of adversarial examples.
no code implementations • 15 May 2019 • Yan-li Liu, Chu-min Li, Hua Jiang, Kun He
Branch-and-bound (BnB) algorithms are widely used to solve combinatorial problems, and their performance crucially depends on the branching heuristic. In this work, we consider a typical problem of maximum common subgraph (MCS), and propose a branching heuristic inspired by reinforcement learning, with the goal of reaching a tree leaf as early as possible to greatly reduce the search tree size. Extensive experiments show that our method is beneficial and outperforms the current best BnB algorithm for the MCS.
no code implementations • 10 May 2019 • Kun He, Wu Wang, Xiaosen Wang, John E. Hopcroft
In this work, we propose a new method for anchor word selection by associating the word co-occurrence probability with word similarity, and assuming that the most semantically distinct words are potential candidates for the anchor words.
no code implementations • 16 Apr 2019 • Xiaosen Wang, Kun He, Chuanbiao Song, Li-Wei Wang, John E. Hopcroft
In this way, AT-GAN can learn the distribution of adversarial examples that is very close to the distribution of real data.
no code implementations • 13 Mar 2019 • Yan Jin, John H. Drake, Una Benlic, Kun He
The maximum k-plex problem is a computationally complex problem, which emerged from graph-theoretic social network studies.
no code implementations • 13 Nov 2018 • Mumtaz A. Kaloi, Kun He
In this work we propose a technique called GDCNN (Gender Determination with Convolutional Neural Networks), in which left-hand radiographs of children across a wide range of ages, from 1 month to 18 years, are examined to determine the gender.
1 code implementation • NeurIPS 2018 • Liwei Wang, Lunjia Hu, Jiayuan Gu, Yue Wu, Zhiqiang Hu, Kun He, John Hopcroft
The theory gives a complete characterization of the structure of neuron activation subspace matches, where the core concepts are maximum match and simple match which describe the overall and the finest similarity between sets of neurons in two networks respectively.
2 code implementations • ICLR 2019 • Chuanbiao Song, Kun He, Li-Wei Wang, John E. Hopcroft
Our intuition is to regard the adversarial training on FGSM adversary as a domain adaption task with limited number of target domain samples.
no code implementations • 10 Aug 2018 • Zhen-Xing Xu, Kun He, Chu-min Li
Although Path-Relinking is an effective local search method for many combinatorial optimization problems, its application is not straightforward in solving the MAX-SAT, an optimization variant of the satisfiability problem (SAT) that has many real-world applications and has gained more and more attention in academia and industry.
2 code implementations • ECCV 2018 • Fatih Cakir, Kun He, Stan Sclaroff
We propose theoretical and empirical improvements for two-stage hashing methods.
no code implementations • CVPR 2018 • Kun He, Yan Lu, Stan Sclaroff
In this paper, we improve the learning of local feature descriptors by optimizing the performance of descriptor matching, which is a common stage that follows descriptor extraction in local feature based pipelines, and can be formulated as nearest neighbor retrieval.
1 code implementation • 13 Apr 2018 • Huijuan Xu, Kun He, Bryan A. Plummer, Leonid Sigal, Stan Sclaroff, Kate Saenko
To capture the inherent structures present in both text and video, we introduce a multilevel model that integrates vision and language features earlier and more tightly than prior work.
2 code implementations • 2 Mar 2018 • Fatih Cakir, Kun He, Sarah Adel Bargal, Stan Sclaroff
Binary vector embeddings enable fast nearest neighbor retrieval in large databases of high-dimensional objects, and play an important role in many practical applications, such as image and video retrieval.
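The retrieval pattern binary embeddings enable can be illustrated with the classic sign-of-random-projection baseline; this is shown only to motivate Hamming-space search and is not the hashing method proposed here:

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_embed(X, W):
    """Sign-of-random-projection binary codes (a classic LSH baseline,
    shown only to illustrate binary embeddings, not the paper's method)."""
    return (X @ W > 0).astype(np.uint8)

def hamming_nn(query_code, db_codes):
    """Index of the database code closest in Hamming distance."""
    return int(np.argmin((db_codes != query_code).sum(axis=1)))

W = rng.standard_normal((16, 64))             # random projection to 64 bits
db = rng.standard_normal((100, 16))           # toy database of 100 vectors
codes = binary_embed(db, W)
q = db[42] + 0.01 * rng.standard_normal(16)   # near-duplicate of item 42
nearest = hamming_nn(binary_embed(q[None, :], W)[0], codes)
```

Because Hamming distances reduce to XOR-and-popcount, this lookup is far cheaper than exact Euclidean search, which is the motivation shared by learned hashing methods.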
2 code implementations • 13 Dec 2017 • Kun He, Pan Shi, David Bindel, John E. Hopcroft
Community detection is an important information mining task in many fields including computer science, social sciences, biology and physics.
Social and Information Networks
no code implementations • 5 Nov 2017 • Mengxiao Zhang, Wangquan Wu, Yanren Zhang, Kun He, Tao Yu, Huan Long, John E. Hopcroft
Our results show that the dimensions of different categories are close to each other and decline quickly along the convolutional layers and fully connected layers.
1 code implementation • CVPR 2018 • Kun He, Fatih Cakir, Sarah Adel Bargal, Stan Sclaroff
Hashing, or learning binary embeddings of data, is frequently used in nearest neighbor retrieval.
no code implementations • 30 Apr 2017 • Danna Gurari, Kun He, Bo Xiong, Jianming Zhang, Mehrnoosh Sameki, Suyog Dutt Jain, Stan Sclaroff, Margrit Betke, Kristen Grauman
We propose the ambiguity problem for the foreground object segmentation task and motivate the importance of estimating and accounting for this ambiguity when designing vision systems.
no code implementations • 2 Apr 2017 • Kun He, Jingbo Wang, Haochuan Li, Yao Shu, Mengxiao Zhang, Man Zhu, Li-Wei Wang, John E. Hopcroft
Toward a deeper understanding of the inner workings of deep neural networks, we investigate CNNs (convolutional neural networks) using DCNs (deconvolutional networks) and a randomization technique, and gain new insights into the intrinsic properties of this network architecture.
1 code implementation • ICCV 2017 • Fatih Cakir, Kun He, Sarah Adel Bargal, Stan Sclaroff
Learning-based hashing methods are widely used for nearest neighbor retrieval, and recently, online hashing methods have demonstrated good performance-complexity trade-offs by learning hash functions from streaming data.
4 code implementations • 24 Feb 2017 • Kun He, Yingru Li, Sucheta Soundarajan, John E. Hopcroft
We introduce a new paradigm that is important for community detection in the realm of network analysis.
1 code implementation • NeurIPS 2016 • Kun He, Yan Wang, John Hopcroft
To our knowledge this is the first demonstration of image representations using untrained deep neural networks.
1 code implementation • 25 Sep 2015 • Yixuan Li, Kun He, David Bindel, John Hopcroft
Nowadays, as we often explore networks with billions of vertices and find communities of size hundreds, it is crucial to shift our attention from macroscopic structure to microscopic structure when dealing with large networks.
Social and Information Networks; Data Structures and Algorithms; Physics and Society; G.2.2; H.3.3
no code implementations • 25 Jun 2015 • Sobhan Naderi Parizi, Kun He, Reza Aghajani, Stan Sclaroff, Pedro Felzenszwalb
Majorization-Minimization (MM) is a powerful iterative procedure for optimizing non-convex functions that works by optimizing a sequence of bounds on the function.
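As a one-dimensional illustration of the MM principle, the following toy minimizes f(x) = (x − y)² + λ|x| by repeatedly minimizing a quadratic upper bound on |x| (a standard majorizer, tight at the current iterate); the function is illustrative only, not the paper's algorithm:

```python
def mm_lasso_1d(y=3.0, lam=2.0, iters=100, x0=1.0):
    """Majorization-Minimization for f(x) = (x - y)^2 + lam * |x|.
    |x| is majorized at x_k by x^2 / (2|x_k|) + |x_k| / 2, so each MM step
    minimizes a quadratic surrogate in closed form. A toy illustration."""
    x = x0
    for _ in range(iters):
        # argmin of (x - y)^2 + lam * x^2 / (2|x_k|), in closed form
        x = y * 2 * abs(x) / (2 * abs(x) + lam)
    return x

x_star = mm_lasso_1d()
# the iterates converge to the soft-threshold solution y - lam/2 = 2
```

Each surrogate upper-bounds f and touches it at the current iterate, so every MM step is guaranteed not to increase f; that monotone-descent guarantee is the core appeal of the framework for non-convex objectives.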
2 code implementations • 23 Jan 2015 • Kun He, Sucheta Soundarajan, Xuezhi Cao, John Hopcroft, Menglong Huang
Additionally, on both real and synthetic networks containing a hidden ground-truth community structure, HICODE uncovers this structure better than any baseline algorithms that we compared against.
Social and Information Networks; Physics and Society