Both empirical and theoretical analyses demonstrate that the MDL loss simultaneously improves the robustness and the generalization of naturally trained models.
Under strong adversarial attacks against deep neural networks (DNNs), the output of a DNN is misclassified if and only if its last feature layer is completely destroyed by the adversarial samples; our studies found that, even under these attacks, the middle feature layers of the DNN can still extract effective features of the original, correct category.
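To make this claim concrete, here is a minimal sketch of how one might probe intermediate-layer features under an attack. It assumes PyTorch, a torchvision ResNet-18 as a stand-in classifier, and a one-step FGSM attack; none of these choices come from the original text, and the model, batch, and labels are placeholders.

```python
# Minimal sketch (assumptions: PyTorch, torchvision ResNet-18, FGSM).
# Probes whether a middle layer's features survive an attack that is
# intended to flip the final prediction.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()  # untrained stand-in classifier

features = {}
def hook(name):
    def fn(module, inp, out):
        features[name] = out.detach()
    return fn

# Capture a middle feature layer and the last feature layer.
model.layer2.register_forward_hook(hook("middle"))
model.avgpool.register_forward_hook(hook("last"))

def fgsm(x, y, eps=0.03):
    """One-step FGSM: perturb x along the sign of the loss gradient."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

x = torch.randn(4, 3, 224, 224)   # placeholder batch
y = torch.randint(0, 1000, (4,))  # placeholder labels

model(x)
clean_mid = features["middle"].flatten(1)

x_adv = fgsm(x, y)
model(x_adv)
adv_mid = features["middle"].flatten(1)

# High similarity between clean and adversarial mid-layer features,
# despite a flipped output, would be consistent with the claim above.
print(F.cosine_similarity(clean_mid, adv_mid, dim=1))
```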
Due to the vulnerability of deep neural networks, black-box attacks have drawn great attention from the community.
Neural networks (NNs) are often leveraged to represent structural similarities among the potential outcomes (POs) of different treatment groups, in order to obtain better finite-sample estimates of treatment effects.
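The sentence does not specify an architecture; one common way to share structure across treatment groups is a shared representation with treatment-specific heads, in the style of TARNet. The sketch below assumes PyTorch and invented dimensions, and is an illustration rather than the paper's model.

```python
# Sketch of a shared-representation PO estimator (TARNet-style assumption).
import torch
import torch.nn as nn

class SharedPONet(nn.Module):
    def __init__(self, d_in, d_rep=64):
        super().__init__()
        # Shared representation encodes structure common to both groups.
        self.phi = nn.Sequential(nn.Linear(d_in, d_rep), nn.ReLU())
        # Treatment-specific heads predict each potential outcome.
        self.head0 = nn.Linear(d_rep, 1)  # PO under control
        self.head1 = nn.Linear(d_rep, 1)  # PO under treatment

    def forward(self, x):
        h = self.phi(x)
        return self.head0(h), self.head1(h)

net = SharedPONet(d_in=10)
x = torch.randn(8, 10)            # placeholder covariates
y0_hat, y1_hat = net(x)
cate_hat = y1_hat - y0_hat        # per-unit treatment-effect estimate
print(cate_hat.shape)
```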
To tackle this challenge, we first formalize the out-of-distribution (OOD) generalization problem as a constrained optimization problem, which we call Disentanglement-constrained Domain Generalization (DDG).
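The constraint used by DDG is not given here; as a generic illustration only, the following sketch solves a constrained objective by Lagrangian relaxation, where `disentangle_penalty` is a hypothetical placeholder for whatever disentanglement measure DDG actually constrains.

```python
# Generic sketch: minimize risk subject to penalty(z) <= eps, via one
# Lagrangian (dual-ascent) step. The penalty is a hypothetical stand-in.
import torch

def disentangle_penalty(z):
    # Hypothetical: penalize off-diagonal correlations between latents.
    zc = z - z.mean(0)
    cov = (zc.T @ zc) / (len(z) - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    return off_diag.pow(2).sum()

def constrained_step(risk, z, lam, eps=0.1):
    """One dual-ascent step on: min risk  s.t.  penalty(z) <= eps."""
    c = disentangle_penalty(z) - eps                 # constraint violation
    loss = risk + lam * c                            # Lagrangian
    new_lam = (lam + 0.01 * c.detach()).clamp(min=0.0)  # dual update
    return loss, new_lam

risk = torch.tensor(1.0, requires_grad=True)  # placeholder task risk
z = torch.randn(32, 8, requires_grad=True)    # placeholder latent codes
loss, lam = constrained_step(risk, z, lam=torch.tensor(0.5))
loss.backward()
print(loss.item(), lam.item())
```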
Machine learning has demonstrated remarkable prediction accuracy on i.i.d. data, but that accuracy often drops when models are tested on data from another distribution.
In this paper, we develop a general framework for interpretable natural language understanding that requires only a small set of human-annotated explanations for training.
How to discriminatively vectorize graphs is a fundamental challenge that has attracted increasing attention in recent years.
In searchable encryption, the cloud server might return invalid results to the data user in order to save computation cost, or for other reasons.