We investigate the applicability of fine-tuning (i.e., taking a model already trained on a large generic corpus and retraining it for a specific task).
A generated sentence is verifiable if it can be corroborated or disproved by Wikipedia, and we find that the verifiability of generated text strongly depends on the decoding strategy.
We report results from a quantitative and qualitative analysis showing that SAFE provides a noticeable performance improvement over previous solutions.
Furthermore, we report on a qualitative analysis of function embeddings.
An important task of malware analysis is the classification of malware samples into known families.
Cryptography and Security