Adversarial instances are malicious inputs designed to fool machine learning models. In particular, motivated and sophisticated attackers intentionally craft adversarial instances to evade classifiers that have been trained to detect security violations, such as malware detectors. While existing approaches provide effective solutions for detecting and defending against adversarial samples, they fail to detect such samples when the data are encrypted. In this study, a novel framework is proposed that employs a statistical test to detect adversarial instances when the data under analysis are encrypted. An experimental evaluation of our approach shows its practical feasibility in terms of computation cost.
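For illustration only, the sketch below shows a generic statistical two-sample test (kernel MMD with a permutation p-value) of the kind commonly used to flag adversarial inputs by comparing a suspect batch against clean reference data. This is not the paper's actual framework: all function names and parameters here are hypothetical, the kernel choice is an assumption, and the homomorphic-encryption layer that the paper applies on top of the test is omitted.

# Hypothetical sketch: kernel-MMD permutation test for detecting a distribution
# shift (e.g., adversarial perturbation) between a clean batch and a suspect batch.
# Assumption: plaintext feature vectors; the encrypted setting is not modeled here.
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """RBF kernel matrix between the rows of x and the rows of y."""
    sq_dists = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2 * x @ y.T
    return np.exp(-gamma * sq_dists)

def mmd2(x, y, gamma=1.0):
    """Biased estimate of the squared Maximum Mean Discrepancy between x and y."""
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2 * rbf_kernel(x, y, gamma).mean())

def mmd_permutation_test(clean, suspect, gamma=1.0, n_perm=500, seed=0):
    """Permutation test: a small p-value suggests the suspect batch comes from a
    different distribution than the clean reference (possibly adversarial)."""
    rng = np.random.default_rng(seed)
    observed = mmd2(clean, suspect, gamma)
    pooled = np.vstack([clean, suspect])
    n = len(clean)
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        a, b = pooled[perm[:n]], pooled[perm[n:]]
        if mmd2(a, b, gamma) >= observed:
            exceed += 1
    return observed, (exceed + 1) / (n_perm + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    clean = rng.normal(0.0, 1.0, size=(100, 20))           # clean reference features
    suspect = rng.normal(0.0, 1.0, size=(100, 20)) + 0.5   # shifted (perturbed) batch
    stat, p = mmd_permutation_test(clean, suspect)
    print(f"MMD^2 = {stat:.4f}, permutation p-value = {p:.4f}")

In the paper's setting, a test statistic of this kind would have to be evaluated over homomorphically encrypted data, which is where the reported computation cost arises; the plaintext sketch above only conveys the statistical-test idea.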

Original language: English
Title of host publication: Security and Trust Management - 15th International Workshop, STM 2019, Proceedings
Editors: Sjouke Mauw, Mauro Conti
Publisher: Springer
Pages: 71-88
Number of pages: 18
ISBN (Print): 9783030315108
DOIs
Publication status: Published - 2019
Event: 15th International Workshop on Security and Trust Management, STM 2019 held in conjunction with the 24th European Symposium on Research in Computer Security, ESORICS 2019 - Luxembourg, Luxembourg
Duration: 26 Sep 2019 - 27 Sep 2019

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11738 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 15th International Workshop on Security and Trust Management, STM 2019 held in conjunction with the 24th European Symposium on Research in Computer Security, ESORICS 2019
Country: Luxembourg
City: Luxembourg
Period: 26/09/19 - 27/09/19

Research areas

• Adversarial machine learning, Homomorphic encryption, Privacy
