Adversarial Attacks and Defenses in Healthcare

Prof. Jimeng Sun, Aug 2017 - Present

Deep Learning, as well as traditional Machine Learning techniques, is being applied successfully in many healthcare applications, such as personalized treatment recommendation and early prediction of disease.

On the other hand, it has recently been shown that Neural Network based classifiers, as well as other algorithms such as Support Vector Machines and Logistic Regression, are vulnerable to adversarial perturbations: changes too small for a human to detect, yet capable of fooling a classifier into producing wrong results at test time.

Previous work has shown that relatively simple methods such as the Fast Gradient Sign Method (FGSM) and the Jacobian-based Saliency Map Approach (JSMA) can craft highly effective adversarial attacks in both white-box and black-box settings.
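As a concrete illustration, FGSM perturbs an input x by a small step epsilon in the direction of the sign of the loss gradient: x_adv = x + epsilon * sign(∇x J(θ, x, y)). Below is a minimal FGSM sketch in PyTorch; the linear model, feature size, and random input are placeholder assumptions for illustration, not the models or data used in this project.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon):
    """Craft x_adv = x + epsilon * sign(grad_x loss) for one input/label pair."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step each feature in the direction that increases the loss the most.
    return (x + epsilon * x.grad.sign()).detach()

# Toy usage: a linear classifier on a random 10-feature record (hypothetical).
model = nn.Linear(10, 2)
x = torch.randn(1, 10)   # one input with 10 features
y = torch.tensor([1])    # its true label
x_adv = fgsm_attack(model, x, y, epsilon=0.05)
```

A larger epsilon generally makes the attack more effective, but also makes the perturbation easier to detect.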

We are currently investigating whether these methods remain effective in the healthcare domain, and are developing both improved attacks and possible defenses. We are also developing adversarial attacks on sequential data, which have not been well studied so far.
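To make the sequential setting concrete, the sketch below applies the same gradient-sign idea to a toy GRU classifier over a time series (e.g., a sequence of clinical measurements). The architecture, dimensions, and unconstrained per-step perturbation are assumptions for illustration only, not the attack under development in this project.

```python
import torch
import torch.nn as nn

class SeqClassifier(nn.Module):
    """Toy GRU classifier over a (batch, time, features) sequence."""
    def __init__(self, n_features=5, hidden=16, n_classes=2):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        _, h = self.rnn(x)      # final hidden state
        return self.head(h[-1])

model = SeqClassifier()
x = torch.randn(1, 20, 5)       # one sequence: 20 time steps, 5 features
y = torch.tensor([1])           # its true label

x = x.clone().requires_grad_(True)
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
# Perturb every time step in the sign of its gradient; a realistic attack
# on sequences would likely constrain which steps and features can change.
x_adv = (x + 0.05 * x.grad.sign()).detach()
```

Unlike the image setting, sequential healthcare data impose structural constraints (ordering, plausible value ranges, irregular sampling), which is one reason attacks on such data remain an open problem.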