
Hardwear.io Webinar

PRACTICAL HARDWARE ATTACKS ON DEEP LEARNING

By Sanghyun Hong

Ph.D. Candidate, Maryland Cybersecurity Center, University of Maryland-College Park

Date & Time: 2 March 2021, 14:00 CET







Talk Title:

Practical Hardware Attacks on Deep Learning

Abstract:

The widespread adoption of machine learning (ML) incentivizes potential adversaries who wish to manipulate systems that include ML components. Consequently, research in adversarial ML studies attack surfaces such as predictions (manipulated by adversarial examples) or models (manipulated by malicious training data). However, most prior work treats ML as an isolated concept and overlooks the security threats posed by practical hardware attacks such as fault injection or side-channel leakage.

In this talk, we will present a new perspective on these threats: we view ML as a computational tool running on hardware, which is itself a potentially vulnerable attack surface. We will then introduce our emerging research on the vulnerabilities of ML models to practical hardware attacks. First, we will review the impact of a well-studied fault-injection attack, Rowhammer. Second, we will discuss the impact of information-leakage attacks, such as side-channel attacks. These attacks can inflict unexpected damage and ultimately shed new light on the dangers of hardware-based attack vectors. We will conclude by emphasizing that the vulnerability of ML to hardware attacks remains an under-studied topic; thus, we encourage the community to re-examine the security properties guaranteed by prior work from this new angle.
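To give a rough sense of why a single Rowhammer-induced bit flip matters to a neural network, the short Python sketch below (a hypothetical illustration, not material from the talk) flips the most significant exponent bit in the IEEE-754 float32 encoding of a weight; the corrupted value grows by many orders of magnitude, and even one such weight can dominate a network's output.

    import struct

    def flip_bit(value, bit):
        """Flip one bit in the IEEE-754 float32 encoding of `value`."""
        (as_int,) = struct.unpack("<I", struct.pack("<f", value))
        return struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))[0]

    weight = 0.05                      # a typical small network weight (hypothetical)
    corrupted = flip_bit(weight, 30)   # bit 30 is the most significant exponent bit
    print(weight, "->", corrupted)     # roughly 0.05 -> 1.7e+37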

You can find more about our research at http://hardwarefail.ml/.


Speaker Bio:

Sanghyun Hong is a Ph.D. candidate in the Maryland Cybersecurity Center (MC2) at the University of Maryland-College Park (UMD), advised by Prof. Tudor Dumitras. Sanghyun's research interests lie in computer security and machine learning (ML). In his dissertation research, he exposed the vulnerabilities of deep learning systems to practical hardware attacks. His research has been published in premier security and ML conferences: USENIX Security, ICLR, ICML, and NeurIPS. He was also an invited speaker at USENIX Enigma 2021. He is a recipient of the Ann G. Wylie Dissertation Fellowship and is currently a Future Faculty Fellow in the A. James Clark School of Engineering at UMD. You can find more about Sanghyun at https://secure-ai.systems/.