GrrCon 2016 – Machine Duping: Pwning Deep Learning Systems

Deep learning and neural networks have gained incredible popularity in recent years. The technology has grown to be the most talked-about and least well-understood branch of machine learning. Successful applications of deep learning in image and speech recognition have kickstarted movements to integrate it into critical fields like medical imaging and self-driving cars. [1] In the security field, deep learning has shown good experimental results in malware/anomaly detection, [2] APT protection, spam/phishing detection, and traffic identification. However, most deep learning systems are not designed with security and resiliency in mind, and can be duped by any attacker with a good understanding of the system. [3] The efficacy of applications using machine learning should be measured not only by precision and recall, but also by their malleability in an adversarial setting.

In this talk, we will dive into popular deep learning software and show how it can be tampered with to do what you want it to do, while avoiding detection by system administrators. Besides giving a high-level overview of deep learning and its inherent shortcomings in an adversarial setting, we will focus on tampering with real systems to expose real weaknesses in critical applications built around deep learning. In particular, this demo-driven session will focus on manipulating image recognition, speech recognition, and phishing detection systems built with deep learning at their core. By discussing the defensive measures that should be put in place to prevent this class of attacks, we hope to address the hype behind deep learning from a security perspective and look toward a more resilient future for the technology, where developers can more safely use it in critical applications.

Citations:

[1] Ashlee Vance. 2015. The First Person to Hack the iPhone Built a Self-Driving Car. In His Garage. http://www.bloomberg.com/features/2015-george-hotz-self-driving-car/

[2] Zhenlong Yuan, Yongqiang Lu, Zhaoguo Wang, and Yibo Xue. 2014. Droid-Sec: Deep Learning in Android Malware Detection. SIGCOMM Comput. Commun. Rev. 44, 4 (August 2014), 371-372. DOI: http://dx.doi.org/10.1145/2740070.2631434

[3] Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. 2016. Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples. arXiv:1602.02697v2
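The "duping" attacks referenced above rest on adversarial examples: small, deliberately crafted input perturbations that flip a model's prediction. As a minimal sketch of the underlying idea, the fast gradient sign method (FGSM) perturbs each input feature by a small step in the direction that increases the model's loss. The model, weights, and input below are toy assumptions for illustration, not the systems demoed in the talk:

```python
import math

# Toy illustration of the fast gradient sign method (FGSM) against a
# single logistic unit. All weights and inputs are hypothetical.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm_perturb(x, w, b, y_true, epsilon):
    """Shift x by epsilon along the sign of the loss gradient w.r.t. x."""
    p = sigmoid(dot(w, x) + b)                  # model's predicted probability
    grad_x = [(p - y_true) * wi for wi in w]    # d(cross-entropy)/dx for this model
    return [xi + epsilon * sign(gi) for xi, gi in zip(x, grad_x)]

w = [2.0, -1.0, 0.5]     # toy model: strongly trusts the first feature
b = 0.0
x = [1.0, 0.0, 0.0]      # clean input, confidently classified positive

x_adv = fgsm_perturb(x, w, b, y_true=1.0, epsilon=0.6)
p_clean = sigmoid(dot(w, x) + b)      # ~0.88: "positive"
p_adv = sigmoid(dot(w, x_adv) + b)    # < 0.5: prediction flipped
```

Even with the perturbation bounded to 0.6 per feature, the classifier's confident positive prediction flips to negative; against deep image models the same principle applies with perturbations small enough to be imperceptible to a human.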
For More Information Please Visit: http://grrcon.com/
http://www.irongeek.com/i.php?page=videos/grrcon2016/mainlist

Source: SecurityTube.Net @ March 5, 2017 at 06:37PM
