A Knife Cuts Both Ways – Attacks and Defenses of Deep Neural Networks
No.: 11
Access: attendees only
Updated: 2023-10-11 13:05:44
Invited Talk
Abstract
The flourishing Internet of Things (IoT) has rekindled on-premises computing, allowing data to be analyzed closer to its source. Neural architecture search, open-source deep neural network (DNN) model compilers, and commercially available toolkits have evolved to facilitate rapid development and deployment of Artificial Intelligence (AI) applications. This “model once, run optimized anywhere” paradigm shift in deep learning computation introduces new attack surfaces and threat models that are methodologically different from existing software-based attacks. Model integrity is a primary pillar of AI trust: it ensures that the system delivers and maintains the desired quality of service and remains free from unauthorized deliberate or inadvertent manipulation throughout the lifetime of its deployment. A superior, well-trained DNN classifier is not only intellectual property (IP) of high market value but also contains private and sensitive information. Unfortunately, existing DNN hardware implementations focus mainly on throughput and energy-efficiency optimization, which can unintentionally introduce exploitable vulnerabilities. The situation is aggravated by the trend of outsourcing model training, renting cloud computing platforms, and deploying partially or fully trained third-party models for AI application development and edge inference. This talk will present some of our research on the attacks and defenses of DNNs.