A Knife Cuts Both Ways – Attacks and Defenses of Deep Neural Networks
Invited Talk

Talk starts: October 31, 2023, 15:00 (Asia/Shanghai)

Duration: 60 min

Session: [P] The FIRST Interdisciplinary Conference 2023, [P4] Plenary Session 4


Abstract
The flourishing Internet of Things (IoT) has rekindled interest in on-premises computing, allowing data to be analyzed closer to its source. Neural architecture search, open-source deep neural network (DNN) model compilers, and commercially available toolkits have evolved to facilitate the rapid development and deployment of Artificial Intelligence (AI) applications. This "model once, run optimized anywhere" paradigm shift in deep learning computation introduces new attack surfaces and threat models that are methodologically different from existing software-based attacks. Model integrity is a primary pillar of AI trust: it ensures that a system delivers and maintains the desired quality of service and remains free from unauthorized manipulation, whether deliberate or inadvertent, throughout the lifetime of its deployment. A superior, well-trained DNN classifier is not only intellectual property (IP) of high market value but also contains private and sensitive information. Unfortunately, existing DNN hardware implementations focus mainly on throughput and energy-efficiency optimization, which can unintentionally introduce exploitable vulnerabilities. The situation is aggravated by the trend of outsourcing model training, renting cloud computing platforms, and deploying partially or fully trained third-party models for AI application development and edge inference. This talk will present some of our research on attacks against, and defenses of, DNNs.
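For readers new to the area, the minimal sketch below illustrates one well-known class of DNN attack of the kind the abstract alludes to: the Fast Gradient Sign Method (FGSM) for crafting adversarial examples (Goodfellow et al., 2015). It is illustrative background only and is not drawn from the speaker's work; the model, labels, and step size eps are assumed placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, eps=0.03):
        """FGSM: perturb input x by eps in the sign of the loss gradient
        so that a well-trained classifier misclassifies a near-identical
        input. (Illustrative sketch; not from the talk.)"""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)   # loss w.r.t. true labels y
        loss.backward()                       # gradient of loss w.r.t. x
        x_adv = x + eps * x.grad.sign()       # one signed-gradient step
        return x_adv.clamp(0.0, 1.0).detach() # keep pixels in valid range

    # Example usage (hypothetical model and data):
    # x_adv = fgsm_attack(trained_classifier, images, labels, eps=8/255)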
Keywords
None
Speaker
Chip Hong CHANG
Nanyang Technological University, Singapore

IEEE Fellow, Professor at the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore

Important Dates
  • Conference dates: October 30–31, 2023

Organizer
Tianjin University