Seminar Title: Vulnerability, Misuse and Protection of Deep Networks
Time: Monday, April 22, 2024, 15:00-16:30
Venue: Room 106, Industrial Center, East Campus
About the Speaker:
Dr. Adams Wai-Kin Kong received his Ph.D. degree from the University of Waterloo, Canada. He is currently an associate professor at Nanyang Technological University, Singapore, and the director of the Master of Science in Artificial Intelligence programme. His research has been published in major AI conferences and journals. One of his papers was selected as a spotlight paper by IEEE Transactions on Pattern Analysis and Machine Intelligence, and another was selected as an Honorable Mention by Pattern Recognition. In addition, one of his papers published in CVPR was selected for oral presentation. His Ph.D. students received Best Student Paper Awards at the IEEE Fifth International Conference on Biometrics: Theory, Applications and Systems (BTAS) 2012 and the IEEE International Conference on Bioinformatics and Bioengineering 2013, as well as other awards such as the Google Anita Borg Memorial Scholarship. He has served as an associate editor for IEEE Transactions on Information Forensics and Security, as an area chair of AI and computer vision conferences, including ECCV and IJCAI, and as an expert consultant for cross-border legal cases. He is listed in Stanford University's Top 2% Scientists study. In 2022, Dr. Kong was nominated for the MSc in Business Analytics Teacher of the Year Award and received a faculty award from his school. His recent research interests include pattern recognition, deep learning, and their applications in computational simulation, healthcare, and biometrics.
Abstract:
In recent years, deep networks have demonstrated powerful capabilities, often outperforming human experts, as exemplified by AlphaGo. Many have been deployed in real applications, creating great business value. Because of their great impact on human beings and society, many are concerned about their potential risks. In this talk, the speaker will first give some background on adversarial attacks, including black-box and white-box attacks, and on attribution methods, including integrated gradients. He will then discuss how to mislead object detectors and perform a strong transferable attack, and will further discuss issues around the misuse of image inpainting networks and image-based generative models. Taking visible watermarks as an example, he will present a method to protect watermarks against the misuse of inpainting networks. Furthermore, he will demonstrate how to remove toxic concepts from diffusion networks. Finally, he will briefly mention some theoretical works on attribution protection.
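For readers unfamiliar with the white-box attack setting mentioned in the abstract, the short PyTorch sketch below shows a single-step gradient-sign perturbation (FGSM). It is a generic textbook illustration, not the speaker's method; the model, image, and label objects are hypothetical placeholders, and epsilon is an assumed perturbation budget.

    # A minimal FGSM sketch (illustrative only, not the speaker's method).
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=8 / 255):
        # Track gradients on a detached copy of the input image.
        image = image.clone().detach().requires_grad_(True)
        # Classification loss of the model's prediction w.r.t. the true label.
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # One step in the direction that increases the loss (sign of the gradient).
        adv = image + epsilon * image.grad.sign()
        # Keep the perturbed image in the valid pixel range.
        return adv.clamp(0.0, 1.0).detach()

Given a trained classifier and a normalized input batch, fgsm_attack(model, image, label) would return a perturbed batch that tends to be misclassified; iterative and black-box attacks discussed in the talk build on the same idea of optimizing the input against the model.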
Guangdong Provincial Key Laboratory of Intellectual Property Big Data
April 19, 2024