NJIT - New Jersey Institute of Technology

01/15/2025 | News release

Collaboration Between NJIT, Rutgers, Temple Adds Security to AI Education

In modern software, the train of security vulnerabilities is barreling down the artificial intelligence track, and experts from NJIT, Rutgers University and Temple University are developing new educational materials intended to prevent a collision.

NJIT's Cong Shi, assistant professor in Ying Wu College of Computing, is a principal investigator on the $127,000 National Science Foundation grant, Education on Securing AI System under Adversarial Machine Learning Attacks. His prior research on the security of computer vision systems and voice assistants led him and collaborators Yingying Chen (Rutgers) and Yan Wang (Temple) to recognize that AI's rapid and widespread adoption, without proper education, could create massive risk.

Shi further explained why AI courses tend to lack security aspects. "I believe the main reason is the rapid pace at which AI technologies have evolved, combined with the huge focus on the benefits of benign applications, such as ChatGPT and other widely used AI models. As a result, most AI courses tend to prioritize teaching foundational concepts like model construction, optimization and evaluation using clean datasets. Unfortunately, this leaves out real-world scenarios where models are vulnerable to adversarial attacks during deployment or backdoor attacks during the training phase," he said. "AI security is still relatively new compared to traditional cybersecurity. While cybersecurity education has long focused on protecting data, networks and systems, AI security presents unique challenges - like adversarial examples and model poisoning - that educators may not yet be familiar with or ready to teach in a systematic way."
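To make the distinction concrete, the adversarial examples Shi mentions are typically produced by nudging an input in the direction that most increases a trained model's loss. The following is a minimal sketch of one classic technique of this kind, the fast gradient sign method; it assumes PyTorch, and the model, image and label objects are placeholders rather than anything from the project's actual course materials.

```python
# Minimal FGSM sketch (illustrative only): perturb an input image so a trained
# classifier misclassifies it while the change stays visually small.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return an adversarial copy of `image` using the fast gradient sign method."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clip to valid pixel range.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0, 1).detach()
```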

"This realization motivated us to contribute to educating the next generation of engineers and researchers, equipping them to develop secure and robust AI systems," Shi said.

The lessons will include group projects, laboratory work and programming assignments, focusing on AI/ML topics such as computer vision, which includes image recognition and object detection, and on voice assistants, which include speaker identification and speech recognition. During the three-year project, the researchers will collect feedback from instructors and students, and they will work with another Temple professor, Yu Wang, who has expertise in educational evaluation. They anticipate challenges such as students' diverse backgrounds and striking a balance between simplicity and technical depth.

"These labs offer a perfect opportunity for students to design and experiment with new types of attacks and explore innovative defense strategies. For example, students could work on designing adversarial perturbations embedded in music to hijack AI models in voice assistant systems. They can also explore mitigating physical attack challenges, such as varying attack distances and acoustic distortions," Shi continued. "Beyond image and audio domains, students can apply what they learn to secure AI models used in other areas, such as natural language processing like chatbots, Internet-of-things devices, and cyber-physical systems like smart homes, healthcare devices, and autonomous vehicles. Since the project includes modules on physical adversarial attacks, students and researchers can further investigate how environmental factors, like lighting, distance, and noise, affect attack success in real-world scenarios."

Project results will be shared with two NSF programs - CyberCorps: Scholarship for Service at the university level, and GenCyber at the K-12 level - and also through online platforms like GitHub, Launchpad and SourceForge.

"Some colleagues [at NJIT] have expressed interest in utilizing the proposed modules, especially the hands-on labs and projects, to enhance their teaching," Shi said. "There is a growing curiosity among students about adversarial ML and defense mechanisms."

Looking forward, he said, "I see AI security education expanding significantly over the next decade. It will likely evolve into a more cross-disciplinary field, incorporating elements of cybersecurity, machine learning, computer vision, natural language processing and even ethics. Students will not only need to master technical skills but also understand the broader societal and ethical implications of deploying secure AI systems."

"Additionally, as AI becomes more pervasive across industries, AI security will become a standard component of AI and cybersecurity courses at all levels - from K-12 to advanced graduate programs. We'll likely see more emphasis on hands-on learning, with practical labs and projects becoming an essential part of AI security education to prepare students for real-world challenges."