AI Security Education Gets a Boost to Combat Growing Cyber Threats

As AI rapidly integrates into critical systems, security vulnerabilities pose a major threat. Researchers are creating hands-on educational tools to train the next generation in protecting AI against adversarial attacks, ensuring a more secure technological future.

In modern software, the train of security vulnerabilities is headed at full steam toward the artificial intelligence track, and experts from NJIT, Rutgers University, and Temple University are developing new educational materials intended to prevent a collision.

NJIT's Cong Shi, an assistant professor in the Ying Wu College of Computing, is a principal investigator on Education on Securing AI System under Adversarial Machine Learning Attacks, a $127,000 grant from the National Science Foundation. His prior research on the security of computer vision systems and voice assistants led him and collaborators Yingying Chen (Rutgers) and Yan Wang (Temple) to see that AI's fast and vast adoption, without proper education, could create massive risk.

Shi further explained why AI courses tend to lack security aspects. "I believe the main reason is the rapid pace at which AI technologies have evolved, combined with the huge focus on the benefits of benign applications, such as ChatGPT and other widely used AI models. As a result, most AI courses tend to prioritize teaching foundational concepts like model construction, optimization and evaluation using clean datasets. Unfortunately, this leaves out real-world scenarios where models are vulnerable to adversarial attacks during deployment or backdoor attacks during the training phase," he said. "AI security is still relatively new compared to traditional cybersecurity. While cybersecurity education has long focused on protecting data, networks and systems, AI security presents unique challenges - like adversarial examples and model poisoning - that educators may not yet be familiar with or ready to teach in a systematic way."

"This realization motivated us to contribute to educating the next generation of engineers and researchers, equipping them to develop secure and robust AI systems," Shi said.

The lessons will include group projects, laboratory work, and programming assignments, focusing on AI/ML topics such as computer vision, which covers image recognition and object detection, and voice assistants, which cover speaker identification and speech recognition. During the three-year project, the researchers will collect feedback from instructors and students, and they will work with another Temple professor, Yu Wang, who has expertise in educational evaluation. They anticipate challenges such as students' diverse backgrounds and the balance between simplicity and technical depth.
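As a hypothetical example of the kind of programming assignment those labs could include, the sketch below poisons a toy image dataset with a backdoor trigger, the training-time attack Shi contrasted with deployment-time adversarial examples. The dataset shape, trigger, poison rate, and target class are all illustrative assumptions, not details from the project.

```python
import numpy as np

def poison_dataset(images, labels, target_class=0, poison_rate=0.05, seed=0):
    """images: (N, H, W, C) floats in [0, 1]; labels: (N,) ints. Returns poisoned copies."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp a 3x3 white trigger in the bottom-right corner of each chosen image
    # and relabel it to the attacker's target class.
    images[idx, -3:, -3:, :] = 1.0
    labels[idx] = target_class
    return images, labels, idx
```

A model trained on such a set tends to behave normally on clean inputs but predicts the target class whenever the trigger appears, which is why backdoors are hard to catch with clean-data evaluation alone and make for instructive defense exercises.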

"These labs offer a perfect opportunity for students to design and experiment with new types of attacks and explore innovative defense strategies. For example, students could work on designing adversarial perturbations embedded in music to hijack AI models in voice assistant systems. They can also explore mitigating physical attack challenges, such as varying attack distances and acoustic distortions," Shi continued. "Beyond image and audio domains, students can apply what they learn to secure AI models used in other areas, such as natural language processing like chatbots, Internet-of-things devices, and cyber-physical systems like smart homes, healthcare devices, and autonomous vehicles. Since the project includes modules on physical adversarial attacks, students and researchers can further investigate how environmental factors, like lighting, distance, and noise, affect attack success in real-world scenarios."

Project results will be shared with two NSF programs - CyberCorps: Scholarship for Service at the university level and GenCyber at the K-12 level - as well as through online platforms like GitHub, Launchpad, and SourceForge.

"Some colleagues [at NJIT] have expressed interest in utilizing the proposed modules, especially the hands-on labs and projects, to enhance their teaching," Shi said. "There is a growing curiosity among students about adversarial ML and defense mechanisms."

Looking forward, he said, "I see AI security education expanding significantly over the next decade. It will likely evolve into a more cross-disciplinary field, incorporating elements of cybersecurity, machine learning, computer vision, natural language processing and even ethics. Students will not only need to master technical skills but also understand the broader societal and ethical implications of deploying secure AI systems."

"Additionally, as AI becomes more pervasive across industries, AI security will become a standard component of AI and cybersecurity courses at all levels - from K-12 to advanced graduate programs. We'll likely see more emphasis on hands-on learning, with practical labs and projects becoming an essential part of AI security education to prepare students for real-world challenges."
