New AI Model Fights Malware with Machine-Generated Defense Rules

A security expert is building an AI-driven defense system to counter AI-generated malware, creating a rapid-response tool for analysts to identify and neutralize threats faster than ever.

Image Credit: solarseven / Shutterstock

Security was at the top of Dr. Marcus Botacin's mind when the assistant professor in the Department of Computer Science and Engineering first heard about large language models (LLMs) like ChatGPT. LLMs are a type of AI that can quickly craft text, and some, including ChatGPT, can also generate computer code. Botacin became concerned that attackers would use these capabilities to write massive amounts of malware rapidly.

"When you're a security researcher (or security paranoid), you see new technology and think, 'What might go wrong? How can people abuse this kind of thing?'" Botacin said.

Beginning this year, Botacin plans to develop an LLM to address this security threat. He compares his project to building a smaller, security-focused version of ChatGPT. 

"The idea is to fight with the same weapons as the attackers," Botacin said. "If attackers use LLMs to create millions of malwares at scale, we want to create millions of rules to defend at scale." 

Malware often displays unique patterns that can be used as signatures, like fingerprints, to identify it. Botacin plans for his LLM to use these signatures to identify malware and to automatically write rules that defend against it. Currently, human analysts write these rules, a task that is time-consuming and requires substantial experience, making it difficult to keep pace with attackers who use AI to generate large amounts of code almost instantaneously. Botacin wants his LLM to be a tool analysts can use to complement their skills and identify malware faster and more accurately.
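
To make the idea concrete, here is a minimal sketch in Python of what a signature rule and a match against a file's bytes might look like. The rule structure, byte patterns, and file name below are illustrative assumptions; the article does not describe the actual rule syntax Botacin's LLM will produce.

```python
# Hypothetical sketch: a signature-based detection rule of the kind an LLM
# might generate, matched against a file's raw bytes. The patterns and rule
# format are illustrative assumptions, not the project's actual output.

from dataclasses import dataclass
from pathlib import Path


@dataclass
class SignatureRule:
    name: str
    patterns: list[bytes]   # byte sequences acting as the malware's "fingerprint"
    min_matches: int = 1    # how many patterns must appear before the rule fires


def matches(rule: SignatureRule, data: bytes) -> bool:
    """Return True if enough of the rule's byte patterns appear in the data."""
    hits = sum(1 for pattern in rule.patterns if pattern in data)
    return hits >= rule.min_matches


# Example rule an analyst (or an LLM) might write for a hypothetical sample.
example_rule = SignatureRule(
    name="example_dropper_v1",
    patterns=[b"CreateRemoteThread", b"cmd.exe /c ", b"\x4d\x5a\x90\x00"],
    min_matches=2,
)

if __name__ == "__main__":
    sample = Path("suspicious.bin")   # placeholder file name
    if sample.exists() and matches(example_rule, sample.read_bytes()):
        print(f"{sample} matched rule {example_rule.name}")
```

A production rule language such as YARA would express the same idea more compactly; the point here is only that a signature is a reusable, machine-checkable fingerprint, which is what makes it feasible for an LLM to generate rules at scale.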

"The idea is, of course, not to replace the analyst but to leave the analyst free to think-to guide the machine and then let the machine do the heavy work for the analyst," Botacin said. 

Botacin is still deciding on the format of the software interface for his LLM. It may be a website or source code that people can download, but it will be available to the public. Though it could be used preventatively, Botacin anticipates that analysts will use this LLM for incident response. For example, an analyst could run the LLM on their laptop, bring it with them to a company, and use it to search network computers for malware signatures. 
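
As a rough illustration of that incident-response workflow, the hypothetical Python sketch below sweeps a directory tree for known byte-pattern signatures. The scan root and the signature table are placeholders for illustration, not part of Botacin's tool.

```python
# Hypothetical incident-response sweep: walk a directory tree from a laptop
# and flag files containing known malicious byte patterns. The root path and
# the signature table are placeholders for illustration.

from pathlib import Path

SIGNATURES = {
    "example_dropper_v1": b"CreateRemoteThread",   # illustrative pattern only
    "example_stealer_v2": b"cmd.exe /c ",
}


def sweep(root: Path) -> None:
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        try:
            data = path.read_bytes()
        except OSError:
            continue  # skip unreadable files instead of aborting the sweep
        for name, pattern in SIGNATURES.items():
            if pattern in data:
                print(f"ALERT: {path} matched signature {name}")


if __name__ == "__main__":
    sweep(Path("/mnt/target_share"))   # placeholder for a mounted network share
```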

This project aligns with Botacin's other ongoing research, in which he integrates malware detection into computer hardware as a preventative approach. 

To make the LLM small enough to run on a laptop ("a ChatGPT that runs in your pocket," as Botacin puts it), extensive training will be required. Conducting more training during development allows for a smaller final product. Botacin has access to a cluster of graphics processing units (GPUs) that he will use to train the LLM. GPUs are well suited to training LLMs because they can process large amounts of data in parallel.
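
The article does not name a training framework or base model, but as a hedged sketch of what a single GPU training step on rule-writing text might look like, the example below uses PyTorch and Hugging Face Transformers with a small placeholder model; every model name, hyperparameter, and training string in it is an assumption for illustration.

```python
# Rough sketch of one fine-tuning step for a small causal language model on a GPU.
# The framework (PyTorch + Hugging Face Transformers), model name, learning rate,
# and training text are assumptions; the article does not specify any of them.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"   # use a GPU when present

model_name = "distilgpt2"   # placeholder small model, not the project's actual base
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)

# Placeholder training example: an illustrative, rule-like string.
text = 'rule example_dropper_v1 { strings: $a = "CreateRemoteThread" condition: $a }'
batch = tokenizer(text, return_tensors="pt").to(device)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
outputs = model(**batch, labels=batch["input_ids"])   # causal LM loss on the same tokens
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
print(f"one training step on {device}, loss = {outputs.loss.item():.3f}")
```

In practice, many such steps over a large corpus would run across the GPU cluster, which is where the ability to process batches of data in parallel pays off.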

The Laboratory for Physical Sciences is Botacin's scientific partner on this research. He has been awarded a $150,000 grant to complete the project, which will fund doctoral and master's students in his lab.
