Unless you deliberately steer clear of social media and the internet, you’ve probably come across ChatGPT, an AI model currently open for public testing. That availability lets cybersecurity professionals like me explore how it might be applied in our industry.
The widespread adoption of machine learning and artificial intelligence (ML/AI) in cybersecurity is relatively recent. One of the most prevalent applications is endpoint detection and response (EDR), where ML/AI uses behavior analytics to spot unusual activity: by comparing events against known-good behavior patterns, it can flag outliers and respond by terminating processes, locking accounts, triggering alerts, and more.
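To make that idea concrete, here is a toy sketch of the kind of baseline comparison such analytics perform. The counts, threshold, and response are purely illustrative, not any vendor’s actual detection logic:

```python
import statistics

# Toy behavior analytics: flag a host whose hourly process-creation count
# deviates sharply from its own historical baseline. All numbers are made up.
baseline = [42, 39, 45, 41, 44, 40, 43]  # past hourly counts for this host
current = 167                            # count observed this hour

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
z_score = (current - mean) / stdev

if z_score > 3:  # illustrative threshold for "unusual"
    print(f"ALERT: anomalous process activity (z={z_score:.1f})")
    # A real EDR agent might kill the process tree, lock the account,
    # or raise a SOC alert at this point.
```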
Whether employed to automate tasks or to help develop and refine new ideas, ML/AI can clearly strengthen security efforts and support a robust cybersecurity posture. Let’s explore some of the opportunities it presents.
AI and its potential in cybersecurity
As a junior analyst entering the cybersecurity field, my first role involved detecting fraud and security events in Splunk, a Security Information and Event Management (SIEM) tool. Splunk uses its own query language, the Search Processing Language (SPL), which grows more intricate as searches become more advanced.
That context matters when weighing what ChatGPT can do. The model has already picked up SPL, so it can turn a junior analyst’s prompt into a working query in seconds, significantly lowering the barrier to entry for people starting out in the field. For instance, if I asked ChatGPT to craft an alert for a brute-force attack against Active Directory, it would generate the alert and explain the logic behind the query. Because the result closely resembles a standard Security Operations Center (SOC) alert, it doubles as a guide for novice SOC analysts.
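The result looks something like the sketch below, which runs a brute-force search through Splunk’s Python SDK (splunklib). The index, sourcetype, failure threshold, and connection details are assumptions to adapt to your environment, not the exact query ChatGPT produces:

```python
import splunklib.client as client
import splunklib.results as results

# Hypothetical connection details for a lab Splunk instance.
service = client.connect(host="splunk.example.com", port=8089,
                         username="admin", password="changeme")

# SPL that flags possible AD brute force: many failed logons (Windows
# EventCode 4625) from one source within 15 minutes. The index, sourcetype,
# and threshold are placeholders.
spl = """search index=wineventlog sourcetype=WinEventLog:Security EventCode=4625 earliest=-15m
| stats count AS failures BY src_ip, user
| where failures > 10"""

job = service.jobs.oneshot(spl)
for event in results.ResultsReader(job):
    print(event)  # each result is a candidate brute-force source
```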
Another compelling application of ChatGPT is automating routine tasks for overburdened IT teams. In virtually any IT environment, dormant Active Directory accounts pile up, anywhere from a few dozen to several hundred. These accounts often hold elevated privileges, and while a comprehensive Privileged Access Management (PAM) strategy is the right answer, practical constraints may delay its rollout.
That gap often forces IT teams back to do-it-yourself (DIY) methods: system administrators write and schedule custom scripts to deactivate stale accounts. ChatGPT can now draft the logic to identify and disable accounts that have been inactive for the past 90 days. If a junior engineer can generate, understand, and schedule such a script, senior engineers and administrators reclaim valuable time for more advanced work.
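A sketch of that logic in Python using the ldap3 package might look like the following. The domain controller, service account, and 90-day cutoff are assumptions, and in practice you would review the list of matches before disabling anything:

```python
from datetime import datetime, timedelta, timezone
from ldap3 import Server, Connection, MODIFY_REPLACE

# Convert "90 days ago" into a Windows FILETIME value, the format AD uses
# for lastLogonTimestamp (100-nanosecond intervals since 1601-01-01).
cutoff = datetime.now(timezone.utc) - timedelta(days=90)
epoch = datetime(1601, 1, 1, tzinfo=timezone.utc)
filetime = int((cutoff - epoch).total_seconds() * 10_000_000)

# Hypothetical domain controller and service account.
server = Server("dc01.example.com")
conn = Connection(server, user="EXAMPLE\\svc_audit", password="...", auto_bind=True)

# Enabled user accounts whose last logon predates the cutoff. The bitwise
# matching rule with value 2 excludes accounts that are already disabled.
conn.search(
    "dc=example,dc=com",
    f"(&(objectClass=user)(lastLogonTimestamp<={filetime})"
    "(!(userAccountControl:1.2.840.113556.1.4.803:=2)))",
    attributes=["sAMAccountName", "userAccountControl"],
)

for entry in conn.entries:
    # Setting bit 0x2 (ACCOUNTDISABLE) on userAccountControl disables the account.
    uac = int(entry.userAccountControl.value) | 0x2
    conn.modify(entry.entry_dn, {"userAccountControl": [(MODIFY_REPLACE, [uac])]})
    print(f"Disabled {entry.sAMAccountName}")
```

One caveat worth noting: lastLogonTimestamp replicates lazily (it can lag real logons by up to 14 days), so the cutoff is approximate. That is exactly the kind of nuance a senior engineer should verify before scheduling the job.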
If you want a force multiplier for dynamic operations, ChatGPT can be harnessed for purple team exercises, in which red and blue teams collaborate to evaluate and improve an organization’s security posture. It can generate basic example scripts for penetration testers or debug scripts that aren’t behaving as expected.
One prevalent tactic in the MITRE ATT&CK framework is persistence. A common persistence technique, for instance, has attackers register their script or command to run at startup on a Windows machine. With a simple request, ChatGPT can produce a rudimentary but functional script that lets red team members establish this kind of persistence on a target host. While the red team uses it to enhance penetration tests, the blue team can study the output to learn what such tooling looks like and build more effective alerting around it.
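A minimal sketch of one common variant, a registry Run-key entry (MITRE ATT&CK T1547.001), is below. The value name and payload path are lab placeholders, and this should only ever run on systems you are authorized to test:

```python
import winreg

# Persistence via the current user's Run key: anything listed here is
# executed at logon. Value name and payload path are lab placeholders.
RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"
PAYLOAD = r"C:\labs\redteam\beacon.exe"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "UpdaterSvc", 0, winreg.REG_SZ, PAYLOAD)
```

For the blue team, this same snippet points at the detection: registry value writes under Run keys (surfaced, for example, by Sysmon Event ID 13) are a well-known signal worth alerting on.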
Benefits are plenty, but so are the limits
Certainly, when a situation or research scenario calls for analysis, AI can expedite the process or suggest alternative approaches. That is especially true in cybersecurity, where AI can automate tasks and prompt fresh insights, reducing the effort required to shore up defenses.
However, it’s essential to acknowledge AI’s inherent limitations, which stem from the complexity of human cognition and the real-world experience that informs decision-making. We cannot program AI to replicate the nuanced thinking of a human being; we can only use it as a support tool that analyzes data and generates output from the facts it is given. For all the remarkable progress AI has made in a short time, it still produces false positives that require a human to catch.
Nonetheless, one of AI’s most significant advantages is automating routine work, freeing human professionals to focus on more creative and time-intensive endeavors. For instance, I recently used ChatGPT to revamp a dark-web scraping tool I had built, cutting its completion time from days to mere hours.
Undoubtedly, AI represents a crucial asset that security practitioners can employ to alleviate the burden of repetitive and mundane tasks while also providing valuable guidance to less experienced professionals in the field.
As for AI’s drawbacks in informing human decisions, a legitimate concern arises whenever the word “automation” is invoked: the fear that technology will advance to the point of supplanting human roles. In security, there are also tangible worries that AI could be misused for malicious purposes. Unfortunately, the latter has already materialized, with threat actors using AI tools to craft increasingly convincing and effective phishing emails.
When it comes to decision-making, it is still early days for relying on AI alone to reach definitive conclusions in everyday practical scenarios. The human capacity for subjective, experience-informed judgment remains at the core of decision-making, and AI cannot yet replicate it.
In conclusion, while the various iterations of ChatGPT have generated considerable excitement since their preview last year, it is essential, as with any emerging technology, to address the concerns and uncertainties it has raised. I firmly believe that AI will not render jobs in information technology or cybersecurity obsolete. Quite the opposite: AI is an indispensable tool that empowers security practitioners to streamline repetitive tasks and enhance their capabilities.
As we witness the nascent stages of AI technology, even its creators appear to have only scratched the surface of its potential. The possibilities for how ChatGPT and other machine learning/AI models will reshape cybersecurity practices are boundless, and I eagerly anticipate the forthcoming innovations in this dynamic field.