ChatGPT is about to revolutionize cybersecurity

Join top executives in San Francisco on July 11-12 to hear how leaders are integrating and optimizing AI investments for success. Learn more

Unless you deliberately avoid social media or the internet altogether, you’ve likely heard of a new AI model called ChatGPT, which is currently open to the public for testing. This allows cybersecurity professionals like me to see how it could be useful to our industry.

Widely available machine learning/artificial intelligence (ML/AI) tooling for cybersecurity professionals is relatively new. One of the most common use cases has been endpoint detection and response (EDR), where ML/AI applies behavioral analysis to identify anomalous activity. You can use known good behavior to discern outliers, then identify and kill processes, lock accounts, trigger alerts, and more.
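The "known good behavior" idea can be sketched in a few lines. This is a minimal illustration, not a real EDR engine: each process is compared against its own historical event counts with a simple z-score test, and processes with no baseline at all are treated as anomalous. The process names and counts are hypothetical.

```python
from statistics import mean, stdev

def flag_outliers(history, current, z_threshold=3.0):
    """history: {process: [event counts observed during normal operation]}
       current: {process: event count in the latest window}
       Returns processes behaving unlike their own known-good baseline."""
    flagged = []
    for proc, count in current.items():
        past = history.get(proc)
        if not past or len(past) < 2:
            # Never (or barely) seen before: treat as anomalous.
            flagged.append(proc)
            continue
        mu, sigma = mean(past), stdev(past)
        if sigma == 0:
            if count != mu:
                flagged.append(proc)
        elif abs(count - mu) / sigma > z_threshold:
            flagged.append(proc)
    return flagged
```

A real EDR product would feed the flagged list into response actions such as killing the process or locking the account, as described above.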

Whether used to automate tasks or to help create and refine new ideas, ML/AI can certainly help amplify security efforts or bolster a strong cybersecurity posture. Let’s look at some of the possibilities.

AI and its potential in cybersecurity

When I started in cybersecurity as a junior analyst, I was responsible for detecting fraud and security events using Splunk, a security information and event management (SIEM) tool. Splunk has its own language, Search Processing Language (SPL), which can increase in complexity as queries progress.


That context helps to understand the power of ChatGPT, which has already learned SPL and can turn a junior analyst's prompt into a query in just seconds, significantly lowering the entry bar. If you asked ChatGPT to write an alert for a brute force attack against Active Directory, it would create the alert and explain the logic behind the query. Since it's closer to a standard SOC-style alert than an advanced Splunk search, it can be a perfect guide for a novice SOC analyst.
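The kind of query ChatGPT produces for that prompt looks roughly like the following sketch. The index name, field names, time window, and threshold are all assumptions that would need tuning for a given environment (Windows event 4625 is a failed logon):

```
index=wineventlog EventCode=4625
| bin _time span=10m
| stats count AS failures, dc(user) AS users BY src_ip, _time
| where failures > 20
```

Counting failed logons per source address in 10-minute buckets, and alerting past a threshold, is the basic shape of a brute-force detection an analyst could refine from there.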

Another compelling use case for ChatGPT is automating daily tasks for a large IT team. In almost all environments, the number of outdated Active Directory accounts can range from dozens to hundreds. These accounts often have privileged permissions, and while a full privileged access management technology strategy is recommended, enterprises may not be able to prioritize its implementation.

This creates a situation where IT resorts to the old DIY approach, where system administrators use self-written scripts to deactivate outdated accounts.

The creation of these scripts can now be handed over to ChatGPT, which can create the logic to identify and disable accounts that have not been active in the last 90 days. If a junior engineer can create and code this script as well as learn how the logic works, then ChatGPT can help senior engineers/admins free up time for more advanced work.
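The core of such a script is just date arithmetic over account records. Here is a minimal, hedged sketch in Python: the account structure is invented for illustration, and a production version would pull the records from Active Directory (e.g., via LDAP or PowerShell) and actually disable the matches rather than merely listing them.

```python
from datetime import datetime, timedelta

def stale_accounts(accounts, as_of, max_idle_days=90):
    """accounts: list of {"name": str, "last_logon": datetime, "enabled": bool}
       Returns names of enabled accounts idle for more than max_idle_days."""
    cutoff = as_of - timedelta(days=max_idle_days)
    return [a["name"] for a in accounts
            if a["enabled"] and a["last_logon"] < cutoff]
```

The 90-day threshold mirrors the example in the text; the list this returns is what a senior admin would review before disabling anything.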

If you are looking for a force multiplier in a dynamic exercise, ChatGPT can be used for purple teams or a collaboration of red and blue teams to test and improve an organization’s security posture. You can create simple script examples that a penetration tester might use, or debug scripts that may not work as expected.

One MITRE ATT&CK tactic that is nearly universal in cyber incidents is persistence. For example, a standard persistence technique that a threat analyst or hunter should look for is when an attacker adds their specified script/command as a startup script on a Windows machine. With a simple request, ChatGPT can create a rudimentary but functional script that will allow a red team to add this persistence to a target host. While the red team uses this tool to help with penetration testing, the blue team can use it to understand what those tools would look like to create better alerting mechanisms.
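From the blue-team side, the detection logic for that persistence technique can be sketched simply: collect the entries under a Run key and flag commands that launch from throwaway directories or as loose script files. The directory and extension lists below are illustrative assumptions, not a complete ruleset, and a real collector would read the registry rather than take a dict.

```python
# Heuristic markers for startup entries worth a closer look (assumed, not exhaustive).
SUSPICIOUS_DIRS = ("\\temp\\", "\\downloads\\", "\\appdata\\local\\temp\\")
SUSPICIOUS_EXT = (".ps1", ".vbs", ".bat")

def suspicious_startup_entries(entries):
    """entries: {value_name: command_line} collected from a Windows Run key.
       Returns value names whose commands match the heuristics above."""
    flagged = []
    for name, cmd in entries.items():
        path = cmd.lower().strip('"')
        if any(d in path for d in SUSPICIOUS_DIRS) or path.endswith(SUSPICIOUS_EXT):
            flagged.append(name)
    return flagged
```

A purple-team exercise would run the red team's persistence script on a test host, then confirm that logic like this actually surfaces the entry it created.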

The benefits are many, but so are the limits

Of course, when analysis is needed for an incident or research scenario, AI is also a useful aid for accelerating the work or suggesting alternative approaches. Especially in cybersecurity, whether automating tasks or generating new ideas, AI can reduce the effort required to maintain a strong cybersecurity posture.

However, there are limitations to this utility, and by that, I mean complex human cognition along with real-world experiences that are often involved in decision making. Unfortunately, we can’t program an AI tool to work like a human being; we can only use it for support, to analyze data and produce results based on the facts we input. While AI has made great strides in a short period of time, it can still produce false positives that need to be identified by a human.

Still, one of the biggest benefits of AI is automating daily tasks to free up humans to focus on more creative or time-consuming jobs. AI can be used to create or increase the efficiency of scripts for use by cybersecurity engineers or system administrators, for example. I recently used ChatGPT to rewrite a dark web scraping tool I created that reduced completion time from days to hours.
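A days-to-hours speedup like that usually comes from parallelizing I/O-bound work. The sketch below shows the general pattern with Python's standard thread pool; the `fetch` function here is a stand-in stub, since the article does not describe the actual scraper.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    """Stand-in for a slow network request to one scraped page."""
    return f"<html>{url}</html>"

def scrape_all(urls, workers=8):
    """Fetch pages concurrently instead of one at a time.
       pool.map preserves the input order of urls in its results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch, urls))
```

Because each request spends most of its time waiting on the network, running even a handful of them concurrently can collapse total runtime dramatically, which is the kind of rewrite ChatGPT can produce from an existing sequential script.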

Without a doubt, AI is an important tool that security professionals can use to alleviate repetitive and mundane tasks, and can also provide instructional help for less experienced security professionals.

If there are downsides to AI informing human decision making, I would say that every time we use the word "automation," there is a palpable fear that technology will evolve and eliminate the need for humans in their jobs. In the security sector, we also have tangible concerns that AI could be used in nefarious ways. Unfortunately, the latter concern has already been proven true, with threat actors using these tools to create more convincing and effective phishing emails.

In terms of decision-making, I think it's still too early to rely on AI to make final decisions in practical, everyday situations. The human ability to apply subjective, context-dependent judgment is critical to the decision process, and AI so far lacks the ability to emulate it.

So while the various iterations of ChatGPT have generated quite a buzz since last year's preview, as with other new technologies, we need to address the concerns it has raised. I don't think AI will kill IT or cybersecurity jobs. Rather, AI is an important tool that security professionals can use to alleviate repetitive and mundane tasks.

While we are witnessing the early days of AI technology, and even its creators seem to have a limited understanding of its power, we have barely scratched the surface of how ChatGPT and other ML/AI models will transform cybersecurity practices. I am looking forward to seeing the next innovations.

Thomas Aneiro is Senior Director of Technology Advisory Services at Moxfive.

