- Explainable AI builds more trust between AI and human actors
- Graph neural networks process graph-structured data to create new insights
- Utilizing the attack lifecycle, AI can target defensive efforts to greater effect
- Graph neural networks can bolster multiple aspects of cyber security
Cyber security has never been easy, but with the rapid expansion of AI and powerful computational tools, it seems to grow complex at faster rates every day.
Fortunately, the same resources that make security feel so difficult can also empower your defensive efforts to build better security and peace of mind.
In particular, explainable AI in the form of graph neural networks (GNNs) could give you a whole new analytical approach that creates more robust and responsive cyber security.
A Quick Course in Explainable AI
To keep it succinct, explainable AI is the process of presenting AI outputs in a way that is easy for humans to understand. Visual presentations and simple breakdowns of the AI process help human actors see more than just the results — they can follow the AI’s logic.
This approach makes AI results easier to trust, and human actors tend to act more decisively and effectively when working from outputs they understand.
Adding Graph Neural Networks
Today’s second component, graph neural networks (GNNs), centers on a specific way of feeding data into AI. With these networks, input data takes the form of graphs: collections of nodes connected by edges, such as hosts and the connections between them. Rather than processing purely numerical or linguistic data, GNNs analyze relational, graph-structured data to produce their outputs.
GNNs can combine with traditional machine learning and/or natural language processing (NLP) to broaden the input scope (adding numbers and words to input options), but this article will focus on graph inputs.
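To make the idea concrete, one round of GNN message passing can be sketched in a few lines of numpy. This is a toy illustration, not a production model: the host graph, the features, and the (untrained) weights are all invented for the example.

```python
import numpy as np

# Hypothetical network graph: 4 hosts, edges are observed connections
# (symmetric adjacency matrix for an undirected graph).
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Invented node features, e.g. [login failures, outbound traffic] per host.
H = np.array([
    [0.1, 0.2],
    [0.0, 0.1],
    [0.9, 0.8],   # an unusually active host
    [0.2, 0.1],
])

def gnn_layer(A, H, W):
    """One round of message passing: average each node's neighbourhood
    features (including its own), then apply a learned transform + ReLU."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # node degrees
    H_agg = (A_hat @ H) / deg               # mean over neighbourhood
    return np.maximum(0, H_agg @ W)         # linear transform + ReLU

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))  # untrained weights, for illustration only
embeddings = gnn_layer(A, H, W)
print(embeddings.shape)  # (4, 4): one embedding per host
```

Stacking several such layers lets information propagate across multiple hops, which is how a trained GNN learns patterns that span a whole network rather than a single host.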

GNNs in Secure Ops
With that covered, how do GNNs change secure operations?
Ideally, they can learn and adapt by reading cyber threat data. By utilizing graph inputs, GNNs can identify and disrupt targeted stages of the attack life cycle. Graph representations allow for an alternate approach to cataloguing and classifying threat data, enabling these AIs to find new weak points in an attack in order to identify and defeat it early.
The Attack Life Cycle
Because GNN-based SecOps revolve around the attack life cycle, a clear understanding of that cycle is the place to start.
Multiple attack life cycle models exist. For the sake of brevity, we can focus on the Lockheed Martin cyber kill chain as a case study.
This model breaks attacks into the following life cycle: reconnaissance -> weaponization -> delivery -> exploitation -> installation -> command and control -> actions on objectives.
During reconnaissance, attackers search for viable victims. They gather information in order to find vulnerabilities to form an attack. This step often involves information harvesting that might include login credentials, system configurations, user IDs, and more.
In the weaponization phase, attackers create or modify tools that can exploit information gained during reconnaissance. This can include malware, threat agents, or any other resource that enables the attack.
Delivery marks the point where the attack physically begins. Whatever weapon was developed is now delivered to the target. Delivery might utilize phishing, physically removable media, social engineering, or a number of other methods.
Exploitation begins after delivery when the weapon carries out its function. This is where an attacker gains unauthorized access. While it marks a dangerous point in the attack, it is not yet the end.
Once a vulnerability is exploited, an attack moves to installation. This is where the attacker creates a persistence channel allowing them to make better use of unauthorized access. While installation can vary in scope and degree, this is an escalation phase during the attack.
Now, we move to command and control. This covers communication between the attacker and compromised infrastructure. It allows persistent control by the attacker and paves the way for the attacker to carry out objectives.
The final phase includes actions on objectives. Presumably, the attack was initiated for a reason (or reasons). In this phase, the attacker has sufficient access and control to carry out those objectives. They may steal information, ransom a system, or simply cause damage. While other phases represent risk, this is where risk translates into loss.
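The stages above can be captured as a simple data structure, which is also the kind of labeling a classification model produces. The indicator names below are hypothetical examples, not part of the Lockheed Martin model itself.

```python
# The seven stages of the Lockheed Martin cyber kill chain, in order.
KILL_CHAIN = [
    "reconnaissance", "weaponization", "delivery", "exploitation",
    "installation", "command_and_control", "actions_on_objectives",
]

# Hypothetical mapping from observed indicators to kill-chain stages.
INDICATOR_STAGE = {
    "port_scan": "reconnaissance",
    "phishing_email": "delivery",
    "privilege_escalation": "exploitation",
    "new_persistence_service": "installation",
    "beacon_traffic": "command_and_control",
    "bulk_data_export": "actions_on_objectives",
}

def earliest_stage(indicators):
    """Return the earliest kill-chain stage among observed indicators.
    Breaking the chain early is far cheaper than reacting at the end."""
    stages = [INDICATOR_STAGE[i] for i in indicators if i in INDICATOR_STAGE]
    if not stages:
        return None
    return min(stages, key=KILL_CHAIN.index)

print(earliest_stage(["beacon_traffic", "phishing_email"]))  # delivery
```

Prioritizing the earliest detected stage reflects the model's core insight: every phase an attacker must complete is another chance to stop them.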
GNN Security Applications
Looking at the life cycle, GNN-based security works to classify elements of threats according to the model it is trained on. Depending on the AI design, a GNN can organize those classifications at the node, edge, or whole-graph level.
With these varying approaches, the model can identify key components of cyber attacks, identify where they exist in the model, and prescribe prevention and/or responses to each of those components.
As an example, a GNN could see that user data is insufficiently secured, enabling malicious actors to more readily identify valuable targets. This would apply to the reconnaissance phase.
Similarly, the model might find that physical access controls create vulnerabilities with physical media, targeting the delivery phase of attacks.
In application, GNNs can support a number of security concerns with a single model:
- Privacy maintenance
- Research
- Anomaly detection
- Vulnerability detection
- Intrusion detection
- Malware detection
- Reporting
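One of the listed applications, anomaly detection, can be sketched with a graph-aware heuristic: score each host by how far its behavior deviates from its neighbors'. The graph and activity numbers below are invented, and a real GNN would learn this kind of signal rather than hard-code it.

```python
import numpy as np

# Hypothetical host graph and a single activity feature per host.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 1, 1, 0],
], dtype=float)
activity = np.array([0.2, 0.3, 5.0, 0.25])  # host 2 behaves abnormally

def neighbourhood_anomaly(A, x):
    """Score each node by the gap between its feature and the mean of
    its neighbours' features -- a simple graph-aware anomaly signal."""
    deg = A.sum(axis=1)             # neighbour counts
    neighbour_mean = (A @ x) / deg  # mean activity of each node's neighbours
    return np.abs(x - neighbour_mean)

scores = neighbourhood_anomaly(A, activity)
print(int(scores.argmax()))  # 2: the most anomalous host
```

The same comparison-to-neighborhood logic underlies intrusion and malware detection on graphs; the difference is that a trained model learns which deviations matter instead of flagging raw magnitude.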
Why Explainable AI Helps Security
How does this tie back to explainable AI? AI cannot fully automate every aspect of cyber security. In many cases, addressing vulnerabilities and mounting defenses requires hands-on work by IT teams.
Utilizing explainable AI helps human actors clearly understand issues and risks, as well as why conclusions were reached. With the increased trust that explainability brings, decision makers are better positioned to act on AI outputs and make changes that prevent and address threats more effectively. Combining all of this with visual data, such as a network graph, expands explainability and its benefits.

