Threat Intelligence Triage

Security teams face an impossible volume of alerts. Traditional tools apply rigid rules that either miss subtle threats or drown analysts in false positives. This solution shows how to build a triage system that learns from your team’s expertise—automatically improving its judgment over time.

Every security team knows these failure modes:

Alert fatigue: Your SIEM pages on-call at 3am for an IOC match. Two hours later, it’s confirmed as a Tor exit node hitting a dev server. Another false positive. Analysts start ignoring alerts.

Missed threats: A “low severity” PowerShell event sits at position #347 in the queue. Three days later, ransomware detonates. The subtle indicators were there—but buried in noise.

Traditional tools can’t solve this because they lack context and can’t learn. They treat every IOC match the same, regardless of what the target is, what the historical false positive rate is, or what your team has learned from past incidents.

The system combines three things traditional SIEMs lack:

  1. Contextual awareness — The system knows not just that an alert fired, but what asset was targeted, how critical it is, who owns it, and what threat intelligence says about the indicators involved.

  2. Learned judgment — Through analyst feedback, the system learns what your team considers noise vs. real threats. It internalizes your risk tolerance, attribution standards, and escalation preferences.

  3. Continuous improvement — As analysts correct decisions, those corrections compound into systematic improvements. The system gets smarter over time, rather than just accumulating more rules.

The same system handles both directions—reducing noise AND catching subtle threats—because it’s learned what your team actually cares about.

Intelligent triage requires three categories of data, stored as structured tables in a dataset.

Your SIEM, EDR, or detection systems feed events into an events table:

| event_id | timestamp | event_type | source_ip | dest_ip | dest_hostname | severity | raw_log |
|---|---|---|---|---|---|---|---|
| evt-001 | 2024-01-15T03:42:00Z | ssh_brute_force | 185.220.101.42 | 10.0.1.50 | dev-server-03 | HIGH | Failed SSH attempts… |
| evt-002 | 2024-01-15T09:15:00Z | powershell_anomaly | | 10.0.2.100 | fin-admin-ws | LOW | Encoded PowerShell… |
| evt-003 | 2024-01-15T11:30:00Z | malware_callback | 10.0.3.25 | 91.234.56.78 | marketing-pc | CRITICAL | C2 beacon detected… |

Each row captures the raw detection with its original severity—before any contextual analysis.

Your CMDB or asset management system populates an assets table:

| hostname | ip_address | environment | criticality | data_classification | owner | business_unit |
|---|---|---|---|---|---|---|
| dev-server-03 | 10.0.1.50 | development | low | internal | Platform Team | Engineering |
| fin-admin-ws | 10.0.2.100 | production | critical | pii, financial | J. Martinez | Finance |
| marketing-pc | 10.0.3.25 | production | medium | internal | S. Chen | Marketing |

When an alert fires, the system joins against this table to understand what’s actually at stake. A “HIGH” severity alert hitting a dev server is very different from the same alert hitting a finance workstation.
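The join itself is a simple lookup keyed on hostname. A minimal sketch in Python (the table contents mirror the example rows above, and the `enrich_event` helper is illustrative — in practice the data comes from your SIEM and CMDB, not inline literals):

```python
# Illustrative asset table, keyed by hostname to mirror the rows above.
ASSETS = {
    "dev-server-03": {"environment": "development", "criticality": "low",
                      "data_classification": "internal"},
    "fin-admin-ws":  {"environment": "production", "criticality": "critical",
                      "data_classification": "pii, financial"},
}

def enrich_event(event: dict) -> dict:
    """Join an event against the assets table on dest_hostname.

    Unknown hosts get an empty asset context rather than failing,
    so triage can still proceed (and flag the inventory gap).
    """
    asset = ASSETS.get(event.get("dest_hostname"), {})
    return {**event, "asset": asset}

event = {"event_id": "evt-001", "event_type": "ssh_brute_force",
         "dest_hostname": "dev-server-03", "severity": "HIGH"}
enriched = enrich_event(event)
# enriched["asset"]["criticality"] is "low" — context the raw alert lacked
```

Returning an empty context for unknown hosts is a deliberate choice: a hostname missing from the CMDB is itself a useful signal for the triage model.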

Structured indicators populate an IOCs table:

| indicator | indicator_type | threat_actor | confidence | first_seen | tags |
|---|---|---|---|---|---|
| 185.220.101.42 | ipv4 | APT29 | medium | 2023-06-15 | tor_exit_node, cozy_bear |
| 91.234.56.78 | ipv4 | FIN7 | high | 2024-01-10 | carbanak, c2_server |
| encoded_ps_loader.ps1 | file_hash | Unknown | low | 2024-01-12 | powershell, obfuscated |
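Matching an incoming event against this table is a lookup on each IP field. A minimal sketch (the `ioc_matches` helper and inline IOC data are illustrative, assuming IP indicators only):

```python
# Illustrative IOC table, keyed by indicator value (mirrors the rows above).
IOCS = {
    "185.220.101.42": {"threat_actor": "APT29", "confidence": "medium",
                       "tags": ["tor_exit_node", "cozy_bear"]},
    "91.234.56.78":   {"threat_actor": "FIN7", "confidence": "high",
                       "tags": ["carbanak", "c2_server"]},
}

def ioc_matches(event: dict) -> list:
    """Return IOC records matching the event's source or destination IP."""
    hits = []
    for field in ("source_ip", "dest_ip"):
        ip = event.get(field)
        if ip in IOCS:
            hits.append({"indicator": ip, "matched_field": field, **IOCS[ip]})
    return hits

matches = ioc_matches({"source_ip": "185.220.101.42", "dest_ip": "10.0.1.50"})
# one hit: APT29-tagged indicator on the source IP
```

Note that a match here is only raw input to the triage decision — as the worked example below shows, the model can still decide the attribution is unreliable.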

Beyond structured IOCs, unstructured threat intelligence—reports, TTPs, incident post-mortems—goes into a knowledge base for semantic search. When the system sees an APT29 indicator, it can retrieve context about that actor’s typical targets, techniques, and historical false positive patterns.
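A sketch of that retrieval step. A real deployment would rank snippets by embedding similarity; a simple token-overlap score stands in here so the example is self-contained, and the report snippets are illustrative, not real intel:

```python
# Illustrative knowledge-base snippets (stand-ins for indexed threat reports).
REPORTS = [
    "APT29 (Cozy Bear) typically targets government and diplomatic entities.",
    "FIN7 operates Carbanak tooling against retail and hospitality targets.",
    "Tor exit nodes are shared infrastructure; IP-only attribution is weak.",
]

def retrieve(query: str, k: int = 2) -> list:
    """Return the k snippets with the most query tokens in common.

    Toy stand-in for semantic search: score = size of the overlap
    between lowercased query tokens and snippet tokens.
    """
    q = set(query.lower().split())
    scored = sorted(REPORTS, key=lambda r: -len(q & set(r.lower().split())))
    return scored[:k]

context = retrieve("APT29 tor exit node attribution")
# top hit is the Tor attribution caveat, then the APT29 profile
```

The retrieved snippets are concatenated into the `threat_context` input of the pipeline described next, so the model sees the attribution caveats alongside the raw IOC match.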

The pipeline takes each security event and produces a structured triage decision. Here’s a simplified configuration:

```yaml
name: security-event-triage
handler: language_model
inputs:
  event:          # The raw security event
  asset:          # Joined asset context
  ioc_matches:    # Any matching threat intel
  threat_context: # Retrieved from knowledge base
outputs:
  adjusted_severity: enum [CRITICAL, HIGH, MEDIUM, LOW, INFORMATIONAL]
  confidence: number (0-100)
  reasoning: string
  recommended_action: enum [page_oncall, investigate_urgent, investigate_normal, log_only]
  threat_assessment:
    likely_threat_actor: string | null
    attribution_confidence: enum [high, medium, low, none]
    attack_stage: enum [reconnaissance, initial_access, execution, persistence, lateral_movement, exfiltration, unknown]
```

The system prompt instructs the model to weigh factors like asset criticality, IOC confidence, historical patterns, and threat actor TTPs—then produce a structured decision.
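Because downstream automation keys off these fields, it is worth validating the model's output against the schema before acting on it. A minimal sketch (the enum values match the config above; the `validate_decision` helper is an assumption for illustration, not a feature of any particular pipeline runner):

```python
# Allowed values, copied from the output schema above.
SEVERITIES = {"CRITICAL", "HIGH", "MEDIUM", "LOW", "INFORMATIONAL"}
ACTIONS = {"page_oncall", "investigate_urgent", "investigate_normal", "log_only"}

def validate_decision(decision: dict) -> dict:
    """Reject any model output that strays outside the schema's enums/ranges."""
    assert decision["adjusted_severity"] in SEVERITIES, "bad severity"
    assert decision["recommended_action"] in ACTIONS, "bad action"
    assert 0 <= decision["confidence"] <= 100, "confidence out of range"
    return decision

decision = validate_decision({
    "adjusted_severity": "MEDIUM",
    "confidence": 85,
    "recommended_action": "investigate_normal",
})
```

Failing closed here — rejecting malformed output rather than paging on it — keeps a bad model response from becoming a bad 3am page.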

Let’s trace how the SSH brute force event flows through:

1. Event arrives:

```yaml
event_id: evt-001
event_type: ssh_brute_force
source_ip: 185.220.101.42
dest_hostname: dev-server-03
original_severity: HIGH
```

2. Asset context joined:

```yaml
environment: development
criticality: low
data_classification: internal
```

3. IOC matches found:

```yaml
indicator: 185.220.101.42
threat_actor: APT29
confidence: medium
tags: [tor_exit_node, cozy_bear]
```

4. Knowledge base retrieval:

“APT29 (Cozy Bear) typically targets government and diplomatic entities. IP 185.220.101.42 is a known Tor exit node used by multiple actors. Attribution to APT29 based solely on this IP should be considered low confidence…”

5. Triage decision:

```yaml
adjusted_severity: MEDIUM
confidence: 85
reasoning: |
  Downgraded from HIGH to MEDIUM. While the source IP matches APT29 threat intel,
  it's a known Tor exit node (attribution unreliable). Target is a development
  server with low criticality and no sensitive data. SSH brute force against
  dev infrastructure is common noise.
recommended_action: investigate_normal
threat_assessment:
  likely_threat_actor: null  # Can't attribute via Tor
  attribution_confidence: none
  attack_stage: reconnaissance
```

The analyst reviews the decision in the morning, confirms the reasoning, and closes it out in 10 minutes, rather than being paged at 3am.

How do you know the system is improving? Track these metrics:

| Metric | What It Measures | Target |
|---|---|---|
| False positive rate | Alerts escalated that turned out to be noise | Decreasing over time |
| Mean time to triage | How long events sit before getting reviewed | Decreasing for high-priority events |
| After-hours pages | 3am wake-ups for non-critical issues | Decreasing |
| Missed threat rate | Real incidents that weren't escalated | Zero or near-zero |
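These metrics fall out of a triage log that pairs the system's recommended action with the analyst's final verdict. A sketch of the computation (the record fields and verdict labels are assumptions for illustration):

```python
# Illustrative triage log: each record pairs the system's action
# with the analyst's eventual verdict and the hour the alert fired.
triage_log = [
    {"action": "page_oncall",        "verdict": "false_positive", "hour": 3},
    {"action": "investigate_normal", "verdict": "benign",         "hour": 9},
    {"action": "log_only",           "verdict": "incident",       "hour": 11},  # a miss
]

# False positive rate: share of escalations the analyst judged noise.
escalated = [r for r in triage_log
             if r["action"] in ("page_oncall", "investigate_urgent")]
false_positive_rate = (
    sum(r["verdict"] == "false_positive" for r in escalated) / len(escalated)
    if escalated else 0.0
)

# After-hours pages: on-call pages outside 08:00-18:00.
after_hours_pages = sum(r["action"] == "page_oncall"
                        and not 8 <= r["hour"] < 18 for r in triage_log)

# Missed threats: real incidents the system relegated to log_only.
missed_threats = sum(r["verdict"] == "incident"
                     and r["action"] == "log_only" for r in triage_log)
```

The same log doubles as the feedback corpus described earlier: every correction is both a metric data point and a training signal.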

Building this system requires:

  1. Data integration — Connect your SIEM events, asset inventory, and threat intelligence feeds
  2. Knowledge base setup — Index your threat reports and IOC data for semantic search
  3. Pipeline configuration — Define the triage logic and output schema

| Component | Role in Threat Triage |
|---|---|
| Dataset | Contains structured security data (events, IOCs, assets) |
| Tables | Schema-defined storage for events, indicators, and inventory |
| Files | Threat reports, runbooks, CVE bulletins |
| Knowledge Base | Semantic search over threat intel for contextual retrieval |
| Pipeline | Makes triage/notification decisions using LLM judgment |