[metadata]
creation_date = "2020/07/08"
maturity = "production"
promotion = true
updated_date = "2025/01/15"
[rule]
author = ["Elastic"]
description = """
Generates a detection alert for each external alert written to the configured indices. Enabling this rule allows you to
immediately begin investigating external alerts in the app.
"""
index = [
"apm-*-transaction*",
"traces-apm*",
"auditbeat-*",
"filebeat-*",
"logs-*",
"packetbeat-*",
"winlogbeat-*",
]
language = "kuery"
license = "Elastic License v2"
max_signals = 10000
name = "External Alerts"
risk_score = 47
rule_id = "eb079c62-4481-4d6e-9643-3ca499df7aaa"
rule_name_override = "message"
setup = """## Setup
This rule is configured with a higher **Max alerts per run** value than the default of 1000 that applies to all rules. This is to ensure that it captures as many alerts as possible.
**IMPORTANT:** The rule's **Max alerts per run** setting can be superseded by the `xpack.alerting.rules.run.alerts.max` Kibana config setting, which determines the maximum alerts generated by _any_ rule in the Kibana alerting framework. For example, if `xpack.alerting.rules.run.alerts.max` is set to 1000, this rule will still generate no more than 1000 alerts even if its own **Max alerts per run** is set higher.
To make sure this rule can generate as many alerts as its own **Max alerts per run** setting allows, increase the `xpack.alerting.rules.run.alerts.max` system setting accordingly.
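For example, on a self-managed deployment you might set `xpack.alerting.rules.run.alerts.max: 10000` in `kibana.yml` so that the framework-wide limit matches this rule's default **Max alerts per run** of 10000; the value shown is illustrative and should be sized for your deployment.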
**NOTE:** Changing `xpack.alerting.rules.run.alerts.max` is not possible in Serverless projects."""
severity = "medium"
tags = ["OS: Windows", "Data Source: APM", "OS: macOS", "OS: Linux", "Resources: Investigation Guide"]
timestamp_override = "event.ingested"
type = "query"
query = '''
event.kind:alert and not event.module:(endgame or endpoint or cloud_defend)
'''
note = """## Triage and analysis
> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
### Investigating External Alerts
External alerts are crucial for identifying potential threats across diverse environments such as Windows, macOS, and Linux. These alerts come from a variety of data sources; alerts from the endgame, endpoint, and cloud_defend modules are excluded so the rule can focus on the broader threat landscape. Adversaries may exploit vulnerabilities in these systems to execute unauthorized actions. The 'External Alerts' detection rule surfaces such activity by promoting alert events (event.kind:alert) into detection alerts, enabling analysts to swiftly investigate and mitigate risks.
### Possible investigation steps
- Review the alert details to identify the specific source alert event (event.kind:alert) that triggered the detection, and confirm it is not associated with the excluded modules (endgame, endpoint, or cloud_defend).
- Examine the source and context of the alert by checking the associated tags, such as 'OS: Windows', 'OS: macOS', or 'OS: Linux', to understand the environment affected.
- Gather additional context by correlating the alert with other logs or events from the same time frame or system to identify any related suspicious activities, for example with a pivot search like the one sketched after this list.
- Assess the risk score and severity level to prioritize the investigation and determine the potential impact on the organization.
- Investigate the origin of the alert by identifying the source IP, user account, or process involved, and check for any known vulnerabilities or exploits associated with them.
- Consult threat intelligence sources to determine if the alert corresponds to any known threat actors or campaigns targeting similar environments.
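As a starting point for the correlation step above, a minimal pivot search in Discover or Timeline might look like the sketch below, assuming ECS field names in the configured indices; `HOSTNAME` is a placeholder for the affected host, and the time range should be narrowed around the alert using the time picker.

```
host.name : "HOSTNAME" and not event.kind : alert
```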
### False positive analysis
- Alerts from benign third-party applications may trigger false positives. Review and identify these applications, then create exceptions to exclude them from future alerts (see the example exclusion after this list).
- Routine system updates or patches can generate alerts. Monitor update schedules and create exceptions for known update activities to reduce noise.
- Network monitoring tools might produce alerts due to their scanning activities. Verify these tools and exclude their activities if deemed non-threatening.
- Alerts from internal security testing or penetration testing exercises can be mistaken for threats. Coordinate with security teams to whitelist these activities during scheduled tests.
- Certain administrative scripts or automation tasks may trigger alerts. Evaluate these scripts and exclude them if they are part of regular operations and pose no risk.
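In practice these exclusions are usually added as rule exceptions in the Security app, but expressed as a query-level filter an exclusion might look like the hypothetical sketch below; the module name and source IP are placeholders for a vetted, non-threatening source.

```
event.kind : alert and not (event.module : "suricata" and source.ip : "192.0.2.10")
```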
### Response and remediation
- Isolate affected systems immediately to prevent further unauthorized actions and contain the threat.
- Conduct a thorough review of the alert details to identify any specific vulnerabilities or exploits used by the adversary.
- Apply relevant patches or updates to the affected systems to remediate any identified vulnerabilities.
- Restore systems from a known good backup if unauthorized changes or actions have been detected.
- Monitor network traffic and system logs closely for any signs of further suspicious activity or attempts to exploit similar vulnerabilities.
- Escalate the incident to the appropriate security team or management if the threat appears to be part of a larger attack campaign or if additional resources are needed for remediation.
- Enhance detection capabilities by updating security tools and configurations to better identify similar threats in the future."""
[[rule.risk_score_mapping]]
field = "event.risk_score"
operator = "equals"
value = ""
[[rule.severity_mapping]]
field = "event.severity"
operator = "equals"
severity = "low"
value = "21"
[[rule.severity_mapping]]
field = "event.severity"
operator = "equals"
severity = "medium"
value = "47"
[[rule.severity_mapping]]
field = "event.severity"
operator = "equals"
severity = "high"
value = "73"
[[rule.severity_mapping]]
field = "event.severity"
operator = "equals"
severity = "critical"
value = "99"