ELK Setup
basic
The data lifecycle for ELK goes a little something like this:
- Syslog Server: feeds Logstash
- Logstash: filters and parses logs and stores them within Elasticsearch
- Elasticsearch: indexes and makes sense out of all the data
- Kibana: makes millions of data points consumable by us mere mortals
For this project you will need…
- A Linux Ubuntu Server 14.04 LTS: 1 core, 4 GB memory, 100 GB storage
- A Palo Alto Networks firewall with a Threat Prevention Subscription
- Something on the firewall to generate traffic
Install ELK
The ELK Stack can be installed using a variety of methods and on a wide array of different operating systems and environments.
- ELK can be installed locally, in the cloud, with Docker, or with configuration management systems like Ansible, Puppet, and Chef.
- The stack can be installed using a tarball or .zip packages or from repositories.
Environment specifications
- single AWS Ubuntu 18.04 machine on an m4.large instance using its local storage.
- started an EC2 instance in the public subnet of a VPC
- set up the security group (firewall) to enable access from anywhere using SSH and TCP 5601 (Kibana).
- added a new Elastic IP address to the instance for internet connectivity (see the CLI sketch below).
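The security-group and Elastic IP steps can also be scripted. A rough sketch with the AWS CLI; the security-group, instance, and allocation IDs below are placeholders for your own values:

```bash
# allow SSH and Kibana (TCP 5601) from anywhere
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22   --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 5601 --cidr 0.0.0.0/0

# allocate an Elastic IP and attach it to the instance
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0
```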
Configuration
Filebeat
Logstash collects data
Logstash architecture
setup
- Download and install

```
# Download and extract
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.4.3.tar.gz
tar -zxvf logstash-6.4.3.tar.gz -C /usr/local
ls /usr/local/logstash-6.4.3/

# Test logstash-6.4.3
logstash -e 'input{stdin{}}output{stdout{codec=>rubydebug}}'
# [2019-09-25T15:12:47,020][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
# If startup succeeds you will see the message above,
# and Logstash then waits for input; type something such as HelloWorld and the following event is printed:
HelloWorld
{
    "@timestamp" => 2019-09-25T07:14:40.491Z,
          "host" => "localhost",
      "@version" => "1",
       "message" => "HelloWorld"
}
```
- Simple test with a configuration file

Pipeline configuration overview:

```
# A pipeline configures the input, filter, and output plugins
input {}
filter {}
output {}

# Create the configuration file logstash.conf:
vim config/logstash.conf
input {
    stdin { }
}
output {
    stdout {
        codec => rubydebug { }
    }
    elasticsearch {
        hosts => ["0.0.0.0:9200"]
        # user => elastic
        # password => xW9dqAxThD5U4ShQV1JT
    }
}

# Start Elasticsearch
# Start Logstash with the specified configuration file
./bin/logstash -f config/logstash.conf
# As before, the command line waits for your input
Hello World
{
      "@version" => "1",
          "host" => "localhost",
    "@timestamp" => 2019-09-25T07:25:03.292Z,
       "message" => "Hello World"
}

# Access:
# http://192.168.77.132:9200/_search?q=Hello
{
    "took":59,
    "timed_out":false,
    "_shards":{"total":1,"successful":1,"skipped":0,"failed":0},
    "hits":{
        "total":{"value":0,"relation":"eq"},
        "max_score":null,
        "hits":[]
    }
}
```
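While the pipeline above is running, you can also confirm the event actually landed in Elasticsearch. A minimal check, assuming the elasticsearch output uses its default index pattern (logstash-YYYY.MM.dd) and Elasticsearch listens on localhost:9200:

```bash
# search the default Logstash indices for the test message
curl 'http://localhost:9200/logstash-*/_search?q=message:Hello&pretty'

# list the indices Logstash has created so far
curl 'http://localhost:9200/_cat/indices/logstash-*?v'
```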
Related configuration

```
# 1. Basic persistent-queue settings (pipelines.yml)
queue.type: persisted    # default is memory
queue.max_bytes: 4gb     # maximum amount of data the queue may store

# 2. Thread-related settings (logstash.yml)
pipeline.workers | -w
# number of pipeline worker threads, i.e. the threads running the filter + output stages; defaults to the number of CPU cores
pipeline.batch.size | -b
# number of pending events the batcher collects per batch, default 125; tune it to your output.
# Larger batches use more heap space, which can be adjusted in jvm.options
pipeline.batch.delay | -u
# how long the batcher waits before flushing, in ms

# 3. Logstash configuration files:
# logstash.yml: general Logstash settings such as node.name, path.data, pipeline.workers, queue.type, etc.;
#   these can be overridden by the corresponding command-line flags
# jvm.options: JVM settings, e.g. heap size
# pipeline configuration files: define the data processing flow, end in .conf

logstash.yml settings:
node.name:                  # node name, for easy identification
path.data:                  # folder for persisted data, default is the data directory under the Logstash home
path.config:                # directory of pipeline configuration files (if a folder is given, all .conf files in it are concatenated in alphabetical order into one pipeline)
path.log:                   # directory for pipeline log files
pipeline.workers:           # number of pipeline (filter + output) threads; a common tuning option
pipeline.batch.size/delay:  # batch size and delay for batched processing
queue.type:                 # queue type, default is memory
queue.max_bytes:            # total queue capacity, default is 1g
```
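The queue and worker settings above can also be set per pipeline. A minimal sketch, assuming Logstash was extracted to /usr/local/logstash-6.4.3 as above and the test pipeline file from the previous step is reused (the paths are illustrative):

```bash
# append a pipeline definition with a persistent queue to config/pipelines.yml
cat >> /usr/local/logstash-6.4.3/config/pipelines.yml <<'EOF'
- pipeline.id: main
  path.config: "/usr/local/logstash-6.4.3/config/logstash.conf"
  queue.type: persisted
  queue.max_bytes: 4gb
  pipeline.workers: 2
  pipeline.batch.size: 250
EOF

# the same knobs can be overridden on the command line:
./bin/logstash -f config/logstash.conf -w 2 -b 250
```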
Elasticsearch
```
# Since we are installing Elasticsearch on AWS, it is good practice to bind Elasticsearch to either a private IP or localhost:
sudo vim /etc/elasticsearch/elasticsearch.yml

network.host: "localhost"
http.port: 9200
cluster.initial_master_nodes: ["<PrivateIP>"]
```
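After editing elasticsearch.yml, a quick way to confirm the node came up and is bound where you expect (assuming a package install managed by the service wrapper and the localhost binding above):

```bash
sudo service elasticsearch restart
# should return the cluster name, status, and node counts
curl 'http://localhost:9200/_cluster/health?pretty'
```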
```
# ======================== Elasticsearch Configuration =========================
# ---------------------------------- Cluster -----------------------------------
#cluster.name: my-application
# ------------------------------------ Node ------------------------------------
#node.name: node-1
#node.attr.rack: r1
# ----------------------------------- Paths ------------------------------------
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
# ----------------------------------- Memory -----------------------------------
#bootstrap.memory_lock: true
# ---------------------------------- Network -----------------------------------
network.host: "localhost"
http.port: 9200
# --------------------------------- Discovery ----------------------------------
#discovery.seed_hosts: ["host1", "host2"]
cluster.initial_master_nodes: ["node-1", "node-2"]
# ---------------------------------- Gateway -----------------------------------
#gateway.recover_after_nodes: 3
# ---------------------------------- Various -----------------------------------
#action.destructive_requires_name: true
```
Kibana

```
sudo vim /etc/kibana/kibana.yml

server.port: 5601
server.host: 10.0.1.168
# server.host: 127.0.0.1
# server.host: "localhost"
# elasticsearch.url: "http://localhost:9200"   # pre-7.0 setting, replaced by elasticsearch.hosts
elasticsearch.hosts: ["http://localhost:9200"]
# tell Kibana which Elasticsearch to connect to and which port to use.

cd /var/log/kibana/
root@319291962e3b:/var/log/kibana# ls -la
total 48
drwxr-s--- 2 kibana kibana  4096 Jan 12 15:04 .
drwxr-xr-x 1 root   root    4096 Jan 12 15:04 ..
-rw-r--r-- 1 root   kibana    576 Jan 12 15:10 kibana.stderr
-rw-r--r-- 1 root   kibana  24858 Jan 12 15:10 kibana.stdout
chmod 777 kibana.stderr
chmod 777 kibana.stdout
```
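If Kibana is already running, its status API gives a quick health check. A small sketch, assuming Kibana is reachable on localhost:5601 from where you run the command (adjust the host to match your server.host):

```bash
# Kibana's status endpoint reports overall state and plugin health
curl -s 'http://localhost:5601/api/status'
```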
```
# Kibana:
# /etc/kibana/kibana.yml
server.port: 5601
server.host: "localhost"
server.basePath: ""
server.rewriteBasePath: false
server.maxPayloadBytes: 1048576
server.name: "your-hostname"
elasticsearch.hosts: ["http://localhost:9200"]
kibana.index: ".kibana"
kibana.defaultAppId: "home"
elasticsearch.username: "kibana_system"
elasticsearch.password: "pass"
server.ssl.enabled: false
server.ssl.certificate: /path/to/your/server.crt
server.ssl.key: /path/to/your/server.key
elasticsearch.ssl.certificate: /path/to/your/client.crt
elasticsearch.ssl.key: /path/to/your/client.key
elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
elasticsearch.ssl.verificationMode: full
elasticsearch.pingTimeout: 1500
elasticsearch.requestTimeout: 30000
elasticsearch.requestHeadersWhitelist: [ authorization ]
elasticsearch.customHeaders: {}
elasticsearch.shardTimeout: 30000
elasticsearch.logQueries: false
pid.file: /var/run/kibana.pid
logging.dest: stdout
logging.silent: false
logging.quiet: false
logging.verbose: false
ops.interval: 5000
i18n.locale: "en"
```
installation
Install ELK on Mac OS X
Installation
Install Homebrew
Install Java
Install Elasticsearch, Logstash, Kibana
```
brew install elasticsearch && brew info elasticsearch
brew services start elasticsearch
brew install logstash
brew services start logstash
brew install kibana
brew services start kibana

brew services list
# Name          Status  User Plist
# elasticsearch started luo  /Users/luo/Library/LaunchAgents/homebrew.mxcl.elasticsearch.plist
# kibana        started luo  /Users/luo/Library/LaunchAgents/homebrew.mxcl.kibana.plist
# logstash      started luo  /Users/luo/Library/LaunchAgents/homebrew.mxcl.logstash.plist
# openvpn       started root /Library/LaunchDaemons/homebrew.mxcl.openvpn.plist
```
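Once the three services show as started, a quick sanity check against their default local ports (these are the stock ports; Logstash only answers on 9600 once it is running with a pipeline):

```bash
curl 'http://localhost:9200'            # Elasticsearch node info
curl 'http://localhost:9600/?pretty'    # Logstash monitoring API
curl -I 'http://localhost:5601'         # Kibana (expect an HTTP 200 or a redirect)
```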
ship the data
- configuration
```
# --------------------- Kibana configuration:
# /usr/local/etc/kibana/kibana.yml
server.port: 5601                                # defining the Kibana port
# elasticsearch.url: "http://localhost:9200"     # pre-7.0 setting for the Elasticsearch instance
elasticsearch.hosts: ["http://localhost:9200"]

# --------------------- Logstash configuration:
# /etc/logstash/conf.d/syslog.conf
# Logstash pipeline sending syslog logs into the stack.
# Brew installs Logstash in /usr/local/Cellar/logstash/7.9.0,
# and creates a symlink of the config folder
# in /usr/local/Cellar/logstash/7.9.0/libexec/config to /usr/local/etc/logstash.
# When you run sudo vim /etc/logstash/conf.d/syslog.conf, it will not find the dir and will throw an error.
# Even if you create the folder with the conf file, you will see no "syslog-demo" index in Kibana.
# Solution!
# Create a new conf.d folder inside /usr/local/etc/logstash, and then create a syslog config file there:
# mkdir /usr/local/etc/logstash/conf.d
# sudo vim /usr/local/etc/logstash/conf.d/syslog.conf
# Copy the configuration below, paste it, save and quit vim.
# After that, start Logstash with that config path:
# logstash -f /usr/local/etc/logstash/conf.d/*.conf

input {
  file {
    path => [ "/var/log/*.log", "/var/log/messages", "/var/log/syslog" ]
    type => "syslog"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "syslog-demo"
  }
  stdout { codec => rubydebug }
}
```
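With that pipeline running, you can generate a syslog line yourself and check that it shows up in the syslog-demo index. A rough check (on newer macOS releases logger output may not always reach /var/log/system.log, so treat this as illustrative):

```bash
# write a test message through the system logger
logger "elk-demo test message"

# give Logstash a few seconds to pick it up, then query the index
curl 'http://localhost:9200/syslog-demo/_search?q=elk-demo&pretty'
```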
install ELK on Ubuntu in Docker
Install Docker
```
apt-get -y install sudo gnupg2 gnupg vim curl apache2 ufw
apt-get update
apt-get install wget

docker run ubuntu:18.04
docker run -it ubuntu:18.04 /bin/bash
```
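If you plan to reach Elasticsearch and Kibana from the host later, it helps to publish their ports when the container is created. A sketch; the container name and published ports are assumptions matching the defaults used below:

```bash
# start the Ubuntu container with the Elasticsearch and Kibana ports published to the host
docker run -it --name elk-ubuntu \
  -p 9200:9200 \
  -p 5601:5601 \
  ubuntu:18.04 /bin/bash
```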
Install Elasticsearch :9200
- add Elastic’s signing key
- so that the downloaded package can be verified
- (skip this step if you’ve already installed packages from Elastic):
```
# Ubuntu:
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

# Debian:
# you then also need to install the apt-transport-https package
sudo apt-get update
sudo apt-get install apt-transport-https
```
- add the repository definition to the system:
```
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
# To install a version of Elasticsearch that contains only features licensed under Apache 2.0 (aka OSS Elasticsearch):
echo "deb https://artifacts.elastic.co/packages/oss-7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
```
- update repositories and install Elasticsearch:
```
sudo apt-get update
sudo apt-get install elasticsearch
```
- Elasticsearch configurations
- Elasticsearch is configured via a configuration file that controls:
- general settings (e.g. node name),
- network settings (e.g. host and port),
- where data is stored, memory, log files, and more.
```
# Since we are installing Elasticsearch on AWS, it is good practice to bind Elasticsearch to either a private IP or localhost:
sudo vim /etc/elasticsearch/elasticsearch.yml

network.host: "localhost"
http.port: 9200
cluster.initial_master_nodes: ["<PrivateIP>"]
```
- To run Elasticsearch
```
sudo service elasticsearch start
```
- To confirm that everything is working as expected
- point curl or your browser to http://localhost:9200,
- and you should see something like the following output:
```
curl http://localhost:9200
{
  "name" : "319291962e3b",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "_na_",
  "version" : {
    "number" : "7.10.1",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "1c34507e66d7db1211f66f3513706fdf548736aa",
    "build_date" : "2020-12-05T01:00:33.671820Z",
    "build_snapshot" : false,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
```
Installing an Elasticsearch cluster requires a different type of setup.
Installing Logstash
- Logstash requires Java 8 or Java 11 to run
- start the process of setting up Logstash with:
```
sudo apt-get install -y default-jre
# Verify java is installed:
java -version
# openjdk version "1.8.0_191"
# OpenJDK Runtime Environment (build 1.8.0_191-8u191-b12-2ubuntu0.16.04.1-b12)
# OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode)

# mine:
# openjdk version "11.0.9.1" 2020-11-04
# OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.18.04)
# OpenJDK 64-Bit Server VM (build 11.0.9.1+1-Ubuntu-0ubuntu1.18.04, mixed mode, sharing)
```
- Since we have already defined the repository on the system, installing Logstash just requires running:
```
sudo apt-get install logstash
```
Before you run Logstash, you will need to configure a data pipeline. We will get back to that once we’ve installed and started Kibana.
Installing Kibana :5601
- install Kibana:
```
sudo apt-get install -y kibana
```
- Kibana configuration file
/etc/kibana/kibana.yml
```
sudo vim /etc/kibana/kibana.yml

server.port: 5601
server.host: 10.0.1.168
# server.host: 127.0.0.1
# server.host: "localhost"
# elasticsearch.url: "http://localhost:9200"   # pre-7.0 setting, replaced by elasticsearch.hosts
elasticsearch.hosts: ["http://localhost:9200"]
# tell Kibana which Elasticsearch to connect to and which port to use.
```
- start Kibana with:
```
sudo service kibana start
```
- Open up Kibana in a browser at http://localhost:5601 to reach the Kibana home page.
```
curl http://localhost:5601
```
Installing Metricbeat
```
# install Metricbeat:
sudo apt-get install -y metricbeat
sudo service metricbeat start
```
Metricbeat
- Metricbeat begins monitoring your server and creates an Elasticsearch index, which you can then define in Kibana (see the verification sketch below).
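A minimal sketch of the usual follow-up steps; the system module is normally enabled out of the box, so these commands are mostly verification:

```bash
# make sure the system module is enabled (it usually already is)
sudo metricbeat modules enable system
# load the index template and sample dashboards into Elasticsearch/Kibana
sudo metricbeat setup
# confirm a metricbeat-* index is being written
curl 'http://localhost:9200/_cat/indices/metricbeat-*?v'
```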
Ship the data: set up a data pipeline with Logstash.
- We prepared some sample data containing Apache access logs that is refreshed daily.
- create a new Logstash configuration file
/etc/logstash/conf.d/apache-01.conf
```
sudo vim /etc/logstash/conf.d/apache-01.conf
```
- Enter the following Logstash configuration
- (change the path to the file you downloaded accordingly):
```
input {
  file {
    path => "/root/test/apache-daily-access.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
  date { match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ] }
  geoip { source => "clientip" }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```
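Before starting the service, you can optionally have Logstash validate the pipeline syntax. A sketch, assuming the stock deb install locations:

```bash
# parse the pipeline, report any errors, and exit without starting
sudo -u logstash /usr/share/logstash/bin/logstash \
  --path.settings /etc/logstash \
  -f /etc/logstash/conf.d/apache-01.conf \
  --config.test_and_exit
```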
- Start Logstash with:
```
sudo service logstash start
```
- a new Logstash index will be created in Elasticsearch
- the pattern of which can now be defined in Kibana.
- In Kibana, go to Management → Kibana Index Patterns.
- Kibana should display the Logstash index (along with the Metricbeat index, if you followed the steps for installing and running Metricbeat).
Enter "logstash-*" as the index pattern, and in the next step select @timestamp as your Time Filter field.
Hit Create index pattern, and you are ready to analyze the data. Go to the Discover tab in Kibana to take a look at the data (look at today’s data instead of the default last 15 mins).
Congratulations! You have set up your first ELK data pipeline using Elasticsearch, Logstash, and Kibana.
Installing ELK on Docker
install
- Install
```
# Install
git clone https://github.com/deviantony/docker-elk.git
cd docker-elk
docker-compose up -d

# Verifying the installation
docker ps
# CONTAINER ID  IMAGE                     COMMAND                  CREATED         STATUS         PORTS                                            NAMES
# a1a00714081a  dockerelk_kibana          "/bin/bash /usr/loca…"   54 seconds ago  Up 53 seconds  0.0.0.0:5601->5601/tcp                           dockerelk_kibana_1
# 91ca160f606f  dockerelk_logstash        "/usr/local/bin/dock…"   54 seconds ago  Up 53 seconds  5044/tcp, 0.0.0.0:5000->5000/tcp, 9600/tcp       dockerelk_logstash_1
# de7e3368aa0c  dockerelk_elasticsearch   "/usr/local/bin/dock…"   55 seconds ago  Up 54 seconds  0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp   dockerelk_elasticsearch_1
```
- ports on localhost have been mapped to the default ports
- Elasticsearch (9200/9300), Kibana (5601) and Logstash (5000/5044).
- Everything is already pre-configured with a privileged username and password:
- user: elastic
- password: changeme
- Kibana is available at http://localhost:5601, and you can query Elasticsearch using:
curl -u elastic:changeme http://localhost:9200/_security/_authenticate
ship data
Our next step is to forward some data into the stack.
Other Beats currently available from Elastic are:
- Filebeat: collects and ships log files.
- Packetbeat: collects and analyzes network data.
- Winlogbeat: collects Windows event logs.
- Auditbeat: collects Linux audit framework data and monitors file integrity.
- Heartbeat: monitors services for their availability with active probing.
- By default, the stack will be running Logstash with the default Logstash configuration file.
- configure that file to suit your purposes
- and ship any type of data into your Dockerized ELK
- and then restart the container (as sketched below)
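A minimal sketch, assuming the stock docker-elk layout where the default pipeline file lives at logstash/pipeline/logstash.conf inside the cloned repository:

```bash
# edit the default pipeline, then restart only the Logstash container
vim logstash/pipeline/logstash.conf
docker-compose restart logstash

# watch Logstash come back up with the new pipeline
docker-compose logs -f logstash
```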
Metricbeat
- Alternatively, install Filebeat
- either on your host machine or as a container
- and have Filebeat forward logs into the stack.
```
# 1. download and install Metricbeat:
curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-6.1.2-darwin-x86_64.tar.gz
tar xzvf metricbeat-6.1.2-darwin-x86_64.tar.gz

# 2. configure metricbeat.yml
#    - collect metrics on the operating system and ship them to the Elasticsearch container:
cd metricbeat-6.1.2-darwin-x86_64
sudo vim metricbeat.yml

metricbeat.modules:
- module: system
  metricsets:
    - cpu
    - filesystem
    - memory
    - network
    - process
  enabled: true
  period: 10s
  processes: ['.*']
  cpu_ticks: false
fields:
  env: dev
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

# an alternative, more compact variant of the same system module config:
metricbeat.modules:
- module: system
  metricsets: ["cpu", "memory", "network", "filesystem", "process"]
  enabled: true
  period: 10s
  processes: ['.*']
fields:
  env: dev
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
output.logstash:
  hosts: ["localhost:5044"]

# 3. start Metricbeat
sudo chown root metricbeat.yml
sudo chown root modules.d/system.yml
sudo ./metricbeat -e -c metricbeat.yml -d "publish"

# 4. you will see a Metricbeat index created in Elasticsearch, and its pattern can be identified in Kibana.
curl -XGET 'localhost:9200/_cat/indices?v&pretty'
health status index                       uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana                     XPHh2YDCSKKyz7PtmHyrMw   1   1          2            1       67kb           67kb
yellow open   metricbeat-6.1.2-2018.01.25 T_8jrMFoRYqL3IpZk1zU4Q   1   1      15865            0      3.4mb          3.4mb

# mine
curl -XGET 'localhost:9200/_cat/indices?v&pretty' -u elastic:changeme
health status index                             uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .monitoring-kibana-7-2021.01.12   tgGfn-eLSme2BEAwLWVpCw   1   0        694            0    421.4kb        421.4kb
green  open   .triggered_watches                bfOzsoyKRUONi2rVEaAtsg   1   0          0            0     45.7kb         45.7kb
green  open   .apm-agent-configuration          wVWyj03nRGGNKTd9C1IypA   1   0          0            0       208b           208b
yellow open   logstash-2021.01.12-000001        VnUG-uKsTlew9hnrhP0c0A   1   1          0            0       208b           208b
green  open   .kibana_1                         RfvZCRBuSqCTu8kdwT4vRw   1   0       1548           46      5.3mb          5.3mb
green  open   .monitoring-logstash-7-2021.01.12 alkl8u8cRqy4_rf0SFmijQ   1   0       1946            0    477.5kb        477.5kb
green  open   .ml-config                        KD5Y-O6lT6uvA3IpPeqXtw   1   0         20            0     51.2kb         51.2kb
green  open   .security-7                       omtVvA4-QmmtGo550YJBrA   1   0         55            0    178.3kb        178.3kb
green  open   .apm-custom-link                  pcellkZLTiultUWQaq6vEA   1   0          0            0       208b           208b
green  open   .kibana_task_manager_1            BaTGM6N-QemCI05zSH_BxA   1   0          6           23    196.1kb        196.1kb
green  open   .monitoring-es-7-2021.01.12       yO-smpAuQXCelOxYk5x9BA   1   0       8083         1150      9.4mb          9.4mb
green  open   .monitoring-alerts-7              BX68CsmmQSKpEu_R_9rNWA   1   0          2            7    111.9kb        111.9kb
green  open   .kibana-event-log-7.10.1-000001   M36agcORQY-QeoKohg0Mog   1   0          2            0       11kb           11kb
yellow open   filebeat-7.10.1-2021.01.12-000001 eM3rpds0TwCrp9LjJEonFw   1   1          0            0       208b           208b
green  open   .watches                          bfRDY58eRRyBefLRdPd0Wg   1   0          6           48    375.5kb        375.5kb
```
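Before or after starting Metricbeat, its built-in test subcommands are handy for checking the configuration and the connection to the configured output (run from the metricbeat directory extracted above):

```bash
./metricbeat test config -c metricbeat.yml
./metricbeat test output -c metricbeat.yml
```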
Filebeat

```
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.10.1-darwin-x86_64.tar.gz
tar xzvf filebeat-7.10.1-darwin-x86_64.tar.gz
cd filebeat-7.10.1-darwin-x86_64/
vim filebeat.yml

output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "changeme"
setup.kibana:
  host: "localhost:5601"
```
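From there, the usual next steps are to load Filebeat's index template and dashboards and then start shipping; this assumes Elasticsearch and Kibana are reachable at the hosts configured above:

```bash
# load the index template and sample dashboards
./filebeat setup -e
# run Filebeat in the foreground with logging to stderr
./filebeat -e
```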
Palo Alto Network Syslog
CortexXDR
To send XDR notifications to a Syslog server, define the settings for the Syslog receiver to which you want to send notifications.
- Before you define the Syslog settings, enable access to the XDR IP addresses for your deployment region in your firewall configuration.
- Navigate to Settings > Integrations > External Applications.
- In Syslog Servers, add a + New Server.
- Define the Syslog server parameters:
- Name — Unique name for the server profile.
- Destination — IP address or fully qualified domain name (FQDN) of the Syslog server.
- Port — The port number on which to send Syslog messages.
- Facility — Choose one of the Syslog standard values.
- The value maps to how your Syslog server uses the facility field to manage messages.
- For details on the facility field, see RFC 5424.
- Protocol — Select a method of communication with the Syslog server:
- TCP — XDR runs a validation to ensure a connection was made with the Syslog server.
- UDP — No validation is made on the connection with the Syslog server. However, if an error occurs with the domain used to make the connection, the Test connection will fail.
- TCP + SSL — XDR validates the Syslog server certificate and uses the certificate signature and public key to encrypt the data sent over the connection.
- Certificate — The communication between XDR and the Syslog destination can use TLS. In this case, upon connection, XDR validates that the Syslog receiver has a certificate signed by either a trusted root CA or a self-signed certificate.
- If your Syslog receiver uses a self-signed CA:
- Browse and upload the self-signed Syslog receiver CA.
- Make sure the self-signed CA includes the public key.
- If you only use a trusted root CA, leave the Certificate field empty.
- Ignore Certificate Error — Not recommended, but you can select this option to ignore certificate errors if they occur. Alerts and logs will then be forwarded even if the certificate contains errors.
- Test the parameters to ensure a valid connection and Create when ready.
- You can define up to five Syslog servers.
- Upon success, the table displays the Syslog servers and their status.
- (Optional) Manage your Syslog server connection.
- In the Syslog Servers table
- Locate your Syslog server and right-click to Send text message to test the connection.
- XDR sends a message to the defined Syslog server, check to see if the test message indeed arrived.
- Locate the Status field.
- The Status field displays a Valid or Invalid TCP connection.
- XDR tests the connection with the Syslog server every 10 minutes.
- If no connection is found after 1 hour, XDR sends a notice to the Notification Center.
- Configure Notification Forwarding.
- After you integrate with your Syslog receiver, you can configure your forwarding settings.
Cortex Data Lake
By default, Cortex Data Lake forwards logs in CSV format and follows the IETF Syslog message format defined in RFC 5424, but you can select other log record formats, such as LEEF, that may adhere to different standards.
- For each instance of Cortex Data Lake, you can forward logs to up to ten Syslog destinations.
- The communication between Cortex Data Lake and the Syslog destination uses Syslog over TLS, and upon connection Cortex Data Lake validates that the Syslog receiver has a certificate signed by a trusted root CA.
- To complete the SSL handshake and establish the connection, the Syslog receiver must present all the certificates from the chain of trust.
- Cortex Data Lake does not support self-signed certificates.
- Enable communication between Cortex Data Lake and your Syslog receiver.
- Ensure the Syslog receiver can connect to Cortex Data Lake
- and can present a valid CA certificate to complete the connection request.
- Allow an inbound TLS feed to the Syslog receiver from the following IP address ranges:
- If you have allowed specific IP addresses for inbound traffic, you must also allow the above IP address ranges to forward logs to your Syslog receiver.
- Obtain a certificate from a well-known, public CA, and install it on your Syslog receiver.
- Because Cortex Data Lake validates the server certificate to establish a connection, you must verify that the Syslog receiver is configured to properly send the SSL certificate chain to Cortex Data Lake (a quick check is sketched after this list).
- If the app cannot verify that the certificate of the receiver and all CA’s in the chain are trustworthy, the connection cannot be established.
- See the list of trusted certificates.
- Sign In to the hub
- Select the Cortex Data Lake instance to configure for Syslog forwarding.
- If you have multiple Cortex Data Lake instances, click the Cortex Data Lake tile and select an instance from the list of those available.
- Select Log Forwarding > Add to add a new Syslog forwarding profile.
- Enter a descriptive Name for the profile.
- Enter the Syslog Server IPv4 address or FQDN.
- Enter the Port on which the Syslog server is listening.
- The default port for Syslog messages over TLS is 6514.
- Select the Facility.
- Choose one of the Syslog standard values.
- The value maps to how the Syslog server uses the facility field to manage messages.
- For details on the facility field, see the IETF standard for the log format that you will choose in the next step.
- Specify the Format in which to forward the logs.
- The log format you select depends on the destination of the log data.
- For example, select LEEF if you are forwarding logs to IBM QRadar SIEM.
- Specify the Delimiter to separate the fields in your log messages.
- (Optional) To receive a Status Notification when Cortex Data Lake is unable to connect to the Syslog server, enter the email address at which you’d like to receive the notification.
- These notifications describe the error impacting communication between Cortex Data Lake and the Syslog server, so that you can take the appropriate steps to restore Syslog connectivity.
- (Optional) Enter a Profile Token to send logs to a cloud Syslog receiver.
- If you use a third-party cloud-based Syslog service, you can enter a token that Cortex Data Lake inserts into the Syslog message so that the cloud Syslog provider can identify the source of the logs.
- Follow your cloud Syslog provider’s instructions for generating an identifying token.
- Enter the Profile Token.
- Tokens have a maximum length of 128 characters.
- Select the logs you want to forward.
- Add a new log filter.
- Select the log type.
- The Threat log type does not include URL logs or Data logs.
- If you wish to forward these log types, you must add them individually.
- (Optional) Create a log filter to forward only the logs that are most critical to you.
- Log filters function like queries in Explore.
- As such, you can either write your own queries from scratch or use the query builder. Also, selecting the query field presents some common predefined queries that you can use.
- If you want to forward all logs of the type you selected, do not enter a query. Instead, proceed to the next step.
- Save your changes.
- Verify that the Status of the Syslog forwarding profile is Running.
- Verify that you can view logs on the Syslog receiver.
- For details about the log format, refer to the Syslog field descriptions (Select the PAN-OS Administrator’s Guide for your firewall version).
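As a rough way to confirm the receiver is reachable on the TLS Syslog port and presents its full certificate chain (the hostname is a placeholder; 6514 is the default port noted above):

```bash
# show the certificate chain the receiver presents during the TLS handshake
openssl s_client -connect syslog.example.com:6514 -showcerts </dev/null
```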