Securely Monitoring Nginx Access Logs with Elasticsearch, Logstash, Filebeat, and Shield
Securely monitor your Nginx access logs using Elasticsearch, Logstash, Filebeat, and Shield. This guide provides step-by-step instructions for secure setup, including user management and HTTPS traffic handling via Filebeat. Enhance log analysis with GeoIP data.
Step-by-Step Guide:
1. Shield Installation and User Management:
Before configuring Logstash and Filebeat, install and configure the Elasticsearch Shield plugin to secure your Elasticsearch instance. Create users with specific roles (e.g., `admin`, `beats`) using the `esusers` command, defining roles and privileges in `/etc/elasticsearch/shield/roles.yml`. Ensure the `logstash` role includes the necessary privileges for the `logstash-*`, `filebeat-*`, `packetbeat-*`, and `topbeat-*` indices.
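A `roles.yml` sketch for that role might look like the following. The privilege and cluster-permission names are typical of Shield 2.x and are an assumption to check against your installed release:

```yaml
# /etc/elasticsearch/shield/roles.yml (fragment)
# Grants index-template management plus write access to the Beats/Logstash indices.
logstash:
  cluster: indices:admin/template/get, indices:admin/template/put
  indices:
    'logstash-*':
      privileges: write, delete, create_index
    'filebeat-*':
      privileges: write, delete, create_index
    'packetbeat-*':
      privileges: write, delete, create_index
    'topbeat-*':
      privileges: write, delete, create_index
```

Users are then created and assigned roles with the `esusers` CLI, e.g. `bin/shield/esusers useradd beats -r logstash` (run from the Elasticsearch home directory; the exact path depends on how Elasticsearch was installed).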
2. Logstash Configuration:
Configure your Logstash pipeline (`/etc/logstash/conf.d/logstash.conf`) to receive Beats input on port 5400. Use a `grok` filter to parse the Nginx access logs, extracting relevant fields (client IP, timestamp, request, response code, etc.). Incorporate a `geoip` filter (which requires the GeoLiteCity.dat database) to enrich your logs with geolocation data. Crucially, specify the `beats` user and password from step 1 for Elasticsearch authentication in the output section.
```
input {
  beats {
    host => "0.0.0.0"
    port => "5400"
  }
}

filter {
  if [type] == "nginx-access" {
    # Parse the Nginx combined log format into named fields.
    grok {
      match => { 'message' => '%{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{URIPATHPARAM:request}(?: HTTP/%{NUMBER:httpversion})?|)\" %{NUMBER:answer} (?:%{NUMBER:byte}|-) %{QS:referrer} %{QS:agent}' }
    }
    # Enrich with geolocation; coordinates end up as a [lon, lat] array.
    geoip {
      source => "clientip"
      target => "geoip"
      database => "/etc/logstash/GeoLiteCity.dat"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
}

output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    user => "beats"
    password => "beatspassword"
  }
}
```
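To sanity-check what the grok pattern extracts before wiring it into Logstash, you can prototype the same combined-log parse with an ordinary regular expression. This is a standalone sketch; the sample log line and the simplified pattern are assumptions that mirror the field names in the filter above:

```python
import re

# Simplified equivalent of the grok pattern above: client IP, identd user,
# auth user, timestamp, request line, status code, and byte count.
LOG_PATTERN = re.compile(
    r'(?P<clientip>\S+) (?P<ident>\S+) (?P<auth>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+)(?: HTTP/(?P<httpversion>[\d.]+))?" '
    r'(?P<answer>\d+) (?P<byte>\d+|-)'
)

sample = ('203.0.113.5 - - [10/Oct/2023:13:55:36 +0000] '
          '"GET /index.html HTTP/1.1" 200 2326 "-" "curl/7.81.0"')

match = LOG_PATTERN.match(sample)
fields = match.groupdict()
print(fields["clientip"], fields["verb"], fields["answer"], fields["byte"])
# → 203.0.113.5 GET 200 2326
```

If a line fails to match here, it will almost certainly produce a `_grokparsefailure` tag in Logstash as well, so this is a cheap way to iterate on the pattern.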
3. Filebeat Configuration:
Configure Filebeat (`/etc/filebeat/filebeat.yml`) to monitor your Nginx access logs (`/var/log/nginx/*/access.log` or a specific path). Specify the Logstash instance's IP and port as the output. Adjust `scan_frequency`, `harvester_buffer_size`, and other settings for optimal performance.
```yaml
filebeat:
  prospectors:
    -
      paths:
        - /var/log/nginx/*/access.log
      input_type: log
      exclude_files: [".gz$"]
      document_type: nginx-access
      scan_frequency: 10s
      harvester_buffer_size: 16384
      max_bytes: 10485760
      max_backoff: 10s
      backoff_factor: 2
  spool_size: 2048
  idle_timeout: 5s
  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["<logstash_ip>:5400"]
    worker: 1
    compression_level: 3
    index: filebeat

shipper:
  tags: ["service-X", "web"]
  refresh_topology_freq: 10
  topology_expire: 15
  queue_size: 1000

logging:
  to_syslog: true
  level: error
```
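Before starting the service, Filebeat can validate this file itself. The flag below is from the Filebeat 1.x release line this guide targets; later major versions replaced it with `filebeat test config`:

```shell
filebeat -configtest -c /etc/filebeat/filebeat.yml
```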
4. GeoIP Database:
Download and install the MaxMind GeoLiteCity database to `/etc/logstash/` for geolocation enrichment.
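MaxMind has since retired the legacy GeoLiteCity downloads in favor of GeoLite2, so the historical URL below may no longer resolve; treat it as an assumption and substitute your own copy of the database if needed:

```shell
# Historical download location for the legacy GeoLiteCity database
wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
gunzip GeoLiteCity.dat.gz
sudo mv GeoLiteCity.dat /etc/logstash/GeoLiteCity.dat
```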
5. Service Management:
Start and enable the Elasticsearch, Logstash, and Filebeat services. Test the Logstash configuration using `logstash -t`.
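On a systemd-based host, the service steps above might look like the following; the Logstash binary path is an assumption for a package install, and init-based systems would use `service`/`chkconfig` instead:

```shell
sudo systemctl enable elasticsearch logstash filebeat
sudo systemctl start elasticsearch logstash filebeat

# Validate the pipeline before (re)starting Logstash
/opt/logstash/bin/logstash -t -f /etc/logstash/conf.d/logstash.conf
```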
Conclusion:
This setup provides a secure and efficient way to monitor your Nginx access logs, leveraging the power of the ELK stack and Shield for comprehensive log management and analysis. Remember to adapt paths and settings to your specific environment. Regularly update the GeoIP database for accurate location data.