Set up and configure Elasticsearch, Logstash, Logstash-Forwarder, and Kibana on Debian 11
Install Elasticsearch, Logstash, and Kibana on Debian 11. Configure Logstash to process Nginx logs using Grok patterns. Use Logstash-Forwarder (or Filebeat) for secure log shipping. Visualize data in Kibana. This guide provides a complete ELK stack setup for log management and analysis.
This comprehensive tutorial guides you through the installation and configuration of the Elastic Stack—Elasticsearch, Logstash, Kibana, and Logstash-Forwarder—on a Debian 11 (Bullseye) server. We'll focus on efficiently collecting and visualizing Nginx access logs, providing a practical example for log management and data analysis. While the original post referenced Debian Jessie, this updated version uses the more current and supported Debian 11. Significant changes in package management and service management have been incorporated.
Prerequisites:
- A Debian 11 (Bullseye) server instance with SSH access. Root or sudo privileges are required.
- A basic understanding of Linux command-line operations.
- A domain name or static IP address accessible from the server where Nginx is running (for Logstash-Forwarder).
1. Installing and Configuring Elasticsearch
Elasticsearch is the heart of the ELK stack, storing and indexing your data.
- Add the Elasticsearch Repository and GPG Key:
apt-key is deprecated on Debian 11, so store the key in a dedicated keyring and reference it from the repository entry:
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
- Update and Install Elasticsearch:
sudo apt update
sudo apt install elasticsearch
- Configure Elasticsearch (elasticsearch.yml):
Edit the Elasticsearch configuration file /etc/elasticsearch/elasticsearch.yml. Crucially, set a unique cluster.name to avoid conflicts if you have multiple Elasticsearch clusters. Optionally, set node.name for easier identification of this node within the cluster. Note that Elasticsearch 8.x enables TLS and authentication by default; the last line below disables security to keep this single-node tutorial simple, which you should not do in production. Here's an example:
cluster.name: my-elk-cluster
node.name: elk-node-1
xpack.security.enabled: false
- Enable and Start Elasticsearch:
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch
- Verify Elasticsearch Installation:
Check the Elasticsearch service status and logs:
sudo systemctl status elasticsearch
sudo journalctl -u elasticsearch -f  # Follows the log for real-time updates
- Installing Elasticsearch Plugins (Optional):
While not strictly necessary, tools like head (a web interface for browsing Elasticsearch indices) and bigdesk (a monitoring tool for Elasticsearch clusters) can be helpful. (Note: site plugins such as these can no longer be installed into Elasticsearch itself in recent versions; they run as standalone web applications, and Kibana now covers most monitoring needs. Consult the official Elasticsearch documentation for the current approach.)
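Beyond the service status, you can query Elasticsearch's HTTP API to confirm it is responding. This assumes TLS and authentication are disabled for a simple lab setup; with the stock 8.x defaults you would instead use https:// and authenticate as the elastic user.

```shell
# Cluster health check over plain HTTP (security disabled).
# With security enabled (the 8.x default), use instead:
#   curl -k -u elastic "https://localhost:9200/_cluster/health?pretty"
curl -s "http://localhost:9200/_cluster/health?pretty"
```

A healthy single-node setup reports "status" as "green" or "yellow" (yellow is normal with one node, since replica shards cannot be allocated).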
2. Installing and Configuring Logstash
Logstash is responsible for collecting, parsing, and processing data before sending it to Elasticsearch.
- Add the Logstash Repository:
Logstash is served from the same Elastic repository added in Step 1, so no additional repository entry is needed.
- Update and Install Logstash:
sudo apt update
sudo apt install logstash
- Generate an SSL Certificate:
For secure communication between Logstash-Forwarder and Logstash, generate a self-signed SSL certificate and key; the certificate's CN must match the name the forwarder will use to connect. Replace elk.yourdomain.com with your actual domain or IP address:
sudo mkdir -p /etc/logstash/certs
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/logstash/certs/logstash.key -out /etc/logstash/certs/logstash.crt -subj "/CN=elk.yourdomain.com"
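It's worth confirming the certificate's subject and validity window before distributing it, since the forwarder checks the server name against the certificate. The sketch below is self-contained: it generates a throwaway pair in a temp directory and inspects it; on your server, point the final command at /etc/logstash/certs/logstash.crt instead.

```shell
# Generate a throwaway self-signed pair in a temp dir (mirrors the command above)
tmp=$(mktemp -d)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout "$tmp/logstash.key" -out "$tmp/logstash.crt" \
  -subj "/CN=elk.yourdomain.com" 2>/dev/null

# Print the subject (CN) and validity dates; on the real server, use
# /etc/logstash/certs/logstash.crt as the -in argument
openssl x509 -in "$tmp/logstash.crt" -noout -subject -dates
```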
- Configure Logstash (logstash.conf):
Create a Logstash configuration file (e.g., /etc/logstash/conf.d/logstash.conf) to define how Logstash will process data. This example processes Nginx access logs:
input {
  lumberjack {
    port => 5000
    ssl_certificate => "/etc/logstash/certs/logstash.crt"
    ssl_key => "/etc/logstash/certs/logstash.key"
  }
}
filter {
  if [type] == "nginx-access" {
    grok {
      match => { "message" => "%{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] \"%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:response} %{NUMBER:bytes}" }
      add_field => { "[@metadata][type]" => "nginx" }
    }
    date {
      match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "nginx-access-%{+YYYY.MM.dd}"
  }
}
Note the timestamp handling: Nginx's default access-log format writes timestamps like 10/Oct/2023:13:55:36 +0000, which grok matches with %{HTTPDATE} and the date filter parses with the dd/MMM/yyyy:HH:mm:ss Z pattern (not ISO8601).
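Before enabling the service, you can ask Logstash to validate the pipeline syntax; the binary path below assumes the default Debian package layout. If your Logstash build does not bundle the lumberjack input, it can be added with sudo /usr/share/logstash/bin/logstash-plugin install logstash-input-lumberjack.

```shell
# Parse the pipeline and exit; prints "Configuration OK" on success
sudo /usr/share/logstash/bin/logstash --config.test_and_exit \
  -f /etc/logstash/conf.d/logstash.conf
```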
- Enable and Start Logstash:
sudo systemctl enable logstash
sudo systemctl start logstash
3. Installing and Configuring Kibana
Kibana provides a user-friendly interface for visualizing and exploring data in Elasticsearch.
- Add the Kibana Repository:
Kibana is also served from the Elastic repository added in Step 1, so no additional repository entry is needed.
- Update and Install Kibana:
sudo apt update
sudo apt install kibana
- Configure Kibana (kibana.yml):
Edit the Kibana configuration file /etc/kibana/kibana.yml. Ensure the elasticsearch.hosts setting points to your Elasticsearch instance, and set server.host if you want to reach Kibana from other machines (by default it listens on localhost only):
elasticsearch.hosts: ["http://localhost:9200"]
server.host: "0.0.0.0"
- Enable and Start Kibana:
sudo systemctl enable kibana
sudo systemctl start kibana
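Kibana can take up to a minute to become ready after starting; its status API offers a quick check:

```shell
# Returns a JSON document describing Kibana's overall status;
# an empty response usually means Kibana is still starting up
curl -s http://localhost:5601/api/status
```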
4. Installing and Configuring Logstash-Forwarder (on the Nginx Server)
Logstash-Forwarder runs on the server generating the logs (in this case, your Nginx server) and securely forwards them to Logstash.
- Download Logstash-Forwarder: (Note: Logstash-Forwarder is deprecated; consider using Filebeat as a more modern alternative. Instructions for Filebeat would replace this section.)
- Install Logstash-Forwarder: (Instructions would be for Filebeat installation instead).
- Copy the Logstash Certificate:
Copy the logstash.crt file from your Logstash server to the /etc/logstash-forwarder/ directory on your Nginx server.
- Configure Logstash-Forwarder (logstash-forwarder.conf):
Create a Logstash-Forwarder configuration file (e.g., /etc/logstash-forwarder.conf):
{
  "network": {
    "servers": ["elk.yourdomain.com:5000"],
    "ssl ca": "/etc/logstash-forwarder/logstash.crt"
  },
  "files": [
    {
      "paths": ["/var/log/nginx/access.log"],
      "fields": {
        "type": "nginx-access"
      }
    }
  ]
}
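The forwarder config is strict JSON, so a syntax check before starting the service can save debugging time. This sketch validates a copy of the config with Python's json.tool module (python3 ships with Debian 11); on your Nginx server, run the final command against /etc/logstash-forwarder.conf directly.

```shell
# Write a sample config to a temp file, then validate it; json.tool
# pretty-prints valid JSON and exits non-zero on a syntax error
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
  "network": {
    "servers": ["elk.yourdomain.com:5000"],
    "ssl ca": "/etc/logstash-forwarder/logstash.crt"
  },
  "files": [
    {
      "paths": ["/var/log/nginx/access.log"],
      "fields": { "type": "nginx-access" }
    }
  ]
}
EOF
python3 -m json.tool "$conf"
```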
- Start Logstash-Forwarder: (Start Filebeat instead, using its service management commands).
5. Kibana Visualization:
Once data is flowing, access Kibana through your web browser (typically at http://your_server_ip:5601). Create visualizations and dashboards to explore your Nginx logs. You'll need to create a data view (called an index pattern in older Kibana versions) matching the index name, e.g., nginx-access-*.
Important Considerations:
- Security: Always secure your Elasticsearch, Logstash, and Kibana instances. Refer to the official documentation for best practices.
- Firewall: Configure your firewall to allow traffic on the necessary ports (9200 for Elasticsearch, 5000 for Logstash, 5601 for Kibana).
- Resource Usage: Monitor resource usage (CPU, memory, disk I/O) of your ELK stack components.
- Log Rotation: Implement log rotation for Elasticsearch and Logstash to prevent disk space exhaustion.
- Error Handling: Regularly check logs for errors and warnings.
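As a sketch of the firewall point, assuming ufw is your firewall frontend (adapt for iptables/nftables as needed). In this setup only local Logstash and Kibana talk to Elasticsearch, so port 9200 should generally stay closed to the outside:

```shell
sudo ufw allow 5601/tcp   # Kibana web UI
sudo ufw allow 5000/tcp   # Logstash lumberjack input (from log-shipping hosts)
# 9200 stays closed externally; to allow one specific admin host, e.g.:
#   sudo ufw allow from 203.0.113.10 to any port 9200 proto tcp
sudo ufw status
```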
This detailed tutorial provides a solid foundation for using the ELK stack. Remember to adjust paths, hostnames, and other settings to match your specific environment. For the most up-to-date instructions and best practices, always consult the official documentation for Elasticsearch, Logstash, Kibana, and Filebeat (recommended replacement for Logstash-Forwarder).