Then, it's also quite straightforward to start an Elasticsearch cluster with Kibana for some visualizations.
The problem started for us when we wanted to secure the whole pipeline - from the app server to the client through Logstash, Elasticsearch and Kibana.
We also got to know Filebeat, and we almost immediately fell in love.
I will explain here, step by step, the whole installation and deployment process that we went through. All the configuration files are attached. Enjoy.
General Architecture:
- Components:
- Filebeat - An evolution of the old logstash-forwarder. A lightweight, easy-to-use tool for sending data from log files to Logstash.
- Logstash - A ready-to-use tool for shipping log data to Elasticsearch. It supports many other inputs and outputs, as well as manipulations of its input log records. I will show the integration with Filebeat as input and Elasticsearch and S3 as outputs.
- Elasticsearch - Our back end for storing and indexing the logs
- Kibana - The visualization tool on top of Elasticsearch
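- Putting the components together, the data flows roughly like this:
  app log files -> Filebeat -> (SSL) -> Logstash -> (HTTPS) -> Elasticsearch (and S3)
  users -> (HTTPS) -> Kibana -> (HTTPS) -> Elasticsearch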
- Filebeat agent installation (talking with SSL to Logstash)
- At the time of the project, the newest version of Filebeat (1.0.0-rc1) had a bug when sending data to Logstash: it automatically creates the "@timestamp" field, which also gets created by Logstash, and makes it fail. We worked with the 1.0.0-beta4 version (the first one) of Filebeat.
- configuration file
- Take a look at the logstash-as-output scope.
- We want to make Filebeat trust the Logstash server. If the certificate that Logstash uses is signed by some CA, you can give Filebeat a certificate that is signed by that CA. Configuration - "certificate_authorities"
- If you didn't use a signed certificate for the Logstash server, you can use the certificate and the public key of the Logstash server (I'll explain later how to create them), as demonstrated in the configuration file and in the sketch after this section
- Keep in mind that we don't use Java keystores for the Filebeat-Logstash connection.
- Starting command: filebeat_location/filebeat -v -c filebeat_location/filebeat.yml
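- For reference, a minimal filebeat.yml sketch of that setup - the log path, host and certificate locations are placeholders to adapt to your environment:
  filebeat:
    prospectors:
      -
        paths:
          - /var/log/myapp/*.log            # hypothetical application log path
        input_type: log
  output:
    logstash:
      hosts: ["10.184.2.232:5044"]          # your Logstash host and port
      tls:
        # trust the CA that signed the Logstash certificate
        # (or point this at the Logstash certificate itself if it is unsigned)
        certificate_authorities: ["/etc/pki/tls/certs/ca.crt"]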
- Logstash (HTTPS communication to Elasticsearch)
- Download and unpack
- Generate a certificate:
- openssl req -new -text -out server.req
- When asked for the Common Name, enter the server's IP: Common Name (eg, your name or your server's hostname) []: 10.184.2.232
- openssl rsa -in privkey.pem -out server.key
- rm privkey.pem
- Sign the req file with the CA certificate, for example:
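- Assuming your organisation's CA material is available as ca.crt and ca.key (hypothetical names), the signing could look like:
  openssl x509 -req -in server.req -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 365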
- Logstash configuration (a full sketch appears after this section)
- The input is configured to get data from filebeat
- In the filter and grok scope:
- We are creating the JSON documents out of the "message" field that we get from Filebeat.
- Duplicating our "msg" field into a copy we call "msg_tokenized" - that's important for Elasticsearch later on. On the one hand we want to search the logs, so we keep an analyzed copy, but we also want to visualize the logs, so we keep "msg" as a not_analyzed field as well (the whole message). I will explain later how to set an Elasticsearch template for that.
- Creating a new field with the file_name of the source log file
- Removing the "logtime" field
- Removing the "message" field
- Output scope:
- S3 and Elasticsearch outputs
- For SSL between Logstash and Elasticsearch, you can use the CA name in the "cacert" configuration under the elasticsearch output.
- If we specify a new name for our index in the Logstash elasticsearch output, the index gets created automatically. We decided to create a new index for each log file, and then we created an abstraction layer for all application logs in Kibana, with queries on top of a couple of indices.
- The S3 bucket path is currently not dynamic
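- A minimal sketch of such a Logstash configuration - the grok pattern, paths, index and bucket names are placeholders that depend on your log format and environment:
  input {
    beats {
      port => 5044
      ssl => true
      ssl_certificate => "/path/to/server.crt"   # the certificate generated above
      ssl_key => "/path/to/server.key"
    }
  }
  filter {
    grok {
      # hypothetical pattern - adapt it to your own log line format
      match => { "message" => "%{TIMESTAMP_ISO8601:logtime} %{GREEDYDATA:msg}" }
    }
    mutate {
      add_field => {
        "msg_tokenized" => "%{msg}"     # analyzed copy for search, next to the not_analyzed "msg"
        "file_name" => "%{source}"      # "source" is the log file path shipped by Filebeat
      }
      remove_field => [ "logtime", "message" ]
    }
  }
  output {
    elasticsearch {
      hosts => ["elastic-server:9200"]
      ssl => true
      cacert => "/path/to/ca.crt"       # CA that signed the Elasticsearch certificate
      user => "alog"
      password => "alog123"
      index => "alog-myapp"             # one index per log file, matched by the "alog-*" template
    }
    s3 {
      bucket => "my-log-bucket"         # hypothetical bucket; the path is static
    }
  }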
- Elasticsearch (open for HTTPS only)
- Download Elasticsearch and Shield (the shield and license plugins)
- Install the two plugins for Shield:
- bin/plugin install file:///path/to/file/license-2.0.0.zip
- bin/plugin install file:///path/to/file/shield-2.0.0.zip
- Create a certificate for Elasticsearch
- openssl req -new -text -out server.req
- openssl rsa -in privkey.pem -out server.key
- rm privkey.pem
- Sign the req file (with your organisation's CA, as shown earlier) and save the signed certificate as signed.cer
- Create a Java keystore with the signed certificate and the private key (2 steps):
- openssl pkcs12 -export -name myservercert -in signed.cer -inkey server.key -out keystore.p12
- keytool -importkeystore -destkeystore node01.jks -srckeystore keystore.p12 -srcstoretype pkcs12 -alias myservercert
- If you are not using a signing CA, you can use this command in order to create the keystore: keytool -genkey -alias node01_s -keystore node01.jks -keyalg RSA -keysize 2048 -validity 712 -ext san=ip:10.184.2.238
- Configuration File
- Take a look at the Shield security options. We are using the local Java keystore that holds our signed certificate and the private key (see the sketch after this section)
- Start Elasticsearch: /apps/elasticsearch-2.0.0/bin/elasticsearch
- Add user: elasticsearch/bin/esusers useradd alog -p alog123 -r admin
- Add a template to Elasticsearch - We want all the indices that are created by Logstash to contain both an analyzed field for search and a not_analyzed "msg" field for visualization. The template below is what makes "msg" not_analyzed.
- curl -XPUT https://alog:alog123@elastic-server:9200/_template/template_1 -d '{"template" : "alog-*","mappings" : {"_default_" : {"properties": {"msg":{"type":"string", "index" : "not_analyzed"}}}}}'
- This template will apply to all indices whose names start with "alog".
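- A minimal sketch of the relevant elasticsearch.yml settings, assuming the node01.jks keystore from above (path and password are placeholders):
  shield.ssl.keystore.path: /path/to/node01.jks
  shield.ssl.keystore.password: changeme        # hypothetical keystore password
  shield.transport.ssl: true
  shield.http.ssl: true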
- Kibana (HTTPS communication to Elasticsearch, users log on via HTTPS)
- Download and unpack
- Start command: bin/kibana
- Configuration
- In order to enable HTTPS connections for users, we create another certificate/key pair with the openssl tool and set the certificate and key in the configuration (see the sketch after this section)
- We set the elasticsearch.username and password for Shield authentication
- We didn't supply Kibana the Elasticsearch certificate and key because we used a signed certificate
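- The relevant kibana.yml part could look like this (paths and the URL are placeholders):
  server.ssl.cert: /path/to/kibana.crt
  server.ssl.key: /path/to/kibana.key
  elasticsearch.url: "https://elastic-server:9200"
  elasticsearch.username: "alog"
  elasticsearch.password: "alog123"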
And that's it :)
I hope that all of the above helps you get through some of the obstacles that we encountered, whether with the SSL setup, with the grok filter or with anything else.
Good luck!