
A Simple ELK Stack Installation

 拼命奋斗的自己 2020-07-23


 

Download the installation packages:

wget https://artifacts./downloads/elasticsearch/elasticsearch-7.3.2-linux-x86_64.tar.gz &

wget https://artifacts./downloads/logstash/logstash-7.3.2.tar.gz &

wget https://artifacts./downloads/kibana/kibana-7.3.2-linux-x86_64.tar.gz &

wget https://artifacts./downloads/beats/filebeat/filebeat-7.6.1-linux-x86_64.tar.gz &

Extract the packages into the /usr/local/elk/ directory.
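A minimal extraction sketch, assuming the four tarballs were downloaded into the current directory (ELK_HOME is a convenience variable introduced here; the article uses /usr/local/elk):

```shell
# Extract each downloaded archive into the install directory.
# ELK_HOME is an assumed convenience variable; the article uses /usr/local/elk.
ELK_HOME="${ELK_HOME:-/usr/local/elk}"
mkdir -p "$ELK_HOME"
for f in elasticsearch-7.3.2-linux-x86_64.tar.gz \
         logstash-7.3.2.tar.gz \
         kibana-7.3.2-linux-x86_64.tar.gz \
         filebeat-7.6.1-linux-x86_64.tar.gz; do
  if [ -f "$f" ]; then
    tar -xzf "$f" -C "$ELK_HOME"
  else
    echo "skipping $f (not downloaded yet)"
  fi
done
```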


The example host's IP address is 10.1.3.43.

1. Install and configure Elasticsearch

Edit the /usr/local/elk/elasticsearch-7.3.2/config/elasticsearch.yml file:

# ======================== Elasticsearch Configuration =========================

#

# NOTE: Elasticsearch comes with reasonable defaults for most settings.

#       Before you set out to tweak and tune the configuration, make sure you

#       understand what are you trying to accomplish and the consequences.

#

# The primary way of configuring a node is via this file. This template lists

# the most important settings you may want to configure for a production cluster.

#

# Please consult the documentation for further information on configuration options:

# https://www./guide/en/elasticsearch/reference/index.html

#

# ---------------------------------- Cluster -----------------------------------

#

# Use a descriptive name for your cluster:

#

#cluster.name: my-application

# Custom cluster name

cluster.name: qingfu

#

# ------------------------------------ Node ------------------------------------

#

# Use a descriptive name for the node:

#

#node.name: node-1

# Custom node name; give each node in the cluster a distinct name

node.name: qingfu-1

#

# Add custom attributes to the node:

#

#node.attr.rack: r1

#

# ----------------------------------- Paths ------------------------------------

#

# Path to directory where to store the data (separate multiple locations by comma):

#

#path.data: /path/to/data

path.data: /data/elk/es

#

# Path to log files:

path.logs: /data/logs/es

#

#path.logs: /path/to/logs

#

# ----------------------------------- Memory -----------------------------------

#

# Lock the memory on startup:

#

#bootstrap.memory_lock: true

bootstrap.memory_lock: false

bootstrap.system_call_filter: false

#

# Make sure that the heap size is set to about half the memory available

# on the system and that the owner of the process is allowed to use this

# limit.

#

# Elasticsearch performs poorly when the system is swapping the memory.

#

# ---------------------------------- Network -----------------------------------

#

# Set the bind address to a specific IP (IPv4 or IPv6):

#

#network.host: 192.168.0.1

network.host: 0.0.0.0

#

# Set a custom port for HTTP:

#

#http.port: 9200

http.port: 9200

# Enable cross-origin (CORS) access, e.g. for browser-based tools

http.cors.enabled: true

http.cors.allow-origin: "*"

http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE

http.cors.allow-headers: "X-Requested-With, Content-Type, Content-Length, X-User"

 

#

# For more information, consult the network module documentation.

#

# --------------------------------- Discovery ----------------------------------

#

# Pass an initial list of hosts to perform discovery when this node is started:

# The default list of hosts is ["127.0.0.1", "[::1]"]

#

#discovery.seed_hosts: ["host1", "host2"]

#

# Bootstrap the cluster using an initial set of master-eligible nodes:

#

#cluster.initial_master_nodes: ["node-1", "node-2"]

# Initial master-eligible nodes (must match node.name above)

cluster.initial_master_nodes: ["qingfu-1"]

#

# For more information, consult the discovery and cluster formation module documentation.

#

# ---------------------------------- Gateway -----------------------------------

#

# Block initial recovery after a full cluster restart until N nodes are started:

#

#gateway.recover_after_nodes: 3

#

# For more information, consult the gateway module documentation.

#

# ---------------------------------- Various -----------------------------------

#

# Require explicit names when deleting indices:

#

#action.destructive_requires_name: true

Start Elasticsearch:

/usr/local/elk/elasticsearch-7.3.2/bin/elasticsearch -d
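Note that Elasticsearch refuses to start as root. A hedged sketch for preparing a dedicated user and the path.data/path.logs directories configured above (the user name "elk" is an assumption):

```shell
# Assumed: running in a root shell. Create a dedicated user and the
# directories referenced by path.data / path.logs in elasticsearch.yml.
ES_HOME=/usr/local/elk/elasticsearch-7.3.2
id elk >/dev/null 2>&1 || useradd -m elk 2>/dev/null || true
mkdir -p /data/elk/es /data/logs/es 2>/dev/null || true
chown -R elk:elk "$ES_HOME" /data/elk/es /data/logs/es 2>/dev/null || true
# Start the daemon (-d detaches) as the unprivileged user.
if [ "$(id -u)" -eq 0 ]; then
  su - elk -c "$ES_HOME/bin/elasticsearch -d" \
    || echo "start failed: check that $ES_HOME exists and elk owns it"
else
  echo "run this block as root"
fi
```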

Open http://ip:9200 in a browser; if the service started correctly you should see a JSON response describing the node.
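The same check from the command line (assumes Elasticsearch is running on this host; a healthy node answers with JSON containing the cluster name and version):

```shell
# Query the root endpoint and the cluster health API.
curl -s http://localhost:9200/ || echo "Elasticsearch not reachable"
curl -s 'http://localhost:9200/_cluster/health?pretty' || echo "health check failed"
```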

 


 

2. Install and configure Kibana

Edit the /usr/local/elk/kibana-7.3.2/config/kibana.yml file:

# Kibana is served by a back end server. This setting specifies the port to use.

#server.port: 5601

 

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.

# The default is 'localhost', which usually means remote machines will not be able to connect.

# To allow connections from remote users, set this parameter to a non-loopback address.

#server.host: "localhost"

server.host: "0.0.0.0"

 

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.

# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath

# from requests it receives, and to prevent a deprecation warning at startup.

# This setting cannot end in a slash.

#server.basePath: ""

 

# Specifies whether Kibana should rewrite requests that are prefixed with

# `server.basePath` or require that they are rewritten by your reverse proxy.

# This setting was effectively always `false` before Kibana 6.3 and will

# default to `true` starting in Kibana 7.0.

#server.rewriteBasePath: false

 

# The maximum payload size in bytes for incoming server requests.

#server.maxPayloadBytes: 1048576

 

# The Kibana server's name.  This is used for display purposes.

#server.name: "your-hostname"

 

# The URLs of the Elasticsearch instances to use for all your queries.

#elasticsearch.hosts: ["http://localhost:9200"]

elasticsearch.hosts: ["http://localhost:9200"]

 

 

# When this setting's value is true Kibana uses the hostname specified in the server.host

# setting. When the value of this setting is false, Kibana uses the hostname of the host

# that connects to this Kibana instance.

#elasticsearch.preserveHost: true

 

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and

# dashboards. Kibana creates a new index if the index doesn't already exist.

#kibana.index: ".kibana"

 

# The default application to load.

#kibana.defaultAppId: "home"

 

# If your Elasticsearch is protected with basic authentication, these settings provide

# the username and password that the Kibana server uses to perform maintenance on the Kibana

# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which

# is proxied through the Kibana server.

#elasticsearch.username: "kibana"

#elasticsearch.password: "pass"

 

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.

# These settings enable SSL for outgoing requests from the Kibana server to the browser.

#server.ssl.enabled: false

#server.ssl.certificate: /path/to/your/server.crt

#server.ssl.key: /path/to/your/server.key

 

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.

# These files validate that your Elasticsearch backend uses the same key files.

#elasticsearch.ssl.certificate: /path/to/your/client.crt

#elasticsearch.ssl.key: /path/to/your/client.key

 

# Optional setting that enables you to specify a path to the PEM file for the certificate

# authority for your Elasticsearch instance.

#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

 

# To disregard the validity of SSL certificates, change this setting's value to 'none'.

#elasticsearch.ssl.verificationMode: full

 

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of

# the elasticsearch.requestTimeout setting.

#elasticsearch.pingTimeout: 1500

 

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value

# must be a positive integer.

#elasticsearch.requestTimeout: 30000

 

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side

# headers, set this value to [] (an empty list).

#elasticsearch.requestHeadersWhitelist: [ authorization ]

 

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten

# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.

#elasticsearch.customHeaders: {}

 

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.

#elasticsearch.shardTimeout: 30000

 

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.

#elasticsearch.startupTimeout: 5000

 

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.

#elasticsearch.logQueries: false

 

# Specifies the path where Kibana creates the process ID file.

#pid.file: /var/run/kibana.pid

 

# Enables you specify a file where Kibana stores log output.

#logging.dest: stdout

 

# Set the value of this setting to true to suppress all logging output.

#logging.silent: false

 

# Set the value of this setting to true to suppress all logging output other than error messages.

#logging.quiet: false

 

# Set the value of this setting to true to log all events, including system usage information

# and all requests.

#logging.verbose: false

 

# Set the interval in milliseconds to sample system and process performance

# metrics. Minimum is 100ms. Defaults to 5000.

#ops.interval: 5000

 

# Specifies locale to be used for all localizable strings, dates and number formats.

# Supported languages are the following: English - en , by default , Chinese - zh-CN .

#i18n.locale: "en"

 

i18n.locale: "zh-CN"

Start Kibana:

/usr/local/elk/kibana-7.3.2/bin/kibana --allow-root &
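Kibana can take tens of seconds to come up. A quick check against its built-in status API:

```shell
# /api/status answers once Kibana has finished starting.
curl -s http://localhost:5601/api/status >/dev/null 2>&1 \
  && echo "Kibana is up" \
  || echo "Kibana not ready yet (it can take a while to start)"
```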

3. Install Filebeat

Deploy Filebeat on the server that hosts the log files so it can start collecting them. (Note: the Filebeat package downloaded here is 7.6.1 while the rest of the stack is 7.3.2; ideally keep all components on the same version.)

Edit the /usr/local/filebeat-7.6.1/filebeat.yml file:

[root@api1 filebeat]# cat filebeat.yml

###################### Filebeat Configuration Example #########################

 

# This file is an example configuration file highlighting only the most common

# options. The filebeat.reference.yml file from the same directory contains all the

# supported options with more comments. You can use it as a reference.

#

# You can find the full configuration reference here:

# https://www./guide/en/beats/filebeat/index.html

 

# For more available modules and options, please see the filebeat.reference.yml sample

# configuration file.

 

#=========================== Filebeat inputs =============================

 

filebeat.inputs:

 

# Each - is an input. Most options can be set at the input level, so

# you can use different inputs for various configurations.

# Below are the input specific configurations.

 

- type: log

  enabled: true

  paths:

    - /usr/local/service/tomcat-boss/AppLogs/*.log

  fields:

# Logstash routes log processing based on [fields][service]

    service: boss

 

  exclude_files: ['.gz$']

 

# Merge multi-line logback/log4j output (e.g. stack traces) into one event

 

  multiline.pattern: ^\[

  multiline.negate: true

  multiline.match: after

 

 

#============================= Filebeat modules ===============================

 

filebeat.config.modules:

  # Glob pattern for configuration loading

  path: ${path.config}/modules.d/*.yml

 

  # Set to true to enable config reloading

  #reload.enabled: true

 

  # Period on which files under path should be checked for changes

  #reload.period: 10s

 

#==================== Elasticsearch template setting ==========================

 

setup.template.settings:

  index.number_of_shards: 3

  #index.codec: best_compression

  #_source.enabled: false

 

#================================ General =====================================

 

# The name of the shipper that publishes the network data. It can be used to group

# all the transactions sent by a single shipper in the web interface.

#name:

 

# The tags of the shipper are included in their own field with each

# transaction published.

#tags: ["service-X", "web-tier"]

 

# Optional fields that you can specify to add additional information to the

# output.

#fields:

#  env: staging

 

 

#============================== Dashboards =====================================

# These settings control loading the sample dashboards to the Kibana index. Loading

# the dashboards is disabled by default and can be enabled either by setting the

# options here or by using the `setup` command.

#setup.dashboards.enabled: true

 

# The URL from where to download the dashboards archive. By default this URL

# has a value which is computed based on the Beat name and version. For released

# versions, this URL points to the dashboard archive on the artifacts.

# website.

#setup.dashboards.url:

 

#============================== Kibana =====================================

 

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.

# This requires a Kibana endpoint configuration.

setup.kibana:

 

  # Kibana Host

  # Scheme and port can be left out and will be set to the default (http and 5601)

  # In case you specify and additional path, the scheme is required: http://localhost:5601/path

  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601

  #host: "localhost:5601"

 

  host: "10.1.3.43:5601"

  # Kibana Space ID

  # ID of the Kibana Space into which the dashboards should be loaded. By default,

  # the Default Space will be used.

  #space.id:

 

#============================= Elastic Cloud ==================================

 

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud./).

 

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and

# `setup.kibana.host` options.

# You can find the `cloud.id` in the Elastic Cloud web UI.

#cloud.id:

 

# The cloud.auth setting overwrites the `output.elasticsearch.username` and

# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.

#cloud.auth:

 

#================================ Outputs =====================================

 

# Configure what output to use when sending the data collected by the beat.

 

#-------------------------- Elasticsearch output ------------------------------

#output.elasticsearch:

  # Array of hosts to connect to.

  #hosts: ["localhost:9200"]

 

  # Protocol - either `http` (default) or `https`.

  #protocol: "https"

 

  # Authentication credentials - either API key or username/password.

  #api_key: "id:api_key"

  #username: "elastic"

  #password: "changeme"

 

#----------------------------- Logstash output --------------------------------

output.logstash:

  # The Logstash hosts

 

  hosts: ["10.1.3.43:5044"]

 

  # Optional SSL. By default is off.

  # List of root certificates for HTTPS server verifications

  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

 

  # Certificate for SSL client authentication

  #ssl.certificate: "/etc/pki/client/cert.pem"

 

  # Client Certificate Key

  #ssl.key: "/etc/pki/client/cert.key"

 

#================================ Processors =====================================

 

# Configure processors to enhance or manipulate events generated by the beat.

 

processors:

  - add_host_metadata: ~

  - add_cloud_metadata: ~

  - add_docker_metadata: ~

  - add_kubernetes_metadata: ~

 

#================================ Logging =====================================

 

# Sets log level. The default log level is info.

# Available log levels are: error, warning, info, debug

#logging.level: debug

 

# At debug level, you can selectively enable logging only for some components.

# To enable all selectors use ["*"]. Examples of other selectors are "beat",

# "publish", "service".

#logging.selectors: ["*"]

 

#============================== X-Pack Monitoring ===============================

# filebeat can export internal metrics to a central Elasticsearch monitoring

# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The

# reporting is disabled by default.

 

# Set to true to enable the monitoring reporter.

#monitoring.enabled: false

 

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this

# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch

# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.

#monitoring.cluster_uuid:

 

# Uncomment to send the metrics to Elasticsearch. Most settings from the

# Elasticsearch output are accepted here as well.

# Note that the settings should point to your Elasticsearch *monitoring* cluster.

# Any setting that is not set is automatically inherited from the Elasticsearch

# output configuration, so if you have the Elasticsearch output configured such

# that it is pointing to your Elasticsearch monitoring cluster, you can simply

# uncomment the following line.

#monitoring.elasticsearch:

 

#================================= Migration ==================================

 

# This allows to enable 6.7 migration aliases

#migration.6_to_7.enabled: true
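A note on the multiline settings in the input section: lines matching ^\[ begin a new event, and with negate: true plus match: after, every non-matching line (such as a Java stack-trace line) is appended to the previous event. A rough illustration with grep (the sample log lines are invented):

```shell
# Count event-start lines in a logback-style sample: only lines beginning
# with '[' match multiline.pattern, so the 4 physical lines collapse into
# 2 logical events.
printf '%s\n' \
  '[2020-07-23 10:00:00][ERROR][main][com.example.Foo] boom' \
  'java.lang.NullPointerException' \
  '    at com.example.Foo.bar(Foo.java:42)' \
  '[2020-07-23 10:00:01][INFO ][main][com.example.Foo] recovered' \
  | grep -c '^\['
# prints 2
```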

Start Filebeat. Note that filebeat is a compiled binary, not a shell script, so invoke it directly rather than via sh; an absolute -c path keeps the command working from any directory:

nohup /usr/local/filebeat-7.6.1/filebeat -c /usr/local/filebeat-7.6.1/filebeat.yml &

4. Install Logstash

Create the /usr/local/elk/logstash-7.3.2/config/logstash.conf file with the following pipeline (Logstash does not ship this file; it is passed via the -f flag at startup):

 

input {

    beats {

        port => 5044

    }

}

filter {

    #if [fields][service] == 'boss' {

    #    grok {

    #        match => {

    #           "message" => "\[(?<datetime>\w.*?)\]\[%{LOGLEVEL:level} \]\[(?<thread>\w.*?)\]\[(?<source>\w.*?)\]%{GREEDYDATA:message}" 

    #        }

    #        overwrite => ["message"]

    #    }

    #}

}

output {

    #stdout{ codec => rubydebug }

 

    if [fields][service] == 'boss' {

        elasticsearch {

            hosts => "localhost:9200"

            index =>  "boss-%{+YYYY.MM.dd}"

            #document_type => "log4j_type"  # deprecated: Elasticsearch 7 removed mapping types

        }

    }

 

}

Start Logstash:

/usr/local/elk/logstash-7.3.2/bin/logstash -f /usr/local/elk/logstash-7.3.2/config/logstash.conf
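Before starting for real, Logstash can validate the pipeline syntax with its standard --config.test_and_exit flag:

```shell
# Dry-run: parse the pipeline file and exit without starting the pipeline.
/usr/local/elk/logstash-7.3.2/bin/logstash \
  -f /usr/local/elk/logstash-7.3.2/config/logstash.conf \
  --config.test_and_exit \
  || echo "config check failed (or Logstash is not installed at this path)"
```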

Visit http://ip:5601/ and begin your ELK journey.

 


 

Most of the settings above are left at their defaults; for richer log parsing, look into the grok filter.
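For example, the commented-out filter in the Logstash pipeline could be enabled and extended roughly like this (the grok pattern and the date formats are assumptions about the application's logback layout and must be adapted to the real log format):

```conf
filter {
  if [fields][service] == 'boss' {
    grok {
      # Assumed layout, e.g. "[2020-07-23 10:00:00.123][ERROR ][main][com.example.Foo] boom"
      match => {
        "message" => "\[(?<datetime>\w.*?)\]\[%{LOGLEVEL:level} \]\[(?<thread>\w.*?)\]\[(?<source>\w.*?)\]%{GREEDYDATA:message}"
      }
      overwrite => ["message"]
    }
    # Use the parsed timestamp as the event time instead of the ingest time.
    date {
      match => ["datetime", "yyyy-MM-dd HH:mm:ss.SSS", "yyyy-MM-dd HH:mm:ss"]
    }
  }
}
```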

In practice, Filebeat stops when the terminal window that started it is closed; a small startup script keeps it running in the background. Create start_filebeat.sh (adjust the paths to your actual Filebeat location):

#!/bin/bash

nohup /usr/local/elk/filebeat/filebeat -e -c /usr/local/elk/filebeat/filebeat.yml >/dev/null 2>&1 &
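On systemd hosts, a unit file is a more robust alternative to the nohup script. A hedged sketch, with paths assumed to match start_filebeat.sh above:

```conf
# /etc/systemd/system/filebeat.service (sketch)
[Unit]
Description=Filebeat log shipper
After=network.target

[Service]
ExecStart=/usr/local/elk/filebeat/filebeat -e -c /usr/local/elk/filebeat/filebeat.yml
Restart=always

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl daemon-reload && systemctl enable --now filebeat; systemd then restarts Filebeat automatically if it dies.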
