# Orchestrating an ELK Stack (Elasticsearch, Logstash, Kibana) with Docker Compose

# I. Building the ELK Stack

  1. Orchestrate the containers with Docker Compose: elk/docker-compose.yml

    version: '3.8'
    
    services:
      elasticsearch:
        image: elasticsearch:7.9.3
        container_name: ufs-elasticsearch
        restart: always
        privileged: true
        ports:
          - "9200:9200"
          - "9300:9300"
        environment:
          - "cluster.name=elasticsearch-spring" # set the cluster name
          - "discovery.type=single-node" # run as a single-node cluster
          - "ES_JAVA_OPTS=-Xms64m -Xmx128m" # JVM heap size
        volumes:
          - ./elasticsearch/data:/usr/share/elasticsearch/data # data directory mount
          - ./elasticsearch/plugins:/usr/share/elasticsearch/plugins # plugins directory mount
          - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml # config file mount
    
      kibana:
        container_name: ufs-kibana
        image: kibana:7.9.3
        restart: always
        environment:
          - ELASTICSEARCH_HOSTS=http://elasticsearch:9200 # Elasticsearch endpoint (Kibana 7.x uses ELASTICSEARCH_HOSTS, not ELASTICSEARCH_URL)
          - I18N_LOCALE=zh-CN # Chinese UI localization
        ports:
          - "5601:5601"
        depends_on:
          - elasticsearch
    
      logstash:
        image: logstash:7.9.3
        restart: always
        container_name: ufs-logstash
        volumes:
          - ./logstash/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf # mount the Logstash pipeline config
        ports:
          - "4560:4560"
        environment:
          LS_JAVA_OPTS: "-Xms512m -Xmx512m"
        depends_on:
          - elasticsearch
        links:
          - elasticsearch:es # the elasticsearch service is also reachable via the hostname "es"
    
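Before the first start, it can help to pre-create the host paths referenced by the bind mounts; otherwise Docker creates the missing ones as root-owned directories. This is a hedged sketch matching the paths in the compose file above:

```shell
# Pre-create the host directories and files used by the bind mounts so
# Docker does not create them as root-owned directories on first start.
mkdir -p elk/elasticsearch/data elk/elasticsearch/plugins \
         elk/elasticsearch/config elk/logstash/pipeline
touch elk/elasticsearch/config/elasticsearch.yml \
      elk/logstash/pipeline/logstash.conf
```

The two files are filled in during steps 2 and 3 below.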
  2. Prepare the Elasticsearch config file elk/elasticsearch/config/elasticsearch.yml, since the volume mount above references it

    network.host: 0.0.0.0  # listen on all interfaces
    http.cors.enabled: true # CORS settings
    http.cors.allow-origin: "*"
    xpack.security.enabled: false  # password auth; set to true to enable (default username: elastic)
    
  3. Prepare the Logstash config file elk/logstash/pipeline/logstash.conf, since the volume mount above references it

    input {
      tcp {
        mode => "server"
        host => "0.0.0.0"
        port => 4560
        codec => json_lines
      }
    }
    
    output {
        elasticsearch {
            hosts =>["elasticsearch:9200"]
            index => "boutique-logstash-%{+yyyy.MM.dd}"
            #user => elastic # username
            #password => "neihan241.."   # uncomment these if Elasticsearch has password auth enabled
        }
    }
    
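The `tcp` input above expects one JSON object per newline-terminated line (`json_lines`), which is the same framing the Spring Boot appender uses later. As a quick smoke test before wiring up Spring Boot, an event can be pushed in by hand; this is an illustrative sketch (host, port, and field names are assumptions):

```python
import json
import socket

def encode_json_lines(event: dict) -> bytes:
    # The json_lines codec reads one JSON document per newline-terminated line.
    return (json.dumps(event) + "\n").encode("utf-8")

def send_test_event(host: str = "localhost", port: int = 4560) -> None:
    # Push a single hand-crafted event into the Logstash TCP input.
    event = {"severity": "INFO", "rest": "hello from a manual test"}
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(encode_json_lines(event))
```

After calling `send_test_event()`, the event should show up in a `boutique-logstash-*` index.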
  4. Create and start the containers

    docker compose up -d elasticsearch kibana logstash
    
  5. If Elasticsearch fails with the error below, grant full permissions on elk/elasticsearch/data: chmod 777 ./elk/elasticsearch/data. If there is no error, skip this step.

    uncaught exception in thread [main]
    ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/usr/share/elasticsearch/data/nodes];
    Likely root cause: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/nodes
    	at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:90)
    
  6. If the error persists, restart the containers.

  7. Open Kibana at ip:5601, then go to Kibana -> Discover -> Index Patterns / Create index pattern. The index name can be changed in the Logstash config file; click Next and you are done.
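The index pattern created in Kibana (e.g. `boutique-logstash-*`) matches the daily indices produced by the `%{+yyyy.MM.dd}` sprintf in the Logstash output above. A small sketch of how that date suffix expands (Logstash renders the event's `@timestamp` in UTC; the prefix here is the one from the config):

```python
from datetime import date

def index_for(day: date, prefix: str = "boutique-logstash") -> str:
    # Logstash's %{+yyyy.MM.dd} renders the event @timestamp as
    # year.month.day, so one index is created per day.
    return f"{prefix}-{day:%Y.%m.%d}"
```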

# II. Pushing Spring Boot Logs to ELK

  1. In Spring Boot, use the logback-spring.xml configuration file approach.

  2. Maven dependency

    <!-- Logstash integration -->
    <dependency>
        <groupId>net.logstash.logback</groupId>
        <artifactId>logstash-logback-encoder</artifactId>
        <version>5.2</version>
    </dependency>
    
  3. Add the Logstash appender to logback-spring.xml

    <!-- Appender that ships JSON-formatted logs to Logstash -->
    <appender name="logstash"
              class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!-- Logstash server address -->
        <destination>logstash-host-IP:4560</destination>
        <!-- log output encoder -->
        <encoder
                 class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "severity": "%level",
                        "service": "${springAppName:-}",
                        "trace": "%X{X-B3-TraceId:-}",
                        "span": "%X{X-B3-SpanId:-}",
                        "exportable": "%X{X-Span-Export:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger{40}",
                        "rest": "%message"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>
    
    ....
    <root level="INFO">
        <appender-ref ref="logstash" />
    </root>
    
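To see what the `LogstashTcpSocketAppender` actually sends without a running ELK stack, a tiny stand-in for the Logstash TCP input can be handy. This is a hedged sketch, not part of the original setup (the port and usage are illustrative):

```python
import json
import socket

def receive_one_event(port: int = 4560) -> dict:
    """Accept one connection on the given port, read a single
    newline-terminated JSON event, and return it as a dict."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn, conn.makefile("r", encoding="utf-8") as stream:
            return json.loads(stream.readline())
```

Point `<destination>` at `127.0.0.1:4560`, trigger a log line in the application, and the returned dict shows the JSON fields produced by the `<pattern>` above.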