
            Online Filebeat Deployment and Usage Guide

            Step 1: Install Filebeat
            Reference: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-installation.html
            Step 2: Filebeat directory layout


            Type      Description                                      Location
            home      Home of the Filebeat installation.               {extract.path}
            bin       The location for the binary files.               {extract.path}
            config    The location for configuration files.            {extract.path}
            data      The location for persistent data files.          {extract.path}/data
            logs      The location for the logs created by Filebeat.   {extract.path}/logs

            Step 3: Configure Filebeat
            The default configuration file is filebeat.yml.
            Its contents:
            ###################### Filebeat Configuration Example #########################

            #This file is an example configuration file highlighting only the most common
            #options. The filebeat.reference.yml file from the same directory contains all the
            #supported options with more comments. You can use it as a reference.

            #You can find the full configuration reference here:
            #https://www.elastic.co/guide/en/beats/filebeat/index.html

            #For more available modules and options, please see the filebeat.reference.yml sample
            #configuration file.

            #=========================== Filebeat inputs =============================

            filebeat.inputs:

            #Each - is an input. Most options can be set at the input level, so
            #you can use different inputs for various configurations.
            #Below are the input specific configurations.

            - type: log

              #Change to true to enable this input configuration.
              enabled: false

              #Paths that should be crawled and fetched. Glob based paths.
              paths:

              - /var/log/*.log
              #- c:\programdata\elasticsearch\logs\*

              #Exclude lines. A list of regular expressions to match. It drops the lines that are
              #matching any regular expression from the list.
              #exclude_lines: ['^DBG']

              #Include lines. A list of regular expressions to match. It exports the lines that are
              #matching any regular expression from the list.
              #include_lines: ['^ERR', '^WARN']

              #Exclude files. A list of regular expressions to match. Filebeat drops the files that
              #are matching any regular expression from the list. By default, no files are dropped.
              #exclude_files: ['.gz$']

              #Optional additional fields. These fields can be freely picked
              #to add additional information to the crawled log files for filtering
              #fields:
              #level: debug
              #review: 1

              ### Multiline options

              #Multiline can be used for log messages spanning multiple lines. This is common
              #for Java Stack Traces or C-Line Continuation

              #The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
              #multiline.pattern: ^\[

              #Defines if the pattern set under pattern should be negated or not. Default is false.
              #multiline.negate: false

              #Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
              #that was (not) matched before or after or as long as a pattern is not matched based on negate.
              #Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
              #multiline.match: after

            #============================= Filebeat modules ===============================

            filebeat.config.modules:
              #Glob pattern for configuration loading
              path: ${path.config}/modules.d/*.yml

              #Set to true to enable config reloading
              reload.enabled: false

              #Period on which files under path should be checked for changes
              #reload.period: 10s

            #==================== Elasticsearch template setting ==========================

            setup.template.settings:
              index.number_of_shards: 3
              #index.codec: best_compression
              #_source.enabled: false

            #================================ General =====================================

            #The name of the shipper that publishes the network data. It can be used to group
            #all the transactions sent by a single shipper in the web interface.
            #name:

            #The tags of the shipper are included in their own field with each
            #transaction published.
            #tags: ["service-X", "web-tier"]

            #Optional fields that you can specify to add additional information to the
            #output.
            #fields:
            #env: staging

            #============================== Dashboards =====================================
            #These settings control loading the sample dashboards to the Kibana index. Loading
            #the dashboards is disabled by default and can be enabled either by setting the
            #options here, or by using the -setup CLI flag or the setup command.
            #setup.dashboards.enabled: false

            #The URL from where to download the dashboards archive. By default this URL
            #has a value which is computed based on the Beat name and version. For released
            #versions, this URL points to the dashboard archive on the artifacts.elastic.co
            #website.
            #setup.dashboards.url:

            #============================== Kibana =====================================

            #Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
            #This requires a Kibana endpoint configuration.
            setup.kibana:

            #Kibana Host
            #Scheme and port can be left out and will be set to the default (http and 5601)
            #In case you specify and additional path, the scheme is required: http://localhost:5601/path
            #IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
            #host: "localhost:5601"

            #Kibana Space ID
            #ID of the Kibana Space into which the dashboards should be loaded. By default,
            #the Default Space will be used.
            #space.id:

            #============================= Elastic Cloud ==================================

            #These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

            #The cloud.id setting overwrites the output.elasticsearch.hosts and
            #setup.kibana.host options.
            #You can find the cloud.id in the Elastic Cloud web UI.
            #cloud.id:

            #The cloud.auth setting overwrites the output.elasticsearch.username and
            #output.elasticsearch.password settings. The format is <user>:<pass>.
            #cloud.auth:

            #================================ Outputs =====================================

            #Configure what output to use when sending the data collected by the beat.

            #-------------------------- Elasticsearch output ------------------------------
            output.elasticsearch:
              #Array of hosts to connect to.
              hosts: ["localhost:9200"]

              #Optional protocol and basic auth credentials.
              #protocol: "https"
              #username: "elastic"
              #password: "changeme"

            #----------------------------- Logstash output --------------------------------
            #output.logstash:
            #The Logstash hosts
            #hosts: ["localhost:5044"]

            #Optional SSL. By default is off.
            #List of root certificates for HTTPS server verifications
            #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

            #Certificate for SSL client authentication
            #ssl.certificate: "/etc/pki/client/cert.pem"

            #Client Certificate Key
            #ssl.key: "/etc/pki/client/cert.key"

            #================================ Processors =====================================

            #Configure processors to enhance or manipulate events generated by the beat.

            processors:
              - add_host_metadata: ~
              - add_cloud_metadata: ~

            #================================ Logging =====================================

            #Sets log level. The default log level is info.
            #Available log levels are: error, warning, info, debug
            #logging.level: debug

            #At debug level, you can selectively enable logging only for some components.
            #To enable all selectors use ["*"]. Examples of other selectors are "beat",
            #"publish", "service".
            #logging.selectors: ["*"]

            #============================== Xpack Monitoring ===============================
            #filebeat can export internal metrics to a central Elasticsearch monitoring
            #cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
            #reporting is disabled by default.

            #Set to true to enable the monitoring reporter.
            #xpack.monitoring.enabled: false

            #Uncomment to send the metrics to Elasticsearch. Most settings from the
            #Elasticsearch output are accepted here as well. Any setting that is not set is
            #automatically inherited from the Elasticsearch output configuration, so if you
            #have the Elasticsearch output configured, you can simply uncomment the
            #following line.
            #xpack.monitoring.elasticsearch:

            A detailed explanation of the configuration file can be found at: https://www.cnblogs.com/zlslch/p/6622079.html
            Step 4: Have Filebeat collect each service's logs and store them in ES under an index named after the service
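            The exclude_lines/include_lines options commented out in the config above filter events by regular expression; per the Filebeat documentation, include_lines is applied before exclude_lines. The function below is a simplified, hypothetical sketch of those semantics, not Filebeat's actual Go implementation:

```python
import re

def keep_line(line, include_lines=None, exclude_lines=None):
    # Simplified sketch of Filebeat's line filtering:
    # include_lines runs first; then exclude_lines drops matching lines.
    if include_lines and not any(re.search(p, line) for p in include_lines):
        return False
    if exclude_lines and any(re.search(p, line) for p in exclude_lines):
        return False
    return True

print(keep_line("ERR disk full", include_lines=['^ERR', '^WARN']))  # True
print(keep_line("DBG heartbeat", exclude_lines=['^DBG']))           # False
```

            With neither option set, every line is exported, matching Filebeat's default behavior.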

            1. Write a filebeat-123.yml file with the following contents:
            filebeat.config:
              prospectors:
                path: /data/software/filebeat-6.5.1/conf/*.yml
                reload.enabled: true
                reload.period: 10s
            output.elasticsearch:
              hosts: ["IP:9200"]
              index: "%{[fields][out_topic]}"
            setup.template.name: "customname"
            setup.template.pattern: "customname-*"
            setup.template.overwrite: true
            logging:
              level: debug
            2. Combine it with the file ceshi.yml under the custom conf path:
            - type: log
              paths:
                - /var/log/zookeeper/zookeeper.log
              tags: ["zookeeper"]
              exclude_files: [".gz$"]
              scan_frequency: 1s
              fields:
                server_name: hostname
                out_topic: "zookeeper_log"
              multiline:
                pattern: "^\\S"
                match: after
            - type: log
              paths:
                - /var/log/nginx/access.log
              tags: ["nginx"]
              exclude_files: [".gz$"]
              scan_frequency: 1s
              fields:
                server_name: hostname
                out_topic: "nginx_log"
              multiline:
                pattern: "^\\S"
                match: after

            The configuration above collects the ZooKeeper and Nginx logs and names their indices zookeeper_log and nginx_log respectively.
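            The index: "%{[fields][out_topic]}" setting in filebeat-123.yml expands the event's fields.out_topic value into the index name, which is how each service gets its own index. A rough sketch of that expansion (the real implementation is Beats' Go format-string engine; this is only an illustration):

```python
import re

def resolve_format(fmt, event):
    # Expand a Beats-style format string such as "%{[fields][out_topic]}"
    # by walking the bracketed keys through the event dict.
    def expand(match):
        value = event
        for key in re.findall(r"\[([^\]]+)\]", match.group(1)):
            value = value[key]
        return str(value)
    return re.sub(r"%\{([^}]+)\}", expand, fmt)

event = {"fields": {"out_topic": "zookeeper_log"}}
print(resolve_format("%{[fields][out_topic]}", event))  # zookeeper_log
```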

            Step 5: Start Filebeat and view the generated indices in ES

            ./filebeat -e -c filebeat-123.yml
            Then check the indices in ES. (screenshot of the index list omitted)
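            The index check can also be scripted against Elasticsearch's standard _cat/indices API. A small helper, where the host is a placeholder for your ES node:

```python
import urllib.request

def cat_indices_url(host):
    # Standard Elasticsearch cat API endpoint for listing indices.
    return f"http://{host}/_cat/indices?v"

def list_indices(host):
    # Requires a reachable Elasticsearch node at `host` (placeholder).
    with urllib.request.urlopen(cat_indices_url(host)) as resp:
        return resp.read().decode()

print(cat_indices_url("localhost:9200"))  # http://localhost:9200/_cat/indices?v
```

            Running list_indices against the node used in filebeat-123.yml should show the nginx_log and zookeeper_log indices once Filebeat has shipped events.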

            The nginx_log and zookeeper_log indices have been created in ES. Next, view their contents in Kibana. (screenshots omitted)

            We can see that real-time logs are already flowing into the zookeeper_log index. To have the view update automatically, enable Kibana's auto-refresh (screenshots omitted); one minute later the logs can be seen updating in real time in Kibana.

            Article title: Online Filebeat Deployment and Usage Guide
            Article URL: http://www.jbt999.com/article48/pdjhhp.html

