This repository has been archived by the owner on Apr 24, 2023. It is now read-only.

Timestamp_Key not working when using kafka output #49

Open
kaybinwong opened this issue Mar 11, 2019 · 4 comments

Comments

@kaybinwong

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: logging
  labels:
    k8s-app: fluent-bit
data:
  # Configuration files: server, input, filters and output
  # ======================================================
  fluent-bit.conf: |
    [SERVICE]
        Flush         1
        Log_Level     info
        Daemon        off
        Parsers_File  parsers.conf
        HTTP_Server   On
        HTTP_Listen   0.0.0.0
        HTTP_Port     2020

    @INCLUDE input-kubernetes.conf
    @INCLUDE filter-kubernetes.conf
    @INCLUDE output-kafka.conf

  input-kubernetes.conf: |
    [INPUT]
        Name              tail
        Tag               kube.*
        Path              /var/log/containers/*.log
        Parser            docker
        DB                /var/log/flb_kube.db
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On
        Refresh_Interval  10

  filter-kubernetes.conf: |
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Kube_URL            https://kubernetes.default.svc.cluster.local:443
        Merge_Log           On
        K8S-Logging.Parser  On

  output-kafka.conf: |
    [OUTPUT]
        Name           kafka
        Match          *
        Brokers        192.168.12.104:9092,192.168.12.105:9092,192.168.12.106:9092
        Topics         app-logs-k8s
        Format         json
        Timestamp_Key  @timestamp
        Timestamp_Format iso8601
        Retry_Limit    false
        # hides errors "Receive failed: Disconnected" when kafka kills idle connections
        rdkafka.log.connection.close false
        rdkafka.queue.buffering.max.kbytes 10240
        rdkafka.request.required.acks 1
......
{"@timestamp":1552268032.077178, "log":"2019-03-11 01:33:52.076 [INFO][76] ipsets.go 222: Asked to resync with the dataplane on next update. family=\"inet\"\n", "stream":"stdout", "time":"2019-03-11T01:33:52.077177656Z", "kubernetes":{"pod_name":"calico-node-ls9fs", "namespace_name":"kube-system", "pod_id":"76fe49b0-3da4-11e9-9109-00163e06302b", "labels":{"controller-revision-hash":"755ffcc586", "k8s-app":"calico-node", "pod-template-generation":"1"}, "annotations":{"kubespray.etcd-cert/serial":"DDC485DA1003CC0D", "prometheus.io/port":"9091", "prometheus.io/scrape":"true", "scheduler.alpha.kubernetes.io/critical-pod":""}, "host":"a-docker-cluster01", "container_name":"calico-node", "docker_id":"bb8eb87a56fc0c48cc9ac23e64f18707331943b8cf8c905fc7350423a7a040c2"}}
{"@timestamp":1552268032.077077, "log":"2019-03-11 01:33:52.076 [INFO][76] int_dataplane.go 733: Applying dataplane updates\n", "stream":"stdout", "time":"2019-03-11T01:33:52.077076685Z", "kubernetes":{"pod_name":"calico-node-ls9fs", "namespace_name":"kube-system", "pod_id":"76fe49b0-3da4-11e9-9109-00163e06302b", "labels":{"controller-revision-hash":"755ffcc586", "k8s-app":"calico-node", "pod-template-generation":"1"}, "annotations":{"kubespray.etcd-cert/serial":"DDC485DA1003CC0D", "prometheus.io/port":"9091", "prometheus.io/scrape":"true", "scheduler.alpha.kubernetes.io/critical-pod":""}, "host":"a-docker-cluster01", "container_name":"calico-node", "docker_id":"bb8eb87a56fc0c48cc9ac23e64f18707331943b8cf8c905fc7350423a7a040c2"}}
{"@timestamp":1552268032.077666, "log":"2019-03-11 01:33:52.077 [INFO][76] ipsets.go 253: Resyncing ipsets with dataplane. family=\"inet\"\n", "stream":"stdout", "time":"2019-03-11T01:33:52.077666066Z", "kubernetes":{"pod_name":"calico-node-ls9fs", "namespace_name":"kube-system", "pod_id":"76fe49b0-3da4-11e9-9109-00163e06302b", "labels":{"controller-revision-hash":"755ffcc586", "k8s-app":"calico-node", "pod-template-generation":"1"}, "annotations":{"kubespray.etcd-cert/serial":"DDC485DA1003CC0D", "prometheus.io/port":"9091", "prometheus.io/scrape":"true", "scheduler.alpha.kubernetes.io/critical-pod":""}, "host":"a-docker-cluster01", "container_name":"calico-node", "docker_id":"bb8eb87a56fc0c48cc9ac23e64f18707331943b8cf8c905fc7350423a7a040c2"}}
{"@timestamp":1552268032.082286, "log":"2019-03-11 01:33:52.082 [INFO][76] ipsets.go 295: Finished resync family=\"inet\" numInconsistenciesFound=0 resyncDuration=4.928899ms\n", "stream":"stdout", "time":"2019-03-11T01:33:52.082285552Z", "kubernetes":{"pod_name":"calico-node-ls9fs", "namespace_name":"kube-system", "pod_id":"76fe49b0-3da4-11e9-9109-00163e06302b", "labels":{"controller-revision-hash":"755ffcc586", "k8s-app":"calico-node", "pod-template-generation":"1"}, "annotations":{"kubespray.etcd-cert/serial":"DDC485DA1003CC0D", "prometheus.io/port":"9091", "prometheus.io/scrape":"true", "scheduler.alpha.kubernetes.io/critical-pod":""}, "host":"a-docker-cluster01", "container_name":"calico-node", "docker_id":"bb8eb87a56fc0c48cc9ac23e64f18707331943b8cf8c905fc7350423a7a040c2"}}
{"@timestamp":1552268032.082313, "log":"2019-03-11 01:33:52.082 [INFO][76] int_dataplane.go 747: Finished applying updates to dataplane. msecToApply=5.248031999999999\n", "stream":"stdout", "time":"2019-03-11T01:33:52.082312879Z", "kubernetes":{"pod_name":"calico-node-ls9fs", "namespace_name":"kube-system", "pod_id":"76fe49b0-3da4-11e9-9109-00163e06302b", "labels":{"controller-revision-hash":"755ffcc586", "k8s-app":"calico-node", "pod-template-generation":"1"}, "annotations":{"kubespray.etcd-cert/serial":"DDC485DA1003CC0D", "prometheus.io/port":"9091", "prometheus.io/scrape":"true", "scheduler.alpha.kubernetes.io/critical-pod":""}, "host":"a-docker-cluster01", "container_name":"calico-node", "docker_id":"bb8eb87a56fc0c48cc9ac23e64f18707331943b8cf8c905fc7350423a7a040c2"}}
{"@timestamp":1552268034.247845, "log":"172.18.88.200 - - [11/Mar/2019:01:33:54 +0000] \"GET / HTTP/1.1\" 200 3420\n", "stream":"stdout", "time":"2019-03-11T01:33:54.247845484Z", "kubernetes":{"pod_name":"passport-phpmyadmin-cfd7b6649-59flh", "namespace_name":"infrastructure", "pod_id":"406ad9e5-3e4e-11e9-9109-00163e06302b", "labels":{"app":"phpmyadmin", "chart":"phpmyadmin-2.0.5", "pod-template-hash":"cfd7b6649", "release":"passport"}, "host":"a-docker-cluster01", "container_name":"phpmyadmin", "docker_id":"01323fddd854c7079b338ad3e251e4dc2f180cf69483ef9aa2d100baf1fd8f1b"}}

The @timestamp field is still numeric, and Logstash throws an error:

......
[2019-03-11T09:22:46,971][WARN ][org.logstash.Event       ] Unrecognized @timestamp value type=class org.jruby.RubyFloat
[2019-03-11T09:22:47,953][WARN ][org.logstash.Event       ] Unrecognized @timestamp value type=class org.jruby.RubyFloat
[2019-03-11T09:22:47,966][WARN ][org.logstash.Event       ] Unrecognized @timestamp value type=class org.jruby.RubyFloat
[2019-03-11T09:22:50,953][WARN ][org.logstash.Event       ] Unrecognized @timestamp value type=class org.jruby.RubyFloat
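For reference, the difference between Fluent Bit's `double` and `iso8601` timestamp formats can be sketched in a few lines of Python, using the sample value from the log output above (note: Fluent Bit's exact string formatting may differ slightly, e.g. in precision or timezone suffix):

```python
from datetime import datetime, timezone

# With Timestamp_Format "double" (the default), Fluent Bit emits the
# timestamp as seconds.microseconds, which Logstash cannot coerce
# into its @timestamp field (hence "Unrecognized @timestamp value").
double_ts = 1552268032.077178

# With Timestamp_Format "iso8601", the same instant is emitted as an
# ISO 8601 string, which Logstash accepts.
iso_ts = datetime.fromtimestamp(double_ts, tz=timezone.utc).isoformat()

print(iso_ts)  # 2019-03-11T01:33:52.077178+00:00
```

This matches the `time` field already present in the records above (`2019-03-11T01:33:52.077177656Z`), confirming the numeric value is a plain UNIX epoch with fractional seconds.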
@sherwinwangs

same issue, following

@sherwinwangs

Try renaming Timestamp_Key to something else; please do not use @timestamp when outputting to Kafka.
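A renamed key might look like this in the Kafka output section (a sketch; `fluentbit_ts` is an arbitrary name chosen here for illustration, not from the thread — Logstash will then assign its own @timestamp on ingest):

    [OUTPUT]
        Name             kafka
        Match            *
        Brokers          192.168.12.104:9092
        Topics           app-logs-k8s
        Format           json
        Timestamp_Key    fluentbit_ts
        Timestamp_Format iso8601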

@sherwinwangs

Finally I found out that the @timestamp value 1552268032.077178 cannot be parsed by Logstash. It supports UNIX, UNIX_MS, etc., but not this type. If you have a better solution, please let me know. I solved the problem by not sending a field named @timestamp to Kafka.

If you send logs directly to Kafka and then read them from Kafka with Logstash, it works fine. But when Logstash reads the messages from Kafka and tries to parse them, @timestamp is expected to be a Timestamp object; if it is parsed as a string or a float instead, something goes wrong.

@queenns

queenns commented Jan 26, 2021

@kaybinwong @sherwinwangs
I have exactly the same problem. I am currently using Fluent Bit version 1.3.7, and found the answer in the official documentation.

Key                Description              Default
Timestamp_Format   'iso8601' or 'double'    double

https://docs.fluentbit.io/manual/v/1.3/output/kafka

Configure correctly:

[OUTPUT]
    Name             kafka
    Match            kube.*
    Brokers          172.18.96.29:9092
    Topics           my-topic
    Timestamp_Format iso8601
    rdkafka.compression.type lz4
