Hmm, now I see. I would say this is not the preferable way to collect logs; Vector (or something similar) should be used instead, right? https://uptrace.dev/ingest/logs/vector

In case anybody is searching for a solution, these Helm values configure Vector to ship logs from all pods to Uptrace:

```yaml
role: Agent
service:
  enabled: false
customConfig:
  data_dir: /vector-data-dir
  sources:
    kubernetes_logs:
      type: kubernetes_logs
  transforms:
    enrich_metadata:
      type: remap
      inputs: ["kubernetes_logs"]
      source: |
        .namespace = .kubernetes.pod_namespace
        .container = .kubernetes.container_name
        .pod = .kubernetes.pod_name
        .node = .kubernetes.pod_node_name
        .owner = .kubernetes.pod_owner
        .image = .kubernetes.container_image
  sinks:
    uptrace:
      type: http
      method: post
      inputs: ["enrich_metadata"]
      uri: "http://my-uptrace.uptrace.svc.cluster.local:14318/api/v1/vector/logs"
      encoding:
        codec: json
      framing:
        method: newline_delimited
      compression: gzip
      request:
        headers:
          uptrace-dsn: "http://project2_secret_token@localhost:14318?grpc=14317"
```
When possible, we should probably prefer otelcol, as long as it works.

I guess this uses the official Helm chart? I like that someone else will be maintaining it... going to check whether there is a similar solution for otelcol...
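For reference, a comparable setup with the official opentelemetry-collector Helm chart might look like the sketch below. The `logsCollection` preset wires up the filelog receiver for all pod logs when the chart runs as a DaemonSet; the Uptrace endpoint and DSN are assumptions reused from the Vector example above and would need adjusting for a real cluster:

```yaml
# Sketch of values for the official opentelemetry-collector Helm chart.
# Endpoint and DSN are assumptions copied from the Vector example above.
mode: daemonset
presets:
  logsCollection:
    enabled: true        # mounts /var/log/pods and enables the filelog receiver
config:
  exporters:
    otlp:
      endpoint: my-uptrace.uptrace.svc.cluster.local:14317
      tls:
        insecure: true
      headers:
        uptrace-dsn: "http://project2_secret_token@localhost:14318?grpc=14317"
  service:
    pipelines:
      logs:
        exporters: [otlp]
```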
Despite many whitespace changes, LGTM. Thank you very much :-)
This is an addition to #50.
It adds a configuration which allows OTEL to receive logs from all pods.
I use it this way as a one-for-everything solution for metrics, tracing, and log management in my clusters.
I am not sure whether this should be enabled by default. If not, some switches/configuration options could be added. What would be your opinion on this?
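If it should not be on by default, one way to gate it could be an opt-in flag in the chart values (a hypothetical sketch; `logsCollection.enabled` is an invented key, not an existing value in this chart):

```yaml
# values.yaml — hypothetical opt-in flag for the log-collection config
logsCollection:
  enabled: false
```

The corresponding template block would then be wrapped in `{{- if .Values.logsCollection.enabled }} ... {{- end }}` so the log pipeline is only rendered when explicitly requested.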