Commit 6f03fb9

Merge pull request #2372 from fluent/lynettemiles/sc-162129/fb-broken-links

2 parents 6e7d8e9 + f91512c commit 6f03fb9

File tree

5 files changed (+8, -8 lines)


administration/configuring-fluent-bit/yaml/upstream-servers-section.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,6 +1,6 @@
 # Upstream servers
 
-The `upstream_servers` section of YAML configuration files defines a group of endpoints, referred to as nodes. Nodes are used by output plugins to distribute data in a round-robin fashion. Use this section for plugins that require load balancing when sending data. Examples of plugins that support this capability include [Forward](../../../pipeline/outputs/forward.md) and [Elasticsearch](../../../pipeline/outputs/elasticsearch).
+The `upstream_servers` section of YAML configuration files defines a group of endpoints, referred to as nodes. Nodes are used by output plugins to distribute data in a round-robin fashion. Use this section for plugins that require load balancing when sending data. Examples of plugins that support this capability include [Forward](../../../pipeline/outputs/forward.md) and [Elasticsearch](../../../pipeline/outputs/elasticsearch.md).
 
 The `upstream_servers` section require specifying a `name` for the group and a list
 of `nodes`. The following example defines two upstream server groups, `forward-balancing` and `forward-balancing-2`:
```
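The example the changed file refers to is not included in this diff. As a rough sketch of the shape the section describes (a `name` for the group plus a list of `nodes`), one group might look like the following; the node names, hosts, and ports are illustrative placeholders, not values from this commit:

```yaml
# Hypothetical sketch of an upstream_servers group.
# Node names, hosts, and ports are example values only.
upstream_servers:
  - name: forward-balancing
    nodes:
      - name: node-1
        host: 127.0.0.1
        port: 43000
      - name: node-2
        host: 127.0.0.1
        port: 44000
```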

installation/downloads/aws-container.md

Lines changed: 4 additions & 4 deletions

```diff
@@ -7,20 +7,20 @@ AWS maintains a distribution of Fluent Bit that combines the latest official rel
 The [AWS for Fluent Bit](https://github.com/aws/aws-for-fluent-bit) image contains Go Plugins for:
 
 - Amazon CloudWatch as `cloudwatch_logs`. See the
-  [Fluent Bit docs](../../pipeline/outputs/cloudwatch) or the
+  [Fluent Bit docs](../../pipeline/outputs/cloudwatch.md) or the
   [Plugin repository](https://github.com/aws/amazon-cloudwatch-logs-for-fluent-bit).
 - Amazon Kinesis Data Firehose as `kinesis_firehose`. See the
-  [Fluent Bit docs](../../pipeline/outputs/firehose) or the
+  [Fluent Bit docs](../../pipeline/outputs/firehose.md) or the
   [Plugin repository](https://github.com/aws/amazon-kinesis-firehose-for-fluent-bit).
 - Amazon Kinesis Data Streams as `kinesis_streams`. See the
-  [Fluent Bit docs](../../pipeline/outputs/kinesis) or the
+  [Fluent Bit docs](../../pipeline/outputs/kinesis.md) or the
   [Plugin repository](https://github.com/aws/amazon-kinesis-streams-for-fluent-bit).
 
 These plugins are higher performance than Go plugins.
 
 Also, Fluent Bit includes an S3 output plugin named `s3`.
 
-- [Amazon S3](../../pipeline/outputs/s3)
+- [Amazon S3](../../pipeline/outputs/s3.md)
 
 ## Versions and regional repositories
 
```
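To connect the plugin list above to a concrete configuration, a `cloudwatch_logs` output in classic-mode syntax might look like the following sketch; the region, log group name, and stream prefix are placeholder values, not part of this commit:

```text
[OUTPUT]
    name              cloudwatch_logs
    match             *
    region            us-east-1
    log_group_name    fluent-bit-logs
    log_stream_prefix from-fluent-bit-
    auto_create_group On
```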

installation/downloads/kubernetes.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -14,7 +14,7 @@ description: Kubernetes Production Grade Log Processor
 
 Before getting started it's important to understand how Fluent Bit will be deployed. Kubernetes manages a cluster of nodes. The Fluent Bit log agent tool needs to run on every node to collect logs from every pod. Fluent Bit is deployed as a DaemonSet, which is a pod that runs on every node of the cluster.
 
-When Fluent Bit runs, it reads, parses, and filters the logs of every pod. In addition, Fluent Bit adds metadata to each entry using the [Kubernetes](../../pipeline/filters/kubernetes) filter plugin.
+When Fluent Bit runs, it reads, parses, and filters the logs of every pod. In addition, Fluent Bit adds metadata to each entry using the [Kubernetes](../../pipeline/filters/kubernetes.md) filter plugin.
 
 The Kubernetes filter plugin talks to the Kubernetes API Server to retrieve relevant information such as the `pod_id`, `labels`, and `annotations`. Other fields, such as `pod_name`, `container_id`, and `container_name`, are retrieved locally from the log file names. All of this is handled automatically, and no intervention is required from a configuration aspect.
 
```
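A minimal classic-mode sketch of enabling the Kubernetes filter described above; the `kube.*` tag pattern follows the common convention used with the Tail input and is an assumption, not something shown in this diff:

```text
[FILTER]
    name      kubernetes
    match     kube.*
    merge_log on
```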

installation/downloads/macos.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -163,7 +163,7 @@ To make the access path easier to Fluent Bit binary, extend the `PATH` variable:
 export PATH=/opt/fluent-bit/bin:$PATH
 ```
 
-To test, try Fluent Bit by generating a test message using the [Dummy input plugin](../../pipeline/inputs/dummy) which prints to the standard output interface every one second:
+To test, try Fluent Bit by generating a test message using the [Dummy input plugin](../../pipeline/inputs/dummy.md) which prints to the standard output interface every one second:
 
 ```shell
 fluent-bit -i dummy -o stdout -f 1
````

pipeline/filters/multiline-stacktrace.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -40,7 +40,7 @@ The plugin supports the following configuration parameters:
 
 | Property | Description |
 | -------- | ----------- |
-| `multiline.parser` | Specify one or multiple [Multiline Parser definitions](../pipeline/parsers/multiline-parsing.md) to apply to the content. You can specify multiple multiline parsers to detect different formats by separating them with a comma. |
+| `multiline.parser` | Specify one or multiple [Multiline Parser definitions](../../pipeline/parsers/multiline-parsing.md) to apply to the content. You can specify multiple multiline parsers to detect different formats by separating them with a comma. |
 | `multiline.key_content` | Key name that holds the content to process. A multiline parser definition can specify the `key_content` This option allows for overwriting that value for the purpose of the filter. |
 | `mode` | Mode can be `parser` for regular expression concatenation, or `partial_message` to concatenate split Docker logs. |
 | `buffer` | Enable buffered mode. In buffered mode, the filter can concatenate multiple lines from inputs that ingest records one by one (like Forward), rather than in chunks, re-emitting them into the beginning of the pipeline (with the same tag) using the `in_emitter` instance. With buffer off, this filter won't work with most inputs, except Tail. |
```
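As a usage sketch of the parameters in the table, a filter that concatenates multiline stack traces held in a `log` key might look like this; the built-in `go` multiline parser and the catch-all match pattern are illustrative assumptions:

```text
[FILTER]
    name                  multiline
    match                 *
    multiline.key_content log
    multiline.parser      go
```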
