This repository has been archived by the owner on Nov 15, 2022. It is now read-only.

Docs/standardize docs for config builder #111

Open: wants to merge 4 commits into base `main`
docs/CONFIGURE_LOG_EXPORT_CONTAINER.md (4 changes: 2 additions & 2 deletions)
@@ -10,8 +10,8 @@ nav_order: 2

### Required configuration

-- **LOG_EXPORT_CONTAINER_INPUT**. Container input format (`syslog-json`, `syslog-csv`, `tcp-json`, `tcp-csv`, `file-json` or `file-csv`). Default: `syslog-json`
-- **LOG_EXPORT_CONTAINER_OUTPUT**. Container output storage (`stdout`, `remote-syslog`, `s3`, `cloudwatch`, `splunk-hec`, `datadog`, `azure-loganalytics`, `sumologic`, `kafka`, `mongo`, `logz`, `loki`, `elasticsearch` and/or `bigquery`). Default: `stdout`. You could configure multiple storages, for example: `stdout s3 datadog`.
+* **LOG_EXPORT_CONTAINER_INPUT**. Container input format (`syslog-json`, `syslog-csv`, `tcp-json`, `tcp-csv`, `file-json` or `file-csv`). Default = `syslog-json`.
+* **LOG_EXPORT_CONTAINER_OUTPUT**. Container output storage (`stdout`, `remote-syslog`, `s3`, `cloudwatch`, `splunk-hec`, `datadog`, `azure-loganalytics`, `sumologic`, `kafka`, `mongo`, `logz`, `loki`, `elasticsearch` and/or `bigquery`). Default = `stdout`. You can configure multiple storages, for example: `stdout s3 datadog`.

When using `LOG_EXPORT_CONTAINER_INPUT=file-json` or `LOG_EXPORT_CONTAINER_INPUT=file-csv`, add the variables listed in [CONFIGURE_FILE_INPUT.md](inputs/CONFIGURE_FILE_INPUT.md).
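
For illustration (not part of this diff), a minimal env-file sketch of the two required variables, using the multi-output example above:

```
# Sketch only: syslog JSON in, three outputs (values from the examples above)
LOG_EXPORT_CONTAINER_INPUT=syslog-json
LOG_EXPORT_CONTAINER_OUTPUT=stdout s3 datadog
```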

@@ -9,32 +9,28 @@ nav_order: 9

First, to make this work, you need to provide the following variable:

-- **SDM_ADMIN_TOKEN**. Admin Token created in SDM Web UI. You need to check the options `Activities`, `Datasources`, `Users`, `Roles` and `Gateways`
-to have permissions to extract all logs from the SDM CLI audit command.
+* **SDM_ADMIN_TOKEN**. Admin Token created in the SDM Web UI. The token must have the audit permissions for `Activities`, `Datasources`, `Users`, `Roles` and `Gateways`.

**NOTE**: if you intend to run LEC locally, you'll need to install the [SDM CLI](https://www.strongdm.com/docs/user-guide/client-installation).

## Configure Periodic Audit Data Extraction

The Log Export Container uses the [fluentd input exec plugin](https://docs.fluentd.org/input/exec) to extract the logs from the strongDM Audit command.
To export the logs about activities, resources, users and roles coming from the strongDM Audit command, you need to specify the value of the following
-variable with the name of the entity (activities, resources, users or roles) and the extract interval in minutes (you should follow the syntax
-shown below where we have `entity_name/extract_interval` space-separated):
+variable:

-```
-LOG_EXPORT_CONTAINER_EXTRACT_AUDIT=activities/15 resources/480 users/480 roles/480
-```
+* **LOG_EXPORT_CONTAINER_EXTRACT_AUDIT**. The value should be the names of the entities (activities, resources, users or roles) and their extract intervals in minutes, following the syntax `entity_name/extract_interval` (space-separated for each entity). E.g., `activities/15 resources/480 users/480 roles/480`.

It is worth noting that if you do not specify the interval value after each `/`, the default interval values for each entity will be as defined above.

If you want to specifically extract the activity logs you can also use the variables below:

-- `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT_ACTIVITIES=true` Variable responsible for indicating whether activity logs will be extracted, default = false.
-- `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT_ACTIVITIES_INTERVAL=15` Interval in minutes for running the extractor script, default = 15.
+* **LOG_EXPORT_CONTAINER_EXTRACT_AUDIT_ACTIVITIES**. Indicates whether activity logs will be extracted. Default = `false`.
+* **LOG_EXPORT_CONTAINER_EXTRACT_AUDIT_ACTIVITIES_INTERVAL**. Interval in minutes for running the extractor script for activities. Default = `15`.

However, be aware that if these variables are set together with `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT`, their values will take priority over `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT`.

-**NOTE**: the variables `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT_ACTIVITIES` and `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT_ACTIVITIES_INTERVAL`
+**NOTE**: The variables `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT_ACTIVITIES` and `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT_ACTIVITIES_INTERVAL`
will be deprecated, so we encourage you to use the `LOG_EXPORT_CONTAINER_EXTRACT_AUDIT` variable instead.
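
To tie the token and the schedule together, a minimal env-file sketch (the token value is a placeholder):

```
# Placeholder token; must carry the audit permissions listed above
SDM_ADMIN_TOKEN=<your-admin-token>
# entity_name/extract_interval pairs, space-separated
LOG_EXPORT_CONTAINER_EXTRACT_AUDIT=activities/15 resources/480 users/480 roles/480
```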

## Configure Stream
docs/inputs/CONFIGURE_FILE_INPUT.md (2 changes: 1 addition & 1 deletion)
@@ -9,7 +9,7 @@ nav_order: 9
The Log Export Container uses [fluent plugin tail](https://docs.fluentd.org/input/tail).
In order to enable it you need to specify `LOG_EXPORT_CONTAINER_INPUT=file-json` or `LOG_EXPORT_CONTAINER_INPUT=file-csv` and provide the following variables:

-- **LOG_FILE_PATH**. Log file path, e.g. `/var/log/sdm/logs.log`
+* **LOG_FILE_PATH**. Log file path. E.g., `/var/log/sdm/logs.log`.
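
A minimal sketch of the file input, using the example path above:

```
LOG_EXPORT_CONTAINER_INPUT=file-json
LOG_FILE_PATH=/var/log/sdm/logs.log
```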


## Configuration using docker
docs/monitoring/CONFIGURE_PROMETHEUS.md (4 changes: 3 additions & 1 deletion)
@@ -6,7 +6,9 @@ When Prometheus is enabled, an endpoint is available in port `24321` allowing to
- `fluentd_output_status_emit_count` - the total count of forwarded logs by output (e.g.: `stdout`, `remote-syslog`, `s3`, `cloudwatch`, `splunk-hec`, `datadog`, `azure-loganalytics`, `sumologic`, `kafka`, `mongo`, `loki`, `elasticsearch` and `bigquery`)
- `fluentd_output_status_num_errors` - the count of total errors by output match

-To enable it, you need to set the variable `LOG_EXPORT_CONTAINER_ENABLE_MONITORING=true`.
+
+To enable it, you need to configure the following variable:
+* **LOG_EXPORT_CONTAINER_ENABLE_MONITORING**. Boolean variable to enable the monitoring endpoint.
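
As an illustration, enabling the endpoint and scraping it from the host; the `/metrics` path is an assumption based on common fluentd Prometheus plugin defaults, not confirmed by this diff:

```
LOG_EXPORT_CONTAINER_ENABLE_MONITORING=true
# then, from the host (assumed metrics path):
# curl http://localhost:24321/metrics
```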

To see an example, you can use `docker-compose-prometheus.yml` to run the Log Export Container with Prometheus and Grafana. Then you can access the `Log Export Container Metrics` dashboard in Grafana (on port `3000`) and see how it's used. There we have the following panels:

docs/outputs/CONFIGURE_CLOUDWATCH.md (6 changes: 3 additions & 3 deletions)
@@ -9,9 +9,9 @@ nav_order: 3
The Log Export Container uses a [fluentd cloudwatch output plugin](https://github.com/fluent-plugins-nursery/fluent-plugin-cloudwatch-logs). In order to enable it you need to specify `LOG_EXPORT_CONTAINER_OUTPUT=cloudwatch` and provide the following variables:
* **AWS_ACCESS_KEY_ID**. AWS Access Key
* **AWS_SECRET_ACCESS_KEY**. AWS Access Secret
-* **AWS_REGION**. AWS Region Name, for example: `us-west-2`
-* **CLOUDWATCH_LOG_GROUP_NAME**. AWS CloudWatch Log Group Name to store logs, for example: `aws/sdm-logs`
-* **CLOUDWATCH_LOG_STREAM_NAME**. AWS CloudWatch Log Stream Name to store logs, for example: `test`
+* **AWS_REGION**. AWS Region Name. E.g., `us-west-2`.
+* **CLOUDWATCH_LOG_GROUP_NAME**. AWS CloudWatch Log Group Name to store logs. E.g., `aws/sdm-logs`.
+* **CLOUDWATCH_LOG_STREAM_NAME**. AWS CloudWatch Log Stream Name to store logs. E.g., `test`.
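
A hedged env-file sketch combining the variables above (credentials are placeholders; the other values are the documented examples):

```
LOG_EXPORT_CONTAINER_OUTPUT=cloudwatch
AWS_ACCESS_KEY_ID=<access-key>
AWS_SECRET_ACCESS_KEY=<secret-key>
AWS_REGION=us-west-2
CLOUDWATCH_LOG_GROUP_NAME=aws/sdm-logs
CLOUDWATCH_LOG_STREAM_NAME=test
```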

## IAM permissions
Add at least the following policy to your IAM user:
docs/outputs/CONFIGURE_ELASTICSEARCH.md (6 changes: 3 additions & 3 deletions)
@@ -8,6 +8,6 @@ nav_order: 5

The Log Export Container uses [fluentd elasticsearch output plugin](https://docs.fluentd.org/output/elasticsearch/).
In order to enable it you need to specify `LOG_EXPORT_CONTAINER_OUTPUT=elasticsearch` and provide the following variables:
-* **ELASTICSEARCH_HOST**. ElasticSearch server host, e.g. `127.0.0.1`
-* **ELASTICSEARCH_PORT**. ElasticSearch server port, e.g. `9201`. Default: `9200`
-* **ELASTICSEARCH_INDEX_NAME**. ElasticSearch index name, e.g. `my-index`
+* **ELASTICSEARCH_HOST**. ElasticSearch server host. E.g., `127.0.0.1`.
+* **ELASTICSEARCH_PORT**. ElasticSearch server port. E.g., `9201`. Default = `9200`.
+* **ELASTICSEARCH_INDEX_NAME**. ElasticSearch index name. E.g., `my-index`.
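
A minimal sketch using the example values above:

```
LOG_EXPORT_CONTAINER_OUTPUT=elasticsearch
ELASTICSEARCH_HOST=127.0.0.1
ELASTICSEARCH_PORT=9200
ELASTICSEARCH_INDEX_NAME=my-index
```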
docs/outputs/CONFIGURE_KAFKA.md (2 changes: 1 addition & 1 deletion)
@@ -9,7 +9,7 @@ nav_order: 6
The Log Export Container uses a [fluentd kafka output plugin](https://github.com/fluent/fluent-plugin-kafka). In order to enable it you need to specify `LOG_EXPORT_CONTAINER_OUTPUT=kafka` and provide the following variables:
* **KAFKA_BROKERS**. List of brokers, following the format: `<broker1_host>:<broker1_port>,<broker2_host>:<broker2_port>`
* **KAFKA_TOPIC**. Topic name
-* **KAFKA_FORMAT_TYPE**. Input text type, for example: `text, json, ltsv, msgpack`. Default = json
+* **KAFKA_FORMAT_TYPE**. Input text type. E.g., `text, json, ltsv, msgpack`. Default = `json`.
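
An illustrative sketch; broker addresses and the topic name are placeholders:

```
LOG_EXPORT_CONTAINER_OUTPUT=kafka
KAFKA_BROKERS=broker1.example.com:9092,broker2.example.com:9092
KAFKA_TOPIC=sdm-logs
KAFKA_FORMAT_TYPE=json
```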

## Plugin changes

docs/outputs/CONFIGURE_LOGZ.md (2 changes: 1 addition & 1 deletion)
@@ -8,4 +8,4 @@ nav_order: 7

The Log Export Container uses [fluent plugin logzio](https://github.com/logzio/fluent-plugin-logzio).
In order to enable it you need to specify `LOG_EXPORT_CONTAINER_OUTPUT=logz` and provide the following variables:
-* **LOGZ_ENDPOINT**. Logz.io Endpoint URL, e.g. `https://listener.logz.io:8071?token=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx&type=my_type`
+* **LOGZ_ENDPOINT**. Logz.io Endpoint URL. E.g., `https://listener.logz.io:8071?token=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx&type=my_type`.
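
A minimal sketch based on the example endpoint (the token is a placeholder):

```
LOG_EXPORT_CONTAINER_OUTPUT=logz
LOGZ_ENDPOINT=https://listener.logz.io:8071?token=<your-token>&type=my_type
```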
docs/outputs/CONFIGURE_LOKI.md (3 changes: 1 addition & 2 deletions)
@@ -8,5 +8,4 @@ nav_order: 8

The Log Export Container uses [fluent plugin grafana loki](https://grafana.com/docs/loki/latest/clients/fluentd/).
In order to enable it you need to specify `LOG_EXPORT_CONTAINER_OUTPUT=loki` and provide the following variables:
-* **LOKI_URL**. Loki Endpoint URL, e.g. `http://localhost:3100`
-
+* **LOKI_URL**. Loki Endpoint URL. E.g., `http://localhost:3100`.
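
A minimal sketch using the example URL above:

```
LOG_EXPORT_CONTAINER_OUTPUT=loki
LOKI_URL=http://localhost:3100
```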
docs/outputs/CONFIGURE_MONGO.md (2 changes: 1 addition & 1 deletion)
@@ -8,4 +8,4 @@ nav_order: 9

The Log Export Container uses [fluentd mongo output plugin](https://docs.fluentd.org/output/mongo). In order to enable it you need to specify `LOG_EXPORT_CONTAINER_OUTPUT=mongo` and provide the following variables:
* **MONGO_URI**. Mongo Connection URI
-* **MONGO_COLLECTION**. Mongo Collection to store the Log Export Container events. Default=sdm_logs.
+* **MONGO_COLLECTION**. Mongo Collection to store the Log Export Container events. Default = `sdm_logs`.
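
A hedged sketch; the connection URI is a placeholder:

```
LOG_EXPORT_CONTAINER_OUTPUT=mongo
MONGO_URI=mongodb://<user>:<password>@localhost:27017
MONGO_COLLECTION=sdm_logs
```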
docs/outputs/CONFIGURE_REMOTE_SYSLOG.md (2 changes: 1 addition & 1 deletion)
@@ -9,4 +9,4 @@ nav_order: 10
The Log Export Container uses [fluent remote_syslog plugin](https://github.com/fluent-plugins-nursery/fluent-plugin-remote_syslog). In order to enable it you need to specify `LOG_EXPORT_CONTAINER_OUTPUT=remote-syslog` and provide the following variables:
* **REMOTE_SYSLOG_HOST**. Remote Syslog host address.
* **REMOTE_SYSLOG_PORT**. Remote Syslog port.
-* **REMOTE_SYSLOG_PROTOCOL**. Remote Syslog protocol. Possible values: `tcp` or `udp`. Default: `tcp`.
+* **REMOTE_SYSLOG_PROTOCOL**. Remote Syslog protocol. Possible values: `tcp` or `udp`. Default = `tcp`.
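
An illustrative sketch; host and port are placeholders:

```
LOG_EXPORT_CONTAINER_OUTPUT=remote-syslog
REMOTE_SYSLOG_HOST=syslog.example.com
REMOTE_SYSLOG_PORT=514
REMOTE_SYSLOG_PROTOCOL=tcp
```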
docs/outputs/CONFIGURE_S3.md (6 changes: 3 additions & 3 deletions)
@@ -9,9 +9,9 @@ nav_order: 11
The Log Export Container uses [fluentd s3 output plugin](https://docs.fluentd.org/output/s3). In order to enable it you need to specify `LOG_EXPORT_CONTAINER_OUTPUT=s3` and provide the following variables:
* **AWS_ACCESS_KEY_ID**. AWS Access Key
* **AWS_SECRET_ACCESS_KEY**. AWS Access Secret
-* **S3_BUCKET**. AWS S3 Bucket Name, for example: `log-export-container`
-* **S3_REGION**. AWS S3 Bucket Region Name, for example: `us-west-2`
-* **S3_PATH**. AWS S3 Path to Append to your Logs, for example: `logs`. The actual path on S3 will be: `{path}{container_id}{time_slice_format}_{sequential_index}.gz (see s3_object_key_format)`
+* **S3_BUCKET**. AWS S3 Bucket Name. E.g., `log-export-container`.
+* **S3_REGION**. AWS S3 Bucket Region Name. E.g., `us-west-2`.
+* **S3_PATH**. AWS S3 Path to Append to your Logs. E.g., `logs`. The actual path on S3 will be `{path}{container_id}{time_slice_format}_{sequential_index}.gz` (see `s3_object_key_format`).
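
A hedged env-file sketch (credentials are placeholders; bucket, region and path are the documented examples):

```
LOG_EXPORT_CONTAINER_OUTPUT=s3
AWS_ACCESS_KEY_ID=<access-key>
AWS_SECRET_ACCESS_KEY=<secret-key>
S3_BUCKET=log-export-container
S3_REGION=us-west-2
S3_PATH=logs
```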

## Plugin changes

docs/outputs/CONFIGURE_SPLUNK_HEC.md (6 changes: 3 additions & 3 deletions)
@@ -9,9 +9,9 @@ nav_order: 12
# Configure Splunk HEC

The Log Export Container uses [fluentd splunk hec output plugin](https://github.com/splunk/fluent-plugin-splunk-hec). In order to enable it you need to specify `LOG_EXPORT_CONTAINER_OUTPUT=splunk-hec` and provide the following variables:
-* **SPLUNK_HEC_HOST**. The hostname/IP for the HEC token or the HEC load balancer, for example: `prd-p-xxxxx.splunkcloud.com`
-* **SPLUNK_HEC_PORT**. The port number for the HEC token or the HEC load balancer, for example: `8088`
-* **SPLUNK_HEC_TOKEN**. Identifier for the HEC token, for example: `xxxxxxxx-yyyy-yyyy-yyyy-zzzzzzzzzzzz`
+* **SPLUNK_HEC_HOST**. The hostname/IP for the HEC token or the HEC load balancer. E.g., `prd-p-xxxxx.splunkcloud.com`.
+* **SPLUNK_HEC_PORT**. The port number for the HEC token or the HEC load balancer. E.g., `8088`.
+* **SPLUNK_HEC_TOKEN**. Identifier for the HEC token. E.g., `xxxxxxxx-yyyy-yyyy-yyyy-zzzzzzzzzzzz`.
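
A sketch using the example values above (the token is a placeholder):

```
LOG_EXPORT_CONTAINER_OUTPUT=splunk-hec
SPLUNK_HEC_HOST=prd-p-xxxxx.splunkcloud.com
SPLUNK_HEC_PORT=8088
SPLUNK_HEC_TOKEN=<hec-token>
```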

IMPORTANT: SSL validation is disabled by default. You can pass different [SSL Params](https://github.com/splunk/fluent-plugin-splunk-hec#ssl-parameters), overriding the built-in configuration as commented below.

docs/outputs/CONFIGURE_SUMOLOGIC.md (2 changes: 1 addition & 1 deletion)
@@ -8,7 +8,7 @@ nav_order: 13

The Log Export Container uses a [fluentd sumologic output plugin](https://github.com/SumoLogic/fluentd-output-sumologic). In order to enable it you need to specify `LOG_EXPORT_CONTAINER_OUTPUT=sumologic` and provide the following variables:
* **SUMOLOGIC_ENDPOINT**. SumoLogic HTTP Collector URL
-* **SUMOLOGIC_SOURCE_CATEGORY**. Source Category metadata field within SumoLogic, for example: `/prod/sdm/logs`
+* **SUMOLOGIC_SOURCE_CATEGORY**. Source Category metadata field within SumoLogic. E.g., `/prod/sdm/logs`.
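
An illustrative sketch; the collector URL is a placeholder:

```
LOG_EXPORT_CONTAINER_OUTPUT=sumologic
SUMOLOGIC_ENDPOINT=<your-sumologic-http-collector-url>
SUMOLOGIC_SOURCE_CATEGORY=/prod/sdm/logs
```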

## Plugin changes
