
Commit

Merge pull request #7 from SumoLogic/ssong-SUMO-76740-metadata
Ssong sumo 76740 metadata
Bin Yi authored Sep 8, 2017
2 parents 45cb601 + 6c8863d commit 53e8e5e
Showing 8 changed files with 249 additions and 51 deletions.
3 changes: 3 additions & 0 deletions .gitignore
@@ -12,3 +12,6 @@

# Project-local glide cache, RE: https://github.com/Masterminds/glide/issues/736
.glide/

# plugin file system
rootfs/
45 changes: 45 additions & 0 deletions DEVELOPER.md
@@ -0,0 +1,45 @@
# Guide for developers
This project is a plugin for the Docker engine that delivers logs to Sumo Logic by pushing log messages to an HTTP source.
This is the guide for developers who want to build and extend the plugin. If you just want to use this plugin in your Docker environment, please refer to the [readme file](README.md).
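
As a rough illustration of that mechanism (not part of this repository), an HTTP source accepts plain log lines over an HTTPS POST; the URL below is a placeholder for the endpoint generated for your collector:
```bash
# Hypothetical HTTP Source URL -- substitute the endpoint generated for your collector.
SUMO_URL="https://collectors.sumologic.com/receiver/v1/http/<generated-token>"

# POST a single plain-text log line to the source. The plugin batches and
# optionally gzips messages before performing the equivalent of this request.
curl -s -X POST -H "Content-Type: text/plain" \
     --data "hello from the sumologic logging driver" \
     "$SUMO_URL"
```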

## Prerequisites
* [Download](https://www.docker.com/get-docker) and install the latest Docker engine
* [Download](https://golang.org/dl/) and install the latest Go distribution
* Clone or download this repository to a local directory
* Get all dependencies with:
```bash
$ go get -d ./...
```

## Build and install the plugin to Docker
In bash, run:
```bash
$ sudo bash ./plugin_install.sh
```
If everything goes well, you can verify that the plugin is correctly installed with:
```bash
$ docker plugin ls
ID NAME DESCRIPTION ENABLED
2dcbb3a32956 sumologic:latest Sumo Logic logging driver true
```

## Uninstall and clean up the plugin
In bash, run:
```bash
$ sudo bash ./plugin_uninstall.sh
```

## Run sanity test
* Make sure the plugin is installed and enabled
* In bash, run:
```bash
$ docker run --log-driver=sumologic --log-opt sumo-url=<url> -i -t ubuntu bash
```
This will create a bash session in a Docker container and send all console output to a Sumo Logic HTTP source as log lines.

## Run unit tests
Unit tests are written in `XXX_test.go` files, where `XXX` is the module being tested. You can launch all unit tests with:
```bash
$ go test -v
```
The unit tests do not require a Docker environment to run. For details about unit testing and the test framework in Go, see the [testing package documentation](https://golang.org/pkg/testing/).
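
While iterating on a single module, Go's test runner can also filter by test name; the test name below is only a hypothetical example:
```bash
# Run only the tests whose names match the given regular expression.
$ go test -v -run TestParseLogOptMetadata

# Run the full suite with coverage reporting.
$ go test -v -cover
```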
36 changes: 20 additions & 16 deletions README.md
@@ -2,8 +2,6 @@

A Docker logging driver plugin to send logs to Sumo Logic.

**Disclaimer:** This plugin is still being developed. We recommend using this plugin in non-production environments.

**Note:** Docker plugins are not yet supported on Windows; see Docker's logging driver plugin [documentation].

[documentation]: https://github.com/docker/cli/blob/master/docs/extend/plugins_logging.md
@@ -22,7 +20,7 @@ ID NAME DESCRIPTION ENABLED
cb0021522669 sumologic:latest SumoLogic logging driver true
```

### Create an HTTP Metrics Source in Sumo Logic
### Create an HTTP Source in Sumo Logic
Create a [Sumo Logic account](https://www.sumologic.com/) if you don't currently have one.

Follow these instructions for [setting up an HTTP Source](https://help.sumologic.com/Send-Data/Sources/02Sources-for-Hosted-Collectors/HTTP-Source/zGenerate-a-new-URL-for-an-HTTP-Source) in Sumo Logic. Be sure to obtain the URL endpoint after creating an HTTP Source.
@@ -37,18 +35,24 @@ $ docker run --log-driver=sumologic --log-opt sumo-url=https://<deployment>.sumo
### Sumo Logic Options
To specify additional logging driver options, you can use the `--log-opt NAME=VALUE` flag.

| Option | Required? | Default Value | Description
| --------------------------- | :-------: | :-----------: | -------------------------------------- |
| sumo-url | Yes | | HTTP Source URL
| sumo-compress | No | true | Enable/disable gzip compression. Boolean.
| sumo-compress-level | No | -1 | Set the gzip compression level. Valid values are -1 (default), 0 (no compression), 1 (best speed) ... 9 (best compression).
| sumo-batch-size | No | 1000000 | The number of bytes of logs the driver should wait for before sending them in bulk. If the number of bytes never reaches `sumo-batch-size`, the driver will send the logs in smaller batches at predefined intervals; see `sumo-sending-interval`.
| sumo-sending-interval | No | 2s | The maximum time the driver waits for number of logs to reach `sumo-batch-size` before sending the logs, even if the number of logs is less than the batch size. In the format 72h3m5s, valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
| sumo-proxy-url | No | | Set a proxy URL.
| sumo-insecure-skip-verify | No | false | Ignore server certificate validation. Boolean.
| sumo-root-ca-path | No | | Set the path to a custom root certificate.
| sumo-server-name | No | | Name used to validate the server certificate. By default, uses hostname of the `sumo-url`.
| sumo-queue-size | No | 100 | The maximum number of log batches of size `sumo-batch-size` we can store in memory in the event of network failure, before we begin dropping batches. Thus in the worst case, the plugin will use `sumo-batch-size` * `sumo-queue-size` bytes of memory per container (default 100 MB).
| Option | Required? | Default Value | Description
| ------------------------- | :-------: | :------------------: | -------------------------------------- |
| sumo-url | Yes | | HTTP Source URL
| sumo-source-category | No | HTTP source category | Source category to appear when searching in Sumo Logic by `_sourceCategory`. Within the source category, the token `{{Tag}}` will be replaced with the value of the Docker tag option. If not specified, the default source category configured for the HTTP source will be used.
| sumo-source-name | No | container's name | Source name to appear when searching in Sumo Logic by `_sourceName`. Within the source name, the token `{{Tag}}` will be replaced with the value of the Docker tag option. If not specified, the container's name will be used.
| sumo-source-host | No | host name | Source host to appear when searching in Sumo Logic by `_sourceHost`. Within the source host, the token `{{Tag}}` will be replaced with the value of the Docker tag option. If not specified, the machine host name will be used.
| sumo-compress | No | `true` | Enable/disable gzip compression. Boolean.
| sumo-compress-level | No | `-1` | Set the gzip compression level. Valid values are -1 (default), 0 (no compression), 1 (best speed) ... 9 (best compression).
| sumo-batch-size | No | `1000000` | The number of bytes of logs the driver should wait for before sending them in bulk. If the number of bytes never reaches `sumo-batch-size`, the driver will send the logs in smaller batches at predefined intervals; see `sumo-sending-interval`.
| sumo-sending-interval     | No        | `2s`                 | The maximum time the driver waits for the batch to reach `sumo-batch-size` bytes before sending the logs, even if the batch is smaller than the batch size. In the format 72h3m5s; valid time units are `ns`, `us` (or `µs`), `ms`, `s`, `m`, `h`.
| sumo-proxy-url | No | | Set a proxy URL.
| sumo-insecure-skip-verify | No | `false` | Ignore server certificate validation. Boolean.
| sumo-root-ca-path | No | | Set the path to a custom root certificate.
| sumo-server-name | No | | Name used to validate the server certificate. By default, uses hostname of the `sumo-url`.
| sumo-queue-size | No | `100` | The maximum number of log batches of size `sumo-batch-size` we can store in memory in the event of network failure, before we begin dropping batches. Thus in the worst case, the plugin will use `sumo-batch-size` * `sumo-queue-size` bytes of memory per container (default 100 MB).
| tag | No | `{{.ID}}` | Specifies a tag for messages, which can be used in the "source category", "source name", and "source host" fields. Certain tokens of the form {{X}} are supported. Default value is `{{.ID}}`, the first 12 characters of the container ID. Refer to the [tag log-opt documentation] for more information and a list of supported tokens.

[tag log-opt documentation]: https://docs.docker.com/engine/admin/logging/log_tags/

### Example

@@ -59,7 +63,7 @@ $ docker run --log-driver=sumologic \
--log-opt sumo-queue-size=400 \
--log-opt sumo-sending-interval=500ms \
--log-opt sumo-compress=false \
--log-opt ...
--log-opt ... \
your/container
```
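
As a further sketch (not taken from this repository's docs), the new source metadata options can be combined with the `tag` option; all values below are placeholders:
```bash
$ docker run --log-driver=sumologic \
    --log-opt sumo-url=https://<deployment>.sumologic.com/receiver/v1/http/<token> \
    --log-opt tag="{{.Name}}/{{.ID}}" \
    --log-opt sumo-source-category="prod/docker/{{Tag}}" \
    --log-opt sumo-source-name="{{Tag}}" \
    --log-opt sumo-source-host="$(hostname)" \
    your/container
```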

92 changes: 80 additions & 12 deletions driver.go
@@ -10,12 +10,15 @@ import (
"io/ioutil"
"net/http"
"net/url"
"regexp"
"strconv"
"strings"
"sync"
"syscall"
"time"

"github.com/docker/docker/daemon/logger"
"github.com/docker/docker/daemon/logger/loggerutils"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/tonistiigi/fifo"
@@ -52,12 +55,18 @@ const (
If the number of bytes never reaches the batch size, the driver will send the logs in smaller
batches at predefined intervals; see sending interval. */
logOptBatchSize = "sumo-batch-size"
/* The _sourceCategory. If empty, the category of HTTP source will be used */
logOptSourceCategory = "sumo-source-category"
/* The _sourceName. If empty, will be the container's name */
logOptSourceName = "sumo-source-name"
/* The _sourceHost. If empty, will be the machine host name */
logOptSourceHost = "sumo-source-host"

defaultGzipCompression = true
defaultGzipCompressionLevel = gzip.DefaultCompression
defaultInsecureSkipVerify = false

defaultSendingIntervalMs = 2000 * time.Millisecond
defaultSendingInterval = 2000 * time.Millisecond
defaultQueueSizeItems = 100
defaultBatchSizeBytes = 1000000

@@ -93,6 +102,12 @@ type sumoLogger struct {
logBatchQueue chan *sumoLogBatch
sendingInterval time.Duration
batchSize int

info logger.Info
tag string
sourceCategory string
sourceName string
sourceHost string
}

func newSumoDriver() *sumoDriver {
@@ -120,6 +135,29 @@ func (sumoDriver *sumoDriver) NewSumoLogger(file string, info logger.Info) (*sum
}
sumoDriver.mu.Unlock()

sumoUrl := parseLogOptUrl(info, logOptUrl)
if sumoUrl == nil {
return nil, fmt.Errorf("%s: sumo-url must exist and be a valid URL", pluginName)
}

hostname, err := info.Hostname()
if err != nil {
hostname = ""
}

tag, err := loggerutils.ParseLogTag(info, loggerutils.DefaultTemplate)
if err != nil {
return nil, err
}

dictionary := map[string]string {
"tag": tag,
}

sourceCategory := parseLogOptMetadata(info, logOptSourceCategory, "", dictionary)
sourceName := parseLogOptMetadata(info, logOptSourceName, info.ContainerName[1:len(info.ContainerName)], dictionary) // trim leading "/"
sourceHost := parseLogOptMetadata(info, logOptSourceHost, hostname, dictionary)

gzipCompression := parseLogOptBoolean(info, logOptGzipCompression, defaultGzipCompression)
gzipCompressionLevel := parseLogOptGzipCompressionLevel(info, logOptGzipCompressionLevel, defaultGzipCompressionLevel)

@@ -139,7 +177,7 @@ func (sumoDriver *sumoDriver) NewSumoLogger(file string, info logger.Info) (*sum
}

transport := &http.Transport{}
proxyUrl := parseLogOptProxyUrl(info, logOptProxyUrl, nil)
proxyUrl := parseLogOptUrl(info, logOptProxyUrl)
transport.Proxy = http.ProxyURL(proxyUrl)
transport.TLSClientConfig = tlsConfig

@@ -148,7 +186,7 @@ func (sumoDriver *sumoDriver) NewSumoLogger(file string, info logger.Info) (*sum
Timeout: 30 * time.Second,
}

sendingInterval := parseLogOptDuration(info, logOptSendingInterval, defaultSendingIntervalMs)
sendingInterval := parseLogOptDuration(info, logOptSendingInterval, defaultSendingInterval)
queueSize := parseLogOptIntPositive(info, logOptQueueSize, defaultQueueSizeItems)
batchSize := parseLogOptIntPositive(info, logOptBatchSize, defaultBatchSizeBytes)

@@ -159,7 +197,7 @@ func (sumoDriver *sumoDriver) NewSumoLogger(file string, info logger.Info) (*sum
}

newSumoLogger := &sumoLogger{
httpSourceUrl: info.Config[logOptUrl],
httpSourceUrl: sumoUrl.String(),
httpClient: httpClient,
proxyUrl: proxyUrl,
tlsConfig: tlsConfig,
@@ -170,6 +208,11 @@ func (sumoDriver *sumoDriver) NewSumoLogger(file string, info logger.Info) (*sum
logBatchQueue: make(chan *sumoLogBatch, queueSize),
sendingInterval: sendingInterval,
batchSize: batchSize,
info: info,
tag: tag,
sourceCategory: sourceCategory,
sourceName: sourceName,
sourceHost: sourceHost,
}

sumoDriver.mu.Lock()
@@ -191,6 +234,31 @@ func (sumoDriver *sumoDriver) StopLogging(file string) error {
return nil
}

func interpretAll(re *regexp.Regexp, input string, dictionary map[string]string) string {
result := ""
lastIndex := 0

for _, v := range re.FindAllSubmatchIndex([]byte(input), -1) {
groups := []string{}
for i := 0; i < len(v); i += 2 {
groups = append(groups, input[v[i]:v[i + 1]])
}

result += input[lastIndex:v[0]] + dictionary[strings.ToLower(groups[1])] // groups[0] represents the whole pattern, groups[1] is the first capture group
lastIndex = v[1]
}

return result + input[lastIndex:]
}

func parseLogOptMetadata(info logger.Info, logOptKey string, defaultValue string, dictionary map[string]string) string {
if input, exists := info.Config[logOptKey]; exists {
re := regexp.MustCompile(`(?i)\{\{(.*?)\}\}`) // needs to be a lazy match
return interpretAll(re, input, dictionary)
}
return defaultValue
}

func parseLogOptIntPositive(info logger.Info, logOptKey string, defaultValue int) int {
if input, exists := info.Config[logOptKey]; exists {
inputValue64, err := strconv.ParseInt(input, stringToIntBase, stringToIntBitSize)
@@ -201,7 +269,7 @@ func parseLogOptIntPositive(info logger.Info, logOptKey string, defaultValue int
}
inputValue := int(inputValue64)
if inputValue <= 0 {
logrus.Error(fmt.Errorf("%s: %s must be a positive value, got %d. Using default %d.",
logrus.Error(fmt.Errorf("%s: %s must be a positive value, got %d. Using default %d",
pluginName, logOptKey, inputValue, defaultValue))
return defaultValue
}
@@ -220,7 +288,7 @@ func parseLogOptDuration(info logger.Info, logOptKey string, defaultValue time.D
}
zeroSeconds, _ := time.ParseDuration("0s")
if inputValue <= zeroSeconds {
logrus.Error(fmt.Errorf("%s: %s must be a positive duration, got %s. Using default %s.",
logrus.Error(fmt.Errorf("%s: %s must be a positive duration, got %s. Using default %s",
pluginName, logOptKey, inputValue.String(), defaultValue.String()))
return defaultValue
}
@@ -242,17 +310,17 @@ func parseLogOptBoolean(info logger.Info, logOptKey string, defaultValue bool) b
return defaultValue
}

func parseLogOptProxyUrl(info logger.Info, logOptKey string, defaultValue *url.URL) *url.URL {
func parseLogOptUrl(info logger.Info, logOptKey string) *url.URL {
if input, exists := info.Config[logOptKey]; exists {
inputValue, err := url.Parse(input)
if err != nil {
logrus.Error(fmt.Errorf("%s: Failed to parse value of %s as url. Initializing without proxy. %v",
pluginName, logOptKey, defaultValue, err))
return defaultValue
logrus.Error(fmt.Errorf("%s: Failed to parse value of %s as url. %v",
pluginName, logOptKey, err))
return nil
}
return inputValue
}
return defaultValue
return nil
}

func parseLogOptGzipCompressionLevel(info logger.Info, logOptKey string, defaultValue int) int {
@@ -266,7 +334,7 @@ func parseLogOptGzipCompressionLevel(info logger.Info, logOptKey string, default
inputValue := int(inputValue64)
if inputValue < defaultValue || inputValue > gzip.BestCompression {
logrus.Error(fmt.Errorf(
"%s: Not supported level '%d' for %s (supported values between %d and %d). Using default compression.",
"%s: Not supported level '%d' for %s (supported values between %d and %d). Using default compression",
pluginName, inputValue, logOptKey, defaultValue, gzip.BestCompression))
return defaultValue
}