Problem

https://fleetdm.com/docs/configuration/yaml-files#software

I'd like Fleet to support cloud object storage providers like AWS S3 for adding/updating software resources when using GitOps. For example: s3://my-fleet-bucket/packages/not_publicly_accessible.pkg
Not all packages are available at publicly accessible URLs. Vendors, especially for security software, often gate access to their installers behind things like user authentication. There are also internal, organization-specific tools that can't be made public but still need to be distributed with Fleet. It's a common pattern for organizations to put their private packages in cloud object storage, like an S3 bucket, and then grant access to their CI/CD runner with something like (in AWS terms) an IAM policy. This gives secure access to the object only to the runner where a Fleet GitOps flow would run.
Other ways to share private packages are possible, but they tend to be less secure and require more overhead.
What have you tried?
Well, I did try. fleetctl gitops does not support non-HTTP(S) URIs. No dice.
Error: applying software installers for team "Test": performing request for URL "s3://my-fleet-bucket/packages/not_publicly_accessible.pkg": Get "s3://my-fleet-bucket/packages/not_publicly_accessible.pkg": unsupported protocol scheme "s3"
Likely I could do this directly with the API (https://fleetdm.com/docs/rest-api/rest-api#add-package) by downloading the file first, but I'd rather the product support this without extra work outside fleetctl gitops.
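For reference, that workaround would look roughly like the CI job sketch below. The endpoint path and form fields are my reading of the linked Add package docs and should be verified there; the bucket, file, and environment variable names are placeholders.

```yaml
# Sketch of the manual workaround (GitHub Actions-style steps shown only
# as an example). The runner's IAM role grants read access to the bucket.
- name: Download private package from S3
  run: aws s3 cp s3://my-fleet-bucket/packages/not_publicly_accessible.pkg .

- name: Upload package to Fleet via the REST API
  run: |
    # Endpoint and form fields per my reading of the Add package docs; verify
    # against https://fleetdm.com/docs/rest-api/rest-api#add-package.
    curl -sf -X POST "$FLEET_URL/api/v1/fleet/software/package" \
      -H "Authorization: Bearer $FLEET_API_TOKEN" \
      -F "software=@not_publicly_accessible.pkg" \
      -F "team_id=$FLEET_TEAM_ID"
```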
Potential solutions
Add protocol support for major cloud object storage providers like AWS S3, Google Cloud Storage, and Azure Blob Storage, prioritizing S3.
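As an illustration of this option, the software section from the linked YAML docs could keep its current shape and simply accept an s3:// scheme. The s3:// URL below is the proposed behavior, not something fleetctl gitops supports today:

```yaml
# Hypothetical: same packages/url shape as in the current YAML docs, but with
# an s3:// URL that Fleet would fetch using the runner's cloud credentials.
software:
  packages:
    - url: s3://my-fleet-bucket/packages/not_publicly_accessible.pkg
```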
Alternatively, if Fleet would rather not (or can't) include libraries for things like S3 in the codebase, support a local_path argument: the CI/CD runner could do the work of downloading files from cloud object storage and placing them at a static location, and fleetctl gitops could then pick the file up locally instead of always assuming it's available remotely.
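A rough sketch of that alternative, where local_path is a hypothetical key (not part of Fleet's YAML today) and the file paths are placeholders:

```yaml
# Hypothetical local_path key: fleetctl gitops reads the package from disk
# instead of performing an HTTP(S) request.
software:
  packages:
    - local_path: ./artifacts/not_publicly_accessible.pkg
```

The CI/CD runner would stage the file before fleetctl gitops runs, for example:

```yaml
# Placeholder CI steps: download from S3, then apply the GitOps config.
- name: Stage private package from S3
  run: aws s3 cp s3://my-fleet-bucket/packages/not_publicly_accessible.pkg ./artifacts/
- name: Apply GitOps config
  run: fleetctl gitops -f teams/test.yml  # config file path is a placeholder
```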
What is the expected workflow as a result of your proposal?
I can use a cloud object storage URI like s3://my-fleet-bucket/packages/not_publicly_accessible.pkg to upload a package to Fleet using GitOps flows. If that isn't feasible, implement support for local_path to add/update a package locally.