If you want the fastest path between 0 and S3 distribution, see the TL;DR section for a quick and easy TEA bootstrapping script.
- History of TEA - A Brief History of where TEA came from
- Purpose of TEA - What TEA is and isn't
- System Architecture - Components and technologies in TEA
- External Integrations - How TEA relates to other things
- Build Process - Where the code comes from
- Getting TEA Code - Where you can get TEA code and deployment mechanism
- TEA Core Features - Configuring TEA to work for you
- Working inside EDC - How to integrate into Earthdata Cloud
- Deploying TEA - Everything required for deploying TEA
- Post deployment procedures - What to do AFTER deploying
- Logs - What TEA does with logs
- When things go wrong - Hopefully we can help!
- Make it better - It's not perfect, but it's getting better!
- TL;DR - We understand not everyone likes to read docs. We can do it for you!
TEA was originally designed as a lightweight S3 distribution application with Earthdata Login (EDL) support meant to help test and evaluate the (then) forthcoming NGAP Enterprise Egress platform. TEA leveraged much of the design and features of ASF's original full-featured S3 distribution app, without any of the throttling or egress tracking capabilities. Without the need to do throttling, TEA ditched docker/nginx/ECS in favor of the lighter, more nimble Lambda + API Gateway architecture.
At the most fundamental level, TEA is intended to be a FULLY Earthdata Cloud (EDC) compliant application to enable distribution of EOSDIS data from Amazon S3 while ensuring data providers have all the tools, controls and metrics required for ensuring data distribution restriction and reporting compliance. TEA Distribution supports only static S3 data, and is not intended for service-style or dynamic data distribution.
This documentation is intended to support stand-alone TEA deployments. While it may be a valuable tool for TEA deployments provisioned through Cumulus, we suggest you visit the Cumulus documentation for guidance on using and customizing TEA from within Cumulus.
We, the TEAm, strive to create the simplest, most reliable, most feature-rich S3 distribution app. Our intent is not to provide EVERY feature imaginable, but to maintain a THIN core of broadly applicable features. We want to continue to engage the user community to improve the feature set while staunchly resisting the siren song of feature bloat and one-off special ponies.
- Keep it simple - No special ponies!
- Remove unwanted/unused code
- Write clean code
- Engage and support the community
- Strive for 10% community provided code base - we ❤️ PR's!
In the shortest possible explanation, TEA is a Python Chalice application that leverages AWS Lambda and API Gateway to create Pre-signed S3 URLs to facilitate user downloads over https via CloudFront.
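TEA mints its URLs inside the Lambda, but if you've never worked with pre-signed URLs, this unrelated AWS CLI one-liner shows the general idea (the bucket and key here are placeholders, not anything TEA creates):
# Not TEA itself -- just the pre-signed URL concept: a time-limited HTTPS link to a private object
aws s3 presign s3://my-private-bucket/some/object.txt --expires-in 3600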
TEA cannot really operate in a vacuum. TEA is part of what might be considered the Earthdata ecosystem.
TEA can be considered the "official" distribution application of Cumulus, and while there is a symbiotic relationship, TEA is an add-on to Cumulus rather than an extension. We suggest utilizing the applicable Cumulus documentation for tightly coupled TEA+Cumulus deployments.
The Earthdata Login OAUTH service is a requirement for distributing data where user activity tracking and reporting is required. Before deploying TEA, it is important to have access to, or create a new EDL Application. Creating and administrating an EDL Application requires the EDL App Admin role, which must be requested through the feedback modal in the Earthdata login header menu.
While TEA can be deployed outside of the Earthdata Cloud, it is designed and intended to be used within the Earthdata Cloud environment. While we assume you'll be deploying to the EDC, we cannot provide assistance in setting up or configuring access to EDC. However, when there are steps or considerations that are required for integration into EDC, we will help ensure you have all the information you need to do the right thing.
TEA leverages several open source platforms along with mandatory peer code review to help ensure the code base is clean, stable, and secure.
CodeFactor.io GitHub integrations provide merge request blocking based on appropriate levels of code linting.
PyUp, Snyk and Whitesource all provide Common Vulnerabilities and Exposures (CVE) Analysis. Emergent CVE fixes are automatically pushed to the TEA repo as merge requests and merged into the repo as soon as possible.
CodeFactor.io and Whitesource provide merge request blocking based on Static Code Analysis to help ensure our code is secure and safe to run.
Along with full-project human-executed manual regression testing, TEA also uses the Python unittest framework (admittedly somewhat unconventionally) to run comprehensive integration tests on freshly deployed stacks for every build. All test failures prevent successful builds and code cannot be released without all tests completing successfully.
If you prefer to roll your own zip for the lambda code:
We've found that if the dependency layer isn't built in an amazonlinux:2 environment, the JWT crypto doesn't work. Here are instructions for gathering and packaging the dependencies in an amazonlinux docker container.
# Make up a filename for the dependency layer archive:
DEPENDENCYLAYERFILENAME=tea_dependency_layer.zip
# get the repo
git clone https://github.com/asfadmin/thin-egress-app
cd thin-egress-app
# copy requirements to a directory that is mounted into the docker container
cp rain-api-core/requirements.txt lambda/requirements_rain-api-core.txt
cp lambda/requirements.txt lambda/requirements_tea.txt
cp build/dependency_builder.sh lambda/
docker run --rm -v "$(pwd)/lambda":/depbuild/in -v "$(pwd)":/depbuild/out -e "ZIPFILENAME=${DEPENDENCYLAYERFILENAME}" amazonlinux:2 bash /depbuild/in/dependency_builder.sh
# Upload to S3
aws s3 cp --profile=default ./${DEPENDENCYLAYERFILENAME} s3://${CODE_BUCKET}/
# Make up a filename for code archive:
CODE_ARCHIVE_FILENAME=thin-egress-app-code.zip
# get the repo
git clone https://github.com/asfadmin/thin-egress-app
cd thin-egress-app
# Create a scratch directory in which to confine the required module
# To make the archive smaller, you can cull unnecessary packages and files from the directory. The particulars
# are beyond the scope of this documentation.
# Create the zip archive and put the required modules in
zip -r9 ./${CODE_ARCHIVE_FILENAME} ./lambda
# Add the egress python code
zip -g ./${CODE_ARCHIVE_FILENAME} ./app.py
# Add the update lambda
zip -g ./${CODE_ARCHIVE_FILENAME} ./update_lambda.py
# Add the html templates
zip -g -r2 ./${CODE_ARCHIVE_FILENAME} ./templates
# Add the common code:
cd rain-api-core
zip -g -r9 ../${CODE_ARCHIVE_FILENAME} ./rain_api_core
cd ..
# Upload to S3
aws s3 cp --profile=default ./${CODE_ARCHIVE_FILENAME} s3://${CODE_BUCKET}/
Major release documentation can be found in GitHub.
Raw TEA code is broken into two GitHub repos:
- thin-egress-app - Chalice App, infrastructure-as-code, build scaffolding
- rain-api-core - EDL, AWS, Session Management helper functions
All public releases of ASF code are made available via the public bucket asf.public.code. Each build has 4 files:
- tea-cloudformation-build.#.yaml - CloudFormation template
- tea-code-build.#.zip - Lambda App Python Code
- tea-dependencylayer-build.#.zip - Lambda layer Python Dependency Code
- tea-terraform-build.#.zip - TEA as a Terraform module (for Cumulus!)
At the heart of TEA is the concept of the Bucket Map. This YAML file tells TEA how to map URLs to buckets and how to control access to data in buckets. Mapping depths are arbitrary and dynamically account for object path prefixes. Bucket maps are case sensitive!
When working with a bucket map, it is important to remember that bucket names are relative to the optional bucket prefix TEA parameter. If a bucket prefix is supplied, that value is prepended to any bucket name. This optional feature is intended to allow bucket maps to be used in multiple maturities (e.g. Sandbox, SIT, UAT, Prod) where bucket names are consistent across maturities given a specific bucket prefix.
Assume the following bucket map:
MAP:
  path1: bucket-path-1
  path2:
    path2a: bucket-path-2a
    path2b: bucket-path-2b
  path3:
    path3a:
      path3ai: bucket-path-3ai
You can derive the following object -> URL mappings:
s3://bucket-path-1/object.txt => https://[APP-BASE-URL]/path1/object.txt
s3://bucket-path-2a/file.ext => https://[APP-BASE-URL]/path2/path2a/file.ext
s3://bucket-path-2b/low-res/browse.jpg => https://[APP-BASE-URL]/path2/path2b/low-res/browse.jpg
s3://bucket-path-3ai/some-product.h5 => https://[APP-BASE-URL]/path3/path3a/path3ai/some-product.h5
It is NOT possible to have distribution at the app root. That is, s3://bucket-path-1/object.txt cannot be configured to be distributed at https://[APP-BASE-URL]/object.txt.
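As a hypothetical usage example, assuming EDL credentials in your ~/.netrc, downloading one of the objects above might look like the sketch below (the URL is a placeholder, and your deployment may sit behind CloudFront instead):
# Hypothetical download through TEA; -L follows the EDL redirect, -n uses ~/.netrc creds,
# and the cookie jar keeps the JWT cookie for subsequent downloads
curl -L -n -b ./tea-cookies.txt -c ./tea-cookies.txt \
     -o object.txt "https://[APP-BASE-URL]/path1/object.txt"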
Custom HTTP response headers can be added to bucket mappings:
MAP:
  path1:
    bucket: bucket-path-1
    headers:
      Content-Type: "text/plain"
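One way to eyeball a header mapping is a HEAD request against the mapped path (placeholder URL; a protected bucket would need a valid cookie or token first):
# Hypothetical check of the custom header; add -L if you want to follow the redirect all the way through
curl -I -L "https://[APP-BASE-URL]/path1/object.txt"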
By default, all buckets are assumed to require a user to log into Earthdata Login to download data. However, there are two options to change that behavior:
Buckets can be made public by adding a PUBLIC_BUCKETS block:
MAP:
  browse: browse-bucket

PUBLIC_BUCKETS:
  browse-bucket: "Internal only Comment about browse being public"
In the above example, accessing https://[APP-BASE-URL]/browse/some-browse-image.jpg would not require EDL authentication. However, if a user is already logged in, the download will be tagged with that user's EDL User ID.
TEA downloads can also be restricted to users belonging to an EDL Application User Group. This allows App and Data owners to specify a discrete list of specially approved users who can download data.
MAP:
  pre-commission: pre-commission-data-bucket

PRIVATE_BUCKETS:
  pre-commission-data-bucket:
    - internal_users
    - external_team
In the example above, TEA will ensure that a user attempting to download https://[APP-BASE-URL]/pre-commission/not-yet-public.zip belongs to either the App User Group internal_users or external_team. Users who are in neither of those groups will not be granted a download.
In addition to specifying whole buckets as Public or Private, you can also use object prefixing to control access:
MAP:
  data-bucket: data-bucket

PUBLIC_BUCKETS:
  data-bucket/browse: "Browse image"

PRIVATE_BUCKETS:
  data-bucket/pre-commission-data:
    - internal_users
    - external_team
In the above example, while accessing data in data-bucket requires simple auth, accessing an object with the prefix browse/ will require NO auth, and pre-commission-data/ will require EDL App group access as specified.
You may optionally create your own jinja2 html templates. If no custom templates are supplied in the HtmlTemplateDir subdirectory of the ConfigBucket bucket, ultra-generic (and a little annoying!) default templates are used.
base.html - This is the base template.
Blocks:
- pagetitle: Gets inserted inside the <title></title> element
- content: Content payload fed into the template

root.html - Child template. Gets called by / and /logout for 200 responses.
Variables:
- title: page title
- URS_URL: used to make the login link
- STAGE: used to make a URL back to the egress app
- profile: in the default template, profile.first_name and profile.last_name are used to greet a logged-in user. The entire profile dict is available to the template.
- contentstring: any text can go here

error.html - Child template that gets called when various 400 type errors happen.
Variables:
- title: page title
- status_code: http status code goes here
- contentstring: any text can go here

profile.html - Child template that displays profile info. Only used for debugging in dev.
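If you do supply custom templates, they need to land in the HtmlTemplateDir subdirectory of your ConfigBucket. A minimal sketch for pushing them up (CFG_BUCKETNAME and HTML_TEMPLATE_DIR below are placeholder variables for whatever you set ConfigBucket and HtmlTemplateDir to):
# Hypothetical example: upload custom jinja2 templates to the config bucket
CFG_BUCKETNAME=my-tea-cfg
HTML_TEMPLATE_DIR=html_templates
aws $AWSENV s3 sync ./my-templates/ "s3://${CFG_BUCKETNAME}/${HTML_TEMPLATE_DIR}/"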
TEA can accept a shared EDL Token as an Authorization (Bearer Token) method. To enable this behavior, EDL Apps (Service + TEA) must belong to a shared EDL App Group. Processing a shared token is temporally expensive. After the initial request, subsequent Service->TEA data requests should reuse cookies. EULA enforcement is preserved with shared tokens.
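For illustration only, a service-to-TEA request with a shared token might look like the sketch below; the token value and URL are placeholders, and reusing the cookie jar is what lets subsequent requests skip the expensive token processing:
# Hypothetical example: the first request presents the shared EDL bearer token,
# later requests reuse the JWT cookie that TEA set.
curl -L -b ./tea-cookies.txt -c ./tea-cookies.txt \
     -H "Authorization: Bearer <SHARED-EDL-TOKEN>" \
     -o object.txt "https://[APP-BASE-URL]/path1/object.txt"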
TEA deploys a Lambda, triggered by a subscription to the SNS Topic arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged, which downloads the AWS-provided ip-ranges.json file, parses out the IP CIDRs for the deploy region, and automatically updates the In-Region IAM Role's Policy Document condition block.
This lambda is defined in update_lambda.py
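For a sense of what that Lambda is doing, this sketch pulls the same file and extracts CIDRs for a region; the exact service filter TEA applies may differ:
# Roughly what the update lambda does: grab ip-ranges.json and pull out in-region CIDRs
curl -s https://ip-ranges.amazonaws.com/ip-ranges.json | \
  jq -r --arg region "us-west-2" '.prefixes[] | select(.region == $region and .service == "AMAZON") | .ip_prefix'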
While TEA is primarily designed to be deployed inside EDC, below are some important configurations to be aware of to make deploying to EDC successful.
This bash script will give you all the parameters you'll need to deploy into EDC:
export AWS_REGION='us-west-2'
export AWS_PROFILE='default'
export AWSENV="--profile=${AWS_PROFILE} --region=${AWS_REGION}"
export VPCID=$(aws $AWSENV ec2 describe-vpcs --query "Vpcs[*].VpcId" --filters "Name=tag:Name,Values=Application VPC" --output text)
export SUBNETID=$(aws $AWSENV ec2 describe-subnets --query "Subnets[?VpcId=='$VPCID'].{ID:SubnetId}[0]" --filters "Name=tag:Name,Values=Private*" --output=text)
export SECURITYGROUP=$(aws $AWSENV ec2 describe-security-groups --query "SecurityGroups[?VpcId=='$VPCID'].{ID:GroupId}" --filters "Name=tag:Name,Values=Application Default*" --output=text)
echo "PrivateVPC=$VPCID; VPCSecurityGroupIDs=$SECURITYGROUP; VPCSubnetIDs=$SUBNETID;"
It is also important to be aware that an API Gateway VPC Endpoint will need to be set up prior to deployment. You can check to see if your account has the appropriate VPCE by running this command:
aws $AWSENV ec2 describe-vpc-endpoints --query "VpcEndpoints[?(VpcId=='$VPCID' && ServiceName=='com.amazonaws.${AWS_REGION}.execute-api')].{ID:VpcEndpointId}" --output=text
When deploying into NGAP, it is important to supply a Permission Boundary. This allows CloudFormation to create necessary IAM roles on your behalf.
Do you need a bastion? It certainly helps. If your account does not have a bastion, validating a deployment will be very difficult.
At its core, TEA uses a CloudFormation template to deploy all of its resources to AWS. Additionally, there is an available Terraform module that can be used on its own, or as an add-on to Cumulus.
Two Secrets Manager objects are required for TEA to function properly:
To set up the URS Auth secret object, you'll need these parameters:
- UrsId can be found on your SSO application's URS home page as Client ID.
- UrsAuth is the UID for your URS SSO app and password separated by a colon. You can create the value with the command below. See EDL's Request Authorization Code documentation for more details.
Encode the Creds
UrsAuth=$(echo -n "<App UID>:<App Password>" | base64)
UrsId="<Client ID>"
Write the base64 encoded keys into a json file:
cat << EOL > urscreds.json
{
    "UrsId": "$UrsId",
    "UrsAuth": "$UrsAuth"
}
EOL
Create the secret in AWS, referencing the json file created above
aws $AWSENV secretsmanager create-secret --name urs_creds_for_tea \
--description "URS creds for TEA app" \
--secret-string file://urscreds.json
JWT secrets allow multiple TEA instances to encode and decode each other's JWT cookies. This helps reduce the number of times EDL authentication needs to happen. There are many ways to create JWT secrets; here is one example:
First, create a key pair and base64 encode them:
ssh-keygen -t rsa -b 4096 -m PEM -f ./jwtcookie.key
rsa_priv_key=$(openssl base64 -in jwtcookie.key -A)
rsa_pub_key=$(openssl base64 -in jwtcookie.key.pub -A)
Put the base-64 encoded keys into a json file like so:
cat << EOL > jwtkeys.json
{
"rsa_priv_key": "$rsa_priv_key",
"rsa_pub_key": "$rsa_pub_key"
}
EOL
Create the secret in AWS, referencing the json file created above
aws $AWSENV secretsmanager create-secret --name jwt_secret_for_tea \
--description "RS256 keys for TEA app JWT cookies" \
--secret-string file://jwtkeys.json
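If you want to double-check that the secret landed correctly, something like this should print the two key names (assumes jq is installed):
# Sanity check: should list rsa_priv_key and rsa_pub_key
aws $AWSENV secretsmanager get-secret-value --secret-id jwt_secret_for_tea \
    --query SecretString --output text | jq 'keys'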
- App Settings:
  - Loglevel - How verbose the logging should be
  - Logtype - How log entries are formed. Use json in conjunction with log analysis tools. Use flat for human debugging.
  - Maturity - Maturity, mostly for logs
  - ConfigBucket - S3 bucket where configuration files (bucket map, templates) are kept
  - HtmlTemplateDir - (OPTIONAL) Subdirectory of ConfigBucket where templates can be found
  - StageName - API Gateway Stage value
  - EnableApiGatewayLogToCloudWatch - Whether or not API Gateway logs should be dumped
- Domain Settings:
  - DomainName - (OPTIONAL) domain name as a user will access it (i.e. CloudFront, CNAME, etc)
  - CookieDomain - (OPTIONAL) domain name value for minting cookies
  - DomainCertArn - (OPTIONAL) ARN of an AWS ACM SSL cert for HTTPS access
- Lambda Code:
  - LambdaCodeS3Bucket - S3 bucket where Lambda code zip files are kept
  - LambdaCodeS3Key - Object name of the Lambda code zip
  - LambdaCodeDependencyArchive - Object name of the Lambda dependency layer zip
  - LambdaTimeout - Lambda timeout in seconds
  - LambdaMemory - The amount of memory available to the function during execution. Must be a multiple of 64. Minimum: 128. Maximum: 3008. Default: 1792.
- URS Settings:
  - URSAuthCredsSecretName - URS Auth Creds Secret
  - AuthBaseUrl - Which maturity of URS to hit
- Data Bucket Setup:
  - BucketnamePrefix - (OPTIONAL) Bucket prefix value (see Bucket Mapping)
  - BucketMapFile - Bucket map YAML file (see Bucket Mapping)
  - UseReverseBucketMap - Ignore this value!
  - DownloadRoleArn - (OPTIONAL) Pre-created IAM Role for minting presigned urls; leave blank to create a new role at deployment
  - DownloadRoleInRegionArn - (OPTIONAL) Pre-created IAM Role for minting IN-REGION presigned urls; leave blank to create a new role at deployment
  - SuppressHeadCheck - Disable the pre-sign object validation. Enable for speedier access.
- Session Settings:
  - SessionTTL - How long, in seconds, JWT Cookies are valid for
  - JwtAlgo - JWT Signing algorithm
  - JwtKeySecretName - JWT Cookie Secret
- NGAP Integration: ⚠️ These are REQUIRED for EDC Deployments. See VPC and Networking
  - PrivateVPC - Private VPC ID in which we create resources ($VPCID)
  - VPCSecurityGroupIDs - Security group ($SECURITYGROUP)
  - VPCSubnetIDs - VPC Subnets to deploy into ($SUBNETID)
  - PermissionsBoundaryName - Permissions Boundary used for creating IAM Roles; probably NGAPShRoleBoundary
Originally, I was going to walk through deploying TEA via the AWS CloudFormation web console, but there are better guides out there. So, instead, I'd just remind you that you need THREE files:
- This file will be uploaded to the CloudFormation web console:
- tea-cloudformation-build.#.yaml
- These two zips need to be uploaded to a bucket in your account:
- tea-code-build.#.zip
- tea-dependencylayer-build.#.zip
As with AWS Console deployments, you'll need to upload the two zip files to a code bucket in your account. After doing that, be sure you've sourced the VPC and Networking parameters as well as create the two required Secrets.
# Code path variables
CF_TEMPLATE_FILE=/local/path/to/tea-cloudformation-build.#.yaml
CODE_BUCKET=my-tea-code-bucket
CODE_ARCHIVE_FILENAME=tea-code-build.#.zip
DEPENDENCY_LAYER_FILENAME=tea-dependencylayer-build.#.zip
BUCKETMAP_FILENAME=my_bucketmap.yaml
CFG_BUCKETNAME=my-tea-cfg
# See the Cloudformation parameters section above for a description of these params.
STACK_NAME=my-tea # needs to be compatible with S3 naming requirements (lower case, no underscores, etc)
# because the CF template may create buckets using this name.
AWS_REGION=us-west-2 # Or another region if desired.
AWS_PROFILE=default # Or whichever awscli profile you want to deploy to.
# Deploy the stack
aws cloudformation deploy --profile=${AWS_PROFILE} --region=${AWS_REGION} \
--stack-name ${STACK_NAME} \
--template-file ${CF_TEMPLATE_FILE} \
--capabilities CAPABILITY_NAMED_IAM \
--parameter-overrides \
AuthBaseUrl=https://uat.urs.earthdata.nasa.gov \
BucketMapFile=${BUCKETMAP_FILENAME} \
ConfigBucket=${CFG_BUCKETNAME} \
EnableApiGatewayLogToCloudWatch="False" \
JwtAlgo="RS256" \
JwtKeySecretName="jwt_secret_for_tea" \
LambdaCodeDependencyArchive=${DEPENDENCY_LAYER_FILENAME} \
LambdaCodeS3Bucket=${CODE_BUCKET} \
LambdaCodeS3Key=${CODE_ARCHIVE_FILENAME} \
LambdaTimeout=6 \
Loglevel=INFO \
Logtype=json \
Maturity=DEV \
PermissionsBoundaryName=NGAPShRoleBoundary \
PrivateVPC=$VPCID \
SessionTTL=168 \
StageName=API \
URSAuthCredsSecretName="urs_creds_for_tea" \
UseReverseBucketMap="False" \
VPCSecurityGroupIDs=$SECURITYGROUP \
VPCSubnetIDs=$SUBNETID
Why would you want to deploy it from Terraform? Just use CloudFormation! But for real, if someone wants to do a write up for this, I'd love to include it.
Please see the Cumulus documentation for current TEA deployment and configuration procedures.
Once a TEA stack has been successfully deployed, there are still a few remaining steps that need to be completed before users can start downloading data.
After deploying TEA, you'll need to add the new URI to your EDL App's authorized redirect URIs. To get the proper value, you can query the CloudFormation stack:
aws $AWSENV cloudformation describe-stacks \
--stack-name=$STACK_NAME \
--query 'Stacks[0].Outputs[?OutputKey==`URSredirectURI`].OutputValue' \
--output=text
TEA has a /version endpoint you can use to check that a deployment is healthy and responsive:
api_endpoint=$(aws $AWSENV cloudformation describe-stacks \
--stack-name=$STACK_NAME \
--query 'Stacks[0].Outputs[?OutputKey==`ApiEndpoint`].OutputValue' \
--output=text)
curl ${api_endpoint}version
It's important to understand that in the EDC, you can't do the above step. There is no route into the VPC from the general internet. There are write-ups elsewhere that go into depth on how to proxy HTTPS requests into the private VPC.
To access TEA from outside the private VPC, you'll need to make a Publish New App NASD request.
After the NASD has been processed and a new CloudFront domain is generated for your TEA deployment, you'll need to update the DomainName and CookieDomain parameters of your TEA deployment with the new CloudFront domain name.
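One way to apply those parameter updates is to re-run aws cloudformation deploy with the same template; parameters you don't override keep their previous values. The CloudFront domain below is a placeholder:
# Hypothetical parameter update after the CloudFront domain is issued
aws cloudformation deploy --profile=${AWS_PROFILE} --region=${AWS_REGION} \
    --stack-name ${STACK_NAME} \
    --template-file ${CF_TEMPLATE_FILE} \
    --capabilities CAPABILITY_NAMED_IAM \
    --parameter-overrides \
      DomainName=d1234example.cloudfront.net \
      CookieDomain=d1234example.cloudfront.net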
Because of caching, it can take quite some time for TEA to pick up an updated bucket map. If you would like to force TEA to load the new configs immediately, invoke the bumper lambda:
aws $AWSENV lambda invoke --function-name="${STACK_NAME}-BumperLambda" output.txt
There is a process by which you can request an alternate domain name for your CloudFront endpoint. If you do this, you'll need to update the DomainName, CookieDomain, and DomainCertArn parameters of your TEA deployment.
There are two primary locations where you can see the effects of TEA in the logs.
TEA creates two log groups:
- /aws/lambda/<STACK_NAME>-EgressLambda - This is where app logs go
- /aws/lambda/<STACK_NAME>-UpdatePolicyLambda - Logs from the Lambda that keeps the in-region CIDR list up to date in the in-region download role
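If you have AWS CLI v2, tailing the app log group is a quick way to watch requests come through (the group name below assumes the stack's default Lambda naming):
# Follow the egress app logs in near-real-time
aws $AWSENV logs tail "/aws/lambda/${STACK_NAME}-EgressLambda" --follow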
When TEA generates a pre-signed S3 download URL, it adds a query parameter, A-userid, the value of which is the EDL User ID, if any, that was used to download the data. This parameter is available to be mined out of the S3 access logs.
When CloudFront is deployed in front of TEA, that A-userid parameter is also added to the S3 download request that CloudFront generates. CloudFront will also add a sourceip parameter that holds the true IP of the external user.
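As a rough sketch of mining those logs, assuming your S3 server access logs land in a logging bucket of your own (the bucket and prefix below are placeholders):
# Hypothetical: pull down access logs and count downloads per EDL user
aws $AWSENV s3 sync s3://my-access-log-bucket/tea-data-bucket/ ./s3-access-logs/
grep -rho 'A-userid=[^ &"]*' ./s3-access-logs/ | sort | uniq -c | sort -rn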
There is a lot that can go wrong, but we hope that if you followed this guide, you'll have prevented many of the common problems. If not, start here!
If you see an error message in the Cloudformation Events like this:
CloudWatch Logs role ARN must be set in account settings to enable logging (Service: AmazonApiGateway; Status Code: 400; Error Code: BadRequestException; ...
This means EnableApiGatewayLogToCloudWatch is set to True. If you don't need API Gateway logging to CloudWatch, set it to False. If you do, you must create a role with write access to CloudWatch Logs and add its ARN here: https://console.aws.amazon.com/apigateway/home?region=<REGION>#/settings
For efficiency, TEA caches configuration in the Lambda run environment. Certain changes, like modifications to the bucket map or secrets, may not be immediately picked up. Lambda run environments are eventually recycled on their own, but you can force refetching of cached values by simply adding a dummy environment variable to the running Lambda environment. There may be a better way to trigger Lambda environment flushing; let us know if you find one.
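One way to do that (a sketch; CONFIG_BUMP is just a made-up variable name) is to merge a throw-away variable into the Lambda's existing environment. Note that --environment replaces the whole variable map, so the existing values are fetched and merged first:
# Hypothetical environment "bump" to flush TEA's cached config (requires jq)
FUNC="${STACK_NAME}-EgressLambda"
VARS=$(aws $AWSENV lambda get-function-configuration --function-name "$FUNC" \
    --query 'Environment.Variables' --output json)
aws $AWSENV lambda update-function-configuration --function-name "$FUNC" \
    --environment "{\"Variables\": $(echo "$VARS" | jq -c --arg ts "$(date +%s)" '. + {CONFIG_BUMP: $ts}')}"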
The first step when you encounter an error is to check the Troubleshooting section. If your problem is not there, feel free to reach out in the #tea-pot Slack Channel. If your problem can't be resolved, we'll put in a GitHub Issue to help track the problem and seek resolution.
We have no concrete release cycle. If nothing else, we'll do periodic releases to help resolve CVE issues.
Join us in this journey! Together we can make TEA better!
We love contributions from outside the team! We have 2 requirements for accepting contributions:
- All contributions should conform to our TEA Vision
- Contributions should come with unittests to validate the feature
If you have a feature you'd like to request, first confirm the feature aligns with our TEA Vision. If you think the feature belongs in TEA, you have TWO options:
- Request the feature using a GitHub Issue
- Request the feature in the #tea-pot Slack Channel
There are LOTS of words in this document. Lots of steps. Lots of ways things can go wrong. It's understandable that you might not be interested in reading all the words and performing all the steps. In that case, try our magic bootstrapping script!
You can either download deploy.sh and run it, or, as shown below, curl the output directly into bash.
This bash script requires curl, awscli, the Session Manager Plugin, and jq to be installed.
Run deploy.sh by replacing all <VARIABLES> with the appropriate values. STACK-NAME should be less than 20 characters and consist of lower-case letters, numbers, or -. It should also be globally unique.
curl -s 'https://raw.githubusercontent.com/asfadmin/thin-egress-app/devel/build/deploy.sh' | bash /dev/stdin \
--stack-name=<STACK-NAME> \
--aws-profile=<AWS-PROFILE-NAME> \
--bastion="<SSM-BASTION-NAME>" \
--key-file=<LOCAL-PATH-TO-YOUR-PRIVATE-KEY> \
--uid=<EDL-APP-UID> \
--client-id=<EDL-APP-CLIENT-ID> \
--pass='<EDL-APP-PASSWORD>' \
--maturity=<SBX|DEV|SIT|INT|UAT|TEST|PROD> \
--edl-user-creds='<EDL-USERNAME>:<EDL-PASSWORD>'
All parameters are optional, but not supplying params affects behavior.