
Quickstart: BKPR on Amazon Elastic Container Service for Kubernetes (Amazon EKS)

Table of contents

  • Introduction
  • Prerequisites
  • Installation and setup
  • Next steps
  • Upgrading BKPR
  • Teardown and cleanup
  • Useful links

Introduction

This document walks you through setting up an Amazon Elastic Container Service for Kubernetes (Amazon EKS) cluster and installing the Bitnami Kubernetes Production Runtime (BKPR) on it.

Prerequisites

This guide assumes you have the following available:

  • An Amazon AWS account
  • aws (the AWS CLI)
  • eksctl
  • kubectl
  • kubecfg
  • jq
  • The BKPR installer binary (kubeprod)

DNS requirements

In addition to the requirements listed above, a domain name is also required for setting up Ingress endpoints to services running in the cluster. The specified domain name can be a top-level domain (TLD) or a subdomain. In either case, you have to manually set up the NS records for the specified TLD or subdomain so as to delegate DNS resolution queries to an Amazon Route 53 hosted zone created and managed by BKPR. This is required in order to generate valid TLS certificates.
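For illustration, if you delegate the subdomain eks.my-domain.com, the parent my-domain.com zone needs NS records for eks that point at the name servers Route 53 assigns to the BKPR-managed hosted zone. The values below are placeholders; the real name servers are printed in the Configure domain registration records step later in this guide:

eks.my-domain.com.  IN  NS  ns-0123.awsdns-01.org.
eks.my-domain.com.  IN  NS  ns-0456.awsdns-02.co.uk.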

Installation and setup

Step 1: Set up the cluster

In this section, you will deploy an Amazon Elastic Container Service for Kubernetes (Amazon EKS) cluster using the eksctl tool.

  • Make sure you have a working Amazon CLI environment. Refer to the Configuring the AWS CLI page for more information.

  • Configure the following environment variables, replacing the examples shown with valid information:

    export BKPR_DNS_ZONE=my-domain.com
    export AWS_EKS_USER=user@example.com
    export AWS_EKS_CLUSTER=my-eks-cluster
    export AWS_EKS_K8S_VERSION=1.16
    • BKPR_DNS_ZONE specifies the DNS suffix for the externally-visible websites and services deployed in the cluster.
    • AWS_EKS_USER specifies the email address used in requests to Let's Encrypt.
    • AWS_EKS_CLUSTER specifies the name of the EKS cluster.
    • AWS_EKS_K8S_VERSION specifies the version of Kubernetes to use for the cluster. The BKPR Kubernetes version support matrix lists the base Kubernetes versions supported by BKPR.
  • Create the EKS cluster:

    eksctl create cluster --name=${AWS_EKS_CLUSTER} \
                          --nodes=3 \
                          --version=${AWS_EKS_K8S_VERSION}

    TIP: The --ssh-access command-line flag to the eksctl create cluster command configures SSH access to the Kubernetes nodes. This is very useful when debugging issues that require you to log in to the nodes.

    NOTE: At the time of this writing, EKS clusters created with eksctl are affected by a bug that causes Elasticsearch to enter a crash loop. The workaround is to override the AMI used when creating the cluster. The AMI named amazon-eks-node-1.10-v20190211 is known to work; you will need to look up its ID for the region in which you are creating the cluster. For instance:

    Region         AMI ID
    eu-central-1   ami-074583f8d5a05e27b
    us-east-1      ami-0c5b63ec54dd3fc38

    The AWS CLI command used to retrieve the AMI ID for the corresponding image is:

    aws ec2 describe-images --owners 602401143452 --filters "Name=name,Values=amazon-eks-node-1.10-v20190211" --output json | jq -r '.Images[].ImageId'

    Then use the --node-ami command-line argument to eksctl to override the AMI ID. For example:

     eksctl create cluster --name=${AWS_EKS_CLUSTER} \
                           --nodes=3 \
                           --version=${AWS_EKS_K8S_VERSION} \
                           --node-ami ami-074583f8d5a05e27b

    Provisioning an EKS cluster can take a long time. Please be patient while the request is being processed.

  • Configure kubectl to use the new cluster:

    aws eks update-kubeconfig --name=${AWS_EKS_CLUSTER}
  • Verify that your cluster is up and running:

    kubectl get nodes

Step 2: Configure a User Pool in AWS Cognito

In order to authenticate users for applications running atop EKS, BKPR requires a User Pool in AWS Cognito to be configured.

If you already have a working User Pool in AWS Cognito that you would like to use for authenticating users, retrieve its ID, which has the form ${AWS_REGION}_${USER_POOL_ID}, export it as shown below, and then skip to the Create a user section.

export AWS_COGNITO_USER_POOL_ID=eu-central-1_sHSdWT6VL
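If you are not sure of the pool ID, you can look it up with the AWS CLI; a minimal sketch:

# List up to 20 user pools in the current region with their IDs and names.
aws cognito-idp list-user-pools --max-results 20 \
  --query 'UserPools[].{Id:Id,Name:Name}' --output table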

If you are new to using BKPR on EKS, or if you want to create a new User Pool in AWS Cognito, follow these steps:

  1. Browse to the Cognito module in the AWS Console.
  2. Navigate to Manage User Pools > Create a user pool page.
  3. Enter a valid Pool name, like eks-test, then click on the Review defaults button.

  4. Go to the Policies section and select the Only allow administrators to create users option; otherwise, anyone would be able to sign up and gain access to services running in the cluster. Save your changes before continuing to the next step.

  5. Feel free to customize other sections, like Tags, to your liking. Once done, go to the Review section and click on the Create pool button.

  6. Go to App integration > Domain name and configure the Amazon Cognito domain, which must be unique within its AWS Region. Once done, click the Save changes button.

  7. Select the General settings option, note the Pool Id and export its value:
export AWS_COGNITO_USER_POOL_ID=eu-central-1_sHSdWT6VL
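Alternatively, the same pool can be created from the AWS CLI rather than the console; a rough sketch of the steps above, where eks-test and my-unique-prefix are placeholders and only the options covered in this guide are set:

# Create a pool that only allows administrators to create users and capture its ID.
export AWS_COGNITO_USER_POOL_ID=$(aws cognito-idp create-user-pool \
  --pool-name eks-test \
  --admin-create-user-config AllowAdminCreateUserOnly=true \
  --query 'UserPool.Id' --output text)

# Reserve the Amazon Cognito domain prefix, which must be unique within the region.
aws cognito-idp create-user-pool-domain \
  --domain my-unique-prefix \
  --user-pool-id "${AWS_COGNITO_USER_POOL_ID}"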

Create a user

To access protected resources that require authentication, such as Prometheus, Kibana or Grafana, you will need to create users in the newly created user pool. The following steps show how to create a test user for accessing these resources:

  1. Browse to the Cognito module in the AWS Console.
  2. Navigate to the Manage User Pools > YOUR_USER_POOL > Users and Groups > Create user page.
  3. Fill in the required input fields to create the test user.

At any time, if you are presented with an Amazon AWS authentication form, you can use this user account to authenticate against protected resources in BKPR.

NOTE: If the credentials you configured for the user fail to work (for example, the user is repeatedly asked to change the password after the first login and never succeeds), you may need to forcibly set the user's credentials and state with:

aws --region REGION cognito-idp admin-set-user-password --user-pool-id ID --username USER --password PASS --permanent
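The same admin-set-user-password command can be combined with admin-create-user if you prefer to create the test user entirely from the CLI; a minimal sketch, where USER, PASS and the email address are placeholders:

# Create the user with a verified email address in the BKPR user pool.
aws cognito-idp admin-create-user \
  --user-pool-id "${AWS_COGNITO_USER_POOL_ID}" \
  --username USER \
  --user-attributes Name=email,Value=user@example.com Name=email_verified,Value=true

# Give the user a permanent password so the temporary-password flow is skipped.
aws cognito-idp admin-set-user-password \
  --user-pool-id "${AWS_COGNITO_USER_POOL_ID}" \
  --username USER --password PASS --permanent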

Step 3: Deploy BKPR

To bootstrap your Kubernetes cluster with BKPR, use the command below:

kubeprod install eks \
  --email ${AWS_EKS_USER} \
  --dns-zone "${BKPR_DNS_ZONE}" \
  --user-pool-id "${AWS_COGNITO_USER_POOL_ID}"

Wait for all the pods in the cluster to enter the Running state:

kubectl get pods -n kubeprod
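If you prefer a command that blocks until the deployment settles instead of polling, something along these lines should work (the 600-second timeout is an arbitrary choice):

# Wait until every pod in the kubeprod namespace reports Ready.
kubectl wait --for=condition=Ready pods --all -n kubeprod --timeout=600s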

Step 4: Configure domain registration records

BKPR creates and manages a DNS zone which is used to map external access to applications and services in the cluster. However, for it to be usable, you need to configure the NS records for the zone.

Query the name servers of the zone with the following command and configure the records with your domain registrar.

BKPR_DNS_ZONE_ID=$(aws route53 list-hosted-zones-by-name --dns-name "${BKPR_DNS_ZONE}" \
                                                         --max-items 1 \
                                                         --query 'HostedZones[0].Id' \
                                                         --output text)
aws route53 get-hosted-zone --id ${BKPR_DNS_ZONE_ID} --query DelegationSet.NameServers

Please note that it can take a while for the DNS changes to propagate.
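You can check whether the delegation has propagated by querying public DNS and comparing the answer with the name servers returned by the aws route53 get-hosted-zone command above; for example:

# Should eventually print the same ns-*.awsdns-* hosts as the Route 53 delegation set.
dig +short NS "${BKPR_DNS_ZONE}"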

Step 5: Access logging and monitoring dashboards

After the DNS changes have propagated, you should be able to access the Prometheus, Kibana and Grafana dashboards at https://prometheus.${BKPR_DNS_ZONE}, https://kibana.${BKPR_DNS_ZONE} and https://grafana.${BKPR_DNS_ZONE} respectively. Log in with the credentials created in the Create a user step.
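Before opening a browser, a quick loop like the following can confirm that the hostnames resolve and respond over HTTPS (any 2xx or 3xx status without TLS errors is a good sign):

for app in prometheus kibana grafana; do
  # Follow any authentication redirects and print only the final HTTP status code.
  curl -sSL -o /dev/null -w "${app}: %{http_code}\n" "https://${app}.${BKPR_DNS_ZONE}"
done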

Congratulations! You can now deploy your applications on the Kubernetes cluster and BKPR will help you manage and monitor them effortlessly.

Next steps

Upgrading BKPR

Step 1: Update the installer

Follow the installation guide to update the BKPR installer binary to the latest release.

Step 2: Edit kubeprod-manifest.jsonnet

Edit the kubeprod-manifest.jsonnet file that was generated by kubeprod install and update the version referred to in the import statement. For example, the following snippet illustrates the changes required in the kubeprod-manifest.jsonnet file if you're upgrading to version v1.3.0 from version v1.2.0.

 // Cluster-specific configuration
-(import "https://releases.kubeprod.io/files/v1.2.0/manifests/platforms/eks.jsonnet") {
+(import "https://releases.kubeprod.io/files/v1.3.0/manifests/platforms/eks.jsonnet") {
  config:: import "kubeprod-autogen.json",
  // Place your overrides here
 }
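If you prefer to script this edit, a one-liner along the following lines should work for the example shown above (adjust the versions to match your upgrade):

# GNU sed shown; on macOS use: sed -i '' ...
sed -i 's|/v1\.2\.0/|/v1.3.0/|' kubeprod-manifest.jsonnet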

Step 3: Perform the upgrade

Re-run the kubeprod install command from the Deploy BKPR step in the directory containing the existing kubeprod-autogen.json and updated kubeprod-manifest.jsonnet files.
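For reference, this is the same command shown in the Deploy BKPR step, re-run from the directory that contains both files:

kubeprod install eks \
  --email ${AWS_EKS_USER} \
  --dns-zone "${BKPR_DNS_ZONE}" \
  --user-pool-id "${AWS_COGNITO_USER_POOL_ID}"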

Teardown and cleanup

Step 1: Uninstall BKPR from your cluster

kubecfg delete kubeprod-manifest.jsonnet

Step 2: Wait for the kubeprod namespace to be deleted

# Specific finalizers cleanup, to avoid kubeprod ns lingering
# - cert-manager challenges if TLS certs have not been issued
kubectl get -n kubeprod challenges.acme.cert-manager.io -oname| \
  xargs -rtI{} kubectl patch -n kubeprod {} \
    --type=json -p='[{"op": "remove", "path": "/metadata/finalizers"}]'
kubectl wait --for=delete ns/kubeprod --timeout=300s

Step 3: Delete the Hosted Zone in Route 53

BKPR_DNS_ZONE_ID=$(aws route53 list-hosted-zones-by-name --dns-name "${BKPR_DNS_ZONE}" \
                                                         --max-items 1 \
                                                         --query 'HostedZones[0].Id' \
                                                         --output text)
aws route53 list-resource-record-sets --hosted-zone-id ${BKPR_DNS_ZONE_ID} \
                                      --query '{ChangeBatch:{Changes:ResourceRecordSets[?Type != `NS` && Type != `SOA`].{Action:`DELETE`,ResourceRecordSet:@}}}' \
                                      --output json > changes

aws route53 change-resource-record-sets --cli-input-json file://changes \
                                        --hosted-zone-id ${BKPR_DNS_ZONE_ID} \
                                        --query 'ChangeInfo.Id' \
                                        --output text

aws route53 delete-hosted-zone --id ${BKPR_DNS_ZONE_ID} \
                               --query 'ChangeInfo.Id' \
                               --output text

Additionally, you should remove the NS entries configured at the domain registrar.

Step 4: Delete the BKPR user

ACCOUNT=$(aws sts get-caller-identity | jq -r .Account)
aws iam detach-user-policy --user-name "bkpr-${BKPR_DNS_ZONE}" --policy-arn "arn:aws:iam::${ACCOUNT}:policy/bkpr-${BKPR_DNS_ZONE}"
aws iam delete-policy --policy-arn "arn:aws:iam::${ACCOUNT}:policy/bkpr-${BKPR_DNS_ZONE}"
ACCESS_KEY_ID=$(jq -r .externalDns.aws_access_key_id kubeprod-autogen.json)
aws iam delete-access-key --user-name "bkpr-${BKPR_DNS_ZONE}" --access-key-id "${ACCESS_KEY_ID}"
aws iam delete-user --user-name "bkpr-${BKPR_DNS_ZONE}"

Step 5: Delete the BKPR App Client

USER_POOL=$(jq -r .oauthProxy.aws_user_pool_id kubeprod-autogen.json)
CLIENT_ID=$(jq -r .oauthProxy.client_id kubeprod-autogen.json)
aws cognito-idp delete-user-pool-client --user-pool-id "${USER_POOL}" --client-id "${CLIENT_ID}"

Step 6: Delete the EKS cluster

eksctl delete cluster --name ${AWS_EKS_CLUSTER}

Useful links