This small project provisions the AWS infrastructure with Terraform for my blog post Kubernetes The (real) Hard Way on AWS.
An AWS CDK (Python) version of this code is available 🔗 HERE
You can practice creating a multi-node K8s cluster yourself for training purposes or CKA exam preparation.
- aws-cli Profile
- Existing AWS Route53 Public Hosted Zone
- Terraform 0.12

Default values - you can adapt all of them to your needs
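With those defaults, the provider wiring can be sketched roughly as follows. This is a sketch only, using the `aws_profile` and `aws_region` variables documented in the inputs table below; the repository's actual provider block may differ:

```hcl
# Sketch only – the repo's actual variable and provider blocks may differ.
variable "aws_profile" {
  type    = string
  default = "default"
}

variable "aws_region" {
  type    = string
  default = "us-east-1"
}

provider "aws" {
  profile = var.aws_profile
  region  = var.aws_region
}
```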
- 1x VPC, 3x Public Subnets, 3x Private Subnets, Route Tables, Routes
- 3x Worker Nodes
- 3x Master Nodes
- 3x Etcd Nodes
- 1x Bastion Host
- Generates an AWS Key Pair for instances
- Route53 Records for internal & external IPv4 addresses
- 1x Public LoadBalancer for Master Nodes (external kubectl access)
- 1x Private LoadBalancer for Master Nodes (fronting kube-apiservers)
- 1x Public LoadBalancer for Bastion Host (AutoScalingGroup)
- Looks up the most recent Ubuntu AMI (works in all regions)
- Installs awscli, cfssl, cfssljson via UserData
- Allows external access only from your workstation's IPv4 address
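The workstation-only access rule can be approximated with Terraform's `http` data source. This is a hedged sketch, not the repo's actual code; the resource names and the `aws_security_group.bastion` reference are assumptions:

```hcl
# Sketch: look up the workstation's current public IPv4 and allow only that /32.
data "http" "workstation_ip" {
  url = "http://ipv4.icanhazip.com"
}

# "aws_security_group.bastion" is a hypothetical security group name.
resource "aws_security_group_rule" "workstation_ssh" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["${chomp(data.http.workstation_ip.body)}/32"]
  security_group_id = aws_security_group.bastion.id
}
```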
Name | Description | Type | Default | Required |
---|---|---|---|---|
availability_zones | Number of different AZs to use | number | `3` | no |
aws_key_pair_name | AWS Key Pair name to use for EC2 instances (if one already exists) | string | `null` | no |
aws_profile | AWS CLI profile (e.g. `default`) | string | `"default"` | no |
aws_region | AWS region (e.g. `us-east-1` => US East N. Virginia) | string | `"us-east-1"` | no |
aws_vpc_cidr | VPC CIDR block | string | `"10.23.0.0/16"` | no |
bastion_instance_type | EC2 instance type for the Bastion Host | string | `"t3a.small"` | no |
bastion_max_size | Maximum number of EC2 instances in the Bastion AutoScalingGroup | number | `1` | no |
bastion_min_size | Minimum number of EC2 instances in the Bastion AutoScalingGroup | number | `1` | no |
bastion_size | Desired number of EC2 instances in the Bastion AutoScalingGroup | number | `1` | no |
etcd_instance_type | EC2 instance type for etcd instances | string | `"t3a.small"` | no |
etcd_max_size | Maximum number of EC2 instances in the etcd AutoScalingGroup | number | `3` | no |
etcd_min_size | Minimum number of EC2 instances in the etcd AutoScalingGroup | number | `3` | no |
etcd_size | Desired number of EC2 instances in the etcd AutoScalingGroup | number | `3` | no |
hosted_zone | Route53 Hosted Zone for creating records (without trailing `.`, e.g. `napo.io`) | string | n/a | yes |
master_instance_type | EC2 instance type for K8s master instances | string | `"t3a.small"` | no |
master_max_size | Maximum number of EC2 instances in the K8s Master AutoScalingGroup | number | `3` | no |
master_min_size | Minimum number of EC2 instances in the K8s Master AutoScalingGroup | number | `3` | no |
master_size | Desired number of EC2 instances in the K8s Master AutoScalingGroup | number | `3` | no |
owner | Owner name used for tags | string | `"napo.io"` | no |
pod_cidr | First two octets of the Pod network CIDR (used in Worker UserData to generate the `POD_CIDR` environment variable) | string | `"10.200"` | no |
project | Project name used for tags | string | `"k8s-the-right-hard-way-aws"` | no |
ssh_public_key_path | Path to an SSH public key (used to create a new AWS Key Pair from an existing local RSA public key) | string | `"~/.ssh/id_rsa.pub"` | no |
stage | Environment name (e.g. `testing`, `dev`, `staging`, `prod`) | string | `"testing"` | no |
worker_instance_type | EC2 instance type for K8s worker instances | string | `"t3a.small"` | no |
worker_max_size | Maximum number of EC2 instances in the K8s Worker AutoScalingGroup | number | `3` | no |
worker_min_size | Minimum number of EC2 instances in the K8s Worker AutoScalingGroup | number | `3` | no |
worker_size | Desired number of EC2 instances in the K8s Worker AutoScalingGroup | number | `3` | no |
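A minimal `terraform.tfvars` might look like this. Only `hosted_zone` is required; everything else falls back to the defaults in the table above (`example.com` is a placeholder, substitute your own zone):

```hcl
# terraform.tfvars – example values; only hosted_zone is required.
hosted_zone = "example.com" # your existing Route53 public hosted zone

# Optional overrides (defaults shown in the table above):
aws_profile          = "default"
aws_region           = "us-east-1"
worker_instance_type = "t3a.small"
stage                = "testing"
```

With the values in place, the usual `terraform init` and `terraform apply` workflow applies, using the configured AWS CLI profile.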
Name | Description |
---|---|
route53_bastion_public_fqdn | Route53 record for the Bastion Host Load Balancer |
route53_master-public-lb_public_fqdn | Route53 record for the Master Public Load Balancer |