# Terraform Beginners Bootcamp 2023 - Week 1
- Root Module Structure
- Terraform and Input Variables
- Dealing With Configuration Drift
- What happens if we lose our state file?
- Terraform Modules
- Considerations when using ChatGPT to write Terraform
- Working with Files in Terraform
- Terraform Locals
- Terraform Data Sources
- Working with JSON
- Terraform Data
- Provisioners
- For Each Expressions
## Root Module Structure

Our root module structure is as follows:

```
PROJECT_ROOT
│
├── main.tf                  # everything else
├── variables.tf             # stores the structure of input variables
├── terraform.tfvars         # the data of variables we want to load into our Terraform project
├── providers.tf             # defines required providers and their configuration
├── outputs.tf               # stores our outputs
└── README.md                # required for root modules
```
## Terraform and Input Variables

In Terraform we can set two kinds of variables:

- Environment Variables - those you would set in your bash terminal eg. AWS credentials
- Terraform Variables - those that you would normally set in your tfvars file
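A minimal sketch of the second kind, using the `user_uuid` variable from this project (the description text is illustrative). Terraform maps any environment variable prefixed with `TF_VAR_` onto the matching input variable:

```tf
# variables.tf

# Declares an input variable. Terraform will pick up a value from,
# among other sources, an environment variable named TF_VAR_user_uuid:
#   export TF_VAR_user_uuid="example-uuid"
variable "user_uuid" {
  description = "UUID of the user"
  type        = string
}
```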
We can set Terraform Cloud variables to be sensitive so they are not shown visibly in the UI.
You can configure Terraform input variables using the `-var` flag, which allows you to set or override variables specified in `.tfvars` files. For instance:

```sh
terraform apply -var user_uuid=0000-...
```

This command will override the predefined variable values in other files with the specified input.
When dealing with multiple variables needed in your Terraform commands, it's more convenient to manage them in a single file. You can define all your variables within a file, whether it's in `.tfvars` or `.tfvars.json` format, and then reference the file using the `-var-file` flag. For example:

```sh
terraform apply -var-file="bulkvalues.tfvars"
```
By default, Terraform loads variables from the `terraform.tfvars` file. This file serves as a central repository for your variable configurations, making it easy to manage variables in bulk.
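For example (the values below are placeholders), a `terraform.tfvars` file is just a set of `name = value` assignments matching the declarations in `variables.tf`:

```tf
# terraform.tfvars
user_uuid   = "example-uuid"        # placeholder value
bucket_name = "example-bucket-name" # placeholder value
```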
Terraform automatically loads variable definitions from several sources, with later sources taking precedence over earlier ones:

- Environment variables (those prefixed with `TF_VAR_`).
- The `terraform.tfvars` file, if present.
- The `terraform.tfvars.json` file, if present.
- Any `*.auto.tfvars` or `*.auto.tfvars.json` files, processed in lexical order of their filenames.
- Any `-var` and `-var-file` options on the command line, in the order they are provided.

Because later definitions override earlier ones, command-line flags always win.
## Dealing With Configuration Drift

### What happens if we lose our state file?

If you lose your state file, you will most likely have to tear down all your cloud infrastructure manually.

You can use `terraform import`, but it won't work for all cloud resources. You need to check the Terraform provider documentation to see which resources support import.

```sh
terraform import aws_s3_bucket.bucket bucket-name
```
Terraform Import: https://developer.hashicorp.com/terraform/cli/import

For S3-specific import behaviour, see the `aws_s3_bucket` page in the AWS provider documentation.
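Terraform 1.5 and later also support a declarative `import` block as an alternative to the CLI command; a minimal sketch (the resource address and bucket name mirror the command above):

```tf
# Declarative alternative to `terraform import` (Terraform 1.5+).
import {
  to = aws_s3_bucket.bucket # resource address in our configuration
  id = "bucket-name"        # the existing bucket's name in AWS
}
```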
If someone deletes or modifies cloud resources manually through ClickOps, running `terraform plan` will attempt to put our infrastructure back into the expected state, fixing configuration drift.
## Terraform Modules

It is recommended to place modules in a `modules` directory when locally developing modules, but you can name it whatever you like.

We can pass input variables to our module. The module has to declare these variables in its own variables.tf:
module "terrahouse_aws" {
source = "./modules/terrahouse_aws"
user_uuid = var.user_uuid
bucket_name = var.bucket_name
}
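For the call above to work, the module must declare matching variables in its own `variables.tf`; a minimal sketch (the description strings are illustrative):

```tf
# modules/terrahouse_aws/variables.tf

variable "user_uuid" {
  description = "UUID of the user"
  type        = string
}

variable "bucket_name" {
  description = "Name of the S3 bucket"
  type        = string
}
```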
Using the `source` argument we can import the module from various places, eg:

- locally
- GitHub
- Terraform Registry

```tf
module "terrahouse_aws" {
  source = "./modules/terrahouse_aws"
}
```
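The GitHub and Terraform Registry forms look like this (both module addresses below are illustrative examples, not part of this project):

```tf
# Sourcing a module from a GitHub repository (hypothetical path)
module "terrahouse_aws" {
  source = "github.com/ExampleOrg/terraform-terrahouse-aws"
}

# Sourcing a module from the Terraform Registry (illustrative address and version)
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.0.0"
}
```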
## Considerations when using ChatGPT to write Terraform

LLMs such as ChatGPT may not be trained on the latest documentation or information about Terraform.

They may produce older examples that could be deprecated, often affecting providers.
## Working with Files in Terraform

`fileexists` is a built-in Terraform function to check the existence of a file:

```tf
condition = fileexists(var.error_html_filepath)
```
https://developer.hashicorp.com/terraform/language/functions/fileexists
https://developer.hashicorp.com/terraform/language/functions/filemd5
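In context, `fileexists` is typically used inside a variable validation block; a minimal sketch assuming the `error_html_filepath` variable referenced above:

```tf
variable "error_html_filepath" {
  description = "Path to the error.html file"
  type        = string

  # Fails terraform plan early if the file does not exist on disk.
  validation {
    condition     = fileexists(var.error_html_filepath)
    error_message = "The provided path for error.html does not exist."
  }
}
```

Similarly, `filemd5` computes a file's MD5 hash, which can be used to set the `etag` argument on an `aws_s3_object` so Terraform detects content changes.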
In Terraform there is a special variable called `path` that allows us to reference local paths:

- path.module = get the path for the current module
- path.root = get the path for the root module

```tf
resource "aws_s3_object" "index_html" {
  bucket = aws_s3_bucket.website_bucket.bucket
  key    = "index.html"
  source = "${path.root}/public/index.html"
}
```
## Terraform Locals

Locals allow us to define local values. They can be very useful when we need to transform data into another format and have it referenced as a variable.

```tf
locals {
  s3_origin_id = "MyS3Origin"
}
```
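A local is then referenced with the `local.` prefix; for example:

```tf
# Referencing the local value elsewhere in the configuration
output "origin_id" {
  value = local.s3_origin_id
}
```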
## Terraform Data Sources

This allows us to source data from cloud resources.

This is useful when we want to reference cloud resources without importing them.

```tf
data "aws_caller_identity" "current" {}

output "account_id" {
  value = data.aws_caller_identity.current.account_id
}
```
## Working with JSON

We use the `jsonencode` function to create a JSON policy inline in the HCL:

```
> jsonencode({"hello"="world"})
{"hello":"world"}
```
## Terraform Data

Plain data values such as Local Values and Input Variables don't have any side-effects to plan against and so they aren't valid in `replace_triggered_by`. You can use `terraform_data`'s behavior of planning an action each time its input changes to indirectly use a plain value to trigger replacement.
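A sketch of that pattern (the `revision` variable is illustrative; changing its value forces the instance to be replaced):

```tf
variable "revision" {
  default = 1
}

# terraform_data plans an action whenever its input changes, which we
# can use to trigger replacement of another resource.
resource "terraform_data" "replacement" {
  input = var.revision
}

resource "aws_instance" "web" {
  # ...

  lifecycle {
    replace_triggered_by = [terraform_data.replacement]
  }
}
```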
## Provisioners

Provisioners allow you to execute commands on compute instances, eg. an AWS CLI command.

They are not recommended for use by HashiCorp because Configuration Management tools such as Ansible are a better fit, but the functionality exists.

### Local-exec

This will execute a command on the machine running the Terraform commands, eg. plan or apply:
resource "aws_instance" "web" {
# ...
provisioner "local-exec" {
command = "echo The server's IP address is ${self.private_ip}"
}
}
### Remote-exec

This will execute commands on a machine that you target. You will need to provide credentials, such as SSH, to get into the machine:
resource "aws_instance" "web" {
# ...
# Establishes connection to be used by all
# generic remote provisioners (i.e. file/remote-exec)
connection {
type = "ssh"
user = "root"
password = var.root_password
host = self.public_ip
}
provisioner "remote-exec" {
inline = [
"puppet apply",
"consul join ${aws_instance.web.private_ip}",
]
}
}
## For Each Expressions

For each allows us to enumerate over complex data types:

```tf
[for s in var.list : upper(s)]
```

This is mostly useful when you are creating multiples of a cloud resource and you want to reduce the amount of repetitive Terraform code, as in the sketch below.
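A sketch of that pattern, assuming a `public/assets` directory of image files (the directory layout and glob pattern are illustrative):

```tf
# Uploads every matching file in public/assets, one aws_s3_object per file.
resource "aws_s3_object" "upload_assets" {
  # fileset returns the set of file paths matching the pattern
  for_each = fileset("${path.root}/public/assets", "*.{jpg,png,gif}")

  bucket = aws_s3_bucket.website_bucket.bucket
  key    = "assets/${each.key}"
  source = "${path.root}/public/assets/${each.key}"
  etag   = filemd5("${path.root}/public/assets/${each.key}")
}
```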