
Error: Invalid reference from destroy provisioner #5

Open
nhutpm219 opened this issue Apr 8, 2021 · 3 comments

Comments

@nhutpm219

Hi,

I get an error in the last video of "Bringing It All Together" and cannot plan or apply, even though it works in the video. Is there any way to troubleshoot the code and resolve this problem?

Terraform v0.14.9
ansible 2.9.6
python version = 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0]

--- instances.tf ---
resource "aws_instance" "jenkins-worker-oregon" {
  provider                    = aws.region-worker
  count                       = var.workers-count
  ami                         = data.aws_ssm_parameter.linuxAmiOregon.value
  instance_type               = var.instance-type
  key_name                    = aws_key_pair.worker-key.key_name
  associate_public_ip_address = true
  vpc_security_group_ids      = [aws_security_group.jenkins-sg-oregon.id]
  subnet_id                   = aws_subnet.subnet_1_oregon.id

  tags = {
    Name = join("_", ["jenkins_worker_tf", count.index + 1])
  }

  depends_on = [aws_main_route_table_association.set-worker-default-rt-assoc, aws_instance.jenkins-master]

  provisioner "local-exec" {
    command = <<EOF
aws --profile ${var.profile} ec2 wait instance-status-ok --region ${var.region-worker} --instance-ids ${self.id}
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook --extra-vars 'passed_in_hosts=tag_Name_${self.tags.Name} master_ip=${aws_instance.jenkins-master.private_ip}' ~/terraform-aws/ansible_templates/install_jenkins_worker.yml
EOF
  }

  provisioner "remote-exec" {
    when = destroy
    inline = [
      "java -jar /home/ec2-user/jenkins-cli.jar -auth @/home/ec2-user/jenkins_auth -s http://${aws_instance.jenkins-master.private_ip}:8080 delete-node ${self.private_ip}"
    ]
    connection {
      type        = "ssh"
      user        = "ec2-user"
      private_key = file("~/.ssh/id_rsa")
      host        = self.public_ip
    }
  }
}

--- Error ---
nhutpm@nhutpm:~/terraform-aws$ terraform validate

Error: Invalid reference from destroy provisioner

on instances.tf line 84, in resource "aws_instance" "jenkins-worker-oregon":
84: inline = [
85: "java -jar /home/ec2-user/jenkins-cli.jar -auth @/home/ec2-user/jenkins_auth -s http://${aws_instance.jenkins-master.private_ip}:8080 delete-node ${self.private_ip}"
86: ]

Destroy-time provisioners and their connection configurations may only
reference attributes of the related resource, via 'self', 'count.index', or
'each.key'.

References to other resources during the destroy phase can cause dependency
cycles and interact poorly with create_before_destroy.
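
For reference, the rule the error describes is that since Terraform 0.13 a destroy-time provisioner (and its connection block) may only reference the resource's own attributes. A minimal compliant sketch, with hypothetical resource names, looks like this:

```hcl
resource "aws_instance" "example" {
  # ... ami, instance_type, etc.

  provisioner "remote-exec" {
    when = destroy
    # Only self, count.index, or each.key are allowed here;
    # referencing another resource such as aws_instance.jenkins-master
    # is what triggers "Invalid reference from destroy provisioner".
    inline = [
      "echo 'deregistering ${self.private_ip}'"
    ]
    connection {
      type        = "ssh"
      user        = "ec2-user"
      private_key = file("~/.ssh/id_rsa")
      host        = self.public_ip
    }
  }
}
```

Any value from another resource has to be copied somewhere the provisioner can reach through `self` (for example, a `null_resource` with `triggers`, as in the workaround below in this thread).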

@nmofonseca

Hello, I have the same issue. It seems this behavior changed in Terraform 0.13+. I found something that I am going to try here:

https://dev.to/jmarhee/upgrading-terraform-destroy-time-provisioners-50p7

But it would be good to have some comments from the Trainer and to get the code updated, because it's really difficult for someone who is new to Terraform to work this one out.

@nmofonseca

I sort of made it work, but I lost the ability to use count to spin up multiple workers, and I don't know how to get that back.

To make the destroy-time provisioner work in v0.13+, I used the following:

#Create EC2 in us-west-2
resource "aws_instance" "jenkins-worker-oregon" {
  provider = aws.region-worker
  #count                       = var.workers-count
  ami                         = data.aws_ssm_parameter.linuxAmiOregon.value
  instance_type               = var.instance-type
  key_name                    = aws_key_pair.worker-key.key_name
  associate_public_ip_address = true
  vpc_security_group_ids      = [aws_security_group.jenkins-sg-oregon.id]
  subnet_id                   = aws_subnet.subnet_1_oregon.id
  provisioner "local-exec" {
    command = <<EOF
aws --profile ${var.profile} ec2 wait instance-status-ok --region ${var.region-worker} --instance-ids ${self.id} \
&& ansible-playbook --extra-vars 'passed_in_hosts=tag_Name_${self.tags.Name} master_ip=${aws_instance.jenkins-master.private_ip}' ansible_templates/install_worker.yaml
EOF
  }
  tags = {
    Name = "jenkins_worker_tf"
  }
  depends_on = [aws_main_route_table_association.set-worker-default-rt-assoc, aws_instance.jenkins-master]
}

resource "null_resource" "delete_jenkins_worker" {
  triggers = {
    worker_id       = aws_instance.jenkins-worker-oregon.id
    worker_ssh_key  = "${file("${path.module}/test_rsa")}"
    worker_pub_ip   = aws_instance.jenkins-worker-oregon.public_ip
    worker_priv_ip  = aws_instance.jenkins-worker-oregon.private_ip
    master_ip       = aws_instance.jenkins-master.private_ip
    worker_con_type = "ssh"
    worker_ssh_user = "ec2-user"
  }
  provisioner "remote-exec" {
    when = destroy
    inline = [
      "java -jar /home/ec2-user/jenkins-cli.jar -auth @/home/ec2-user/jenkins_auth -s http://${self.triggers.master_ip}:8080 delete-node ${self.triggers.worker_priv_ip}"
    ]
    connection {
      type        = self.triggers.worker_con_type
      user        = self.triggers.worker_ssh_user
      private_key = self.triggers.worker_ssh_key
      host        = self.triggers.worker_pub_ip
    }
  }
}

@nmofonseca

It seems this has been fixed in a different repository:
https://github.com/ACloudGuru-Resources/content-deploying-to-aws-ansible-terraform/blob/master/terraform_v13_compatible_code/instances.tf

Much more elegant option that works with count :)
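
For anyone who can't follow the link: the usual count-compatible variant of the `null_resource` workaround is to give the `null_resource` the same `count` and stash the per-instance values in `triggers`, so the destroy provisioner only ever references `self`. A sketch under that assumption (not the exact code from the linked repository):

```hcl
resource "null_resource" "delete_jenkins_worker" {
  count = var.workers-count

  # Copy everything the destroy-time provisioner will need into triggers;
  # at destroy time these are read back through self.triggers.*
  triggers = {
    master_ip      = aws_instance.jenkins-master.private_ip
    worker_priv_ip = aws_instance.jenkins-worker-oregon[count.index].private_ip
    worker_pub_ip  = aws_instance.jenkins-worker-oregon[count.index].public_ip
  }

  provisioner "remote-exec" {
    when = destroy
    inline = [
      "java -jar /home/ec2-user/jenkins-cli.jar -auth @/home/ec2-user/jenkins_auth -s http://${self.triggers.master_ip}:8080 delete-node ${self.triggers.worker_priv_ip}"
    ]
    connection {
      type        = "ssh"
      user        = "ec2-user"
      private_key = file("~/.ssh/id_rsa")
      host        = self.triggers.worker_pub_ip
    }
  }
}
```

Because each `null_resource` instance is tied to one worker through its triggers, destroying worker N runs the delete-node command only for that worker's IPs.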
