Migrating count to for_each in Terraform v0.12
version 1.0, 2020-01-29
This post describes how to migrate Terraform configuration written in v0.11.x that makes use of the count keyword over to the for_each keyword introduced in Terraform v0.12.6.
The examples described here come from an F5 BIG-IP LTM Terraform configuration whose resources can’t be destroyed and re-created without causing an outage. That constraint means ensuring the underlying infrastructure stays the same while still allowing use of the new features.
Definitions
- pool: In F5 terms, a pool load balances traffic to its member nodes.
- node | pool member: An endpoint receiving the actual traffic.
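For orientation, here is a minimal sketch of those two objects in Terraform. The resource types come from the F5 BIG-IP provider (terraform-provider-bigip); the names and the address are hypothetical:

resource "bigip_ltm_node" "web_0" {
  # Hypothetical node; the address is made up for illustration.
  name    = "/Common/dgamba-web-0.example.com"
  address = "10.0.0.10"
}

resource "bigip_ltm_pool" "this" {
  name = "/Common/dgamba_test_80"
}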
The Problem
Let’s say I have a bunch of members in my pool. In Terraform v0.11.x I was passing them as a list:
node_port_list = [
"${module.dgamba-web-0.name}:8080",
"${module.dgamba-web-1.name}:8080",
]
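The dgamba-web-* modules themselves are out of scope for this post. The only thing assumed about them is that each exposes its F5 node name as a name output; a hypothetical v0.11-style sketch of that output:

output "name" {
  value = "${bigip_ltm_node.this.name}"
}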
On the receiving side, I attach them to my pool by setting count to the length of the list:
resource "bigip_ltm_pool_attachment" "this" {
count = length(var.node_port_list)
pool = bigip_ltm_pool.this.name
node = element(var.node_port_list, count.index)
depends_on = [bigip_ltm_pool.this]
}
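The node_port_list variable declaration isn’t shown above; a minimal v0.12-style declaration, assuming a plain list of strings, would be:

variable "node_port_list" {
  description = "node:port entries to attach to the pool"
  type        = list(string)
  default     = []
}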
Now, for whatever reason, one of my web nodes needs to be removed from the pool, and that node happens to sit in the list with elements after it:
node_port_list = [
- "${module.dgamba-web-0.name}:8080",
"${module.dgamba-web-1.name}:8080",
]
One common scenario for us is changing the AMI of the underlying web nodes from one patch level to the next: we first add the new set of nodes side by side with the old set, and then remove the old set.
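Mid-rotation, for example, the list might look like this (the web-2 and web-3 modules are hypothetical stand-ins for the new-AMI nodes):

node_port_list = [
  "${module.dgamba-web-0.name}:8080", # old AMI
  "${module.dgamba-web-1.name}:8080", # old AMI
  "${module.dgamba-web-2.name}:8080", # new AMI
  "${module.dgamba-web-3.name}:8080", # new AMI
]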
Back to the two-node example: after removing an element like that, Terraform will:
- See that the value at index 0 changed, so it will destroy the old index 0.
- See that the list shrank from 2 elements to 1, so it will go ahead and destroy the element at index 1.
- See that index 0 has a new value, so it will try to create it.
Terraform will perform the following actions:
# module.dgamba_test_80.bigip_ltm_pool_attachment.this[0] must be replaced
-/+ resource "bigip_ltm_pool_attachment" "this" {
~ id = "/Common/dgamba_test_80-/Common/dgamba-web-0.example.com:8080" -> (known after apply)
~ node = "/Common/dgamba-web-0.example.com:8080" -> "/Common/dgamba-web-1.example.com:8080" # forces replacement
pool = "/Common/dgamba_test_80"
}
# module.dgamba_test_80.bigip_ltm_pool_attachment.this[1] will be destroyed
- resource "bigip_ltm_pool_attachment" "this" {
- id = "/Common/dgamba_test_80-/Common/dgamba-web-1.example.com:8080" -> null
- node = "/Common/dgamba-web-1.example.com:8080" -> null
- pool = "/Common/dgamba_test_80" -> null
}
Plan: 1 to add, 0 to change, 2 to destroy.
As you can see from the above, both pool members get destroyed, so this change could cause an outage for the duration of the change.
In the past (Terraform v0.11.x), our solution has been to do the following:
$ ./terraform destroy -target module.dgamba_test_80.bigip_ltm_pool_attachment.this[0]
$ ./terraform state mv module.dgamba_test_80.bigip_ltm_pool_attachment.this[1] module.dgamba_test_80.bigip_ltm_pool_attachment.this[0]
That is, first destroy the attachment we actually want gone, using the -target option to limit the scope of the change, and then modify the state so index 1 becomes index 0.
In other words, every single time we update our infrastructure as code, we need to run manual commands. This is error prone and dangerous!
The Solution
Temporarily revert the node removal from the pool so you only handle one change at a time. So, back to:
node_port_list = [
"${module.dgamba-web-0.name}:8080",
"${module.dgamba-web-1.name}:8080",
]
Now, start by creating a copy of the pool module, a different version if you will. That way I can upgrade my pools one by one instead of having to do one massive change.
module "dgamba_test_80" {
- source = "./modules/pool"
+ source = "./modules/pool-v2"
In my new module, I change the pool attachment code:
resource "bigip_ltm_pool_attachment" "this" {
- count = length(var.node_port_list)
+ for_each = toset(var.node_port_list)
pool = bigip_ltm_pool.this.name
- node = element(var.node_port_list, count.index)
+ node = each.key
depends_on = [bigip_ltm_pool.this]
}
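A quick aside on why toset and each.key work here: when for_each iterates over a set of strings, each.key and each.value are the same string, and that string, rather than a positional index, becomes the instance key in state. A self-contained sketch using the null provider, purely for illustration:

locals {
  node_ports = toset(["web-0.example.com:8080", "web-1.example.com:8080"])
}

resource "null_resource" "demo" {
  for_each = local.node_ports

  triggers = {
    node = each.key # for a set, each.key == each.value
  }
}

# State addresses become:
#   null_resource.demo["web-0.example.com:8080"]
#   null_resource.demo["web-1.example.com:8080"]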
Doing that isn’t enough:
Terraform will perform the following actions:
# module.dgamba_test_80.bigip_ltm_pool_attachment.this will be destroyed
- resource "bigip_ltm_pool_attachment" "this" {
- id = "/Common/dgamba_test_80-/Common/dgamba-web-0.example.com:8080" -> null
- node = "/Common/dgamba-web-0.example.com:8080" -> null
- pool = "/Common/dgamba_test_80" -> null
}
# module.dgamba_test_80.bigip_ltm_pool_attachment.this[1] will be destroyed
- resource "bigip_ltm_pool_attachment" "this" {
- id = "/Common/dgamba_test_80-/Common/dgamba-web-1.example.com:8080" -> null
- node = "/Common/dgamba-web-1.example.com:8080" -> null
- pool = "/Common/dgamba_test_80" -> null
}
# module.dgamba_test_80.bigip_ltm_pool_attachment.this["/Common/dgamba-web-0.example.com:8080"] will be created
+ resource "bigip_ltm_pool_attachment" "this" {
+ id = (known after apply)
+ node = "/Common/dgamba-web-0.example.com:8080"
+ pool = "/Common/dgamba_test_80"
}
# module.dgamba_test_80.bigip_ltm_pool_attachment.this["/Common/dgamba-web-1.example.com:8080"] will be created
+ resource "bigip_ltm_pool_attachment" "this" {
+ id = (known after apply)
+ node = "/Common/dgamba-web-1.example.com:8080"
+ pool = "/Common/dgamba_test_80"
}
Plan: 2 to add, 0 to change, 2 to destroy.
The above plan would also cause an outage, so we need to do terraform state mv:
$ ./terraform state mv module.dgamba_test_80.bigip_ltm_pool_attachment.this[0] 'module.dgamba_test_80.bigip_ltm_pool_attachment.this["/Common/dgamba-web-0.example.com:8080"]'
Move "module.dgamba_test_80.bigip_ltm_pool_attachment.this[0]" to "module.dgamba_test_80.bigip_ltm_pool_attachment.this[\"/Common/dgamba-web-0.example.com:8080\"]"
Successfully moved 1 object(s).
$ ./terraform state mv module.dgamba_test_80.bigip_ltm_pool_attachment.this[1] 'module.dgamba_test_80.bigip_ltm_pool_attachment.this["/Common/dgamba-web-1.example.com:8080"]'
Move "module.dgamba_test_80.bigip_ltm_pool_attachment.this[1]" to "module.dgamba_test_80.bigip_ltm_pool_attachment.this[\"/Common/dgamba-web-1.example.com:8080\"]"
Successfully moved 1 object(s).
Warning
Notice how, even though the plan referred to node index 0 as:
module.dgamba_test_80.bigip_ltm_pool_attachment.this
the state move needs to point at it as:
module.dgamba_test_80.bigip_ltm_pool_attachment.this[0]
Additionally, the second argument is wrapped in single quotes to avoid shell issues with the brackets and quotes.
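With only two members this is manageable by hand, but for a larger pool a small shell loop can generate the same moves. A sketch assuming the state indices match the list order (as they do here):

i=0
for node in "/Common/dgamba-web-0.example.com:8080" \
            "/Common/dgamba-web-1.example.com:8080"; do
  ./terraform state mv \
    "module.dgamba_test_80.bigip_ltm_pool_attachment.this[$i]" \
    "module.dgamba_test_80.bigip_ltm_pool_attachment.this[\"$node\"]"
  i=$((i + 1))
done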
Now your upgrade to for_each is complete!
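As a sanity check, run a plan and confirm that the state moves made the migration a no-op; you should see something along the lines of:

$ ./terraform plan
...
No changes. Infrastructure is up-to-date.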
The End Result
Now we can get back to removing that single entry at index 0:
module "dgamba_test_80" {
- source = "./modules/pool"
+ source = "./modules/pool-v2"
name = "dgamba_test_80"
node_port_list = [
- "${module.dgamba-web-0.name}:8080",
"${module.dgamba-web-1.name}:8080",
]
}
And we finally get what we expect, a single node destruction:
Terraform will perform the following actions:
# module.dgamba_test_80.bigip_ltm_pool_attachment.this["/Common/dgamba-web-0.example.com:8080"] will be destroyed
- resource "bigip_ltm_pool_attachment" "this" {
- id = "/Common/dgamba_test_80-/Common/dgamba-web-0.example.com:8080" -> null
- node = "/Common/dgamba-web-0.example.com:8080" -> null
- pool = "/Common/dgamba_test_80" -> null
}
Plan: 0 to add, 0 to change, 1 to destroy.
Conclusion
As it stands, moving resources over to the new Terraform for_each syntax is very time consuming and requires state manipulation.
On the other hand, the state manipulation only needs to be done once, and we will never have to do any additional state manipulation for that common scenario of replacing nodes.