Force Terraform Deployment Despite No Changes

This is the second time within the last week that I came across this issue. There are a couple of scenarios where we genuinely want to re-deploy certain cloud infrastructure components, irrespective of whether they have changed or not.

API Gateway deployment

I recently decided to manage AWS exclusively through Terraform, and that includes developing APIs in AWS API Gateway. After creating API routes and methods, we have to explicitly deploy them to a certain stage.

My Terraform code worked well: it created the API, resources, methods, and the integration with the Lambda function, and configured the appropriate integration response and method response. It even deployed the API to the appropriate stage. But all of this was fine only during the initial creation.

However, subsequent modifications to the same API Gateway from the code didn't seem to take effect. This is because the changes were made to other API Gateway resources and not to the aws_api_gateway_deployment resource itself.

Terraform maintains its own state, independent of any VCS or SCM, and tracks changes in its own way. From the state's perspective, there is no reason to trigger the deployment resource when its own configuration hasn't changed. But there is no point in updating the API if it is never going to be deployed, and at the same time there is no need to make unnecessary modifications to the deployment configuration just to force it.

The internet suggested a couple of approaches to tackle this. One way is to add the triggers attribute and map it to an attribute of a resource associated with this API, one that changes on almost every modification to the API Gateway. A change in that value tells Terraform to recreate the aws_api_gateway_deployment resource, which then actually deploys the modified API to the target stage.
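As a rough sketch of that pattern (the resource names and the body-based API definition here are assumptions, not from my actual setup), the deployment can be tied to a hash of the API definition:

```hcl
# Hypothetical REST API defined from an OpenAPI body; any attribute that
# changes along with the API's shape would work as the trigger source.
resource "aws_api_gateway_deployment" "example" {
  rest_api_id = aws_api_gateway_rest_api.example.id

  # Recreate the deployment whenever the API definition changes.
  triggers = {
    redeployment = sha1(jsonencode(aws_api_gateway_rest_api.example.body))
  }

  # Create the new deployment before destroying the old one,
  # so the stage is never left without an active deployment.
  lifecycle {
    create_before_destroy = true
  }
}
```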

Another way is to map the triggers attribute to a timestamp, which ensures a redeployment on every apply, even if nothing about the API itself was changed but something unrelated was.
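A minimal sketch of that variant (same assumed resource names as above):

```hcl
resource "aws_api_gateway_deployment" "example" {
  rest_api_id = aws_api_gateway_rest_api.example.id

  # timestamp() yields a new value on every run, so the deployment
  # is recreated on every apply regardless of what actually changed.
  triggers = {
    redeployment = timestamp()
  }

  lifecycle {
    create_before_destroy = true
  }
}
```

The obvious trade-off is that this redeploys on every single apply, needed or not.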

The third way is to mark this resource as tainted before running terraform apply, and then let Terraform, by virtue of the taint, redeploy the API deployment resource.

S3 file sync

Okay, so there is no straightforward way to sync local files to a target S3 bucket. Terraform's S3 object resource expects objects to be uploaded one by one; a feature to upload a complete directory along with its sub-directories does not exist yet.

So, I ended up running 'aws s3 sync' from a local-exec provisioner, attached to a null_resource in my Terraform configuration. This way it does not affect the configuration of the S3 bucket or any other resource, and it still does the job of syncing the files to the target bucket. At least, that is what I hoped for.
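A minimal sketch of this setup (the local directory path and the bucket resource name are assumptions for illustration):

```hcl
resource "null_resource" "s3_sync" {
  # Mirror a local directory into the target bucket using the AWS CLI.
  # Assumes the aws CLI is installed and credentialed on the machine
  # running terraform apply.
  provisioner "local-exec" {
    command = "aws s3 sync ${path.module}/files s3://${aws_s3_bucket.target.bucket}"
  }
}
```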

However, when the files in the local source directory were modified - added, edited, or deleted - that was not enough for Terraform to automatically trigger the sync with the target S3 bucket. Again, the same sweet misunderstanding with the state file.

So this time around, I followed the path of tainting a resource. A null_resource does not actually create any cloud resource; it mainly exists to run provisioners like local-exec. So tainting it - which is generally not a recommended approach - doesn't seem very harmful in the case of a null_resource.

And it works fine. Yes, it is an additional step before apply, and if there are multiple similar resources or triggers, how many of them are we going to taint manually before each apply? Well, my intention here is to automate this process using some CI/CD tool, so from that point of view this should be okay, as the runners would take care of it.