aka "how to develop the terraform 12+ stuff"
NOTE: the plugin-cache tip below doesn't work well with provider locking in TF14+; I recommend disabling the cache if you've enabled it.
Helpful tip: speed up provider downloads by using a shared plugin cache. Add the following to your ~/.bashrc:

export TF_PLUGIN_CACHE_DIR=~/.terraform.d/plugin-cache
[[ -d "$TF_PLUGIN_CACHE_DIR" ]] || mkdir -p "$TF_PLUGIN_CACHE_DIR"
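If you hit the TF14+ provider-locking conflict from the NOTE above, the quickest workaround is to drop the cache for the current shell only (assuming you set it via ~/.bashrc as shown):

# remove the variable for this shell; new shells still pick it up from ~/.bashrc
unset TF_PLUGIN_CACHE_DIR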
General process:
For this example, I was renaming 010-standard-vpc to 010-vpc-splunk in test/aws-us-gov/mdr-test-modelclient.
cd 010-standard-vpc/
# clear out cache to make our lives easier
rm -rf .terragrunt-cache
# validate that we're on latest code
terragrunt-local apply
# Get the `bucket` and `key` values
cat `find . -name 'backend.tf'`
# In this example:
# bucket = "afsxdr-terraform-state"
# key = "aws/test/aws-us-gov/mdr-test-modelclient/010-standard-vpc/terraform.tfstate"
aws --profile mdr-common-services-gov \
s3 mv \
s3://afsxdr-terraform-state/aws/test/aws-us-gov/mdr-test-modelclient/010-standard-vpc/terraform.tfstate \
s3://afsxdr-terraform-state/aws/test/aws-us-gov/mdr-test-modelclient/010-vpc-splunk/terraform.tfstate
# move and rename
cd ..
git mv 010-standard-vpc 010-vpc-splunk
cd 010-vpc-splunk
# Apply again. NOTE: the only changes should be to tags. Do not accept any other changes, or you will end up with extra resources.
rm -rf .terragrunt-cache
terragrunt-local apply
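Optional sanity check (a sketch reusing this example's bucket and key): confirm the state object really exists at the new key.

aws --profile mdr-common-services-gov \
    s3 ls \
    s3://afsxdr-terraform-state/aws/test/aws-us-gov/mdr-test-modelclient/010-vpc-splunk/terraform.tfstate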
If you get:
Error refreshing state: state data in S3 does not have the expected content.
you forgot to rename the directory you're working in (the state key mirrors the directory path, so terragrunt is still pointing at the old key).
The rest of these notes are assorted tips for the Terragrunt git flow when making changes.
Run rm -rf .terragrunt-cache to resolve "strange" errors.

To override instance termination protection when applying or destroying:
TF_VAR_instance_termination_protection=false terragrunt apply
TF_VAR_instance_termination_protection=false terragrunt destroy
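Equivalent, if you'd rather set the variable once for the session than prefix each command:

export TF_VAR_instance_termination_protection=false
terragrunt apply
terragrunt destroy
unset TF_VAR_instance_termination_protection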
colby-williams taught me: use cp -ar to copy directories so symlinks come across correctly (-a preserves them as links instead of regular files).
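For example (the destination directory name here is hypothetical), copying a module directory that contains the .tfswitch.toml symlink set up below:

cp -ar 010-vpc-splunk 011-vpc-splunk-copy
ls -l 011-vpc-splunk-copy/.tfswitch.toml   # still a symlink, not a regular file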
To link the shared .tfswitch.toml into a module directory:
ln -s ../../../../.tfswitch.toml .
# ls -larth should then show: .tfswitch.toml -> ../../../../.tfswitch.toml
When running terragrunt apply, I got the following:
Initializing the backend...
Error refreshing state: state data in S3 does not have the expected content.
This may be caused by unusually long delays in S3 processing a previous state
update. Please wait for a minute or two and try again. If this problem
persists, and neither S3 nor DynamoDB are experiencing an outage, you may need
to manually verify the remote state and update the Digest value stored in the
DynamoDB table to the following value: ec9c9183a070f5ad59b9abd524810c06
The remote state looks uncorrupted:
cd ~/xdr-terraform-live/prod/aws-us-gov/mdr-prod-c2/160-splunk-indexer-cluster
find .terragrunt-cache -name 'backend.tf'
# Use the filename found and view the contents
cat .terragrunt-cache/tC_aGEvkrKzsZjSw0YQum-A6YL8/Ipji28Trjy_fymLhd4EZgtAe8xg/base/splunk_servers/indexer_cluster/backend.tf
# Use the bucket and key to build the S3 path and download the state file:
aws --profile mdr-common-services-gov s3 cp \
    s3://afsxdr-terraform-state/aws/prod/aws-us-gov/mdr-prod-c2/160-splunk-indexer-cluster/terraform.tfstate .
less -iS terraform.tfstate
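As an extra check: the DynamoDB Digest is a 32-character hex value, so (assuming it is the MD5 of the state contents) you can compare the downloaded file's digest against the value in the error:

md5sum terraform.tfstate   # on macOS: md5 terraform.tfstate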
To fix: update the Digest value in the DynamoDB lock table from the old stored value (9cb9cbfdda...) to the one reported in the error (ec9c9183a070f5ad59b9abd524810c06), then run:
terragrunt refresh
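A sketch of that Digest update via the CLI. The lock table name ("terraform-locks") is an assumption -- use the dynamodb_table value from the backend config, and the profile is assumed to be the same one used for the state bucket. The LockID of the digest item is the bucket/key with a "-md5" suffix:

aws --profile mdr-common-services-gov dynamodb update-item \
    --table-name terraform-locks \
    --key '{"LockID": {"S": "afsxdr-terraform-state/aws/prod/aws-us-gov/mdr-prod-c2/160-splunk-indexer-cluster/terraform.tfstate-md5"}}' \
    --update-expression 'SET #d = :v' \
    --expression-attribute-names '{"#d": "Digest"}' \
    --expression-attribute-values '{":v": {"S": "ec9c9183a070f5ad59b9abd524810c06"}}'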
With TF 0.14, Terraform added a provider dependency lock file (.terraform.lock.hcl) to prevent inadvertent drift of provider versions. This requires some additional management.
Run terragrunt-providers (which is just a bash script that runs some cleanup and then runs terragrunt providers lock -platform=darwin_amd64 -platform=linux_amd64 -platform=windows_amd64 -platform=linux_arm64).
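A minimal sketch of what such a wrapper might look like; the real terragrunt-providers script may differ, and the cleanup step shown here is an assumption:

#!/usr/bin/env bash
set -euo pipefail
# assumed cleanup: start from a fresh terragrunt cache before generating the lock
rm -rf .terragrunt-cache
terragrunt providers lock \
    -platform=darwin_amd64 \
    -platform=linux_amd64 \
    -platform=windows_amd64 \
    -platform=linux_arm64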
You also need a .required_providers.tf in your terragrunt.hcl file for the module. It must include the providers from the root terragrunt.hcl that are used within your module. For an example, see xdr-terraform-live/common/aws-us-gov/afs-mdr-common-services-gov/085-codebuild-ecr-customer-portal/terragrunt.hcl.
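For reference, a sketch of roughly what that required_providers content looks like, whether written by hand or emitted by a generate block in terragrunt.hcl. The provider, source, and version constraint below are placeholders -- copy the real ones from the root terragrunt.hcl and the example module above:

cat > .required_providers.tf <<'EOF'
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}
EOF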
TF_PLUGIN_CACHE_DIR: you can try disabling the plugin cache if you have trouble getting hashes into the lock file (this is the interaction called out in the NOTE at the top).
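If the hash trouble does come from the cache, a one-off way to run the lock command without it (assuming the variable is set in ~/.bashrc per the tip at the top):

env -u TF_PLUGIN_CACHE_DIR terragrunt providers lock \
    -platform=darwin_amd64 -platform=linux_amd64 -platform=windows_amd64 -platform=linux_arm64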