Christoffer Windahl Madsen

How to use the 'Terraform test' Command - For use in Unit and Integration Testing

Updated: Mar 17


Part 1 - Introduction

Hello all and welcome to the blog! First off, I would like to say I am very, very happy to finally be back with a new post. My last post was in July 2023, which is around 7 months ago... Safe to say a lot of things have happened since then - I joined a new company and have also earned a whole host of certifications, including the official HashiCorp Terraform Associate. If you're interested in reading more about me and what I have achieved, please visit my about page. => https://www.codeterraform.com/about


Today we will walk through the concepts behind the new "Terraform test" feature, but we will also spend time looking at where we came from, as testing one's Terraform code is nothing new. This topic is huge, and with the blog post below we are only scratching the surface of what's possible. I am excited to delve deeper into the subject of testing in more blog posts going forward. I hope you will enjoy today's post, and I encourage you to sign up to the site, which will notify you of new blog posts and allow you to comment. You can sign up here, and it's of course free:

How to create a user for codeterraform.com

 

Part 2 - How has Terraform testing been done before the new "Test" Release?

Now, with that out of the way, let's get straight into today's topic: Terraform test, a functionality produced by HashiCorp as part of Terraform version 1.6.0. It's no secret that for all of us who have worked extensively with Terraform and have produced modules or reusable code, testing has always been required to ensure the code is stable and ready for use. Before this release, third-party tools were already available, such as "Terratest" and "Terragrunt." And in general, Terraform itself has always included enough functionality to verify that everything works as intended because, in the end, testing is about running a given piece of code and recording the results.


I myself have never used any third-party tools for testing. Instead, I have always simply used my own modules in scripts where specific scenarios were tested. If the results from these scenarios were deemed satisfactory, the "tests" were considered successful, and the module was released. However, the approach of manually testing Terraform code has its drawbacks:


  1. Running any Terraform code "manually" for testing purposes will be perceived by Terraform just like any other "run," and a state file will be created and stored depending on the "backend configuration," whether it's local disk, Azure storage, AWS S3 bucket, or any other backend. This state file will need to be managed, even after the tests are complete.

  2. The Terraform deployment flow, whether it's "Plan," "Apply," or "Destroy," will always attempt to complete as much of the deployment as possible. Even if errors occur within a module, if some parts of the deployment can finish, Terraform will proceed. This behavior is generally not favorable in a testing scenario because errors caught during runtime are typically considered a "test failure."

  3. There are situations within testing where specific errors are expected, but these cannot be directly controlled by using the "old" and manual method of simply deploying using a module and observing the result. This will make more sense as we delve further into the topic of "Terraform test."

 

Part 3 - How does this new "Test" Functionality make testing easier, more convenient and even more standardized?


With the above description of how Terraform testing used to be, let's begin to dive into the technology update, starting by concretely defining what this new update means:

  1. The new update introduces a specific "Terraform core" command called "test," which differs greatly from the normal "Apply," "Plan," and "Destroy" commands. With this new command, Terraform will execute a given configuration using a combination of a "Terraform template script," which is just like a normal script utilizing either direct providers or modules, along with a new file type with the extension "tftest.hcl." This file defines the exact unit or integration tests to be performed on said "Terraform template script." (Further details will be described in Parts 5, 6 and 7.)

  2. During the test cycle, regardless of whether Terraform is asked to only plan or actually deploy test resources, NO state file will be stored ANYWHERE other than in the executing client's RAM. This is significant, as it completely removes the maintenance part of a given state file since it's not accessible to the user and will be automatically removed by Terraform at test completion.

  3. The "test" command also offers optional flags like "verbose" output, which will be really useful when such tests are run using a CI pipeline. The more information Terraform can provide, the more the user can be assisted, both in terms of a simple function of audit and especially when errors occur and a root cause must be found.

  4. The new "test" file with the extension "tftest.hcl" offers new "block" types with very specific constraints, making it very easy to get started. This way, we can easily design reusable test cases that completely match what we, as the owner of any piece of Terraform code, want to check for, ensuring that the code itself is stable, reliable, and ready for release.

    1. A vital part of each defined test case is not only to decide exactly what output to validate but also any custom error message(s) to pass along with it. This is just another very powerful piece in the overall new solution.

  5. Advanced features, such as options within the test file allowing for specific errors to happen without flagging the overall test as failed (see the sketch just below this list), and many more, are available. This added functionality allows us to personalize tests extensively and even create very in-depth "integration" tests whenever these are appropriate.
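To give a first taste of that option, below is a minimal sketch of an "expect_failures" test. Everything in it is hypothetical (the variable name, its validation rule, and the bogus value passed in); it is only meant to show the shape of such a test:

run "unit_test_expect_invalid_vm_count" {
  command = plan

  variables {
    vm_count = -1 //Hypothetical input designed to violate a validation block on "var.vm_count"
  }

  expect_failures = [ //The test PASSES if this variable's validation fails, and fails if it does not
    var.vm_count,
  ]
}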

This list provides a brief breakdown of the overall solution. To delve deeper into the practical applications of the above, please continue reading, starting with Part 4, which describes how to get started.


 

Part 4 - Checking Terraform version & getting files ready

With all the above in mind, how can we use Terraform and the "test" command to address these issues and begin defining generic tests to ensure our code behaves as expected? Let's start by examining some code and the prerequisites required to begin:


Firstly, ensure that you have Terraform core version 1.6.0 or higher installed. To check the current version, run the following command in any operating system terminal:

terraform -version

As an example, at the time of writing, my version of Terraform is 1.7.3, as shown in the screenshot below:

Terraform version command example
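If the screenshot does not load, the output looks roughly like this (your version and platform will of course vary):

terraform -version
//OUTPUT OF COMMAND
Terraform v1.7.3
on windows_amd64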

If your version is below the threshold of 1.6.0, use the same package manager as you did when initially installing Terraform to update it. Remember, using the newest version is always recommended.


Now, to actually get started with our first use of this new Terraform feature, create a local folder anywhere on your local PC, or clone the source code by running the following Git command in a terminal:

cd <set the location to the wanted folder>
git clone https://github.com/ChristofferWin/codeterraform.git

If you simply cloned the repo, open it in VS Code and navigate to the folder => /terraform projects/modules/azurerm-vm-bundle/


Otherwise, create a new folder anywhere on the local PC, and inside said folder create the following files:

  1. <terraform script file name, will be used to define the "template" for all our tests to run against>.tf

  2. variables.tf <will be needed by the Terraform script file; simply best practice>

  3. <the name of the HCL file defining the actual unit / integration tests>.tftest.hcl

The files inside the folder should look something like the following (the names before the file extensions are up to you):


Example of terraform test root folder
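In case the screenshot does not render, the layout is simply three flat files in one folder. A sketch of it, where the script file name is my own choice and the test file name matches the output shown later in this post:

<your folder>/
├── vm_bundle_script.tf    <- The Terraform "template" script
├── variables.tf
└── unit_test.tftest.hcl   <- The unit / integration test definitions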

The reason for the "vm_bundle" part of the file names comes from the fact that this blog post will use examples from testing my Terraform module "Azurerm-vm-bundle." This module is a powerful piece of code capable of deploying Azure Virtual Machines at scale. To check out the actual module, start by heading over to the README and go from there. codeterraform/terraform projects/modules/azurerm-vm-bundle/readme.md at main · ChristofferWin/codeterraform (github.com)


 

Part 5 - Building the "Terraform test" Script template

In this section, we will build the actual "Terraform script" to be used as a template for all upcoming unit and/or integration tests. Although the file will resemble any other Terraform script file we have used to deploy infrastructure, it's important to note that the context for this specific file is very different. What we aim to create are specific resource block definitions, whether they are modules or direct provider calls, designed to target specific deployment scenarios that we, as developers, know need testing.


Say we have the "Azurerm-vm-bundle" module, and we want to create the script file for all upcoming tests. According to the module's README, it's capable of both creating the following resources and simply using already created ones by receiving resource IDs:


  1. Resource group

  2. Virtual Network

  3. Subnet for VM

  4. Key Vault


So with this in mind, let's create a module call where we specifically want to test the module's ability to ONLY use existing Azure resources, as defined above, to make sure that this SPECIFIC part of the module works as designed.


As with all other Terraform scripts, we first need to define all the boilerplate Terraform code:

(All the providers below are required by this specific module. In case you want to follow along but write your own tests against your own code, simply define only your required providers.)

terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
    random = {
      source = "hashicorp/random"
    }
    local = {
      source = "hashicorp/local"
    }
    null = {
      source = "hashicorp/null"
    }
  }
}

provider "azurerm" { //Will use command-line context, typically az cli login
  features {
  }
}

Since we use an "in-line" Azure context in the example above, we do not need to define anything other than "features" inside our "provider" block for "azurerm." If you're in any doubt about how to create such a context, visit the blog post "Weekly tips 1" and see under "Example 1" for more information.
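For completeness, such a command-line context is typically established with the Azure CLI before running any Terraform commands. A minimal sketch (the subscription ID is a placeholder):

az login
az account set --subscription "<SUB ID>"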


Let's now define a specific module block, where we seek to test the module "azurerm-vm-bundle" using resource IDs of already existing resources:


module "unit_test_1_using_existing_resources" {
  source = "github.com/ChristofferWin/codeterraform//terraform projects/modules/azurerm-vm-bundle?ref=main"
  rg_id = var.rg_id
  vnet_resource_id = var.vnet_resource_id
  subnet_resource_id = var.subnet_resource_id
  vm_windows_objects = var.vm_windows_objects
  vm_linux_objects = var.vm_linux_objects
}

Notice the name chosen for the module call itself. As we know, almost anything is allowed in these Terraform reference names, but we want to make the name as descriptive as possible. This is always best practice; we should never forget that we will most likely NOT be the only ones to maintain the code, and over time even we ourselves might forget what purpose a specific piece of code serves.


Furthermore, in the code snippet, notice that the source is set to a remote repository and NOT a local directory. This is yet another hugely important detail when testing modules, as we want the "scope" of all tests to be as if we were ANYONE who could potentially use the module. It also safeguards us against developers forgetting to push changes, a situation where local tests would NOT reflect the actual codebase available to everyone else.


Finally, notice the version reference for the module is set to "main" and not a specific version number. This is also important: the mindset we as developers should have is that we want to test the "newest" codebase against specific scenarios to make sure that ALL new changes are stable and THEN ready for a specific release number/snapshot of said codebase.
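To illustrate the difference, here is how the two kinds of source references compare (the version tag shown is purely hypothetical):

//What we test against - always the newest code on the default branch:
source = "github.com/ChristofferWin/codeterraform//terraform projects/modules/azurerm-vm-bundle?ref=main"

//What a consumer would typically pin to AFTER tests pass and a release is cut:
source = "github.com/ChristofferWin/codeterraform//terraform projects/modules/azurerm-vm-bundle?ref=1.0.0"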


As we always want to use a "variables.tf" file together with a script file, let's now define this. Remember that this example only focuses on specific resource IDs; in case you're following along with your own code, make sure to specify your other variables:


variable "rg_id" {
  type = string
  default = "/subscriptions/<SUB ID>/resourceGroups/test-rg"
}

variable "vnet_resource_id" {
  type = string
  default = "/subscriptions/<SUB ID>/resourcegroups/test-rg/providers/Microsoft.Network/virtualNetworks/test-vnet-1337"
}

variable "subnet_resource_id" {
  type = string
  default = "/subscriptions/<SUB ID>/resourceGroups/test-rg/providers/Microsoft.Network/virtualNetworks/test-vnet-1337/subnets/default"
}

variable "key_vault_resource_id" {
  type = string
  default = "/subscriptions/<SUB ID>/resourceGroups/test-rg/providers/Microsoft.KeyVault/vaults/test-kv-1337"
}

variable "vm_windows_objects" {
  type = any
  default = [
    {
        name = "test-win-vm01"
        os_name = "windows10"
    },
    {
        name = "test-win-vm02"
        os_name = "windows11"
    }
]
}

variable "vm_linux_objects" {
  type = any
  default = [
    {
        name = "test-linux-vm01"
        os_name = "DeBiAn10"
    },
    {
        name = "test-linux-vm02"
        os_name = "DeBiaN11"
    }
]
}

The specific Azure subscription ID is obfuscated for obvious security reasons. To get around this and still be able to follow along, simply create the resources yourself and add the resource IDs under each of the variables.
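If you want to spin up a matching set of resources yourself, a hedged sketch using the Azure CLI could look like the following (names and region are only examples, and the Key Vault name must be globally unique):

az group create --name test-rg --location westeurope
az network vnet create --resource-group test-rg --name test-vnet-1337 --subnet-name default
az keyvault create --resource-group test-rg --name test-kv-1337 --location westeurope

# Print the full resource ID of e.g. the vnet, to paste into variables.tf
az network vnet show --resource-group test-rg --name test-vnet-1337 --query id --output tsv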


Also, notice that we add variables for creating both Windows and Linux machines, which is important for the upcoming module tests. The reason lies in the fact that I, as the module owner, know exactly how the code is built up and how its dependencies work internally. Put simply, due to the module receiving resource IDs, no new resources will be created other than VMs and their underlying dependencies. When we want to test using existing resources, we want to make sure that the provided resource IDs are ACTUALLY the resources that are used by the module to connect the Virtual machines. This thought process will be 100% dependent on the underlying module/code block that tests are being designed and built for and should be well-planned and thought out to ensure that the actual test results catch as many potential bugs as possible.


As a final step after creating both the Terraform script file and "variables.tf," run a "terraform init" command in the command-line. This is necessary as the upcoming unit/integration tests also require it to function correctly.


terraform init
//OUTPUT OF COMMAND
Initializing the backend...
Initializing modules...
Downloading git::https://github.com/ChristofferWin/codeterraform.git?ref=main for unit_test_1_using_existing_resources...
- unit_test_1_using_existing_resources in .terraform\modules\unit_test_1_using_existing_resources\terraform projects\modules\azurerm-vm-bundle

Initializing provider plugins...
- Finding hashicorp/random versions matching ">= 3.5.1"...
- Finding hashicorp/local versions matching ">= 2.4.0"...
- Finding hashicorp/null versions matching ">= 3.2.1"...
- Finding hashicorp/azurerm versions matching ">= 3.76.0"...
- Installing hashicorp/azurerm v3.93.0...
- Installed hashicorp/azurerm v3.93.0 (signed by HashiCorp)
- Installing hashicorp/random v3.6.0...
- Installed hashicorp/random v3.6.0 (signed by HashiCorp)
- Installing hashicorp/local v2.4.1...
- Installed hashicorp/local v2.4.1 (signed by HashiCorp)
- Installing hashicorp/null v3.2.2...
- Installed hashicorp/null v3.2.2 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

And now, to help ourselves catch any kind of syntax errors in either our script or variables file, run a "terraform plan" command to make sure that we are good to go.

terraform plan
//OUTPUT OF COMMAND
Plan: 21 to add, 0 to change, 0 to destroy.

We do not need to actually look through the plan. We just need to make sure Terraform does not throw ANY exceptions, as an exception would indicate mistakes to be fixed in either the Terraform script file or the variables file. Such errors would "taint" our actual upcoming tests and should be fixed before continuing.

 

Part 6 - Building the "Terraform test" Unit test / integration tests from the script template


Now that we are done creating the template for our tests, we are ready to dive into the new file type from HashiCorp. Open the "<file name>.tftest.hcl".

Since this is the first blog post on the new technology, we will NOT dive into all possible configuration options available. Instead, we will focus on the most important segments in order to create our tests.


In this file, instead of defining "resource", "data", or "module" blocks, we define what's called "run" blocks. These should be seen as isolated unit and/or integration tests, where each block runs our Terraform script template. What's powerful is that within a "run" block, we define one to many "assert" blocks, each containing a condition plus a custom error message. This way, we can logically compare outputs in any way we want to ensure that the tested module/code block behaves as expected; if it doesn't, the custom error message is displayed. The idea of a condition/error message pair is nothing new in Terraform: we have always been able to do "input variable" validation on provided values, and this new test system uses the same idea. If the condition evaluates to true, the logical statement we defined to ensure everything is okay has been fulfilled; if false, the error message is triggered, and Terraform will report that SPECIFIC test as failed in the summary.


Furthermore, the "run" block can contain an option "command", which can be either "plan" or "apply". This is also extremely handy, as the specific Terraform operation best suited to a given test can vary, and picking the cheaper one can save time WITHOUT decreasing the value derived from the test. A great example is testing that data going INTO a module/piece of code correlates with, and is used correctly in, the output of it. A complete apply will not necessarily tell us more than a plan, but the plan can be over 10x faster to execute.


Let's write some code instead of all this theory! We start by defining our very first "run" block:

run "unit_test_1_check_rg_id {
  command = plan //We do not need to run apply to make a valid check to see if the resource group id provided matches the resource group used on the resouces

assert { //We only make 1 assert block for this use-case, as the title of the "run" Block defines a unit test only to look at the resource group id

  condition = <All return values that will be produced by the Terraform script template will be available to us in this condition> //But what differs from a normal return is that, the way the "test" File is scoped to the script file is that, whatever name the module has in the script file must be reused here>

error_message = <In case the condition = false, this will execute and it      	can contain dynamic return values directly from the terraform script file>

  } //Closing the "assert" block
} //Closing the "run" block  

Within the root scope of the "run" block, multiple other options can be added, such as "providers," "module," "variables," "expect_failures," and more. However, for this first introduction, we will ONLY focus on the "command" option and the "assert" block, which in most cases is all we need to define valid tests.
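As a quick taste of one of those options anyway: the "variables" block lets a single test override the inputs the template otherwise takes from "variables.tf". A minimal sketch, with a made-up override value and the same placeholder style as above:

run "unit_test_with_override" {
  command = plan

  variables { //This override ONLY applies within this specific run block
    rg_id = "/subscriptions/<SUB ID>/resourceGroups/some-other-rg"
  }

  assert {
    condition = <same style of condition as shown in the real example below>
    error_message = "<error message shown if the condition is false>"
  }
}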


Let's now add some real code to the configuration, actually check output values within the "condition", and also create an "error_message":

run "unit_test_1_check_rg_id" {
  command = plan

  assert {
    condition = length([for each in flatten(values(module.unit_test_1_using_existing_resources.nic_object).*.ip_configuration) : true if length(regexall(var.rg_id,"${each.subnet_id}")) == 1]) > 0
    error_message = "The resource group used for deployment via inputting a resource id does not match the resource group used for deployment. Resource group used is: ${split("/", values(module.unit_test_1_using_existing_resources.nic_object)[0].id)[4]}"
  }
}

Due to the format of code snippets, the above can be a little hard to read; therefore, let me break down the logic of the condition in steps:


  1. In the innermost part, we have a return statement. Note that the name "module.unit_test_1_using_existing_resources" matches exactly the reference name defined in the Terraform script file. Instead of returning everything that the module can provide, we only want a specific piece of it, namely the "nic_object" of each VM that has been planned. The rationale behind this is that the NIC will be one of the few resources actually created by Terraform, which we can then utilize to verify whether the "resource_group_id" provided to the module in the script file aligns with the output generated by the module. In simpler terms: when we define the input variable "resource_group_id," does this exact ID get correctly ingested by the module to determine where to place the resources?

  2. Because the module provides maps of objects for the "nic objects," we need to utilize the "values()" function to eliminate the unique keys and instead obtain a simple tuple that we can index using numbers.

  3. Immediately after the keys are stripped and the map is converted, we employ a splat expression ".*.ip_configuration" to iterate through each "nic_object." Instead of obtaining a list of the entire objects, we simply extract one attribute from each, namely the "ip_configuration," as it contains the necessary information to validate the resource_group_id.

  4. The splat expression is then enclosed within the "flatten()" function to remove the innermost tuple, resulting in a single-dimensional tuple consisting of one "ip_configuration" object from each of the planned "nic_objects."

  5. Next, we enclose this new tuple in a for-loop to iterate over each "ip_configuration," enabling access to a new attribute within the object called "subnet_id." This ID will be longer than just the resource group, but like any other Azure Resource ID, the string will contain the associated resource group where the resource is stored.

  6. Within the for-loop, we incorporate an if statement to filter what goes into the new tuple. The function "regexall()" returns a list of all substrings within the "subnet_id" that match a given pattern, here the "resource_group_id." We wrap that call in a "length()" function to check whether exactly one match was found. If so, the element "true" is appended to the tuple; if not, the element is simply skipped and nothing is appended.

  7. Finally, the entire expression is encapsulated within a "length()" function. This checks whether the tuple produced by the for-loop contains any elements at all. If no resource groups matched, the length would be 0, resulting in a false statement. However, if at least one resource group matches, we assume they all do. Hence, to satisfy this condition, we simply require the length to be greater than 0, expressed as "length(expression) > 0." The rationale behind this decision lies in the fact that, as the owner of the module, I understand what is feasible: I know that if just one match occurs, then all matches will happen, because the source code uses the exact same reference internally to map the resource group for deployment.


The condition defined in the "assert" block can become very complex, and we should always seek to make statements as simple as possible. If we were to create a more complex testing setup incorporating more of the available options, we could actually make the above far more readable; it would not entirely remove the complexity, but it would be "spread out." As part of this introduction post we will not expand the above with further configuration options, but it will come in later posts.
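As a hedged example of such a simplification, the same intent could arguably be expressed with the built-in "alltrue()" function. Note that this version is slightly stricter than my original condition, since it demands that EVERY planned NIC matches the resource group rather than at least one:

condition = alltrue([
  for ip in flatten(values(module.unit_test_1_using_existing_resources.nic_object)[*].ip_configuration) :
  length(regexall(var.rg_id, ip.subnet_id)) == 1 //One regex match per NIC = the resource group lines up
])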


For the sake of completeness, all the remaining unit tests defined in my example ".tftest.hcl" file can be seen below:


run "unit_test_2_check_vnet_and_sub_id" {
  command = plan

  assert {
    condition = length([for each in flatten(values(module.unit_test_1_using_existing_resources.nic_object).*.ip_configuration) : true if length(regexall(var.vnet_resource_id, replace(each.subnet_id, "resourceGroups", "resourcegroups"))) == 1  && each.subnet_id == var.subnet_resource_id]) > 0
    error_message = "Either the virtual network used for the deployment, which is: ${split("/subnets/", values(module.unit_test_1_using_existing_resources.nic_object)[0].ip_configuration[0].subnet_id)[0]} not match the vnet resouce id or the subnet used which is: ${values(module.unit_test_1_using_existing_resources.nic_object)[0].ip_configuration[0].subnet_id} not match"
  }
}

run "unit_test_3_check_vm_count" {
  command = plan

  assert {
    condition = length(flatten([module.unit_test_1_using_existing_resources.summary_object.linux_objects, module.unit_test_1_using_existing_resources.summary_object.windows_objects])) == length(flatten([var.vm_linux_objects, var.vm_windows_objects]))
    error_message = "The amount of VMs defined in variables: ${length(flatten([var.vm_linux_objects, var.vm_windows_objects]))} does not match the amount planned: ${length(flatten([module.unit_test_1_using_existing_resources.summary_object.linux_objects, module.unit_test_1_using_existing_resources.summary_object.windows_objects]))}"
  }
}

run "unit_test_4_check_vm_count_apply" {
  //Default command is apply
  assert {
    condition = length(flatten([module.unit_test_1_using_existing_resources.summary_object.linux_objects, module.unit_test_1_using_existing_resources.summary_object.windows_objects])) == length(flatten([var.vm_linux_objects, var.vm_windows_objects]))
    error_message = "The amount of VMs defined in variables: ${length(flatten([var.vm_linux_objects, var.vm_windows_objects]))} does not match the amount planned: ${length(flatten([module.unit_test_1_using_existing_resources.summary_object.linux_objects, module.unit_test_1_using_existing_resources.summary_object.windows_objects]))}"
  }
}

The exact context/logic defined in each of these tests is not important for this post, but there are a few things I would like you to consider. First off, notice the unified way of naming each 'run' block with a descriptive name and index: '<type of test><its number><what data to check>'. Also, notice that I have decided not to add 'plan' to the name of a 'run' block, as I see a plan operation as the simplest form of test. For an apply run, however, as in the last test, this IS part of the name. This makes anyone aware that this last test is more extensive and will take far more time to complete, but it's needed for the test context of ensuring that the VMs planned for are exactly the amount actually created.


 

Part 7 - Running the specific Terraform command and understanding the flow of the test file

It's time to see the actual Terraform "test" command in action, so let's run it on our configuration!


First, make sure to have completed Part 4 of this post so that all files are ready and the environment is initialized.


In the command-line, run "terraform test" and see the magic happen!

terraform test
//OUTPUT OF COMMAND
unit_test.tftest.hcl... in progress
  run "unit_test_1_check_rg_id"... pass
  run "unit_test_2_check_vnet_and_sub_id"... pass
  run "unit_test_3_check_vm_count"... pass
  run "unit_test_4_check_vm_count_apply"... pass
unit_test.tftest.hcl... tearing down
unit_test.tftest.hcl... pass

Success! 4 passed, 0 failed.

As can be seen, running the command with no options provides a simple output, separating each test and its result. Depending on whether you created your own tests or are simply reusing the provided code, the number of succeeded & failed tests will vary. Notice how the second-to-last message says "tearing down," which indicates that the "apply" command has been used and that the test was able to automatically tear down the deployed resources again. This "tear down" CAN fail, in which case Terraform will let us know that specific resources must be manually destroyed. I will showcase examples of this in later posts.


After simply running the command to make sure that all tests ran, I always recommend forcing errors on purpose, SIMPLY to make sure that all test "conditions" work as intended. For my specific use-case, testing on the "resource_ids," if we go into the actual Terraform "script file," remove the variable references, and replace them with bogus values, we should see all our tests fail.

The following has been manually done to the file to provoke failures:

module "unit_test_1_using_existing_resources" {
  source = "github.com/ChristofferWin/codeterraform//terraform projects/modules/azurerm-vm-bundle?ref=main"
  rg_id = "/subscriptions/<SUB ID>/resourceGroups/test-rg" //I simply changed a single digit in the sub id
  vnet_resource_id = "/subscriptions/<SUB ID>/resourcegroups/test-rg/providers/Microsoft.Network/virtualNetworks/test-vnet-1337" //I simply changed a single digit in the sub id
  subnet_resource_id = "/subscriptions/<SUB ID>/resourceGroups/test-rg/providers/Microsoft.Network/virtualNetworks/test-vnet-1337/subnets/default" //I simply changed a single digit in the sub id
vm_windows_objects = [ //Removing 1 of the windows objects ONLY
    {
        name = "test-win-vm01"
        os_name = "windows10"
    }
] 
vm_linux_objects = [
    {
        name = "test-linux-vm01"
        os_name = "DeBiAn10"
    },
    {
        name = "test-linux-vm02"
        os_name = "DeBiaN11"
    }
]
//REST OF THE CODE IS THE SAME AS EARLIER

It's important to replace the variable references under the module call instead of changing the default values of the input variables. This is to SIMULATE a situation where we know there will be a mismatch between the "inputs" the module consumes and the outputs it produces; the reason is that we will CONTINUE to use the already existing input variables in the <Terraform test file>.tftest.hcl.


By running the command "terraform test" again, we should see all the tests fail and the defined "error_messages" shown in the console:

unit_test.tftest.hcl... in progress
  run "unit_test_1_check_rg_id"... fail
╷
│ Error: Test assertion failed
│ 
│   on unit_test.tftest.hcl line 5, in run "unit_test_1_check_rg_id":
│    5:     condition = length([for each in flatten(values(module.unit_test_1_using_existing_resources.nic_object).*.ip_configuration) : true if length(regexall(var.rg_id,"${each.subnet_id}")) == 1]) > 0
│     ├────────────────
│     │ module.unit_test_1_using_existing_resources.nic_object is object with 3 attributes
│     │ var.rg_id is "/subscriptions/<SUB ID>/resourceGroups/test-rg"
│ 
╵
  run "unit_test_2_check_vnet_and_sub_id"... fail
╷
│ Error: Test assertion failed
│
│   on unit_test.tftest.hcl line 14, in run "unit_test_2_check_vnet_and_sub_id":
│   14:     condition = length([for each in flatten(values(module.unit_test_1_using_existing_resources.nic_object).*.ip_configuration) : true if length(regexall(var.vnet_resource_id, replace(each.subnet_id, "resourceGroups", "resourcegroups"))) == 1  && each.subnet_id == var.subnet_resource_id]) > 0
│     ├────────────────
│     │ module.unit_test_1_using_existing_resources.nic_object is object with 3 attributes
│     │ var.subnet_resource_id is "/subscriptions/<SUB ID> /resourceGroups/test-rg/providers/Microsoft.Network/virtualNetworks/test-vnet-1337/subnets/default"
│     │ var.vnet_resource_id is "/subscriptions/<SUB ID>/resourcegroups/test-rg/providers/Microsoft.Network/virtualNetworks/test-vnet-1337"
│
│ Either the virtual network used for the deployment, which is: /subscriptions/<SUB ID>/resourceGroups/test-rg/providers/Microsoft.Network/virtualNetworks/test-vnet-1337 does not match the vnet resource id, or the subnet used, which is:
│ /subscriptions/<SUB ID>/resourceGroups/test-rg/providers/Microsoft.Network/virtualNetworks/test-vnet-1337/subnets/default does not match
╵
  run "unit_test_3_check_vm_count"... fail
╷
│ Error: Test assertion failed
│
│   on unit_test.tftest.hcl line 23, in run "unit_test_3_check_vm_count":
│   23:     condition = length(flatten([module.unit_test_1_using_existing_resources.summary_object.linux_objects, module.unit_test_1_using_existing_resources.summary_object.windows_objects])) == length(flatten([var.vm_linux_objects, var.vm_windows_objects]))       
│     ├────────────────
│     │ module.unit_test_1_using_existing_resources.summary_object.linux_objects is tuple with 2 elements
│     │ module.unit_test_1_using_existing_resources.summary_object.windows_objects is tuple with 1 element
│     │ var.vm_linux_objects is tuple with 2 elements
│     │ var.vm_windows_objects is tuple with 2 elements
│
│ The amount of VMs defined in variables: 4 does not match the amount planned: 3
╵
  run "unit_test_4_check_vm_count_apply"... fail
╷
│ Error: creating Network Interface (Subscription: "<SUB ID>"
│ Resource Group Name: "test-rg"
│ Network Interface Name: "test-linux-vm02-nic"): performing CreateOrUpdate: unexpected status 403 with error: LinkedAuthorizationFailed: The client has permission to perform action 'Microsoft.Network/virtualNetworks/subnets/join/action' on scope '/subscriptions/<SUB ID>/resourceGroups/test-rg/providers/Microsoft.Network/networkInterfaces/test-linux-vm02-nic', however the linked subscription '<SUB ID>' was not found.

unit_test.tftest.hcl... tearing down
unit_test.tftest.hcl... fail

Failure! 0 passed, 4 failed.

Notice how ALL the tests have failed - but Terraform was STILL able to clean up by itself even in this scenario. Furthermore, notice how the first 3 tests all start with the message 'Test assertion failed,' which indicates that the tests themselves FINISHED without ANY errors occurring in the runtime of the module, but the return value did not satisfy each of the assert blocks; therefore, the 'error_message' was provided. Finally, the 4th test failure did not mention any assertion; it failed even before the condition could be evaluated. Both levels of errors are very valid and useful for troubleshooting, as these defined tests are reused every time changes are made to the codebase. Remember, we can even make these part of CI pipelines to automatically verify stability and behavior BEFORE allowing a merge to a 'production' branch or a new release.


BONUS:

If we use the command "terraform test" with the flag "-help", we can get more insight into what the command supports:

terraform test -help
Options:

  -filter=testfile      If specified, Terraform will only execute the test files
                        specified by this flag. You can use this option multiple
                        times to execute more than one test file.

  -json                 If specified, machine readable output will be printed in
                        JSON format

  -no-color             If specified, output won't contain any color.

  -test-directory=path  Set the Terraform test directory, defaults to "tests".

  -var 'foo=bar'        Set a value for one of the input variables in the root
                        module of the configuration. Use this option more than
                        once to set more than one variable.

  -var-file=filename    Load variable values from the given file, in addition
                        to the default files terraform.tfvars and *.auto.tfvars.
                        Use this option more than once to include more than one
                        variables file.

  -verbose              Print the plan or state for each test run block as it
                        executes.

We can use options like '-filter=<path to test file 1>' '-filter=<path to test file 2>' in case a root folder contains multiple '<Terraform test file name>.tftest.hcl' files AND we do NOT want to consume ALL the files but only specific ones. I see this as very useful, especially in automated scenarios where we define folders with many different test files for a whole subset of testing scenarios. Another useful flag is '-json', which makes the Terraform core engine print machine-readable JSON output that can be read directly by other software and used to enrich a pipeline or user with more detailed information about a given Terraform test. There is so much more we can dive into, and I will aim to do so in many more blog posts about 'Terraform test'.
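As a hedged example of combining these flags (the file names are purely illustrative): the first command below runs only the two named test files, and the second captures the machine-readable output for a pipeline to parse:

terraform test -filter=unit_test.tftest.hcl -filter=integration_test.tftest.hcl
terraform test -json > test_results.json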


 

Part 8 - Conclusion


As mentioned in sections 1 & 2, when we compare the "old way" and the "new way" of testing Terraform modules or code blocks, investing time upfront in designing, implementing, and maintaining standardized unit and/or integration tests proves invaluable. This approach consistently aids in development, quality assurance, and overall maintenance of any Terraform-related code. Moreover, we can enhance this process by documenting how to execute these tests and designing new ones going forward. By integrating these practices into DevOps principles, we instill a habit of rigorously testing our code in a standardized manner before merging to production or creating new Terraform releases, thereby ensuring reliability for all stakeholders.


That's all for today, folks! Thank you so much for reading along; I really, really appreciate it! Stay tuned as I will begin to upload posts way, way more frequently than only once every 6 months!


Cheers!


PS.

Want to learn more about Terraform? Click here -> terraform (codeterraform.com)

Want to learn more about other cool stuff like Automation or PowerShell? -> powershell (codeterraform.com) / automation (codeterraform.com)




