How Terraform Helps You Deploy Network Infrastructure in Azure
Every network engineer who has manually clicked through the Azure portal to deploy a VNet, configure NSG rules, create route tables, and wire up a VPN Gateway knows the problem: it works once. The second time you need the same architecture — in a different region, for a different environment, for a customer who wants their own copy — you are starting from scratch, making slightly different choices each time, and producing infrastructure that drifts from your intended design the moment a colleague adds a subnet through the portal.
Terraform solves this problem at the root. As a declarative Infrastructure as Code tool, Terraform lets you describe your entire Azure network topology in HashiCorp Configuration Language (HCL) — VNets, subnets, NSGs, route tables, firewalls, peering connections, and VPN gateways — and deploy it identically every time with a single command. This article walks through exactly how Terraform integrates with Azure for network infrastructure, with real HCL examples for the constructs network engineers use every day.
1. Why Terraform for Azure Network Infrastructure
Azure provides its own native IaC tooling in ARM templates and Bicep. Terraform's advantage for network engineers is not syntax — it is state management and multi-cloud portability. Terraform maintains a state file that maps every resource it manages to its live configuration in Azure. When you re-run Terraform after making a change to your HCL, it computes a diff between the desired state (HCL) and the current state (Azure), and applies only the delta. This means adding a new subnet to an existing VNet does not redeploy the entire VNet — Terraform surgically adds the subnet and leaves everything else untouched.
For network engineers managing multi-cloud or hybrid environments, the same Terraform workflow used for Azure VNets can be used for AWS VPCs, GCP VPCs, and on-premises Cisco infrastructure (via the Cisco IOS XE Terraform provider) — giving a consistent operational model across the entire estate.
| Capability | Azure Portal | Terraform (HCL) |
|---|---|---|
| Repeatability | Manual re-click every deployment | Identical every run — version-controlled |
| Drift Detection | No — manual audit required | terraform plan shows live drift instantly |
| Multi-environment | Separate manual effort per env | Workspaces and variables — one codebase |
| Dependency Management | Manual ordering of resource creation | Implicit dependency graph — automatic ordering |
| Code Review | Not possible — GUI actions | Pull request review of every infrastructure change |
2. The AzureRM Provider — Connecting Terraform to Azure
The AzureRM provider is the Terraform plugin that translates HCL resource declarations into Azure REST API calls. Every Azure networking resource — azurerm_virtual_network, azurerm_network_security_group, azurerm_firewall — is exposed as a Terraform resource type by this provider. Authentication to Azure is handled via a Service Principal with Contributor rights on the target subscription, with credentials passed through environment variables or a managed identity in CI/CD pipelines.
provider.tf — AzureRM Provider Configuration
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.90"
    }
  }

  # Remote state in an Azure Storage Account
  backend "azurerm" {
    resource_group_name  = "rg-terraform-state"
    storage_account_name = "sttfstateproduction"
    container_name       = "tfstate"
    key                  = "network/hub/terraform.tfstate"
  }
}

provider "azurerm" {
  features {}
  subscription_id = var.subscription_id
  # Auth via service principal environment variables:
  # ARM_CLIENT_ID, ARM_CLIENT_SECRET, ARM_TENANT_ID
}
ⓘ Always store Terraform state remotely — Azure Storage Account with state locking via Azure Blob leases prevents concurrent state corruption in team environments.
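Before running terraform init, the service principal credentials can be supplied as environment variables, which keeps secrets out of the HCL and out of version control. The GUIDs below are placeholders, not real credentials:

```shell
# Service principal credentials for the AzureRM provider.
# These values are placeholders; substitute your own SP details.
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="<service-principal-secret>"
export ARM_TENANT_ID="11111111-1111-1111-1111-111111111111"
export ARM_SUBSCRIPTION_ID="22222222-2222-2222-2222-222222222222"
```

In CI/CD pipelines, the same variable names are read automatically by the provider, so a managed identity or pipeline secret store can inject them without any change to the Terraform code.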
3. Deploying VNets, Subnets & NSGs with Terraform
The foundation of every Azure network deployment is the Virtual Network. With Terraform, the VNet, all subnets, their NSGs, and the NSG-to-subnet associations are defined as discrete resources with explicit dependencies — Terraform resolves the creation order automatically based on resource references.
network.tf — VNet, Subnets, NSG & Association
resource "azurerm_virtual_network" "hub" {
  name                = "vnet-hub-uks-prod"
  address_space       = ["10.0.0.0/16"]
  location            = var.location
  resource_group_name = azurerm_resource_group.network.name
  tags                = local.common_tags
}

resource "azurerm_subnet" "gateway" {
  name                 = "GatewaySubnet"
  resource_group_name  = azurerm_resource_group.network.name
  virtual_network_name = azurerm_virtual_network.hub.name
  address_prefixes     = ["10.0.1.0/27"]
}

resource "azurerm_subnet" "firewall" {
  name                 = "AzureFirewallSubnet"
  resource_group_name  = azurerm_resource_group.network.name
  virtual_network_name = azurerm_virtual_network.hub.name
  address_prefixes     = ["10.0.2.0/26"]
}

# Workload subnet (target of the NSG association below)
resource "azurerm_subnet" "workload" {
  name                 = "snet-workload-prod"
  resource_group_name  = azurerm_resource_group.network.name
  virtual_network_name = azurerm_virtual_network.hub.name
  address_prefixes     = ["10.0.3.0/24"]
}

resource "azurerm_network_security_group" "workload" {
  name                = "nsg-workload-prod"
  location            = var.location
  resource_group_name = azurerm_resource_group.network.name

  security_rule {
    name                       = "allow-https-inbound"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "443"
    source_address_prefix      = "10.0.0.0/8"
    destination_address_prefix = "*"
  }

  security_rule {
    name                       = "deny-all-inbound"
    priority                   = 4096
    direction                  = "Inbound"
    access                     = "Deny"
    protocol                   = "*"
    source_port_range          = "*"
    destination_port_range     = "*"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}

# Associate the NSG with the workload subnet
resource "azurerm_subnet_network_security_group_association" "workload" {
  subnet_id                 = azurerm_subnet.workload.id
  network_security_group_id = azurerm_network_security_group.workload.id
}
ⓘ NSG rules defined inline within the resource block are managed entirely by Terraform. Use separate azurerm_network_security_rule resources when rules need independent lifecycle management.
⚠ Network Engineer Note: GatewaySubnet and AzureFirewallSubnet are reserved names in Azure — Terraform must use these exact names or resource creation will fail. NSGs cannot be associated with GatewaySubnet or AzureFirewallSubnet; the plan may look clean, but Azure will reject the association at apply time.
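As a sketch of the separate-resource pattern mentioned in the note above, a standalone azurerm_network_security_rule attaches to an existing NSG by name. The rule name, port, and management CIDR here are illustrative:

```hcl
# Standalone rule, managed independently of the NSG resource.
# Do not mix inline security_rule blocks and standalone rule
# resources on the same NSG, or Terraform will repeatedly try
# to reconcile two competing definitions of the rule set.
resource "azurerm_network_security_rule" "allow_ssh_mgmt" {
  name                        = "allow-ssh-from-mgmt" # illustrative
  priority                    = 110
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "22"
  source_address_prefix       = "10.0.250.0/24" # hypothetical mgmt subnet
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.network.name
  network_security_group_name = azurerm_network_security_group.workload.name
}
```

This pattern suits teams where different pipelines own different rules, since each rule has its own lifecycle and can be added or destroyed without touching the NSG itself.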
4. Route Tables, User Defined Routes & Azure Firewall
Forcing spoke-VNet traffic through Azure Firewall requires two Terraform resources working together: the azurerm_route_table with a default route pointing to the firewall's private IP, and an azurerm_subnet_route_table_association applying it to the target subnet. The Azure Firewall private IP is referenced directly from the azurerm_firewall resource — no hardcoded IPs needed.
routing.tf — Azure Firewall, Route Table & UDR
# Azure Firewall with zone-redundant public IP
resource "azurerm_public_ip" "firewall" {
  name                = "pip-fw-hub-prod"
  location            = var.location
  resource_group_name = azurerm_resource_group.network.name
  allocation_method   = "Static"
  sku                 = "Standard"
  zones               = ["1", "2", "3"]
}

resource "azurerm_firewall" "hub" {
  name                = "fw-hub-uks-prod"
  location            = var.location
  resource_group_name = azurerm_resource_group.network.name
  sku_name            = "AZFW_VNet"
  sku_tier            = "Premium"
  zones               = ["1", "2", "3"]

  ip_configuration {
    name                 = "fw-ipconfig"
    subnet_id            = azurerm_subnet.firewall.id
    public_ip_address_id = azurerm_public_ip.firewall.id
  }
}

# UDR: force all spoke traffic through Azure Firewall
resource "azurerm_route_table" "spoke_default" {
  name                          = "rt-spoke-to-firewall"
  location                      = var.location
  resource_group_name           = azurerm_resource_group.network.name
  disable_bgp_route_propagation = true

  route {
    name                   = "default-to-firewall"
    address_prefix         = "0.0.0.0/0"
    next_hop_type          = "VirtualAppliance"
    next_hop_in_ip_address = azurerm_firewall.hub.ip_configuration[0].private_ip_address
  }
}

# Assumes a workload subnet is defined alongside its spoke VNet
resource "azurerm_subnet_route_table_association" "spoke_workload" {
  subnet_id      = azurerm_subnet.spoke_workload.id
  route_table_id = azurerm_route_table.spoke_default.id
}
ⓘ disable_bgp_route_propagation = true prevents ExpressRoute/VPN gateway routes from overriding the UDR — critical when the firewall must inspect all traffic including hybrid connectivity paths.
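On the Premium tier shown above, firewall rules are typically managed through a firewall policy rather than classic rule blocks. A minimal sketch, assuming the policy is attached to the firewall via its firewall_policy_id argument; resource names, priorities, and prefixes are illustrative:

```hcl
resource "azurerm_firewall_policy" "hub" {
  name                = "fwp-hub-prod" # illustrative name
  location            = var.location
  resource_group_name = azurerm_resource_group.network.name
  sku                 = "Premium" # Must match the firewall's sku_tier
}

resource "azurerm_firewall_policy_rule_collection_group" "network" {
  name               = "rcg-network-rules"
  firewall_policy_id = azurerm_firewall_policy.hub.id
  priority           = 200

  network_rule_collection {
    name     = "allow-spoke-to-onprem"
    priority = 100
    action   = "Allow"

    rule {
      name                  = "spoke-to-onprem-any"
      protocols             = ["TCP", "UDP"]
      source_addresses      = ["10.1.0.0/24"]   # spoke CIDR
      destination_addresses = ["10.100.0.0/16"] # on-prem CIDR
      destination_ports     = ["*"]
    }
  }
}
```

Keeping rules in a policy lets multiple firewalls share one rule base, and rule collection groups can live in separate Terraform files owned by different teams.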
5. Hub-Spoke Topology & VNet Peering with Terraform
Hub-Spoke is the standard Azure enterprise network topology. Terraform models it precisely — one hub VNet and multiple spoke VNets, each connected via bidirectional peering. The peering must be created in both directions: hub-to-spoke and spoke-to-hub. Terraform's resource references ensure the correct VNet IDs are used without hardcoding.
peering.tf — Bidirectional Hub-Spoke VNet Peering
# Spoke VNet (repeat pattern for each spoke)
resource "azurerm_virtual_network" "spoke_app" {
  name                = "vnet-spoke-app-uks-prod"
  address_space       = ["10.1.0.0/24"]
  location            = var.location
  resource_group_name = azurerm_resource_group.network.name
}

# Workload subnet inside the spoke (target of the route table association)
resource "azurerm_subnet" "spoke_workload" {
  name                 = "snet-workload"
  resource_group_name  = azurerm_resource_group.network.name
  virtual_network_name = azurerm_virtual_network.spoke_app.name
  address_prefixes     = ["10.1.0.0/25"]
}

# Hub → Spoke peering
resource "azurerm_virtual_network_peering" "hub_to_spoke_app" {
  name                         = "peer-hub-to-spoke-app"
  resource_group_name          = azurerm_resource_group.network.name
  virtual_network_name         = azurerm_virtual_network.hub.name
  remote_virtual_network_id    = azurerm_virtual_network.spoke_app.id
  allow_gateway_transit        = true # Hub shares its VPN/ER gateway
  allow_forwarded_traffic      = true
  allow_virtual_network_access = true
}

# Spoke → Hub peering
resource "azurerm_virtual_network_peering" "spoke_app_to_hub" {
  name                         = "peer-spoke-app-to-hub"
  resource_group_name          = azurerm_resource_group.network.name
  virtual_network_name         = azurerm_virtual_network.spoke_app.name
  remote_virtual_network_id    = azurerm_virtual_network.hub.id
  use_remote_gateways          = true # Use hub gateway for hybrid connectivity
  allow_forwarded_traffic      = true
  allow_virtual_network_access = true
}
ⓘ allow_gateway_transit = true on the hub side and use_remote_gateways = true on the spoke side enables spokes to use the hub's ExpressRoute or VPN gateway for on-premises connectivity.
✔ Terraform Modules Pattern: In production environments, the Hub-Spoke pattern is best implemented as a Terraform module — one spoke_vnet module that accepts the hub VNet ID, spoke CIDR, and peering flags as input variables. Adding a new spoke becomes a single module call, and the bidirectional peering and route table association are created automatically.
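A module call for that pattern might look like the following sketch; the module path and variable names are hypothetical, not a published module:

```hcl
module "spoke_app" {
  source = "./modules/spoke_vnet" # hypothetical local module

  spoke_name          = "app"
  spoke_cidr          = "10.1.0.0/24"
  resource_group_name = azurerm_resource_group.network.name
  hub_vnet_id         = azurerm_virtual_network.hub.id
  hub_vnet_name       = azurerm_virtual_network.hub.name
  firewall_private_ip = azurerm_firewall.hub.ip_configuration[0].private_ip_address
  use_remote_gateways = true
}
```

Inside the module, the VNet, subnets, both peering directions, and the route table association are created from these inputs, so onboarding a new spoke is one block of code and one pull request.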
6. VPN Gateway & ExpressRoute with Terraform
Deploying a zone-redundant Virtual Network Gateway via Terraform follows the same declarative pattern — define the gateway, its public IP, and the local network gateway representing the on-premises endpoint. Terraform manages the SKU, BGP ASN, and connection resource in a single terraform apply.
vpn_gateway.tf — Zone-Redundant VPN Gateway with BGP
# Standard static public IP (required for the zone-redundant AZ gateway SKUs)
resource "azurerm_public_ip" "vpn_gw" {
  name                = "pip-vgw-hub-prod"
  location            = var.location
  resource_group_name = azurerm_resource_group.network.name
  allocation_method   = "Static"
  sku                 = "Standard"
  zones               = ["1", "2", "3"]
}

resource "azurerm_virtual_network_gateway" "hub_vpn" {
  name                = "vgw-hub-uks-prod"
  location            = var.location
  resource_group_name = azurerm_resource_group.network.name
  type                = "Vpn"
  vpn_type            = "RouteBased"
  sku                 = "VpnGw2AZ" # Zone-redundant SKU
  generation          = "Generation2"
  enable_bgp          = true
  active_active       = false

  bgp_settings {
    asn = 65001
  }

  ip_configuration {
    name                          = "vgw-ipconfig"
    public_ip_address_id          = azurerm_public_ip.vpn_gw.id
    private_ip_address_allocation = "Dynamic"
    subnet_id                     = azurerm_subnet.gateway.id
  }
}

# On-premises site representation
resource "azurerm_local_network_gateway" "onprem_dc" {
  name                = "lgw-onprem-dc-lon"
  location            = var.location
  resource_group_name = azurerm_resource_group.network.name
  gateway_address     = "203.0.113.10"    # On-prem public IP
  address_space       = ["10.100.0.0/16"] # On-prem subnets

  bgp_settings {
    asn                 = 65000
    bgp_peering_address = "169.254.21.1"
  }
}
ⓘ VPN Gateway deployment takes 25–45 minutes in Azure — Terraform waits for the resource to become available before proceeding with dependent resources like the VPN connection. No manual polling required.
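The connection resource mentioned above ties the gateway to the local network gateway. A sketch of the site-to-site IPsec connection, with the pre-shared key supplied as a sensitive variable rather than hardcoded (the connection name is illustrative):

```hcl
resource "azurerm_virtual_network_gateway_connection" "onprem_dc" {
  name                       = "cn-onprem-dc-lon" # illustrative name
  location                   = var.location
  resource_group_name        = azurerm_resource_group.network.name
  type                       = "IPsec"
  virtual_network_gateway_id = azurerm_virtual_network_gateway.hub_vpn.id
  local_network_gateway_id   = azurerm_local_network_gateway.onprem_dc.id
  enable_bgp                 = true
  shared_key                 = var.vpn_shared_key # sensitive variable
}
```

Because Terraform sees the references to both gateways, it sequences creation automatically: the connection is only attempted once the VPN gateway has finished provisioning.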
7. The Terraform Network Deployment Workflow
The standard Terraform workflow for production network deployments follows four commands, each with a specific safety purpose: terraform init downloads providers and configures the remote state backend; terraform validate catches syntax and schema errors before any API call is made; terraform plan previews the exact change set against live Azure state; and terraform apply executes only the reviewed changes.
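Sketched as a pipeline-friendly command sequence, using a saved plan so that what is applied is exactly what was reviewed (the tfplan artifact name is illustrative):

```shell
terraform init             # Download providers, configure remote state backend
terraform validate         # Catch syntax/schema errors before touching Azure
terraform plan -out=tfplan # Preview and save the exact diff vs. live state
terraform apply tfplan     # Apply only the reviewed, saved plan
```

In CI/CD, the plan artifact is typically produced in one pipeline stage, attached to the pull request for review, and applied in a separate gated stage.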
Critical for Network Engineers: A Terraform plan showing forces replacement on an azurerm_virtual_network or azurerm_virtual_network_gateway means the resource will be destroyed and recreated — causing a complete network outage for everything connected to it. Address space changes and gateway SKU downgrades are common triggers. Always review plans with a network architect before applying in production.
Key Terraform Azure Networking Resources — Quick Reference

| Resource | Purpose |
|---|---|
| azurerm_virtual_network | VNet with address space and DNS servers |
| azurerm_subnet | Subnet within a VNet with service endpoints |
| azurerm_network_security_group | NSG with inline or separate security rules |
| azurerm_route_table | UDR table with routes and BGP propagation control |
| azurerm_firewall | Azure Firewall Standard or Premium with SKU and zones |
| azurerm_virtual_network_peering | Bidirectional peering — must create both directions |
| azurerm_virtual_network_gateway | VPN or ExpressRoute gateway with BGP settings |
| azurerm_virtual_hub | Azure Virtual WAN hub for large-scale connectivity |
Terraform as the Network Engineer's Control Plane
Terraform transforms Azure network deployment from a series of manual portal actions into a version-controlled, peer-reviewed, repeatable engineering process. The HCL resource model maps directly to the Azure network constructs network engineers already know — VNets, subnets, NSGs, route tables, firewalls, and gateways — so the learning curve is about workflow and tooling, not networking concepts.
Start with a single VNet and NSG. Add route tables and firewall. Grow into the full Hub-Spoke module pattern. The investment in IaC compounds over time — every new environment, every disaster recovery test, every configuration audit becomes a terraform plan and a pull request rather than a week of manual work.
AzureRM provider resource arguments and default behaviours change across provider versions. Always pin provider versions in production and review the Terraform AzureRM changelog before upgrades.