Cisco Secure Firewalls: Designing for High Availability (HA) vs. Clustering | The Network DNA


Cisco Secure Firewall HA vs. Clustering: Complete Design Guide 2025

A complete technical guide to Active/Standby failover, horizontal clustering, geo-redundant design, VRF multi-tenancy and centralised management

By Route XP | Published: March 3, 2026 | Updated: March 4, 2026 | Cisco Secure Firewall, Clustering, Network Security, Cisco

Cisco Secure Firewall — Designing for High Availability HA vs. Clustering

Introduction: HA vs. Clustering — Which Do You Need?

Cisco Secure Firewall supports two distinct high availability models: traditional Active/Standby failover (HA) and full Active/Active clustering. Understanding the difference is fundamental to designing a network security architecture that meets both your performance and redundancy requirements.

In short: HA gives you redundancy; clustering gives you redundancy plus horizontal scale. The right choice depends on your throughput needs, connection table requirements, budget, and operational complexity tolerance. This guide covers both models in full technical depth, along with multi-tenancy design using Cisco Secure Firewall VRF Lite and Multi-Instance containerisation.

📌 Related Reading: For full hardware specs and performance numbers across all Cisco Secure Firewall models, see our companion guide: Cisco Secure Firewall Platforms: A Complete Deep Dive Guide.

Active/Standby High Availability (HA)

Active/Standby HA is the simplest form of firewall redundancy. One unit (the Active) handles all traffic. The other (the Standby) mirrors configuration and flow state but does not forward traffic. When the active unit fails or a manual switchover is triggered, the standby promotes itself to active with minimal disruption.

FTD inherits ASA's proven failover infrastructure, meaning organisations migrating from Cisco ASA to FTD retain the same HA concepts and behaviours they are already familiar with.

How Active/Standby Failover Works

The two units communicate over a dedicated Failover Link (which also carries state information) and optionally a separate Stateful Failover Link for high-volume state synchronisation. The active unit sends hello packets at regular intervals; if the standby does not receive them within the configured hold time, it declares a failure and takes over.

  • Both units must be identical hardware models running the same software version
  • The active IP and MAC addresses move to the newly active unit on switchover, so clients and neighbouring devices never notice
  • TCP sessions, UDP flows, NAT translations and VPN tunnels are all synchronised
  • Switchover typically completes in under 1 second for stateful failover
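The hello/hold-time mechanics above can be modelled numerically. A minimal sketch in Python, assuming the ASA/FTD convention that the hold time must be at least three times the poll interval; the worst-case detection window is one full hold time (real units also run interface tests before declaring failure, so this is an illustration, not the exact product behaviour):

```python
def failover_detection_window(poll_s: float, hold_s: float) -> float:
    """Worst-case seconds for the standby to declare the active unit failed.

    The standby expects a hello every `poll_s` seconds and declares failure
    if none arrives within `hold_s`. Worst case: the active unit dies right
    after sending its last hello, so detection takes up to one full hold time.
    """
    if hold_s < 3 * poll_s:
        raise ValueError("hold time must be at least 3x the poll interval")
    return hold_s

# Illustrative tuning: 1 s poll / 15 s hold gives a 15 s worst case;
# 200 ms poll / 800 ms hold brings worst-case detection under one second.
slow = failover_detection_window(1.0, 15.0)   # 15.0 s
fast = failover_detection_window(0.2, 0.8)    # 0.8 s
```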

HA Key Features and Requirements

  • Supports all NGFW/NGIPS interface modes (routed, transparent, inline, passive)
  • Interface health monitoring — configurable thresholds per interface
  • Snort instance health monitoring — failover triggered if fewer than 50% of Snort instances are healthy
  • Zero-downtime upgrades for most application types (one unit upgraded at a time)
  • Full stateful flow symmetry in both NGIPS and NGFW modes
  • Supported on all Cisco Secure Firewall platforms including 1010, 1100, 1200, 2100, 3100, 4100, 4200 and 9300
  • Active/Active HA also supported on ASA with multi-context mode

Active/Standby HA — At a Glance

| Attribute | Detail |
|---|---|
| Nodes | 2 (one active, one standby) |
| Traffic forwarding | Active unit only |
| State synchronisation | Full — connections, NAT, VPN, routing |
| Switchover time | Sub-second (stateful) to a few seconds |
| Performance scaling | None — standby adds no capacity |
| Management | Single logical device in FMC |
| Best for | Basic redundancy, lower throughput requirements, simpler operations |

Clustering: Horizontal Scaling Up to 16 Nodes

Clustering combines multiple Cisco Secure Firewall appliances into a single logical device that scales performance linearly with every node added. Unlike HA, all nodes are simultaneously active, with the cluster transparently managing flow ownership, direction, and backup roles. This means adding a node adds both capacity and redundancy at the same time.

FTD fully inherits ASA's proven clustering infrastructure. The cluster appears as a single device to FMC for management and policy deployment, while load balancing is handled by the upstream switching/routing infrastructure via EtherChannel (spanned mode) or ECMP routing (individual/routed mode).

Cluster Sizing: Throughput, CPS and Connections

Cluster performance is calculated using three de-rating multipliers that account for the overhead of inter-node coordination:

Cisco Secure Firewall Cluster Sizing Multipliers

| Metric | Multiplier | Reason |
|---|---|---|
| Throughput | ×0.8 (L2) or ×1.0 (L3 routing) | EtherChannel load-balancing overhead for L2; L3 ECMP can achieve full line rate |
| Connections per second (CPS) | ×0.5 | Additional tasks for flow creation, director election and state updates |
| Maximum concurrent connections | ×0.6 | Stub connection maintained on non-owner nodes for failover |

16-Node Cluster Theoretical Maximums (NGFW 1024B profile)

| Platform | Per-Node | 16-Node Throughput | 16-Node CPS | 16-Node Max Connections |
|---|---|---|---|---|
| 3140 | 45 Gbps / 300k CPS / 10M conn | 0.57 Tbps | 2.4M | 96M |
| 4245 | 145 Gbps / 800k CPS / 60M conn | 1.79 Tbps | 6.4M | 576M |

Example calculation for a 4-node 3140 cluster:

  • Throughput: 4 × 45 Gbps × 0.8 = 144 Gbps
  • CPS: 4 × 300k × 0.5 = 600k connections/sec
  • Max connections: 4 × 10M × 0.6 = 24M concurrent connections
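The de-rating arithmetic above generalises to any node count and platform. A small Python helper using the multipliers from the sizing table (per-node datasheet figures are inputs, not hard-coded):

```python
THROUGHPUT_X_L2 = 0.8  # spanned EtherChannel designs; L3/ECMP designs use 1.0
CPS_X = 0.5
CONN_X = 0.6

def cluster_capacity(nodes, gbps, cps, conns, l3_routed=False):
    """Apply the cluster de-rating multipliers to per-node datasheet figures."""
    tput_x = 1.0 if l3_routed else THROUGHPUT_X_L2
    return {
        "throughput_gbps": nodes * gbps * tput_x,
        "cps": nodes * cps * CPS_X,
        "max_conns": nodes * conns * CONN_X,
    }

# The 4-node 3140 example from the text (45 Gbps / 300k CPS / 10M conns per node):
cap = cluster_capacity(4, 45, 300_000, 10_000_000)
# -> 144.0 Gbps, 600k CPS, 24M concurrent connections
```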

Cluster Roles: Owner, Director, Forwarder, Backup

Every flow in a cluster is managed by four distinct per-connection roles:

  • Owner — the node that first received the flow; performs full inspection and maintains primary state
  • Director — maintains a lightweight hash entry to redirect packets that arrive at the wrong node back to the Owner
  • Forwarder — any node that receives packets for a flow it does not own; redirects via the Cluster Control Link (CCL)
  • Backup — holds a copy of the Owner's connection state for seamless failover if the Owner goes down
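The role mechanics hinge on every node being able to compute the Director for a flow independently, from nothing but the packet itself and the member list. A toy Python illustration (SHA-256 over the 5-tuple; the real cluster uses its own internal hash, and the unit names here are hypothetical):

```python
import hashlib

def director_for_flow(five_tuple, node_ids):
    """Deterministically map a flow's 5-tuple to a Director node.

    Because the mapping depends only on the tuple and the member list,
    a Forwarder receiving a mid-flow packet can compute the Director
    locally and query it (over the CCL) for the flow's Owner.
    """
    digest = hashlib.sha256(repr(five_tuple).encode()).digest()
    return sorted(node_ids)[int.from_bytes(digest[:8], "big") % len(node_ids)]

flow = ("10.1.1.10", 51515, "203.0.113.7", 443, "tcp")
members = ["unit-1", "unit-2", "unit-3", "unit-4"]
director = director_for_flow(flow, members)  # identical result on every node
```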

One node in the cluster also holds the Control Node (formerly Master) global role, responsible for cluster health, configuration sync and management plane operations.

Cluster Enhancements in FTD 7.6 / ASA 9.22

Individual Interface Mode (Routed Clustering)

FTD 7.6 and ASA 9.22 reintroduced Individual Interface Mode (also called routed clustering) for FTDv, 3100, and 4200 platforms. This was previously only available on legacy ASA hardware. In this mode:

  • Each cluster node operates as an independent routing instance with its own IP addresses
  • No spanned EtherChannel is required to upstream switches
  • Load balancing is handled by upstream routers via ECMP, UCMP, or PBR
  • Routed mode only (transparent mode is not supported in individual interface mode)
  • Feature supported with multi-context mode (ASA) but not yet with Multi-Instance clustering
Spanned Mode vs. Individual (Routed) Mode Cluster Comparison

| Attribute | Spanned EtherChannel Mode | Individual Interface Mode |
|---|---|---|
| Layer used for traffic | Layer 2 | Layer 3 |
| Data interface IPs | Single shared IP per EtherChannel | Individual IP per node (from pool) |
| Load balancing | EtherChannel hash (upstream switch) | ECMP/UCMP/PBR (upstream router) |
| Routing modes | Routed or Transparent | Routed only |
| Supported platforms | All clustering platforms | FTDv, 3100, 4200 (FTD 7.6+) |
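In individual mode the load-balancing decision moves to the upstream router. A Python sketch of hash-based next-hop selection with UCMP weights (equal weights reduce to plain ECMP; this is an illustration with hypothetical addresses, not any specific router's algorithm):

```python
import hashlib

def ucmp_next_hop(five_tuple, next_hops):
    """Pick a next hop for a flow, proportionally to UCMP weights.

    `next_hops` maps each cluster node's data-interface IP to an integer
    weight. Hashing the 5-tuple keeps all packets of one flow on one node,
    which minimises Forwarder redirection over the CCL.
    """
    slots = [ip for ip, w in sorted(next_hops.items()) for _ in range(w)]
    h = int.from_bytes(hashlib.sha256(repr(five_tuple).encode()).digest()[:8], "big")
    return slots[h % len(slots)]

# Hypothetical node addresses; the third node receives twice the flow share:
pool = {"198.51.100.1": 1, "198.51.100.2": 1, "198.51.100.3": 2}
hop = ucmp_next_hop(("10.0.0.5", 40000, "198.51.100.77", 443, "tcp"), pool)
```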

Scale-Out Encryption in Clustering (FTD 7.7)

FTD 7.7 introduced IPsec Cluster Offload — enabling Mobile Core Protection use cases for service providers:

  • IPsec fully accelerated — offloaded to dedicated cryptographic hardware on distributed cluster members
  • Distributed Control Plane for IKE and IPsec — IKE processing occurs on the node that owns the flow rather than being centralised on the Control Unit (previously a 9300-only feature)
  • Cluster Hardware Redirect — CCL traffic redirected using FPGA hardware without CPU involvement, reducing latency and increasing scale

Geo-Redundant Clustering Across Data Centres

Cisco Secure Firewall supports stretched clusters across two geographically separated data centres, providing both horizontal performance scaling and site-level redundancy from a single logical firewall device.

Design Requirements

  • Cluster Control Link (CCL) extended between sites at Layer 2 with <10ms RTT latency
  • Single spanned EtherChannel for data traffic extends across both sites
  • Underlying transport: ideally dark fibre; also tested with VPLS, VPWS, and EVPN
  • Local vPC/VSS pairs at each site for switch-level redundancy

Traffic Patterns

  • North-South insertion — uses LISP-based traffic localisation with Owner reassignment to keep inspection local to each site where possible
  • East-West insertion — supports first-hop redundancy with VM mobility; data VLANs are not extended for North-South insertion to avoid loops and MAC/IP conflicts
⚠️ Design Note: RTT between cluster nodes must remain below 20ms for the data EtherChannel and below 10ms for the CCL. Exceeding these values causes cluster instability and flow ownership oscillation.

HA vs. Clustering: Side-by-Side Comparison

Cisco Secure Firewall — HA vs. Clustering Feature Comparison

| Feature | Active/Standby HA | Clustering (up to 16 nodes) |
|---|---|---|
| Max nodes | 2 | 16 |
| Active forwarding nodes | 1 | All nodes (16) |
| Performance scaling | None | Linear with each node added |
| Redundancy | Yes | Yes (node loss = no traffic loss) |
| Zero-downtime upgrade | Yes (rolling) | Yes (rolling per node) |
| Management | Single device in FMC | Single logical device in FMC |
| Flow symmetry | Full (all flows on active unit) | Managed by Director/Owner roles |
| Layer 2 deployment | Yes | Yes (spanned mode) |
| Layer 3 / routed deployment | Yes | Yes (individual mode, FTD 7.6+) |
| Multi-chassis | No (same chassis only) | Yes (inter-chassis clustering) |
| Geo-redundancy | No | Yes (stretched cluster) |
| Complexity | Low | Medium–High |
| Best for | Basic redundancy, lower throughput | High scale, DC, SP, MSSP |

Multi-Tenancy: VRF Lite and Multi-Instance

Cisco Secure Firewall provides three complementary layers of multi-tenancy that can be combined for maximum tenant density and isolation on a single physical platform.

VRF Lite (FTD 6.6+)

VRF Lite allows different firewall interfaces to participate in separate IP routing domains with overlapping address space. Key characteristics:

  • Traffic forwarding between VRFs is possible using static routes with NAT
  • FMC applies a single security policy across all VRFs — connection events are enriched with the VRF ID for visibility
  • Can be combined with Multi-Instance for an additional isolation layer
  • Starting with FTD 7.7, FTDv supports up to 30 VRFs
VRF Scalability by Platform (FTD 7.7)

| Platform | Max VRFs | Platform | Max VRFs |
|---|---|---|---|
| 1010 / 1120 | 5 | 4112 | 60 |
| 1140 / 1150 | 10 | 4115 | 80 |
| 1210CE/CP | 5 | 4125 / 4145 | 100 |
| 1220CX | 10 | 4215 / 4225 / 4245 | 100 |
| 1230 / 1240 | 10 | 9300 SM-44/48/56 | 100 |
| 1250 | 15 | FTDv | 30 |
| 3105 | 10 | ISA 3000 | 10 |
| 3110 | 15 | 2110 | 10 |
| 3120 | 25 | 2120 | 20 |
| 3130 | 50 | 2130 | 30 |
| 3140 | 100 | 2140 | 40 |

Multi-Instance (Container-Based Tenant Isolation)

Multi-Instance allows a single physical appliance or chassis module to run multiple independent FTD instances using Docker container infrastructure. Each instance has its own:

  • Independent management connection to FMC (appears as a separate device)
  • Independent upgrade schedule — upgrade one instance without touching others
  • CPU, memory and interface resource allocation
  • Security policy, routing table and VPN configuration
Multi-Instance Maximum Instance Count by Platform

| Platform | Max Instances | Initial FTD Support | Management |
|---|---|---|---|
| 3110 | 3 | 7.4.1 | FMC |
| 3120 | 5 | 7.4.1 | FMC |
| 3130 | 7 | 7.4.1 | FMC |
| 3140 | 10 | 7.4.1 | FMC |
| 4115 | 7 | 6.4.0 / FXOS 2.6.1 | FMC & FXOS |
| 4125 | 10 | 6.4.0 / FXOS 2.6.1 | FMC & FXOS |
| 4145 | 14 | 6.4.0 / FXOS 2.6.1 | FMC & FXOS |
| 4215 | 10 | 7.6.0 | FMC |
| 4225 | 15 | 7.6.0 | FMC |
| 4245 | 34 | 7.6.0 | FMC |
| 9300 SM-44 | 14 | 6.3.0 / FXOS 2.3.1 | FMC & FXOS |
| 9300 SM-48 | 15 | 6.4.0 / FXOS 2.6.1 | FMC & FXOS |
| 9300 SM-56 | 18 | 6.4.0 / FXOS 2.6.1 | FMC & FXOS |

FMC Domain-Based RBAC

Firewall Management Center (FMC) supports up to 1,024 domains. Administrators only see and manage devices assigned within their domain. Granular RBAC enables separation of duties between operators — for example, allowing a tenant to manage their own firewall instances without visibility into other tenants' configurations. This is ideal for managed service providers (MSSPs) and large enterprise multi-tenant environments.

Internet Edge: BGP Design Options on Cisco Secure Firewall

Both ASA and FTD support enterprise-grade dynamic routing including RIP, OSPFv2, OSPFv3, IS-IS, EIGRP, BGP and PIM-SM multicast — making the Cisco Secure Firewall a viable internet gateway without a dedicated separate router. For SD-WAN edge deployments, this is particularly useful for consolidating security and routing functions.

Option 1: Full BGP Table

Accept full IPv4 and IPv6 routing tables (~1.3M prefixes). Requires at least 1 GB free Data Plane RAM. Memory: ~304 MB for IPv4 + ~90 MB for IPv6 + 200–300 MB buffer for route churn. Best for organisations needing optimal path selection to all destinations.

Option 2: Partial BGP Routes (AS_PATH Filter to 2–3 hops)

Filter inbound BGP to routes with AS_PATH length of 2–3 hops, resulting in 30k–200k routes. Memory drops to ~54 MB IPv4 + ~31 MB IPv6 + 80–120 MB buffer. Best for balancing granularity and resource consumption.

Option 3: Default Route Only

ISPs advertise only a default route. BGP serves as link keepalive and ECMP mechanism. Memory consumption is minimal (<1 kB). Best for simpler topologies where optimal path selection is not needed.
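The three options reduce to a Data Plane RAM budget. A back-of-the-envelope Python helper using the figures quoted above (estimates from this guide, not a Cisco sizing formula):

```python
def bgp_ram_mb(ipv4_mb, ipv6_mb, churn_buffer_mb):
    """Rough Data Plane RAM (MB) needed to hold BGP tables plus churn headroom."""
    return ipv4_mb + ipv6_mb + churn_buffer_mb

# Option 1 -- full table: ~304 MB IPv4 + ~90 MB IPv6 + up to 300 MB buffer
full_table = bgp_ram_mb(304, 90, 300)   # ~694 MB, hence the "1 GB free" guidance
# Option 2 -- AS_PATH-filtered partial table:
partial = bgp_ram_mb(54, 31, 120)       # ~205 MB
# Option 3 -- default route only: negligible (<1 kB), no budget needed
```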

BGP Scale by Cisco Secure Firewall Platform

| Platform | Max BGP Routes Tested | Max BGP Neighbors |
|---|---|---|
| 1010 / 1100 | 5k–10k | 5 |
| 1200C / 1230–1250 | 50k | 100 |
| 3100 Series | 100k | 500 (with BFD) |
| 4100 / 4200 Series | 200k | 500 (with BFD) |
| 9300 Series | 200k | 500 (with BFD) |

Access Control Policy Scale and Sizing

FTD 7.2 introduced Optimized Group Search (OGS) by default on new deployments, enabling significantly higher policy scale at the cost of slightly reduced per-packet performance. OGS was further refined in 7.6 (hit counters, timestamps, additional corner cases) and 7.7. FMC warns administrators before deploying rulesets approaching platform limits.

Maximum Tested ACE Counts — Key Platforms (FTD 7.6)

| Platform | Max ACEs | UI Rules at 50 ACE/rule | UI Rules at 100 ACE/rule |
|---|---|---|---|
| 1010 / 1010E | 10,000 | 200 | 100 |
| 2110 | 60,000 | 1,200 | 600 |
| 2140 | 500,000 | 10,000 | 5,000 |
| 3140 | 4,000,000 | 80,000 | 40,000 |
| 4145 | 8,000,000 | 160,000 | 80,000 |
| 4245 | 10,000,000 | 200,000 | 100,000 |
| 9300 w/SM-56 | 9,500,000 | 190,000 | 95,000 |
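Because FMC rules expand into ACEs roughly as the cross-product of their object contents, a planned ruleset can be sanity-checked against the platform limits above before deployment. A Python sketch (the cross-product expansion is a simplification; real expansion also depends on features such as users and applications in the rule):

```python
# Maximum tested ACE counts from the table above (FTD 7.6).
MAX_ACES = {"1010": 10_000, "2110": 60_000, "2140": 500_000,
            "3140": 4_000_000, "4145": 8_000_000, "4245": 10_000_000}

def expanded_aces(src_nets: int, dst_nets: int, ports: int) -> int:
    """One UI rule expands to roughly the cross-product of its objects."""
    return src_nets * dst_nets * ports

def ruleset_fits(platform: str, rules) -> bool:
    """True if the expanded ruleset stays within the platform's tested limit."""
    return sum(expanded_aces(*r) for r in rules) <= MAX_ACES[platform]

# 100 rules, each 10 sources x 10 destinations x 1 port = 10,000 ACEs total:
# right at the 1010's limit, trivial for a 3140.
branch_ruleset = [(10, 10, 1)] * 100
```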

Network Module Ecosystem

The 3100, 4100, 4200, and 9300 Series all support hot-swappable network expansion modules with same-kind Online Insertion and Removal (OIR). The 4200 Series offers the most advanced high-density options:

Cisco Secure Firewall 4200 Series — Key Network Modules

| Module SKU | Ports | Min FTD Version | Notes |
|---|---|---|---|
| FPR4K-XNM-2X400G | 2× 400G QSFP+ | 7.6 (7.7 for breakout) | Supports 4×10G, 4×25G, 200G modes in 7.7 |
| FPR4K-XNM-4X200G | 4× 200G QSFP+ | 7.4 | Supports 40G and 100G |
| FPR4K-XNM-2X100G | 2× 100G QSFP28 | 7.4 | Breakout to 4×10G, 4×25G, or 40G |
| FPR4K-XNM-8X25G | 8× 1/10/25G SFP+ | 7.4.0 | — |
| FPR4K-XNM-6X25SRF/LRF | 6× 25G Fail-to-Wire | 7.1 | Built-in optics, fixed |
| FPR4K-XNM-6X10SRF/LRF | 6× 10G Fail-to-Wire | 7.1 | SR or LR variants |
💡 Fail-to-Wire (FTW) Modules: All FTW modules have built-in optics (fixed — cannot be swapped). They are designed for inline NGIPS deployments where traffic must pass even when the firewall loses power or crashes. Same-kind OIR is supported.

Firewall Management Center (FMC): Centralised Management at Scale

The Cisco Firewall Management Center (FMC) provides centralised policy deployment, event management, reporting, and cluster health visibility for FTD deployments. Three hardware appliance models are available:

FMC Appliance Scale Comparison

| Model | Max FTD Sensors | Max IPS Events | Max Flow Rate | Max Network Hosts |
|---|---|---|---|---|
| FMC 1700 | 50 | 30M | 5k FPS | 50k |
| FMC 2700 | 300 | 60M | 12k FPS | 150k |
| FMC 4700 | 1,000 | 400M | 30k FPS | 600k |

Cluster Health Dashboard (FMC 7.3+)

FMC 7.3 introduced a dedicated Cluster Health Dashboard providing:

  • Per-member load statistics (CPU, memory, connections, throughput)
  • Cluster member status at a glance (control unit, data units, joining/leaving)
  • Aggregated and minimum/maximum metrics over a selected time period across the entire cluster
  • Historical trending to detect capacity bottlenecks before they become incidents

Summary and Recommendations

The Cisco Secure Firewall portfolio provides a mature, proven set of high availability and scaling options for every tier of the enterprise. Choosing between HA and clustering comes down to scale requirements: if you need more than one firewall worth of throughput, clustering is the right choice. If you simply need redundancy, HA is simpler to operate and maintain.

Quick Selection Guide — HA vs. Clustering by Use Case

| Use Case | Recommended Approach | Platform |
|---|---|---|
| Small/mid branch redundancy | Active/Standby HA | 1100, 1200, 2100 Series |
| Enterprise edge redundancy | Active/Standby HA | 3100 or 4200 Series |
| Enterprise DC high scale | Clustering (spanned or routed) | 3100 or 4200 Series |
| SP / carrier-class scale | Clustering up to 16 nodes | 9300 SM-56 or 4245 |
| Multi-site geo-redundancy | Stretched cluster | 3100, 4200, 9300 |
| Multi-tenant MSSP | Clustering + Multi-Instance + VRF + FMC RBAC | 4200 or 9300 |
| Internet edge with BGP | HA or Clustering depending on scale | 3100+ for full BGP table |
| OT/IoT with redundancy | Active/Standby HA | ISA 3000 |

For full hardware specifications and throughput numbers across all Cisco Secure Firewall models, see our companion guide: Cisco Secure Firewall Platforms: A Complete Deep Dive Guide. For Cisco ISE integration with DNA Center for identity-based policy enforcement alongside your firewall, see our comprehensive technical guide.

Frequently Asked Questions

Q: What is the difference between Cisco Secure Firewall HA and Clustering?

HA (Active/Standby) provides basic redundancy — one unit handles traffic while a standby mirrors state and takes over on failure. Clustering combines up to 16 units into one logical device where all nodes are simultaneously active, providing both redundancy and linear horizontal scaling of throughput, CPS, and concurrent connections.

Q: How many nodes can a Cisco Secure Firewall cluster support?

Up to 16 nodes. A 16-node cluster of 3140 appliances delivers up to 0.57 Tbps NGFW throughput, while a 16-node cluster of 4245 appliances reaches 1.79 Tbps. CPS scales to 2.4M (3140) or 6.4M (4245) and concurrent connections to 96M or 576M respectively.

Q: Can I run both ASA and FTD in the same cluster?

No — all nodes in a cluster must run the same application (either all ASA or all FTD) and the same software version. However, on the 9300 chassis, different Security Modules can run different applications (ASA on one module, FTD on another) in a service chaining configuration.

Q: What is VRF Lite on Cisco Secure Firewall?

VRF Lite (available from FTD 6.6) allows different firewall interfaces to belong to separate Layer 3 routing domains with overlapping IP address space. It enables multi-tenant deployments on a single firewall without dedicated hardware separation. Top-tier platforms like the 3140, 4245, and 9300 SM-56 support up to 100 VRFs.

Q: Which platforms support Multi-Instance on Cisco Secure Firewall?

Multi-Instance is supported on the 3100 Series (FTD 7.4.1+), 4100 Series (FTD 6.3+), 4200 Series (FTD 7.6+), and 9300 Series (FTD 6.3+). It is not supported on the 1010, 1100, 1200, or 2100 Series. The 4245 supports up to 34 instances and the 9300 SM-56 up to 18 instances.

Q: Does Cisco Secure Firewall support BGP for internet edge deployments?

Yes. Both ASA and FTD support full BGP (eBGP and iBGP), OSPFv2/v3, IS-IS, EIGRP, RIP, and PIM-SM multicast. The 3100, 4100, 4200, and 9300 Series have been tested with up to 200k BGP routes and 500 BGP neighbours (with BFD), making them viable internet gateway platforms.