
Deploying Cisco Virtual APIC (vAPIC) on VMware ESXi

Cisco ACI Deep-Dive Series

Complete Guide: Deployment Modes, Guidelines & Limitations for Cisco ACI 6.0(2)+


1. What is Cisco Virtual APIC (vAPIC)?

The Cisco Application Policy Infrastructure Controller (APIC) is the central nervous system of any Cisco ACI fabric. Traditionally delivered as a physical appliance, it has evolved significantly with the introduction of the Virtual APIC (vAPIC) — a software-based controller that can be deployed on commodity virtualization platforms, bringing the full power of Cisco ACI policy management without dedicated hardware.


Beginning with Cisco APIC release 6.0(2), you can deploy a cluster in which every APIC is virtual. The vAPIC can be deployed in two ways:

☁️ AWS Deployment

Deployed on Amazon Web Services using the CloudFormation template, with the ACI fabric running on customer premises.

🖥️ ESXi Deployment

Deployed on a VMware ESXi host using the OVF template in VMware vCenter, placed on existing customer-premises servers.

This article focuses specifically on the ESXi-based vAPIC deployment — an approach that is ideal for organizations looking to reduce hardware costs, simplify controller lifecycle management, and leverage their existing VMware infrastructure within a Cisco ACI environment.

🔑 Key Benefit: Why Deploy a Virtual APIC?

  • Eliminates the need for dedicated physical APIC appliances
  • Reduces capital expenditure (CapEx) for ACI deployments
  • Enables ACI management on existing VMware server infrastructure
  • Supports all Cisco ACI Multi-Pod, Remote Leaf, and Multi-Site topologies
  • Deployable on-premises or in the cloud (AWS)

2. Deployment Modes: Layer 2 vs Layer 3

Cisco vAPIC on ESXi supports two distinct deployment modes, each designed for different network topologies. Choosing the right mode depends on the physical connectivity of your ESXi host relative to the ACI fabric leaf switches.

MODE 1

Layer 2 Deployment

In Layer 2 mode, the ESXi host on which the virtual APIC(s) are deployed is directly connected to the ACI fabric leaf switches. This is the most straightforward deployment model and the most common for on-premises environments.

Uplink options supported:

🔗 Active-Standby Uplinks

Standard failover model with one active uplink and one in standby.

⚡ Active-Active (LACP Port Channel)

Both uplinks active simultaneously, providing higher bandwidth and redundancy via LACP port channels.

⚠️ Note: LLDP must be disabled on the vDS when the ESXi host is directly connected to ACI leaf switches. The vAPIC must consume LLDP traffic directly to and from the leaf switches, so the vDS must not intercept it. Enable CDP instead via Cisco APIC GUI → Virtual Networking → VMware → DVS.
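If you manage the vDS programmatically, the LLDP-to-CDP switch can be scripted. Below is a minimal pyVmomi sketch, assuming a vCenter at vcenter.example.com and a vDS named vAPIC-vDS (both hypothetical placeholders; adjust credentials and names to your environment):

```python
# Minimal sketch: switch a vDS discovery protocol from LLDP to CDP (pyVmomi).
# Hostname, credentials, and vDS name are hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Locate the vDS by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "vAPIC-vDS")
view.Destroy()

# Replace LLDP with CDP so the leaf switches' LLDP frames reach the vAPIC unmodified.
spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
spec.configVersion = dvs.config.configVersion
spec.linkDiscoveryProtocolConfig = vim.host.LinkDiscoveryProtocolConfig(
    protocol="cdp", operation="both")
dvs.ReconfigureDvs_Task(spec)

Disconnect(si)
```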

MODE 2

Layer 3 Deployment

In Layer 3 mode, the ESXi host on which the virtual APIC(s) are deployed is remotely attached to the ACI fabric via an external IP network. This mode is designed for scenarios where the controller cannot be physically collocated with the ACI fabric — for example, when the virtual APICs are hosted in a separate data center or a different network segment.

This topology is commonly used in Cisco ACI Multi-Pod designs where the virtual APICs reside in POD-0 (a management zone) and connect to the ACI fabric pods through the IPN (Inter-Pod Network).

Layer 2 vs Layer 3: Quick Comparison

| Feature | Layer 2 Mode | Layer 3 Mode |
| --- | --- | --- |
| ESXi-to-leaf connectivity | Direct physical connection | Remote via external IP network |
| Uplink options | Active-standby or LACP | IP routing via IPN |
| Use case | Single-site, on-prem ACI | Multi-Pod, remote deployments |
| LLDP behavior | Disable on vDS; enable CDP | Managed via external network |
| Topology support | Standard ACI fabric | Multi-Pod, Remote Leaf, Multi-Site |

3. Guidelines and Limitations for ESXi Deployment

Before deploying a virtual APIC on ESXi, it is essential to understand all guidelines and limitations. Violating any of these can result in deployment failures, cluster instability, or unsupported configurations.

📌 Guideline 1 — One vAPIC Per ESXi Host (Recommended)

Although multiple virtual APICs per ESXi host are technically supported, a single virtual APIC per ESXi host is strongly recommended for high availability. Co-locating multiple APICs on the same host creates a single point of failure: one host outage could take down several controllers at once and cost the cluster its quorum.

📌 Guideline 2 — Minimum Fabric Switch Software Requirement

Fabric switches must be running Cisco ACI release 6.0(2) or later. Switches running older releases can be automatically upgraded to 6.0(2) during fabric discovery using the Auto Firmware Update feature — but this must be planned and validated in advance.
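One way to verify this before deployment is to query the APIC REST API for the firmwareRunning class, which reports the running image on each fabric switch. A minimal sketch, assuming an APIC reachable at apic.example.com (hypothetical) and standard aaaLogin authentication:

```python
# Sketch: list the running firmware on every fabric switch via the APIC REST API.
# APIC hostname and credentials are hypothetical placeholders.
import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False  # lab only; use proper certificates in production

# Standard aaaLogin call; the auth cookie is kept on the session.
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

# firmwareRunning holds the running image per switch node.
resp = session.get(f"{APIC}/api/node/class/firmwareRunning.json")
resp.raise_for_status()
for obj in resp.json()["imdata"]:
    attrs = obj["firmwareRunning"]["attributes"]
    # Switch images use 16.x numbering, which pairs with APIC 6.x
    # (16.0(2) corresponds to APIC 6.0(2)).
    print(attrs["dn"], attrs["version"])
```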

📌 Guideline 3 — Mixed-Mode Clusters (Pre-6.2(1))

In releases prior to Cisco ACI 6.2(1), mixed-mode clusters — combining physical and virtual APICs — are not supported. All nodes in the APIC cluster must use the same form factor. The only exception is Cisco Mini ACI, which supports clusters with one physical APIC and two virtual APICs.

📌 Guideline 4 — Mixed-Mode Clustering Supported from 6.2(1)

Starting with Cisco ACI release 6.2(1), mixed-mode clustering is fully supported. You can now combine different controller types (physical and virtual) within the same cluster, providing far greater flexibility for brownfield deployments and staged migrations.

⛔ Limitation — AWS Virtual Controllers Cannot Be Mixed

Virtual controllers deployed in AWS cannot be mixed with any other controller type. Additionally, if any APIC in the cluster uses the VAPIC-S1 form factor, all other controllers in that cluster must also use the VAPIC-S1 form factor.

📌 Guideline 5 — NTP Time Synchronization

Before deploying any virtual APIC on ESXi, ensure that all ESXi host clocks are synchronized using NTP. Clock drift between nodes is one of the leading causes of cluster formation failures and should be validated as a critical pre-deployment check.
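A quick pre-check is to read each host's clock and NTP server list through vCenter. A rough pyVmomi sketch with hypothetical names; the hosts are polled one after another, so treat the computed drift as approximate:

```python
# Sketch: compare clocks and NTP settings across ESXi hosts (pyVmomi).
# vCenter hostname and credentials are hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
times = {}
for host in view.view:
    dts = host.configManager.dateTimeSystem
    times[host.name] = dts.QueryDateTime()  # host's current UTC time
    print(host.name, "NTP servers:", dts.dateTimeInfo.ntpConfig.server)
view.Destroy()

# Hosts are polled sequentially, so this is a rough upper bound on drift.
drift = (max(times.values()) - min(times.values())).total_seconds()
print(f"max observed clock drift: {drift:.1f}s")
Disconnect(si)
```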

⛔ Limitation — No Cross-Platform Migration

Virtual APICs deployed using ESXi cannot be migrated to AWS, and vice versa. If you start your deployment on one platform, you are committed to that platform. Plan your deployment target carefully before going live.

⛔ Limitation — No Downgrade Below 6.0(2)

After deploying a virtual APIC with release 6.0(2) using ESXi, you cannot downgrade it to any release prior to Cisco APIC release 6.0(2). This is a hard constraint and must be considered in your change management planning.

📌 Guideline 6 — Standby APICs Supported from 6.1(3)

Starting with Cisco APIC 6.1(3), standby APICs are supported for virtual deployments. Cluster and fabric security is provided through self-signed certificates.

📌 Guideline 7 — VMM / DVS Configuration for vAPIC Co-hosting

For VMM deployments that use the same DVS that hosts the virtual APICs, you must enable CDP and disable LLDP in the Cisco APIC GUI. Navigate to: Virtual Networking → VMware → DVS.

⛔ Limitation — No Intermediate Switches Supported

ESXi hosts connected over an intermediate switch — including Cisco UCS Fabric Interconnects — are not supported. The ESXi host must be directly connected to the Cisco ACI-mode leaf switches. This is necessary for the vAPIC to properly consume LLDP traffic from the leaf switches.

📌 Guideline 8 — Consistent PID Across Cluster Nodes

All nodes in a cluster must use the same Product ID (PID). For example, if one node is APIC-SERVER-VMWARE-S1, all other nodes must also be APIC-SERVER-VMWARE-S1. PID consistency is mandatory for a supported cluster configuration.
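A spot-check is possible via the APIC REST API: list the controller nodes from topSystem, then read each node's chassis model. This sketch assumes the PID is exposed on the node's eqptCh object, as it is for fabric switches; verify the class on your release. Hostname and credentials are hypothetical:

```python
# Sketch: check PID homogeneity across APIC cluster nodes via the REST API.
# Assumes eqptCh exposes the chassis model for controller nodes (verify on
# your release). Hostname and credentials are hypothetical placeholders.
import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False  # lab only
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

# Find the controller nodes, then read each one's chassis model (PID).
top = session.get(
    f"{APIC}/api/node/class/topSystem.json",
    params={"query-target-filter": 'eq(topSystem.role,"controller")'})
models = set()
for obj in top.json()["imdata"]:
    dn = obj["topSystem"]["attributes"]["dn"]  # e.g. topology/pod-1/node-1/sys
    ch = session.get(f"{APIC}/api/mo/{dn}/ch.json").json()
    models.add(ch["imdata"][0]["eqptCh"]["attributes"]["model"])

print("PIDs in cluster:", models)  # a supported cluster shows exactly one PID
```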

📌 Guideline 9 — Mini ACI Cluster Specifics

In a Cisco Mini ACI cluster setup, the only supported combination is one physical APIC and two virtual APICs, and the virtual APICs must use the APIC-SERVER-VMWARE-M1 form factor only.

📌 Guideline 10 — Full Topology Support

Virtual APIC clusters are supported in all Cisco ACI Multi-Pod, Remote Leaf Switch, and Cisco ACI Multi-Site topologies. For exact scalability limits applicable to each topology, refer to the Verified Scalability Guide for Cisco APIC.

4. High Availability Considerations

High availability for the vAPIC cluster is primarily achieved through proper placement of virtual APIC VMs across multiple ESXi hosts and through supported uplink configurations. Here are the key HA design principles:

🧱 One vAPIC Per Host

Spread virtual APICs across separate ESXi hosts to ensure a single host failure does not impact the entire controller cluster.

🔁 LACP Port Channels

Use LACP active-active uplinks for redundant ESXi-to-leaf connectivity, eliminating link-level single points of failure.

⏱️ Standby APICs

From Cisco APIC 6.1(3), standby APICs are supported. A standby can be promoted quickly in the event of an active APIC failure.

⚠️ Important: Virtual APIC does not support VMware vSphere High Availability (HA) or VMware vSphere Fault Tolerance (FT). Do not rely on VMware-level HA mechanisms for vAPIC cluster resilience — instead, design for ACI-native redundancy through proper cluster sizing and placement.
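If the vAPIC VMs share a vSphere cluster with other workloads that do use HA, you can exclude just the vAPICs with per-VM overrides instead of disabling HA cluster-wide. A pyVmomi sketch, with hypothetical cluster and VM names, that sets each vAPIC's HA restart priority to disabled:

```python
# Sketch: exclude vAPIC VMs from vSphere HA via per-VM overrides (pyVmomi).
# vCenter, cluster, and VM names are hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

def find(vimtype, name):
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    obj = next(o for o in view.view if o.name == name)
    view.Destroy()
    return obj

cluster = find(vim.ClusterComputeResource, "Compute-Cluster")
overrides = []
for vm_name in ("vAPIC-1", "vAPIC-2", "vAPIC-3"):
    vm = find(vim.VirtualMachine, vm_name)
    # restartPriority "disabled" means HA never restarts this VM.
    overrides.append(vim.cluster.DasVmConfigSpec(
        operation="add",
        info=vim.cluster.DasVmConfigInfo(
            key=vm,
            dasSettings=vim.cluster.DasVmSettings(restartPriority="disabled"))))

spec = vim.cluster.ConfigSpecEx(dasVmConfigSpec=overrides)
cluster.ReconfigureComputeResource_Task(spec, modify=True)
Disconnect(si)
```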

5. Mixed-Mode Clustering: Physical + Virtual APICs

One of the most significant milestones in the evolution of Cisco ACI is the introduction of mixed-mode clustering. Here is a timeline of how this capability has evolved:

Cisco ACI 6.0(2)

First release to support a fully virtual APIC cluster. All nodes must be the same type (all-virtual or all-physical). Mixed mode not supported.

Cisco ACI 6.1(3)

Standby APICs introduced for virtual APIC deployments, enhancing cluster resilience.

Cisco ACI 6.2(1) — Mixed Mode Unlocked

Mixed-mode clustering officially supported. Physical and virtual APICs can now coexist in the same cluster, enabling flexible brownfield and greenfield deployment options. Additionally, dynamic cluster-aware EPG deployment is supported via VMware vCenter integration.

🔹 Special Case: Cisco Mini ACI

Even before 6.2(1), Cisco Mini ACI supported a hybrid cluster of 1 physical APIC + 2 virtual APICs using the APIC-SERVER-VMWARE-M1 form factor. This remains the only supported combination for Mini ACI deployments.

6. VMware vMotion, HA & FT Support

Understanding which VMware features are and are not compatible with vAPIC is critical for infrastructure designers. The following table summarizes VMware feature support:

| VMware Feature | Supported? | Notes |
| --- | --- | --- |
| vMotion (live migration) | ✅ Yes | vAPIC VMs can be live-migrated between ESXi hosts. |
| vSphere High Availability (HA) | ❌ No | Not supported. Use ACI-native cluster HA design instead. |
| vSphere Fault Tolerance (FT) | ❌ No | Not supported. FT is incompatible with vAPIC workloads. |

Support for vMotion is an important operational advantage: it allows platform teams to perform ESXi host maintenance (such as patching) without disrupting the APIC cluster, provided the cluster has adequate redundancy.
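The migration step itself is easy to script. A pyVmomi sketch with hypothetical names; confirm the APIC cluster reports fully fit before and after the move:

```python
# Sketch: live-migrate a vAPIC VM off a host before ESXi maintenance (pyVmomi).
# vCenter, VM, and host names are hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

def find(vimtype, name):
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    obj = next(o for o in view.view if o.name == name)
    view.Destroy()
    return obj

vm = find(vim.VirtualMachine, "vAPIC-1")
target = find(vim.HostSystem, "esxi02.example.com")

# A powered-on VM migrated this way is vMotioned to the target host.
vm.MigrateVM_Task(host=target,
                  priority=vim.VirtualMachine.MovePriority.highPriority)
Disconnect(si)
```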

7. Best Practices & Pre-Deployment Checklist

Use the following checklist before initiating your Cisco virtual APIC deployment on ESXi:

Verify ACI fabric software version — Confirm all fabric switches are on Cisco ACI 6.0(2) or later. Plan Auto Firmware Update if required.

Configure NTP on all ESXi hosts — Ensure all ESXi hosts hosting vAPIC VMs are synchronized to the same NTP source.

Validate direct leaf connectivity — Confirm ESXi hosts are directly connected to ACI-mode leaf switches. Remove any intermediate switches or UCS Fabric Interconnects from the path.

Configure CDP and disable LLDP on vDS — Mandatory when the VMM domain uses the same DVS as the vAPICs.

Plan one vAPIC per ESXi host — For maximum availability, avoid placing more than one vAPIC VM on the same ESXi host.

Confirm PID homogeneity — All vAPIC nodes in the cluster must use the same Product ID (e.g., all APIC-SERVER-VMWARE-S1).

Choose and commit to a platform — Decide between ESXi and AWS before deployment. Cross-platform migration is not supported.

Disable VMware HA and FT for vAPIC VMs — Do not enable vSphere HA or FT on the VMs running virtual APIC. These features are not supported and could cause cluster instability.

8. Conclusion

The Cisco Virtual APIC represents a transformative shift in how organizations can deploy and manage Cisco ACI fabrics. By removing the requirement for dedicated physical controller hardware, Cisco opens ACI management to a broader range of deployment scenarios — from lean edge environments to fully cloud-connected enterprise fabrics.

Whether you opt for a Layer 2 direct-connect topology or a Layer 3 remote-attach design, the vAPIC provides the full control-plane functionality of its physical counterpart — including support for Multi-Pod, Remote Leaf, Multi-Site, and VMware vMotion.

As Cisco ACI continues to evolve — with milestones like mixed-mode clustering in 6.2(1) — the vAPIC becomes an increasingly compelling choice for both greenfield and brownfield ACI deployments. Following the guidelines and limitations outlined in this article will ensure a stable, supported, and highly available virtual APIC cluster.

Found this guide helpful?

Share it with your network and follow the author for more deep-dive Cisco ACI content.


Tags

Cisco ACI, Virtual APIC, vAPIC, ESXi Deployment, VMware vCenter, ACI 6.0(2), Multi-Pod, APIC Cluster, Cisco CCDE, CCIE Data Center