20 Cisco SD-Access Interview Questions: What Architects Are Really Asked
Cisco SD-Access interviews at the senior or architect level are not about memorizing the three-tier fabric hierarchy or recalling that LISP stands for Locator/ID Separation Protocol. Interviewers at that level already assume foundational knowledge. What they are testing is whether you understand why the fabric is designed the way it is — how LISP solves the endpoint mobility problem, how VRFs map to Virtual Networks, why the border node is the most design-critical node in the fabric, and how you migrate a 500-VLAN enterprise campus without a maintenance window.
This guide covers 20 of the most important network-centric Cisco SD-Access interview questions — from fabric node roles and the control plane to policy, multicast, multi-site, and migration — answered with the architectural reasoning that separates strong candidates from the rest.
① Fabric Architecture & Node Roles
Q1
Explain the roles of the Edge Node, Border Node, and Control Plane Node in SD-Access and what happens to traffic if the Control Plane Node fails.
Edge Node is the access-layer switch that connects end devices to the fabric. It registers endpoints with the Control Plane Node using LISP and encapsulates traffic in VXLAN for fabric transport. Border Node is the fabric exit point — it connects the fabric to external networks (WAN, internet, shared services, SD-WAN) and translates between the VXLAN overlay and native routing. Control Plane Node (a fabric switch or router, often colocated with the Border Node role; Catalyst Center manages it but does not itself perform this function) runs the LISP Map Server/Map Resolver and maintains the endpoint-to-RLOC mapping database. The critical resiliency answer: if the Control Plane Node fails, existing VXLAN tunnels continue forwarding traffic based on cached endpoint mappings in the Edge Nodes. New endpoint registrations and mobility events cannot be processed until recovery, but established sessions survive. This is architecturally similar to vSmart in SD-WAN — the control plane is separate from, and non-blocking to, the data plane.
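The cache-versus-resolution behavior can be sketched in a few lines. This is a toy model, not Cisco code: an Edge Node forwards from its local map-cache and only needs the Map Server for destinations it has not yet resolved, which is why established flows survive a Control Plane Node outage while new resolutions fail.

```python
# Toy model of LISP map-cache resiliency (all names are illustrative).
class EdgeNode:
    def __init__(self, map_server):
        self.map_server = map_server      # may become None on CP failure
        self.map_cache = {}               # EID -> RLOC, learned via Map-Reply

    def forward(self, eid):
        if eid in self.map_cache:         # established flow: cache hit
            return self.map_cache[eid]
        if self.map_server is None:       # CP down: new resolutions fail
            raise LookupError("Map Server unreachable; no cached RLOC")
        rloc = self.map_server[eid]       # Map-Request / Map-Reply
        self.map_cache[eid] = rloc
        return rloc

cp = {"10.1.1.10": "192.168.255.1"}       # Map Server database (example)
edge = EdgeNode(cp)
edge.forward("10.1.1.10")                 # resolves and caches the mapping
edge.map_server = None                    # simulate Control Plane Node failure
print(edge.forward("10.1.1.10"))          # cached flow still forwards
```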
Q2
What is an Intermediate Node in SD-Access and what restrictions apply to it?
An Intermediate Node is a fabric node that carries VXLAN-encapsulated traffic between Edge and Border Nodes without participating in the LISP control plane or terminating VXLAN tunnels. It forwards based on the outer IP header of the VXLAN packet — treating it as standard routed traffic. The key restriction is that Intermediate Nodes must not have any end devices connected to them. They operate as pure IP underlay transit nodes. Not all Cisco platforms support the Intermediate Node role — check the SD-Access compatibility matrix before assuming a distribution-layer switch can serve this function. A design mistake in which endpoints are inadvertently connected to an Intermediate Node will result in those endpoints being unreachable through the fabric policy model.
Q3
How does LISP solve the endpoint mobility problem that traditional campus routing cannot?
In traditional campus routing, an endpoint's IP address encodes its location — it belongs to a subnet that is topologically anchored to a specific switch or access layer. When an endpoint moves to a different switch, routing tables must converge to reflect the new location, causing disruption. LISP separates the Endpoint Identifier (EID — the endpoint's IP or MAC) from the Routing Locator (RLOC — the fabric node's underlay IP). When an endpoint moves to a new Edge Node, that Edge Node simply re-registers the EID-to-RLOC mapping with the Control Plane Node. Traffic destined for the endpoint is immediately redirected via a Map Request/Reply exchange without any routing convergence event or subnet change. The endpoint retains its IP address regardless of physical location — enabling true seamless roaming across the entire fabric without breaking TCP sessions or triggering DHCP renewals.
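The mobility mechanism reduces to updating one mapping. A toy sketch (not Cisco code) of the Control Plane Node's EID-to-RLOC database makes the point: the endpoint's identifier never changes on a move; only its locator does.

```python
# Toy sketch of LISP mobility: roaming is a mapping update, not a
# routing convergence event. All addresses and names are illustrative.
mapping_db = {}  # Control Plane Node: EID -> RLOC

def register(eid, rloc):
    """Edge Node registers (or re-registers) a locally attached endpoint."""
    mapping_db[eid] = rloc

def resolve(eid):
    """Map-Request from a remote Edge Node; returns the current RLOC."""
    return mapping_db[eid]

register("10.1.1.10", "rloc-edge-1")           # endpoint lives on Edge 1
assert resolve("10.1.1.10") == "rloc-edge-1"

register("10.1.1.10", "rloc-edge-7")           # endpoint roams to Edge 7
assert resolve("10.1.1.10") == "rloc-edge-7"   # same EID, new locator
```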
② VXLAN Data Plane & Underlay Design
Q4
Why does SD-Access use VXLAN for the data plane instead of MPLS or a traditional VLAN-based overlay?
VXLAN provides a 24-bit Virtual Network Identifier (VNID) space — supporting over 16 million logical segments compared to VLAN's 4,096 limit. That scale comfortably accommodates every Virtual Network and Layer 2 segment a large enterprise defines (SGTs, by contrast, do not consume VNIDs; they ride in the VXLAN-GPO header alongside the VNID). VXLAN also encapsulates full Ethernet frames, including the MAC header, allowing Layer 2 domain extension across a routed underlay without spanning tree — preserving the flat access layer that many enterprise applications require while providing a fully routed, loop-free underlay. MPLS would require a label distribution protocol (LDP or BGP-LU) across the campus infrastructure, adding complexity and limiting hardware support to MPLS-capable platforms. VXLAN runs over standard IP/UDP, meaning any IP-routable underlay — IS-IS, OSPF, or even static routes — is sufficient.
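The 24-bit claim is easy to verify against the wire format. A minimal sketch of the 8-byte VXLAN header from RFC 7348 (the VNI value here is an arbitrary example):

```python
import struct

# Minimal VXLAN header sketch (RFC 7348): 8 bytes, VNI in bits 32-55.
# The 24-bit VNI field is the source of the ~16.7M segment space.
def vxlan_header(vni: int) -> bytes:
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit field")
    flags = 0x08000000                 # I-bit set: VNI field is valid
    return struct.pack("!II", flags, vni << 8)  # VNI in upper 24 bits of word 2

hdr = vxlan_header(5001)               # example VNID
assert len(hdr) == 8                   # fixed 8-byte header
print(f"VLAN IDs: {2**12}, VXLAN VNIs: {2**24}")  # 4096 vs 16777216
```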
Q5
What underlay routing protocol does SD-Access use and why?
SD-Access uses IS-IS as the underlay routing protocol, deployed and managed automatically by Catalyst Center. IS-IS was chosen over OSPF for several reasons: it runs directly over Layer 2 and does not depend on IP for its own hellos, making it more resilient in campus environments where Layer 3 adjacencies over SVIs can be problematic; it scales well in large, flat topologies; and its TLV-based design makes it straightforward to extend. Catalyst Center provisions IS-IS automatically across fabric nodes during onboarding — operators do not manually configure IS-IS neighbors or area assignments. The underlay provides IP reachability between all fabric node RLOCs, which is all VXLAN requires to build its overlay tunnels.
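For orientation, a minimal IOS-XE-style IS-IS underlay fragment might look like the following. This is an illustrative sketch only — the NET, addresses, and interface names are invented, and in a real deployment Catalyst Center's LAN Automation generates this configuration rather than an operator typing it:

```
! Illustrative underlay fragment (example values, not a template)
router isis
 net 49.0000.0000.0001.00
 is-type level-2-only
 metric-style wide
 log-adjacency-changes
!
interface Loopback0
 ip address 192.168.255.1 255.255.255.255   ! RLOC address
 ip router isis
!
interface TenGigabitEthernet1/0/1
 ip address 172.16.0.1 255.255.255.252      ! routed underlay link
 ip router isis
```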
③ Macro & Micro Segmentation
| # | Question | Architect-Level Answer |
|---|---|---|
| Q6 | What is the difference between macro segmentation (VN) and micro segmentation (SGT) in SD-Access? | Macro segmentation uses Virtual Networks (VNs) — each VN maps to a VRF in the overlay and provides complete routing separation between groups of users or systems. Traffic between VNs must traverse a Fusion Router or Firewall at the border. Micro segmentation uses Scalable Group Tags (SGTs) — numeric labels assigned to endpoints based on identity (ISE authentication). SGT policy, enforced by SGACL, controls which SGTs can communicate within the same VN without requiring IP ACLs or separate subnets. Macro segmentation answers "which network segment does this device belong to?" while micro segmentation answers "within that network, what is this device allowed to do?" |
| Q7 | How are Scalable Group Tags assigned and propagated across the fabric? | SGTs are assigned by Cisco ISE at the time of 802.1X or MAB authentication — the ISE authorization policy maps the endpoint's identity (user, device type, posture) to a specific SGT value. The SGT is communicated to the Edge Node via the RADIUS Access-Accept as the `cts:security-group-tag=<value>` Cisco-AVPair attribute. Within the fabric, the SGT is carried in the VXLAN Group Policy Option (GPO) header field — eliminating the need for separate SGT propagation protocols (SXP). At the border, SXP may be required to propagate SGTs to non-fabric devices that do not understand the VXLAN GPO field. |
| Q8 | What is a Fusion Router and when is it required in SD-Access? | A Fusion Router is an external routing device connected to the Border Node that enables communication between different Virtual Networks (VRFs) within the SD-Access fabric. Because VNs provide macro-segmentation through VRF isolation, traffic between VNs cannot cross the fabric directly — it must exit the fabric, be routed between VRFs on the Fusion Router (where policy can be applied), and re-enter. A Fusion Router is required whenever two VNs need controlled inter-VN communication — for example, when a Guest VN needs access to a shared DNS or DHCP server in the Corporate VN. The Fusion Router is where a firewall is typically inserted for inter-VN inspection before traffic returns into the fabric. |
⚠ Common Interview Trap: Candidates often say SGTs replace VLANs. They do not — each Endpoint Group (user pool) in SD-Access still maps to a VLAN at the access layer. SGTs add a policy identity layer on top of the VLAN construct. The fabric handles the VLAN-to-VNID-to-SGT mapping automatically.
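The SGACL model described in Q6 and Q7 is, at its core, a lookup keyed on a source/destination SGT pair. A toy sketch (not ISE code; the SGT numbers and group names are invented) shows why micro-segmentation needs no IP ACLs or subnet boundaries:

```python
# Toy SGACL matrix: policy is identity-to-identity, not IP-to-IP.
# SGT values and group names below are illustrative examples.
SGACL = {
    (10, 20): "permit",   # Employees (SGT 10) -> Printers (SGT 20)
    (30, 20): "deny",     # Guests (SGT 30)    -> Printers (SGT 20)
}

def enforce(src_sgt: int, dst_sgt: int) -> str:
    # Unlisted SGT pairs fall through to the default policy (deny here).
    return SGACL.get((src_sgt, dst_sgt), "deny")

assert enforce(10, 20) == "permit"
assert enforce(30, 20) == "deny"
assert enforce(30, 99) == "deny"   # implicit default catch-all
```

Note that the lookup is evaluated at the destination Edge Node within a single VN — crossing VNs still requires the Fusion Router path from Q8.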
④ Catalyst Center (DNA Center) & Policy
Q9
What is the relationship between Catalyst Center, ISE, and the SD-Access fabric, and what happens if Catalyst Center goes offline?
Catalyst Center (formerly DNA Center) is the orchestration and management plane — it provisions fabric nodes, pushes IS-IS underlay configuration, creates VNs, defines Endpoint Groups, and automates Day-0/1/2 operations via NETCONF/RESTCONF. Cisco ISE is the policy and identity engine — it handles 802.1X/MAB authentication, assigns SGTs, and acts as the pxGrid publisher for endpoint context sharing. The fabric itself (LISP control plane, VXLAN data plane, IS-IS underlay) runs independently on the network devices. If Catalyst Center goes offline, existing fabric forwarding continues uninterrupted — but no new policy changes, onboarding automation, or topology modifications can be made until recovery. ISE must remain available for endpoint authentication to continue — an ISE outage impacts new endpoint access, not existing authenticated sessions.
Q10
What is the purpose of the Endpoint Group (EPG) in SD-Access and how does it differ from Cisco ACI's EPG?
In SD-Access, an Endpoint Group is a logical grouping of users or devices that share the same network policy — it maps to a VLAN (user pool) in the access layer and to a VNID in the overlay. The EPG defines what IP pool and VLAN endpoints in that group receive, and which SGT is associated with authenticated members of that group. Unlike Cisco ACI's EPG — which is a pure policy construct defining communication permissions via contracts — the SD-Access EPG is primarily a network placement and identity mapping construct. Policy between EPGs in SD-Access is defined through SGACLs at the ISE level, not through fabric contracts. The naming similarity between the two platforms causes confusion in interviews; the underlying models are architecturally distinct.
Q11
How does Catalyst Center use network profiles and templates to provision fabric nodes at scale?
Catalyst Center uses Network Profiles to associate topology-level configuration (fabric design, VN assignments, authentication templates) with specific sites in the hierarchy. Day-N Templates (based on Apache Velocity scripting) allow engineers to push parameterized CLI or NETCONF configuration to fabric nodes for settings Catalyst Center does not natively manage — custom QoS policies, specific interface configurations, or vendor-specific features. The onboarding workflow combines PnP (Plug-and-Play) for Zero Touch Provisioning, Claim and Deploy for initial fabric role assignment, and template push for Day-N settings. Engineers who can author Velocity templates and understand the Catalyst Center template editor's variable-binding model demonstrate a significantly higher level of operational maturity than those who rely solely on the GUI.
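As a flavor of what a Day-N template looks like, here is a short Velocity sketch. The variable names (`$uplink_interfaces`, `$site_name`) are invented for illustration — in practice they would be bound in the Catalyst Center template editor at deploy time:

```
## Illustrative Day-N Velocity template sketch (example variables only)
#foreach( $intf in $uplink_interfaces )
interface $intf
 description FABRIC-UPLINK-$site_name
 mtu 9100
#end
```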
⑤ Border Node Design & Multi-Site
| # | Question | Architect-Level Answer |
|---|---|---|
| Q12 | What is the difference between an Internal Border Node and an External Border Node? | Internal Border Node connects the fabric to known networks inside the organization (data center, shared services, legacy campus blocks). It registers those known prefixes with the Control Plane Node so fabric devices can resolve them explicitly. External Border Node connects the fabric to networks whose prefixes are unknown to the fabric (internet, external WAN) and acts as the gateway of last resort: any destination the control plane cannot resolve is forwarded there. A single node performing both functions is commonly called an Anywhere Border. Most enterprise designs use dedicated Border Node hardware pairs (not shared with Edge or Intermediate roles) to isolate the external routing failure domain from internal fabric forwarding. |
| Q13 | How does SD-Access Multi-Site work and what role does the Transit Control Plane play? | SD-Access Multi-Site connects multiple fabric sites using a Transit Fabric — either an SD-Access Transit (a dedicated fabric connecting sites using the same LISP/VXLAN architecture) or an IP-based Transit (which routes between sites using standard BGP). The Transit Control Plane Node acts as the LISP map server for inter-site endpoint resolution — when an Edge Node at Site A needs to reach an endpoint at Site B, it queries the Transit Control Plane which resolves the EID to the Site B Border Node's RLOC. Policy (SGTs, VNs) is preserved end-to-end across a fabric transit but may be lost across an IP transit where standard routing replaces the overlay. Multi-Site design requires careful alignment of VN names and SGT values across all fabric instances managed by Catalyst Center. |
| Q14 | How do you handle multicast traffic in an SD-Access fabric? | SD-Access supports two multicast modes. Native multicast maps the overlay multicast group to a unique underlay multicast group per VNID — the underlay must run PIM and have multicast-capable hardware throughout. This provides the most efficient multicast delivery but requires underlay multicast configuration. Head-end replication replicates multicast packets at the source Edge Node as individual unicast VXLAN packets to all interested Edge Nodes — no underlay multicast required, but it increases CPU and bandwidth load at the source node. For most enterprise campuses without a multicast-capable underlay, head-end replication is the deployable default. Native multicast is mandatory for high-volume multicast applications like video surveillance or financial market data feeds. |
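The head-end replication trade-off in Q14 is worth quantifying. A back-of-the-envelope sketch (stream rate and node count are made-up examples) shows how source-side egress bandwidth scales linearly with the number of interested Edge Nodes:

```python
# Head-end replication cost sketch: the source Edge Node sends one
# unicast VXLAN copy per interested Edge Node, so egress bandwidth at
# the source is multiplied by the receiver-node count.
def headend_egress_mbps(stream_mbps: float, receiver_edges: int) -> float:
    return stream_mbps * receiver_edges

# Example: a 6 Mb/s surveillance stream, receivers behind 40 Edge Nodes.
print(headend_egress_mbps(6, 40))   # 240 Mb/s leaving a single node
# Native multicast would instead inject one copy into the underlay tree.
```

This linear amplification is why the answer above calls native multicast mandatory for high-volume feeds.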
⑥ Wireless Integration, Migration & Troubleshooting
| # | Question | Architect-Level Answer |
|---|---|---|
| Q15 | How does wireless integrate into the SD-Access fabric — what is the role of the WLC? | In SD-Access, the Wireless LAN Controller (WLC) is a fabric-aware node that integrates with the fabric through the Edge Node. APs connect to fabric Edge Nodes which act as their VXLAN tunnel endpoint. The WLC signals endpoint registration to the LISP Control Plane Node on behalf of wireless clients — when a client associates and authenticates, the WLC notifies the Control Plane Node of the client's EID (MAC and IP) and associated RLOC (the Edge Node the AP is connected to). This means wireless clients participate in the same endpoint mobility model as wired clients — seamless roaming across APs on different Edge Nodes is handled by LISP re-registration without IP address changes. The WLC must be onboarded into Catalyst Center and its site assignment must align with the fabric site hierarchy for policy to apply correctly. |
| Q16 | How do you migrate a brownfield campus with 300 VLANs to SD-Access without a maintenance window? | The recommended approach is a phased coexistence migration. Deploy new fabric infrastructure (Edge, Border, and Control Plane Nodes) alongside the existing distribution layer. Use External Border Nodes to connect the fabric to the existing traditional campus — this provides reachability between fabric and non-fabric segments during migration. Migrate one building or wiring closet at a time: drain endpoints from the traditional access switch, onboard the replacement Edge Node into the fabric via PnP, and re-terminate endpoints on it. The Border Node handles routing between migrated (fabric) and un-migrated (traditional) segments throughout the migration. Never migrate authentication (802.1X) and fabric simultaneously — introduce ISE-based authentication in monitor mode first, validate SGT assignments, then migrate the network segment into the fabric. |
| Q17 | A wired endpoint is authenticated and associated with the correct SGT but cannot reach its destination. How do you troubleshoot? | Start with `show lisp site detail` on the Control Plane Node to verify the endpoint's EID is registered with the correct RLOC (Edge Node). If unregistered, the Edge Node has not completed LISP registration — check `show lisp service ipv4` on the Edge Node. If registered, verify the SGACL policy on ISE — check that the source-SGT-to-destination-SGT matrix has an ALLOW entry for the required protocol. Run `show cts role-based permissions` on the Edge Node to confirm the SGACL is downloaded and active. Verify VXLAN encapsulation with `show fabric forwarding` and check underlay reachability between the source and destination Edge Node RLOCs before suspecting a policy issue. |
| Q18 | What is the purpose of the Anycast Gateway in SD-Access and how does it differ from a traditional HSRP/VRRP gateway? | The Anycast Gateway provides the default gateway function for endpoints in the fabric. Every Edge Node hosts the same IP address and MAC address as the default gateway for each subnet — there is no active/standby failover because every Edge Node answers ARP requests for the gateway IP locally. When an endpoint ARPs for its gateway, the local Edge Node responds immediately without any network-wide coordination. This eliminates HSRP/VRRP election overhead, and gateway recovery is bounded only by the time to detect the Edge Node failure and reconnect the endpoint through another switch port or AP. The anycast gateway also enables seamless endpoint mobility — since every Edge Node has the same gateway IP and MAC, moving between Edge Nodes does not trigger a gateway re-learning event. |
| Q19 | How does SD-Access handle external DHCP — where does the DHCP server sit and how do requests traverse the fabric? | DHCP servers in SD-Access are typically external to the fabric — hosted in a shared services VN or in the data center. When an endpoint sends a DHCP Discover, the Edge Node's anycast gateway (acting as DHCP relay) intercepts the broadcast and unicasts it toward the external DHCP server via the Border Node, setting giaddr to the anycast SVI address so the server selects the pool for that Endpoint Group's subnet. Because that giaddr is identical on every Edge Node, the relay also inserts DHCP Option 82 carrying information that identifies the originating Edge Node, so the reply can be steered back to the correct node. The IP address assigned must fall within the subnet configured for that Endpoint Group in Catalyst Center. A common design failure is mismatched DHCP scopes — the server assigns from the wrong pool because the scope is not aligned with the fabric's Endpoint Group subnet definition, or Option 82 is stripped in transit. |
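The relay mechanics above can be grounded in the Option 82 wire format. A minimal sketch of a DHCP Option 82 (Relay Agent Information, RFC 3046) TLV with a Circuit-ID sub-option — the payload value is illustrative, not the exact encoding an SD-Access Edge Node emits:

```python
import struct

# Option 82 is a TLV containing sub-option TLVs; sub-option 1 is the
# Circuit-ID. The circuit_id payload below is an invented example.
def option82_circuit_id(circuit_id: bytes) -> bytes:
    sub = struct.pack("!BB", 1, len(circuit_id)) + circuit_id  # sub-option 1
    return struct.pack("!BB", 82, len(sub)) + sub              # option 82 wrapper

opt = option82_circuit_id(b"vlan-1021")
assert opt[0] == 82                      # option code
assert opt[4:] == b"vlan-1021"           # payload round-trips intact
```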
Q20 — The Architect Closer
A 2,000-user enterprise campus with 200 VLANs, no existing 802.1X, and a legacy Catalyst 3850 infrastructure wants to deploy SD-Access. Walk me through the full design and phased deployment approach.
Phase 0 — Assessment: Inventory all 3850s against the SD-Access compatibility matrix. Identify which can serve as Edge, Intermediate, or Border Nodes. Map the 200 VLANs to target Virtual Networks (typically 3–5 VNs: corporate, IoT, guest, voice, management). Identify all applications requiring multicast to determine the overlay multicast mode.
Phase 1 — Identity foundation: Deploy ISE in monitor mode. Enable 802.1X on one pilot building in open authentication — no traffic impact, but ISE begins logging endpoint identity. Define an SGT taxonomy aligned with business policy.
Phase 2 — Fabric foundation: Deploy Catalyst Center; onboard Control Plane Nodes and Border Nodes. Connect Border Nodes to the existing distribution layer to maintain reachability. Establish the IS-IS underlay on fabric nodes.
Phase 3 — Phased edge migration: Migrate one access-layer closet at a time. Onboard Edge Nodes via PnP. Migrate endpoints VLAN by VLAN. Validate LISP registration, VXLAN forwarding, and DHCP assignment per EPG.
Phase 4 — Policy enforcement: Switch ISE from monitor to low-impact mode, then closed mode. Enable SGACL enforcement. Validate the inter-SGT policy matrix.
Key rule: Never combine Phase 2 and Phase 3 in the same change window. Never enforce policy before identity coverage exceeds 95% of endpoints.
Key Principles to State in Any SD-Access Interview
| Principle | Why It Matters |
|---|---|
| Control plane failure = forwarding continues | Cached LISP mappings sustain existing flows independently |
| SGT travels in VXLAN GPO field | No SXP needed inside the fabric — only at non-fabric boundaries |
| Anycast gateway = no HSRP/VRRP | Same IP/MAC on every Edge Node — instant failover, seamless mobility |
| VN = macro, SGT = micro | Always distinguish segmentation layers when asked about policy |
| Identity before enforcement | Monitor mode → low impact → closed mode — never skip this sequence |
Approaching the Cisco SD-Access Interview
The 20 questions above share one consistent pattern: every strong answer lives in the reasoning behind the design decision, not the feature name. SD-Access is a rich enough platform that you can always name more components — what interviewers test is whether you understand the why behind each architectural choice, what breaks when a design assumption fails, and how you sequence a production migration without causing an outage.
Lead with the constraint that drives the decision. Acknowledge the alternative approaches. State what you sacrifice and why. That architectural reasoning — more than any CLI command or Catalyst Center screenshot — is what defines a Cisco SD-Access architect in any interview room.
Cisco SD-Access features and platform support evolve across Catalyst Center and IOS-XE releases. Always validate design decisions against the current Cisco SD-Access Design Guide and compatibility matrix for your target software version.