Top 100 Most Asked Questions About Arista Networks
Arista Networks built its name in hyperscale data centers. The switching silicon is fast, the operating system is Unix-based and deeply scriptable, and the company managed to ship consistent software across its entire hardware portfolio at a time when most vendors were running different code on every platform. That combination earned it a permanent seat in the networks of Microsoft, Meta, and most of the major cloud providers.
But Arista is no longer just a data center vendor. CloudVision handles network-wide management and telemetry, the Campus product line puts Arista in branch offices and corporate floors, and the security portfolio has grown substantially. Which means there are a lot more things to learn — and a lot more questions being asked.
Section 1: Arista Networks — The Fundamentals (Q1–15)
Q1. What is Arista Networks?
Arista Networks is a networking company founded in 2004, headquartered in Santa Clara, California. It makes Ethernet switches and routers for data centers, campus networks, and cloud environments. The company went public in 2014. What set Arista apart early on was running a single operating system — EOS — across every hardware platform, at a time when most vendors ran completely different software on different product lines. That consistency made automation and scripting dramatically easier, which is why hyperscale cloud operators adopted it early and heavily.
Q2. What is Arista EOS?
EOS (Extensible Operating System) is the Linux-based network operating system that runs on all Arista hardware. It's built on a standard Linux kernel, with all system state — routing tables, MAC tables, interface configs, spanning tree state — stored in a central database called SysDB. Individual processes (routing daemons, management agents, interface drivers) read and write to SysDB rather than directly accessing hardware. This architecture means a process crash doesn't take down the whole system — the process restarts and re-syncs from SysDB without a network outage.
Q3. What hardware does Arista sell?
Arista's product lines: the 7000 series covers data center switches across a wide range — 7010, 7020, 7050, 7060, 7130, 7170, 7250, 7260, 7280, 7300, 7358, 7368, 7500, 7800 series, among others. Each targets different port densities, speeds (1G through 800G), and use cases (ToR, spine, super-spine, peering router). The 720 and 750 series target campus access and aggregation. The 7130 series is aimed at ultra-low-latency financial trading environments. The R series (7020R, 7280R) handles service provider and WAN routing roles. Choosing the right platform depends on port count, speed requirements, buffering needs, and feature set.
Q4. What is the difference between Arista and Cisco?
A few meaningful differences. EOS is a single codebase for all hardware; Cisco runs IOS, IOS-XE, IOS-XR, and NX-OS depending on which platform you're on — different commands, different behaviors. Arista's CLI is deliberately Cisco-like (which eases migration), but the underlying architecture is different. Arista's switching silicon is almost always merchant silicon (Broadcom, Intel), whereas Cisco mixes custom ASICs with merchant silicon. Arista tends to ship features faster because it isn't carrying the decades of legacy code Cisco maintains. On the other hand, Cisco has a much larger product portfolio — wireless, telephony, SD-WAN, routers, firewalls — where Arista is thinner outside of switching and is still building out those adjacencies.
Q5. What is SysDB in Arista EOS?
SysDB is the central in-memory database that holds all network state on an Arista switch. Every EOS process — the routing daemon, the spanning tree process, the management agent, the CLI — reads and writes through SysDB rather than talking to hardware directly. When you commit a configuration change, the CLI writes to SysDB, which notifies the relevant processes, which then push the change to the forwarding hardware. This decoupled architecture is why Arista can restart individual daemons without disrupting traffic: the state is in SysDB, not in the process itself.
Q6. What is Arista's MLAG?
MLAG (Multi-Chassis Link Aggregation) lets two Arista switches appear as a single logical switch to downstream devices. You connect both switches via a peer-link (typically a 40G or 100G trunk) and configure them in the same MLAG domain, so they present a shared LACP system ID. Downstream switches or servers then run LACP against both Arista switches as if they were one. MLAG gives you active-active redundancy without spanning tree blocking — traffic distributes across both uplinks simultaneously. It was the standard answer to the access-layer redundancy problem before spine-leaf with EVPN became widespread.
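A minimal MLAG sketch for one of the two peers might look like the following — the VLAN, port-channel number, domain ID, and addresses are illustrative, and the second switch mirrors this with the peer addresses swapped:

```
! MLAG sketch (one peer) — all numbers and names are examples
vlan 4094
   trunk group mlagpeer
!
interface Port-Channel10
   description MLAG peer-link
   switchport mode trunk
   switchport trunk group mlagpeer
!
interface Vlan4094
   ip address 10.0.0.1/30      ! the peer switch uses 10.0.0.2/30
!
mlag configuration
   domain-id DC1-MLAG
   local-interface Vlan4094
   peer-address 10.0.0.2
   peer-link Port-Channel10
```

Verify the pairing with "show mlag" — the state should read "active" once both peers see each other over the peer-link.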
Q7. What is Arista's Zero Touch Provisioning (ZTP)?
ZTP lets a factory-fresh Arista switch boot, reach out to a DHCP server, download its startup configuration and EOS image from a server (via HTTP or TFTP), and come up fully configured without anyone touching the CLI. The DHCP server points the switch to a ZTP script (a Python file or a plain config file). This is how large deployments provision hundreds of switches efficiently — you rack and cable them, power them on, and the network configures itself. CloudVision Portal integrates with ZTP to handle the provisioning server role centrally.
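The DHCP side of ZTP is ordinary option 66/67 bootstrapping. A sketch of an ISC dhcpd scope, assuming a hypothetical provisioning server at 192.0.2.10, might look like:

```
# ISC dhcpd sketch for ZTP — subnet, server IP, and filename are examples
subnet 192.0.2.0 netmask 255.255.255.0 {
  range 192.0.2.100 192.0.2.200;
  option routers 192.0.2.1;
  # Option 67 points the booting switch at its bootstrap script or config
  option bootfile-name "http://192.0.2.10/ztp/bootstrap.py";
}
```

The bootfile can be either a finished startup-config or a Python script that the switch executes to fetch its image and configuration.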
Q8. What chipsets does Arista use?
Arista primarily uses Broadcom's Trident and Tomahawk series for data center switching, and Broadcom's Jericho series for routing and service provider platforms. Some platforms use Intel's Tofino programmable ASIC (on the 7170 series), which supports P4-programmable forwarding pipelines. The 7130 series uses Metamako's ultra-low-latency FPGA technology for financial trading applications. Arista's position on merchant silicon means software updates and features follow the silicon roadmap — a trade-off that generally results in faster time-to-market for new port speeds but less differentiation at the hardware level compared to vendors with custom ASICs.
Q9. How is Arista used in cloud environments?
Cloud providers use Arista switches extensively as top-of-rack (ToR) switches in their compute pods, as spine switches in their data center fabric, and as peering routers at internet exchange points. Microsoft's Azure network runs a large portion on Arista hardware. The reason is throughput-per-dollar, software consistency, and the automation-friendly EOS. Cloud operators don't configure switches by hand — they write code that talks to switches via API — and EOS was built for that use case earlier than most competitors.
Q10. What is Arista's Campus product line?
Arista entered the campus market with the 720 series (720XP and 722XP for PoE access, 750 series for aggregation and core). Campus switches run the same EOS as data center switches — which means the same CLI, same automation interfaces, same CloudVision management. This is Arista's main pitch against Cisco Catalyst: one operating system everywhere, instead of learning Catalyst IOS for campus and NX-OS for data center. Campus Wi-Fi access points (the W-series) and the CV-CUE wireless management platform round out the campus portfolio.
Q11. What is Arista's Universal Cloud Network (UCN) architecture?
UCN is Arista's framework for building a network that serves data center, campus, and cloud edge traffic from a consistent architecture and management plane. The idea is that a single CloudVision instance, a single EOS codebase, and a consistent set of automation tools manages the entire enterprise network — rather than having separate tools and teams for each domain. Whether it fully delivers on that promise in practice depends heavily on your specific environment, but the architectural goal is real and the management-plane consistency is genuine.
Q12. What is Arista's relationship with Broadcom?
Arista designs its hardware around Broadcom silicon and co-develops features with Broadcom's roadmap. This is both a strength and a dependency. The strength: Arista gets access to the fastest merchant silicon as it's released. The dependency: capabilities available in EOS are partly bounded by what the underlying Broadcom chip supports — some hardware features that a custom ASIC vendor might offer aren't available. Arista mitigates this by supporting multiple chip generations simultaneously and by offering the Tofino-based platforms where P4 programmability is needed.
Q13. What is Arista's default username and password?
The factory default is the username "admin" with no password (some versions prompt you to set one at first login). On first boot, the switch may be in ZTP mode, where it tries to download a configuration automatically. To abort ZTP and enter manual setup, type "zerotouch cancel" at the CLI prompt during boot. After canceling ZTP, set a password on the admin account immediately ("username admin secret ...") and create named admin accounts — a passwordless admin account is a security gap that should last no longer than the first five minutes of configuration.
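The first-boot sequence, sketched — the account names and the placeholder secrets are examples, not defaults:

```
! At the console during first boot: abort ZTP, then lock down access
zerotouch cancel
enable
configure terminal
username admin privilege 15 secret <strong-password>
username netops privilege 15 secret <strong-password>
end
write memory
```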
Q14. Does Arista support stacking?
Arista doesn't support proprietary switch stacking the way Cisco StackWise does. The equivalent for redundant uplinks is MLAG. For larger deployments, the answer is spine-leaf with ECMP — you get active-active forwarding across all uplinks without stacking at all. Arista has made a deliberate architectural choice here: stacking creates hidden shared-fate scenarios that spine-leaf avoids. For campus wiring closets where traditional stacking is common, Arista's pitch is that MLAG + a clean two-tier design is more resilient, even if it requires rethinking the physical topology.
Q15. What is the Arista support portal and how do I access it?
The Arista support portal is at arista.com — log in with your registered account to access EOS software downloads, field notices, technical documentation, and support cases. You need a valid support contract (Arista's support offering is called A-Care) associated with your account. The support portal also hosts the Arista TOI (Transfer of Information) library — detailed technical documents written by Arista engineers that go deeper than the standard configuration guides. The TOIs are genuinely useful and significantly better than most vendor documentation.
Section 2: EOS — Extensible Operating System (Q16–28)
Q16. How do I upgrade EOS on an Arista switch?
Copy the EOS image to the switch: "copy scp://user@server/path/EOS.swi flash:" or upload via the management interface. Verify the MD5 hash matches what Arista publishes. Set the boot image: "boot system flash:EOS-x.x.x.swi". Then reload. For minimal disruption, use hitless restart where supported, or schedule during a maintenance window. Before upgrading, run "show version" to note the current release, check the EOS release notes for any behavior changes, and verify that any third-party extensions you're using are compatible with the target version. Downgrading follows the same process — just point to the older image.
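The upgrade sequence described above, sketched end to end — the filename and version are examples:

```
! EOS upgrade sketch — filename/version are illustrative
copy scp://user@server/images/EOS-4.32.2M.swi flash:
verify /md5 flash:EOS-4.32.2M.swi      ! compare against the published hash
configure terminal
boot system flash:EOS-4.32.2M.swi
end
write memory
reload
```

After the reload, "show version" should report the new release; "show boot-config" confirms which image the switch will use on the next boot.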
Q17. What is the EOS release naming convention?
EOS versions follow a major.minor.patch format: 4.32.1F, for example. The letter suffix matters. "F" releases are feature releases — they include new features but are less stability-tested. "M" releases are maintenance (or "long-life") releases — they get extended bug-fix backports and are preferred for production environments where stability matters more than feature access. "EFT" releases are early field trials — not for production. For most enterprise deployments, stick to M releases unless you specifically need a feature in an F release, and even then, wait a few months for the .1 or .2 patch of that F release before deploying widely.
Q18. How does EOS configuration work — running vs. startup config?
Same concept as Cisco IOS. Changes made via CLI go into the running configuration immediately and take effect. To make them permanent across reboots, you run "write memory" (or "copy running-config startup-config"). If you reboot without saving, the switch comes back with the last saved startup config. One EOS advantage: "configure session" lets you stage a batch of changes in a named session, preview them, and commit atomically — so all changes apply together or none do. This prevents the partial-configuration states that can happen when applying complex changes interactively.
Q19. What is a configure session in EOS?
A configure session is a transactional configuration method. Enter with "configure session [name]". Make your changes — they go into the session but not yet into the running config. Use "show session-config diffs" to preview exactly what will change. Then "commit" applies everything atomically, or "abort" discards the session entirely. For complex changes (adding a new VRF, reconfiguring BGP peers, changing VLAN assignments), configure sessions reduce the risk of leaving the switch in a half-configured state if something goes wrong partway through.
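A session walkthrough, sketched — the session name, VRF name, and addresses are arbitrary examples:

```
! Stage, preview, then commit atomically
configure session add-vrf-blue
   vrf instance BLUE
   interface Ethernet10
      vrf BLUE
      ip address 10.1.1.1/24
   show session-config diffs    ! preview exactly what will change
   commit                       ! or "abort" to discard everything
```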
Q20. What is EOS's Multi-Agent Routing (MAR)?
EOS traditionally ran a single routing process (ribd) that handled all protocols; newer releases default to the multi-agent model. Multi-agent routing mode runs BGP, OSPF, IS-IS, and other protocols as separate independent processes. This enables features that require protocol isolation: EVPN with VXLAN, BGP VRF routing, advanced BGP features like additional paths, and segment routing extensions. Many CloudVision features require multi-agent mode. To enable: "service routing protocols model multi-agent" in global config, followed by a reboot. Check the release notes for your EOS version — some features only work in multi-agent mode, and a few legacy features only work in the single-agent (ribd) model.
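Enabling it is a one-line change, but note the reboot requirement:

```
configure terminal
service routing protocols model multi-agent
end
write memory
! The routing model change only takes effect after a reboot
reload
```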
Q21. What is Hitless Restart in EOS?
Hitless restart (also called In-Service Software Upgrade or ISSU on some platforms) lets you upgrade EOS software without dropping traffic. The switch's forwarding hardware keeps forwarding based on the existing FIB while the control plane restarts with the new software. Not every platform and feature combination supports ISSU — check the hardware and EOS version compatibility before assuming you can do it. MLAG-based hitless restart is more common: you upgrade one switch in an MLAG pair, traffic shifts to the peer, then upgrade the second. This is more reliable and widely supported across platforms.
Q22. How do I access the EOS bash shell?
From the EOS CLI, type "bash" (requires privilege level 15). This drops you into a standard Linux bash shell on the underlying OS. From bash you can run Python scripts, use standard Linux tools (tcpdump, ping, curl), access the file system, and interact with EOS internals. The "FastCli" binary lets you run EOS commands from bash programmatically. This is also where you'd install EOS extensions (RPM packages) and run Ansible playbooks locally on the device. Exit bash with "exit" to return to the EOS CLI.
Q23. What is the EOS event-handler feature?
Event-handler lets you trigger a script or EOS action when a specific condition occurs. Conditions include: interface going up or down, a specific log message appearing in syslog, BGP neighbor state change, CPU or memory threshold exceeded, or a scheduled time. Actions can be bash scripts or EOS CLI commands. Example use: "when interface Ethernet1 goes down, send an email, suppress a route, and alert the NOC." This is EOS's built-in event-driven automation — no external tool required. For complex logic, the action can call an external script stored on the switch's flash.
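A sketch of the interface-down example — the handler name, interface, delay, and script path are illustrative, and the script itself would live on the switch's flash:

```
! Run a notification script when Ethernet1 changes operational state
event-handler eth1-down
   trigger on-intf Ethernet1 operstatus
   action bash /mnt/flash/notify-noc.sh
   delay 5      ! wait 5 seconds before firing, to ride out flaps
```

Check handler activity with "show event-handler" after a triggering event.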
Q24. What are EOS extensions?
EOS extensions are RPM (or SWIX) packages that install additional software into the EOS environment without modifying the base EOS image. Use cases: install SNMP MIBs, custom monitoring agents, third-party telemetry collectors, or Arista's own add-on packages (like the TerminAttr streaming telemetry agent). Install by copying the package to the extension: filesystem and running "extension [filename]"; to persist it across reboots, run "copy installed-extensions boot-extensions". Extensions survive EOS upgrades only if they're compatible with the new version — the Arista support portal publishes the compatibility matrix for each extension version against EOS versions.
Q25. How do I configure management VRF in EOS?
Arista has a built-in VRF called "Management" (capital M) on most platforms — the management interface (Ma0 or Ma1) is already in this VRF by default. To use it, route management traffic using a default route in the Management VRF: "ip route vrf Management 0.0.0.0/0 [gateway-IP]". DNS, NTP, and other management-plane services need to be told which VRF to use: "ip domain lookup vrf Management source-interface Management1". Similarly, when SSHing from the switch to an external host, specify the VRF: "ssh arista@host vrf Management". Missing the VRF specification in management commands is the most common source of "management is configured but nothing works" tickets.
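A sketch of the VRF-aware management plumbing — the gateway, DNS, and NTP addresses are placeholders:

```
! Management-plane services pinned to the Management VRF
ip route vrf Management 0.0.0.0/0 192.0.2.1
ip name-server vrf Management 192.0.2.53
ntp server vrf Management 192.0.2.123
```

When testing from the switch, remember the same rule applies interactively: "ping vrf Management 192.0.2.1" works where a bare "ping" will not.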
Q26. What is TerminAttr in Arista?
TerminAttr is Arista's streaming telemetry agent. It ships with EOS (and can be upgraded independently as an EOS extension) and streams state data from SysDB — interface counters, routing tables, BGP peer state, optical power levels, CPU, memory — to CloudVision Portal, CloudVision as a Service, or to any OpenConfig/gNMI-compatible telemetry collector (Grafana, InfluxDB, Splunk, etc.). TerminAttr uses gRPC as the streaming transport. It's the mechanism that gives CloudVision its real-time visibility into the network — without TerminAttr running on the switches, CloudVision sees configuration only, not live operational state.
Q27. How does EOS handle VRFs?
VRFs in EOS work the same way as in IOS/NX-OS — each VRF maintains a separate routing table, and interfaces assigned to a VRF only participate in that VRF's routing. Create a VRF: "vrf instance [name]", assign an interface: "vrf [name]" under the interface config, then configure routing within the VRF. Route leaking between VRFs uses BGP or static routes with explicit VRF import/export. EVPN VXLAN uses VRFs extensively — each tenant gets a VRF, and the VXLAN overlay carries traffic between them across the fabric.
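A minimal VRF sketch — names and addresses are examples (older EOS releases use "vrf forwarding" under the interface instead of "vrf"):

```
! Create a VRF, put a routed port in it, give it a default route
vrf instance BLUE
!
interface Ethernet5
   no switchport
   vrf BLUE
   ip address 10.20.0.1/24
!
ip route vrf BLUE 0.0.0.0/0 10.20.0.254
```

Verify with "show ip route vrf BLUE" — the VRF's table is entirely separate from the default routing table.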
Q28. What is Arista's Smart System Upgrade (SSU)?
SSU allows EOS upgrades on modular chassis systems (7300 and 7500 series) with minimal traffic disruption. The upgrade sequence staggers across line cards so that at least one path remains active at all times. SSU requires specific EOS version combinations on source and target, compatible hardware, and a correctly configured topology. It's not a "zero interruption" guarantee — there can be brief sub-second reconvergence events — but it eliminates the full maintenance window reboot that a standard upgrade would require on a chassis running significant traffic.
Section 3: CloudVision — CVP & CVaaS (Q29–40)
Q29. What is CloudVision?
CloudVision is Arista's network management and telemetry platform. It does three things: management (configure, provision, and update EOS devices centrally), telemetry (stream real-time operational state from every switch via TerminAttr and visualize it), and security (network detection and response through the NDR module). It comes as CVP (CloudVision Portal, on-premises) or CVaaS (CloudVision as a Service, cloud-hosted by Arista). CVaaS has become the preferred deployment model for many new projects — it eliminates the overhead of managing CVP infrastructure and receives continuous updates.
Q30. What is the difference between CloudVision Portal (CVP) and CVaaS?
CVP is software you install and run on your own servers (bare metal or VMs in your data center). You manage the OS, the database, the backups, and the upgrades. CVaaS is the same functionality delivered as a cloud service — Arista manages the infrastructure, you just connect your switches to it via TerminAttr and use it. CVaaS is updated continuously with new features; CVP upgrades happen on your schedule (but require your maintenance effort). For organizations with strict data residency requirements or air-gapped networks, CVP is sometimes the only option. For everyone else, CVaaS reduces operational overhead significantly.
Q31. How do I connect a switch to CloudVision?
Install the TerminAttr extension on the switch. Configure TerminAttr to point to the CloudVision IP or CVaaS endpoint, specifying the gRPC port (typically 9910), the device enrollment token (for CVaaS), and the VRF to use for connectivity. Add the switch's serial number to CloudVision's provisioning list. Once TerminAttr connects, CloudVision receives the device's configuration and starts pulling streaming telemetry. The switch shows up in the CloudVision dashboard within a minute or two of TerminAttr connecting successfully. Check "show daemon TerminAttr" on the switch to confirm the agent is running and connected.
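A TerminAttr daemon configuration sketch — the CloudVision address, token path, and exclude list are examples, and the exact flags vary by TerminAttr and CloudVision version, so check the release notes for yours:

```
! Stream telemetry to CloudVision over the Management VRF
daemon TerminAttr
   exec /usr/bin/TerminAttr -cvaddr=198.51.100.10:9910 -cvauth=token,/mnt/flash/cv-onboarding-token -cvvrf=Management -taillogs
   no shutdown
```

"show daemon TerminAttr" then confirms the agent is running; the device should appear in the CloudVision inventory shortly after.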
Q32. What is CloudVision's change control workflow?
Change control in CloudVision lets you stage configuration changes, have them reviewed and approved before deployment, and then execute them with automatic rollback if something goes wrong. The workflow: create a task (a configuration change for a device or group of devices), add it to a change control, assign reviewers, get approval, then schedule and execute. After execution, CloudVision can automatically verify that the expected state was reached — if it wasn't, it rolls back. This turns ad-hoc CLI changes into auditable, reviewable operations — important for environments under change management policies (ITIL, SOC 2, etc.).
Q33. What is CloudVision's telemetry and what can it monitor?
CloudVision streams and stores operational data from every connected switch: interface utilization, error counters, optical transceiver power levels, BGP peer state and prefix counts, MLAG peer status, CPU and memory utilization, VXLAN tunnel state, spanning tree topology, and more. Data arrives via TerminAttr over gNMI (gRPC Network Management Interface) and is stored in CloudVision's time-series database. You can view real-time dashboards, query historical data ("show me the interface utilization on this port over the past 72 hours"), and set alerts for threshold violations.
Q34. What are CloudVision Studios?
Studios are CloudVision's high-level configuration intent templates. Instead of writing raw EOS configuration, you fill in a form-based UI (or provide structured data) for a specific use case — L3LS (Layer 3 Leaf-Spine fabric with EVPN), L2LS (Layer 2 Leaf-Spine), Campus fabric, or WAN routing. Studios generate the EOS configuration for all devices in the fabric based on your inputs, including VXLAN tunnels, BGP peerings, VLAN assignments, and STP settings. This dramatically reduces the configuration effort for large, uniform fabrics. Studios are the recommended approach for new Arista EVPN deployments — they're significantly less error-prone than manually configuring 40 switches.
Q35. What is CloudVision's Network Topology view?
The topology view in CloudVision automatically discovers and maps the physical connectivity between Arista devices using LLDP data. It shows you a live diagram of which switches connect to which, interface-level utilization on each link, and highlights any links in an error or down state. Clicking on a device or link drills down to its current telemetry. For large fabrics with hundreds of switches, this is the difference between understanding your network topology and guessing. Topology data updates in near real-time as links come up, go down, or as new switches are added.
Q36. What is the CloudVision API?
CloudVision exposes a resource-oriented REST API and a resource-streaming API (using gRPC/gNMI). The REST API lets you programmatically manage devices, read telemetry data, create and execute change controls, and query the configuration database. Arista publishes a Python SDK called cvprac that wraps the REST API for easier use. The streaming API (CloudVision Resource API) lets you subscribe to real-time updates for any network state — useful for building custom dashboards, integration with ITSM tools, or triggering external automation when the network state changes.
Q37. What is CVP sizing — how many switches can it manage?
On-premises CVP scales based on server specs. A single CVP node handles up to ~50 devices. A three-node cluster handles up to 200–500 devices depending on telemetry load. Larger deployments use additional cluster nodes. CVaaS doesn't have a practical upper limit for most organizations — Arista scales the backend. The telemetry load matters more than raw device count: a fabric where every switch streams 1,000 paths per second generates significantly more load than one streaming basic interface counters. Arista's sizing guide on the support portal provides formulas based on your expected data rates.
Q38. What is CloudVision's role in ZTP?
CloudVision acts as the ZTP bootstrap server. When a new switch boots and sends a DHCP request, the DHCP server's option 67 (bootfile-name) points it at CloudVision's bootstrap script. The switch downloads its startup config, installs the correct EOS version if needed, and activates its TerminAttr connection to CloudVision automatically. By the time the on-site tech has racked and cabled the switch, powered it on, and called to confirm it's up, CloudVision has already provisioned it. For branch or remote deployments, this eliminates the need for an experienced network engineer to travel to each site.
Q39. What is CloudVision's Continuous Integration/Continuous Deployment (CI/CD) support?
CloudVision integrates with Git-based CI/CD pipelines. The workflow: network engineers write configuration in a Git repository (using Arista's AVD — Ansible Validated Designs — framework), a CI pipeline runs validation and generates device configs, and changes merge through a standard pull request process. CloudVision then picks up the approved config and executes the change control. This brings software development practices — peer review, automated testing, version control, rollback — to network configuration management. AVD + CVaaS is the most common pattern for Arista deployments that take automation seriously.
Q40. What is Arista AVD (Arista Validated Designs)?
AVD is an open-source Ansible collection published by Arista on GitHub. It provides a structured, role-based framework for generating EOS configurations from high-level YAML intent files. You define your topology — spines, leaves, VLANs, VRFs, BGP ASNs — in structured YAML, and AVD generates the complete EOS configuration for every device in the fabric. AVD also includes documentation generation (draws the topology from YAML), configuration backup, and CloudVision integration for deploying the generated configs. The GitHub repository at github.com/aristanetworks/avd is actively maintained and is the community standard for Arista automation.
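To give a flavor of the intent-file approach, here is a fragment in the general shape of AVD's data model — the key names, ASN, and addresses are illustrative, and the exact schema depends on the AVD version, so check the AVD documentation before copying:

```yaml
# Illustrative AVD-style intent fragment — not the exact schema
l3leaf:
  defaults:
    bgp_as: 65100
    uplink_switches: [spine1, spine2]
    uplink_interfaces: [Ethernet1, Ethernet2]
  node_groups:
    - group: pod1_leafs
      nodes:
        - name: leaf1
          id: 1
          mgmt_ip: 192.0.2.11/24
```

From data like this, the AVD Ansible roles render complete per-device EOS configurations, documentation, and optionally CloudVision change controls.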
Section 4: Spine-Leaf, VXLAN & EVPN (Q41–55)
Q41. What is a spine-leaf architecture?
Spine-leaf is a two-tier data center network topology. Leaf switches connect to servers, storage, and external links. Spine switches connect only to leaf switches — never to each other, and never directly to servers. Every leaf connects to every spine (full mesh between the tiers). Traffic between any two servers always passes through exactly two hops: leaf-to-spine-to-leaf. This predictable path length and the use of ECMP across all spine uplinks gives you consistent latency, linear horizontal scalability (add a leaf pair for more server ports, add a spine pair for more bandwidth), and eliminates spanning tree from the core of the network.
Q42. What is VXLAN and why does the data center need it?
VXLAN (Virtual Extensible LAN) tunnels Layer 2 frames inside UDP packets. This lets you extend a Layer 2 segment (a VLAN) across a Layer 3 routed network — so two servers on the same VLAN can communicate even if they're on physically separate leaf switches with only IP connectivity between them. Data centers need this because modern workloads (VMs, containers) need to move between physical hosts without changing IP addresses, and the underlying network infrastructure needs to handle it without reconfiguring VLANs across the entire fabric each time. VXLAN handles up to 16 million segments (24-bit VNI space) versus 802.1Q VLAN's 4,094 limit.
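The "16 million vs 4,094" comparison is just field-width arithmetic, shown here for concreteness:

```python
# 802.1Q carries a 12-bit VLAN ID; VXLAN carries a 24-bit VNI.
vlan_ids = 2**12 - 2   # IDs 0 and 4095 are reserved
vni_ids = 2**24        # full 24-bit VNI space

print(vlan_ids)   # 4094
print(vni_ids)    # 16777216
```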
Q43. What is EVPN and how does it relate to VXLAN?
EVPN (Ethernet VPN) is a BGP address family that distributes Layer 2 and Layer 3 reachability information across the fabric. VXLAN is the data-plane encapsulation; EVPN is the control plane that tells each switch where MAC addresses and IP addresses are located — which VTEP (VXLAN Tunnel Endpoint, i.e., which leaf switch) a particular MAC/IP is reachable from. Before EVPN, VXLAN used multicast or head-end replication for broadcast/unknown unicast traffic, which didn't scale well. EVPN eliminates the need for multicast by distributing MAC/IP bindings via BGP, making the control plane scalable and operationally manageable.
Q44. What is a VTEP in Arista?
VTEP (VXLAN Tunnel Endpoint) is the entity that performs VXLAN encapsulation and decapsulation. On Arista, each leaf switch acts as a VTEP. The VTEP has a loopback IP address that other VTEPs use as the tunnel destination. When a leaf switch receives a frame from a server, it looks up where the destination MAC is (using the EVPN BGP table), encapsulates the frame in a VXLAN/UDP packet addressed to the remote VTEP's loopback IP, and sends it across the underlay IP network. The remote VTEP receives it, strips the VXLAN header, and delivers the original frame to the destination server.
Q45. What is a VNI (VXLAN Network Identifier)?
VNI is the 24-bit segment identifier in a VXLAN header — the equivalent of a VLAN ID, but with 16 million possible values instead of 4,094. Each VNI maps to a tenant network segment. In an EVPN VXLAN fabric, Layer 2 VNIs (L2VNIs) map to VLANs — each VLAN gets a unique VNI for traffic within that segment. Layer 3 VNIs (L3VNIs) map to VRFs — they carry inter-VLAN routed traffic for a specific tenant's routing table across the fabric. The VNI-to-VLAN and VNI-to-VRF mappings must be consistent across all leaf switches in the fabric.
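The mappings live under the Vxlan1 interface. A sketch, with example IDs — the key point is that these mappings must be identical on every leaf:

```
! VLAN-to-VNI and VRF-to-VNI mapping sketch — IDs are examples
interface Vxlan1
   vxlan source-interface Loopback1
   vxlan udp-port 4789
   vxlan vlan 10 vni 10010     ! L2VNI for VLAN 10
   vxlan vrf BLUE vni 50001    ! L3VNI for tenant VRF BLUE
```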
Q46. How do I configure a basic EVPN VXLAN fabric on Arista?
The key steps (simplified): enable multi-agent routing model; configure loopback interfaces on each device with unique IPs; bring up the underlay with BGP or OSPF/ISIS between spines and leaves to make loopbacks reachable; configure the VXLAN interface (interface Vxlan1) on each leaf with source loopback; configure BGP EVPN address family on each device; configure route-reflectors on spines for EVPN; map VLANs to L2VNIs and VRFs to L3VNIs on each leaf; configure the Anycast gateway MAC if using distributed routing. In practice, a real fabric has 60–100 lines of config per device. AVD generates this from YAML — doing it manually is feasible for a lab, tedious for production.
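The steps above, compressed into a heavily abbreviated leaf sketch — ASNs, addresses, and the peer-group name are illustrative, and a real fabric also needs the underlay peerings, VRFs, SVIs, and per-neighbor details (update-source, multihop for eBGP overlays) that are omitted here:

```
! Abbreviated EVPN leaf sketch — illustrative values only
service routing protocols model multi-agent
!
interface Loopback1
   ip address 10.255.1.1/32    ! VTEP source
!
interface Vxlan1
   vxlan source-interface Loopback1
   vxlan vlan 10 vni 10010
!
router bgp 65101
   router-id 10.255.0.1
   neighbor SPINES peer group
   neighbor SPINES remote-as 65000
   neighbor SPINES send-community extended
   !
   address-family evpn
      neighbor SPINES activate
   !
   vlan 10
      rd 10.255.0.1:10010
      route-target both 10010:10010
      redistribute learned
```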
Q47. What is an Anycast Gateway in Arista EVPN?
Anycast Gateway (Arista documentation also calls it the Distributed IP Anycast Gateway) allows every leaf switch in the fabric to present the same IP and MAC address as the default gateway for servers in a given VLAN. When a server sends a packet to its default gateway, any locally attached leaf switch can respond and route the packet — no traffic needs to travel to a centralized gateway device. This gives you fully distributed Layer 3 forwarding: server-to-server routed traffic goes leaf-to-spine-to-leaf without hitting a central router. Configure it with "ip address virtual" under the VLAN interface plus a fabric-wide shared virtual MAC ("ip virtual-router mac-address"); the related VARP feature ("ip virtual-router address") plays a similar role in MLAG designs.
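A sketch of the distributed gateway for one VLAN — the VRF name, addresses, and virtual MAC are examples, and the identical config goes on every leaf serving the VLAN:

```
! Same gateway IP and MAC on every leaf in the fabric
ip virtual-router mac-address 00:1c:73:00:00:99
!
interface Vlan10
   vrf BLUE
   ip address virtual 10.10.10.1/24
```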
Q48. What is a route reflector in an EVPN fabric?
In an EVPN fabric, the spines typically act as BGP route reflectors for the EVPN address family. Route reflectors reduce the number of BGP peerings needed — instead of every leaf peering with every other leaf (on the order of N² peerings), each leaf peers only with the spine route reflectors, and the spines reflect EVPN routes between leaves. Route reflection applies to the iBGP EVPN overlay; the underlay's point-to-point leaf-spine sessions (eBGP or an IGP) don't need reflectors. For a fabric with 4 spines and 20 leaves, you'd have 20×4 = 80 BGP sessions instead of 20×19/2 = 190 full-mesh sessions. The spines do forward VXLAN-encapsulated packets, but only as ordinary underlay IP traffic — they don't act as VTEPs, and for EVPN they are control-plane route reflectors only.
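The session-count arithmetic from the 4-spine, 20-leaf example, for concreteness:

```python
# Overlay BGP session count: spine route reflectors vs a leaf full mesh
spines, leaves = 4, 20
rr_sessions = leaves * spines            # each leaf peers with each spine
full_mesh = leaves * (leaves - 1) // 2   # every leaf pairs with every other leaf

print(rr_sessions)   # 80
print(full_mesh)     # 190
```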
Q49. What is EVPN multihoming in Arista?
EVPN multihoming (defined in RFC 7432) is the EVPN standard replacement for MLAG. It lets a server or upstream device connect to two leaf switches in an active-active LAG, with the redundancy information distributed via EVPN rather than the proprietary MLAG peer-link mechanism. All-Active multihoming distributes traffic across all links simultaneously. Single-Active multihoming uses one link at a time with fast failover. EVPN multihoming scales better than MLAG for large fabrics and doesn't require the MLAG peer-link infrastructure. Arista supports it as "ESI" (Ethernet Segment Identifier) multihoming — each multi-homed LAG gets a unique ESI value advertised via EVPN.
Q50. What is underlay vs. overlay in a VXLAN fabric?
The underlay is the physical IP network between switches — routed interfaces with BGP (or OSPF/ISIS) making all loopback IPs reachable across the fabric. The overlay is the VXLAN tunnels carrying tenant traffic on top of the underlay. The underlay doesn't know or care about tenant VLANs — it just routes IP packets (which happen to be VXLAN-encapsulated). The overlay doesn't know about the physical topology — it just knows that a destination MAC is reachable via a particular VTEP IP. Keeping these two planes separate is what makes the architecture scalable: you can change physical topology (upgrade a spine, add a leaf) without impacting tenant configurations, and vice versa.
Q51. How do I verify VXLAN is working on an Arista switch?
"show vxlan vtep" — lists all known remote VTEPs and the VNIs reachable through them. "show vxlan vni" — shows local VNI-to-VLAN and VNI-to-VRF mappings. "show bgp evpn" — shows the EVPN BGP table with MAC/IP and VTEP information. "show vxlan address-table" — shows the MAC-to-VTEP mapping learned via EVPN (the VXLAN forwarding table). "show interface Vxlan1" — shows the VXLAN interface status and stats. If a remote MAC is missing from the address table, the issue is usually in the BGP EVPN control plane — check "show bgp evpn detail" for the specific route type.
Q52. What is a DCI (Data Center Interconnect) with EVPN?
DCI extends an EVPN VXLAN fabric across two or more geographically separate data centers. Traffic between sites crosses a WAN link while maintaining the same Layer 2 and Layer 3 EVPN fabric semantics. Border Leaf switches (or dedicated DCI devices) at each site peer with the remote site's border leaves via BGP EVPN, extending MAC/IP reachability across the WAN. VXLAN tunnels carry tenant traffic through the WAN path. This enables workload mobility between sites — a VM can move from Site A to Site B without changing its IP address, because the EVPN fabric extends the same L2 segment to both sites.
Q53. What is ECMP and how does it work in a spine-leaf fabric?
ECMP (Equal-Cost Multi-Path) distributes traffic across multiple equal-cost paths simultaneously. In a spine-leaf fabric, each leaf has equal-cost paths to every spine — if there are 4 spines, each leaf has 4 equal-cost BGP routes toward any remote prefix. The leaf hashes each flow across these paths using a 5-tuple hash (source IP, destination IP, protocol, source port, destination port) to ensure packets within a single flow always take the same path (avoiding reordering), while different flows spread across all available paths. Most EOS platforms support at least 64-way ECMP; the exact maximum depends on the hardware. "show ip route [prefix]" and "show ip ecmp detail" let you verify the path distribution.
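The flow-hashing idea can be sketched in a few lines of Python. This is illustrative only: real switches hash in the forwarding ASIC with vendor-specific hash functions and seeds, not SHA-256, but the property is the same — one flow always maps to one path, while different flows spread out.

```python
import hashlib

def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port, next_hops):
    """Pick a next hop for a flow by hashing its 5-tuple.

    Deterministic: the same 5-tuple always returns the same next hop,
    so packets within a flow are never reordered across paths.
    """
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(next_hops)
    return next_hops[index]

spines = ["spine1", "spine2", "spine3", "spine4"]
flow = ("10.0.1.10", "10.0.2.20", 6, 40000, 443)

# Same flow hashes to the same spine every time.
assert ecmp_next_hop(*flow, spines) == ecmp_next_hop(*flow, spines)
```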
Q54. What is a Border Leaf in an EVPN fabric?
A Border Leaf connects the internal EVPN fabric to external networks — internet edges, WAN routers, firewalls, or other data center fabrics. Border leafs redistribute routes between the EVPN fabric's BGP overlay and external routing domains. They're also where traffic inspection can happen — placing a firewall behind a border leaf lets all inter-fabric or internet-bound traffic pass through security inspection without disrupting the internal fabric topology. In some designs, Border Leafs also handle BGP peering with internet providers (acting as edge routers).
Q55. What is a Super-Spine in large data center designs?
When a spine-leaf fabric outgrows the port density of the spine switches, you add a super-spine layer — switches that sit above the spines and interconnect multiple spine-leaf pods. Each spine connects to every super-spine; super-spines don't connect to each other. This creates a three-tier topology (leaf → spine → super-spine) while maintaining the equal-cost path property within each tier. Super-spine designs are common in very large data centers with thousands of server ports where a single two-tier fabric can't provide enough capacity. Arista's 7800 series is purpose-built for the super-spine role.
Section 5: Routing — BGP, OSPF & IS-IS (Q56–67)
Q56. How do I configure BGP on an Arista switch?
Basic BGP config: "router bgp [ASN]", then "neighbor [IP] remote-as [remote-ASN]" for each peer. For eBGP peers, the remote ASN differs from your own; for iBGP, it's the same ASN. For unnumbered BGP (common in spine-leaf underlays), peer on the interface name instead of an IP: "neighbor interface [name] peer group [group]". In multi-agent mode, add "neighbor [peer-group] send-community extended" for EVPN. Verify with "show bgp summary" (session state, prefix counts) and "show bgp neighbors [IP]" (detailed session information including received/sent capabilities).
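Putting those pieces together, a minimal eBGP config might look like this (ASNs and addresses invented):

```
router bgp 65101
   router-id 10.0.250.1
   neighbor 192.0.2.1 remote-as 65001
   neighbor 192.0.2.1 description spine1
   neighbor 192.0.2.1 maximum-routes 12000
```

"maximum-routes" is worth knowing about: EOS applies a per-neighbor prefix limit (default 12,000) and shuts the session down if the peer exceeds it, which surprises people bringing full internet tables into a lab.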
Q57. What is BGP Unnumbered and when should I use it?
BGP Unnumbered establishes BGP sessions using IPv6 link-local addresses discovered via Router Advertisements, without needing to configure IP addresses on the physical interfaces between switches. You just enable IPv6 on the interface and configure the peer using the interface name rather than an IP. This simplifies spine-leaf underlay configuration significantly — no IP address planning for point-to-point links, no subnets to track. It's the standard approach for modern Arista spine-leaf underlay deployments. The BGP peering uses IPv6 link-local as the transport but can carry both IPv4 and IPv6 routes via extended next-hop encoding.
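A sketch of the unnumbered style (ASNs and names invented; the exact syntax varies somewhat across EOS versions):

```
interface Ethernet1
   description to-spine1
   no switchport
   ipv6 enable
!
router bgp 65101
   neighbor UNDERLAY peer group
   neighbor UNDERLAY remote-as 65001
   neighbor interface Et1 peer group UNDERLAY
```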
Q58. What is a BGP peer group in EOS?
A peer group is a named template for BGP neighbor settings. You configure common attributes (remote-AS, update-source, next-hop handling, route maps, timers) in the peer group once, then add neighbors to that peer group. For a spine with 20 leaf neighbors all configured identically, you configure the peer group once and add 20 "neighbor [IP] peer group [name]" statements rather than duplicating 15 lines per neighbor. Peer groups also improve BGP performance at scale — EOS can compute update messages once per peer group rather than once per neighbor.
Q59. How does BGP route filtering work in EOS?
Route filtering uses prefix-lists, route-maps, and community lists applied to BGP neighbors. A prefix-list matches routes by network prefix and length. A route-map sequences conditions and actions (permit/deny, set community, set local-pref). Apply them per neighbor: "neighbor [IP] route-map [name] in" or "out". Route-maps run top-to-bottom; first match wins; implicit deny at the end. In EVPN contexts, you generally don't filter EVPN routes between fabric devices — filtering is applied at border leafs where fabric routes meet external networks. Accidental EVPN route filtering between internal fabric devices breaks MAC/IP reachability in ways that are tricky to debug.
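A small worked example (names and prefixes invented): accept only one customer prefix and raise its local preference.

```
ip prefix-list CUSTOMER-ROUTES seq 10 permit 203.0.113.0/24 le 32
!
route-map FROM-CUSTOMER permit 10
   match ip address prefix-list CUSTOMER-ROUTES
   set local-preference 200
!
router bgp 65101
   neighbor 198.51.100.2 route-map FROM-CUSTOMER in
```

Anything not matched by sequence 10 falls through to the route-map's implicit deny and is dropped from that neighbor.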
Q60. When should I use OSPF vs. BGP vs. ISIS in an Arista fabric?
For data center spine-leaf underlay: BGP (specifically eBGP with unique ASNs per leaf, or a two-AS model) is the current standard at scale because it provides built-in prefix suppression, policy flexibility, and doesn't require area design. OSPF is simpler to configure for smaller fabrics (fewer than ~20 devices) but doesn't scale as gracefully. IS-IS is preferred in some large-scale environments for its fast convergence and loop-free topology calculation, and is common in service provider networks. In practice, most Arista EVPN deployments use BGP for both underlay and overlay — one routing protocol to manage instead of two.
Q61. How do I configure OSPF on Arista?
"router ospf [process-id]", then "network [subnet] area [area-id]" to activate OSPF on interfaces in that subnet, or "ip ospf [process-id] area [area-id]" directly under each interface (the interface method is more explicit). Set router-id manually: "router-id [loopback-IP]". For spine-leaf OSPF underlay, use a single area (area 0) and make all routed interfaces part of it. "show ip ospf neighbor" — verify adjacencies. "show ip ospf interface" — verify OSPF is running on the right interfaces with the right area. Passive interface on any interface not supposed to form OSPF adjacencies: "passive-interface [name]".
Q62. What is BFD (Bidirectional Forwarding Detection) in EOS?
BFD is a rapid failure detection protocol that runs alongside routing protocols. Without BFD, a BGP session might take 30–90 seconds to detect a failed link (depending on hold timers). BFD sends keepalives at sub-second intervals — when they stop, BFD immediately notifies BGP (or OSPF, or static routes) to reroute. Typical BFD timers: 300ms interval, 3x multiplier = 900ms detection time. Configure under the routing protocol: "neighbor [IP] bfd" for BGP. BFD requires both ends to support it — it works between Arista switches and most other vendors' equipment.
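A sketch with the timers from above (addresses invented; timer syntax can vary by EOS version):

```
interface Ethernet1
   bfd interval 300 min-rx 300 multiplier 3
!
router bgp 65101
   neighbor 192.0.2.1 bfd
```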
Q63. What is segment routing on Arista?
Segment routing (SR) is a source-based routing architecture where the path through the network is encoded in the packet header as a list of segments (network instructions). Unlike traditional MPLS, segment routing doesn't require per-flow state in intermediate nodes — labels are assigned to prefixes or policies and distributed via IGP (IS-IS or OSPF with SR extensions). Arista supports SR-MPLS on the 7500R, 7280R, and 7020R platforms. SRv6 (segment routing over IPv6) is supported on platforms with Jericho2 silicon. Segment routing is most relevant for service providers and large enterprises building traffic-engineered WAN backbones.
Q64. How does Arista handle IPv6?
EOS has full dual-stack IPv4/IPv6 support. Configure IPv6 on an interface: "ipv6 address [prefix/length]". Enable OSPFv3 or BGP with IPv6 address family for dynamic routing. IPv6 access-lists work the same as IPv4 ACLs but with "ipv6 access-list" commands. VXLAN over IPv6 underlay is supported on platforms with Jericho2+ silicon. For the spine-leaf underlay, BGP unnumbered using IPv6 link-local is the modern standard. Arista's campus products also support IPv6 for access and distribution layers, with MLD snooping for IPv6 multicast in campus environments.
Q65. What is PIM and when is it needed in an Arista network?
PIM (Protocol Independent Multicast) handles IP multicast routing. It's needed when applications use IP multicast — video streaming, financial market data feeds, some industrial control protocols, VXLAN with multicast replication (the pre-EVPN method). For EVPN VXLAN fabrics, PIM is generally not required — EVPN's ingress replication handles BUM (Broadcast, Unknown Unicast, Multicast) traffic. For campus environments with multicast video distribution, PIM sparse-mode with an RP (Rendezvous Point) is the standard approach. Configure with "ip multicast-routing" globally and "ip pim sparse-mode" on interfaces.
Q66. How do I configure a static route in EOS?
"ip route [destination/mask] [next-hop-IP or interface]". For VRF-specific static routes: "ip route vrf [name] [destination/mask] [next-hop]". Static routes support administrative distance: "ip route 0.0.0.0/0 10.0.0.1 254" sets the default route with AD of 254 (lower than a dynamic protocol's routes, so dynamic takes preference when available). Floating static routes (with higher AD than the dynamic protocol) are useful as backup paths. Verify with "show ip route static" or "show ip route [prefix]".
Q67. What is Arista's Adaptive Virtual Routing (AVR)?
AVR is a feature on some Arista platforms that implements a two-tier route table: a "routing kernel" table for active paths and a "host" table that stores the full Internet routing table in memory rather than hardware, programming only active routes to the forwarding ASIC. This lets smaller-TCAM platforms handle full Internet BGP tables (900,000+ prefixes) that would otherwise overflow the hardware FIB. It's primarily relevant for platforms used as internet peering or edge routers — the 7020R and 7280R series.
Section 6: Automation, eAPI & Programmability (Q68–78)
Q68. What is Arista eAPI?
eAPI (Extensible API) is Arista's HTTP-based management API. It accepts CLI commands sent as JSON over HTTPS (or HTTP for lab use) and returns structured JSON responses. This means any tool that can make HTTP requests — Python's requests library, curl, Postman, Ansible, Terraform — can programmatically run EOS commands and parse the results. Enable it with "management api http-commands" → "protocol https" → "no shutdown" in EOS config. The eAPI Explorer (built into the switch's web UI at https://[switch-IP]/explore) is a browser-based interface for testing API calls interactively before putting them in scripts.
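Under the hood, eAPI speaks JSON-RPC. The helper below builds the request body that the /command-api endpoint expects; actually posting it is left as a comment, since the switch IP and credentials are deployment-specific:

```python
import json

def eapi_request(commands, request_id="1"):
    """Build the JSON-RPC body for eAPI's runCmds method."""
    return {
        "jsonrpc": "2.0",
        "method": "runCmds",
        "params": {"version": 1, "cmds": list(commands), "format": "json"},
        "id": request_id,
    }

body = json.dumps(eapi_request(["show version"]))
# POST this to https://<switch-IP>/command-api with HTTP basic auth,
# e.g. with the requests library:
#   requests.post(url, data=body, auth=("admin", "password"), verify=False)
```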
Q69. What is pyeapi?
pyeapi is Arista's official Python client library for eAPI. It handles authentication, HTTP connection management, and JSON serialization so you can interact with EOS in pure Python. "import pyeapi; node = pyeapi.connect(host='10.1.1.1', username='admin', password='arista', return_node=True); response = node.run_commands(['show version'])". Note that without return_node=True, connect() returns a raw connection object whose method is execute() rather than run_commands(). The response is a Python list of dictionaries (one per command) parsed from the JSON — you can access specific fields directly without parsing CLI text. pyeapi is available on PyPI: "pip install pyeapi". For complex automation scripts, pyeapi is cleaner than raw HTTP/JSON calls.
Q70. How does Ansible work with Arista EOS?
Ansible manages Arista switches via the arista.eos collection (on Ansible Galaxy). The collection includes modules for managing interfaces, VLANs, BGP, ACLs, static routes, and general configuration. Connection is via SSH (the network_cli connection plugin) or eAPI (the httpapi plugin). Define the switch in inventory with "ansible_network_os: arista.eos.eos" and credentials. Playbook tasks look like: "arista.eos.eos_vlans: config: - vlan_id: 100 name: Production". Ansible also integrates with AVD for full fabric automation. The arista.avd Ansible collection on Ansible Galaxy is the starting point for AVD-based automation.
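A minimal playbook in that style (inventory group and VLAN values invented; assumes eAPI connectivity is already set up via the httpapi plugin):

```yaml
- name: Ensure the production VLAN exists on all leaves
  hosts: leaves
  gather_facts: false
  tasks:
    - name: Configure VLAN 100
      arista.eos.eos_vlans:
        config:
          - vlan_id: 100
            name: Production
        state: merged
```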
Q71. What is gNMI and how does Arista support it?
gNMI (gRPC Network Management Interface) is an industry standard protocol for network device management and telemetry streaming. Arista supports gNMI on EOS via the TerminAttr agent. You can use gNMI to get configuration and operational state (Get RPC), set configuration (Set RPC), and subscribe to continuous streaming updates (Subscribe RPC). The Subscribe RPC is what CloudVision and external telemetry platforms use to receive real-time state from EOS. The gNMIc CLI tool is useful for testing gNMI queries against Arista switches directly from a Linux workstation.
Q72. What is OpenConfig and how does EOS use it?
OpenConfig is a vendor-neutral data modeling initiative that defines YANG models for network device configuration and state (interfaces, BGP, routing policies, etc.). EOS supports OpenConfig YANG models via its gNMI interface — you can configure EOS using OpenConfig-formatted data rather than native EOS CLI. This lets multi-vendor automation tools use the same data model across Arista, Juniper, Cisco, and other OpenConfig-supporting platforms. In practice, EOS supports a mix of native EOS models and OpenConfig models — check the specific path you need against Arista's OpenConfig support matrix for your EOS version.
Q73. How do I use Terraform to manage Arista switches?
The Arista CloudVision Terraform provider (available on the Terraform Registry) manages network resources through CVP or CVaaS. You define network resources (configlets, containers, devices) as Terraform resource blocks, and the provider pushes changes via the CloudVision API. For direct EOS management without CVP, the arista/eos Terraform provider interacts with eAPI. Terraform's state file tracks what's deployed, and "terraform plan" shows diffs before applying. The workflow fits naturally into infrastructure-as-code pipelines where network config lives alongside server and cloud infrastructure in the same Git repository and CI/CD pipeline.
Q74. What is NAPALM and does it work with Arista?
NAPALM (Network Automation and Programmability Abstraction Layer with Multivendor support) is an open-source Python library that provides a uniform API across multiple network vendor platforms. The Arista EOS driver uses eAPI as the transport. NAPALM methods like get_interfaces(), get_bgp_neighbors(), load_merge_candidate(), and compare_config() work across Arista, Juniper, Cisco IOS, and NX-OS with identical Python code — you swap the driver and the rest stays the same. NAPALM is widely used in network automation frameworks and integrates with NetBox (as the data source), Ansible, and Nornir.
Q75. What is Nornir and how does it apply to Arista?
Nornir is a Python automation framework that's an alternative to Ansible for network automation — pure Python rather than YAML playbooks. You write Python scripts that use Nornir tasks, which Nornir runs in parallel across your inventory. Combined with NAPALM, Nornir can run operations across 200 Arista switches simultaneously in a few seconds. For engineers comfortable with Python, Nornir is often faster and more flexible than Ansible — you get full Python control flow, exception handling, and data structures rather than Ansible's YAML abstractions. nornir-napalm and nornir-netmiko are the most common task plugins for Arista.
Q76. What is Arista's P4 programmability on the 7170 series?
The 7170 series uses Intel's Tofino programmable ASIC, which supports the P4 data plane programming language. P4 lets you define custom packet processing pipelines — not just configure what the switch does, but reprogram how it processes packets at the chip level. This enables use cases the chip manufacturer never anticipated: custom header parsing, novel load balancing algorithms, in-network telemetry collection (INT), and specialized packet classification for financial trading or scientific computing. P4 programming requires expertise well beyond standard network engineering, but it gives capabilities that no fixed-pipeline switch can match.
Q77. How does Arista integrate with NetBox?
NetBox is the most widely used open-source network source-of-truth and IPAM (IP Address Management) tool. Arista integrates with NetBox in both directions. NetBox-to-Arista: Ansible or Nornir scripts pull device and interface data from NetBox's API and push configuration to EOS switches via eAPI. Arista-to-NetBox: scripts pull operational state from EOS (via eAPI or CloudVision API) and update NetBox records to reflect actual network state — useful for keeping NetBox synchronized with reality. The NetBox community has developed several Ansible roles and Python scripts that make the integration straightforward.
Q78. What is Arista's AI-driven networking strategy (AI Networks)?
Arista has been deliberately building infrastructure for AI training and inference networks — specifically, the high-bandwidth, low-latency backend networks that connect GPU clusters. This means 400G and 800G Ethernet switches with large port buffers (to handle the burst traffic patterns of collective communication in AI training), support for RDMA over Converged Ethernet (RoCE), explicit congestion notification (ECN) tuning, and network telemetry designed for AI traffic analysis. The 7060X5, 7060X6, and 7800 series are the primary platforms Arista positions for AI back-end fabric. This market grew substantially from 2023–2025 as GPU cluster buildouts accelerated.
Section 7: Security, Zero Trust & NDR (Q79–86)
Q79. What security features does EOS support?
EOS includes ACLs (both ingress and egress on interfaces), control-plane protection via control-plane ACLs, 802.1X port authentication with RADIUS or TACACS+ backends, MACsec (hardware-based Layer 2 encryption on supported platforms), DHCP snooping, Dynamic ARP Inspection (DAI), IP Source Guard, and storm control. Role-based access control for management uses privilege levels and AAA (TACACS+ or RADIUS). SSH public key authentication is fully supported. EOS also supports RPKI (Resource Public Key Infrastructure) for BGP route origin validation — important for internet-facing deployments to reject BGP hijacks.
Q80. What is MACsec and which Arista platforms support it?
MACsec (IEEE 802.1AE) encrypts and authenticates Ethernet frames at Layer 2, providing point-to-point encryption on switch-to-switch or switch-to-end-device links. It works at line rate on supported hardware — no performance penalty. Arista supports MACsec on platforms with MACsec-capable PHYs, including the 7060X2 series, 7170 series, and several 720 campus series switches. MACsec requires a key agreement protocol (MKA) for key exchange, typically using 802.1X as the transport. It's commonly deployed for high-security environments (financial firms, government networks) where data must be encrypted even on internal links that could be physically tapped.
Q81. What is Arista's NDR (Network Detection and Response)?
Arista's NDR platform (originally from the Awake Security acquisition in 2020) uses network traffic metadata — flows, DNS queries, TLS certificate details, protocol behaviors — to detect threats that endpoint security tools miss. Instead of just logging traffic, the NDR platform builds behavioral models of normal activity per device, per user, and per application, then flags deviations. Lateral movement, command-and-control communication, and data exfiltration generate characteristic traffic patterns that NDR detects without requiring signature updates. NDR integrates with CloudVision, using the same telemetry infrastructure, so you get security visibility alongside network operations visibility in one platform.
Q82. What is Zero Trust Networking and how does Arista support it?
Zero Trust is the principle that no network position should be implicitly trusted — every connection must be authenticated and authorized regardless of where it originates. Arista supports zero trust principles through 802.1X port authentication (users and devices authenticate before getting network access), CloudVision's visibility layer (continuous monitoring of network behavior), NDR (detecting anomalous behavior from already-authenticated entities), and MACsec (ensuring link-layer integrity). These aren't a monolithic "zero trust product" — they're components you assemble into a layered access control architecture. Arista's zero trust portfolio is strongest on the detection and visibility side.
Q83. How does 802.1X work on Arista switches?
802.1X on EOS works via the "dot1x" configuration. Enable globally: "aaa authentication dot1x default group radius". Enable on an interface: "dot1x pae authenticator" and "dot1x port-control auto". The switch (authenticator) intercepts traffic on the port before authentication, forwards EAPOL frames to the RADIUS server (authentication server), which validates the user/device credentials. On successful authentication, the port is authorized and can pass traffic. The RADIUS server can also return VLAN assignment attributes, so authenticated users land in the right VLAN dynamically. Guest VLANs handle devices that don't support 802.1X.
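The pieces above translate to a config sketch like this (RADIUS server config omitted; interface and VLAN values invented):

```
aaa authentication dot1x default group radius
dot1x system-auth-control
!
interface Ethernet10
   switchport access vlan 10
   dot1x pae authenticator
   dot1x port-control auto
```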
Q84. How do I configure TACACS+ for management authentication on Arista?
"tacacs-server host [IP] key [key]" to define the server. "aaa authentication login default group tacacs+ local" — authenticate logins via TACACS+ first, fall back to local accounts if the server is unreachable. "aaa authorization exec default group tacacs+ local" — authorize privilege levels via TACACS+. "aaa accounting commands all default start-stop group tacacs+" — send command accounting records to TACACS+ (useful for audit trails). Always keep a local fallback account in case the TACACS+ server is unreachable — losing management access to all switches because the authentication server is down is a scenario you should prevent by design.
Q85. What is RPKI on Arista and how do I enable it?
RPKI (Resource Public Key Infrastructure) validates BGP route origin announcements against cryptographically signed Route Origin Authorizations (ROAs) from the five Regional Internet Registries. Arista supports RPKI via an external RPKI validator (such as Routinator, FORT, or OctoRPKI) that the switch connects to using the RTR (RPKI-to-Router) protocol. Configuration lives under "router bgp [ASN]": define the RTR cache (the validator's host and port), then apply policy with route-maps that match the RPKI validity state (valid, invalid, not-found); the exact cache commands vary by EOS version, so check the configuration guide for your release. Dropping "invalid" routes blocks hijacked prefixes from entering your routing table.
Q86. How do I configure ACLs on Arista EOS?
Create a named ACL: "ip access-list [name]", then add permit/deny rules: "permit tcp 10.1.0.0/16 any eq 443". Apply to an interface: "ip access-group [name] in" or "out". EOS evaluates ACL entries top-down; first match wins; implicit deny-all at the end. For control-plane protection (limiting what traffic reaches the switch's CPU): "control-plane" → "ip access-group [name] in". ACL entries can match on source/destination IP, protocol, port ranges, DSCP values, TTL, and TCP flags. Named ACLs support editing individual entries by sequence number without rewriting the entire ACL — an improvement over classic Cisco numbered ACLs.
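A short example tying those pieces together (names and prefixes invented):

```
ip access-list WEB-IN
   10 permit tcp 10.1.0.0/16 any eq 443
   20 permit tcp 10.1.0.0/16 any eq 80
   30 deny ip any any log
!
interface Ethernet5
   ip access-group WEB-IN in
```

Entry 30 duplicates the implicit deny, but adds logging of what's being dropped, which helps during rollout.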
Section 8: Troubleshooting & Common Errors (Q87–95)
Q87. How do I capture packets on an Arista switch?
From the bash shell: "tcpdump -i [interface] -w /tmp/capture.pcap [filter-expression]". Use standard tcpdump filters: "host 10.1.1.1", "port 443", "tcp and not arp". Copy the pcap to a workstation via SCP for analysis in Wireshark. For control-plane traffic (traffic going to the switch CPU): capture on "cpu" interface. EOS also has a built-in packet capture via CLI: "monitor capture [name] interface [int] direction [in/out] filter [ACL-name]" → "monitor capture [name] start" → "monitor capture [name] stop" → "monitor capture [name] export location flash:capture.pcap". The CLI method is useful when bash shell access isn't enabled.
Q88. Why is my BGP session stuck in Active state?
"Active" means the switch is trying to establish a TCP connection to the BGP peer but hasn't succeeded. Common causes: wrong peer IP configured, ACL blocking TCP port 179 between peers, routing issue (the switch can't reach the peer's IP in the routing table), wrong BGP ASN on one side, or update-source misconfiguration when the peer expects a specific source IP. Debug steps: "ping [peer-IP] source [your-loopback]" — can you reach the peer IP? "telnet [peer-IP] 179" — can TCP/179 connect? "show bgp neighbors [IP]" — read the error messages in the output. "show ip route [peer-IP]" — is there a route to the peer?
Q89. How do I collect a tech-support bundle from an Arista switch?
"show tech-support" in the EOS CLI outputs a large text file containing the output of dozens of diagnostic commands. To save it: "show tech-support | gzip > tech-support.gz" and copy via SCP. Arista support cases almost always ask for this file. The bundle includes running config, interface state, routing tables, logs, hardware diagnostics, and process information. Generate it as close to the time of the issue as possible. If the switch panicked or rebooted unexpectedly, the core dump and panic logs in /var/core/ and /var/log/messages are what Arista TAC needs — copy those files separately via SCP from the bash shell.
Q90. How do I debug VXLAN/EVPN issues on Arista?
"show bgp evpn" — review the EVPN BGP table for expected route types (Type 2 MAC/IP, Type 3 IMET, Type 5 IP Prefix). "show vxlan vtep" — confirm remote VTEPs are known. "show vxlan address-table" — check MAC-to-VTEP mappings. "show vxlan vni" — verify VNI-to-VLAN/VRF mappings are correct. "show interface Vxlan1 counters" — check for encap/decap counts to confirm tunnels are actually passing traffic. Most EVPN issues trace back to: BGP EVPN not enabled or not sending communities, VNI mismatch between devices, VTEP loopback not redistributed into the underlay (so tunnels can't form), or multi-agent routing mode not enabled.
Q91. How do I check interface errors on Arista?
"show interfaces [name] counters errors" — shows input/output errors, CRC errors, and runts. "show interfaces [name]" — shows full interface status including duplex, speed, last clearing of counters. "show interfaces counters rates" — real-time utilization rates. For optical interfaces: "show interfaces [name] transceiver" — optical Tx and Rx power levels, temperature, and any alarm states. CRC errors usually indicate a physical layer issue: bad cable, dirty connector, mismatched speed/duplex. For high-speed links (100G+): mismatched FEC (Forward Error Correction) settings between the two ends causes similar symptoms — "show interfaces [name]" shows the negotiated FEC mode.
Q92. How do I check CPU and memory usage on an Arista switch?
"show processes top once" — one-time snapshot of process CPU usage. "show system environment" — temperature, fan, and power supply status. "show version" — includes memory utilization. From bash: "top", "free -m", "df -h" for disk usage. High CPU in the management plane: identify the process consuming CPU from "show processes top". High CPU in the data plane (forwarding ASIC): harder to see directly — use "show platform fwd-health" and platform-specific commands. Sustained high CPU on the management plane often traces to: a logging misconfiguration generating excessive log volume, a runaway monitoring agent, or a BGP table that's too large for the available memory.
Q93. How do I check the MAC address table on Arista?
"show mac address-table" — all learned MACs. "show mac address-table address [mac]" — find a specific MAC. "show mac address-table interface [int]" — MACs learned on a specific port. "show mac address-table vlan [id]" — MACs in a specific VLAN. If a MAC you expect to see isn't in the table, the device either hasn't sent traffic recently (the entry aged out), is in a different VLAN than expected, or has a connectivity issue. The default aging time is 300 seconds — "mac address-table aging-time" adjusts it.
Q94. How does EOS handle unexpected reboots and what should I check?
"show version" includes "Last reload reason" — common values: "Reload command", "Power Cycle", "Kernel Panic". For kernel panics: "show logging" for syslog messages before the reboot, and check /var/log/messages from bash for the full panic trace. Core dumps land in /var/core/ — each crash generates a file there. "show system environment" to rule out thermal issues (overheating causing hardware protection shutdowns). Unexpected reboots should always result in a support case with Arista if they recur — particularly if the "Last reload reason" says anything other than a planned reload or power cycle.
Q95. How do I use "watch" in EOS to monitor changing output?
"watch [interval] show [command]" runs a show command repeatedly at the specified interval (in seconds) and displays the output refreshing on screen — similar to the Linux "watch" command. Example: "watch 2 show bgp summary" refreshes BGP summary every 2 seconds, letting you watch session states change in real time. "watch 1 show interfaces Ethernet1 counters" watches traffic counters update per second. This is extremely useful during maintenance windows, when you're making changes and want to watch the network state respond without repeatedly retyping the show command.
Section 9: Licensing, Certifications & Comparisons (Q96–100)
Q96. How does Arista licensing work?
EOS itself is included with hardware — there's no separate operating system license like some vendors charge. Feature licenses are required for specific advanced capabilities: CloudVision (per device, subscription), 7130 application licenses, and some advanced routing features on certain platforms. Arista also offers "EOS+", a feature license bundle covering some advanced features. Support contracts (Arista's A-Care program) are annual subscriptions covering TAC access, hardware replacement (with tiered response times: next business day, 4-hour, etc.), and software updates. CVaaS pricing is also per-device per-year. Get quotes directly from Arista or an authorized reseller — public pricing isn't published.
Q97. What certifications does Arista offer?
The Arista certification track: ACE-A (Arista Certified Engineer — Associate, entry level, covers EOS basics, configuring switches, and common features), ACE-L1 (Professional level, covers data center networking, routing protocols, VXLAN, and CloudVision), ACE-L2 (Expert level, advanced data center and automation), and ACE-O (Optical, covers Arista's optical networking products). The ACE-A is the right starting point for network engineers new to Arista. Exam registration and study materials are at arista.com/en/training-certification. Arista's free self-paced courses at the same site are genuinely good preparation — better than most vendor training materials.
Q98. How does Arista compare to Juniper?
Juniper runs Junos — also a Unix-based OS with a strong reputation for stability and a commit-based configuration model (Arista's configure sessions are philosophically similar to Junos candidate configurations). Junos has been around longer and has some features (like a more mature policy framework) that EOS has been catching up on. Arista's hardware line is more focused — primarily switching and routing — while Juniper also sells firewalls (SRX), SD-WAN (Session Smart Router), and Wi-Fi (Mist). Juniper's Mist AI platform has gotten strong reviews for campus management. Arista has the edge in hyperscale data center deployments; Juniper has a stronger service provider presence.
Q99. How does Arista compare to Cisco Nexus?
Cisco Nexus runs NX-OS. Arista EOS was explicitly designed to be familiar to Cisco engineers — the CLI is close enough that the learning curve is shorter than switching to Juniper. Technically: Arista runs a single EOS version across all platforms; Nexus runs different NX-OS versions on different hardware (Nexus 9000 vs 7000 vs 5000 have meaningful differences). Arista tends to adopt new port speeds faster (partly because they're not managing custom ASIC development). Cisco Nexus has a deeper campus integration story and broader adjacency with the rest of Cisco's portfolio (ACI for SDN, DNA Center for campus). For pure data center switching, Arista is price-competitive with Nexus and often wins on simplicity of the software stack.
Q100. Is Arista a good choice for a small business or SMB?
Probably not, unless there's a specific technical need driving it. Arista hardware is priced for enterprise and service provider budgets. The operational model (EOS, CloudVision, automation) is powerful but requires network engineers to operate effectively; there's no consumer-grade management interface. For small businesses, Cisco Meraki, Ubiquiti, or even HPE Aruba offer better price-to-operational-simplicity ratios. Arista earns its keep in environments with 20+ switches where automation, telemetry, and consistent software matter: medium to large enterprises, financial institutions, research universities, healthcare systems with large data centers, and cloud operators. If you're managing 5 switches, the Arista value proposition doesn't pay off in most cases.
Key Arista Resources
| Resource | URL / Location |
|---|---|
| EOS Documentation & Software Downloads | arista.com (support portal) |
| Arista Validated Designs (AVD) | github.com/aristanetworks/avd |
| eAPI Explorer (per switch) | https://[switch-IP]/explore |
| pyeapi Python Library | pypi.org/project/pyeapi |
| Arista Training & Certifications | arista.com/en/training-certification |
| CloudVision as a Service | cloudvision.arista.com |
| Arista Community Forum | arista.com/en/community |