Cisco Silicon One G300: 102.4 Tbps ASIC Powering the Next Generation of AI Data Centers
Full technical deep-dive into Cisco's most advanced switching silicon — announced at Cisco Live EMEA 2026
- Overview: What Is the Cisco Silicon One G300?
- G300 Architecture and Technical Specifications
- Intelligent Collective Networking (ICN) — The Key Differentiator
- New Switches: Cisco N9000 and 8000 Series
- New Optics: 1.6T OSFP and 800G LPO
- P4 Programmability — What It Means and Why It Matters
- SONiC and Disaggregation: Cisco Plays the Open Networking Card
- Nexus One and AgenticOps: AI-Driven Network Management
- Competitive Landscape: G300 vs Broadcom TH6 vs Nvidia Spectrum-4
- Who Should Consider the G300? Use Cases and Target Markets
- Availability, Pricing and Ecosystem
- Frequently Asked Questions
Overview: What Is the Cisco Silicon One G300?
On February 10, 2026, at Cisco Live EMEA in Amsterdam, Cisco unveiled the Silicon One G300 — its most powerful custom networking ASIC to date and the latest generation in its Silicon One family. The G300 is a 102.4 Tbps full-duplex standalone switching processor purpose-built for the demands of modern AI data centers: massive GPU clusters, low-latency all-to-all communication, nanosecond-granularity congestion control, and the kind of sustained throughput required to prevent the network from becoming the bottleneck in AI training and inference workloads.
The announcement marks a significant acceleration of Cisco's transformation from a legacy routing and switching hardware vendor into a company positioning itself as a provider of the end-to-end AI infrastructure stack. As Jeetu Patel, Cisco's President and Chief Product Officer, declared at the event: the company is "spearheading performance, manageability, and security in AI networking by innovating across the full stack — from silicon to systems and software."
The G300 is not merely a faster chip. It introduces a new architectural philosophy called Intelligent Collective Networking (ICN), which coordinates behavior across every G300 ASIC in a network fabric — sharing congestion state, telemetry, and load information in hardware time. Combined with a fully unified packet buffer, P4 programmability, and a new generation of liquid-cooled switch platforms, the G300 represents Cisco's most ambitious push into AI infrastructure to date.
G300 Architecture and Technical Specifications
The Cisco Silicon One G300 builds directly on the foundation established by its predecessor, the G200, while delivering a generational leap in raw bandwidth and architectural sophistication. At its core, the G300 is a single monolithic die designed for deterministic, low-latency switching at unprecedented scale.
SerDes and Port Configuration
The G300 integrates 512 × 200 Gbps SerDes developed entirely in-house at Cisco. These serializers/deserializers support both PAM4 (four-level pulse amplitude modulation) and NRZ (non-return to zero) signaling, providing both the ultra-high-speed performance demanded by AI scale-out networks and backwards compatibility with existing 100G and 400G infrastructure.
Those 512 SerDes lanes can be configured in multiple ways. In their maximum configuration, they support 64 × 1.6 Tbps ports — directly enabling connections to the latest generation of AI compute NICs and scale-out switches operating at 1.6T line rate. They can also be broken out into 128 × 800G, 256 × 400G, or 512 × 200G configurations depending on the deployment topology.
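The breakout arithmetic above is easy to sanity-check: every configuration is the same 512 lanes regrouped into equal-width ports. A quick script (the figures are from the announcement; the code only verifies their consistency):

```python
# Back-of-envelope check of the G300's SerDes breakout arithmetic.
SERDES_LANES = 512
LANE_SPEED_GBPS = 200

total_gbps = SERDES_LANES * LANE_SPEED_GBPS
print(f"Aggregate: {total_gbps / 1000:.1f} Tbps")  # 102.4 Tbps

# Each breakout option groups lanes into equal-speed ports:
# 8 lanes -> 1.6T, 4 -> 800G, 2 -> 400G, 1 -> 200G.
for lanes_per_port in (8, 4, 2, 1):
    ports = SERDES_LANES // lanes_per_port
    port_speed = lanes_per_port * LANE_SPEED_GBPS
    print(f"{ports} x {port_speed}G")
```

Running it reproduces exactly the 64 × 1.6T, 128 × 800G, 256 × 400G, and 512 × 200G options listed above.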
GPU Cluster Scale
The G300's high radix — the ability to support a very large number of ports on a single chip — has a dramatic impact on the physical scale of GPU fabrics. A G300 fabric can connect up to 128,000 GPUs using just 750 switches. For context, achieving the same GPU count with the prior generation of silicon would have required approximately 2,500 switches — a 3.3× reduction in switch count that translates directly into lower capital cost, lower power consumption, a smaller physical footprint, and a flatter, simpler network topology.
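Cisco's figures line up with textbook Clos sizing. A sketch of a non-blocking two-tier leaf-spine fabric built from radix-512 chips (this is a generic topology model for illustration, not Cisco's actual deployment design):

```python
# Rough two-tier Clos sizing for a radix-R switch chip, to sanity-check
# the "128,000 GPUs with 750 switches" claim. Generic textbook model.

def two_tier_clos(radix: int):
    """Non-blocking leaf-spine: each leaf splits its ports half down, half up."""
    down_per_leaf = radix // 2   # endpoint-facing ports per leaf
    leaves = radix               # each spine port connects one leaf
    spines = radix // 2          # one spine per leaf uplink
    endpoints = leaves * down_per_leaf
    return endpoints, leaves + spines

# G300 in its 512 x 200G configuration:
endpoints, switches = two_tier_clos(512)
print(endpoints, switches)  # 131072 endpoints, 768 switches
```

The model yields roughly 131,000 endpoints on 768 switches — the same ballpark as the quoted 128,000 GPUs and 750 switches, once real-world oversubscription and reserved ports are accounted for.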
Full Specifications Table
| Parameter | Specification | Notes |
|---|---|---|
| Switching Capacity | 102.4 Tbps full-duplex | Single-chip, standalone |
| On-Chip SerDes | 512 × 200 Gbps (PAM4 / NRZ) | In-house Cisco SerDes |
| Max Port Speed | 64 × 1.6 Tbps Ethernet | 128×800G / 256×400G also supported |
| Packet Buffer | Fully Shared (Unified) | Any port can consume full buffer |
| Load Balancing | ICN Fabric-Wide Path-Based | Nanosecond granularity |
| Telemetry | In-band, hardware-speed | Shared across all G300s in fabric |
| P4 Programmable | Yes | Field-reprogrammable packet processing |
| OS Support | NX-OS, IOS-XR, ACI, SONiC | Full disaggregation supported |
| Max GPUs per Fabric | 128,000 GPUs | Using 750 switches (vs 2,500 prev. gen) |
| Job Completion Time | 28% reduction | vs non-optimized path selection |
| Network Utilization | 33% increase | via ICN vs baseline |
| Announced | February 10, 2026 | Cisco Live EMEA, Amsterdam |
| Availability | H2 2026 | Pricing not yet disclosed |
Intelligent Collective Networking (ICN) — The Key Differentiator
The single most important architectural innovation in the G300 is not the raw bandwidth — 102.4 Tbps is now table stakes at this tier of silicon. What Cisco is betting on is a fundamentally different approach to how a network fabric behaves as a system, not just as a collection of independent forwarding devices. This approach is called Intelligent Collective Networking (ICN).
ICN is composed of three interlocking technologies that work together at hardware speed across the entire G300 fabric.
Pillar 1: Fully Shared (Unified) Packet Buffer
In conventional switch ASICs — including Broadcom's Tomahawk series — packet memory is partitioned between ingress and egress queues. Each port is allocated a fixed portion of the total buffer. When one port is idle and another is experiencing a burst, the idle port's buffer allocation sits unused while the busy port drops packets. This fragmentation is a fundamental architectural constraint.
The G300 eliminates this constraint entirely. Its packet buffer is fully shared and undivided — every port has equal access to the entire buffer pool. As Rakesh Chopra, SVP and Fellow for Cisco Hardware, explained: "Any port can use it. Everyone else fragments memory between input and output ports. We can absorb more bursts without impacting the network."
This matters enormously for AI workloads, which are characterised by violent, highly synchronised traffic bursts — the so-called "incast" problem where hundreds of GPUs simultaneously transmit to a single destination at the completion of a computation step. A fully shared buffer can absorb these microbursts without dropping packets, avoiding the retransmissions that extend job completion time and reduce GPU utilisation.
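The difference between the two buffer models is easy to see in a toy incast scenario. The numbers below (buffer size, burst size, drain rate) are purely illustrative, not G300 specifications:

```python
# Toy model of an incast microburst hitting one egress port, comparing a
# statically partitioned buffer with a fully shared one.

PORTS = 64
TOTAL_BUFFER_KB = 64 * 1024   # hypothetical total packet memory
BURST_KB = 24 * 1024          # synchronized burst aimed at one hot port
DRAIN_KB = 1600               # what that port can transmit during the burst

def dropped(burst_kb, buffer_kb, drain_kb):
    """KB dropped once the backlog exceeds the available buffer."""
    backlog = burst_kb - drain_kb
    return max(0, backlog - buffer_kb)

# Partitioned: the hot port only sees its 1/64 slice of the buffer,
# while the other 63 ports' allocations sit idle and unusable.
per_port_kb = TOTAL_BUFFER_KB // PORTS
print("partitioned drop:", dropped(BURST_KB, per_port_kb, DRAIN_KB))

# Shared: the hot port can borrow the entire pool while others are idle.
print("shared drop:", dropped(BURST_KB, TOTAL_BUFFER_KB, DRAIN_KB))
```

In this sketch the partitioned design drops most of the burst while the shared buffer absorbs it entirely — which is exactly the retransmission-avoidance argument Cisco is making for AI incast traffic.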
Pillar 2: Path-Based Load Balancing with Fabric-Wide Coordination
Standard Equal-Cost Multi-Path (ECMP) routing assigns traffic flows to paths using a hash of the five-tuple (source IP, destination IP, protocol, source port, destination port). This approach is stateless, simple, and fast — but it is blind to actual congestion. Two large elephant flows can hash to the same path, creating a hot spot, while an alternative path sits completely idle.
The G300's load balancing agents operate at a completely different level. Each G300 in the fabric continuously shares telemetry with every other G300, building a real-time map of queue depths, link utilisation, and flow states across the entire network. When a new flow arrives, path selection is based on actual current congestion across all available paths, not a static hash. "There's a lot of telemetry-based analysis happening in hardware time, nanosecond granularity on queues, flows, and link utilization," said Chopra. This is what drives the 33% improvement in network utilisation — paths are used more evenly, and hot spots are avoided before they develop.
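The contrast between the two decision rules can be sketched in a few lines. This is purely a software illustration of the concepts (ICN itself runs in hardware, and the queue depths here are made up):

```python
# Stateless five-tuple ECMP hashing vs congestion-aware path selection.
import hashlib

PATHS = 4
# Simulated per-path queue depths (bytes), as fabric telemetry would
# report them; path 1 is a hot spot, path 3 is idle.
queue_depth = [8_000, 120_000, 40_000, 0]

def ecmp_path(flow):
    """Stateless: hash the five-tuple; congestion is never consulted."""
    key = "|".join(map(str, flow)).encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % PATHS

def congestion_aware_path():
    """Stateful: pick the path with the shallowest queue right now."""
    return min(range(PATHS), key=lambda p: queue_depth[p])

flow = ("10.0.0.1", "10.0.9.9", 6, 40000, 443)  # src, dst, proto, sport, dport
print("ECMP picks:", ecmp_path(flow))               # fixed, possibly the hot path
print("Congestion-aware picks:", congestion_aware_path())
```

ECMP always returns the same path for a given flow regardless of load; the congestion-aware rule steers the flow onto the idle path. ICN's version of this decision additionally uses fabric-wide state shared between chips, at nanosecond timescales no software agent could match.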
Pillar 3: Proactive Network Telemetry
The third pillar is the information backbone that makes the other two work. Every G300 in the fabric acts as both a telemetry consumer and a telemetry producer. In-band telemetry is collected and shared at hardware speed — not at software polling intervals — giving the load balancing system a continuously updated, fabric-wide view of congestion, link health, and flow distribution. When a link fails, the G300 responds and reroutes faster than any software-based control plane could. When congestion is building on a particular path, traffic is shifted proactively rather than reactively.
Together, these three capabilities give the G300 a level of self-awareness and self-optimisation that represents a genuine architectural leap over traditional independent-switch networking.
New Switches: Cisco N9000 and 8000 Series
The G300 ASIC powers a new generation of purpose-built switches spanning both the Cisco Nexus 9000 and Cisco 8000 product families. Cisco offers both 100% liquid-cooled and air-cooled variants, targeting different deployment contexts from hyperscale AI training clusters to enterprise data center fabrics.
Cisco N9364-SG3 — Liquid-Cooled AI Fabric Leaf
The N9364-SG3 is a Nexus 9000 series switch built around the G300 with a 100% liquid-cooled thermal design. It offers 64 ports at up to 1.6T line rate and supports NX-OS, ACI, and SONiC. The liquid-cooled design is the critical differentiator: liquid cooling removes heat with dramatically higher efficiency than air, enabling the N9364-SG3 to achieve approximately 70% better energy efficiency per bit compared to the prior generation of air-cooled switches. Perhaps more strikingly, a single N9364-SG3 delivers the same total bandwidth as six prior-generation systems — the density gain is exceptional.
Cisco 8132 — Liquid-Cooled Spine for Service Providers and AI Fabric
The Cisco 8132 is a Cisco 8000 series platform, running IOS-XR, designed for service provider edge deployments and AI fabric spine roles. Like the N9364-SG3, it offers 64 × 1.6T ports in a 100% liquid-cooled chassis and brings Cisco's carrier-grade IOS-XR feature set to the AI data center spine role. The 8132 is particularly relevant for operators who need to bridge their WAN and AI fabric environments under a single OS.
Air-Cooled N9000 3RU Model
For deployments where liquid cooling infrastructure is not available or practical, Cisco also offers a 3RU air-cooled Nexus 9000 model powered by the G300. This platform provides the same 64 × 1.6T port density in a conventional air-cooled chassis, at the cost of somewhat lower energy efficiency. It supports all four OS options: NX-OS, IOS-XR, ACI, and SONiC, making it the most flexible deployment choice for organisations in the middle of a liquid cooling transition.
| Platform | Cooling | Ports | OS Support | Target Role |
|---|---|---|---|---|
| N9364-SG3 | 100% Liquid | 64 × 1.6T | NX-OS, ACI, SONiC | AI fabric leaf, enterprise data center |
| Cisco 8132 | 100% Liquid | 64 × 1.6T | IOS-XR, SONiC | SP edge, AI fabric spine |
| N9000 3RU (Air) | Air-Cooled | 64 × 1.6T | NX-OS, IOS-XR, ACI, SONiC | General data center, enterprise AI |
New Optics: 1.6T OSFP and 800G LPO
Alongside the G300 silicon and switch platforms, Cisco announced two new optics products that complete the AI networking stack. Both address different aspects of the same core problem: how to move data between GPUs, NICs, and switches at the highest possible bandwidth with the lowest possible power consumption.
1.6T OSFP — Ultra-High-Bandwidth AI Scale-Out Optics
OSFP (Octal Small Form-Factor Pluggable) is the industry's current highest-density optical form factor, supporting up to 8 × 200G lanes in a single module. Cisco's new 1.6T OSFP modules leverage this to deliver 1.6 Tbps of raw optical bandwidth per port — directly matching the G300's maximum per-port speed. These modules target switch-to-NIC links in AI scale-out deployments and support multiple operational modes: 1.6T, 800G, 400G, and 200G depending on the endpoint capability, providing a smooth migration path for organisations upgrading their GPU cluster interconnects incrementally.
800G LPO — Linear Pluggable Optics for 50% Power Reduction
LPO (Linear Pluggable Optics) is a relatively new category of optical transceiver that eliminates the digital signal processing (DSP) retimer chip found in traditional optical modules. Conventional retimed optical modules use a DSP to re-time and re-amplify the electrical signal before converting it to optical — this provides cleaner signal integrity but consumes significant power in the DSP itself.
LPO removes the DSP entirely, converting the raw electrical signal directly to optical. The result is a module that consumes approximately 50% less power than retimed equivalents — a massive saving in a large-scale AI cluster where thousands of optical modules may be deployed. The trade-off is that LPO requires a cleaner electrical signal from the host ASIC, which is why Cisco's in-house 200G SerDes development is directly linked to enabling LPO support: the G300's SerDes is specifically designed to meet the signal quality requirements of linear optical conversion.
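At cluster scale the 50% figure compounds quickly. A back-of-envelope calculation, where the per-module wattage and module count are illustrative assumptions (the 50% reduction is the figure Cisco cites):

```python
# Rough optics power saving from 800G LPO vs retimed modules.
RETIMED_MODULE_W = 15.0   # assumed power of a retimed 800G DSP module
LPO_SAVING = 0.50         # Cisco's stated reduction vs retimed
MODULES = 50_000          # hypothetical large AI cluster

saved_kw = MODULES * RETIMED_MODULE_W * LPO_SAVING / 1000
print(f"Saved: {saved_kw:.0f} kW")  # 375 kW across the optics alone
```

Hundreds of kilowatts recovered from the optics budget alone is power that can be redirected to GPUs, or simply not provisioned, cooled, and paid for.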
"The 800G LPO reduces optical power consumption by 50% versus retimed optical modules — a critical efficiency gain at the scale of modern AI clusters where optics represent a major fraction of total system power." — Cisco Silicon One technical briefing, February 2026
P4 Programmability — What It Means and Why It Matters
One of the G300's most strategically significant capabilities is its support for P4 (Programming Protocol-Independent Packet Processors) — an open-source domain-specific language that allows the packet processing pipeline of a network device to be described, modified, and reprogrammed entirely in software, without changing any hardware.
In a traditional fixed-function ASIC, the packet processing pipeline — how packets are parsed, matched against lookup tables, and forwarded — is hardwired at fabrication time. If a new forwarding feature is needed, it requires a new chip revision. P4 changes this fundamentally: the G300's match-action pipeline is programmable in the field, meaning new forwarding behaviours, new encapsulations, new telemetry collection methods, and new load balancing algorithms can all be deployed via P4 code updates without hardware replacement.
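The core idea is that forwarding behaviour becomes data (table entries plus actions) rather than hardwired logic, so "reprogramming" means installing different tables and actions over the same pipeline. The toy below illustrates the match-action abstraction in Python; it is a conceptual sketch, not P4 syntax and not the G300's actual pipeline (the table key is a hypothetical exact-match stand-in for a real longest-prefix match):

```python
# Conceptual illustration of a match-action table, the abstraction P4
# programs describe. Behaviour lives in the installed entries, not the code.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Packet:
    dst_ip: str
    out_port: int = -1
    dropped: bool = False

@dataclass
class MatchActionTable:
    entries: dict = field(default_factory=dict)   # key -> (action, args)
    default: Callable = lambda pkt: setattr(pkt, "dropped", True)

    def apply(self, key, pkt):
        if key in self.entries:
            action, args = self.entries[key]
            action(pkt, *args)
        else:
            self.default(pkt)   # table miss: drop

def forward(pkt, port):
    pkt.out_port = port

# "Loading a program" amounts to installing tables; a leaf, spine, or
# edge role would install different tables over the same pipeline.
ipv4_lpm = MatchActionTable()
ipv4_lpm.entries["10.0.1.0/24"] = (forward, (7,))

pkt = Packet(dst_ip="10.0.1.0/24")   # toy exact-match key
ipv4_lpm.apply(pkt.dst_ip, pkt)
print(pkt.out_port)  # 7
```

In a real P4 target the parser, tables, and actions are compiled onto the chip's programmable pipeline stages, and the control plane populates entries at runtime; the field-reprogrammability the G300 offers is this same swap of tables and programs, executed at full line rate.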
For the G300, this has direct practical implications. As Chopra explained: "You can take a single system and deploy it on the front-end, spine, or back end, all by deploying some P4 code." A single G300 platform can therefore serve as a leaf switch, a spine switch, or a front-end internet edge device — the role is determined not by the hardware specification but by the P4 program loaded onto the chip. This dramatically simplifies hardware lifecycle management and sparing.
Cisco is also using P4 as a mechanism to offer customisation to hyperscalers and sophisticated enterprise customers. While most customers will consume pre-written P4 programs developed by Cisco, a small number of large customers will write their own: "It's the type of thing that sounds great, but it's complicated. In most of our engagements, we take a feature request and write the code ourselves. But there's a small number of customers that will write their own P4 programs," said Chopra. For those customers, direct P4 access represents genuine competitive differentiation — the ability to implement proprietary forwarding and telemetry behaviours that no off-the-shelf ASIC could match.
SONiC and Disaggregation: Cisco Plays the Open Networking Card
In a move that would have been surprising from Cisco even five years ago, the G300 fully embraces SONiC (Software for Open Networking in the Cloud) — the Linux-based, open-source network operating system originally developed by Microsoft and now maintained as a community project under the Linux Foundation's SONiC project.
SONiC is the OS of choice for hyperscalers and neoclouds that want full control over their networking stack without vendor lock-in. By supporting SONiC on G300-powered switches, Cisco is directly targeting exactly these customers — organisations that have historically avoided Cisco hardware precisely because it came bundled with NX-OS or IOS-XR and required participation in the full Cisco licensing and support ecosystem.
Cisco is offering two distinct disaggregation flavours for the G300:
- Cisco Silicon + Cisco Switch + SONiC: Customers get the G300's performance advantages in a Cisco-built chassis but replace NX-OS/ACI with SONiC as the operating system. This is the simplest path for a hyperscaler wanting to try G300 without changing their SONiC-based automation toolchain.
- Cisco Silicon + Whitebox Hardware + SONiC: The most disaggregated option — Cisco sells the G300 ASIC to an ODM or whitebox hardware partner, which builds a custom chassis, and the customer runs SONiC on top. Cisco benefits from silicon revenue without requiring the customer to adopt any other Cisco hardware or software.
This two-tier disaggregation strategy is a pragmatic acknowledgement that the networking market has changed. Hyperscalers are not going to buy into a fully closed Cisco stack. By offering the G300 silicon for use in SONiC environments and even whitebox hardware, Cisco maximises the addressable market for its custom silicon without abandoning customers who want the full integrated Cisco experience.
Nexus One and AgenticOps: AI-Driven Network Management
The G300 silicon announcement was accompanied by a significant software update to Nexus One — Cisco's unified management platform for data center networking. Nexus One now delivers what Cisco calls a unified management plane that brings together silicon, systems, optics, software, and programmable intelligence as a single integrated solution.
The headline feature of the updated Nexus One is AgenticOps — an AI-driven operations capability built around AI Canvas, a natural language interface that allows network operators to troubleshoot data center fabric issues through guided, human-in-the-loop conversations. Instead of manually navigating CLI outputs, syslog streams, and vendor documentation, engineers can describe a problem in natural language and AI Canvas will guide them through the diagnostic and resolution workflow, pulling together telemetry from across the G300 fabric in real time.
Nexus One with AgenticOps is also designed to address one of the most persistent complaints about hyperscale data center networking: the operational complexity of standing up and scaling large-scale fabrics. The updated platform allows customers to "stand up fabrics faster, scale predictably, and operate securely and efficiently" — targeting enterprises specifically, who have historically been at a disadvantage compared to hyperscalers in terms of available automation tooling and operational expertise.
Competitive Landscape: G300 vs Broadcom TH6 vs Nvidia Spectrum-4
Cisco is entering a competitive silicon market where both Broadcom and Nvidia have established, shipping products. Understanding how the G300 fits into this landscape requires an honest assessment of what it offers, what it lacks, and what the timing implications are.
Broadcom Tomahawk 6
Broadcom's Tomahawk 6 was announced in June 2025 and has been shipping since then — giving it roughly a one-year head start over the G300. It matches the G300's 102.4 Tbps capacity and 512 × 200G SerDes specification. Where it differs is in buffer architecture: Broadcom uses a fragmented ingress/egress buffer model, which Cisco specifically targets with its fully shared buffer claim. Tomahawk 6 also does not offer Cisco-style ICN fabric-wide load balancing — it relies on more conventional ECMP and Broadcom's own congestion management features. The Tomahawk 6 is available now and powers switches from multiple vendors including Arista, HPE Juniper (formerly Juniper Networks), and others in the Broadcom ecosystem.
Nvidia Spectrum-4 (Spectrum-X)
Nvidia's Spectrum-4 chip, marketed as part of the Spectrum-X Ethernet networking platform, operates at a lower switching capacity — 51.2 Tbps — making it a different market tier than the G300 and TH6. However, Nvidia's differentiation is its deep integration with its own GPU ecosystem: Spectrum-X is designed in tight conjunction with ConnectX-8 NICs, SHARP (Scalable Hierarchical Aggregation and Reduction Protocol) in-network computing, and the CUDA/NCCL software stack. For pure Nvidia GPU clusters, Spectrum-X can deliver AI workload optimisations that neither Cisco nor Broadcom can match through silicon integration alone. The question for large deployments is whether those tight integration benefits outweigh the switch count and bandwidth density advantages of a 102.4T fabric.
The Timing Risk
Cisco's biggest competitive vulnerability with the G300 is timing. With Broadcom TH6 already shipping and Nvidia Spectrum-X well-established, the G300's H2 2026 availability means Cisco will be entering a market where customers have already made purchasing decisions for their current generation of AI cluster buildouts. The AI infrastructure spending boom is happening now, not in late 2026. Cisco's bet is that the G300's architectural advantages — particularly ICN and the fully shared buffer — will be compelling enough to justify waiting, or to be designed into the next wave of AI cluster expansion rather than the current one.
| Specification | Cisco G300 | Broadcom TH6 | Nvidia Spectrum-4 |
|---|---|---|---|
| Switching Capacity | 102.4 Tbps | 102.4 Tbps | 51.2 Tbps |
| SerDes | 512 × 200G | 512 × 200G | 256 × 200G |
| Max Port Speed | 1.6 Tbps | 1.6 Tbps | 800 Gbps |
| Packet Buffer | Fully Shared | Fragmented (Ingress/Egress) | Fully Shared |
| Load Balancing | ICN Fabric-Wide | Standard ECMP + Flowlet | RoCE / DCQCN |
| P4 Programmable | Full P4 | Limited (BF-RT) | Yes (Sonic P4) |
| SONiC Support | Yes (native) | Yes (native) | Partial |
| Liquid Cooling | Yes (N9364-SG3, 8132) | Vendor-dependent | Yes (QM9700) |
| GPU Ecosystem | All vendors | All vendors | Optimised for Nvidia |
| Available | H2 2026 | Shipping now (2025) | Shipping now (2023) |
| Key Differentiator | ICN + unified buffer | Ecosystem breadth | Nvidia GPU integration |
Who Should Consider the G300? Use Cases and Target Markets
Cisco is explicitly targeting four customer segments with the G300, each of which has different requirements and different reasons to evaluate the platform.
Hyperscalers and Neoclouds
These are the customers building the largest GPU clusters in the world — the organisations constructing gigawatt-scale AI training fabrics for frontier model development. For them, the G300's SONiC support and whitebox disaggregation options are the entry point. The ICN fabric-wide load balancing and unified buffer are the performance differentiators. The value proposition is simple: if the G300 genuinely delivers 33% better link utilisation and 28% faster job completion compared to a non-optimised baseline, the economics of a 128,000-GPU cluster make the switch cost largely irrelevant relative to GPU utilisation gains. The risk is the H2 2026 timeline and the lack of proven production deployments at launch.
Sovereign Clouds and National AI Infrastructure
Governments and national cloud providers building AI infrastructure under sovereignty requirements have different concerns: they need a trusted supply chain, multi-OS flexibility, and deep integration with existing Cisco infrastructure that many national network operators already rely on. The G300's NX-OS, IOS-XR, and ACI support — alongside SONiC — makes it arguably the most OS-flexible AI networking silicon on the market. For sovereign cloud operators already running Cisco infrastructure for WAN and campus networking, extending that into the AI fabric with G300 represents a compelling operational consistency argument.
Large Enterprises Building Private AI Clusters
Enterprise customers deploying on-premises AI clusters at the 1,000–10,000 GPU scale are a newer and rapidly growing market. These customers need enterprise-grade management (Nexus One / ACI), operational support, and integration with their existing data center infrastructure — not just raw switching performance. The G300's integration with Nexus One and AgenticOps, and its ability to operate within an existing ACI fabric, makes it particularly well-suited for this segment. The air-cooled 3RU model is specifically designed for enterprises that have not yet invested in liquid cooling infrastructure.
Service Providers and Carrier AI Infrastructure
The Cisco 8132 — the IOS-XR variant of the G300 switch — targets service providers building AI-as-a-service and inference cloud infrastructure. For SP operators, IOS-XR compatibility with existing WAN and peering infrastructure is critical. The 8132 positions the G300 at the intersection of the SP data center edge and the AI cluster spine, enabling a single platform to serve both roles.
Availability, Pricing and Ecosystem
Cisco confirmed that G300-powered systems are expected to be commercially available in the second half of 2026. The announcement was made at Cisco Live EMEA on February 10, 2026, giving the industry approximately six months of lead time before general availability. When asked about pricing at the Cisco Live briefing, Cisco declined to share specific figures — a common approach for pre-GA announcements where pricing is still being finalised based on customer engagement and competitive positioning.
The G300 is supported by Cisco's full partner and reseller ecosystem — Cisco's broad channel reach is itself a differentiator compared to merchant silicon providers like Broadcom, who sell silicon but not complete systems. Cisco offers complete validated designs, professional services, TAC support, and lifecycle management across the entire G300 stack, which is a significant consideration for enterprise and sovereign cloud customers who lack the in-house expertise of a hyperscaler.
Frequently Asked Questions
What is the Cisco Silicon One G300?
The G300 is a 102.4 Tbps full-duplex standalone switching ASIC from Cisco's Silicon One family, announced February 10, 2026. It integrates 512 × 200G in-house SerDes, supports 64 × 1.6T port configurations, and introduces Intelligent Collective Networking (ICN) — a combination of fully shared packet buffer, path-based fabric-wide load balancing, and proactive in-band telemetry.
How does the G300 compare to Broadcom's Tomahawk 6?
Both offer 102.4 Tbps with 512 × 200G SerDes and 1.6T port support. The key differences: the G300 uses a fully shared unified packet buffer (Tomahawk 6 uses fragmented ingress/egress buffers), implements ICN fabric-wide load balancing (TH6 uses standard ECMP), and provides full P4 programmability (TH6 offers more limited BF-RT programmability). However, TH6 has been shipping since 2025 — the G300 is not available until H2 2026.
Which operating systems does the G300 support?
The G300 supports NX-OS (Nexus 9000 series), IOS-XR (Cisco 8000 series), ACI (Cisco Application Centric Infrastructure), and SONiC (disaggregated deployments). It also supports whitebox hardware running SONiC for fully disaggregated deployments targeting hyperscalers and neoclouds.
How many GPUs can a G300 fabric connect?
A G300 fabric can support up to 128,000 GPUs using 750 switches — compared to 2,500 switches needed with the prior generation of silicon. The high-radix design of the G300 (up to 512 ports per chip) enables a flatter, simpler network topology that directly reduces latency and switch count at scale.
What does P4 programmability mean on the G300?
P4 is an open-source language for defining packet processing pipelines. On the G300, P4 support means the forwarding behaviour of the chip — parsing, matching, encapsulation, load balancing algorithms, telemetry collection — can all be reprogrammed in the field without hardware replacement. This allows a single G300 platform to serve as leaf, spine, or edge switch depending on the P4 program loaded, and allows sophisticated customers to implement proprietary network features not available in any standard OS.
What is LPO and why does it matter?
LPO (Linear Pluggable Optics) removes the DSP retimer chip from optical transceiver modules, converting the electrical signal directly to optical without digital re-timing. This eliminates approximately 50% of the power consumption of conventional retimed modules. At the scale of a large AI cluster with thousands of optical connections, this represents a very significant reduction in total power consumption — and enables higher port density without exceeding thermal limits.
When will the G300 be available, and at what price?
The G300 is expected to be commercially available in the second half of 2026. Cisco has not publicly disclosed pricing. The announcement was made on February 10, 2026 at Cisco Live EMEA in Amsterdam.