Cisco Moves to Relieve AI Data Center Gridlock With New Chip
As AI clusters grow denser and GPUs multiply, Cisco introduces a 102.4 Tbps switching chip and next-generation optics to prevent data movement from stalling high-cost compute workloads.
Cisco has introduced a new high-capacity networking chip aimed at large AI data center deployments, as enterprises and cloud providers grapple with the challenge of moving massive volumes of data between GPUs. The Silicon One G300, a 102.4 Tbps switching silicon, will power upcoming Cisco N9000 and Cisco 8000 data center systems, the company said on Tuesday.
The announcement highlights how networking is becoming a critical bottleneck in AI infrastructure, particularly for large-scale training and inference workloads where delays in data movement can stall compute jobs.
Cisco is positioning the G300 as a backbone component for dense AI clusters, alongside new hardware designs that include liquid cooling and support for high-density optical connections.
According to Cisco, the G300 is designed to handle bursty AI traffic patterns more efficiently, using techniques such as shared packet buffers, path-based load balancing, and real-time network telemetry. The company claims these features can improve network utilization and reduce AI job completion times in large data center environments, based on internal simulations.
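Cisco has not published the details of its load-balancing scheme, but path-based load balancing in Ethernet fabrics is commonly implemented by hashing a flow's identifying fields to select one of several equal-cost paths, so packets from the same flow stay in order while flows spread across the fabric. The sketch below is a generic illustration of that idea, not Cisco's algorithm; the path names and flow tuple are invented for the example.

```python
import hashlib

def pick_path(flow_tuple, paths):
    """Hash a flow's 5-tuple to choose one of several equal-cost paths.

    Conceptual sketch only: switching silicon does this in hardware,
    and vendors' exact hash functions and rebalancing logic differ.
    Keeping the choice deterministic per flow preserves packet order.
    """
    key = "|".join(str(field) for field in flow_tuple).encode()
    digest = hashlib.sha256(key).digest()
    # Map the hash onto the available paths.
    return paths[int.from_bytes(digest[:4], "big") % len(paths)]

# Hypothetical spine links and one GPU-to-GPU flow.
paths = ["spine-1", "spine-2", "spine-3", "spine-4"]
flow = ("10.0.0.1", "10.0.1.9", 49152, 443, "tcp")
print(pick_path(flow, paths))
```

Because the hash is stable, repeated packets of the same flow always land on the same path; different flows scatter across the spines, which is what raises utilization under bursty AI traffic.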
Alongside the chip, Cisco is rolling out a new generation of fixed and modular N9000 and 8000 Ethernet systems. These include fully liquid-cooled configurations that the company says can deliver significantly higher bandwidth density while reducing energy consumption compared to earlier designs.
Cisco also introduced updated optical modules, including 1.6T pluggable optics and 800G linear pluggable optics, aimed at lowering power draw in AI-heavy networks.
On the software side, Cisco announced updates to Nexus One, its data center networking platform, to simplify the operation of AI infrastructure across on-premises and cloud environments. New capabilities include tighter visibility into AI jobs at the network level and upcoming native integration with Splunk, which Cisco says will help organizations analyze network telemetry without moving sensitive data outside their environments.
“Data movement is becoming as important as compute itself in large AI systems,” said Martin Lund, executive vice president of Cisco’s Common Hardware Group, pointing to the growing need for predictable, congestion-free networking as AI clusters scale.
The Silicon One G300 and the associated systems and optics are expected to ship later this year. Cisco said it is working with partners such as AMD, Intel, NVIDIA, and NetApp to position the hardware within broader AI infrastructure stacks.