An anonymous Cisco competitor commissioned Tolly Group to perform the Cisco UCS testing. Tolly claims that while Cisco promotes UCS blades as having 40 Gbps of capacity, actual throughput delivered during testing was as low as 27 Gbps.
"In recent weeks, we have been asked by one of Cisco's former blade server partners to benchmark the network throughput of Cisco's UCS for an upcoming comparative report. The limitations of Cisco's external switch-based UCS architecture were eye-opening to say the least," Tolly Group founder Kevin Tolly wrote. The full report is scheduled for release Friday.
Analysts and engineers familiar with UCS agree that Cisco's Unified Computing architecture -- which includes blades, chassis, fabric interconnect and extender, management software, and network adapters -- constrains bandwidth in ways that technology from competitors may not, but they say the available throughput is perfectly adequate to support today's applications. They also say that competitors' products have other drawbacks.
"This is not a real-world instance [for testing]," said one engineer, who runs a Cisco shop but hasn't yet implemented UCS. "No one is pumping out 10 gigabits per second." The engineer's own shop is 85% virtualized, runs on a 4 gigabit uplink backbone and has never had a bottleneck problem, he said.Cisco UCS testing: Where bandwidth constraints may be found
The Tolly test points to a number of throughput trouble spots in the unified computing architecture, starting with the fact that each chassis houses more servers than active uplinks and depends on a clumsy external switch.
A single UCS chassis holds eight servers, each of which has its own 10 GbE converged network adapter, but the chassis' fabric interconnect has only four active uplinks to the top-of-rack switch.
"Even though the blades have a theoretical aggregate capacity of 80 Gbps, they all have to communicate to the top-of-rack UCS 6120 XP Fabric Interconnect switch via the maximum of four 10 GbE uplink ports of the UCS 2104 Fabric Extender," Tolly wrote. "The UCS chassis can accept a second fabric extender, but Cisco documentation makes it clear that this second unit is for failover only and cannot be used to provide 80 Gbps of uplink connectivity."
Is Cisco unified computing architecture static?

Once the throughput is essentially cut in half by "eight servers vying for four available uplinks," the system does not then automatically aggregate the 40 Gbps of bandwidth across the servers as needed, Tolly said in his report.
"Cisco documentation informs us that a given server is 'pinned' to a given uplink port. 'Pinned' as in static, can't move, can't change, maxed out," he wrote.
Thumbing through Cisco's UCS user guide, Tolly found the following passage about halfway through: "The pinning determines which server traffic goes to which server port on the fabric interconnect. This pinning is fixed. You cannot modify it. As a result, you must consider the server location when you determine the appropriate allocation of bandwidth for a chassis."
Not only does "pinning" run counter to the very idea of automated virtual machine migration, according to Tolly, but it also introduces further bandwidth limitations.
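Cisco's guide does not spell out the slot-to-uplink layout in the passage above, but conceptually the static pinning Tolly describes boils down to a fixed lookup table. The round-robin mapping below is a hypothetical Python illustration of that idea, not Cisco's documented assignment:

```python
# Conceptual illustration only: static "pinning" amounts to a fixed
# server-slot -> uplink table. This round-robin layout is a hypothetical
# example, not Cisco's documented assignment.

ACTIVE_UPLINKS = 4

def pinned_uplink(server_slot: int) -> int:
    """Return the uplink (1-4) a chassis slot (1-8) is statically tied to."""
    return ((server_slot - 1) % ACTIVE_UPLINKS) + 1

pinning_table = {slot: pinned_uplink(slot) for slot in range(1, 9)}
print(pinning_table)   # e.g. {1: 1, 2: 2, 3: 3, 4: 4, 5: 1, 6: 2, 7: 3, 8: 4}

# Because the table is fixed, two busy servers that land on the same uplink
# must share its 10 Gbps -- the contention Tolly measures below.
```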
Tolly first benchmarked the throughput between two physical servers "pinned" to different uplinks with positive results. Out of the maximum of 20 Gbps bi-directional, he measured 16.7 Gbps of application throughput, not counting lower-layer protocols.
But when the same test was conducted between two physical servers that were tied to the same uplink, there was contention between the two servers as traffic transited the top-of-rack switch.
"The aggregate throughput dropped to 9.10 Gbps out of a possible 20 Gbps," Tolly wrote. "Further tests, and common sense, showed that the best throughput would be achieved when two pairs of servers pinned to the four different links communicated 1 to 2, and 3 to 4. That got us about 36 Gbps. That's great, but what about your other four servers?"
When Tolly added two more physical servers and requested additional bandwidth, contention caused significant throughput degradation.
"By the time we reached the limits of our test requesting an additional 20 Gbps of bandwidth, Cisco's total system throughput had dropped to just 27 Gbps," he wrote.
Will Cisco UCS bandwidth constraints become a real problem?
When Cisco first set out to design UCS, developers didn't anticipate the speed of virtualization uptake and its bandwidth requirements, said Joe Skorupa, research vice president at Gartner. For now, Cisco's UCS far exceeds the needs of most shops.
"As virtualization proceeds over the next 18 months, the bandwidth requirements will increase," he said, adding that eventually Cisco will have to address the problem.
The company would have done better to design around the problem from the start just to avoid these types of claims, Skorupa said. After all, the networking giant is the "new kid on the block" and faces an exhausting battle against its competitors. But he said it's likely that Cisco's next iteration of equipment will address the problem.
Cisco declined to address Tolly's test results directly, saying: "Cisco did not authorize or actively participate in the Tolly Group test, therefore we cannot accept or validate its conclusions. Further, Cisco cannot comment on test reports that have been paid for by another vendor."
Tolly contacted Cisco to participate during the course of testing, but Cisco declined.