‘Connection-First’ Design Crucial for Today’s Complex Data Centers
Taking a step back to look at data centers holistically, not just from the standpoint of the technology involved, is vital to understanding how and why connectivity is essential. The big-picture perspective also spotlights the need for connectivity to figure into design considerations from the moment a data center is conceived. With the world's largest hyperscalers expecting reliable, consistent delivery of high speed and high performance, it's more critical than ever to examine the role connectivity plays as the road over which all connections are made and power travels.
Fundamentally, the function of a data center is to store and manage the critical resources that keep an organization continuously operating. Because of this, reliability, efficiency and security are typically top priorities. But in the rush to design completion, the center's entire connectivity form and function, including the power supply, is often overlooked. That can be a dire mistake: once restrictive elements are locked in, down-the-line issues arise with far fewer resolution options. To most who have designed and deployed high-scale data centers, it comes down to this reality: Connectivity can no longer be an afterthought in data center design.
Times have changed, and the rapidly evolving field of data communications has seen relentless demand for increased throughput and bandwidth. To put this in 40-to-50-year perspective, Moore's Law has slowed. The original observation that the number of transistors on a chip would double every two years now has to be adjusted, as chip-level geometries encounter the challenges of architectures at 7 nanometers and below.
The industry’s response has been to essentially multiply the systems involved, while keeping Moore’s Law alive by interconnecting these systems and recreating the performance levels that would normally be expected from single-chip implementations.
The upshot? Connectivity plays an even more instrumental role in data center design. For example, clever interconnect design can mean that both short reach and longer reach are achieved with the same system design, whether connecting within the rack with passive DAC or connecting multiple racks with active electrical cable or optics.
Redundancy becomes a crucial factor
It’s widely understood that if one system goes down, the continued functioning of the whole requires redundancy based on multiple cards or related systems. So, for example, a backplane connector would need to have multiple line cards plugged into an A1 Switch card.
How does this look from an external I/O perspective? In an application where two top-of-rack (TOR) switches are connected to a single server using a dual-TOR cable, additional functionality is required if redundancy is to be maintained. It’s becoming more and more critical for these applications to consider these connections from the start.
Major market drivers
There are numerous market drivers behind the huge upsurge in data centers and their functionality. AI (artificial intelligence) applications are high on the list, acting as an autonomic nervous system that automatically controls the other applications and equipment around it. Then there’s 5G, which maintains access to high-speed data all the way to the edge, or to the end user’s specific location.
Yet in the design of these complex systems running advanced technologies, connectivity is rarely considered and prioritized right from the beginning, likely because its ‘pain points’ are hard to quantify, given the high impact of one component on another. A clear example is the architecture and its interconnectedness: architected one way versus another, a data center may or may not be cost-effective. And cost is always a key factor!
Further, ignoring design issues at the start invites latency. Properly interconnecting all of these elements means clearly mapping how data flows optimally through multiple boxes and systems with the lowest possible latency. In fact, the retention of signal integrity arguably determines the maximum bandwidth a channel allows. So, in the battle to eliminate latency, connectivity is table stakes.
Addressing the sheer diversity of applications
The diversity of applications, from the unique devices being supported to the interconnects available and the wide range of choices facing the end user, also ranks as a critical design consideration. For example, should a passive DAC cable or an active cable be used? Depending on the answer, a simple linear amplifier, a retimer-based design or an optical solution could be leveraged as alternatives.
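To make the trade-off concrete, the passive DAC vs. active copper vs. optics decision can be sketched as a simple reach-based heuristic. The thresholds below are illustrative assumptions for the sketch, not vendor specifications; real selections would also weigh signaling rate, power and cost.

```python
# Illustrative interconnect-selection heuristic based on link reach.
# The reach thresholds are hypothetical assumptions, not product specs.

def choose_interconnect(reach_m: float) -> str:
    """Pick an interconnect class for a given link length in meters."""
    if reach_m <= 3:
        return "passive DAC"  # in-rack: lowest cost and power
    if reach_m <= 7:
        return "active copper (linear amplifier or retimer)"
    return "optics"  # multi-rack or row-scale reach

print(choose_interconnect(2))   # passive DAC
print(choose_interconnect(5))   # active copper (linear amplifier or retimer)
print(choose_interconnect(30))  # optics
```

A real deployment tool would replace these cutoffs with the qualified reach tables for the specific cables and signaling rates in use.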
These are the key considerations on the external I/O side, but there are considerations on the internal side as well, such as whether to opt for expensive PCB material in lieu of a less expensive alternative such as a ‘BiPass’ cable inside the box.
Technically, as signal integrity becomes paramount, data center and network designers can no longer utilize inexpensive PCB materials. Yet, better-performing PCB materials come at a price premium. BiPass I/O Cable Assemblies reduce the overall signal integrity requirements of the main PCB by transmitting the sensitive high-speed signals via twinax. Compared to PCB materials, BiPass I/O Cable Assemblies dramatically reduce insertion loss between the ASIC and the front panel I/O, creating greater channel margins.
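The channel-margin argument can be illustrated with a back-of-the-envelope loss budget. All per-inch loss figures and the connector allowance below are assumed round numbers for the sketch, not measured data; the point is only that twinax loses far less per inch than PCB laminate over the same ASIC-to-front-panel route.

```python
# Illustrative insertion-loss comparison: PCB trace routing vs. a twinax
# "flyover" route from the ASIC to the front-panel I/O.
# Per-inch losses and connector loss are assumed values, not measurements.

LOSS_DB_PER_INCH = {
    "standard PCB laminate": 1.0,
    "low-loss PCB laminate": 0.5,
    "twinax cable": 0.1,
}

def channel_loss(medium: str, length_in: float, connector_db: float = 1.5) -> float:
    """Total insertion loss: per-inch medium loss plus a connector allowance."""
    return LOSS_DB_PER_INCH[medium] * length_in + connector_db

route_length = 10.0  # assumed inches from ASIC to front panel
for medium in LOSS_DB_PER_INCH:
    print(f"{medium}: {channel_loss(medium, route_length):.1f} dB")
# standard PCB laminate: 11.5 dB
# low-loss PCB laminate: 6.5 dB
# twinax cable: 2.5 dB
```

Even with these rough numbers, the twinax route leaves several dB of extra channel margin, which is the headroom the BiPass approach trades against PCB material cost.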
TCO: driving design factor
Although the BiPass solution might appear more direct, it raises one of the most compelling questions: total cost of ownership, encompassing what’s inside and outside of the box. It is critical to start the discussion by looking at the end-to-end solution and its total cost.
Advice and insight – all part of the value proposition
This type of analysis and modeling has enabled Molex to make effective recommendations to our customers, confident that our in-depth analysis provides effective reasoning in support of certain products or solutions.
This level of insight is made possible through the breadth of our portfolio and our participation in the Open19 Project, which aims to establish a new open standard for data center servers. By combining core technologies with collaboration among industry-leading tech companies, we are proud to offer our customers a portfolio built over decades, which means we can guide customers in their choices, pointing out the best combinations of products for the best data center design solution. This includes, for example, a full end-to-end, impedance-matched, high-performance cabling system, as opposed to just an interconnect schema.
By working collaboratively with customers to overcome challenges, rather than simply offering single point products, Molex has created a point of differentiation. We see this at play at sites around the world, where unplanned expansion has led to real challenges with cable routing, bundling over obstacles and physical impairments, with a huge knock-on effect on airflow that reaches all the way down to the board level. It’s for this reason that designing in connectivity at the outset of a data center project is so fundamental. It enables customers to avoid rigid choices that demand rework, and the considerable time and money needed to rectify wrong choices. Connection-first planning enables data center design success from the word GO.