The Future of Next-Gen Data Centers Has Arrived
Flexibility is the name of the game in a digital-first world, and that demand for flexibility is transforming data centers as we’ve known them for decades. Depending on the latest industry analyst report or news cycle, some experts assert the future of data centers is hyperscale while others believe processing power will be delivered locally, in the cloud and at the network’s edge. Regardless of technologies, topologies and terminologies, here’s one thing everyone agrees on: next-generation data centers must be agile, adaptable, distributed, efficient and intelligent.
According to Gartner, spending on global data center infrastructure is expected to reach $200 billion by the end of this year. This represents a 6% increase after spotty spending in 2020, as many enterprises put infrastructure investments on hold while relying more on public cloud providers to address pandemic-related business shifts and disruptions.
Cloud adoption, which was growing steadily before COVID-19, gained major momentum during the pandemic. According to Flexera’s “2021 State of the Cloud Report,” 92% of the 750 enterprises surveyed currently have a multi-cloud strategy while 90% expect their cloud use to exceed plans due to the pandemic, an increasingly remote workforce and a surge in videoconferencing. The report also reinforces the evolution of public and private cloud adoption, with 43% of those polled leveraging a hybrid strategy to meet business needs.
As companies move from monolithic data center designs to distributed and disaggregated architectures, myriad new challenges emerge. Enterprises of all shapes and sizes are seeking ways to ease transitions to new heterogeneous environments. The ultimate goal: Choose from a menu of compute, storage and networking options to best meet business needs while only paying for what is used.
This wish is reiterated constantly during customer conversations, which center on three recurring themes:
- Cost is always top of mind, going hand-in-hand with energy efficiency, since power is typically the largest operating expense in a data center. Moving from on-premises to hybrid cloud environments can reduce both costs and carbon footprints.
- Performance is paramount, as everything comes down to speeds and feeds. Data-intensive applications, such as Artificial Intelligence/Machine Learning (AI/ML), video streaming and natural language processing, require alternative architectures and optimized software/hardware to reduce computational processing burdens.
- Connectivity is essential for bridging physical and virtual data centers. It’s critical to enable rapid exchange of data across the entire data center ecosystem through seamless integration and on-demand connectivity.
One of the most obvious outcomes of the move to digital infrastructures is the continuous onslaught of data generated by powerful applications boasting unprecedented levels of functionality. This rings true for my colleague Craig Petrie, VP of Sales and Marketing for BittWare, a Molex company specializing in enterprise-class accelerators for edge and cloud-computing applications.
“Customers need to scale many more diverse applications than ever before,” he says. “A decade ago, they likely had 10 applications, such as databases, where the time-to-compute problem wasn’t critical to the user experience. Today, enterprises are dealing with hundreds of deterministic, real-time, low-latency applications that require different user experiences. Acceleration technology is the answer to achieving much-needed performance improvements while increasing energy efficiency, minimizing data movement and reducing costs.”
With the user experience at the center of everything, data center providers need to deliver consistent performance, regardless of whether the application involves complex machine learning workflows or 4K video streaming to a cellphone. Expectations remain high that all data will be delivered on-demand without fail. Meeting that challenge continuously is a major area of ongoing focus and innovation.
Research and Markets estimates that the amount of data generated each year grows by 35% globally, with data in the healthcare industry registering the fastest growth rate. According to the Centers for Disease Control and Prevention, telehealth visits increased by 50% in the first quarter of 2020, with a single week in March showing a 154% jump compared with the same week in 2019. The rapid rise in telehealth applications is a great example of collaboration across the IT ecosystem to ensure maximum service-level consistency.
Need for Speed & Agility
Measuring performance has always been about speeds and feeds, but next-gen data center operators also need to demonstrate how quickly and efficiently they can adapt to new requirements. Modular, composable infrastructures are designed to abstract layers of compute, storage and networking resources from their physical locations, enabling them to be configured and provisioned on-demand. This becomes especially relevant when contemplating the sheer volume of connected devices ingesting, processing, storing and sharing data.
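To make the idea of composable infrastructure concrete, here is a minimal, purely illustrative sketch: resources are tracked as abstract pools rather than fixed physical boxes, and a workload is provisioned from whatever capacity is free and returned when it winds down. All class and parameter names are hypothetical, not any vendor’s API.

```python
from dataclasses import dataclass

@dataclass
class ResourcePool:
    """Hypothetical pool of disaggregated compute, storage and network capacity."""
    cpu_cores: int
    storage_tb: int
    network_gbps: int

    def provision(self, cores: int, tb: int, gbps: int) -> bool:
        """Carve out a slice on demand; refuse if any pool is exhausted."""
        if cores > self.cpu_cores or tb > self.storage_tb or gbps > self.network_gbps:
            return False
        self.cpu_cores -= cores
        self.storage_tb -= tb
        self.network_gbps -= gbps
        return True

    def release(self, cores: int, tb: int, gbps: int) -> None:
        """Return a slice to the pool when the workload is done."""
        self.cpu_cores += cores
        self.storage_tb += tb
        self.network_gbps += gbps

# A workload draws only what it needs from the shared pools.
pool = ResourcePool(cpu_cores=1024, storage_tb=500, network_gbps=800)
pool.provision(cores=64, tb=20, gbps=40)
```

Because allocation and release are decoupled from physical placement, the operator pays for (and powers) only the capacity actually in use, which is the “pay for what is used” goal described above.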
Market researcher Statista estimates the total installed base of Internet of Things (IoT) devices, encompassing connected cars, smart home devices and industrial equipment, will reach 30.9 billion units by 2025. This represents a sharp rise from 13.8 billion units expected this year. As most of this data is distributed, the bulk will be processed at the edge of the network where smart technologies, including transceivers and accelerators, offer flexible performance boosts.
“Part of the trend toward acceleration is the opportunity to use resources more efficiently,” Petrie adds. “If you’re dealing with a massive data set from a huge hyperscale data center, things get more complicated and potentially more expensive quickly. The goal is to improve overall resource utilization across all data center operations.”
Equally important is ensuring heightened levels of resiliency and security made possible by increasingly smart data center cells and software-driven diagnostics. Data center operators have always made uptime a top priority, as the cost of an outage can be catastrophic. When Akamai experienced an Edge Domain Name System (DNS) failure on July 22nd, websites around the world were affected, including Airbnb, Amazon, Fidelity Investments, UPS and more.
As the impact of downtime is measured in minutes and millions, it’s reassuring to know that a fix was expedited, enabling normal operations to resume in less than an hour. Expect continued emphasis on resiliency, especially now that smart hardware can self-diagnose when a problem occurs while redirecting data traffic to a backup port to reduce loss of services.
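The self-diagnosing failover pattern described above can be sketched in a few lines: a link reports its own health, and traffic is redirected to a backup port when the primary shows a fault. This is an illustrative sketch only; the port names and status fields are hypothetical, not a real device’s telemetry format.

```python
def healthy(port_status: dict, port: str) -> bool:
    """A port is usable if it reports link-up and a clean error counter."""
    s = port_status[port]
    return s["link_up"] and s["error_count"] == 0

def select_port(port_status: dict, primary: str, backup: str) -> str:
    """Prefer the primary port; fall back to the backup on a detected fault."""
    if healthy(port_status, primary):
        return primary
    if healthy(port_status, backup):
        return backup
    raise RuntimeError("no healthy port available")

# Example telemetry: the primary link is down, the backup is clean.
status = {
    "eth0": {"link_up": False, "error_count": 3},  # primary has failed
    "eth1": {"link_up": True, "error_count": 0},   # backup is healthy
}
active = select_port(status, primary="eth0", backup="eth1")
```

Here `select_port` chooses `eth1`, so traffic keeps flowing while the failed primary is repaired, which is how self-diagnosing hardware reduces the minutes-and-millions cost of an outage.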
To keep pace with the ever-rising data glut, data center operators are striving to meet ballooning bandwidth and interconnectivity requirements. A decade ago, connectivity was among the last items addressed during build-out or expansion efforts. Now, however, high-density interconnect solutions are being designed into next-gen data centers from the outset.
Along those same lines, it’s crucial to follow and embrace evolving standards efforts to support open communications across heterogeneous compute, storage and networking environments. As an active participant in the Open Compute Project (OCP) with an extensive product line dedicated to Project Open19 compliance, Molex is committed to developing connectivity products that enable enterprises to deploy a diverse set of servers, storage, network accelerators and connectors within data centers.
Having a full portfolio of data center interconnect products is one vital way that Molex demonstrates its commitment to supporting data centers of the future. Another way is through ongoing collaborations with leaders across the data center ecosystem to deliver a full menu of high-speed interconnect solutions. Recent innovations include a “near-ASIC” solution that lets BiPass connectors be placed directly on a chip substrate package, along with new 112G active electrical copper cables and 100G/400G optical transceivers to fuel both intra-data center and data center interconnect demands.
At BittWare, advancements to its IA-Series of Intel Agilex FPGA-based accelerators support the most data-intensive workloads. As a trusted advisor to some of the biggest and most well-known names in data center environments, Molex is embracing the most impactful changes across the data center landscape today to ensure enterprises can achieve the most value tomorrow.