The Impact of Data Center Technology Shifts on Connectivity

As data centers have grown, they have evolved into both physical and virtual infrastructures, and many companies now run hybrid clouds that combine the two, in a wide variety of architectures and configurations. At its most basic, a data center centralizes an organization's computing equipment: data is stored, managed, and disseminated across many devices. A data center houses computer systems and related components, most commonly redundant power systems and data communication connections.

The number of data centers keeps growing, and demand for cloud storage is growing with it. One component that must remain stable is the power connection: a data center's power system requires a modern, interconnected, and reliable infrastructure along with dependable, high-performance connectivity.

The Impact of Data Center Technology Shifts on Connectivity

The first shift is in server technology, and its impact is felt directly in speed: 10 Gbps, 25 Gbps, 40 Gbps, 50 Gbps, 100 Gbps, and beyond. This shift means existing connectors must be improved to deliver higher speeds, universal and flexible interconnection, and speed increases without a corresponding increase in heat output.

The second is the shift in storage, where converging protocols are a megatrend. PCIe switching gives servers access to more drives for higher performance, which means more internal and external cables are needed. Flash storage, in turn, drives improvements in cable sealing and other physical properties.

The third impact comes from 200G/400G uplinks: hyperscale demand is driving their development, and the trend toward lower-loss, larger-scale switching is currently being met by orthogonal and cabled-backplane connection architectures.

As data rates increase, so does signal loss, which shortens the distance a signal can travel reliably. Connectors carrying high data rates must therefore be designed to lose as little signal as possible. In addition, higher data rates and more cables mean more power consumption, and as data centers draw more power, thermal problems follow. In data center connection-system design, optimizing power distribution, reducing heat, and improving the high-speed, low-latency performance of interconnects are the goals every connection system strives for; flexible connectivity and power sit at the core of today's data center connectivity designs.
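The trade-off between data rate and reach can be sketched with a toy calculation: for a fixed channel loss budget, higher attenuation per meter means shorter usable cable length. The budget and attenuation figures below are purely illustrative assumptions, not values from any standard.

```python
# Illustrative sketch: a fixed insertion-loss budget limits copper reach
# as the lane rate (and per-meter attenuation) rises.
# All numbers are hypothetical, chosen only to show the trend.

LOSS_BUDGET_DB = 30.0  # assumed end-to-end insertion-loss budget

# Hypothetical cable attenuation (dB per meter) at each lane rate
attenuation_db_per_m = {
    "10G": 3.0,
    "25G": 6.0,
    "50G": 10.0,
    "100G": 15.0,
}

for rate, att in attenuation_db_per_m.items():
    reach_m = LOSS_BUDGET_DB / att
    print(f"{rate}: max reach ~ {reach_m:.1f} m")
```

Even with made-up numbers, the pattern matches the text: each step up in rate shrinks the distance a passive copper link can cover, which is why connector and cable loss matter more at every generation.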

Data Center Flexible Connectivity

In the data center, decentralized architectures are driving innovation in how traffic is handled, and the trend is clear: flexible data exchange greatly increases the number of connections between racks and servers. For connections between racks, within racks, and even inside most boxes, copper cables are used. Copper solutions are affordable, easy to use, and perform well, which gives them clear advantages.

Data rates can be increased with a single high-speed link or with multiple high-speed links in parallel to provide a higher aggregate rate. In terms of raw speed, most connections can already achieve high data rates. Technological innovation in copper connections focuses on improving cable performance and delivering high-density signals and power with greater efficiency. Copper cables are also being optimized for 400-gigabit bandwidth so they can be used to full effect at higher data rates.
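The single-link versus parallel-link choice comes down to simple multiplication: the same aggregate rate can be reached with one fast lane or several slower ones. A minimal sketch, with lane counts and rates chosen only as examples:

```python
# Minimal sketch: one aggregate rate, two lane configurations.
# Lane counts and per-lane rates are illustrative examples.

def aggregate_gbps(lanes: int, lane_rate_gbps: float) -> float:
    """Total raw throughput of a parallel multi-lane link."""
    return lanes * lane_rate_gbps

print(aggregate_gbps(4, 100.0))  # 4 lanes x 100G
print(aggregate_gbps(8, 50.0))   # 8 lanes x 50G
```

Both configurations reach 400G in aggregate; the parallel option trades more conductors and connector density for a lower, easier-to-engineer per-lane rate.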

These thin cables address the need for ultra-thin, lightweight, highly flexible passive cabling in high-density in-rack applications, solving many data communication cabling problems. Such high-capacity, high-speed I/O interconnect systems, paired with optimized copper cable assemblies, provide a fast, cost-effective alternative to fiber optics for Ethernet, Fibre Channel, and InfiniBand. The thermal performance of the I/O interconnect ports must also be improved, and connection designs should minimize the heat introduced into the data system.

Efficient Power Supply for Data Centers

From the point where power enters the data center to the actual point of use, distribution losses run 10% to 15%. More efficient power connectors and busbar connectors can reduce voltage drop and deliver power more efficiently. Compactness and high density are the most important criteria when data centers choose power connectors.
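Part of that distribution loss is resistive heating in conductors and connector contacts, which a back-of-envelope I²R calculation makes concrete. The load, voltage, and path-resistance values below are hypothetical, used only to show why lowering contact resistance pays off:

```python
# Back-of-envelope sketch of resistive (I^2 * R) loss in a power feed.
# Load, voltage, and resistance values are hypothetical examples.

def distribution_loss_fraction(load_kw: float, voltage_v: float,
                               path_resistance_ohm: float) -> float:
    """Fraction of delivered power dissipated as heat in the conductors."""
    current_a = (load_kw * 1000) / voltage_v
    loss_w = current_a ** 2 * path_resistance_ohm
    return loss_w / (load_kw * 1000)

# A 10 kW rack fed at 48 V: halving the path resistance
# (better connectors/busbars) cuts the resistive loss in half.
print(f"{distribution_loss_fraction(10, 48, 0.02):.1%}")
print(f"{distribution_loss_fraction(10, 48, 0.01):.1%}")
```

The same arithmetic also explains the preference for busbars over many small connectors: fewer, lower-resistance contact points in series mean less I²R loss for the same current.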

Modular connection designs with higher signal density have become a trend. Integrating signal terminals directly into terminal modules lets each module carry more signals, greatly improving signal density. The advantages of modularity do not stop there: the variety of possible combinations also removes many constraints in efficient power-supply design.

Current-carrying capability is equally direct: the current each terminal of a connector can carry determines whether it can meet the higher power and performance requirements of high-power data centers. Likewise, heat dissipation is essential to efficient power delivery and determines whether a connector can fit into compact data center designs.

Summary

The expansion of data center scale requires higher data rate connections and more powerful and efficient power supply support. Keeping up with changing data rates, bandwidth demands, and power demands is an ongoing battle for the various connected systems in a data center.
