
Connectors, cables and transceivers! Oh my!

Anthony Constantine | September 2025

For the last 3 years, I have been the chair of a technical work group called SFF (note that SFF is not an acronym). SFF’s charter is to “develop technical specifications for storage media, storage networks and pluggable solutions that complement existing industry standards work that encompass cables, connectors, form factor sizes and housing dimensions, management interfaces, transceiver interfaces, electrical interfaces and related technologies.” That is a long way of saying that these representatives from tech companies specify how to build things that connect things together. This group has been around for 35 years and defines a lot of server-based storage form factors we use today, such as the 2.5” HDD/SSD, 3.5” HDD and EDSFF.

While this group defines SSD form factors, most of its specifications cover connectors and cables of various types, transceivers and other specs that call out system peripheral behaviors. So why does a person who works on storage care about this? An SSD sits in a server somewhere holding data that eventually needs to get to an end user. That means if I want the data, it has to move from the SSD to a connector, through a cable to another connector, get routed across some sort of board, hit some sort of processing unit, go back across the system board to another connector, eventually reach a transceiver and then repeat this sort of hopping. While the development of each of these components happens independently, there will always be some sort of limiter. It could be throughput, power, cost, space or some combination of these.
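
To make that limiter idea concrete, here is a minimal sketch in Python. Every number in it is a placeholder assumption of mine, not a spec for any real component; the point is only that the end-to-end path moves data no faster than its slowest hop:

```python
# Toy model of the hop chain described above: end-to-end throughput is
# capped by the slowest hop. All numbers are made-up placeholders,
# not specs for any real component.
path_gbps = {
    "SSD media": 55.0,
    "SSD connector": 64.0,
    "cable": 64.0,
    "board trace to xPU": 50.0,  # the limiter in this made-up example
    "transceiver": 80.0,
}

bottleneck = min(path_gbps, key=path_gbps.get)
print(f"End-to-end: {path_gbps[bottleneck]} Gb/s, limited by the {bottleneck}")
```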


Figure 1: SFF-TA-1002 1C and 4C+ connectors (reference: https://snia.org/sff/specifications)

Problem solving: Here are a few examples of these limiters and how SFF addresses them:

Connector for SSDs and network interface cards (NICs): EDSFF SSDs and the OCP-defined OCP NIC 3.0 both use the same connector (called SFF-TA-1002, shown in Figure 1). This connector is critical because it can ultimately limit the throughput of our devices. The connector was ready to support PCIe® 6.0 so that customers could adopt it. Recent additions driven by OCP NIC 3.0 usage, such as burst current allowances, extend the connector's capability and help NICs carry out higher-performance operations.
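
For a rough sense of what the connector has to keep up with, this back-of-envelope sketch (my own approximation, dividing raw signaling by 8 and ignoring encoding and protocol overhead) shows how the per-lane transfer rate translates into x4 SSD bandwidth across PCIe generations:

```python
# Back-of-envelope PCIe bandwidth for an x4 link (common for EDSFF SSDs).
# GB/s here is raw signaling / 8; real throughput is somewhat lower once
# encoding and protocol overhead are included.
LANES = 4

pcie_gt_per_lane = {"PCIe 4.0": 16, "PCIe 5.0": 32, "PCIe 6.0": 64}

for gen, gt in pcie_gt_per_lane.items():
    print(f"{gen}: {gt} GT/s per lane -> ~{gt * LANES / 8:.0f} GB/s per direction")
```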


Figure 2: SFF-TA-1016 and SFF-TA-1035 Hybrid orthogonal connector (reference: https://snia.org/sff/specifications)

Processing unit (xPU)/switch to SSD: I mentioned this in the blog "U.2 had a good run. It's time to move on to EDSFF." As we go up in throughput, new signal integrity challenges pop up, and they need to be solved before SSD throughput can increase. One solution is to remove some of the signal integrity limiters. We did this by creating a hybrid orthogonal connector (Figure 2). This connector is oriented so that an SSD plugs in on its narrow side, allowing for better airflow. The connector's high-speed PCIe signals are cabled to another connector that can be placed close to the xPU or switch, which permits better signal integrity and, therefore, faster speeds such as those in PCIe 6.0.
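
A toy loss-budget calculation shows why moving the high-speed signals into a cable helps. The dB figures below are illustrative assumptions of mine, not values from SFF-TA-1016 or SFF-TA-1035; real losses depend on materials, geometry and frequency:

```python
# Toy insertion-loss budget: cabling the high-speed signals to a
# connector near the xPU/switch trades lossy PCB trace for much
# lower-loss cable, at the cost of an extra connector. All dB values
# are illustrative assumptions, not spec numbers.
TRACE_LOSS_DB_PER_IN = 1.0   # assumed lossy PCB trace
CABLE_LOSS_DB_PER_IN = 0.15  # assumed twinax cable
CONNECTOR_LOSS_DB = 1.0      # assumed per mated connector

def channel_loss_db(trace_in, cable_in, connectors):
    return (trace_in * TRACE_LOSS_DB_PER_IN
            + cable_in * CABLE_LOSS_DB_PER_IN
            + connectors * CONNECTOR_LOSS_DB)

# 12 inches routed entirely on the board (one connector at the SSD) vs.
# mostly cabled, with an extra connector landing next to the xPU/switch.
print(f"board-routed: {channel_loss_db(12, 0, 1):.1f} dB")
print(f"cabled:       {channel_loss_db(1, 11, 2):.1f} dB")
```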

Sled to top-of-rack (TOR) switch or rack to rack: As Ethernet speeds have increased, transceivers need to keep up to meet throughput demands, and SFF defines the mechanical components for this. We recently published an update to the QSFP224 specifications, which increased the maximum speed of the connector and mechanical elements to 224 Gb/s per lane, allowing for a maximum of 800 Gb/s through a single connector. The challenges here are both electrical and mechanical: the mechanical tolerances for the plug and receptacle, and how they latch together, have to be tightened to improve the electrical performance. Without changes to these specs, the throughput per SSD or the number of SSDs per sled would be limited.
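
In round numbers, the lane math works out as follows. Treating 200 Gb/s as the usable Ethernet rate per 224 Gb/s-class lane is my approximation that folds in FEC and protocol overhead:

```python
# QSFP224 lane math: 224 Gb/s is the raw per-lane signaling class; each
# lane carries roughly a 200 Gb/s Ethernet lane once FEC and overhead
# are accounted for, so a 4-lane connector yields 800 Gb/s Ethernet.
SIGNALING_GBPS_PER_LANE = 224
ETHERNET_GBPS_PER_LANE = 200  # approximate usable rate per lane
LANES = 4

print(f"raw:    {SIGNALING_GBPS_PER_LANE * LANES} Gb/s across the connector")
print(f"usable: {ETHERNET_GBPS_PER_LANE * LANES} GbE per connector")
```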


Figure 3: SFF-TA-1027 QSFP224 1x1 Cage and Connector (reference: https://snia.org/sff/specifications)

Next problem: While SFF has addressed the problems above, the hunger for higher speeds has not stopped. With AI advancements, data movement has become even more critical. Over the next year, work relating to PCIe 7.0 will need to begin to ensure that SSDs, cables and connectors can all support the needed throughput. Work on 448 Gb/s Ethernet and alternative protocols is also needed to support connections between racks, sleds and xPUs, and SFF has started work to help get this moving. While I work for a memory and storage company, solving these problems puts our customers in a position to have the right system pieces in place to utilize our products.
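
Extending the same back-of-envelope math forward: these are my projections from the lane-rate doubling pattern, not spec values, since both efforts are still in progress:

```python
# Forward-looking projections only; PCIe 7.0 and 448 Gb/s Ethernet work
# is still in progress, so none of these are spec values.
LANES = 4

pcie7_gt_per_lane = 128  # PCIe 7.0 target: 128 GT/s per lane
print(f"PCIe 7.0 x4 SSD: ~{pcie7_gt_per_lane * LANES / 8:.0f} GB/s per direction")

# If a 448 Gb/s-class lane carries a 400 Gb/s Ethernet lane (assumed,
# following the 224 Gb/s -> 200 GbE pattern above):
print(f"4 x 448G-class lanes: ~{400 * LANES} GbE per connector")
```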

Distinguished Member of Technical Staff, Micron Core Data Center Business Unit

Anthony Constantine

Anthony Constantine is a Distinguished Member of Technical Staff in Micron's Core Data Center Business Unit (CDBU), where he is responsible for Micron's storage standards work. He is a Storage Networking Industry Association (SNIA) board member and serves as co-chair of the SNIA SFF technical work group, where he has authored and contributed to numerous specifications.