Intel® VMD and VROC

Intel® Volume Management Device (VMD)

The Intel® Xeon® Scalable processors based on Skylake-SP include a new built-in Volume Management Device (VMD). This feature may seem similar to VROC (Virtual RAID On CPU), which debuted with the Core X series and the X299 chipset, but VMD is in fact what enables VROC.

Each CPU has three VMD domains, each managing 16 PCIe lanes (48 lanes per CPU in total). Intel® VMD can be turned on and off in the BIOS per group of four lanes (one x4 PCIe SSD).

A dual-socket machine therefore has a total of 96 PCIe lanes (2 × 48), which can be divided among a maximum of 24 NVMe drives attached directly to the CPUs.
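The lane arithmetic above can be sketched as a small calculation (the numbers are those stated in the text, not queried from hardware):

```shell
# Sketch of the VMD lane arithmetic described above (illustrative only).
LANES_PER_VMD_DOMAIN=16
DOMAINS_PER_CPU=3
LANES_PER_NVME_DRIVE=4   # one x4 PCIe SSD

# Maximum number of directly attached x4 NVMe drives for a socket count.
max_nvme_drives() {
  sockets=$1
  total_lanes=$(( sockets * DOMAINS_PER_CPU * LANES_PER_VMD_DOMAIN ))
  echo $(( total_lanes / LANES_PER_NVME_DRIVE ))
}

max_nvme_drives 2   # dual socket: 96 lanes -> prints 24
```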


Intel® Virtual RAID on CPU (VROC)

Intel® VROC is an enterprise RAID solution designed specifically for NVMe SSDs.
RAID can be built within a single VMD domain, across domains, and even across CPUs.

A RAID array is bootable only when it remains within a single VMD domain; RAID arrays that span multiple VMD domains are not bootable.

Because Intel® VROC creates RAID arrays without a traditional RAID HBA in the data path, latency is lower and performance is significantly higher.

Intel® VROC is a hybrid RAID solution. It has hardware RAID attributes because of a key silicon feature, Intel® Volume Management Device (Intel® VMD), which is offered with the Intel® Xeon® Scalable processors. Intel® VROC uses Intel® VMD to aggregate NVMe SSDs, enabling bootable RAID. It also has software RAID attributes: for instance, it uses some of the CPU cores to calculate the RAID logic. Because of this combination of software and silicon, Intel® VROC is called a hybrid RAID solution.


Intel® VROC replaces the legacy Intel® RSTe NVMe RAID. Intel® VROC uses Intel® Volume Management Device (Intel® VMD) to provide features that the legacy Intel® RSTe NVMe RAID does not have:

  • Bootable RAID
  • Surprise hot-plug
  • LED management
  • RAID5 Double Fault Protection
  • Support for third-party SSDs

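On Linux, VROC volumes are typically managed with mdadm using Intel's IMSM metadata format. The sketch below shows the general shape of creating a RAID 5 volume this way; it is a dry run that only prints the commands, and the device paths are placeholders for your own NVMe drives behind VMD:

```shell
# Sketch (dry run): building a VROC RAID 5 volume on Linux via mdadm with
# IMSM metadata. Device paths are placeholders for your environment.
run() { echo "+ $*"; }   # swap for: run() { "$@"; } to execute for real

CONTAINER=/dev/md/imsm0
VOLUME=/dev/md/vol0
DRIVES="/dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1"

# 1. Create an IMSM container over the NVMe drives.
run mdadm --create "$CONTAINER" --raid-devices=4 --metadata=imsm $DRIVES
# 2. Create a RAID 5 volume inside the container.
run mdadm --create "$VOLUME" "$CONTAINER" --raid-devices=4 --level=5
# 3. Inspect the result.
run mdadm --detail "$VOLUME"
```

Keeping all member drives within one VMD domain preserves bootability, per the note above.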

NVMe-oF™ (NVM Express™ over Fabrics)

NVM Express (NVMe™) – Non-Volatile Memory Express is a communications interface/protocol developed specifically for SSDs. Designed to take advantage of the unique properties of pipeline-rich, random-access, memory-based storage, NVMe reduces latency and provides faster CPU-to-storage-device performance.

It is a new and innovative method of accessing storage media that has been capturing the imagination of data center professionals worldwide. NVMe is an alternative to the Small Computer System Interface (SCSI) standard for connecting and transferring data between a host and a peripheral target storage device or system. SCSI became a standard in 1986, when hard disk drives (HDDs) and tape were the primary storage media. NVMe is designed for faster media, such as solid-state drives (SSDs) and post-flash memory-based technologies. NVMe provides a streamlined register interface and command set that reduce the I/O stack’s CPU overhead by accessing the PCIe bus directly. Benefits of NVMe-based storage drives include lower latency, deep parallelism, and higher performance.

What is NVMe over Fabrics (NVMe-oF™)?

NVMe over Fabrics (NVMe-oF™) enables the use of transports other than PCIe to extend the distance over which an NVMe host and an NVMe storage drive or subsystem can connect. NVMe-oF™ defines a common architecture that supports the NVMe block storage protocol over a range of storage networking fabrics. This includes enabling a front-side interface into storage systems, scaling out to large numbers of NVMe devices, and extending the distance within a datacenter over which NVMe devices and NVMe subsystems can be accessed.

NVMe-oF™ is designed to extend NVMe onto fabrics such as Ethernet, Fibre Channel, and InfiniBand.
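As an illustration, a Linux host typically attaches to an NVMe-oF target with nvme-cli. The sketch below is a dry run that only prints the commands; the target address and subsystem NQN are placeholder values, not real endpoints:

```shell
# Sketch (dry run): discovering and connecting to an NVMe-oF target over
# RDMA with nvme-cli. Address and NQN are placeholder values.
run() { echo "+ $*"; }   # swap for: run() { "$@"; } to execute for real

TARGET_ADDR=192.0.2.10                       # example address (RFC 5737)
TARGET_NQN=nqn.2014-08.org.example:subsys1   # hypothetical subsystem NQN

# Query the target's discovery controller for exported subsystems.
run nvme discover -t rdma -a "$TARGET_ADDR" -s 4420
# Connect; remote namespaces then appear as local /dev/nvmeXnY devices.
run nvme connect -t rdma -n "$TARGET_NQN" -a "$TARGET_ADDR" -s 4420
```

The same flow applies to other fabrics by changing the transport flag (e.g. `-t tcp` or `-t fc`), matching the Ethernet, Fibre Channel, and InfiniBand transports named above.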