Frequently Asked Questions
- Membership/General Information
- NVM Express 2.0 Specification/Refactoring
- NVM Express Base Specification Features
- NVM Express Management Interface
- NVM Express Markets
- NVM Express over Fabrics
- NVMe Specification
- NVMe Technology SSDs
Please visit the Contact Us page on the NVM Express website.
NVM Express members can request access to the membership space: https://workspace.nvmexpress.org/kws.
Please visit the Membership page to learn more about the benefits, requirements and costs associated with NVM Express membership.
The specifications are open for use by the industry at large and are available on the Developers Page.
NVM Express is an incorporated non-profit industry organization. There are three categories of members, Promoters, Contributors, and Adopters. The NVM Express group is led by 13 Promoter companies who have board seats and provide overall governance. Promoters are elected and serve two-year terms. Contributor companies are welcome to participate in regularly scheduled technical working sessions that develop specifications and in marketing sessions to further adoption of the interface in the industry. Adopters have access to ratified Technical Proposals and are welcome to participate in NVM Express Marketing activities. Additional information is available in the Organization Bylaws located on the NVM Express Join page.
The Promoters Group is composed of 13 delegates, elected from the industry’s top storage, infrastructure and software companies.
The NVM Express™ (NVMe™) specification defines how host software communicates with non-volatile memory across a PCI Express® (PCIe®) bus. It is the industry standard for PCIe SSDs in all form factors.
The NVMe specification was developed from the ground up for SSDs and non-volatile memory to be more scalable, higher performance, lower latency, and more efficient than previous storage protocols, such as SATA and SAS, that were designed for hard disk drives.
The NVMe architecture brings a new high-performance queuing mechanism that supports 65,535 I/O queues, each with 65,535 commands (referred to as queue depth, or the number of outstanding commands). Queues are mapped to CPU cores, delivering scalable performance. The NVMe interface significantly reduces the number of memory-mapped input/output commands and accommodates operating system device drivers running in interrupt or polling modes for higher performance and lower latency.
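The per-core queue-pair idea above can be illustrated with a toy model. This is a sketch only: real NVMe queues are ring buffers in host memory driven by doorbell registers, and the class and command strings here are invented for illustration.

```python
from collections import deque


class QueuePair:
    """Toy model of one NVMe I/O submission/completion queue pair.

    Real queues are circular buffers with head/tail doorbells; this
    sketch only shows the pairing and the per-queue command limit.
    """

    MAX_ENTRIES = 65_535  # per-queue outstanding-command limit

    def __init__(self, core_id: int):
        self.core_id = core_id
        self.submission = deque()
        self.completion = deque()

    def submit(self, command: str) -> None:
        if len(self.submission) >= self.MAX_ENTRIES:
            raise RuntimeError("queue full")
        self.submission.append(command)

    def process(self) -> None:
        # The controller consumes submissions and posts completions.
        while self.submission:
            self.completion.append(("done", self.submission.popleft()))


# One queue pair per CPU core, so cores never contend on a shared queue.
queues = {core: QueuePair(core) for core in range(4)}
queues[0].submit("READ lba=0 nlb=8")
queues[0].process()
```

Because each core owns its own pair, submissions need no cross-core locking, which is one reason the design scales with core count.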
The NVMe protocol was designed to be scalable through asynchronous I/O, without being blocked by uncacheable register reads. The NVMe architecture scales across multiple interfaces: over PCIe with NVMe technology, and over networked fabrics with the NVMe-oF architecture. It can scale performance with varying PCIe lane counts and higher capacity SSDs, and it can scale from mobile devices to data centers.
The NVMe specification has many features for power management, including support for non-operational power states with low idle power to extend the battery life of mobile devices and laptops. The NVMe specification provides autonomous power state transitions so the device can decide when to enter a different active power state, support for runtime D3 for zero-power idle and fast resume, and host controlled thermal management to tell the SSD when to throttle and reduce power based on a thermal setting. Active power states can be set through the Set Features command, which is useful for data center SSDs where some customers want to improve TCO and reduce power by capping an SSD's maximum power consumption.
RAID works on NVMe SSDs just like it did on previous storage devices. Linux mdadm supports NVMe RAID and has enhancements to improve reliability, performance, and scalability. There are independent hardware vendors that support NVMe based RAID cards for hardware acceleration as well.
There are many options for NVMe RAID. There is open source software NVMe RAID like mdadm in Linux, combination hardware and software with built-in NVMe RAID and hardware RAID cards from independent hardware vendors that support hardware offloads of the RAID functionality with a standard PCIe AIC HBA or RAID card.
NVM Express has partnered with the University of New Hampshire InterOperability Laboratory (UNH-IOL) to build a third-party compliance program for validating NVMe, NVMe-MI and NVMe-oF products. The compliance program includes test services, test reports, plugfests, and compliance test tools. More information is available at https://nvmexpress.org/products/compliance/.
The terms PCIe SSD, NVMe SSD, and NVMe/PCIe SSD are all common and acceptable. Most people today use NVMe SSD to mean a PCIe SSD (since that is what has existed from 2011 to today), but in the future NVMe SSDs may also use physical interfaces beyond the PCIe architecture.
The M.2 specification, defined by PCI-SIG, is a form factor for small devices, such as SSDs and WiFi cards. M.2 supports SATA as well as PCIe technology, but most commonly M.2 SSDs are NVMe SSDs.
There are multiple commands in the NVMe specification to securely erase user data. The NVMe Format command includes support for crypto erase, which quickly erases user data by switching the crypto key, as well as full media erase, which today physically erases the NAND. Sanitize is the other command to erase user data. It has the same capabilities as Format, while also removing metadata and information from the queues, and it guarantees completion by automatically restarting after a device reset.
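The reason crypto erase is so fast is that only the key is destroyed, not the media contents. The toy sketch below illustrates the principle with a SHA-256-derived XOR keystream; real drives use hardware AES (typically AES-XTS), and the function names here are invented for illustration.

```python
import hashlib
import secrets


def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (NOT the AES the real hardware uses): XOR data
    with a SHA-256-derived keystream, so encrypting and decrypting are
    the same operation."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))


media_key = secrets.token_bytes(32)
stored = keystream_xor(media_key, b"user data on NAND")

# Normal read path: decrypt with the current media key.
assert keystream_xor(media_key, stored) == b"user data on NAND"

# Crypto erase: the drive discards the old key and generates a new one.
media_key = secrets.token_bytes(32)

# The old ciphertext is still physically on the media, but reads through
# the new key return gibberish, so the data is effectively erased.
assert keystream_xor(media_key, stored) != b"user data on NAND"
```

Switching a 256-bit key takes microseconds, while physically erasing every NAND block can take minutes, which is why crypto erase is the quick option.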
NVMe SSDs can be managed in-band (through the operating system) and out-of-band (outside of the host OS, generally through a BMC). Many open source management tools exist to manage NVMe SSDs, like nvme-cli in Linux, which implements the commands in the NVMe specification to monitor device health and endurance, update firmware, secure erase drives, read SMART data, and more.
The NVMe specifications use the PCIe interface for low latency and scalable performance. Current PCIe 3.0 x4 NVMe SSDs are 6-7x faster than a SATA SSD, and PCIe 4.0 SSDs are 12-13x faster than a SATA SSD.
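The speedup figures above can be sanity-checked with back-of-envelope bandwidth arithmetic. The link rates below are approximate usable figures chosen for illustration; real-world throughput varies with protocol overhead and drive design.

```python
# Approximate usable bandwidth per interface in GB/s (assumed, ballpark
# figures; real throughput depends on encoding overhead and the drive).
SATA_3 = 0.6      # ~6 Gb/s link minus 8b/10b encoding and protocol
PCIE3_X4 = 3.9    # ~985 MB/s per lane x 4 lanes (128b/130b encoding)
PCIE4_X4 = 7.9    # PCIe 4.0 doubles the per-lane rate of PCIe 3.0

print(f"PCIe 3.0 x4 vs SATA: ~{PCIE3_X4 / SATA_3:.1f}x")
print(f"PCIe 4.0 x4 vs SATA: ~{PCIE4_X4 / SATA_3:.1f}x")
```

Under these assumptions the ratios land in the 6-7x and 12-13x ranges the answer quotes.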
There are various examples of NVMe technology being used in mobile devices at extremely low power, the most notable being the Apple iPhone. M.2 and NVMe BGA devices for notebooks, laptops, and tablets are also demonstrating lower power and improved battery life vs. SATA.
SAS uses the SCSI protocol and SATA uses the ATA command set, both of which add acknowledgment overhead to make sure data is received. The NVMe protocol is much slimmer, has less overhead, and uses fewer CPU cycles to process. Therefore, it is much faster and lower latency than SATA.
There are many features that the NVMe specification supports. An explanation of the functionality of the many features can be found in section 8 of the NVMe specification.
The NVMe 1.4 specification introduced many new features, the most notable being NVM Sets, Read Recovery Levels, IO Determinism, enhancements to Streams, Persistent Memory Region, Persistent Event Log, Asymmetric Namespace Access, Rebuild Assist, Verify, Namespace Granularity, and Namespace Write Protect. A full list of the new features can be found in the NVMe 1.4 specification.
IO determinism defines a way for the SSD and the host to coordinate times for the SSD to do background garbage collection and conversely provide a deterministic window where the drive has the best read latency.
An NVM Set is a logical construct that can divide an SSD into multiple smaller NVM Sets that may be physically isolated from each other, for instance on NAND SSDs by using a separate channel or a separate group of NAND dice. This provides isolation and a solution to the noisy neighbor problem (writes from one workload impacting the quality of service of another).
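The isolation idea can be sketched as a mapping from NVM Sets to disjoint NAND channels. The channel counts and layout below are hypothetical, invented purely to illustrate the concept.

```python
# Hypothetical layout: a 16-channel SSD split into two NVM Sets so that
# commands to one set never touch the other set's channels.
CHANNELS = list(range(16))

nvm_sets = {
    1: CHANNELS[:8],   # Set 1: channels 0-7
    2: CHANNELS[8:],   # Set 2: channels 8-15
}


def channels_for(nvm_set: int) -> list:
    """Physical channels backing the given NVM Set."""
    return nvm_sets[nvm_set]


# A heavy write workload placed in Set 1 only occupies channels 0-7, so
# latency-sensitive reads in Set 2 see no queueing behind those writes.
assert set(channels_for(1)).isdisjoint(channels_for(2))
```

Because the sets share no channels, a burst of writes in one set cannot add queueing delay to reads in the other, which is exactly the noisy-neighbor isolation the feature provides.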
Host Memory Buffer (HMB) is a low-level shared memory interface that can enable high-performance applications such as small payload control loops and large random access buffers.
Persistent Memory Region (PMR) is an optional area of persistent memory located on the NVMe device that can be accessed with standard PCIe memory reads/writes. This could be extra DRAM that is power protected, storage class memory, or other new memory types. The use cases are not defined by the NVMe specification, but there are many useful applications for a general purpose PMR.
Multipathing, also called I/O multipathing, is the establishment of multiple physical routes between a server and the storage device that supports it. This is done to prevent Single Point of Failure and achieve continuous operations.
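The failover behavior described above can be sketched as trying healthy paths in order. This toy class is invented for illustration; real multipathing lives in the OS storage stack (e.g. native NVMe multipath in Linux) and also balances load, not just availability.

```python
class MultipathDevice:
    """Toy multipath I/O: issue each read over the first healthy path,
    failing over automatically when a path goes down."""

    def __init__(self, paths: dict):
        self.paths = paths  # path name -> healthy flag

    def read(self, lba: int) -> str:
        for name, healthy in self.paths.items():
            if healthy:
                return f"read lba={lba} via {name}"
        raise IOError("all paths failed")


dev = MultipathDevice({"port-0": True, "port-1": True})
print(dev.read(0))              # served via port-0

dev.paths["port-0"] = False     # simulate a cable pull or port failure
print(dev.read(0))              # transparently fails over to port-1
```

The application calling `read` never sees the path failure, which is the "no single point of failure" property multipathing exists to provide.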
An NVMe namespace is a quantity of non-volatile memory (NVM) that can be formatted into logical blocks. Namespaces are used when a storage virtual machine is configured with the NVMe protocol. One or more namespaces are provisioned and connected to an NVMe host. Each namespace can support various block sizes.
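The relationship between namespace capacity and logical block size is simple division, as the sketch below shows. The function is illustrative; the 512-byte and 4096-byte sizes are simply the most common LBA formats, and a real drive advertises its supported formats in the Identify Namespace data.

```python
def namespace_blocks(capacity_bytes: int, lba_size: int) -> int:
    """Number of logical blocks in a namespace formatted at lba_size.

    Only the two most common LBA sizes are allowed in this sketch.
    """
    if lba_size not in (512, 4096):
        raise ValueError("unsupported LBA format in this sketch")
    return capacity_bytes // lba_size


one_tb = 1_000_000_000_000  # 1 TB, decimal, as drive vendors count it
print(namespace_blocks(one_tb, 512))
print(namespace_blocks(one_tb, 4096))
```

The same namespace therefore exposes eight times fewer, larger blocks when formatted at 4 KiB, which is why reformatting to a different LBA size changes the reported block count but not the capacity.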
The NVMe Zoned Namespaces (ZNS) interface is being developed by NVM Express. By dividing an NVMe namespace into zones, which are required to be sequentially written, ZNS offers essential benefits to hyper-scale organizations, all-flash array vendors and large storage-system vendors wishing to take advantage of storage devices optimized for sequential write workloads. ZNS reduces device-side write amplification, over-provisioning and DRAM while improving tail latency, throughput and drive capacity. By bringing a zoned-block interface to NVMe SSDs, ZNS brings alignment with the ZAC/ZBC host model already being used by SMR HDDs and enables a zoned-block storage layer to emerge across the SSD/HDD ecosystem.
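The sequential-write rule at the heart of ZNS can be sketched with a zone that tracks a write pointer. This class is a toy invented for illustration; real zones also have states (empty, open, full, etc.) and per-zone limits defined by the ZNS specification.

```python
class Zone:
    """Minimal sketch of a ZNS zone: writes must land exactly at the
    write pointer, which only moves forward until the zone is reset."""

    def __init__(self, start_lba: int, size: int):
        self.start = start_lba
        self.size = size
        self.write_pointer = start_lba

    def write(self, lba: int, nblocks: int) -> None:
        if lba != self.write_pointer:
            raise ValueError("write not at the zone's write pointer")
        if self.write_pointer + nblocks > self.start + self.size:
            raise ValueError("write past the end of the zone")
        self.write_pointer += nblocks

    def reset(self) -> None:
        # Reset discards the zone's data and rewinds the write pointer.
        self.write_pointer = self.start


zone = Zone(start_lba=0, size=1024)
zone.write(0, 8)        # sequential: accepted
zone.write(8, 8)        # continues at the write pointer: accepted
try:
    zone.write(0, 8)    # in-place rewrite: rejected until a reset
except ValueError as err:
    print(err)
```

Because the host can only append, the drive never has to garbage-collect scattered overwrites inside a zone, which is where the reduced write amplification and over-provisioning come from.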
The NVMe Key Value (NVMe-KV) Command Set has been standardized as one of the new I/O Command Sets that the NVMe specification supports. NVMe-KV allows access to data on an NVMe SSD controller using a key rather than a block address. The NVMe-KV Command Set provides the key to store a corresponding value on non-volatile media, then retrieves that value from the media by specifying the corresponding key. NVMe-KV allows users to access key-value data without the costly and time-consuming overhead of additional translation tables between keys and logical blocks.
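The translation-table overhead that NVMe-KV removes can be sketched by contrasting the two access styles. This is a conceptual toy, not the actual command formats; the helper functions are invented for illustration.

```python
# Block-style access: the host keeps its own table from keys to logical
# block addresses -- the overhead that NVMe-KV eliminates.
translation = {}   # key -> (lba, length)
blocks = {}        # lba -> data
next_lba = 0


def block_put(key: bytes, value: bytes) -> None:
    global next_lba
    blocks[next_lba] = value
    translation[key] = (next_lba, len(value))
    next_lba += 1


def block_get(key: bytes) -> bytes:
    lba, _ = translation[key]   # extra host-side lookup on every read
    return blocks[lba]


# KV-style access: the key goes straight to the device, no host table.
kv_store = {}


def kv_put(key: bytes, value: bytes) -> None:
    kv_store[key] = value


def kv_get(key: bytes) -> bytes:
    return kv_store[key]


block_put(b"user:42", b'{"name": "Ada"}')
kv_put(b"user:42", b'{"name": "Ada"}')
assert block_get(b"user:42") == kv_get(b"user:42")
```

Both paths return the same data, but the block path pays for maintaining and consulting the translation table on every operation, which is the cost the KV command set removes.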
NVMe devices have been shipping since 2013, with the first NVMe SSDs in production from various NVM Express member companies. NVMe technology now makes up the majority share of SSDs sold in all markets.
Yes. Many analysts show that there isn’t a premium for NVMe technology over SATA or SAS SSDs due to a broad market, competition, multiple vendors and general availability. There is no reason users and companies should not be deploying NVMe architecture over other legacy storage protocols.
The NVMe 2.0 specification is still actively being developed. The NVM Express organization has announced that there will be a major reorganization of the specification that will make it easier for device manufacturers to implement segment-specific features.
The current NVMe technology roadmap indicates that the NVMe 2.0 specification is targeted for release in the second half of 2020.
The initial NVMe specification family was not structured for extensibility, so we are optimizing the specifications for further evolution with the refactoring project. The new structure will enable innovation and take the technology back to the core values of speed, simplicity and scalability. The new extensible specification infrastructure will ultimately take the industry through the next phase of growth for NVMe while minimizing the impact on broadly deployed solutions.
Key aspects driving the refactor are:
- Making fabrics a core part of the NVMe architecture
- Eliminating duplication in data structures
- Integrating NVMe and NVMe-oF base functionality
- Separating command sets into their own specifications
- A modular transport mapping layer, with the PCIe transport separated out and command sets made extensible
The NVM Express specification is built so that it can be utilized across multiple transports beyond PCI Express architecture, including Ethernet, Fibre Channel, Infiniband and RDMA.
NVMe technology can be transported by all popular networks. iWARP has been optimized for WANs, so NVMe traffic can 'travel' outside of the data center.
The NVMe-oF 1.1 specification included several framework improvements to make NVMe-oF implementations easier. These improvements included enhanced discovery features, officially defined in ratified Technical Proposal (TP) 8002. This TP added persistent controller support for discovery to notify hosts when the fabric configuration changes. NVMe-oF 1.0 discovery offered the following capabilities:
- The ability to discover a list of NVM subsystems with namespaces that are accessible to the host
- The ability to discover multiple paths to an NVM subsystem
- The ability to discover controllers that are statically configured
NVMe-oF 1.1 specification added the following enhanced discovery features:
- The optional ability to establish explicit persistent connections to the Discovery controller
- The optional ability to receive Asynchronous Event Notifications from the Discovery controller
For more detailed information on NVMe-oF discovery services, please refer to page 40 of the NVMe-oF 1.1 specification.
The NVMe-MI specification defines a command set and architecture for managing NVMe storage. NVMe-MI technology provides an industry standard for the management of NVMe devices in-band (through an operating system) and out-of-band (usually through a BMC, or baseboard management controller). Anchor features include standardized NVMe enclosure management, the ability to access NVMe-MI technology functionality in-band and new management features for multiple NVMe subsystem solid-state drive (SSD) deployments.
The NVMe-MI specification was created to define a command set and architecture for managing NVMe storage, making it possible to discover, monitor, configure, and update NVMe devices in multiple operating environments. It started with just out-of-band management of SSDs over SMBus, giving SSDs and host servers a standard, OS-agnostic way to manage NVMe SSDs.
Key additions include enclosure management, in-band tunneling, management of multiple NVM subsystems in complex devices, PCIe port numbering, and a table for identifying form factor and temperatures.
Enclosure management is a way to monitor the physical presence of SSDs in an enclosure (finding out which slot an SSD is in, operating LEDs), which may be a storage array, a JBOD/JBOF, etc.; to monitor temperature and fan speed; and to send management commands to the SSDs from the storage enclosure.
In-band means sending commands through a standard NVMe driver in the operating system, while out-of-band means operating outside the OS's knowledge, most commonly through a host BMC over the SMBus protocol, though it can now also be done over PCIe vendor defined messages.
NVMe technology is targeted at SSDs in the client, mobile, desktop, workstation, server, hyperscale, enterprise, storage segments (all SSD segments). NVMe capable servers and systems are part of the NVMe market, as well as NVMe-oF networking adapters, all-flash arrays, and many other NVMe capable products. NVMe architecture is now much broader than SSDs, but SSDs remain the largest market by far.
NVMe technology is higher performance, more scalable, lower latency, and has much more innovation than the incumbent technologies. All the markets that NVMe architecture supports will benefit from adopting the technology.
There are many features in NVMe technology for consumers, like format/secure erase/sanitize, power states, thermal management, SMART, namespace write protect, and more.
Because applications like ML/AI benefit tremendously from low latency and high performance, NVMe technology decreases response times, making it possible to check more sources and produce better outcomes.
The NVMe specification is scalable, open, and flexible for the needs of the data center. NVMe end-to-end solutions can reduce response time thereby making Enterprise Apps more responsive and increase workload performance.
NVMe technology has become the go-to storage protocol solution for today's data centers, which previously lacked full support. The addition of NVMe-oF technology makes NVMe solutions easier to implement than ever across the data center ecosystem. The NVMe-oF protocol allows optimal performance for both applications and the network when accessing NVMe storage via a network. By allowing the NVMe protocol to run over a switched fabric, the NVMe-oF protocol reduces bottlenecks and latency created by older storage fabric protocols. The NVM Express organization recently supported the addition of the NVMe/TCP transport to the NVMe-oF family, making NVMe-oF even more flexible than before. This was in response to data center hyperscaler requests, due to their scale-out architecture choices.
Upcoming specification updates in NVMe 1.4 and NVMe-oF 1.1 also take data center hyperscaler requirements into account. New features in the NVMe 1.4 specification such as I/O Determinism break each drive up into multiple independent devices, allowing multiple I/O workloads to access the drive independently, reducing long-tail latency and improving Quality of Service (QoS). NVMe technology data center benefits:
- Delivery of faster overall access to data
- Lowering of power consumption
- Reduced latency
- Higher Input/Output Operations Per Second (IOPS)
No, but enhancements have been made to make NVMe technology more robust and more fault tolerant for Enterprise applications.
ANA (Asymmetric Namespace Access). Non-disruptive failover of storage paths is essential for enterprise storage: it ensures that if there is a failure in a path, access to the storage is not disrupted, because a switchover non-disruptively overcomes the issue. Enterprise storage needs to be available all the time.