NVMe adds TCP support

This week the ratified NVMe™/TCP Transport Binding specification was made available for public download. TCP is a new transport added to the family of existing NVMe™ transports: PCIe®, RDMA, and FC. NVMe/TCP defines the mapping of NVMe queues, NVMe-oF capsules and data delivery over the IETF Transmission Control Protocol (TCP). The NVMe/TCP transport offers optional enhancements such as inline data integrity (DIGEST) and online Transport Layer Security (TLS).[…]

https://nvmexpress.org/welcome-nvme-tcp-to-the-nvme-of-family-of-transports/

https://nvmexpress.org/wp-content/uploads/NVM-Express-over-Fabrics-1.0-Ratified-TPs.zip

http://git.infradead.org/nvme.git/shortlog/refs/heads/nvme-tcp

https://review.gerrithub.io/c/spdk/spdk/+/425191/
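As a rough sketch of what the host side should look like once the kernel branch above and nvme-cli grow TCP support (the module name, transport keyword, address, port, and NQN here are placeholders and assumptions of mine, not taken from the announcement):

    # load the host-side transport driver (name assumed from the nvme-tcp branch above)
    modprobe nvme-tcp
    # ask the target at 192.0.2.10:4420 which subsystems it exports
    nvme discover -t tcp -a 192.0.2.10 -s 4420
    # connect to one of the reported subsystems by its NQN
    nvme connect -t tcp -a 192.0.2.10 -s 4420 -n nqn.2014-08.org.example:nvme.target1
    # the remote namespaces then appear as ordinary /dev/nvmeXnY block devices
    nvme list

This mirrors the existing nvme-cli flow for the RDMA and FC transports, just with tcp as the transport type.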

Telemetry: Enhancing Customer Triage of Intel® SSDs

by Behnam Eliyahu and Monika Sane

Telemetry refers to an umbrella of tools, utilities, and protocols to remotely extract and decode information for debugging potential issues with Intel® SSDs. Telemetry works over industry standard protocols, and eliminates or minimizes the need to remove SSDs from customer systems for retrieving debug logs. Telemetry thus enables host tools, Intel technical sales specialists (TSS), Intel application engineers (AEs), and Intel engineering teams to better identify and debug performance excursions, exception events and critical failures in Intel® SSDs, without sending the physical drive to Intel for failure analysis. This capability is designed in accordance with NVMe* 1.3 telemetry specifications as well as corresponding ACS-4 SATA definitions (which are common industry standards), and is expected to accelerate debugging of external and internal bug sightings pertaining to Intel® SSDs. The key difference between NVMe and SATA is the fact that there is no controller-initiated capability on SATA drives.[…]

https://itpeernetwork.intel.com/telemetry-enhancing-customer-triage/

 

See-also:
https://github.com/linux-nvme/nvme-cli/blob/master/Documentation/nvme-telemetry-log.txt
https://github.com/linux-nvme/nvme-cli/blob/master/linux/nvme.h
https://jmetz.com/2018/02/whats-new-in-nvme-1-3/
https://nvmexpress.org/resources/specifications/
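As a practical aside (mine, not the article's): on a drive that implements the NVMe 1.3 telemetry log pages, the nvme-cli telemetry-log command documented in the first See-also link above can pull a host-initiated snapshot into a file for the vendor to decode; the option names may vary slightly between nvme-cli versions:

    # capture a host-initiated telemetry log from the controller into a binary blob
    nvme telemetry-log /dev/nvme0 --output-file=telemetry.bin

The resulting blob is opaque to the host and is meant to be handed to the SSD vendor for decoding.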

IBM on use of NVMe on POWER9 systems

[…]This article details the usage of a Non-Volatile Memory Express (NVMe) adapter on POWER9 systems. It also provides use cases to explain how an NVMe adapter can be used effectively, and lists the benefits.[…]

https://developer.ibm.com/articles/au-aix-virtualization-nvme/

NVMe Firmware: I Need Your Data

 

[…]The NVMe ecosystem is pretty new, and things like “what version number firmware am I running now” and “is this firmware OEM firmware or retail firmware” are still queried using vendor-specific extensions. I only have two devices to test with (Lenovo P50 and Dell XPS 13) and so I’m asking for some help with data collection. Primarily I’m trying to find out what NVMe hardware people are actually using, so I can approach the most popular vendors first (via the existing OEMs). I’m also going to be looking at the firmware revision string that each vendor sets to find quirks we need — for instance, Toshiba encodes MODEL VENDOR, and everyone else specifies VENDOR MODEL.[…]

https://blogs.gnome.org/hughsie/2018/08/17/nvme-firmware-i-need-your-data/

https://plus.google.com/+RichardHughes/posts/Wqqtots46aA
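If you want to see what your own drive reports before contributing data, the model and firmware revision strings he is talking about are the mn and fr fields of the Identify Controller data, which nvme-cli prints; a quick way to grab them (device name is just an example):

    # print the model number (mn) and firmware revision (fr) strings from Identify Controller
    nvme id-ctrl /dev/nvme0 | grep -E '^(mn|fr) '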

DMTF, NVMe and SNIA form 3-way alliance for SSD storage mgmt

The DMTF, NVM Express, Inc. and SNIA have formed a new three-way alliance to coordinate standards for managing SSD storage devices. […] In addition to SNIA’s Swordfish and DMTF’s Redfish, the alliance’s collaborative work will include the following standards:

* NVM Express™ (NVMe™) is the register interface and command set for PCI Express attached storage with industry standard software available for numerous operating systems. The NVM Express™ Management Interface (NVMe-MI™) is the command set and architecture for management of NVM Express storage (e.g., discovering, monitoring, and updating NVMe devices using a BMC).

* DMTF’s Management Component Transport Protocol (MCTP) is a transport protocol, and Platform Level Data Model (PLDM) is a low-level data model; both are defined by the DMTF Platform Management Components Intercommunications (PMCI) Working Group (https://www.dmtf.org/standards/pmci). MCTP is designed to support communications between the different intelligent hardware components that make up a platform management subsystem, providing monitoring and control functions inside a managed system.

* DMTF’s PLDM for Redfish Device Enablement (RDE) defines messages and data structures used for enabling PLDM devices to participate in Redfish-based management without needing to support either JavaScript Object Notation (JSON, used for operation data payloads) or the [Secure] Hypertext Transfer Protocol (HTTP/HTTPS, used to transport and configure operations).

[…]

Work register: NVMe-DMTF-SNIA_Work_Register_v1.0.pdf

https://www.dmtf.org/
http://nvmexpress.org/
https://www.snia.org/
https://www.snia.org/forums/smi/swordfish
http://www.dmtf.org/standards/redfish

AMI announces NVMe UEFI Option ROM service for OEMs

American Megatrends Provides Service for Developing NVMe UEFI Option ROMs for UEFI BIOS Firmware

AMI is pleased to announce a new service for developing NVMe® UEFI Option ROMs. Recently, AMI added support for technologies such as NVMe metadata/data encryption and NVMe namespace management. Continuing its support for NVMe-related technologies, AMI has also developed UEFI NVMe Option ROMs. This service for developing UEFI NVMe Option ROMs enables OEMs and NVMe device manufacturers to ship NVMe devices with Option ROMs embedded in them. OEMs and NVMe device manufacturers will have the ability to add features that are traditionally unavailable with generic UEFI drivers, and can implement vendor-specific commands and features within their devices. Some of the additional features available for NVMe Option ROMs include disk management for namespace management in BIOS setup, metadata/end-to-end data protection, NVMe firmware updates in BIOS setup, and SMART data information. Interested OEMs and NVMe drive manufacturers should contact their AMI sales representative for more information.

https://ami.com/en/news/press-releases/american-megatrends-provides-service-for-developing-nvme-uefi-option-roms-for-uefi-bios-firmware/
http://ami.com/products/bios-uefi-firmware/aptio-v/

Intel on SSD secure erase feature

What is secure erase, and is it certified on an Intel® SSD?
by Doug DeVetter | July 31, 2017

I’m often asked whether the secure erase feature within Intel® SSDs is certified by NIST, the U.S. DoD, or other government or industry bodies. Intel has implemented the secure erase feature consistent with the ATA and NVMe specifications. The designs and implementations have been internally reviewed and validated. A third party has tested the implementation on a subset of our products and reported that the data was unrecoverable. Intel is unaware of any industry or government body which certifies or approves the implementation of this technical capability. NIST SP 800-88 is often cited as the guideline to be followed in the United States with regard to secure erase. NIST provides guidelines; however, NIST does not certify compliance with these guidelines. In addition to being consistent with the ATA and NVMe specifications, our implementation of secure erase is in line with the NIST guidelines for data sanitization.[…]

https://itpeernetwork.intel.com/secure-erase-certified-intel-ssd/

https://www.intel.com/content/www/us/en/it-management/intel-it-best-practices/secure-erase-for-ssds-helps-sanitize-data-boost-efficiency-brief.html
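For completeness (my note, not Intel's): on Linux the NVMe side of this capability is reached through the Format NVM command's Secure Erase Settings field, which nvme-cli exposes as --ses. The following is a destructive illustration, not a sanctioned sanitization procedure:

    # Format NVM with Secure Erase Settings = 1 (user data erase); --ses=2 requests a cryptographic erase
    nvme format /dev/nvme0n1 --ses=1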

Alpine Linux Persistence and Storage Summit

Christoph Hellwig announced this event on the Linux-NVME mailing list.

We proudly announce the Alpine Linux Persistence and Storage Summit (ALPSS), which will be held from September 27-29 at the Lizumerhuette in Austria. The goal of this conference is to discuss the hot topics in Linux storage and file systems, such as persistent memory, NVMe, multi-pathing, raw or open channel flash and I/O scheduling, in a cool and relaxed setting with spectacular views of the Austrian Alps. We plan to have a small selection of short and to-the-point talks, and lots of room for discussion in small groups, as well as ample downtime to enjoy the surroundings. Attendance is free except for the accommodation and food at the lodge, but the number of seats is strictly limited. […] Note: The Lizumerhuette is an Alpine Society lodge in a high alpine environment. A hike of approximately 2 hours is required to reach the lodge, and no other accommodations are available within walking distance.

Full announcement:

http://lists.infradead.org/mailman/listinfo/linux-nvme
http://lists.infradead.org/pipermail/linux-nvme/2017-July/thread.html

 

 

U-Boot gets NVMe support

Zhikang Zhang of NXP added NVMe driver support to U-Boot.

Add support for devices that follow the NVM Express standard

 Basic functions:
    nvme init/scan
    nvme info – show the basic information of device
    nvme Read/Write

 Driver model:
    Use block device (CONFIG_BLK)’s structure to support nvme’s DM.
    Use UCLASS_PCI as a parent uclass.

The driver code borrows heavily from the NVMe driver code in the Linux kernel.

Add nvme commands in U-Boot command line.

1. “nvme list” – show all available NVMe blk devices
2. “nvme info” – show current or a specific NVMe blk device
3. “nvme device” – show or set current device
4. “nvme part” – print partition table
5. “nvme read” – read data from NVMe blk device
6. “nvme write” – write data to NVMe blk device

More info: U-Boot mailing list.
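A typical session at the U-Boot prompt might look like the following; the read arguments are assumed to follow the usual U-Boot block-command convention of memory address, start block, and block count, so check the in-tree help text before relying on the exact syntax:

    => nvme scan
    => nvme list
    => nvme device 0
    => nvme part
    => nvme read ${loadaddr} 0 0x800

The last command would read 0x800 blocks starting at LBA 0 from the current NVMe device into memory at ${loadaddr}, e.g. to load a kernel image.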

NVMe-over-Fabrics support added to Linux

Christoph Hellwig has submitted multiple patch series against Linux’s existing NVMe support to enable ‘NVMe over Fabrics’. I’ve included his initial comments for each of the 4 series:

—–

general preparation for NVMe over Fabrics support

This patch set adds some needed preparations for the upcoming NVMe over Fabrics support. Contains:
– Allow transfer size limitations for NVMe transports
– Add the get_log_page command definition required by the NVMe target
– more helpers in core code that can be used by various transports
– add some missing constants and identify attributes

—–

NVMe over Fabrics target implementation

This patch set adds a generic NVMe over Fabrics target. The implementation conforms to the NVMe 1.2b specification (which includes Fabrics) and provides the NVMe over Fabrics access to Linux block devices. The target implementation consists of several elements:
– NVMe target core:  defines and manages the NVMe entities (subsystems, controllers, namespaces, …) and their allocation, responsible for initial commands processing and correct orchestration of the stack setup and tear down.
– NVMe admin command implementation: responsible for parsing and servicing admin commands such as controller identify, set features, keep-alive, log page, …).
– NVMe I/O command implementation: responsible for performing the actual I/O (Read, Write, Flush, Deallocate (aka Discard)). It is a very thin layer on top of the block layer and implements no logic of its own. To support exporting file systems please use the loopback block driver in direct I/O mode, which gives very good performance.
– NVMe over Fabrics support: responsible for servicing Fabrics commands (connect, property get/set).
– NVMe over Fabrics discovery service: responsible for serving the Discovery log page through a special cut-down Discovery controller.

The target is configured using configfs, and configurable entities are:
 – NVMe subsystems and namespaces
 – NVMe over Fabrics ports and referrals
 – Host ACLs for primitive access control – NVMe over Fabrics access control is still work in progress at the specification level and will be implemented once that work has finished.

To configure the target use the nvmetcli tool from http://git.infradead.org/users/hch/nvmetcli.git, which includes detailed setup documentation.

In addition to the Fabrics target implementation we provide a loopback driver which also conforms to the NVMe over Fabrics specification and allows evaluation of the target stack with local access, without requiring a real fabric.

Various test cases are provided for this implementation: nvmetcli contains a python testsuite that mostly stresses the configfs interface of the target, and we have various integration tests prepared for the kernel host and target which are available at:

    git://git.infradead.org/nvme-fabrics.git nvmf-selftests
    http://git.infradead.org/nvme-fabrics.git/shortlog/refs/heads/nvmf-selftests

This repository also contains patches from all the series posted today in case you prefer using a git repository over collecting patches.

NVMe over Fabrics RDMA transport drivers

This patch set implements the NVMe over Fabrics RDMA host and the target drivers. The host driver is tied into the NVMe host stack and implements the RDMA transport under the NVMe core and Fabrics modules. The NVMe over Fabrics RDMA host module is responsible for establishing a connection against a given target/controller, RDMA event handling and data-plane command processing. The target driver hooks into the NVMe target core stack and implements the RDMA transport. The module is responsible for RDMA connection establishment, RDMA event handling and data-plane RDMA commands processing. RDMA connection establishment is done using RDMA/CM and IP resolution. The data-plane command sequence follows the classic storage model where the target pushes/pulls the data.

generic NVMe over Fabrics library support

This patch set adds the necessary infrastructure for the NVMe over Fabrics functionality and the NVMe over Fabrics library itself. First we add some needed parameters to NVMe request allocation, such as flags (for reserved commands – connect and keep-alive), support for tag allocation on a given queue ID (so connect can be executed per-queue), and allowing a request to be queued at the head of the request queue (so reconnects can pass in-flight I/O). Second, we add support for additional sysfs attributes that are needed or useful for the Fabrics driver. Third, we add the NVMe over Fabrics related header definitions and the Fabrics library itself, which is transport independent and handles Fabrics-specific commands and variables. Last, we add support for a periodic keep-alive mechanism, which is mandatory for Fabrics.

For more information, see the linux-(nvme,block,kernel) mailing lists.
http://lists.infradead.org/mailman/listinfo/linux-nvme
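To make the configfs model from the target cover letter concrete, here is a rough sketch of exporting a local block device through the loopback transport; the directory layout follows the nvmet configfs scheme that nvmetcli wraps, and the subsystem NQN, namespace number, and backing device below are placeholders:

    # assumes the nvmet and nvme-loop modules are loaded and configfs is mounted
    mkdir /sys/kernel/config/nvmet/subsystems/testnqn
    echo 1 > /sys/kernel/config/nvmet/subsystems/testnqn/attr_allow_any_host
    mkdir /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1
    echo -n /dev/nvme0n1 > /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1/device_path
    echo 1 > /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1/enable
    mkdir /sys/kernel/config/nvmet/ports/1
    echo loop > /sys/kernel/config/nvmet/ports/1/addr_trtype
    ln -s /sys/kernel/config/nvmet/subsystems/testnqn /sys/kernel/config/nvmet/ports/1/subsystems/testnqn

nvmetcli drives the same files from a friendlier interactive shell, and the nvmf-selftests branch above exercises this layout.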

Intel X[L]710 Ethernet controllers have NVM firmware vulnerability

Intel’s Security Center has a new advisory about the Intel Ethernet Controller X710 and XL710 NVM (firmware) image, excerpted below. An update is available.

Strangely, it is from yesterday (May 2016), but copyrighted 2013.

NVMe trade group: Where is the NVMe firmware equivalent of CHIPSEC? Where are the NVMe security tools? 😦

Potential vulnerability in the Intel® Ethernet Controller X710 and XL710 product families
Intel ID: INTEL-SA-00052
Product family: Intel® Ethernet Controller X710 and XL710 Product Families
Impact of vulnerability: Denial of Service
Severity rating:  Important
Original release: May 31, 2016

A vulnerability was identified in the Intel Ethernet Controller X710 and XL710 product family NVM image v5.02 and v5.03.  Intel released an update to mitigate this issue on 31MAY2016. Intel has found a security vulnerability in the v5.02 and v5.03 release of the NVM images for the Intel® Ethernet Controller X710 and XL710 family.  Intel has released a v5.04 NVM image to mitigate this issue.  This update is available to customers through the Intel Download Center. Intel highly recommends that customers of the affected products obtain and apply the updated versions of the NVM image through the Intel Download Center at (https://downloadcenter.intel.com/download/24769).
 
(C) 2013, Intel Corporation

https://security-center.intel.com/advisory.aspx?intelid=INTEL-SA-00052&languageid=en-fr

ezFIO – NVMe SSD Benchmark Tool

Quoting the blog and readme:

ezFIO is a Linux and Windows wrapper for the cross-platform FIO IO testing tool. It always includes a repeatable preconditioning (also known as “seasoning”) stage, to help simulate the true long-term performance of eSSDs. A wide variation in IO sizes and parallelism is run automatically.

ezFIO also includes long-term performance stability measures, which allow latency outliers and deviations to be numerically represented and graphically identified quickly. These new SSD metrics show Quality of Service, both at a macro level (standard deviation over time) and a micro level (measurement of minimum and maximum latency outliers). Finally, ezFIO takes all these results and produces an Open Document formatted spreadsheet, usable under Linux or Windows, with embedded graphs and raw test results, so that results are derived consistently and any NVMe device can be compared.

This test script is intended to give a block-level based overview of SSD performance (SATA, SAS, and NVMe) under real-world conditions by focusing on sustained performance at different block sizes and queue depths. Both text-mode Linux and GUI and text-mode Windows versions are included.

http://www.nvmexpress.org/blog/ezfio-powerful-simple-nvme-ssd-benchmark-tool/
https://github.com/earlephilhower/ezfio
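For a sense of what ezFIO automates, one point of the sweep it performs (a sustained 4K random-read run at queue depth 32) can be reproduced with plain fio; parameters are illustrative only, and write tests against a raw device destroy its contents:

    fio --name=randread-4k-qd32 --filename=/dev/nvme0n1 --direct=1 \
        --ioengine=libaio --rw=randread --bs=4k --iodepth=32 \
        --numjobs=1 --runtime=60 --time_based --group_reporting

ezFIO runs many such combinations of block size and queue depth after a preconditioning pass, and collects the results into the spreadsheet described above.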

new Intel patch adding TCG OPAL unlock to Linux NVMe

Rafael Antognolli of Intel posted a patch to the Linux-(NVMe,Block,Kernel) mailing lists, adding TCG OPAL unlock support to NVMe:

Add Opal unlock support to NVMe. This patch series implements a small subset of the Opal protocol for self-encrypting devices: only what is needed for saving a password and unlocking a given “locking range”. The password is saved in the driver and replayed back to the device on resume from suspend to RAM. It specifically supports the single user mode. There are no plans to implement the full Opal protocol (at least not for now).

Add optane OPAL unlocking code. This code is used to unlock a device during resume from “suspend to RAM”. It allows the userspace to set a key for a locking range. This key is stored in the module memory, and will be replayed later (using the OPAL protocol, through the NVMe driver) to unlock the locking range. The nvme_opal_unlock() will search through the list of saved devices + locking_range + namespaces + keys and check if it is a match for this namespace. For every match, it adds an “unlocking job” to a list, and after this, these jobs are “consumed” by running the respective OPAL “unlock range” commands (from the OPAL spec):
  * STARTSESSION
  * SET(locking range, readwrite)
  * ENDSESSION

NVMe: Add ioctls to save and unlock an Opal locking range. Two ioctls are added to the NVMe namespace: NVME_IOCTL_SAVE_OPAL_KEY and NVME_IOCTL_UNLOCK_OPAL. These ioctls map directly to the respective nvme_opal_register() and nvme_opal_unlock() functions. Additionally, nvme_opal_unlock() is called upon nvme_revalidate_disk, so it will try to unlock a locking range (if a password for it is saved) during PM resume.

For more information, see the post on the list archives:
http://lists.infradead.org/mailman/listinfo/linux-nvme

NVMe tool: SEDutil

Judith Vanderkay posted an article on the NVM Express blog about an updated release of the Drive Trust Alliance’s sedutil tool:

Drive Trust Alliance adds NVMe support to SEDutil:

Drive Trust Alliance maintains the popular sedutil application (formerly called msed), which eases configuration of Self-Encrypting Drives implementing the TCG OPAL specification. Until recently only SATA/SCSI drives were supported by sedutil. As of the 1.10 release, NVMe SEDs are officially supported by the Linux version of sedutil. This paves the way for NVMe OPAL SED adoption across a wide variety of datacenter, workstation, client, mobile, and IoT platforms.

http://www.nvmexpress.org/blog/drive-trust-alliance-adds-nvme-support-to-sedutil/
https://github.com/Drive-Trust-Alliance/sedutil
https://github.com/r0m30/msed
https://github.com/Drive-Trust-Alliance
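A quick way to check whether the new NVMe support sees a given drive; the option names below are from my recollection of the sedutil command line, so verify them against the project wiki before use:

    # list attached drives and whether each reports OPAL 1.0/2.0 support
    sedutil-cli --scan
    # dump the Level 0 discovery / feature information for one drive
    sedutil-cli --query /dev/nvme0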

NVMe Summit presentations available

The presentation PDFs (no A/V) are now available for the NVMe sessions at Flash Memory Summit, as well as NVMe’s presentations from IDF.

The Flash Memory Summit presentations ZIP includes all of the PDFs from that conference, including one on NVMe security, which discusses OPAL, Self-Encrypting Drives (SEDs), and integration with Trusted Computing standards, among other things.

http://www.nvmexpress.org/presentations/

AMI launches MG9095 controller for NVMe SSD storage

Today AMI (American Megatrends, Inc.) launches a new enclosure management solution for NVM Express SSD subsystems. The controller is firmware-upgradable through SMBus.

“A true, single-chip solution, the MG9095 backplane controller ships ready to use with no custom firmware or programming required,” said Subramonian Shankar, AMI CEO.

Read the full announcement:
http://www.ami.com/products/backplanes-and-enclosure-management/enclosure-management-asics/mg9095-controller/

http://www.ami.com/news/press-releases/?PressReleaseID=312&/American%20Megatrends%20Launches%20Enclosure%20Management%20Solution%20for%20NVM%20Express%E2%84%A2%20SSD%20Subsystems/

New NVM Express pass-through protocol in UEFI 2.5

Feng Tian of Intel recently checked in changes to the EDK II trunk for the EFI_NVM_EXPRESS_PASS_THRU_PROTOCOL, as part of the UEFI 2.5 check-ins. This UEFI NVM Express protocol provides services that allow NVM Express commands to be sent to an NVM Express controller or to a specific namespace in an NVM Express controller.

I’ve found the definitions in the code, but not an implementation, so either the check-in hasn’t happened yet, I’ve missed it, or it’s a closed-source implementation that won’t be in the TianoCore code; I’m unclear. If you know, please speak up!

For more information, see:
MdePkg/Include/Protocol/NvmExpressPassthru.h
and:
http://www.nvmexpress.org