U-Boot gets NVMe support

Zhikang Zhang of NXP added NVMe driver support to U-Boot.

Add support for devices that follow the NVM Express standard

 Basic functions:
    nvme init/scan
    nvme info – show basic information about the device
    nvme read/write

 Driver model (see the sketch after this list):
    Use the block device (CONFIG_BLK) structure to support NVMe under the driver model.
    Use UCLASS_PCI as the parent uclass.
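
A rough sketch of the driver-model hookup is shown below: a U-Boot DM driver is declared with the U_BOOT_DRIVER macro. The uclass ID and the callback body here are assumptions for illustration, not the actual driver source.

    /* Hypothetical sketch of a DM driver declaration for NVMe. */
    #include <common.h>
    #include <dm.h>
    #include <blk.h>

    static int nvme_probe(struct udevice *udev)
    {
            /* Map the PCI BARs, initialize the admin queue, then scan
             * namespaces and create a CONFIG_BLK block device for each. */
            return 0;
    }

    U_BOOT_DRIVER(nvme) = {
            .name  = "nvme",
            .id    = UCLASS_NVME,   /* assumption: an NVMe-specific uclass */
            .probe = nvme_probe,
    };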

The driver code borrows heavily from the NVMe driver in the Linux kernel.

Add nvme commands to the U-Boot command line (a registration sketch follows the list):

1. “nvme list” – show all available NVMe blk devices
2. “nvme info” – show current or a specific NVMe blk device
3. “nvme device” – show or set current device
4. “nvme part” – print partition table
5. “nvme read” – read data from NVMe blk device
6. “nvme write” – write data to NVMe blk device
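
In U-Boot, such commands are typically registered with the U_BOOT_CMD macro. A minimal sketch is shown below; the handler is a hypothetical placeholder, not the code from the patch.

    #include <common.h>
    #include <command.h>

    /* Hypothetical top-level handler that would dispatch to the
     * list/info/device/part/read/write subcommands. */
    static int do_nvme(cmd_tbl_t *cmdtp, int flag, int argc,
                       char * const argv[])
    {
            if (argc < 2)
                    return CMD_RET_USAGE;
            /* ... look up argv[1] in a subcommand table and run it ... */
            return CMD_RET_SUCCESS;
    }

    U_BOOT_CMD(
            nvme, 8, 1, do_nvme,
            "NVM Express sub-system",
            "list|info|device|part|read|write ..."
    );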

More info: U-Boot mailing list.

NVMe updates whitepapers

I think the first two links in their white papers section, covering NVMe and NVMe over Fabrics, have been revised:

White Papers

NVMe-over-Fabrics support added to Linux

Christoph Hellwig has submitted multiple patch sets building on Linux’s existing NVMe support to enable ‘NVMe over Fabrics’. I’ve included his initial comments for each of the four patch sets:

—–

general preparation for NVMe over Fabrics support

This patch set adds some needed preparations for the upcoming NVMe over Fabrics support. It contains:
– Allow transfer size limitations for NVMe transports
– Add the get_log_page command definition required by the NVMe target
– More helpers in core code that can be used by various transports
– Add some missing constants and Identify attributes
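
For context on the get_log_page definition mentioned above, here is roughly how a Get Log Page admin command is laid out by the NVMe specification (opcode 0x02, with the log identifier and dword count in CDW10). This is a simplified sketch, not the structure the patch set actually adds.

    #include <stdint.h>

    /* Simplified 64-byte NVMe admin command, shown for Get Log Page. */
    struct get_log_page_example {
            uint8_t  opcode;      /* 0x02 = Get Log Page */
            uint8_t  flags;
            uint16_t command_id;
            uint32_t nsid;        /* namespace ID, or 0xffffffff for controller logs */
            uint32_t cdw2[2];
            uint64_t metadata;
            uint64_t prp1;        /* data pointer for the returned log page */
            uint64_t prp2;
            uint32_t cdw10;       /* [7:0] log page ID (e.g. 0x02 = SMART/Health),
                                     upper bits: number of dwords to return, 0's based */
            uint32_t cdw11;
            uint32_t cdw12;
            uint32_t cdw13;
            uint32_t cdw14;
            uint32_t cdw15;
    };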

—–

NVMe over Fabrics target implementation

This patch set adds a generic NVMe over Fabrics target. The implementation conforms to the NVMe 1.2b specification (which includes Fabrics) and provides NVMe over Fabrics access to Linux block devices. The target implementation consists of several elements:
– NVMe target core: defines and manages the NVMe entities (subsystems, controllers, namespaces, …) and their allocation; it is responsible for initial command processing and correct orchestration of stack setup and teardown (a simplified sketch of these entities follows the list).
– NVMe admin command implementation: responsible for parsing and servicing admin commands (controller Identify, Set Features, Keep Alive, Get Log Page, …).
– NVMe I/O command implementation: responsible for performing the actual I/O (Read, Write, Flush, Deallocate (aka Discard)). It is a very thin layer on top of the block layer and implements no logic of its own. To export file systems, please use the loopback block driver in direct I/O mode, which gives very good performance.
– NVMe over Fabrics support: responsible for servicing Fabrics commands (connect, property get/set).
– NVMe over Fabrics discovery service: responsible for serving the Discovery log page through a special cut-down Discovery controller.
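
To make the entity hierarchy concrete, here is an illustrative set of structures; these are simplified stand-ins for exposition, not the actual data structures from the patch set.

    #include <stddef.h>
    #include <stdint.h>

    /* A namespace exports one backing block device. */
    struct example_ns {
            uint32_t    nsid;
            const char *device_path;
    };

    /* A subsystem groups namespaces and is identified by an NQN. */
    struct example_subsys {
            const char        *subsysnqn;
            struct example_ns *namespaces;
            size_t             nr_namespaces;
    };

    /* A controller is what a connecting host creates against a subsystem. */
    struct example_ctrl {
            uint16_t               cntlid;
            const char            *hostnqn;
            struct example_subsys *subsys;
    };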

The target is configured using configfs, and configurable entities are:
 – NVMe subsystems and namespaces
 – NVMe over Fabrics ports and referrals
 – Host ACLs for primitive access control – NVMe over Fabrics access control is still work in progress at the specification level and will be implemented once that work has finished.

To configure the target, use the nvmetcli tool from http://git.infradead.org/users/hch/nvmetcli.git, which includes detailed setup documentation.
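
As a rough sketch of what nvmetcli automates underneath, the following program creates a subsystem with one namespace and exposes it through a loopback port by writing nvmet configfs attributes. The paths and attribute names reflect the layout as merged upstream, but treat them as assumptions and check the patches or the nvmetcli documentation before relying on them.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Write a single string value into a configfs attribute file. */
    static int write_attr(const char *path, const char *val)
    {
            int fd = open(path, O_WRONLY);
            ssize_t ret;

            if (fd < 0)
                    return -1;
            ret = write(fd, val, strlen(val));
            close(fd);
            return ret < 0 ? -1 : 0;
    }

    int main(void)
    {
            const char *root = "/sys/kernel/config/nvmet";
            char p[256], t[256];

            /* Subsystem with one namespace backed by an example block device. */
            snprintf(p, sizeof(p), "%s/subsystems/testnqn", root);
            mkdir(p, 0755);
            snprintf(p, sizeof(p), "%s/subsystems/testnqn/attr_allow_any_host", root);
            write_attr(p, "1");
            snprintf(p, sizeof(p), "%s/subsystems/testnqn/namespaces/1", root);
            mkdir(p, 0755);
            snprintf(p, sizeof(p), "%s/subsystems/testnqn/namespaces/1/device_path", root);
            write_attr(p, "/dev/nvme0n1");
            snprintf(p, sizeof(p), "%s/subsystems/testnqn/namespaces/1/enable", root);
            write_attr(p, "1");

            /* Port using the loopback transport, then link the subsystem in. */
            snprintf(p, sizeof(p), "%s/ports/1", root);
            mkdir(p, 0755);
            snprintf(p, sizeof(p), "%s/ports/1/addr_trtype", root);
            write_attr(p, "loop");
            snprintf(t, sizeof(t), "%s/subsystems/testnqn", root);
            snprintf(p, sizeof(p), "%s/ports/1/subsystems/testnqn", root);
            symlink(t, p);
            return 0;
    }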

In addition to the Fabrics target implementation we provide a loopback driver which also conforms to the NVMe over Fabrics specification and allows evaluation of the target stack with local access, without requiring a real fabric.

Various test cases are provided for this implementation: nvmetcli contains a python testsuite that mostly stresses the configfs interface of the target, and we have various integration tests prepared for the kernel host and target which are available at:

    git://git.infradead.org/nvme-fabrics.git nvmf-selftests
    http://git.infradead.org/nvme-fabrics.git/shortlog/refs/heads/nvmf-selftests

This repository also contains patches from all the series posted today in case you prefer using a git repository over collecting patches.

NVMe over Fabrics RDMA transport drivers

This patch set implements the NVMe over Fabrics RDMA host and target drivers. The host driver is tied into the NVMe host stack and implements the RDMA transport under the NVMe core and Fabrics modules. The NVMe over Fabrics RDMA host module is responsible for establishing a connection against a given target/controller, RDMA event handling, and data-plane command processing.

The target driver hooks into the NVMe target core stack and implements the RDMA transport. The module is responsible for RDMA connection establishment, RDMA event handling, and data-plane RDMA command processing. RDMA connection establishment is done using RDMA/CM and IP resolution. The data-plane command sequence follows the classic storage model where the target pushes/pulls the data.

generic NVMe over Fabrics library support

This patch set adds the necessary infrastructure for the NVMe over Fabrics functionality and the NVMe over Fabrics library itself:
– First, we add some needed parameters to NVMe request allocation, such as flags (for reserved commands – connect and keep-alive), support tag allocation for a given queue ID (so connect can be executed per queue), and allow a request to be queued at the head of the request queue (so reconnects can pass in-flight I/O).
– Second, we add support for additional sysfs attributes that are needed or useful for the Fabrics driver.
– Third, we add the NVMe over Fabrics related header definitions and the Fabrics library itself, which is transport independent and handles Fabrics-specific commands and variables.
– Last, we add support for a periodic keep-alive mechanism, which is mandatory for Fabrics.

For more information, see the linux-(nvme,block,kernel) mailing lists.
http://lists.infradead.org/mailman/listinfo/linux-nvme

Intel X[L]710 Ethernet controllers have NVM firmware vulnerability

Intel’s Security Center has a new advisory about the Intel Ethernet Controller X710 and XL710 families and their NVM image, excerpted below. An update is available.

Strangely, it is from yesterday (May 2016), but copyrighted 2013.

NVMe trade group: Where is the NVMe firmware equivalent of CHIPSEC? Where are the NVMe security tools? 😦

Potential vulnerability in the Intel® Ethernet Controller X710 and XL710 product families
Intel ID: INTEL-SA-00052
Product family: Intel® Ethernet Controller X710 and XL710 Product Families
Impact of vulnerability: Denial of Service
Severity rating:  Important
Original release: May 31, 2016

A vulnerability was identified in the Intel Ethernet Controller X710 and XL710 product family NVM image v5.02 and v5.03.  Intel released an update to mitigate this issue on 31MAY2016. Intel has found a security vulnerability in the v5.02 and v5.03 release of the NVM images for the Intel® Ethernet Controller X710 and XL710 family.  Intel has released a v5.04 NVM image to mitigate this issue.  This update is available to customers through the Intel Download Center. Intel highly recommends that customers of the affected products obtain and apply the updated versions of the NVM image through the Intel Download Center at (https://downloadcenter.intel.com/download/24769).
 
(C) 2013, Intel Corporation

https://security-center.intel.com/advisory.aspx?intelid=INTEL-SA-00052&languageid=en-fr

ezFIO – NVMe SSD Benchmark Tool

Quoting the blog and readme:

ezFIO is a Linux and Windows wrapper for the cross-platform FIO IO testing tool. It always includes a repeatable preconditioning (also known as “seasoning”) stage, to help simulate the true long-term performance of eSSDs. A wide variation in IO sizes and parallelism is run automatically.

ezFIO also includes long-term performance stability measures, which allow latency outliers and deviations to be represented numerically and identified graphically. These SSD metrics show quality of service at both a macro level (standard deviation over time) and a micro level (measurement of minimum and maximum latency outliers). Finally, ezFIO collects all these results into an Open Document formatted spreadsheet, usable under Linux or Windows, with embedded graphs and raw test results, so that results are consistently derived and any NVMe devices can be compared.

This test script is intended to give a block-level overview of SSD performance (SATA, SAS, and NVMe) under real-world conditions by focusing on sustained performance at different block sizes and queue depths. A text-mode Linux version and both GUI and text-mode Windows versions are included.

ezFIO – Powerful, Simple NVMe SSD Benchmark Tool

https://github.com/earlephilhower/ezfio

new Intel patch adding TCG OPAL unlock to Linux NVMe

Rafael Antognolli of Intel posted a patch to the Linux-(NVMe,Block,Kernel) mailing lists, adding TCG OPAL unlock support to NVMe:

Add Opal unlock support to NVMe. This patch series implements a small subset of the Opal protocol for self-encrypting devices: only what is needed for saving a password and unlocking a given “locking range” is implemented. The password is saved in the driver and replayed back to the device on resume from suspend to RAM. It specifically supports the single-user mode. There are no plans to implement the full Opal protocol (at least not for now).

Add Optane OPAL unlocking code. This code is used to unlock a device during resume from “suspend to RAM”. It allows userspace to set a key for a locking range; the key is stored in the module memory and will be replayed later (using the OPAL protocol, through the NVMe driver) to unlock the locking range. nvme_opal_unlock() searches through the list of saved devices + locking_range + namespaces + keys and checks whether there is a match for this namespace. For every match, it adds an “unlocking job” to a list, and these jobs are then “consumed” by running the respective OPAL “unlock range” commands (from the OPAL spec):
  * STARTSESSION
  * SET(locking range, readwrite)
  * ENDSESSION

NVMe: Add ioctls to save and unlock an Opal locking range. Two ioctls are added to the NVMe namespace: NVME_IOCTL_SAVE_OPAL_KEY and NVME_IOCTL_UNLOCK_OPAL. These ioctls map directly to the respective nvme_opal_register() and nvme_opal_unlock() functions. Additionally, nvme_opal_unlock() is called upon nvme_revalidate_disk, so it will try to unlock a locking range (if a password for it is saved) during PM resume.
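
A hypothetical userspace sketch of how these ioctls might be used follows. The ioctl names come from the patch description, but the argument structure (here called struct opal_key_arg) and its fields are assumptions for illustration only; see the actual patches for the real interface.

    #include <fcntl.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/nvme_ioctl.h>   /* assumed to define the two new ioctls */

    /* Hypothetical argument layout, not the one from the patch. */
    struct opal_key_arg {
            uint32_t locking_range;
            uint32_t key_len;
            uint8_t  key[256];
    };

    int main(void)
    {
            struct opal_key_arg arg = { 0 };
            int fd = open("/dev/nvme0n1", O_RDWR);

            if (fd < 0)
                    return 1;

            arg.locking_range = 0;
            arg.key_len = strlen("example-password");
            memcpy(arg.key, "example-password", arg.key_len);

            /* Save the key in the driver so it can be replayed on resume... */
            if (ioctl(fd, NVME_IOCTL_SAVE_OPAL_KEY, &arg) == 0)
                    /* ...and unlock the locking range right away. */
                    ioctl(fd, NVME_IOCTL_UNLOCK_OPAL, &arg);

            close(fd);
            return 0;
    }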

For more information, see the post on the list archives:
http://lists.infradead.org/mailman/listinfo/linux-nvme
