
Intel on SSD secure erase feature

What is secure erase, and is it certified on an Intel® SSD?
by Doug DeVetter | July 31, 2017

I’m often asked whether the secure erase feature within Intel® SSDs is certified by NIST, U.S. DoD, or other government or industry bodies. Intel has implemented the secure erase feature consistent with the ATA and NVMe specifications. The designs and implementations have been internally reviewed and validated. A third-party has tested the implementation on a subset of our products and reported that the data was unrecoverable. Intel is unaware of any industry or government body which certifies or approves the implementation of this technical capability. NIST SP 800-88 is often cited as the guideline to be followed in the United States with regard to secure erase. NIST provides guidelines, however, NIST does not certify compliance to these guidelines. In addition to being consistent with the ATA and NVMe specifications, our implementation of secure erase is in line with the NIST guidelines for data sanitization.[…]
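For reference, the mechanisms the post refers to can be triggered from Linux with hdparm (ATA Security Erase) and nvme-cli (Format NVM with a Secure Erase Setting). A rough sketch, assuming the drive shows up as /dev/sdX or /dev/nvme0n1; both operations destroy all data on the device:

    # ATA secure erase on a SATA SSD: set a temporary password, then erase
    # (the drive must not be in the "frozen" security state)
    hdparm --user-master u --security-set-pass p /dev/sdX
    hdparm --user-master u --security-erase p /dev/sdX

    # NVMe: Format NVM with Secure Erase Setting 1 (user data erase)
    # or 2 (cryptographic erase), if the drive reports support for it
    nvme format /dev/nvme0n1 --ses=1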

https://itpeernetwork.intel.com/secure-erase-certified-intel-ssd/

https://www.intel.com/content/www/us/en/it-management/intel-it-best-practices/secure-erase-for-ssds-helps-sanitize-data-boost-efficiency-brief.html


Alpine Linux Persistence and Storage Summit

Christoph Hellwig announced this event on the linux-nvme mailing list.

We proudly announce the Alpine Linux Persistence and Storage Summit (ALPSS), which will be held from September 27-29 at the Lizumerhuette in Austria. The goal of this conference is to discuss the hot topics in Linux storage and file systems, such as persistent memory, NVMe, multi-pathing, raw or open-channel flash, and I/O scheduling, in a cool and relaxed setting with spectacular views of the Austrian Alps. We plan to have a small selection of short and to-the-point talks, and lots of room for discussion in small groups, as well as ample downtime to enjoy the surroundings. Attendance is free except for the accommodation and food at the lodge, but the number of seats is strictly limited. […] Note: The Lizumerhuette is an Alpine Society lodge in a high alpine environment. A hike of approximately 2 hours is required to reach the lodge, and no other accommodations are available within walking distance.

Full announcement:

http://lists.infradead.org/mailman/listinfo/linux-nvme
http://lists.infradead.org/pipermail/linux-nvme/2017-July/thread.html


U-Boot gets NVMe support

Zhikang Zhang of NXP added NVMe driver support to U-Boot.

Add support for devices that follow the NVM Express standard

 Basic functions:
    nvme init/scan
    nvme info – show the basic information of device
    nvme Read/Write

 Driver model:
    Use the block device (CONFIG_BLK) structure to support NVMe in the driver model.
    Use UCLASS_PCI as a parent uclass.

The driver code is heavily copied from the NVMe driver code in the Linux kernel.

Add nvme commands in U-Boot command line.

1. “nvme list” – show all available NVMe blk devices
2. “nvme info” – show current or a specific NVMe blk device
3. “nvme device” – show or set current device
4. “nvme part” – print partition table
5. “nvme read” – read data from NVMe blk device
6. “nvme write” – write data to NVMe blk device
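To give a feel for the new commands, a U-Boot console session with an NVMe drive might look roughly like this; the device number, load address, and block count are illustrative rather than taken from the patch, and nvme read/write follow the usual U-Boot block-command convention of memory address, starting block, and block count:

    => nvme scan
    => nvme list
    => nvme device 0
    => nvme info
    => nvme part 0
    => nvme read ${loadaddr} 0 0x800
    => nvme write ${loadaddr} 0 0x800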

More info: U-Boot mailing list.


NVMe updates whitepapers

I think the first two links in their whitepaper section, on NVMe and NVMe over Fabrics, have been revised:

White Papers


NVMe-over-Fabrics support added to Linux

Christoph Hellwig has submitted multiple patch sets building on Linux’s existing NVMe support to enable ‘NVMe over Fabrics’. I’ve included his initial comments for each of the four patch sets:

—–

general preparation for NVMe over Fabrics support

This patch set adds some needed preparations for the upcoming NVMe over Fabrics support. Contains:
– Allow transfer size limitations for NVMe transports
– Add the get_log_page command definition required by the NVMe target
– more helpers in core code that can be used by various transports
– add some missing constants and identify attributes

—–

NVMe over Fabrics target implementation

This patch set adds a generic NVMe over Fabrics target. The implementation conforms to the NVMe 1.2b specification (which includes Fabrics) and provides the NVMe over Fabrics access to Linux block devices. The target implementation consists of several elements:
– NVMe target core: defines and manages the NVMe entities (subsystems, controllers, namespaces, …) and their allocation; responsible for initial command processing and correct orchestration of stack setup and teardown.
– NVMe admin command implementation: responsible for parsing and servicing admin commands (such as controller identify, set features, keep-alive, log page, …).
– NVMe I/O command implementation: responsible for performing the actual I/O (Read, Write, Flush, Deallocate (aka Discard)). It is a very thin layer on top of the block layer and implements no logic of its own. To support exporting file systems, use the loopback block driver in direct I/O mode, which gives very good performance [see the loop-device sketch after this list].
– NVMe over Fabrics support: responsible for servicing Fabrics commands (connect, property get/set).
– NVMe over Fabrics discovery service: responsible for serving the Discovery log page through a special cut-down Discovery controller.
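As an aside on the file-system point in the I/O bullet above: exporting a file image through the target means first attaching it to a loop device in direct I/O mode. A minimal sketch, assuming a reasonably recent util-linux and a made-up backing file path:

    # attach the image with direct I/O; prints the allocated /dev/loopN
    losetup --direct-io=on --find --show /path/to/backing.img
    # the resulting /dev/loopN can then be used as a namespace backing device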

The target is configured using configfs, and configurable entities are:
 – NVMe subsystems and namespaces
 – NVMe over Fabrics ports and referrals
 – Host ACLs for primitive access control – NVMe over Fabrics access control is still work in progress at the specification level and will be implemented once that work has finished.

To configure the target, use the nvmetcli tool from http://git.infradead.org/users/hch/nvmetcli.git, which includes detailed setup documentation.
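For readers who want to see what nvmetcli is driving underneath, here is a minimal hand-rolled configfs sketch; the subsystem NQN (testnqn), backing device, and address are made-up examples:

    modprobe nvmet
    modprobe nvmet-rdma
    cd /sys/kernel/config/nvmet
    # subsystem with a single namespace backed by a local block device
    mkdir subsystems/testnqn
    echo 1 > subsystems/testnqn/attr_allow_any_host
    mkdir subsystems/testnqn/namespaces/1
    echo /dev/nvme0n1 > subsystems/testnqn/namespaces/1/device_path
    echo 1 > subsystems/testnqn/namespaces/1/enable
    # RDMA port, then expose the subsystem on it
    mkdir ports/1
    echo rdma         > ports/1/addr_trtype
    echo ipv4         > ports/1/addr_adrfam
    echo 192.168.1.10 > ports/1/addr_traddr
    echo 4420         > ports/1/addr_trsvcid
    ln -s /sys/kernel/config/nvmet/subsystems/testnqn ports/1/subsystems/testnqn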

In addition to the Fabrics target implementation we provide a loopback driver which also conforms to the NVMe over Fabrics specification and allows evaluation of the target stack with local access, without requiring a real fabric.
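With a target configured as in the sketch above, but with a port whose addr_trtype is set to loop instead of rdma, evaluating the whole stack locally then comes down to something like this (testnqn is the same made-up NQN):

    modprobe nvme-loop
    nvme connect -t loop -n testnqn
    nvme list    # the exported namespace appears as a regular /dev/nvmeXnY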

Various test cases are provided for this implementation: nvmetcli contains a python testsuite that mostly stresses the configfs interface of the target, and we have various integration tests prepared for the kernel host and target which are available at:

    git://git.infradead.org/nvme-fabrics.git nvmf-selftests
    http://git.infradead.org/nvme-fabrics.git/shortlog/refs/heads/nvmf-selftests

This repository also contains patches from all the series posted today in case you prefer using a git repository over collecting patches.

—–

NVMe over Fabrics RDMA transport drivers

This patch set implements the NVMe over Fabrics RDMA host and target drivers.

The host driver is tied into the NVMe host stack and implements the RDMA transport under the NVMe core and Fabrics modules. The NVMe over Fabrics RDMA host module is responsible for establishing a connection to a given target/controller, RDMA event handling, and data-plane command processing.

The target driver hooks into the NVMe target core stack and implements the RDMA transport. The module is responsible for RDMA connection establishment, RDMA event handling, and data-plane RDMA command processing. RDMA connection establishment is done using RDMA/CM and IP resolution. The data-plane command sequence follows the classic storage model, where the target pushes/pulls the data.
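On the host side, once nvme-rdma and a Fabrics-aware nvme-cli are in place, connecting to such a target looks roughly like this; the address, port, and NQN are the same illustrative values used earlier:

    modprobe nvme-rdma
    nvme connect -t rdma -a 192.168.1.10 -s 4420 -n testnqn
    nvme list
    nvme disconnect -n testnqn    # tear the association down again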

—–

generic NVMe over Fabrics library support

This patch set adds the necessary infrastructure for the NVMe over Fabrics functionality and the NVMe over Fabrics library itself. First, we add some needed parameters to NVMe request allocation, such as flags (for reserved commands – connect and keep-alive), support for tag allocation for a given queue ID (so connect can be executed per queue), and the ability to queue a request at the head of the request queue (so reconnects can pass in-flight I/O). Second, we add support for additional sysfs attributes that are needed or useful for the Fabrics driver. Third, we add the NVMe over Fabrics related header definitions and the Fabrics library itself, which is transport independent and handles Fabrics-specific commands and variables. Last, we add support for a periodic keep-alive mechanism, which is mandatory for Fabrics.

For more information, see the linux-(nvme,block,kernel) mailing lists.
http://lists.infradead.org/mailman/listinfo/linux-nvme


Intel X[L]710 Ethernet controllers have NVM firmware vulnerability

Intel’s Security Center has a new advisory about the Intel Ethernet Controller X710 and XL710 NVM image, excerpted below. An update is available.

Strangely, it is from yesterday (May 2016), but copyrighted 2013.

NVMe trade group: Where is the NVMe firmware equivalent of CHIPSEC? Where are the NVMe security tools? 😦

Potential vulnerability in the Intel® Ethernet Controller X710 and XL710 product families
Intel ID: INTEL-SA-00052
Product family: Intel® Ethernet Controller X710 and XL710 Product Families
Impact of vulnerability: Denial of Service
Severity rating:  Important
Original release: May 31, 2016

A vulnerability was identified in the Intel Ethernet Controller X710 and XL710 product family NVM image v5.02 and v5.03.  Intel released an update to mitigate this issue on 31MAY2016. Intel has found a security vulnerability in the v5.02 and v5.03 release of the NVM images for the Intel® Ethernet Controller X710 and XL710 family.  Intel has released a v5.04 NVM image to mitigate this issue.  This update is available to customers through the Intel Download Center. Intel highly recommends that customers of the affected products obtain and apply the updated versions of the NVM image through the Intel Download Center at (https://downloadcenter.intel.com/download/24769).
 
(C) 2013, Intel Corporation

https://security-center.intel.com/advisory.aspx?intelid=INTEL-SA-00052&languageid=en-fr
