SUSE on UEFI -vs- BIOS

I missed this blog post from SUSE last year:

[…]One UEFI topic that I noticeably did not address in this blog is secure boot. This was actually covered extensively in three previous blogs. To read those blogs do a search for “Secure Boot” at suse.com. I also did not address the comparison of UEFI and BIOS from the operating systems perspective in this blog. That is a separate blog that was released at the same time as this one (Comparison of UEFI and BIOS – from an operating system perspective). Please read it too. Hopefully this gives you some helpful information about the transition from BIOS to UEFI, on the hardware side. You can find more information about SUSE YES Certification at https://www.suse.com/partners/ihv/yes/ or search for YES CERTIFIED hardware at https://www.suse.com/yessearch/. You can also review previous YES Certification blogs at YES Certification blog post[…]

https://www.suse.com/communities/blog/comparison-uefi-bios-hardware-perspective/

Verified Boot

Marc Kleine-Budde of Pengutronix gave a talk at Embedded Linux Conference Europe (ELCE) on using Verified Boot.

http://linuxgizmos.com/protecting-linux-devices-with-verified-boot-from-rom-to-userspace/
http://elinux.org/images/f/f8/Verified_Boot.pdf

https://www.linux.com/news/event/elcna/2017/2/verified-boot-rom-userspace

More on SCONE

Re: SCONE, mentioned here: https://firmwaresecurity.com/2017/01/07/secure-linux-containers-with-intel-sgx/

 

SCONE: Secure Linux Containers with Intel SGX
Sergei Arnautov, Bohdan Trach, Franz Gregor, Thomas Knauth, Andre Martin, Christian Priebe, Joshua Lind, Divya Muthukumaran, Dan O’Keeffe, Mark L Stillwell, David Goltzsche, Dave Eyers, Rüdiger Kapitza, Peter Pietzuch, Christof Fetzer

In multi-tenant environments, Linux containers managed by Docker or Kubernetes have a lower resource footprint, faster startup times, and higher I/O performance compared to virtual machines (VMs) on hypervisors. Yet their weaker isolation guarantees, enforced through software kernel mechanisms, make it easier for attackers to compromise the confidentiality and integrity of application data within containers. We describe SCONE, a secure container mechanism for Docker that uses the SGX trusted execution support of Intel CPUs to protect container processes from outside attacks. The design of SCONE leads to (i) a small trusted computing base (TCB) and (ii) a low performance overhead: SCONE offers a secure C standard library interface that transparently encrypts/decrypts I/O data; to reduce the performance impact of thread synchronization and system calls within SGX enclaves, SCONE supports user-level threading and asynchronous system calls. Our evaluation shows that it protects unmodified applications with SGX, achieving 0.6×–1.2× of native throughput.[…]

https://www.usenix.org/conference/osdi16/technical-sessions/presentation/arnautov

https://www.usenix.org/system/files/conference/osdi16/osdi16-arnautov.pdf

https://www.usenix.org/sites/default/files/conference/protected-files/osdi16_slides_knauth.pdf

Linux Container hardening

Mission Statement: This project will focus on hardening of Linux containers. It will help contribute patches to the Kernel Self Protection Project that evolve the primitives in the Linux kernel used by containers (namespaces, cgroups, etc.) to be more secure. This will include brainstorming, designing, and implementing ways to future-proof namespaces et al. in the kernel. This will benefit all container runtimes by keeping the focus on improving the kernel in the subsystems used by containers. We will strive to push the entire container ecosystem to be more secure by fixing the ground it is built upon. Much like the Kernel Self Protection Project, we think of security beyond fixing bugs. Fixing bugs is important, and we will do that as well, but the main value will come from finding ways to future-proof namespaces with features that will eliminate undisclosed vulnerabilities. At this point, it is no longer enough to try to fix all the bugs within namespaces; we must try to find ways to make them more secure from the very foundation. We will also work to help educate people on how to use the features of Capabilities, Seccomp, AppArmor, and SELinux to harden their containers. We will try to find ways to make them more accessible to more users.

https://containerhardening.org/
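One of the kernel primitives container runtimes lean on (alongside namespaces, cgroups, and seccomp) is `prctl(PR_SET_NO_NEW_PRIVS)`: once set, the process and its children can never gain privileges through setuid or file-capability binaries, which is what makes installing a seccomp filter safe for unprivileged processes. A minimal Linux-only sketch via ctypes (the constants are from the `prctl(2)` man page; the helper name is mine):

```python
# Set and verify the no_new_privs flag, a building block of container
# hardening: after this, execve() can never grant additional privileges.
import ctypes

PR_SET_NO_NEW_PRIVS = 38
PR_GET_NO_NEW_PRIVS = 39

libc = ctypes.CDLL(None, use_errno=True)

def set_no_new_privs() -> None:
    # prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) needs no special capability
    if libc.prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0:
        raise OSError(ctypes.get_errno(), "prctl(PR_SET_NO_NEW_PRIVS) failed")

set_no_new_privs()
# The flag is one-way: it cannot be cleared, and is inherited across fork/exec.
assert libc.prctl(PR_GET_NO_NEW_PRIVS, 0, 0, 0, 0) == 1
```

Runtimes such as Docker set this (and a seccomp profile, dropped capabilities, and an LSM profile) on container init before handing control to the workload.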

ACPI graph support

Sakari Ailus of Intel posted a patch series to the linux-acpi list that enables ACPI graph support. A slightly edited version of the announcement is below:

I posted a previous RFC labelled set of ACPI graph support a while ago[1]. Since then, the matter of how the properties should be used as in ACPI _DSD was discussed in Ksummit and LPC, and a document detailing the rules was written [1]. I’ve additionally posted v1 which can be found here[2]. This set contains patches written by Mika Westerberg and by myself.

The patchset brings support for graphs to ACPI. The functionality achieved by these patches is very similar to what the Device tree provides: the port and the endpoint concept are being employed. The patches make use of the _DSD property and data extensions to achieve this. The fwnode interface is extended by graph functionality; this way graph information originating from both OF and ACPI may be accessed using the same interface, without being aware of the underlying firmware interface.

The last patch of the set contains ASL documentation including an example. The entire set may also be found here (on mediatree.git master, but it also applies cleanly on linux-next)[3]. The resulting fwnode graph interface has been tested using V4L2 async with fwnode matching and smiapp and omap3isp drivers, with appropriate changes to make use of the fwnode interface in drivers. The V4L2 patches can be found here. The fwnode graph interface is used by the newly added V4L2 fwnode framework which replaces the V4L2 OF framework, with equivalent functionality.[4]

[1] http://www.spinics.net/lists/linux-acpi/msg69547.html
[2] http://www.spinics.net/lists/linux-acpi/msg71661.html
[3] https://git.linuxtv.org/sailus/media_tree.git/log/?h=acpi-graph
[4] https://git.linuxtv.org/sailus/media_tree.git/log/?h=v4l2-acpi
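For flavor, here is a rough ASL sketch of the port/endpoint concept using the _DSD hierarchical data extension; the device names and the remote-endpoint target below are hypothetical, and the patchset's own ASL documentation is the authoritative example:

```
Device (CAM0)
{
    Name (_DSD, Package () {
        ToUUID("dbb8e3e6-5886-4ba6-8795-1319f52a966b"),  // data extension
        Package () { Package () { "port0", "PRT0" } }
    })
    Name (PRT0, Package () {
        ToUUID("dbb8e3e6-5886-4ba6-8795-1319f52a966b"),
        Package () { Package () { "endpoint0", "EP00" } }
    })
    Name (EP00, Package () {
        ToUUID("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),  // device properties
        Package () {
            Package () { "reg", 0 },
            // Hypothetical link to port 0, endpoint 0 of an ISP device
            Package () { "remote-endpoint", Package () { \_SB.PCI0.ISP, 0, 0 } }
        }
    })
}
```

This mirrors the Device tree graph binding: ports hang off a device, endpoints hang off ports, and remote-endpoint references tie two endpoints together.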

Full announcement, including changes for v1, is on the linux-acpi list:
http://vger.kernel.org/majordomo-info.html
http://www.spinics.net/lists/linux-acpi/

LUV gets telemetrics

Megha Dey of Intel just submitted a 4-part patch series to LUV, adding telemetrics. Below are slightly edited comments from the patch; some build instructions are omitted. For the full text, see the email at the URL at the end.

[Luv] [PATCH V1 0/4] Introduce telemetrics feature in LUV

This patchset consists of all the patches needed to enable the telemetrics feature in LUV. LUV brings together multiple separate upstream test suites into a cohesive and easy-to-use product and validates UEFI firmware at critical levels of the software stack. It may be possible that one of the test suites crashes while running. It may even be possible that a kernel panic is observed. Under these scenarios, it would be useful for LUV to call home and submit forensic data to analyze and address the problem. The telemetrics feature will do just this. Of course, this will be an opt-in feature (command-line argument telemetrics.opt-in) and users will get clear indication that data is being collected. We have used the telemetrics-client code developed by the Clear Linux team and tried to incorporate it in LUV. It has support for crashprobe (invoked whenever a core dump is created), oopsprobe (invoked when a kernel oops is observed), and pstore-probe (invoked when there is a kernel panic and the system reboots). In any of these scenarios, telemetrics records will be sent to the server, currently residing at (the one used by Clear Linux):
 http://rnesius-tmdev.jf.intel.com/telemetryui/
The build ID 122122 can be used to filter the LUV telemetrics records which can be further analysed. In due course, we will have to implement a server of our own to handle this. For telemetrics to work in LUV, the following changes were needed:

1. Migrate to systemd: LUV currently uses the SysV init manager, but since the telemetrics-client repo and the latest Yocto have moved to systemd, LUV also needs to migrate to systemd. Since systemd will not work with the existing psplash graphical manager, we have disabled the splash screen.

2. Migrate to Plymouth: LUV currently uses the psplash graphical manager, but since systemd is compatible only with the Plymouth graphical manager, we have migrated to Plymouth. PLEASE NOTE: Before migrating to Plymouth, we have to merge the morty branch of the meta-oe layer provided by OpenEmbedded into the LUV repo and add the meta-oe layer to the build/conf/bblayers.conf file. Here are the steps to do this: <omitted> The loglevel has been set to 0; otherwise there are lots of kernel messages overwriting the Plymouth screen. Hence, details about the individual tests in the test suites will not appear when the splash screen is set to false while using Plymouth. If the user wants the test details to be shown, they would have to remove the ‘quiet’ and ‘loglevel=0’ kernel command-line parameters.

3. Enable networking: After shifting to systemd, the LUV image is not assigned an IP on boot, because it is still using a SysV startup script to do so. Since systemd names its interfaces differently, we could not see any interfaces with a valid IP. This patch adds the networkd package, introduces a network config file which starts DHCP by default for all interfaces whose names start with en (PCI devices which get renamed by udev) or eth (backward compatible), and a service file (networking.service) which will bring up the network and make sure an IP is assigned during boot. It references:
    https://wiki.archlinux.org/index.php/systemd-networkd
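The config file described in item 3 would presumably resemble a standard systemd-networkd unit; the file name below is hypothetical, but the match-and-DHCP pattern is exactly what the description calls for:

```ini
# Hypothetical /etc/systemd/network/20-wired.network
[Match]
# en* covers PCI devices renamed by udev; eth* keeps backward compatibility
Name=en* eth*

[Network]
DHCP=yes
```

systemd-networkd applies the `[Network]` settings to every interface whose name matches the `[Match]` globs, so one file covers both naming schemes.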

4. Enable telemetrics in LUV: A Yocto recipe which fetches the clear-linux telemetrics-client repo, builds it, and installs all the necessary service files, daemons, and probes has been added. It also adds a kernel command-line parameter which lets the user opt in to the telemetrics feature (telemetrics.opt-in). By default, this feature is disabled. Currently, the telemetrics records can be found here: http://rnesius-tmdev.jf.intel.com/telemetryui/
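Opting in is then just a matter of appending the parameter named in the patch to the kernel command line at boot:

```
# Append to the kernel command line to enable telemetrics (off by default):
telemetrics.opt-in
```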

Full announcement and patch:
https://lists.01.org/mailman/listinfo/luv
