Support for RISC-V architecture (@dlrobertson)
Brand new skin, designed by our own @Grazfather
New command print-format
New convenience variables / functions ($_pie, $_heap) by @wbowling
Better AARCH64 support
All command output is now buffered: less I/O, more perf
“Repeatable” commands are in
PyEnv support (@hazedic)
Ditched Travis-CI for Circle-CI
Glibc Tcache bins support
Colorized hexdump byte (pwntools-like)
Bugfix in x86 EFLAGS parsing
Better and more unit tests
More caching (on key functions, settings, etc.)
Fixed the doc
(ARM) Automatically adjust GEF mode from the CPSR flags
Bugfix in capstone integration
Fixed minor issues in format-string-helper
Fixed IDA integration, thx @cclauss
And more minor bugfixes and speed improvements
Welcome to salt, a tool to reverse and learn kernel heap memory management. It can be useful for developing an exploit, for debugging your own kernel code, and, more importantly, for playing with kernel heap allocations and learning the allocator's inner workings.
This tool helps trace allocations and the current state of the SLUB allocator in modern Linux kernels.
It is written as a gdb plugin, and it allows you to trace and record memory allocations and to filter them by process name or by cache. The tool can also dump the list of active caches and print relevant information.
This repository also includes a playground loadable kernel module that can trigger allocations and deallocations at will, to serve both as a debugging tool and as a learning tool to better understand how the allocator works.
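Like most GDB Python plugins, a tool of this kind is loaded by sourcing its script into a kernel debugging session; the paths below are placeholders, not salt's actual file names:

```shell
# Attach gdb to a running kernel (e.g. a QEMU guest started with -s)
gdb ./vmlinux
(gdb) target remote :1234       # QEMU's default gdbstub port
(gdb) source /path/to/salt.py   # load the plugin; path is hypothetical
```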
Time Travel Debugging is now available in WinDbg Preview
We are excited to announce that Time Travel Debugging (TTD) features are now available in the latest version of WinDbg Preview. About a month ago, we released WinDbg Preview which provides great new debugging user experiences. We are now publicly launching a preview version of TTD for the first time and are looking forward to your feedback.[…]
As I hear, TTD has been used internally at Microsoft for years, and it is only now being released to the public. Though the implementations differ, GDB has had reverse execution for a while.
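For comparison, GDB's reverse execution is built on its process record facility; a typical session looks like this:

```shell
(gdb) break main
(gdb) run
(gdb) record            # start recording execution
(gdb) next              # ...step forward past the bug...
(gdb) reverse-step      # step backwards one source line
(gdb) reverse-continue  # run backwards to the previous breakpoint
(gdb) record stop       # stop recording
```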
gdb-symbolic – symbolic execution extension for gdb
* symbolize argv - Make argv symbolic
* memory [address] [size]
* target address - Set target address
* triton - Run symbolic execution
* answer - Print symbolic variables
* debug [symbolic|gdb] - Show debug messages
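Judging from the command list, a session would presumably flow along these lines (the address is made up, and the exact ordering is an assumption, not taken from the project's docs):

```shell
(gdb) start
(gdb) symbolize argv    # mark argv as symbolic input
(gdb) target 0x400b2d   # address of the code path to reach (hypothetical)
(gdb) triton            # run the Triton-based symbolic execution
(gdb) answer            # print the concrete values found for the symbolic variables
```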
Binary Ninja plugin for Voltron integration.
* Synchronise the selected instruction in Binary Ninja with the instruction pointer in the debugger
* Mark breakpoints that are set in the debugger in Binary Ninja
* Set and delete breakpoints in the debugger from Binary Ninja
“Strongdb is a gdb plugin written in Python to help with debugging Android native programs. The main code uses the gdb Python API.”
“ARMPwn: Repository to train/learn memory corruption exploitation on the ARM platform. This is the material of a workshop I prepared for my CTF Team”
ARMPWN challenge write-up:
A few weeks ago, I came across @5aelo's repo called armpwn, for people wanting to have a bit of ARM fun. I had recently spent some time adding new features and perfecting old ones in my exploit helper gdb-gef, and I saw in it a perfect practice case. On top of that, I had nothing better to do yesterday ☺ This challenge was really fun, and made so much easier thanks to gef, especially for defeating real-life protections (NX/ASLR/PIC/Canary) on a different architecture (Intel is so ’90s). This is mostly why I’m doing this write-up, but if you feel curious, do it yourself. Fun time ahead guaranteed ☺ […]
If you have not looked at Voltron, by Jim Fear, please check it out; it is quite powerful:
Voltron is an extensible debugger UI toolkit written in Python. It aims to improve the user experience of various debuggers (LLDB, GDB, VDB and WinDbg) by enabling the attachment of utility views that can retrieve and display data from the debugger host. By running these views in other TTYs, you can build a customised debugger user interface to suit your needs. Voltron does not aim to be everything to everyone. It’s not a wholesale replacement for your debugger’s CLI. Rather, it aims to complement your existing setup and allow you to extend your CLI debugger as much or as little as you like. If you just want a view of the register contents in a window alongside your debugger, you can do that. If you want to go all out and have something that looks more like OllyDbg, you can do that too.
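Getting started is mostly a matter of sourcing Voltron's entry script in the debugger and opening views in other terminals; the entry-script path varies by install, and view names may differ slightly between versions:

```shell
# In ~/.gdbinit (path depends on how Voltron was installed):
source /path/to/voltron/entry.sh

# In separate terminals, while the debugger is running:
voltron view register
voltron view stack
voltron view disasm
```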
ret-sync stands for Reverse-Engineering Tools synchronization. It’s a set of plugins that help to synchronize a debugging session (WinDbg/GDB/LLDB/OllyDbg2/x64dbg) with IDA disassembler. The underlying idea is simple: take the best from both worlds (static and dynamic analysis).
From debuggers and dynamic analysis we got:
local view, with live dynamic context (registers, memory, etc.)
built-in specialized features/API (e.g. WinDbg’s !peb, !drvobj, !address, etc.)
From IDA and static analysis we got:
macro view over modules
code analysis, signatures, types, etc.
fancy graph view
persistent storage of knowledge within IDBs
Pass data (comment, command output) from debugger to disassembler (IDA)
Multiple IDBs can be synced at the same time, making it easy to trace through multiple modules
No need to deal with ASLR; addresses are rebased on the fly
IDBs and debugger can be on different hosts
ret-sync is a fork of qb-sync that I developed and maintained during my stay at Quarkslab.
Modular visual interface for GDB in Python. This comes as a standalone single-file .gdbinit which, among other things, enables a configurable dashboard showing the most relevant information during program execution. Its main goal is to reduce the number of GDB commands issued to inspect the current program status, allowing the programmer to focus on the control flow instead.
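Configuration happens through the `dashboard` command itself from within GDB; for example (module and style names as I recall them from the project's README, so double-check against your version):

```shell
(gdb) dashboard -layout source assembly registers stack   # pick and order modules
(gdb) dashboard source -style height 15                   # tweak a module attribute
(gdb) dashboard                                           # manually redraw
```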
The other day I asked on the UEFI development list about hardware options for debugging UEFI, for security researchers (many UEFI debugging solutions only target OEMs/IBVs/IHVs). I was mainly thinking about the Intel Tunnel Mountain, and related widgets like the Intel ITP-XDP, and other options in addition to AMI’s AMIdebug and Arium’s ITP products. It looks interesting; previously, all I knew of were Arium’s ITP and AMI’s AMIdebug products. Some of the commercial tools (e.g., Arium) have new, expensive, closed-source debuggers, and don’t enable use of existing GDB/WinDbg skills.
Zachary Gray of the Intel System Studio debugger team replied with some answers:
Tunnel Mountain is OK, but it is a bit old (by Intel standards). If you are looking for a small target for UEFI research purposes I *strongly* recommend the MinnowBoard Max. This target has freely available firmware that is mostly public, you can compile the firmware with GCC or MSVC, it is debuggable using JTAG (Arium, or our Intel System Studio debugger with the ITP-XDP3 probe), and it is also debuggable using the agent-based solution that is built into the image (cheap and easy!).
With regards to Intel System Studio vs Arium, I would say that the Arium JTAG debugger has a broader feature set that is proportional to their price, but the System Studio JTAG debugger also provides a useful feature set at a lower cost. We can debug all the UEFI code from reset to boot loader, and also into the OS if you have an interest there as well. With any of these debug tools you do not need a special compiler, you just use a standard debuggable build of the firmware.
Another even cheaper alternative would be to debug an Intel Galileo board using a low-cost FTDI probe such as an Olimex with the OpenOCD open-source JTAG debug server. The board + probe would be <$200 and the firmware is publicly available. For a front-end to the OpenOCD debug server you can use either the Intel System Studio debugger, or simply use GDB. Some google searches will lead you to some documentation on all this. The catch is that Galileo is not a “normal” Intel CPU so depending on your research interest the firmware may not present the concepts you are interested in.
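The OpenOCD-plus-GDB workflow he describes is the standard one: start the debug server against the probe, then attach GDB to its gdbstub port. The config file names below are examples and depend on the exact Olimex probe and OpenOCD version:

```shell
# Terminal 1: start OpenOCD with interface and board configs
openocd -f interface/ftdi/olimex-arm-usb-ocd-h.cfg -f board/quark_x10xx_board.cfg

# Terminal 2: attach GDB to OpenOCD's default gdbstub port
gdb
(gdb) target remote localhost:3333
(gdb) monitor halt
```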
Bruce Cran also replied with some lower-cost options:
Another even cheaper alternative, if you don’t need real hardware, is qemu+OVMF. It’s what I use to debug UEFI problems with the PCI driver I work on (using gdb).
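The qemu+OVMF setup needs no special hardware: QEMU exposes a gdbstub that GDB attaches to. The OVMF.fd path below is a common distro location, not a given:

```shell
# -s opens the gdbstub on :1234, -S halts the CPU at startup
qemu-system-x86_64 -bios /usr/share/OVMF/OVMF.fd -s -S

# In another terminal:
gdb
(gdb) target remote localhost:1234
(gdb) continue
```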
Alternatively, if you don’t need an x86-based system, I believe the BeagleBone/BeagleBoard (with ARM CPU) can be reflashed to edk2 firmware and debugged with a Flyswatter2 – with the main problem likely being surface-mounting the JTAG header!
I also received some more advice via private email, from a firmware security researcher. Paraphrasing, they felt that the MinnowBoard Max is not much better than the Galileo for UEFI debugging, because it has many more blobs in its “open source” BIOS, and the Minnow is harder to debug than the Galileo, since modern Atom CPUs are complex. Their advice for low-cost debugging: Galileo over Minnow, a $20 JTAG probe, and free GDB.
Thanks for all the advice!!
Obviously, this post doesn’t cover ARM [much], just Intel and virtual solutions. I’m still learning the best options for ARM-based hardware and UEFI, and will post another article on ARM in the future.