Category: Linux

Parsing Registry files with RegRipper

The registry of a Windows system contains a lot of good data that can be used for forensic analysis. Parsing that data from dead-box forensics (a bit image) using RegRipper will provide you with a lot of useful information. RegRipper is an automated hive parser that can parse the forensic contents of the SAM, SECURITY, SYSTEM, SOFTWARE, and NTUSER.DAT hives it is pointed at. You can even use it to forensically mine the contents of restore-point registry files. RegRipper utilizes plugins, and aside from the default ones included at installation, more are available online. The program is available for use on Linux or Windows; the Windows variant includes a GUI. RegRipper can be invoked by pointing -r HIVEFILE at the hive you would like to mine forensically. You also need to tell RegRipper the type of hive file it is (sam, security, software, system, ntuser). Hives can be found at C:\Windows\system32\config, and NTUSER.DAT is located at the root of each user's profile. Once RegRipper is installed on your system, you can use the syntax and options below to get started.

# rip.pl -r HIVEFILE -f HIVETYPE
[Useful Options]
-r Registry hive file to parse
-f Hive type (sam, security, software, system, ntuser)
-l List all plugins
-h Help
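To make that concrete, the invocations below run RegRipper's command-line front end against a SYSTEM hive and a user's NTUSER.DAT. The case paths and output file names are assumptions for the example.

```shell
# Parse a SYSTEM hive pulled from C:\Windows\system32\config (path illustrative)
rip.pl -r /cases/box1/config/SYSTEM -f system > system_report.txt

# Parse a specific user's NTUSER.DAT from the root of their profile
rip.pl -r /cases/box1/jdoe/NTUSER.DAT -f ntuser > jdoe_ntuser.txt

# List every available plugin
rip.pl -l
```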

No Need to Unzip, Just Use Zcat or Zgrep

There will be times when you encounter a gzipped file and want to quickly parse it without having to unzip it. When that time comes, zcat and zgrep will be your saviors. The usage of both is very straightforward, but there are man pages for both for further reading. Basic usages of the two are depicted below.

Display the contents of a zipped file

Search for specific characters/words in zipped files.
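Both usages can be demonstrated end to end with a throwaway file (the file name is made up for the example):

```shell
# Create a sample gzipped file to work with
printf 'alpha\nbeta\ngamma\n' > sample.log
gzip -f sample.log                 # produces sample.log.gz

# Display the contents of a zipped file without unzipping it
zcat sample.log.gz

# Search for specific characters/words in zipped files
zgrep beta sample.log.gz
```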

Analyzing Various Memory Capture Formats

In a world where there are so many choices for capturing memory and analyzing it, I felt there would be some benefit in compiling a list for quick reference.

FTK Imager
– Outputs to .mem
– Can be analyzed in Volatility: -f <capture.mem> --profile=<profile>

VMWare (.vmem)
– .vmem and .vmss files are created when a VM is suspended
– Can be analyzed with Volatility (the .vmem and .vmss have to be in the same directory): -f <file.vmem> --profile=<profile>

– Outputs to .raw
– Can be analyzed in Volatility: -f <capture.raw> --profile=<profile>

Hibernation file (hiberfil.sys)
– The file is created when a system is put into hibernation mode
– Located at the root of C:\
– The file needs to be converted before use. It can be converted to .img using Volatility's imagecopy: -f hiberfil.sys -O <output.img> --profile=<profile>
– After conversion to .img, it can be analyzed in Redline or Volatility: --profile=<profile>

Mandiant Memoryze
– Outputs to .img
– Can be analyzed in Redline or Volatility: --profile=<profile>

Crash Dumps
– Extension will be .dmp
– Will be written to C:\Windows\Minidump or C:\Windows by default
– Dumps can be forced by adding a value named CrashOnCtrlScroll with a REG_DWORD value of 0x01 at HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\kbdhid\Parameters. After rebooting the machine, hold down the rightmost CTRL key and press the SCROLL LOCK key twice
– Can be analyzed with Volatility: --profile=<profile>
– Can be analyzed in Redline but must be converted to .img first using imagecopy: -f <dump.dmp> -O <output.img> --profile=<profile>
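The Volatility invocations referenced above follow the same pattern regardless of capture format. A hedged sketch, with file and profile names made up for the example:

```shell
# Identify a workable profile for a capture
vol.py -f memdump.mem imageinfo

# Run a plugin once the profile is known
vol.py -f memdump.mem --profile=Win7SP1x64 pslist

# Convert a hibernation file (or crash dump) to a raw image for Redline
vol.py -f hiberfil.sys --profile=Win7SP1x64 imagecopy -O hiberfil.img
```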

Network Grep for the Folks Who Love to Grep!

Network grep (ngrep) is a great program that allows you to search and filter network packets rather quickly. There is some resemblance to the well-known Linux grep program. Ngrep can analyze live traffic or saved pcaps. The man pages for ngrep are rather straightforward. Ngrep currently recognizes IPv4/6, TCP, UDP, ICMPv4/6, and IGMP. The program also understands regular expressions and hex expressions, which is a huge benefit. In the simplest terms, ngrep applies the most common features of grep at the network layer. A few key switches that I typically use are below, but a full list can be found in the man pages.

-q | Will ‘quiet’ the output by printing only packet headers and relevant payloads
-t | Print the timestamp every time there is a match
-i | Ignore case
-I | Read in saved pcap
-w | Expression must match word – regex
-W byline | Linefeeds are printed as linefeeds, making the output pretty and more legible
-s | Set BPF capture length

Below are a few examples of common usages of ngrep.

This command will query all interfaces and protocols for a string match of ‘HTTP’.

If you have a network capture file in .pcap format, use -I $FILE to filter the capture instead of a network interface. This can be handy, for example, if you have a record of a networking event and you need to do a quick analysis.

Reverse of the above command: using the -O flag will filter against a network interface and copy the matched packets into a capture file in .pcap format.

Search for .exe

Monitor for current email transactions and print the addresses.

This will grab the password and username of all ftp sessions.

Capture network traffic incoming to eth0 interface and show parameters following HTTP GET or POST methods

Monitor all traffic on your network using port 80 with a source IP of

Monitor all traffic on your network using port 80 with a source IP of and destination of

Search for the word “login” traversing port 23 using regex

The match expression can be combined with a pcap filter. For example, suppose we wanted to look for DNS traffic mentioning a particular domain.
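Hedged examples of what the invocations described above might look like. The interfaces, IPs, and file names are all illustrative, and ngrep generally needs root to sniff live traffic:

```shell
# Query all interfaces and protocols for a string match of 'HTTP'
ngrep -q 'HTTP'

# Filter a saved .pcap capture instead of a live interface
ngrep -q -I capture.pcap 'HTTP'

# Reverse of the above: write matching packets out to a capture file
ngrep -O matches.pcap -q 'HTTP'

# Search for .exe
ngrep -q '\.exe'

# Monitor current email transactions and print the addresses
ngrep -i 'rcpt to|mail from' tcp port 25

# Grab the password and username of FTP sessions
ngrep -i 'user|pass' tcp port 21

# Show parameters following HTTP GET or POST methods on eth0
ngrep -d eth0 -q -W byline '^(GET|POST) ' tcp and port 80

# Port 80 traffic with a given source IP (empty match expression, filter only)
ngrep -d eth0 '' port 80 and src host 192.168.1.25

# ...and with a destination host as well
ngrep -d eth0 '' port 80 and src host 192.168.1.25 and dst host 10.0.0.5

# Search for the word "login" traversing port 23, using regex word-matching
ngrep -q -i -w 'login' port 23

# Combine a match expression with a pcap filter: DNS traffic mentioning a domain
ngrep -q 'example.com' udp port 53
```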

Berkeley Packet Filter (BPF) adds to the flexibility of ngrep. BPF specifies a rich syntax for filtering network packets based on information such as IP address, IP protocol, and port number.

IP address

IP protocol

Port number

For even more granularity, you can combine primitives using the Boolean connectives and, or, and not to really specify what you're looking for.
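Hedged one-liners for those primitives and their combination (all addresses and ports are illustrative):

```shell
# Primitives: host (IP address), proto (IP protocol), port (port number)
ngrep -q '' host 192.168.1.25
ngrep -q '' icmp
ngrep -q '' port 443

# Combine primitives with and/or/not for more granularity
ngrep -q 'token' tcp and port 8080 and not host 10.0.0.1
```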

Extracting Data with Bulk Extractor

When it comes to forensics, styles and methodologies may vary from person to person (or organization). Some methods take longer than others, and results may vary. One tool/technique that I lean on time and time again is Bulk Extractor. Bulk Extractor is a program that enables you to extract key information from digital media. Its usage is valuable no matter the type of case you may be working. A list of the types of information it can extract is depicted on the project's webpage.

There are Windows and Linux variants of the program, both capable of running from the command line or a GUI. It is 4-8 times faster than other tools like EnCase or FTK due to its multi-threading. The program is capable of handling image files, raw devices, or directories. After it completes, it outputs its findings to an .xml file, which can be read back into Bulk Extractor for analysis.
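A command-line run is a single invocation. The image and directory names below are assumptions for the example:

```shell
# Run the default scanners against an image, writing results to bulk_out/
bulk_extractor -o bulk_out target_image.raw

# Run only the email scanner (-E disables all scanners except the one named)
bulk_extractor -o bulk_out_email -E email target_image.raw
```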


The scanners that you selected to run against your image file will write out to a report in the reports column. Not all scanners generate their own report, as some bucket the information they find with another report. The chart above can help you determine where a scanner will output. Also, when a selected scanner doesn't return any suitable data, you will not see a report for it. When you do select a report, it will output its findings to the middle column. From there you can type in strings to search for, or just scroll down to view the data. If you want to go further into the data, just click on one of the findings in the middle column and more output will appear in the image column all the way to the right. The image column by default will display the text and the location of the data in the image file. There is an option, though, to change the image output from text to hex.


Linux Secure Copy (SCP)

SCP is a must for quick transfer of files in native environments. In order to interact with a Windows machine, an SSH server is needed on that system, but you may be able to get around that by specifying a different port.

Below are a few examples of how it can help you in your daily work.

Copy the file “some_data.txt” from a remote host to the local host

Copy the file “some_data.txt” from the local host to a remote host

Copy the directory “some_dir” from the local host to a remote host’s directory “data”

Copy the file “data.txt” from remote host “sys_1” to remote host “sys_2”

Copying the files “data.txt” and “more_data.txt” from the local host to your home directory on the remote host

Copy the file “data.txt” from the local host to a remote host using port 2264

Copy multiple files from the remote host to your current directory on the local host
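The commands for each of the cases above might look like this. The user names, host names, and paths are assumptions for the example:

```shell
# Copy the file "some_data.txt" from a remote host to the local host
scp user@remote_host:some_data.txt /local/dir/

# Copy the file "some_data.txt" from the local host to a remote host
scp some_data.txt user@remote_host:/remote/dir/

# Copy the directory "some_dir" into a remote host's directory "data"
scp -r some_dir user@remote_host:data/

# Copy "data.txt" from remote host "sys_1" to remote host "sys_2"
scp user@sys_1:data.txt user@sys_2:/remote/dir/

# Copy "data.txt" and "more_data.txt" to your home directory on the remote host
scp data.txt more_data.txt user@remote_host:~

# Copy "data.txt" to a remote host using port 2264
scp -P 2264 data.txt user@remote_host:/remote/dir/

# Copy multiple files from the remote host to your current directory
scp user@remote_host:"data.txt more_data.txt" .
```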

Traffic Generators

These tools will generate traffic and transmit it, retransmit traffic from a capture file, perhaps with changes, or permit you to edit traffic in a capture file and retransmit it.

• Bit-Twist includes bittwist, to retransmit traffic from a capture file, and bittwiste, to edit a capture file and write the result to another file (GPL, BSD/Linux/OSX/Windows)

• Cat Karat is an easy packet generation tool that allows you to build custom packets for firewall or target testing and has integrated scripting ability for automated testing. (Windows)

• D-ITG (Distributed Internet Traffic Generator) is a platform capable of producing traffic at the packet level, accurately replicating appropriate stochastic processes for both IDT (Inter-Departure Time) and PS (Packet Size) random variables (exponential, uniform, cauchy, normal, pareto, …).

• epb (ethernet package bombardier) is a simple CLI tool for generating/converting ethernet packets from plain text/pcap/netmon/snoop files. (BSD like, Linux/Unix)

• Mausezahn is a free fast traffic generator written in C which allows you to send nearly every possible and impossible packet.


Unzip a file that is zipped many times

This script is used for unzipping zipped files nested inside of a zipped file, where the zipped files are password protected. I developed this because it seems like every capture-the-flag event I do has a scenario where this could be used.

This Bash script can be found in my script repo on the right-hand side of the screen.
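The gist of the script can be sketched as a loop. The starting file name and password here are assumptions, not the values from my repo:

```shell
#!/bin/bash
# Repeatedly unzip password-protected zips nested inside one another.
PASS="infected"
f="target.zip"
while [ -n "$f" ] && [ -f "$f" ]; do
  unzip -o -P "$PASS" "$f" -d . >/dev/null || break
  rm -f "$f"
  # whatever zip was just extracted becomes the next target
  f=$(ls -1 *.zip 2>/dev/null | head -n 1)
done
```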

WMI on Linux

WMI is a great way to query Windows systems without being so intrusive. As of late, I have been dealing with it more and more. Typically, I use a Windows system to query another Windows system, but the lack of speed inherent in the Windows OS always has me searching for better ways to complete simple tasks. I quickly turned to Linux, as its speed is one of the many great features of the OS. Using WMI within Linux is achievable, although many may not know it. Getting started is pretty simple; check out the steps below.

1. Install the repo (CentOS 6 or newer).
[nando@localhost home]$ rpm -Uvh

2. Install WMIC from the repository.
[nando@localhost home]$ yum -y install wmi

Some common queries and what they grab are below.

List the command line, name, and PID of each running process:
wmic -U admin%admin1234 // “SELECT CommandLine,Name,ProcessId FROM Win32_Process”

Pull general information about the computer system:
wmic -U admin%admin1234 // “SELECT * FROM Win32_ComputerSystem”

Pushpin… Taking Reconnaissance to Another Level

If you are on the offensive side, part of your strategy encompasses reconnaissance at some point. If you are on the defensive side, there is still reconnaissance to be done in order to see what is available about you. Well, a great tool to add to your tool bag is Recon-ng, as it makes the recon process simple and seamless. An awesome feature of the program is Pushpin. Pushpin allows you to utilize APIs and grid coordinates in order to display any postings within a designated area. This capability is incredible and could be used for a number of reasons. In any case, a list of the currently supported APIs can be found in the project documentation. In most cases, you will have to register with the site you are trying to get an API key for. Some of the APIs include Twitter, YouTube, LinkedIn, and Instagram. Also, the program has a Metasploit-type feel, so if you are comfortable with that, you will do just fine. The source code is available online.

To give you a feel for how simple it is, I’ll walk through running the program with Twitter APIs and we will use the Georgia Dome in Atlanta as our area of interest. We will start at the point following installation.

Splunk vs. ELK Stack

When conversing about log collection and correlation on an Enterprise level, Splunk almost always comes up in the conversation. While I am an avid Splunk fan, outside of the free version, it can be a little expensive. ELK (Elasticsearch, Logstash, and Kibana) is very comparable to Splunk, in my opinion. Through my research and hands-on experience with the two, I've formulated the below thoughts and comparison.


Cost (Monetarily):

Splunk: Free up to 500MB a day. The paid version has unlimited indexing per day.

ELK: Free. There is a newer paid version that comes with support.


Cost (Time):

Splunk: One could have it up and running rather quickly.

Converting a DD image into a VM – pt. 2

This is part 2 of the tutorial to convert a DD image into a VM. The below instruction picks up from the point where one already has a DD image, unzipped and uncompressed. To finish the task, please read on.

1. Copy the target_image from your linux forensics system to your Windows forensics system

2. To convert the raw file into a virtual machine using Live View, change the extension of the target_image raw file to .dd

3. Create a folder on the desktop of your Windows forensics system in which we will put the VM after conversion.

4. Open the Live View 0.8 shortcut on the desktop

5. When the program opens, make the following changes.

– Ram size: 1024 (default is 512)

– Operating system: Linux


Converting a DD image into a VM – pt. 1


A good buddy of mine introduced me to LiveView, which creates virtual machines from DD images. There are a number of other programs out there that can do the same thing, but none seemed as smooth as LiveView.

One may be wondering what the need for all of this is. Well, let's say you are inspecting a suspected or known compromised system. Good practice is to not do anything (or at least as little as possible) to the system in question. In order for one to preserve the system and get an image to work off of, we can make a DD (binary) image. From there, we can use LiveView and convert the DD image into a working virtual machine. Then one can get a memory capture and/or begin any other forensics on the system, yet not affect the original hard drive. You will need to install LiveView on your Windows forensics system prior to continuing.


Below are the instructions my buddy put together for using the software.

1. Access the target from the forensics system (linux) using SSH

2. Elevate privileges
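The first two steps boil down to two commands; the user and host names are assumptions for the example:

```shell
# 1. Access the target from the Linux forensics system over SSH
ssh examiner@target_host

# 2. Elevate privileges once connected
sudo -i
```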


Collaboration with Elog

Elog is a great program used for collaboration in a LAN or WAN environment. It's very simple to use and easily customizable. This program is ideal for sharing notes or analyzing data and ensuring everyone else knows what is going on. There is an email function as well, and the ability to export and import notes/data if desired. The program can be downloaded from the project's site. Below are some of the things I did for customization.

Alter the look of the program; it's a .css file — /usr/local/elog/themes/default/default.css

Removed the word ‘demo’ from the URL and from the page and changed it to something else — /usr/local/elog/elogd.cfg

Add/adjust the fields of the form — config option listed on the menu bar of the program

Log transcript — /usr/local/elog/logbooks

After you adjust any of these, you will need to restart the elogd service and reload Apache.
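On a SysV-style system, that restart step might look like the following; the exact service names are an assumption and vary by distro and init system:

```shell
# Restart elog and reload Apache after config changes
sudo /etc/init.d/elogd restart
sudo /etc/init.d/apache2 reload
```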

Splitting up a Large VM for Easier Transmission

Here is the scenario: you have a VM that you want to transfer to another system over the Internet. The VM, in its entirety, is too big to transfer as is. So what do we do? Well, we could convert the .vmx into an .ova and then split it into a few manageable pieces for transport. Once on the distant end, we can easily put it all back together. Below are very generic steps to achieve this.

1. Convert the VM’s .vmx to .ova in the terminal

2. Use the ‘split’ command to break down the .ova into manageable sizes (I usually do mine in 550 MB (550000000 bytes)). In this case the command would be ‘split -b 550000000 your_vm.ova vm_brokedown’

3. Transfer the smaller files to the destination

4. In terminal on the distant end, type cat vm_brokedown* > your_vm.ova

5. Import the ova into the Hypervisor of your choice.
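The split-and-reassemble round trip can be verified locally. The conversion step is shown as a comment since it requires VMware's ovftool, and a stand-in file plays the role of the exported OVA (a smaller chunk size is used so the demo runs quickly):

```shell
# Step 1 would be: ovftool your_vm.vmx your_vm.ova   (requires VMware ovftool)
# Create a stand-in file to play the role of the exported OVA
dd if=/dev/zero of=your_vm.ova bs=1024 count=1200 2>/dev/null

# Step 2: break the OVA into pieces (512 KB here; use -b 550000000 for real)
split -b 524288 your_vm.ova vm_brokedown

# Step 4: on the distant end, reassemble the pieces
cat vm_brokedown* > rebuilt_vm.ova

# The reassembled file should be identical to the original
cmp your_vm.ova rebuilt_vm.ova && echo "match"
```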

Renaming a Linux NIC interface

You may be wondering why this is even a topic of discussion. Well, certain Linux distros such as CentOS come with the main interface as eth0. For me, it’s not as big of a deal. The concern comes in when I am developing baselines and distributing them back into the community. The more I can do to ensure that things look the same across the distros, the better. In order to rename the interface, one can do the below.

1. Open a terminal and ensure you are Root.

2. Get the MAC and current listing of the interface. Be sure to make note of the MAC for a future step.
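Those first steps might look like the following; the udev rule path is typical of CentOS 6-era systems and is an assumption:

```shell
# 1. Become root
sudo -i

# 2. Note the MAC address and current name of the interface
ip link show eth0

# On CentOS 6-era systems, the persistent name is tied to the MAC here
cat /etc/udev/rules.d/70-persistent-net.rules
```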


The One Page Linux Manual

For those trying to learn Linux, it can be a daunting task. There are a plethora of resources online and built into the OS. For those just looking for something “light” and tangible, I recommend the one-page Linux manual. It fits the bill for the most part (although it's actually two pages).

The One Page Linux Manual

Creating a share on Linux and accessing via Windows

There are many times when there is data on a Linux system that needs to be moved to another system like Windows. Well, the question is how do you do that? The method that I have found to be the easiest is to use Samba. Below are the steps to achieve the overall intent.

1) Install Samba. The below syntax is for Debian based systems. For RPM, do “yum install samba”

2) Configure a username and password that will be used to access the share. In this case, the user I will use is john as he is already a user on my system.
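A sketch of those two steps, plus a minimal share definition. The share path and stanza contents are assumptions for the example; adjust to your environment:

```shell
# 1) Install Samba (Debian-based; on RPM-based systems: yum install samba)
sudo apt-get install samba

# 2) Give the existing user john a Samba password
sudo smbpasswd -a john

# Define the share in /etc/samba/smb.conf (a minimal example stanza)
sudo tee -a /etc/samba/smb.conf >/dev/null <<'EOF'
[share]
   path = /home/john/share
   valid users = john
   read only = no
EOF
sudo service smbd restart
```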


Parse and Extract PST and OST Mailboxes

Libpff is a powerful mail examination tool. The tool will allow you to examine and extract data without having to attach the PST to Outlook, and it has the ability to view emails that are encrypted. In my example below, I will be using the tool via the SANS SIFT workstation, as it is already installed. If you want to use the program on a different distribution, the source code is available online. While I have an example below of parsing the information, I encourage you to check out the man pages, as they are pretty short and straightforward.

Note: the PST I am using is called target_pst.pst

1) Export the PST.

2) Verify that target_pst.pst.export, target_pst.pst.orphans, and target_pst.pst.recovered directories are now present.
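With libpff installed, the export step is a single command, shown here with minimal flags:

```shell
# 1) Export the PST (creates .export, .orphans, and .recovered directories)
pffexport target_pst.pst

# 2) Verify the output directories are present
ls -d target_pst.pst.export target_pst.pst.orphans target_pst.pst.recovered
```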

Parsing Metadata with ExifTool

It's one thing to have a piece of data, but it's another thing to be able to get the metadata about said data. ExifTool is a tool that will allow just that. It's command-line based, but there is a GUI version as well called pyExifTool. The tool not only allows you to read the metadata but also change it, if necessary. A person could also add his or her own custom tags as well. Below is an example of using the program.

Note: My JPG file name is called pic11.jpg

1) Examine the file using ExifTool
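A few representative invocations; the tag values written below are made up for the example:

```shell
# 1) Examine all metadata in the file
exiftool pic11.jpg

# Read a single tag
exiftool -CreateDate pic11.jpg

# Change (write) a tag — metadata modification is also supported
exiftool -Artist="Jane Doe" pic11.jpg
```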


Building a profile for Volatility

After capturing Linux memory using LiME (or your program of choice), we can analyze it using Volatility. In order to do so, you will need to build a profile for Volatility to use. The profile is based on the kernel/version of the system the memory capture was done on. The maintainers of the Volatility Project have a repo of pre-built profiles on their page, and Carnegie Mellon University hosts pre-built profiles as well.
In order to build a profile, follow the below instructions. For this demo, I am using a Kali 1.0.9 (Debian) system to build my profile, and an Ubuntu system to do the analyzing on.

1) Install dwarfdump. On RedHat(Fedora)-based systems, this can be done by typing ‘yum install dwarfdump’

2) Download the necessary source code to compile the module.dwarf file

3) Change directory into the newly created vol-mem-profile directory
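From there, the build boils down to a make and a zip. The paths below follow the stock Volatility 2 source tree (the post's vol-mem-profile directory presumably holds the same files), and the profile name is an assumption:

```shell
# Build the dwarf module for the running kernel
cd volatility/tools/linux
make                                          # produces module.dwarf

# Pair module.dwarf with this kernel's System.map into a profile zip
zip Kali.zip module.dwarf /boot/System.map-$(uname -r)

# Copy the zip into the profiles directory and confirm Volatility sees it
cp Kali.zip ../../volatility/plugins/overlays/linux/
vol.py --info | grep -i kali
```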


Linux Memory Capture with LiME

When doing forensics, grabbing a capture of the live memory is vital. There are a few different programs out there to accomplish the task but in my testing, I felt LiME was the best choice. It wasn’t intrusive at all on the system and was pretty straightforward. Once I compiled it, I loaded it up on my flash drive and on I went. Below are the steps I took to achieve it all.

Notes: I am using a Kali system and will be moving the compiled LiME program to the target using a flash drive.

1) Make a directory for LiME.

2) Change Directory into the newly created lime directory.
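The full build-and-capture flow can be sketched as follows. The repo URL is the usual 504ensicsLabs GitHub location, and the mount point for the flash drive is an assumption; note the module must be compiled against the same kernel version as the target:

```shell
# Build LiME on the analysis system
mkdir lime && cd lime
git clone https://github.com/504ensicsLabs/LiME
cd LiME/src && make                 # produces lime-<kernel-version>.ko

# On the target, load the module from the flash drive and dump RAM to it
sudo insmod lime-$(uname -r).ko "path=/media/usb/target.lime format=lime"
```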