Batch image manipulation using Python and GIMP

Not a very common topic for me, but I thought it could be neat to mention some tips & tricks. I won’t go into the details of the Python GIMP SDK: most of it can be figured out from the GIMP documentation. I spent a total of one hour researching this topic, so I’m not an expert and I may have made mistakes, but perhaps I can save some effort for others who want to achieve the same results. If you’re not interested in reading the theory, you can jump to the end of the tutorial to find a nice skeleton batch script.

To those wondering why GIMP: I created a new icon for Profiler and wanted to automate some operations on it in order to have it in all the sizes and flavors I need. One of the produced images had to be semi-transparent. So I thought, why not use a GIMP batch command, since GIMP is installed by default on most Linux systems anyway?

Just to mention it, GIMP also supports a Lisp syntax for writing scripts, but it caused my eyes to bleed profusely, so I didn’t even take it into consideration and focused directly on Python.

Of course, I could’ve tried other solutions like PIL (Python Imaging Library), which I have used in the past. But GIMP is actually nice: you can perform many complex UI operations from code and you also have an interactive Python shell to test your code live on an image.

For example, open an image in GIMP, then open the Python console from Filters -> Python-Fu -> Console and execute the following code:

img = gimp.image_list()[0]
img.layers[0].opacity = 50.0

And you’ll see that the image is now halfway transparent. The code takes the first image from the list of open images and sets the opacity of its first layer to 50%.

This is the nice thing about GIMP scripting: it lets you manipulate layers just like in the UI. This allows for very powerful scripting capabilities.
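For instance, still in the console, something along these lines should scale the open image and export a copy (a quick sketch of mine, not from the original article; the output path is just an example):

img = gimp.image_list()[0]
img.scale(img.width // 2, img.height // 2)   # halve the size
pdb.file_png_save(img, img.layers[0], "/tmp/out.png", "out.png", 0, 9, 1, 0, 0, 1, 1)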

The first small issue I encountered in my attempt to write a batch script is that GIMP only accepts Python code as a command line argument, not the path to a script on disk. According to the official documentation:

All this means that you could easily invoke a GIMP Python plug-in such as the one above directly from your shell using the (plug-in-script-fu-eval …) evaluator:

gimp --no-interface --batch '(python-fu-console-echo RUN-NONINTERACTIVE "another string" 777 3.1416 (list 1 0 0))' '(gimp-quit 1)'

The idea behind it is that you create a GIMP plugin script, put it in the GIMP plugin directory and register methods, as in the following small example script:

#! /usr/bin/env python
from gimpfu import *

def echo(*args):
  """Print the arguments on standard output"""
  print "echo:", args

register(
  "console_echo", "", "", "", "", "",
  "/Xtns/Languages/Python-Fu/Test/_Console Echo", "",
  [
  (PF_STRING, "arg0", "argument 0", "test string"),
  (PF_INT,    "arg1", "argument 1", 100          ),
  (PF_FLOAT,  "arg2", "argument 2", 1.2          ),
  (PF_COLOR,  "arg3", "argument 3", (0, 0, 0)    ),
  ],
  [],
  echo
  )

main()

And then invoke the registered method from the command line as explained above.

I noticed many threads on stackoverflow.com where people were trying to figure out how to execute a batch script from the command line. The obvious solution which came to my mind was to execute Python code from the command line which prepends the current path to sys.path and then imports the batch script. Searching around, I found this very solution suggested by the user xenoid in this stackoverflow thread.

So the final code for my case would be:

gimp-console -idf --batch-interpreter python-fu-eval -b "import sys;sys.path=['.']+sys.path;import batch;batch.setOpacity('appicon.png', 50)" -b "pdb.gimp_quit(1)"

And:

from gimpfu import *

def setOpacity(fname, level):
    img = pdb.gimp_file_load(fname, fname)
    img.layers[0].opacity = float(level)
    img.merge_visible_layers(NORMAL_MODE)
    pdb.file_png_save(img, img.layers[0], fname, fname, 0, 9, 1, 0, 0, 1, 1)
    pdb.gimp_image_delete(img)

What took me the longest to figure out was that I had to call the method merge_visible_layers before saving the image. Initially, I was trying to do it without calling it, and the saved image was not transparent at all. So I thought the opacity was not correctly set and tried to do it with other methods like calling gimp_layer_set_opacity, but without success.

I then tried in the console and noticed that the opacity was actually set correctly, but that information was lost when saving the image to disk. I then found the image method flatten and noticed that the transparency was retained, but unfortunately the saved PNG background was now white and no longer transparent. So I figured that there had to be a method to obtain a similar result without losing the transparent background. Looking a bit among the methods in the SDK I found merge_visible_layers. I think it’s important to point this out, in case you experience the same issue and can’t find a working solution, just like it happened to me.

Now we have a working solution, but let’s create a more elegant one, which allows us to use GIMP from within the same script, without any external invocation.

#!/usr/bin/env python

def setOpacity(fname, level):
    img = pdb.gimp_file_load(fname, fname)
    img.layers[0].opacity = float(level)
    img.merge_visible_layers(NORMAL_MODE)
    pdb.file_png_save(img, img.layers[0], fname, fname, 0, 9, 1, 0, 0, 1, 1)
    pdb.gimp_image_delete(img)

# GIMP auto-execution stub
if __name__ == "__main__":
    import os, sys, subprocess
    if len(sys.argv) < 2:
        print("you must specify a function to execute!")
        sys.exit(-1)
    scrdir = os.path.dirname(os.path.realpath(__file__))
    scrname = os.path.splitext(os.path.basename(__file__))[0]
    shcode = "import sys;sys.path.insert(0, '" + scrdir + "');import " + scrname + ";" + scrname + "." + sys.argv[1] + str(tuple(sys.argv[2:]))
    shcode = "gimp-console -idf --batch-interpreter python-fu-eval -b \"" + shcode + "\" -b \"pdb.gimp_quit(1)\""
    sys.exit(subprocess.call(shcode, shell=True))
else:
    from gimpfu import *

We can now call our function simply like this:

./batch.py setOpacity appicon.png 50.0

Which looks very pretty to me.

I could go on showing other nice examples of image manipulation, but the gist of the tutorial was just this. However, GIMP has a rich SDK which allows you to automate very complex operations.

Time Travel: Running Python 3.7 on XP

To restart my career as a technical writer, I chose a light topic. Namely, running applications compiled with new versions of Visual Studio on Windows XP. I didn’t find any prior research on the topic, but I also didn’t search much. There’s no real purpose behind this article, beyond the fact that I wanted to know what could prevent a new application from running on XP. Our target application will be the embedded version of Python 3.7 for x86.

If we try to start any new application on XP, we’ll get an error message informing us that it is not a valid Win32 application. This happens because of some fields in the Optional Header of the Portable Executable.

Most of you probably already know that you need to adjust these fields as follows:

MajorOperatingSystemVersion: 5
MinorOperatingSystemVersion: 0
MajorSubsystemVersion: 5
MinorSubsystemVersion: 0

Fortunately, it’s enough to adjust the fields in the executable we want to start (python.exe), there’s no need to adjust the DLLs as well.
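If you prefer to patch the fields programmatically rather than with a hex editor, a minimal sketch using the pefile Python library (not the tool used in the article; file names are just examples) could look like this:

import pefile

pe = pefile.PE("python.exe")
# lower the required OS/subsystem version so XP accepts the executable
pe.OPTIONAL_HEADER.MajorOperatingSystemVersion = 5
pe.OPTIONAL_HEADER.MinorOperatingSystemVersion = 0
pe.OPTIONAL_HEADER.MajorSubsystemVersion = 5
pe.OPTIONAL_HEADER.MinorSubsystemVersion = 0
pe.write("python_patched.exe")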

If we try to run the application now, we’ll get an error message due to a missing API in kernel32. So let’s turn our attention to the imports.

We have a missing vcruntime140.dll, then a bunch of “api-ms-win-*” DLLs, then only python37.dll and kernel32.dll.

The first thing which comes to mind is that in new applications we often find these “api-ms-win-*” DLLs. If we search for the prefix in the Windows directory, we’ll find a directory both in System32 and SysWOW64 called “downlevel”, which contains a huge list of these DLLs.

As we’ll see later, these DLLs aren’t actually used, but if we open one with a PE viewer, we’ll see that it contains exclusively forwarders to APIs contained in the usual suspects such as kernel32, kernelbase, user32 etc.

There’s an MSDN page documenting these DLLs.

Interestingly, in the downlevel directory we can’t find any of the files imported by python.exe. These DLLs actually expose C runtime APIs like strlen, fopen, exit and so on.

If we don’t have any prior knowledge on the topic and just do a string search inside the Windows directory for such a DLL name, we’ll find a match in C:\Windows\System32\apisetschema.dll. This DLL is special as it contains a .apiset section, whose data can easily be identified as some sort of format for mapping “api-ms-win-*” names to others.

Offset     0  1  2  3  4  5  6  7    8  9  A  B  C  D  E  F     Ascii   

00013AC0  C8 3A 01 00 20 00 00 00   73 00 74 00 6F 00 72 00     .:......s.t.o.r.
00013AD0  61 00 67 00 65 00 75 00   73 00 61 00 67 00 65 00     a.g.e.u.s.a.g.e.
00013AE0  2E 00 64 00 6C 00 6C 00   65 00 78 00 74 00 2D 00     ..d.l.l.e.x.t.-.
00013AF0  6D 00 73 00 2D 00 77 00   69 00 6E 00 2D 00 73 00     m.s.-.w.i.n.-.s.
00013B00  78 00 73 00 2D 00 6F 00   6C 00 65 00 61 00 75 00     x.s.-.o.l.e.a.u.
00013B10  74 00 6F 00 6D 00 61 00   74 00 69 00 6F 00 6E 00     t.o.m.a.t.i.o.n.
00013B20  2D 00 6C 00 31 00 2D 00   31 00 2D 00 30 00 00 00     -.l.1.-.1.-.0...
00013B30  00 00 00 00 00 00 00 00   00 00 00 00 44 3B 01 00     ............D;..
00013B40  0E 00 00 00 73 00 78 00   73 00 2E 00 64 00 6C 00     ....s.x.s...d.l.
00013B50  6C 00 00 00 65 00 78 00   74 00 2D 00 6D 00 73 00     l...e.x.t.-.m.s.

Searching on the web, the first resources I found on this topic were two articles on the blog of Quarkslab (Part 1 and Part 2). However, I quickly figured that, while useful, they were too dated to provide me with up-to-date structures to parse the data. In fact, the second article shows a version number of 2, while at the time of my writing the version number is 6.

Offset     0  1  2  3  4  5  6  7    8  9  A  B  C  D  E  F     Ascii   

00000000  06 00 00 00                                           ....            

Just for completeness, after the publication of the current article, I was made aware of an article by deroko about the topic predating those of Quarkslab.

Anyway, I searched some more and found a code snippet by Alex Ionescu and Pavel Yosifovich in the repository of Windows Internals. I took the following structures from there.

typedef struct _API_SET_NAMESPACE {
	ULONG Version;
	ULONG Size;
	ULONG Flags;
	ULONG Count;
	ULONG EntryOffset;
	ULONG HashOffset;
	ULONG HashFactor;
} API_SET_NAMESPACE, *PAPI_SET_NAMESPACE;

typedef struct _API_SET_HASH_ENTRY {
	ULONG Hash;
	ULONG Index;
} API_SET_HASH_ENTRY, *PAPI_SET_HASH_ENTRY;

typedef struct _API_SET_NAMESPACE_ENTRY {
	ULONG Flags;
	ULONG NameOffset;
	ULONG NameLength;
	ULONG HashedLength;
	ULONG ValueOffset;
	ULONG ValueCount;
} API_SET_NAMESPACE_ENTRY, *PAPI_SET_NAMESPACE_ENTRY;

typedef struct _API_SET_VALUE_ENTRY {
	ULONG Flags;
	ULONG NameOffset;
	ULONG NameLength;
	ULONG ValueOffset;
	ULONG ValueLength;
} API_SET_VALUE_ENTRY, *PAPI_SET_VALUE_ENTRY;

The data starts with an API_SET_NAMESPACE structure.

Count specifies the number of API_SET_NAMESPACE_ENTRY and API_SET_HASH_ENTRY structures. EntryOffset points to the start of the array of API_SET_NAMESPACE_ENTRY structures, which in our case comes exactly after API_SET_NAMESPACE.

Every API_SET_NAMESPACE_ENTRY points to the name of the “api-ms-win-*” DLL via the NameOffset field, while ValueOffset and ValueCount specify the position and count of API_SET_VALUE_ENTRY structures. The API_SET_VALUE_ENTRY structure yields the resolution values (e.g. kernel32.dll, kernelbase.dll) for the given “api-ms-win-*” DLL.

With this information we can already write a small script to map the new names to the actual DLLs.

import os
from Pro.Core import *
from Pro.PE import *

def main():
    c = createContainerFromFile("C:\\Windows\\System32\\apisetschema.dll")
    pe = PEObject()
    if not pe.Load(c):
        print("couldn't load apisetschema.dll")
        return
    sect = pe.SectionHeaders()
    nsects = sect.Count()
    d = None
    for i in range(nsects):
        if sect.Bytes(0) == b".apiset\x00":
            cs = pe.SectionData(i)[0]
            d = CFFObject()
            d.Load(cs)
            break
        sect = sect.Add(1)
    if not d:
        print("could find .apiset section")
        return
    n, ret = d.ReadUInt32(12)
    offs, ret = d.ReadUInt32(16)
    for i in range(n):
        name_offs, ret = d.ReadUInt32(offs + 4)
        name_size, ret = d.ReadUInt32(offs + 8)
        name = d.Read(name_offs, name_size).decode("utf-16")
        line = str(i) + ") " + name + " ->"
        values_offs, ret = d.ReadUInt32(offs + 16)
        value_count, ret = d.ReadUInt32(offs + 20)
        for j in range(value_count):
            vname_offs, ret = d.ReadUInt32(values_offs + 12)
            vname_size, ret = d.ReadUInt32(values_offs + 16)
            vname = d.Read(vname_offs, vname_size).decode("utf-16")
            line += " " + vname
            values_offs += 20
        offs += 24
        print(line)
     
main()

This code can be executed with Cerbero Profiler from command line as “cerpro.exe -r apisetschema.py”. These are the first lines of the produced output:

0) api-ms-onecoreuap-print-render-l1-1-0 -> printrenderapihost.dll
1) api-ms-onecoreuap-settingsync-status-l1-1-0 -> settingsynccore.dll
2) api-ms-win-appmodel-identity-l1-2-0 -> kernel.appcore.dll
3) api-ms-win-appmodel-runtime-internal-l1-1-3 -> kernel.appcore.dll
4) api-ms-win-appmodel-runtime-l1-1-2 -> kernel.appcore.dll
5) api-ms-win-appmodel-state-l1-1-2 -> kernel.appcore.dll
6) api-ms-win-appmodel-state-l1-2-0 -> kernel.appcore.dll
7) api-ms-win-appmodel-unlock-l1-1-0 -> kernel.appcore.dll
8) api-ms-win-base-bootconfig-l1-1-0 -> advapi32.dll
9) api-ms-win-base-util-l1-1-0 -> advapi32.dll
10) api-ms-win-composition-redirection-l1-1-0 -> dwmredir.dll
11) api-ms-win-composition-windowmanager-l1-1-0 -> udwm.dll
12) api-ms-win-core-apiquery-l1-1-0 -> ntdll.dll
13) api-ms-win-core-appcompat-l1-1-1 -> kernelbase.dll
14) api-ms-win-core-appinit-l1-1-0 -> kernel32.dll kernelbase.dll
...

Going back to API_SET_NAMESPACE, its field HashOffset points to an array of API_SET_HASH_ENTRY structures. These structures, as we’ll see in a moment, are used by the Windows loader to quickly index a “api-ms-win-*” DLL name. The Hash field is effectively the hash of the name, calculated by taking into consideration both HashFactor and HashedLength, while Index points to the associated API_SET_NAMESPACE_ENTRY entry.

The code which does the hashing is inside the function LdrpPreprocessDllName in ntdll:

77EA1DAC mov       ebx, dword ptr [ebx + 0x18]      ; HashFactor in ebx 
77EA1DAF mov       esi, eax                         ; esi = dll name length                    
77EA1DB1 movzx     eax, word ptr [edx]              ; one unicode character into eax
77EA1DB4 lea       ecx, dword ptr [eax - 0x41]      ; ecx = character - 0x41
77EA1DB7 cmp       cx, 0x19                         ; compare to 0x19
77EA1DBB jbe       0x77ea2392                       ; if below or equal, bail out
77EA1DC1 mov       ecx, ebx                         ; ecx = HashFactor
77EA1DC3 movzx     eax, ax
77EA1DC6 imul      ecx, edi                         ; ecx *= edi
77EA1DC9 add       edx, 2                           ; edx += 2
77EA1DCC add       ecx, eax                         ; ecx += eax
77EA1DCE mov       edi, ecx                         ; edi = ecx
77EA1DD0 sub       esi, 1                           ; len -= 1
77EA1DD3 jne       0x77ea1db1                       ; if not zero repeat from 77EA1DB1

Or more simply in C code:

const char *p = dllname;
int HashedLength = 0x23;
int HashFactor = 0x1F;
int Hash = 0;
for (int i = 0; i < HashedLength; i++, p++)
    Hash = (Hash * HashFactor) + *p;

As a practical example, let's take the DLL name "api-ms-win-core-processthreads-l1-1-2.dll". Its hash would be 0x445B4DF3. If we find its matching API_SET_HASH_ENTRY entry, we'll have the Index to the associated API_SET_NAMESPACE_ENTRY structure.
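As a quick sanity check, here is the same loop reproduced in Python for this very name (a sketch of mine, not from the original article), which indeed yields that value:

name = "api-ms-win-core-processthreads-l1-1-2"   # extension stripped
hashed_length = 0x23      # hashes up to, but not including, the trailing "-2"
hash_factor = 0x1F
h = 0
for c in name[:hashed_length]:
    h = (h * hash_factor + ord(c)) & 0xFFFFFFFF
print(hex(h))             # prints 0x445b4df3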

Offset     0  1  2  3  4  5  6  7    8  9  A  B  C  D  E  F     Ascii   

00014DA0                                        F3 4D 5B 44                 .M[D
00014DB0  5B 00 00 00                                           [...            

So, 0x5B (or 91) is the index. Going back to the output of the mappings, we can see that it matches.

91) api-ms-win-core-processthreads-l1-1-3 -> kernel32.dll kernelbase.dll

By inspecting the same output, we can also notice that all C runtime DLLs are resolved to ucrtbase.dll.

167) api-ms-win-crt-conio-l1-1-0 -> ucrtbase.dll
168) api-ms-win-crt-convert-l1-1-0 -> ucrtbase.dll
169) api-ms-win-crt-environment-l1-1-0 -> ucrtbase.dll
170) api-ms-win-crt-filesystem-l1-1-0 -> ucrtbase.dll
171) api-ms-win-crt-heap-l1-1-0 -> ucrtbase.dll
172) api-ms-win-crt-locale-l1-1-0 -> ucrtbase.dll
173) api-ms-win-crt-math-l1-1-0 -> ucrtbase.dll
174) api-ms-win-crt-multibyte-l1-1-0 -> ucrtbase.dll
175) api-ms-win-crt-private-l1-1-0 -> ucrtbase.dll
176) api-ms-win-crt-process-l1-1-0 -> ucrtbase.dll
177) api-ms-win-crt-runtime-l1-1-0 -> ucrtbase.dll
178) api-ms-win-crt-stdio-l1-1-0 -> ucrtbase.dll
179) api-ms-win-crt-string-l1-1-0 -> ucrtbase.dll
180) api-ms-win-crt-time-l1-1-0 -> ucrtbase.dll
181) api-ms-win-crt-utility-l1-1-0 -> ucrtbase.dll

I was already resigned to having to figure out how to support the C runtime on XP, when I noticed that Microsoft actually supports the deployment of the runtime on it. The following excerpt from MSDN says as much:

If you currently use the VCRedist (our redistributable package files), then things will just work for you as they did before. The Visual Studio 2015 VCRedist package includes the above mentioned Windows Update packages, so simply installing the VCRedist will install both the Visual C++ libraries and the Universal CRT. This is our recommended deployment mechanism. On Windows XP, for which there is no Universal CRT Windows Update MSU, the VCRedist will deploy the Universal CRT itself.

Which means that on Windows editions coming after XP the support is provided via Windows Update, but on XP we have to deploy the files ourselves. We can find the files to deploy inside C:\Program Files (x86)\Windows Kits\10\Redist\ucrt\DLLs. This path contains three sub-directories: x86, x64 and arm. We're obviously interested in the x86 one. It contains many files (42): apparently the most common "api-ms-win-*" DLLs and ucrtbase.dll. We can deploy those files onto XP to make our application work. We are still missing vcruntime140.dll, but we can take that DLL from the Visual C++ installation. In fact, that DLL is intended to be deployed with the application, while the Universal CRT (ucrtbase.dll) is intended to be part of the Windows system.

This satisfies our dependencies in terms of DLLs. However, Windows introduced many new APIs over the years which aren't present on XP. So I wrote a script to test the compatibility of an application by checking the imported APIs against the APIs exported by the DLLs on XP. The command line for it is "cerpro.exe -r xpcompat.py application_path". It will check all the PE files in the specified directory.

import os, sys
from Pro.Core import *
from Pro.PE import *

xp_system32 = "C:\\Users\\Admin\\Desktop\\system32"
apisetschema = { "OMITTED FOR BREVITY" }
cached_apis = {}
missing_result = {}

def getAPIs(dllpath):
    apis = {}
    c = createContainerFromFile(dllpath)
    dll = PEObject()
    if not dll.Load(c):
        print("error: couldn't load dll")
        return apis
    ordbase = dll.ExportDirectory().Num("Base")
    functions = dll.ExportDirectoryFunctions()
    names = dll.ExportDirectoryNames()
    nameords = dll.ExportDirectoryNameOrdinals()
    n = functions.Count()
    it = functions.iterator()
    for x in range(n):
        func = it.next()
        ep = func.Num(0)
        if ep == 0:
            continue
        apiord = str(ordbase + x)
        n2 = nameords.Count()
        it2 = nameords.iterator()
        name_found = False
        for y in range(n2):
            no = it2.next()
            if no.Num(0) == x:
                name = names.At(y)
                offs = dll.RvaToOffset(name.Num(0))
                name, ret = dll.ReadUInt8String(offs, 500)
                apiname = name.decode("ascii")
                apis[apiname] = apiord
                apis[apiord] = apiname
                name_found = True
                break
        if not name_found:
            apis[apiord] = apiord
    return apis
    
def checkMissingAPIs(pe, ndescr, dllname, xpdll_apis):
    ordfl = pe.ImportOrdinalFlag()
    ofts = pe.ImportThunks(ndescr)
    it = ofts.iterator()
    while it.hasNext():
        ft = it.next().Num(0)
        if (ft & ordfl) != 0:
            name = str(ft ^ ordfl)
        else:
            offs = pe.RvaToOffset(ft)
            name, ret = pe.ReadUInt8String(offs + 2, 400)
            if not ret:
                continue
            name = name.decode("ascii")
        if not name in xpdll_apis:
            print("       ", "missing:", name)
            temp = missing_result.get(dllname, set())
            temp.add(name)
            missing_result[dllname] = temp

def verifyXPCompatibility(fname):
    print("file:", fname)
    c = createContainerFromFile(fname)
    pe = PEObject()
    if not pe.Load(c):
        return
    it = pe.ImportDescriptors().iterator()
    ndescr = -1
    while it.hasNext():
        descr = it.next()
        ndescr += 1
        offs = pe.RvaToOffset(descr.Num("Name"))
        name, ret = pe.ReadUInt8String(offs, 400)
        if not ret:
            continue
        name = name.decode("ascii").lower()
        if not name.endswith(".dll"):
            continue
        fwdlls = apisetschema.get(name[:-4], [])
        if len(fwdlls) == 0:
            print("   ", name)
        else:
            fwdll = fwdlls[0]
            print("   ", name, "->", fwdll)
            name = fwdll
        if name == "ucrtbase.dll":
            continue
        xpdll_path = os.path.join(xp_system32, name)
        if not os.path.isfile(xpdll_path):
            continue
        if not name in cached_apis:
            cached_apis[name] = getAPIs(xpdll_path)
        checkMissingAPIs(pe, ndescr, name, cached_apis[name])
    print()
        
def main():
    if os.path.isfile(sys.argv[1]):
        verifyXPCompatibility(sys.argv[1])
    else:
        files = [os.path.join(dp, f) for dp, dn, fn in os.walk(sys.argv[1]) for f in fn]
        for fname in files:
            with open(fname, "rb") as f:
                if f.read(2) == b"MZ":
                    verifyXPCompatibility(fname)
            
    # summary
    n = 0
    print("\nsummary:")
    for rdll, rapis in missing_result.items():
        print("   ", rdll)
        for rapi in rapis:
            print("       ", "missing:", rapi)
            n += 1
    print("total of missing APIs:", str(n))

main()

I had to omit the contents of the apisetschema global variable for the sake of brevity. You can download the full script from here. The system32 directory referenced in the code is the one of Windows XP, which I copied to my desktop.

And here are the relevant excerpts from the output:

file: python-3.7.0-embed-win32\python37.dll
    version.dll
    shlwapi.dll
    ws2_32.dll
    kernel32.dll
        missing: GetFinalPathNameByHandleW
        missing: InitializeProcThreadAttributeList
        missing: UpdateProcThreadAttribute
        missing: DeleteProcThreadAttributeList
        missing: GetTickCount64
    advapi32.dll
    vcruntime140.dll
    api-ms-win-crt-runtime-l1-1-0.dll -> ucrtbase.dll
    api-ms-win-crt-math-l1-1-0.dll -> ucrtbase.dll
    api-ms-win-crt-locale-l1-1-0.dll -> ucrtbase.dll
    api-ms-win-crt-string-l1-1-0.dll -> ucrtbase.dll
    api-ms-win-crt-stdio-l1-1-0.dll -> ucrtbase.dll
    api-ms-win-crt-convert-l1-1-0.dll -> ucrtbase.dll
    api-ms-win-crt-time-l1-1-0.dll -> ucrtbase.dll
    api-ms-win-crt-environment-l1-1-0.dll -> ucrtbase.dll
    api-ms-win-crt-process-l1-1-0.dll -> ucrtbase.dll
    api-ms-win-crt-heap-l1-1-0.dll -> ucrtbase.dll
    api-ms-win-crt-conio-l1-1-0.dll -> ucrtbase.dll
    api-ms-win-crt-filesystem-l1-1-0.dll -> ucrtbase.dll

[...]

file: python-3.7.0-embed-win32\_socket.pyd
    ws2_32.dll
        missing: inet_ntop
        missing: inet_pton
    kernel32.dll
    python37.dll
    vcruntime140.dll
    api-ms-win-crt-runtime-l1-1-0.dll -> ucrtbase.dll

[...]

summary:
    kernel32.dll
        missing: InitializeProcThreadAttributeList
        missing: GetTickCount64
        missing: GetFinalPathNameByHandleW
        missing: UpdateProcThreadAttribute
        missing: DeleteProcThreadAttributeList
    ws2_32.dll
        missing: inet_pton
        missing: inet_ntop
total of missing APIs: 7

We're missing 5 APIs from kernel32.dll and 2 from ws2_32.dll, but the Winsock APIs are imported just by _socket.pyd, a module which is loaded only when a network operation is performed by Python. So, in theory, we can focus our efforts on the missing kernel32 APIs for now.

My plan was to create a fake kernel32.dll, called xernel32.dll, containing forwarders for most APIs and real implementations only for the missing ones. Here's a script to create C++ files containing forwarders for all APIs of common DLLs on Windows 10:

import os, sys
from Pro.Core import *
from Pro.PE import *

xpsys32path = "C:\\Users\\Admin\\Desktop\\system32"
sys32path = "C:\\Windows\\SysWOW64"

def getAPIs(dllpath):
    pass # same code as above
    
def isOrdinal(i):
    try:
        int(i)
        return True
    except:
        return False
    
def createShadowDll(name):
    xpdllpath = os.path.join(xpsys32path, name + ".dll")
    xpapis = getAPIs(xpdllpath)
    dllpath = os.path.join(sys32path, name + ".dll")
    apis = sorted(getAPIs(dllpath).keys())
    if len(apis) != 0:
        with open(name + ".cpp", "w") as f:
            f.write("#include \n\n")
            for a in apis:
                comment = " // XP" if a in xpapis else ""
                if not isOrdinal(a):
                    f.write("#pragma comment(linker, \"/export:" + a + "=" + name + "." + a + "\")" + comment + "\n")
                #
            print("created", name + ".cpp")
    
def main():
    dlls = ("advapi32", "comdlg32", "gdi32", "iphlpapi", "kernel32", "ole32", "oleaut32", "shell32", "shlwapi", "user32", "uxtheme", "ws2_32")
    for dll in dlls:
        createShadowDll(dll)

main()

It creates files like the following kernel32.cpp:

#include <windows.h>

#pragma comment(linker, "/export:AcquireSRWLockExclusive=kernel32.AcquireSRWLockExclusive")
#pragma comment(linker, "/export:AcquireSRWLockShared=kernel32.AcquireSRWLockShared")
#pragma comment(linker, "/export:ActivateActCtx=kernel32.ActivateActCtx") // XP
#pragma comment(linker, "/export:ActivateActCtxWorker=kernel32.ActivateActCtxWorker")
#pragma comment(linker, "/export:AddAtomA=kernel32.AddAtomA") // XP
#pragma comment(linker, "/export:AddAtomW=kernel32.AddAtomW") // XP
#pragma comment(linker, "/export:AddConsoleAliasA=kernel32.AddConsoleAliasA") // XP
#pragma comment(linker, "/export:AddConsoleAliasW=kernel32.AddConsoleAliasW") // XP
#pragma comment(linker, "/export:AddDllDirectory=kernel32.AddDllDirectory")
[...]

The comment on the right ("// XP") indicates whether the forwarded API is present on XP or not. We can provide real implementations exclusively for the APIs we want. The Windows loader doesn't care whether we forward functions which don't exist as long as they aren't imported.

The APIs we need to support are the following:

GetTickCount64: I just called GetTickCount, not really important
GetFinalPathNameByHandleW: took the implementation from Wine, but had to adapt it slightly
InitializeProcThreadAttributeList: took the implementation from Wine
UpdateProcThreadAttribute: same
DeleteProcThreadAttributeList: same

I have to be grateful to the Wine project here, as it provided useful implementations, which saved me the effort.

I called the attempt at a support runtime for older Windows versions "XP Time Machine Runtime" and you can find the repository here. I compiled it with Visual Studio 2013 and cmake.

Now that we have our xernel32.dll, the only thing left to do is to rename the imported kernel32.dll inside python37.dll.
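Since xernel32.dll is exactly as long as kernel32.dll, the import name string can be overwritten in place. A hedged sketch with pefile (again, not the tool used in the article; the output file name is just an example):

import pefile

pe = pefile.PE("python37.dll")
for entry in pe.DIRECTORY_ENTRY_IMPORT:
    if entry.dll.lower() == b"kernel32.dll":
        # overwrite the DLL name string in the import directory (same length)
        pe.set_bytes_at_rva(entry.struct.Name, b"xernel32.dll\x00")
pe.write("python37_patched.dll")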

Let's try to start python.exe.

Awesome.

Of course, we're still not completely done, as we didn't implement the missing Winsock APIs, but perhaps this and some more could be the content of a second part to this article.

NTCore revamped

After over a decade, I finally took two afternoons to revamp this personal web-page and to merge the content of the old NTCore page with the content of its blog (rcecafe.net). All the URLs of the old web-page and blog have been preserved in the process.

The people who voted for this on Twitter are the guilty ones.

You know who you are.

Ctor conflicts

Perhaps the content of this post is trivial and widely known(?), but I just spent some time fixing a bug related to the following C++ behavior.

Let’s take a look at this code snippet:

// main.cpp ------------------------------

#include <stdio.h>

void bar();
void foo();

int main(int argc, const char *argv[])
{
    bar();
    foo();
    return 0;
}

// a.cpp ---------------------------------

#include <stdio.h>

struct A
{
    A() { printf("apple\n"); }
};

void bar()
{
    new A;
}

// b.cpp ---------------------------------

#include <stdio.h>

struct A
{
    A() { printf("orange\n"); }
};

void foo()
{
    new A;
}

The output of the code above is:

apple
apple

Whether we compile it with VC++ or g++, the result is the same.

The problem is that, although the struct or class is declared locally to each translation unit, the name of the constructor is considered a global symbol. So while the allocation size of the struct or class is correct, the constructor being invoked is always the first one encountered by the compiler, which in this case is the one which prints ‘apple’.

The problem here is that the compiler doesn’t warn the user in any way that the wrong constructor is being called and in a large project with hundreds of files it may very well be that two constructors collide.

Since namespaces are part of the name of the symbol, the code above can be fixed by adding a namespace:

namespace N
{
struct A
{
    A() { printf("orange\n"); }
};
}
using namespace N;

void foo()
{
    new A;
}

Now the correct constructor will be called.

I wrote a small (dumb) Python script to detect possible ctor conflicts. It just looks for struct or class declarations and reports duplicate symbol names. It’s far from perfect.

# ctor_conflicts.py
import os, sys, re

source_extensions = ["h", "hxx", "hpp", "cpp", "cxx"]

symbols = { }
psym = re.compile("(typedef\\s+)?(struct|class)\\s+([a-zA-Z_][a-zA-Z0-9_]*)(\\s+)?([{])?")

def processSourceFile(fname):
    with open(fname) as f:
        content = f.readlines()
    n = len(content)
    i = 0
    while i < n:
        m = psym.search(content[i])
        i += 1
        if m == None:
            continue
        symname = m.group(3)
        # exclude some recurring symbols in different projects
        if symname == "Dialog" or symname == "MainWindow":
            continue
        # make sure a bracket is present
        if m.group(5) == None and (i >= n or content[i].startswith("{") == False):
            continue
        loc = fname + ":" + str(i)
        if symname in symbols:
            # found a possible collision
            print("Possible collision of '" + symname + "' in:")
            print(symbols[symname])
            print(loc)
            print("")
        else:
            symbols[symname] = loc

def walkFiles(path):
    for root, dirs, files in os.walk(path):
        for f in files:
            # skip SWIG wrappers
            if f.find("PyWrapWin") != -1:
                continue
            # skip Qt ui files
            if f.startswith("ui_"):
                continue
            fname = root + os.sep + f
            ext = os.path.splitext(fname)[1]
            if ext != None and len(ext) > 1 and ext[1:] in source_extensions:
                processSourceFile(fname)


if __name__ == '__main__':
    nargs = len(sys.argv)
    if nargs < 2:
        path = os.getcwd()
    else:
        path = sys.argv[1]
    walkFiles(path)

In my opinion this could be handled better on the compiler side, at least by giving a warning.

ADDENDUM: Myria (@Myriachan) explained the compiler internals on this one on Twitter:

I'm just surprised that it doesn't cause a "duplicate symbol" linker error. Symbol flagged "weak" from being inline, maybe? [...] Member functions defined inside classes like that are automatically "inline" by C++ standard. [...] The "inline" keyword has two meanings: hint to compiler that inlining machine code may be wise, and making symbol weak. [...] Regardless of whether the compiler chooses to inline machine code within calling functions, the weak symbol part still applies. [...] It is as if all inline functions (including functions defined inside classes) have __declspec(selectany) on them, in MSVC terms. [...] Without this behavior, if you ever had a class in a header with functions defined, the compiler would either have to always inline the machine code, or you'd have to use #ifdef nonsense to avoid more than one .cpp defining the function.

The explanation is the correct one. And yes, if we define the ctor outside of the class the compiler does generate an error.

The logic mismatch here is that local structures in C do exist, local ctors in C++ don't. So, the correct struct is allocated but the wrong ctor is being called. Also, while the symbol is weak for the reasons explained by Myria, the compiler could still give an error if the ctor code doesn't match across files.

So the rule here could be: if you have local classes, avoid defining the ctor inside the class. If you already have a conflict as I did and don't want to change the code, you can fix it with a namespace as shown above.

Raw File System Analysis (FAT32 File Recovery)

This post isn’t about upcoming features, it’s about things you can already do with Cerbero. What we’ll see is how to import structures used for file system analysis from C/C++ sources, use them to analyze raw hex data, create a script to do the layout work for us in the future and at the end we’ll see how to create a little utility to recover deleted files. The file system used for this demonstration is FAT32, which is simple enough to avoid making the post too long.

Import file system structures

Importing file system structures from C/C++ sources is easy thanks to the Header Manager tool. In fact, it took me less than 30 minutes to import the structures for the most common file systems from different code bases. Click here to download the archive with all the headers.

Here’s the list of headers I have created:

  • ext – ext2/3/4 imported from FreeBSD
  • ext2 – imported from Linux
  • ext3 – imported from Linux
  • ext4 – imported from Linux
  • fat – imported from FreeBSD
  • hfs – imported from Darwin
  • iso9660 – imported from FreeBSD
  • ntfs – imported from Linux
  • reiserfs – imported from Linux
  • squashfs – imported from Linux
  • udf – imported from FreeBSD

Copy the files to your user headers directory (e.g. “AppData\Roaming\CProfiler\headers”). It’s better to not put them in a sub-directory. Please note that apart from the FAT structures, none of the others have been tried out.

Note: Headers created from Linux sources contain many additional structures, this is due to the includes in the parsed source code. This is a bit ugly: in the future it would be a good idea to add an option to import only structures belonging to files in a certain path hierarchy and those referenced by them.

Since this post is about FAT, we’ll see how to import the structures for this particular file system. But the same steps apply for other file systems as well and not only for them. If you’ve never imported structures before, you might want to take a look at this previous post about dissecting an ELF and read the documentation about C++ types.

We open the Header Manager and configure some basic options like ‘OS’, ‘Language’ and ‘Standard’. In this particular case I imported the structures from FreeBSD, so I just set ‘freebsd’, ‘c’ and ‘c11’. Then we need to add the header paths, which in my case were the following:

C:/Temp/freebsd-master
C:/Temp/freebsd-master/include
C:/Temp/freebsd-master/sys
C:/Temp/freebsd-master/sys/x86
C:/Temp/freebsd-master/sys/i386/include
C:/Temp/freebsd-master/sys/i386

Then in the import edit we insert the following code:

HEADER_START("fat");

#include 
#include 
#include 
#include 
#include 

Now we can click on ‘Import’.

Import FAT structures

That’s it! We now have all the FAT structures we need in the ‘fat’ header file.

It should also be mentioned that I modified some fields of the direntry structure from the Header Manager, because they were declared as byte arrays, but should actually be shown as short and int values.

Parse the Master Boot Record

Before going on with the FAT analysis, we need to briefly talk about the MBR. FAT partitions are usually found in a larger container, like a partitioned device.

To perform my tests I created a virtual hard-disk in Windows 7 and formatted it with FAT32.

VHD MBR

As you might be able to spot, the VHD file begins with a MBR. In order to locate the partitions it is necessary to parse the MBR first. The format of the MBR is very simple and you can look it up on Wikipedia. In this case we’re only interested in the start and size of each partition.

Cerbero doesn’t yet support the MBR format, although it might be added in the future. In any case, it’s easy to add the missing feature: I wrote a small hook which parses the MBR and adds the partitions as embedded objects.

Here’s the cfg data:

[GenericMBR]
label = Generic MBR Partitions
file = generic_mbr.py
scanning = scanning

And here’s the Python script:

def scanning(sp, ud):
    # make sure we're at the first level and that the format is unknown
    if sp.scanNesting() != 0 or sp.getObjectFormat() != "":
        return
    # check boot signature
    obj = sp.getObject()
    bsign = obj.Read(0x1FE, 2)
    if len(bsign) != 2 or bsign[0] != 0x55 or bsign[1] != 0xAA:
        return
    # add partitions
    for x in range(4):
        entryoffs = 0x1BE + (x * 0x10)
        offs, ret = obj.ReadUInt32(entryoffs + 8)
        size, ret = obj.ReadUInt32(entryoffs + 12)
        if offs != 0 and size != 0:
            sp.addEmbeddedObject(offs * 512, size * 512, "?", "Partition #" + str(x + 1))

And now we can inspect the partitions directly (do not forget to enable the hook from the extensions).

VHD Partitions

Easy.

Analyze raw file system data

The basics of the FAT format are quite simple to describe. The data begins with the boot sector header and some additional fields for FAT32 over FAT16 and for FAT16 over FAT12. We’re only interested in FAT32, so to simplify the description I will only describe this particular variant. The boot sector header specifies essential information such as sector size, sectors in clusters, number of FATs, size of FAT etc. It also specifies the number of reserved sectors. These reserved sectors start with the boot sector and where they end the FAT begins.

The ‘FAT’ in this case is not just the name of the file system, but the File Allocation Table itself. The size of the FAT, as already mentioned, is specified in the boot sector header. Usually, for data-loss prevention, more than one FAT is present. Normally there are two FATs: the number is specified in the boot sector header. The backup FAT follows the first one and has the same size. The data after the FAT(s) and right until the end of the partition includes directory entries and file data. The cluster right after the FAT(s) usually starts with the Root Directory entry, but even this is specified in the boot sector header.

The FAT itself is just an array of 32-bit indexes pointing to clusters. The first 2 indexes are special: they specify the range of EOF values for indexes. It works like this: a directory entry for a file (directories and files share the same structure) specifies the first cluster of said file; if the file is bigger than one cluster, the FAT is looked up at the index representing the current cluster, and that index specifies the next cluster belonging to the file. If the index contains one of the values in the EOF range, the file has no more clusters or perhaps contains a damaged cluster (0xFFFFFFF7). Indexes with a value of zero mark free clusters. Cluster indexes are 2-based: cluster 2 is actually cluster 0 in the data region. This means that if the Root Directory is specified to be located at cluster 2, it is located right after the FATs.

Hence, the size of the FAT depends on the size of the partition, and it must be big enough to accommodate an array large enough to represent every cluster in the data area.
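To make the chain-walking logic concrete, here is a small sketch (mine, not part of the article's scripts) that converts a cluster index to a byte offset and follows a chain through the FAT, assuming the FAT has already been read into a list of 32-bit entries:

def cluster_to_offset(cluster, data_offs, bytes_per_clust):
    # cluster indexes are 2-based: cluster 2 is the first cluster of the data region
    return data_offs + (cluster - 2) * bytes_per_clust

def cluster_chain(fat, first_cluster):
    chain = []
    cluster = first_cluster
    while 2 <= cluster < 0x0FFFFFF7:          # EOF/bad-cluster values start at 0x0FFFFFF7
        chain.append(cluster)
        cluster = fat[cluster] & 0x0FFFFFFF   # the top 4 bits of an entry are reserved
    return chain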

So, let’s perform our raw analysis by adding the boot sector header and the additional FAT32 fields:

Add struct

Note: When adding a structure make sure that it’s packed to 1, otherwise field alignment will be wrong.

Boot sector

Then we highlight the FATs.

FATs

And the Root Directory entry.

Root Directory

This last step was just for demonstration, as we’re currently not interested in the Root Directory. Anyway, now we have a basic layout of the FAT to inspect and this is useful.

Let’s now make our analysis applicable to future cases.

Automatically create an analysis layout

Manually analyzing a file is very useful and it’s the first step everyone of us has to do when studying an unfamiliar file format. However, chances are that we have to analyze files with the same format in the future.

That’s why we could write a small Python script to create the analysis layout for us. We’ve already seen how to do this in the post about dissecting an ELF.

Here’s the code:

from Pro.Core import *
from Pro.UI import *
 
def buildFATLayout(obj, l):
    hname = "fat"
    hdr = CFFHeader()
    if hdr.LoadFromFile(hname) == False:
        return
    sopts = CFFSO_VC | CFFSO_Pack1
    d = LayoutData()
    d.setTypeOptions(sopts)
 
    # add boot sector header and FAT32 fields
    bhdr = obj.MakeStruct(hdr, "bootsector", 0, sopts)
    d.setColor(ntRgba(0, 170, 255, 70))
    d.setStruct(hname, "bootsector")
    l.add(0, bhdr.Size(), d)
    bexhdr = obj.MakeStruct(hdr, "bpb710", 0xB, sopts)
    d.setStruct(hname, "bpb710")
    l.add(0xB, bexhdr.Size(), d)

    # get FAT32 info
    bytes_per_sec = bexhdr.Num("bpbBytesPerSec")
    sec_per_clust = bexhdr.Num("bpbSecPerClust")
    res_sect = bexhdr.Num("bpbResSectors")
    nfats = bexhdr.Num("bpbFATs")
    fat_sects = bexhdr.Num("bpbBigFATsecs")
    root_clust = bexhdr.Num("bpbRootClust")
    bytes_per_clust = bytes_per_sec * sec_per_clust

    # add FAT intervals, highlight copies with a different color
    d2 = LayoutData()
    d2.setColor(ntRgba(255, 255, 127, 70))
    fat_start = res_sect * bytes_per_sec
    fat_size = fat_sects * bytes_per_sec
    d2.setDescription("FAT1")
    l.add(fat_start, fat_size, d2)
    # add copies
    d2.setColor(ntRgba(255, 170, 127, 70))
    for x in range(nfats - 1):
        fat_start = fat_start + fat_size
        d2.setDescription("FAT" + str(x + 2))
        l.add(fat_start, fat_size, d2)
    fat_end = fat_start + fat_size

    # add root directory
    rootdir_offs = (root_clust - 2) * bytes_per_clust + fat_end
    rootdir = obj.MakeStruct(hdr, "direntry", rootdir_offs, sopts)
    d.setStruct(hname, "direntry")
    d.setDescription("Root Directory")
    l.add(rootdir_offs, rootdir.Size(), d)
    
 
hv = proContext().getCurrentView()
if hv.isValid() and hv.type() == ProView.Type_Hex:
    c = hv.getData()
    obj = CFFObject()
    obj.Load(c)
    lname = "FAT_ANALYSIS" # we could make the name unique
    l = proContext().getLayout(lname) 
    buildFATLayout(obj, l)
    # apply the layout to the current hex view
    hv.setLayoutName(lname)

We can create an action with this code or just run it on the fly with Ctrl+Alt+R.

Recover deleted files

Now that we know where the FAT is located and where the data region begins, we can try to recover deleted files. There’s more than one possible approach to this task (more on that later). What I chose to do is to scan the entire data region for file directory entries and to perform integrity checks on them, in order to establish that they really are what they seem to be.

Let’s take a look at the original direntry structure:

struct direntry {
	u_int8_t	deName[11];	/* filename, blank filled */
#define	SLOT_EMPTY	0x00		/* slot has never been used */
#define	SLOT_E5		0x05		/* the real value is 0xe5 */
#define	SLOT_DELETED	0xe5		/* file in this slot deleted */
	u_int8_t	deAttributes;	/* file attributes */
#define	ATTR_NORMAL	0x00		/* normal file */
#define	ATTR_READONLY	0x01		/* file is readonly */
#define	ATTR_HIDDEN	0x02		/* file is hidden */
#define	ATTR_SYSTEM	0x04		/* file is a system file */
#define	ATTR_VOLUME	0x08		/* entry is a volume label */
#define	ATTR_DIRECTORY	0x10		/* entry is a directory name */
#define	ATTR_ARCHIVE	0x20		/* file is new or modified */
	u_int8_t	deLowerCase;	/* NT VFAT lower case flags */
#define	LCASE_BASE	0x08		/* filename base in lower case */
#define	LCASE_EXT	0x10		/* filename extension in lower case */
	u_int8_t	deCHundredth;	/* hundredth of seconds in CTime */
	u_int8_t	deCTime[2];	/* create time */
	u_int8_t	deCDate[2];	/* create date */
	u_int8_t	deADate[2];	/* access date */
	u_int8_t	deHighClust[2];	/* high bytes of cluster number */
	u_int8_t	deMTime[2];	/* last update time */
	u_int8_t	deMDate[2];	/* last update date */
	u_int8_t	deStartCluster[2]; /* starting cluster of file */
	u_int8_t	deFileSize[4];	/* size of file in bytes */
};

Every directory entry has to be aligned to 0x20. If the file has been deleted the first byte of the deName field will be set to SLOT_DELETED (0xE5). That’s the first thing to check. The directory name should also not contain certain values like 0x00. According to Wikipedia, the following values aren’t allowed:

  • ” * / : < > ? \ |
    Windows/MS-DOS has no shell escape character
  • + , . ; = [ ]
    They are allowed in long file names only.
  • Lower case letters a–z
    Stored as A–Z. Allowed in long file names.
  • Control characters 0–31
  • Value 127 (DEL)

We can use these rules to validate the short file name. Moreover, certain directory entries are used only to store long file names:

/*
 * Structure of a Win95 long name directory entry
 */
struct winentry {
	u_int8_t	weCnt;
#define	WIN_LAST	0x40
#define	WIN_CNT		0x3f
	u_int8_t	wePart1[10];
	u_int8_t	weAttributes;
#define	ATTR_WIN95	0x0f
	u_int8_t	weReserved1;
	u_int8_t	weChksum;
	u_int8_t	wePart2[12];
	u_int16_t	weReserved2;
	u_int8_t	wePart3[4];
};

We can exclude these entries by making sure that the deAttributes/weAttributes isn’t ATTR_WIN95 (0xF).

Once we have confirmed the integrity of the file name and made sure it’s not a long file name entry, we can validate the deAttributes. It should definitely not contain the flags ATTR_DIRECTORY (0x10) and ATTR_VOLUME (8).

Finally we can make sure that deFileSize isn’t 0 and that deHighClust combined with deStartCluster contains a valid cluster index.

It’s easier to write the code than to talk about it. Here’s a small snippet which looks for deleted files and prints them to the output view:

from Pro.Core import *

class FATData(object):
    pass

def setupFATData(obj):
    hdr = CFFHeader()
    if hdr.LoadFromFile("fat") == False:
        return None
    bexhdr = obj.MakeStruct(hdr, "bpb710", 0xB, CFFSO_VC | CFFSO_Pack1)
    fi = FATData()
    fi.obj = obj
    fi.hdr = hdr
    # get FAT32 info
    fi.bytes_per_sec = bexhdr.Num("bpbBytesPerSec")
    fi.sec_per_clust = bexhdr.Num("bpbSecPerClust")
    fi.res_sect = bexhdr.Num("bpbResSectors")
    fi.nfats = bexhdr.Num("bpbFATs")
    fi.fat_sects = bexhdr.Num("bpbBigFATsecs")
    fi.root_clust = bexhdr.Num("bpbRootClust")
    fi.bytes_per_clust = fi.bytes_per_sec * fi.sec_per_clust
    fi.fat_offs = fi.res_sect * fi.bytes_per_sec
    fi.fat_size = fi.fat_sects * fi.bytes_per_sec
    fi.data_offs = fi.fat_offs + (fi.fat_size * fi.nfats)
    fi.data_size = obj.GetSize() - fi.data_offs
    fi.data_clusters = fi.data_size // fi.bytes_per_clust
    return fi

invalid_short_name_chars = [
    127,
    ord('"'), ord("*"), ord("/"), ord(":"), ord("<"), ord(">"), ord("?"), ord("\\"), ord("|"),
    ord("+"), ord(","), ord("."), ord(";"), ord("="), ord("["), ord("]")
    ]
def validateShortName(name):
    n = len(name)
    for x in range(n):
        c = name[x]
        if (c >= 0 and c <= 31) or (c >= 0x61 and c <= 0x7A) or c in invalid_short_name_chars:
            return False
    return True

# validate short name
# validate attributes: avoid long file name entries, directories and volumes
# validate file size
# validate cluster index
def validateFileDirectoryEntry(fi, de):
    return validateShortName(de.name) and de.attr != 0xF and (de.attr & 0x18) == 0 and \
            de.file_size != 0 and de.clust_idx >= 2 and de.clust_idx - 2 < fi.data_clusters

class DirEntryData(object):
    pass

def getDirEntryData(b):
    # reads after the first byte
    de = DirEntryData()
    de.name = b.read(10)
    de.attr = b.u8()     
    b.read(8) # skip some fields
    de.high_clust = b.u16()
    b.u32() # skip two fields
    de.clust_idx = (de.high_clust << 16) | b.u16()
    de.file_size = b.u32()
    return de

def findDeletedFiles(fi):
    # scan the data region one cluster at a time using a buffer
    # this is more efficient than using an array of CFFStructs
    dir_entries = fi.data_size // 0x20
    b = fi.obj.ToBuffer(fi.data_offs)
    b.setBufferSize(0xF000)
    for x in range(dir_entries):
        try:
            unaligned = b.getOffset() % 0x20
            if unaligned != 0:
                b.read(0x20 - unaligned)
            # has it been deleted?
            if b.u8() != 0xE5:
                continue
            # validate fields
            de = getDirEntryData(b)
            if validateFileDirectoryEntry(fi, de) == False:
                continue
            # we have found a deleted file entry!
            name = de.name.decode("ascii", "replace")
            print(name + " - offset: " + hex(b.getOffset() - 0x20))
        except:
            # an exception occurred, debug info
            print("exception at offset: " + hex(b.getOffset() - 0x20))
            raise

obj = proCoreContext().currentScanProvider().getObject()
fi = setupFATData(obj)
if fi != None:
    findDeletedFiles(fi)

This script is to be run on the fly with Ctrl+Alt+R. It's not complete, otherwise I would have added a wait box: as it is now, the script just blocks the UI for the entire execution. We'll see later how to put everything together in a meaningful way.

The output of the script is the following:

���������� - offset: 0xd6a0160
���������� - offset: 0x181c07a0
���������� - offset: 0x1d7ee980
&�&�&�&�&� - offset: 0x1e7dee20
'�'�'�'�'� - offset: 0x1f3b49a0
'�'�'�'�'� - offset: 0x1f5979a0
'�'�'�'�'� - offset: 0x1f9f89a0
'�'�'�'�'� - offset: 0x1fbdb9a0
$�$�$�$�$� - offset: 0x1fdcad40
&�&�&�&�&� - offset: 0x1fdcc520
'�'�'�'�'� - offset: 0x2020b9a0
'�'�'�'�'� - offset: 0x205a99a0
'�'�'�'�'� - offset: 0x20b0fe80
'�'�'�'�'� - offset: 0x20b0fec0
'�'�'�'�'� - offset: 0x20e08e80
'�'�'�'�'� - offset: 0x20e08ec0
'�'�'�'�'� - offset: 0x21101e80
'�'�'�'�'� - offset: 0x21101ec0
'�'�'�'�'� - offset: 0x213fae80
'�'�'�'�'� - offset: 0x213faec0
 � � � � � - offset: 0x21d81fc0
#�#�#�#�#� - offset: 0x221b96a0
'�'�'�'�'� - offset: 0x226279a0
 � � � � � - offset: 0x2298efc0
'�'�'�'�'� - offset: 0x22e1ee80
'�'�'�'�'� - offset: 0x22e1eec0
'�'�'�'�'� - offset: 0x232c69a0
'�'�'�'�'� - offset: 0x234a99a0
'�'�'�'�'� - offset: 0x2368c9a0
'�'�'�'�'� - offset: 0x23a37e80
'�'�'�'�'� - offset: 0x23a37ec0
'�'�'�'�'� - offset: 0x23d30e80
'�'�'�'�'� - offset: 0x23d30ec0
'�'�'�'�'� - offset: 0x24029e80
'�'�'�'�'� - offset: 0x24029ec0
'�'�'�'�'� - offset: 0x24322e80
'�'�'�'�'� - offset: 0x24322ec0
'�'�'�'�'� - offset: 0x2461be80
'�'�'�'�'� - offset: 0x2461bec0
'�'�'�'�'� - offset: 0x2474d9a0
 � � � � � - offset: 0x24ab4fc0
 � � � � � - offset: 0x24f01fc0
 � � � � � - offset: 0x2534efc0
���������O - offset: 0x33b4f2e0
�������@@@ - offset: 0x345c7200
OTEPAD EXE - offset: 0x130c009e0
TOSKRNLEXE - offset: 0x130c00b80
TPRINT EXE - offset: 0x130c00bc0
��S�W����� - offset: 0x1398fddc0
��S�V���YY - offset: 0x13af3ad60
��M����E�� - offset: 0x13bbec640
EGEDIT EXE - offset: 0x13ef1f1a0

We can see many false positives in the list. The results would be cleaner if we allowed only ASCII characters in the name, but this wouldn't be correct, because short names do allow values above 127. We could make this an extra option; generally speaking, it's probably better to have some false positives than to miss valid entries. Among the false positives we can spot four real entries. What I did on the test disk was to copy many files from the System32 directory of Windows and then to delete four of them, exactly those four found by the script.

The next step is recovering the content of the deleted files. The theory here is that we retrieve the first cluster of the file from the directory entry and then use the FAT to retrieve more entries until the file size is satisfied. The cluster indexes in the FAT won't contain the next cluster value and will be set to 0. We look for adjacent 0 indexes to find free clusters which may have belonged to the file. Another approach would be to dump the entire file size starting from the first cluster, but that approach is worse, because it doesn't tolerate even a little bit of fragmentation in the FAT. Of course, heavy fragmentation drastically reduces the chances of a successful recovery.

However, there's a gotcha which I wasn't aware of and it wasn't mentioned in my references. Let's take a look at the deleted directory entry of 'notepad.exe'.

Notepad directory entry

In FAT32 the index of the first cluster is obtained by combining the high-word deHighClust with the low-word deStartCluster in order to obtain a 32-bit index.

The problem is that the high-word has been zeroed. The actual value should be 0x0013. Seems this behavior is common on Microsoft operating systems as mentioned in this thread on Forensic Focus.

This means that only files with a cluster index equal to or lower than 0xFFFF will be correctly pointed at. This makes another approach to FAT32 file recovery more appealing: instead of looking for deleted directory entries, one could directly look for cluster indexes with a value of 0 in the FAT and recognize the start of a file by matching signatures. Cerbero offers an API to identify file signatures (although limited to the file formats it supports), so we could easily implement this logic. Another advantage of this approach is that it doesn't require a deleted file directory entry to work, increasing the possibility of recovering deleted files. However, even that approach has certain disadvantages:

  1. Files which have no signature (like text files) or are not identified won't be recovered.
  2. The name of the files won't be recovered at all, unless they contain it themselves, but that's unlikely.

Disadvantages notwithstanding, I think that if one had to choose between the two approaches, the second one holds higher chances of success. So why did I opt to do otherwise? Because I thought it would be nice to recover file names, even if only partially, and to delve a bit more into the format of FAT32. The blunt approach can be generalized more and requires less FAT knowledge.

However, the best approach is surely to combine both systems in order to maximize the chances of recovery, at the cost of duplicates. But this is just a demonstration, so let's keep it relatively simple and go back to the problem at hand: the incomplete start cluster index.

Recovering files only from lower parts of the disk isn't really good enough. We could try to recover the high-word of the index from adjacent directory entries of existing files. For instance, let's take a look at the deleted directory entry:

Deleted entry

As you can see, the directory entry above the deleted one represents a valid file entry and contains an intact high-word we could use to repair our index. Please remember that this technique is just something I came up with and offers no guarantee whatsoever. In fact, it only works under certain conditions:

  1. The cluster containing the deleted entry must also contain a valid file directory entry.
  2. The FAT can't be heavily fragmented, otherwise the retrieved high-word might not be correct.

Still, I think it's interesting, and while it might not always succeed in automatic mode, it can be helpful when attempting a manual recovery.

This is how the code to recover partial cluster indexes might look:

def recoverClusterHighWord(fi, offs):
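    # try to recover the zeroed high-word of the start cluster by borrowing it
    # from a valid directory entry located in the same cluster as the deleted one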
    cluster_start = offs - (offs % fi.bytes_per_clust)
    deloffs = offs - (offs % 0x20)
    nbefore = (deloffs - cluster_start) // 0x20
    nafter = (fi.bytes_per_clust - (deloffs - cluster_start)) // 0x20 - 1
    b = fi.obj.ToBuffer(deloffs + 0x20, Bufferize_BackAndForth)
    b.setBufferSize(fi.bytes_per_clust * 2)
    de_before = None
    de_after = None
    try:
        # try to find a valid entry before
        if nbefore > 0:
            for x in range(nbefore):
                b.setOffset(b.getOffset() - 0x40)
                # it can't be a deleted entry
                if b.u8() == 0xE5:
                    continue
                de = getDirEntryData(b)
                if validateFileDirectoryEntry(fi, de):
                    de_before = de
                    break
        # try to find a valid entry after
        if nafter > 0 and de_before == None:
            b.setOffset(deloffs + 0x20)
            for x in range(nafter):
                # it can't be a deleted entry
                if b.u8() == 0xE5:
                    continue
                de = getDirEntryData(b)
                if validateFileDirectoryEntry(fi, de):
                    de_after = de
                    break
    except:
        pass
    # return the high-word if any
    if de_before != None:
        return de_before.high_clust
    if de_after != None:
        return de_after.high_clust
    return 0

It tries to find a valid file directory entry before the deleted one and, if that fails, after it, while remaining in the same cluster. Now we can write a small function to recover the file content.

# dump the content of a deleted file using the FAT
def dumpDeletedFileContent(fi, f, start_cluster, file_size):
    while file_size > 0:
        offs = clusterToOffset(fi, start_cluster)
        data = fi.obj.Read(offs, fi.bytes_per_clust)
        if file_size < fi.bytes_per_clust:
            data = data[:file_size]
        f.write(data)
        # next
        file_size = file_size - min(file_size, fi.bytes_per_clust)
        if file_size == 0:
            break
        # find the next free cluster in the FAT
        while True:
            start_cluster = start_cluster + 1
            idx_offs = start_cluster * 4 + fi.fat_offs
            idx, ok = fi.obj.ReadUInt32(idx_offs)
            if ok == False:
                return False
            if idx == 0:
                break
    return True

All the pieces are there; it's time to bring them together.
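
As a minimal sketch of how they can be combined outside of any UI, assuming fi has been prepared by setupFATData() and that findDeletedFiles() is the scanning routine used earlier, the whole recovery loop boils down to something like this (illustrative only, not part of the final utility):

# walk the partition and dump every deleted entry found
def recoverAllDeletedFiles(fi, dump_path):
    next_entry = fi.data_offs
    counter = 0
    while True:
        de = findDeletedFiles(fi, next_entry)
        if de == None:
            break
        next_entry = de.offs + 0x20
        f = open(dump_path + ("%08X" % counter), "wb")
        dumpDeletedFileContent(fi, f, de.clust_idx, de.file_size)
        f.close()
        counter = counter + 1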

Create a recovery tool

With the recently introduced logic provider extensions, it's possible to create all kinds of easy-to-use custom utilities. Until now we have seen useful pieces of code, but using them as provided is neither user-friendly nor practical. Wrapping them up in a nice graphical utility is much better.

Home view

What follows is the source code, or at least part of it: I have omitted the parts which haven't changed significantly. You can download the full source code from here.

Here's the cfg entry:

[FAT32Recovery]
label = FAT32 file recovery utility
descr = Recover files from a FAT32 partition or drive.
file = fat32_recovery.py
init = FAT32Recovery_init

And the Python code:

class RecoverySystem(LocalSystem):

    def __init__(self):
        LocalSystem.__init__(self)
        self.ctx = proCoreContext()
        self.partition = None
        self.current_partition = 0
        self.fi = None
        self.counter = 0

    def wasAborted(self):
        Pro.UI.proProcessEvents(1)
        return self.ctx.wasAborted()

    def nextFile(self):
        fts = FileToScan()

        if self.partition == None:
            # get next partition
            while self.current_partition < 4:
                entryoffs = 0x1BE + (self.current_partition * 0x10)
                self.current_partition = self.current_partition + 1
                offs, ret = self.disk.ReadUInt32(entryoffs + 8)
                size, ret = self.disk.ReadUInt32(entryoffs + 12)
                if offs != 0 and size != 0:
                    cpartition = self.disk.GetStream()
                    cpartition.setRange(offs * 512, size * 512)
                    part = CFFObject()
                    part.Load(cpartition)
                    self.fi = setupFATData(part)
                    if self.fi != None:
                        self.fi.system = self
                        self.partition = part
                        self.next_entry = self.fi.data_offs
                        self.fi.ascii_names_conv = self.ascii_names_conv
                        self.fi.repair_start_clusters = self.repair_start_clusters
                        self.fi.max_file_size = self.max_file_size
                        break

        if self.partition != None:
            de = findDeletedFiles(self.fi, self.next_entry)
            if de != None:
                self.next_entry = de.offs + 0x20
                fname = "%08X" % self.counter
                try:
                    f = open(self.dump_path + fname, "wb")
                except IOError:
                    # report the error and return an empty FileToScan
                    Pro.UI.proContext().msgBox(MsgErr, "Couldn't open file '" + fname + "'")
                    return fts
                dumpDeletedFileContent(self.fi, f, de.clust_idx, de.file_size)
                f.close()
                self.counter = self.counter + 1
                fts.setName(fname + "\\" + de.name)
                fts.setLocalName(self.dump_path + fname)
            else:
                self.partition = None

        return fts

def recoveryOptionsCallback(pe, id, ud):
    if id == Pro.UI.ProPropertyEditor.Notification_Close:
        path = pe.getValue(0)
        if len(path) == 0 or os.path.isdir(path) == False:
            errs = NTIntList()
            errs.append(0)
            pe.setErrors(errs)
            return False
    return True

def FAT32Recovery_init():
    ctx = Pro.UI.proContext()
    file_name = ctx.getOpenFileName("Select disk...")
    if len(file_name) == 0:
        return False

    cdisk = createContainerFromFile(file_name)
    if cdisk.isNull():
        ctx.msgBox(MsgWarn, "Couldn't open disk!")
        return False

    disk = CFFObject()
    disk.Load(cdisk)
    bsign = disk.Read(0x1FE, 2)
    if len(bsign) != 2 or bsign[0] != 0x55 or bsign[1] != 0xAA:
        ctx.msgBox(MsgWarn, "Invalid MBR!")
        return False

    dlgxml = """

  
""" opts = ctx.askParams(dlgxml, "FAT32RecoveryOptions", recoveryOptionsCallback, None) if opts.isEmpty(): return False s = RecoverySystem() s.disk = disk s.dump_path = os.path.normpath(opts.value(0)) + os.sep s.ascii_names_conv = "strict" if opts.value(1) else "replace" s.repair_start_clusters = opts.value(2) if opts.value(3) != 0: s.max_file_size = opts.value(3) * 1024 * 1024 proCoreContext().setSystem(s) return True

When the tool is activated, it asks for the disk file to be selected and then shows an options dialog.

Options

In our case we can select the option 'Ascii only names' to exclude false positives.

The options dialog asks for a directory in which to save the recovered files. In the future it will be possible to save volatile files in the temporary directory created for the report, but since that's not yet possible, it's up to the user to delete the recovered files when they are no longer needed.

The end results of the recovery operation:

Results

All four deleted files have been successfully recovered.

Three executables are marked as risky because intrinsic risk is enabled and only 'ntoskrnl.exe' contains a valid digital certificate.

Conclusions

I'd like to remind you that this utility hasn't been tested on any disk other than the one I created for this post and, as already mentioned, it doesn't even implement the best method to recover files from a FAT32, namely a signature-based approach. It's possible that in the future we'll improve the script and include it in an update.

The purpose of this post was to show some of the many things which can be done with Cerbero and, finally, to demonstrate how a utility with commercial value like the one presented can be written in under 300 lines of Python code (counting comments and blank lines). I used only Cerbero for the entire job: from the analysis to the code development (I even wrote the entire Python code with it).

The advantages of using the Cerbero SDK are many. Among them:

  • It hugely simplifies the analysis of files. In fact, I used only two external Python functions: one to check the existence of a directory and one to normalize the path string.
  • It helps in building a fast, robust product.
  • It offers a graphical analysis experience to the user with little or no effort.
  • It gives the user the benefit of all the other features and extensions offered by Cerbero.

To better explain what is meant by the last point, let's take the current example. Thanks to the huge number of formats supported by Cerbero, it's easy for the user to validate the recovered files.

Validate recovered files

In the case of Portable Executables it's extremely easy because of the presence of digital certificates, checksums and data structures. But it's easy even with other file formats, because Cerbero may detect errors in the format or unused ranges.

I hope you enjoyed this post!

P.S. You can download the complete source code and related files from here.

References

  1. File System Forensic Analysis - Brian Carrier
  2. Understanding FAT32 Filesystems - Paul Stoffregen
  3. Official documentation - Microsoft
  4. File Allocation Table - Wikipedia
  5. Master boot record - Wikipedia