Video: PDF/XDP Malware Reversing

Since I don’t have time to write many articles, this is my first video tutorial. 🙂 This video is based on my 2016 article on cerbero-blog.com.

If you like it and want to see more, let me know!

Edit: since I was asked to share the comments in the disassembly, here’s a small snippet to add them. You can run the snippet by pressing Ctrl+Alt+R (make sure that the disassembly view is focused when running it).
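It goes roughly like this (a sketch from memory: treat the Pro.UI calls below as illustrative and double-check them against the SDK documentation):

from Pro.UI import proContext

# comments to apply: address -> text
# (addresses and strings here are just placeholders)
comments = {
    0x401000: "decrypts the embedded payload",
    0x401050: "resolves imports by hash",
}

view = proContext().getCurrentView()  # the focused disassembly view
for address, text in comments.items():
    view.setComment(address, text)  # illustrative method name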

Porting a CHIP-8 emulator to Rust

I’ve been meaning to learn the Rust language for quite some years and only now found the time to start this endeavor. I must say it has probably been for the best, as the language has clearly matured a lot since the last time I looked into it.

As a first project to try out Rust I ported Laurence Muller’s CHIP-8 emulator to it. It’s a simple C++ project and it took me only a day to port it to Rust.

You can download my port from GitHub.

There’s not much to write about the project itself, apart from the fact that the original code used GLUT while the port uses SDL2. I also implemented basic audio support, but didn’t work on providing a realistic clock speed.

I can’t yet write something exhaustive about Rust, because I’m still learning it. What I can say up until now is that, apart from some minor things I dislike (snake-case: ugh), it seems fun to program in. The number of rules makes programming a bit more challenging, but it pays off in satisfaction once everything builds without complaints.

What I can’t yet see clearly is a use-case for Rust. Yes, it’s a solid and secure language which is fun to use. But will it be used in the real world? I can see many use-cases for Go, not so many for Rust. What I hope is for Rust to mature some more and then become stable, without going down the path to insanity like modern C++.

Drawn to Complexity: story of my own stupidity

When someone writes blog entries about the mistakes of others, then one should also be able to admit one’s own. And that’s exactly what I’m going to do here.

The main reasons I’m writing this post are the following:

  • To entertain and interact on some level.
  • To save others from making the same mistakes, or at least to make them feel less alone.
  • To talk about the history of my commercial application (Cerbero Profiler), in order to give the reader an understanding of how it came to be the way it is.
  • To let the readers have access to my thoughts regarding the future of this product.

Let me start with a disclaimer: I am aware of the literature about software blunders, marketing strategies, sales, etc. I read quite a bit.

In fact, my own favorite book about mistakes in the IT world has for a long time been In Search of Stupidity by Merrill R. (Rick) Chapman.

I probably first read this book when I was about 18-20 years old and it has remained my favorite ever since. I have since re-read it several times. Apart from the interesting history lessons concerning some of the biggest IT blunders, the book is also hilarious. I wish my writing could be as funny as Rick’s and maybe someday it will.

This is to say that not only have I been aware since my youth of the mistakes made by major companies in the IT world, but I have also always agreed that to do better than others it is necessary to make fewer mistakes. There are many self-help books about success which recommend giving oneself the chance to fail in a controlled way for as many times as it takes to reach success. Of course, ideally it’s even better to learn from the mistakes of others. Sometimes that may be possible, if our own stubbornness doesn’t stand in the way.

The initial idea for the program came to me when I was very young, around 16. I had noticed how it was possible to explore the user address space of a process in WinHex and wanted to expand on that idea by allowing the user to inspect and edit PEs in memory.

The thing I was easily drawn to as a developer was complexity. If an idea was complex, I wanted to begin working on it even before considering whether it had any real use. And the recurring theme in my life is that I have almost always chosen complexity over business opportunities.

It has to be noted that the most successful software I have ever written, in terms of user count, is also the one which took me the least time to write, namely the 4GB Patch. It was written in about 20 minutes and I have no idea how many millions of times it has been downloaded.

Hence, I understood that complexity had little to do with success, as successful mobile phone fart apps have amply demonstrated. Still, I couldn’t let go of my idea. Implementing it on a rough level could be easy, but doing it in a sophisticated way would take time.

I wrote most of Explorer Suite when I was 19 and at the time I wrote it mainly to help me with another project connected to .NET internals (I was writing an obfuscator). When I was 21-22 I already had much more experience and wanted to rewrite the core of the application to make it possible to support multiple file formats (at the time I wanted to support more executable file formats). I partly did that, but the UI, written in MFC, was a disaster to expand, and I didn’t have the time to work on it.

You have to keep in mind that during all that time I was also working on other projects. At 23 years of age I joined Hex-Rays to work on IDA Pro. Yet another time in my life when I chose complexity: I had something to prove and IDA Pro was a big fish. I was hired to rewrite/port the entire UI of IDA to Qt. Quoting Ilfak Guilfanov from the blog of Hex-Rays:

“We invested lots of time and efforts into idaq: Daniel worked on it full time nine months. And he is a brilliant programmer who knows how to do things, yet there is a lot to do – just to achieve the same level of comfort as with idag.”

I’d like to thank Ilfak for the opportunity he gave me, I learned a lot. But the most important thing I took away from that experience was that there wasn’t any project that could scare me anymore. That project was immense and it took me literally 9 months of writing and porting up to 1000 lines of code a day to make it in that time-frame. Afterwards, I was exhausted and fearless.

At the end of those initial months, I started working on a small PDF analyzer using the core I had rewritten years before. At the time, there was a huge interest in PDF malware and I wanted to take the opportunity to play a bit with my code.

The PDF utility evolved over the following years into a part of my original idea: an application capable of inspecting multiple file formats. I started to sell the product.

My first mistake, and this is something probably many do when they’re young and/or inexperienced, was that I created something overly professional. I wasn’t exactly sure who the audience for the product was going to be. Was it going to be just technical people or also semi-technical people? This was very naive of me. I could have thought it through at the time and figured it out from the start.

That mistake resulted in a UI which tried to be both simple and complex at the same time: the UI hid the complexity to make things appear simpler or more limited, but not nearly enough for somebody who isn’t skilled, while also increasing the learning curve for skilled people, who had to search for the hidden functionality they needed.

This initial indecision also resulted in a fundamental marketing issue of product positioning. Was I offering my product to technical people or not? And what exactly was I selling?

You see, in my pursuit of adding complexity, I completely lost focus on use-cases and marketing. The incomplete list of features on the page of the product is like a giant wall of text which you can observe here in miniature.

See how impressive and complex it is? I bet you already want to purchase it!

You don’t? Yeah… exactly.

The list of sparse features is staggering. The program even includes Clang just in order to be able to extract C++ structures from source code.

The list of supported file formats is also considerable:

APK, APNG, AXML, BMP, BZ2, CHM, CLASS, DEX, DIB, DLL, DOC, DOCX, ELF, EML, EOT, EXE, GIF, GZIP, JAR, JPEG, JSE, LNK, LZMA, MACH-O, MSI, O, OCX, ODT, OTF, PDB, PDF, PFB, PNG, PPS, PPT, PPTX, PRX, PUFF, RAW, RTF, SO, SQLITE3, SWF, SYS, T1, T2, TIFF, TORRENT, TTC, TTF, VBE, WINMEM, WOFF, XDP, XLS, XLSX, XML, ZIP

I wrote most of the support myself, with the exception of CHM, EML, DOC/XLS/PPT (which I took over), LNK, ActionScript2 (in SWF) and WINMEM (which I handed over after initially developing it myself). The reader has to consider that the support for certain file formats, like PDF or PE, is extensive.

I lost track of the features myself and implemented many things which nobody could even notice. Let me offer you a few examples of my insanity.

  • I stress-tested many DB technologies just to see which was the best one to store the data. I abstracted the access to the DB in order to be able to switch the DB technology underneath or even support more than one. My idea was even to let the user decide which DB type to use.
  • As already mentioned, I embedded Clang just to extract C++ structures from source code. The level of support goes one step further into insanity, as it even includes templates. And that’s not even the end of it. Structures can be imported from PDBs as well, and underneath the two rely on different mechanisms: whereas the size of C++ structures is computed on the fly, PDB ones have a fixed size.
  • Speaking of which, I added my own PDB parser, which I created relying only on the awesome information provided by Sven B. Schreiber and a hex editor.
  • I didn’t want to rely on Authenticode in Windows to validate certificates in PEs, because that would have meant having some non-portable code and also slightly slowing down the scanning process. So what I did was to reverse engineer how Authenticode works and implement it myself. The application won’t validate certificates on Linux and OS X, because I didn’t have a nice way to maintain an updated certificate store there and the necessity never arose, so I didn’t bother; but in theory it could validate PEs on Linux and OS X as well.
  • I implemented the parsing of every font format. Some famous exploits relied on font technology, so I didn’t want the product to lack support for fonts. For those of you who are not aware of it: there isn’t just one font format. There is even a format called EOT, created by Microsoft, which stands for Embedded OpenType. Basically it’s a compressed OpenType font. To get back to the OpenType format several stages have to be performed, one of which is decompression. As for the compression algorithm, Microsoft chose a custom one based on LZ77 called lzcomp. Microsoft has released the source code of lzcomp, but the version they released contained some bugs which had already been patched in Windows. So what I did was to diff the compiled code in order to include the patches and avoid shipping vulnerable code in my product. Of course, I could’ve also used the Windows API to achieve the same, but that would’ve meant not being able to run the same code on other OSs.
  • When it came out, I bought the latest PDF specification draft, just to be able to support the newest encryption revision before anyone could even ask for it.
  • I implemented a first-person shooter game in the product so that the user wouldn’t get bored during the analysis of a file. I’m joking, but I stopped just shy of that.

These are just a few of the insane things I did. And I did many of them while also having an office job.

In fact, even though it took way longer than I had hoped for, one night after work I found enough energy to finish the code demonstrating the idea that had been planted in my brain since I was 16.

An icon, inside an executable, inside a process address space, inside a raw memory dump. The complete hierarchy being visible and explorable.

I had proved what I set out to prove. That was it. One thing scrapped from the to-do list of my life.

The development of the memory support stalled after that, because the office work was taking up most of my time and I also had a life to live (let’s pretend it’s true). In addition, I still had a product to support regarding the features which were actually being used by people.

In the end, I decided to hire another developer dedicated to the memory part as that was the only viable solution and it turned out to be the right thing to do.

So what was the result of all this work? A product which I had difficulty describing to potential customers. I ended up pitching it as a “file analysis framework”, which sounds as exciting as you would expect.

I am actually grateful to those customers who saw past the confusing concept, steep learning curve and sparse features. Many customers appreciated, for instance, the Python SDK. I dedicated a lot of time and effort to exposing most of the functionality of the product to Python. The only issue in that regard is the documentation, since it’s not easy to grasp everything from the posts on the company blog.

However, whenever a customer asked me for help with the SDK, I tried to do my best and I think that has been appreciated.

I actually like the SDK. For instance, decoding an object (or all of them, for that matter) in a PDF is just as simple as the following code.
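From memory, it goes roughly like this (treat the Pro.Core/Pro.PDF names as illustrative, since I’m sketching the calls rather than quoting the SDK verbatim):

from Pro.Core import createContainerFromFile
from Pro.PDF import PDFObject

c = createContainerFromFile("sample.pdf")  # placeholder file name
pdf = PDFObject()
pdf.Load(c)
# decode a single object given its number and generation
# (object 15, generation 0 and the accessor name are placeholders)
data = pdf.decodeObject(15, 0)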

And it’s not only about file formats: the SDK allows creating complex UIs as well.

After over a decade of non-continuous development, this summer I finally had the time to draw some conclusions. What mistakes did I make? Which are the things I dislike about my product and which are the ones I like?

Some of the mistakes I made:

  • I focused on proving something instead of focusing on real use-cases. I was even aware of this, but it didn’t change my commitment to do it regardless.
  • I didn’t choose my target audience from the start.
  • I have implemented too many sparse features instead of continuing to improve a limited number.
  • All of that resulted in my creating something which I didn’t feel passionate about.

Of course, it’s better to do just one thing and do it well. But that was too simple for me and that goes back to the root of my own stupidity.

Having forced myself to write something without passion is also my main issue now. So when I started my work towards version 3.0, I decided to make radical changes.

  1. Remove everything I visually hate from the product and replace it with something I like. This started out by creating a new icon.
  2. Think about the things I like, such as the SDK, in order to build on them and create more things that I like.
  3. Finally give the product a shape and position.
  4. Maintain code compatibility for whatever solution existing customers have created.
  5. Give the product a strong coherency, both visual and feature-wise.
  6. End up with a project I enjoy working on and a product people enjoy using.

Having reached a point of (partial) maturity in my life and no longer feeling any need to prove myself through complexity, I am now forced to deal with the complexity I created for myself in my youth.

Completely re-thinking such a large project is not easy at all. It may or may not work out. I am really not writing from a point where I know that it will be possible to remedy my mistakes. I have some initial ideas, but I am still far from a complete concept.

This time I am presented with some unique challenges, different from those I encountered in the past. The main challenge lies in becoming passionate about the project. If somehow I manage to accomplish that, then I think many more could enjoy the product.

Batch image manipulation using Python and GIMP

Not a very common topic for me, but I thought it could be neat to mention some tips & tricks. I won’t go into the details of the Python GIMP SDK, as most of it can be figured out from the GIMP documentation. I spent a total of one hour researching this topic, so I’m not an expert and I could have made mistakes, but perhaps I can save some effort for others who want to achieve the same results. You can jump to the end of the tutorial to find a nice skeleton batch script if you’re not interested in reading the theory.

To those wondering why GIMP: it’s because I created a new icon for Profiler and wanted to automate some operations on it, in order to have it in all the sizes and flavors I need. One of the produced images had to be semi-transparent. So I thought, why not use a GIMP batch command, since GIMP is installed by default on most Linux systems anyway?

Just to mention, GIMP also supports a Lisp syntax for writing scripts, but it caused my eyes to bleed profusely, so I didn’t even take it into consideration and focused directly on Python.

Of course, I could’ve tried other solutions like PIL (Python Imaging Library), which I have used in the past. But GIMP is actually nice: you can do many complex UI operations from code and you also have an interactive Python shell to test your code live on an image.

For example, open an image in GIMP, then open the Python console from Filters -> Python-Fu -> Console and execute the following code:
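In its minimal form, and assuming the first open image is the one to edit:

image = gimp.image_list()[0]     # the first open image
image.layers[0].opacity = 50.0   # set the first layer's opacity to 50%
gimp.displays_flush()            # refresh the UI so the change is visible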

And you’ll see that the image is now halfway transparent. What the code does is take the first image from the list of open images and set the opacity of its first layer to 50%.

This is the nice thing about GIMP scripting: it lets you manipulate layers just like in the UI. This allows for very powerful scripting capabilities.

The first small issue I encountered in my attempt to write a batch script is that GIMP only accepts Python code as a command-line argument, not the path to a script on disk. According to the official documentation:

“All this means that you could easily invoke a GIMP Python plug-in such as the one above directly from your shell using the (plug-in-script-fu-eval …) evaluator:”

gimp --no-interface --batch '(python-fu-console-echo RUN-NONINTERACTIVE "another string" 777 3.1416 (list 1 0 0))' '(gimp-quit 1)'

The idea behind it is that you create a GIMP plugin script, put it in the GIMP plugin directory and register methods as in the following small example script:
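A minimal example, modeled on the console-echo plugin from the documentation (the registration strings and the menu path are placeholders):

#!/usr/bin/env python
from gimpfu import *

def console_echo(arg0, arg1, arg2, arg3):
    # just print the received arguments
    print "echo:", arg0, arg1, arg2, arg3

register(
    "console_echo",                  # becomes python-fu-console-echo in the PDB
    "Echo example", "Echo example",  # blurb, help
    "Me", "Me", "2019",              # author, copyright, date (placeholders)
    "<Toolbox>/Xtns/Languages/Python-Fu/Test/_Console Echo",
    "",                              # works without an open image
    [
        (PF_STRING, "arg0", "argument 0", "a string"),
        (PF_INT,    "arg1", "argument 1", 100),
        (PF_FLOAT,  "arg2", "argument 2", 1.2),
        (PF_COLOR,  "arg3", "argument 3", (0, 0, 0)),
    ],
    [],
    console_echo)

main()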

And then invoke the registered method from the command line as explained above.

I noticed many threads on stackoverflow.com where people were trying to figure out how to execute a batch script from the command line. Now, the obvious solution which came to my mind was to execute Python code from the command line which prepends the current directory to sys.path and then imports the batch script. So I searched and found that solution suggested by the user xenoid in this stackoverflow thread.

So the final code for my case would be:
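Assuming the batch script is called batch.py, lives in the current directory and exposes a run() function (all of these names are placeholders):

gimp -idf --batch-interpreter python-fu-eval -b 'import sys; sys.path.insert(0, "."); import batch; batch.run("icon.png")' -b 'pdb.gimp_quit(1)'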

And:
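(batch.py itself is a sketch: file names are placeholders, and note the merge_visible_layers call, which I’ll get back to in a moment)

# batch.py
from gimp import pdb
from gimpenums import CLIP_TO_IMAGE

def run(filename):
    image = pdb.gimp_file_load(filename, filename)
    # make the image halfway transparent
    image.layers[0].opacity = 50.0
    # merge the visible layers: without this step the opacity
    # change is lost when the image is written to disk
    layer = image.merge_visible_layers(CLIP_TO_IMAGE)
    pdb.file_png_save(image, layer, "icon_out.png", "icon_out.png",
                      0, 9, 1, 1, 1, 1, 1)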

What took me the longest to figure out was that I had to call the method merge_visible_layers before saving the image. Initially, I was trying to do it without calling it, and the saved image was not transparent at all. So I thought the opacity was not being set correctly and tried other methods like calling gimp_layer_set_opacity, but without success.

I then tried in the console and noticed that the opacity is actually set correctly, but that information is lost when saving the image to disk. I then found the image method flatten and noticed that the transparency was retained, but unfortunately the saved PNG background was now white and no longer transparent. So I figured that there had to be a method to obtain a similar result without losing the transparent background. Looking a bit among the methods in the SDK, I found merge_visible_layers. I think it’s important to point this out, in case you experience the same issue and can’t find a working solution, just as happened to me.

Now we have a working solution, but let’s create a more elegant one, which allows us to use GIMP from within the same script, without any external invocation.
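The skeleton I ended up with looks like this (a sketch: the script, function and file names are placeholders to adapt). The trick is that the script re-launches itself inside GIMP’s Python-Fu interpreter:

# mybatch.py
import os
import subprocess

def process(infile, outfile):
    # this function runs inside GIMP
    from gimp import pdb
    from gimpenums import CLIP_TO_IMAGE
    image = pdb.gimp_file_load(infile, infile)
    image.layers[0].opacity = 50.0
    layer = image.merge_visible_layers(CLIP_TO_IMAGE)
    pdb.file_png_save(image, layer, outfile, outfile, 0, 9, 1, 1, 1, 1, 1)

def run_in_gimp(funcname, *args):
    # re-invoke this very script inside GIMP, batch-style
    here = os.path.dirname(os.path.abspath(__file__))
    module = os.path.splitext(os.path.basename(__file__))[0]
    code = "import sys; sys.path.insert(0, %r); import %s; %s.%s(*%r); pdb.gimp_quit(1)" % \
        (here, module, module, funcname, args)
    subprocess.check_call(["gimp", "-idf", "--batch-interpreter",
                           "python-fu-eval", "-b", code])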

We can now call our function simply like this:
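(using the placeholder names from the skeleton above)

if __name__ == "__main__":
    run_in_gimp("process", "icon.png", "icon_half.png")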

Which looks very pretty to me.

I could go on showing other nice examples of image manipulation, but the gist of the tutorial was just this. However, GIMP has a rich SDK which allows you to automate very complex operations.