Drawn to Complexity: story of my own stupidity

When someone writes blog entries about the mistakes of others, one should also be able to admit one’s own. And that’s exactly what I’m going to do here.

The main reasons I’m writing this post are the following:

  • To entertain and interact on some level.
  • To save others from falling for the same mistakes or at least make them feel less alone.
  • To talk about the history of my commercial application (Cerbero Profiler), in order to give the reader an understanding of how it came to be the way it is.
  • To share my thoughts regarding the future of the product.

Let me start with a disclaimer: I am aware of the literature about software blunders, marketing strategies, sales, etc. I read quite a bit.

In fact, my own favorite book about mistakes in the IT world has long been In Search of Stupidity by Merrill R. (Rick) Chapman.

I probably first read this book when I was 18-20 years old and it has remained my favorite ever since; I have re-read it several times. Apart from the interesting history lessons concerning some of the biggest IT blunders, the book is also hilarious. I wish my writing could be as funny as Rick’s and maybe someday it will be.

This is to say that not only have I been aware since my youth of the mistakes made by major companies in the IT world, but I have also always agreed that to do better than others it is necessary to make fewer mistakes. There are many self-help books about success which recommend giving oneself room to fail in a controlled way as many times as it takes to reach success. Of course, ideally it’s even better to learn from the mistakes of others. Sometimes that is possible, if our own stubbornness doesn’t stand in the way.

The initial idea for the program came to me when I was very young, around 16. I had noticed how it was possible to explore the user address space of a process in WinHex and wanted to expand on that idea by making it possible to inspect and edit PEs in memory.

The thing I was most easily drawn to as a developer was complexity. If an idea was complex, I wanted to start working on it even before considering whether it had any real use. And the recurring theme in my life is that I have almost always chosen complexity over business opportunities.

It has to be noted that the most successful software I have ever written, in terms of user count, is also the one which took me the least time to write, namely the 4GB Patch. It was written in about 20 minutes and I have no idea how many millions of times it has been downloaded.

Hence, I understood that complexity had little to do with success, as successful mobile phone fart apps have amply demonstrated. Still, I couldn’t let go of my idea. Implementing it at a rough level would have been easy, but doing it in a sophisticated way would take time.

I wrote most of Explorer Suite when I was 19, mainly to help me with another project connected to .NET internals (I was writing an obfuscator). By the time I was 21-22 I had much more experience and wanted to rewrite the core of the application so that it could support multiple file formats (at the time I wanted to support more executable formats). I partly did that, but the UI, written in MFC, was a disaster to expand, and I didn’t have the time to work on it.

You have to keep in mind that during all that time I was also working on other projects. At 23 years of age I joined Hex-Rays to work on IDA Pro. Yet another time in my life when I chose complexity: I had something to prove and IDA Pro was a big fish. I was hired to rewrite/port the entire UI of IDA to Qt. Quoting Ilfak Guilfanov from the blog of Hex-Rays:

“We invested lots of time and efforts into idaq: Daniel worked on it full time nine months. And he is a brilliant programmer who knows how to do things, yet there is a lot to do – just to achieve the same level of comfort as with idag.”

I’d like to thank Ilfak for the opportunity he gave me; I learned a lot. But the most important thing I took away from that experience was that no project could scare me anymore. That project was immense and it took me literally 9 months of writing and porting up to 1000 lines of code a day to finish it in that time frame. Afterwards, I was exhausted and fearless.

At the end of those initial months, I started working on a small PDF analyzer using the core I had rewritten years before. At the time, there was a huge interest in PDF malware and I wanted to take the opportunity to play a bit with my code.

Over the following years, the PDF utility evolved into part of my original idea: an application capable of inspecting multiple file formats. I started to sell the product.

My first mistake, and this is something probably many do when they’re young and/or inexperienced, was that I created something overly professional without being sure who the audience for the product was going to be. Was it going to be just technical people, or also semi-technical people? This was very naive of me: I could have thought it through at the time and figured it out from the start.

That mistake resulted in a UI which tried to be both simple and complex at the same time. The UI hid complexity to make things appear simpler or more limited, but not nearly enough for somebody without the necessary skills, while also steepening the learning curve for skilled people, who had to search for the hidden functionality they needed.

This initial indecision also resulted in a fundamental marketing issue of product positioning. Was I offering my product to technical people or not? And what exactly was I selling?

You see, in my pursuit of complexity, I completely lost focus on use cases and marketing. The incomplete list of features on the product page is a giant wall of text, which you can observe here in miniature.

See how impressive and complex it is? I bet you already want to purchase it!

You don’t? Yeah… exactly.

The list of sparse features is staggering. The program even includes Clang, just to be able to extract C++ structures from source code.

The list of supported file formats is also considerable:

APK, APNG, AXML, BMP, BZ2, CHM, CLASS, DEX, DIB, DLL, DOC, DOCX, ELF, EML, EOT, EXE, GIF, GZIP, JAR, JPEG, JSE, LNK, LZMA, MACH-O, MSI, O, OCX, ODT, OTF, PDB, PDF, PFB, PNG, PPS, PPT, PPTX, PRX, PUFF, RAW, RTF, SO, SQLITE3, SWF, SYS, T1, T2, TIFF, TORRENT, TTC, TTF, VBE, WINMEM, WOFF, XDP, XLS, XLSX, XML, ZIP

I wrote most of the support myself, with the exception of CHM, EML, DOC/XLS/PPT (which I took over), LNK, ActionScript2 (in SWF) and WINMEM (which I handed over after initially developing it myself). Bear in mind that the support for certain file formats, like PDF or PE, is extensive.

I lost track of the features myself and implemented many things which nobody could even notice. Let me offer you a few examples of my insanity.

  • I stress-tested many DB technologies just to see which was the best one for storing the data. I abstracted the access to the DB in order to be able to switch the technology underneath, or even support more than one. My idea was to let the user decide which DB type to use (a minimal sketch of this kind of abstraction follows this list).
  • As already mentioned, I embedded Clang just to extract C++ structures from source code. The level of support goes one step further into insanity, as it even includes templates. And that’s not even the end of it: structures can be imported from PDBs as well, and the two rely on different mechanisms underneath: whereas the sizes of C++ structures are computed on the fly, PDB structures have a fixed size.
  • Speaking of which, I wrote my own PDB parser, relying only on the awesome information provided by Sven B. Schreiber and a hex editor.
  • I didn’t want to rely on Authenticode in Windows to validate certificates in PEs, because that would have meant having some non-portable code and also slightly slowing down the scanning process. So I reverse engineered how Authenticode works and implemented it myself. The application doesn’t validate certificates on Linux and OS X, because I didn’t have a nice way to maintain an updated certificate store and the necessity never arose, but in theory it could.
  • I implemented the parsing of every font format. Some famous exploits relied on font technology, so I didn’t want the product to lack support for fonts. For those of you who are not aware of it: there isn’t just one font format. There is even a format created by Microsoft called EOT, which stands for Embedded OpenType. Basically, it’s a compressed OpenType font, and getting back to the OpenType format requires several stages, one of which is decompression. For the compression algorithm, Microsoft chose a custom one based on LZ77 called lzcomp. Microsoft has released the source code of lzcomp, but the released version contained some bugs which had already been patched in Windows. So I diffed the compiled code in order to include the patches and avoid shipping vulnerable code in my product. Of course, I could’ve used the Windows API to achieve the same, but that would’ve meant not being able to run the same code on other OSs.
  • When it came out, I bought the latest draft of the PDF specification, just to be able to support the newest encryption revision before anyone could even ask for it.
  • I implemented a first-person shooter game in the product so that the user wouldn’t get bored during the analysis of a file. I’m joking, but I stopped just shy of that.
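
To give a concrete idea of the abstraction mentioned in the first point, here is a minimal sketch in plain Python. It is purely illustrative and not the product’s actual code: the rest of the application talks only to the interface, so the SQLite backend could be swapped for any other engine, or even selected by the user.

```python
import sqlite3
from abc import ABC, abstractmethod
from typing import Optional

class DataStore(ABC):
    """Backend-agnostic storage interface: the application only ever
    sees this class, never a concrete DB technology."""

    @abstractmethod
    def put(self, key: str, value: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> Optional[bytes]: ...

class SQLiteStore(DataStore):
    """One concrete backend; another DB engine would simply implement
    the same two methods."""

    def __init__(self, path: str) -> None:
        self._db = sqlite3.connect(path)
        self._db.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v BLOB)")

    def put(self, key: str, value: bytes) -> None:
        self._db.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (key, value))
        self._db.commit()

    def get(self, key: str) -> Optional[bytes]:
        row = self._db.execute("SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
        return row[0] if row else None

# The engine (and even the choice of engine) stays behind the interface.
store: DataStore = SQLiteStore(":memory:")
store.put("analysis/entry", b"\x4d\x5a")
assert store.get("analysis/entry") == b"\x4d\x5a"
```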

These are just a few of the insane things I did. And I did many of them while also having an office job.

In fact, even though it took way longer than I had hoped, I found enough energy one night after work to finish the code demonstrating the idea that had been planted in my brain since I was 16.

An icon, inside an executable, inside a process address space, inside a raw memory dump. The complete hierarchy being visible and explorable.

I had proved what I set out to prove. That was it. One thing scrapped from the to-do list of my life.

The development of the memory support stalled after that, because the office work was taking up most of my time and I also had a life to live (let’s pretend that’s true). In addition, I still had to support the features of the product which people were actually using.

In the end, I decided to hire another developer dedicated to the memory part, as that was the only viable solution, and it turned out to be the right thing to do.

So what was the result of all this work? A product which I had difficulty describing to potential customers. I ended up pitching it as a “file analysis framework”, which sounds about as exciting as you would expect.

I am actually grateful to those customers who saw past the confusing concept, steep learning curve and sparse features. Many customers appreciated, for instance, the Python SDK. I dedicated a lot of time and effort to exposing most of the functionality of the product to Python. The only issue in that regard is the documentation, since it’s not easy to grasp everything from the posts on the company blog.

However, whenever a customer asked me for help with the SDK, I tried to do my best and I think that has been appreciated.

I actually like the SDK. For instance, decoding an object (or all of them, for that matter) in a PDF is just as simple as the following code.
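
Something along these lines would do the job. I should stress that this is a sketch from memory rather than verbatim SDK documentation: the module and API names below (Pro.Core, Pro.PDF, createContainerFromFile, PDFObject and its methods) should be treated as assumptions.

```python
# Sketch of decoding a single PDF object via the Python SDK.
# NOTE: the API names used here are recalled from memory and should be
# treated as assumptions, not as verbatim SDK documentation.
from Pro.Core import createContainerFromFile
from Pro.PDF import PDFObject

def decode_pdf_object(fname, oid, gen=0):
    # Load the file from disk into a data container.
    c = createContainerFromFile(fname)
    if c.isNull():
        return None
    # Parse the PDF and locate the requested object by id/generation.
    pdf = PDFObject(c)
    pdf.Load()
    obj = pdf.FindObject(oid, gen)       # hypothetical lookup call
    if obj is None:
        return None
    # Decode the object's stream, applying its filters (e.g. FlateDecode).
    return pdf.DecodeObjectStream(obj)   # hypothetical decode call
```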

And it’s not only about file formats: the SDK makes it possible to create complex UIs as well, as the sketch below suggests.
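
A custom view might be put together roughly as follows. Again, this is a hedged sketch: Pro.UI, proContext and the view methods are assumptions from memory, not guaranteed API.

```python
# Sketch of creating a custom UI view via the Python SDK.
# NOTE: Pro.UI, proContext and the methods below are assumptions
# from memory, not verbatim SDK documentation.
from Pro.UI import proContext, ProView

def show_strings_view(strings):
    ctx = proContext()
    # Create an empty custom view and give it a declarative layout.
    view = ctx.createView(ProView.Type_Custom, "Extracted Strings")  # hypothetical
    view.setup("<ui><table id='tbl' cols='Offset|String'/></ui>")    # hypothetical layout XML
    # Populate the table with the data to display.
    for offset, s in strings:
        view.table("tbl").addRow((hex(offset), s))                   # hypothetical
    ctx.show(view)                                                   # hypothetical

show_strings_view([(0x3C, "PE header offset"), (0x400, "first section")])
```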

After more than a decade of on-and-off development, this summer I finally had the time to draw some conclusions. What mistakes did I make? What do I dislike about my product, and what do I like?

Some of the mistakes I made:

  • I focused on proving something instead of focusing on real use cases. I knew about this even then, but it didn’t change my commitment to doing it regardless.
  • I didn’t choose my target audience from the start.
  • I implemented too many sparse features instead of continuing to improve a limited number of them.
  • All of that resulted in creating something which I didn’t feel passionate about.

Of course, it’s better to do just one thing and do it well. But that was too simple for me and that goes back to the root of my own stupidity.

The fact that I forced myself to write something without passion is also my main issue now. So, when I started my work towards version 3.0, I decided to make radical changes:

  1. Remove everything I visually hate from the product and replace it with something I like. This started with creating a new icon.
  2. Think about the things I like, such as the SDK, in order to build on them and create more things that I like.
  3. Finally give the product a shape and a position.
  4. Maintain code compatibility for whatever solutions existing customers have created.
  5. Give the product a strong coherency, both visual and feature-wise.
  6. End up with a project I enjoy working on and a product people enjoy using.

Having reached a point of (partial) maturity in my life and no longer feeling any need to prove myself through complexity, I am now forced to deal with the complexity I created for myself in my youth.

Completely re-thinking such a large project is not easy at all. It may or may not work out. I am really not writing from a point where I know it will be possible to remedy my mistakes. I have some initial ideas, but I am still far from a complete concept.

This time I am presented with some unique challenges, different from those I encountered in the past. The main challenge lies in becoming passionate about the project. If I somehow manage to accomplish that, then I think many more people could enjoy the product.

4 thoughts on “Drawn to Complexity: story of my own stupidity”

  1. Hi Daniel;

    That was a good read and it also brought back some memories of my coding years on the Commodore Amiga series, of which I still have a functioning 1992 model A1200.

    In programming, I always liked complexity; however, my German heritage would get the better of me and at every turn I’d ensure my assembler code was as efficient as possible. Every routine and instruction would accomplish the most while using the fewest processor clock cycles, resulting in the smallest executable file size. If I could get away with NOT using a BSR [subroutine] and RTS pair, hell yeh, I’d do it.

    I always made utilities and was never one for gaming, but my favourite to this day – I can’t even remember its name anymore – patched slower code instructions in executables to faster equivalents, MULU -> ASL and many others. It was only a few hundred bytes in size on disk (that’s the thing about assembler – outputting “Hello world!” in a DOS window would create a 5000-byte executable in compiled “C” code; I could ASM that to just 48 bytes).

    It made a big difference in many older Amiga programs, especially when a few slow instructions were used in a loop. I even patched the OS ROM image which I booted the machine with.

    On the Motorola 68000 -> 68030 processors the NOP (no-operation) instruction was okay to use, and often used in cracked software, but on the 68040 -> 68060 processors the NOP instruction actually stalled the processor and cleared caches, resulting in a huge slowdown of almost 100 clock cycles, so I used the instruction LEA A6,A6 instead, as it needed only 2 clock cycles and still did the “nothing” required of it.

    Because Amiga executables use CODE and DATA “hunks”, I had to code some sort of hunklab inside the util to ensure I only patched proper CODE hunks, or crashes would surely result.

    I released it completely freeware on Aminet back in the 90’s. It’s probably still there.

    I always wanted to do that for the PC as well, but after taking a quick look (a long time ago) at x86 ASM, my first thought was “What fresh hell is this?” and the next, “Where are all my registers?”, which left me not bothering to learn it at all. Maybe you could take up this minor challenge. It would make a good companion to your 4GB Patch.

    Enjoy;

    Olly (Australia)

    1. Hello Olly,

      thank you for the interesting story!

      In general it could be a good idea to apply some sort of code optimization where possible. However, a few issues come to my mind, the main one being that many games use software protections nowadays.

      Your story really brought back memories of the good old times for me, even though it predates the period when I started doing IT, which was in 2000; I still got a bit of the flavor of the 90s.

      Cheers

  2. Hey Daniel,
    thanks for sharing your thoughts and what you called “mistakes”.

    Maybe in a business-centric way they could be considered mistakes, but you were young and you wanted to prove and challenge yourself: I see nothing wrong in that. 🙂

    You are/were also ingenious enough to make your ideas become real, be they insane or not 😛
    I have, from time to time, some good ideas – at least IMHO – but lack of skills (and plenty of incompetence and laziness :P) keeps me away from all the cool stuff you do every day. I understand this “look-behind” might make you think you wasted time … but it’s really cool stuff (even if no one will know, like or use it), eheheh 🙂

    At the end of the day, however, when you “finish” something you spent a lot of time on – even if it’s not something you were passionate about – you feel extremely happy/satisfied that you could scrap it from your own to-do list … or at least those are my feelings, in my little space of stupid/trivial things 🙂

    It’s awesome that you’re now trying to “put together” the efforts you made in the past and “adjust” them with the taste of maturity … it will help you regain the passion for your product (all parts of it) and, in the process, other memories, like the ones you shared with us in this post (good or bad, it depends of course … but all precious 😉 ), will come to your mind.


    Congratulations also to ЯΞ√ΩLUT↑☼N, who did a lot of cool stuff too!
    Thanks for sharing your experience too. 🙂

    You’re amazing guys and, for people like you who strive for perfection … it’s expected that nothing will satisfy you, but maybe that’s the negative side of being a genius 😉

    Best Regards,
    Tony

    P.S. Even in my poor job, I often look back at something I wrote in the past and say to myself “who the hell wrote this ugly or useless stuff?” … before taking a moment and realizing it was me who did so much nonsense 😀 I plan to “correct” it asap: but I wouldn’t classify it as a “mistake” … at least not all of it 😛

    1. Hello Tony,
      thank you for your comment, it made my day! 🙂 Also thanks for the encouragement and for sharing your own experience. I was indeed happy when I scrapped the memory thing from my to-do list. In the end, even my revamping of NTCore and writing again on the blog is an attempt to reconnect with my passion, as written before. It’s a process, I think, and it takes some time; it doesn’t happen from one day to the next. And it’s the same with a product which has been developed for so long: it takes some distancing to be able to see it with other eyes. And this is true both for judging it objectively and for being able to completely re-think it.
      Cheers
