Drawn to Complexity: the story of my own stupidity

If one writes blog entries about the mistakes of others, one should also be able to admit one’s own. And that’s exactly what I’m going to do here.

The main reasons I’m writing this post are the following:

  • To entertain and interact on some level.
  • To save others from falling for the same mistakes or at least make them feel less alone.
  • To talk about the history of my commercial application (Cerbero Profiler), in order to help the reader understand how it came to be the way it is.
  • To share my thoughts regarding the future of this product.

Let me start with a disclaimer: I am aware of the literature about software blunders, marketing strategies, sales, etc. I read quite a bit.

In fact, my favorite book about mistakes in the IT world has long been In Search of Stupidity by Merrill R. (Rick) Chapman.

I probably first read this book when I was about 18-20 years old and it has remained my favorite ever since. I have since re-read it several times. Apart from the interesting history lessons concerning some of the biggest IT blunders, the book is also hilarious. I wish my writing could be as funny as Rick’s and maybe someday it will.

This is to say that not only have I been aware since my youth of the mistakes made by major companies in the IT world, but I have also always agreed that to do better than others it is necessary to make fewer mistakes. There are many self-help books about success which recommend giving oneself room to fail, in a controlled way, as many times as it takes to reach success. Of course, ideally it’s even better to learn from the mistakes of others. Sometimes that may even be possible, if our own stubbornness doesn’t stand in the way.

The initial idea for the program came to me when I was very young, around 16. I had noticed how it was possible to explore the user address space of a process in WinHex and wanted to expand on that idea by making it possible to inspect and edit PEs in memory.

The thing I was most easily drawn to as a developer was complexity. If an idea was complex, I would want to start working on it even before considering whether the idea had any real use. And the recurring theme in my life is that I have almost always chosen complexity over business opportunities.

It has to be noted that the most successful software, in terms of user count, I have ever written is also the one which took me the least time to write, namely: 4GB Patch. It was written in about 20 minutes and I have no idea how many millions of times it has been downloaded.

Hence, I understood that complexity had little to do with success, as successful mobile phone fart apps have amply demonstrated. Still, I couldn’t let go of my idea. Implementing it at a rough level would have been easy, but doing it in a sophisticated way would take time.

I wrote most of Explorer Suite when I was 19 and at the time I wrote it mainly to help me with another project connected to .NET internals (I was writing an obfuscator). When I was 21-22 I already had much more experience and wanted to rewrite the core of the application to make it possible to support multiple file formats (at the time I wanted to support more executable file formats). I partly did that, but the UI, written in MFC, was a disaster to extend, and I didn’t have the time to work on it.

You have to keep in mind that during all that time I was also working on other projects. At 23 years of age I joined Hex-Rays to work on IDA Pro. Yet another time in my life when I chose complexity: I had something to prove and IDA Pro was a big fish. I was hired to rewrite/port the entire UI of IDA to Qt. Quoting Ilfak Guilfanov from the blog of Hex-Rays:

“We invested lots of time and efforts into idaq: Daniel worked on it full time nine months. And he is a brilliant programmer who knows how to do things, yet there is a lot to do – just to achieve the same level of comfort as with idag.”

I’d like to thank Ilfak for the opportunity he gave me, I learned a lot. But the most important thing I took away from that experience was that there wasn’t any project that could scare me anymore. That project was immense and it took me literally 9 months of writing and porting up to 1000 lines of code a day to make it in that time-frame. Afterwards, I was exhausted and fearless.

At the end of those initial months, I started working on a small PDF analyzer using the core I had rewritten years before. At the time, there was a huge interest in PDF malware and I wanted to take the opportunity to play a bit with my code.

The PDF utility evolved over the following years into a part of my original idea: an application capable of inspecting multiple file formats, and I started to sell the product.

My first mistake, and this is something probably many do when they’re young and/or inexperienced, was that I created something overly professional. I wasn’t exactly sure who the audience for the product was going to be. Was it going to be just technical people or also semi-technical people? This was extremely naive of me. I could have thought it through at the time and figured it out from the start.

That mistake resulted in a UI which tried to be both simple and complex at the same time. The UI hid the complexity to make things appear simpler or more limited, but not nearly enough for somebody who isn’t skilled, while it also increased the learning curve for skilled people, who had to search for the hidden functionality they needed.

This initial indecision also resulted in a fundamental marketing issue of product positioning. Was I offering my product to technical people or not? And what exactly was I selling?

You see, in my pursuit of adding complexity, I completely lost focus on use-cases and marketing. The incomplete list of features on the product page is like a giant wall of text, which you can observe here in miniature.

See how impressive and complex it is? I bet you already want to purchase it!

You don’t? Yeah… exactly.

The list of sparse features is staggering. The program even includes Clang just in order to be able to extract C++ structures from source code.

The list of supported file formats is also considerable:

APK, APNG, AXML, BMP, BZ2, CHM, CLASS, DEX, DIB, DLL, DOC, DOCX, ELF, EML, EOT, EXE, GIF, GZIP, JAR, JPEG, JSE, LNK, LZMA, MACH-O, MSI, O, OCX, ODT, OTF, PDB, PDF, PFB, PNG, PPS, PPT, PPTX, PRX, PUFF, RAW, RTF, SO, SQLITE3, SWF, SYS, T1, T2, TIFF, TORRENT, TTC, TTF, VBE, WINMEM, WOFF, XDP, XLS, XLSX, XML, ZIP

I wrote most of the support myself with the exception of CHM, EML, DOC/XLS/PPT (which I took over), LNK, ActionScript2 (in SWF), WINMEM (which I handed over after initially developing it myself). The reader has to consider that the support for certain file formats like PDF or PE is extensive.

I lost track of the features myself and implemented many things which nobody could even notice. Let me offer you a few examples of my insanity.

  • I stress-tested many DB technologies just to see which was the best one to store the data. I abstracted the access to the DB in order to be able to switch the DB technology underneath or even support more than one. My idea was even to let the user decide which DB type to use.
  • As already mentioned, I embedded Clang just to extract C++ structures from source code. The level of support goes one step further into insanity, as it even includes templates. And that’s not even the end of it. Structures can be imported from PDBs as well, and underneath the two rely on different mechanisms: whereas the size of C++ structures is computed on the fly, PDB structures have a fixed size.
  • Speaking of which, I added my own PDB parser which I created relying only on the awesome information provided by Sven B. Schreiber and the hex editor.
  • I didn’t want to rely on Authenticode in Windows to validate certificates in PEs, because that would have meant having some non-portable code and also slightly slowing down the scanning process. So I reverse engineered how Authenticode works and implemented it myself. The application won’t validate certificates on Linux and OS X, because I didn’t have a nice way to maintain an updated certificate store and the necessity never arose, so I didn’t bother; but in theory it could validate PEs on those systems as well.
  • I implemented the parsing of every font format. Some famous exploits relied on font technology, so I didn’t want the product to lack support for fonts. For those of you who are not aware of it: there isn’t just one font format. There is even a format called EOT, created by Microsoft, which stands for Embedded Open Type. Basically, it’s a compressed OpenType font. To get back to the OpenType format, several stages have to be performed, one of which is decompression. As for the compression algorithm, Microsoft chose a custom one based on LZ77 called lzcomp. Microsoft has released the source code of lzcomp, but the version they released contained some bugs which had already been patched in Windows. So what I did was to diff the compiled code in order to include the patches and avoid having vulnerable code in my product. Of course, I could’ve also used the Windows API to achieve the same, but that would’ve meant not being able to run the same code on other OSs.
  • When it came out, I bought the latest PDF specification draft, just to be able to support the newest encryption revision before anyone could even ask for it.
  • I implemented a first-person shooter game in the product so that the user wouldn’t get bored during the analysis of a file. I’m joking, but I stopped just shy of that.

These are just a few of the insane things I did. And I did many of them while also having an office job.

In fact, even though it took way longer than I had hoped for, one night after work I found enough energy to finish the code demonstrating the idea which had been planted in my brain since I was 16.

An icon, inside an executable, inside a process address space, inside a raw memory dump. The complete hierarchy being visible and explorable.

I had proved what I set out to prove. That was it. One thing scrapped from the to-do list of my life.

The development of the memory support stalled after that, because the office work was taking up most of my time and I also had a life to live (let’s pretend it’s true). In addition, I still had a product to support regarding the features which were actually being used by people.

In the end, I decided to hire another developer dedicated to the memory part as that was the only viable solution and it turned out to be the right thing to do.

So what was the result of all this work? A product which I had difficulty describing to potential customers. I ended up pitching it as a “file analysis framework”, which sounds as exciting as you would expect.

I am actually grateful to those customers who saw past the confusing concept, steep learning curve and sparse features. Many customers appreciated, for instance, the Python SDK. I have dedicated a lot of time and effort to exposing most of the functionality of the product to Python. The only issue in that regard is the documentation, since it’s not easy to grasp everything from the posts on the company blog.

However, whenever a customer asked me for help with the SDK, I tried to do my best and I think that has been appreciated.

I actually like the SDK. For instance, decoding an object (or all of them, for that matter) in a PDF is just as simple as the following code.

And it’s not only about file formats: the SDK also allows the creation of complex UIs.

After over a decade of non-continuous development, this summer I finally had the time to draw some conclusions. What mistakes did I make? Which are the things I dislike about my product and which are the ones I like?

Some of the mistakes I made:

  • I focused on proving something instead of focusing on real use-cases. I even knew about this, but it didn’t change my commitment to do it regardless.
  • I didn’t choose my target audience from the start.
  • I implemented too many sparse features instead of continuing to improve a limited number of them.
  • All of that resulted in my creating something which I didn’t feel passionate about.

Of course, it’s better to do just one thing and do it well. But that was too simple for me and that goes back to the root of my own stupidity.

Having forced myself to write something without passion is also my main issue now. When I started my work towards version 3.0, I decided to make radical changes.

  1. Remove everything I visually hate from the product and replace it with something I like. This started out by creating a new icon.
  2. Build on the things I like, such as the SDK, and create more things that I like.
  3. Finally give the product a shape and position.
  4. Maintain code compatibility for whatever solution existing customers have created.
  5. Give the product a strong coherency, both visual and feature-wise.
  6. End up with a project I enjoy working on and a product people enjoy using.

Having reached a point of (partial) maturity in my life and no longer feeling any need to prove myself through complexity, I am now forced to deal with the complexity I created for myself in my youth.

To completely re-think such a large project is not easy at all. It may or may not work out. I really am not writing from a point where I know that it will be possible to remedy my mistakes. I have some initial ideas, but I am still far from a complete concept.

This time I am presented with some unique challenges, different from those I encountered in the past. The main challenge lies in becoming passionate about the project. If somehow I manage to accomplish that, then I think many more could enjoy the product.

Batch image manipulation using Python and GIMP

Not a very common topic for me, but I thought it could be neat to mention some tips & tricks. I won’t go into the details of the Python GIMP SDK; most of it can be figured out from the GIMP documentation. I spent a total of one hour researching this topic, so I’m not an expert and I could have made mistakes, but perhaps I can save some effort for others who want to achieve the same results. You can jump to the end of the tutorial to find a nice skeleton batch script if you’re not interested in reading the theory.

To those wondering why GIMP: I created a new icon for Profiler and wanted to automate some operations on it in order to have it in all the sizes and flavors I need. One of the produced images had to be semi-transparent. So I thought, why not use a GIMP batch command, since GIMP is installed by default on most Linux systems anyway?

Just to mention it, GIMP also supports a Lisp syntax for scripts, but it caused my eyes to bleed profusely, so I didn’t even take it into consideration and focused directly on Python.

Of course, I could’ve tried other solutions like PIL (Python Imaging Library), which I have used in the past. But GIMP is actually nice: you can perform many complex UI operations from code and you also have an interactive Python shell to test your code live on an image.

For example, open an image in GIMP, then open the Python console from Filters -> Python-Fu -> Console and execute a snippet along these lines (a minimal sketch; the gimp module is already in scope in the console):
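
    image = gimp.image_list()[0]      # first image in the list of open images
    image.layers[0].opacity = 50.0    # set the opacity of the first layer to 50%
    gimp.displays_flush()             # refresh the display to show the change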

And you’ll see that the image is now halfway transparent. The code takes the first image from the list of open images and sets the opacity of its first layer to 50%.

This is the nice thing about GIMP scripting: it lets you manipulate layers just like in the UI. This allows for very powerful scripting capabilities.

The first small issue I encountered in my attempt to write a batch script is that GIMP only accepts Python code as a command-line argument, not the path to a script on disk. According to the official documentation:

All this means that you could easily invoke a GIMP Python plug-in such as the one above directly from your shell using the (plug-in-script-fu-eval …) evaluator:

gimp --no-interface --batch '(python-fu-console-echo RUN-NONINTERACTIVE "another string" 777 3.1416 (list 1 0 0))' '(gimp-quit 1)'

The idea behind it is that you create a GIMP plugin script, put it in the GIMP plugin directory and register methods, like in the following small example script (a sketch modeled on the console-echo example from the GIMP Python documentation):
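
    #!/usr/bin/env python
    # sketch: save it in the GIMP plug-ins directory and make it executable
    from gimpfu import *

    def console_echo(arg0, arg1, arg2, arg3):
        print("echo: %s %d %f %s" % (arg0, arg1, arg2, str(arg3)))

    register(
        "console_echo",    # exposed to Script-Fu as python-fu-console-echo
        "Echo test", "Prints its arguments",
        "author", "copyright", "2018",
        None, None,        # no menu label: the plug-in isn't visible in the UI
        [
            (PF_STRING, "arg0", "A string", "test string"),
            (PF_INT,    "arg1", "An integer", 100),
            (PF_FLOAT,  "arg2", "A float", 1.2),
            (PF_COLOR,  "arg3", "A color", (0, 0, 0)),
        ],
        [],
        console_echo)

    main()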

And then invoke the registered method from the command line as explained above.

I noticed many threads on stackoverflow.com where people were trying to figure out how to execute a batch script from the command line. The obvious solution which came to my mind was to pass Python code on the command line which prepends the current path to sys.path and then imports the batch script. So I searched and found that very solution suggested by the user xenoid in this stackoverflow thread.

So the final code for my case would be something like the following, assuming the batch script below is saved as batch.py in the current directory:
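
    gimp -idf --batch-interpreter python-fu-eval -b "import sys; sys.path = ['.'] + sys.path; import batch; batch.process('icon.png')" -b "pdb.gimp_quit(1)"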

And the batch script itself, roughly:
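
    # batch.py, a sketch: file names and the 50% opacity are just examples
    from gimpfu import *

    def process(path):
        image = pdb.gimp_file_load(path, path)
        image.layers[0].opacity = 50.0
        # without this call the transparency is lost on save (more on this below)
        image.merge_visible_layers(CLIP_TO_IMAGE)
        out = path[:-4] + "_50.png"
        pdb.file_png_save(image, image.layers[0], out, out, 0, 9, 1, 1, 1, 1, 1)
        pdb.gimp_image_delete(image)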

What took me the longest to figure out was that I had to call the method merge_visible_layers before saving the image. Initially, I was trying to do it without calling it and the saved image was not transparent at all. So I thought the opacity was not correctly set and tried other methods like calling gimp_layer_set_opacity, but without success.

I then tried in the console and noticed that the opacity was actually set correctly, but that the information was lost when saving the image to disk. I then found the image method flatten and noticed that the transparency was retained, but unfortunately the saved PNG background was now white and no longer transparent. So I figured that there had to be a method to obtain a similar result without losing the transparent background. Looking a bit among the methods in the SDK, I found merge_visible_layers. I think it’s important to point this out, in case you experience the same issue and can’t find a working solution, just like it happened to me.

Now that we have a working solution, let’s create a more elegant one, which allows us to use GIMP from within the same script, without any external invocation. The idea, sketched below, is a decorator which re-launches the script inside GIMP when we’re not already running there (names are illustrative):
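
    import os
    import subprocess

    def gimp_batch(func):
        # decorator: when we're not inside GIMP, re-launch this very script
        # through gimp's python-fu-eval and call the function there
        def wrapper(*args):
            try:
                import gimp  # importable only inside GIMP's Python interpreter
            except ImportError:
                here = os.path.dirname(os.path.abspath(__file__))
                module = os.path.splitext(os.path.basename(__file__))[0]
                code = "import sys; sys.path.insert(0, %r); import %s; %s.%s(*%r); pdb.gimp_quit(1)" % \
                    (here, module, module, func.__name__, args)
                subprocess.call(["gimp", "-idf", "--batch-interpreter", "python-fu-eval", "-b", code])
                return
            func(*args)
        return wrapper

    @gimp_batch
    def make_transparent(path):
        from gimpfu import pdb, CLIP_TO_IMAGE
        image = pdb.gimp_file_load(path, path)
        image.layers[0].opacity = 50.0
        image.merge_visible_layers(CLIP_TO_IMAGE)
        out = path[:-4] + "_50.png"
        pdb.file_png_save(image, image.layers[0], out, out, 0, 9, 1, 1, 1, 1, 1)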

We can now call our function simply like this:
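
    # the guard prevents a second call when the module is re-imported inside GIMP
    if __name__ == "__main__":
        make_transparent("icon.png")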

Which looks very pretty to me.

I could go on showing other nice examples of image manipulation, but the gist of the tutorial was just this. However, GIMP has a rich SDK which allows you to automate very complex operations.

The decay of the IT industry

I’m writing this post out of solidarity with those who share my nowadays not-so-popular opinions. There’s most likely zero chance of changing anyone’s mind.

Job Interviews

Back in the days when I was still working as an employee, I only experienced interviews in the shape of conversations aimed at establishing whether or not I had the necessary knowledge for the job.

I am grateful that I’m not looking for a job today, because those times have gone. Today, job interviews are made of questions and tests which can only establish whether the candidate wasted enough time preparing for the interview. In fact, there are even books(!) to prepare someone for these interviews. This says nothing about the person’s real skills and fitness for the job. There are people who specialize in passing job interviews… That’s the people you want to hire, yeah.

Many clever IT guys won’t even bother with such nonsense. I know I wouldn’t. Instead, I would just continue to look for a company which is smart.

What I’m saying is that important companies are missing out on real talent based on these ridiculous interviews. Don’t get me wrong, for me or people like me that is just perfect, because whenever we need to hire a brilliant software developer, it’s very easy. There are many talented people around who are easily captivated by a serious job interview.

Agile Development

I don’t have much to say about the subject, because I have never had the misfortune to work for a company which used agile development, but I want to recommend an excellent post by Michael O. Church, namely “Why “Agile” and especially Scrum are terrible”, which I read a few years ago.

At the time I was searching for a funny rant against agile development and that’s how I got to this very funny and insightful read. I found many of my own views represented in his writing.

I really haven’t got anything to add to Michael’s post, because, being a low-level guy, any contact with agile development is unlikely for me.

Back in the old days, the retarded bullshit we had was called UML. Then, apparently, someone thought that UML wasn’t nearly retarded enough and came up with agile development, which is a million times more retarded.

What I think is funny is that some people defend agile as not being entirely bad in certain regards, because agile tries to claim for itself common sense and basic principles. Developers who actually need to be told these basic principles should gain experience before developing major projects in the first place and managers who need it shouldn’t manage anyone at all.

Quoting Michael:

Like a failed communist state that equalizes by spreading poverty, Scrum in its purest form puts all of engineering at the same low level: not a clearly spelled-out one, but clearly below all the business people who are given full authority to decide what gets worked on.

This is because agile development gives the illusion to managers who don’t understand the technology that they are in control of the development process. That’s the reason why it has become so popular. Just like open-space offices give to the same managers (and owners) the illusion of productivity. “Oh, it’s buzzing! I’m getting value for my money!”.

Open Spaces

Another brilliant idea which became trendy. I’m late at criticizing it, because there are already many articles / studies / polls saying that open spaces are terrible. Anyway, it’s a good example of how something stupid got popular and still is. I have worked in open spaces myself and it’s extremely stressful and ineffective.

“How can we make people who have to think for a living more productive? I know! Let’s put noise and people moving around them!”

Open spaces force you to look busy even if you’re not. Whoever thinks that it’s possible to write code for 8 hours a day, every day, for a long period of time has never programmed in his entire life. I can program intensively 5-6 hours a day for a sustained period of time, but even that is a lot. Four hours is more realistic. And I have always been an over-achiever. Forcing people to waste their time on social media and YouTube to look busy is just stupid.

Quoting Bill Hicks:

“Hicks! How come you’re not working?”. I go: “There’s nothing to do”, “Well, you pretend that you’re working”, “Why don’t you pretend I’m working? Yeah, you get paid more than me, you fantasize!”

That’s why people who work for me are completely free to organize their time as they wish. Companies should hire talented people and talented people don’t need a baby-sitter. Unless she’s hot.

Diversity

New definition of “inclusion”: let’s treat people differently because of what they are or represent, either in the workplace or on social media. And let’s over-praise their achievements. This will be fair both to the people outside of their group and to the people who are really clever and who belong to that group. Whatever minority that is.

People should be hired, promoted and awarded based on their merits. Not because of what they are or represent. The current trend is the result of a culture which favors good intentions and feelings over reason and logic, which in a technical field is even more ludicrous.

The pyramids were built on the sweat, blood and tears of many men. Not by singing Kumbaya while holding hands in a circle.

Making complex things is hard.

Having said that, I absolutely encourage neuro-diversity. Many companies should hire someone who isn’t an idiot for a change.

Time Travel: Running Python 3.7 on XP

To restart my career as a technical writer, I chose a light topic: running applications compiled with new versions of Visual Studio on Windows XP. I didn’t find any prior research on the topic, but I also didn’t search much. There’s no real purpose behind this article, beyond the fact that I wanted to know what could prevent a new application from running on XP. Our target application will be the embedded version of Python 3.7 for x86.

If we try to start any new application on XP, we’ll get an error message informing us that it is not a valid Win32 application. This happens because of some fields in the Optional Header of the Portable Executable.

Most of you probably already know that you need to adjust these fields as follows:

MajorOperatingSystemVersion: 5
MinorOperatingSystemVersion: 0
MajorSubsystemVersion: 5
MinorSubsystemVersion: 0

Fortunately, it’s enough to adjust the fields in the executable we want to start (python.exe); there’s no need to adjust the DLLs as well.
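
If you don’t want to patch the header in a hex editor, a few lines of Python with the pefile module (just one of many ways to do it) are enough:

    import pefile

    pe = pefile.PE("python.exe")
    pe.OPTIONAL_HEADER.MajorOperatingSystemVersion = 5
    pe.OPTIONAL_HEADER.MinorOperatingSystemVersion = 0
    pe.OPTIONAL_HEADER.MajorSubsystemVersion = 5
    pe.OPTIONAL_HEADER.MinorSubsystemVersion = 0
    pe.write("python_xp.exe")  # write the patched copy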

If we try to run the application now, we’ll get an error message due to a missing API in kernel32. So let’s turn our attention to the imports.

We have a missing vcruntime140.dll, then a bunch of “api-ms-win-*” DLLs, then only python37.dll and kernel32.dll.

The first thing which comes to mind is that in new applications we often find these “api-ms-win-*” DLLs. If we search for the prefix in the Windows directory, we’ll find a directory both in System32 and SysWOW64 called “downlevel”, which contains a huge list of these DLLs.

As we’ll see later, these DLLs aren’t actually used, but if we open one with a PE viewer, we’ll see that it contains exclusively forwarders to APIs contained in the usual suspects such as kernel32, kernelbase, user32 etc.

There’s an MSDN page documenting these DLLs.

Interestingly, in the downlevel directory we can’t find any of the files imported by python.exe. These DLLs actually expose C runtime APIs like strlen, fopen, exit and so on.

If we don’t have any prior knowledge on the topic and just do a string search inside the Windows directory for such a DLL name, we’ll find a match in C:\Windows\System32\apisetschema.dll. This DLL is special as it contains a .apiset section, whose data can easily be identified as some sort of format for mapping “api-ms-win-*” names to others.

Searching on the web, the first resources I found on this topic were two articles on the blog of Quarkslab (Part 1 and Part 2). However, I quickly figured that, while useful, they were too dated to provide me with up-to-date structures to parse the data. In fact, the second article shows a version number of 2, while at the time of my writing the version number is 6.

Just for completeness, after the publication of the current article, I was made aware of an article by deroko about the topic predating those of Quarkslab.

Anyway, I searched some more and found a code snippet by Alex Ionescu and Pavel Yosifovich in the repository of Windows Internals. I took the following structures from there.
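
For schema version 6 they look like this:

    typedef struct _API_SET_NAMESPACE {
        ULONG Version;      // 6 on recent Windows 10 builds
        ULONG Size;
        ULONG Flags;
        ULONG Count;        // number of namespace (and hash) entries
        ULONG EntryOffset;  // offset of the API_SET_NAMESPACE_ENTRY array
        ULONG HashOffset;   // offset of the API_SET_HASH_ENTRY array
        ULONG HashFactor;   // multiplier used by the name hash
    } API_SET_NAMESPACE, *PAPI_SET_NAMESPACE;

    typedef struct _API_SET_HASH_ENTRY {
        ULONG Hash;
        ULONG Index;
    } API_SET_HASH_ENTRY, *PAPI_SET_HASH_ENTRY;

    typedef struct _API_SET_NAMESPACE_ENTRY {
        ULONG Flags;
        ULONG NameOffset;   // offset of the UTF-16 "api-ms-win-*" name
        ULONG NameLength;   // in bytes
        ULONG HashedLength; // how many bytes of the name take part in the hash
        ULONG ValueOffset;  // offset of the API_SET_VALUE_ENTRY array
        ULONG ValueCount;
    } API_SET_NAMESPACE_ENTRY, *PAPI_SET_NAMESPACE_ENTRY;

    typedef struct _API_SET_VALUE_ENTRY {
        ULONG Flags;
        ULONG NameOffset;   // optional importer name, used for aliases
        ULONG NameLength;
        ULONG ValueOffset;  // offset of the host DLL name, e.g. "kernel32.dll"
        ULONG ValueLength;
    } API_SET_VALUE_ENTRY, *PAPI_SET_VALUE_ENTRY;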

The data starts with an API_SET_NAMESPACE structure.

Count specifies the number of API_SET_NAMESPACE_ENTRY and API_SET_HASH_ENTRY structures. EntryOffset points to the start of the array of API_SET_NAMESPACE_ENTRY structures, which in our case comes exactly after API_SET_NAMESPACE.

Every API_SET_NAMESPACE_ENTRY points to the name of the “api-ms-win-*” DLL via the NameOffset field, while ValueOffset and ValueCount specify the position and count of API_SET_VALUE_ENTRY structures. The API_SET_VALUE_ENTRY structure yields the resolution values (e.g. kernel32.dll, kernelbase.dll) for the given “api-ms-win-*” DLL.

With this information we can already write a small script to map the new names to the actual DLLs.
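
Here is a standalone sketch of the idea using the pefile module (the original script ran inside Cerbero Profiler, as mentioned below):

    import struct
    import pefile

    def apiset_data(path):
        pe = pefile.PE(path)
        for section in pe.sections:
            if section.Name.rstrip(b"\x00") == b".apiset":
                return section.get_data()
        raise RuntimeError("no .apiset section found")

    def dump_mappings(data):
        # API_SET_NAMESPACE: Version, Size, Flags, Count, EntryOffset, HashOffset, HashFactor
        count, entry_offset = struct.unpack_from("<7I", data)[3:5]
        utf16 = lambda off, size: data[off:off + size].decode("utf-16-le")
        for i in range(count):
            # API_SET_NAMESPACE_ENTRY: Flags, NameOffset, NameLength, HashedLength, ValueOffset, ValueCount
            _, name_off, name_len, _, val_off, val_count = \
                struct.unpack_from("<6I", data, entry_offset + i * 24)
            values = []
            for j in range(val_count):
                # API_SET_VALUE_ENTRY: Flags, NameOffset, NameLength, ValueOffset, ValueLength
                v = struct.unpack_from("<5I", data, val_off + j * 20)
                values.append(utf16(v[3], v[4]))
            print("%s -> %s" % (utf16(name_off, name_len), ", ".join(values)))

    dump_mappings(apiset_data(r"C:\Windows\System32\apisetschema.dll"))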

My original script could be executed with Cerbero Profiler from the command line as “cerpro.exe -r apisetschema.py”. These are the first lines of the produced output:

Going back to API_SET_NAMESPACE, its field HashOffset points to an array of API_SET_HASH_ENTRY structures. These structures, as we’ll see in a moment, are used by the Windows loader to quickly index an “api-ms-win-*” DLL name. The Hash field is effectively the hash of the name, calculated taking into consideration both HashFactor and HashedLength, while Index points to the associated API_SET_NAMESPACE_ENTRY entry.

The code which does the hashing can be found in the function LdrpPreprocessDllName in ntdll.

Or more simply in C code:
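
    #include <windows.h>

    // a sketch of the algorithm: HashFactor comes from the schema header (31 in practice)
    ULONG ApiSetHash(const WCHAR *name, ULONG hashedLength, ULONG hashFactor)
    {
        ULONG hash = 0;
        for (ULONG i = 0; i < hashedLength; i++) {
            WCHAR c = name[i];
            if (c >= L'A' && c <= L'Z')
                c += 0x20; // lowercase the character
            hash = hash * hashFactor + c;
        }
        return hash;
    }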

As a practical example, let’s take the DLL name “api-ms-win-core-processthreads-l1-1-2.dll”. Its hash would be 0x445B4DF3. If we find its matching API_SET_HASH_ENTRY entry, we’ll have the Index to the associated API_SET_NAMESPACE_ENTRY structure.
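
A quick way to double-check it in Python (assuming, as described above, that HashedLength covers the name only up to the last hyphen, excluding the version suffix):

    def apiset_hash(name, factor=31):
        h = 0
        for c in name.lower():
            h = (h * factor + ord(c)) & 0xFFFFFFFF
        return h

    # should print 0x445b4df3
    print(hex(apiset_hash("api-ms-win-core-processthreads-l1-1")))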

So, 0x5b (or 91) is the index. By going back to the output of mappings, we can see that it matches.

By inspecting the same output, we can also notice that all C runtime DLLs are resolved to ucrtbase.dll.

I was already resigned to having to figure out how to support the C runtime on XP, when I noticed that Microsoft actually supports the deployment of the runtime on it. The following excerpt from MSDN says as much:

If you currently use the VCRedist (our redistributable package files), then things will just work for you as they did before. The Visual Studio 2015 VCRedist package includes the above mentioned Windows Update packages, so simply installing the VCRedist will install both the Visual C++ libraries and the Universal CRT. This is our recommended deployment mechanism. On Windows XP, for which there is no Universal CRT Windows Update MSU, the VCRedist will deploy the Universal CRT itself.

Which means that on Windows editions following XP the support is provided via Windows Update, while on XP we have to deploy the files ourselves. We can find the files to deploy inside C:\Program Files (x86)\Windows Kits\10\Redist\ucrt\DLLs. This path contains three sub-directories: x86, x64 and arm. We’re obviously interested in the x86 one. It contains many files (42): apparently the most common “api-ms-win-*” DLLs and ucrtbase.dll. We can deploy those files onto XP to make our application work. We are still missing vcruntime140.dll, but we can take that DLL from the Visual C++ installation. In fact, that DLL is intended to be deployed, while the Universal CRT (ucrtbase.dll) is intended to be part of the Windows system.

This satisfies our dependencies in terms of DLLs. However, Windows has introduced many new APIs over the years which aren’t present on XP. So I wrote a script to test the compatibility of an application by checking its imported APIs against those exported by the DLLs on XP. The command line for it is “cerpro.exe -r xpcompat.py application_path”. It will check all the PE files in the specified directory.
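
The gist of xpcompat.py is the following (a simplified standalone sketch using pefile; paths and names are illustrative):

    import os
    import sys
    import pefile

    # mapping extracted from apisetschema.dll, keys without the ".dll" extension,
    # e.g. { "api-ms-win-crt-string-l1-1-0": "ucrtbase.dll", ... }
    apisetschema = {}  # contents omitted

    XP_SYSTEM32 = os.path.join(os.path.expanduser("~"), "Desktop", "system32")
    cache = {}

    def resolve(dll):
        name = dll.lower()
        if name.endswith(".dll"):
            name = name[:-4]
        return apisetschema.get(name, dll.lower())

    def exports_of(dll):
        name = resolve(dll)
        if name not in cache:
            syms = set()
            path = os.path.join(XP_SYSTEM32, name)
            if os.path.isfile(path):
                pe = pefile.PE(path)
                if hasattr(pe, "DIRECTORY_ENTRY_EXPORT"):
                    for sym in pe.DIRECTORY_ENTRY_EXPORT.symbols:
                        if sym.name:
                            syms.add(sym.name.decode())
            cache[name] = syms
        return cache[name]

    def check(path):
        pe = pefile.PE(path)
        for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
            dll = entry.dll.decode()
            available = exports_of(dll)
            if not available:
                continue  # DLL not on XP at all (e.g. deployed separately)
            for imp in entry.imports:
                if imp.name and imp.name.decode() not in available:
                    print("%s: missing %s!%s" % (os.path.basename(path), dll, imp.name.decode()))

    for root, dirs, files in os.walk(sys.argv[1]):
        for name in files:
            if name.lower().endswith((".exe", ".dll", ".pyd")):
                check(os.path.join(root, name))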

I had to omit the contents of the apisetschema global variable for the sake of brevity. You can download the full script from here. The system32 directory referenced in the code is the one of Windows XP, which I copied to my desktop.

And here are the relevant excerpts from the output:

We’re missing 5 APIs from kernel32.dll and 2 from ws2_32.dll, but the Winsock APIs are imported just by _socket.pyd, a module which is loaded only when a network operation is performed by Python. So, in theory, we can focus our efforts on the missing kernel32 APIs for now.

My plan was to create a fake kernel32.dll, called xernel32.dll, containing forwarders for most APIs and real implementations only for the missing ones. Here’s a script to create C++ files containing forwarders for all the APIs of common DLLs on Windows 10 (sketched below with the pefile module):
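
    import os
    import pefile

    WIN10_SYSTEM32 = r"C:\Windows\System32"
    # copy of XP's system32, used to mark which APIs already existed back then
    XP_SYSTEM32 = os.path.join(os.path.expanduser("~"), "Desktop", "system32")
    DLLS = ["kernel32.dll", "user32.dll", "advapi32.dll", "ws2_32.dll"]

    def export_names(path):
        if not os.path.isfile(path):
            return []
        pe = pefile.PE(path)
        return [s.name.decode() for s in pe.DIRECTORY_ENTRY_EXPORT.symbols if s.name]

    for dll in DLLS:
        base = os.path.splitext(dll)[0]
        on_xp = set(export_names(os.path.join(XP_SYSTEM32, dll)))
        with open(base + ".cpp", "w") as f:
            for api in sorted(export_names(os.path.join(WIN10_SYSTEM32, dll))):
                line = '#pragma comment(linker, "/export:%s=%s.%s")' % (api, base, api)
                f.write(line + (" // XP\n" if api in on_xp else "\n"))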

It creates files like the following kernel32.cpp:
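
In essence, each line is a linker export directive which forwards to the real kernel32:

    #pragma comment(linker, "/export:AcquireSRWLockExclusive=kernel32.AcquireSRWLockExclusive")
    #pragma comment(linker, "/export:AcquireSRWLockShared=kernel32.AcquireSRWLockShared")
    #pragma comment(linker, "/export:ActivateActCtx=kernel32.ActivateActCtx") // XP
    #pragma comment(linker, "/export:AddAtomA=kernel32.AddAtomA") // XP
    #pragma comment(linker, "/export:AddAtomW=kernel32.AddAtomW") // XP
    // ... and so on for every export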

The comment on the right (“// XP”) indicates whether the forwarded API is present on XP or not. We can provide real implementations exclusively for the APIs we want. The Windows loader doesn’t care whether we forward functions which don’t exist as long as they aren’t imported.

The APIs we need to support are the following:

  • GetTickCount64: I just called GetTickCount, not really important (a sketch follows this list)
  • GetFinalPathNameByHandleW: took the implementation from Wine, but had to adapt it slightly
  • InitializeProcThreadAttributeList: took the implementation from Wine
  • UpdateProcThreadAttribute: same
  • DeleteProcThreadAttributeList: same
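
For illustration, the GetTickCount64 case is as simple as it sounds (a sketch, compiled with _WIN32_WINNT set to 0x0501 so that the SDK headers don't declare the function themselves):

    #include <windows.h>

    // fall back to the 32-bit tick count, which wraps after ~49.7 days;
    // not really important for our purposes
    extern "C" ULONGLONG WINAPI GetTickCount64(void)
    {
        return GetTickCount();
    }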

I have to be grateful to the Wine project here, as it provided useful implementations, which saved me the effort.

I called this attempt at a support runtime for older Windows versions “XP Time Machine Runtime” and you can find the repository here. I compiled it with Visual Studio 2013 and CMake.

Now that we have our xernel32.dll, the only thing left to do is to rename the imported DLL inside python37.dll.
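
Since the new name has the same length as the old one, a blunt byte patch is enough (a sketch; any PE editor does the job more cleanly):

    with open("python37.dll", "rb") as f:
        data = f.read()
    # the import directory stores the name as an ASCII string, often uppercase
    for old in (b"KERNEL32.dll\x00", b"kernel32.dll\x00"):
        data = data.replace(old, b"xernel32.dll\x00")
    with open("python37.dll", "wb") as f:
        f.write(data)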

Let’s try to start python.exe.

Awesome.

Of course, we’re still not completely done, as we didn’t implement the missing Winsock APIs, but perhaps this and some more could be the content of a second part to this article.

Overclocked

This post comes after a very long hiatus on this personal blog. During the past years I have been very busy with work and other activities, but in the last months I took a break and started to re-think my life.

One of the consequences of this process has been the revamping of NTCore and the decision to provide it with new content in the shape of articles and programs. In fact, I wanted to start with a technical article, but then some considerations crept into my mind and I wanted to share them.

One of the reasons I stopped writing about interesting things and dedicating spare time to my IT hobby was that too much of my time was being spent on work-related IT activities not connected to the development of Cerbero Profiler. Anyone who has ever worked for a company with incompetent managers can understand this perfectly. There are companies, large or small, which kill the passion for whatever you enjoyed doing before working for them.

One classic example is a company which had luck with its first product, because it was the right product at the right time, and then tries to replicate that first success with an endless amount of new projects, all doomed to fail. The reason they do it is that they don’t want the company to rely on only one product. The reason they fail is that they were lucky, not clever, with their first product.

Unfortunately, the boost of arrogance caused by the first hit is enough to eclipse all the following failures, which may or may not, depending on the success of the first product, bring the company to collapse.

The technical workforce in such a company is divided into two groups. The first group works on the first product, aka the cash cow. This group endures enormous pressure, because the entire fate of the company depends on them. Not only that, but the pressure increases whenever money is wasted on the other useless side-projects. The frustration of this group stems from the fact that they are the only ones being put under pressure and that their work has to finance what they perceive as the non-work of the others.

The second group works on the side-projects which are doomed to fail. The clever technical people in this group already know that these projects will fail, but that doesn’t change anything in the decisions taken by the company. The frustration of this group stems from continuously doing useless things which nobody cares about, and from not being appreciated like the people in the first group.

In such an environment, it doesn’t matter which group you belong to, whether you understand the big picture or just consider it your day job. You’re screwed regardless. The difference is that the people of the first group tend to last longer, but the toxic environment of the company will consume them as well in the long run. The people of the second group are the ones being consumed faster, and there’s a reason for that.

I heard that some large companies take into account the psychological effects on a software developer of working on a major project which then gets canceled. These companies make sure that the employee is then assigned to the development team of an already established product. This is to avoid the recurrence of the same situation for the developer and the psychological strain it would generate.

If you currently work for a company of the earlier category, I can give you only one piece of advice: resign and do something else. Cultivate crops, hunt, forge steel or build roads. Anything is better than enduring the bullshit of such a place. You can do it for a time if you need to, but you have to know when to stop.

For years I wasn’t able to live off the profits of my commercial product and needed a day job. Then, in the last years, the situation changed, but I still didn’t stop my other activity, for a number of reasons. In the beginning profits were still uncertain, and I also figured that more money was even better.

The ironic thing is that even though you may earn more money, you are also more inclined to spend it easily. This is because of the work-caused mental fatigue, which forces your brain to look for continuous gratification to alleviate the pain. So you end up in a fancy apartment, with a big TV, a nice car, etc. It requires some effort to break the routine and part from that situation. Effort which is needed not because it’s difficult to give up a materialistic life-style, but because mental fatigue makes it hard to start any new endeavor.

That isn’t to say that I dislike money. In fact, one of the reasons I changed my life is that the money wasn’t nearly good enough for the amount of stress I had to face. I am neither a materialistic person nor a hippie. I can live with little money or with tons of it. It doesn’t change who I am.

It’s been only 10 months since I changed things and started to re-organize my life. The initial months were spent mostly on personal matters, logistics and recovering my physical health. Even though I always kept in shape and did a lot of sport, the stress still had effects on my overall well-being.

I spent the following months on relaxing my mind, making projects for the future and even starting a new hobby, knife making.

Of course, I still worked on my commercial product from time to time, but even that required a thinking pause as the new 3.0 version approaches and it’s a good point in time for some interesting and major improvements. I also made new important business deals unrelated to my product, which wouldn’t have happened if I hadn’t changed things.

That brings us to now and to my wish to rekindle my passion for IT and to the actual topic of the post.

It’s impossible for someone who grew up playing with SoftICE, like myself, not to notice the difference between approaching the field of IT back then and doing it now. In the past, we spent our time on IRC, which was a lot more fun than Twitter. We had fewer technologies to focus on. The result was that we were more focused and less distracted.

Not only that. We were small communities in which you could gain appreciation for some days of work writing a small utility or writing an article. Today nobody gives a fuck. Your article or code is just a drop in the ocean or a tweet in the movie “The Birds”.

Nowadays the IT field has exploded with many new fields and disciplines, many of which 20 years ago were relegated to academic research, were insignificantly small or didn’t exist at all: distributed computing, machine learning, mobile development, virtualization, etc.

At the same time, the amount of people and money in the IT industry also caused the explosion of bullshit. From IT security up until the retarded bullshit of agile development.

Although this may just seem another “things were better before” comment, it’s not really the point of it. There’s a natural process of commercialization from something which is niche to something which becomes common and consumed by the masses, which makes the field for those belonging to the initial niche less appealing. This is normal.

What is interesting is that we lose interest in things today because we are overclocked. By this technical reference I mean that we are overstimulated. We developed a numbness in regard to technology because we were exposed to too many (mostly useless) innovations, in an amount our brain couldn’t absorb, so it gave up and lost interest.

While, of course, no one can centrally control the amount of innovations which globally come out every day, individual companies can limit the amount of innovations within their own products for our brains to be able to appreciate them.

There’s a reason why nobody cares today when the new Windows is released. Many stopped caring after Windows Vista and most after Windows 7. Remember when the release of a new Windows was a big event? Remember how respected the work of Matt Pietrek and Sven B. Schreiber was? It’s not just because they were pioneers. The reason is that we cared beyond having a resource to help us implement our daily piece of code.

We had the illusion that technology was a progression towards improvement. And now we are disillusioned.

In my old rants against Microsoft, wherein I predicted the failure of products like Windows Phone and Silverlight, it is possible to notice the increasing disillusionment. Let me quote an old post from 2011:

Moreover, Windows could be improved to an endless extent without re-inventing the wheel every 2 years. If the decisions were up to me I would work hard on micro-improvements. Introduce new sets of native APIs along Win32. And I’d do it gradually, with care and try to give them a strong coherency. I would try to introduce benefits which could be enjoyed even by applications written 15 years ago. The beauty should lie in the elegance in finding ingenious solutions for extending what is already there, not by doing tabula rasa every time. I would make developers feel at home and that their time and code is highly valued, instead of making them feel like their creations are always obsolete compared to my brand new technology which, by the way, nobody uses.

To be clear, it isn’t just Microsoft. All the big players make the same mistake. During Jobs’ era at Apple we had a controlled amount of improvements which we could appreciate. When Jobs died, Apple became the same as any other company, and today nobody cares about Apple products either.

The gist of my theory is what follows. The majority of people use Windows or the iPhone to do a number of things. While a minority of people may think it’s cool to have a yet slimmer phone without a headphone jack, or to charge it without a wire, these are actually regressions (having to buy new adapters or headphones from Apple, more easily breaking your phone because the back is made out of glass) and they annoy the majority, while also numbing their capacity to absorb improvements.

If you add to your product 50 new things and only 5 of those are actual improvements, even those 5 improvements will become an indistinguishable blur among the other 45 and won’t even be perceived.

And just to hammer my point home, let’s take a Victorinox Swiss Army Knife (yes, I grew up watching MacGyver). It has more than a hundred years of history and it is perfect as it is. Of course, a minority of people may think that adding a pizza cutter to it would be essential, but Victorinox doesn’t work for a minority. Yes, every now and then a new model of knife comes out, intended for a particular group of people like sailing enthusiasts or IT workers, but the classic models have more or less remained unchanged throughout the decades. What happened is that they went through countless micro-improvements which brought them to the state-of-the-art tools they are today.

An OS, just like any important piece of technology, should give the user the same satisfaction a Victorinox SAK gives to its holder.

These are some of the considerations which crossed my mind while trying to make again my entrance in the IT world. They will reflect on my work and over the next months I will put my money where my mouth is.