Drawn to Complexity: a story of my own stupidity

If someone writes blog entries about the mistakes of others, one should also be able to admit one’s own. And that’s exactly what I’m going to do here.

The main reasons I’m writing this post are the following:

  • To entertain and interact on some level.
  • To save others from falling for the same mistakes or at least make them feel less alone.
  • To talk about the history of my commercial application (Cerbero Profiler), in order to help the reader understand how it came to be the way it is.
  • To give readers access to my thoughts regarding the future of this product.

Let me start with a disclaimer: I am aware of the literature about software blunders, marketing strategies, sales, etc. I read quite a bit.

In fact, my own favorite book about mistakes in the IT world has for a long time been In Search of Stupidity by Merrill R. (Rick) Chapman.

I probably first read this book when I was 18-20 years old and it has remained my favorite ever since. I have re-read it several times since. Apart from the interesting history lessons concerning some of the biggest IT blunders, the book is also hilarious. I wish my writing could be as funny as Rick’s; maybe someday it will be.

This is to say that not only have I been aware since my youth of the mistakes made by major companies in the IT world, but I have also always agreed that to do better than others it is necessary to make fewer mistakes. There are many self-help books about success which recommend giving oneself room to fail in a controlled way for as many times as it takes to reach success. Of course, ideally it’s even better to learn from the mistakes of others. Sometimes that may even be possible, if our own stubbornness doesn’t stand in the way.

The initial idea for the program came to me when I was very young, around 16. I had noticed how it was possible to explore the user address space of a process in WinHex and wanted to expand on that idea by allowing the user to inspect and edit PEs in memory.

The thing I was most easily drawn to as a developer was complexity. If an idea was complex, I wanted to begin working on it even before considering whether it had any real use. And the recurring theme in my life is that I have almost always chosen complexity over business opportunities.

It has to be noted that the most successful software, in terms of user count, I have ever written is also the one which took me the least time to write, namely: 4GB Patch. It was written in about 20 minutes and I have no idea how many millions of times it has been downloaded.

Hence, I understood that complexity had little to do with success, as successful mobile phone fart apps have amply demonstrated. Still, I couldn’t let go of my idea. Implementing it on a rough level would have been easy, but doing it in a sophisticated way would take time.

I wrote most of Explorer Suite when I was 19, mainly to help me with another project connected to .NET internals (I was writing an obfuscator). When I was 21-22 I already had much more experience and wanted to rewrite the core of the application to make it possible to support multiple file formats (at the time I wanted to support more executable file formats). So I partly did that, but the UI, written in MFC, was also a disaster to expand, and I didn’t have the time to work on it.

You have to keep in mind that during all that time I was also working on other projects. At 23 years of age I joined Hex-Rays to work on IDA Pro. Yet another time in my life when I chose complexity: I had something to prove and IDA Pro was a big fish. I was hired to rewrite/port the entire UI of IDA to Qt. Quoting Ilfak Guilfanov from the blog of Hex-Rays:

“We invested lots of time and efforts into idaq: Daniel worked on it full time nine months. And he is a brilliant programmer who knows how to do things, yet there is a lot to do – just to achieve the same level of comfort as with idag.”

I’d like to thank Ilfak for the opportunity he gave me; I learned a lot. But the most important thing I took away from that experience was that no project could scare me anymore. That project was immense and it literally took me nine months of writing and porting up to 1000 lines of code a day to make it in that time-frame. Afterwards, I was exhausted and fearless.

At the end of those initial nine months, I started working on a small PDF analyzer using the core I had rewritten years before. At the time, there was huge interest in PDF malware and I wanted to take the opportunity to play a bit with my code.

In the following years the PDF utility evolved into a part of my original idea, an application capable of inspecting multiple file formats, and I started to sell the product.

My first mistake, and this is something probably many do when they’re young and/or inexperienced, was that I created something overly professional without being sure who the audience for the product was going to be. Was it going to be just technical people or also semi-technical people? This was sheer naivety on my part. I could have thought it through at the time and figured it out from the start.

That mistake resulted in a UI which tried to be both simple and complex at the same time. The UI hid complexity to make things appear simpler and more limited, but not nearly enough for somebody who isn’t skilled, while also steepening the learning curve for skilled people, who had to search for the hidden functionality they needed.

This initial indecision also resulted in a fundamental marketing issue of product positioning. Was I offering my product to technical people or not? And what exactly was I selling?

You see, in my pursuit of complexity, I completely lost focus on use-cases and marketing. The incomplete list of features on the product page is like a giant wall of text, which you can observe here in miniature.

See how impressive and complex it is? I bet you already want to purchase it!

You don’t? Yeah… exactly.

The list of sparse features is staggering. The program even includes Clang, just to be able to extract C++ structures from source code.

The list of supported file formats is also considerable:

APK, APNG, AXML, BMP, BZ2, CHM, CLASS, DEX, DIB, DLL, DOC, DOCX, ELF, EML, EOT, EXE, GIF, GZIP, JAR, JPEG, JSE, LNK, LZMA, MACH-O, MSI, O, OCX, ODT, OTF, PDB, PDF, PFB, PNG, PPS, PPT, PPTX, PRX, PUFF, RAW, RTF, SO, SQLITE3, SWF, SYS, T1, T2, TIFF, TORRENT, TTC, TTF, VBE, WINMEM, WOFF, XDP, XLS, XLSX, XML, ZIP

I wrote most of the support myself, with the exception of CHM, EML, DOC/XLS/PPT (which I took over), LNK, ActionScript2 (in SWF) and WINMEM (which I handed over after initially developing it myself). The reader should consider that the support for certain file formats, like PDF or PE, is extensive.

I lost track of the features myself and implemented many things which nobody could even notice. Let me offer you a few examples of my insanity.

  • I stress-tested many DB technologies just to see which was the best one to store the data. I abstracted the access to the DB in order to be able to switch the DB technology underneath or even support more than one. My idea was even to let the user decide which DB type to use.
  • As already mentioned, I embedded Clang just to extract C++ structures from source code. The level of support goes one step further into insanity, as it even includes templates. And that’s not even the end of it. Structures can be imported from PDBs as well, and underneath the two rely on different mechanisms: whereas the size of C++ structures is computed on the fly, PDB ones have a fixed size.
  • Speaking of which, I added my own PDB parser, which I created relying only on the awesome information provided by Sven B. Schreiber and a hex editor.
  • I didn’t want to rely on Authenticode in Windows to validate certificates in PEs, because that would have meant having some non-portable code and also slightly slowing down the scanning process. So what I did was to reverse engineer how Authenticode works and implement it myself. The application won’t validate certificates on Linux and OS X, because I didn’t have a nice way to maintain an updated certificate store and the necessity never arose, so I didn’t bother; but in theory it could.
  • I implemented the parsing of every font format. Some famous exploits relied on font technology, so I didn’t want the product to lack support for fonts. For those of you who are not aware of it: there isn’t just one font format. There is even a format created by Microsoft called EOT, which stands for Embedded OpenType. Basically, it’s a compressed OpenType font. To get back to the OpenType format, several stages have to be performed, one of which is decompression. As for the compression algorithm, Microsoft chose a custom one based on LZ77 called lzcomp. Microsoft has released the source code of lzcomp, but the version they released contained some bugs which had already been patched in Windows. So what I did was to diff the compiled code in order to include the patches and avoid shipping vulnerable code in my product. Of course, I could’ve also used the Windows API to achieve the same, but that would’ve meant not being able to run the same code on other OSs.
  • When it came out, I bought the latest PDF specification draft, just to be able to support the newest encryption revision before anyone could even ask for it.
  • I implemented a first-person shooter game in the product so that the user wouldn’t get bored during the analysis of a file. I’m joking, but I stopped just shy of that.

These are just a few of the insane things I did. And I did many of them while also having an office job.

In fact, even though it took way longer than I had hoped, one night after work I found enough energy to finish the code which demonstrated the idea that had been planted in my brain since I was 16.

An icon, inside an executable, inside a process address space, inside a raw memory dump. The complete hierarchy being visible and explorable.

I had proved what I set out to prove. That was it. One thing crossed off the to-do list of my life.

The development of the memory support stalled after that, because the office work was taking up most of my time and I also had a life to live (let’s pretend that’s true). In addition, I still had a product to support regarding the features which were actually being used by people.

In the end, I decided to hire another developer dedicated to the memory part as that was the only viable solution and it turned out to be the right thing to do.

So what was the result of all this work? A product which I had difficulty describing to potential customers. I ended up pitching it as a “file analysis framework”, which sounds as exciting as you would expect.

I am actually grateful to those customers who saw past the confusing concept, steep learning curve and sparse features. Many customers appreciated, for instance, the Python SDK. I have dedicated a lot of time and effort to exposing most of the functionality of the product to Python. The only issue in that regard is the documentation, since it’s not easy to grasp everything from the posts on the company blog.

However, whenever a customer asked me for help with the SDK, I tried to do my best and I think that has been appreciated.

I actually like the SDK. For instance, decoding an object (or all of them, for that matter) in a PDF is just as simple as the following code.
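Something along these lines; I’m sketching from memory here, so take the module and method names as illustrative rather than the SDK’s exact ones:

    # illustrative sketch: the real SDK exposes equivalent calls in its
    # Pro.* Python packages, the names below are assumptions
    from Pro.Core import createContainerFromFile
    from Pro.PDF import PDFObject

    pdf = PDFObject()
    pdf.Load(createContainerFromFile("sample.pdf"))
    for oid in pdf.Objects():          # iterate over the object ids
        data = pdf.DecodeObject(oid)   # filters are applied for you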

And it’s not only about file formats: the SDK allows creating complex UIs as well.

After over a decade of non-continuous development, this summer I finally had the time to draw some conclusions. What mistakes did I make? Which are the things I dislike about my product and which are the ones I like?

Some of the mistakes I made:

  • I focused on proving something instead of focusing on real use-cases. I even knew about this, but it didn’t change my commitment to do it regardless.
  • I didn’t choose my target audience from the start.
  • I implemented too many sparse features instead of continuously improving a limited number of them.
  • All of that resulted in creating something I didn’t feel passionate about.

Of course, it’s better to do just one thing and do it well. But that was too simple for me and that goes back to the root of my own stupidity.

Having forced myself to write something without passion is also my main issue now. So when I started my work towards version 3.0, I decided to make radical changes.

  1. Remove everything I visually hate from the product and replace it with something I like. This started with the creation of a new icon.
  2. Think about the things I like, such as the SDK, build on them and create more things that I like.
  3. Finally give the product a shape and position.
  4. Maintain code compatibility for whatever solutions existing customers have created.
  5. Give the product a strong coherency, both visual and feature-wise.
  6. End up with a project I enjoy working on and a product people enjoy using.

Having reached a point of (partial) maturity in my life and no longer feeling any need to prove myself through complexity, I am now forced to deal with the complexity I created for myself in my youth.

Completely re-thinking such a large project is not easy at all. It may or may not work out. I am not writing from a point where I know it will be possible to remedy my mistakes. I have some initial ideas, but I am still far from a complete concept.

This time I am presented with some unique challenges, different from those I encountered in the past. The main challenge lies in becoming passionate about the project. If somehow I manage to accomplish that, then I think many more could enjoy the product.

The decay of the IT industry

I’m writing this post purely out of solidarity with those who share my nowadays not-so-popular opinions. There’s most likely zero chance of changing anyone else’s mind.

Job Interviews

Back in the day, when I was still working as an employee, I only experienced interviews in the shape of conversations aimed at establishing whether or not I had the necessary knowledge for the job.

I am grateful that I’m not looking for a job today, because those days are gone. Today, job interviews are made of questions and tests which can only establish whether the candidate wasted enough time practicing for the interview. In fact, there are even books(!) to prepare someone for these interviews. This says nothing about the person’s real skills and fitness for the job. There are people specializing in passing job interviews… Exactly the people you want to hire, yeah.

Many clever IT guys won’t even bother with such nonsense. I know I wouldn’t. Instead, I would just keep looking for a company that is smarter than that.

What I’m saying is that important companies are missing out on real talent because of these ridiculous interviews. Don’t get me wrong: for me, or people like me, that is just perfect, because whenever we need to hire a brilliant software developer, it’s very easy. There are many talented people around who are easily captivated by a serious job interview.

Agile Development

I don’t have much to say about the subject, because I have never had the misfortune to work for a company which used agile development, but I want to recommend an excellent post by Michael O. Church, namely “Why “Agile” and especially Scrum are terrible”, which I read a few years ago.

At the time I was searching for a funny rant against agile development and that’s how I got to this very funny and insightful read. I found many of my own views represented in his writing.

I really haven’t got anything to add to Michael’s post, because, being a low-level guy, any contact with agile development is unlikely for me.

Back in the old days, the retarded bullshit we had was called UML. Then, apparently, someone thought that UML wasn’t nearly retarded enough and came up with agile development, which is a million times more retarded.

What I find funny is that some people defend agile as not being entirely bad in certain regards, because agile claims for itself common sense and basic principles. Developers who actually need to be told these basic principles should gain experience before developing major projects in the first place, and managers who need them shouldn’t manage anyone at all.

Quoting Michael:

Like a failed communist state that equalizes by spreading poverty, Scrum in its purest form puts all of engineering at the same low level: not a clearly spelled-out one, but clearly below all the business people who are given full authority to decide what gets worked on.

This is because agile development gives managers who don’t understand the technology the illusion of being in control of the development process. That’s the reason it has become so popular. Just like open-space offices give the same managers (and owners) the illusion of productivity: “Oh, it’s buzzing! I’m getting value for my money!”.

Open Spaces

Another brilliant idea which became trendy. I’m late in criticizing it, because there are already many articles, studies and polls saying that open spaces are terrible. Anyway, it’s a good example of how something stupid got popular and still is. I have worked in open spaces myself and it’s extremely stressful and ineffective.

“How can we make people who have to think for a living more productive? I know! Let’s surround them with noise and people moving around!”

Open spaces force you to look busy even when you’re not. Whoever thinks it’s possible to write code for 8 hours a day, every day, for a long period of time has never programmed in his entire life. I can program intensively 5-6 hours a day for a sustained period of time, but even that is a lot. Four hours is more realistic. And I have always been an over-achiever. Forcing people to waste their time on social media and YouTube just to look busy is plain stupid.

Quoting Bill Hicks:

“Hicks! How come you’re not working?” I go: “There’s nothing to do.” “Well, you pretend that you’re working.” “Why don’t you pretend I’m working? Yeah, you get paid more than me, you fantasize!”

That’s why people who work for me are completely free to organize their time as they wish. Companies should hire talented people and talented people don’t need a baby-sitter. Unless she’s hot.

Diversity

New definition of “inclusion”: let’s treat people differently because of what they are or represent, either in the workplace or on social media, and let’s over-praise their achievements. This will surely be fair to the people outside of that group, and to the people who are really clever and who belong to it. Whatever minority that is.

People should be hired, promoted and awarded based on their merits. Not because of what they are or represent. The current trend is the result of a culture which favors good intentions and feelings over reason and logic, which in a technical field is even more ludicrous.

The pyramids were built on the sweat, blood and tears of many men. Not by singing Kumbaya while holding hands in a circle.

Making complex things is hard.

Having said that, I absolutely encourage neuro-diversity. Many companies should hire someone who isn’t an idiot for a change.

Overclocked

This post comes after a very long hiatus from this personal blog. During the past years I have been very busy with work and other activities, but in recent months I took a break and started to re-think my life.

One of the consequences of this process has been the revamping of NTCore and the decision to provide it with new content in the shape of articles and programs. In fact, I wanted to start with a technical article, but then some considerations crept into my mind and I wanted to share them.

One of the reasons I stopped writing about interesting things and dedicating spare time to my IT hobby was that too much of my time was being spent on work-related IT activities not connected to the development of Cerbero Profiler. Anyone who has ever worked for a company with incompetent managers can understand this perfectly. There are companies, large or small, which kill the passion for whatever you enjoyed doing before working for them.

One classic example is a company which got lucky with its first product, because it was the right product at the right time, and then tries to replicate that first success with an endless string of new projects, all doomed to fail. They do it because they don’t want the company to rely on only one product. They fail because they were lucky, not clever, with the first one.

Unfortunately, the boost of arrogance caused by the first hit is enough to eclipse all the following failures, which may or may not bring the company to collapse, depending on the success of the first product.

The technical workforce in such a company is divided into two groups. The first group works on the first product, aka the cash cow. This group endures enormous pressure, because the entire fate of the company depends on them. Not only that, but the pressure increases whenever money is wasted on the other useless side-projects. The frustration of this group stems from the fact that they are the only ones being put under pressure and that their work has to finance what they perceive as the non-work of the others.

The second group works on the side-projects which are doomed to fail. The clever technical people in this group already know that these projects will fail, but that doesn’t change anything in the decisions taken by the company. The frustration of this group stems from continuously doing useless things nobody cares about, and from not being appreciated like the people in the first group.

In such an environment, it doesn’t matter which group you belong to, whether you understand the big picture or just consider it your day job. You’re screwed regardless. The difference is that the people in the first group tend to last longer, but the toxic environment of the company will consume them as well in the long run. The people in the second group are the ones consumed faster, and there’s a reason for that.

I have heard that some large companies take into account the psychological effects on a software developer who worked on a major project which then got canceled. These companies make sure that the employee is then assigned to the development team of an already established product. This is to avoid a recurrence of the same situation for the developer and the psychological strain it would generate.

If you currently work for a company like the ones described above, I can give you only one piece of advice: resign and do something else. Cultivate crops, hunt, forge steel or build roads. Anything is better than enduring the bullshit of such a place. You can do it for a time if you need to, but you have to know when to stop.

For years I wasn’t able to live off the profits of my commercial product and needed a day job. Then, in recent years, the situation changed, but I still didn’t stop my other activity, for a number of reasons: in the beginning the profits were still uncertain, and I also figured that more money was even better.

The ironic thing is that even though you may earn more money, you are also more inclined to spend it easily. This is because of the work-induced mental fatigue, which forces your brain to look for continuous gratification to alleviate the pain. So you end up in a fancy apartment, with a big TV, a nice car, etc. It requires some effort to break the routine and walk away from that situation. Effort which isn’t needed to give up the materialistic life-style, but to overcome the mental fatigue which makes it hard to start any new endeavor.

That isn’t to say that I dislike money. In fact, one of the reasons I changed my life is that the money wasn’t nearly good enough for the amount of stress I had to face. I am neither a materialistic person nor a hippie. I can live with little money or with tons of it. It doesn’t change who I am.

It’s been only 10 months since I changed things and started to re-organize my life. The initial months were spent mostly on personal matters, logistics and recovering my physical health. Even though I had always kept in shape and done a lot of sport, the stress still had effects on my overall well-being.

I spent the following months relaxing my mind, making plans for the future and even starting a new hobby: knife making.

Of course, I still worked on my commercial product from time to time, but even that required a thinking pause, as the new 3.0 version approaches and it’s a good point in time for some interesting and major improvements. I also made important new business deals unrelated to my product, which wouldn’t have happened if I hadn’t changed things.

That brings us to now, to my wish to rekindle my passion for IT, and to the actual topic of this post.

It’s impossible for someone who grew up playing with SoftICE, like myself, not to notice the differences between approaching the field of IT back then and doing it now. In the past, we spent our time on IRC, which was a lot more fun than Twitter. We had fewer technologies to focus on. The result was that we were more focused and less distracted.

Not only that. We were small communities in which you could gain appreciation for a few days of work spent writing a small utility or an article. Today nobody gives a fuck. Your article or code is just a drop in the ocean, or a tweet in the movie “The Birds”.

Nowadays the IT field has exploded into many new fields and disciplines, many of which 20 years ago were relegated to academic research, were insignificantly small or weren’t there at all: distributed computing, machine learning, mobile development, virtualization, etc.

At the same time, the amount of people and money in the IT industry also caused an explosion of bullshit, from IT security all the way up to the retarded bullshit of agile development.

Although this may seem like just another “things were better before” comment, that’s not really my point. There’s a natural process of commercialization, from something which is niche to something which becomes common and consumed by the masses, and it makes the field less appealing for those belonging to the initial niche. This is normal.

What is interesting is that we lose interest in things today because we are overclocked. By this technical reference I mean that we are overstimulated. We have developed a numbness towards technology, because we were exposed to so many (mostly useless) innovations, in an amount our brain couldn’t absorb, that it gave up and lost interest.

While, of course, no one can centrally control the amount of innovation which comes out globally every day, individual companies can limit the amount of innovation within their own products, so that our brains are able to appreciate it.

There’s a reason why nobody cares today when a new Windows is released. Many stopped caring after Windows Vista and most after Windows 7. Remember when the release of a new Windows was a big event? Remember how respected the work of Matt Pietrek and Sven B. Schreiber was? It’s not just because they were pioneers. The reason is that we cared beyond having a resource to help us implement our daily piece of code.

We had the illusion that technology was a progression towards improvement. And now we are disillusioned.

In my old rants against Microsoft, wherein I predicted the failure of products like Windows Phone and Silverlight, it is possible to notice the increasing disillusionment. Let me quote an old post of mine from 2011:

Moreover, Windows could be improved to an endless extent without re-inventing the wheel every 2 years. If the decisions were up to me I would work hard on micro-improvements. Introduce new sets of native APIs along Win32. And I’d do it gradually, with care and try to give them a strong coherency. I would try to introduce benefits which could be enjoyed even by applications written 15 years ago. The beauty should lie in the elegance in finding ingenious solutions for extending what is already there, not by doing tabula rasa every time. I would make developers feel at home and that their time and code is highly valued, instead of making them feel like their creations are always obsolete compared to my brand new technology which, by the way, nobody uses.

To be clear, it isn’t just Microsoft. All the big players make the same mistake. During Jobs’ era at Apple, we had a controlled amount of improvements which we could appreciate. When Jobs died, Apple became the same as every other company, and today nobody cares about Apple products either.

The gist of my theory is as follows. The majority of people use Windows or the iPhone to do a certain number of things. While a minority of people may think it’s cool to have a yet slimmer phone without a headphone jack, or to charge it without a wire, these are actually regressions (having to buy new adapters or headphones from Apple, breaking your phone more easily because the back is made out of glass). They annoy the majority, while also numbing their capacity to absorb improvements.

If you add to your product 50 new things and only 5 of those are actual improvements, even those 5 improvements will become an indistinguishable blur among the other 45 and won’t even be perceived.

And just to hammer my point home, let’s take the Victorinox Swiss Army Knife (yes, I grew up watching MacGyver). It has more than a hundred years of history and it is perfect as it is. Of course, a minority of people may think that adding a pizza cutter to it is essential, but Victorinox doesn’t work for a minority. Yes, every now and then a new model of knife comes out intended for a particular group of people, like sailing enthusiasts or IT workers, but the classic models have remained more or less unchanged throughout the decades. What happened is that they went through countless micro-improvements, which brought them to the state-of-the-art tools they are today.

An OS, just like any important piece of technology, should give the user the same satisfaction a Victorinox SAK gives to its holder.

These are some of the considerations which crossed my mind while trying to make my entrance into the IT world again. They will reflect on my work, and over the next months I will put my money where my mouth is.

Companies on the Verge of a Nervous Breakdown

This is basically a continuation of the previous post about the biggest software delusions of the last decade. In hindsight I would have set a rather different tone for what I wrote, less rant and more technical, but the problem is that I keep things on my mind for a long time and never care enough to write them down, leaving them rotting until they come out as technological rants. Anyway, rants are always more fun to read, so let’s keep the style.

In this post I’m going to write about some things left out of the previous one and also comment on some things which have happened in the meanwhile. You might ask what I have to show for my big claims about complex issues. Very little indeed, but does that make them less true? You’ll be the judge. What I try to offer here is a different perspective on issues which are always analyzed from the marketing or business point of view. Trying to explain these things with technical reasons offers, in my opinion, much better explanations than those fished out of the flavor-of-the-day marketing magic hat.

After the last post, I was sent by email a “graphic that illustrates the 30 years of innovation at Microsoft and their failures along the way” to link on my blog. I don’t really care about the reasons behind the link request. What I want to say is that this graphic made fun of Microsoft’s failures of the decade just by listing some of them. And this is more or less the usual approach I see taken on the subject, even by technical blogs. Which means focusing on the facts, rather than trying to understand them.

Windows Phone

Can we say that the Lumia/Windows Phone 7 devices flopped, or is it still too soon? After some of the articles I’ve read here and there, I think we can. Lumia phones were pushed by a big carrier in the US (AT&T) and were the subject of a massive marketing campaign, yet they still sold less than the dropped and unadvertised N9/MeeGo project.

Nokia is laughable for dropping MeeGo! It can’t be stressed enough: that would’ve been their only chance to regain market share and they completely blew it.

But why? Surely many reasons stand in the background, but at the end of the day one has to consider what is better on the technical level. If your definition of a better phone is how shiny it looks, then important decisions in the mobile industry shouldn’t be left to you. Many think that Apple is leading the smartphone/tablet industry because of their marketing strategy. While Apple products are often appealing and polished, this couldn’t be further from the truth. Take the desktop market. Is Apple leading there? No. Why? Aren’t the products as polished as their counterparts in the mobile market? Or does Apple strangely suck at marketing their desktop products? Sure, Apple computers are expensive, but so are iPhones!

The first rule here is that great products sell themselves. Clearly marketing helps, but no matter how much marketing money you spend on a product which people don’t want, it will not sell, especially in the long term.

Take MeeGo, for instance. I don’t mean that this project would’ve rescued Nokia instantly. They would probably still have had to endure 1-2 years of losses along the road, but eventually it would’ve flourished. Of this I’m sure. And considering how many people still buy overpriced N9 phones on eBay, I have a point. The trick is that when you know you have a great project at hand, you invest in it and endure some losses, in the strong belief that it will eventually succeed.

One might say that this is exactly what is happening with Nokia and Windows Phone, only that they are betting on the wrong horse. It would be an acceptable point of view if we didn’t get hands-on with the technology itself. MeeGo was a great project; in my opinion it would’ve been the most advanced OS on the mobile market. Compare that with a repackaged Windows Mobile (not based on NT technology) running Silverlight. The fact alone that a developer is forced to write his apps in Silverlight or XNA would be enough to say “case fuckin’ closed!”.

Rumors say Windows Phone 8 will feature an NT kernel and that developers will be able to compile C++ code. It seems that after enormous pressure Microsoft had to give in about C++ (wow, that was totally unexpected… except that I wrote it a year ago, and it would’ve been clear to anyone with even an iota of experience as a developer). Even if it’s true, this is totally messed up. Those developers who lost time porting their C++ code to C# for Windows Phone 7, because C++ would never be part of the toolchain of that OS, probably lost their time for nothing. Also, users who are running Windows Phone 7 won’t get a free update to the next version, which is incredible, since both iOS and Android update their OS even for older phones.

It should be pretty clear that when you want to take away market share from the biggest in the game, you must offer, at least in part, something which is better. Now, can someone tell me in what regard Windows Phone 7 is better than iOS or Android? Leave out the hardware of Nokia (and I still think that a smartphone without a front camera is pretty silly nowadays) and just focus on the operating system itself. Is there any advantage? Both iOS and Android have many more apps, and of higher quality, than WP7. iOS is closed just like Windows Phone 7, while Android is easier to hack and play with. Both iOS and Android allow C++ to be compiled, while WP7 doesn’t.

Metro and Windows 8

I’m still calling it Metro, but what is it called now? Microsoft lost the brand to a very famous European wholesale chain store. As a friend of mine said: “I would fire the whole marketing team if they can’t even come up with a brand name which is not already in use”. And not only is it used, it’s used by a very big chain. It’s like calling your new technology “Walmart”. At least google the name first! (Maybe it’s because they were forced to use Bing…)

And enough with these flashy marketing names for development technologies! There’s no reason to pretentiously call something “Silverlight”; it only makes it much more ridiculous when it ends up in the shithouse (or silvershithouse). Use dumb prosaic names like Win32, MFC, Qt! It doesn’t fuckin’ matter! What matters is the code and only the code, and after a year or more of hearing about Metro I haven’t yet seen the code! Granted, I don’t look for it, I don’t dig it up from some MSDN showcase, I don’t go to conferences, but that isn’t a good enough excuse. Just google “metro code snippet” or anything similar and it will be hard to come up with results (I’ve found a preview on MSDN which is just a collection of small samples, which I was too lazy to view in full). The code in this case is like a big mystery waiting to be unveiled…

Except that nobody cares! Apart from making fun of Metro and its name, I have yet to see anybody waiting impatiently for it or even talking about it.

Microsoft got me personally annoyed to the point where I don’t follow anything they do anymore. I will have to try Windows 8 sooner or later, just to guarantee the stability of my own product, but that’s it. I won’t use it, nor play with it. I will skip it completely. And all this is OK, because I think that everything Microsoft is doing is not here to stay: Bing, Silverlight, Windows Phone, WPF, Zune (R.I.P.), etc. And time is confirming my claims. Of course, I can’t predict the future; something might happen and change the fate of one of these products as well. But with the current management this is very unlikely.

From what I’ve read about it, the whole new UI is just jaw-droppingly stupid. It’s incredible how this trend of “simplifying UIs” got hold of so many projects. Seen what happened to Gnome 3? Seen what happened to Ubuntu when it came out with Unity? Why is Mint now so popular?

Sure, people don’t want to re-learn things they already can do, but the problem here is that there’s no damn reason to change something which is working perfectly well and replace it with something which is just worse. While humans strive for harmony and unity, these concepts can’t be applied to everything. A desktop is a productivity device: it’s efficient, fast and advanced. A tablet, on the other hand, is a device for consumption: it is ideal to read, play games and browse the web. Having one application at a time visible on a desktop is not just a bad idea, it is idiotic beyond imagination. The key point of a desktop is that it allows complex applications to be used which would be impossible to use on a tablet: Photoshop, Maya, LibreOffice, Premiere, etc. And the whole concept of tiles, which to Microsoft is so brilliant, is equally moronic. If Microsoft doesn’t drop the whole concept soon enough after the Windows 8 debacle, I will just drop Windows completely.

The complexity of window managers could be addressed much more elegantly by providing a basic mode for users who are not technologically capable.

The betrayal

Developers have been “betrayed” by Microsoft numerous times. As I mentioned in the previous post, Microsoft dished out new technologies at a pace that no one could follow, deprecating in a matter of a few years what they had just claimed to be their newest direction, hence confusing and frustrating developers who tried to keep up-to-date, all while refusing to significantly update existing and widely used technologies.

Or take the case of Windows Phone 7: the few developers who ported their code to C# now read that Windows Phone 8 will allow C++ code to be compiled. Will they be satisfied by this? Same for the users who bought Windows Phone 7 devices: they will not be able to run applications compiled for Windows Phone 8. Well, at least they got the tiles…

Losing the ground

The one thing, apart from intrinsic quality, which differentiates one OS from another is the number of applications which run on it. But the quality of an OS increases once there’s enough interest in it, and that interest is again a result of the applications which run on it. Simple, right? While Microsoft knows this rule, it did everything it could to annoy developers. Microsoft tried to bind developers to Windows not by pleasing them, but by dishing out ugly technologies which run only on Windows and using its market share to force developers to use them.

Developers, like anyone else, guard their own interests. Many lost faith in Microsoft completely and started looking for safer havens. This is surely true for other kinds of experts too, although I can speak only for my own kind.

For instance, how did Microsoft lose its IE market share? I can’t even start judging IE as a product, apart from its history of poor security, its history of ignoring standards (making life hell for web developers) and its appalling plugin technology. We’re talking about a product which in 2012 considers clicking on a URL such an important event that it signals it by emitting a click sound. IE lost its market share by being an inferior product. But do you think that users with no technical ability would’ve downloaded and installed Firefox on their own? No, it’s because more technical people advised them to do so. I did it many times. And this is true for many products which make a name for themselves among technical people and from there reach the masses. By the way, I consider this the best path for a product, because it means it stands on solid ground.

And finally Valve is starting to sell games on Linux. It can’t be stressed enough how important this is, because if this works out, and I can’t see why it shouldn’t, it will change everything. If Microsoft loses the game battle to Linux, then they will lose the OS battle. I think this could be the battle of Stalingrad for Microsoft, because once there are enough games on Linux, there’s no end to the ground Microsoft can lose. At that point Valve could even come out with its own console and compete against the Xbox. And since the gaming industry is so powerful, it would mean an overwhelming injection of cash and interest into Linux, which everybody involved in that OS could benefit from. Of course, I’m speculating here, but does Microsoft understand the potential?

I don’t think management does. They are hopping from one technology to another: WinForms, no, WPF, no, Silverlight, no, Metro (replace with the new, still unknown name); C#, no, HTML5+JS. The problem, in the end, is that if as a CEO you don’t know what you are dealing with, you can’t make informed decisions and you will surround yourself with people you can’t evaluate technically. Your decisions will then only be based upon appearance: the flashy name, how pretentious the concept sounds or how many millions are spent on marketing. A technically capable CEO is not a guarantee of success, but an incapable one is a recipe for failure. Remember what the former CEO of Pepsi did to Apple? Look at what Elop is doing to Nokia, or Ballmer to Microsoft.

The biggest software delusions of the last decade

… or how Microsoft is trying to lose its dominant position.

It’s not only about Microsoft, of course. Other big companies have made mistakes, but Microsoft is surely the company which has made the most of them in the last ten years. Surely it’s because they can afford it: others couldn’t make that many without filing for bankruptcy.

Managed development

This is probably the root of most of the dumb decisions. When Java came out, it was appealing to many. Microsoft, already a follower in its decisions at that time, started its own .NET development. .NET itself wasn’t a bad idea. At the time I thought it was going to be a part of the ecosystem alongside native applications, replacing the obsolete and buggy Visual Basic 6.

The reality we can see nowadays is that Microsoft wants their managed technology to take over and become the preferred solution for Windows development. From what I could grasp reading some articles about Windows 8, they are interested in forcing desktop developers to write applications that can easily be run on, or ported to, tablets and phones.

But does this infatuation with managed development make sense? To answer this question, it is first necessary to open a parenthesis.

The big innovator of the last decade has been Apple, and not because Apple is so smart, but because the others have been clumsy and dumb. I’m talking from a technology perspective here, not from a business/marketing point of view. Apple is obviously very good at marketing, but it has also had a passion for its products. In my opinion, the CEO of a big IT company should be able to tell the difference between a computer and a toaster. So, yes, this cuts out Ballmer.

I was once convinced into buying a Zune MP3 player. It was quite expensive (99 euros) compared to my previous MP3 players. After trying it out, I discovered it didn’t allow me to play tunes based on the directory they were stored in. I could only play them based on their tags (artist, album, etc.). Microsoft seriously expected me to now tag all my tunes? Years before, I had ripped many of my CDs without filling out the tags. Thus, on their player my music was interrupted by my Swedish lessons! On top of that, it wasn’t even a standard USB memory device; it had its own drivers. Let’s just say it’s the worst MP3 player I have ever had. Afterwards I bought a 30-euro Philips player and lived happily ever after. Why did I write this? Because it says a great deal about the care which goes into products. Which in the case above is zero. How is it possible that no one in the process raised his hand and said “hey, but it’s missing this and that”? It is a great indicator of how certain things are reviewed at Microsoft.

But wait. You could say that the iPod (which I have never used, by the way) has the same characteristics and lacks this functionality as well. First off, the iPod targets a certain audience and is practically bundled with its iTunes store. This argument can be reduced to: if I had wanted an iPod, I would have bought one. And that’s the first big problem of Microsoft: it can’t come up with ideas of its own and doesn’t understand why people prefer the original to the copy. Apple is far from representing perfection in its products, but what is more imperfect than a mere imitation without any advantages?

This was quite a huge parenthesis, but you’ll see that I’ll somehow manage to pull the strings together. And if I fail, hey, I can always do some marketing to compensate.

The point of all this is that Apple has been the technology leader of the last ten years. And which are the leading technologies produced by Apple? The iPhone, iPad and iPod Touch, which on the software side means iOS.

iOS is a mix of C, C++ and Objective-C. Developers write their applications for iOS in Objective-C or through a layer on top of it. Objective-C is basically C with a compiler front-end which allows embedded Smalltalk-style syntax. Thus, Apple is dominating the market with a programming language whose roots are in the 70s.

Did that create any sort of barrier or limitation for them? It seems not.

Clearly the technological advantages of managed development do not show in the results for the user, since hardly anyone can argue that the Windows Phone 7 experience is nicer and more appealing than that of an iPhone.

Which means that the advantages have to be on the development side, if they can’t be found in the results (more on that later).

Is it easier and more convenient for a developer to use .NET instead of, say, native C++ or Objective-C? If he is just learning to program and doesn’t understand the concept of a pointer, it might be, although even that isn’t guaranteed. But even if it is, it is not easier or more convenient for a veteran.

Let’s take, for instance, a company which has developed a nice voice recognition library in C++. After 10 years it has become an advanced product, and it has been decided to make it available for embedded devices. It is quite easily ported to iOS or Android in just a few weeks, because both allow native C++ code to be compiled. Not so for Windows Phone 7. Why should the company invest money into rewriting their library for a device which has only about 7% of the market share? Unfortunately, not all companies are as eager to lose money as Microsoft.

Google made the same mistake with Android, but they gave in almost immediately when developers demanded that native code be compilable, and now they’ve got something which doesn’t make much sense: an official Java API, plus native modules with a native API which is minimal compared to the Java one. It would have made more sense to offer a C/C++ API directly and let other technologies be built on top, of course. Google, nonetheless, seems much less stubborn than Microsoft.

So managed isn’t more convenient for companies or developers which already have a product and only need to port it. But what about those who are starting their product only now: is it convenient for them?

The big advantage of Java, which made it so appealing in its day, was its multiplatform capability. But plain C/C++ is multiplatform too. What a language needs to be multiplatform is only the API. There couldn’t be any better example of C++ being multiplatform than the Qt framework. And what is less multiplatform than a technology which is intended to run only on Microsoft products? A great deal of code can be ported between iOS and Android. This doesn’t apply to Windows Phone 7. So, even for brand new products it’s highly inconvenient to use .NET, given that it will preclude porting the code to other devices.

Uhm, so it doesn’t show in the results, and it’s a bad investment. What about the inherent technological advantages? There are some pros. It’s sometimes easier to debug managed applications and it’s way easier to analyze them. Also, most importantly, they are compiled just once for different devices. One more advantage which comes to mind is that they allow reflection. But dynamism isn’t an advantage inherent to managed languages, as Objective-C, Qt and, lastly, my article about Dynamic C++ prove.
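To give an idea of what I mean, here’s a minimal sketch of reflection-style dynamism in plain native C++ with Qt (it works on any QObject-derived instance):

    #include <QObject>
    #include <QMetaObject>
    #include <QDebug>

    // Runtime introspection without a VM: type name, property access
    // and method invocation, all by name.
    void inspect(QObject *obj)
    {
        qDebug() << obj->metaObject()->className();    // runtime type name
        qDebug() << obj->property("objectName");       // property lookup by name
        QMetaObject::invokeMethod(obj, "deleteLater"); // invoke a slot by name
    }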

The first three advantages come at a cost. Debugging managed applications isn’t always easier. It’s easier if the problem is in the application itself; it becomes a nightmare when the problem is inside the framework. When that’s the case, the complexity becomes much bigger than when debugging native applications. A friend of mine was affected by the large object heap problem. And I haven’t really understood whether the problem has been addressed in .NET 4 or not. Nor do I care, actually. But in that thread Connor Douglas wrote on 16/08/2011:

“This problem has caused me serval sleepless nights and is currently delaying a project from going into production. I don’t understand why microscoft will not look at this problem. I am dealing with heavy image processing application with large arrays.

The application is meant to run periods of years without being restarted.

Very disappointed to find out that this is an issue so late in our development cylce!”

Please note that the problem was reported on 18/12/2009. Two years had passed.

From my experience I can only say that for big projects it’s never a good idea to delegate complexity to others without the possibility to intervene directly. Every managed language (especially if the VM is not open-source) makes the developer completely dependent on the owner of the managed technology. What can the developer above do other than knock at Microsoft’s door and demand a fix? It’s not like he can choose another .NET framework or patch the framework himself.

It is indeed easier to analyze .NET applications. It’s also very easy to reverse engineer them, as I showed years ago in my articles about .NET reversing (part 1, part 2). Thanks to the attributes of managed languages themselves and to the amount of metadata and type information, .NET applications are de facto open-source. Anyone can take .NET Reflector and obtain the original source code from any .NET assembly. If anyone thinks protections will prevent this, please read the two articles I linked above. It’s ironic that this is what the No. 1 anti open-source company in the world wants: that all applications should become open-source.

The last argument which I often hear used in favor of managed applications is ‘security’. It’s true that a buffer overflow can’t happen in a managed application, unless of course it happens in the VM itself. But I can probably safely say that 95% of buffer overflows in history were caused by unsafe string functions. The fact that C featured an unsafe API can’t be used as an argument in favor of managed languages. And if we consider the remaining risk in native applications, the solution is to tighten the security of processes and hardware. We have seen many new things during the last 10 years: DEP, ASLR, stack cookies, SafeSEH. Writing a buffer overflow exploit on Windows 7 x64 is already anything but trivial. And much more can be achieved without invoking managed technologies.

Garbage Collection

While this may seem bound to managed and scripting languages, it isn’t. Some native languages have garbage collectors as well, and it was the big trend in the early 2000s. Garbage collection makes a lot of sense in scripting languages, but it should be confined there. I fully made up my mind about this topic years ago, and it boils down to two very simple conclusions.

1) A garbage collector doesn’t make sense as long as every memory leak is smaller than the memory wasted by a garbage collector.

2) It’s bad for shaping the mentality of developers. Memory is a resource just like a file or a socket. Would you expect someone else to close a file you opened?

The second point is, in my view, self-evident, and the first one is easy to demonstrate. Just consider the large object heap discussed in the previous paragraph and this quotation from the related article:

“You’d have thought that memory leaks were a thing of the past now that we use .NET. True, but we can still hit problems. We can, for example, prevent memory from being recycled if we inadvertently hold references to objects that we are no longer using.”

Which actually would be a leak. Just because the framework will free the memory once the application terminates doesn’t mean it’s not a leak. Even when one leaks memory in C, the operating system will free the leaked memory once the application terminates. The only advantage here is that the garbage collector doesn’t allow incremental leaks: a pointer in C can be reused several times, leaking memory over and over, which of course can’t happen with a garbage collector.
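A minimal sketch of the incremental kind in C:

    #include <stdlib.h>

    void process(void)
    {
        char *buf = NULL;
        for (int i = 0; i < 1000; i++) {
            buf = malloc(4096); /* reassigned at every iteration without
                                   free(): each pass orphans another 4 KB */
        }
        free(buf); /* only the very last allocation is ever freed */
    }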

But an application without a GC will hardly waste the amount of memory a GC does. There are two kinds of leaks in an application without a GC: those which occur rarely and those which occur often. Only those which occur rarely, or just once, and leak only a small amount of memory will go unnoticed. All the others will be noticed and debugged by the programmer. The small and rare leaks simply waste less memory than a GC and are thus, from a practical point of view, preferable.

Moreover, the GC in .NET could have been implemented much better by making it optional or by giving the developer the ability to delete objects, instead of forcing dereferences and putting silly Dispose() methods here and there.
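Just to illustrate the Dispose() dance (a trivial sketch):

    using System.IO;

    // The GC will reclaim the object's memory eventually, but the file
    // handle must be released deterministically, hence the explicit pattern:
    using (var fs = new FileStream("data.bin", FileMode.Open))
    {
        // ... read ...
    } // Dispose() runs here; the memory is freed whenever the GC decides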

XAML

While XML is an ideal solution to represent a hierarchy like a UI, things have gotten out of hand with XAML. First thing: it’s the ugliest thing I have ever seen (if we exclude Italian politics).
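Here’s a taste of the markup I mean, a plain button with a gradient background (any similarly trivial WPF snippet makes the point):

    <Button Content="Click me" Width="120" Height="28">
      <Button.Background>
        <LinearGradientBrush StartPoint="0,0" EndPoint="0,1">
          <GradientStop Color="White" Offset="0.0"/>
          <GradientStop Color="LightGray" Offset="1.0"/>
        </LinearGradientBrush>
      </Button.Background>
    </Button>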

And this is an extremely simple snippet. How does one usually modify complex snippets, or do things which can’t be achieved through the designer? In a way which is in line with the .NET mentality. In fact, one big problem of the .NET framework is that its API is most of the time incoherent. Thus, it’s impossible for a programmer to just guess the correct method to use. Here’s a simple example:
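Converting between an int and a string, say (a minimal sketch):

    // string -> int: a static method on the type itself
    int i = int.Parse("1024");

    // int -> string: an instance method, shaped completely differently
    string s = i.ToString();

    // and a third way on yet another class, just for fun
    int j = Convert.ToInt32("1024");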

If you can’t even make a simple int/string conversion coherent in a framework, then I’d say it’s a problem. Let’s take the same code in Qt:
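    // both directions live on QString, symmetric and guessable
    QString s = QString::number(1024); // int -> string
    int i = s.toInt();                 // string -> int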

I can assure you that I didn’t need to look up anything the first time I used QString in Qt. Not so for C#: nobody can just guess the methods.

The developer in this case has to search for a snippet on the internet, which could be called copy-and-paste development. It’s the same with XAML of course, unless you rely entirely on a designer; but as with HTML pages, I rarely see complex ones done with a designer, so one has to work with the raw XML.

Forcing programmers to be confronted with XML to make their UIs is the worst idea ever. This has its roots in the typical university way of thinking. Microsoft made big announcements that with XAML programmers finally no longer had to focus on UIs, which could now be left to the graphical people.

What a great idea! I wonder what kind of application is so completely separated between its UI and code that the graphical people can just proceed doing their work without worries. When I try to visualize such an application in my mind, I see either an animated presentation which doesn’t do anything or a dialog box with three buttons and an image. Once I start to think about anything more complex than that, I strangely can no longer see the separation between UI and code.

UIs are made of complex graphical components, often custom components. Who needs someone meddling with the UI just to rearrange some buttons or add some graphical elements? Does this really make it worthwhile to talk about a separation of UI and code?

And anyway, even admitting there could be a separation between the two, I really wonder how many companies have dedicated team members just for UIs. Small companies do exist. And I know this may come as a surprise to you, Microsoft, but even individual developers do exist. Amazing, isn’t it?

A typically academic idea which looks good on paper. For three seconds.

Silverlight

I don’t know whether it is or will be much used. I have heard many times of Microsoft pushing it by redoing important websites for free in Silverlight.

As much as I don’t like Flash, I would never ever invest in Silverlight; much rather in Flash. First off, Flash is much more used than Silverlight, runs on basically every operating system and will surely keep doing so in the future, unless Microsoft really decides to buy Adobe (which, by the way, should be stopped by the antitrust authorities, who seem only interested in knowing whether Microsoft is imposing Internet Explorer on Windows users).

The new Flash is in no way behind Silverlight in terms of features for its purpose. Also, this is typical of Microsoft’s behavior lately: there’s no place for others on the market; they need to be everywhere themselves. Not that competition itself is bad for Flash, quite the contrary, but it should be left to others!

Why? Because when a company bases its business on a technology like that, it really earns from the product. So it must ensure customers are satisfied and that it works on every platform just as advertised.

I don’t believe that Microsoft really cares about the revenue generated by Silverlight itself. I think it is much more important to them to bind programmers and applications to their core business, which is operating systems.

I believe that in general frameworks should be developed by third parties for these exact reasons, but this is even more true for something which really should work everywhere, like a web-embedded technology.

Windows Phone 7

Windows Phone 7 is highly recommended to anyone who wishes to start developing.

On an iPhone.

Yes, precisely. After two hours spent wrestling Silverlight/XAML into displaying a trivial layout on a Windows Phone, any normal programmer will immediately buy an iPhone. Even Objective-C’s odd Smalltalk-style syntax doesn’t look so bad now, does it? Quite the contrary! It seems highly reasonable and elegant. How could it ever have looked bad before?

Apart from that, I don’t know whether they have improved things lately, but at the time it came out it lacked APIs for practically everything, even the most trivial things like SQLite support. And of course that can’t be added manually, since it can’t run native modules, as discussed before.

It doesn’t seem a highly intelligent move to release a smartphone after everybody else, years late, and then bring out something so immature. I honestly hope that the Windows Phone crashes and burns. Not only because it would teach Microsoft a lesson in humility (if they can actually learn one), but also because it would stop the delusion of forcing desktop developers into rethinking everything for the mobile market, which is the latest Microsoft trend judging by the articles about Windows 8 I have skimmed through in recent weeks.

For now it’s unclear how it will end. Although Windows Phone has already been declared a failure, Microsoft has launched a partnership with Nokia and will invest even more in it. As usual: if the product is not bought, then it can only be that we haven’t spent enough on it. Let’s do some marketing!

Cloud computing

This word has acquired so many meanings that if Hegel were still alive he would use it too.

Which also means that it no longer makes sense to use it, except for marketing purposes, as Apple just did with its iCloud. Which is actually just a service like Dropbox with a fancy name.

The range of meanings the word has acquired includes basic server technology, synchronization, distributed computing and web-based applications (which is probably the most authentic meaning).

If web-based applications are meant, then the idea is clearly stupid. Having every application on a remote computer is not only the worst thing for privacy, but is also slow, costly (for the company), inefficient and a sucky user experience.

Many have written about this topic and I am certainly not the one who can shed additional light on it, but I mention it anyway for completeness.

Simplicity

This paradigm has just got to go.

I have installed Ubuntu on the computers of some extremely unskilled people. And they use it. They browse the web, check their email, watch movies, write documents with LibreOffice and even move files to and from memory sticks.

If these people can do it, then I can probably train a penguin to use Ubuntu.

Granted, I’d probably need to find a larger keyboard for its flippers; but that’s all.

There’s just no more room for simplifying without removing functionality. On the other hand, Microsoft would simplify my life a great deal if they finally decided to implement search functionality in the list of installed services (and that’s not the only place where search functionality is lacking), or by introducing a file search that actually serves any kind of purpose. That would simplify _my_ life a lot, thank you. And I’m pretty sure that after 20 years these improvements could be made safely, without the risk of juggling too many things at once. But I might be wrong. Who knows…

Bing, MSN Live, failed Yahoo acquisition

I can’t put it better than Charlie Brooker once did (please read with a British accent):

“I suppose, you know, theoretically you could watch the royal wedding on ITV not the BBC, just like you could search for things on Bing instead of Google, or eat Daddy’s ketchup instead of Heinz. It’s possible, but it’s not _normal_. It borders on perversion. You could watch it on Sky News but that’s like searching Hellman’s Ketchup on Yahoo.”

If you can’t get something right at once, something which was lame from its very conception, just give up. Sometimes in life giving up is very healthy for shaping one’s character. Behaving like a pestering child who stomps on the ground and screams “BUT I WANT IT! I WANT IT!” doesn’t seem to me a winning strategy.

Social networks (Facebook, Google+, Wave, MySpace etc.)

Yes, I know that Facebook is an immense business right now. But I have always seen it as a bubble and I hope for everybody’s sake that it really is one. Maybe one day humanity will realize that putting sensitive information in the hands of a corporation is not such a smart idea. Or maybe not. Anyway, the topic deserves to be on the list, because an infinite amount of money has been invested (by others) into social networks with no results.

Conclusions

As we have seen, other companies make mistakes, but no one as many as Microsoft. A company behaving like a lumbering giant who is baffled by others running past him and who breaks into a run to catch them, without noticing that his shoelaces have been tied together.

More money, more marketing. Never passion or care. It always has to be the latest toy. Then, as soon as it has been played with for two seconds, it is thrown to the ground and the focus shifts to the next toy.

What better example of this behavior than Skype? Was it really necessary to buy it? Wouldn’t a partnership have sufficed? Won’t it, more realistically, prevent smarter acquisitions in the future, for lack of money or through antitrust intervention?

And can developers really follow Microsoft?

.NET with WinForms: big change, lots of code needs to be rewritten. But wait, what is WPF? XAML now needs to be used for the UIs? Ah. And what’s Silverlight? Should I use WPF or Silverlight? What are the differences? And all the WinForms code? Obsolete??… HEY, WHAT IS METRO?

By the way, is it just me or does Metro Apps sound a lot like Metro Sexual? Sorry, but South Park burned that brand for me.

Anyway, it is clear that everything from Microsoft comes out touched by too many people, too fast and without the dedication and care which in my opinion are essential to great products.

Don’t get me wrong, I’m not saying that Windows 8 will be the end of Microsoft. Of course not. Probably it will be disliked just like Vista, and afterwards things will be improved again, as with Windows 7. The problem is that Microsoft is losing time. A lot of time. Sooner or later operating systems such as OS X and Linux will completely catch up on what really matters in a desktop, which, apart from the OS’s own features, is the applications that run on it.

I wonder when it will be possible to look forward to a new release of Windows hoping for improvements, instead of hoping that it won’t be worse than the current version.

Moreover, Windows could be improved endlessly without re-inventing the wheel every two years. If the decisions were up to me, I would work hard on micro-improvements. I would introduce new sets of native APIs alongside Win32, gradually, with care, and try to give them strong coherency. I would try to introduce benefits which could be enjoyed even by applications written 15 years ago. The beauty should lie in the elegance of finding ingenious solutions for extending what is already there, not in doing tabula rasa every time. I would make developers feel at home, feel that their time and code are highly valued, instead of making them feel like their creations are always obsolete compared to my brand-new technology which, by the way, nobody uses. I would also like them to believe that I wouldn’t meddle with their business once it becomes interesting enough, be it virtual machines, web applications, search engines, browsers, VOIP, etc. Just name one thing Microsoft hasn’t been involved in during the last ten years.

I can’t say how much of its dominant position Microsoft will lose in the years ahead. It is certainly working very hard on it, and hard work sometimes pays off.