Ctor conflicts

Perhaps the content of this post is trivial and widely known(?), but I just spent some time fixing a bug related to the following C++ behavior.

Let’s take a look at this code snippet:
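(The original snippet isn’t reproduced here, so what follows is a minimal reconstruction of the scenario, with hypothetical names: two translation units, each declaring its own file-local struct with the same name and with the ctor defined inside the class.)

    // file1.cpp -- hypothetical reconstruction
    #include <cstdio>

    struct Fruit {
        Fruit() { printf("apple\n"); } // defined inside the class, hence implicitly inline
    };

    void funcA() { Fruit f; }

    // file2.cpp -- same struct name, different ctor body
    #include <cstdio>

    struct Fruit {
        Fruit() { printf("orange\n"); } // mangles to the same symbol as the ctor above
    };

    void funcB() { Fruit f; }

    // main.cpp
    void funcA();
    void funcB();

    int main()
    {
        funcA();
        funcB();
        return 0;
    }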

The output of the code above is:
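(With the reconstruction above, assuming file1.cpp is linked first:)

    apple
    apple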

Whether we compile it with VC++ or g++, the result is the same.

The problem is that although the struct or class is declared locally to the translation unit, the name of the constructor is considered a global symbol. So while the allocation size of the struct or class is correct, the constructor being invoked is always the first one encountered by the linker, which in this case is the one which prints ‘apple’.

The problem here is that the compiler doesn’t warn the user in any way that the wrong constructor is being called and in a large project with hundreds of files it may very well be that two constructors collide.

Since namespaces are part of the name of the symbol, the code above can be fixed by adding a namespace:
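(Again a sketch with an arbitrary namespace name: since the namespace becomes part of the mangled ctor name, the two symbols no longer collide.)

    // file2.cpp -- fixed
    #include <cstdio>

    namespace second {

    struct Fruit {
        Fruit() { printf("orange\n"); }
    };

    } // namespace second

    void funcB() { second::Fruit f; }

An anonymous namespace works just as well, since it makes the name unique to the translation unit.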

Now the correct constructor will be called.

I wrote a small (dumb) Python script to detect possible ctor conflicts. It just looks for struct or class declarations and reports duplicate symbol names. It’s far from perfect.

In my opinion this could be handled better on the compiler side, at least by giving a warning.

ADDENDUM: Myria (@Myriachan) explained the compiler internals behind this one on Twitter:

I’m just surprised that it doesn’t cause a “duplicate symbol” linker error. Symbol flagged “weak” from being inline, maybe? […] Member functions defined inside classes like that are automatically “inline” by C++ standard. […] The “inline” keyword has two meanings: hint to compiler that inlining machine code may be wise, and making symbol weak. […] Regardless of whether the compiler chooses to inline machine code within calling functions, the weak symbol part still applies. […] It is as if all inline functions (including functions defined inside classes) have __declspec(selectany) on them, in MSVC terms. […] Without this behavior, if you ever had a class in a header with functions defined, the compiler would either have to always inline the machine code, or you’d have to use #ifdef nonsense to avoid more than one .cpp defining the function.

The explanation is the correct one. And yes, if we define the ctor outside of the class, the linker does generate a duplicate-symbol error.

The logic mismatch here is that file-local structures do exist in C, while file-local ctors don’t exist in C++. So the correct struct is allocated, but the wrong ctor is called. Also, while the symbol is weak for the reasons explained by Myria, the linker could still give an error if the ctor code doesn’t match across files.

So the rule here could be: if you have local classes, avoid defining the ctor inside the class. If you already have a conflict, as I did, and don’t want to change the code, you can fix it with a namespace as shown above.

The biggest software delusions of the last decade

… or how Microsoft is trying to lose its dominant position.

It’s not only about Microsoft, of course. Other big companies have made mistakes, but Microsoft is surely the company which has made the most of them in the last ten years. Surely it’s because they can afford it: others couldn’t make that many without filing for bankruptcy.

Managed development

This is probably the root of most of the dumb decisions. When Java came out it was appealing to many. Microsoft was already a follower in its decisions at that time and started its .NET development. .NET itself wasn’t a bad idea. At the time I thought it was going to be part of the ecosystem alongside native applications, replacing the obsolete and buggy Visual Basic 6.

Nowadays the reality is that Microsoft wants their managed technology to take over and become the preferred solution for Windows. From what I could grasp reading some articles about Windows 8, they are interested in forcing desktop developers to write applications that can easily be run on, or ported to, tablets and phones.

But does this infatuation with managed development make sense? To answer this question, it is first necessary to open a parenthesis.

The big innovator of the last decade has been Apple, and not because Apple is so smart, but because the others have been clumsy and dumb. I’m talking from a technology perspective here, not from a business/marketing point of view. Apple is obviously very good at marketing, but it has also had a passion for its products. In my opinion, someone who is the CEO of a big IT company should be able to tell the difference between a computer and a toaster. So, yes, this rules out Ballmer.

I was once convinced into buying a Zune MP3 player. It was quite expensive (99 euros) compared to my previous MP3 players. After trying it out, I discovered it didn’t allow me to play tunes based on the directory they were stored in. I could only play them based on their tags (artist, album etc.). Did Microsoft seriously expect me to now tag all my tunes? Years before, I had ripped many of my CDs without filling out the tags. Thus, on their player my music was interrupted by my Swedish lessons! On top of that, it wasn’t even a standard USB memory device; it had its own drivers. Let’s just say it’s the worst MP3 player I have ever had. Afterwards I bought a 30 euro Philips player and have lived happily ever since. Why did I write this? Because it says a great deal about the care which goes into products. Which in the case above is zero. How is it possible that no one in the process raised his hand and said “hey, but it’s missing this and that”? It is a great indicator of how certain things are reviewed at Microsoft.

But wait. You could say that the iPod (which I have never used, btw) has the same characteristics and lacks this functionality as well. First off, the iPod targets a certain audience and is practically bundled with its iTunes store. This argument can be reduced to: if I had wanted an iPod, I would have bought one. And that’s the first big problem of Microsoft: it can’t come up with ideas of its own and doesn’t understand why people prefer the original to the copy. Apple is far from representing perfection in its products, but what is more imperfect than a mere imitation without any advantages?

This was quite a huge parenthesis but you’ll see that I’ll manage somehow to pull the strings together. And if I fail, hey, I can always do some marketing to compensate.

The point of all this is that Apple has been the technology leader of the last ten years. And what are the leading technologies produced by Apple? The iPhone, iPad and iPod Touch, which on the software side means iOS.

iOS is a mix of C, C++ and Obj-C. Developers write their applications for iOS in Obj-C or through a layer on top of it. Objective-C is basically C with a compiler front-end which allows the embedded Smalltalk-style syntax. Thus, Apple is dominating the market with a programming language which comes from the 70s.

Did that create any sort of barrier or limitation for them? It seems not.

Clearly the technological advantages of managed development do not show in the results for the user, since hardly anyone can argue that the Windows Phone 7 experience is much nicer and more appealing than that of an iPhone.

Which means that the advantages have to be on the development side if they can’t be found in the results (more on that later).

Is it easier and more convenient for a developer to use .NET instead of, say, native C++ or Objective-C? If he is just learning to program and doesn’t understand the concept of a pointer, it might be, although even that isn’t guaranteed. But even if it is, it is not easier or more convenient for a veteran.

Let’s take, for instance, a company which has developed a nice voice recognition library in C++. After 10 years it has become an advanced product, and it has been decided to make it available for embedded devices. It can be ported quite easily to iOS or Android in just a few weeks, because both allow native C++ code to be compiled. Not so for Windows Phone 7. Why should the company invest money into rewriting their library for a device which has only about 7% of the market share? Unfortunately, not all companies are as eager to lose money as Microsoft.

Google made the same mistake with Android, but they gave in almost immediately when developers demanded that native code be compilable, and now they’ve got something which doesn’t make much sense: an official Java API, plus native modules with their own native API, although a minimal one compared to the Java API. It would of course have made more sense to offer a C/C++ API directly and let other technologies be built on top. Google, nonetheless, seems much less stubborn than Microsoft.

So managed isn’t more convenient for companies or developers which already have a product and only need to port it, but what about those who are starting their product only now? Is it convenient for them?

The big advantage of Java, which made it so appealing in its day, was its multiplatform capability. But plain C/C++ is multiplatform too. All a language needs to become multiplatform is the API. There couldn’t be a better example of C++ being multiplatform than the Qt framework. And what is less multiplatform than a technology which is intended to run only on Microsoft products? A great deal of code can be ported between iOS and Android. This doesn’t apply to Windows Phone 7. So even for brand new products it’s highly inconvenient to use .NET, given that it will preclude porting the code to other devices.

Uhm, it doesn’t show in the results and it’s a bad investment. What about the inherent technological advantages? There are some pros. It’s sometimes easier to debug managed applications and it’s way easier to analyze them. Also, most importantly, they are compiled just once for different devices. One more advantage which comes to mind is that they allow reflection. Dynamism, however, isn’t an advantage inherent to managed languages, as Objective-C, Qt and, lastly, my article about Dynamic C++ prove.

The first three advantages come at a cost. Debugging managed applications isn’t always easier. It’s easier if the problem is in the application itself; it becomes a nightmare when the problem is inside the framework. In that case, the complexity becomes much greater than when debugging native applications. A friend of mine was affected by the large object heap problem. And I haven’t really understood whether the problem has been addressed in .NET 4 or not. Nor do I care, actually. But in that thread Connor Douglas writes on 16/08/2011:

“This problem has caused me serval sleepless nights and is currently delaying a project from going into production. I don’t understand why microscoft will not look at this problem. I am dealing with heavy image processing application with large arrays.

The application is meant to run periods of years without being restarted.

Very disapointed to find out that this is an issue so late in our development cylce!”

Please note that the problem was reported on 18/12/2009. Two years have passed.

From my experience I can only say that for big projects it’s never a good idea to delegate complexity to others without the possibility of intervening directly. Every managed language (especially if the VM is not open-source) makes the developer completely dependent on the owner of the managed technology. What can the developer above do other than knock at Microsoft’s door and demand a fix? It’s not like he can choose another .NET framework or patch the framework himself.

It is indeed easier to analyze .NET applications. It’s also very easy to reverse engineer them, as I showed years ago in my articles about .NET reversing (part 1, part 2). Thanks to the attributes of managed languages themselves and the amount of metadata and type information, .NET applications are de facto open-source. Anyone can take the .NET Reflector and obtain the original source code of any .NET assembly. If anyone thinks protections will prevent this, please read the two articles I linked above. It’s ironic that this is what the N°1 anti open-source company in the world wants: that all applications should become open-source.

The last argument which I often hear used in favour of managed applications is ‘security’. It’s true that a buffer overflow can’t happen in a managed application, unless of course it happens in the VM itself. But I can probably safely say that 95% of buffer overflows in history were caused by unsafe string functions. The fact that C featured an unsafe API can’t be used as an argument in favour of managed languages. And if we consider the remaining risk in native applications, the solution is to tighten the security of processes and hardware. We have seen many new things during the last 10 years: DEP, ASLR, stack cookies, SafeSEH. Writing a buffer overflow exploit on Windows 7 x64 is already anything but trivial. And much more can be achieved without invoking managed technologies.

Garbage Collection

While this may seem bound to managed and scripting languages, it isn’t. Some native languages have garbage collectors as well, and it was the big trend of the early 2000s. Garbage collection makes a lot of sense in scripting languages, but that is where it should be confined. I fully made up my mind about this topic years ago, and it boils down to 2 very simple conclusions.

1) A garbage collector doesn’t make sense as long as every memory leak is smaller than the memory wasted by a garbage collector.

2) It’s bad for shaping the mentality of developers. Memory is a resource just like a file or a socket. Would you expect someone else to close a file you opened?

The second point is in my view self-evident, and the first one is easy to demonstrate. Just consider the large object heap discussed in the previous paragraph and this quotation from the related article:

“You’d have thought that memory leaks were a thing of the past now that we use .NET. True, but we can still hit problems. We can, for example, prevent memory from being recycled if we inadvertently hold references to objects that we are no longer using.”

Which actually would be a leak. Just because the framework will free the memory once the application terminates doesn’t mean it’s not a leak. Even when one is leaking memory in C, the operating system will free the leaked memory once the application terminates. The only advantage here is that the garbage collector doesn’t allow incremental leaks. A pointer in C can be reused several times, leaking memory over and over. With a garbage collector, of course, this can’t happen.
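(A hypothetical sketch of such an incremental leak in C:)

    #include <stdlib.h>

    static char *buffer = NULL;

    /* The same pointer is overwritten on every call: each previous
       allocation is lost, so the leak grows with every invocation. */
    void processChunk(size_t size)
    {
        buffer = (char *)malloc(size); /* the old buffer is leaked here */
        /* ... fill and use buffer ... */
    }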

But an application without a GC will hardly waste the amount of memory a GC does. There are two kinds of leaks in an application without a GC: those which occur rarely and those which occur often. Only those which occur rarely, or just once, and leak only a small amount of memory will go unnoticed. All the others will be noticed and debugged by the programmer. The small and rare leaks waste less memory than a GC and are thus, from a practical point of view, preferable.

Moreover, the GC in .NET could have been implemented much better, by making it optional or by giving the developer the ability to delete objects, instead of forcing developers to null out references and put silly Dispose() methods here and there.

XAML

While XML is an ideal solution to represent a hierarchy like a UI, things have gotten out of hand with XAML. First thing: it’s the ugliest thing I have ever seen (if we exclude Italian politics).
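(The snippet isn’t reproduced here; as a hypothetical stand-in, this is roughly what a window containing a single button looks like:)

    <Window x:Class="SampleApp.MainWindow"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="MainWindow" Height="350" Width="525">
        <Grid>
            <Button Content="Click me" Width="75" Height="23"
                    HorizontalAlignment="Left" VerticalAlignment="Top"
                    Margin="120,80,0,0" Click="Button_Click" />
        </Grid>
    </Window>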

And this is an extremely simple snippet. How does one usually modify complex snippets or do things which can’t be achieved through the designer? In a way which is in line with the .NET mentality. In fact, one big problem of the .NET framework is that its API is incoherent most of the time. Thus, it’s impossible for a programmer to just guess the correct method to use. Here’s a simple example:
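(A reconstruction from memory of the kind of incoherence I mean, in C#:)

    // Four ways to convert between int and string,
    // no two of which follow the same pattern.
    int a = Convert.ToInt32("123");    // static helper class
    int b = Int32.Parse("123");        // static method on the type
    string s = Convert.ToString(123);  // the helper class again
    string t = 123.ToString();         // an instance method this time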

If you can’t make even a simple int/string conversion coherent in a framework, then I’d say it’s a problem. Let’s take the same code in Qt:
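(A sketch of the equivalent in Qt:)

    // Both conversions hang off QString.
    QString s = QString::number(123); // int -> string
    int i = s.toInt();                // string -> int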

I can assure you that I didn’t need to look up anything the first time I used QString in Qt. Not so for C#. Nobody can just guess the methods.

The developer in this case has to search for a snippet on the internet, which could be called Copy and Paste development. It’s the same with XAML, of course. Unless you rely entirely on a designer; but, as with HTML pages, I rarely see complex ones done with a designer, so one has to work with the raw XML.

Forcing programmers to be confronted with XML to build their UIs is the worst idea ever. It has its roots in the typical university way of thinking. Microsoft made big announcements that with XAML programmers would finally no longer have to focus on UIs, which could now be left to the graphics people.

What a great idea! I wonder what kind of application is so completely separated between its UI and code that the graphics people can just proceed with their work without worries. When I try to visualize such an application in my mind, I see either an animated presentation which doesn’t do anything or a dialog box with three buttons and an image. Once I start to think about anything more complex than that, I strangely can no longer see the separation between UI and code.

UIs are made of complex graphical components, often custom ones. Who needs someone meddling with the UI just to rearrange some buttons or add some graphical elements? Does this really make it worthwhile to talk about a separation of UI and code?

And anyway, even admitting there could be a separation between the two, I really wonder how many companies have dedicated team members just for UIs. Small companies do exist. And I know this may come as a surprise to you, Microsoft, but even individual developers exist. Amazing, isn’t it?

A typical academic idea which looks good on paper. For three seconds.

Silverlight

I don’t know whether it is, or will be, much used. I have heard many times of Microsoft pushing it by re-doing important websites for free using Silverlight.

As much as I don’t like Flash, I would never ever invest in Silverlight; I’d much rather invest in Flash. First off, Flash is much more widely used than Silverlight, runs on basically every operating system, and will surely continue to do so in the future, unless Microsoft really decides to buy Adobe (which, by the way, should be stopped by the antitrust authorities, who seem only to be interested in knowing whether Microsoft is imposing Internet Explorer on Windows users).

The new Flash in no way lags behind Silverlight in terms of features, given what its purpose is. Also, this is typical of Microsoft’s behavior lately: there’s no place for others on the market, they themselves need to be everywhere. Not that competition itself is bad for Flash, quite the contrary, but it should be left to others!

Why? Because when a company bases its business on a technology like that, it really earns its money from the product. So it must ensure customers are satisfied and that the product works on every platform just as advertised.

I don’t believe that Microsoft really cares about the revenue generated by Silverlight itself. I think it is much more important to them to bind programmers and applications to their core business, which is operating systems.

I believe that in general frameworks should be developed by third parties for these exact reasons, but this is even more true for something which really should work everywhere, like a web-embedded technology.

Windows Phone 7

Windows Phone 7 is highly recommended to anyone who wishes to start developing.

On an iPhone.

Yes, precisely. After two hours spent wrestling Silverlight/XAML into displaying a trivial layout on a Windows Phone, any normal programmer will immediately buy an iPhone. Even the odd Smalltalk syntax doesn’t look so bad now, does it? Quite the contrary! It seems highly reasonable and elegant. How could it ever have looked bad before?

Apart from that, I don’t know whether they have improved things lately, but at the time it came out it lacked an API for practically everything, even the most trivial things like SQLite support. And of course that can’t be added manually, since it can’t run native modules, as discussed before.

It doesn’t seem a highly intelligent move to release a smartphone after everybody else, years late, and then bring out something so immature. I honestly hope that the Windows Phone crashes and burns. Not only because it would teach Microsoft a lesson in humility (if they can actually learn one), but also because it would stop the delusion of forcing desktop developers into rethinking everything for the mobile market, which is the latest Microsoft trend judging by the articles about Windows 8 I have skimmed through these past weeks.

For now it’s unclear how it will end. Although Windows Phone has already been declared a failure, Microsoft has launched a partnership with Nokia and will invest even more in it. As usual: if the product isn’t bought, then it can only be that we haven’t spent enough on it. Let’s do some marketing!

Cloud computing

This word has acquired so many meanings that if Hegel were still alive he would use it too.

Which also means that it no longer makes sense to use it except for marketing purposes, as Apple just did with its iCloud. Which is actually just a service like Dropbox with a fancy name.

The range of meanings the word has acquired includes basic server technology, synchronization, distributed computing and web-based applications (which is probably the most authentic meaning).

If web-based applications are meant, then the idea is clearly stupid. Having every application on a remote computer is not only the worst thing for privacy, but is also slow, costly (for the company), inefficient and a sucky user experience.

Many have written about this topic and I’m certainly not the one who can shed additional light on it, but I mentioned it anyway just for completeness.

Simplicity

This paradigm has just got to go.

I have installed Ubuntu on the computers of some extremely unskilled people. And they use it. They browse the web, check their email, watch movies, write documents with LibreOffice and even move files to and from memory sticks.

If these people can do it, then I can probably train a penguin to use Ubuntu.

Granted, I’d probably need to find a larger keyboard for its fins; but that’s all.

There’s just no more room for simplifying without removing functionality. On the other hand, Microsoft would simplify my life a great deal if they finally decided to implement a search function in the list of installed services (and that’s not the only place where search is lacking). Or by introducing a file search that actually serves some kind of purpose. That would simplify _my_ life a lot, thank you. And I’m pretty sure that after 20 years these improvements could be made safely, without the risk of juggling too many things at once. But I might be wrong. Who knows…

Bing, MSN Live, failed Yahoo acquisition

I can’t put it better than Charlie Brooker once did (please read with a British accent):

“I suppose, you know, theoretically you could watch the royal wedding on ITV not the BBC, just like you could search for things on Bing instead of Google, or eat Daddy’s ketchup instead of Heinz. It’s possible, but it’s not _normal_. It borders on perversion. You could watch it on Sky News but that’s like searching Hellman’s Ketchup on Yahoo.”

If you don’t get something right at once which was lame from its conception, just give up. Sometimes in life giving up is very healthy; it shapes one’s character. Behaving like a pestering child who stomps on the ground and screams “BUT I WANT IT! I WANT IT!” doesn’t seem like a winning strategy to me.

Social networks (Facebook, Google+, Wave, MySpace etc.)

Yes, I know that Facebook is an immense business right now. But I have always seen it as a bubble, and I hope for everybody’s sake that it really is one. Maybe one day humanity will realize that putting sensitive information in the hands of a corporation is not such a smart idea. Or maybe not. Anyway, the topic deserves to be on the list, because an infinite amount of money has been invested (by others) into social networks with no results.

Conclusions

As we have seen, other companies make mistakes, but no one makes as many as Microsoft. A company behaving like a giant who is baffled by others running past him, and who starts his own running motion in an attempt to catch them without noticing that his shoelaces have been tied together.

More money, more marketing. Never passion or care. It always has to be the latest toy. Then, as soon as it has been played with for two seconds, it is thrown to the ground and the focus shifts to the next toy.

What better example of this behavior than Skype? Was it really necessary to buy it? Wouldn’t a partnership have sufficed? Won’t it, more realistically, prevent smarter acquisitions in the future, for lack of money or through intervention of the antitrust authorities?

And can developers really follow Microsoft?

.NET with WinForms: big change, a lot of code needs to be rewritten. But wait, what is WPF? XAML now needs to be used for the UIs? Ah. And what’s Silverlight? Should I use WPF or Silverlight? What are the differences? And all the WinForms code? Obsolete??… HEY, WHAT IS METRO?

By the way, is it just me or does “Metro Apps” sound a lot like “metrosexual”? Sorry, but South Park burned that brand for me.

Anyway, it is clear that everything from Microsoft comes out touched by too many people, too fast and without the necessary dedication and care which, in my opinion, are essential to great products.

Don’t get me wrong, I’m not saying that Windows 8 will be the end of Microsoft. Of course not. Probably it will be disliked just like Vista, and afterwards things will be improved again as with Windows 7. The problem is that Microsoft is losing time. A lot of time. Sooner or later operating systems such as OS X and Linux will completely catch up with what really matters in a desktop, which, apart from its own features, is the applications which run on it.

I wonder when it will be possible to look forward to a new release of Windows hoping for improvements, instead of hoping that it won’t be worse than the current version.

Moreover, Windows could be improved to an endless extent without re-inventing the wheel every 2 years. If the decisions were up to me, I would work hard on micro-improvements. Introduce new sets of native APIs alongside Win32. And I’d do it gradually, with care, trying to give them a strong coherency. I would try to introduce benefits which could be enjoyed even by applications written 15 years ago. The beauty should lie in the elegance of finding ingenious solutions for extending what is already there, not in doing tabula rasa every time. I would make developers feel at home, feel that their time and code are highly valued, instead of making them feel like their creations are always obsolete compared to my brand new technology which, by the way, nobody uses. I would also like them to believe that I wouldn’t meddle with their business once it becomes interesting enough, be it virtual machines, web applications, search engines, browsers, VoIP etc. Just name one thing Microsoft hasn’t been involved in during the last ten years.

I can’t say how much of its dominant position Microsoft will lose in the years ahead. Certainly it is working very hard on it, and hard work sometimes pays off.

Software Theft FAIL

… Or why stealing software is stupid (and wrong). A small guide to detect software theft for those who are not reverse engineers.

Under my previous post, the user Xylitol reported a web page (hxyp://martik-scorp.blogspot.com/2010/12/show-me-loaded-drivers.html) by someone called “Martik Panosian” claiming that my driver list utility was his own.

Now, the utility is very small and anybody who can write a bit of code can write a similar one in an hour. Still, stealing is not nice. 🙂

Since I can’t let this ignominious theft go unpunished :P, I’ll try at least to make this post stretch beyond the specific case and show people who don’t know much about this sort of thing how they can easily recognize whether software of theirs has been stolen.

In this specific case, the stolen software has had its basic appearance changed (title, icon, version information). It can easily be explored with a tool such as CFF Explorer. In this case CFF Explorer also identifies the stolen software as packed with PECompact. If CFF Explorer fails to recognize the signature, it’s a good idea to use a more up-to-date identification program like PEiD.

However, packing an application to conceal its code is a very dumb idea. Why? Because packers are not meant to really conceal the code, but to bind themselves to the application. What is usually difficult to recover in a packed application is its entry point, the IAT and a few other things. But the great majority of the code is usually recoverable through a simple memory dump. Just select the running application in a utility such as Task Explorer, right click to display the context menu and click on “Dump PE”.

Now the code can be compared. There are many ways to compare the code of two binaries. One of the easiest is to open them with IDA Pro and use a binary diffing utility such as PatchDiff2. If the reader is doing this as a hobby and can’t afford a commercial license of IDA Pro, the freeware version will do as well.

Just disassemble both files with IDA Pro and save one of the idbs. Then click on “Edit->Plugins->PatchDiff2” and select the saved idb.

Let’s look at a screenshot of the results:


As you can see, not only were the great majority of the functions matched, but they also match at the same addresses, which proves beyond doubt that the two are, in fact, the same application.

It’s important to remember that a limited number of matches is normal, because library functions or some basic ones may match among different applications.

A comparison of two applications can even be performed manually with IDA Pro, just by looking at the code, but using a diffing utility is in most cases the easiest solution.

Qt’s GUI Thread

If you’re a Qt developer, you are surely aware of the fact that you can only display GUI elements and access them from the main thread. As far as I know, this limitation is mostly bound to the limitations of X, and it can’t be excluded that multithreading support for GUIs will be added at some point.

This limitation never caused me any trouble, since the signals & slots mechanism is thread-safe and communication between threads and GUI elements can be achieved through it. However, yesterday I needed to show a message box in a method and, in case the code was not executing in the main thread, show a native Win32 MessageBox instead of a QMessageBox (of course, I can only do that on Windows; on other platforms, when I’m not in the main thread, I won’t show anything).

Anyway, here’s a simple method to establish whether we’re running in the GUI thread:
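(A minimal sketch of such a method, assuming Qt 4 and a running QCoreApplication:)

    #include <QCoreApplication>
    #include <QThread>

    // The GUI thread is the thread the QCoreApplication object lives in.
    bool isGuiThread()
    {
        return QThread::currentThread() ==
               QCoreApplication::instance()->thread();
    }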

As you can see, this is a pointer comparison. But can we rely on the value returned by currentThread? Yes, we can, since the pointer is associated with the thread itself, as we can see from the code of the method:
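(Abridged from Qt 4’s qthread_win.cpp and reproduced from memory, so the details may differ between versions:)

    QThreadData *QThreadData::current()
    {
        qt_create_tls(); // reserves the TLS index with TlsAlloc (only once)
        QThreadData *threadData = reinterpret_cast<QThreadData *>(
            TlsGetValue(qt_current_thread_data_tls_index));
        if (!threadData) {
            // First call from this thread: create its data, store the
            // pointer in TLS with TlsSetValue and adopt the thread.
            threadData = new QThreadData;
            TlsSetValue(qt_current_thread_data_tls_index, threadData);
            threadData->thread = new QAdoptedThread(threadData);
        }
        return threadData;
    }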

qt_create_tls calls TlsAlloc just once to reserve a TLS index, and if the data for the current thread hasn’t been set yet, it is set with TlsSetValue. So we can rely on a pointer comparison.