
why is C++ not commonly used in low level code?

Started by July 16, 2011 05:46 PM
67 comments, last by Washu 13 years, 3 months ago

[quote]
9.3x faster, some 1.4x smaller.


Wrong conclusion.

The problem is IO bound, so heuristics on buffer sizes will determine the running time.

Change those std::endl to '\n' and you see a big difference.[/quote]

Also disable the iostream synchronization with stdio; the call is std::ios::sync_with_stdio(false).
I don't think you can measure the performance of a language with an artificial benchmark such as this. An artificial benchmark can be designed to 'prove' that any X is better than any Y.

Every language has specific traits that outweigh those of some other language, and this is what matters. In the end, 'X vs Y' holy wars do nothing but cause arguments, and are usually based on nothing but hearsay and personal opinion without solid proof.

Use the right tool to solve the problem, without trying to use an excavator to swat a fly, and you will prosper. :-)

(EDIT: here are 1.6 million Google results on why it is not a good idea to trust such benchmarks.)
Thanks for the replies. I tried with '\n', reserved some space for the string, and disabled the sync, with the following results:



[quote]
[calmis@purpleorange cvscpp]$ time echo "Dun-dun Daa" | ./main1 a ab abc abcd abcde abcdef abcdefg > mainC.txt

real 0m0.035s
user 0m0.017s
sys 0m0.007s
[calmis@purpleorange cvscpp]$ time echo "Dun-dun Daa" | ./main2 a ab abc abcd abcde abcdef abcdefg > mainCPP.txt

real 0m0.055s
user 0m0.033s
sys 0m0.003s

[/quote]

Obviously the speed bottleneck in this case was due to an incompetent programmer (me). However, there's still the size problem to solve. With GCC 4.6 the sizes are:



[quote]
[calmis@purpleorange cvscpp]$ wc -c main1
3508 main1
[calmis@purpleorange cvscpp]$ wc -c main2
5944 main2
[/quote]

...when compiled with the -Os and -s flags. This is my main concern when working with embedded systems, which do not necessarily have the resources to spare for increased code size.

But as stated numerous times before, the main reasons for preferring C over C++ (at least to date) have been portability and habit.





[quote]
I don't think you can measure the performance of a language with an artificial benchmark such as this. An artificial benchmark can be designed to 'prove' that any X is better than any Y.

Every language has specific traits that outweigh those of some other language, and this is what matters. In the end, 'X vs Y' holy wars do nothing but cause arguments, and are usually based on nothing but hearsay and personal opinion without solid proof.

Use the right tool to solve the problem, without trying to use an excavator to swat a fly, and you will prosper. :-)
[/quote]

This is completely true. It is not really possible to draw conclusions about a whole language from a single benchmark. However, I was asked for an example demonstrating that C is indeed more resource-efficient than C++, and I brought in one very simple example of this. And it gets even worse once one starts using more of C++ than just the standard library, e.g. classes, multiple inheritance and exceptions. This is based on my experience when working with 1k/4k demoscene prods, in which C++ is a definite no-go because of the increased resulting binary size. If size doesn't matter (what!?), then C++ is much more attractive, for me at least. But as shown above, I am not really a C++ dev at all. :)

Did you remember to strip symbols and debug information from your binary after you compiled it?

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]


[quote]
This is based on my experience when working with 1k/4k demoscene prods, in which C++ is a definite no-go because of the increased resulting binary size.
[/quote]


Ahh, the nostalgia, I remember when these were written in assembly language, not C :cool:

[quote]
Did you remember to strip symbols and debug information from your binary after you compiled it?
[/quote]


With external tools? Not really, but after doing that the results are as follows:




[quote]
[calmis@purpleorange cvscpp]$ wc -c main1
3508 main1
[calmis@purpleorange cvscpp]$ wc -c main2
5944 main2
[calmis@purpleorange cvscpp]$ strip -R .comment -R .gnu.version main1
[calmis@purpleorange cvscpp]$ strip -R .comment -R .gnu.version main2
[calmis@purpleorange cvscpp]$ sstrip main1
[calmis@purpleorange cvscpp]$ sstrip main2
[calmis@purpleorange cvscpp]$ wc -c main1
2048 main1
[calmis@purpleorange cvscpp]$ sstrip main2
[calmis@purpleorange cvscpp]$ wc -c main2
4424 main2


[/quote]

The reason why I excluded this in the first place was that when working with embedded systems such tools are often not available, so they'd just skew the results.


[quote name='Calmatory' timestamp='1311016150' post='4836958']
This is based on my experience when working with 1k/4k demoscene prods, in which C++ is a definite no-go because of the increased resulting binary size.


Ahh, the nostalgia, I remember when these were written in assembly language, not C :cool:
[/quote]


These days an optimizing compiler can generate close to optimal assembly from C, and OS-specific tricks and external strippers/packers/crunchers have advanced a great deal over the past few years, further reducing the gap between C and asm. Writing a barebones skeleton in C and hand-optimizing the rest in asm is the way to go.
Two points:

1) std::cin and std::cout are doing more than scanf and printf under the hood, both of which are valid C++ functions, btw, and thus any coder worth their salt would use them if the situation required it*
2) As soon as the input string exceeds 32 characters your C code runs the risk of blowing up nicely; the C++ version is safe in that regard.

(* it becomes a speed vs safety trade-off: if you want speed you take the functions with the lowest overhead, if you want safety you have to give up some of that speed)
A big reason is that "embedded" systems span an incredible range of devices and processors. Typically, anyone with an odd-ball processor or who sells their own microcontrollers is also in the position of having to develop and maintain their own toolchains, run-times, etc. C is simply the easiest path, and probably about an order of magnitude less difficult to implement and support than C++.

There's also a certain amount of historical bias / tribalism among codgy old embedded developers, who probably gave up assembly with a fair bit of reluctance. By and large, this means that much of the ecosystem is built around C. C++ is making inroads on more capable platforms (eg, MIPS/PowerPC platforms), but C is still king elsewhere, and probably will be for quite some time.

It's also true that, when a custom solution is called for, it's far more straightforward to build something on top of C than to try to undo/work around what C++ provides by default and replace it with your own -- I'm not talking about memory allocation here, I mean things like providing some sort of (pseudo-)object system built to be performant on the device.

You're right that C++ does have quite a few other features that are useful to embedded dev, though -- for example, you could conceivably use placement new to initialize memory-mapped devices via the class constructor, and template metaprogramming can be used to generate highly tuned code parametrically.

throw table_exception("(╯°□°)╯︵ ┻━┻");

Mostly because C++ doesn't help solve the problems that "low level code" is facing. It's considerably easier to guess what's going on under the hood in a C program than in a C++ program. Virtual functions make it difficult to determine what code will run next. Templates make it difficult to determine what code even exists. C++ tries to be type-safe where C doesn't. Sometimes you pay a performance cost for this type safety, sometimes you don't, but AFAIK you never get a performance win out of it (as OCaml purports to do).

So, if you're writing code, and all of the domain momentum is geared toward C (existing code, libraries, developer skillsets), it's very difficult to justify moving to C++. It's a similar situation with C++ vs a higher level language for games.
Anthony Umfer
I rest my case for now.
As mentioned by phantom, that's a completely false benchmark, as the two programs aren't doing the same thing at all; you've made the C++ one do a ton more work than the C one.


You may as well have written:

//C benchmark
for( int i=0; i<1; ++i ){printf("");}
//C++ benchmark
for( int i=0; i<10000; ++i ){printf("");}

Mostly because C++ doesn't help solve the problems that "low level code" is facing. It's considerably easier to guess what's going on under the hood with a C program compared to a C++ program. Virtual functions make it difficult to determine what code will run next. Templates make it difficult to determine what code even exists.
That's a completely false argument -- the C equivalent of virtual functions is function pointers, and the C equivalent of templates is macros and #include-based 'templates'.
If you used those same methodologies in both languages, you'd have the same problems in both languages.

Not all C++ programs are written using the wrong features at the wrong time. Just because you're using C++ everywhere, it doesn't mean you should go and use virtual for something where you wouldn't have used a function-pointer in C. If you do, then you're just a shit C++ programmer, and the argument becomes "good programmers are better than shit programmers, derp!".

This topic is closed to new replies.
