
Anyone heard of codility? (Automated Programming Assessment)

Started by February 20, 2010 04:13 PM
35 comments, last by janta 14 years, 8 months ago
Quote: Original post by Sneftel

JIT-compiled code can potentially run faster than optimized precompiled C, due to the ability of the compiler to dynamically recompile portions of the code in response to dynamic analysis as well as the specific processor it's running on.


Just one detail - JIT doesn't kick in until 1000 (client) / 10000 (server) invocations of a method. So in this test there is likely no JIT-compiled code at all.

-XX:+PrintCompilation and -XX:CompileThreshold can be used to mess with that. If the threshold is 1 (not sure whether 0 works), the entire code gets compiled on first use, usually at startup. The result is something similar to offline compilation.
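As a rough illustration of the warmup behaviour described above (the class name, method, and iteration counts here are made up for illustration; exact thresholds and output depend on the JVM version), a method only becomes a candidate for JIT compilation once it has been invoked often enough, and running with -XX:+PrintCompilation lets you watch when that happens:

```java
// Sketch: a method that becomes "hot" after many invocations.
// Run with:  java -XX:+PrintCompilation Warmup
// to watch HotSpot log the compilation of sum() once its invocation
// count passes the compile threshold.
public class Warmup {
    // Simple method for the JIT to compile once it is called often enough.
    static long sum(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += i;
        }
        return total;
    }

    public static void main(String[] args) {
        long result = 0;
        // 20,000 invocations comfortably exceeds the default thresholds
        // quoted above (1000 client / 10000 server).
        for (int i = 0; i < 20_000; i++) {
            result = sum(1_000);
        }
        System.out.println(result); // sum of 0..999 = 499500
    }
}
```

With -XX:CompileThreshold=1 the compilation lines appear almost immediately instead of partway through the loop.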
Quote: Original post by Antheus
Quote: Original post by nullsquared
In fact, notice how most of the measurements for the C version were 0s. The last one in C was either a fluke, or their Java measurements were inaccurate.


So you reject the reality and substitute your own? Your statement means that both the Java and C versions fluked on the same test, and that they were pair-wise correct on every other test.

Wait, what? No, I'm just saying their timing doesn't look quite right.

Quote:
Since this is a trivial case, depending on how the C code gets compiled, the Java version might put variables closer together, reducing the number of cache misses. Just one or two such cases and you've saved several hundred, perhaps even thousands, of cycles.

Of course. And I seriously doubt the JVM's JIT compilation out-smarted the C compiler, especially with cache-friendly memory access. Java's arrays are Objects and have lots of crap associated with them. The C code was about as cache-friendly as it gets. (maybe cache_hit can point something out here [wink])

Quote:
Code hasn't been the bottleneck in a while now; memory is. And while memory layout cannot be controlled unless specifically managed in C, the JVM might just produce a slightly more optimal version, considering we're dealing with arrays, which Java can allocate in place, compared to a List<>, which requires instances of objects.

Wait, what? Lists have nothing to do with this; I don't see the point of bringing them in. Java does have the advantage of a GC memory system, but that can also be a disadvantage. The C version is performing continuous access on raw memory. The Java version is performing access on an Object which happens to be an array, and who knows what else is going on behind the scenes.

Quote:
So C or C++ application can never be faster than something written in assembly?

Nope, as long as the assembly is written perfectly. The catch is that compilers will usually write better assembly than a human being, so the argument is a bit moot.

Quote:
BTW, do not underestimate the JVM's design. When it comes to large applications (1 MLOC+), the JVM will offer considerably better run-time characteristics than an equivalent-effort C++ application, precisely because of memory management. I mean this seriously: the micro-overhead starts to matter surprisingly little above a certain size, without even mentioning the difference in development time.

For GUI-based tools and whatnot, perhaps you're right. But if Java weren't so dog-slow, then maybe Crysis could be optimized a bit by rewriting it in Java [lol]
Quote: Original post by Sneftel
Quote: Original post by nullsquared
@ Sneftel: Not the point - the same code in Java won't JIT compile to something as efficient or more efficient than the same code in C.
JIT-compiled code can potentially run faster than optimized precompiled C, due to the ability of the compiler to dynamically recompile portions of the code in response to dynamic analysis as well as the specific processor it's running on.


Oh, I'm not denying that recompiling code specifically to target the machine it's about to run on can beat something compiled in a generic manner. But those are very obscure cases you're bringing up, and they most likely had zero effect on the little test we all took.
equi [] _ = -1
equi (x:xs) n =
    if foldl (+) 0 xs == foldl (+) 0 n
        then length n
        else equi xs (x : n)

It may not be able to compile your solutions, but I still prefer Project Euler. [smile]
Quote: Original post by Wan
foldl (+) 0

AKA sum :P
Quote: Original post by mattd
Quote: Original post by Wan
foldl (+) 0

AKA sum :P

Lol, forgot about that one. :)
Quote: Original post by nullsquared
Quote: Original post by benryves
I don't think time matters as long as it doesn't time out.


Yeah I just tried it again, it doesn't matter. And this time I got 100/100, added an ugly hack to pass their big numbers test [grin]:
#include <numeric>
#include <vector>
using std::vector;

int equi(const vector<int> &A)
{
    if (A.empty()) return -1;
    if (A.size() == 1) return 0;
    int right = std::accumulate(A.begin(), A.end(), 0);
    int left = 0;
    for (size_t i = 0; i < A.size(); ++i)
    {
        right -= A[i];
        if (left == right)
            return i;
        // the ugly hack: bail out when the running sum would overflow
        if (left + A[i] < left && A[i] > 0)
            return -1;
        left += A[i];
    }
    return -1;
}


Got 93.5/100 on my first try in C++; like other people, I fell for the overflow issue. Using long long int passes their test, even though it's not theoretically any safer with regard to overflow.
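Since the thread is about Java vs. C anyway, the same prefix-sum approach can be sketched in Java with a 64-bit long accumulator, which makes the per-element overflow guard unnecessary for any realistic input size (the class name Equi and the sample array are made up for illustration, not Codility's actual test data):

```java
// Sketch: equilibrium index with 64-bit accumulators, so the
// per-element overflow check from the C++ version is not needed
// (the sum of up to millions of ints comfortably fits in a long).
public class Equi {
    static int equi(int[] a) {
        long right = 0;
        for (int x : a) right += x;  // total of the whole array
        long left = 0;
        for (int i = 0; i < a.length; i++) {
            right -= a[i];           // right = sum of elements after i
            if (left == right)
                return i;            // sums on both sides match
            left += a[i];            // left = sum of elements before i+1
        }
        return -1;                   // no equilibrium index exists
    }

    public static void main(String[] args) {
        System.out.println(equi(new int[] {-1, 3, -4, 5, 1, -6, 2, 1})); // 1
    }
}
```

Whether the JIT makes this run as fast as the C++ version above is, of course, exactly what the rest of this thread is arguing about.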

This topic is closed to new replies.
