Java vs. C: inspired by an interview question, I wrote a little bit of sample code to create an array of a billion random integers and either (A) sum them as I go, or (B) go back over the array and sum it in a second pass.
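For reference, here's roughly what the two variants look like. This is a minimal sketch of my own in Java (the class and variable names are mine, not the original code), and it assumes a heap big enough for the ~4 GB array, e.g. -Xmx6g:

    import java.util.Random;

    public class SumBench {
        // a billion ints is ~4 GB of array data; run with e.g. -Xmx6g
        static final int N = 1_000_000_000;

        public static void main(String[] args) {
            Random rnd = new Random();
            int[] a = new int[N];   // allocation time measured separately in the numbers below

            // (A) sum as I go: fill and accumulate in a single pass
            long t0 = System.nanoTime();
            long sumA = 0;
            for (int i = 0; i < N; i++) {
                a[i] = rnd.nextInt();
                sumA += a[i];
            }
            System.out.printf("A: %.1f s (sum=%d)%n", (System.nanoTime() - t0) / 1e9, sumA);

            // (B) two passes: fill first, then sum in a second loop
            long t1 = System.nanoTime();
            for (int i = 0; i < N; i++) {
                a[i] = rnd.nextInt();
            }
            long sumB = 0;
            for (int i = 0; i < N; i++) {
                sumB += a[i];
            }
            System.out.printf("B: %.1f s (sum=%d)%n", (System.nanoTime() - t1) / 1e9, sumB);
        }
    }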
In C (Cygwin, gcc 6.4.0, 64-bit):
– summing as I go, about 3.7 seconds on my laptop; -O3 doesn't make much difference
– two passes, about 6.5 seconds unoptimized, about 4.1 seconds with -O3
– time to malloc(sizeof(int) * 1000000000): under 1 ms
In Java (JDK 10):
– summing as I go, about 12.1 seconds
– two passes, about 13.4 seconds
– time to new int[1_000_000_000]: about 1.7 seconds
So Java sucks, right?
Then I replaced the random integers with consecutive integers, all in two passes:
– Java: 2.7s
– unoptimized C: 6.1s
– C at -O3: 2.1s
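The only change from the sketch near the top is the fill loop (again, my own wording of it, not the original code; a and N are the array and size from that sketch):

    // fill with consecutive integers instead of Random.nextInt(),
    // then sum in a separate pass as before
    for (int i = 0; i < N; i++) {
        a[i] = i;
    }
    long sum = 0;
    for (int i = 0; i < N; i++) {
        sum += a[i];
    }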
Given how much of that 2.7 seconds is Java's equivalent of malloc zeroing the memory (new int[] always hands back a zero-filled array), that's impressively fast.
So the real issue seems to be that Random.java is WAYYYY slower than rand()?
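One way to check that hypothesis (a sketch I haven't benchmarked here; it just isolates the RNG call from the array writes, and the ThreadLocalRandom comparison is my own addition):

    import java.util.Random;
    import java.util.concurrent.ThreadLocalRandom;

    public class RngBench {
        public static void main(String[] args) {
            final int N = 1_000_000_000;

            // java.util.Random updates its seed with an atomic compare-and-set on every call
            Random rnd = new Random();
            long t0 = System.nanoTime();
            long acc = 0;
            for (int i = 0; i < N; i++) {
                acc += rnd.nextInt();
            }
            System.out.printf("Random.nextInt:            %.1f s (acc=%d)%n",
                    (System.nanoTime() - t0) / 1e9, acc);

            // ThreadLocalRandom skips that synchronization and is usually much faster
            long t1 = System.nanoTime();
            long acc2 = 0;
            for (int i = 0; i < N; i++) {
                acc2 += ThreadLocalRandom.current().nextInt();
            }
            System.out.printf("ThreadLocalRandom.nextInt: %.1f s (acc=%d)%n",
                    (System.nanoTime() - t1) / 1e9, acc2);
        }
    }

If the Random-only loop accounts for most of the 12 seconds, that points at the generator rather than the array work; comparing against ThreadLocalRandom (or a cheaper fill) would show how much of the cost is the RNG versus the memory traffic.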