Oh, then there was a big misconception on my side. Still referring to this:
If I comment the casting out, I get an answer of 7.75. If the casting remains, I get an answer of 42.66. The analytical solution gives an answer of 42.94.
I thought you got this big difference within a single function call.
But now I see that it only shows up after many integration steps.
taby said:
I have thought about it a lot, and I believe that it has to do with the quantization. So basically we are snapping the double to a float value. It is this snapping of values that causes the correct precession.
Yeah, that's what I meant when I joked: ‘On the other hand, maybe reducing precision is needed to emulate quantized nature? :D’
But I don't believe it. I rather believe that the ‘wrong’ result you get without the cast is more correct than the seemingly accurate result you get with it.
How can you be so sure that 43 is right for the model you use? Real-world measurements, I guess?
And since I still have your code ported, how long would I need to run it to replicate this?
I ran it only for a few seconds, and it did not print any numbers in that time. Would it, if I let it run longer?