
Floating point determinism in Unity, PhysX and .NET: Intel vs AMD

Started by Iron-Warrior October 21, 2020 01:23 AM
8 comments, last by JohnnyCode 4 years, 2 months ago

I am working on a game in Unity (using PhysX) that has an input-based replay system, which requires determinism to function correctly. I'm aware of the challenges of getting floating point arithmetic to be deterministic across different settings, so this was initially just built as an internal tool to help capture video footage. It turned out that the replays are entirely deterministic for the same build across different machines. Unfortunately, I get desyncs when a replay captured on one CPU vendor is run on the other (i.e., AMD ←→ Intel). I've tried both Mono and IL2CPP (AOT compilation) with no success.
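Roughly speaking, the desync check is conceptually something like the sketch below (simplified, not my actual replay code, and the class and field names are made up for illustration): hash the raw float bits of the physics state every FixedUpdate and compare against the hash recorded alongside the input stream, so any single-bit divergence is caught on the frame it happens.

using System;
using UnityEngine;

// Simplified sketch: per-frame state hash for desync detection.
public class DesyncCheck : MonoBehaviour
{
	public Rigidbody[] bodies;  // the bodies whose state defines the simulation
	uint frameHash;

	void FixedUpdate()
	{
		frameHash = 2166136261u;  // FNV-1a offset basis
		foreach (Rigidbody body in bodies)
		{
			Vector3 p = body.position;
			frameHash = Mix(frameHash, p.x);
			frameHash = Mix(frameHash, p.y);
			frameHash = Mix(frameHash, p.z);
		}
		// Compare frameHash against the hash recorded for this frame in the replay;
		// a mismatch anywhere in the state means the replay has desynced.
	}

	static uint Mix(uint hash, float f)
	{
		// Hash the exact bit pattern, not a decimal rendering of the value.
		uint bits = (uint)BitConverter.ToInt32(BitConverter.GetBytes(f), 0);
		return (hash ^ bits) * 16777619u;
	}
}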

There is of course a large tech stack between Unity and the machine code, but I am wondering if anyone has any insight on this? I made a small app to test arithmetic determinism:

// Standalone test: run a fixed sequence of float operations and dump the exact
// result bits so different machines can be compared bit-for-bit.
// Assumes "using System;" and "using System.IO;".
float value = 0.2924150f;

// A few library transcendental functions first...
value = (float)Math.Sin(value);
value = (float)Math.Cos(value);
value = (float)Math.Tan(value);

value = (float)Math.Pow(value, 2.32932f);

// ...then basic arithmetic. numbers.txt contains 200 randomly generated numbers from 0-1.
using (StreamReader file = new StreamReader("numbers.txt"))
{
	string line;

	// Cycle through add, subtract, multiply, divide.
	int op = 0;

	while ((line = file.ReadLine()) != null)
	{
		float readValue = Convert.ToSingle(line);

		if (op == 0)
			value += readValue;
		else if (op == 1)
			value -= readValue;
		else if (op == 2)
			value *= readValue;
		else
			value /= readValue;

		op = (op + 1) % 4;
	}
}

Console.WriteLine(value.ToString("F10"));

// Print the raw bytes of the result so the comparison is exact rather than
// limited to the decimal digits printed above.
byte[] bytes = BitConverter.GetBytes(value);

for (int i = 0; i < bytes.Length; i++)
{
	Console.WriteLine(bytes[i]);
}

Console.Read();

and got consistent results on both Intel and AMD, so it could be a configuration issue? From what I've read, x86-64 should produce consistent results on modern Intel and AMD, but it's hard to find a straight answer.
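(Side note, in case anyone wants to reproduce this: the byte dump above can also be collapsed into one hex string per value, which is a bit easier to compare across machines.)

// Same information as the byte loop, printed as a single hyphen-separated hex string.
Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(value)));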

Thanks for any help.

Floating-point math is usually a point of optimization: you can trade precision against (relative) calculation cost when, for example, building native code with Visual Studio. So the underlying runtime built into Unity may be part of the issue.

But anyway, how strong is your need to be 100% accurate? Is it necessary to have high-precision floating-point values with all their decimal places, or is it enough to reduce the precision to, say, the first two or three decimal places of the number?
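To show what I mean, something like this sketch (you would apply it wherever state is stored or compared; the method name is just for illustration):

// Sketch: snap a value to 3 decimal places so differences below that
// threshold are discarded. Assumes "using System;".
float Quantize(float value)
{
	return (float)Math.Round(value, 3);
}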


Shaarigan said:

Floating-point math is usually a point of optimization: you can trade precision against (relative) calculation cost when, for example, building native code with Visual Studio. So the underlying runtime built into Unity may be part of the issue.

But anyway, how strong is your need to be 100% accurate? Is it necessary to have high-precision floating-point values with all their decimal places, or is it enough to reduce the precision to, say, the first two or three decimal places of the number?

Do you mean accuracy as in accuracy of the calculations, or consistency with prior runs? We don't currently have any way to recover a replay when a desync occurs (this would require making the entire game state serializable, which is a bit too much of an investment for now!), so it needs to be 100% consistent.

Unity's AOT compiler (IL2CPP) does apparently allow compiler flags to be passed in, but I'm not sure whether the entire engine gets compiled with them (it translates .NET intermediate language to C++, in any case). So it might be beyond my reach.

I don't use Unity, but I've been doing a lot of intrinsics programming lately. Is it possible that the two CPUs have different SIMD support and that Unity is optimizing for SIMD?
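For what it's worth, on newer runtimes (.NET Core 3.0+; I have no idea whether Unity's Mono exposes this namespace) you can at least dump which instruction sets each machine reports, to see whether the two CPUs would take different paths:

using System;
using System.Runtime.Intrinsics.X86;

// Prints which x86 SIMD extensions the runtime reports on this machine.
class SimdReport
{
	static void Main()
	{
		Console.WriteLine("SSE2:   " + Sse2.IsSupported);
		Console.WriteLine("SSE4.1: " + Sse41.IsSupported);
		Console.WriteLine("AVX:    " + Avx.IsSupported);
		Console.WriteLine("AVX2:   " + Avx2.IsSupported);
		Console.WriteLine("FMA:    " + Fma.IsSupported);
	}
}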

Iron-Warrior said:
so it needs to be 100% consistent

So if higher precision is less consistent, reduce the precision until it is!?

Gnollrunner said:
Unity is optimizing for SIMD?

Unity optimizes for SIMD in some parts of the engine, for example vector math, and the engine itself is likely compiled for lower precision in favour of higher performance, which is in the end what your C# code runs on. IL2CPP doesn't change much here because it just compiles the C# code to platform assembly; it doesn't recompile the entire engine (for which you would need source code access).

In general, Unity games are NOT standalone; they are executed in the Unity Player app, which is part of the engine and also contains an embedded CLR to execute C# script code.

@Shaarigan It just seems that, given the same installation of Unity, it should do the same thing on different machines barring some difference at run time. I could see it checking the SIMD level and calling different sets of routines; I'm probably going to do that myself when I upgrade my computer. Some of the newer SIMD instructions make certain operations easier to do in different ways, and the order of operations could make some difference in accuracy. However, I have literally zero experience with Unity, so it's all speculation. I just thought I'd throw the idea out there.
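To be concrete about the order-of-operations point: floating point addition isn't associative, so two code paths that merely sum the same values in a different grouping can already diverge (assuming a typical x86-64 runtime where float math is done in single precision):

// Assumes "using System;". The grouping of additions changes the float result.
float a = 100000000f;
float b = -100000000f;
float c = 1f;

float left = a + b;    // 0
left += c;             // 1

float right = b + c;   // -99999999 rounds back to -100000000
right += a;            // 0

Console.WriteLine(left);   // 1
Console.WriteLine(right);  // 0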


I don't think you can force consistent results by reducing precision. Imagine a situation where the AMD result is consistently higher (by the smallest possible amount) than the Intel result. No matter how you round, there will always be values sitting right on the cutoff point such that the AMD result is still higher than the Intel result after rounding, and the rounding process will increase the difference between them.
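For example (a sketch with made-up values chosen to sit on a cutoff):

// Two hypothetical results that differ only around the 7th decimal place land on
// opposite sides of a rounding cutoff, so rounding to 2 decimals grows the gap.
// Assumes "using System;".
double intelResult = 0.1249999;
double amdResult   = 0.1250001;

Console.WriteLine(Math.Round(intelResult, 2));  // 0.12
Console.WriteLine(Math.Round(amdResult, 2));    // 0.13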

I think that the difference in results is probably due to different code paths based on different SIMD instruction sets. If that's the case, then you might be able to disable the problematic SIMD instructions when compiling.
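As one concrete illustration of how a single instruction-selection difference changes the bits (whether Unity or PhysX actually emits FMA anywhere is pure speculation on my part): a fused multiply-add rounds once, while a separate multiply and add rounds twice, so they don't return the same value for some inputs.

// Requires a runtime with MathF.FusedMultiplyAdd (.NET Core 3.0+) and "using System;".
float a = 1.000244140625f;                        // 1 + 2^-12

float prod = a * a;                               // rounds to 1 + 2^-11
float twoStep = prod - 1f;                        // 0.00048828125
float fused = MathF.FusedMultiplyAdd(a, a, -1f);  // 0.00048834085...

Console.WriteLine(twoStep == fused);              // False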

Shaarigan said:

Iron-Warrior said:
so it needs to be 100% consistent

Unity optimizes for SIMD in some parts of the engine, for example vector math, and the engine itself is likely compiled for lower precision in favour of higher performance, which is in the end what your C# code runs on. IL2CPP doesn't change much here because it just compiles the C# code to platform assembly; it doesn't recompile the entire engine (for which you would need source code access).

So if higher precision is less consistent, reduce the precision until it is!?

a light breeze said:

I don't think you can force consistent results by reducing precision. Imagine a situation where the AMD result is consistently higher (by the smallest possible amount) than the Intel result. No matter how you round, there will always be values sitting right on the cutoff point such that the AMD result is still higher than the Intel result after rounding, and the rounding process will increase the difference between them.

I think that the difference in results is probably due to different code paths based on different SIMD instruction sets. If that's the case, then you might be able to disable the problematic SIMD instructions when compiling.

This was my intuition too: even if you truncate numbers to a certain level of precision, subsequent operations will still produce results with more precision than that, which then have to be rounded by the hardware, right?

SIMD is a good point to bring up. Unity is very much a black box for much of its code, so I'm not sure if they expose any way to control for that.

Thanks everyone for your responses!

Whether truncating makes sense depends on the magnitude of the real numbers involved, and also on how many algebraic operations they pass through, since the loss of precision accumulates as values flow through one operation after another.
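A rough illustration of the accumulation part:

// Assumes "using System;". 0.1 is not exactly representable as a float, and the
// rounding error from each addition accumulates.
float sum = 0f;
for (int i = 0; i < 10000; i++)
	sum += 0.1f;

Console.WriteLine(sum.ToString("F4"));  // not 1000.0000; the error adds up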

This topic is closed to new replies.
