More specifically, the failing tests are TestExecute32MixedArgs and TestExecuteThis32MixedArgs. In both cases the tests report that arguments 14, 15, and 16 are wrong.
However, after some testing I have been able to confirm that this is not a problem with AngelScript, but rather with the MSVC optimizer.
Debugging the code, I can see that the arguments are indeed correct. The problem occurs when the arguments are loaded into the floating point registers. Although the disassembly window shows me that the code was correctly generated by the MSVC compiler, the value loaded from the stack into the floating point register is incorrect. The following instruction is executed:
fld dword ptr [esp+48h]
Here esp+48h points to the 14th parameter on the stack. The value on the stack at this location is 14.0f, but what actually gets loaded into the st0 register is 1#IND. This is what I cannot understand.
It actually seems to fail depending on what is already loaded into the other floating point registers, because if I move the unit test to the beginning of the main() function it doesn't fail. The code is exactly the same; only the contents of the floating point registers are different.
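For reference, here's a hypothesis that would be consistent with this symptom (this is just an assumption on my part, not something I've confirmed yet): the x87 FPU only has 8 stack registers, and an fld onto a full register stack overflows it. With the invalid-operation exception masked, the CPU then pushes the indefinite QNaN (1#IND) instead of the value from memory. A minimal MSVC x86 sketch of that behavior:

// Sketch of x87 register stack overflow (hypothetical cause, not confirmed):
// the 9th fld without a pop overflows the 8-register stack and, with
// exceptions masked, leaves the indefinite QNaN in st0 instead of the value.
#include <cstdio>

int main()
{
    float value  = 14.0f;  // the value we expect to load
    float result = 0.0f;

    __asm
    {
        fninit            // start with an empty FPU register stack
        fld value         // these 8 loads fill st0..st7
        fld value
        fld value
        fld value
        fld value
        fld value
        fld value
        fld value
        fld value         // 9th load: stack overflow, st0 = indefinite QNaN
        fstp result       // store the (now invalid) top of stack
        fninit            // clean up the remaining registers
    }

    printf("%f\n", result);  // typically prints -1.#IND00 instead of 14.000000
    return 0;
}

This would also explain why the test passes when moved to the beginning of main(): at that point the floating point register stack is still empty, so the load has room to succeed.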
The same test works if I compile it without optimizing for speed, using the exact same AngelScript library as before.
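If I need to narrow it down further, one option (just an idea, not something the test suite does today) is MSVC's #pragma optimize, which turns speed optimization off for a single function while the rest of the build keeps its command-line settings:

// Hypothetical use of #pragma optimize to compile only the suspect
// function without optimization; the function name and stub body are
// placeholders, not the real test code.
#pragma optimize("", off)   // all optimizations off from here
bool TestExecute32MixedArgs_NoOpt()
{
    // ... the body of the failing test would go here ...
    return true;
}
#pragma optimize("", on)    // restore the command-line optimization settings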
I'll try to figure out what is going on. But at least I know the problem is not with AngelScript.