Hi all!
I am currently learning about deep water simulations. I have only just begun to scratch the surface, but there is one thing I have been wondering about for a while: why do most modern methods compute the inverse FFT in real time instead of precomputing one (or several) animated textures and using them as vertex displacement maps? Is it actually more efficient to compute the IFFT in real time than to simply fetch from the textures?
I can think of one pretty good reason: memory. For a 256x256 grid with a periodicity of 30 seconds at 30 fps, storing 3 bytes per texel would take 3 * 30 * 30 * 256 * 256 bytes ≈ 177 megabytes. I don't know what the usual memory budget for something like this is, so I can't tell whether that is too much. Is this the reason for computing the IFFT in real time, or am I missing something?
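Just to make the estimate concrete, here is a quick back-of-envelope sketch. The 3 bytes per texel is my own assumption (something like an RGB8 displacement); a full float3 displacement would be 4x larger:

```cpp
#include <cstdio>

int main() {
    // Assumed numbers from the estimate above: 256x256 grid,
    // a 30-second loop sampled at 30 fps, 3 bytes per texel
    // (one byte per displacement axis).
    const long long width = 256, height = 256;
    const long long fps = 30, loopSeconds = 30;
    const long long bytesPerTexel = 3;

    long long frames = fps * loopSeconds; // 900 frames in the loop
    long long bytes  = width * height * bytesPerTexel * frames;
    std::printf("%lld frames, %.1f MB\n", frames, bytes / 1e6);
    // Prints: 900 frames, 176.9 MB

    // A full float3 displacement (12 bytes per texel) would be 4x that:
    std::printf("float3: %.1f MB\n", width * height * 12.0 * frames / 1e6);
    return 0;
}
```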
Thanks in advance!
Javier