51 minutes ago, Fulcrum.013 said:
Is any of available engines around operates with curved and NURBS geometry? Nothing of it. But hardware support has been added 7 years ago
Just because the support is there doesn't mean it's useful.
I'm not a graphics programmer, but most of the references to NURBS-based rendering I can find are more than ten years old. I wouldn't be surprised if NURBS were now an outdated technique, replaced by things like hardware tessellation shaders, which are probably faster. (I've heard that even 3D modelling tools convert NURBS surfaces to polygons in order to render them in real time!) Polygons are still "good enough," anyway.
From what I hear from my colleagues, the big innovations in computer graphics in recent years have been in lighting.
51 minutes ago, Fulcrum.013 said:
if you really need a pholimorphism you can not avoid it. And virtual methods is fasted possible solution.
True, but if you can implement something without polymorphism, why wouldn't you? The only justification is "because I might want to override this later", and unless you actually know you're going to need to override it, that's over-engineering.
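To make that concrete, here's a contrived C++ sketch (the names DamageCalculator and ComputeDamage are made up for illustration, not from any real engine):

```cpp
#include <cstdio>

// Over-engineered: a base class with a virtual method that only ever has one
// implementation. Every call goes through the vtable, and every object carries
// a vtable pointer it doesn't need.
struct DamageCalculator {
    virtual ~DamageCalculator() = default;
    virtual float Compute(float base, float armour) const {
        return base * (1.0f - armour);
    }
};

// Simpler: a plain function. If a second damage formula ever actually shows up,
// *then* we can talk about polymorphism (virtual calls, templates, whatever).
inline float ComputeDamage(float base, float armour) {
    return base * (1.0f - armour);
}

int main() {
    DamageCalculator calc;
    std::printf("%f\n", calc.Compute(100.0f, 0.25f));  // indirect call via vtable
    std::printf("%f\n", ComputeDamage(100.0f, 0.25f)); // direct call, trivially inlinable
}
```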
And we do try to avoid both calls into DLLs and switch statements (and other kinds of branching) in performance-critical code, you know.
35 minutes ago, Fulcrum.013 said:
if you mean something like if-if-if you is complete wrong. It works by complete other way.
I'm not sure what you mean by "if-if-if", but running code that doesn't add value is obviously bad for performance. If you're talking about actual if statements, I would point out that mispredicted branches do carry a real cost on modern hardware, because of deeply pipelined CPU architectures; obviously that's more of a problem on some architectures than others. The cost isn't just the branching, though. There's also cache utilization to consider: memory access patterns are hugely important to performance on modern hardware.* Having "dead data" (that is, data for configuring features you aren't using) sitting in memory certainly doesn't help with that.
Yes, yes, older CPUs with shorter pipelines suffer less from mispredicted branches, but commercial game developers generally don't ship on platforms that are more than ten years old, so those aren't really a concern.
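To make the "dead data" point above a bit more concrete, here's a rough C++ sketch of a hot/cold data split (the particle struct and field names are invented for the example):

```cpp
#include <vector>

// Hot data mixed with rarely-used configuration: every particle drags its
// "dead" config bytes through the cache on every update.
struct ParticleFat {
    float x, y, z;
    float vx, vy, vz;
    // Configuration that the update loop never touches:
    char  debugName[64];
    int   editorFlags;
    float spawnParams[8];
};

// Hot/cold split: the update loop only streams the data it actually uses,
// so many more particles fit per cache line.
struct ParticleHot  { float x, y, z, vx, vy, vz; };
struct ParticleCold { char debugName[64]; int editorFlags; float spawnParams[8]; };

void Update(std::vector<ParticleHot>& particles, float dt) {
    for (auto& p : particles) {   // contiguous, branch-free, cache-friendly
        p.x += p.vx * dt;
        p.y += p.vy * dt;
        p.z += p.vz * dt;
    }
}

int main() {
    std::vector<ParticleHot> particles(1024, {0, 0, 0, 1, 1, 1});
    Update(particles, 0.016f);
}
```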
Anyway, this discussion is now sufficiently far off-topic that I'm not sure I see the value in continuing it. Perhaps if you're curious about why NURBS isn't used in the major game engines, you should make a post in the graphics forum.
* I once saw a bit of shader code in which a simple array index had been replaced with a switch statement because of a shader compiler "quirk": the dynamically indexed array invalidated a block of cache memory the size of the array, causing a cache miss every time the array was accessed. The switch statement was faster, in that specific case. I didn't believe my colleague at first because it seemed so counter-intuitive. I can't remember offhand what the shader actually did or which platform it was for, unfortunately...
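The general shape was something like the following; this is just a C++ analogue reconstructed from memory (kWeights, WeightIndexed and WeightSwitched are all hypothetical), not the actual shader code:

```cpp
// A small constant table.
constexpr float kWeights[4] = {0.1f, 0.2f, 0.3f, 0.4f};

// Original form: a dynamic index into the table. On that particular
// compiler/platform, this apparently hit a slow path on every access.
float WeightIndexed(int i) {
    return kWeights[i];
}

// Replacement: a switch over the possible indices, so each case is just a
// literal constant and the table never has to be indexed dynamically.
float WeightSwitched(int i) {
    switch (i) {
        case 0:  return 0.1f;
        case 1:  return 0.2f;
        case 2:  return 0.3f;
        default: return 0.4f;
    }
}
```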