
Scrum methodology

Started by
116 comments, last by Tom Sloper 5 years, 12 months ago

 

9 minutes ago, Oberon_Command said:

There are also developers who specifically avoid engines - even open-source engines - because they worry that using those engines will cause their game to "feel" like every other game made with those engines.

Better to say they avoid third-party engines, because nobody will sell, much less give away for free, an engine that isn't outdated.

#define if(a) if((a) && rand()%100)

2 hours ago, Fulcrum.013 said:

There is a really massive difference between extending something and refactoring something. Really, it is just open for extension but closed for modification.

I've worked in codebases that were written with the mindset that you couldn't refactor code after you wrote it (which, unless I'm mistaken, is the approach you're advocating here - you want us to write the code once and never touch it again because it's perfect), and you could only extend it, years after the original developers had moved on. They were the worst, most horribly over-engineered pieces of shit I have ever laid eyes on. Layers upon layers upon layers of inheritance, just to add something to a class. Virtual interfaces in performance-critical parts of the code. Design patterns for everything. Dead code everywhere, and you couldn't tell without static analysis tools which code was alive and which was dead.

I'd rather work in a C-style codebase where everything is a global than go back to that crap.

38 minutes ago, Fulcrum.013 said:

Better to say they avoid third-party engines, because nobody will sell, much less give away for free, an engine that isn't outdated.

It isn't that the engines are outdated, it's that:

  1. If you use only the technology that comes with the engine (in this case, Unreal's materials system), your game will probably look a bit like everyone else's in some specific ways. That's a bad thing. Using something else wouldn't mean "extending" the engine, it would mean "rewriting parts of" the engine.
  2. Most third-party technology is not optimized specifically for your needs. It could have the most advanced rendering engine in the world and still not be a good fit if, say, you needed to optimize the culling system because it was too general for your very specific needs (all that data-driven stuff comes with a performance cost!) and the engine didn't let you do that.
58 minutes ago, Oberon_Command said:

They were the worst, most horribly over-engineered pieces of shit I have ever laid eyes on.

It looks like you know nothing about OOP beyond what the Gang of Four wrote. Of course they have some useful things, but in general they wrote complete garbage. A proper architecture uses tiny classes that each do their part of the work forever. For example, TPersistent, which Borland made 25 years ago, has driven serialization and deserialization of anything at all using reflection tables generated by the compiler, without any changes, to this day. And it will never need to be changed, because it is a universal, 100% data-driven mechanism.
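Standard C++ compilers do not emit such reflection tables, but the idea is easy to sketch by hand with a field-descriptor table (a minimal sketch, not TPersistent itself; the Player type and its fields are invented for illustration):

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // A hand-written field descriptor: name, byte offset, type tag.
    // (Delphi/C++Builder compilers generate the equivalent of this table.)
    enum class FieldType { Int, Float };

    struct FieldDesc {
        const char* name;
        std::size_t offset;
        FieldType   type;
    };

    struct Player {
        int   score;
        float health;
    };

    static const std::vector<FieldDesc> kPlayerFields = {
        {"score",  offsetof(Player, score),  FieldType::Int},
        {"health", offsetof(Player, health), FieldType::Float},
    };

    // Generic serializer: it walks the table, so the code never changes
    // when Player grows - only the data (the table) changes.
    void serialize(const Player& p, std::ostream& out) {
        const char* base = reinterpret_cast<const char*>(&p);
        for (const FieldDesc& f : kPlayerFields) {
            out << f.name << '=';
            switch (f.type) {
                case FieldType::Int:   out << *reinterpret_cast<const int*>(base + f.offset);   break;
                case FieldType::Float: out << *reinterpret_cast<const float*>(base + f.offset); break;
            }
            out << '\n';
        }
    }

    int main() {
        Player p{42, 0.75f};
        serialize(p, std::cout); // prints "score=42" and "health=0.75"
    }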

58 minutes ago, Oberon_Command said:

years after the original developers had moved on

Why do you use 3D algorithms decades after their original developers moved on? Rotation in 3D space, for example, and so on. You use them because they are universal and their mathematical foundations cannot change. The same goes for everything else.

16 minutes ago, Oberon_Command said:

It isn't that the engines are outdated, it's that:

Do any of the available engines operate on curved and NURBS geometry? None of them do. Yet hardware support was added 7 years ago. And so on. Nobody will give you the newest of their components. It's like selling weapons: in any case it will be a cut-down or outdated version of the weapon you keep for yourself, even when you sell it to allies.

58 minutes ago, Oberon_Command said:

Virtual interfaces in performance-critical parts of the code

If you really need polymorphism, you cannot avoid it, and virtual methods are the fastest possible solution. Also, algorithmic optimization, which generally requires polymorphism, can give a speedup of tens of thousands of times, while hardware-level optimization can give no more than about 50x, even on older CPUs that have no cache prefetch prediction.
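For reference, the mechanism in question is ordinary virtual dispatch through a base-class pointer; a minimal sketch (the collider types are invented for illustration):

    #include <memory>
    #include <vector>

    // One indirect call per invocation, resolved through the object's vtable.
    struct Collider {
        virtual ~Collider() = default;
        virtual bool intersects(const Collider& other) const = 0;
    };

    struct SphereCollider : Collider {
        bool intersects(const Collider&) const override { return true; }  // stub
    };

    struct BoxCollider : Collider {
        bool intersects(const Collider&) const override { return false; } // stub
    };

    // The caller contains no switch on type; a new collider shape slots in
    // without this function changing.
    bool anyHit(const std::vector<std::unique_ptr<Collider>>& scene,
                const Collider& probe) {
        for (const auto& c : scene)
            if (c->intersects(probe)) return true;
        return false;
    }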

#define if(a) if((a) && rand()%100)

29 minutes ago, Oberon_Command said:

all that data-driven stuff comes with a performance cost!

If you mean something like an if-if-if chain, you are completely wrong. It works in a completely different way.

1 hour ago, Oberon_Command said:

Virtual interfaces in performance-critical parts of the code.

Why do you use functions from a DLL? They use the same indirect call. Why do you use switch, which is a virtual jump in the best possible case? It has the same performance as a virtual call in the best case only; in the common case it uses a binary search to find the branch.

#define if(a) if((a) && rand()%100)

51 minutes ago, Fulcrum.013 said:

Do any of the available engines operate on curved and NURBS geometry? None of them do. Yet hardware support was added 7 years ago.

Just because the support is there doesn't mean it's useful.

I'm not a graphics programmer, but most of the references to NURBS-based graphics that I can find are more than 10 years old. I wouldn't be surprised if NURBS were now an out-of-date technique, replaced by stuff like tessellating geometry shaders that are probably faster (I've heard that even 3D modelling tools convert NURBS surfaces to polygons in order to render them in real time!). Polygons are still "good enough," anyway.

From what I hear from my colleagues, the big innovations in computer graphics in recent years have been in lighting.

51 minutes ago, Fulcrum.013 said:

If you really need polymorphism, you cannot avoid it, and virtual methods are the fastest possible solution.

True, but if you can implement something without polymorphism, why wouldn't you, other than "because I might want to override this later?" Which, unless you know you're going to want to override it later, is over-engineering.
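For instance, when nothing actually needs to be overridden at runtime, static dispatch does the same job with no vtable at all (a sketch; the names are made up):

    #include <cstddef>

    // A template instead of a virtual interface: the compiler resolves the
    // member access at compile time and is free to inline and vectorize.
    template <typename Hit>
    int totalDamage(const Hit* hits, std::size_t n) {
        int sum = 0;
        for (std::size_t i = 0; i < n; ++i)
            sum += hits[i].amount; // direct access, no indirect call
        return sum;
    }

    struct MeleeHit { int amount; };

    int main() {
        MeleeHit hits[] = {{10}, {25}, {5}};
        return totalDamage(hits, 3) == 40 ? 0 : 1;
    }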

And we do try to avoid using both functions from DLLs AND switch statements (and other kinds of branching) in performance-critical code, you know. :D

35 minutes ago, Fulcrum.013 said:

If you mean something like an if-if-if chain, you are completely wrong. It works in a completely different way.

I'm not sure what you mean by "if-if-if", but running code that doesn't add value is obviously bad for performance. :) If you're talking about actual if statements, I would point out that mispredicted branches do come with a performance cost on modern hardware due to deeply pipelined CPU architectures. Obviously, that's more of a problem on some architectures than others. The performance cost isn't just the branching, though. There's also cache utilization to consider. Memory access patterns are hugely important to performance on modern hardware.* Having "dead data" (as in, data to configure stuff that you aren't using) in memory surely doesn't help with that.
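To put the "dead data" point in concrete terms, here's a hypothetical sketch (the sizes and field names are invented):

    #include <cstddef>

    // Hot data mixed with config the simulation never reads: 12 live bytes plus
    // 52 dead bytes is 64 bytes, so each cache line yields exactly one particle.
    struct FatParticle {
        float x, y, z;
        char  editorConfig[52]; // "dead data" as far as the hot loop is concerned
    };

    // Compact layout: about 5 particles per 64-byte cache line.
    struct SlimParticle {
        float x, y, z;
    };

    template <typename P>
    float sumX(const P* particles, std::size_t n) {
        float s = 0.0f;
        for (std::size_t i = 0; i < n; ++i)
            s += particles[i].x; // sequential access, prefetcher-friendly
        return s;
    }
    // sumX<SlimParticle> streams through roughly a fifth of the memory that
    // sumX<FatParticle> does, for the same answer.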

Yes, yes, older CPUs don't have to think about branching so much, but commercial game developers generally don't write software for platforms that are more than 10 years old, so they aren't really a concern.

Anyway, this discussion is now sufficiently far off-topic that I'm not sure I see the value in continuing it. Perhaps if you're curious about why NURBS isn't used in the major game engines, you should make a post in the graphics forum.

* I saw a bit of shader code once that had originally been a simple array index that had been turned into a switch statement because of a shader compiler "quirk" that caused the array index to invalidate a block of cache memory that was the size of the array, causing cache misses every time the array was accessed. The switch statement was faster, in that specific case. I actually didn't believe my colleague at first because it seemed so counter-intuitive. Can't remember offhand what the shader actually did or what platform it was for, unfortunately...
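The shape of the change was roughly this (a reconstruction, not the actual shader; written as C-style code with invented constants):

    // Before: a constant-array lookup; on that platform the compiler's handling
    // of the array kept invalidating a cache block the size of the array.
    // const float kWeights[4] = {0.1f, 0.2f, 0.3f, 0.4f};
    // float w = kWeights[i];

    // After: the same values as immediate constants in the instruction stream,
    // selected by a switch instead of a memory load.
    float weight(int i) {
        switch (i) {
            case 0:  return 0.1f;
            case 1:  return 0.2f;
            case 2:  return 0.3f;
            default: return 0.4f;
        }
    }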

9 minutes ago, Fulcrum.013 said:

Why do you use switch, which is a virtual jump in the best possible case? It has the same performance as a virtual call in the best case only; in the common case it uses a binary search to find the branch.

We're very off-topic at the moment, but hold up there a second. Binary search over case statements is almost certainly faster than virtual function dispatch.

A binary search of < 100 or so case statements, on a value which is already in a register... might as well be free, compared to the cache miss you may incur with the virtual function dispatch.
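Roughly what that looks like, with typical codegen in the comments (the exact lowering depends on the compiler and target):

    // Sparse case values usually lower to a compare-and-branch tree (the
    // "binary search") against immediate constants; dense values usually
    // lower to a jump table instead.
    int dispatch(int msg) {
        switch (msg) { // e.g. cmp msg,40 / jg ... / je ... - no table load needed
            case 2:   return 10;
            case 40:  return 20;
            case 900: return 30;
            default:  return 0;
        }
    }
    // A virtual call must first load the object's vtable pointer, then load the
    // function pointer out of the vtable: two dependent memory reads, each of
    // which can miss cache before the indirect call even starts.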

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

1 hour ago, swiftcoder said:

A binary search of < 100 or so case statements

But the table that it has to search is in some other place, just like a vtable.

#define if(a) if((a) && rand()%100)

58 minutes ago, Fulcrum.013 said:

But the table that it has to search is in some other place, just like a vtable.

There's no separate table in the binary search case - the search is encoded directly into the assembly instructions, testing against hardcoded constant values.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

1 hour ago, swiftcoder said:

There's no separate table in the binary search case - the search is encoded directly into the assembly instructions, testing against hardcoded constant values.

It makes no difference how that table is organized. Code is stored in memory the same way as any other data and has to be cached before use, like any other data. Also, as we can see, it prevents inlining, so in any case it flushes the pipeline on a function call or far jump. It makes no difference whether the non-local thing is code or data; the result will be the same.

Very old Fortran textbooks stated that any function call introduces a significant delay. That delay is still here on far jumps; one very clever guy by the name of Bjarne just gave us the inlining spell to remove those delays. But not every near jump can be inlined away. Unfortunately, a human is unable to apply the superposition concept at such a low level and decide where that spell is best used, so let the optimizer decide it, and apply superposition at the more important scales where the optimizer cannot follow it. In any case, locally optimal elements do not add up to optimal assembly. So in some places we have to sacrifice flexibility for speed, and in other places speed for flexibility. For speed-critical places where flexibility is unnecessary, the CPU has SIMD, or better yet a built-in GPU, where any branch can cause delays. But in places where high flexibility is needed, speed is usually not critical, and the optimizer can help keep that sacrifice as low as possible, while it cannot minimize sacrifices of flexibility.

Really, the overhead of virtual calls did not cause significant delays even on an 80386 at 33 MHz, so why should it be significant on modern hardware whose on-chip cache is at least twice the size of the 80386's entire memory? Is it because synthetic tests don't do any of the heavy actual work that happens in the places where virtual calls are usually used?

#define if(a) if((a) && rand()%100)

This topic is closed to new replies.
