
Collision detection and response

Started July 06, 2015 04:08 PM
15 comments, last by Aardvajk 9 years, 7 months ago

I just finished reading this article regarding the subject and I have a few questions:

1. I noticed that the article is from year 2000. Are the methods described in it still viable?

2. Okay, so now I basically know how to deal with a collision between an ellipsoid and a polygon (triangle, quad). Now what? Am I supposed to run the collision routine on every polygon in a mesh? If so, does that mean I have to keep the mesh data around and not dump it after I upload it to the GPU? I really have no idea where to start.

3. If you could recommend more learning source on this subject that would be great.

Thanks!

How about here:

http://www.gamedev.net/topic/475753-list-of-physics-engines-and-reference-material-updated-7-march-2011/

And there are a couple of presentations from the GDC physics tutorial here:

http://box2d.org/downloads/


2. Okay, so now I basically know how to deal with a collision between an ellipsoid and a polygon (triangle, quad). Now what? Am I supposed to run the collision routine on every polygon in a mesh? If so, does that mean I have to keep the mesh data around and not dump it after I upload it to the GPU? I really have no idea where to start.

Collision geometry can (and probably should) be different from render geometry. In many cases collision geometry can be made much simpler than render geometry, which reduces the amount of processing you need to do.

Additionally, there are ways of avoiding testing every polygon -- e.g. by partitioning space into discrete chunks. A relevant search term here might be "Binary Space Partitioning (BSP)".
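To make the idea concrete, here is a minimal sketch of an even simpler scheme than BSP: a uniform grid that buckets triangle indices by cell, so a query only has to test the triangles near the moving object. All the types and names below are my own assumptions, not from any particular engine.

```cpp
// Minimal uniform-grid sketch: bucket triangle indices by cell so a query only
// returns triangles near the query box instead of the whole mesh. BSP trees,
// octrees, etc. follow the same "prune before testing" idea.
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Vec3 { float x, y, z; };

class TriangleGrid {
public:
    explicit TriangleGrid(float cellSize) : cellSize_(cellSize) {}

    // Register a triangle under every cell its bounding box touches.
    void insert(int triIndex, const Vec3& mn, const Vec3& mx) {
        forEachCell(mn, mx, [&](std::int64_t k) { cells_[k].push_back(triIndex); });
    }

    // Candidate triangles near a query AABB (may contain duplicates; the
    // narrow phase still has to test each candidate exactly).
    std::vector<int> query(const Vec3& mn, const Vec3& mx) const {
        std::vector<int> out;
        forEachCell(mn, mx, [&](std::int64_t k) {
            auto it = cells_.find(k);
            if (it != cells_.end())
                out.insert(out.end(), it->second.begin(), it->second.end());
        });
        return out;
    }

private:
    template <typename Fn>
    void forEachCell(const Vec3& mn, const Vec3& mx, Fn fn) const {
        for (int x = cell(mn.x); x <= cell(mx.x); ++x)
            for (int y = cell(mn.y); y <= cell(mx.y); ++y)
                for (int z = cell(mn.z); z <= cell(mx.z); ++z)
                    fn(key(x, y, z));
    }
    int cell(float v) const { return static_cast<int>(std::floor(v / cellSize_)); }
    static std::int64_t key(int x, int y, int z) {
        // Simple spatial hash; a collision only means a few extra candidates.
        return (std::int64_t(x) * 73856093) ^ (std::int64_t(y) * 19349663)
             ^ (std::int64_t(z) * 83492791);
    }

    float cellSize_;
    std::unordered_map<std::int64_t, std::vector<int>> cells_;
};
```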

Optimizations here will depend on the type of games/level structure you need to have, though. A game like Metroid Prime (where you load and unload rooms as you traverse through them) can make a different set of assumptions and optimizations than an open world game like e.g. GTA V.


I'm targeting open-world games like GTA and WoW. I just recalled that WoW's render geometry matches its collision geometry most of the time -- when walking through a forest you can actually climb certain trees and structures.
Lacoste, correct me if I'm wrong, but you discussed broad-phase algorithms while I was talking about the narrow phase. Also, I couldn't find anything that looks helpful for my goals in the reference library linked by Dirk.

It sounds like you're confused about where the broad phase ends and the narrow phase begins.

Usually the broad phase collects pairs of colliders that can potentially intersect. The narrow phase throws away all pairs that don't actually intersect. The remainder are handed off to the solver, which pushes the colliders apart into a non-intersecting configuration.
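As a rough sketch of that flow (every name and type here is a placeholder I made up, not any specific engine's API; the broad phase below is just an O(n^2) stand-in and the narrow phase/solver are stubs):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

struct Collider;                     // whatever shape representation you use
struct Contact { Collider* a; Collider* b; /* normal, penetration depth, ... */ };

// Broad phase: cheap, conservative pruning. This placeholder pairs everything
// with everything; real engines use grids, trees, sweep-and-prune, etc.
std::vector<std::pair<Collider*, Collider*>> broadPhase(std::vector<Collider*>& colliders) {
    std::vector<std::pair<Collider*, Collider*>> pairs;
    for (std::size_t i = 0; i < colliders.size(); ++i)
        for (std::size_t j = i + 1; j < colliders.size(); ++j)
            pairs.emplace_back(colliders[i], colliders[j]);
    return pairs;
}

// Narrow phase: exact test on one pair; fills in contact data when they really intersect.
bool narrowPhase(Collider* a, Collider* b, Contact& out) { out = {a, b}; return false; /* stub */ }

// Solver: pushes the colliders in each contact apart / applies impulses.
void solve(std::vector<Contact>& /*contacts*/) { /* stub */ }

void physicsStep(std::vector<Collider*>& colliders) {
    auto pairs = broadPhase(colliders);          // 1. collect potentially intersecting pairs

    std::vector<Contact> contacts;               // 2. keep only the pairs that actually intersect
    for (auto& p : pairs) {
        Contact c;
        if (narrowPhase(p.first, p.second, c))
            contacts.push_back(c);
    }

    solve(contacts);                             // 3. resolve what's left
}
```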

The details of each step can become very involved, and this is where Dirk's links come in. Don't overlook them.

So, which of these steps are you trying to learn more about? As for triangle meshes, there are a lot of different approaches for storing and interacting with them...

Get Christer Ericson's Real-Time Collision Detection (2005).


I just finished reading this fine article and now I have some thoughts about my second question, regarding (vertex) data arrangement when dealing with physics, as opposed to GPU rendering. Note that I'm talking about the narrow phase here.

My current rendering routine uses vertex indices as well: it uploads all the data to the GPU (vertices, normals, UV coordinates, indices), clears the data from program memory, and then calls glDrawElements(GL_TRIANGLES, ...) (which is just an index-based draw call) every time I want to draw the screen.

Now, the obvious approach I thought of for dealing with physics is to store the vertices and indices of every entity (and perhaps normals too, to save some calculations), and then feed each entity's triangles, assembled from the indices, into a Physics::calc_collision_with_triangle() routine.
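Something along those lines would look like the sketch below. To be clear, Entity, Vec3 and the stub body of calc_collision_with_triangle are assumptions of mine; only the overall idea (keep a CPU-side copy of the vertex/index data and walk it in triples) comes from the description above.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

struct Entity {
    std::vector<Vec3>          vertices;   // kept in RAM for physics, also uploaded to the GPU
    std::vector<std::uint32_t> indices;    // every 3 indices form one triangle
};

namespace Physics {
    // The narrow-phase routine from the post (e.g. ellipsoid vs. triangle); stubbed here.
    bool calc_collision_with_triangle(const Vec3&, const Vec3&, const Vec3&) { return false; }
}

bool collideWithEntity(const Entity& e) {
    bool hit = false;
    // Walk the index buffer in triples and test each triangle.
    for (std::size_t i = 0; i + 2 < e.indices.size(); i += 3) {
        const Vec3& a = e.vertices[e.indices[i]];
        const Vec3& b = e.vertices[e.indices[i + 1]];
        const Vec3& c = e.vertices[e.indices[i + 2]];
        hit |= Physics::calc_collision_with_triangle(a, b, c);
    }
    return hit;
}
```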

My question is if that's a legit approach, and if that issue has ever been brought up for discussion.

Sorry if I misunderstood your question, but are you asking about the standard way of running collision detection on objects in a game?

Are you talking about collision detection in physics simulations or general computer graphics applications?

HTH,

Generally, the polygons you render aren't the polygons used to represent solid objects (rigid bodies). In a game, a triangle soup is usually approximated by convex polyhedra (AKA convex hulls/sets), because mesh-mesh intersection tests perform badly. Hull-hull intersection tests are still comparatively slow, though. For instance, a sphere-sphere test can be something like 90% faster than a hull-hull test. Therefore, objects with well-known symmetry such as boxes, spheres, cylinders, etc. get their own intersection routines to increase the performance of the collision detection system.
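For example, a sphere-sphere test is just a squared-distance comparison (the struct below is my own stand-in, not from any particular library):

```cpp
struct Sphere { float x, y, z, r; };

// Two spheres overlap iff the squared distance between their centres is at most
// the squared sum of their radii; no square root and no per-triangle work at all.
bool spheresOverlap(const Sphere& a, const Sphere& b) {
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    const float rsum = a.r + b.r;
    return dx * dx + dy * dy + dz * dz <= rsum * rsum;
}
```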


My question is if that's a legit approach, and if that issue has ever been brought up for discussion.
In particular, for a rigid-body physics simulation in a game, this is a legit approach. However, it is definitely not the standard approach for more demanding applications such as CAD, medical training, 3D modelling, etc.


Now, the obvious approach I thought of for dealing with physics is to store the vertices and indices of every entity (and perhaps normals too, to save some calculations), and then feed each entity's triangles, assembled from the indices, into a Physics::calc_collision_with_triangle() routine.
...or you can abstract everything and separate physics from graphics.

I'm talking about game physics. Actually the ellipsoid-triangle article I linked in my last post is exactly what I'm after.

I want to represent the character in my open-world game as an ellipsoid and check for collision with other meshes. For this I can't rely on approximations (except the character-ellipsoid one). Can you offer a better solution than the one I listed?
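For reference, the usual trick from that ellipsoid article is to scale everything by the inverse ellipsoid radii so the ellipsoid becomes a unit sphere, then run an ordinary sphere-vs-triangle test in that space. Below is a static (non-swept) sketch of that idea; the types are my own, and the closest-point routine follows the standard Voronoi-region construction covered in Ericson's book mentioned earlier.

```cpp
struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s)       const { return {x * s, y * s, z * s}; }
};
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Closest point on triangle abc to point p (standard Voronoi-region walk).
static Vec3 closestPointOnTriangle(const Vec3& p, const Vec3& a, const Vec3& b, const Vec3& c) {
    Vec3 ab = b - a, ac = c - a, ap = p - a;
    float d1 = dot(ab, ap), d2 = dot(ac, ap);
    if (d1 <= 0 && d2 <= 0) return a;                                       // vertex A

    Vec3 bp = p - b;
    float d3 = dot(ab, bp), d4 = dot(ac, bp);
    if (d3 >= 0 && d4 <= d3) return b;                                      // vertex B

    float vc = d1 * d4 - d3 * d2;
    if (vc <= 0 && d1 >= 0 && d3 <= 0) return a + ab * (d1 / (d1 - d3));    // edge AB

    Vec3 cp = p - c;
    float d5 = dot(ab, cp), d6 = dot(ac, cp);
    if (d6 >= 0 && d5 <= d6) return c;                                      // vertex C

    float vb = d5 * d2 - d1 * d6;
    if (vb <= 0 && d2 >= 0 && d6 <= 0) return a + ac * (d2 / (d2 - d6));    // edge AC

    float va = d3 * d6 - d5 * d4;
    if (va <= 0 && d4 - d3 >= 0 && d5 - d6 >= 0)
        return b + (c - b) * ((d4 - d3) / ((d4 - d3) + (d5 - d6)));         // edge BC

    float denom = 1.0f / (va + vb + vc);
    return a + ab * (vb * denom) + ac * (vc * denom);                       // inside the face
}

static Vec3 toEllipsoidSpace(const Vec3& p, const Vec3& radii) {
    return {p.x / radii.x, p.y / radii.y, p.z / radii.z};
}

// Static overlap test: scale into "ellipsoid space" (ellipsoid -> unit sphere),
// then compare the closest point on the triangle against radius 1.
bool ellipsoidIntersectsTriangle(const Vec3& center, const Vec3& radii,
                                 Vec3 a, Vec3 b, Vec3 c) {
    Vec3 c0 = toEllipsoidSpace(center, radii);
    a = toEllipsoidSpace(a, radii);
    b = toEllipsoidSpace(b, radii);
    c = toEllipsoidSpace(c, radii);
    Vec3 d = closestPointOnTriangle(c0, a, b, c) - c0;
    return dot(d, d) <= 1.0f;
}
```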

...or you can abstract everything and separate physics from graphics.

What do you mean? Of course, if I decide to go with my approach I'm going to store the mesh data in some back-end layer and then feed both the rendering and physics pipelines from it. Is that what you meant?



Can you offer a better solution than the one I listed?

I can't, but I represent (physically) all my meshes as subdivided convex hulls. As for triangle meshes, I definitely don't use them in my physics engine.



What do you mean? Of course, if I decide to go with my approach I'm going to store the mesh data in some back-end layer and then feed both the rendering and physics pipelines from it. Is that what you meant?

Yeah, the physics simulation doesn't know about renderable meshes.
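A minimal sketch of what that separation can look like (all names here are assumptions): the entity owns both representations, and each system only ever touches its own side.

```cpp
struct CollisionShape;   // hull, ellipsoid, triangle soup, ... (physics side only)
struct GpuMesh;          // VBO/IBO handles, materials, ...     (render side only)

struct Entity {
    CollisionShape* body = nullptr;   // updated by the physics system
    GpuMesh*        mesh = nullptr;   // drawn by the rendering system
    float transform[16] = {};         // the one thing the two systems share
};
```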

