Physics system stumping AI
Hey all,
I'm developing a 2D space game involving some pesky enemy ships that constantly attack the player.
My game has a rigid-body dynamics system implemented (which is why I chose this forum over the physics one), and something as simple as flying towards the player is tripping up the AI because of the extra maths behind movement.
Rotation is posing the problem. Ships have values for torque, angular velocity and orientation, so it's not as simple as 'when you're facing the player, stop turning': the ship still has to overcome its angular velocity to stop. The ships end up flying in circles.
I'm not sure how to approach the issue.
I could bend the laws of my physics and let enemies stop rotating instantaneously, but that would be a last resort.
Slowing their rotation speed enough can prevent the issue, but by the time it's slow enough to work, the ships have giant, laughable turning circles.
I know they need to aim to have angular velocity = 0 at the moment they face the player, but I don't know how to implement this.
Thanks for your time!
- Haptic
I have recently spliced OpenSteer into an existing 3D space simulation I am working on.
http://www.arcnebula.com
They are linked to and credited on my links page.
It was super super easy. It's Open Source, and the community is active.
I actually found that I could even overlay the OpenGL-based visualisations of its logic onto my app, so I could see the decisions my actors are making.
It has both 2D and 3D implementations, chase and flee, as well as boids and loads of other cool stuff, and is designed in a modular way to allow new behaviors through C++ function "PlugIns".
All the things you need, like angular velocity etc. are there under the hood.
If I could splice it into a 1-year-old multi-threaded project, which already has physics and other more limited AI, in an afternoon without problems, I would assume it would not be too hard for you. Hope that helps...
As far as I know there are roughly two approaches.
1. A PID controller. (http://en.wikipedia.org/wiki/PID_controller)
2. Exact calculation of forces
I've always gone for #2 (I think #1 is the norm). Basically, you write a kind of 'controller': you give it the initial and desired physical states, and it calculates the forces needed to get there. Typically you assume some maximum acceleration and maximum velocity to make the movement more realistic. It really depends on what you want to do, though.
My 1D controller looks roughly like this:
class Controller
{
public:  // note: without this, the constructor defaults to private
    Controller( float maxAcceleration, float maxSpeed, float dt )
    {
        _maxAcceleration = maxAcceleration;
        _maxSpeed = maxSpeed;
        // Distance at which to start slowing down: braking distance at max
        // speed, plus one timestep of travel as a safety margin.
        _dx = _maxSpeed*_maxSpeed/(2.0f*_maxAcceleration) + _maxSpeed*dt;
    }

    float getAcceleration( float x0, float v0, float xf, float dt )
    {
        float dx = xf - x0;                         // signed distance to target
        float sign = (dx > 0.0f) ? 1.0f : -1.0f;
        dx *= sign;                                 // absolute distance

        // Desired speed: full speed far away, ramping down linearly inside
        // the braking zone so we arrive with zero velocity.
        float vf = (dx < _dx) ? dx*_maxSpeed/_dx : _maxSpeed;

        // Acceleration needed this frame to reach the desired speed,
        // clamped to what the ship can actually do.
        float a = (sign*vf - v0)/dt;
        if ( a < -_maxAcceleration ) a = -_maxAcceleration;
        else if ( a > _maxAcceleration ) a = _maxAcceleration;
        return a;
    }

private:
    float _maxAcceleration;
    float _maxSpeed;
    float _dx;
};
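To apply it to your rotation problem, you'd feed it angles instead of positions. Roughly like this (my untested sketch; Ship, applyTorque, inertia and friends are placeholders for whatever your engine uses):

#include <cmath>

// Rough usage sketch (untested): treat orientation as the 1D position and
// angular velocity as the 1D velocity. Ship and its members are placeholders.
void steerTowards( Controller& ctrl, Ship& ship, float angleToPlayer, float dt )
{
    // Shortest signed angle from current facing to the player, in [-pi, pi].
    float error = std::remainder( angleToPlayer - ship.orientation, 6.28318531f );

    // Ask the controller for the angular acceleration this frame...
    float alpha = ctrl.getAcceleration( 0.0f, ship.angularVelocity, error, dt );

    // ...and convert it to a torque for the physics system (torque = I * alpha).
    ship.applyTorque( ship.inertia * alpha );
}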
jdindia and scratt: thank you very much for the replies; they are greatly appreciated. I'll look into your suggestions when I get home a little later on.
- Haptic
I agree with jdindia: the right way to approach this problem is probably to apply control theory to it. Old-fashioned PID or whatever might be sufficient, but my first impulse is to say that what you really want is a min-time optimal controller. You can think of this as the solution to the "pathfinding" problem in state space.
Another idea, from robotics, would be RRTs (rapidly-exploring random trees).
I use a much simpler method, after spending some time trying to set up an optimal control solution (which is much too complicated for simple stuff). Instead, each frame I use the current velocity and angular velocity to calculate where I will be a certain time (typically between 1 and 3 seconds) into the future, calculate the difference between that and my target, and thrust/torque to bring the prediction closer to the target.
With careful tuning of the prediction time, this method is almost indistinguishable from the control theory approach. You can see the (very informative) discussion here.
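In code, the core of it is only a few lines. Something like this (a rough sketch with made-up names and gains; the lookahead and gain need tuning per game):

#include <cmath>

// Minimal sketch of the predictive approach (untested; names are made up).
float steeringTorque( float orientation, float angularVelocity,
                      float targetAngle, float inertia )
{
    const float lookahead = 2.0f;   // seconds into the future (tune, 1-3s)
    const float gain      = 1.0f;   // how hard to correct (tune)

    // Where will I be facing in `lookahead` seconds if I do nothing?
    float predicted = orientation + angularVelocity * lookahead;

    // Torque proportional to the predicted error pulls the prediction toward
    // the target; the lookahead term acts as built-in damping on the spin.
    float error = std::remainder( targetAngle - predicted, 6.28318531f );
    return gain * inertia * error;
}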
Quote: Original post by swiftcoder
I use a much simpler method, after spending some time trying to set up an optimal control solution (which is much too complicated for simple stuff). Instead, each frame I use the current velocity and angular velocity to calculate where I will be a certain time (typically between 1 and 3 seconds) into the future, calculate the difference between that and my target, and thrust/torque to bring the prediction closer to the target.
With careful tuning of the prediction time, this method is almost indistinguishable from the control theory approach. You can see the (very informative) discussion here.
@swiftcoder:
Since our earlier discussion, I've learned that there are much simpler ways to solve these problems. In particular, please ignore the algorithms I sketched at the end of that thread (actually, I'll leave a note there now). Instead, the algorithm to use is called "Value Iteration." I may have mentioned it in our previous discussion, but back then I misunderstood it!
(Everything I said about costate equations and two-point boundary value problems, however, was correct.)
Anyway, with value iteration I think this problem is entirely feasible under an optimal control framework, and not that hard to solve. It's my fault that I made it look so hard earlier!
Gimme a second and I'll post a better explanation...
Maybe I'm missing something, but can't you just solve the rotational kinematic equations?
1. theta = (wi + wf)/2 * t
2. wf^2 = wi^2 + 2*torque*theta
theta is the angle you still need to turn through to face the target
wi is the current angular velocity
t is the time since the last frame
wf is the calculated angular velocity
torque is what you apply to your entity after it's calculated (strictly speaking this is an angular acceleration; multiply by the moment of inertia to get the actual torque)
solve #1 for wf, then plug that into #2 and solve for torque, and apply that torque to the ship?
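Something like this, maybe (a quick untested sketch; the names are mine, and I treat the result of #2 as an angular acceleration, multiplying by inertia at the end):

#include <cmath>

// Quick untested sketch of the kinematic solve above. Names are mine.
// theta: signed angle still to turn through, wrapped to [-pi, pi]
//        (e.g. via std::remainder(target - orientation, 2*pi))
// wi:    current angular velocity
// t:     time since the last frame
float torqueToFace( float theta, float wi, float t, float inertia )
{
    if ( std::fabs(theta) < 1e-5f )      // already facing the target:
        return inertia * ( -wi / t );    // just cancel any residual spin

    float wf    = 2.0f*theta/t - wi;                // #1 solved for wf
    float alpha = (wf*wf - wi*wi) / (2.0f*theta);   // #2 solved for alpha

    // torque = I * alpha; in practice, clamp this to the ship's max torque.
    return inertia * alpha;
}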
OK, more on value iteration. It's very simple, and described in Steve LaValle's (Excellent) Planning Book, but I hadn't "gotten" it really until now.
The idea: First you find the Bellman Value Function for the 0-step plan, then the 1-step plan, then the 2-step plan, and so on. However, this is NOT a Dijkstra-like algorithm; you update the ENTIRE Bellman Value Function every iteration. (In some cases this isn't necessary, and the algorithm can reduce to something Dijkstra-like, but we'll ignore that for now, because that's less general).
Basically, at iteration k+1, you start with an array (grid) representing the k-step Bellman Value Function, and you update it to get the k+1-step value function. The algorithm works like this:
for k = 0 to FINAL_TIME/delT
    for every state x in your state-space grid
        for every control input u (sweep over possible inputs, with some resolution)
            Look to see from which state you would come to this state x in delT
            seconds by applying control input u. Call this other previous state
            x_prev. If you were to do this, the (k+1)-step cost-to-go at x would
            be V_k(x_prev) + L(x, u, t).

            (Note that since x_prev will probably not be at a grid point, you'll
            need to get V_k(x_prev) by interpolation.)

            (Also note that in the above, L(x, u, t) is just your instantaneous
            cost function. For pure minimum-time problems, you simply have
            L(x, u, t) = 1; i.e., all instants cost something, and they all cost
            the same. If you had terrain with different movement costs, or if
            some control inputs were more expensive than others (like they burnt
            more fuel or something), then L(x, u, t) might be something else.)
        Set V_{k+1}(x) equal to the smallest of the cost-to-gos computed in the
        above (inner, or 'u') loop.
As for initialization: You initialize V_0 to the terminal cost. For problems where you MUST reach a particular final state, set V(goal_state)=0, and V(all_other_states)=infinity.
That's it.
For an example, consider a very simple system. The state space is just the Euclidean plane, and the system is simply,
d/dt [ x_1 ; x_2 ] = vel * [ cos(u) ; sin(u) ]
Basically, it's just a dude who can move in any direction on the plane. And suppose that you want to get to the location (20.5, 20.5). Since the shortest path between two points is a straight line, the Bellman Value function is going to be
V = (1/vel)*sqrt( (x_1 - 20.5)^2 + (x_2 - 20.5)^2 ).
Here is some quick MATLAB code I just hacked together which solves this problem numerically. It should be easy to extend to your higher-dimensional state space:
function value_iter()
    W = 40; H = 40;
    BIG_NUM = 1e12;

    % Initialize V_0: zero at the goal cells (around (20.5, 20.5)),
    % "infinity" everywhere else.
    V = BIG_NUM*ones(H,W);
    V(20,20) = 0; V(20,21) = 0; V(21,20) = 0; V(21,21) = 0;
    V_next = V;

    delT = 1;
    vel = sqrt(2);

    for i=1:ceil(sqrt(W^2 + H^2))+1
        changes = 0;
        for y=1:H
            for x=1:W
                V_best = inf;
                % Sweep over the control inputs with some resolution.
                for u=linspace(-pi,pi, 20)
                    p_this = [x;y] + vel*delT*[cos(u); sin(u)];
                    if(p_this(1) >= 1 && p_this(1) <= W && ...
                       p_this(2) >= 1 && p_this(2) <= H)
                        V_this = delT + bilinear(V, p_this(2), p_this(1), W, H);
                        if(V_this < V_best)
                            V_best = V_this;
                        end
                    end
                end
                if(V_best < V(y,x) - 1e-9)
                    V_next(y,x) = V_best;
                    changes = changes + 1;
                end
            end
        end
        V = V_next;
        if(changes == 0)
            break;   % value function has converged
        end
    end
end

% Bilinear interpolation of V at the fractional point (r, c). (This helper
% was called but not shown in the original post; here is one way to write it.)
function v = bilinear(V, r, c, W, H)
    r0 = floor(r); c0 = floor(c);
    r1 = min(r0+1, H); c1 = min(c0+1, W);
    fr = r - r0; fc = c - c0;
    v = (1-fr)*((1-fc)*V(r0,c0) + fc*V(r0,c1)) + ...
           fr *((1-fc)*V(r1,c0) + fc*V(r1,c1));
end
There are additional optimizations that can be done to the algorithm to make it more Dijkstra-like in some cases; for info on this see Steve LaValle's book (linked to above) or, for instance, Sethian's fast-marching methods (but keep in mind that fast marching methods are less general than value iteration).
Anyway, I hope that this explains Value Iteration to your satisfaction!
Fascinating to see the different solutions being proposed. All are workable... I'd probably pick these in order:
1) In this simple case, solve the equation. This works for even more complicated simulations, but it gets harder.
2) If you plan on making it a little more complex later, use a simple PD controller and tweak it (see the sketch after this list).
3) Remove the physics for the AI by design, that's another common solution.
4) Value iteration is overkill, but it's a good skill to have!
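For reference, option 2 can be as small as this (a minimal sketch with made-up gains; it assumes the angle error is already wrapped to [-pi, pi]):

// Minimal PD controller sketch for option 2 (made-up gains; tune them).
// error: wrapped angle to the target; w: current angular velocity.
float pdTorque( float error, float w, float inertia )
{
    const float kP = 4.0f;   // proportional gain: turn toward the target
    const float kD = 3.0f;   // derivative gain: damp the spin so the ship
                             // doesn't overshoot and fly in circles
    return inertia * (kP * error - kD * w);
}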
Alex