Steering a Newtonian spaceship
I'm trying to implement steering behaviors to get some basic pursuit logic in a 2D game. The problem I'm running into is that the algorithms I've found seem to assume your heading and velocity will be along the same vector.
In my case I have spaceships which turn by applying angular velocity and can accelerate only in the direction they are facing. I very much don't want my AI to "cheat" so I'm set on using the same interface that the player would. Is Boids not an appropriate model here?
I've read some things about PID controllers, but information applicable to games seems scarce, so I'm not sure if that's the right approach either.
I have had some success using lookahead and heuristics, but it's a little slow and fine-tuning the behavior has proven difficult.
I've seen this thread: http://www.gamedev.net/community/forums/topic.asp?topic_id=512372 but it seems he has a constant steering rate. The math suggested as a solution is also way over my head which I hope doesn't mean I'm doomed here.
[Edited by - wild_pointer on April 8, 2009 12:21:31 AM]
If you look around on these boards you'll find more on this (e.g., here, which in turn references some other posts). But basically it all comes down in these threads to,
- swiftcoder came up with a nice predictive controller (which it seems you've seen)
- I played around with optimal control.
- First I started with Hamiltonians and costate equations and such (which it seems you've also seen... but I understand that it's not math you'll just pick up from a forum post. And besides, I never really got anywhere with it for this problem).
- Later I came to understand a "brute force" planning algorithm known as Value Iteration with Interpolation which requires almost none of the "complicated math" but which just takes a long time (so I assume you'd run this offline and just store the results in a table if you used it).
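To make the value-iteration-with-interpolation option concrete, here's a minimal sketch on a toy 1-D version of the problem (state is just position and velocity, thrust in {-1, 0, +1}, minimize time to reach the origin). The grid size, time step, and tolerance are all made-up illustrative numbers, and a real ship would add heading and angular velocity as extra state dimensions:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Value Iteration with Interpolation on a toy 1-D "ship": state (x, v),
// control a in {-1, 0, +1}; reach |x|,|v| < tol in minimum time.
// All constants here are illustrative choices, not from the thread.
struct ValueGrid {
    static const int N = 41;            // grid points per dimension
    double lo, hi;
    std::vector<double> V;              // value = estimated time-to-go

    ValueGrid() : lo(-2.0), hi(2.0), V(N * N, 1e3) {}

    double cell(int i, int j) const { return V[i * N + j]; }
    double step() const { return (hi - lo) / (N - 1); }

    // Bilinear interpolation of V at a continuous state (x, v), clamped to the grid.
    double interp(double x, double v) const {
        double h = step();
        double fx = (x - lo) / h, fv = (v - lo) / h;
        if (fx < 0) fx = 0; if (fx > N - 1) fx = N - 1;
        if (fv < 0) fv = 0; if (fv > N - 1) fv = N - 1;
        int i = (int)fx, j = (int)fv;
        if (i > N - 2) i = N - 2;
        if (j > N - 2) j = N - 2;
        double ax = fx - i, av = fv - j;
        return (1 - ax) * (1 - av) * cell(i, j)     + ax * (1 - av) * cell(i + 1, j)
             + (1 - ax) * av       * cell(i, j + 1) + ax * av       * cell(i + 1, j + 1);
    }

    // One Bellman sweep; returns the largest change (for a convergence check).
    double sweep(double dt, double tol) {
        double h = step(), delta = 0.0;
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j) {
                double x = lo + i * h, v = lo + j * h;
                if (std::fabs(x) < tol && std::fabs(v) < tol) { V[i * N + j] = 0.0; continue; }
                double best = 1e3;
                for (int a = -1; a <= 1; ++a)              // try each thrust setting
                    best = std::min(best, dt + interp(x + v * dt, v + a * dt));
                delta = std::max(delta, std::fabs(best - V[i * N + j]));
                V[i * N + j] = best;
            }
        return delta;
    }
};

// Run the offline planning step and read off a time-to-go estimate.
double timeToGo(double x, double v) {
    ValueGrid g;
    for (int k = 0; k < 400; ++k)
        if (g.sweep(0.1, 0.15) < 1e-6) break;
    return g.interp(x, v);
}
```

At runtime you'd just store `V` in a table and, at each state, pick the action whose successor has the smallest interpolated value.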
Of these, #2 actually isn't that hard to do, so I think that even if you have to run it overnight it might not be completely unreasonable, and it's the best I've come up with so far if you want optimality (which, admittedly, might not matter).
But since I talked about that in the other threads that I linked to, I'll write something completely different (and simpler) here:
If you can wrap a controller around your spaceship that takes a "goal" heading angle and velocity, then maybe you can pretend that your spaceship really does behave "like a car" at a higher level to use some of Craig Reynold's steering behaviors.
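As a sketch of what the top layer might look like: a Reynolds-style pursuit behaviour that emits a goal heading and speed, which a separate low-level controller (not shown here) would then chase with torque and thrust. The `Vec2` type, the `Goal` struct, and the crude time-to-intercept estimate are illustrative choices, not anything from the thread:

```cpp
#include <cmath>

// Two-layer idea: a pursuit steering behaviour picks a goal velocity as if the
// ship were a simple point mass; a separate controller then chases that goal
// heading and speed within the ship's real torque/thrust limits.
struct Vec2 {
    double x, y;
    Vec2 operator+(Vec2 o) const { return {x + o.x, y + o.y}; }
    Vec2 operator-(Vec2 o) const { return {x - o.x, y - o.y}; }
    Vec2 operator*(double s) const { return {x * s, y * s}; }
    double length() const { return std::sqrt(x * x + y * y); }
};

struct Goal { double heading; double speed; };  // what the low-level controller consumes

// Pursuit: aim at where the target will be, not where it is.
Goal pursuitGoal(Vec2 myPos, Vec2 targetPos, Vec2 targetVel, double maxSpeed) {
    double dist = (targetPos - myPos).length();
    double lookahead = dist / maxSpeed;                 // crude time-to-intercept estimate
    Vec2 predicted = targetPos + targetVel * lookahead; // predicted target position
    Vec2 toTarget = predicted - myPos;
    return { std::atan2(toTarget.y, toTarget.x), maxSpeed };
}
```

The point of the split is that everything above this line can stay in the familiar steering-behaviour vocabulary, while all the Newtonian awkwardness lives in the controller below it.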
Quote: Original post by wild_pointer
I'm trying to implement steering behaviors to get some basic pursuit logic in a 2D game. The problem I'm running into is that the algorithms I've found seem to assume your heading and velocity will be along the same vector.
Steering behaviours are a good model of where an agent wants to go; if you replace simplistic arbitrary kinematic updates with a dynamic model, the agents will simply take some time to turn and accelerate but they still have their instantaneous velocities and orientations to use for steering behaviour calculations.
You can choose to take either the speed vector or the unit's facing as reference for different purposes: for example aligned velocity can be good for movement in formation and aligned facing to shoot.
Quote: Original post by wild_pointer
In my case I have spaceships which turn by applying angular velocity and can accelerate only in the direction they are facing. I very much don't want my AI to "cheat" so I'm set on using the same interface that the player would. Is Boids not an appropriate model here?
How can the AI "cheat" if it's constrained by a certain maximum torque and acceleration? Different heuristics can make otherwise similar enemies peculiar and tactically interesting; modifying constants like the maximum torque and acceleration, the desired maximum speed, and various steering behaviour parameters might be sufficient to create variety and tune difficulty.
Quote: Original post by wild_pointer
I've read some things about PID controllers but information applicable to games seems scarce, so I'm not sure if that's the right approach either.
I have had some success using lookahead and heuristics but it's a little slow and fine-tuning the behavior has proven difficult.
I suggest allowing some acceleration backwards and sideways for generality (with braking the ship can decelerate without clumsily turning 180 degrees; with lateral thrust it can swerve and orient itself more freely).
If your ships are already able to turn and fly, what remains to be tuned and improved exactly?
Omae Wa Mou Shindeiru
Quote: Original post by Emergent
If you look around on these boards you'll find more on this (e.g., here, which in turn references some other posts). But basically it all comes down in these threads to,
- swiftcoder came up with a nice predictive controller (which it seems you've seen)
- I played around with optimal control.
- First I started with Hamiltonians and costate equations and such (which it seems you've also seen... but I understand that it's not math you'll just pick up from a forum post. And besides, I never really got anywhere with it for this problem).
- Later I came to understand a "brute force" planning algorithm known as Value Iteration with Interpolation which requires almost none of the "complicated math" but which just takes a long time (so I assume you'd run this offline and just store the results in a table if you used it).
Of these, #2 actually isn't that hard to do, so I think that even if you have to run it overnight it might not be completely unreasonable, and it's the best I've come up with so far if you want optimality (which, admittedly, might not matter).
But since I talked about that in the other threads that I linked to, I'll write something completely different (and simpler) here:
If you can wrap a controller around your spaceship that takes a "goal" heading angle and velocity, then maybe you can pretend that your spaceship really does behave "like a car" at a higher level to use some of Craig Reynold's steering behaviors.
I'm surprised I missed some of those threads - gives me some more to read. Thanks for all your input on the topic, Emergent.
Your last idea is really interesting (not that I'm dismissing value iteration; I need to look at it more closely when I get home). I would love to be able to use the steering behaviors because they're intuitive, and having that kind of interface would greatly simplify layering actual behaviors on top.
I bet I could re-purpose my lookahead approach to just figure out how to reach the goal heading and velocity. Actually, broken down into this simpler problem, figuring it out algorithmically might be possible (for me) too. Matching the velocity on its own is trivial. Heading is a little more difficult because I need to allow for angular deceleration once I approach the goal heading. I'm not sure how I could allow for both to happen at once and have it look natural. Maybe some rules would work (don't accelerate too much while your actual and goal heading differ greatly).
Of course, if I'm turning and accelerating at the same time, my position is going to change which means my original goal heading is going to be wrong. That might not be a problem if I'm doing this every frame but I get the impression that my ships might end up taking a kind of lazy arc to their target? I guess I'll just need to try it.
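The "allow for angular deceleration" test has a neat closed form: with angular acceleration alpha, a spin of omega sweeps omega^2 / (2 * alpha) radians while braking to zero, so you start braking once the remaining angle drops below that. A hedged sketch (the command enum, the thresholds, and the cosine thrust-scaling rule are illustrative guesses, not the code from this thread):

```cpp
#include <algorithm>
#include <cmath>

// Bang-bang turn logic: keep turning toward the goal until the angle left to
// cover is no more than the angle we'd sweep while braking our spin to zero,
// then brake so we don't overshoot.
enum TurnCmd { TurnToward, TurnAway, Coast };

// theta: signed angle remaining to the goal heading (radians)
// omega: current angular velocity, positive when spinning in the direction of theta
// alpha: magnitude of the available angular acceleration
TurnCmd turnCommand(double theta, double omega, double alpha) {
    double remaining = std::fabs(theta);
    double stoppingAngle = (omega * omega) / (2.0 * alpha); // swept while braking to zero
    if (omega > 0.0 && remaining <= stoppingAngle)
        return TurnAway;       // brake: continuing would overshoot
    if (remaining > 1e-3)
        return TurnToward;     // keep accelerating/holding toward the goal
    return Coast;
}

// One possible "don't accelerate too much while headings differ" rule:
// scale thrust by the cosine of the heading error, zero past 90 degrees.
double thrustScale(double theta) {
    return std::max(0.0, std::cos(theta));
}
```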
Quote: Original post by wild_pointer
I get the impression that my ships might end up taking a kind of lazy arc to their target? I guess I'll just need to try it.
Figuring out exactly what kind of "lazy arcs" your ship "likes" to travel in would be helpful, I feel. Really, there are only six cases (corresponding to combinations of thrust/no-thrust and turn-left/no-turn/turn-right)... The solutions to these ODEs aren't obvious to me, but I have a nagging suspicion that closed-form solutions exist (which would help us a lot, I think).
I did some numerical experiments with the ode,
function dxdt = f(t, x)
    p = x(1:2);
    v = x(3:4);
    theta = x(5);
    a = [cos(theta); sin(theta)];
    w = 1;
    dxdt = [v; a; w];
end
which just applies a constant thrust and torque. Here are results for two different initial conditions.
(These are plots of the trajectories in the plane. "Time" is encoded by the spacing of the arrows -- which are evenly-spaced in time. The arrows show the current heading angle.)
I wish patterns jumped out, but, like I said, I still have a nagging feeling that these can be solved analytically, anyway.
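For what it's worth, this particular ODE does have a closed-form solution: with a constant turn rate w the heading is theta0 + w*t, so the acceleration components are pure sinusoids, and integrating twice gives a straight drift plus a circle of radius 1/w^2, i.e. a cycloid. Here's a sketch that cross-checks the formula against brute-force integration (symbols as in the ODE above; the step counts are arbitrary):

```cpp
#include <cmath>

// Closed-form solution of dp/dt = v, dv/dt = (cos th, sin th), dth/dt = w:
//   th(t) = th0 + w t
//   v(t)  = v0 + ( (sin th - sin th0)/w, (cos th0 - cos th)/w )
//   p(t)  = p0 + v0 t + ( (cos th0 - cos th)/w^2 - t sin th0 / w,
//                         t cos th0 / w - (sin th - sin th0)/w^2 )
// i.e. linear drift plus a circle of radius 1/w^2 -- a cycloid.
struct State { double px, py, vx, vy, th; };

State closedForm(State s0, double w, double t) {
    double th = s0.th + w * t;
    double s = std::sin(th) - std::sin(s0.th);
    double c = std::cos(s0.th) - std::cos(th);
    return { s0.px + s0.vx * t + c / (w * w) - t * std::sin(s0.th) / w,
             s0.py + s0.vy * t + t * std::cos(s0.th) / w - s / (w * w),
             s0.vx + s / w,
             s0.vy + c / w,
             th };
}

// Small-step Euler integration of the same ODE, for cross-checking.
State integrate(State s, double w, double t, int steps) {
    double dt = t / steps;
    for (int i = 0; i < steps; ++i) {
        s.px += s.vx * dt; s.py += s.vy * dt;
        s.vx += std::cos(s.th) * dt; s.vy += std::sin(s.th) * dt;
        s.th += w * dt;
    }
    return s;
}
```

The thrust-while-turning case in Emergent's six-case breakdown is the nasty one (turn rate itself changing gives Fresnel-type integrals); the constant-turn-rate case above is the one that comes out clean.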
I finally had a chance to try out Emergent's suggestion of wrapping the ship in a controller to let me use the steering behaviors. It doesn't work, for a bunch of reasons that are obvious in retrospect :)
float ShipMind::Approach(boost::shared_ptr<Entity> entity, const D3DXVECTOR3& goalVelocity) const
{
    // TODO: pull these from Ship
    const float maxSpeed = 100.0f;
    const float maxAngularSpeed = 10.0f;
    const float angularAcceleration = 0.08f;

    // Adjust Heading
    boost::shared_ptr<Transform> transform = entity->FindComponent<Transform>("Transform");
    boost::shared_ptr<Body> body = entity->FindComponent<Body>("Body");
    boost::shared_ptr<Ship> ship = entity->FindComponent<Ship>("Ship");

    D3DXVECTOR3 goalHeading;
    D3DXVec3Normalize(&goalHeading, &goalVelocity);

    float theta = std::acos(D3DXVec3Dot(&transform->Heading(), &goalHeading));

    // How long would it take to decelerate from current angular velocity to 0?
    float omega = D3DXVec3Length(&body->AngularVelocity());
    float angularDecelerationTime = omega / angularAcceleration;
    angularDecelerationTime /= 60.0f;

    // How long until we reach goal heading at current angular velocity?
    float timeToReachGoalHeading = (omega > 0) ? theta / omega : std::numeric_limits<float>::infinity();

    D3DXVECTOR3 axis;
    D3DXVec3Cross(&axis, &transform->Heading(), &goalHeading);
    if(axis.z < 0) // right-handed
    {
        theta = -theta;
    }

    if(timeToReachGoalHeading > angularDecelerationTime)
    {
        if(theta < 0)
        {
            ship->Rotate(Ship::Clockwise);
        }
        else
        {
            ship->Rotate(Ship::Counterclockwise);
        }
    }

/*
    // Early out (Maybe if I try to complete turn before accelerating?)
    theta = std::acos(D3DXVec3Dot(&transform->Heading(), &goalHeading));
    if((theta / D3DX_PI) > 0.01f)
    {
        return 0;
    }
*/

    // Adjust Speed
    float currentSpeed = D3DXVec3Length(&body->LinearVelocity());
    float goalSpeed = D3DXVec3Length(&goalVelocity);

    if(currentSpeed < goalSpeed) // ship automatically tries to stop turning if not actively turning
    {
        ship->Accelerate(Ship::Forward);
    }
    else
    {
        ship->Accelerate(Ship::Reverse);
    }

    return 0.0f;
}
I usually end up in some kind of expanding loop around the target. Accelerating while turning means I never seem to quite match the goal velocity since I end up moving laterally. Also, matching speed is a lot harder than I realized because, for example, accelerating in reverse could very well increase my speed depending on my heading.
Quote: Original post by wild_pointer
I finally had a chance to try out Emergent's suggestion of wrapping the ship in a controller to let me use the steering behaviors. It doesn't work, for a bunch of reasons that are obvious in retrospect :)
[...]
I usually end up in some kind of expanding loop around the target. Accelerating while turning means I never seem to quite match the goal velocity since I end up moving laterally. Also, matching speed is a lot harder than I realized because, for example, accelerating in reverse could very well increase my speed depending on my heading.
Shucks... Maybe we need a more principled controller for this...
Here's a thought: Once you're at speed, the thrust you need to apply is always perpendicular to the desired trajectory, in the plane of the instantaneous osculating circle. I.e., it's proportional to the curvature and along the normal vector of the Frenet-Serret frame. Or in yet other words, apply the centripetal acceleration necessary to turn in whatever way you want.
Maybe this will work better?
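For reference, the algebra behind that: a circular arc of radius r at speed s needs centripetal acceleration s^2 / r toward the centre; equivalently, a thrust of magnitude T held perpendicular to the velocity buys a turn radius of s^2 / T and a heading rate of s / r = T / s. A few trivial helpers to keep the relations straight (names are mine, not from the thread):

```cpp
// Centripetal relations for a steady turn at constant speed.
// speed:  ship speed along the path
// radius: radius of the circular arc being flown
// thrust: acceleration magnitude applied perpendicular to the velocity
double centripetalAccel(double speed, double radius) { return speed * speed / radius; }
double turnRadius(double speed, double thrust)       { return speed * speed / thrust; }
double turnRate(double speed, double thrust)         { return thrust / speed; }
```

One consequence worth noticing: at fixed thrust, the faster the ship is going, the wider its turns get (radius grows with the square of speed), which is exactly the "lazy arc" behaviour from earlier in the thread.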
More...
Consider a controller with a few distinct modes:
1. Go straight (as the name implies. Also, accelerate if not yet at speed)
2. Prepare for turn (without thrusting, rotate ship 90 degrees to the direction of motion so thrust will be tangent to path)
3. Turn (thrust while rotating to provide centripetal force)
4. Recover from turn (without thrusting, rotate ship towards the direction of motion).
Your paths will now be composed of circular arcs and straight lines. This is almost "the same as a car" except that, since you need to put modes 4 and 1 between turns in different directions, a line segment of a particular length will be required between each pair of arcs in different directions.
So now that you have a way to track a trajectory of this form, all you have to do is come up with a trajectory that you want to track.
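The four modes could be wired up as a small state machine along these lines. The transition thresholds and the "does the trajectory want an arc here" flag are placeholders of mine; a real controller would also command the actual thrust and torque inside each mode:

```cpp
#include <cmath>

// The four controller modes above as a minimal transition function.
enum Mode { GoStraight, PrepareTurn, Turn, Recover };

// headingVsVelocity: angle between the ship's facing and its velocity (radians)
// wantTurn: the trajectory layer is asking for an arc at this point on the path
Mode nextMode(Mode m, double headingVsVelocity, bool wantTurn) {
    const double halfPi = 1.5707963267948966, eps = 0.05;
    switch (m) {
        case GoStraight:
            return wantTurn ? PrepareTurn : GoStraight;
        case PrepareTurn:  // rotate until facing is perpendicular to the motion
            return std::fabs(headingVsVelocity - halfPi) < eps ? Turn : PrepareTurn;
        case Turn:         // thrust provides the centripetal force while we rotate
            return wantTurn ? Turn : Recover;
        case Recover:      // rotate back toward the direction of motion
            return std::fabs(headingVsVelocity) < eps ? GoStraight : Recover;
    }
    return m;
}
```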
Quote: Original post by Emergent
More...
Consider a controller with a few distinct modes:
1. Go straight (as the name implies. Also, accelerate if not yet at speed)
2. Prepare for turn (without thrusting, rotate ship 90 degrees to the direction of motion so thrust will be tangent to path)
3. Turn (thrust while rotating to provide centripetal force)
4. Recover from turn (without thrusting, rotate ship towards the direction of motion).
Your paths will now be composed of circular arcs and straight lines. This is almost "the same as a car" except that, since you need to put modes 4 and 1 between turns in different directions, a line segment of a particular length will be required between each pair of arcs in different directions.
So now that you have a way to track a trajectory of this form, all you have to do is come up with a trajectory that you want to track.
Thanks for your continued help Emergent.
That's a good idea. In retrospect, it's obvious that the best way to set a new trajectory is not simply to turn to a heading along that trajectory and apply thrust. Right now I don't have the capability to "plan" actions that will span several updates, so I'll have to build up some functionality for that first.
The last few days I'd been tinkering again with a heuristic-based approach, but it's really difficult to balance the different factors influencing the value of a state. I did enjoy a little bit of success with weighting chosen actions such that they're more likely to be chosen again on subsequent updates. It cut down on the circle-strafing, and they seem to prefer to make strafing runs, which is what I wanted. It still has lots of problems though, not the least of which is how slow it is.
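The "weight chosen actions so they're more likely to be chosen again" trick is basically a sticky bonus on last frame's winner, so the ship commits to a manoeuvre instead of flip-flopping every update. One possible shape (the bonus size and the higher-is-better scoring convention are arbitrary choices, and `scores` is assumed non-empty):

```cpp
#include <cstddef>
#include <vector>

// Pick the highest-scoring action, giving last frame's choice a sticky bonus
// so small score fluctuations don't cause oscillation between actions.
std::size_t pickAction(const std::vector<double>& scores,
                       std::size_t lastAction, double stickyBonus) {
    std::size_t best = 0;
    double bestScore = scores[0] + (lastAction == 0 ? stickyBonus : 0.0);
    for (std::size_t i = 1; i < scores.size(); ++i) {
        double s = scores[i] + (i == lastAction ? stickyBonus : 0.0);
        if (s > bestScore) { bestScore = s; best = i; }
    }
    return best;
}
```

The bonus acts as hysteresis: it only matters when two actions score within `stickyBonus` of each other, so a genuinely better action still wins.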
Really stupid system:
struct firstOrderShip { location l; velocity v; };
struct spinningShip { firstOrderShip ship; angular_velocity a; };
// returns the amount of time it takes to naively reach them.
// We rotate to face the way we want to thrust (spin up then spin down)
// We thrust directly towards where we want to thrust
// We turn exactly 180 degrees
// We thrust away from the target until we are halted relative to the ship.
// We turn towards the ship
// This function solves for that move:
double firstOrderNaiveDistance( spinningShip me, firstOrderShip them );
// The backward version:
// We turn to face where we want to go without thrusting
// We reverse-thrust until we are at the right velocity
// We turn to face the target ship
double firstOrderBrakeDistance( spinningShip me, firstOrderShip them );
// Finds the least of the various tactics above:
double firstOrderDistance( spinningShip me, firstOrderShip them );
Then, search nearby for the lowest firstOrderDistance to the ship, mixing a combination of rotation and thrust. We don't care what tactic was picked by firstOrderDistance: we just head down-hill towards a solution that gets us where we want to be.
The example strategies are pretty dumb ones, by the way. The idea is that you can write up a collection of strategies you have analytic solutions for, and then your AI actually searches the nearby space and finds the most efficient one it can reach.
This can result in the AI doing a thrust-while-turning even though your strategies don't include that, because the state the AI reaches by thrusting-while-turning is closer than the "just turn" state.
Add in better distance metrics that use more advanced tactics, and the AI improves, without the AI ever having a long-term plan.
You can also defer decisions -- if you think it will take 10 time units to do the calculations, you can base your decision on what your state will be 10 time units from now, after applying your current decision for those 10 time units. Of course, late plans end up sucking! This could also have the wonderful side effect that the same 'delayed decision' code can be overloaded to give your AI a deferred reaction time to new information (by making the delay even longer than it has to be).
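The "head downhill" loop might look something like this: score each one-step control by a cheap analytic metric and keep whichever control leads to the best-scoring state. The metric below (turn-to-face time plus constant-thrust travel time from rest) is a deliberately dumb stand-in for the firstOrder*Distance family sketched above, and all the names and constants are mine:

```cpp
#include <cmath>
#include <limits>

const double kPi = 3.141592653589793;

struct Ship2 { double x, y, vx, vy, heading; };

// Crude time-to-reach metric: rotate to face the target, then burn straight at
// it from rest. Ignores current velocity entirely -- a stand-in, not a solution.
double naiveDistance(const Ship2& s, double tx, double ty,
                     double turnRate, double accel) {
    double dx = tx - s.x, dy = ty - s.y;
    double want = std::atan2(dy, dx);
    double dth = std::fabs(std::remainder(want - s.heading, 2.0 * kPi));
    double dist = std::sqrt(dx * dx + dy * dy);
    return dth / turnRate + std::sqrt(2.0 * dist / accel);  // turn, then burn
}

// Try (turn in {-1,0,+1}) x (thrust in {0,1}) for one step of dt, keep the
// control whose resulting state the metric likes best; returns that metric.
double greedyStep(Ship2& s, double tx, double ty,
                  double turnRate, double accel, double dt) {
    Ship2 best = s;
    double bestD = std::numeric_limits<double>::infinity();
    for (int turn = -1; turn <= 1; ++turn)
        for (int thrust = 0; thrust <= 1; ++thrust) {
            Ship2 n = s;
            n.heading += turn * turnRate * dt;
            n.vx += thrust * accel * std::cos(n.heading) * dt;
            n.vy += thrust * accel * std::sin(n.heading) * dt;
            n.x += n.vx * dt; n.y += n.vy * dt;
            double d = naiveDistance(n, tx, ty, turnRate, accel);
            if (d < bestD) { bestD = d; best = n; }
        }
    s = best;
    return bestD;
}
```

As the post says, the interesting part is that the chosen control can be a combination (turn and thrust together) that none of the individual hand-written tactics contain.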
This topic is closed to new replies.