
Stupid ideas + lack of time

Started by October 19, 2009 04:27 AM
16 comments, last by PureBlackSin 15 years, 1 month ago
Sorry I didn't get to give you the basic explanation I promised last night. Work got in the way.

Your situation is not identical to the one I had in mind, but it's probably not that far off.

Quote: [...]each turbine has 2 angles of freedom, they can pitch (outer ring) and they can roll (inner engine) they all perform the same rotation at the same time, they cannot be controlled separately


That sounds contradictory. Does each turbine have 2 angles of freedom or only one? If the angles are linked together, that's only one degree of freedom (not sure what "angles of freedom" are). I am guessing you also have control over how much thrust you get from each turbine (how fast it's spinning), which would solve Emergent's issues with not having control over some angle.

Now that I think about it, the inertia in a turbine is probably huge, so I am not sure how you would go about changing the thrust quickly enough to stabilize pitch and roll.

Anyway, all these details about your vehicle are very relevant.

Quote: Original post by alvaro
Does each turbine have 2 angles of freedom or only one? If the angles are linked together, that's only one degree of freedom.


My reading: Each has two, but they are not independent. All turbines have the same orientation at any time.

The obvious solution is to just decouple the turbine orientations to get 8 degrees of freedom instead of just 2.
Quote: Original post by Emergent
Quote: Original post by alvaro
Does each turbine have 2 angles of freedom or only one? If the angles are linked together, that's only one degree of freedom.


My reading: Each has two, but they are not independent. All turbines have the same orientation at any time.

The obvious solution is to just decouple the turbine orientations to get 8 degrees of freedom instead of just 2.



Exactly. What I mean is: the turbines can pitch and roll, but when you pitch or roll one of them, they all follow. Note that I don't want to make an auto-pilot; I just want auxiliary stabilization. They all turn together to simplify the commands (a 2-axis joystick + throttle + pedals (or z-axis joystick) controls all 4 of them the same way).

Maybe I should post a video of what I have so far
A video would be nice.

I will try to describe the scheme that I would use to control a vehicle like that. As a first stage, I would get the vehicle to be stable in the air, without going anywhere. I can think of four quantities that you want to keep at around some target value:
* Pitch around 0
* Roll around 0
* Yaw around some constant
* Elevation at around some constant

Things would be made simpler if you had four corresponding control variables that would affect those four quantities:
* For elevation, something that sets the overall thrust level.
* For pitch, a number that will be added to the levels of the front two thrusters and subtracted from the level of the rear thrusters.
* For roll, a number that will be added to the levels of the right thrusters and subtracted from the level of the left thrusters.
* For yaw, some angle which could rotate the thrusters on the right one way and the thrusters on the left the other way.

In what I am describing, you would only use one of the angles that your thrusters can turn.
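To make the mixing concrete, here is a minimal sketch of how those four control variables could be turned into per-thruster commands. The layout (front-left, front-right, rear-left, rear-right) and all names are assumptions for illustration, not anything from the original vehicle:

```cpp
#include <array>
#include <cmath>

// Hypothetical mixer for the scheme above.  Turbine layout assumed:
// 0 = front-left, 1 = front-right, 2 = rear-left, 3 = rear-right.
struct ThrusterCommands {
    std::array<double, 4> level;  // thrust level per turbine
    std::array<double, 4> angle;  // tilt angle per turbine (used for yaw)
};

ThrusterCommands mix(double thrust, double pitch, double roll, double yaw_angle) {
    ThrusterCommands c;
    // pitch: added to the front two, subtracted from the rear two
    // roll:  added to the right two, subtracted from the left two
    c.level[0] = thrust + pitch - roll;  // front-left
    c.level[1] = thrust + pitch + roll;  // front-right
    c.level[2] = thrust - pitch - roll;  // rear-left
    c.level[3] = thrust - pitch + roll;  // rear-right
    // yaw: right thrusters rotate one way, left thrusters the other
    c.angle[0] = -yaw_angle;  c.angle[2] = -yaw_angle;  // left side
    c.angle[1] = +yaw_angle;  c.angle[3] = +yaw_angle;  // right side
    return c;
}
```

Each of the four outputs of your four controllers feeds one argument of this mixer.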

You can then think of the problem as four different sub-problems. For each one of them, you have a quantity you can measure and a variable that will affect it. Time to use a PID controller as a first attempt at solving the sub-problems.

Think of yaw control as an example. You can measure the difference between your current yaw and your desired yaw. In a naive attempt, you could make the control variable proportional to this difference. This would be a "P controller" ("P" for "Proportional"). If you implement something like that, you'll probably find that the behavior is basically large oscillations around the target, since the dynamics would be similar to those of a pendulum.

In order to dampen these oscillations you can add a term proportional to the rate at which that difference is changing (subtract two consecutive measures of the difference). Play with the coefficients until you are happy with the results. Now you have a PD controller ("D" for "Derivative"), which might be all you need.

It is possible that there is some overall drift in one direction (at least on a real machine), so you could look at the sum of all the measured differences over time and add a term that is proportional to it (which we'll call "I", for "Integral"). The resulting controller with three simple terms is sufficient in many situations, and is what's called a "PID controller". I am sure you can find better descriptions somewhere else (Wikipedia?).

Now we still don't have any way to feed our inputs into the system to alter the behavior. The kind of thing you want to do is make the target of each subsystem be determined by the inputs. In the case of pitch and roll this is very straightforward. In the other two cases, you may want the input to indicate the rate of change of the quantity. It might take some experimentation to get it working well. Perhaps you can rework the PID controllers so that the quantities you control are the rate of yaw change, or the vertical speed. It's a matter of what feels right.
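As a sketch of that input mapping (all names and gain values are illustrative): stick deflection sets the pitch/roll targets directly, while pedals and throttle command the rate of change of the yaw and elevation targets:

```cpp
#include <cmath>

// Hypothetical input mapping for the scheme above.  Stick deflection sets
// the pitch/roll targets directly; pedals and throttle command the *rate*
// of change of the yaw and elevation targets.  All gains are tuning values.
struct Targets { double pitch = 0, roll = 0, yaw = 0, elevation = 0; };

void update_targets(Targets& t, double stick_x, double stick_y,
                    double pedals, double throttle, double dt) {
    const double max_angle  = 0.3;  // max commanded tilt, radians
    const double yaw_rate   = 1.0;  // rad/s per unit of pedal input
    const double climb_rate = 2.0;  // m/s per unit of throttle input
    t.pitch = stick_y * max_angle;              // target follows input directly
    t.roll  = stick_x * max_angle;
    t.yaw       += pedals   * yaw_rate   * dt;  // input commands a rate
    t.elevation += throttle * climb_rate * dt;
}
```

The per-axis PID controllers then chase these targets each frame.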

Well, that's my basic plan.

If you want to see what a PID controller looks like (the 10 lines of code Kylotan was talking about), here's my take:
#include <iostream>
#include <cmath>

class PID_Controller {
  double P_coeff, I_coeff, D_coeff;
  double integral, previous_difference;

public:
  PID_Controller(double P_coeff, double I_coeff, double D_coeff)
    : P_coeff(P_coeff), I_coeff(I_coeff), D_coeff(D_coeff),
      integral(0.0), previous_difference(0.0) {
  }

  double get_control_value(double value, double target) {
    double difference = target - value;
    double result = P_coeff * difference
      + I_coeff * integral
      + D_coeff * (difference - previous_difference);
    integral += difference;
    previous_difference = difference;
    return result;
  }
};

int main() {
  double position = 1.0;
  double speed = 0.0;

  PID_Controller controller(1.0, 0.01, 10.0);

  const double dt = .1;
  for (int i = 0; i < 10000; ++i) {
    double acceleration = controller.get_control_value(position, 0.0) + 0.1;
    // The `0.1' above is here to mess with the controller, to show what the "I" term does.
    std::cout << position << '\n';
    speed += dt * acceleration;
    position += dt * speed;
  }
}


I hope that helps.
Personally I'd just find a controller for the full MIMO (multiple input, multiple output) system in one fell swoop as I outlined briefly in my previous post. I'll flesh that out if I have time, but unfortunately I'm pretty busy with some other things right now...

Have you been able to model your system (that is, write down the ODEs describing it)? That's step 1.
I would love to see Emergent's plan fleshed out a bit.

If you don't like either of our plans, I have another one using value iteration, which is something I have no experience with but it sounds like it could potentially work very well.
Quote: Original post by alvaro
I would love to see Emergent's plan fleshed out a bit.

If you don't like either of our plans, I have another one using value iteration, which is something I have no experience with but it sounds like it could potentially work very well.


alvaro's method is the old-school technique, but you can certainly use it. Might be a little more intuitive, actually.

But anyway, here's my attempt at a quick tutorial. I redefine variables a few times, so watch out for that (sorry). But it shows how it all works, I hope.

1. Modeling your system

This is standard-enough rigid body dynamics. I might not be the cleverest at it, so although I'll give you fairly nice matrix representations of the equations it's possible (probable?) that there are prettier ways to do this. Regardless, somehow you need to model your system. Here goes,

Let u1, u2,...,u4 be unit vectors along the thrust directions of your 4 fans, and f1 = k u1, f2 = k u2, etc., where k is the constant magnitude of your thrust.

Also let phi be the function that, given two fan angles, returns the corresponding unit vector; e.g., phi(theta11, theta12) = u1, if theta11 and theta12 are the two gimbal angles of fan 1.

The net torque on your center of mass is then

tau = r1 x f1 + r2 x f2 + ... + r4 x f4
= k [ r1 x phi(theta11, theta12) + r2 x phi(theta21, theta22) + ... + r4 x phi(theta41, theta42) ]

where each 'ri' is the vector from the cm of your vehicle to the center of the ith turbine.

The net force is likewise

f = f1 + f2 + ... + f4
= k [ phi(theta11, theta12) + ... + phi(theta41, theta42) ]

so your system has dynamics in (one particular) state-space form (where x is the tank's position vector, v is its velocity vector, R is the rotation matrix describing its orientation, and w is its angular velocity vector -- all in the spatial (inertial) frame)

dx/dt = v
dv/dt = (1/m) f
dR/dt = hat(w) R
dw/dt = R J^-1 R^T (tau - hat(w) R J R^T w)

where J is the inertia tensor in the body frame, m is the mass, and 'hat' denotes the hat operator.
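If it helps, the force and torque sums above are easy to compute directly. In this sketch the particular gimbal parameterization phi is an assumption for illustration; substitute whatever matches your turbines:

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a[1] * b[2] - a[2] * b[1],
             a[2] * b[0] - a[0] * b[2],
             a[0] * b[1] - a[1] * b[0] };
}

// Gimbal angles -> unit thrust direction.  This particular parameterization
// is an assumption; use whatever matches your gimbal mechanism.
Vec3 phi(double theta1, double theta2) {
    return { std::sin(theta2),
             -std::sin(theta1) * std::cos(theta2),
             std::cos(theta1) * std::cos(theta2) };
}

// Net force f and torque tau from the 4 turbines, per the sums above:
//   f   = k * sum_i phi(theta_i1, theta_i2)
//   tau = k * sum_i r_i x phi(theta_i1, theta_i2)
void net_wrench(const Vec3 r[4], const double theta[4][2], double k,
                Vec3& f, Vec3& tau) {
    f = {0, 0, 0};
    tau = {0, 0, 0};
    for (int i = 0; i < 4; ++i) {
        Vec3 u = phi(theta[i][0], theta[i][1]);
        Vec3 t = cross(r[i], u);
        for (int j = 0; j < 3; ++j) {
            f[j]   += k * u[j];
            tau[j] += k * t[j];
        }
    }
}
```

With a symmetric turbine layout and all gimbals at zero, the torque cancels and the force is 4k straight up, which is a handy sanity check.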

Next let 'h' be the function that, given roll/pitch/yaw angles, returns the rotation matrix R, and let a=(r,p,y) be the vector of these angles. Then,

dR/dt = grad_r h(a) dr/dt + grad_p h(a) dp/dt + grad_y h(a) dy/dt

where grad_r h(a) is the partial gradient of h with respect to r, evaluated at a. It is a matrix. To get it, you just compute the partial derivatives of the entries of the matrix h(a) with respect to r. Likewise for the other partial gradients. Or, unhatting both sides (this is the opposite of the hat operator described previously),

unhat(dR/dt) = unhat(grad_r h(a)) dr/dt + unhat(grad_p h(a)) dp/dt + unhat(grad_y h(a)) dy/dt

Note that the reason we can unhat each term separately and pull out the scalars dr/dt, dp/dt, and dy/dt is that the 'hat' and 'unhat' operators are linear. Anyway, you can write this in a matrix,

unhat(dR/dt) = [ unhat(grad_r h(a)) , unhat(grad_p h(a)) , unhat(grad_y h(a)) ] da/dt

or just

unhat(dR/dt) = M(a) da/dt

where M(a) is the matrix above whose columns are the unhatted gradients. Of course you can solve for da/dt by inverting M(a). So I'll do that and plug it into our state-space equations to get,

dx/dt = v
dv/dt = (1/m) f
da/dt = M(a)^-1 unhat(hat(w) h(a))
dw/dt = h(a) J^-1 h(a)^T (tau - hat(w) h(a) J h(a)^T w) .

This is a slightly funky state-space representation; it mixes roll/pitch/yaw angles ('a') with exponential coordinates ('w'), but hey, it's not wrong (so long as I haven't made any silly mistakes). Also note that the above equations now only work locally since we're using a local coordinate chart (r,p,y) for rotation matrices.

Finally, plugging in for tau and f,

dx/dt = v
dv/dt = (1/m) k [ phi(theta11, theta12) + ... + phi(theta41, theta42) ]
da/dt = M(a)^-1 unhat(hat(w) h(a))
dw/dt = h(a) J^-1 h(a)^T [ k [ r1 x phi(theta11, theta12) + r2 x phi(theta21, theta22) + ... + r4 x phi(theta41, theta42) ] - hat(w) h(a) J h(a)^T w ]

which gives you explicit state-space dynamics for the system with control inputs theta11,...,theta42. (Note that I haven't plugged in for r1,...,r4 but that these are also functions of the state... Hmmm, maybe I should have done all this in body rather than spatial coordinates... Anyway, you get the point. I'll leave it to you to work those out and plug them in).

The point is that now you have a model. You don't need to do your modeling exactly like this, but somehow you need to get ODEs that describe your system.

Now in what follows, I'll write 'x' for the entire state. I'm redefining x to be x_new = (x_old, v, a, w). Ok? I'll also say u = (theta11,...,theta42). So what you've got is a system of the form

dx/dt = f(x, u)

which was the whole point of this. You need, somehow, to get to this point. Do your modeling however you prefer.



2. Linearizing your system

Now that you have

dx/dt = f(x, u)

choose a state xbar that you want to stabilize your system around, and define z = x - xbar. Also solve the equation

0 = f(xbar, ubar)

for ubar to get the nominal control input, and define v = u - ubar (I won't be referring to the different quantity I called 'v' in the earlier section). Then,

dz/dt = f(z + xbar, v + ubar)

or to first order

dz/dt = df/dx(xbar, ubar) z + df/du(xbar, ubar) v

where df/dx(xbar, ubar) is the partial Jacobian of f with respect to x, and likewise for df/du. Or just,

dz/dt = A z + B v

where
A = df/dx(xbar, ubar)
B = df/du(xbar, ubar) .

Notice that the matrices A and B are constant. This is the linearized system, and it approximates the behavior of the full nonlinear system about the operating point (xbar, ubar).


3. Adjoining integrators to your system

Consider the system

dz/dt = A z + B v
ds/dt = I s + I mu

where mu is another control input. This is still a linear time-invariant system, so for what follows I'll redefine

x = (z, s)
u = (v, mu)

A = [A 0; 0 I]
(where ';' denotes "start a new row in the matrix")

B = [B_old, I]

where in all of the above 'I' is the identity matrix. So now we're back to an LTI system that looks like

dx/dt = A x + B u


(sorry about redefining symbols all the time; hopefully you've followed this far... I promise I'll stop!)


4. Designing a state-feedback controller

So long as your system is controllable, you can stabilize your system by using the control input

u = -K x

where K is an appropriately-chosen matrix. Its elements are called the control gains. First, to test for controllability, you can make sure that the controllability matrix

[B, AB, A^2 B, ..., A^(n-1) B]

is full rank, where n is the dimension of the state-space (12 in this case). For you it should be, but this is a reasonable sanity check.
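Here is a small sketch of that rank test, building the controllability matrix and computing its rank by Gaussian elimination (tolerances and names are illustrative):

```cpp
#include <vector>
#include <cmath>
#include <cstddef>

using Mat = std::vector<std::vector<double>>;

// Plain matrix product (no error checking; a sketch, not a library).
Mat matmul(const Mat& A, const Mat& B) {
    Mat C(A.size(), std::vector<double>(B[0].size(), 0.0));
    for (std::size_t i = 0; i < A.size(); ++i)
        for (std::size_t k = 0; k < B.size(); ++k)
            for (std::size_t j = 0; j < B[0].size(); ++j)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

// Numerical rank via Gaussian elimination with partial pivoting.
int matrix_rank(Mat M, double tol = 1e-9) {
    std::size_t rows = M.size(), cols = M[0].size();
    int r = 0;
    for (std::size_t c = 0; c < cols && (std::size_t)r < rows; ++c) {
        std::size_t piv = r;
        for (std::size_t i = r; i < rows; ++i)
            if (std::fabs(M[i][c]) > std::fabs(M[piv][c])) piv = i;
        if (std::fabs(M[piv][c]) < tol) continue;
        std::swap(M[r], M[piv]);
        for (std::size_t i = r + 1; i < rows; ++i) {
            double factor = M[i][c] / M[r][c];
            for (std::size_t j = c; j < cols; ++j) M[i][j] -= factor * M[r][j];
        }
        ++r;
    }
    return r;
}

// Build [B, AB, A^2 B, ..., A^(n-1) B] and test whether it has full rank n.
bool is_controllable(const Mat& A, const Mat& B) {
    std::size_t n = A.size(), m = B[0].size();
    Mat C(n, std::vector<double>(n * m, 0.0));
    Mat AkB = B;  // holds A^k B
    for (std::size_t k = 0; k < n; ++k) {
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = 0; j < m; ++j)
                C[i][k * m + j] = AkB[i][j];
        AkB = matmul(A, AkB);
    }
    return matrix_rank(C) == (int)n;
}
```

A double integrator (A = [0 1; 0 0], B = [0; 1]) passes this test; two decoupled states driven by a single shared input through only one of them does not.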

Now all you need to do is pick your K matrix. Two ways to do this are:
1 - Pole placement
2 - Solving the Algebraic Riccati Equation
(there are others as well).

I'll do #2.

What I'm doing here is solving the infinite-horizon continuous-time LQR problem.

First, choose symmetric positive definite weighting matrices Q (n x n, penalizing state error) and R (m x m, penalizing control effort, where m is the number of inputs). For instance, they can be scaled identity matrices. If the Q matrix is big, it means you want to try very hard to be stable and don't care how big you make your control inputs to do it. If your R matrix is very big, it means that your control inputs cost a lot and you'll try to be as gentle as possible.

Anyway, solve the equations,

A^T P + P A - P B R^-1 B^T P + Q = 0
P = P^T

for P; then your optimal control gains are,

K = R^-1 B^T P .

You can verify that the resulting system is stable by checking that all the eigenvalues of the matrix A-BK have negative real part.
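For intuition, here is the scalar (one-state, one-input) case, where the Riccati equation reduces to a quadratic you can solve in closed form. The real 12-state problem needs a numerical ARE solver (MATLAB's lqr, for instance):

```cpp
#include <cmath>

// Scalar (one-state, one-input) LQR for dx/dt = a x + b u, where the
// algebraic Riccati equation
//   2 a p - (b^2 / r) p^2 + q = 0
// is just a quadratic in p.  Taking the positive root and K = b p / r gives
// a stable closed loop: a - b K = -sqrt(a^2 + b^2 q / r) < 0.
double lqr_gain_1d(double a, double b, double q, double r) {
    double p = r * (a + std::sqrt(a * a + b * b * q / r)) / (b * b);
    return b * p / r;
}
```

With a = b = q = r = 1 this gives K = 1 + sqrt(2), and the closed-loop eigenvalue a - bK = -sqrt(2) is indeed negative.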


Notes

You can skip the stuff about adjoining integrators to the system and just do step #4 directly on the n-dimensional system if you like. The integrators aren't really necessary.

[EDIT: Noticed some typos; fixed them. Also, just to be clear, this is all stuff you do just to design the controller. The only thing you actually need to do in your code at runtime is compute u = -K x at each timestep, which is just a matrix multiplication.]

[Edited by - Emergent on October 22, 2009 10:06:52 PM]
Hi. I've briefly looked through the post, and I have done (and am currently working on) a similar project at the moment (hover tank / DirectX / PhysX).

FYI you can see e.g.s of it here: http://www.myndoko.com/Games/WipeOutGame/Game.htm

As an RC helicopter/robotics/RC airplane enthusiast and computer programmer, I feel I may have a solution, and I hope it is simpler than the above suggestions.

There are two main approaches:

Easy: Use a cube or convex mesh, allow it to scrape along the ground, and render the model above the ground. This is easy and quick to implement and requires little thought.

Hard: As an RC heli pilot, I would suggest some gyro-like properties.

That is, you get the pitch, roll, and yaw of the craft and constantly correct. The bonus of this is that it's not much more computationally expensive, as you are just modifying the forces each frame and doing some basic math.

Before I outline this method: there will be a bit of tweaking involved to get the right stepping (i.e., the amount of correction per frame/PhysX simulation step) to stop over-correction and under-correction.

After getting the gyro bit working you can start adding damping to smooth the correction, making it more realistic.

So implementation:

Get yaw, pitch, and roll (local coordinates).

Work out which side/corner is dipping.
Work out by how much.
Work out the angle of the fan/motor on that side.
Use cos and sin to work out the force needed at that angle to get the same upward force (e.g., for an engine at x degrees from vertical, divide the vertical force by cos x).
Increase the power/force on that side (the amount needed is found through tweaking) and multiply by cos to get a uniform/predictable amount, so the tweaking stays constant.
Check yaw, pitch, and roll, and repeat.

If the constant of correction is right (after tweaking) the stability problems should go away.
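A rough per-frame sketch of that correction loop (side-based rather than per-corner, with illustrative names and gains; the 1/cos factor keeps the vertical thrust component constant as the fans tilt):

```cpp
#include <cmath>

// Sketch of the per-frame correction described above.  Proportional
// correction on pitch/roll with a damping term; all gains are tuning values.
struct Attitude { double pitch, roll, pitch_rate, roll_rate; };

void stabilize(const Attitude& att, double base_thrust,
               double& front, double& rear, double& left, double& right,
               double fan_tilt /* radians from vertical */) {
    const double kP = 2.0, kD = 0.5;  // correction and damping gains (tweak!)
    // recover the vertical thrust lost to tilting the fans
    double t = base_thrust / std::cos(fan_tilt);
    double pitch_corr = kP * att.pitch + kD * att.pitch_rate;
    double roll_corr  = kP * att.roll  + kD * att.roll_rate;
    front = t - pitch_corr;  rear = t + pitch_corr;
    right = t - roll_corr;   left = t + roll_corr;
}
```

Call it once per frame (or per PhysX step) and apply the four resulting forces; the kP/kD values are exactly the "constant of correction" and damping that need tweaking.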

This is the method used to stabilise the tails of RC helis, and robot helis in general.

For more info, search Google for "robot heli".

PureBlackSin

P.S. If you want to contact me by e-mail directly, use the contact form on my website in the support section.

Good luck

