So, where were we...
The basics of a Kalman filter (part 2)...
In part 1 we created a simple linear dynamical system for the hockey puck that incorporated only observation noise. It was given by the following equations:
s(t+dt) = F s(t)
y(t)    = H s(t) + n(t)

    |1 dt|
F = |    |     H = |1 0|     n = |n1|
    |0  1|
There was an error above which I need to go back and fix: we actually want H to be a 1x2 matrix, not 2x2, since we only observe 1 of the 2 state variables. Thus n will be a scalar noise variable, not a 2x1 vector. Sorry for any confusion.
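To make that concrete, here's a rough numpy sketch of this observation-only-noise model with the corrected 1x2 H. The time step dt, the starting state and the observation noise level are just values I've picked for illustration.

```python
import numpy as np

dt = 0.1                       # time step between updates (illustrative value)
F = np.array([[1.0, dt],       # position += velocity * dt
              [0.0, 1.0]])     # velocity is unchanged by the process model
H = np.array([[1.0, 0.0]])     # 1x2: we only observe the position
obs_std = 0.05                 # std dev of the scalar observation noise n (made up)

rng = np.random.default_rng(42)
s = np.array([[0.0],           # starting position (made up)
              [2.0]])          # starting velocity (made up)

# simulate a few steps of the noise-free dynamics plus noisy observations
for k in range(5):
    s = F @ s                               # s(t+dt) = F s(t)
    y = H @ s + rng.normal(0.0, obs_std)    # y(t) = H s(t) + n(t)
    print(f"t={(k + 1) * dt:.1f}  true pos={s[0, 0]:.3f}  observed pos={y[0, 0]:.3f}")
```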
Now, I want to complicate things just a little more. Let's assume that our hockey-playing agent doesn't have a perfect model of the physics of the world. Pucks generally travel in straight lines (except when they bounce), but spin can make them swerve a little. This doesn't have to be the case in the actual physics model of the game, only in what the hockey agent THINKS the physics model is.
To accomplish this, we add noise to the process model (the first of the equations above). So,
s(t+dt) = F s(t) + G w(t)
But, do we add this process noise to the position of the puck or to its velocity? We could do both, but we actually only need to add it to the velocity, since the belief in the position depends on the belief in the velocity. Uncertainty in the velocity will flow through to the position.
So, we set
    |w1|
w = |  |
    |w2|
and we define G so that noise is added only to the velocity. So

    |0 0|
G = |   |
    |0 1|

i.e. the position row of G is zero, and only w2 gets routed onto the velocity.
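As a quick sanity check (just a sketch; the noise magnitude is an arbitrary choice of mine), drawing a sample of w and pushing it through G shows that only the velocity component of the state gets perturbed:

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt],
              [0.0, 1.0]])
G = np.array([[0.0, 0.0],      # position row: no noise added directly
              [0.0, 1.0]])     # velocity row: picks up w2

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(2, 1))   # w = [w1, w2]', std dev 0.1 chosen arbitrarily

s = np.array([[0.0],
              [2.0]])
print(G @ w)                   # first entry is always 0: only the velocity is perturbed
print(F @ s + G @ w)           # s(t+dt) = F s(t) + G w(t)
```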
Now, we need to return to the idea that this noise is Gaussian. A Gaussian (Normal) probability distribution is characterised by two cumulants: the mean and the variance (covariance for multidimensional distributions). For additive noise, it needs to have a zero mean. The covariance, though, is up to us to choose. Since s has dimension two (1 dimension for position and 1 for speed), the covariance for the process noise will be a 2x2 matrix looking like
    |q11  0 |
Q = |        |
    | 0  q22|
and the covariance for the observation noise will be
R = r1
since n is a scalar.
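In numpy those covariances might look like the following (the actual q and r values are pure guesses on my part; tuning them for the hockey game is up to you):

```python
import numpy as np

# process noise covariance for w (2x2, diagonal); values are illustrative only
Q = np.array([[0.001, 0.0],
              [0.0,   0.01]])

# observation noise covariance (a scalar, since n is a scalar); illustrative only
R = 0.0025

G = np.array([[0.0, 0.0],
              [0.0, 1.0]])

# as it enters the state update, the process noise has covariance G Q G',
# which is zero everywhere except the velocity-velocity entry
print(G @ Q @ G.T)
```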
We also need to know the initial values of the cumulants for the probability distribution over the state at t=0. These are a free choice, so set s(0) to whatever you want the starting position of the puck to be, and set P(0) (the initial covariance of the state probability distribution) to, say,

       |1 0|
P(0) = |   |
       |0 1|
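A minimal initialisation might look like this (the starting position and velocity are just an example):

```python
import numpy as np

# initial mean of the state distribution: position 0, velocity 2 (example values)
s0 = np.array([[0.0],
               [2.0]])

# initial covariance of the state distribution, as chosen above
P0 = np.eye(2)
```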
So, now that we have all of this, what do we do with it??? Well, we apply the Kalman filter equations!
For the forward pass filter we have two steps:
1) Prediction: Compute what we think the cumulants of the state probability distribution are going to be just before the observation is made (using our last known state and our process model); and,
2) Estimation: Bias this prediction by the actual noisy observation we make.
Let''s assume that our first observation occurs at t=1. So, we need to predict the cumulants at t=1.
This is done using
s_pred = F*s(t-1)
Ppred = F*P(t-1)*F' + G*Q*G'

where ' denotes the transpose of the matrix.
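As a sketch in numpy (s_prev and P_prev stand for s(t-1) and P(t-1); F, G and Q are as defined earlier):

```python
import numpy as np

def predict(s_prev, P_prev, F, G, Q):
    """Prediction step: push the mean and covariance through the process model."""
    s_pred = F @ s_prev                        # s_pred = F s(t-1)
    P_pred = F @ P_prev @ F.T + G @ Q @ G.T    # Ppred  = F P(t-1) F' + G Q G'
    return s_pred, P_pred
```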
Now, the estimation step is only slightly more complex.
We need to compute a few intermediary quantities... don't worry too much about what they mean for now (for the curious: e is called the innovation, M is its covariance, and K is the Kalman gain).
e = y - H*s_pred
M = H*Ppred*H' + R
Minv = M^(-1)
K = Ppred*H'*Minv

(Since we only observe one variable, M is just a 1x1 matrix here, so Minv is simply 1/M.)
and finally
s(t) = s_pred + K*e
P(t) = (I - K*H)*Ppred
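Here's a rough numpy sketch of the estimation step, plus one full predict-then-update cycle to tie everything together (the prediction step is inlined from the sketch above, and all the numbers are again just illustrative):

```python
import numpy as np

def update(s_pred, P_pred, y, H, R):
    """Estimation step: correct the predicted state using the noisy observation y."""
    e = y - H @ s_pred                      # innovation
    M = H @ P_pred @ H.T + R                # innovation covariance (1x1 here)
    K = P_pred @ H.T @ np.linalg.inv(M)     # Kalman gain
    s_new = s_pred + K @ e                  # s(t) = s_pred + K e
    P_new = (np.eye(2) - K @ H) @ P_pred    # P(t) = (I - K H) Ppred
    return s_new, P_new

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
G = np.array([[0.0, 0.0], [0.0, 1.0]])
Q = np.array([[0.001, 0.0], [0.0, 0.01]])
R = np.array([[0.0025]])

s, P = np.array([[0.0], [2.0]]), np.eye(2)   # s(0), P(0)
y = np.array([[0.21]])                       # a made-up noisy position measurement at t=1

# prediction step
s_pred = F @ s
P_pred = F @ P @ F.T + G @ Q @ G.T

# estimation step
s, P = update(s_pred, P_pred, y, H, R)
print(s)     # corrected state estimate at t=1
print(P)     # corrected covariance at t=1
```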
That's enough for now. You can have some time to digest this and ask questions if anyone is still interested! Over the weekend I'll add in the backwards pass filter, which you need to complete the system. I'll also talk about how you can handle the bouncing behaviour the walls create!
Cheers for now,
Timkin