
Camera problems

Started by August 01, 2004 01:20 PM
4 comments, last by Horatius83 20 years, 4 months ago
This is a bit long, but I have a habit of explaining things thoroughly and giving lots of information so nobody is left to assume. Hopefully I've included everything...

Recently I've been driving myself nuts writing a camera class. The biggest part was figuring out what I wanted out of the class, and I think I have that now. Most of the cameras I come across work with three angles (heading, pitch, and roll) plus a world-space camera position, and calculate the orientation from those. The camera I want is, ideally, identical to the one in Descent 1/2, an old DOS/early-Windows-era 3D shooter. It has 6 degrees of freedom, but more importantly, it rotates the camera about its own local axes rather than the world-space X, Y, and Z axes. That means no matter what funky angle you've positioned yourself at, you always have three ways to rotate: about your local X, local Y, and local Z axes. In my mind this eliminates keeping track of three angles and constantly rebuilding a matrix from them. Instead, I just rotate my local camera axes whenever I want, and my view adjusts accordingly. It seems easy... heh.

The only problem I've yet to solve is how to handle rotating around two or three of the local axes simultaneously. Order matters, and I've tried a few crazy methods to make it work, but for now I've put that aside. All I want right now is a camera that can strafe (translate) and rotate along its local axes, and that won't work :(

As I understand it, an ortho-normal view matrix looks like this:

| rx ux lx px |
| ry uy ly py |
| rz uz lz pz |
| 0  0  0  1  |
Where <rx,ry,rz> is your right vector, <ux,uy,uz> is your up vector, <lx,ly,lz> is your look vector, and <px,py,pz> is your world-space camera position, times -1 and transformed by the upper-left 3x3 rotation/orientation portion of the matrix. As for where the negative sign(s) go to account for OpenGL's negative-Z-forward setup, I'm not really sure at this point. Is my thinking correct?

So I thought, okay, I'll store the camera's local axes in three vectors. They start out unit length, and (perhaps later) I can re-normalize them here and there to keep them ortho-normal. I'll store the camera's world-space position, and each time I apply the camera to the scene, build the matrix and load it.

In my test program only pitch (x-axis) and heading (y-axis) are enabled to start with. As long as I only pick one axis to rotate on, all the movement seems as it should: I can pitch up and down and strafe in all six directions and it moves correctly, and the same goes for only turning left and right. When I do one rotation and then another, though, that's when it goes wrong, and I really don't get it :( For instance, if I turn left by roughly 90 degrees and then pitch downward, instead of seeing the world pitch as it should, the world (a large flat textured quad for now!) appears to rotate about the WORLD's x-axis instead of my camera's. Why?

The code:

// Each game loop, when I process key input, I'll either set or
// zero out the velocities for strafes and rotations like this:
// and yea, that's SDL

// strafe right/left
if (keyStates[SDLK_d])
    cam.setRightVel(5.0f);
else if (keyStates[SDLK_a])
    cam.setRightVel(-5.0f);
else
    cam.setRightVel(0.0f);

// strafe up/down
if (keyStates[SDLK_w])
    cam.setUpVel(5.0f);
else if (keyStates[SDLK_s])
    cam.setUpVel(-5.0f);
else
    cam.setUpVel(0.0f);

// strafe forward/backwards
if (keyStates[SDLK_x])
    cam.setForwardVel(5.0f);
else if (keyStates[SDLK_z])
    cam.setForwardVel(-5.0f);
else
    cam.setForwardVel(0.0f);

// adjust heading left/right
if (keyStates[SDLK_LEFT])
    cam.setHeadingVel(45.0f);
else if (keyStates[SDLK_RIGHT])
    cam.setHeadingVel(-45.0f);
else
    cam.setHeadingVel(0.0f);

// adjust pitch up/down
if (keyStates[SDLK_UP])
    cam.setPitchVel(45.0f);
else if (keyStates[SDLK_DOWN])
    cam.setPitchVel(-45.0f);
else
    cam.setPitchVel(0.0f);


// The camera has these relevant data members
Vector3 right, up, look;
Vector3 position;  // this is in world-space
Vector3 moveVel, rotateVel;
// I store a normal velocity vector in moveVel and the angular
// velocities for x/y/z rotation in rotateVel, 1 per component
float matrix[16];   // and the curséd orientation matrix!
                    // used in a column-major fashion

Camera::Camera()
{
    // I initialize the local camera axes to what the default
    // openGL camera view is, but obviously, there needs to be
    // a negative sign somewhere in the building of the matrix
    // since the look vector is 0,0,-1.  A default view matrix
    // should act as an identity matrix, no?
    right = Vector3(1.0f, 0.0f, 0.0f);
    up = Vector3(0.0f, 1.0f, 0.0f);
    look = Vector3(0.0f, 0.0f, -1.0f);
    memset(matrix, 0, sizeof(float) * 16);
    matrix[15] = 1.0f;
}

void Camera::apply()
{
    // Build the orientation based on the local camera axes
    matrix[0] = right.x;
    matrix[1] = right.y;
    matrix[2] = right.z;
    matrix[4] = up.x;
    matrix[5] = up.y;
    matrix[6] = up.z;
    matrix[8] = -look.x;  // Is this where they go?
    matrix[9] = -look.y;  // I've found what seems like better
    matrix[10] = -look.z; // results when right.z, up.z and
                          // look.z were negated instead.
                          // Those are the parts of the matrix
                          // that actually affect the final z
                          // value of the transformed point
    
    // Transform global space position into camera space
    matrix[12] = matrix[0] * position.x + matrix[4] * position.y + matrix[8] * position.z;
    matrix[13] = matrix[1] * position.x + matrix[5] * position.y + matrix[9] * position.z;
    matrix[14] = matrix[2] * position.x + matrix[6] * position.y + matrix[10] * position.z;
    
    // Translation has to be opposite of the camera's position
    matrix[12] *= -1;
    matrix[13] *= -1;
    matrix[14] *= -1;

    glLoadMatrixf(matrix);
}
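
// For comparison, here's an alternative I've been sketching based on how
// I read the gluLookAt docs: the basis vectors go into the ROWS of the
// view matrix (so the 3x3 part is the transpose of what apply() builds
// above), and the whole look row gets the minus sign for OpenGL's
// negative-Z-forward convention.  This is untested and the name
// applyAlt() is just a placeholder, so it may be exactly where I'm
// confused.
void Camera::applyAlt()
{
    matrix[0] = right.x;  matrix[4] = right.y;  matrix[8]  = right.z;
    matrix[1] = up.x;     matrix[5] = up.y;     matrix[9]  = up.z;
    matrix[2] = -look.x;  matrix[6] = -look.y;  matrix[10] = -look.z;

    // Translation is -(basis vector . position); these are the same
    // index expressions as in apply() above, only the slots now hold
    // the transposed rotation
    matrix[12] = -(matrix[0] * position.x + matrix[4] * position.y + matrix[8]  * position.z);
    matrix[13] = -(matrix[1] * position.x + matrix[5] * position.y + matrix[9]  * position.z);
    matrix[14] = -(matrix[2] * position.x + matrix[6] * position.y + matrix[10] * position.z);

    glLoadMatrixf(matrix);
}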

// This function updates the position and rotations based on the
// velocities which are in world-units per second.  Then they're
// scaled by the amount of time elapsed each frame.
void Camera::update(float timeElapsed)
{
    float d, dt = timeElapsed / 1000.0f;
    Vector3 temp;

    // Add movement velocity to the position
    if (moveVel.x != 0.0f)
    {
        d = dt * moveVel.x;
        temp += (right * d);
    }
    if (moveVel.y != 0.0f)
    {
        d = dt * moveVel.y;
        temp += (up * d);
    }
    if (moveVel.z != 0.0f)
    {
        d = dt * moveVel.z;
        temp += (look * d);
    }
    
    position += temp;
    
    // Add rotational velocities.  Order matters here!
    if (rotateVel.x != 0.0f)
        rotateX(rotateVel.x * dt);
    if (rotateVel.y != 0.0f)
        rotateY(rotateVel.y * dt);
    if (rotateVel.z != 0.0f)
        rotateZ(rotateVel.z * dt);
}

// And the rotations of the actual local camera axes:
// I use openGL to generate a rotation matrix around each axis,
// retrieve the matrix, then do a manual multiply for each axis
// affected.  Is this logic right?
void Camera::rotateX(float a)
{
    float m[16];
    
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
        glLoadIdentity();
        glRotatef(a, right.x, right.y, right.z);
        glGetFloatv(GL_MODELVIEW_MATRIX, m);
    glPopMatrix();
    multMatrixByVector(m, &up);
    multMatrixByVector(m, &look);
}

void Camera::rotateY(float a)
{
    float m[16];
    
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
        glLoadIdentity();
        glRotatef(a, up.x, up.y, up.z);
        glGetFloatv(GL_MODELVIEW_MATRIX, m);
    glPopMatrix();
    
    multMatrixByVector(m, &right);
    multMatrixByVector(m, &look);
}

void Camera::rotateZ(float a)
{
    float m[16];
    
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
        glLoadIdentity();
        glRotatef(a, look.x, look.y, look.z);
        glGetFloatv(GL_MODELVIEW_MATRIX, m);
    glPopMatrix();
    multMatrixByVector(m, &right);
    multMatrixByVector(m, &up);
}

// and just in case I really suck and miss some error in this:
// remember column-major matrices in openGL!
// and yes, this does skip multiplying the fourth row since, as
// far as I know, the 1 in the 4th coordinate will never be
// affected when I'm only multiplying vectors by rotation
// matrices.
void Camera::multMatrixByVector(float* m, Vector3* v)
{
    float temp[3];
    
    for (int i = 0; i < 3; i++)
        temp[i] = m[i] * v->x + m[i + 4] * v->y + m[i + 8] * v->z;

    v->x = temp[0];
    v->y = temp[1];
    v->z = temp[2];
}


Help is greatly greatly greatly appreciated.  I've been studying and tinkering with 3D math and programming for many years now, and I feel stupid that I still don't grasp things like this :P
That's because your camera suffers from something called "gimbal lock". It means that when rotations about three axes are combined, one rotation can end up cancelling another out. The problem is caused by your vectors: once they've been rotated, you need to rebuild each vector again using the cross product. Do that and the gimbal lock goes away =)

Check my code for doing this in DirectX:

D3DXVec3Normalize(&vLook, &vLook);
D3DXVec3Cross(&vRight, &vUp, &vLook);
D3DXVec3Normalize(&vRight, &vRight);
D3DXVec3Cross(&vUp, &vLook, &vRight);
D3DXVec3Normalize(&vUp, &vUp);

Remember, you have to do this FIRST every time you update your camera and build the orientation matrix for the camera.
Explanation:
D3DXVec3Normalize(&result_vector, &in_vector)
D3DXVec3Cross(&result_vector, &in_vector1, &in_vector2)
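
In your C++/OpenGL code the equivalent would look roughly like this. It's only a sketch: I'm assuming your Vector3 class has (or can be given) normalize() and cross() helpers, so those names are guesses at your API:

// Rebuild the basis so it stays orthonormal; look is the "master" axis
// and right/up are rebuilt from it.  Note the operand order is flipped
// compared to my D3D snippet above so that it matches your defaults of
// right=(1,0,0), up=(0,1,0), look=(0,0,-1).
look.normalize();
right = Vector3::cross(look, up);
right.normalize();
up = Vector3::cross(right, look);
up.normalize();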
Js
Actually, I'm fairly sure this method of implementing a camera avoids gimbal lock. If you look at the rotate functions, whenever I rotate about one local axis I also rotate the other two local axes, which should (in theory) keep all the camera axes orthogonal and unit length. That's different from doing composite rotations around the world axes each frame. Still, when I construct the view matrix from those axes, I don't get the correct view :( I'm fairly sure my error lies in the way I'm building the view matrix, and Google and other searching have turned up very little that explains the math/theory behind the view matrix.
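
For anyone checking my reasoning, my current guess is that the view matrix should be the inverse of the camera's world transform, and since the rotation part is orthonormal its inverse is just its transpose. If that's right, the basis vectors ought to end up in the rows rather than the columns, with the translation being dot products against the position, something like this (where . is a dot product):

| rx   ry   rz   -(r.p) |
| ux   uy   uz   -(u.p) |
| -lx  -ly  -lz   (l.p) |
| 0    0    0     1     |

But I could easily be wrong about that, which is why I'm posting.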

Thanks for your reply.
Oops, I posted without being logged in. The above is me :P
"The Zen of Direct 3D" by Peter Walsh has a UVN camera class that does exactly what you are describing, LaMothe's new book (the big red one, tips of the rasterization gurus, or something?) has a few too, something you might want to look into. Also, if you have a linear algebra textbook, you might want to look into changing the basis of a system (I think that's what its called) where basically you can describe any point in space using three vectors that are not equal to a scaler times one of the other two. (simple, right?)
"Think you Disco Duck, think!" Professor Farnsworth

