This is a bit long, but I have a habit of explaining things adequately and giving lots of information, so a person isn't left to assume. Hopefully I've included everything...
Recently, I've been driving myself nuts with writing a camera class. The biggest part was figuring out what I wanted out of the class, and I think I have that now.
It seems like most cameras I come across work with three angles (heading/yaw, pitch, and roll) and a world-space camera position, and calculate the orientation from those. The camera I want is, ideally, identical to the one in Descent 1/2, an old DOS/early-Windows-era 3D shooter. It has six degrees of freedom, but more importantly, it rotates the camera around its own local axes rather than the world-space X, Y, and Z axes. What this means is that no matter what funky angle you've gotten yourself into, you always have three ways to rotate: around your local X, local Y, and local Z axes. In my mind, this eliminates keeping track of three angles and constantly rebuilding a matrix from them. Instead, I just rotate my local camera axes whenever I want, and my view adjusts accordingly. It seems easy... heh
The only problem I've yet to solve is how to handle rotating around two or three of the local axes simultaneously. Order matters there, and I've tried a few crazy methods to make it work, but for now I've put that aside. All I want right now is a camera that can strafe (translate) and rotate along its local axes, and even that won't work :(
As I understand it, an ortho-normal view matrix looks like this:
| rx ux lx px |
| ry uy ly py |
| rz uz lz pz |
| 0 0 0 1 |
Where <rx,ry,rz> is your right vector, <ux,uy,uz> is your up vector, <lx,ly,lz> is your look vector, and <px,py,pz> is your world-space camera position, negated and then transformed by the upper-left 3x3 rotation/orientation portion of this matrix. As for where the negative sign(s) go to account for OpenGL's negative-Z-forward convention, I'm not really sure at this point. Is my thinking correct?
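For what it's worth, one consequence of that layout can be sanity-checked: with the default axes (right = +X, up = +Y, look = -Z), negating the look vector in the third column yields exactly the identity matrix, which matches the "a default view matrix should act as an identity matrix" intuition from the constructor comment below. A minimal sketch, assuming column-major float[16] storage as glLoadMatrixf expects (the buildViewRotation helper is my own illustration, not part of the camera class):

```cpp
#include <cstring>

// Hypothetical helper: build the rotation part of a column-major view
// matrix from camera axes, negating the look vector since OpenGL's
// default camera looks down -Z.
void buildViewRotation(const float r[3], const float u[3],
                       const float l[3], float m[16])
{
    std::memset(m, 0, sizeof(float) * 16);
    m[0] = r[0];  m[1] = r[1];  m[2]  = r[2];   // first column:  right
    m[4] = u[0];  m[5] = u[1];  m[6]  = u[2];   // second column: up
    m[8] = -l[0]; m[9] = -l[1]; m[10] = -l[2];  // third column:  -look
    m[15] = 1.0f;
}
```

One caveat: this particular check cannot distinguish putting the axes in the columns from putting them in the rows, since the identity matrix is its own transpose, so it says nothing about which of those two layouts a view matrix actually needs.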
So I thought, okay, I'll store the camera's local axes in three vectors. They start out unit length, and (perhaps later) I can renormalize them here and there to keep them orthonormal. I'll also store the camera's world-space position, and each time I apply the camera to the scene, I build the matrix and load it.
In my test program, only pitch (local x-axis) and heading (local y-axis) are enabled to start. As long as I rotate around just one axis at a time, all the movement is as it should be: I can pitch upwards and downwards and strafe in all six directions correctly, and the same goes for only turning left and right. But as soon as I do one rotation and then another, it goes wrong. For instance, if I turn left by roughly 90 degrees and then pitch downward, instead of the world pitching as it should, the world (a large flat textured quad for now!) appears to rotate around the WORLD's x-axis instead of my camera's. Why? I really don't get it :(
The code:
// Each game loop, when I process key input, I'll either set or
// zero out the velocities for strafes and rotations like this:
// and yea, that's SDL

// strafe right/left
if (keyStates[SDLK_d])
    cam.setRightVel(5.0f);
else if (keyStates[SDLK_a])
    cam.setRightVel(-5.0f);
else
    cam.setRightVel(0.0f);

// strafe up/down
if (keyStates[SDLK_w])
    cam.setUpVel(5.0f);
else if (keyStates[SDLK_s])
    cam.setUpVel(-5.0f);
else
    cam.setUpVel(0.0f);

// strafe forward/backwards
if (keyStates[SDLK_x])
    cam.setForwardVel(5.0f);
else if (keyStates[SDLK_z])
    cam.setForwardVel(-5.0f);
else
    cam.setForwardVel(0.0f);

// adjust heading left/right
if (keyStates[SDLK_LEFT])
    cam.setHeadingVel(45.0f);
else if (keyStates[SDLK_RIGHT])
    cam.setHeadingVel(-45.0f);
else
    cam.setHeadingVel(0.0f);

// adjust pitch up/down
if (keyStates[SDLK_UP])
    cam.setPitchVel(45.0f);
else if (keyStates[SDLK_DOWN])
    cam.setPitchVel(-45.0f);
else
    cam.setPitchVel(0.0f);
// The camera has these relevant data members
Vector3 right, up, look;
Vector3 position; // this is in world-space
Vector3 moveVel, rotateVel;
// I store a normal velocity vector in moveVel and the angular
// velocities for x/y/z rotation in rotateVel, 1 per component
float matrix[16]; // and the curséd orientation matrix!
// used in a column-major fashion
Camera::Camera()
{
    // I initialize the local camera axes to what the default
    // openGL camera view is, but obviously, there needs to be
    // a negative sign somewhere in the building of the matrix
    // since the look vector is 0,0,-1. A default view matrix
    // should act as an identity matrix, no?
    right = Vector3(1.0f, 0.0f, 0.0f);
    up    = Vector3(0.0f, 1.0f, 0.0f);
    look  = Vector3(0.0f, 0.0f, -1.0f);
    memset(matrix, 0, sizeof(float) * 16);
    matrix[15] = 1.0f;
}
void Camera::apply()
{
    // Build the orientation based on the local camera axes
    matrix[0]  = right.x;
    matrix[1]  = right.y;
    matrix[2]  = right.z;
    matrix[4]  = up.x;
    matrix[5]  = up.y;
    matrix[6]  = up.z;
    matrix[8]  = -look.x; // Is this where the negations go? I've found
    matrix[9]  = -look.y; // what seem like better results when right.z,
    matrix[10] = -look.z; // up.z, and look.z were negated instead.
                          // Those are the parts of the matrix that
                          // actually affect the final z value of the
                          // transformed point.

    // Transform the world-space position into camera space
    matrix[12] = matrix[0] * position.x + matrix[4] * position.y + matrix[8]  * position.z;
    matrix[13] = matrix[1] * position.x + matrix[5] * position.y + matrix[9]  * position.z;
    matrix[14] = matrix[2] * position.x + matrix[6] * position.y + matrix[10] * position.z;

    // Translation has to be the opposite of the camera's position
    matrix[12] *= -1;
    matrix[13] *= -1;
    matrix[14] *= -1;

    glLoadMatrixf(matrix);
}
// This function updates the position and rotations based on the
// velocities, which are in world-units per second, scaled by the
// amount of time elapsed each frame.
void Camera::update(float timeElapsed)
{
    float d, dt = timeElapsed / 1000.0f;
    Vector3 temp;

    // Add movement velocity to the position
    if (moveVel.x != 0.0f)
    {
        d = dt * moveVel.x;
        temp += (right * d);
    }
    if (moveVel.y != 0.0f)
    {
        d = dt * moveVel.y;
        temp += (up * d);
    }
    if (moveVel.z != 0.0f)
    {
        d = dt * moveVel.z;
        temp += (look * d);
    }
    position += temp;

    // Add rotational velocities. Order matters here!
    if (rotateVel.x != 0.0f)
        rotateX(rotateVel.x * dt);
    if (rotateVel.y != 0.0f)
        rotateY(rotateVel.y * dt);
    if (rotateVel.z != 0.0f)
        rotateZ(rotateVel.z * dt);
}
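On the "renormalize them here and there" idea from earlier: repeated incremental rotations accumulate floating-point drift, so the three axes slowly stop being unit length and mutually perpendicular. A common fix is to re-orthonormalize after updating, e.g. by treating look as the primary direction and rebuilding right and up from cross products. A minimal sketch using raw float[3] vectors (the helper names are my own; adapt to your Vector3):

```cpp
#include <cmath>

// Hypothetical helpers on raw float[3] vectors.
static void normalize3(float v[3])
{
    float len = std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    v[0] /= len; v[1] /= len; v[2] /= len;
}

static void cross3(const float a[3], const float b[3], float out[3])
{
    out[0] = a[1]*b[2] - a[2]*b[1];
    out[1] = a[2]*b[0] - a[0]*b[2];
    out[2] = a[0]*b[1] - a[1]*b[0];
}

// Rebuild an orthonormal frame: renormalize look, then derive right
// and up from cross products. With right = +X, up = +Y, look = -Z,
// the identities right = look x up and up = right x look hold.
void reorthonormalize(float right[3], float up[3], float look[3])
{
    normalize3(look);
    cross3(look, up, right);   // right = look x up
    normalize3(right);
    cross3(right, look, up);   // up = right x look
    normalize3(up);
}
```

Calling something like this once per frame (or every N frames) after the rotations keeps the drift bounded.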
// And the rotations of the actual local camera axes:
// I use openGL to generate a rotation matrix around each axis,
// retrieve the matrix, then do a manual multiply for each axis
// affected. Is this logic right?
void Camera::rotateX(float a)
{
    float m[16];
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
    glRotatef(a, right.x, right.y, right.z);
    glGetFloatv(GL_MODELVIEW_MATRIX, m);
    glPopMatrix();
    multMatrixByVector(m, &up);
    multMatrixByVector(m, &look);
}

void Camera::rotateY(float a)
{
    float m[16];
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
    glRotatef(a, up.x, up.y, up.z);
    glGetFloatv(GL_MODELVIEW_MATRIX, m);
    glPopMatrix();
    multMatrixByVector(m, &right);
    multMatrixByVector(m, &look);
}

void Camera::rotateZ(float a)
{
    float m[16];
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
    glRotatef(a, look.x, look.y, look.z);
    glGetFloatv(GL_MODELVIEW_MATRIX, m);
    glPopMatrix();
    multMatrixByVector(m, &right);
    multMatrixByVector(m, &up);
}
// and just in case I really suck and miss some error in this:
// remember column-major matrices in openGL!
// and yes, this does skip multiplying the fourth row since, as
// far as I know, the 1 in the 4th coordinate will never be
// affected when I'm only multiplying vectors by rotation
// matrices.
void Camera::multMatrixByVector(float* m, Vector3* v)
{
    float temp[3];
    for (int i = 0; i < 3; i++)
        temp[i] = m[i] * v->x + m[i + 4] * v->y + m[i + 8] * v->z;
    v->x = temp[0];
    v->y = temp[1];
    v->z = temp[2];
}
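As an aside on the rotateX/Y/Z helpers: the glLoadIdentity/glRotatef/glGetFloatv round-trip works, but a vector can be rotated about an arbitrary unit axis directly with Rodrigues' rotation formula, with no GL state involved. A sketch (angle in degrees and counterclockwise about the axis, to match glRotatef; the function is my own, not part of the class):

```cpp
#include <cmath>

// Rotate vector v in place about unit axis k by angle a, using
// Rodrigues' formula:
//   v' = v cos(a) + (k x v) sin(a) + k (k . v)(1 - cos(a))
void rotateAboutAxis(float v[3], const float k[3], float a)
{
    const float rad = a * 3.14159265358979f / 180.0f;
    const float c = std::cos(rad), s = std::sin(rad);
    const float d  = k[0]*v[0] + k[1]*v[1] + k[2]*v[2]; // k . v
    const float cx = k[1]*v[2] - k[2]*v[1];             // k x v
    const float cy = k[2]*v[0] - k[0]*v[2];
    const float cz = k[0]*v[1] - k[1]*v[0];
    const float r[3] = {
        v[0]*c + cx*s + k[0]*d*(1.0f - c),
        v[1]*c + cy*s + k[1]*d*(1.0f - c),
        v[2]*c + cz*s + k[2]*d*(1.0f - c),
    };
    v[0] = r[0]; v[1] = r[1]; v[2] = r[2];
}
```

Besides skipping the driver round-trip, this also makes the rotation code testable without a GL context.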
Help is greatly greatly greatly appreciated. I've been studying and tinkering with 3D math and programming for many years now, and I feel stupid that I still don't grasp things like this :P