
Focal point with first person terrain navigation

Started by mllobera, January 20, 2023 09:45 AM
16 comments, last by JoeJ 1 year, 10 months ago

Okay… so I am still having problems with this.

Originally, as I moved through the terrain, I was moving the camera position but not the focal point it was looking at. I use the direction of projection vector (from the camera to the focal point) to determine the forward (or backward) movement direction. Obviously, as I got close to the focal point (which is set on a feature of my terrain), I eventually ran into some problems. So the idea of keeping the focal point tethered to the center of the window, as @joej suggested, seemed correct to me. However, the problem now is that when the center of the screen points at the sky, depending on how I continue to move, my direction of projection keeps pointing more and more into the sky (does this make sense?).

I am indeed moving the camera position with the cursor keys.

Any additional thoughts or comments?

PS. Previously I used the mouse to select the camera focal point. This seemed to work fine provided I selected points that were not immediately around the camera but further afield. I just wanted to avoid having to constantly select a focal point.

mllobera said:
Originally, as I moved through the terrain, I was moving the camera position but not the focal point it was looking at.

Now I'm a bit confused about the definition of ‘focal point’ too.
Do you mean something like a locked target, which we then circle around while keeping it in view? Or like clicking a vertex in a 3D modeling tool, so the camera stays focused on the vertex and movement orbits the selection?

Ofc. such an orbiting camera mode has problems if the camera sits at its center of interest. The difference vector becomes zero, so we can no longer calculate a robust forward direction to look at.
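For illustration, a minimal sketch of that degenerate case (names like camera.position, focalPoint, forward and previousForward are hypothetical, assuming a generic vec3 with length(), not your package's API):

vec3 toTarget = focalPoint - camera.position; // offset from the camera to the locked target
float dist = length(toTarget);
if (dist > 0.001f)
	forward = toTarget / dist;  // robust as long as we keep some distance
else
	forward = previousForward;  // at the target the offset is ~zero, so no direction can be derived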

mllobera said:
So the idea of keeping the focal point tethered to the center of the window, as @joej suggested, seemed correct to me. However, the problem now is that when the center of the screen points at the sky, depending on how I continue to move, my direction of projection keeps pointing more and more into the sky (does this make sense?).

Now this sounds like the standard first person shooter camera. There is no center to keep focused on; instead we can rotate the camera freely around itself, which also sets the forward direction for movement.
But if you look straight upwards (or downwards) while standing on a flat plane, the view vector projected onto the ground plane becomes zero, so we can't robustly calculate the direction of movement.
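For illustration, a minimal sketch of that projection (assuming Y is up and a generic vec3 with dot() and length(); moveDirection and previousMoveDirection are hypothetical names):

vec3 up (0,1,0);                                  // world up axis
vec3 flat = eyeVector - up * dot(eyeVector, up);  // remove the vertical component of the view vector
if (length(flat) > 0.001f)
	moveDirection = flat / length(flat);          // fine while not looking straight up or down
else
	moveDirection = previousMoveDirection;        // straight up/down: the projection collapses to zero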

The standard solution is to use two Euler angles for a FPS camera. One angle controls the forward vector perpendicular to the up vector; you set it by moving the mouse left and right.
The second angle controls looking up or down, and we usually clamp it to the range of -90 to 90 degrees, so you can not ‘overspin’ while looking up and end up looking backwards while still moving forwards.
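In pseudocode the idea is just this (angles in radians; the 0.01f sensitivity is an arbitrary guess to tweak):

float forwardsAngle = 0; // mouse left/right, no limit needed
float lookUpAngle = 0;   // mouse up/down
forwardsAngle += float(mouseDeltaX) * 0.01f;
lookUpAngle += float(mouseDeltaY) * 0.01f;
lookUpAngle = max(-0.49f * float(M_PI), min(0.49f * float(M_PI), lookUpAngle)); // clamp just inside +/-90 degrees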

Using the third Euler angle would allow tilting (rolling) the camera, as seen in the game Quake when the player dies. But that's not really useful while playing and is usually omitted to avoid player confusion.
A game which did use full rotational freedom across all axes was Descent. In that game you do not really need a sense of what's up or down, so the confusion does not come up.

mllobera said:
I am indeed moving the camera position with the cursor keys.

Why? Controlling the camera with the mouse is the greatest invention of 3D gaming! :D
It's key to the kind of first person immersion that console game pads can not deliver.

But ofc. it depends on the game which kind of camera works best. However, all our input devices are limited. The mouse is the best, but it has only two axes to navigate 3D space, so there is always some compromise to make.
But it should work to replicate the camera of a certain game you already know.


It is probably my fault that you are confused.

I am using a scientific package that is based on OpenGL. I am trying to navigate through a terrain tessellation using a first person view. Like OpenGL, the software allows you to define a camera with a position (in world coordinates), a focal point (in world coordinates), a view-up vector and so on.

To mimic moving, I would update the position of the camera using the ‘direction of projection’ (which in this case is defined as the vector from the camera position to the focal point), or an orthogonal vector (the cross product of this vector with the view-up vector) in case I was moving sideways. Hence, my movement was ultimately linked to the focal point (which is not being updated).
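Roughly, in pseudocode, that amounts to this (forwardInput and strafeInput being the amounts coming from the cursor keys; vec3, cross() and normalize() are assumed helpers):

vec3 dop = normalize(camera.focalPoint - camera.position);  // direction of projection
vec3 side = normalize(cross(dop, camera.viewUp));           // sideways movement axis
camera.position += dop * forwardInput + side * strafeInput; // the focal point itself is never updated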

I was using the cursor keys to move around on the terrain and the mouse to look around when at a certain point.

I am still not quite certain how to proceed, because it makes no sense to me that the target should not be linked to the motion of the view, BUT if I do link it I run into the problem I mentioned earlier (if that makes sense at all).

Apologies for any confusion.

M

mllobera said:
Like OpenGL, the software allows you to define a camera with a position (in world coordinates), a focal point (in world coordinates), a view-up vector and so on.

I would try to get free from any conventions this system imposes. Which might be as easy as this:

camera.position = myCamera.position;
camera.focalPoint = myCamera.position + myCamera.eyeVector;
camera.viewUp = vec3(0,1,0); // depending on what your up vector is - usually Y or Z

It seems your package sets up its camera using a ‘Look At’ method, which is pretty common and requires exactly such data.
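Roughly, such a ‘Look At’ builds the view basis from exactly those three inputs - a sketch of the idea, not your package's actual internals:

vec3 forward = normalize(camera.focalPoint - camera.position); // viewing direction
vec3 right = normalize(cross(forward, camera.viewUp));         // perpendicular to forward and up
vec3 up = cross(right, forward);                               // re-orthogonalized up
// these three axes plus camera.position define the view (camera) matrix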

Let's say you have a first person character on your terrain, with such basic data and functionality:

struct Player
{
	 static constexpr vec3 up {0,1,0}; // world up axis (Y-up)
	 
	 vec3 position; // of the feet on ground
	 float forwardsAngle;
	 float lookUpAngle;
	 
	 vec3 CalcForwardsVector ()
	 {
	 	return vec3 (sin(forwardsAngle), 0, cos(forwardsAngle)); // stays in the XZ plane 		
	 }
	 
	 vec3 CalcSideVector ()
	 {
	 	return normalized(cross(CalcForwardsVector(), up));	 		
	 }

	 vec3 CalcEyeVector ()
	 {
	 	return sin(lookUpAngle) * up + cos(lookUpAngle) * CalcForwardsVector();
	 }

	 void MouseLookInput(int mouseDeltaX, int mouseDeltaY)
	 {
	 	forwardsAngle += float(mouseDeltaX) * 0.01f;
	 	lookUpAngle += float(mouseDeltaY) * 0.01f;
	 	lookUpAngle = max(-0.49f * float(M_PI), min(0.49f * float(M_PI), lookUpAngle)); // clamp just inside +/-90 degrees
	 }
	 
	 void MovementInput (int pressedKeys)
	 {
	 	float speed = 0;
	 	if (pressedKeys & UP) speed = 0.1f;
	 	if (pressedKeys & DOWN) speed = -0.1f;
	 	float strafe = 0;
	 	if (pressedKeys & LEFT) strafe = 0.1f;
	 	if (pressedKeys & RIGHT) strafe = -0.1f;
	 	position += speed * CalcForwardsVector() + strafe * CalcSideVector();
	 	position.y = SampleHeightMap (position.x, position.z);
	 }
}; 

Then you could set up the system's camera like this:

vec3 eyePos = player.position + player.up * 1.8f; // offset height from feet to eyes
camera.position = eyePos;
camera.focalPoint = eyePos + player.CalcEyeVector();
camera.viewUp = player.up;

It should work I guess, assuming I made no bugs. It may be necessary to swap cos and sin, negate angles, etc. But then it should behave like a simple FPS game.
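As a usage sketch, the per-frame order would roughly be:

// each frame:
player.MouseLookInput(mouseDeltaX, mouseDeltaY); // 1. turn the view from the mouse movement
player.MovementInput(pressedKeys);               // 2. move along the (possibly new) forward direction
// 3. then push position / focalPoint / viewUp to the package's camera as shown above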

Thanks @JoeJ !!! Let me go through your code and see if I can make sense of it. Thanks again!!!!

@JoeJ Just to clarify:

mouseDeltaX and mouseDeltaY are the displacement in viewport units due to the mouse movement (have these been normalized by the size of the screen?)


mllobera said:
mouseDeltaX and mouseDeltaY are the displacement in viewport units due to the mouse movement (have these been normalized by the size of the screen?)

Yes. To get them, it's common to force the mouse cursor back to the center of the screen every frame and hide it visually, so the user can move a big enough distance from the screen center towards an edge of the screen without being stopped by the edge within one frame.
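On Windows, a minimal sketch of that re-centering trick could look like this (GetCursorPos / SetCursorPos / ShowCursor are the relevant WinAPI calls; centerX and centerY would be your screen or window center):

#include <windows.h>

void ReadMouseDelta (int centerX, int centerY, int& deltaX, int& deltaY)
{
	POINT p;
	GetCursorPos (&p);               // where the cursor ended up this frame
	deltaX = p.x - centerX;          // movement in pixels since we last re-centered
	deltaY = p.y - centerY;
	SetCursorPos (centerX, centerY); // snap back so the cursor never hits a screen edge
}

Hide the cursor once with ShowCursor(FALSE) while in this mode, and show it again when leaving.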

The constants in my code are all just arbitrarily guessed numbers which you need to tweak. (I assumed the mouse delta to be in pixels, because that's what the Windows OS usually gives us.)

