Compute cube edges in screenspace
Why are you so set on doing this as a screen space operation? I think we need to understand this to proceed.
Unless I am misunderstanding, this is a problem that is not suited to being solved in screen space when such a trivial solution exists in view space (as has been described: transform the faces to view space, then check each edge to see whether the normals of its two connected faces point in opposite directions relative to the camera, using a simple dot product per face).
I'm only looking at how to solve it in screen space because I thought it was required: depending on the orientation of the cube on screen, its outline is 4 or 6 edges, and figuring out which of the two it is seems like a screen space problem to me.
For your method, why are you checking to see if faces point away from each other? Won't every face of a cube point away from every other face? Also, is this the method that C0lumbo suggested?
Yes, it is as suggested by the other poster.
Think about the faces in view space, not object space. Once you have transformed the vertices into view space, when you calculate the normals they will either be pointing towards the camera or away from the camera.
If you take an edge and consider the normals of the two faces the edge connects, if they both point towards the camera or they both point away from the camera, then the edge is not part of the silhouette. If one points towards the camera and the other points away, the edge is part of the silhouette.
So all you need to know is whether the two view-space normals agree about the camera. Dot each face normal with the view direction (in view space that is just the normal's z component) and compare the signs: if the signs differ, one face is front-facing and the other is back-facing, and the edge is one of the silhouette edges.
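A minimal sketch of that test (illustrative only; the helper names are made up, and it assumes the face normals are already in view space with the camera looking down the z axis):

// Illustrative sketch only: names here are hypothetical, not from the posted code.
// Assumes view-space face normals and a camera looking down the z axis.
bool FacesCamera(Vector3f ViewSpaceNormal)
{
    // The sign of the view-space z component says which way the face points;
    // the exact convention (> 0 vs < 0) depends on your winding order.
    return ViewSpaceNormal.z > 0;
}

bool IsSilhouetteEdge(int FaceA, int FaceB, Vector3f FaceNormal[6])
{
    // A silhouette edge is one where exactly one of the two faces sharing it faces the camera.
    return FacesCamera(FaceNormal[FaceA]) != FacesCamera(FaceNormal[FaceB]);
}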
Hmm, okay, that makes a lot more sense now. I've got some code for that, but it's unfinished:
int32* GetOutlineList(Vector3f Corners[8])
{
    // Assume winding order is CCW
    Vector3f FaceNormal[6];
    FaceNormal[0] = GetQuadNormal(Corners[0], Corners[2], Corners[3]); // Front Face
    FaceNormal[1] = GetQuadNormal(Corners[0], Corners[1], Corners[5]); // Right Face
    FaceNormal[2] = GetQuadNormal(Corners[4], Corners[5], Corners[7]); // Back Face
    FaceNormal[3] = GetQuadNormal(Corners[6], Corners[7], Corners[3]); // Left Face
    FaceNormal[4] = GetQuadNormal(Corners[0], Corners[4], Corners[6]); // Top Face
    FaceNormal[5] = GetQuadNormal(Corners[1], Corners[5], Corners[7]); // Bottom Face

    bool FacesScreen[6];
    FacesScreen[0] = FaceNormal[0].z > 0;
    FacesScreen[1] = FaceNormal[1].z > 0;
    FacesScreen[2] = FaceNormal[2].z > 0;
    FacesScreen[3] = FaceNormal[3].z > 0;
    FacesScreen[4] = FaceNormal[4].z > 0;
    FacesScreen[5] = FaceNormal[5].z > 0;
In the above function, I want it to return a pointer to 4 or 6 ints which are indices into the corner parameters (the corners provided are in view space). The issue I have is: let's say I find two edges to be visible and they share a point. I cannot simply say "if the edge is visible, add its two points", or else I will end up with a list that repeats points, which is inefficient for my purposes. I wanted to get around this by first traversing the faces left to right and then top to bottom and trying to figure out when I could add edges, but this has cases that I don't know how to account for. I also cannot build the edges right away when I find an edge to be visible, because I will be subdividing my cube into 8 smaller cubes (an octree), and I want to find out which 6 points are visible at the root so that I can propagate that down the tree when I render it.
struct EdgeKey
{
public:
    EdgeKey(int v1, int v2) : v1(v1 < v2 ? v1 : v2), v2(v1 > v2 ? v1 : v2) { }
    // The above constructor ensures that EdgeKey(1, 2) and EdgeKey(2, 1) are the same internally, for example.
    // The references to the two faces are omitted here; just ignore them in the hash so that they don't
    // lead to duplicates of the same edge.
    int v1, v2;
};

uint hash(EdgeKey){ /* ... */ }

std::set<EdgeKey> edges; // or whatever

void build()
{
    for(int i = 0; i < faces.count(); ++i)
    {
        for(int j = 0; j < faces[i].indices.count(); ++j)
        {
            // Wrap around so the last index connects back to the first, closing the face loop
            int k = (j < faces[i].indices.count() - 1 ? j + 1 : 0);
            edges.insert(EdgeKey(faces[i].indices[j], faces[i].indices[k]));
        }
    }
}
Once you have the unique edge set with pointers to the faces, it becomes trivial to iterate across this set to find out which edges form the silhouette on screen.
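For example, something like this (sketch only; it assumes EdgeKey has been extended with the two adjacent face indices that the snippet above deliberately omits, and that FaceNormal[] holds the view-space face normals):

// Sketch: assumes EdgeKey carries FaceA/FaceB, the indices of the two faces
// sharing the edge, and that FaceNormal[] holds the view-space face normals.
std::vector<EdgeKey> Silhouette;
for (const EdgeKey& Edge : edges)
{
    bool FrontA = FaceNormal[Edge.FaceA].z > 0; // faces the camera?
    bool FrontB = FaceNormal[Edge.FaceB].z > 0;
    if (FrontA != FrontB)                       // one front-facing, one back-facing
    {
        Silhouette.push_back(Edge);             // the edge is on the silhouette
    }
}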
Okay so I got an issue with identifying which faces are visible. When I render the points of my cube, I get this image:
As we can see, there are 3 faces visible here but the dot product code only identifies 2 as visible:
int32* GetOutlineList(Vector3f Corners[8])
{
    // Assume winding order is CCW
    Vector3f FaceNormal[6];
    FaceNormal[0] = GetQuadNormal(Corners[3], Corners[2], Corners[0]); //(Corners[0], Corners[2], Corners[3]); // Front Face
    FaceNormal[1] = GetQuadNormal(Corners[5], Corners[1], Corners[0]); //(Corners[0], Corners[1], Corners[5]); // Right Face
    FaceNormal[2] = GetQuadNormal(Corners[7], Corners[5], Corners[4]); //(Corners[4], Corners[5], Corners[7]); // Back Face
    FaceNormal[3] = GetQuadNormal(Corners[3], Corners[7], Corners[6]); //(Corners[6], Corners[7], Corners[3]); // Left Face
    FaceNormal[4] = GetQuadNormal(Corners[6], Corners[4], Corners[0]); //(Corners[0], Corners[4], Corners[6]); // Top Face
    FaceNormal[5] = GetQuadNormal(Corners[1], Corners[5], Corners[7]); //(Corners[7], Corners[5], Corners[1]); // Bottom Face

    // Check if facing camera
    bool FacesScreen[6];
    for (int32 i = 0; i < 6; ++i)
    {
        FacesScreen[i] = FaceNormal[i].z > 0;
    }
The vertices provided are in camera space, and the code for computing the normal of a quad is:
Vector3f GetQuadNormal(Vector3f P0, Vector3f P1, Vector3f P2)
{
    return Cross((P1 - P0), (P2 - P0));
}

Vector3f Cross(Vector3f Vec1, Vector3f Vec2)
{
    Vector3f Result;
    Result.x = Vec1.y*Vec2.z - Vec1.z*Vec2.y;
    Result.y = Vec1.z*Vec2.x - Vec1.x*Vec2.z;
    Result.z = Vec1.x*Vec2.y - Vec1.y*Vec2.x;
    return Result;
}
My corner index numbers are identified in the picture below:
The above cube is in model space; the front vertices have a positive z while the back vertices have a negative z. For some views, the dot product code properly identifies which faces are visible, but for whatever reason it can't figure out that 3 of the faces are visible for that view. Is comparing whether z > 0 the proper way of checking if the face is facing the screen? For reference, because the faces aren't properly identified, my rasterized cube outline ends up looking like this (the green points are the 6 visible vertices identified):
It's a fine point, hehehe, you're missing a visible vertex.
Your 3D cross product is right, so the good news is that the bug should be easy to find because there aren't many moving parts: are the points in the right order, are you giving it the correct camera location, etc.
Yes, the sign of the cross product (its z component) is sufficient for checking whether the face is visible.
The camera location is the origin, facing the positive z direction. I think I fixed the error. I did the same dot product test, but now my vertices are in projection space (I multiplied them by the projection matrix), and the points chosen are proper and in the right order. Here is how it looks:
This version changes the points given different view directions, whereas the previous version was static as the cube moved to different parts of the screen. My code doesn't account for the case of just 4 visible vertices yet, but I can add that special case. Basically, the difference is that instead of having the vertices in view space, they are in screen space, and it all ended up working out. Here is the current code; it's a little long and I found some resources online to help me with it, but it seems to be working properly:
Vector3f center;

bool CompareCcwOrder(Vector3f a, Vector3f b)
{
    if (a.x - center.x >= 0.0f && b.x - center.x < 0.0f)
    {
        return true;
    }
    if (a.x - center.x < 0.0f && b.x - center.x >= 0.0f)
    {
        return false;
    }
    if (a.x - center.x == 0.0f && b.x - center.x == 0.0f)
    {
        if (a.y - center.y >= 0.0f || b.y - center.y >= 0.0f)
        {
            return a.y > b.y;
        }
        return b.y > a.y;
    }

    // Compute cross product of (center -> a) x (center -> b)
    float32 det = (a.x - center.x) * (b.y - center.y) - (b.x - center.x) * (a.y - center.y);
    if (det < 0.0f)
    {
        return true;
    }
    if (det > 0.0f)
    {
        return false;
    }

    // Points a and b are on the same line from the center;
    // check which point is closer to the center
    float32 d1 = (a.x - center.x) * (a.x - center.x) + (a.y - center.y) * (a.y - center.y);
    float32 d2 = (b.x - center.x) * (b.x - center.x) + (b.y - center.y) * (b.y - center.y);
    return d1 > d2;
}
int32* GetOutlineList(Vector3f Corners[8])
{
    // Assume winding order is CCW
    Vector3f FaceNormal[6];
    FaceNormal[0] = GetQuadNormal(Corners[3], Corners[2], Corners[0]); //(Corners[0], Corners[2], Corners[3]); // Front Face
    FaceNormal[1] = GetQuadNormal(Corners[5], Corners[1], Corners[0]); //(Corners[0], Corners[1], Corners[5]); // Right Face
    FaceNormal[2] = GetQuadNormal(Corners[7], Corners[5], Corners[4]); //(Corners[4], Corners[5], Corners[7]); // Back Face
    FaceNormal[3] = GetQuadNormal(Corners[3], Corners[7], Corners[6]); //(Corners[6], Corners[7], Corners[3]); // Left Face
    FaceNormal[4] = GetQuadNormal(Corners[6], Corners[4], Corners[0]); //(Corners[0], Corners[4], Corners[6]); // Top Face
    FaceNormal[5] = GetQuadNormal(Corners[1], Corners[5], Corners[7]); //(Corners[7], Corners[5], Corners[1]); // Bottom Face

    // Check if facing camera
    bool FacesScreen[6];
    for (int32 i = 0; i < 6; ++i)
    {
        FacesScreen[i] = FaceNormal[i].z > 0;
    }

    // Find visible points in CCW order
    std::vector<Vector3f> IndexArray;
    if (FacesScreen[0] != FacesScreen[1]) // Front and Right
    {
        IndexArray.push_back({ Corners[1].x, Corners[1].y, 1 });
        IndexArray.push_back({ Corners[2].x, Corners[2].y, 2 });
    }
    if (FacesScreen[1] != FacesScreen[2]) // Right and Back
    {
        IndexArray.push_back({ Corners[5].x, Corners[5].y, 5 });
        IndexArray.push_back({ Corners[4].x, Corners[4].y, 4 });
    }
    if (FacesScreen[2] != FacesScreen[3]) // Back and Left
    {
        IndexArray.push_back({ Corners[6].x, Corners[6].y, 6 });
        IndexArray.push_back({ Corners[7].x, Corners[7].y, 7 });
    }
    if (FacesScreen[3] != FacesScreen[0]) // Left and Front
    {
        IndexArray.push_back({ Corners[2].x, Corners[2].y, 2 });
        IndexArray.push_back({ Corners[3].x, Corners[3].y, 3 });
    }
    if (FacesScreen[0] != FacesScreen[4]) // Front and Top
    {
        IndexArray.push_back({ Corners[0].x, Corners[0].y, 0 });
        IndexArray.push_back({ Corners[2].x, Corners[2].y, 2 });
    }
    if (FacesScreen[0] != FacesScreen[5]) // Front and Bottom
    {
        IndexArray.push_back({ Corners[3].x, Corners[3].y, 3 });
        IndexArray.push_back({ Corners[1].x, Corners[1].y, 1 });
    }
    if (FacesScreen[1] != FacesScreen[4]) // Right and Top
    {
        IndexArray.push_back({ Corners[4].x, Corners[4].y, 4 });
        IndexArray.push_back({ Corners[0].x, Corners[0].y, 0 });
    }
    if (FacesScreen[1] != FacesScreen[5]) // Right and Bottom
    {
        IndexArray.push_back({ Corners[1].x, Corners[1].y, 1 });
        IndexArray.push_back({ Corners[5].x, Corners[5].y, 5 });
    }
    if (FacesScreen[2] != FacesScreen[4]) // Back and Top
    {
        IndexArray.push_back({ Corners[4].x, Corners[4].y, 4 });
        IndexArray.push_back({ Corners[6].x, Corners[6].y, 6 });
    }
    if (FacesScreen[2] != FacesScreen[5]) // Back and Bottom
    {
        IndexArray.push_back({ Corners[5].x, Corners[5].y, 5 });
        IndexArray.push_back({ Corners[7].x, Corners[7].y, 7 });
    }
    if (FacesScreen[3] != FacesScreen[4]) // Left and Top
    {
        IndexArray.push_back({ Corners[2].x, Corners[2].y, 2 });
        IndexArray.push_back({ Corners[6].x, Corners[6].y, 6 });
    }
    if (FacesScreen[3] != FacesScreen[5]) // Left and Bottom
    {
        IndexArray.push_back({ Corners[7].x, Corners[7].y, 7 });
        IndexArray.push_back({ Corners[3].x, Corners[3].y, 3 });
    }
    // Calculate center
    center = {};
    for (int32 i = 0; i < IndexArray.size(); ++i)
    {
        center.x += IndexArray[i].x;
        center.y += IndexArray[i].y;
    }
    center.x /= IndexArray.size();
    center.y /= IndexArray.size();

    // Sort the elements in counter-clockwise order
    std::sort(IndexArray.begin(), IndexArray.end(), CompareCcwOrder);

    // Only take every second point since there are 2 copies of every point in IndexArray
    int32* Result = new int32[IndexArray.size() / 2];
    for (int32 i = 0; i < IndexArray.size() / 2; ++i)
    {
        Result[i] = (int32)IndexArray[i * 2].z;
    }
    return Result;
}
One thing that sucks about the above code is that for every edge, I have to spell out in code which points describe it, which makes the function really long. Another problem is that every point is included twice, since two edges can share a point, which is an inefficiency I want to get rid of. So far, however, except for the 4-vertex case, this code seems to be working properly. Also, when I choose vertices for edges, I store the index of the vertex in the z component, since the z value isn't actually needed for sorting in counter-clockwise order. I also need to get rid of the global center variable and maybe turn the comparison into a functor instead. If there are any modifications that can make this faster or simpler, please leave a comment. Thanks a lot for the help getting this working, everyone!
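One way to shorten it (sketch only; this just moves the same face/corner pairings from the if-blocks above into a hypothetical CubeEdge table and loops over it):

// Sketch: same face and corner pairings as GetOutlineList above, but table-driven.
struct CubeEdge { int32 FaceA, FaceB; int32 V0, V1; };

static const CubeEdge CubeEdges[12] =
{
    { 0, 1, 1, 2 }, // Front and Right
    { 1, 2, 5, 4 }, // Right and Back
    { 2, 3, 6, 7 }, // Back and Left
    { 3, 0, 2, 3 }, // Left and Front
    { 0, 4, 0, 2 }, // Front and Top
    { 0, 5, 3, 1 }, // Front and Bottom
    { 1, 4, 4, 0 }, // Right and Top
    { 1, 5, 1, 5 }, // Right and Bottom
    { 2, 4, 4, 6 }, // Back and Top
    { 2, 5, 5, 7 }, // Back and Bottom
    { 3, 4, 2, 6 }, // Left and Top
    { 3, 5, 7, 3 }, // Left and Bottom
};

// The twelve if-blocks then collapse into one loop:
for (int32 i = 0; i < 12; ++i)
{
    const CubeEdge& E = CubeEdges[i];
    if (FacesScreen[E.FaceA] != FacesScreen[E.FaceB])
    {
        IndexArray.push_back({ Corners[E.V0].x, Corners[E.V0].y, (float32)E.V0 });
        IndexArray.push_back({ Corners[E.V1].x, Corners[E.V1].y, (float32)E.V1 });
    }
}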
What I'd do is transform all the 8 corner points of the cube to 2D screen space, and then use a 2D convex hull algorithm to recompute which edges are the outside edges of the projected silhouette. That solution has the appeal of using only generic "stock" algorithms (3D->2D projection, and 2D convex hull), so it will be very easy to reason about and maintain in the long run.
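A sketch of that approach with a stock monotone-chain hull (illustrative only; Vector2f and the projection to screen space are assumed, not taken from the code above):

#include <algorithm>
#include <vector>

struct Vector2f { float x, y; };

// 2D cross product of (O->A) x (O->B); > 0 means a counter-clockwise turn.
static float Cross2D(const Vector2f& O, const Vector2f& A, const Vector2f& B)
{
    return (A.x - O.x) * (B.y - O.y) - (A.y - O.y) * (B.x - O.x);
}

// Andrew's monotone chain convex hull. Input: the 8 cube corners already
// projected to 2D screen space. Output: the hull points in counter-clockwise
// order, which is exactly the 4- or 6-point silhouette outline.
std::vector<Vector2f> ConvexHull(std::vector<Vector2f> Points)
{
    if (Points.size() < 3)
        return Points;

    std::sort(Points.begin(), Points.end(), [](const Vector2f& A, const Vector2f& B) {
        return A.x < B.x || (A.x == B.x && A.y < B.y);
    });

    std::vector<Vector2f> Hull(2 * Points.size());
    size_t K = 0;

    // Lower hull
    for (size_t i = 0; i < Points.size(); ++i)
    {
        while (K >= 2 && Cross2D(Hull[K - 2], Hull[K - 1], Points[i]) <= 0.0f)
            --K;
        Hull[K++] = Points[i];
    }

    // Upper hull
    for (size_t i = Points.size() - 1, T = K + 1; i > 0; --i)
    {
        while (K >= T && Cross2D(Hull[K - 2], Hull[K - 1], Points[i - 1]) <= 0.0f)
            --K;
        Hull[K++] = Points[i - 1];
    }

    Hull.resize(K - 1); // the last point repeats the first, drop it
    return Hull;
}

If you need the corner indices rather than the positions, you can carry the index along with each projected point, the same way the z component is used for that in the code above. Note that the counter-clockwise output assumes y points up; if your screen-space y points down, the winding flips.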
Good call!
Way easier than calculating silhouette edges, I think.