
Faster way to calculate normals

Started by July 13, 2003 06:34 PM
8 comments, last by -Egon- 21 years, 7 months ago
Hi all! I've finished version 2.0 of my mesh file format, and it now stores vertex data, index data, normals, binormals and tangents. I've written a .3DS loader that reads .3DS files, computes all the data and saves it zipped into a file of my own, but since I have both shared and non-shared vertices, I had to write a special algorithm to compute the normals. This is the first (obvious) algorithm I thought of:

for (int i=0; i<facesnumber; ++i)
{
    //get the 3 vertices in the face
    Vector3D v0 = vertexdata[faces[i].v1];
    Vector3D v1 = vertexdata[faces[i].v2];
    Vector3D v2 = vertexdata[faces[i].v3];

    //calculate side vectors
    Vector3D side0 = v0 - v1;
    Vector3D side1 = v0 - v2;

    //cross product them to get the face's normal
    Vector3D normal = side0 * side1;
    normal.Normalize();
    normaldata[faces[i].v1] = normaldata[faces[i].v1] + normal;
    normaldata[faces[i].v2] = normaldata[faces[i].v2] + normal;
    normaldata[faces[i].v3] = normaldata[faces[i].v3] + normal;
}

for (int i=0; i<vertexnumber; ++i)
    normaldata[i].Normalize();
It's lightning fast, but when a vertex is duplicated there are no more smooth normals, you can clearly see a straight edge. So I thought of this other algorithm:

for (int i=0; i<facesnumber; ++i)
{
    //get the 3 vertices in the face
    v0 = vertexdata[faces[i].v1];
    v1 = vertexdata[faces[i].v2];
    v2 = vertexdata[faces[i].v3];

    //calculate side vectors
    side0 = v0 - v1;
    side1 = v0 - v2;

    //cross product them to get the face's normal
    normal = side0 * side1;
    normal.Normalize();
    normalface[i] = normal;
}

int count = 0;
for (int v=0; v<vertexnumber; ++v)
{
    normal.LoadZero();

    //get the current vertex
    v4 = vertexdata[v];
    count = 0;

    for (int i=0; i<facesnumber; ++i)
    {
        //get the 3 vertices in the face
        v1 = vertexdata[faces[i].v1];
        v2 = vertexdata[faces[i].v2];
        v3 = vertexdata[faces[i].v3];

        //if the current vertex VALUE (not index!!) appears in the face,
        //accumulate the face normal into the vertex normal
        if ((v1 == v4) || (v2 == v4) || (v3 == v4))
        {
            count++;
            normal = normal + normalface[i];
        }
    }

    //average the normal over the faces that share the vertex
    if (count > 0)
    {
        normal = normal / count;
        normal.Normalize();
        normaldata[v] = normal;
    }
}
But it's SOOO DAMN SLOW!!!! The QUESTION: does anyone see any way to make it faster?? I have no more ideas... It's 1:30 AM and I'm very tired... Thanks!!

---------------------------------------
"It's only a bit, it can't hurt you..."
If you only do it once, does it matter that much? I realize slow loading times can be a pain, but a slower game is an even bigger pain =).
____________________________________________________________
AAAAA: American Association Against Adobe Acrobat
You know you hate PDFs...
There's a slightly better algorithm:

1) loop over faces, generate face normals (the first part of your algo). Sum the normals into each vertex in this stage.

2) loop over all vertices and normalize.
Raloth:

Yes, even if it's not at load time it's at design time, and I don't like waiting 30-40 seconds for every mesh with 60,000 faces...


JuNC:
That's exactly what I do in my first algorithm... But it causes the "straight edge problem" when a vertex is duplicated.


---------------------------------------
"It''s only a bit, it can''t hurt you..."
Oh sorry, I see what you mean.

In that case, can you do the normal calculation on the pre-split vertices?

You could merge the vertices again if you don't have them pre-split, but whether that would actually be faster I'm not sure (I'd tentatively say yes). Some sort of spatial hashing would let that go pretty quickly.

Create another array of indices, merge the vertices, and for each merged vertex store the index of the first vertex in its merged set at that vertex's position in index_array.

int index_array[num_verts]

for each vertex v:
    find v_merge in merge_sets to merge v with
    if v cannot be merged:
        add v to merge_sets
        i <- index of v
    else:
        i <- index of v_merge
    index_array[index of v] <- i

The face array then indexes into index_array, which in turn indexes into the vertex array (same with the normals).

You then do your first algorithm slightly modified, with a final pass to extract the normals into each vertex of the merge sets.


I'd speculate that with a good hashing structure you'd get far better than the O(n*m) your current algorithm gets. If there are further constraints on the vertex merging, you'll have to modify the basic merge system.
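If it helps, here's a rough sketch of that merge map in C++. It's untested and makes a couple of assumptions: your Vector3D exposes public x/y/z floats, duplicated vertices have bit-identical positions (quantize the coordinates before hashing if "close enough" vertices should merge too), and names like PosKey and BuildMergeMap are just made up for the sketch:

#include <cstdint>
#include <cstring>
#include <unordered_map>
#include <vector>

// Key for hashing a vertex position. Exactly equal positions
// hash together, so duplicated vertices land in the same set.
struct PosKey {
    float x, y, z;
    bool operator==(const PosKey& o) const {
        return x == o.x && y == o.y && z == o.z;
    }
};

struct PosKeyHash {
    size_t operator()(const PosKey& k) const {
        // combine the bit patterns of the three floats
        uint32_t bx, by, bz;
        std::memcpy(&bx, &k.x, sizeof(bx));
        std::memcpy(&by, &k.y, sizeof(by));
        std::memcpy(&bz, &k.z, sizeof(bz));
        size_t h = bx;
        h = h * 31 + by;
        h = h * 31 + bz;
        return h;
    }
};

// For every vertex, store the index of the first vertex that
// shares its position (one hash lookup per vertex).
void BuildMergeMap(const Vector3D* vertexdata, int vertexnumber,
                   std::vector<int>& index_array)
{
    std::unordered_map<PosKey, int, PosKeyHash> first_seen;
    index_array.resize(vertexnumber);
    for (int v = 0; v < vertexnumber; ++v) {
        PosKey key = { vertexdata[v].x, vertexdata[v].y, vertexdata[v].z };
        auto it = first_seen.find(key);
        if (it == first_seen.end()) {
            first_seen[key] = v;          // v starts a new merge set
            index_array[v] = v;
        } else {
            index_array[v] = it->second;  // merge with the first occurrence
        }
    }
}

With index_array built, your fast first algorithm stays almost unchanged: accumulate each face normal into normaldata[index_array[faces[i].v1]] (and likewise for v2 and v3), normalize those, then do a final pass copying normaldata[index_array[v]] into normaldata[v] for every vertex. That's roughly O(n + m) instead of O(n*m).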
Why are you dividing normal by count if you're simply normalizing it in the next call anyway? Not that it'll have huge gains, but saving 3 divides per loop iteration is something, isn't it?

Also, why are you using the temporary variables v1, v2, and v3 in your innermost loop? All that does is take memory from one place, copy it to another, then do the compares. Why not just do the compares on the data that's already in memory rather than copying it around first? Sure, it's a bit harder to read that way, but why copy 36 bytes around before you make each check?

And again in your first loop, you're storing data in variables when the data is right there, copying it just so you can use it once (v1 and v2). That's a useless waste of memory, cache, and CPU cycles. Hope this gets you thinking a bit about optimizations.
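For example, the innermost test can read straight out of the arrays; a rough sketch using the same names as the code above:

// compare against the stored vertices directly, no temporary copies
if (vertexdata[faces[i].v1] == v4 ||
    vertexdata[faces[i].v2] == v4 ||
    vertexdata[faces[i].v3] == v4)
{
    count++;
    normal = normal + normalface[i];
}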
Yeah, there are a few micro-optimizations to do, but it doesn't alter the fact that the algorithm sucks.
The code is far from good, because it comes from v1.0, where there were no indices (believe it or not, I computed normals using only offsets into one huge pointer holding all the data... I rewrote some code, added math classes and adapted this old code). The problem is the algorithm itself: its cost is O(m*n). I was looking for an algorithm with a better cost. I'll optimize this code if I don't find a better algorithm.

Thanks all!

---------------------------------------
"It''s only a bit, it can''t hurt you..."
What's the problem with my idea? (Well, it's not mine, it's fairly standard.) It's definitely not much slower than linear, depending on the hashing structure you use.
There's no problem, I think I'll go that way indeed.

Maybe I'll preprocess the mesh to merge the vertex data, and eventually I'll build triangle strips. It should be faster, shouldn't it?


---------------------------------------
"It''s only a bit, it can''t hurt you..."
