
Question about space transform.

Started by June 11, 2019 10:23 PM
1 comment, last by JoeJ 5 years, 7 months ago

Hello everyone,

Recently I have been learning GPU voxelization, and I am reading someone else's implementation (GPU Voxelization). It uses an orthographic camera to voxelize the scene. As I read the implementation, the space transforms confuse me a lot. The code is:


				RWTexture3D<uint> RG0;
				struct v2g
				{
					float4 pos : SV_POSITION;
					half4 uv : TEXCOORD0;
					float3 normal : TEXCOORD1;
					float angle : TEXCOORD2;
				};
				
				struct g2f
				{
					float4 pos : SV_POSITION;
					half4 uv : TEXCOORD0;
					float3 normal : TEXCOORD1;
					float angle : TEXCOORD2;
				};
				
				
				v2g vert(appdata_full v)
				{
					v2g o;
					
					float4 vertex = v.vertex;
					
					o.normal = UnityObjectToWorldNormal(v.normal);
					float3 absNormal = abs(o.normal); // computed but not used here
					
					o.pos = vertex; // position is passed through in object space
					
					o.uv = float4(TRANSFORM_TEX(v.texcoord.xy, _MainTex), 1.0, 1.0);
					
					
					return o;
				}
				
				
				[maxvertexcount(3)]
				void geom(triangle v2g input[3], inout TriangleStream<g2f> triStream)
				{
					v2g p[3];
					for (int i = 0; i < 3; i++)
					{
						p[i] = input[i];
						p[i].pos = mul(unity_ObjectToWorld, p[i].pos);						
					}
					

					float3 realNormal = float3(0.0, 0.0, 0.0);
					
					float3 V = p[1].pos.xyz - p[0].pos.xyz;
					float3 W = p[2].pos.xyz - p[0].pos.xyz;
					
					realNormal.x = (V.y * W.z) - (V.z * W.y);
					realNormal.y = (V.z * W.x) - (V.x * W.z);
					realNormal.z = (V.x * W.y) - (V.y * W.x);
					
					float3 absNormal = abs(realNormal);
					

					// Decide which axis to project along (we want the projection with the largest area)
					int angle = 0;
					if (absNormal.z > absNormal.y && absNormal.z > absNormal.x)
					{
						angle = 0;
					}
					else if (absNormal.x > absNormal.y && absNormal.x > absNormal.z)
					{
						angle = 1;
					}
					else if (absNormal.y > absNormal.x && absNormal.y > absNormal.z)
					{
						angle = 2;
					}
					else
					{
						angle = 0;
					}
					
					for (int i = 0; i < 3; i ++)
					{
						// SEGIVoxelViewFront, SEGIVoxelViewLeft and SEGIVoxelViewTop are matrices sent by the script. Because we may project from the front, left or top, we need these transform matrices.
						if (angle == 0)
						{
							p[i].pos = mul(SEGIVoxelViewFront, p[i].pos);					
						}
						else if (angle == 1)
						{
							p[i].pos = mul(SEGIVoxelViewLeft, p[i].pos);					
						}
						else
						{
							p[i].pos = mul(SEGIVoxelViewTop, p[i].pos);		
						}
						
						p[i].pos = mul(UNITY_MATRIX_P, p[i].pos);
						
						#if defined(UNITY_REVERSED_Z)
						p[i].pos.z = 1.0 - p[i].pos.z;	
						#else 
						p[i].pos.z *= -1.0;	
						#endif
						
						p[i].angle = (float)angle;
					}
					
					triStream.Append(p[0]);
					triStream.Append(p[1]);
					triStream.Append(p[2]);
				}

				float4 frag (g2f input) : SV_TARGET
				{
					// This is the coordinate of the voxel. VoxelResolution is an integer sent by the C# script, indicating the resolution of the voxel space.
					int3 coord = int3((int)(input.pos.x), (int)(input.pos.y), (int)(input.pos.z * VoxelResolution));
					// Then the author inserts the information into the RWTexture3D<uint> RG0, using coord as the index (rest of the fragment shader omitted).
				}
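The axis selection in the geometry shader picks whichever direction gives the projected triangle the largest area, so no triangle degenerates to a sliver during rasterization. The same logic can be sketched in plain Python (an illustration of the selection math only, not shader code):

```python
def dominant_axis(normal):
    """Pick the projection axis that maximizes the projected triangle area.

    Mirrors the geometry shader's angle selection:
    0 = front (project along z), 1 = left (along x), 2 = top (along y).
    """
    ax, ay, az = (abs(c) for c in normal)
    if az > ay and az > ax:
        return 0  # z dominates: project onto the xy plane (front view)
    if ax > ay and ax > az:
        return 1  # x dominates: project onto the yz plane (left view)
    if ay > ax and ay > az:
        return 2  # y dominates: project onto the xz plane (top view)
    return 0      # ties fall through to the front view, as in the shader

# A face normal pointing mostly up is voxelized from the top:
print(dominant_axis((0.1, 0.9, 0.2)))  # -> 2
```

Note that exact ties between components fall through to the final else branch, so the front view is the default, just like the shader's trailing `angle = 0`.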

The output of the vertex shader is still in local space, right? I don't see any space transform in the code above. In the geometry shader, the vertices are first transformed to world space:

p[i].pos = mul(unity_ObjectToWorld, p[i].pos);

Then they are multiplied by UNITY_MATRIX_P. Now the x, y, z of p.pos should be in the range (0, 1), because this is an orthographic camera (w is 1). Finally they are passed to the fragment shader.

However, I can't understand the line:  int3 coord = int3((int)(input.pos.x), (int)(input.pos.y), (int)(input.pos.z * VoxelResolution));

It seems the x and y values are already mapped to (0, VoxelResolution), while z is in (0, 1)? I feel there are some internal transforms happening between the geometry shader and the fragment shader. What are these internal transforms? And how does my camera know my target resolution? There is no code in the script that controls the resolution of the camera's screen. The camera's settings are:


        voxelCameraGO = new GameObject("SEGI_VOXEL_CAMERA");
        voxelCameraGO.hideFlags = HideFlags.HideAndDontSave;
 
        voxelCamera = voxelCameraGO.AddComponent<Camera>();
        voxelCamera.enabled = false;
        voxelCamera.orthographic = true;
        voxelCamera.orthographicSize = voxelSpaceSize * 0.5f;
        voxelCamera.nearClipPlane = 0.0f;
        voxelCamera.farClipPlane = voxelSpaceSize;
        voxelCamera.depth = -2;
        voxelCamera.renderingPath = RenderingPath.Forward;
        voxelCamera.clearFlags = CameraClearFlags.Color;
        voxelCamera.backgroundColor = Color.black;
        voxelCamera.useOcclusionCulling = false;

For example, if my VoxelResolution is 256, how does the fragment shader know my screen is 256×256?
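If I had to guess, the missing piece is the fixed-function viewport transform that the rasterizer applies to SV_POSITION between the geometry and fragment shaders. A Python sketch of what I assume that math looks like (an illustration only, not actual GPU code, and ignoring half-pixel offsets):

```python
def viewport_transform(clip_pos, width, height):
    """Fixed-function step the rasterizer applies to SV_POSITION.

    After the perspective divide, x and y in NDC (-1..1) are mapped to
    pixel coordinates (0..width, 0..height), while z stays in 0..1.
    """
    x, y, z, w = clip_pos
    ndc = (x / w, y / w, z / w)                 # perspective divide (w == 1 for ortho)
    px = (ndc[0] * 0.5 + 0.5) * width           # -1..1  ->  0..width
    py = (1.0 - (ndc[1] * 0.5 + 0.5)) * height  # -1..1  ->  0..height (y flipped)
    return (px, py, ndc[2])                     # z is left in 0..1

# Clip-space origin lands in the middle of a 256x256 target:
print(viewport_transform((0.0, 0.0, 0.5, 1.0), 256, 256))  # -> (128.0, 128.0, 0.5)
```

That would explain why input.pos.xy already arrive in pixel units in the fragment shader, while input.pos.z still needs the manual multiply by VoxelResolution. But it still leaves my question about where the 256×256 size itself comes from.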

15 hours ago, TBWorkss said:

There is no code to control the resolution of the camera's screen in the script. The camera's setting is:

 

It seems the resolution is set here:

15 hours ago, TBWorkss said:

voxelCamera.orthographicSize = voxelSpaceSize * 0.5f;

 

I guess if voxelSpaceSize is 128, for example, then this line:

15 hours ago, TBWorkss said:

int3 coord = int3((int)(input.pos.x), (int)(input.pos.y), (int)(input.pos.z * VoxelResolution));

...would have signed x and y in (-64, 63) but unsigned z in (0, 127), which confuses me. But maybe it helps :)
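For what it's worth: if the voxel camera actually renders into a render texture whose width and height equal VoxelResolution (which I assume is assigned somewhere else in the script, e.g. via voxelCamera.targetTexture — I haven't checked), then the whole chain would work out, since the rasterizer's viewport transform scales NDC by the render-target size. A hypothetical Python sketch of the front-view mapping under that assumption (world_to_voxel is my own illustrative helper, not SEGI code, and it ignores the y flip):

```python
def world_to_voxel(world_x, world_y, world_z, voxel_space_size, resolution):
    """Hypothetical end-to-end mapping for the front view.

    Orthographic projection with orthographicSize = voxelSpaceSize * 0.5
    maps world x/y in [-size/2, size/2] to NDC [-1, 1]; depth between the
    near plane (0) and far plane (voxelSpaceSize) maps to [0, 1].  The
    viewport transform then scales x/y by the render-target resolution,
    which is why the fragment shader only has to scale z by hand.
    """
    half = voxel_space_size * 0.5
    ndc_x = world_x / half                      # [-half, half] -> [-1, 1]
    ndc_y = world_y / half
    depth = world_z / voxel_space_size          # [0, far] -> [0, 1]
    vx = int((ndc_x * 0.5 + 0.5) * resolution)  # viewport transform (rasterizer)
    vy = int((ndc_y * 0.5 + 0.5) * resolution)
    vz = int(depth * resolution)                # the shader's "* VoxelResolution"
    return (vx, vy, vz)

# Center of a 128-unit voxel volume at resolution 256:
print(world_to_voxel(0.0, 0.0, 64.0, 128.0, 256))  # -> (128, 128, 128)
```

Under that reading, x and y are unsigned (0, VoxelResolution) by the time the fragment shader sees them, not signed.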

