Textures
Hello, I'm new to the forum, so I hope I'm posting in the right section.
I only learned how to model a few weeks ago, with SketchUp, because 3ds Max is too hard for me.
I've made a model, but I have a problem with the textures:
At first the model looks fine: http://img179.imageshack.us/img179/3397/tb1.jpg
But after exporting it and importing it as an .X file, it has really low contrast, something like this:
http://img179.imageshack.us/img179/2617/tb2.jpg
Does anyone know how this can be solved? Giving each group the same texture in a slightly different shade seems a bit sloppy to me.
Thanks
The difference between the images has nothing to do with textures.
The first picture looks like it has normal group edges explicitly drawn with black lines. This is a commonly used modeling aid, but it is very rarely used in final rendering, except for some CAD programs where seeing discontinuous edges can be desirable.
D3D (and thus .x files) has no concept of normal groups at all, so you won't get the same result automatically - the group data is discarded when you export to a .x file.
For similar results in D3D at runtime, you could try applying an edge detection shader and drawing black in the pixels that fall on an edge. For precise results, you'd need D3D10 and a geometry shader to find the actual edges of the geometry based on adjacency and surface continuity.
A good, modern way of increasing edge contrast is to use a technique called ambient occlusion. It results in very realistic outlines on geometry details.
Niko Suni
Thanks for the response, first of all,
but I'm having some trouble understanding edge detection shaders and ambient occlusion.
So, an edge detection shader is something you have to implement in code (I'm using C#)? You determine each vertex's position relative to the camera and then calculate a value to darken it?
And ambient occlusion is determined by the lighting of the model?
Both of the techniques I mentioned can be done with shaders or on the CPU. The implementation details will differ between the two, though.
Shaders are commonly programmed in a high-level shading language (HLSL in D3D) that resembles C in many ways but adds vectorized data types and intrinsic functions for graphics-specific calculations. If you use shaders, your C# application works as a host for the shader system, feeding the GPU any data the shaders need.
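To give you an idea, here is a minimal HLSL pixel shader; the variable name gLightDirW and the basic Lambert lighting are just assumptions for the sketch, not anything from your project:

// A minimal pixel shader showing HLSL's C-like syntax, vector types
// (float3/float4) and graphics intrinsics (dot, normalize, saturate).
float3 gLightDirW; // light direction in world space, set by the host (C#) app

float4 SimplePS(float3 normalW : TEXCOORD0) : COLOR
{
    // Basic Lambert term: how directly the surface faces the light.
    float ndotl = saturate(dot(normalize(normalW), -gLightDirW));
    return float4(ndotl, ndotl, ndotl, 1.0f); // replicate to grayscale RGB
}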
The XNA library is intended to be used with C# directly and simplifies game programming somewhat, but you can also use SlimDX for close equivalence to raw D3D if you need more control at the expense of simplicity.
If you use the CPU, you can use C# directly for these tasks. However, the GPU (running shaders) can be an order of magnitude faster at most graphics tasks.
Please be aware that the techniques I'm talking about require a moderate amount of experience with modern graphics programming. Since you seem to be a beginner in this field, writing complex shaders may be too advanced for now. Please take this as constructive advice - it is certainly not intended as an attack on you.
There is a very active DirectX/XNA section in these forums; be sure to read the FAQ and when you have specific questions, ask us :)
Niko Suni
Edge detection can be implemented at the geometry level or at the pixel level.
In the geometry case, you find triangle edges that belong to only one triangle - such an edge marks a discontinuity. This closely resembles the first image you posted, but it cannot be 100% identical, since the full original group data is no longer available.
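To make the D3D10 geometry shader idea from my earlier post more concrete, here is a rough HLSL sketch that compares the facing of each triangle with its neighbors (taken from adjacency data) and emits silhouette edges as lines. The structure layout, the eyePosW constant and consistent triangle winding are all assumptions:

struct GSIn
{
    float4 posH : SV_Position; // clip-space position from the vertex shader
    float3 posW : TEXCOORD0;   // world-space position
};

cbuffer PerFrame
{
    float3 eyePosW; // assumed camera position, set by the host application
};

float3 FaceNormal(float3 a, float3 b, float3 c)
{
    return normalize(cross(b - a, c - a));
}

[maxvertexcount(6)]
void EdgeGS(triangleadj GSIn input[6], inout LineStream<GSIn> stream)
{
    // Vertices 0, 2, 4 form the center triangle; 1, 3, 5 belong to the
    // neighboring triangles across each of its three edges.
    float3 n = FaceNormal(input[0].posW, input[2].posW, input[4].posW);
    bool centerFacing = dot(n, eyePosW - input[0].posW) > 0.0f;

    [unroll]
    for (int i = 0; i < 3; ++i)
    {
        int i0 = i * 2;           // edge start vertex
        int i1 = (i * 2 + 2) % 6; // edge end vertex
        int ia = i * 2 + 1;       // vertex completing the adjacent triangle

        float3 nAdj = FaceNormal(input[i0].posW, input[ia].posW, input[i1].posW);
        bool adjFacing = dot(nAdj, eyePosW - input[i0].posW) > 0.0f;

        // When the facing flips between neighbors, the shared edge is a
        // silhouette edge; emit it as a line. (Boundary edges that have no
        // neighbor at all would need extra handling - omitted in this sketch.)
        if (centerFacing != adjFacing)
        {
            stream.Append(input[i0]);
            stream.Append(input[i1]);
            stream.RestartStrip();
        }
    }
}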
In the pixel case, you run a purpose-built convolution filter, a 3x3 kernel, over the scene depth buffer; this highlights abrupt changes in depth, thus revealing sharp edges at the pixel level.
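A sketch of such a filter as a D3D9-style pixel shader (ps_3_0 for the loops) could look roughly like this. It assumes linear scene depth was rendered into gDepthTex in an earlier pass; the names and the gain constant are placeholders of mine:

texture gDepthTex;  // linear scene depth from a previous render pass
sampler DepthSampler = sampler_state { Texture = <gDepthTex>; };

float2 gTexelSize;  // 1.0 / render target width and height

float4 EdgePS(float2 uv : TEXCOORD0) : COLOR
{
    // 3x3 Laplacian-style kernel: compare the center depth against
    // the sum of its neighborhood.
    float center = tex2D(DepthSampler, uv).r;
    float sum = 0.0f;

    for (int y = -1; y <= 1; ++y)
        for (int x = -1; x <= 1; ++x)
            sum += tex2D(DepthSampler, uv + float2(x, y) * gTexelSize).r;

    // sum - 9 * center equals the summed depth differences of the 8
    // neighbors; a large value means an abrupt depth change, i.e. an edge.
    float edge = saturate(abs(sum - 9.0f * center) * 50.0f); // 50 = arbitrary gain
    return float4(0.0f, 0.0f, 0.0f, edge); // black, with edge strength in alpha
}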
Ambient occlusion can likewise be computed at the geometry or the pixel level.
Both cases involve casting rays from each primitive (a vertex, or a depth-buffer pixel) and counting how many of the rays collide with the rest of the geometry. This establishes how "occluded" the primitive is, and that percentage is commonly used to scale your ambient light factor at that primitive.
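As a small example of that last point: if an occlusion percentage has been precomputed per vertex by ray casting and passed down to the pixel shader, scaling the ambient term is a one-liner. All the names and the simple lighting model here are assumptions:

float4 gAmbientColor;
float4 gDiffuseColor;
float3 gLightDirW;

float4 AOShadePS(float3 normalW  : TEXCOORD0,
                 float occlusion : TEXCOORD1) : COLOR // 0 = fully occluded, 1 = fully open
{
    float diffuse = saturate(dot(normalize(normalW), -gLightDirW));
    // Only the ambient term is scaled by the occlusion percentage, so
    // crevices and corners darken naturally while lit areas are unaffected.
    return gAmbientColor * occlusion + gDiffuseColor * diffuse;
}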
Niko Suni