
Texture Randomization

Started by August 10, 2012 09:17 AM
6 comments, last by jefferytitan 12 years, 5 months ago
Hello, everyone


My experience is intermediate in both 2D and 3D art. Long term, the sky is the limit, but short term I wish to stay with open-source programs until I can justify spending money on paid ones. I use GIMP and Wings3D. I can also use my friend's 3DS Max part time, since he offered to let me work in it when he is not using it.

What I want now is help creating randomized texture placement in a 2D image. By this I mean I would like to completely eliminate the "carpet pattern" look of tiled textures by introducing some kind of automatic randomization of any texture I place. For the last couple of years I have avoided the carpet-pattern look, when a random appearance is needed, by tediously placing texture elements one at a time, often editing most of them along the way to make them unique enough to prevent any pattern effect in the image.

Personal life and your private thoughts always affect your career. Research is the intellectual backbone of game development and the first order. Version control is crucial for full management of applications and software. The better the workflow pipeline, the greater the potential output for a quality game. Completing projects is the last but finest order.

by Clinton, 3Ddreamer

Take a look at Wang tiles; here is a chapter about them in GPU Gems.
The traditional artist solution to the repetition problem, I believe, is removing low-frequency detail from your textures, but it's not much fun; I find the results boring even if they don't appear to tile much. I'm not sure about your tools, as I come from a programmer background. I have found decent results from taking a couple of similar textures (e.g. two different grass textures) and blending them using Perlin noise or similar. I imagine a plugin could be written for a 3D program to do this. Most of my other experience relates to shaders, i.e. modifying the way you render things rather than the models themselves: for example, blending a tall texture with a wide texture to give the impression of a larger square texture, or dynamically adding random-looking details such as rocks and flowers.
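As a rough illustration of the blending idea, here is a minimal Python/numpy sketch; value_noise is just a cheap stand-in for real Perlin noise, and the two textures are assumed to be same-sized float RGB arrays:

```python
import numpy as np

def value_noise(width, height, cell=64, seed=0):
    """Cheap smooth noise: random values on a coarse grid, bilinearly
    interpolated up to full image size. Not true Perlin noise, but good
    enough for a low-frequency blend mask."""
    rng = np.random.default_rng(seed)
    gw, gh = width // cell + 2, height // cell + 2
    grid = rng.random((gh, gw))
    ys, xs = np.mgrid[0:height, 0:width]
    gx, gy = xs / cell, ys / cell
    x0, y0 = gx.astype(int), gy.astype(int)
    fx, fy = gx - x0, gy - y0
    fx, fy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)  # smoothstep fade
    top = grid[y0, x0] * (1 - fx) + grid[y0, x0 + 1] * fx
    bot = grid[y0 + 1, x0] * (1 - fx) + grid[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy

def blend_textures(tex_a, tex_b, cell=64, seed=0):
    """Blend two equally sized RGB textures (H x W x 3 float arrays) with a
    smooth noise mask so that neither tiling pattern dominates everywhere."""
    h, w = tex_a.shape[:2]
    mask = value_noise(w, h, cell, seed)[..., None]  # H x W x 1
    return tex_a * (1 - mask) + tex_b * mask
```

The mask could just as well be painted by hand or generated in GIMP; the point is only that a smooth, low-frequency mask varies which texture dominates from place to place.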
Ashaman73,
That was a quick first reply!

I had forgotten the name of the tiling theory, Wang tiles, but the concept is actually part of the obstacle I am trying to hurdle. Even in the best examples of Wang tiles I still see repeating patterns which make the plane look man-made. For some textures that is not so bad, because I want those to look affected by human hands, but when a natural, randomly occurring appearance is wanted, this is not a complete solution for me. By its very use of tiles, the Wang tile approach looks human-made, because we humans notice patterns very easily even when the pattern is asymmetrical. Any tile used several times or more runs a high risk of being recognized as a pattern. All of this is of course my point of view, yet my philosophy is to expect others to see patterns whenever I notice them. I consider anything repeated to read as a pattern if the repeats are similar enough to seem the same, even in different positions.

The equations in the book GPU Gems 2 have potential, but I think the approach leads to more work to ensure a natural, as opposed to human-made, appearance. In other words, I still see human-made patterns in the examples given.

The problem with tiling is that the technique generally does not introduce the size randomness seen in nature, such as when you look at a naturally occurring pile of something or at various-sized waves at sea.

Some people have found an effective solution by randomizing the shading applied in the scene. Likely this will be the ultimate answer, creating the effect of randomization with minimal loss of game performance, but only research and testing can confirm that, in my mind.

Ashaman73, you provided good information, and it certainly has a role in game development.

Thank you,

3Ddreamer


jefferytitan,

This is good, because you brought gradient noise into the discussion, and with perfect timing, too.

Artists have used gradient noise in GIMP, but here again I typically see patterns even when I know the artist is trying to avoid them. I have not been asking GIMP users a specific enough question, particularly about gradient noise and whether I can feed my own texture into GIMP's gradient noise function. The disconnect comes directly from my disappointment with the gradient noise results I have seen from GIMP. That does not necessarily mean GIMP is the problem; perhaps I simply have not yet found the GIMP technique that randomizes my texture with gradient noise.

In technical terms, as far as I understand it, the two basic types of math statements, formulas and equations, have patterns in their expressions which impact both the game scene and the game's performance. In an equation, a factor can be randomized, often by a coded parameter, and the result can be a randomized solution, meaning randomness in the game scene. However, the equation itself has some kind of pattern in its expression, which makes it difficult to create one whose result is free of obvious visual pattern. Formulas also have patterns in their expressions, but it is generally easier to keep one side of the statement free of operations, and therefore free of pattern on that side, which means less probability of a noticeable pattern in the scene. Having a 2D image randomized on a 3D object makes the game code larger and more complex.

The larger the math statements in the code, and the more of them there are, the greater the impact on performance, so I seek texture-based solutions where possible to increase game performance. It seems to me that drawing on video memory is better for performance than relying on the processor to evaluate code each frame, but I could be wrong. I see video memory as freeing up more resources compared to code processing. It also seems easier to increase video memory size than processor speed, so I am trying to resolve all appearance issues before the object is handled in the graphics system.

Isn't it true that video memory reuses information from storage in a repetitive way each frame, while algorithms must be recalculated each frame? The video data must be stored and used each frame in any case, so why add processing calculations each frame as well? Wouldn't it be better to resolve the need for randomization entirely in video memory in order to improve game performance?

Video memory versus chip processing seems to be the two fundamental choices for a randomized appearance in textures; is this not true? I expect that having the 3D artist bake the randomized appearance into the texture helps game performance, compared with the alternative of extra processing each frame to implement randomization in code rather than only in the texture.

Gradient noise might be the perfect approach to texture randomization while maintaining good game performance. I believe we will come to a conclusion on this in this thread.


3Ddreamer


The main source of patterns, or artifacts, in Perlin noise comes from the fact that the noise is generated on a grid lattice. Perlin specifically implemented his gradient and simplex noise variants to try to reduce the occurrence of grid artifacts. Nevertheless, the grid structure is still there. However, there are a few tricks you can do to help eliminate them.

1) Use a lacunarity that is not an integer. In most implementations, the term lacunarity is used to describe how the successive octaves of a fractal line up. For instance, a lacunarity of 2 (commonly used) means that each octave will be 2x the frequency of the preceding one. Integral lacunarity values will cause the grid boundaries of successive octaves to line up exactly, exacerbating the grid artifacts at each level. Here is an example of ridged multi-fractal noise. The first half of the image uses a lacunarity of 2, the second half uses a lacunarity of 2.333:
[Image sner2.png: ridged multi-fractal noise; first half with lacunarity 2, second half with lacunarity 2.333]
You can see that the change to lacunarity did help a little bit, but there are still artifacts. To take it further, you could alter the lacunarity at each step. However, this will only reduce the artifacts. It won't eliminate them completely.
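To make the octave and lacunarity mechanics concrete, here is a minimal Python sketch of the fractal (fBm) summation; noise2 is a simple hash-based value noise standing in for a proper Perlin or simplex implementation:

```python
import math

def noise2(x, y, seed=0):
    """Single-octave value noise from a hashed integer lattice.
    A stand-in for a real Perlin/simplex implementation."""
    def h(ix, iy):
        n = ix * 374761393 + iy * 668265263 + seed * 2246822519
        n = (n ^ (n >> 13)) * 1274126177
        return ((n ^ (n >> 16)) & 0xFFFFFFFF) / 0xFFFFFFFF
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    fx, fy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)  # smoothstep fade
    top = h(x0, y0) * (1 - fx) + h(x0 + 1, y0) * fx
    bot = h(x0, y0 + 1) * (1 - fx) + h(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bot * fy

def fbm(x, y, octaves=5, lacunarity=2.333, gain=0.5):
    """Fractal sum of noise octaves. A non-integer lacunarity keeps the
    lattice boundaries of successive octaves from lining up exactly."""
    total, amp, freq, norm = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amp * noise2(x * freq, y * freq)
        norm += amp
        freq *= lacunarity
        amp *= gain
    return total / norm  # roughly in [0, 1]
```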

2) Apply a rotation to the octaves. Each time an octave layer is sampled, rotate the input coordinates by some amount first. Each octave should have a different rotation (or you'll just end up with the lattice problem again). This is a more effective method than merely changing the lacunarity, but it may have some repercussions as far as the character of the noise itself. Here is an example of the same ridged noise as above, but with randomized octave rotations:
[Image 8jKwy.png: the same ridged noise with randomized per-octave rotations]

You can see that the grid artifacts are basically gone.
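A sketch of the rotation trick, reusing the noise2 helper from the previous snippet; the golden-angle spacing of the rotation angles is an arbitrary choice, and anything that gives each octave a distinct rotation will do:

```python
import math

def fbm_rotated(x, y, octaves=5, lacunarity=2.333, gain=0.5, seed=0):
    """Fractal noise in which each octave's input coordinates are rotated by
    a different angle, so the lattice grids of the octaves never line up."""
    total, amp, freq, norm = 0.0, 1.0, 1.0, 0.0
    for i in range(octaves):
        angle = (seed * 31 + i) * 2.399963  # golden-angle spacing (arbitrary)
        c, s = math.cos(angle), math.sin(angle)
        rx = c * x - s * y                  # rotate the sample point
        ry = s * x + c * y
        total += amp * noise2(rx * freq, ry * freq, seed=i)
        norm += amp
        freq *= lacunarity
        amp *= gain
    return total / norm
```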

Another source of artifacts in using noise to generate textures is the algorithm used to generate a seamless tile from an existing image. There are three commonly used ways people generate tiling textures from noise, and two of them are implemented in the Gimp.

1) Alter the grid lattice of values under-pinning the noise function so that each octave wraps around. This is how Gimp's Seamless Noise filter works: each octave of the function is itself a tiling pattern. This works well for noise that has integral lacunarity and no domain rotation, and not so well for other noise, so I have found that it doesn't combine well with the artifact-reduction techniques above.
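Conceptually, the wrap-around amounts to taking the lattice indices modulo a period, along the lines of this value-noise sketch (a stand-in again, not Gimp's actual code). Note that for a fractal sum each octave's period has to scale with its frequency, which is why this only stays seamless with integral lacunarity:

```python
import math

def periodic_noise2(x, y, period=8, seed=0):
    """Value noise whose lattice wraps around every `period` cells, so a
    texture sampled over one period tiles seamlessly in both directions."""
    def h(ix, iy):
        ix %= period          # wrap the lattice indices...
        iy %= period          # ...so column `period` equals column 0
        n = ix * 374761393 + iy * 668265263 + seed * 2246822519
        n = (n ^ (n >> 13)) * 1274126177
        return ((n ^ (n >> 16)) & 0xFFFFFFFF) / 0xFFFFFFFF
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    fx, fy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)  # smoothstep fade
    top = h(x0, y0) * (1 - fx) + h(x0 + 1, y0) * fx
    bot = h(x0, y0 + 1) * (1 - fx) + h(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bot * fy
```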

2) Perform a 4-way blend of noise patterns. Given a chunk of non-tiling noise, you can conceptually split it into 4 quadrants, and perform a specialized blend using gradients so that edges blend from one quadrant to the next, and the result will tile. There is no inherent support for this in Gimp, but it can be done using gradient layers and Multiplicative layer blending.
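Here is a rough numpy sketch of that idea, using the common shift-by-half-and-cross-fade formulation with bilinear (triangle) weights; it produces a tiling result but, as discussed below, it softens contrast toward the blended regions:

```python
import numpy as np

def make_tileable_4way(img):
    """Make a noise image tileable by blending it with copies shifted by half
    its size in x, in y, and in both, weighted by bilinear gradients."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # triangle weights: 1.0 at the centre of the image, 0.0 at the edges
    wx = 1.0 - np.abs(xs - w / 2) / (w / 2)
    wy = 1.0 - np.abs(ys - h / 2) / (h / 2)
    if img.ndim == 3:                  # broadcast over colour channels
        wx, wy = wx[..., None], wy[..., None]
    b = np.roll(img, w // 2, axis=1)   # shifted half-width in x
    c = np.roll(img, h // 2, axis=0)   # shifted half-height in y
    d = np.roll(b, h // 2, axis=0)     # shifted in both
    # the four bilinear weights sum to 1, so no renormalisation is needed
    return (img * wx * wy + b * (1 - wx) * wy +
            c * wx * (1 - wy) + d * (1 - wx) * (1 - wy))
```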

3) Duplicate your layer of noise some number of times and offset the duplicates, then perform a blend between the duplicates to eliminate seams. This is what Gimp's Make Seamless filter does. The main drawback of this is that characteristics of the texture pattern are repeated several times within the same texture, kind of a no-no if you want to eliminate repetition.

(2) and (3) have additional drawbacks in that the character of the function is altered by the blending. If you have, for example, a high-contrast texture consisting of black and white, the blending process will corrupt the contrast, giving you a mottled pattern of blended grays instead. Worse, the blending will occur more in the center of the image, and less at the edges. Here is an example using Gimp's Make Seamless. On the left is the original texture, on the right the seamless version:

[Image ME2ej.png: the original high-contrast texture on the left, the Make Seamless result on the right]

See how the blend caused the nice contrasting texture to basically be ugly? Not only did it corrupt our nice blacks and whites, it introduced "pinch" patterns where the seam blending was done. These blending techniques are suitable only for lower-contrast textures where the additional artifacts will be less noticeable.

Now, our very own JTippetts here at gamedev has done quite a bit with noise, including implementing his own library. Some time back he wrote a journal post about seamless noise that includes a discussion of these very drawbacks. In his article and his noise library, he proposes a solution to the blending problem for seamless high-contrast functions by using higher dimensionality of noise functions and sampling them in "circles" to generate a seamless pattern. The technique works, and works very well, for high contrast textures (kinda hurts my brain, though); however it has the drawback of requiring, for example, 4D noise in order to generate a seamlessly tiling 2D texture. (And 6D noise, if you want to generate a seamlessly tiling 3D texture. Yikes.)
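As I understand the idea (this is only a sketch of the concept, not his actual code), you map the 2D texture coordinates onto two circles living in 4D and sample a 4D noise function at that point; going once around each circle brings you back to where you started, so the result tiles in both directions with no blending step. The noise4 below is a slow 4D value noise standing in for a real 4D implementation:

```python
import math

def hash01(*ints, seed=0):
    """Deterministic pseudo-random value in [0, 1) from lattice coordinates."""
    n = seed * 2246822519
    for k, v in enumerate(ints):
        n += v * (374761393, 668265263, 1274126177, 2654435761)[k]
    n = (n ^ (n >> 13)) * 1103515245
    return ((n ^ (n >> 16)) & 0xFFFFFFFF) / 0xFFFFFFFF

def noise4(p, seed=0):
    """4D value noise: interpolate hashed values at the 16 corners of the
    lattice cell containing point p (a 4-tuple of floats)."""
    p0 = [math.floor(c) for c in p]
    f = [c - c0 for c, c0 in zip(p, p0)]
    f = [t * t * (3 - 2 * t) for t in f]            # smoothstep fade
    total = 0.0
    for corner in range(16):                        # all 16 cell corners
        offs = [(corner >> d) & 1 for d in range(4)]
        weight = 1.0
        for d in range(4):
            weight *= f[d] if offs[d] else (1 - f[d])
        total += weight * hash01(*[p0[d] + offs[d] for d in range(4)], seed=seed)
    return total

def seamless_sample(u, v, seed=0, radius=1.0):
    """Sample a seamlessly tiling 2D value at texture coords (u, v) in [0, 1)
    by walking around two circles in 4D noise space."""
    a, b = 2 * math.pi * u, 2 * math.pi * v
    p = (radius * math.cos(a), radius * math.sin(a),
         radius * math.cos(b), radius * math.sin(b))
    return noise4(p, seed=seed)
```

Increasing radius raises the feature frequency; one full trip around each circle corresponds to one tile edge, which is exactly why a seamless 2D tile needs 4D noise (and a seamless 3D texture needs 6D).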

Of course, his technique isn't implemented in Gimp or in any image editing package, to my knowledge. The higher dimensionality requirement might make it a bit more intensive to implement in a shader as well.

So by using a combination of techniques (domain rotation for octave inputs, altering lacunarity, using higher-order functions to generate the seamless mappings) it is possible to eliminate any grid artifacts inherent in the texture itself. The only thing left will be hiding the repeating pattern generated by tiling the texture multiple times.

For these artifacts, there are solutions such as Wang tiles and aperiodic tiling that can be used. Generating Wang tiles is kind of difficult, and works best with the blending methods of seamless tiling. You generate multiple variants of a texture then use gradient blending to blend in the proper edge given a particular edge pattern. Of course, this would result in the same blending artifacts as the seamless algorithm does, so it would not be very good for high-contrast textures. This means that all the work of using JTippetts' method is just thrown out the window anyway. I still haven't wrapped my head around his technique well enough to figure out if there is any way you could use his method to generate Wang tiles directly. It kind of fries my brain just thinking about it. :D I have a hard time thinking in coordinate spaces higher than 3D.

Wang tiles can be used to tile a surface aperiodically, meaning that there will be no recurring macro patterns. However, you are still limited by the number of tiles in your Wang set, so given a large enough visible area, the user is probably still going to see that you reuse tiles. But still, it seems to be about the best you can get with a tiling system.
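The placement side of Wang tiling is the easy part. Here is a sketch, assuming a hypothetical tile set described only by its edge colors (for instance the complete 2-color, 16-tile set), that fills a grid by picking for each cell a random tile whose left and top edges match the neighbors already placed:

```python
import random
from collections import namedtuple

# A Wang tile is identified by the "colors" of its four edges; the actual
# bitmaps would be authored so that edges with the same color join seamlessly.
Tile = namedtuple("Tile", "top right bottom left image_id")

def lay_wang_tiles(tiles, cols, rows, seed=0):
    """Fill a cols x rows grid aperiodically: for each cell, pick a random
    tile whose left edge matches the right edge of the tile to its left and
    whose top edge matches the bottom edge of the tile above it."""
    rng = random.Random(seed)
    grid = [[None] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            candidates = [
                t for t in tiles
                if (x == 0 or t.left == grid[y][x - 1].right)
                and (y == 0 or t.top == grid[y - 1][x].bottom)
            ]
            grid[y][x] = rng.choice(candidates)
    return grid
```

With the complete 2-color set there are always four valid candidates for any cell, so the choice never gets stuck, and the random selection is what keeps the layout aperiodic.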



Now, if you implement shader-based in-place procedural materials, there is no requirement of actually generating tiles (seamless or otherwise). However, other limitations arise in this case. By using complex sequences of noise functions, you can generate some very elaborate texture patterns. For an example of this, I again refer you to JTippetts' journal. At that link you will see a mosaic of grayscale textures he generated via his noise library. The problem is, there are some patterns you can generate using an offline library like that, that would be very difficult to generate within the constraints of a shader environment. It could be done in most cases, I think, but the processing overhead could be quite drastic. In my experience, shader-based procedural textures tend to be rather simplistic indeed. As hardware becomes more powerful, though, this option becomes more and more viable.
Wow! This is a wonderful thread now!

Guys, the timing of each of your replies was perfect for me!


FLeBlanc, the basic issues which you expressed are just the validation I needed. It is now clearer than ever to me that processor-based solutions for this are to be avoided in my case. Many of my games will demand strict performance considerations due to things such as heavy effects. I'll save the shader work for only the most important things in a game, and sparingly at that. I really want balance in my games, which will eventually have many features.

Someone did shader-based work along this vein in creating nice randomized clouds (the cloud banks were amazing), desert sands (or beaches), and dynamic water. I had a good look at his website and thought deeply about the implications of his work. I decided that it was wonderful for rendering but not so much for games.

I will practice most if not all of what you brought up here about GIMP. This is exactly what I need.

Good stuff! Mr. Spock would be fascinated.

Here is another very useful thread at gamedev which is on my favorites list.


3Ddreamer


@FLeBlanc: Wow, very well considered response. I haven't seen lattice artifacts as bad as those you showed, but nonetheless the results for the rotated octaves are fantastic.

@3DDreamer:
You're correct, the most technically feasible approaches do repeat. The key in those cases is to make the repetition too large for human eyes to easily pick up the pattern. It's fine to provide a lot of the randomness to a shader in the form of a texture to reduce processing requirements; however, for a large area of terrain that texture would be too big. That's why shaders often expand and smooth the noise, repeat it, provide variations on it, and so on. They often suffer from scaling and clustering that are different from those of natural systems.

Approaches like the clouds and so on that you mention tend to be closer to physical simulations. Because of that they create great results that don't repeat, but they usually need to be programmed separately for each situation, and the processing requirements are formidable. An example is fluid simulation using the Navier-Stokes equations, which requires solving complex equations for each of thousands of grid cells and then building a visualisation on top of that. On top of that, such simulations often have to run continually; you can't expect good results if you turn off the simulation while off-screen and then attempt to "catch up" later.

