
alpha blending problem

Started by May 01, 2008 05:16 AM
2 comments, last by lc_overlord 16 years, 6 months ago
Hi, I'm making a little UI where the user can open an image and then draw over it by clicking and dragging. When the image is opened it's displayed in the background (it's made into a texture and applied to a quad). A (not very accurate) depth map is available for the image, and the aim is to make the brush strokes (created by the user 'drawing') appear to follow the shape of the object. In essence, you open a picture and paint a new material onto the depicted object. In this case I'm trying to simulate fur. Each hair is drawn as 3 line segments and there's one hair per pixel covered by the user's brush stroke. Each hair starts at 0.6 alpha and ends with 0 alpha. Also, a bit of randomness is involved when generating each hair to make it look more realistic, as real fur/hair isn't perfect.

Currently I'm initialising with:

glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClearDepth(1.0f);                                  // Depth Buffer Setup
glEnable(GL_DEPTH_TEST);                             // Enables Depth Testing
glDepthFunc(GL_ALWAYS);                              // The Type Of Depth Testing To Do
glShadeModel(GL_SMOOTH);
glViewport(0, 0, m_w, m_h);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glLineWidth(1.5);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-1, 1, 1, -1, 1.0, 10.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
glEnable(GL_POLYGON_SMOOTH);
glEnable(GL_LINE_SMOOTH);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

However, as the depth test is set to GL_ALWAYS, new hairs cover large parts of the previous ones (apart from the root), so in the middle of each brush stroke only the root bits are visible and it just looks like random pixels. The effect I'm trying to achieve is the way it looks at the end of each brush stroke - i.e. furry and fluffy. If I set the depth test to GL_LEQUAL it goes all horrible because (from what I understand at least) even if things under the transparent parts of a hair should be visible, they're not rendered because they fail the depth test.

The only possible ways I can think of so far are:

1. Render just the fur in a separate buffer and blend the new with the old so that the alpha of the new is 1 - alpha_old, or something along these lines, so that new hairs overlapping the old are covered by the old (rough sketch of this idea below).
2. Keep a tree with all the hairs sorted by depth in memory and, when a new one is drawn, redraw the area it affects in the correct order (based on depth). That sounds sensible, but in most cases line segments of the old and the new hair would intersect, so I'm not sure how I should handle that.

Any suggestions on how I could start with either of the above approaches, or any other possible solutions/hacks, would be greatly appreciated. The end result doesn't have to be accurate as long as it looks right...

Apologies for the long post.
Thanks
Tania
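[For reference, the first idea (weighting new hairs by 1 minus the alpha already in the buffer) maps fairly directly onto OpenGL's destination-alpha blending. A minimal sketch, assuming the color buffer has an alpha channel (e.g. an RGBA FBO or a pixel format with alpha bits), it is cleared to alpha 0, and the hair colors are premultiplied by their alpha; drawNewHairs() is a hypothetical helper, not code from this thread.]

/* "Under" compositing: new hairs only show through where little alpha
 * has accumulated so far, so older hairs stay on top of newer ones. */
glEnable(GL_BLEND);
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE);  /* src weighted by (1 - dst alpha), dst kept */

drawNewHairs();  /* hypothetical: draws only the hairs added by the current stroke */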
This has always been a problem; there is really just one surefire way to render multiple semi-transparent polygons, and that is to sort them back to front, and possibly also clip them where they intersect.
That takes time, a lot of time in this case. However, you can cheat.

Method 1 (which I think might work):
1. Render the base color (alpha texture, but a single color) to the front buffer, while at the same time writing the depth to a separate buffer.
2. Render the textured variant on top of that using GL_GREATER, and blend it according to its depth compared to the depth map, so the further it is behind the recorded depth the more transparent it gets.
3. Now render the textured variant using GL_LEQUAL. (A rough sketch of the depth-test state for these passes is below.)
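[A very rough sketch of the depth-test state for those three passes; the depth-based fade in step 2 would need a fragment shader comparing against the depth texture written in step 1, which is omitted here. The helper names and the glDepthMask choice are assumptions for illustration, not lc_overlord's exact method.]

/* Pass 1: solid base color, recording depth (e.g. into an FBO depth texture set up elsewhere). */
glDepthFunc(GL_ALWAYS);
glDepthMask(GL_TRUE);
drawFurBaseColor();      /* hypothetical helper */

/* Pass 2: textured fur where it lies behind the recorded depth,
 * faded out by the depth difference in a fragment shader. */
glDepthFunc(GL_GREATER);
glDepthMask(GL_FALSE);   /* assumption: leave the recorded depth untouched */
drawFurTextured();       /* hypothetical helper */

/* Pass 3: textured fur where it is at or in front of the recorded depth. */
glDepthFunc(GL_LEQUAL);
drawFurTextured();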

Method 2 (which works and is faster, but takes up lots of memory; it also works well on grass and vegetation):
1. Render only the fur to a brown (base color) high-res (the higher the better) texture, but instead of using alpha blending you render it without transparency (you can also use alpha testing and quads if you want to do other things with it; small snippet below).
2. Place that texture on top of the rendering, but with a custom multisample filter that gives it just a nudge of extra blurriness.
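[If you take the alpha-testing route mentioned in step 1, the fixed-function state is just a cutoff instead of blending. A minimal sketch; the 0.5 threshold and the helper name are arbitrary assumptions.]

/* Draw the fur opaquely, discarding fragments below an alpha cutoff instead of blending. */
glDisable(GL_BLEND);
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.5f);   /* 0.5 cutoff chosen arbitrarily */
drawFurToTexture();              /* hypothetical: render the fur into the high-res texture */
glDisable(GL_ALPHA_TEST);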
Thanks :)
Method 2 sounds more promising I think (or makes more sense anyway - sorry, I'm only just starting with OpenGL and graphics programming in general).

Quote:
Method 2 (which works and is faster, but takes up lots of memory; it also works well on grass and vegetation):
1. Render only the fur to a brown (base color) high-res (the higher the better) texture, but instead of using alpha blending you render it without transparency (you can also use alpha testing and quads if you want to do other things with it).
2. Place that texture on top of the rendering, but with a custom multisample filter that gives it just a nudge of extra blurriness.

So just to make sure I understand what you're saying: I render the fur to a texture, update it every time the user draws new fur, then place it on top of the main rendering. How do I do the multisample filter? And would all that actually work in real time, so that it looks like the user is painting the fur onto the object?


The multisample filter is done in a fragment shader. It's pretty simple to do: just read from the texture multiple times with slightly different UV coords, add the results together, and then divide by the number of samples.
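[As a rough illustration, here is a minimal GLSL fragment shader doing exactly that, a 4-tap average; the sampler and uniform names (furTex, texelSize = 1.0 / texture resolution) and the tap pattern are my own assumptions.]

// Minimal 4-tap box filter: average four nearby texels to soften the fur slightly.
uniform sampler2D furTex;    // the high-res fur texture (assumed name)
uniform vec2 texelSize;      // 1.0 / texture resolution (assumed uniform)

void main()
{
    vec4 sum = texture2D(furTex, gl_TexCoord[0].st + vec2(-0.5, -0.5) * texelSize);
    sum     += texture2D(furTex, gl_TexCoord[0].st + vec2( 0.5, -0.5) * texelSize);
    sum     += texture2D(furTex, gl_TexCoord[0].st + vec2(-0.5,  0.5) * texelSize);
    sum     += texture2D(furTex, gl_TexCoord[0].st + vec2( 0.5,  0.5) * texelSize);
    gl_FragColor = sum / 4.0;  // divide by the number of samples
}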

Yes, it would all be done in real time. It's not that much slower than doing it in the frame buffer like you originally did; it might even be faster, as you don't need any blending.
Just use an FBO and you should be fine.
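[For reference, a bare-bones FBO setup for rendering the fur into a texture might look roughly like this, using the GL_EXT_framebuffer_object style common at the time; the names and the 2048x2048 size are assumptions, and error checking is omitted.]

/* Create a texture to hold the fur, and an FBO that renders into it. */
GLuint furTex, furFbo;

glGenTextures(1, &furTex);
glBindTexture(GL_TEXTURE_2D, furTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 2048, 2048, 0,   /* high-res; size is an assumption */
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenFramebuffersEXT(1, &furFbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, furFbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, furTex, 0);

/* ... draw the new hairs into the texture here ... */

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);  /* back to the normal framebuffer */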

This topic is closed to new replies.
