
Best way to draw outlines for a toon shader: Geometry shader? During post processing? Something else?

Started by ihavequestions April 20, 2022 01:41 AM
3 comments, last by Thaumaturge 2 years, 7 months ago

I'm trying to figure out the best way to create outlines for my GLSL toon shader.

So far I've found the following solutions:

Using a Geometry Shader

In the OpenGL 4 Shading Language Cookbook, there's a section in chapter 7 called Drawing silhouette lines using the geometry shader. You can read the geometry shader code here (the link button isn't working):

https://github.com/PacktPublishing/OpenGL-4-Shading-Language-Cookbook-Third-Edition/blob/master/chapter07/shader/silhouette.gs

Apparently, I need to load my model "with adjacency" (i.e. submit it as GL_TRIANGLES_ADJACENCY primitives), and I'm not sure how to do that with tiny_gltf. Is there a way I can accomplish the same results with a normal triangle layout instead of adjacency?

Post-Processing Outlines

I've read that outlines can also be created during post-processing. I can't find any example code, but I could dig deeper if this solution is worth trying. I'd still prefer the first solution over this one, though.

Needing Suggestions

The geometry shader solution seems like an interesting and straightforward option, but that adjacency-triangles issue would slow me down, since I have no experience with that data layout.

Any suggestions on how to accomplish these outlines are welcome. And if you have ideas on how to get the first solution working with normal triangles, wow, that would be perfect!

Thanks!


ihavequestions said:
I've read that outlines can also be created during post-processing. I can't find any example code, but I could dig deeper if this solution is worth trying.

I think that one way to do it is to sample fragments around the current fragment, and to then shade the fragment as part of “a line” if the samples differ significantly in one or more ways: in depth, in normal, or even in a secondary colour channel, for example.

I don't know whether this is a particularly good way, offhand, but I believe that it does work.
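For what it's worth, a minimal GLSL sketch of that idea might look something like the full-screen fragment shader below. All of the texture and uniform names (uSceneTex, uDepthTex, uNormalTex, and so on) are placeholders for inputs your engine would have to provide, and both thresholds need tuning per scene:

#version 330 core

// Hypothetical post-process pass: sample the four neighbours of the
// current fragment and draw black where depth or normal changes sharply.
uniform sampler2D uSceneTex;    // the shaded scene colour
uniform sampler2D uDepthTex;    // scene depth from an earlier pass
uniform sampler2D uNormalTex;   // view-space normals from an earlier pass
uniform vec2  uTexelSize;       // 1.0 / screen resolution
uniform float uDepthThreshold;  // e.g. 0.001; needs tuning
uniform float uNormalThreshold; // e.g. 0.4; needs tuning

in vec2 vUV;
out vec4 fragColor;

void main()
{
    float dC = texture(uDepthTex, vUV).r;
    vec3  nC = texture(uNormalTex, vUV).xyz;

    vec2 offsets[4] = vec2[](vec2( uTexelSize.x, 0.0),
                             vec2(-uTexelSize.x, 0.0),
                             vec2(0.0,  uTexelSize.y),
                             vec2(0.0, -uTexelSize.y));

    float edge = 0.0;
    for (int i = 0; i < 4; ++i)
    {
        float d = texture(uDepthTex, vUV + offsets[i]).r;
        vec3  n = texture(uNormalTex, vUV + offsets[i]).xyz;
        if (abs(d - dC) > uDepthThreshold)       edge = 1.0; // depth discontinuity
        if (dot(n, nC) < 1.0 - uNormalThreshold) edge = 1.0; // normal discontinuity
    }

    // Blend a black outline over the shaded scene.
    fragColor = mix(texture(uSceneTex, vUV), vec4(0.0, 0.0, 0.0, 1.0), edge);
}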



The ways I've done it:

  1. Based on depth buffer (texture) reads. The post-process generates a black pixel where the local Z value is sufficiently deeper than some surrounding Z value. Easy and reasonably robust, but there's a balancing act between “too many internal contours show up” and “characters next to walls don't get outlines.”
  2. Based on ID buffer (stencil, or separate color channel) reads. Each object is rendered with a unique ID (color or stencil value). The post-process generates a black pixel along ID borders.
  3. Flip the mesh winding order, push it out along the normals a few centimeters and back into the scene a few centimeters, and render it in black. Old-school, but very easy to implement, and it can work surprisingly well. Outline thickness will be in world space, not screen space, so some tuning is needed. (A minimal vertex-shader sketch follows this list.)
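To illustrate option 3, here is a minimal GLSL vertex-shader sketch for the inflated outline pass. The uniform names (uModelViewProj, uOutlineWidth) are placeholders for values your application would supply, and the matching fragment shader would simply output solid black:

#version 330 core

// Inverted-hull outline pass: the mesh is drawn a second time,
// inflated along its vertex normals, and shaded solid black.
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aNormal;

uniform mat4  uModelViewProj;
uniform float uOutlineWidth; // in model/world units, e.g. 0.02; needs tuning

void main()
{
    // Inflate the vertex along its normal to form the outline shell.
    vec3 inflated = aPosition + normalize(aNormal) * uOutlineWidth;
    gl_Position = uModelViewProj * vec4(inflated, 1.0);
}

On the application side, this pass would typically be drawn with glCullFace(GL_FRONT) (or with the winding flipped via glFrontFace) so that only the back faces of the inflated shell remain visible around the normally rendered mesh.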

hplus0603 said:
Based on depth buffer (texture) reads. …

Based on ID buffer (stencil, or separate color channel) reads. …

If I'm not much mistaken, these two techniques can be used to reinforce each other: the former capturing outlines within an object (e.g. a character's arm extended across their chest) and the latter aiding the former in distinguishing objects.

I'll also note that I've found the depth-buffer approach has a potential caveat: surfaces that are steeply sloped from the perspective of the camera can end up with erroneous lines (or over-thick lines, depending on the implementation).

There are likely ways to deal with this; one thing that I've found seems to work well enough is to incorporate the surface normal into the relevant calculation.
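As one hedged illustration of that, the depth threshold could be relaxed where the view-space normal is nearly perpendicular to the view direction. The helper below is hypothetical (not from the cookbook or any standard API); it assumes view-space normals, where |viewNormal.z| is near 1.0 for camera-facing surfaces and near 0.0 at grazing angles:

// Hypothetical GLSL helper: scale the depth-difference threshold by how
// directly the surface faces the camera, so grazing-angle surfaces need
// a larger depth gap before an outline is drawn.
float slopeAwareThreshold(vec3 viewNormal, float baseThreshold)
{
    float facing = clamp(abs(viewNormal.z), 0.05, 1.0);
    return baseThreshold / facing;
}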


This topic is closed to new replies.
