
Vertex/Fragment Programs

Started by January 11, 2005 01:12 PM
25 comments, last by vincoof 19 years, 10 months ago
I'm talking about GL_ARB_vertex_program and GL_ARB_fragment_program. I might sound stupid, but... I googled around until I went nuts about these topics and still don't understand certain things. Besides the specs and the Cg tutorials (PS: I don't want to use ATI/NVIDIA-specific extensions), plus some minor tutorials that do "color * texture color", I have not found anything to learn from. Do you know something that shows, for example, all the instructions, what exactly they do, and how to use them? I'm also a little bit dizzy about variables: I can't understand what I send as input and what the output looks like. Also, if there's an error on a line, can I find out what the error is? I've only seen something like: if the rendering function gives GL_INVALID_OPERATION then the shader is wrong in some way. Oh, and is there a shader debugger? :D I'd really want to see what the variables contain at run-time and such (I guess I'm asking for too much, huh?). Sorry if I couldn't write exactly what I want, but I'm tired and confused.

BIG PS: For example, if I wanted to move a fragment's position on the window (x, y), how would I do that?

Relative Games - My apps

1- There is no shader debugger. However, you can query GL_PROGRAM_ERROR_POSITION_ARB and GL_PROGRAM_ERROR_STRING_ARB to find out what really happened when the program failed to load. See issue #36 of the ARB_vertex_program spec (as of revision #43).
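For illustration, a minimal load-and-check sequence might look roughly like this in C. This is a sketch, not a complete program: it assumes a current GL context, that the ARB entry points were already fetched through wglGetProcAddress/glXGetProcAddressARB, and that `progId` is a name previously generated with glGenProgramsARB.

```c
/* Sketch: load an ARB vertex program string and report parse errors. */
const char *src = "!!ARBvp1.0\nMOV result.position, vertex.position;\nEND\n";
GLint errPos = -1;

glBindProgramARB(GL_VERTEX_PROGRAM_ARB, progId);
glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                   (GLsizei)strlen(src), src);

/* -1 means "no error"; otherwise it is a byte offset into src. */
glGetIntegerv(GL_PROGRAM_ERROR_POSITION_ARB, &errPos);
if (errPos != -1) {
    fprintf(stderr, "program error at offset %d: %s\n", errPos,
            (const char *)glGetString(GL_PROGRAM_ERROR_STRING_ARB));
}
```

The error string is driver-dependent, but it usually tells you which token the parser choked on.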

2- I submitted a full ARB_vp + ARB_fp demo to NeHe some time ago. It features cel-shading over an arbitrary mesh, and it is an extension of NeHe's Cel-Shading tutorial. You should still find my demo on the downloads page under the letter A. Two reasons to download it: (i) a full line-by-line walkthrough is included for both the vertex and the fragment program (which is a pretty good start for these extensions), and (ii) alternate rendering methods are provided too, so you can compare how something is done with and without programs. On top of that, the demo has a few other pros (it's free, it works under Linux, the source code is in NeHe style, etc.), so feel free to try it... at least run it once :)

3- I'm not sure what the point of the "BIG PS" is, but if I understand it correctly, it can't be done at the fragment program level. A fragment program does not position fragments on the window; it just assumes a fragment is being computed at some place (and at that point you don't even know where that place is, unless you explicitly send this information in, via texture coordinates for instance). The fragment program computes the fragment's output color (and optionally its output depth), but it doesn't compute "where" the fragment goes.
A)Big thanks to you man ;)

B) With point 3 I was actually trying to figure out how frame buffer distortions are possible. At first I thought that in the fragment program you have access to the frame buffer and can copy/write wherever, which would be super! But right now I can't figure out how that's done... hm... first I'll check those things. Thanks ;)

Relative Games - My apps

If you are doing a post-processing shader (i.e. a fullscreen render-to-texture pass, then running the rendered texture over the entire screen through your shader), it is possible to sample "fragments" (depending on the resolution) around the current fragment by changing the texture coordinates inside the shader. This is used in blur operations for depth of field etc., and can probably be extended to distortions (maybe by applying a distortion factor (e.g. sin, Perlin noise, or distance from [0.5, 0.5]) to the texture coordinates?). I wrote a shader that made the screen "shudder" by perturbing the texture coordinates in a pixel shader (although it is usually faster to compute this in a vertex shader run over a grid, as it executes less often).
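As a sketch of that idea, a post-processing fragment program that wobbles the texture coordinates with a sine could look roughly like this. The program.env[0] layout is just this example's convention; the application would update it every frame with glProgramEnvParameter4fARB:

```
!!ARBfp1.0
# program.env[0] is assumed to hold { time, amplitude, frequency, 0 }
PARAM p = program.env[0];
TEMP uv, wave;
MOV uv, fragment.texcoord[0];
MAD wave.x, uv.y, p.z, p.x;        # wave.x = v * frequency + time
SIN wave.x, wave.x;                # wave.x = sin(v * frequency + time)
MAD uv.x, wave.x, p.y, uv.x;       # u += sin(...) * amplitude
TEX result.color, uv, texture[0], 2D;
END
```

The distortion is therefore a change of *where you read from*, not of where the fragment is written.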

Hope that helps [smile]

EDIT: grammar and spelling :)
Quote: Original post by cippyboy
I have not found anything to learn from...

Because you don't need to. Trust me.
Quote: Original post by cippyboy
all the instructions and what they exactly do plus how to use them

This is already embedded in the ARB_vp and ARB_fp specifications. Using them effectively is another kind of problem, however. Considering what you can do with VPs and FPs, I guess your creativity is the only limit. If you have ever programmed assembler on a standard CPU, you will be quite challenged by the new "stream" computing model, but after a few days you'll happily find your way around.
Quote: Original post by cippyboy
I'm also a little bit dizzy about variables, can't understand what I send as input and what the output looks like...

It depends.
There are different types of resources in VPs and FPs.
- Constants are just like a standard programming language's 'const' thingies. They are usually embedded in the source code and sometimes set through a GL call (I don't remember the name now, but it should sound like ProgramParameter or something).
- Uniforms are read-only constants... so what's the difference from the previous thing? Uniforms and constants are very similar, and in fact some hardware does not know the difference (no NV GPU does).
- Variants are read-only constants... which change from vertex to vertex (for VPs), and similarly in FPs. You can't change a variant the same way you change a uniform. Variants change over the data stream, and they are usually passed as arrays... which brings you to the point: variants are, in fact, position, color, and more generally what the ARB_vp spec calls 'generic vertex attributes'. This means that to change a variant you need to access the array. For FPs this is quite different, because input to the fill unit comes from the transform unit. Variants can't be changed in an FP for various reasons; the best you can do is change the parameters which produce those variants. Since FP variants are interpolated after transformation, you have to work in the VP. There are hundreds of uniforms, but variants are usually a limited resource. The fast evolution of drivers and hardware, however, means you don't have to care about this too much.
- Temporaries: what exactly don't you get about them? They look pretty much like standard variables to me...
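To tie the four resource types together, here is a minimal ARB vertex program sketch. The program.env[0] tint is just an example parameter that the application would set with glProgramEnvParameter4fARB:

```
!!ARBvp1.0
ATTRIB pos    = vertex.position;        # a variant: changes per vertex
PARAM  mvp[4] = { state.matrix.mvp };   # tracked GL state (uniform-style)
PARAM  tint   = program.env[0];         # application-set constant
TEMP   r;                               # a temporary
DP4 r.x, mvp[0], pos;                   # clip-space transform, row by row
DP4 r.y, mvp[1], pos;
DP4 r.z, mvp[2], pos;
DP4 r.w, mvp[3], pos;
MOV result.position, r;
MOV result.color, tint;                 # same color for every vertex
END
```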

The output of the VP must be a transformed, not-yet-w-divided vertex position (in the sense that you have to write the output registers, but what you write is up to you). You can also write other attributes.
Then the computed output of each VP run is considered and interpolated when "assembling primitives": instead of keeping 3 transformed vertices, the GPU puts them together to build a triangle. The fill must then be done, so the results are linearly interpolated, just like normal color is for example. The interpolated results are passed to the fill unit.

This is where the FP comes into play.
Technically, the standard GL rasterization stage is a special, "hardcoded" FP, just like the standard GL transformation routines are, so I won't consider the case in which FPs are disabled.
So, every interpolated result from the previous step gets processed by the FP. That's it. The results of the FP computations are much less difficult to understand, because framebuffers have limited functionality: you write color and possibly depth. Position is input data coming from the interpolated VP computations, so you cannot move a fragment in an FP. Note that you don't actually need to write color or depth, which can look somewhat weird in the beginning.
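A minimal sketch of those two FP outputs, assuming a 2D texture is bound to unit 0 (the depth write is optional and shown only to make the point):

```
!!ARBfp1.0
TEMP c;
TEX c, fragment.texcoord[0], texture[0], 2D;   # sample the bound texture
MUL result.color, c, fragment.color;           # modulate by interpolated color
MOV result.depth.z, fragment.position.z;       # optional: pass depth through
END
```

Note that fragment.position is readable here (window coordinates), but there is no writable position output; only result.color and result.depth exist.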

So, there's a VP and a FP... and something between them.
If you look at Cg, you will see the concept of "connectors". Connectors are, in fact, sets of inputs for the FP, and you need to make sure the FP gets compatible input from the VP.
If the VP writes an output register which isn't used by the FP, then its value is trashed. If the FP reads a register which isn't written by the VP, then its value... uhm, maybe it's (0,0,0,1), or maybe it's undefined; you should check that out yourself.
Quote: Original post by cippyboy
Oh, and is there a shader debugger? :D I'd really want to see what the variables contain at run-time and such (I guess I'm asking for too much, huh?)

I have been told there is one from NVIDIA, available only to well-known companies, but I think this is just a rumor. The stream-MIMD processing model is not really well suited to debugging.
Quote: Original post by cippyboy
For example, if I wanted to move a fragment's position on the window (x, y), how would I do that?

As far as I know, right now you cannot (see above). Not in the FP at least, because fragment position is FP input, not output. You can obviously play this trick in the VP, which can displace vertices and, indirectly, fragments.
Which is, in fact, what Vincoof said, just in my different, less readable manner ;).
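As a sketch of that VP trick, here is a vertex program that pushes each vertex along its normal, moving the covered fragments indirectly. The program.env[0].x displacement amount is this example's assumption:

```
!!ARBvp1.0
# program.env[0].x is assumed to hold the displacement amount
PARAM mvp[4] = { state.matrix.mvp };
PARAM amt    = program.env[0];
TEMP pos;
MAD pos, vertex.normal, amt.x, vertex.position;  # push along the normal
MOV pos.w, vertex.position.w;                    # keep w untouched
DP4 result.position.x, mvp[0], pos;
DP4 result.position.y, mvp[1], pos;
DP4 result.position.z, mvp[2], pos;
DP4 result.position.w, mvp[3], pos;
MOV result.color, vertex.color;
END
```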

Previously "Krohm"

OK, thanks a bunch ;)

1) I'm trying to code a vertex/fragment "compiler" so I can edit programs on the fly and see the results/minor debug information with those commands.
Uh... I might sound stupid by saying this but... I have a dialog box and an edit-text window with ES_MULTILINE as it should be, and when I press F7 I want to compile the stuff, but I'm having a stupid problem that's eating me: WM_KEYDOWN is not received when I write text in the edit-text field, nor when I press any key anywhere; not even WM_RBUTTONDOWN works on the edit-text, just outside of it. I even tried WM_COMMAND -> case EDITTEXT_ID and I don't receive any messages there either... /:

2) In a vertex program, can I... create vertices? :D I just thought of shadows, for example, but for now I should mind other stuff, right?

Relative Games - My apps

Not sure about what's happening in #1 (although you should get messages in the dialog proc of your dialog [the one passed in the CreateDialog() call]), but as for #2: no, you cannot create vertices. To do shadows, the silhouette must be determined and extra vertices created to build the shadow volume before sending it to the vertex program. The vertex program can then be used to extrude the volume to infinity based on the added vertices.
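A rough sketch of that extrusion step, assuming the application duplicated the silhouette vertices and tagged them through texcoord0.x (1 = keep in place, 0 = extrude to infinity) — both the tagging scheme and the program.env[0] light slot are this example's conventions:

```
!!ARBvp1.0
PARAM mvp[4]   = { state.matrix.mvp };
PARAM lightPos = program.env[0];              # homogeneous light position, w = 1
TEMP dir, diff, pos;
SUB dir, vertex.position, lightPos;           # w = 0: a point at infinity
SUB diff, vertex.position, dir;
MAD pos, vertex.texcoord[0].x, diff, dir;     # tag 1 -> vertex, tag 0 -> infinity
DP4 result.position.x, mvp[0], pos;
DP4 result.position.y, mvp[1], pos;
DP4 result.position.z, mvp[2], pos;
DP4 result.position.w, mvp[3], pos;
END
```

The w = 0 trick is what lets a finite mesh be extruded "to infinity" without overflow.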
Ok, it's just that I saw some ARB_shadow extension and wondered what the heck it does :D

1) Ok, some other thing I came across while writing some test fragment programs: can I find out the fragment's 3D position? The window position doesn't help too much...

2) How does the DST instruction work? I'm sending 2 vectors and still don't seem to get what I want... it should return the length on all 4 components, right?
Oh, and how about square root? RSQ is reciprocal (which I don't know what it should mean); I just want to take the square root of a variable, that's all...

Hope it's not too much :D

Relative Games - My apps

cippyboy, are you using OGL 2.0 so you don't have to use extensions? I have been trying out GLEW but I get all these linker errors...
heh
Nope, just the plain extensions as described above and... as seen in so many demos/games :D

Relative Games - My apps

This topic is closed to new replies.
