
Tube lights or projected area light

Started by hyperknot, February 15, 2018 03:28 AM
4 comments, last by hyperknot

Hi, first post here. I'm making a simple augmented reality game based on the well-known 2D puzzle game Slitherlink (also called Loopy). This will be my first time using shaders, so I'm on a bit of a steep learning curve here.

My idea is that AR will look really nice with self-illuminating objects instead of normal materials, where the missing or wrong shadows would be quite striking when composited onto the camera feed.

So I'd like to render the game as "laser beams" levitating above the table, which technically means displaying and illuminating the scene with tube lights. This is where I'm stuck.

I've implemented smooth 2D line segment rendering by creating rectangles perpendicular to the camera and shading them in the fragment shader.
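
Roughly like this on the fragment side (a simplified sketch, not my exact code; LineVertexOut, LineUniforms, segA, segB and halfWidth are placeholder names):


// distance from point p to the segment a-b, in the quad's 2D space
float sdSegment(float2 p, float2 a, float2 b) {
    float2 pa = p - a;
    float2 ba = b - a;
    float h = clamp(dot(pa, ba) / dot(ba, ba), 0.0, 1.0);
    return length(pa - ba * h);
}

fragment float4 lineFragmentShader(LineVertexOut in [[stage_in]],
                                   constant LineUniforms& u [[buffer(2)]]) {
    float d = sdSegment(in.posQuad, u.segA, u.segB);
    // smooth falloff from the bright core out to the edge of the half-width
    float core = 1.0 - smoothstep(u.halfWidth * 0.5, u.halfWidth, d);
    return float4(u.lineColor * core, core);
}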

I also looked into area lights, but all I could come up with was the "nearest point in a rectangle" approach, which:
- looks nice for diffuse as long as the light is a uniform color
- but is totally wrong for Blinn and Phong speculars

My biggest problem is how to get the tube light illumination effect. Instead of the uniform white area in the screenshot below, I'd like to get colored, grid-like illumination on the ground. The number of tube lights can go up to 200.

My only idea is to render to a buffer from a top-down orthographic projection, apply a Gaussian blur, and use it for diffuse lighting on the floor. Does this sound reasonable?
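
Concretely, I'd run something like a standard separable Gaussian blur over that buffer, once horizontally and once vertically (a sketch with the usual 9-tap weights; BlurVertexOut and texelStep are placeholder names):


// one pass of a separable Gaussian blur; run with texelStep = (1/w, 0)
// for the horizontal pass and (0, 1/h) for the vertical pass
fragment float4 blurFragmentShader(BlurVertexOut in [[stage_in]],
                                   texture2d<float> srcTex [[texture(0)]],
                                   sampler s [[sampler(0)]],
                                   constant float2& texelStep [[buffer(0)]]) {
    float w[5] = {0.227027, 0.1945946, 0.1216216, 0.054054, 0.016216};
    float3 result = srcTex.sample(s, in.texCoords).rgb * w[0];
    for (int i = 1; i < 5; i++) {
        float2 offset = texelStep * float(i);
        result += srcTex.sample(s, in.texCoords + offset).rgb * w[i];
        result += srcTex.sample(s, in.texCoords - offset).rgb * w[i];
    }
    return float4(result, 1.0);
}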

Also, does anyone know how to get specular reflections right with an area light? Nothing PBR, just Blinn would be nice.

The scene is very simple: the floor is at y = 0, all lights are at the same height, and only the floor needs to be lit.

[Screenshot: hyperlines on macOS, 2018-02-15 04:05]
[Screenshot: hyperlines on macOS, 2018-02-15 04:19]

My shader (Metal, but pretty much 1:1 GLSL):


fragment float4 floorFragmentShader(FloorVertexOut in [[stage_in]],
                                    constant Uniforms& uniforms [[buffer(2)]],
                                    texture2d<float> tex2D [[texture(0)]],
                                    sampler sampler2D [[sampler(0)]]) {

    float3 n = normalize(in.normalEye);

    float lightIntensity = 0.05;
    float3 lightColor = float3(0.7, 0.7, 1) * lightIntensity;

    // area light approximated by the nearest point in the light rectangle
    float limitX = clamp(in.posWorld.x, -0.3, 0.3);
    float limitZ = clamp(in.posWorld.z, -0.2, 0.2);
    float3 lightPosWorld = float3(limitX, 0.05, limitZ);
    float3 lightPosEye = (uniforms.viewMatrix * float4(lightPosWorld, 1)).xyz;

    // diffuse
    float3 s = normalize(lightPosEye - in.posEye);

    float diff = max(dot(s, n), 0.0);
    float3 diffuse = diff * lightColor * 0.2;

    // specular
    float3 v = normalize(-in.posEye);

    // Blinn
    float3 halfwayDir = normalize(v + s);
    float  specB = pow(max(dot(halfwayDir, n), 0.0), 64.0);

    // Phong (alternative, currently unused below)
    float3 reflectDir = reflect(-s, n);
    float  specR = pow(max(dot(reflectDir, v), 0.0), 8.0);

    float3 specular = specB * lightColor;

    // attenuation
    float distance = length(lightPosEye - in.posEye);
    float attenuation = 1.0 / (distance * distance);

    diffuse *= attenuation;
    specular *= attenuation;

    float3 lighting = diffuse + specular;
    float3 color = tex2D.sample(sampler2D, in.texCoords).xyz;
    color *= lighting + 0.1;

    return float4(color, 1);
}

 

I take it you have read the Wicked Engine blog post on the topic? It's probably the most accessible description of the specular technique I've run into, and their results look pretty good.

1 hour ago, hyperknot said:

My only idea is to render to a buffer from a top-down orthographic projection, apply a Gaussian blur, and use it for diffuse lighting on the floor. Does this sound reasonable?

When on mobile, fake as much as possible :) I think there's a pretty good chance that will look fine.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

3 hours ago, hyperknot said:

Also, does anyone know how to get specular reflections right with an area light? Nothing PBR, just Blinn would be nice.

You could reuse your ortho projection buffer for this as well. If you compute the blur in iterations, you could keep the result after each (or some) of the iterations, so you can blend between sharp reflections up close and blurry reflections at a distance.

To get the sampling position into the buffer, you can raytrace from the eye to the ground, then reflect the ray and intersect it with the blurred buffer plane at the height of the tubes.
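
Something like this for the sampling position (just a sketch of the idea; tubeHeight, lightmapHalfSize, mipPerMeter and uniforms.cameraPosWorld are made-up names, and it assumes the blur chain is stored in the buffer's mips):


// reflect the eye ray at the ground (floor at y = 0, normal up)
float3 V = normalize(in.posWorld - uniforms.cameraPosWorld);
float3 R = reflect(V, float3(0, 1, 0));
// intersect with the plane y = tubeHeight holding the blurred buffer
float t = (tubeHeight - in.posWorld.y) / max(R.y, 1e-4);
float3 hit = in.posWorld + R * t;
// map the world-space extent of the buffer to [0,1] UVs
float2 uv = hit.xz / lightmapHalfSize * 0.5 + 0.5;
// farther hits sample blurrier mips of the buffer
float3 reflection = blurTex.sample(s, uv, level(t * mipPerMeter)).rgb;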

You could also figure out the math to project the buffer directly onto the ground on the CPU, to save the GPU instructions for intersecting planes at each pixel. (Similar to planar reflections at a water surface: just render the buffer at negative height. But I guess calculating the amount of blurring results in the same math as raytracing.)

Ha, I remember I did this myself for a mobile game as well. No shaders there, just the invert-height trick using pre-blurred textures.

 

@hyperknot Hi, fully featured area lights are probably not the best fit for mobile, as they can result in quite heavy shaders. That being said, it is actually harder to find a proper diffuse term for area lights than speculars. For the speculars, all you have to do is trace the area light with your reflection vector (which you already have, named reflectDir). If the trace succeeds, you can use the same reflection vector to calculate a Phong specular. If it doesn't succeed, you have to find the point on the area light surface closest to the reflection ray. Your new reflection ray will then be closestPoint - in.posEye in your case; then just proceed to calculate the Phong specular with that. :)
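
For a rectangular light, the trace itself can be as simple as a ray vs. plane test plus a bounds check (a sketch; all the light* names here are placeholders, not from an actual engine):


// R is the normalized reflection ray starting at 'origin'; the light lies
// in a plane with normal lightNormal, spanned by lightTangent/lightBitangent
bool traceRectLight(float3 origin, float3 R,
                    float3 lightCenter, float3 lightNormal,
                    float3 lightTangent, float3 lightBitangent,
                    float2 halfExtents)
{
    float denom = dot(R, lightNormal);
    if (fabs(denom) < 1e-5) return false;              // ray parallel to the light plane
    float t = dot(lightCenter - origin, lightNormal) / denom;
    if (t < 0.0) return false;                         // light plane is behind the ray
    float3 local = origin + R * t - lightCenter;
    return fabs(dot(local, lightTangent)) <= halfExtents.x &&
           fabs(dot(local, lightBitangent)) <= halfExtents.y;
}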

For tube lights, you can find the closest point on the light surface by first finding the point on the tube's line segment that is closest to the reflection ray:


// P0 and P1 are the tube line segment endpoints
// surface.P is the surface position (start point of the reflection ray)
// R is the (normalized) reflection ray direction
float3 L0 = P0 - surface.P;
float3 L1 = P1 - surface.P;
float3 Ld = L1 - L0;

// parameter of the point on the segment closest to the ray through surface.P along R
float t = dot(R, L0) * dot(R, Ld) - dot(L0, Ld);
t /= dot(Ld, Ld) - sqr(dot(R, Ld)); // sqr is just x*x

float3 L = (L0 + saturate(t) * Ld);

Once you have the closest point on the segment, place a sphere at that point with the radius of the tube and pick the closest point on the sphere:


// vector from the ray to the sphere center, clamped to the tube radius
float3 centerToRay = dot(L, R) * R - L;
float3 closestPoint = L + centerToRay * saturate(light.GetRadius() / length(centerToRay));
L = normalize(closestPoint);

And now you can use this L as the new light vector as input to your Phong specular term.
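
In your shader's terms, that would end up roughly like this (untested sketch; note the ray to trace with is the view reflection reflect(-v, n), not your current reflectDir = reflect(-s, n)):


// representative-point specular: trace with the view reflection,
// then use the adjusted L from the snippets above as the light vector
float3 R = reflect(-v, n);
float specTube = pow(max(dot(R, L), 0.0), 8.0);
float3 specular = specTube * lightColor;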

Good luck!

Thanks for all the replies @swiftcoder, @JoeJ, @turanszkij

First, I wanted to understand the whole area-light diffuse on a plane, leaving the specular for a later part.

It puzzled me that:
1. A point light's diffuse should have 1/d^2 attenuation times the cosine term.
2. An infinitely long tube light's diffuse is supposed to have 1/d attenuation times the cosine term (sanity-checked below).
3. The math behind these calculations ends up in double and triple integrals all over the place.
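
For point 2, the 1/d falloff at least checks out if you ignore the cosine terms and just integrate point-light falloff along an infinite line at perpendicular distance d:

$$\int_{-\infty}^{\infty} \frac{dt}{d^2 + t^2} \;=\; \frac{1}{d}\Big[\arctan\frac{t}{d}\Big]_{-\infty}^{\infty} \;=\; \frac{\pi}{d}$$

so the summed 1/r^2 contributions of the points along the tube fall off as 1/d overall.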
 
So I wanted to understand what a physically correct tube light's diffuse actually looks like. I gave up on the theory and instead looked at various physically correct unbiased renderers, ending up with the open-source Mitsuba renderer. Mitsuba has a simple XML-based scene format and was used to validate the Frostbite renderer's calculations. Alternatives to Mitsuba are Tungsten and pbrt-v3, by the same group of people.

So the photon-mapped rendering of 4 lights looks like this:
[Rendering: photon-mapped scene with 4 tube lights]

And the diffuse part of a single light looks like this:

[Rendering: diffuse contribution of a single tube light]

Now, it'd be interesting to analyse the second one in linear color space, as it's actually very much not a simple blur, but more like an elliptical gradient. Below is a heavily levels-adjusted version from Photoshop.


[Image: levels-adjusted version showing elliptical gradients]

But instead of an analytical solution in a fragment shader, I figured it would be simplest, and actually most efficient on mobile devices, to just load my rendered image as a texture.

I lost way too much time trying to load an OpenEXR file into a Metal texture; in the end I gave up and loaded an 8-bit sRGB image, which Metal should gamma-decode on sampling and re-encode when rendering, so I _think_ there shouldn't be any banding issues.

So now I'm here, with a different line shader which places these textures on the floor.

[Screenshot: hyperlines on macOS, 2018-02-22 04:30]

How would you progress from here? Should I bake the ortho image and calculate texture coordinates in the floor's fragment shader, or should I render the lights from the projected view and calculate the diffuse later in screen space? Do I understand correctly that the second version is deferred rendering?

I think in my case the ortho texture would actually be more efficient, as it barely changes (only when the player makes a move), while the deferred version would recalculate it every frame.
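
For the ortho variant, the floor shader would then collapse to something like this (a sketch; lightmapHalfSize is a placeholder for the world-space extent the baked texture covers):


// floor lit from a pre-baked orthographic lightmap
fragment float4 floorFragmentShader(FloorVertexOut in [[stage_in]],
                                    texture2d<float> lightmap [[texture(0)]],
                                    texture2d<float> albedoTex [[texture(1)]],
                                    sampler s [[sampler(0)]],
                                    constant float2& lightmapHalfSize [[buffer(2)]]) {
    // map world XZ on the floor into the lightmap's [0,1] UV range
    float2 uv = in.posWorld.xz / lightmapHalfSize * 0.5 + 0.5;
    float3 lighting = lightmap.sample(s, uv).rgb;
    float3 albedo = albedoTex.sample(s, in.texCoords).rgb;
    return float4(albedo * (lighting + 0.1), 1.0);  // keep the small ambient term
}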

