How does the cube shadow mapping algorithm work?

I searched on the web, but I could not find an appropriate description. I would like to do the following in OpenGL ES 2.0:

In a simple scene, a point light source would move around, and I'd like to render per-fragment shadows. For this, I have to use a cube shadow map.

I understand the basic algorithm, which is:

1.) Render the scene 6 times from the light's POV. Store the depth values in the cubemap's corresponding face (if the light looks along +X, render into cube face +X; along -X, into face -X; and so on). A rough sketch of this pass follows the list.

2.) Render the scene from the camera's POV and use the depth values stored in the cubemap for the comparison:

if depth < distance from light, then the fragment is in shadow...
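To be concrete, I imagine the first pass as a loop over the six faces, roughly like this (the helper names are mine, it assumes an FBO with a depth renderbuffer is already set up, and how exactly to encode the depth is part of what I'm unsure about):

for (int face = 0; face < 6; ++face)
{
    // the cube face enums are consecutive: +X, -X, +Y, -Y, +Z, -Z
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + face,
                           shadowCubeTexture, 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    setLightViewForFace(face);  // one LookAt per face, from the light's position
    drawScene();                // with the depth-writing shader bound
}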

I have some problems and also some ideas; I would just like some confirmation or corrections of my ideas.

My problems:

How do I fetch from the cubemap? I know I have to use a vec3 for this, but how do I calculate this fetcher vector? My only idea is to use the vertexPosition - lightPosition vector, but I'm not really sure whether that is correct. :(

The other problem is the distance from the light: it is in world coordinates, so it is just a float value, while the stored depth values are in the [0.0, 1.0] range...

How do I map the distance into the [0.0, 1.0] range? My idea is to pass all 6 of the light's view matrices, plus the light's projection matrix, to the vertex shader. I calculate the vertex's position twice: once with the camera's MVP (as normal) and once with the light's MVP (for the shadow calculation, using the proper view matrix). This way I get the fragment's position from the light's POV again, and because of the w-divide, its z value can be used as the distance from the light in [0.0, 1.0] after the bias, so I can compare it with the depth value fetched from my cube shadow map... Am I right?
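In shader terms, my idea would look roughly like this (all names are made up, and selecting the proper one of the six view matrices is the part I find clumsy):

// vertex shader:
uniform mat4 u_cameraMVP;  // camera projection * view * model
uniform mat4 u_lightMVP;   // light projection * one of the 6 light views * model
attribute vec4 a_position;
varying vec4 v_lightClipPos;

void main()
{
    gl_Position = u_cameraMVP * a_position;
    v_lightClipPos = u_lightMVP * a_position;  // same vertex, seen from the light
}

// fragment shader:
precision mediump float;
varying vec4 v_lightClipPos;

void main()
{
    // manual w-divide, then bias from [-1,1] into [0,1]
    float lightDepth = (v_lightClipPos.z / v_lightClipPos.w) * 0.5 + 0.5;
    gl_FragColor = vec4(lightDepth);  // just to visualize the idea
}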

Please help me. Thanks in advance.

Edit:

All right, shadow cubemapping is starting to work. However, there's an error that I'm currently trying to fix.

At first, the shadows were "stupid": they were not where they should have been, or completely misshapen shadows were rendered...

The settings were: camera at (0, 4.9, 0), looking at (0,0,0), up (0,1,0). The views for the light:

+X : LookAt(eyeX, eyeY, eyeZ, eyeX +1, eyeY, eyeZ, 0,1,0, matrix); //own implementation which does the same as gluLookAt

-X : LookAt(eyeX, eyeY, eyeZ, eyeX -1, eyeY, eyeZ, 0,1,0, matrix);

+Y : LookAt(eyeX, eyeY, eyeZ, eyeX, eyeY +1, eyeZ, -1,0,0, matrix);

and so on...

This did not work. But I animated it and saw that the false shadow moved along with the light source: the light source moved up, so did the shadow; it moved down, the shadow did too...

(The light source was at (0,0,0) and moved on the Y axis; the box was to the left of the light.)

So for testing I did the following in the fragment shader:

vec3 fetcher = v_posWorld.xyz - u_light_pos;

fetcher.y *= -1.0;

This idea came from thinking that maybe the problem is that when rendering to a texture, the image ends up upside down. The test worked, but of course only for the -X and +X faces...

So I commented out the "fetcher.y *= -1.0;" line, and changed the views of the light:

+X : LookAt(eyeX, eyeY, eyeZ, eyeX +1, eyeY, eyeZ, 0,-1,0, matrix);

-X : LookAt(eyeX, eyeY, eyeZ, eyeX -1, eyeY, eyeZ, 0,-1,0, matrix);

+Y : LookAt(eyeX, eyeY, eyeZ, eyeX, eyeY+1, eyeZ, 0,0,-1, matrix);

-Y : LookAt(eyeX, eyeY, eyeZ, eyeX, eyeY-1, eyeZ, 0,0,1, matrix);

+Z : LookAt(eyeX, eyeY, eyeZ, eyeX, eyeY, eyeZ +1, 0,-1,0, matrix);

-Z : LookAt(eyeX, eyeY, eyeZ, eyeX, eyeY, eyeZ -1, 0,-1,0, matrix);

It works! Almost: the view settings for +Y and -Y are still not working as I expected :( After playing with it a little, I changed the camera's position to (0, -4.9, 0), and everything that had worked went bad: the shadows were "stupid" again.

I'm completely lost here. I do not know where my algorithm fails. Could it be that for rendering to a texture I should use a left-handed view (I mean when generating the view matrices for the light)...?

Anyway, I keep working on it, but maybe I do not understand cubemaps well. :(

(and sorry for the long edit)


  1. Using the light-to-vertex vector, as you assumed, is indeed the correct vector to use as the texture coordinate (see the sketch after this list).

  2. You can also just store a linear depth in the depth texture, by writing the distance from the vertex to the light source (divided by some known maximum light-influence distance to map it into [0,1]) into gl_FragDepth in the first 6 passes, instead of the vertex's projected depth. This way you can use the vertex's distance to the light directly in the depth comparison, with no need to project anything into light space, and you don't have to keep track of 6 different matrices and select the correct one for each vertex or fragment.
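For point 1, the fetch then looks like this in the camera pass (a sketch; u_shadowCube is a placeholder name, u_light_pos and v_posWorld are taken from your post):

precision mediump float;
uniform samplerCube u_shadowCube;
uniform vec3 u_light_pos;
varying vec4 v_posWorld;

void main()
{
    // the lookup vector only needs a direction; no normalization required
    vec4 stored = textureCube(u_shadowCube, v_posWorld.xyz - u_light_pos);
    gl_FragColor = stored;  // visualize the fetched depth
}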

EDIT: It seems you cannot write to gl_FragDepth in ES, which makes rendering your own depth a bit more complicated. Just rendering into a normal texture won't do, as 8 bits of precision per channel are too little in practice.

But you should be able to linearize the depth in the vertex shader by storing the vertex-to-light distance (transformed into [-1,1] and multiplied by the vertex's w component) in its z component; the rasterizer later divides by w and transforms the result into [0,1] to produce the fragment's depth:

uniform mat4 lightModelView;
uniform mat4 lightProjection;
uniform float maxLightDistance;

attribute vec4 vertex;

void main()
{
    vec4 lightSpaceVertex = lightModelView * vertex;
    // linear distance to the light, normalized into [0,1]
    float lightDistance = length(lightSpaceVertex.xyz) / maxLightDistance;

    gl_Position = lightProjection * lightSpaceVertex;
    // pre-multiply by w so the rasterizer's perspective divide and its
    // [-1,1] -> [0,1] mapping yield the linear distance again
    gl_Position.z = (2.0*lightDistance-1.0) * gl_Position.w;
}

It may be optimizable by changing the light's projection matrix accordingly, but this code (in conjunction with a simple pass-through fragment shader) should store the linear light-to-vertex distance in the depth buffer, if I'm not on a completely wrong track here. It just multiplies the vertex's z by its w and should therefore counter the perspective division step, which would otherwise result in a non-linear depth value (and it also counters the transformation from [-1,1] to [0,1], of course).
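To spell out the arithmetic: the rasterizer computes z_ndc = gl_Position.z / gl_Position.w = 2.0*lightDistance - 1.0, and then maps that to depth = 0.5*z_ndc + 0.5 = lightDistance, so exactly the normalized linear distance ends up in the depth buffer. The camera-pass comparison then reduces to a sketch like this (placeholder names again; how you decode the fetched texel depends on how the depth was stored, here I assume a readable depth texture's red channel):

precision mediump float;
uniform samplerCube u_shadowCube;
uniform vec3 u_light_pos;
uniform float maxLightDistance;
varying vec4 v_posWorld;

void main()
{
    vec3 lightToFrag = v_posWorld.xyz - u_light_pos;
    float current = length(lightToFrag) / maxLightDistance;
    float stored = textureCube(u_shadowCube, lightToFrag).r;
    // small constant bias against self-shadowing ("shadow acne"); tune as needed
    float lit = (current - 0.005 > stored) ? 0.0 : 1.0;
    gl_FragColor = vec4(vec3(lit), 1.0);
}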

EDIT: Regarding your new problems: first of all, I hope your light is located at (eyeX, eyeY, eyeZ), as the camera for the shadow map generation has to be located at the light's position, of course. If (eyeX, eyeY, eyeZ) is actually the position of your (normal) scene camera, then this is wrong and you should use (lightX, lightY, lightZ) instead.

Next, you should of course use a FoV (field of view) of exactly 90 degrees for the light's views, so the projection matrix should be generated something like this:

glFrustum(-near, near, -near, near, near, far);

or this:

gluPerspective(90, 1, near, far);
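As for the per-face view matrices themselves: if I read the cube map conventions correctly, the usual LookAt setup (with the light at (lightX, lightY, lightZ)) is:

+X : LookAt(lightX, lightY, lightZ, lightX +1, lightY, lightZ, 0,-1,0, matrix);
-X : LookAt(lightX, lightY, lightZ, lightX -1, lightY, lightZ, 0,-1,0, matrix);
+Y : LookAt(lightX, lightY, lightZ, lightX, lightY +1, lightZ, 0,0,1, matrix);
-Y : LookAt(lightX, lightY, lightZ, lightX, lightY -1, lightZ, 0,0,-1, matrix);
+Z : LookAt(lightX, lightY, lightZ, lightX, lightY, lightZ +1, 0,-1,0, matrix);
-Z : LookAt(lightX, lightY, lightZ, lightX, lightY, lightZ -1, 0,-1,0, matrix);

Note that the up vectors for the +Y and -Y faces are the opposite of the ones you posted, which would explain exactly those two faces misbehaving.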
