Why Is the Model View Matrix Not Applied to the Normal Vector in Toon Shading (Lighthouse3d Tutorial)?

I am learning GLSL by going through a tutorial on the Web.

The tutorial has an example called Toon Shading. Here is the link to Toon Shading - Version I.

In this example, the vertex shader is written as follows:

    uniform vec3 lightDir;
    varying float intensity;

    void main()
    {
        intensity = dot(lightDir, gl_Normal);
        gl_Position = ftransform();
    }

To the best of my understanding, if a surface is rotated, then the normal vectors of that surface's vertices should be rotated by the same amount, so that the normals reflect the new orientation of the surface. However, in the code above, the Model View Matrix is not applied to the normal vector. The normal vector is used directly to calculate the light intensity.

Regarding my concern, here is what the tutorial says:

"lets assume that the light’s direction is defined in world space."

and

"If no rotations or scales are performed on the model in the OpenGL application, then the normal defined in world space, provided to the vertex shader as gl_Normal, coincides with the normal defined in the local space."

These explanations raise several questions for me:

1. What are world space and local space? How are they different? (This question
   seems a little bit elementary, but I need to understand...)

2. I figure the fact that "the light’s direction is defined in world space"
   has something to do with not applying the Model View Matrix to the vertex
   normal. But what exactly is the connection?

3. Finally, if we don't apply the Model View Matrix to the normal vector,
   wouldn't the normal point in a direction different from the actual
   direction of the surface? How do we solve this problem?

I hope I made my questions clear.

Thanks!


World space is, well, what it sounds like: the space of the world. It is the common space that all objects that exist in your virtual world reside in. The main purpose of world space is to define a common space that the camera (or eye) can also be positioned within.

Local space, also called "model space" by some, is the space that your vertex attribute data is in. If your meshes come from someone using a tool like 3DS Max, Blender, etc., then the space of those positions and normals is not necessarily the same as world space.

Just to cap things off, eye-space (also called camera-space or view-space) is essentially world space, except everything is relative to the position and orientation of the camera. When you move the camera in world space, you're really just changing the world-to-eye space transformation. The camera in eye-space is always at the origin.
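
To make these spaces concrete, here is a minimal sketch of how a position moves through them in legacy GLSL. The modelMatrix uniform is a hypothetical one the application would supply, since the fixed-function pipeline only exposes the combined modelview:

    uniform mat4 modelMatrix;   // hypothetical: local -> world (not a GL built-in)

    void main()
    {
        vec4 worldPos = modelMatrix * gl_Vertex;          // local space -> world space
        vec4 eyePos   = gl_ModelViewMatrix * gl_Vertex;   // local space -> eye space in one step
        gl_Position   = gl_ProjectionMatrix * eyePos;     // eye space -> clip space
    }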

Personally, I get the impression that the Lighthouse3D tutorial writers were being a bit lazy. Situations in which your vertex normals are already in world space are rare, so not showing that you have to transform normals as well as positions (positions are what ftransform() handles) is misleading.

The tutorial is correct in that if you have a normal in world-space and a light direction in world-space (and you're doing directional lighting, not point-lighting), then you don't need to transform anything. The purpose of transforming the normal is to transform it from local space to the same space as your light direction. Here, they just define that they are in the same space.

Sadly, actual users will not generally have the luxury of defining that their vertex normals are in any space other than local. So they will have to transform them.
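
For instance, if the application supplied the light direction in world space, the shader would first have to bring the normal into world space before taking the dot product. A sketch, where worldSpaceLightDir and normalToWorld are hypothetical uniforms the application would provide:

    uniform vec3 worldSpaceLightDir;   // hypothetical: light direction in world space
    uniform mat3 normalToWorld;        // hypothetical: local-to-world rotation for normals
    varying float intensity;

    void main()
    {
        vec3 worldSpaceNormal = normalToWorld * gl_Normal;  // into the light's space
        intensity = dot(worldSpaceLightDir, worldSpaceNormal);
        gl_Position = ftransform();
    }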

"Finally, if we don't apply the Model View Matrix to the normal vector, wouldn't the normal point in a direction different from the actual direction of the surface?"

Who cares? All that matters is that the two directions are in the same space. It doesn't matter if that space is the same space as the vertex positions' space. As you get farther into graphics, you will find that a space that is convenient for lighting may not be a space that you ever transform positions into.

And that's OK. Because the lighting equation only takes a direction towards the light and a surface normal. It doesn't take a position, unless you're doing point-lighting, and even then, the position is only useful insofar as it lets you calculate a light direction and attenuation factor. You can take the light direction and transform it into whatever space you want.
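
Here is a sketch of that point-lighting case, done in eye space; eyeSpaceLightPos and the attenuation constants are made up for illustration:

    uniform vec3 eyeSpaceLightPos;   // hypothetical: point light position in eye space
    varying float intensity;

    void main()
    {
        vec3 eyePos  = vec3(gl_ModelViewMatrix * gl_Vertex);
        vec3 toLight = eyeSpaceLightPos - eyePos;             // direction towards the light
        float atten  = 1.0 / (1.0 + 0.1 * length(toLight));   // simple linear attenuation
        vec3 normal  = normalize(gl_NormalMatrix * gl_Normal);
        intensity    = atten * dot(normalize(toLight), normal);
        gl_Position  = ftransform();
    }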

Some people do lighting in local space. If you're doing bump-mapping, you will often want to do lighting in the space tangent to the plane of the texture.
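
A sketch of that tangent-space setup for bump mapping; the per-vertex tangent attribute and eyeSpaceLightDir are assumptions, and the bitangent is derived from the other two vectors:

    attribute vec3 tangent;          // hypothetical per-vertex tangent attribute
    uniform vec3 eyeSpaceLightDir;   // hypothetical: light direction in eye space
    varying vec3 tangentSpaceLightDir;

    void main()
    {
        vec3 n = normalize(gl_NormalMatrix * gl_Normal);
        vec3 t = normalize(gl_NormalMatrix * tangent);
        vec3 b = cross(n, t);        // bitangent completes the orthonormal TBN basis
        // For an orthonormal basis, the inverse is the transpose, so each
        // tangent-space component is a dot product with a basis vector.
        tangentSpaceLightDir = vec3(dot(eyeSpaceLightDir, t),
                                    dot(eyeSpaceLightDir, b),
                                    dot(eyeSpaceLightDir, n));
        gl_Position = ftransform();
    }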


Addendum:

The standard way to handle normals (assuming that your matrices all still come through OpenGL's standard matrix commands) is to assume that the normals are in the same space as your position data. Therefore, they need to be transformed as well.

However, for reasons that are better explained here (note: in the interest of full disclosure, I wrote that page), you cannot just transform the normals with the matrix you would use for the positions. Fortunately, OpenGL was designed with this in mind, so if you're using the standard OpenGL matrix stack, it gives you a pre-defined matrix for handling this: gl_NormalMatrix.

A typical lighting scenario in GLSL would look like this:

    uniform vec3 eyeSpaceLightDir;
    varying float intensity;

    void main()
    {
        vec3 eyeSpaceNormal = gl_NormalMatrix * gl_Normal;
        intensity = dot(eyeSpaceLightDir, eyeSpaceNormal);
        gl_Position = ftransform();
    }

I tend to prefix my variable names with the space they are in, so that it's obvious what's going on.
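
If you're building your own matrices rather than using the fixed-function stack, there is no gl_NormalMatrix, but you can compute the equivalent yourself. A sketch in newer GLSL (inverse() on matrices requires version 1.40+; the uniform and attribute names are assumptions):

    #version 140

    uniform mat4 modelViewMatrix;    // hypothetical: application-supplied modelview
    uniform mat4 projectionMatrix;   // hypothetical: application-supplied projection
    uniform vec3 eyeSpaceLightDir;
    in vec4 position;
    in vec3 normal;
    out float intensity;

    void main()
    {
        // Inverse-transpose of the upper 3x3 keeps normals correct under non-uniform scale.
        mat3 normalMatrix = transpose(inverse(mat3(modelViewMatrix)));
        intensity = dot(eyeSpaceLightDir, normalMatrix * normal);
        gl_Position = projectionMatrix * modelViewMatrix * position;
    }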
