Determining if a polygon is inside the viewing frustum

Here are my questions. I heard that OpenGL ignores vertices that are outside the viewing frustum and doesn't consider them in the rendering pipeline. Recently I ran into a post that said you should check this yourself, and that if a point is not inside, it is your duty to find that out, not OpenGL's! Now,

  1. Is this true about OpenGL? Does it recognize that a point is not inside the frustum and skip rendering it?
  2. I am developing a grass scene with about 4000 blades of grass drawn on rectangles. I get an awful FPS, and the only solution I came up with was to determine which grass rectangles are inside the viewport and render only those. My question here is: what is the best way for me to find out which rectangles are inside the frustum and which are not?

Please note that my question is mainly about rectangles, not points. I also need to sort the grass by distance, so it would be best if this could be done natively in client-side memory.

Please let me know if there are any effective, real-time ways to find out whether a given mesh is inside or outside the frustum. Thanks.


Even if it is true that OpenGL does not show polygons outside the frustum (like any other 3D engine), it still has to process them to check whether they are inside or not, and the FPS drops. Usually some smart optimization algorithm is needed to avoid flooding the scene with invisible objects. Check, for example, BSP trees + PVS or portals as a starting point. To check whether there is some bottleneck in the application, you can try gDebugger. If nothing else is obviously wrong, optimizing so that you draw just the PVS (potentially visible set) is the way to go.


OpenGL won't render pixels ("fragments") outside your screen, so it has to clip somehow...

More precisely:

  • You submit your geometry
  • You make a Draw Call (glDrawArrays or glDrawElements)
  • Each vertex goes through the vertex shader, which computes the final position of the vertex in clip space. If you didn't write a vertex shader (i.e. old OpenGL), the driver will create one for you.
  • The perspective division transforms these coordinates into Normalized Device Coordinates. Roughly, this means that the frustum of your camera is deformed to fit into a [-1,1]x[-1,1]x[-1,1] box
  • Everything outside this box is clipped. This can mean completely discarding a triangle, or subdividing it if it straddles a clipping plane
  • Each remaining triangle is rasterized into fragments
  • Each fragment goes through the fragment shader

So basically, OpenGL knows how to clip, but each vertex still has to go through the vertex shader. So submitting your entire world will work, of course, but if you can find a way not to submit everything, your GPU will be happier.
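
To make that concrete, here is a minimal sketch (not from this answer; the helper names and the assumption of a column-major model-view-projection matrix are mine) of the same point-vs-frustum test the hardware performs after the vertex shader: a clip-space point (x, y, z, w) is inside the standard OpenGL clip volume when each of x, y and z lies between -w and w.

    // Minimal sketch: CPU-side version of the clip test the GPU applies
    // after the vertex shader. Assumes a column-major 4x4 MVP matrix, as
    // produced by typical OpenGL math libraries.
    struct Vec4 { float x, y, z, w; };

    Vec4 transform(const float mvp[16], float px, float py, float pz) {
        Vec4 c;
        c.x = mvp[0]*px + mvp[4]*py + mvp[8]*pz  + mvp[12];
        c.y = mvp[1]*px + mvp[5]*py + mvp[9]*pz  + mvp[13];
        c.z = mvp[2]*px + mvp[6]*py + mvp[10]*pz + mvp[14];
        c.w = mvp[3]*px + mvp[7]*py + mvp[11]*pz + mvp[15];
        return c;
    }

    bool pointInFrustum(const float mvp[16], float px, float py, float pz) {
        Vec4 c = transform(mvp, px, py, pz);
        return -c.w <= c.x && c.x <= c.w &&
               -c.w <= c.y && c.y <= c.w &&
               -c.w <= c.z && c.z <= c.w;
    }

A single point test like this is rarely enough on its own (a rectangle can be visible even if all four of its corners are outside), which is why bounding-volume tests like the one below are the usual approach.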

This is a tradeoff, of course. If you spend 10ms checking each and every patch of grass on the CPU so that the GPU has only the minimal amount of data to draw, it's not a good solution either.

If you want to optimize grass, I suggest culling big patches (5m x 5m or so). It's standard AABB-frustum testing.
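
A sketch of that AABB-frustum test, assuming a column-major combined view-projection matrix and using illustrative type names, could look like this. The six planes are extracted with the common Gribb/Hartmann method (normals pointing inward), and each box is tested with the "positive vertex" trick: if the corner of the box farthest along a plane's normal is behind that plane, the whole box is outside.

    // Hedged sketch of standard AABB-vs-frustum culling.
    struct Plane { float a, b, c, d; };   // a*x + b*y + c*z + d >= 0 means "inside"
    struct AABB  { float minX, minY, minZ, maxX, maxY, maxZ; };

    // Extract the six frustum planes from a column-major view-projection matrix.
    void extractPlanes(const float vp[16], Plane planes[6]) {
        // Element at row i, column j of the matrix is vp[j*4 + i].
        auto row = [&](int i, int j) { return vp[j*4 + i]; };
        for (int k = 0; k < 6; ++k) {
            int axis   = k / 2;                        // 0 = x, 1 = y, 2 = z
            float sign = (k % 2 == 0) ? 1.0f : -1.0f;  // +: left/bottom/near, -: right/top/far
            planes[k].a = row(3, 0) + sign * row(axis, 0);
            planes[k].b = row(3, 1) + sign * row(axis, 1);
            planes[k].c = row(3, 2) + sign * row(axis, 2);
            planes[k].d = row(3, 3) + sign * row(axis, 3);
        }
    }

    bool aabbInFrustum(const Plane planes[6], const AABB& box) {
        for (int k = 0; k < 6; ++k) {
            const Plane& p = planes[k];
            // "Positive vertex": the box corner farthest along the plane normal.
            float x = (p.a >= 0.0f) ? box.maxX : box.minX;
            float y = (p.b >= 0.0f) ? box.maxY : box.minY;
            float z = (p.c >= 0.0f) ? box.maxZ : box.minZ;
            if (p.a*x + p.b*y + p.c*z + p.d < 0.0f)
                return false;                          // entirely outside this plane
        }
        return true;                                   // inside or intersecting
    }

With 5m x 5m patches you run this test per patch (a few hundred tests for a large field), not per grass blade, and the planes only need to be re-extracted when the camera moves.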

If you want to optimize a more generic model, you can investigate quadtrees for "flat" models, and octrees and BSP trees for more complex objects.


Yes, OpenGL does not rasterize triangles outside the viewing frustum. But this does not mean it is optimal for applications: the OpenGL implementation still has to transform the vertex coordinates (using the fixed pipeline or vertex shaders), and only once it has the normalized coordinates does it finally know whether the triangle lies inside the viewing frustum.

This means that no pixels are rasterized in that case, but the vertex data is processed all the same; it simply produces no fragments from a non-visible triangle!

The OpenGL extension ARB_occlusion_query may help you, but the discussion in its specification makes this clear:

Do occlusion queries make other visibility algorithms obsolete?

    No.

    Occlusion queries are helpful, but they are not a cure-all.  They
    should be only one of many items in your bag of tricks to decide
    whether objects are visible or invisible.  They are not an excuse
    to skip frustum culling, or precomputing visibility using portals
    for static environments, or other standard visibility techniques.
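
As an illustration of the query mechanism itself (not a complete visibility system), the usual pattern looks roughly like the sketch below. It assumes a current OpenGL context, a GLEW-style loader, and a hypothetical drawBoundingBox() helper that renders cheap proxy geometry for a grass patch.

    // Hedged sketch of the core occlusion-query pattern (GL 1.5+ / ARB_occlusion_query):
    // draw a cheap proxy with color/depth writes disabled, then ask how many
    // samples passed the depth test.
    #include <GL/glew.h>

    GLuint issueOcclusionQuery() {
        GLuint query;
        glGenQueries(1, &query);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // don't write color
        glDepthMask(GL_FALSE);                                // don't write depth
        glBeginQuery(GL_SAMPLES_PASSED, query);
        // drawBoundingBox(patch);                            // hypothetical proxy draw
        glEndQuery(GL_SAMPLES_PASSED);
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_TRUE);
        return query;
    }

    bool patchVisible(GLuint query) {
        GLuint samples = 0;
        // Reading the result right away stalls the pipeline; real code should
        // poll GL_QUERY_RESULT_AVAILABLE or reuse the previous frame's result.
        glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);
        return samples > 0;
    }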

Regarding sorting meshes by depth: you should use the depth buffer. Essentially, a mesh fragment is rendered only if its distance from the viewpoint is less than that of the fragment previously written at the same position, so you do not have to sort the meshes yourself. This buffer is essentially free, and it improves performance by discarding the farther fragments.
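
For reference, enabling the depth test is only a few calls (a minimal sketch; the exact state setup depends on your application, and alpha-blended grass billboards would still need back-to-front sorting despite the depth buffer):

    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);                                  // keep the nearest fragment
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);    // clear depth every frame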


  1. Yes. Like others have pointed out, OpenGL has to perform a lot of per-vertex operations to determine if it is in the frustum. It must do this for every vertex you send it. In addition to the processing overhead that must take place, keep in mind that there is also additional overhead in the transmission of those vertices from the CPU to the GPU. You want to avoid sending information to the GPU that it isn't going to use. Though the bandwidth between the CPU and GPU is quite good on modern hardware, there's still a limit.

  2. What you want is a Scene Graph. Scene graphs are frequently implemented with some kind of spatial partitioning scheme, e.g., quadtrees, octrees, BSP trees, etc. Spatial partitioning allows you to intelligently determine which geometry is visible. Instead of doing this on a per-vertex basis (as OpenGL is forced to do), it can eliminate huge spatial subsets of geometry at a time. When rendering a complex scene, the performance savings can be enormous (see the sketch below).
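
As a rough illustration of the spatial-partitioning idea for a flat grass field, a quadtree cull could be sketched like this; it reuses the Plane, AABB and aabbInFrustum() helpers from the AABB sketch above, and the node layout is purely illustrative.

    // Hedged sketch: quadtree culling for a flat grass field. Each node covers
    // a square region; leaves hold indices of grass patches. If a node's bounds
    // fail the frustum test, its entire subtree is skipped.
    #include <memory>
    #include <vector>

    struct QuadNode {
        AABB bounds;                            // region covered by this node
        std::unique_ptr<QuadNode> children[4];  // all null for leaf nodes
        std::vector<int> patchIndices;          // grass patches stored in a leaf
    };

    void collectVisible(const QuadNode& node, const Plane planes[6],
                        std::vector<int>& visible) {
        if (!aabbInFrustum(planes, node.bounds))
            return;                             // whole subtree is outside the frustum
        visible.insert(visible.end(),
                       node.patchIndices.begin(), node.patchIndices.end());
        for (const auto& child : node.children)
            if (child)
                collectVisible(*child, planes, visible);
    }

The visible patch indices can then be depth-sorted and drawn, which keeps both the culling and the sorting on the client side, as the question asks.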
