The app I'm developing displays a large-ish (about 1M vertices), static 2D image. Due to memory limitation issues I have been filling the VBOs with new data every time the user scrolls or zooms, giving the impression that the entire image "exists", even though it doesn't.
Recently I discovered that by using the "android:largeHeap" option I can get 256 MB of heap space on a Motorola Xoom, which means that I can store the entire image in VBOs. In my ideal world I would simply pass the OpenGL engine the VBOs and either tell it that the camera has moved, or use glScale/glTranslate to zoom/scroll.
My questions are these: am I on the right track? Should I always "draw" all of the chunks and let OpenGl figure out which will actually be seen, or figure out which chunks are visible myself? Any difference between using something like gluLookAt and glScale/glTranslate?
I don't care about aspect ratio distortion (the image is mathematically generated, not a photo), it is much wider than it is high, and in the future the number of vertices could get much, much larger (e.g. 60M). Thanks for your time.
Never let OpenGL figure out by itself what's on screen. It won't. All vertices will be transformed, and those that aren't on screen will be clipped; but you have better knowledge of your scene than OpenGL does.
Using a huge 256 MB VBO will make you render the whole scene each time, and transform ALL vertices each time, which isn't good for performance.
Make a number of small VBOs (e.g. only a 3x3 grid of them should be visible at any moment), and display only those that are visible. Optionally, pre-fill upcoming VBOs based on movement extrapolation...
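A minimal sketch of the culling step, assuming the image is split into fixed-width column chunks (the class and parameter names here are illustrative, not part of any OpenGL API): given the camera's visible window, compute the index range of chunks whose VBOs need to be drawn and skip the rest.

```java
// Hypothetical helper: decide which fixed-width chunks of a wide image
// intersect the camera's visible window, so only those VBOs get drawn.
public class ChunkCuller {
    // Returns the inclusive index range [first, last] of visible chunk
    // columns, clamped to the valid range [0, chunkCount - 1].
    static int[] visibleColumns(float camLeft, float camRight,
                                float chunkWidth, int chunkCount) {
        int first = Math.max(0, (int) Math.floor(camLeft / chunkWidth));
        int last = Math.min(chunkCount - 1,
                            (int) Math.floor(camRight / chunkWidth));
        return new int[] { first, last };
    }
}
```

In the render loop you would then bind and draw only the VBOs in `[first, last]`, and kick off background fills for the neighbours just outside that range.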
There is no difference between gluLookAt and glTranslate/glScale. Both compute matrices, that's it.
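To illustrate "both just compute matrices": for a 2D camera the glScale + glTranslate pair collapses into a single 4x4 matrix you could equally well build yourself (this is a sketch with made-up names; the column-major layout matches what OpenGL expects).

```java
// Illustrative view matrix for a 2D camera at (camX, camY) with uniform
// zoom. Equivalent to calling glScalef(zoom, zoom, 1) followed by
// glTranslatef(-camX, -camY, 0): a point p maps to zoom * (p - cam).
public class View2D {
    static float[] viewMatrix2D(float camX, float camY, float zoom) {
        return new float[] {
            zoom, 0,    0, 0,   // column 0
            0,    zoom, 0, 0,   // column 1
            0,    0,    1, 0,   // column 2
            -camX * zoom, -camY * zoom, 0, 1  // column 3: translation
        };
    }
}
```

gluLookAt does the same kind of thing for a general 3D eye/center/up triple; for a flat 2D scene it buys you nothing over the matrix above.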
By the way, if your image is static, can't you precompute it (à la Google Maps)? Similarly, does your data offer some way to be "reduced" when zoomed out? E.g. for a point cloud, only display 1 out of N points...
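The "1 out of N points" idea can be sketched as a zoom-dependent stride; the stride formula below is one illustrative choice among many, not a fixed rule.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical level-of-detail sketch: when zoomed out, keep only every
// Nth point of a point cloud before uploading it to a VBO.
public class PointLod {
    // zoom >= 1 means fully zoomed in (keep every point); smaller zoom
    // values increase the stride so fewer points are kept.
    static List<float[]> decimate(List<float[]> points, float zoom) {
        int stride = Math.max(1, Math.round(1f / Math.max(zoom, 1e-6f)));
        List<float[]> kept = new ArrayList<>();
        for (int i = 0; i < points.size(); i += stride) {
            kept.add(points.get(i));
        }
        return kept;
    }
}
```

At zoom 1 every point survives; at zoom 0.25 only every 4th point is uploaded, which keeps the VBO size roughly proportional to what is actually distinguishable on screen.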