ardi
2014-07-18 08:59:55 UTC
While looking at OpenGL details in an app which is otherwise running fine,
I found something quite strange: the OpenGL view has a depth buffer of only
16 bits. I didn't notice it before because the view doesn't show a model
prone to depth precision issues, so 16 bits are enough, but it's strange
nonetheless.
This is on OSX, but I believe it will be the same on other platforms. It
happened while asking for a canvas with double buffering, RGBA mode, an
alpha channel, a depth buffer, and no stencil. I checked that this iMac has
visuals with 8/8/8/8 RGBA, double buffering, and a 32 bit depth buffer.
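In case it matters, a legacy-style query like the following is enough to
see the 16 bits (just a sketch, run with the wxGLContext already made
current on the canvas; GL_DEPTH_BITS is deprecated in core profiles but
fine for a quick check):

#include <wx/glcanvas.h>
#include <wx/log.h>

void LogDepthBits()
{
    // Query the actual depth buffer size of the current framebuffer.
    GLint depthBits = 0;
    glGetIntegerv(GL_DEPTH_BITS, &depthBits);   // reports 16 here, not 32
    wxLogMessage("Depth buffer bits: %d", (int)depthBits);
}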
So I took a look at how I define wxGL_FLAGS, and this seems to be the
cause, because according to the docs:
"WX_GL_DEPTH_SIZE Specifies number of bits for Z-buffer (typically 0, 16 or
32)."
However, I've always understood DEPTH_SIZE in the UNIX (GLX) sense.
According to the UNIX manpages:
"GLX_DEPTH_SIZE Must be followed by a nonnegative minimum size
specification. If this value is zero, visuals with no depth buffer are
preferred. Otherwise, the largest available depth buffer of at least the
minimum size is preferred."
So, obviously, I was requesting WX_GL_DEPTH_SIZE with just 1 bit, because
that's how I've always obtained the largest available depth buffer on the
system. On UNIX systems this has always worked fine (SGI IRIX, Linux,
etc.).
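To make it concrete, the attribute list is along these lines (a simplified
sketch, not my exact code; the 8-bit minimums are just for illustration):

#include <wx/glcanvas.h>

// Double buffer, RGBA with alpha, no stencil, and a depth size of 1,
// intended in the GLX sense of "the deepest available buffer of at
// least 1 bit".
static const int kCanvasAttribs[] =
{
    WX_GL_RGBA,
    WX_GL_DOUBLEBUFFER,
    WX_GL_MIN_RED,      8,
    WX_GL_MIN_GREEN,    8,
    WX_GL_MIN_BLUE,     8,
    WX_GL_MIN_ALPHA,    8,
    WX_GL_STENCIL_SIZE, 0,
    WX_GL_DEPTH_SIZE,   1,   // this is the value in question
    0                        // list terminator
};

// Passed to the canvas as: new wxGLCanvas(parent, wxID_ANY, kCanvasAttribs);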
Is there any way to get, with wxWidgets, a visual with the largest depth
buffer that has at least 8/8/8/8 RGBA bits? Well, yes, I could write some
code calling IsDisplaySupported
<http://docs.wxwidgets.org/trunk/classwx_g_l_canvas.html#aea68f828d3673d1c4d4f1a8e27abbc90>()
in a loop, but I don't want to reinvent the wheel, and… it's not clear
either when to end the loop, as some graphics cards have more bits than
others, and future hardware may have even more bits…
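The kind of probing loop I mean would be something like this sketch, with
an arbitrary candidate list, which is exactly the part I'd like to avoid:

#include <wx/glcanvas.h>

// Probe decreasing depth sizes with wxGLCanvas::IsDisplaySupported() and
// return the first (deepest) one that works. The candidate list is
// arbitrary, and future hardware could make it obsolete.
int FindDeepestSupportedDepth()
{
    static const int candidates[] = { 64, 48, 32, 24, 16, 8 };

    for ( size_t i = 0; i < WXSIZEOF(candidates); ++i )
    {
        const int attribs[] =
        {
            WX_GL_RGBA,
            WX_GL_DOUBLEBUFFER,
            WX_GL_MIN_RED,      8,
            WX_GL_MIN_GREEN,    8,
            WX_GL_MIN_BLUE,     8,
            WX_GL_MIN_ALPHA,    8,
            WX_GL_DEPTH_SIZE,   candidates[i],
            0
        };

        if ( wxGLCanvas::IsDisplaySupported(attribs) )
            return candidates[i];
    }

    return 0;   // nothing matched at all
}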
ardi
--
Please read http://www.wxwidgets.org/support/mlhowto.htm before posting.
To unsubscribe, send email to wx-users+***@googlegroups.com
or visit http://groups.google.com/group/wx-users