Commit messages
This loosens the format validation in glBlitFramebuffer. When blitting
depth bits, don't require an exact match between the depth formats; only
require that the two formats have the same number of depth bits and the
same depth datatype (float vs uint). Ditto for stencil.
Between S8_Z24 buffers, the EXT_framebuffer_blit spec allows
glBlitFramebuffer to blit the depth and stencil bits separately. So I see
no reason to prevent blitting the depth bits between X8_Z24 and S8_Z24 or
the stencil bits between S8 and S8_Z24. However, we of course don't want
to allow blitting from Z32 to Z32_FLOAT.
Fixes Piglit fbo/fbo-blit-d24s8 on Intel drivers with separate stencil
enabled.
The problem was that, on Intel drivers with separate stencil, the default
framebuffer has separate depth and stencil buffers with formats X8_Z24 and
S8. The test attempts to blit the depth bits from an S8_Z24 buffer into the
default framebuffer.
v2: Check that depth datatypes match.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=44665
Note: This is a candidate for the 8.0 branch.
Reported-by: Xunx Fang <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
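
A minimal sketch of the relaxed check described above; the struct stands in
for Mesa's real format queries (_mesa_get_format_bits(),
_mesa_get_format_datatype()), so names and layout here are illustrative:

    #include <stdbool.h>

    struct depth_format_info {
       int bits;       /* 24 for both X8_Z24 and S8_Z24, 32 for Z32* */
       bool is_float;  /* true for Z32_FLOAT, false for Z32 */
    };

    /* Allow a depth blit when the bit count and datatype match, even if
     * the overall formats differ (e.g. X8_Z24 <-> S8_Z24).  Z32 ->
     * Z32_FLOAT still fails because the datatype differs. */
    static bool
    depth_blit_compatible(struct depth_format_info src,
                          struct depth_format_info dst)
    {
       return src.bits == dst.bits && src.is_float == dst.is_float;
    }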
The nvc0 gallium driver advertises 128 for MAX_INTERLEAVED_COMPS,
which made the linker assert whenever transform feedback (TFB) was
used, since the Outputs array was smaller than that maximum.
v2: added assertions
NOTE: This is a candidate for the 8.0 branch.
Reviewed-by: Paul Berry <[email protected]>
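
A hedged guess at the shape of the v2 assertions (the constant and names are
illustrative, not the actual nvc0/linker code): check at link time that the
advertised component limit cannot index past the fixed-size Outputs array.

    #include <assert.h>

    #define MAX_PROGRAM_OUTPUTS 64   /* stand-in for the Outputs array size */

    static void
    check_tfb_output_bound(unsigned max_interleaved_comps)
    {
       /* At four components per output slot, 128 advertised components
        * need 32 output slots; assert the linker's array is big enough. */
       assert(max_interleaved_comps / 4 <= MAX_PROGRAM_OUTPUTS);
    }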
Caused by commit 7a1e941ebee43cb97a2c77fd2269999b202308a2.
Use a bitmask approach to compute gl_array_object::_MaxElement.
To make this work correctly depending on the shader type actually used,
make use of the newly introduced typed bitmask getters.
With this change I gain about 5% draw time on some osgviewer examples.
Signed-off-by: Mathias Fröhlich <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
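
An illustrative sketch of the bitmask walk (the array layout and names are
assumptions, not the actual gl_array_object code):

    #include <limits.h>

    struct varray { unsigned max_element; };

    /* Take the minimum element count over just the arrays whose bits are
     * set, visiting each set bit once instead of scanning every slot. */
    static unsigned
    compute_max_element(const struct varray *arrays, unsigned enabled_mask)
    {
       unsigned max = UINT_MAX;
       while (enabled_mask) {
          int i = __builtin_ctz(enabled_mask);  /* lowest set bit (gcc/clang) */
          enabled_mask &= enabled_mask - 1;     /* clear that bit */
          if (arrays[i].max_element < max)
             max = arrays[i].max_element;
       }
       return max;
    }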
Depending on the installed shader type, different arrays are used
from gl_array_object. Provide helper functions that compute the
bitmask of the arrays that are actually enabled for a given shader
type. These will be used in a follow-up change.
Signed-off-by: Mathias Fröhlich <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
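
A sketch of what such typed getters might look like; the bit layout and mask
values are hypothetical stand-ins:

    /* Assume legacy (fixed-function) attributes in the low bits and
     * generic attributes in the high bits of the enabled-array mask. */
    #define LEGACY_ARRAY_BITS   0x0000ffffu
    #define GENERIC_ARRAY_BITS  0xffff0000u

    static unsigned
    enabled_arrays_fixed_function(unsigned enabled)
    {
       return enabled & LEGACY_ARRAY_BITS;   /* generics are not read */
    }

    static unsigned
    enabled_arrays_glsl(unsigned enabled)
    {
       return enabled;   /* a GLSL shader may read both kinds */
    }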
Signed-off-by: Mathias Fröhlich <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
The default access flags for OpenGL ES (via GL_OES_map_buffer) and
desktop OpenGL are different. The code previously tried to handle
this, but the decision was made at compile time. Since the same
driver binary can be used for both OpenGL ES and desktop OpenGL, the
decision must be made at run-time.
This should fix bug #44433. It appears that the test case does
various map and unmap operations and inspects the state of the buffer
object around each. When it sees that GL_BUFFER_ACCESS does not match
its expectations, it fails.
NOTE: This is a candidate for release branches.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=44433
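
A sketch of the run-time decision; API_OPENGLES and API_OPENGLES2 are the
real gl_api values Mesa uses to distinguish the APIs, while the surrounding
function is illustrative:

    /* GL_OES_map_buffer only permits write-only mappings, while desktop
     * OpenGL defaults GL_BUFFER_ACCESS to GL_READ_WRITE.  Deciding at run
     * time lets one driver binary serve both APIs. */
    enum api { API_OPENGL, API_OPENGLES, API_OPENGLES2 };

    static unsigned
    default_buffer_access(enum api api)
    {
       if (api == API_OPENGLES || api == API_OPENGLES2)
          return 0x88B9;   /* GL_WRITE_ONLY */
       return 0x88BA;      /* GL_READ_WRITE */
    }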
NOTE: This is a candidate for the 8.0 branch.
Update the dd.h docs to indicate that GL_MAP_INVALIDATE_RANGE_BIT
can be used with GL_MAP_WRITE_BIT when mapping renderbuffers and
texture images.
Pass the flag when mapping texture images for glTexImage, glTexSubImage,
etc. It's up to drivers whether to actually make use of the flag.
NOTE: This is a candidate for the 8.0 branch.
Reviewed-by: Ian Romanick <[email protected]>
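
The flag combination being described, as a self-contained sketch (the
#define values are the real GL_MAP_* enums; the wrapper function is a
stand-in for the texstore call sites):

    #define GL_MAP_WRITE_BIT            0x0002
    #define GL_MAP_INVALIDATE_RANGE_BIT 0x0004

    /* glTexImage/glTexSubImage rewrite every texel in the mapped range,
     * so the driver may skip preserving or reading back the old contents.
     * The invalidate bit is only a hint; drivers may ignore it. */
    static unsigned
    teximage_map_mode(void)
    {
       return GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_RANGE_BIT;
    }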
To try to use less tex memory and maybe get better performance.
Spotted by Roland Scheidegger.
NOTE: This is a candidate for the 8.0 and 7.11 branches.
Reviewed-by: José Fonseca <[email protected]>
The i965 driver advertises GL_ARB_texture_float and GL_ARB_texture_rg
support but the ctx->TextureFormatSupported[] table entries for
MESA_FORMAT_R_FLOAT32 and MESA_FORMAT_RGBA_FLOAT32 are false on gen 4
hardware. So the case for GL_R32F would fail and we'd print an
implementation error.
This patch adds more Mesa tex format options for GL_R32F and other R/G
formats so we fall back to 16-bit formats when 32-bit formats aren't
available.
Eric made the same fix in commit 6216a5b4 for the non R/G formats.
v2: try 16-bit formats before 32-bit formats and try RG formats before
RGBA where possible.
This should fix https://bugs.freedesktop.org/show_bug.cgi?id=44039
NOTE: This is a candidate for the 8.0 and 7.11 branches.
Reviewed-by: José Fonseca <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
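
A sketch of the fallback pattern (the format names and the capability
callback are stand-ins for Mesa's format-choosing table):

    enum fmt { FMT_R_FLOAT32, FMT_R_FLOAT16,
               FMT_RGBA_FLOAT16, FMT_RGBA_FLOAT32, FMT_NONE };

    typedef int (*supported_fn)(enum fmt);

    /* Walk a preference list until the driver supports an entry.  Per the
     * v2 note, 16-bit fallbacks come before 32-bit ones and R layouts
     * before RGBA where possible. */
    static enum fmt
    choose_r32f(supported_fn supported)
    {
       static const enum fmt choices[] = {
          FMT_R_FLOAT32,      /* exact match first */
          FMT_R_FLOAT16,
          FMT_RGBA_FLOAT16,
          FMT_RGBA_FLOAT32,
       };
       for (unsigned i = 0; i < sizeof(choices) / sizeof(choices[0]); i++)
          if (supported(choices[i]))
             return choices[i];
       return FMT_NONE;
    }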
According to Table 6.8 (Page 348) in the OpenGL 3.0 specification,
glGetVertexAttribiv supports GL_VERTEX_ATTRIB_ARRAY_INTEGER.
NOTE: This is a candidate for the 8.0 branch.
Signed-off-by: Kenneth Graunke <[email protected]>
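
For illustration, the kind of one-case addition involved (0x88FD is the real
GL_VERTEX_ATTRIB_ARRAY_INTEGER value; the surrounding function is a stand-in
for the get-attrib switch):

    #define GL_VERTEX_ATTRIB_ARRAY_INTEGER 0x88FD

    struct attrib { int integer_flag; };

    static int
    get_vertex_attrib(const struct attrib *a, unsigned pname, int *params)
    {
       switch (pname) {
       case GL_VERTEX_ATTRIB_ARRAY_INTEGER:   /* now accepted per Table 6.8 */
          *params = a->integer_flag;
          return 1;
       default:
          return 0;   /* caller raises GL_INVALID_ENUM */
       }
    }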
The TestMipMaps() function in src/OGLconform/textureNPOT.c calls
glTexImage2D() with width = 0. A texture with zero size skips miptree
allocation due to a condition in _mesa_store_teximage3d(). Calling
glGetTexImage() then results in an assertion failure in
intel_map_texture_image() due to a null mt pointer.
This patch fixes the issue by detecting the zero-size texture early in
the glGetTexImage and glGetCompressedTexImage functions. In such a
case the function simply returns without doing anything.
Verified that below mentioned bug is fixed by this patch.
https://bugs.freedesktop.org/show_bug.cgi?id=42334
NOTE: This is a candidate for stable branches
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
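
The early-out, sketched against a stand-in image struct:

    struct tex_image { int width, height, depth; };

    /* A zero-sized image never got a miptree, so there is nothing to map
     * and nothing to copy out; return before reaching the driver. */
    static int
    get_tex_image(const struct tex_image *img, void *pixels)
    {
       if (img->width == 0 || img->height == 0 || img->depth == 0)
          return 0;   /* silently do nothing */
       /* ... normal path: map the image and copy texels into pixels ... */
       (void) pixels;
       return 1;
    }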
Only check ARB_fbo; add shader_texture_lod as a requirement.
Reviewed-by: Ian Romanick <[email protected]>
The AL44 format occupies one byte, not two.
NOTE: This is a candidate for the 8.0 branch.
Reviewed-by: José Fonseca <[email protected]>
Fixes piglit EXT_framebuffer_multisample/negative-copypixels.
Reviewed-by: Brian Paul <[email protected]>
NOTE: This is a candidate for the 8.0 branch.
Fixes piglit EXT_framebuffer_multisample/negative-copyteximage.
Reviewed-by: Brian Paul <[email protected]>
NOTE: This is a candidate for the 8.0 branch.
Fixes piglit EXT_framebuffer_multisample-negative-readpixels.
Reviewed-by: Brian Paul <[email protected]>
NOTE: This is a candidate for the 8.0 branch.
Fixes piglit EXT_framebuffer_multisample/renderbuffer-samples.
Reviewed-by: Brian Paul <[email protected]>
NOTE: This is a candidate for the 8.0 branch.
The array holds GLuint values so remove the float cast.
Note, however, that to compute the average of four GLuints we really
want to do (a+b+c+d)/4 but that could overflow. This change doesn't
address that for now.
NOTE: This is a candidate for the 8.0 branch.
Reviewed-by: José Fonseca <[email protected]>
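
The overflow the note mentions can be avoided; two standard options,
sketched (not part of the patch): widen to 64 bits, or average pairwise in
32 bits.

    #include <stdint.h>

    /* Exact average of four 32-bit values: widen the sum to 64 bits. */
    static uint32_t
    avg4_wide(uint32_t a, uint32_t b, uint32_t c, uint32_t d)
    {
       return (uint32_t) (((uint64_t) a + b + c + d) / 4);
    }

    /* floor((x+y)/2) without overflow, using the carry-free identity. */
    static uint32_t
    avg2(uint32_t x, uint32_t y)
    {
       return (x & y) + ((x ^ y) >> 1);
    }

    /* 32-bit-only variant; rounds slightly differently from (a+b+c+d)/4. */
    static uint32_t
    avg4_narrow(uint32_t a, uint32_t b, uint32_t c, uint32_t d)
    {
       return avg2(avg2(a, b), avg2(c, d));
    }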
ir_variable is a class, not a struct.
NOTE: This is a candidate for the 8.0 branch.
Reviewed-by: José Fonseca <[email protected]>
In the first case, the newImage[] array contains GLuint values.
In the second case, the parameter type is GLuint, but the maxDepth
value is never used in this case (GL_FLOAT_32_UNSIGNED_INT_24_8_REV).
Pass ~0U just to be safe.
NOTE: This is a candidate for the 8.0 branch.
Reviewed-by: José Fonseca <[email protected]>
Rather than testing the fbo's name against zero.
Reviewed-by: José Fonseca <[email protected]>
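
The predicate in question, sketched (_mesa_is_winsys_fbo() is the real Mesa
helper; the struct here is a stand-in):

    struct framebuffer { unsigned name; };

    /* Window-system framebuffers have name 0; user-created FBOs never do.
     * Wrapping the test documents intent at each call site. */
    static int
    is_winsys_fbo(const struct framebuffer *fb)
    {
       return fb->name == 0;
    }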
We include both imports.h and u_math.h in the state tracker. This
leads to multiple, conflicting definitions of ffs() with MSVC.
Use FFS_DEFINED to skip the ffs() in u_math.h.
Reviewed-by: José Fonseca <[email protected]>
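
The guard pattern, sketched; whichever header is included first defines
ffs() and sets the macro, and the other backs off:

    /* imports.h */
    #ifndef FFS_DEFINED
    #define FFS_DEFINED 1
    /* ... imports.h's ffs() definition ... */
    #endif

    /* u_math.h */
    #ifndef FFS_DEFINED
    #define FFS_DEFINED 1
    /* ... u_math.h's ffs() definition, now skipped whenever imports.h
     * was included first ... */
    #endif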
Call ffs() and ffsll() everywhere. Define our own ffs(), ffsll()
functions when the platform doesn't have them.
v2: remove #ifdef _WIN32, __IBMC__, __IBMCPP_ tests inside ffs()
implementation. The #else clause was recursive.
Reviewed-by: Kenneth Graunke <[email protected]>
Tested-by: Alexander von Gluck <[email protected]>
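
A self-contained sketch of such fallbacks (names are illustrative; note the
v2 point that the fallback must not end up calling itself):

    #include <stdint.h>

    /* 1-based index of the least significant set bit; 0 for 0. */
    static int
    my_ffs(int value)
    {
       unsigned u = (unsigned) value;
       int bit = 0;
       if (u != 0) {
          bit = 1;
          while (!(u & 1u)) {
             u >>= 1;
             bit++;
          }
       }
       return bit;
    }

    static int
    my_ffsll(long long value)
    {
       uint64_t u = (uint64_t) value;
       /* Reuse the 32-bit helper on each half rather than recursing. */
       if ((uint32_t) u)
          return my_ffs((int) (uint32_t) u);
       if (u >> 32)
          return 32 + my_ffs((int) (uint32_t) (u >> 32));
       return 0;
    }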
Introduce a vbo_get_minmax_indices() function to handle the min/max
index computation for nr_prims (>= 1). The old code computed only the
first prim's min/max index; this resulted in incorrect rendering if
the user called functions like glMultiDrawElements(). This patch
fixes that issue. Since the nr_prims = 1 case is handled by passing 1
for the nr_prims parameter, vbo_get_minmax_index() is now static.
v2: Per Roland's suggestion, put the index address computation into
vbo_get_minmax_index() instead. Also combine adjacent index ranges
where possible to reduce the map/unmap count.
v3: Per Brian's suggestion, use a pointer for start_prim to avoid a
structure copy per loop iteration.
Signed-off-by: Yuanhan Liu <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
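
A sketch of the multi-prim scan (structs reduced to essentials; the real
function also maps the index buffer and, per v2, merges adjacent ranges to
cut down map/unmap traffic):

    struct prim { unsigned start, count; };

    /* Scan every prim's index range, not just the first one, so calls
     * like glMultiDrawElements() get correct bounds. */
    static void
    get_minmax_indices(const unsigned *indices,
                       const struct prim *prims, unsigned nr_prims,
                       unsigned *min_out, unsigned *max_out)
    {
       unsigned min = ~0u, max = 0;
       for (unsigned p = 0; p < nr_prims; p++) {
          for (unsigned i = 0; i < prims[p].count; i++) {
             unsigned idx = indices[prims[p].start + i];
             if (idx < min) min = idx;
             if (idx > max) max = idx;
          }
       }
       *min_out = min;
       *max_out = max;
    }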
The args to _mesa_reference_shader_program() can't be const.
Signed-off-by: Jakob Bornecrantz <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
It used to be done in ir_to_mesa, and that was kind of a bad place.
I didn't change st_glsl_to_tgsi because there is some strange stuff
happening in the code that generates glDrawPixels shaders. It looked
like this would break horribly if I touched anything.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Track the calculated data in gl_shader_program instead of the
individual assembly shaders.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Rather than looking at the settings in individual assembly programs,
look at the settings in the top-level uniform values. The old code
was flawed because examining each shader stage in isolation could
allow inconsistent usage across stages (e.g., bind unit 0 to a
sampler2D in the vertex shader and sampler1DShadow in the fragment
shader).
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Previously the fixed-function fragment shader was tracked as a
gl_program. This means that it shows up in the driver as a Mesa IR
program instead of as a GLSL IR program. If a driver doesn't generate
Mesa IR from the GLSL IR, that program is empty. If the program is
empty there is either no rendering or a GPU hang.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Acked-by: Kenneth Graunke <[email protected]>
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Poking directly at the backing resources works only by luck. Core
Mesa code should only know about the gl_uniform_storage structure.
Soon other code that looks at samplers will use the gl_uniform_storage
structures instead of the data in the gl_program.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Acked-by: Kenneth Graunke <[email protected]>
Useful when debugging to find the number of texture objects, shader
programs, etc.
On drivers that set gl_shader_compiler_options::LowerClipDistance (for
example i965), we need to handle transform feedback of gl_ClipDistance
specially, to account for the fact that the hardware represents it as
an array of vec4's rather than an array of floats.
The previous way this was accounted for (translating the request for
gl_ClipDistance[n] to a request for a component of
gl_ClipDistanceMESA[n/4]) doesn't work when performing transform
feedback on the whole unsubscripted array, because we need to keep
track of the size of the gl_ClipDistance array prior to the lowering
pass. So I replaced it with a boolean is_clip_distance_mesa, which
switches on the special logic that is needed to handle the lowered
version of gl_ClipDistance.
Fixes Piglit tests "EXT_transform_feedback/builtin-varyings
gl_ClipDistance[{1,2,3,5,6,7}]-no-subscript".
Reviewed-by: Eric Anholt <[email protected]>
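
In sketch form, the gather that the is_clip_distance_mesa flag enables
(function and parameter names are illustrative): the unsubscripted array
must stream out its pre-lowering number of floats, reading component i%4 of
vec4 i/4.

    /* Copy clip_distance_count floats out of the lowered vec4 array. */
    static void
    gather_clip_distances(const float (*clip_dist_mesa)[4],
                          unsigned clip_distance_count,
                          float *out)
    {
       for (unsigned i = 0; i < clip_distance_count; i++)
          out[i] = clip_dist_mesa[i / 4][i % 4];
    }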
Mesa shouldn't call into the drivers if there are no renderbuffers
bound to the attachments for the buffers to be cleared.
Fixes a number of the clearbuffer-* tests on softpipe.
Signed-off-by: Dave Airlie <[email protected]>
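
The guard, sketched against stand-in types (the real code inspects the
gl_framebuffer attachments selected by the clear mask):

    struct attachment { int has_renderbuffer; };

    /* Skip the driver hook entirely when no renderbuffer is bound to any
     * attachment the clear would touch; there is nothing to clear. */
    static int
    any_buffer_to_clear(const struct attachment *att, unsigned count)
    {
       for (unsigned i = 0; i < count; i++)
          if (att[i].has_renderbuffer)
             return 1;
       return 0;
    }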
This fixes the test to allow cube/depth combinations on GL3
or EXT_gpu_shader4.
Signed-off-by: Dave Airlie <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Found by Eirik Byrkjeflot Anonsen.
Fixes _mesa_clear_accum_buffer() being multiply defined if
FEATURE_accum is false.
Tested-by: Chih-Wei Huang <[email protected]>
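
A guess at the shape of the fix (FEATURE_accum is the real feature macro;
the stub form is an assumption): the compiled-out variant must be a static
inline no-op so that including the header from several files cannot produce
multiple definitions.

    struct gl_context;   /* forward declaration */

    #if FEATURE_accum
    extern void _mesa_clear_accum_buffer(struct gl_context *ctx);
    #else
    static inline void
    _mesa_clear_accum_buffer(struct gl_context *ctx)
    {
       (void) ctx;   /* accumulation buffers compiled out; nothing to do */
    }
    #endif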
Reviewed-By: Jose Fonseca <[email protected]>
Also update the release notes to mention that Mesa 8.0 implements
OpenGL 3.0.
Signed-off-by: Kenneth Graunke <[email protected]>
We were only comparing the number of depth and stencil bits but the
extension spec actually says the formats must match:
    The error INVALID_OPERATION is generated if BlitFramebufferEXT is
    called and <mask> includes DEPTH_BUFFER_BIT or STENCIL_BUFFER_BIT
    and the source and destination depth or stencil buffer formats do
    not match.
Reviewed-by: Kenneth Graunke <[email protected]>
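
The stricter test the quoted spec text requires, sketched with stand-in
formats (the GL_* values are the real enums):

    #define GL_INVALID_OPERATION  0x0502
    #define GL_DEPTH_BUFFER_BIT   0x00000100
    #define GL_STENCIL_BUFFER_BIT 0x00000400

    enum fmt { FMT_S8_Z24, FMT_X8_Z24, FMT_Z16 };

    /* Compare the formats themselves, not just their bit counts. */
    static int
    validate_blit(unsigned mask,
                  enum fmt src_depth, enum fmt dst_depth,
                  enum fmt src_stencil, enum fmt dst_stencil)
    {
       if ((mask & GL_DEPTH_BUFFER_BIT) && src_depth != dst_depth)
          return GL_INVALID_OPERATION;
       if ((mask & GL_STENCIL_BUFFER_BIT) && src_stencil != dst_stencil)
          return GL_INVALID_OPERATION;
       return 0;   /* no error */
    }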
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>