Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
This fixes AMD_conservative_depth.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
The boolean return value was ignored by the caller.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Previously we were mapping/unmapping the index buffer each time we
found the restart index in the buffer. This is bad when the restart
index is frequently used. Now just map the index buffer once, scan
it to produce a list of sub-primitives, unmap the buffer, then draw
the sub-primitives.
Also, clean up the logic of testing for indexed primitives and calling
handle_fallback_primitive_restart(). Don't call it for non-indexed
primitives.
v2: per Jose, only map the relevant part of the index buffer with
pipe_buffer_map_range()
Reviewed-by: José Fonseca <[email protected]>
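
For illustration, a minimal sketch of the map-once/scan approach, assuming
ushort indices and the gallium pipe_buffer_map_range()/pipe_buffer_unmap()
helpers of this period; add_subprim() is a hypothetical helper, not the
function used in the actual change:

    static void
    scan_for_restarts(struct pipe_context *pipe, struct pipe_resource *ib,
                      const struct pipe_draw_info *info)
    {
       struct pipe_transfer *xfer;
       const ushort *idx;
       unsigned i, start = 0;

       /* Map only the range of the index buffer that this draw reads. */
       idx = pipe_buffer_map_range(pipe, ib, info->start * 2, info->count * 2,
                                   PIPE_TRANSFER_READ, &xfer);
       for (i = 0; i < info->count; i++) {
          if (idx[i] == info->restart_index) {
             if (i > start)
                add_subprim(start, i - start);  /* hypothetical: record sub-prim */
             start = i + 1;
          }
       }
       if (i > start)
          add_subprim(start, i - start);
       pipe_buffer_unmap(pipe, xfer);
       /* The recorded sub-primitives are drawn after the buffer is unmapped. */
    }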
(cherry picked from commit 228da884c9bfe9258cc26e741f41b273aa3e668a)
On original gen4, the surface format didn't determine the return data
type from sampling like it does on g45 and later.
Fixes GL_EXT_texture_integer/texture_integer_glsl130
Reviewed-by: Paul Berry <[email protected]>
Merging may produce an incorrect order of operations on r600/Evergreen:
    x: inst1 R0.x, ... ; // from current group
    ...
    t: inst0 R0.x, ... ; // from previous group, same destination
The result of inst1 would be lost, so compare destinations and don't
allow such a merge.
Signed-off-by: Vadim Girlin <[email protected]>
Revert "... normalized."
This reverts commit b11c16752a18ef8dfb96d9f0ead6ecb62bde6773.
Breaks at least luminance destination formats.
Signed-off-by: Michel Dänzer <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Signed-off-by: Ben Skeggs <[email protected]>
Previously a vertex shader that used no samplers would get updated (by
calling the driver's ProgramStringNotify) when a sampler in the
fragment shader was updated. This was discovered while investigating
some spurious code generation for shaders in Cogs. The behavior in
Cogs is especially pessimal because it ping-pongs sampler uniform
settings:
    glUniform1i(sampler1, 0);
    glUniform1i(sampler2, 1);
    draw();
    glUniform1i(sampler1, 1);
    glUniform1i(sampler2, 0);
    draw();
    glUniform1i(sampler1, 0);
    glUniform1i(sampler2, 1);
    draw();
    // etc.
ProgramStringNotify is still too big of a hammer. Applications like
Cogs will still defeat the shader cache. A lighter-weight mechanism
that can work with the shader cache is needed. However, this patch at
least restores the previous behavior.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
In slow_read_depth_stencil_pixels_separate() we might have separate
depth and stencil buffers or a combined buffer. In the latter case,
don't map the buffer twice. This function is used when the depth
scale/bias pixel transfer values are not the defaults.
Fixes http://bugs.freedesktop.org/show_bug.cgi?id=42963
Reviewed-by: José Fonseca <[email protected]>
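
A minimal sketch of the distinction being made (an illustrative helper,
not the actual Mesa code; assumes the usual main/mtypes.h types):

    static GLboolean
    has_combined_depth_stencil(const struct gl_framebuffer *fb)
    {
       /* If both attachments point at the same renderbuffer, map it once. */
       return fb->Attachment[BUFFER_DEPTH].Renderbuffer ==
              fb->Attachment[BUFFER_STENCIL].Renderbuffer;
    }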
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
The GL spec doesn't say that we should skip the attenuation and spot
calculations for an infinite light (Ppli.w == 0). Instead, it gives the
same formula for the light calculation for both finite and infinite
lights (see page 62 of the OpenGL 2.1 spec).
Also, from formula (2.4) on page 62 of the OpenGL 2.1 spec, we can skip
the attenuation calculation if Ppli.w == 0.
This fixes all the Intel oglc l_sed failing subcases and introduces no
Intel oglc regressions.
v2: fix a wrong indentation (comment from Brian).
Signed-off-by: Yuanhan Liu <[email protected]>
Acked-by: Brian Paul <[email protected]>
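
For reference, a small sketch of that per-light term from formula (2.4);
the variable names (k0/k1/k2 for the attenuation constants, d for the
vertex-to-light distance, spot for the spotlight factor) are illustrative,
not Mesa's:

    static float
    light_factor(float Ppli_w, float k0, float k1, float k2, float d, float spot)
    {
       /* Attenuation is defined to be 1.0 for an infinite light (Ppli.w == 0),
        * so only that part of the computation may be skipped. */
       float att = 1.0f;
       if (Ppli_w != 0.0f)
          att = 1.0f / (k0 + k1 * d + k2 * d * d);
       /* The spotlight factor applies to finite and infinite lights alike. */
       return att * spot;
    }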
Make sure all lighting tables are updated before any of them is used in
a calculation, e.g. using _SpotExpTable to calculate
_VP_inf_spot_attenuation.
Signed-off-by: Yuanhan Liu <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Fixes regression in piglit:
ARB_color_buffer_float/GL_RGBA16F-getteximage
ARB_color_buffer_float/GL_RGBA16F-readpixels
ARB_color_buffer_float/GL_RGBA32F-getteximage
ARB_color_buffer_float/GL_RGBA32F-readpixels
Reviewed-by: Brian Paul <[email protected]>
Fixes some spurious GL errors in the upcoming
gl-3.0-required-sized-formats piglit test.
Reviewed-by: Dave Airlie <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
intelAllocateBuffer() was oblivious to separate stencil buffers. This
patch fixes it to allocate a non-tiled stencil buffer with special pitch,
just as the DDX does.
Without this, any app that attempted to create an EGL surface with stencil
bits would crash. Of course, this affected only environments that used the
builtin DRI2 backend, such as Android and Wayland.
Fixes GLBenchmark2.1 on Android on gen7.
Note: This is a candidate for the 7.11 branch.
Tested-by: Louie Tsaie <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
I changed the dimensions of the stencil buffer's region, as allocated by
the DDX, at xf86-video-intel commit
    3e55f3e88b40471706d5cd45c4df4010f8675c75
    dri: Do not tile stencil buffer
But I forgot to make the analogous update to the Intel DRI2 glue in Mesa.
This patch makes that update.
Surprisingly, the mismatch did not cause any bugs. But the mismatch, if
left unfixed, *would* create bugs in the next commit.
Note: This is a candidate for the 7.11 branch.
Signed-off-by: Chad Versace <[email protected]>
When calculating the y offset needed for detiling window system stencil
buffers, replace the term
    region->height * 2 + region->height % 2 - 1
with
    rb->Height - 1
The two terms are incidentally equivalent due to some out-of-date,
incorrect code in the Intel DRI2 glue for the DDX (see
intel_process_dri2_buffer_with_separate_stencil(), at the line
``buffer_height /= 2;``).
Note: This is a candidate for the 7.11 branch (only the intel_span.c hunk).
Signed-off-by: Chad Versace <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Rather than redefining the BYTE/SHORT_TO_FLOAT macros, just define new
ones with different names. The new macros preserve zero when converting.
Reviewed-by: Eric Anholt <[email protected]>
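
A sketch of what such zero-preserving conversions look like (the names
here are illustrative, not necessarily the macros added by this change):

    /* Map [-127, 127] onto [-1.0, 1.0] so that 0 maps exactly to 0.0;
     * clamp -128 (or -32768) to -1.0. */
    #define BYTE_TO_FLOATZ(B)   ((B) == -128 ? -1.0F : (B) * (1.0F / 127.0F))
    #define SHORT_TO_FLOATZ(S)  ((S) == -32768 ? -1.0F : (S) * (1.0F / 32767.0F))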
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
... and _mesa_sizeof_packed_type()
Reviewed-by: Eric Anholt <[email protected]>
Fixes https://bugs.freedesktop.org/show_bug.cgi?id=42635
This fixes a crash with the piglit vbo-too-small test.
Reviewed-by: José Fonseca <[email protected]>
Don't assert/die if a VBO is too small. Return zero instead. For
debug builds, emit a warning message since this is an unusual situation
that might indicate that there's a bug in the app.
Note that util_draw_max_index() now returns max_index+1 instead of
max_index. This lets us return zero to indicate that one of the VBOs
is too small to draw anything.
Fixes a failure with the new piglit vbo-too-small test.
Reviewed-by: José Fonseca <[email protected]>
We can use the core Mesa code for glReadPixels now. We just have to
validate state and flush the bitmap cache before reading.
|
st_cb_readpixels.c is going away next.
Acked-by: Eric Anholt <[email protected]>
We use the code in main/readpix.c now.
Acked-by: Eric Anholt <[email protected]>
Acked-by: Eric Anholt <[email protected]>
The ReadPixels code in swrast no longer depends on the rest of swrast
since moving to Map/UnmapRenderbuffer(). We'll be able to remove
s_readpix.c and the state tracker's glReadPixels code next.
Acked-by: Eric Anholt <[email protected]>
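
Roughly, the Map/UnmapRenderbuffer pattern the new code relies on looks
like this (a hedged sketch; read_rows() is a hypothetical helper and the
hook signature is assumed from the dd.h of this period):

    static void
    read_rows(struct gl_context *ctx, struct gl_renderbuffer *rb,
              GLint x, GLint y, GLint width, GLint height)
    {
       GLubyte *map;
       GLint rowStride, row;

       ctx->Driver.MapRenderbuffer(ctx, rb, x, y, width, height,
                                   GL_MAP_READ_BIT, &map, &rowStride);
       if (!map)
          return;
       for (row = 0; row < height; row++) {
          const GLubyte *src = map + row * rowStride;
          (void) src;   /* unpack the row according to rb->Format here */
       }
       ctx->Driver.UnmapRenderbuffer(ctx, rb);
    }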
We'll soon be able to use these for a core Mesa implementation of
glReadPixels.
Acked-by: Eric Anholt <[email protected]>
This was only used by the xlib driver to add an alpha channel to the
front/window color buffer. This was no longer going to work well with
the move to direct mapping of renderbuffers.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
The days of 1-bpp, 8-bpp and dithering are long behind us.
Reviewed-by: Eric Anholt <[email protected]>
We no longer have software-allocated alpha buffers so we can forget
about the alpha channel.
Reviewed-by: Eric Anholt <[email protected]>
It was seldom used, and it won't work when we move to using
Map/UnmapRenderbuffer everywhere. This will let us remove a bunch of core
Mesa code too.
Reviewed-by: Eric Anholt <[email protected]>