| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
htile is used for HiZ and HiS support and fast Z/S clears.
This commit just adds the htile setup and Fast Z clear.
We don't take full advantage of HiS with this patch.
v2 really use fast clear; still random issues with some tiles,
need to try more flush combinations; fix depth/stencil
texture decompression
v3 fix random issues on r6xx/r7xx
v4 rebase on top of latest mesa, disable CB export when clearing
htile surface to avoid wasting bandwidth
v5 resummarize htile surface when uploading z value. Fix z/stencil
decompression, the custom blitter with custom dsa is no longer
needed.
v6 Reorganize render control/override update mechanism, fixing more
issues in the process.
v7 Add nop after depth surface base update to work around some htile
flushing issues. Force htile to 8x8 on r6xx/r7xx as other combinations
have issues. Do not enable hyperz when flushing/uncompressing the
depth buffer.
v8 Fix htile surface, preload and prefetch setup. Only set preload
and prefetch on htile surface clear like fglrx. Record depth
clear value per level. Support several levels for the htile
surface. The first depth clear can't be a fast clear.
v9 Fix comments, properly account for the new register in the emit function,
disable fast zclear if clearing different layers of a texture
array to different values
v10 Disable hyperz for texture arrays, making testing simpler. Force
db_misc_state update when no depth buffer is bound. Remove
unused variable, rename depth_clearstencil to depth_clear.
Don't allocate htile surface for flushed depth. Something
broke with the cliprect change; this needs to be investigated.
v11 Rebase on top of newer mesa
v12 Rebase on top of newer mesa
v13 Rebase on top of newer mesa; the htile surface needs to be initialized
to zero. Somehow, special-casing the first clear to not use fast clear
(and thus initialize the htile surface with a proper value) does not
work in all cases.
v14 Use a resource, not a texture, for the htile buffer; this makes the
htile buffer size computation easier and simpler. Disable preload on
evergreen as it's still troublesome in some cases.
v15 Clean up some comments and remove some leftovers
v16 Define name for bit 20 of CP_COHER_CNTL
Signed-off-by: Pierre-Eric Pelloux-Prayer <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
Signed-off-by: Jerome Glisse <[email protected]>
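As a rough illustration of the simpler size computation mentioned in v14,
here is a minimal sketch; it assumes 4 bytes of htile data per 8x8 pixel
tile and ignores the pitch/slice alignment rules real hardware imposes:

    /* Hypothetical helper: bytes of htile data for one mip level, assuming
     * one 32-bit htile word covers an 8x8 block of pixels and ignoring
     * hardware pitch/slice alignment requirements. */
    static unsigned htile_size_bytes(unsigned width, unsigned height)
    {
        unsigned blocks_x = (width  + 7) / 8;   /* round up to 8x8 tiles */
        unsigned blocks_y = (height + 7) / 8;
        return blocks_x * blocks_y * 4;          /* 4 bytes per tile */
    }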
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This brings r600g almost in line with the closed-source driver when
it comes to flushing and synchronization patterns.
v2-v4: history lost somewhere in outer space
v5: Fix computation of flush size, use defines for flags, update
worst-case cs size requirement for flushes, treat rs780 and
newer as r7xx when it comes to streamout.
v6: Fix num dw computation for framebuffer state, remove dead
code, use defines instead of hardcoded values.
v7: Remove dead code
Signed-off-by: Jerome Glisse <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously, Mesa code assumed that glReadBuffer(GL_NONE) was only
valid for user-created framebuffer objects. However, the spec is
quite clear that it should also be valid for the default framebuffer.
From section 18.2.1 ("Obtaining Pixels from the Framebuffer") of the
GL 4.3 spec:
"When READ_FRAMEBUFFER_BINDING is zero, i.e. the default
framebuffer, src must be one of the values listed in table 17.4,
including NONE."
Similar language exists in the GLES 3.0 spec, and in desktop GL all
the way back to ARB_framebuffer_object.
Partially fixes GLES3 conformance test "CoverageES30.test".
NOTE: This is a candidate for stable branches.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
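For reference, the accepted usage this change enables can be sketched as
follows (a minimal example, not part of the conformance test itself):

    /* Selecting no read buffer on the default framebuffer
     * (READ_FRAMEBUFFER_BINDING == 0) is valid per GL 4.3 / GLES 3.0. */
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
    glReadBuffer(GL_NONE);   /* must not generate an error */
    /* Reads such as glReadPixels() are still errors while the read buffer
     * is GL_NONE, but the glReadBuffer() call itself is legal. */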
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
It was slightly wrong: we were computing the longest duration of
the query among all the rasterizer tasks.
Regardless, for tile-based implementations such as llvmpipe, time differences
will never be very useful, because rendering before/during/after the query
is all interleaved. And this is expected, see ARB_timer_query spec, issue 10.
In particular, piglit ext_timer_query-time-elapsed still fails, because
it makes assumptions that don't hold true in tiled architectures. Not
sure how to fix that though.
Reviewed-by: Dave Airlie <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
ARB/EXT_timer_query's definition of GL_TIME_ELAPSED matches precisely the
subtraction of two GL_TIMESTAMP queries.
And for a lot of drivers, that's precisely how they have to implement it
internally -- by emitting two hardware timestamp queries.
So, to simplify driver implementation, simply allow doing so in the state
tracker.
Eventually if no driver implements PIPE_QUERY_TIME_ELAPSED then we could
retire it.
Reviewed-by: Dave Airlie <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
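At the GL level, the equivalence this change relies on looks roughly like
the following sketch (result-availability checks and error handling omitted):

    GLuint ts[2];
    GLuint64 t0, t1;
    glGenQueries(2, ts);
    glQueryCounter(ts[0], GL_TIMESTAMP);    /* timestamp before the work */
    /* ... commands being timed ... */
    glQueryCounter(ts[1], GL_TIMESTAMP);    /* timestamp after the work */
    glGetQueryObjectui64v(ts[0], GL_QUERY_RESULT, &t0);
    glGetQueryObjectui64v(ts[1], GL_QUERY_RESULT, &t1);
    GLuint64 elapsed_ns = t1 - t0;          /* same value GL_TIME_ELAPSED gives */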
|
|
|
|
|
|
|
|
| |
To better reflect what is being advertised.
Reviewed-by: Dave Airlie <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
| |
|
|
|
|
| |
Reviewed-by: Dave Airlie <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
The burst was incorrectly used, because ELEM_SIZE was always 0.
I don't know if the burst works, because I don't know of any test
which uses it.
NOTE: This is a candidate for the stable branches.
Reviewed-by: Dave Airlie <[email protected]>
|
|
|
|
|
|
| |
This fixes streamout breakage caused by the varying packing.
Reviewed-by: Dave Airlie <[email protected]>
|
|
|
|
|
|
| |
I need this to be able to use r600_get_temp in the function later.
Reviewed-by: Dave Airlie <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
| |
The old call to tgsi_exec_machine_bind_shader() in
softpipe_delete_fs_state() was never called since the shader's original
tokens are never passed to the tgsi interpreter (only shader _variant_
tokens are). Now, unbind the variant's tokens from the tgsi interpreter
when we free the variant.
This doesn't fix any known bugs but it's the right thing to do.
Note: This is a candidate for the stable branches.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In exec_prepare() we were comparing pointers to see if the fragment
shader variant had changed before calling tgsi_exec_machine_bind_shader().
This didn't work reliably when there was a lot of shader token malloc/
freeing going on because the memory might get reused.
Instead, bind the shader variant during regular state validation.
Fixes http://bugs.freedesktop.org/show_bug.cgi?id=40404
(fixes a couple of piglit's glsl-max-varyings test)
Note: This is a candidate for the stable branches.
|
|
|
|
|
|
| |
This reverts commit d8287bac1fd4a77abc2db38de134f14176740d23.
Causes more issues than it fixes. Need to think of a proper solution.
|
|
|
|
|
|
|
|
|
| |
This forces surfaces allocated by the ddx to be considered as
height-aligned to 8 and fixes the 1D->2D tiling transitions that
result from this.
Signed-off-by: Jerome Glisse <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
| |
brw_emit_vertices contains special case logic to handle the case where
a vertex shader doesn't read any inputs. This special case logic was
incorrectly activating in the case where the only vertex input is
gl_VertexID. As a result, if a shader used gl_VertexID but used no
other inputs, then all vertices got a gl_VertexID of zero.
Fixes oglconform test "ubo-usage advanced.transform_feedback".
Reviewed-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
The rather unwieldy logic for determining this condition was repeated
in a large number of places. This patch consolidates it to a single
inline function.
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch implements the following behaviours, which are mandated by
the GL 4.3 and GLES3 specs.
1. Regarding the GL_TRANSFORM_FEEDBACK_BUFFER_SIZE query: "If the
... size was not specified when the buffer object was bound
(e.g. if it was bound with BindBufferBase), ... zero is returned."
(GL 4.3 section 6.7.1 "Indexed Buffer Object Limits and Binding
Queries").
2. "BindBufferBase binds the entire buffer, even when the size of the
buffer is changed after the binding is established. It is
equivalent to calling BindBufferRange with offset zero, while size
is determined by the size of the bound buffer at the time the
binding is used." (GL 4.3 section 6.1.1 "Binding Buffer Objects to
Indexed Targets"). I interpret "at the time the binding is used"
to mean "at the time of the call to glBeginTransformFeedback".
3. "Regardless of the size specified with BindBufferRange, or
indirectly with BindBufferBase, the GL will never read or write
beyond the end of a bound buffer. In some cases this constraint may
result in visibly different behavior when a buffer overflow would
otherwise result, such as described for transform feedback
operations in section 13.2.2." (GL 4.3 section 6.1.1 "Binding
Buffer Objects to Indexed Targets").
Item 1 has been part of the spec all the way back to the inception of
the EXT_transform_feedback extension. Items 2 and 3 were added in GL
4.2 and GLES 3.
Prior to GL 4.2, in place of items 2 and 3, the spec simply said
"BindBufferBase is equivalent to calling BindBufferRange with offset
zero and size equal to the size of buffer." For transform feedback,
Mesa behaved as though this meant "...equal to the size of buffer at
the time of the call to BindBufferBase". However, this was
problematic because it left it ambiguous what to do if the buffer is
shrunk between the call to BindBuffer{Base,Range} and the call to
BeginTransformFeedback. Prior to this patch, Mesa's behaviour was to
try to write beyond the end of the buffer, likely resulting in memory
corruption. In light of this, I'm interpreting the spec change as a
clarification, not an intended behavioural change, so I'm making the
change apply regardless of API version.
Fixes GLES3 conformance test transform_feedback2_pause_resume.test.
Reviewed-by: Jordan Justen <[email protected]>
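A short sketch of the behaviour described in item 2; the effective size of
a BindBufferBase binding is resolved when the binding is used (taken here to
be the BeginTransformFeedback call), not when the binding was established:

    GLuint buf;
    glGenBuffers(1, &buf);
    glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, buf);
    glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, 1024, NULL, GL_STREAM_COPY);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, buf);  /* whole buffer */
    /* Resizing after the binding is established is allowed ... */
    glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, 4096, NULL, GL_STREAM_COPY);
    /* ... and the 4096-byte size is what transform feedback sees. */
    glBeginTransformFeedback(GL_TRIANGLES);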
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In desktop GL, if a draw call would cause transform feedback buffers
to overflow, the draw call should succeed, and the extra primitives
should simply not be recorded in the transform feedback buffers.
In GLES3, however, if a draw call would cause transform feedback
buffers to overflow, the draw call is supposed to produce an
INVALID_OPERATION error and no drawing should occur.
This patch implements the GLES3-required behaviour.
Fixes GLES3 conformance test "transform_feedback_overflow.test".
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
|
|
|
|
|
|
|
|
| |
In GLES3, only glDrawArrays() and glDrawArraysInstanced() calls are
allowed when transform feedback is active.
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously, the i965 driver contained code to compute the maximum
number of vertices that could be written without overflowing any
transform feedback buffers. This code wasn't driver-specific, and for
GLES3 support we're going to need to use it in core mesa. So this
patch moves the code into a core mesa function,
_mesa_compute_max_transform_feedback_vertices().
Reviewed-by: Ian Romanick <[email protected]>
v2: Eliminate C++-style variable declarations, since these won't work
with MSVC.
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
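Conceptually, the computation being moved is a minimum over the bound
buffers; a hedged sketch follows (the names and parameters are illustrative,
not Mesa's actual structures):

    /* Sketch: the largest vertex count that fits in every bound transform
     * feedback buffer.  'n_buffers', 'size[]' and 'stride[]' stand in for
     * the real per-binding state. */
    static unsigned
    max_tf_vertices(unsigned n_buffers,
                    const unsigned size[],    /* usable bytes per binding */
                    const unsigned stride[])  /* bytes written per vertex */
    {
        unsigned max_v = ~0u;
        for (unsigned i = 0; i < n_buffers; i++) {
            if (stride[i] == 0)
                continue;                     /* binding is not written */
            unsigned v = size[i] / stride[i];
            if (v < max_v)
                max_v = v;
        }
        return max_v;
    }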
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
No functional change--this simply paves the way to allow future
patches to call vbo_count_tessellated_primitives() during error
checking, before the _mesa_prim struct has been constructed.
This will be needed for GLES3, which requires draw calls to fail if
there is not enough space available in transform feedback buffers to
accommodate the primitives to be drawn.
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
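For context, counting the primitives a draw call will generate reduces to
simple arithmetic on the primitive mode and vertex count; a sketch for a few
common modes (illustrative only, not the actual vbo helper):

    /* Number of primitives produced by 'count' vertices for a few modes;
     * incomplete primitives at the end of the vertex stream are dropped. */
    static unsigned count_prims(GLenum mode, unsigned count)
    {
        switch (mode) {
        case GL_POINTS:         return count;
        case GL_LINES:          return count / 2;
        case GL_LINE_STRIP:     return count >= 2 ? count - 1 : 0;
        case GL_TRIANGLES:      return count / 3;
        case GL_TRIANGLE_STRIP:
        case GL_TRIANGLE_FAN:   return count >= 3 ? count - 2 : 0;
        default:                return 0;  /* other modes omitted */
        }
    }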
|
|
|
|
|
|
|
|
| |
Add support for TEX2, TXB2, TXL2, fix SHADOWCUBE
Signed-off-by: Vadim Girlin <[email protected]>
Reviewed-by: Michel Dänzer <[email protected]>
Tested-by: Michel Dänzer <[email protected]>
|
|
|
|
| |
Signed-off-by: Vadim Girlin <[email protected]>
|
|
|
|
| |
Signed-off-by: Vadim Girlin <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
Since the idea is to just expand or shrink the bit width but not otherwise do
conversion, we also need to adjust the sign bit according to src; otherwise
the conversion code will incorrectly clamp the values. (Since this only works
for casting to ordinary floats, the norm and fixed bits should always be fine.)
This fixes the remaining piglit attribs GL3 failures.
Reviewed-by: José Fonseca <[email protected]>
|
|
|
|
|
|
| |
Dave found some, but there were more.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=58039
|
|
|
|
|
|
|
|
| |
frees the object handle when an OpenVG
object is destroyed.
Signed-off-by: Andreas Pokorny <[email protected]>
Signed-off-by: Brian Paul <[email protected]>
|
| |
|
| |
|
|
|
|
|
|
|
| |
Fixes https://bugs.freedesktop.org/show_bug.cgi?id=58380
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
|
|
|
|
|
|
|
| |
a460aea3f14222af46f88d1bc686f82180b8a872 wasn't entirely correct,
since all coords are already ints, hence we need to skip the iround.
Passes piglit texelFetch with sampler1DArray/sampler2DArray.
Reviewed-by: Dave Airlie <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
Make sure drivers initialize the version before:
* _mesa_initialize_exec_table is called
* _mesa_initialize_exec_table_vbo is called
* A context is made current
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
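Taken together with the following patches, the required driver-side ordering
looks roughly like this (a sketch; _mesa_compute_version is assumed to be the
helper that fills in the context version, and error handling is omitted):

    /* In the driver's context-creation / make-current path: */
    _mesa_compute_version(ctx);           /* 1. version must be known first */
    _mesa_initialize_exec_table(ctx);     /* 2. then the core dispatch table */
    _mesa_initialize_vbo_vtxfmt(ctx);     /* 3. then the VBO vtxfmt tables  */
    /* 4. only after this may the context be made current */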
|
|
|
|
|
|
|
|
| |
The driver should call _mesa_initialize_vbo_vtxfmt after
computing the context version.
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
|
| |
Drivers must compute the context version, and then call
_mesa_initialize_exec_table themselves.
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
In a future patch the exec functions will no longer be set up
by _mesa_initialize_context and _vbo_CreateContext.
Therefore we must call _mesa_initialize_exec_table and
_mesa_initialize_exec_table_vbo.
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
|
| |
This change forces the context version to be computed before
initializing the exec dispatch tables.
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
|
| |
This function initializes the exec/save dispatch tables
for VBO vtxfmt.
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In glapi/gl_genexec.py:
* Remove _mesa_alloc_dispatch_table call
In glapi/gl_genexec.py and api_exec.h:
* Rename _mesa_create_exec_table to _mesa_initialize_exec_table
In context.c:
* Call _mesa_alloc_dispatch_table instead of _mesa_create_exec_table
* Call _mesa_initialize_exec_table (this is temporary)
Once all drivers have been modified to call
_mesa_initialize_exec_table themselves, the temporary call to
_mesa_initialize_exec_table can be removed from context.c.
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
| |
This allows the debug code to at least show the sign properly.
Signed-off-by: Dave Airlie <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
| |
util_blitter_blit_generic().
This is used by st_BlitFramebuffer() / r600_blit(), and ARB_fbo allows
overlapped blits, even though the result is undefined. No piglit regressions
on r600g / CYPRESS.
Signed-off-by: Henri Verbeet <[email protected]>
Reviewed-by: Michel Dänzer <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
|
|
|
|
|
|
|
|
|
| |
This should fix:
https://bugs.freedesktop.org/show_bug.cgi?id=58039
Tested-by: Darxus on bug 58039
Reviewed-by: Kenneth Graunke <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
|
|
|
|
|
|
| |
These don't really belong in brw_structs.h.
Reviewed-by: Eric Anholt <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
struct brw_instruction and the related instruction emitting code won't
be useful on Gen8+, as the instruction encoding changed. However, the
struct brw_reg code is still extremely valuable.
While we're at it, fix up some style points:
- s/GLuint/unsigned/g
- s/GLint/int/g
- s/GLshort/int16_t/g
- s/GLushort/uint16_t/g
- s/INLINE/inline/g
- Replace tabs with spaces
- Put return types on a separate line from the function name/parameters
- Remove trailing whitespace
- Remove extraneous whitespace around function parameters
Reviewed-by: Eric Anholt <[email protected]>
|
|
|
|
| |
Signed-off-by: Dave Airlie <[email protected]>
|
|
|
|
|
|
|
| |
This checks if the pipe driver can support RGB32 formats.
Reviewed-by: Marek Olšák <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
This adds the extensions + the tex buffer support for checking
the formats.
There is a piglit test enhancement sent to that list.
Reviewed-by: Marek Olšák <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
|
|
|
|
|
|
|
|
| |
Not sure what was going on here, but running piglit with debug builds
might be a good plan :-)
Reviewed-by: Kenneth Graunke <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
|
|
|
|
|
| |
Signed-off-by: Ian Romanick <[email protected]>
Tested-by: Eric Anholt <[email protected]>
|
|
|
|
|
|
|
|
|
| |
No statistically significant performance difference on glbenchmark 2.7
(n=60). It reduces cycles spent in the vertex shader by 3.3% +/- 0.8%
(n=5), but that's only about .3% of all cycles spent according to the
fixed shader_time.
Reviewed-by: Kenneth Graunke <[email protected]>
|