Commit log
----
Some drawing functions take a single _mesa_prim object, while others
take an array of primitives. Both kinds of functions used a parameter
called "prim" (the singular form), which was confusing.
Using the plural form, "prims," clearly communicates that the parameter
is an array of primitives.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
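As a sketch of the convention (hypothetical prototypes; the real Mesa
functions take more parameters):

   struct _mesa_prim;

   /* Takes a single primitive, so the singular name fits. */
   void draw_prim(const struct _mesa_prim *prim);

   /* Takes an array; the plural name makes that clear. */
   void draw_prims(const struct _mesa_prim *prims, int nr_prims);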
----
can_cut_index_handle_prims() was passed an array of _mesa_prim objects
and a count, and ran a loop for that many iterations. However, it
treated the array like a pointer, repeatedly checking the first element.
This patch makes it actually check every primitive.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
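The shape of the bug and the fix, sketched with an illustrative field
(the real checks are more involved):

   struct _mesa_prim { bool indexed; };

   static bool can_cut_index_handle_prims_sketch(
         const struct _mesa_prim *prims, int nr_prims)
   {
      for (int i = 0; i < nr_prims; i++) {
         /* The bug: testing prims->indexed examined prims[0] on every
          * iteration.  Indexing by the loop counter checks them all. */
         if (!prims[i].indexed)
            return false;
      }
      return true;
   }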
----
The VBO module actually calls us with an array of _mesa_prim objects.
For example, it may break up a DrawArrays() call into multiple
primitives when primitive restart is enabled.
Previously, we treated prim like a pointer, always accessing element 0.
This worked because all of the primitive objects in a single draw call
have the same value for num_instances and basevertex.
However, accessing an array as a pointer and using the wrong object's
fields is misleading. For stylistic reasons alone, we should use the
right object.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
----
These functions have almost identical code; the only difference is that
a few of the bits moved around. Adding a few trivial conditionals
allows the same function to work on all generations, and the resulting
code is still quite readable.
v2: Comment that the workaround flush is only necessary on SNB
(requested by Paul Berry).
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
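The pattern, in sketch form (the bit position and generation numbers
here are assumptions for illustration, not the real hardware layout):

   #include <cstdint>

   /* One packing helper serving both generations; a trivial
    * conditional selects the position of the bit that moved. */
   static uint32_t pack_enable_bit(int gen, bool enable)
   {
      const unsigned shift = (gen >= 7) ? 12 : 8;
      return enable ? (1u << shift) : 0u;
   }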
----
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
----
Fixes broken rendering if these MRFs contained anything other than zero.
NOTE: This is a candidate for stable branches.
Signed-off-by: Chris Forbes <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
----
Reviewed-by: Kenneth Graunke <[email protected]>
----
Reviewed-by: Kenneth Graunke <[email protected]>
----
There is a slight functionality change. Previously we would compute a
common value for num_samplers for all stages, and populate that many
entries in each stage's surf_offset table regardless of how many
samplers each stage used. Now we only populate the number of entries
in the surf_offset table corresponding to the number of samplers
actually used by the stage.
Reviewed-by: Kenneth Graunke <[email protected]>
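Roughly (names hypothetical; the real code walks the sampler units
bound to each stage):

   #include <cstdint>

   /* Fill only the entries this stage actually uses, rather than a
    * worst-case count computed across all stages. */
   static void update_stage_surfaces(uint32_t *surf_offset,
                                     unsigned num_samplers,
                                     uint32_t (*offset_for)(unsigned))
   {
      for (unsigned s = 0; s < num_samplers; s++)
         surf_offset[s] = offset_for(s);
   }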
----
Previously these functions would accept a pointer to the binding table
and an index indicating which entry in the binding table should be
updated. Now they merely take a pointer to the binding table entry to
be updated.
This will make it easier to generalize brw_texture_surfaces to support
geometry shaders.
Reviewed-by: Kenneth Graunke <[email protected]>
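The interface change, sketched (parameter lists simplified):

   #include <cstdint>

   /* Before: the helper had to know about the table's layout. */
   static void update_entry_old(uint32_t *binding_table, unsigned index,
                                uint32_t value)
   {
      binding_table[index] = value;
   }

   /* After: the caller resolves the entry; the helper just writes it,
    * so it works for any stage's binding table. */
   static void update_entry_new(uint32_t *entry, uint32_t value)
   {
      *entry = value;
   }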
----
Reviewed-by: Kenneth Graunke <[email protected]>
----
Reviewed-by: Kenneth Graunke <[email protected]>
v2: Use "unsigned" rather than "GLuint".
----
This patch implements pull constant upload, binding table upload, and
surface setup for geometry shaders, by re-using vertex shader code
that was generalized in previous patches.
Based on work by Eric Anholt <[email protected]>.
v2: Update dirty bits for brw_gs_ubo_surfaces to account for commit
77d8fbc (mesa: add & use a new driver flag for UBO updates instead of
_NEW_BUFFER_OBJECT).
Reviewed-by: Kenneth Graunke <[email protected]>
----
v2: Use GLbitfield instead of GLbitfield64 in
brw_vec4_upload_binding_table.
Reviewed-by: Kenneth Graunke <[email protected]>
----
v2: Use GLbitfield instead of GLbitfield64 in
brw_upload_vec4_pull_constants.
Reviewed-by: Kenneth Graunke <[email protected]>
----
The hardware requires that after constant buffers for a stage are
allocated using a 3DSTATE_PUSH_CONSTANT_ALLOC_{VS,HS,DS,GS,PS}
command, and prior to execution of a 3DPRIMITIVE, the corresponding
stage's constant buffers must be reprogrammed using a
3DSTATE_CONSTANT_{VS,HS,DS,GS,PS} command.
Previously we didn't need to worry about this, because we only
programmed 3DSTATE_PUSH_CONSTANT_ALLOC_{VS,HS,DS,GS,PS} once on
startup (or, previous to that, whenever BRW_NEW_CONTEXT was flagged).
But now that we reallocate the constant buffers whenever geometry
shaders are switched on and off, we need to make sure the constant
buffers are reprogrammed.
We do this by adding a new bit, BRW_NEW_PUSH_CONSTANT_ALLOCATION, to
brw->state.dirty.brw.
Reviewed-by: Kenneth Graunke <[email protected]>
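The mechanism, as a sketch (the bit's value and the struct layout are
assumptions, not the real Mesa definitions):

   #include <cstdint>

   #define BRW_NEW_PUSH_CONSTANT_ALLOCATION (1u << 30) /* assumed */

   struct brw_dirty_bits { uint32_t brw; };

   /* Reallocating the push constant space flags the new bit, so any
    * state atom that lists it in its dependencies (here, the
    * 3DSTATE_CONSTANT_* atoms) re-emits before the next 3DPRIMITIVE. */
   static void note_push_constant_realloc(struct brw_dirty_bits *dirty)
   {
      dirty->brw |= BRW_NEW_PUSH_CONSTANT_ALLOCATION;
   }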
----
Previously, we would always use the same push constant allocation
regardless of what shader programs were being run: the available push
constant space was split into 2 equal size partitions, one for the
vertex shader, and one for the fragment shader.
Now that we are adding geometry shader support, we need to do
something smarter. This patch adjusts things so that when a geometry
shader is in use, we split the available push constant space into 3
nearly-equal size partitions instead of 2.
Since the push constant allocation is now affected by GL state, it can
no longer be set up by brw_upload_initial_gpu_state(); instead it must
be set up by a state atom.
Reviewed-by: Kenneth Graunke <[email protected]>
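The resulting split, in sketch form (the total is illustrative, and the
real allocation also respects hardware alignment rules):

   /* Divide the available push constant space evenly among the active
    * stages: VS+FS without a GS, VS+GS+FS with one. */
   static unsigned push_constant_kb_per_stage(unsigned total_kb,
                                              bool gs_active)
   {
      const unsigned stages = gs_active ? 3 : 2;
      return total_kb / stages; /* e.g. 16kB -> 8kB each, or 5kB each */
   }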
----
This is required by the internal hardware docs and the PRM. We were
probably getting away with not doing it because we only emitted
3DSTATE_PUSH_CONSTANT_ALLOC_PS during startup. However, that's going
to change with the introduction of geometry shaders.
Reviewed-by: Kenneth Graunke <[email protected]>
----
Previously, we gave all of the URB space (other than the small amount
that is used for push constants) to the vertex shader. However, when
a geometry shader is active, we need to divide it up between the
vertex and geometry shaders.
The size of the URB entries for the vertex and geometry shaders can
vary dramatically from one shader to the next. So it doesn't make
sense to simply split the available space in two. In particular:
- On Ivy Bridge GT1, this would not leave enough space for the worst
case geometry shader, which requires 64k of URB space.
- Due to hardware-imposed limits on the maximum number of URB entries,
sometimes a given shader stage will only be capable of using a small
amount of URB space. When this happens, it may make sense to
allocate substantially less than half of the available space to that
stage.
Our algorithm for dividing space between the two stages is to first
compute (a) the minimum amount of URB space that each stage needs in
order to function properly, and (b) the amount of additional URB space
that each stage "wants" (i.e. that it would be capable of making use
of). If the total amount of space available is not enough to satisfy
needs + wants, then each stage's "wants" amount is scaled back by the
same factor in order to fit.
When only a vertex shader is active, this algorithm produces
equivalent results to the old algorithm (if the vertex shader stage
can make use of all the available URB space, we assign all the space
to it; if it can't, we let it use as much as it can).
In the future, when we need to support tessellation control and
tessellation evaluation pipeline stages, it should be straightforward
to expand this algorithm to cover them.
v2: Use "unsigned" rather than "GLuint".
Reviewed-by: Kenneth Graunke <[email protected]>
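A worked sketch of the needs-plus-wants division (everything in URB
chunks; names and signature are hypothetical):

   /* Each stage first gets what it needs; the remaining space is split
    * according to what each stage wants, scaled back by a common
    * factor when the combined wants would not fit.  Assumes avail
    * always covers the needs, which the hardware minimums guarantee. */
   static void divide_urb_space(unsigned avail,
                                unsigned vs_need, unsigned vs_want,
                                unsigned gs_need, unsigned gs_want,
                                unsigned *vs_alloc, unsigned *gs_alloc)
   {
      const unsigned leftover = avail - vs_need - gs_need;
      double scale = 1.0;
      if (vs_want + gs_want > leftover)
         scale = (double)leftover / (vs_want + gs_want);
      *vs_alloc = vs_need + (unsigned)(vs_want * scale);
      *gs_alloc = gs_need + (unsigned)(gs_want * scale);
   }

With no geometry shader (gs_need = gs_want = 0), this degenerates to
giving the vertex shader everything it can use, matching the old
behavior.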
----
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
----
v2: Change name from "vec4_gs" to simply "gs".
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
----
This paves the way for sharing the code that will set up the vertex
and geometry shader pipeline state.
v2: Rename the base class to brw_stage_state.
Reviewed-by: Chad Versace <[email protected]>
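The flavor of the refactor (fields illustrative; the real struct holds
more, such as sampler and scratch state):

   #include <cstdint>

   /* State common to every programmable stage, factored into a shared
    * struct so VS and GS code can operate on either. */
   struct brw_stage_state {
      uint32_t prog_offset;     /* offset of the compiled program */
      uint32_t state_offset;    /* SAMPLER_STATE table */
      uint32_t bind_bo_offset;  /* binding table */
      uint32_t *surf_offset;    /* per-surface state offsets */
   };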
----
Defines that previously referred to VS now refer to VEC4, since they
will be shared by the user-programmable vertex shader and geometry
shader stages.
Defines that previously referred to the Gen6 geometry shader stage
(which is only used for transform feedback) are now renamed to
explicitly refer to Gen6, to avoid confusion with the Gen7
user-programmable geometry shader stage.
Based on work by Eric Anholt <[email protected]>.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
----
This will avoid confusion when we add geometry shaders, since these
data structures will be shared by vertex and geometry shaders.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
----
Now that the name "gs" is no longer used to refer to the legacy fixed
function geometry shaders, we can use it to refer to user-defined
geometry shaders.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
----
"ff" is for "fixed function". This frees up the name "gs" to refer to
user-defined geometry shaders.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
----
It is incorrect to assume that src[0] of a SEND-from-GRF opcode is the
GRF. For example, FS_OPCODE_UNIFORM_PULL_CONSTANT_LOAD uses src[1] for
the GRF.
To be safe, loop over all the source registers and mark any GRFs. We
probably won't ever have more than one, but it's simpler to just check
all three rather than attempting to bail early.
Not observed to fix anything yet, but likely to. Parallels the bug fix
in the previous commit, which actually does fix known failures.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Cc: [email protected]
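In sketch form, with the IR types pared down to the relevant fields:

   enum register_file { GRF, MRF, IMM, BAD_FILE };

   struct src_reg { enum register_file file; int reg; };
   struct instruction { struct src_reg src[3]; };

   /* Mark every GRF source, not just src[0]; some send-from-GRF
    * opcodes keep the message payload in src[1]. */
   static void mark_grf_sources(const struct instruction *inst,
                                bool *grf_used)
   {
      for (int i = 0; i < 3; i++) {
         if (inst->src[i].file == GRF)
            grf_used[inst->src[i].reg] = true;
      }
   }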
----
It is incorrect to assume that src[0] of a SEND-from-GRF opcode is the GRF.
VS_OPCODE_PULL_CONSTANT_LOAD_GEN7 uses an IMM as src[0], and stores the
GRF as src[1].
To be safe, loop over all the source registers and mark any GRFs. We
probably won't ever have more than one, but it's simpler to just check
all three rather than attempting to bail early.
Fixes assertion failures in Unigine Sanctuary since we started making
register allocation rely on split_virtual_grfs working. (The register
classes were actually sufficient, we were just interpreting an IMM as
a virtual GRF number.)
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=68637
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Cc: [email protected]
----
Thanks to Ken for trawling through my neglected public branches and
finding the bug in this change (inside a megacommit) that made me abandon
this work.
Reviewed-by: Kenneth Graunke <[email protected]>
----
This avoids the need to get the inter- and intra-tile offset and adjust
our miptree info based on them.
Reviewed-by: Kenneth Graunke <[email protected]>
----
These resets were previously happening only as a side effect of the
batch flush at the start of the blorp op (which exists to prevent batch
space or aperture space overflow), but the intent was for this sequence
of state resets at the end of blorp to be everything necessary for the
next draw call.
Found when debugging the next commit, by comparing brw_new_batch() and
intel_batchbuffer_reset() to brw_blorp_exec().
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
----
The code that got replaced with map_raw didn't do the flush, but now
map_raw() is responsible for it and we don't have to worry about it.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
----
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
----
v2 (Kenneth Graunke): Rebase on latest master.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
----
This gives us more information about why we're flushing, which we can
use in our throttling logic.
v2 (Kenneth Graunke): Rebase on latest master, add missing
FLUSH_VERTICES and FLUSH_CURRENT, which fixes a regression in Glean's
polygonOffset test.
v3 (anholt): Drop FLUSH_CURRENT -- FLUSH_VERTICES is what we need, which
is "get any queued prims out of VBO and into the driver", not "update
ctx->Current so we can read it with the CPU." Also drop batch->used
check, which intel_batchbuffer_flush() does anyway.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
----
This was already happening because blorp happens to flush at the end of
every call, but we have been talking about removing that at some point,
and this would surely get overlooked.
v2 (Kenneth Graunke): Rebase on latest master. Note that we did remove
the other flush, and this change actually did get overlooked!
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
----
intel_flush() now did nothing except call through to
intel_batchbuffer_flush() (which does the no-op check, too!)
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
----
df06745c5adb524e15d157f976c08f1718f08efa made it so that we didn't
allocate extra uniform space for unused clip planes, which incidentally
removed the allocation we had been relying on in this no-uniforms case.
Instead of putting knowledge of this special HW exception into the code
that normally preallocates prog_data for us, just allocate it here.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=68766
Reviewed-by: Kenneth Graunke <[email protected]>
----
Previously, we allocated space in brw_vs_prog_data's params and
pull_params arrays for MAX_CLIP_PLANES vec4s---even when it wasn't
necessary.
On a 64-bit architecture, this used 0.5 kB of space (8 clip planes *
4 floats per plane * 8 bytes per float pointer * 2 arrays of pointers =
512 bytes). Since this cost was per-vertex shader, it added up.
Conveniently, we already store the number of clip plane constants in the
program key. By using that, we can allocate the exact amount of space
needed. For the common case where user clipping is disabled, this means
0 bytes.
While we're here, mention exactly what code requires this extra space,
since it wasn't obvious.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
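The sizing logic, sketched (the allocator call and names are hedged;
the real code sizes Mesa's ralloc'd arrays from the program key):

   #include <cstdlib>

   /* 4 float constants per enabled user clip plane; zero bytes in the
    * common case where user clipping is disabled. */
   static const float **alloc_clip_plane_params(unsigned nr_userclip_plane_consts)
   {
      const unsigned nr_params = nr_userclip_plane_consts * 4;
      if (nr_params == 0)
         return NULL;
      return (const float **)calloc(nr_params, sizeof(const float *));
   }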
----
Use `(void) success;` to silence this warning:
   i965/brw_vs.c:481:12: warning: unused variable 'success' [-Wunused-variable]
   bool success = do_vs_prog(brw, ctx->Shader.CurrentVertexProgram,
Reviewed-by: Kenneth Graunke <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
----
Reviewed-by: Paul Berry <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
----
Reviewed-by: Ian Romanick <[email protected]>
----
MAD will be generated directly from ir_triop_fma, so this assertion
checks that all ir_expressions are usable.
Reviewed-by: Paul Berry <[email protected]>
----
SEQ and SNE are not native i915 instructions, so they each generate at
least 3 instructions. If both operands are uniforms or constants, we
get 5 instructions like:
   U[1] = MOV CONST[1]
   U[0].xyz = SGE CONST[0].xxxx, U[1]
   U[1] = MOV CONST[1].-x-y-z-w
   R[0].xyz = SGE CONST[0].-x-x-x-x, U[1]
   R[0].xyz = MUL R[0], U[0]
This code is stupid. Instead of having the individual calls to
i915_emit_arith generate the moves to utemps, do it in the caller. This
results in code like:
   U[1] = MOV CONST[1]
   U[0].xyz = SGE CONST[0].xxxx, U[1]
   R[0].xyz = SGE CONST[0].-x-x-x-x, U[1].-x-y-z-w
   R[0].xyz = MUL R[0], U[0]
This allows fs-temp-array-mat2-index-col-wr and
fs-temp-array-mat2-index-row-wr to fit in hardware limits (instead of
falling back to software rasterization).
NOTE: Without pending patches to the piglit tests, these tests will now
fail. This is an unrelated, pre-existing issue.
v2: Copy most of the body of the commit message into comments in the
code. Suggested by Eric.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
----
Now that we use a fixed set of register classes, we can set up the
register set and conflict graphs once, at context creation, rather than
on every VS compile. This is obviously less expensive, and also what
we already do in the FS backend.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
----
We're soon going to be calling brw_alloc_reg_set() from outside of the
visitor, where we don't have the precomputed "max_grf" variable handy.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
----
For now, nothing else can get allocated over them. That may change at
some point in the future.
This also means that base_reg_count can be computed without knowing the
number of registers used for the payload, which is required if we want
to allocate the register set once at context creation time.
See commit 551e1cd44f6857f7e29ea4c8f892da5a97844377, which implemented
virtually identical code in the FS backend.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
----
Arrays, structures, and matrices use large VGRFs of arbitrary sizes.
However, split_virtual_grfs() breaks those down into VGRFs of size 1.
For reference, commit 5d90b988791e51cfb6413109271ad102fd7a304c is the
analogous change to the FS backend.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
----
(From a suggestion by Francisco Jerez)
If an enum represents a bitfield of flags, e.g.:
   enum E {
      A = 1,
      B = 2,
      C = 4,
      D = 8,
   };
then C++ normally prohibits statements like this:
   enum E x = A | B;
because A and B are implicitly converted to ints before OR-ing them,
and an int can't be stored in an enum without a type cast. C, on the
other hand, allows an int to be implicitly converted to an enum
without casting.
In the past we've dealt with this situation by storing flag bitfields
as ints. This avoids ugly casting at the expense of some type safety
that C++ would normally have offered (e.g. we get no warning if we
accidentally use the wrong enum type).
However, we can get the best of both worlds if we overload the |
operator. The ugly casting is confined to the operator overload, and
we still get the benefit of C++ making sure we don't use the wrong
enum type.
v2: Remove unnecessary comment and unnecessary use of "enum" keyword.
Use static_cast.
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
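The overload itself, reconstructed from the description above:

   enum E {
      A = 1,
      B = 2,
      C = 4,
      D = 8,
   };

   /* The casting is confined to this one operator; call sites keep
    * full enum type safety. */
   inline E operator|(E x, E y)
   {
      return static_cast<E>(static_cast<int>(x) | static_cast<int>(y));
   }

   /* Now this compiles without a cast at the use site: */
   E flags = A | B;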
----
We never noticed that this field was uninitialized because it is only
used in an error path that reports internal Mesa errors.
But it's silly to have it around anyway because &brw->ctx is
equivalent.
Should fix Coverity defect CID 1063351: Uninitialized pointer field
(UNINIT_CTOR) /src/mesa/drivers/dri/i965/brw_vec4_emit.cpp: 148
Reviewed-by: Ian Romanick <[email protected]>