Commit log
 
Signed-off-by: Leo Liu <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Reviewed-by: Christian König <[email protected]>
 
Put the required functions in place for the implementation: create the
screen with the device fd returned from the X server, and bail out to
DRI2 under certain conditions.
v2: - organize the error-out path (Axel)
    - squash previous patches 1 and 2 into one (Emil)
Signed-off-by: Leo Liu <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Reviewed-by: Christian König <[email protected]>
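
For illustration, the fallback flow described above looks roughly like
this; the names are stand-ins, not Mesa's exact vl_screen API:

#include <cstdio>

struct vl_screen { const char *backend; };

/* Stand-in for the DRI3 path: the real code creates the screen with the
 * device fd returned by the X server and fails when DRI3 is unavailable. */
static vl_screen *dri3_screen_create(bool dri3_supported)
{
   static vl_screen s = { "dri3" };
   return dri3_supported ? &s : nullptr;
}

/* Stand-in for the existing DRI2 path. */
static vl_screen *dri2_screen_create()
{
   static vl_screen s = { "dri2" };
   return &s;
}

static vl_screen *create_screen(bool dri3_supported)
{
   if (vl_screen *v = dri3_screen_create(dri3_supported))
      return v;
   return dri2_screen_create();   /* bail out to DRI2 */
}

int main()
{
   std::printf("%s\n", create_screen(false)->backend);   /* prints "dri2" */
   return 0;
}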
 
Reviewed-by: Ilia Mirkin <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
 
On my oes_shader_io_blocks branch, this fixes 71
dEQP-GLES31.functional.program_interface_query.* tests.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Cc: [email protected]
 
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
 
Coverity issue 1361544 found an instance where the tcs variable is
checked for NULL, but unconditionally dereferenced later in the same
function.
Reviewed-by: Kenneth Graunke <[email protected]>
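
Reduced to a stand-alone stand-in, the bug pattern Coverity flags is:

struct tcs_info { int num_inputs; };

/* 'tcs' is tested for NULL, yet a later statement in the same function
 * dereferences it unconditionally -- the contradiction Coverity reports. */
static int count_inputs(const tcs_info *tcs)
{
   int n = 0;
   if (tcs)                      /* the check says tcs may be NULL...      */
      n = tcs->num_inputs;
   return n + tcs->num_inputs;   /* ...but this dereference is unguarded   */
}

int main()
{
   tcs_info info = { 4 };
   return count_inputs(&info) == 8 ? 0 : 1;
}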
 
Used only in assert().
 
Both functions return uint64_t, so the masking/shifting should be done
on 64-bit types.
Reviewed-by: Kristian Høgsberg <[email protected]>
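
The pitfall is easy to reproduce in isolation: the mask/shift must
happen after widening to 64 bits.

#include <cinttypes>
#include <cstdint>
#include <cstdio>

int main()
{
   uint32_t hi = 0x12345678;

   /* Wrong: the shift happens on the 32-bit type, so the top bits of hi
    * are silently discarded before the result widens to 64 bits. */
   uint64_t wrong = hi << 8;

   /* Right: widen to the 64-bit type first, then shift. */
   uint64_t right = (uint64_t)hi << 8;

   std::printf("wrong = 0x%" PRIx64 ", right = 0x%" PRIx64 "\n",
               wrong, right);   /* 0x34567800 vs 0x1234567800 */
   return 0;
}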
 
Reviewed-by: Kenneth Graunke <[email protected]>
 
Reviewed-by: Kenneth Graunke <[email protected]>
 
Signed-off-by: Jan Vesely <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
 
ARB_gpu_shader_fp64 was the only feature missing.
Reviewed-by: Kenneth Graunke <[email protected]>
 
Reviewed-by: Kenneth Graunke <[email protected]>
 
Reviewed-by: Kenneth Graunke <[email protected]>
 
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
 
Reviewed-by: Kenneth Graunke <[email protected]>
 
v2: do not write to the original indirect_offset since that is
an expression that could be used somewhere else (Ken)
Reviewed-by: Kenneth Graunke <[email protected]>
 
Reviewed-by: Kenneth Graunke <[email protected]>
 
Reviewed-by: Kenneth Graunke <[email protected]>
 
The scalar backend uses this to check URB input sizes.
v2: Removed redundant break after return (Curro)
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
 
This is pretty much the same thing we do for SSBOs.
v2: do not shuffle in-place; it is not safe, since the original 64-bit
    data could be used after the write. Instead, use a temporary like
    we do for SSBO stores (Iago)
Reviewed-by: Kenneth Graunke <[email protected]>
 
Reviewed-by: Kenneth Graunke <[email protected]>
 
This does the inverse operation of shuffle_32bit_load_result_to_64bit_data
and we will use it when we need to write 64-bit data in the layout expected
by untyped write messages.
v2 (curro):
- Use subscript() instead of stride()
- Assert on the input types rather than silently retyping.
- Use offset() instead of horiz_offset(), drop the multiplier definition.
- Drop the temporary vgrf and force_writemask_all.
- Make component_i const.
- Move to brw_fs_nir.cpp
v3 (curro):
- Pass dst and src by reference.
- Simplify allocation of tmp register.
- Move to brw_fs_nir.cpp.
- Get rid of the temporary.
v3 (Iago):
- Check that the src and dst regions do not overlap, since that would
typically be a bug in the caller.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
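
In scalar terms the layout change is small; this standalone model
(plain arrays instead of fs_regs, SIMD8 assumed) sketches the intent:

#include <cstdint>

enum { W = 8 };   /* SIMD width assumed for the illustration */

/* src holds one 64-bit value per channel (lo/hi dword pairs interleaved
 * per channel); dst receives the planar layout untyped write messages
 * expect: a register of low dwords, then a register of high dwords.
 * As in the patch, src and dst must not overlap. */
static void shuffle_64bit_data_for_32bit_write(uint32_t dst[2 * W],
                                               const uint64_t src[W])
{
   for (int i = 0; i < W; i++) {
      dst[i]     = (uint32_t)src[i];          /* low dwords, all channels  */
      dst[W + i] = (uint32_t)(src[i] >> 32);  /* high dwords, all channels */
   }
}

int main()
{
   uint64_t src[W];
   uint32_t dst[2 * W];
   for (int i = 0; i < W; i++)
      src[i] = ((uint64_t)(0xAA00 + i) << 32) | (0xBB00 + i);
   shuffle_64bit_data_for_32bit_write(dst, src);
   return dst[0] == 0xBB00 && dst[W] == 0xAA00 ? 0 : 1;
}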
 
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
 
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
 
We are going to need the same logic for anything that reads
doubles via untyped messages (CS shared variables and SSBOs). Add a
helper function with that logic so that we can reuse it.
v2:
- Make this a static function instead of a method of fs_visitor (Iago)
- We only support types with a size of 4 or 8 (Curro)
- Avoid retypes by using a separate vgrf for the packed result (Curro)
- Put dst parameter before source parameters (Curro)
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
 
UBO loads with a constant offset use the UNIFORM_PULL_CONSTANT_LOAD
instruction, which reads 16 bytes (a vec4) of data from memory. For dvec
types this only provides components x and y. Thus, if we are reading
more than two components we need to issue a second load at offset+16 to
read the next 16-byte chunk with components z and w.
UBO loads with a non-constant offset emit a load for each component
in the vector (and rely on CSE to eliminate redundant loads), so we only
need to consider the size of the data type when computing the offset
of each element in a vector.
v2 (Sam):
- Adapt the code to use component() (Curro).
v3 (Sam):
- Use type_sz(dest.type) in VARYING_PULL_CONSTANT_LOAD() call (Curro).
- Add asserts to ensure std140 vector alignment rules are followed
(Curro).
Signed-off-by: Samuel Iglesias Gonsálvez <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
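
A worked example of the constant-offset case, with a made-up std140
offset:

#include <cstdio>

int main()
{
   const unsigned base_offset = 32;  /* hypothetical std140 offset of a dvec4 */
   const unsigned type_sz     = 8;   /* bytes per double */
   const unsigned components  = 4;

   /* Each UNIFORM_PULL_CONSTANT_LOAD reads 16 bytes, i.e. two doubles,
    * so components z and w require a second load at offset+16. */
   for (unsigned c = 0; c < components; c += 16 / type_sz)
      std::printf("load 16 bytes at %u -> components %u and %u\n",
                  base_offset + c * type_sz, c, c + 1);
   return 0;   /* prints loads at offsets 32 and 48 */
}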
 
UNIFORM_PULL_CONSTANT_LOAD is used to load a contiguous vec4 starting at a
constant offset that is 16-byte aligned. If we need to access an unaligned
offset, we emit a load with an aligned offset and use the remaining constant
offset to select the component of the vec4 result that we are interested
in. This component must be computed in units of the type size, since that
is what fs_reg::set_smear expects.
This patch makes that change in the two places where we use this message:
in demote_pull_constants, when we lower uniform access with a constant
offset into the pull constant buffer, and in UBO loads with a constant
offset.
v2 (Sam):
- Fix set_smear() in fs_visitor::lower_constant_loads(), take into account
source type instead and remove MAX2 (Curro).
- Improve changes to nir_intrinsic_load_ubo case in nir_emit_intrinsic()
(Curro).
Signed-off-by: Samuel Iglesias Gonsálvez <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
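
The component selection boils down to this arithmetic (the offsets
below are made up):

#include <cstdio>

int main()
{
   const struct { unsigned offset, type_sz; } cases[] = {
      { 24, 4 },   /* a float at byte offset 24  */
      { 40, 8 },   /* a double at byte offset 40 */
   };

   for (const auto &c : cases) {
      unsigned aligned   = c.offset & ~15u;                   /* load address    */
      unsigned component = (c.offset - aligned) / c.type_sz;  /* set_smear units */
      std::printf("offset %u: load at %u, select component %u\n",
                  c.offset, aligned, component);
   }
   return 0;   /* float at 24 -> load 16, component 2; double at 40 -> load 32, component 1 */
}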
 
This fixes a number of bugs of component() by reimplementing it in
terms of horiz_offset(): Handling of base registers starting at a
non-zero subreg_offset, handling of strided registers and overflow of
subreg_offset into reg_offset.
Reviewed-by: Samuel Iglesias Gonsálvez <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
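
A simplified model of the relationship; the struct is a stand-in for
fs_reg, not the real type:

enum { REG_SIZE = 32 };

struct reg_model {
   unsigned reg_offset;      /* whole registers                    */
   unsigned subreg_offset;   /* bytes within a register            */
   unsigned stride;          /* channel stride, in type-size units */
   unsigned type_sz;         /* bytes per channel                  */
};

/* Advance by 'delta' channels. A non-zero starting subreg_offset and a
 * non-unit stride are honored, and the byte offset may overflow into
 * reg_offset -- the three cases the reimplementation fixes. */
static reg_model horiz_offset(reg_model reg, unsigned delta)
{
   unsigned byte = reg.reg_offset * REG_SIZE + reg.subreg_offset +
                   delta * reg.stride * reg.type_sz;
   reg.reg_offset    = byte / REG_SIZE;
   reg.subreg_offset = byte % REG_SIZE;
   return reg;
}

/* component(): locate one channel, then make the result a scalar. */
static reg_model component(reg_model reg, unsigned idx)
{
   reg = horiz_offset(reg, idx);
   reg.stride = 0;
   return reg;
}

int main()
{
   reg_model r = { 0, 4, 2, 4 };   /* starts 4 bytes into register 0 */
   /* channel 5 sits at byte 4 + 5*2*4 = 44, i.e. in register 1 */
   return component(r, 5).reg_offset == 1 ? 0 : 1;
}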
 
v2 (Curro):
- Assert on scale == 1 when shuffling 64-bit data.
- Remove type_slots, use type_sz(vec4_result.type) instead.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
 
There will be a few places where we need to shuffle the result of a 32-bit
load into valid 64-bit data, so extract this logic into a separate helper
that we can reuse.
v2 (Curro):
- Use subscript() instead of stride()
- Assert on the input types rather than retyping.
- Use offset() instead of horiz_offset(), drop the multiplier definition.
- Don't use force_writemask_all.
- Mark component_i as const.
- Make the function name lower case.
v3 (Curro):
- Pass src and dst by reference.
- Move to brw_fs_nir.cpp
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
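
Modeled with plain arrays and SIMD8 again, this helper is the mirror
image of the write-direction sketch shown earlier:

#include <cstdint>

enum { W = 8 };   /* SIMD width assumed for the illustration */

/* The planar result of a 32-bit load (a register of first dwords, then
 * a register of second dwords) is paired back into valid 64-bit
 * per-channel data. */
static void shuffle_32bit_load_result_to_64bit_data(uint64_t dst[W],
                                                    const uint32_t src[2 * W])
{
   for (int i = 0; i < W; i++)
      dst[i] = (uint64_t)src[i] | ((uint64_t)src[W + i] << 32);
}

int main()
{
   uint32_t src[2 * W] = {};
   uint64_t dst[W];
   src[0] = 0xBB00;   /* channel 0, low dword  */
   src[W] = 0xAA00;   /* channel 0, high dword */
   shuffle_32bit_load_result_to_64bit_data(dst, src);
   return dst[0] == (((uint64_t)0xAA00 << 32) | 0xBB00) ? 0 : 1;
}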
 
Instead of using the LOAD_PAYLOAD instruction (emitted through the
emit_transpose() helper, which is no longer useful and which this commit
removes), which had to be marked force_writemask_all in some cases, emit
a series of moves to apply proper channel enable signals to the
destination. Until now lower_simd_width() had mainly been used to lower
things that invariably had a basic-block-local temporary as destination,
so this didn't seem like a big deal, but it turned out to be the reason
for several Piglit regressions on my SIMD32 branch, and Igalia
discovered the same issue independently while working on FP64 support.
Reviewed-by: Kenneth Graunke <[email protected]>
 
We were not accounting for subreg_offset in the check for the start
of the region.
Also, fs_reg::regs_read() already takes the stride into account, so we
should not multiply its result by the stride again. This was making
copy propagation fail in cases that would otherwise have been safe to
copy-propagate. Again, this was observed in fp64 code, where stride > 1
is used often.
v2 (Sam):
- Rename function and add comment (Jason, Curro).
- Assert that register files and number are the same (Jason).
- Fix code to take into account the assumption that src.subreg_offset
is strictly less than the reg_offset unit (Curro).
- Don't pass the registers by value to the function, use
'const fs_reg &' instead (Curro).
- Remove obsolete comment in the commit log (Curro).
v3 (Sam):
- Remove the assert and put the condition in the return (Curro).
- Fix function name (Curro).
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
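
A simplified, byte-based version of the containment check, including
the subreg_offset term that was missing (illustrative, not the actual
helper):

enum { REG_SIZE = 32 };

struct region_model {
   unsigned nr;              /* register number (same file assumed)        */
   unsigned reg_offset;      /* whole registers                            */
   unsigned subreg_offset;   /* bytes                                      */
   unsigned size_bytes;      /* from regs_read()/regs_written(), which are
                              * already stride-aware -- no extra multiply  */
};

static unsigned start_byte(const region_model &r)
{
   /* The fix: the start of a region includes subreg_offset. */
   return r.reg_offset * REG_SIZE + r.subreg_offset;
}

static bool region_contained_in(const region_model &src,
                                const region_model &dst)
{
   return src.nr == dst.nr &&
          start_byte(src) >= start_byte(dst) &&
          start_byte(src) + src.size_bytes <=
          start_byte(dst) + dst.size_bytes;
}

int main()
{
   region_model src = { 0, 0, 8, 16 };   /* bytes 8..24  */
   region_model dst = { 0, 0, 0, 32 };   /* bytes 0..32  */
   return region_contained_in(src, dst) ? 0 : 1;
}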
 
We were not considering the case where the load payload is writing to
a destination with a reg_offset > 0.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
 
We were not invalidating entries whose src reads more than one register
when we found writes overwriting any register read by entry->src after
the first. This leads to incorrect copy propagation because we re-use
ACP entries that have been partially invalidated. The same applies to
entries whose dst writes to more than one register.
v2 (Sam):
- Improve code by defining regions_overlap() and using it instead of a
loop (Curro).
Reviewed-by: Francisco Jerez <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
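
The regions_overlap() predicate from the v2 note can be as small as
this byte-range sketch (not the exact Mesa signature):

/* Two regions overlap unless one ends at or before the point where the
 * other begins. */
static bool regions_overlap(unsigned start_a, unsigned len_a,
                            unsigned start_b, unsigned len_b)
{
   return start_a < start_b + len_b &&
          start_b < start_a + len_a;
}

int main()
{
   /* A two-register write starting at register 1 overlaps a two-register
    * read starting at register 0 -- the case that was being missed. */
   return regions_overlap(32, 64, 0, 64) ? 0 : 1;
}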
 
Reviewed-by: Kenneth Graunke <[email protected]>
 
try_copy_propagate() was special-casing UNIFORM registers (the
BAD_FILE, ARF and FIXED_GRF cases are dead code; see the assertion at
the top of the function) and then failing to take into account the
possibility of the instruction reading from a non-zero offset of the
destination of the copy. The VGRF/ATTR handling takes this into account
correctly, and there is no reason we couldn't use the exact same logic
for the UNIFORM file, aside from the fact that uniforms represent
reg_offset in different units. We can work around that easily by
defining an additional constant holding the unit reg_offset is
expressed in.
Reviewed-by: Kenneth Graunke <[email protected]>
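
The work-around amounts to something like the following sketch,
assuming the 4-byte element unit uniforms used at the time:

enum { REG_SIZE = 32 };

/* UNIFORM reg_offset counts 4-byte elements, while VGRF/ATTR reg_offset
 * counts whole hardware registers, so offset math needs a per-file unit. */
static unsigned reg_offset_unit(bool is_uniform)
{
   return is_uniform ? 4 : REG_SIZE;
}

int main()
{
   /* reg_offset 3 means byte 12 for a uniform but byte 96 for a VGRF. */
   return reg_offset_unit(true) * 3 == 12 &&
          reg_offset_unit(false) * 3 == 96 ? 0 : 1;
}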
 
Because the semantics of source modifiers are type-dependent, the type of the
original source of the copy must be kept unmodified while propagating it into
some instruction. This implies that we need a guarantee that the meaning of
the instruction will remain the same after we have changed the types. When
the size of the new type is different from the size of the old type, the new
and old instructions cannot possibly be equivalent, because the new
instruction will be reading more data than the old one was.
This prevents us from turning this:
load_payload(8) vgrf17:DF, |vgrf4+0.0|:DF 1sthalf
mov(8) vgrf18:DF, vgrf17:DF 1sthalf
load_payload(8) vgrf5:DF, vgrf18:DF, vgrf20:DF NoMask 1sthalf WE_all
load_payload(8) vgrf21:UD, vgrf5+0.4<2>:UD 1sthalf
mov(8) vgrf22:UD, vgrf21:UD 1sthalf
into:
load_payload(8) vgrf17:DF, |vgrf4+0.0|:DF 1sthalf
mov(8) vgrf18:DF, |vgrf4+0.0|:DF 1sthalf
load_payload(8) vgrf5:DF, |vgrf4+0.0|:DF, |vgrf4+2.0|:DF NoMask 1sthalf WE_all
load_payload(8) vgrf21:UD, vgrf5+0.4<2>:UD 1sthalf
mov(8) vgrf22:DF, |vgrf4+0.4|<2>:DF 1sthalf
where the semantics of the last instruction have changed.
v2 (Curro):
- Update commit log and add comment to explain the problem better.
- Simplify the condition.
Reviewed-by: Francisco Jerez <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
 
Specifically, consider the size of the data type of the operand to compute
the number of registers written.
v2 (Sam):
- Fix line width (Jordan).
- Add an assert (Jordan).
- Use REG_SIZE in the calculation of regs_written (Curro)
v3 (Sam):
- Fix assert and calculation of regs_written (Curro).
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
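
The calculation the v2/v3 notes converge on is essentially a round-up
over REG_SIZE; a sketch with simplified arguments:

enum { REG_SIZE = 32 };

/* Registers written depend on the operand's type size, not just the
 * execution size: e.g. SIMD8 x double (8 bytes) spans two registers,
 * while SIMD8 x float spans one. */
static unsigned regs_written(unsigned exec_size, unsigned type_sz)
{
   return (exec_size * type_sz + REG_SIZE - 1) / REG_SIZE;  /* DIV_ROUND_UP */
}

int main()
{
   return regs_written(8, 8) == 2 && regs_written(8, 4) == 1 ? 0 : 1;
}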
 
This has likely been broken since we started propagating copies not
matching the offset of the instruction exactly
(1728e74957a62b1b4b9fbb62a7de2c12b77c8a75). The copy source stride
needs to be taken into account to find out the offset at the origin
that corresponds to the offset at the destination of the copy which is
being read by the instruction. This has led to program miscompilation
on both my SIMD32 branch and Igalia's FP64 branch.
Reviewed-by: Kenneth Graunke <[email protected]>
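
In essence, the offset translation being fixed is (illustrative byte
arithmetic, not the actual helper):

/* Where does a read located 'channels_past_dst' channels past the
 * copy's destination land in the copy's source? The source stride must
 * scale the per-channel offset; omitting it reads the wrong data
 * whenever the source uses stride != 1. */
static unsigned origin_byte_offset(unsigned src_base_byte,
                                   unsigned channels_past_dst,
                                   unsigned src_stride,
                                   unsigned type_sz)
{
   return src_base_byte + channels_past_dst * src_stride * type_sz;
}

int main()
{
   /* 4 channels past the destination of a stride-2 dword copy: the
    * origin is 32 bytes in, not 16. */
   return origin_byte_offset(0, 4, 2, 4) == 32 ? 0 : 1;
}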
 
This can happen if the register already has a non-zero subreg_offset
when byte_offset() is called.
v2 (Sam):
- Refactor byte_offset() (Jordan).
Signed-off-by: Samuel Iglesias Gonsálvez <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
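
A standalone model of the carry behavior, with simplified fields:

enum { REG_SIZE = 32 };

struct reg_model { unsigned reg_offset, subreg_offset; };

/* Adding 'delta' bytes to a register that already has a non-zero
 * subreg_offset must carry whole registers into reg_offset instead of
 * letting subreg_offset grow past REG_SIZE. */
static reg_model byte_offset(reg_model reg, unsigned delta)
{
   const unsigned total = reg.subreg_offset + delta;
   reg.reg_offset   += total / REG_SIZE;
   reg.subreg_offset = total % REG_SIZE;
   return reg;
}

int main()
{
   reg_model r = { 0, 24 };   /* already 24 bytes into register 0    */
   r = byte_offset(r, 16);    /* 24 + 16 = 40 -> register 1, byte 8  */
   return r.reg_offset == 1 && r.subreg_offset == 8 ? 0 : 1;
}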
 
We've apparently always been botching JIP for sequences such as:
   do
      cmp.f0.0 ...
      (+f0.0) break
      ...
      do
         ...
      while
      ...
   while
Because the "do" instruction doesn't actually exist, the inner "while"
is at the same depth as the "break". brw_find_next_block_end() thus
mistook the inner "while" as the end of the loop containing the "break",
and set the "break" to point to the wrong place.
Only "while" instructions that jump before our instruction are relevant.
We need to ignore the rest, as they're sibling control flow nodes (or
children, but this was already handled by the depth == 0 check).
See also commit 1ac1581f3889d5f7e6e231c05651f44fbd80f0b6.
This prevents channel masks from being screwed up, and fixes GPU
hangs(*) in dEQP-GLES31.functional.shaders.multisample_interpolation.
interpolate_at_sample.centroid_qualified.multisample_texture_16.
The test ended up executing code with no channels enabled, and that
code contained FIND_LIVE_CHANNEL, which returned 8 (out of range for
a SIMD8 program), which then was used in indirect GRF addressing,
which randomly got a boolean value (0xFFFFFFFF), interpreted it as
a sample ID, OR'd it into an indirect send message descriptor,
which corrupted the message length, sending a pixel interpolator
message with mlen 15, which is illegal. Whew :)
(*) Technically, the test doesn't GPU hang currently, but only
because another bug prevents it from issuing pixel interpolator
messages entirely...with that fixed, it hangs.
Cc: [email protected]
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
 
I need to use this in an additional place.
Cc: [email protected]
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
 
STATE_BASE_ADDRESS stalls the whole pipeline, and the documentation
cautions us to emit it as little as possible for better performance.
We recently put some hacks in BLORP to try and avoid emitting it
if it was already set correctly. However, this wasn't quite minimal:
if BLORP is the first operation (i.e. glClear()), then it would emit
it, and subsequent draw calls would emit it again.
This caused a small drop in performance in GPUTest Triangle when
switching from Meta to BLORP.
Unlike most packets, STATE_BASE_ADDRESS isn't influenced by GL state:
it needs to be emitted once per batch, before most other commands, or
whenever we change the program cache BO. It's also valid in both the
3D and compute pipelines, which makes it even more unique.
This patch removes it from the atom mechanism and instead directly
calls it as part of every draw, compute dispatch, or BLORP operation.
We introduce a new flag indicating that STATE_BASE_ADDRESS has already
been emitted this batch, and if so, skip doing it again. When we make
a new program cache BO, we simply reset the flag, so the next operation
will emit it again. When we flush/reset the batch, we reset the flag.
This guarantees that we'll emit STATE_BASE_ADDRESS only when we have to.
It's also less code than the old atom mechanism.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
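
The mechanism reduces to a single flag; a minimal sketch with made-up
names:

struct batch_model {
   bool state_base_address_emitted;
};

/* Called at the start of every draw, compute dispatch, or BLORP op. */
static void emit_state_base_address(batch_model &batch)
{
   if (batch.state_base_address_emitted)
      return;                               /* still valid for this batch */
   /* ... emit the STATE_BASE_ADDRESS packet here ... */
   batch.state_base_address_emitted = true;
}

/* New program cache BO, or batch flush/reset: force a re-emit. */
static void invalidate_state_base_address(batch_model &batch)
{
   batch.state_base_address_emitted = false;
}

int main()
{
   batch_model batch = { false };
   emit_state_base_address(batch);          /* emits                      */
   emit_state_base_address(batch);          /* skipped                    */
   invalidate_state_base_address(batch);    /* e.g. new program cache BO  */
   emit_state_base_address(batch);          /* emits again                */
   return 0;
}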
 
We're about to start calling it directly, and this means the callers
won't have to think about generations.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
 
This way all the programs are in one place again, and it also should
make some future STATE_BASE_ADDRESS related changes possible.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
 
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
 
opt_constant_folding is supposed to fold trees of constants into a
single constant. Surprisingly, it was also propagating constant values
from variables into expression trees - even when the result couldn't be
folded together. This is opt_constant_propagation's job.
The ir_dereference_variable::constant_expression_value() method returns
a clone of var->constant_value. So we would replace the dereference
with a constant, propagating it into the tree.
Skip over ir_dereference_variable to avoid this surprising behavior.
However, add code to explicitly continue doing it in the constant
propagation pass, as it's useful to do so.
shader-db statistics on Broadwell:
total instructions in shared programs: 8905349 -> 8905126 (-0.00%)
instructions in affected programs: 30100 -> 29877 (-0.74%)
helped: 93
HURT: 20
total cycles in shared programs: 71017030 -> 71015944 (-0.00%)
cycles in affected programs: 132456 -> 131370 (-0.82%)
helped: 54
HURT: 45
The only hurt programs are by a single instruction, while the helped
ones are helped by 1-4 instructions.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
 
If an ir_dereference_array has non-constant components, there's no
point in trying to evaluate its value (which involves walking down
the tree and possibly allocating memory for portions of the subtree
which are constant).
This also removes convoluted tree walking in opt_constant_folding(),
which tries to fold constants while walking up the tree. No need to
walk down, then up, then down again.
We did this for swizzles and expressions already, but I was lazy
back in the day and didn't do this for ir_dereference_array.
No change in shader-db.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
 
We could probably clean this up more (maybe make it a method), but at
least there's only one copy of this code now, and that's a start.
No change in shader-db.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>