| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
| |
Previously, we were just depending on register widths to ensure that
various things had an exec_size of 1, etc. Now, we do so explicitly using the
builder.
Reviewed-by: Topi Pohjolainen <[email protected]>
Acked-by: Francisco Jerez <[email protected]>
|
|
|
|
|
| |
Reviewed-by: Topi Pohjolainen <[email protected]>
Acked-by: Francisco Jerez <[email protected]>
|
|
|
|
|
|
|
|
| |
Shortly, offset() will depend on the builder, so we need to move it to a
place that has access to one.
Reviewed-by: Iago Toral Quiroga <[email protected]>
Acked-by: Francisco Jerez <[email protected]>
|
|
|
|
|
|
|
|
| |
This doesn't affect instructions allocated using the builder.
Reviewed-by: Iago Toral Quiroga <[email protected]>
Reviewed-by: Topi Pohjolainen <[email protected]>
Acked-by: Francisco Jerez <[email protected]>
|
|
|
|
|
|
| |
Reviewed-by: Iago Toral Quiroga <[email protected]>
Reviewed-by: Topi Pohjolainen <[email protected]>
Acked-by: Francisco Jerez <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Soon we will start using the builder to explicitly set all the execution
sizes. We could make a 32-wide builder, but the builder asserts that we
never grow it, which is usually a reasonable assumption. Since this one
instruction is a bit of an oddball, we just set the exec_size explicitly.
v2: Explicitly new the fs_inst instead of using the builder and setting
exec_size after the fact.
v3: Set force_writemask_all with the builder instead of directly. The
builder overwrites it if we set it manually. Also, if we don't have
force_writemask_all in the builder, it will assert-fail on SIMD32.
Reviewed-by: Iago Toral Quiroga <[email protected]>
Acked-by: Francisco Jerez <[email protected]>
|
|
|
|
| |
Reviewed-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
| |
Reviewed-by: Iago Toral Quiroga <[email protected]>
Reviewed-by: Topi Pohjolainen <[email protected]>
Acked-by: Francisco Jerez <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously, fs_inst::regs_read() fell back to depending on the register
width for the second source. This isn't really correct since it isn't a
SIMD8 value at all, but a SIMD4x2 value. This commit changes it to
explicitly always be one register.
v2: Use mlen for determining the number of registers read
Reviewed-by: Iago Toral Quiroga <[email protected]>
Acked-by: Francisco Jerez <[email protected]>
|
|
|
|
|
|
|
|
|
| |
Previously, we were allocating the payload with different sizes per gen and
then figuring out the mlen in the generator based on gen. This meant,
among other things, that the higher level passes knew nothing about it.
Acked-by: Francisco Jerez <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
|
| |
This makes things a little simpler, more efficient, and quite a bit more
readable.
Reviewed-by: Iago Toral Quiroga <[email protected]>
Reviewed-by: Topi Pohjolainen <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Before, we would lazily emit a MOV whenever we encountered a use of a
constant. Now that we have a dedicated file for SSA values, we can
instead emit the MOVs only once, which is more consistent and prevents
us from relying on CSE to re-combine the constants when they aren't
absorbed into the instruction.
total instructions in shared programs: 6078991 -> 6073118 (-0.10%)
instructions in affected programs: 402221 -> 396348 (-1.46%)
helped: 1527
HURT: 0
GAINED: 8
LOST: 2
v2: split this out from the previous commit (Jason)
Reviewed-by: Jason Ekstrand <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
Before, we would use registers, but set a magical "parent_instr" field
to indicate that it was actually purely an SSA value (i.e., it wasn't
involved in any phi nodes). Instead, just use SSA values directly, which
lets us get rid of the hack and reduces memory usage since we're not
allocating a nir_register for every value. It also makes our handling of
load_const more consistent compared to the other instructions.
Reviewed-by: Jason Ekstrand <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We already don't convert constants out of SSA, and in our backend we'd
like to have only one way of saying something is still in SSA.
The one tricky part about this is that we may now leave some undef
instructions around if they aren't part of a phi-web, so we have to be
more careful about deleting them.
v2: rename and flip meaning of flag (Jason)
Reviewed-by: Jason Ekstrand <[email protected]>
|
|
|
|
|
|
| |
Signed-off-by: Tapani Pälli <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
Cc: "10.5" and "10.6" <[email protected]>
|
|
|
|
|
|
| |
Signed-off-by: Tapani Pälli <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
Cc: "10.5" and "10.6" <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
From the "apparently I don't know C" files...GCC apparently supports:
x ?: y
which is equivalent to
x ? x : y
except that it doesn't cause side-effects to occur twice. See:
https://gcc.gnu.org/onlinedocs/gcc/Conditionals.html#Conditionals
This was confusing and looked like a typo. It doesn't really buy us
anything, so just write the obvious code in normal C.
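The difference is easy to demonstrate with a small standalone program
(illustrative only, not code from this change; the ?: form requires GCC or
Clang):
  #include <stdio.h>

  static int calls;

  static int expensive(void)
  {
     calls++;
     return calls > 1 ? 0 : 42;   /* nonzero only on the first call */
  }

  int main(void)
  {
     /* GNU extension: evaluates expensive() once and yields its value
      * if it is nonzero, otherwise -1. */
     int a = expensive() ?: -1;

     /* Plain C spelling: the condition's side effect happens twice. */
     calls = 0;
     int b = expensive() ? expensive() : -1;

     printf("a = %d, b = %d\n", a, b);   /* prints a = 42, b = 0 */
     return 0;
  }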
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Samuel Iglesias Gonsálvez <[email protected]>
|
|
|
|
|
| |
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Ben Widawsky <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch enables using XY_FAST_COPY_BLT only for Yf/Ys tiled buffers.
It can later be turned on for other tiling patterns (X, Y) too.
V3: Flush in between sequential fast copy blits.
Fix src/dst alignment requirements.
Make can_fast_copy_blit() helper.
Use ffs(), is_power_of_two()
Move overlap computation inside intel_miptree_blit().
V4: Use _mesa_regions_overlap() function.
Add check for src_buffer == dst_buffer.
Simplify horizontal and vertical alignment computations.
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Ben Widawsky <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In the case of I915_TILING_{X,Y} we need to pass the tiling format to
libdrm using drm_intel_bo_alloc_tiled(). But in the case of YF/YS tiled
buffers, libdrm need not know about the tiling format, because these
buffers don't have hardware support for being tiled or detiled through
a fenced region. libdrm still needs to know the buffer alignment value
for its use in the kernel when resolving relocations.
Using drm_intel_bo_alloc_for_render() for YF/YS tiled buffers satisfies
both of the above conditions.
V2: Delete min/max buffer size restrictions not valid for i965+.
Remove redundant align-to-tile-size statements.
Remove some redundant code now that there are no min/max buffer sizes.
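A rough sketch of that allocation split (the two libdrm entry points are
real; the wrapper function, its parameters, and the 4096-byte alignment are
made-up placeholders):
  #include <stdbool.h>
  #include <stdint.h>
  #include <intel_bufmgr.h>

  static drm_intel_bo *
  alloc_miptree_bo(drm_intel_bufmgr *bufmgr, const char *name,
                   int width, int height, int cpp, uint32_t tiling,
                   bool is_yf_ys, unsigned long size, unsigned long *pitch)
  {
     if (is_yf_ys) {
        /* Yf/Ys: no fence-based (de)tiling, so libdrm only needs a size
         * and an alignment suitable for rendering. */
        return drm_intel_bo_alloc_for_render(bufmgr, name, size, 4096);
     }

     /* X/Y tiling: libdrm must know the tiling mode so the kernel can
      * set up a fence and fix up relocations; note it may adjust the
      * tiling mode and returns the pitch it chose. */
     return drm_intel_bo_alloc_tiled(bufmgr, name, width, height, cpp,
                                     &tiling, pitch, 0);
  }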
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Ben Widawsky <[email protected]>
|
|
|
|
|
| |
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Ben Widawsky <[email protected]>
|
|
|
|
|
| |
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Ben Widawsky <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Buffers with Yf/Ys tiling end up using the meta upload/download paths
or the blitter in cases where they would have used the tiled_memcpy
paths with Y tiling. This has exposed some bugs in the meta path. To
avoid any piglit regressions on SKL, this patch keeps Yf/Ys tiling
disabled for now.
V3: Make brw_miptree_choose_tr_mode() actually choose TRMODE. (Ben)
Few cosmetic changes.
V4: Get rid of brw_miptree_choose_tr_mode().
Take care of all tile resource modes {Yf, Ys, none} for all
generations in one place.
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Ben Widawsky <[email protected]>
|
|
|
|
|
| |
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Ben Widawsky <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This function is for deleting per-screen resources, and the shader
compiler resources are not of that nature. Besides, dri shouldn't even
need to know about the presence of a shader compiler.
These resources will already be released when mesa gets unloaded,
and that should be sufficient.
Signed-off-by: Erik Faye-Lund <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
|
|
|
|
|
|
|
|
| |
Since commit 104c8fc2c2aa5621261f8, the GLSL IR will be freed if NIR is
being used. This was causing a segfault if INTEL_DEBUG=wm was set.
This patch just makes it avoid dumping the GLSL IR in that case.
Reviewed-by: Ben Widawsky <[email protected]>
Reviewed-by: Tapani Pälli <[email protected]>
|
|
|
|
| |
This reverts commit 104c8fc2c2aa5621261f80aa6b4f76c3163078f1.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Legacy user clipping (using gl_Position or gl_ClipVertex) is handled by
turning those into the modern gl_ClipDistance equivalents.
This is unnecessary in Core Profile: if user clipping is enabled, but
the shader doesn't write the corresponding gl_ClipDistance entry,
results are undefined. Hence, it is also unnecessary for geometry
shaders.
This patch moves the call up to run_vs(). This is equivalent for VS,
but removes the need to pass clip distances into emit_urb_writes().
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
Reviewed-by: Chris Forbes <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
According to the "URB SIMD8 Write > Write Data Payload" documentation,
"The write data payload can be between 1 and 8 message phases long."
Apparently, the simulator considers it an error if you issue an URB
SIMD8 message with only a header and no actual data to write.
v2: Try to put in a better PRM citation, now that the Broadwell docs
actually exist (requested by Jordan).
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
We were not emitting the LOD, which led to message lengths of 1 instead
of 3. Setting has_lod makes us emit the LOD, but I had to make changes
to avoid emitting the non-existent coordinate as well.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=91022
Cc: [email protected]
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
brw_miptree_layout_2d tries to ensure that mt->total_width is a
multiple of the compressed block size, presumably because it wouldn't
be possible to make an image that has a fraction of a block. However,
it was doing this by aligning mt->total_width to align_w. Previously,
align_w was used as a shortcut for getting the block width, because
before Gen9 the block width was always equal to the alignment.
Commit 4ab8d59a2 tried to fix these cases to use the block width
instead of the alignment, but it missed this case.
I think in practice this probably won't make any difference, because
the buffer for the texture will be allocated large enough to contain
the entire pitch and libdrm aligns the pitch to the tile width anyway.
However, I think the patch is worth having to make the intention
clearer.
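A minimal sketch of the intent, with hypothetical helper names; only the
"align to the block width rather than align_w" point comes from the patch.
For example, a DXT1 level 130 texels wide (4x4 blocks) pads to 132:
  #include <stdint.h>

  /* Round x up to a multiple of align (align need not be a power of two). */
  static uint32_t
  align_up(uint32_t x, uint32_t align)
  {
     return ((x + align - 1) / align) * align;
  }

  /* Pad the total width to a whole number of compressed blocks using the
   * format's block width; on Gen9 the surface alignment unit (align_w) is
   * no longer guaranteed to equal the block width, so it must not be used
   * here. */
  static uint32_t
  pad_width_to_blocks(uint32_t total_width, uint32_t block_width)
  {
     return align_up(total_width, block_width);   /* e.g. 130 -> 132 for 4 */
  }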
Reviewed-by: Anuj Phogat <[email protected]>
Reviewed-by: Ben Widawsky <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
From Muchnick's Advanced Compiler Design and Implementation:
"To determine which variables are live at each point in a flowgraph, we
perform a backward data-flow analysis"
Previously, we were walking the blocks forwards and updating the livein and
then the liveout. However, the livein calculation depends on the liveout
and the liveout depends on the successor blocks. The net result is that it
takes one full iteration to go from liveout to livein and then another
full iteration to propagate to the predecessors. This works out to an
O(n^2) computation where n is the number of blocks. If we run things in
the other order, it's O(nl) where l is the maximum loop depth which is
practically bounded by 3.
In b2c6ba0c4b21391dc35018e1c8c4f7f7d8952bea, we made this same change in
the FS backend to great effect. Might as well keep it consistent and make
the same change for vec4. Also, this reduced the time to run the test
ES31-CTS.arrays_of_arrays.InteractionFunctionCalls1
from 6:49.62 to 3:31.40 on Timothy Arceri's machine.
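A self-contained sketch of the backward data-flow sweep described above,
using toy data structures rather than the vec4 backend's real ones:
  #include <stdbool.h>
  #include <stdint.h>

  #define MAX_SUCCS 2

  struct block {
     uint64_t use, def;       /* precomputed per-block use/def bitsets */
     uint64_t livein, liveout;
     int num_succs;
     int succs[MAX_SUCCS];    /* indices of successor blocks */
  };

  static void
  compute_live_variables(struct block *blocks, int num_blocks)
  {
     bool progress;
     do {
        progress = false;
        /* Walk the blocks backwards so a block's liveout (computed from
         * its successors) feeds its livein in the same sweep, and that
         * livein reaches forward-edge predecessors later in this sweep
         * instead of waiting for the next one. */
        for (int b = num_blocks - 1; b >= 0; b--) {
           struct block *blk = &blocks[b];

           uint64_t liveout = 0;
           for (int i = 0; i < blk->num_succs; i++)
              liveout |= blocks[blk->succs[i]].livein;

           uint64_t livein = blk->use | (liveout & ~blk->def);

           if (livein != blk->livein || liveout != blk->liveout) {
              blk->livein = livein;
              blk->liveout = liveout;
              progress = true;
           }
        }
     } while (progress);
  }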
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
gen8 had some special restrictions which don't seem to carry over to gen9.
Quoting the spec for SKL:
"The Z_Height and Z_Width values must equal those present in
3DSTATE_DEPTH_BUFFER incremented by one."
This fixes nothing in piglit (and regresses nothing).
Signed-off-by: Ben Widawsky <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
|
|
|
|
|
|
|
|
|
| |
This is always 0 - only brw_workaround_depthstencil_alignment ever sets
it, and that doesn't run on Gen6+. My initial Broadwell depth state
commit had this mistake.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The thread counts and URB information are all speculative numbers that were
based on some CHV numbers at the time.
v2:
Originally this patch had PCI IDs. I've moved that to a new patch at the end of
the series.
Remove is_cherryview hack.
Add PCI ids. These match the ones defined in the kernel. The only one tested by
us is 0x0a84.
Capitalize the hex string (Mark)
Signed-off-by: Ben Widawsky <[email protected]>
Tested-by: "Lecluse, Philippe" <[email protected]>
Reviewed-by: Mark Janes <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Commit b765119c changed the default value of all the counter bits to
64. However, older hardware only has 32 counter bits.
This has only been build-tested. We don't have any tests that verify
the advertised value against implementation behavior, so I don't know
what additional testing could be done.
NOTE: It appears that many Gallium drivers (at least r300 and i915g)
have the same problem, but I don't see a way for the state-tracker to
determine the counter size. Marek says, "For Gallium, a new PIPE_CAP or
new get_xxx_param function will be needed."
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
Cc: Alex Deucher <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
From Muchnick's Advanced Compiler Design and Implementation:
"To determine which variables are live at each point in a flowgraph, we
perform a backward data-flow analysis"
Previously, we were walking the blocks forwards and updating the livein and
then the liveout. However, the livein calculation depends on the liveout
and the liveout depends on the successor blocks. The net result is that it
takes one full iteration to go from liveout to livein and then another
full iteration to propagate to the predecessors. This works out to an
O(n^2) computation where n is the number of blocks. If we run things in
the other order, it's O(nl) where l is the maximum loop depth which is
practically bounded by 3.
On my HSW desktop, one particular shadertoy test gets a 20% improvement in
compile times:
N Min Max Median Avg Stddev
x 10 15.965 16.884 16.026 16.1822 0.34736846
+ 10 12.813 13.052 12.876 12.8891 0.06913666
Difference at 95.0% confidence
-3.2931 +/- 0.235316
-20.3501% +/- 1.45417%
(Student's t, pooled s = 0.250444)
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
This is based on Kenneth's patch to delete 'most of the IR'. Due to
linker changes to clone variables, we can now free all of the IR.
Saves 58MB of memory when replaying a Dota 2 trace on Broadwell.
Signed-off-by: Tapani Pälli <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Cc: [email protected]
|
|
|
|
|
| |
Signed-off-by: Chris Wilson <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
|
|
| |
Start trimming the fat from intel_batchbuffer.c, first by moving the set
of routines for emitting PIPE_CONTROLs (along with the lore concerning
hardware workarounds) to a separate brw_pipe_control.c.
Signed-off-by: Chris Wilson <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
|
|
|
| |
Trivial.
|
|
|
|
|
|
|
| |
Matt, Jason, and I haven't found this useful in a long time.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
| |
Reviewed-by: Jordan Justen <[email protected]>
|
|
|
|
|
|
|
| |
As of this commit, nothing actually needs the brw_context.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Chris Forbes <[email protected]>
|
|
|
|
|
|
|
| |
This way we can stop doing is_gles3 checks inside of the compiler.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Chris Forbes <[email protected]>
|
|
|
|
|
| |
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Chris Forbes <[email protected]>
|
|
|
|
|
|
|
|
|
| |
Previously, these were pulled out of the GL context conditionally based on
whether we were running ff/ARB or a GLSL program. Now, we just pass them
in so that the visitor doesn't have to grab them itself.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Chris Forbes <[email protected]>
|
|
|
|
|
|
|
| |
Previously, we were pulling it from brw->do_rep_send
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Chris Forbes <[email protected]>
|
|
|
|
|
| |
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Chris Forbes <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
Previously, each shader took 3 shader time indices which were potentially
at arbitrary points in the shader time buffer. Now, each shader gets a
single index which refers to 3 consecutive locations in the buffer. This
simplifies some of the logic at the cost of having a magic 3 in a few
places.
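Sketched out, the indexing scheme is just the following; the constant name
and the meaning of the three slots are placeholders:
  #define SHADER_TIME_SLOTS_PER_SHADER 3

  /* Each shader owns a single shader_time_index; its three buffer
   * locations are the consecutive slots starting at index * 3. */
  static unsigned
  shader_time_slot(unsigned shader_time_index, unsigned which /* 0..2 */)
  {
     return shader_time_index * SHADER_TIME_SLOTS_PER_SHADER + which;
  }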
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Chris Forbes <[email protected]>
|