| Commit message |
| |
On a release build, this makes the rest of vc4_qpu_validate.c go away
(the compiler didn't know that our qpu helper function calls had no
side effects).
| |
These are really useful hints to the compiler in the absence of link-time
optimization, and I'm going to use them in VC4.
Unlike the other function attributes, I've named the const attribute
ATTRIBUTE_CONST, because we have other things in the tree #defining CONST
for their own unrelated purposes.
v2: Alphabetize.
Reviewed-by: Kenneth Graunke <[email protected]> (v1)
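As a reference, a minimal sketch of what such GCC-style const/pure attribute
macros usually look like (the exact configure guards and spelling in the tree
may differ):

    /* Sketch only: guard on GCC so other compilers get empty definitions. */
    #ifdef __GNUC__
    #define ATTRIBUTE_CONST __attribute__((__const__)) /* no side effects, reads no global state */
    #define PURE            __attribute__((__pure__))  /* no side effects, may read global state */
    #else
    #define ATTRIBUTE_CONST
    #define PURE
    #endif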
| |
Before, we were setting payload_last_use_ip for unused payload
registers to 0, which made them interfere with whatever the first
instruction wrote to, due to the workaround for SIMD16 uniform arguments.
Just use -1 to mean "unused" instead, and then skip setting any
interferences for unused payload registers.
instructions in affected programs: 0 -> 0
helped: 0
HURT: 0
GAINED: 1
LOST: 0
Reviewed-by: Jordan Justen <[email protected]>
Signed-off-by: Connor Abbott <[email protected]>
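A hedged sketch of the resulting pattern in the allocator (variable names are
illustrative, not the actual i965 code):

    /* -1 now marks a payload register that is never read, so it gets no
     * interferences at all. */
    for (int i = 0; i < payload_node_count; i++) {
       if (payload_last_use_ip[i] == -1)
          continue;   /* unused payload register: skip interference setup */
       /* ...add interferences against values still live at
        * payload_last_use_ip[i]... */
    }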
| |
regs_read() will handle LINTERP for us since the previous commit. In
addition, we were being too conservative, since it will only read 2
registers on SIMD8.
instructions in affected programs: 9061 -> 8893 (-1.85%)
helped: 10
HURT: 0
GAINED: 0
LOST: 0
All of the changes were due to spills being eliminated, mostly in KSP
shaders.
Reviewed-by: Jordan Justen <[email protected]>
Signed-off-by: Connor Abbott <[email protected]>
| |
If a source in the push constant registers used more than one
register, then we wouldn't update payload_last_use_ip for the
subsequent registers.
Unlike most uniform data pushed into registers, the CS gl_LocalInvocationID
data varies per execution channel. Therefore for SIMD16 mode, we have vec16
data in the payload. In this case we then need to mark 2 registers in
payload_last_use_ip as last used by the instruction. There's a similar
situation for the z and w coordinates of gl_FragCoord for fragment shaders,
where it had only happened to work before because of some bogus interferences
which the next commit removes.
(Connor: added bit about gl_FragCoord to commit message)
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Connor Abbott <[email protected]>
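The fix amounts to marking every payload register the source spans, not just
the first one; a minimal sketch of the idea (names are illustrative):

    /* A SIMD16 vec16 payload source covers two GRFs, so bump
     * payload_last_use_ip for each register it reads. */
    for (int j = 0; j < regs_read; j++)
       payload_last_use_ip[payload_reg + j] = ip;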
| |
The second source always stays within the same SIMD8 register.
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
Signed-off-by: Connor Abbott <[email protected]>
| |
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
Signed-off-by: Connor Abbott <[email protected]>
| |
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
Signed-off-by: Connor Abbott <[email protected]>
| |
This prevents an assertion failure in brw_fs_live_variables.cpp,
fs_live_variables::setup_one_write: Assertion `var < num_vars' failed.
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
| |
This prevents an assertion failure in brw_fs_live_variables.cpp,
fs_live_variables::setup_one_read: Assertion `var < num_vars' failed.
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
| |
A fragment program from "Pixel Piracy" contains redundant OPTION
directives:
!!ARBfp1.0
OPTION ARB_precision_hint_fastest;
OPTION ARB_fog_exp2;
OPTION ARB_precision_hint_fastest;
OPTION ARB_fog_exp2;
...
We already allow redundant ARB_precision_hint_fastest directives, but
disallow the redundant (yet consistent) ARB_fog_exp2 directives, failing
to compile the program.
The specification seems to contradict itself - the main text says that
only one fog application option may be specified, but then backpedals,
indicating the intent is to disallow /contradictory/ flags. One of the
issues suggests that specifying contradictory ones is stupid, but
allowed, and only the last one should take effect.
Accepting multiple redundant (but consistent) directives seems harmless,
and like a reasonable interpretation of the specification. It also
fixes a fragment program found in the wild.
Cc: [email protected]
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
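A hedged sketch of the relaxed parser check (struct and field names are
hypothetical, not the actual arbprogparse code):

    enum { FOG_NONE = 0, FOG_EXP, FOG_EXP2, FOG_LINEAR };
    struct fp_parse_state { unsigned fog_option; };

    /* Accept a repeated fog OPTION as long as it matches what was already
     * requested; only genuinely contradictory directives fail. */
    static bool
    set_fog_option(struct fp_parse_state *state, unsigned fog_mode)
    {
       if (state->fog_option != FOG_NONE && state->fog_option != fog_mode)
          return false;              /* contradictory: compile error */
       state->fog_option = fog_mode; /* first use or consistent repeat */
       return true;
    }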
| |
With the last few patches a way was provided to influence lower layer miptree
layout and allocation decisions via flags (replacing bools). For simplicity, I
chose not to touch the tiling requests because the change was slightly less
mechanical than replacing the bools.
The goal is to organize the code so we can continue to add new parameters and
tiling types while minimizing risk to the existing code, and not having to
constantly add new function parameters.
v2: Rebased on Anuj's recent Yf/Ys changes
Fix non-msrt MCS allocation (was only happening in gen8 case before)
v3: small fix in assertion requested by Chad
v4: Use parens to get the order right from v3.
Signed-off-by: Ben Widawsky <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
| |
This reverts commit 51e8d549e110f86cb7107cf712843aebd956fb9a.
| |
With the last few patches a way was provided to influence lower layer miptree
layout and allocation decisions via flags (replacing bools). For simplicity, I
chose not to touch the tiling requests because the change was slightly less
mechanical than replacing the bools.
The goal is to organize the code so we can continue to add new parameters and
tiling types while minimizing risk to the existing code, and not having to
constantly add new function parameters.
v2: Rebased on Anuj's recent Yf/Ys changes
Fix non-msrt MCS allocation (was only happening in gen8 case before)
v3: small fix in assertion requested by Chad
Signed-off-by: Ben Widawsky <[email protected]>
Reviewed-by: Jordan Justen <[email protected]> (v2)
Reviewed-by: Anuj Phogat <[email protected]> (v2)
Reviewed-by: Chad Versace <[email protected]> (v2)
| |
size.
This in-principle simple calculation was being open-coded in a number
of places (and a series I haven't yet sent for review adds a couple
more), all of them subtly broken in one way or another: none of them
handled the HW_REG case correctly, as pointed out by Connor, and
fs_inst::regs_read() handled the stride=0 case rather naively. This
patch solves both problems and factors out the calculation as a new
fs_reg method.
Reviewed-by: Jason Ekstrand <[email protected]>
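A hedged sketch of the calculation being factored out, written here as a free
function rather than the actual fs_reg method:

    /* Size in bytes of `width' consecutive channels of a register region.
     * A stride of 0 broadcasts a single component to every channel, so the
     * region still occupies one component's worth of storage.  (For HW_REGs
     * the stride has to come from the fixed register's hstride instead,
     * which is not shown here.) */
    static unsigned
    region_component_size(unsigned width, unsigned stride, unsigned type_size)
    {
       unsigned n = width * stride;
       return (n > 0 ? n : 1) * type_size;
    }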
| |
This gets rid of two no16() fall-backs and should allow better
scheduling of the generated IR. There are no uses of usubBorrow() or
uaddCarry() in shader-db so no changes are expected. However the
"arb_gpu_shader5/execution/built-in-functions/fs-usubBorrow" and
"arb_gpu_shader5/execution/built-in-functions/fs-uaddCarry" piglit
tests go from 40 to 28 instructions. The reason is that the plain ADD
instruction can easily be CSE'ed with the original addition, and the
b2i negation can easily be propagated into the source modifier of
another instruction, so effectively both operations are performed with
just one instruction.
v2: Rely on carry_to_arith() and borrow_to_arith() to lower these
(Ilia Mirkin).
Reviewed-by: Matt Turner <[email protected]>
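For reference, a plain-C sketch of the arithmetic the carry_to_arith() and
borrow_to_arith() lowerings boil down to:

    #include <stdint.h>

    /* uaddCarry(x, y): the sum wraps modulo 2^32 and carry is 1 on overflow. */
    static uint32_t
    uadd_carry(uint32_t x, uint32_t y, uint32_t *carry)
    {
       uint32_t sum = x + y;
       *carry = sum < x;   /* b2i of the overflow condition */
       return sum;
    }

    /* usubBorrow(x, y): the difference wraps and borrow is 1 when x < y. */
    static uint32_t
    usub_borrow(uint32_t x, uint32_t y, uint32_t *borrow)
    {
       *borrow = x < y;    /* b2i of the underflow condition */
       return x - y;
    }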
| |
Booleans are represented as 0/-1 on modern hardware which means we can
just negate them to convert them into a numeric type. Negation has
the benefit that it can be implemented using a source modifier which
can easily be propagated into some other instruction. shader-db
results on HSW:
total instructions in shared programs: 6349082 -> 6346693 (-0.04%)
instructions in affected programs: 40948 -> 38559 (-5.83%)
helped: 123
HURT: 1
GAINED: 1
LOST: 0
Reviewed-by: Matt Turner <[email protected]>
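A small illustration of why plain negation works, assuming the 0/-1 boolean
representation described above:

    #include <stdint.h>

    /* With true == -1 (all bits set) and false == 0, b2i is just negation,
     * and b2f is negation plus an int-to-float conversion. */
    static int32_t b2i(int32_t b) { return -b; }          /* 0 -> 0, -1 -> 1 */
    static float   b2f(int32_t b) { return (float)-b; }   /* 0 -> 0.0, -1 -> 1.0 */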
| |
Reviewed-by: Ilia Mirkin <[email protected]>
| |
PIPE_CAPs will be added some other time.
Reviewed-by: Ilia Mirkin <[email protected]>
| |
Reviewed-by: Ilia Mirkin <[email protected]>
| |
PIPE_CAPs and TGSI support will be added later. The TGSI support should be
straightforward. We only need to split TGSI_FILE_RESOURCE into TGSI_FILE_IMAGE
and TGSI_FILE_BUFFER, though duplicating all opcodes shouldn't be necessary.
The idea is:
* ARB_shader_image_load_store should use set_shader_images.
* ARB_shader_storage_buffer_object should use set_shader_buffers(slots 0..M-1)
if M shader storage buffers are supported.
* ARB_shader_atomic_counters should use set_shader_buffers(slots M..N)
if N-M+1 atomic counter buffers are supported.
PIPE_CAPs can describe various constraints for early DX11 hardware.
Reviewed-by: Ilia Mirkin <[email protected]>
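A rough illustration of the proposed slot split, with purely hypothetical
counts (say, 16 SSBOs and 8 atomic counter buffers) for the example:

    /* SSBOs occupy slots 0..num_ssbos-1; atomic counter buffers are packed
     * into the slots immediately after them. */
    static unsigned
    atomic_counter_buffer_slot(unsigned num_ssbos, unsigned abo_index)
    {
       return num_ssbos + abo_index;   /* e.g. 16 + 0 .. 16 + 7 */
    }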
| |
Reviewed-by: Ilia Mirkin <[email protected]>
| |
Instead of relying on hardware defaults, the i915 kernel driver is
going to program custom MOCS tables system-wide on Gen9 hardware. The
"WT" entry previously used for renderbuffers had a number of problems:
It disabled caching on eLLC, it used a reserved L3 cacheability
setting, and it used to override the PTE controls making renderbuffers
always WT on LLC regardless of the kernel's setting. Instead use an
entry from the new MOCS tables with parameters: TC=LLC/eLLC, LeCC=PTE,
L3CC=WB.
The "WB" entry previously used for anything other than renderbuffers
has moved to a different index in the new MOCS tables but it should
have the same caching semantics as the old entry.
Even though the corresponding kernel change ("drm/i915: Added
Programming of the MOCS") is in a way an ABI break it doesn't seem
necessary to check that the kernel is recent enough because the change
should only affect Gen9 which is still unreleased hardware.
v2: Update MOCS values for the new Android-incompatible tables
introduced in v7 of the kernel patch.
Cc: 10.6 <[email protected]>
Reference: http://lists.freedesktop.org/archives/intel-gfx/2015-July/071080.html
Reviewed-by: Ben Widawsky <[email protected]>
| |
s/build_error/compile_error in order to match the stored OpenCL status code.
Make program::build catch and log every OpenCL error.
Make the tgsi error triggering consistent with the llvm one.
Reviewed-by: Francisco Jerez <[email protected]>
| |
This is done by returning an rvalue of type void from the
ast_function_expression::hir function instead of a void expression.
In the ternary case this produces HIR containing a call to the
void-returning function and an assignment to a void variable; the
assignment is later removed during the optimization pass.
This fix yields a valid subexpression in the many different cases
where the subexpressions are calls to functions whose return values
are void, thus preventing a NULL dereference in the following cases:
* binary operators
* unary operators
* ternary operator
* comparison operators (except the equal and nequal operators)
Equal and nequal had to be handled as a special case because, instead
of segfaulting on forbidden syntax, the compiler would now accept
expressions with a void return value on either (or both) sides of
the expression.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=85252
Signed-off-by: Renaud Gaubert <[email protected]>
Reviewed-by: Gabriel Laskar <[email protected]>
Reviewed-by: Samuel Iglesias Gonsalvez <[email protected]>
| |
The formats chosen (both by the texture format chooser and by fbo storage
allocation) are different on big endian, not just for rgba8 but also for
lower bit-width formats (why, I don't actually know). Even the function
to test for renderable formats used different formats; however, the actual
colorbuffer setup did not, and the blitter did not take that into account
either.
Untested (what could possibly go wrong...).
Same as for r100.
Acked-by: Marek Olšák <[email protected]>
| |
The formats chosen (both by the texture format chooser and by fbo storage
allocation) are different on big endian, not just for rgba8 but also for
lower bit-width formats (why, I don't actually know). Even the function
to test for renderable formats used different formats; however, the actual
colorbuffer setup did not, and the blitter did not take that into account
either.
Untested (what could possibly go wrong...).
Acked-by: Marek Olšák <[email protected]>
| |
Blit submits lots of packets which are usually handled by state atoms, so
these must be dirtied.
Not sure if this fixes anything, but it was a concern raised by bug 51658
(with this, all issues there that look like actual bugs should be fixed,
except for the patch to upload unused texenv state atoms, which I just
don't understand).
Acked-by: Marek Olšák <[email protected]>
| |
It is rather unfortunate that we don't know if a texture is going to be used
as a render target later, and we lack the means to do something about a chosen
format which we can't render to directly, so disable this and always choose a
renderable format for rgba8 textures.
This addresses an issue raised in the (old) bug
https://bugs.freedesktop.org/show_bug.cgi?id=51658 with gnome-shell; I don't
know if that's still applicable, but it might fix other things as well.
Acked-by: Marek Olšák <[email protected]>
| |
Along with fixing the type of the pitch parameter, the patch also changes
the types of a few local variables and the function return type.
Warnings fixed are:
intel_mipmap_tree.c:671:7: warning: passing argument 3 of
'intel_get_yf_ys_bo_size' from incompatible pointer type
intel_mipmap_tree.c:563:1: note: expected 'uint64_t *' but
argument is of type 'long unsigned int *'
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
| |
Previously OUT_BATCH was just a macro around an inline function which
does
brw->batch.map[brw->batch.used++] = dword;
When making consecutive calls to intel_batchbuffer_emit_dword() the
compiler isn't able to recognize that we're writing consecutive memory
locations or that it doesn't need to write batch.used back to memory
each time.
We can avoid both of these problems by making a local pointer to the
next location in the batch in BEGIN_BATCH().
Cuts 18k from the .text size.
text data bss dec hex filename
4946956 195152 26192 5168300 4edcac i965_dri.so before
4928956 195152 26192 5150300 4e965c i965_dri.so after
This series (including commit c0433948) improves performance of Synmark
OglBatch7 by 8.01389% +/- 0.63922% (n=83) on Ivybridge.
Reviewed-by: Chris Wilson <[email protected]>
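A hedged sketch of the shape of the change (the macro bodies are illustrative,
not the exact i965 definitions):

    /* Before: every OUT_BATCH went through an inline function doing
     *    brw->batch.map[brw->batch.used++] = dword;
     * After: BEGIN_BATCH() captures a local cursor, OUT_BATCH() bumps it,
     * and ADVANCE_BATCH() writes batch.used back exactly once. */
    #define BEGIN_BATCH(n) do {                                \
       uint32_t *__map = brw->batch.map + brw->batch.used;

    #define OUT_BATCH(dword) *__map++ = (dword)

    #define ADVANCE_BATCH()                                    \
       brw->batch.used = __map - brw->batch.map;               \
    } while (0)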
| |
The next patch will replace the .used field with an on-demand
calculation of batchbuffer usage.
Reviewed-by: Chris Wilson <[email protected]>
| |
So that everything writing to the batch between BEGIN_BATCH() and
ADVANCE_BATCH() goes through OUT_BATCH.
Reviewed-by: Chris Wilson <[email protected]>
| |
BEGIN_BATCH() and ADVANCE_BATCH() will contain "do {" and "} while (0)"
respectively to allow declaring local variables used by intervening
OUT_BATCH macros. As such, BEGIN_BATCH() and ADVANCE_BATCH() need to be
in the same control flow.
Reviewed-by: Iago Toral Quiroga <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
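For illustration, the usage pattern this implies (the command dwords are
hypothetical):

    /* BEGIN_BATCH() opens the "do {" scope holding locals used by OUT_BATCH(),
     * so the matching ADVANCE_BATCH() must sit in the same control flow. */
    BEGIN_BATCH(3);
    OUT_BATCH(SOME_3D_COMMAND << 16 | (3 - 2));  /* hypothetical header dword */
    OUT_BATCH(dword1);
    OUT_BATCH(dword2);
    ADVANCE_BATCH();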
| |
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=91337
Cc: 10.6 <[email protected]>
Reviewed-by: Emil Velikov <[email protected]>
| |
Cuts another 12% of vc4_uniforms.o, in exchange for computing it at
CSO creation time.
| |
In exchange for a bit of space and computation in CSO setup, we cut
vc4_uniform.c (draw time) code size by 4.8%.
| |
The rest of vc4_program.c is about compiling, while this is about
uniform emit at draw time.
| |
No code generation changes from this, but it'll be useful to have this
next time I go checking -Wdouble-promotion.
| |
This field should always be set for gen8. In the bdw PRM, Volume 2d:
Command Reference: Structures under INTERFACE_DESCRIPTOR_DATA, DWORD
6, Bits 9:0, Number of Threads in GPGPU Thread Group:
"This field should not be set to 0 even if the barrier is disabled,
since an accurate value is needed for proper pre-emption."
The HSW PRM doesn't mention that it must always be set, but setting it
should not hurt.
Reported-by: Kristian Høgsberg <[email protected]>
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Ben Widawsky <[email protected]>
| |
Cuts another 88 bytes of compiled code.
| |
Drops 680 bytes of code, from avoiding a bunch of extra updates to the
next pointer in the struct.
| |
I needed to rewrite this a bit for safety checking in the next commit.
Despite being a static inline of the same thing that was being done, we
lose 36 bytes of code for some reason.
| |
Now that RCL generation is in the kernel, we don't have any other
callers. Oddly, the compiler generates another 8 bytes of code for
this, but the simplification is worth it.
| |
Now that we don't resize the CL as we build (it's set up at the top by
vc4_start_draw()), we can store the pointers instead of offsets from
the base. Saves a bit of math in emitting relocs (about 60 bytes of
code).
| |
Reviewed-by: Topi Pohjolainen <[email protected]>
| |
Extend the existing lower_ubo_reference pass to also detect SSBO loads
and lower them to __intrinsic_load_ssbo intrinsics.
Signed-off-by: Samuel Iglesias Gonsalvez <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
|