Added a meson test for the standalone compiler with the --dump-builder
option on builtin texture* functions.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107767
Signed-off-by: Yevhenii Kolesnikov <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Dylan Baker <[email protected]>
glsl/standalone with --dump-builder will exit when unsupported texture
functions are encountered.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107767
Signed-off-by: Sergii Romantsov <[email protected]>
Signed-off-by: Yevhenii Kolesnikov <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Dylan Baker <[email protected]>
From OpPtrAccessChain description in the SPIR-V spec (1.4 rev 1):
For objects in the Uniform, StorageBuffer, or PushConstant storage
classes, the element’s address or location is calculated using a
stride, which will be the Base-type’s Array Stride when the Base
type is decorated with ArrayStride. For all other objects, the
implementation will calculate the element’s address or location.
For non-CL shaders the driver should lay out the Workgroup storage
class, so override any explicitly set ArrayStride in the shader. This
currently fixes only the lower_workgroup_access_to_offsets case, which
is used by anv.
Reviewed-by: Juan A. Suarez <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
v2: 1) Add more optimization rules for ROL/ROR (Matt Turner)
2) Add lowering rules for ROL/ROR (Matt Turner)
Signed-off-by: Sagar Ghuge <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
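As an illustration only (not part of the commit), a rough
nir_opt_algebraic.py-style sketch of what a 32-bit rotate lowering can
look like; the opcode spellings and the treatment of a zero rotate
amount are assumptions here:

  # hypothetical lowering sketch; the real rules in Mesa may differ
  a, b = 'a', 'b'
  lower_rotate_sketch = [
      # rol(a, b) == (a << b) | (a >> (32 - b)); assumes shift counts are
      # taken modulo 32, so a rotate amount of 0 still yields a
      (('urol@32', a, b), ('ior', ('ishl', a, b), ('ushr', a, ('isub', 32, b)))),
      # ror(a, b) == (a >> b) | (a << (32 - b))
      (('uror@32', a, b), ('ior', ('ushr', a, b), ('ishl', a, ('isub', 32, b)))),
  ]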
Signed-off-by: Sagar Ghuge <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
When using ARB_gl_spirv, the block names are optional and the uniform
blocks are referred to using Bindings instead. Teach
gl_nir_lower_buffers to handle those.
Reviewed-by: Timothy Arceri <[email protected]>
Until now, we were using the uniform's explicit location to check
whether the current nir variable had already been processed while
adding entries to the uniform storage. But for UBOs/SSBOs, entries are
also added, yet we lack an explicit location.
For those we need to rely on the UBO/SSBO binding and the uniform
storage block_index. In that case several uniforms would need to be
updated at once.
v2: (from Timothy review)
* Improve wording and fix typos of some long comments.
* Rename update_uniform_storage to mark_stage_as_active
v3: (from cmarcelo review)
* Fixed some comment typos
Reviewed-by: Timothy Arceri <[email protected]>
Specifically, offset, stride (coming from arrays or matrices) and
row_major.
In GLSL, most of that info is computed from the layout qualifier, but
with ARB_gl_spirv it is explicit and, for Mesa, included in the
glsl_type.
From ARB_gl_spirv spec:
"Mapping of layouts
std140/std430 -> explicit *Offset*, *ArrayStride*, and
*MatrixStride* Decoration on struct members"
"7.6.2.spv SPIR-V Uniform Offsets and Strides
The SPIR-V decorations *GLSLShared* or *GLSLPacked* must not be
used. A variable in the *Uniform* Storage Class decorated as a
*Block* must be explicitly laid out using the *Offset*,
*ArrayStride*, and *MatrixStride* decorations"
For offset we needed to include the parent and index_in_parent while
processing the type, as the offset is maintained in the
glsl_struct_field of the parent type, not in the type itself.
v2: Fix the default values for MATRIX_STRIDE, ARRAY_STRIDE and
ROW_MAJOR when the variable is not backed by a buffer object
(Antia Puentes).
v3: Update after Jason's series "SPIR-V: Use NIR deref instructions
    for UBO/SSBO access", which included just one explicit stride,
    instead of a previous patch of ours that had matrix_stride and
    array_stride (Alejandro)
Signed-off-by: Antia Puentes <[email protected]>
Signed-off-by: Alejandro Piñeiro <[email protected]>
Reviewed-by: Timothy Arceri <[email protected]>
For these interfaces, the inner members are added only once as
uniforms or resources, unlike other cases such as a uniform array of
structs.
For those wondering why issue (16) from ARB_program_interface_query
was used instead of a quote from the core spec: the core spec is not
really clear about how members of arrays of blocks should be
enumerated.
In GLSL this was also problematic, especially when we were trying to
pass the 4.5 CTS tests. See commit "glsl: Fix program interface
queries relating to interface blocks"
(4c4d9e4f032d5753034361ee70aa88d16d3a04b4) as a reference. That one
also needed to rely on issue (16) to justify the change, pointing out
that the core spec needs to be clarified.
Reviewed-by: Timothy Arceri <[email protected]>
Reviewed-by: Timothy Arceri <[email protected]>
Add the ability to link uniform blocks and shader storage blocks
using NIR, intended for ARB_gl_spirv support. Among other things, this
linking needs to take into account that everything should work without
names, as they may not be present, whereas the GLSL IR uniform block
linking was written with names at its core.
The other major difference compared with the GLSL IR linker is that we
don't deal with layouts. There are no references to std140, std430,
etc. Layouts are expressed through explicit offset, array stride and
matrix stride. That simplifies how the buffer sizes are computed, but
it also means that we couldn't use the existing methods in glsl_types,
so we needed to implement new ones.
It is worth noting that this linking does an iteration over the
glsl_types, similar to what the uniform linking does. A possible
future improvement would be to refactor both cases to share more code
than they do right now. In GLSL IR there is a class visitor,
specialized for each case, for that sharing. As adding a class visitor
in C would be more complicated, for now we just iterate in both places.
Signed-off-by: Alejandro Piñeiro <[email protected]>
Signed-off-by: Neil Roberts <[email protected]>
Signed-off-by: Antia Puentes <[email protected]>
v2: (from Timothy review)
* Fix variable naming convention
* Stop using the _function_name convention
* Don't use // for comments
* "nir/linker: Keep track of the stages referencing an UBO/SSBO"
squashed with this patch
v3: (from Caio review)
* Don't delete the linked shader on failure
* Use rzalloc_array to avoid some explicit initializations
Reviewed-by: Timothy Arceri <[email protected]>
Reviewed-by: Caio Marcelo de Oliveira Filho <[email protected]>
Helper to know whether a glsl_type is a leaf when iterating through a
complex type. Note that GLSL IR linking also uses the concept of a
leaf while doing the same iteration, although in that case it uses a
visitor. See link_uniform_blocks, process_array_leaf and others as a
reference.
Signed-off-by: Alejandro Piñeiro <[email protected]>
Signed-off-by: Antia Puentes <[email protected]>
v2:
* Moved from gl_nir_linker to nir_types, so it could be used on nir
xfb gathering (Timothy Arceri)
* Minor update after Timothy's series about record to struct
renaming landed in master.
Reviewed-by: Timothy Arceri <[email protected]>
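As a reader aid (not part of the commit), a toy Python sketch of the
leaf idea; the types and names below are illustrative, not Mesa's C
API:

  from dataclasses import dataclass
  from typing import Optional

  @dataclass
  class Type:                           # toy stand-in for glsl_type
      kind: str                         # 'struct', 'interface', 'array', 'vector', ...
      element: Optional['Type'] = None  # element type when kind == 'array'

  def type_is_leaf(t):
      # structs/interfaces, and arrays whose elements are themselves
      # structs, interfaces or arrays, are interior nodes; everything
      # else (scalars, vectors, matrices, arrays of those) is a leaf
      if t.kind in ('struct', 'interface'):
          return False
      if t.kind == 'array' and t.element.kind in ('struct', 'interface', 'array'):
          return False
      return True

  assert not type_is_leaf(Type('array', Type('struct')))  # array of structs
  assert type_is_leaf(Type('vector'))                      # e.g. a vec4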
When using SPIR-V shaders (ARB_gl_spirv), layout data is not implied
by a specific layout (std140, std430, etc.) but explicitly included in
the type (explicit values for offset, stride and row_major).
So this method is equivalent to the existing std140_size and
std430_size, but uses those explicit values.
Note that the value returned by this method is only valid if such data
is set, i.e. when dealing with SPIR-V shaders.
v2: (all changes suggested by Jason Ekstrand)
* Iterate through all struct members, instead of assuming that
fields are ordered by offset
* Use else if
* Take into account the case that explicit_stride > elem_size, to
fine-tune the final size of arrays and matrices
* Handle different bit-sizes in general, not just 32 and 64.
v3: (change suggested by Caio Marcelo de Oliveira Filho)
* fix up explicit_size() to consider interface types
Signed-off-by: Alejandro Piñeiro <[email protected]>
Signed-off-by: Antia Puentes <[email protected]>
Signed-off-by: Neil Roberts <[email protected]>
Reviewed-by: Timothy Arceri <[email protected]>
Reviewed-by: Caio Marcelo de Oliveira Filho <[email protected]>
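A minimal sketch (assumptions only, not the actual glsl_type code) of
how a size can be derived from explicit offsets and strides, in the
spirit of the new method:

  def struct_explicit_size(members):
      # members: (explicit_offset, member_size) pairs; the size is the
      # end of the member reaching furthest (fields need not be ordered)
      return max((off + size for off, size in members), default=0)

  def array_explicit_size(length, elem_size, explicit_stride):
      # only the last element may be smaller than the stride, which
      # matters when explicit_stride > elem_size
      if length == 0:
          return 0
      return explicit_stride * (length - 1) + elem_size

  assert struct_explicit_size([(0, 4), (16, 12)]) == 28
  assert array_explicit_size(3, 12, 16) == 44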
Note that the nir_types glsl_get_bit_size is not a wrapper of this
one, because for bools we want to return size 1 at the nir level but
32 at the glsl_types level.
v2: reuse the new method in order to simplify is_16bit and is_32bit
helpers (Timothy)
v3: add a comment clarifying the difference between
glsl_base_type_bit_size and glsl_get_bit_size.
Reviewed-by: Timothy Arceri <[email protected]>
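A toy illustration of the bool difference described above (the table
and function below are made up for this note, not the real helpers):

  GLSL_BIT_SIZE = {'bool': 32, 'int': 32, 'uint': 32, 'float': 32,
                   'int16': 16, 'uint16': 16, 'float16': 16,
                   'int64': 64, 'uint64': 64, 'double': 64}

  def nir_bit_size(base_type):
      # NIR represents booleans as 1-bit values
      return 1 if base_type == 'bool' else GLSL_BIT_SIZE[base_type]

  assert GLSL_BIT_SIZE['bool'] == 32 and nir_bit_size('bool') == 1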
Equivalent to the already existing ir_variable is_in_buffer_block and
is_in_shader_storage_block, adding one for uniform buffer objects. I'm
using the short forms (ssbo, ubo) to avoid overly long method names.
Reviewed-by: Timothy Arceri <[email protected]>
The data for some nir variables is only filled in for certain
modes. We now need it for UBO/SSBO too, as that info will be used when
linking for OpenGL (ARB_gl_spirv).
There is an existing comment just before that code (starting with XXX)
pointing out that binding still needs to be filled in for uniform
variables at that point and that this should be fixed, although it
doesn't specify why that's a problem or what the alternative would
be. For now we do the same for UBO/SSBO, hoping that the future fix is
done for all of them.
Reviewed-by: Timothy Arceri <[email protected]>
Providing nir variables for UBO/SSBO is not required for Vulkan, but
it is needed for OpenGL (ARB_gl_spirv), for example to gather info
from the UBO/SSBO while linking.
Unlike most cases where nir variables are created, here the type
assigned is the full type (not just the bare type). This is needed
because while linking using the nir shader we need the explicit layout
info (explicit stride, explicit offset, row_major, etc).
Also, we need to assign an interface type, which is also used by the
OpenGL linker when the variable is a UBO/SSBO. See
ir_variable::is_in_buffer_block as an example.
v2: assign interface_type to be the variable type, no need to handle
arrayness (Timothy)
Reviewed-by: Timothy Arceri <[email protected]>
No shader-db change on any Intel platform. No shader-db run-time
difference on a certain 36-core / 72-thread system at 95% confidence
(n=20).
Reviewed-by: Connor Abbott <[email protected]>
There is no reason to mark the fmul in the expression
('fmul', ('fadd', a, b), ('fadd', a, b))
as commutative. If a source of an instruction doesn't match one of the
('fadd', a, b) patterns, it won't match the other either.
This change is enough to make this pattern work:
('~fadd@32', ('fmul', ('fadd', 1.0, ('fneg', a)),
('fadd', 1.0, ('fneg', a))),
('fmul', ('flrp', a, 1.0, a), b))
This pattern has 5 commutative expressions (versus a limit of 4), but
the first fmul does not need to be commutative.
No shader-db change on any Intel platform. No shader-db run-time
difference on a certain 36-core / 72-thread system at 95% confidence
(n=20).
There are more subpatterns that could be marked as non-commutative, but
detecting these is more challenging. For example, this fadd:
('fadd', ('fmul', a, b), ('fmul', a, c))
The first fadd:
('fmul', ('fadd', a, b), ('fadd', a, b))
And this fadd:
('flt', ('fadd', a, b), 0.0)
This last case may be easier to detect. If all sources are variables
and they are the only instances of those variables, then the pattern can
be marked as non-commutative. It's probably not worth the effort now,
but if we end up with some patterns that bump up on the limit again, it
may be worth revisiting.
v2: Update the comment about the explicit "len(self.sources)" check to
be more clear about why it is necessary. Requested by Connor. Many
Python style / idiom fixes suggested by Dylan. Add missing (!!!)
opcode check in Expression::__eq__ method. This bug is the reason the
expected number of commutative expressions in the bitfield_reverse
pattern changed from 61 to 45 in the first version of this patch.
v3: Use all() in Expression::__eq__ method. Suggested by Connor.
Revert away from using __eq__ overloads. The "equality" implementation
of Constant and Variable needed for commutativity pruning is weaker than
the one needed for propagating and validating bit sizes. Using actual
equality caused the pruning to fail for my ('fmul', ('fadd', 1, a),
('fadd', 1, a)) case. I changed the name to "equivalent" rather than
the previous "same_as" to further differentiate it from __eq__.
Reviewed-by: Connor Abbott <[email protected]>
Reviewed-by: Dylan Baker <[email protected]>
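To make the pruning idea concrete, a small Python sketch (not the
actual nir_algebraic code) of an 'equivalent' comparison and how it
decides that the fmul above does not need the commutative mark:

  def equivalent(x, y):
      # structural comparison of search subpatterns: expressions match by
      # opcode and per-source recursion, leaves by plain equality
      if isinstance(x, tuple) and isinstance(y, tuple):
          return (x[0] == y[0] and len(x) == len(y)
                  and all(equivalent(s, t) for s, t in zip(x[1:], y[1:])))
      return x == y

  def needs_commutative_mark(expr):
      # a two-source commutative expression whose sources are equivalent
      # gains nothing from swapping them
      return not equivalent(expr[1], expr[2])

  assert not needs_commutative_mark(('fmul', ('fadd', 'a', 'b'), ('fadd', 'a', 'b')))
  assert needs_commutative_mark(('fadd', ('fmul', 'a', 'b'), ('fmul', 'a', 'c')))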
Reviewed-by: Connor Abbott <[email protected]>
Search patterns that are expected to have too many commutative
expressions (e.g., the giant bitfield_reverse pattern) can be added to
a white list.
This would have saved me a few hours debugging. :(
v2: Implement the expected-failure annotation as a property of the
search-replace pattern instead of as a property of the whole list of
patterns. Suggested by Connor.
Reviewed-by: Connor Abbott <[email protected]>
Reviewed-by: Dylan Baker <[email protected]>
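A hedged sketch of the idea (the names and the limit below are
assumptions, not the real nir_algebraic API): the generator rejects
patterns over the limit unless they are explicitly annotated as
expected:

  COMMUTATIVE_LIMIT = 4

  def check_commutative_count(search, count, expected_over_limit=False):
      if count > COMMUTATIVE_LIMIT and not expected_over_limit:
          raise ValueError("%r has %d commutative expressions (limit %d); "
                           "annotate the pattern if this is expected"
                           % (search, count, COMMUTATIVE_LIMIT))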
Trivial
Reviewed-by: Connor Abbott <[email protected]>
Reviewed-by: Dylan Baker <[email protected]>
The bfi/bfm behavior change replaced the bfi/bfm usage in
lower_bitfield_insert_to_shifts with actual shifts, as the name says,
but it failed to handle the offset=0, bits==32 case in the new
lowering.
v2: Use 31 < bits instead of bits == 32, to get the 31 < (iand bits,
31) -> false optimization.
Fixes regressions in dEQP-GLES31.*bitfield_insert* on freedreno.
Fixes: 165b7f3a4487 ("nir: define behavior of nir_op_bfm and nir_op_u/ibfe according to SM5 spec.")
Reviewed-by: Daniel Schürmann <[email protected]>
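A plain-Python reference (illustration only, not the NIR pattern
itself) of why the whole-word case needs its own path: when bits
covers all 32 bits, the shift-based mask computation misbehaves, so
the lowering has to select the insert value directly:

  def bitfield_insert32(base, insert, offset, bits):
      if 31 < bits:                      # offset == 0, bits == 32: whole word
          return insert & 0xffffffff
      mask = ((1 << bits) - 1) << offset
      return ((base & ~mask) | ((insert << offset) & mask)) & 0xffffffff

  assert bitfield_insert32(0xdeadbeef, 0x12345678, 0, 32) == 0x12345678
  assert bitfield_insert32(0x00000000, 0xf, 8, 4) == 0x00000f00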
The helpers are needed so we can use the syntax `instr(cond)` in the
algebraic rules. Add a simple rule for dropping a mul-div pair by the
same value when wrapping is guaranteed not to happen.
Reviewed-by: Jason Ekstrand <[email protected]>
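An illustrative sketch of the kind of rule this enables (the exact
spelling in nir_opt_algebraic.py may differ; the condition names are
borrowed from the no_signed_wrap/no_unsigned_wrap flags): (a * b) / b
only folds to a when the multiply cannot wrap:

  a, b = 'a', 'b'
  no_wrap_mul_div_sketch = [
      # division by zero is undefined anyway, so only wrapping matters
      (('idiv', ('imul(no_signed_wrap)', a, b), b), a),
      (('udiv', ('imul(no_unsigned_wrap)', a, b), b), a),
  ]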
When handling the specified ALU operations, check for the decorations
and set nir_alu_instr no_signed_wrap and no_unsigned_wrap flags accordingly.
v2: Add a glsl_base_type_is_unsigned_integer() helper. (Karol)
v3: Rename helper to glsl_base_type_is_uint().
v4: Use two flags, so we don't need the helper anymore. (Connor)
v5: Pass alu directly to handle function. (Jason)
Reviewed-by: Karol Herbst <[email protected]> [v3]
Reviewed-by: Jason Ekstrand <[email protected]>
They indicate the operation does not cause overflow or underflow.
This is motivated by SPIR-V decorations NoSignedWrap and
NoUnsignedWrap.
Change the storage of `exact` to be a single bit, so they pack
together.
v2: Handle no_wrap in nir_instr_set. (Karol)
v3: Use two separate flags, since the NIR SSA values and certain
instructions are typeless, so just no_wrap would be insufficient
to know which one was referred to. (Connor)
v4: Don't use nir_instr_set to propagate the flags, unlike `exact`,
consider the instructions different if the flags have different
values. Fix hashing/comparing. (Jason)
Reviewed-by: Karol Herbst <[email protected]> [v1]
Reviewed-by: Jason Ekstrand <[email protected]>
There doesn't seem to be any reason to keep these opcodes around:
* fnot/fxor are not used at all.
* fand/for are only used in lower_alu_to_scalar, but easily replaced
Signed-off-by: Jonathan Marek <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
We can vectorize instructions with different constant sources by creating
a new load_const and using that.
Signed-off-by: Jonathan Marek <[email protected]>
Reviewed-by: Alyssa Rosenzweig <[email protected]>
This will be used to add compat profile support for higher GL
versions.
Reviewed-by: Kenneth Graunke <[email protected]>
Fix the round64 function to specially handle round-to-nearest-even
cases for positive and negative numbers with a fraction part of 0.5.
v2: 1) Simplify unused bits (Elie Tournier)
Fixes:
KHR-GL45.gpu_shader_fp64.builtin.round_dvec2
KHR-GL45.gpu_shader_fp64.builtin.round_dvec3
KHR-GL45.gpu_shader_fp64.builtin.round_dvec4
KHR-GL45.gpu_shader_fp64.builtin.roundeven_double
KHR-GL45.gpu_shader_fp64.builtin.roundeven_dvec2
KHR-GL45.gpu_shader_fp64.builtin.roundeven_dvec3
KHR-GL45.gpu_shader_fp64.builtin.roundeven_dvec4
Signed-off-by: Sagar Ghuge <[email protected]>
Reviewed-by: Elie Tournier <[email protected]>
Acked-by: Anuj Phogat <[email protected]>
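For reference (added here, not part of the commit), the tie cases the
fix targets, checked against Python's round(), which also rounds
halves to the nearest even value:

  assert round(0.5) == 0 and round(1.5) == 2      # positive ties go to even
  assert round(-0.5) == 0 and round(-1.5) == -2   # negative ties likewise
  assert round(2.5) == 2 and round(-2.5) == -2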
Incrementing the iteration count was intended to fix an off-by-one error
when the first terminator was superseded by a later terminator. If
there is no first terminator or later terminator, there is no off-by-one
error. Incrementing the loop count creates one. This can be seen in
loops like:
do {
if (something) {
// No breaks or continues here.
}
} while (false);
Reviewed-by: Timothy Arceri <[email protected]>
Tested-by: Abel Briggs <[email protected]>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110953
Fixes: 646621c66da ("glsl: make loop unrolling more like the nir unrolling path")
The iterator `i` already walks the right amount now that it is
incremented by `dmul`, so there is no need to `* 2`. Fixes invalid
memory access in upcoming ARB_gl_spirv tests.
Failure bisected by Arcady Goldmints-Orlov.
Fixes: b019fe8a5b6 "glsl/nir: Fix handling of 64-bit values in uniform storage"
Reviewed-by: Jason Ekstrand <[email protected]>
For n_columns == 1, we have a vector which is handled by the else
case. Fixes invalid memory access in upcoming ARB_gl_spirv tests.
Failure bisected by Arcady Goldmints-Orlov.
Fixes: 81e51b412e9 "nir: Make nir_constant a vector rather than a matrix"
Reviewed-by: Jason Ekstrand <[email protected]>
bitfield_select.
bitfield_select is defined as:
bitfield_select(mask, base, insert) = (mask & base) | (~mask & insert)
matching the behavior of AMD's BFI instruction.
Reviewed-by: Connor Abbott <[email protected]>
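A direct Python transcription of that definition, added purely as a
reference (with a 32-bit mask so the check below is self-contained):

  def bitfield_select(mask, base, insert):
      return ((mask & base) | (~mask & insert)) & 0xffffffff

  assert bitfield_select(0x0000ffff, 0x12345678, 0xaaaaaaaa) == 0xaaaa5678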
This lets us use the optimization pattern
(('ult', 31, ('iand', b, 31)), False) to remove the
bcsel instruction for code originating in D3D shaders.
Reviewed-by: Connor Abbott <[email protected]>
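A quick sanity check (illustration only) of why that pattern is valid:
the masked value can never exceed 31, so the comparison always folds
to false:

  # holds for any integer b, checked here over a small range
  assert all(not (31 < (b & 31)) for b in range(1 << 16))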
The [iu]bfe and bfm instructions are defined to only use the five
least significant bits.
This optimizes a common pattern from D3D -> SPIR-V translation.
Reviewed-by: Connor Abbott <[email protected]>
That is: the five least significant bits provide the values of
'bits' and 'offset', which is the case for all hardware currently
supported by NIR that uses the bfm/bfe instructions.
This patch also changes the lowering of bitfield_insert/extract
using shifts to not use bfm and removes the flag 'lower_bfm'.
Tested-by: Eric Anholt <[email protected]>
Reviewed-by: Connor Abbott <[email protected]>
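A hedged Python reference of the SM5-style semantics described above
(it mirrors the definition, not Mesa's implementation; bits == 0
returning 0 follows the SM5 bfe behaviour):

  def bfm(bits, offset):
      bits &= 31
      offset &= 31
      return (((1 << bits) - 1) << offset) & 0xffffffff

  def ubfe(value, offset, bits):
      offset &= 31
      bits &= 31
      if bits == 0:
          return 0
      return (value >> offset) & ((1 << bits) - 1)

  assert bfm(4, 8) == 0x00000f00
  assert ubfe(0xabcd1234, 8, 4) == 0x2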
friends.
These optimizations are based on the fact that
'and(a,b) <= umin(a,b)'.
For AMD, this series moves the optimization from LLVM to NIR,
so currently no vkpipeline-db changes here.
Reviewed-by: Ian Romanick <[email protected]>
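A quick check of the underlying fact over a small unsigned range
(illustration only):

  assert all((a & b) <= min(a, b) for a in range(256) for b in range(256))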
We don't expect the output of a TXS instruction to be wider than a
vec3. Add an assert() to make sure this never happens.
Suggested-by: Jason Ekstrand <[email protected]>
Signed-off-by: Boris Brezillon <[email protected]>
Reviewed-by: Alyssa Rosenzweig <[email protected]>
In ARB_gl_spirv we'll be able to use variables for uniform buffers, so
don't use the descriptor intrinsics to lower the block access.
Reviewed-by: Jason Ekstrand <[email protected]>
Most places in NIR, we treat matrices like arrays. The one annoying
exception to this has been nir_constant where a matrix is a first-class
thing. This commit changes that so a matrix nir_constant is the same as
an array nir_constant. This makes matrix nir_constants a tiny bit more
expensive but shrinks all others by 96B.
Reviewed-by: Karol Herbst <[email protected]>
Reviewed-by: Karol Herbst <[email protected]>
Reviewed-by: Karol Herbst <[email protected]>
Now that nir_const_value is a scalar, there's no reason to keep
multiple paths here; they're just extra paths to keep working. While
we're here, we also add a vtn_fail_if check that component indices are
in-bounds.
Reviewed-by: Karol Herbst <[email protected]>
Reviewed-by: Karol Herbst <[email protected]>
Reviewed-by: Karol Herbst <[email protected]>
Reviewed-by: Karol Herbst <[email protected]>
It only accepts 32-bit integers so it should have a more descriptive
name. This patch should not be a functional change.
Reviewed-by: Karol Herbst <[email protected]>
All of the callers for this function are looking at interpolation
qualifiers and want to make sure they're declared flat. Any 64-bit
integer inputs need to be flat. It also makes the function name make
more sense, since "integer" is fairly generic.
Reviewed-by: Karol Herbst <[email protected]>