| Commit message | Author | Age | Files | Lines |
|
There is a single bit for this, so it is a binary 0 or 1, meaning an
offset of 0B or 16B respectively.
v2:
- Since brw_inst_dst_da16_subreg_nr() is known to be 1, remove it
from the expression (Curro)
Reviewed-by: Francisco Jerez <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
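A rough standalone sketch of the decode (helper name hypothetical, not
the actual brw_inst API); the single bit simply scales to a byte offset:

    #include <cassert>
    #include <cstdint>

    // Hypothetical decode: the single Align16 dst subreg bit selects
    // either a 0B or a 16B offset into the register.
    static unsigned da16_subreg_offset_bytes(uint32_t subreg_bit)
    {
       assert(subreg_bit <= 1);
       return subreg_bit * 16;
    }

    int main()
    {
       assert(da16_subreg_offset_bytes(0) == 0);
       assert(da16_subreg_offset_bytes(1) == 16);
       return 0;
    }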
|
The general idea is that with 32-bit swizzles we cannot address DF
components Z/W directly, so instead we select the region that starts
at the 16B offset into the register and use X/Y swizzles.
The above, however, has the caveat that we can't do that without
violating register region restrictions unless we do some sort of
SIMD splitting.
Alternatively, we can accomplish what we need without SIMD splitting
by exploiting the gen7 hardware decompression bug for instructions
with a vstride=0. For example, an instruction like this:
mov(8) r2.x:DF r0.2<0>xyzw:DF
Activates the hardware bug and produces this region:
Component: x0 y0 z0 w0 x1 y1 z1 w1
Register: r0.2 r0.3 r0.2 r0.3 r1.2 r1.3 r1.2 r1.3
Where r0.2 and r0.3 are r0.z:DF for the first vertex of the SIMD4x2
execution and r1.2 and r1.3 are the same for the second vertex.
Using this to our advantage we can select r0.z:DF by doing
r0.2<0,2,1>.xyxy and r0.w by doing r0.2<0,2,1>.zwzw without needing
to split the instruction.
Of course, this only works for gen7, but that is the only hardware
platform where we implement align16/fp64 at the moment.
v2: Adapted to the fact that we now do this after converting to
hardware registers (Iago)
Reviewed-by: Matt Turner <[email protected]>
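A small standalone model of the region this produces (variable names
made up; the real behavior lives in the hardware) reproduces the table
above:

    #include <cstdio>

    // Simplified model of the gen7 decompression quirk: a <0,2,1> DF
    // region at r0.2 with an xyxy swizzle reads r0.2/r0.3 for the
    // first SIMD4x2 vertex, and the bug makes the second half of the
    // compressed instruction read the same layout from r1.
    int main()
    {
       const int base_reg = 0, base_subreg = 2;
       const int swizzle_xyxy[4] = {0, 1, 0, 1}; // 32-bit X,Y,X,Y
       for (int chan = 0; chan < 8; chan++) {
          const int vertex = chan / 4;       // one vertex per SIMD4
          const int reg = base_reg + vertex; // bug: 2nd half moves to r1
          const int subreg = base_subreg + swizzle_xyxy[chan % 4];
          printf("component %d -> r%d.%d\n", chan, reg, subreg);
       }
       return 0;
    }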
|
The hardware can only operate with 32-bit swizzles, which is a rather
limiting restriction. However, the idea is not to expose this to the
optimization passes, which would be a mess to deal with. Instead, we let
the bulk of the vec4 backend ignore this fact and we fix the swizzles right
at codegen time.
At the moment the pass only needs to handle single value swizzles thanks to
the scalarization pass that runs before it.
Notice that this only works for X/Y swizzles. We will add support for Z/W
swizzles in the next patch, since they need a bit more work.
v2 (Sam):
- Do not expand swizzle of 64-bit immediate values.
v3:
- Do this after translation to hardware registers instead of doing it right
before so we don't need the force_vstride0 flag (Curro).
- Squashed patch that included FIXED_GRF in the list of register files that
need this translation (Iago).
- Remove swizzle assignments for VGRF and UNIFORM files in
convert_to_hw_regs(), they will be set by apply_logical_swizzle() (Iago).
Signed-off-by: Samuel Iglesias Gonsálvez <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
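A minimal standalone sketch of what the X/Y case amounts to (function
name hypothetical, not the actual pass):

    #include <cassert>
    #include <cstdio>

    // Assuming the usual channel encoding x=0 ... w=3: a 64-bit
    // channel c occupies 32-bit channels 2c and 2c+1, so a logical
    // .xxxx becomes a hardware .xyxy and .yyyy becomes .zwzw.
    static void hw_swizzle_for_df_channel(int chan, int out[4])
    {
       assert(chan == 0 || chan == 1); // only X/Y handled here
       out[0] = 2 * chan;              // low 32 bits
       out[1] = 2 * chan + 1;          // high 32 bits
       out[2] = out[0];                // repeat for the 2nd DF channel
       out[3] = out[1];
    }

    int main()
    {
       const char *names = "xyzw";
       for (int c = 0; c < 2; c++) {
          int s[4];
          hw_swizzle_for_df_channel(c, s);
          printf("logical .%c -> hw .%c%c%c%c\n", names[c],
                 names[s[0]], names[s[1]], names[s[2]], names[s[3]]);
       }
       return 0;
    }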
|
The hardware only supports 32-bit swizzles, which means that we can
only directly access channels XY of a DF, making access to channels ZW
more difficult, especially considering the various regioning restrictions
imposed by the hardware. The combination of both things makes handling
random swizzles on DF operands rather difficult, as there are many
combinations that can't be represented at all, at least not without
some work and some level of instruction splitting depending on the case.
Writemasks are 64-bit in general; however, XY and ZW writemasks also work
in 32-bit, which means these writemasks can't be represented natively,
adding to the complexity.
For now, we decided to try and simplify things as much as possible to
avoid dealing with all this from the get go by adding a scalarization
pass that runs after the main optimization loop. By fully scalarizing
DF instructions in align16 we avoid most of the complexity introduced
by the aforementioned hardware restrictions and we have an easier path
to an initial fully functional version for the vector backend in Haswell
and IvyBridge.
Later, we can improve the implementation so we don't necessarily
scalarize everything, iteratively adding more complexity and building
on top of a framework that is already working. Curro drafted some ideas
for how this could be done here:
https://bugs.freedesktop.org/show_bug.cgi?id=92760#c82
v2:
- Use a copy constructor for the scalar instructions so we copy all
relevant instructions fields from the original instruction.
v3: Fix indentation in one switch (Matt)
Reviewed-by: Matt Turner <[email protected]>
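A toy standalone model of what full scalarization means here (struct
and fields hypothetical, nothing like the real vec4_instruction):

    #include <cstdio>
    #include <vector>

    // Toy model (not the actual Mesa pass): one copy of the
    // instruction is emitted per enabled 64-bit channel, with a
    // single-channel writemask and a replicated swizzle on the source.
    struct Inst {
       unsigned writemask;
       unsigned swizzle[4];
    };

    static std::vector<Inst> scalarize(const Inst &inst)
    {
       std::vector<Inst> out;
       for (unsigned c = 0; c < 4; c++) {
          if (!(inst.writemask & (1u << c)))
             continue;
          Inst s = inst;                   // copy all other fields
          s.writemask = 1u << c;           // single-channel write
          for (unsigned i = 0; i < 4; i++) // replicate source channel
             s.swizzle[i] = inst.swizzle[c];
          out.push_back(s);
       }
       return out;
    }

    int main()
    {
       const Inst mov = {0xf, {0, 1, 2, 3}}; // mov dst.xyzw src.xyzw
       printf("split into %zu instructions\n", scalarize(mov).size());
       return 0;
    }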
|
There is a hardware bug affecting compressed double-precision SEL
instructions in align16 mode by which they won't read the predication
mask properly. The bug does not affect other predicated instructions
and it does not affect SEL in Align1 mode either. This was found
empirically and verified by Curro in the simulator.
Fix this by splitting double-precision SEL in Align16 mode to use an
execution size of 4.
v2: Check that the dst type is 64-bit, since we can have 16-wide single
precision bcsel instructions that also write 2 registers.
v3: Replace bcsel by SEL in all the comments as bcsel is the nir opcode
but SEL is the actual assembly instruction (Matt).
Reviewed-by: Matt Turner <[email protected]>
|
We can't propagate the conditional modifier from one instruction to
another of a different execution size / group, since that would change
the channels affected by the conditional.
Reviewed-by: Matt Turner <[email protected]>
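A minimal sketch of the check this implies (field names hypothetical):

    #include <cassert>

    // The conditional modifier may only move between instructions
    // that execute on exactly the same channels.
    struct Inst {
       unsigned exec_size;
       unsigned group;
    };

    static bool same_channels(const Inst &a, const Inst &b)
    {
       return a.exec_size == b.exec_size && a.group == b.group;
    }

    int main()
    {
       assert(same_channels({8, 0}, {8, 0}));
       assert(!same_channels({8, 0}, {4, 4})); // different size/group
       return 0;
    }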
|
v2: adapt to changes in offset()
Reviewed-by: Matt Turner <[email protected]>
|
v2 (Curro):
- Print it also for execsize < 4.
- QtrCtrl is still in effect, so print 2 * qtr_ctl + nib_ctl + 1
- Do not read the nib ctl from the instruction in gen < 7,
the field only exists in gen7+.
Reviewed-by: Matt Turner <[email protected]>
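A standalone sketch of the v2 printing math (names hypothetical):

    #include <cstdio>

    // QtrCtrl is still in effect, so the channel group shown is
    // derived from both fields, 1-based in the output.
    static void print_group(unsigned qtr_ctl, unsigned nib_ctl)
    {
       printf("%uN\n", 2 * qtr_ctl + nib_ctl + 1);
    }

    int main()
    {
       print_group(0, 0); // 1N
       print_group(0, 1); // 2N
       print_group(1, 0); // 3N
       return 0;
    }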
|
v2: do it in the same fashion as the FS backend for consistency (Curro)
Reviewed-by: Matt Turner <[email protected]>
|
From the HSW PRM, Command Reference, QtrCtrl:
"NibCtrl is only allowed for SIMD4 instructions with a DF (Double Float)
source or destination type."
v2: Assert that the type is DF (Samuel)
v3: Don't set the default group to 0 and then set it only for 4-wide
instructions. Instead, assert that exec size and group are always
a correct match and then always set the default group from the
instruction. (Curro)
Signed-off-by: Samuel Iglesias Gonsálvez <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
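A rough standalone sketch of the v3 assertion, under the assumption
that group and exec size are encoded as in the scalar backend (names
hypothetical):

    #include <cassert>

    // The channel group must line up with the execution size, and
    // NibCtrl (a group not aligned to 8, with exec size 4) is only
    // legal for DF instructions.
    static void check_group(unsigned exec_size, unsigned group, bool is_df)
    {
       assert(group % exec_size == 0);
       if (exec_size == 4 && group % 8 != 0)
          assert(is_df); // NibCtrl only for SIMD4 DF instructions
    }

    int main()
    {
       check_group(8, 0, false);
       check_group(4, 4, true); // 2nd half of a split DF instruction
       return 0;
    }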
|
Generally, instructions in Align16 mode only ever write to a single
register and don't need any form of SIMD splitting, which is why we
have never had a SIMD splitting pass in the vec4 backend. However,
double-precision instructions typically write 2 registers and in
some cases they run into certain hardware bugs and limitations
that we need to work around by splitting the instructions so we only
write to 1 register at a time. This patch implements a SIMD splitting
pass similar to the one in the scalar backend.
Because we only use double-precision instructions in Align16 mode
in gen7 (gen8+ is fully scalar and gens < 7 do not implement fp64)
the pass should be a no-op on any other generation.
For now the pass only handles the gen7 restriction where any
instruction that writes 2 registers also needs to read 2 registers.
This affects double-precision instructions reading uniforms, for
example. Later patches will extend the lowering pass adding a few
more cases.
v2:
- Move the simd lowering pass after the main optimization loop and
run copy-propagation and dce if it reports progress (Curro)
- Compute number of registers written instead of fixing it to 1 (Iago)
- Use group from backend_instruction (Iago)
- Drop assertion that checked that we only split 8-wide instructions
into 4-wide. (Curro)
- Don't assume that instructions can only be 8-wide, we might want
to use 16-wide instructions in the future too (Curro)
- Wrap gen7 workarounds in a conditional to ease adding workarounds
for other gens in the future (Curro)
- Handle dst/src overlap hazard (Curro)
- Use the horiz_offset() helper to simplify the implementation (Curro)
- Drop the assertion that checks that each split instruction writes
exactly one register (Curro)
- Use the copy constructor to generate split instructions with all
the relevant fields initialized to the values in the original
instruction instead of copying only a handful of them manually (Curro)
v3 (Iago):
- When copying to a temporary, allocate the number of registers required
for the copy based on the size written of the lowered instruction
instead of assuming that all lowered instructions produce single-register
writes
- Adapt to changes in offset()
Reviewed-by: Matt Turner <[email protected]>
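A toy standalone model of the gen7 rule the pass starts with (struct
and names hypothetical; regs_read is simplified to a per-source count):

    #include <cassert>

    // If an instruction writes two registers but some source reads
    // only one (e.g. a uniform), split it so each piece writes one
    // register at a time.
    struct Inst {
       unsigned regs_written;
       unsigned regs_read[3]; // 0 means the source is unused
       unsigned exec_size;
    };

    static unsigned lowered_width_gen7(const Inst &inst)
    {
       if (inst.regs_written > 1) {
          for (unsigned i = 0; i < 3; i++) {
             if (inst.regs_read[i] == 1) // mismatched footprints
                return inst.exec_size / inst.regs_written;
          }
       }
       return inst.exec_size;
    }

    int main()
    {
       const Inst df_mov = {2, {1, 0, 0}, 8}; // DF op reading a uniform
       assert(lowered_width_gen7(df_mov) == 4);
       return 0;
    }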
|
Just like the exec_size, we are going to need this in the vec4 backend
when we implement a simd splitting pass.
Reviewed-by: Matt Turner <[email protected]>
|
This will come in handy when we implement a simd lowering pass in a
follow-up patch.
v2: use byte_offset()
Reviewed-by: Matt Turner <[email protected]>
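A standalone model of what the helper boils down to (simplified Reg
struct, not the real backend types):

    #include <cassert>

    // Advancing a register N channels horizontally is a byte offset
    // of N channel sizes.
    struct Reg {
       unsigned byte_off;
       unsigned type_sz;
    };

    static Reg byte_offset(Reg reg, unsigned delta)
    {
       reg.byte_off += delta;
       return reg;
    }

    static Reg horiz_offset(const Reg &reg, unsigned delta)
    {
       return byte_offset(reg, delta * reg.type_sz);
    }

    int main()
    {
       const Reg df = {0, 8};
       assert(horiz_offset(df, 4).byte_off == 32); // 2nd half, SIMD8 DF
       return 0;
    }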
|
Our current data flow analysis does not take into account that channels
on 64-bit operands are 64-bit. This is a problem when the same register
is accessed using both 64-bit and 32-bit channels. This is very common
in operations where we need to access 64-bit data in 32-bit chunks,
such as the double packing and unpacking operations.
This patch changes the analysis by checking the bits that each source
or destination data type needs. Actually, rather than bits, we use
blocks of 32 bits, which is the minimum channel size.
Because a vgrf can contain a dvec4 (256 bits), we reserve 8
32-bit blocks to map the channels.
v2 (Curro):
- Simplify code by making the var_from_reg helpers take an extra
argument with the register component we want.
- Fix a couple of cases where we had to update the code to the new
way of representing live variables.
v3:
- Fix indent in multiline expressions (Matt)
- Fix comment's closing tag (Matt)
- Use DIV_ROUND_UP(inst->size_written, 16) instead of 2 * regs_written(inst)
to avoid rounding issues. The same for regs_read(i). (Curro).
- Add asserts in var_from_reg() to avoid exceeding the allocated
registers (Curro).
Reviewed-by: Francisco Jerez <[email protected]>
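A minimal standalone sketch of the block bookkeeping, assuming 8
32-bit blocks per vgrf as described (function shape hypothetical):

    #include <cassert>

    // Liveness is tracked in 32-bit blocks, 8 per vgrf (a dvec4 is
    // 256 bits), so a variable index combines the vgrf slot with a
    // 32-bit component.
    static unsigned var_from_reg(unsigned vgrf_base, unsigned reg_offset,
                                 unsigned component_32b)
    {
       assert(component_32b < 8);
       return (vgrf_base + reg_offset) * 8 + component_32b;
    }

    int main()
    {
       // The two 32-bit halves of a DF channel occupy consecutive
       // blocks of the same variable range.
       assert(var_from_reg(0, 0, 1) == var_from_reg(0, 0, 0) + 1);
       return 0;
    }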
|
Reviewed-by: Francisco Jerez <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
In the vec4 backend the generator sets the execution size to 8 for all
instructions by default, however, to implement 64-bit floating-point we
will need to split certain instructions into smaller sizes, so we need the
IR to convey this information like we do in the scalar backend. This patch
uses the execution size from the vec4 IR.
We will use this feature in a later patch when we implement a SIMD
splitting pass.
v2:
- Drop the assertion on the execution size being 8 or 4 (Curro)
- Use exec_size from backend_instruction (Curro)
Reviewed-by: Matt Turner <[email protected]>
|
Reviewed-by: Matt Turner <[email protected]>
|
Reviewed-by: Matt Turner <[email protected]>
|
We are going to need this in the vec4 backend too.
Reviewed-by: Matt Turner <[email protected]>
|
Signed-off-by: Samuel Iglesias Gonsálvez <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
Gen7 hardware does not support double immediates, so these need
to be moved in 32-bit chunks to a regular vgrf instead. Rather
than doing this every time we need to create a DF immediate,
create a helper function that does the right thing depending
on the hardware generation.
v2 (Curro):
- Use swizzle() and writemask() helpers and make tmp const.
v3 (Iago):
- Adapt to changes in offset()
Signed-off-by: Samuel Iglesias Gonsálvez <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
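A rough standalone sketch of the 32-bit splitting the gen7 path has to
do (plain C++; the real helper emits MOVs into a vgrf, not prints):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Split the double into two 32-bit halves that can be moved as UD.
    int main()
    {
       const double imm = 1.0;
       uint64_t bits;
       memcpy(&bits, &imm, sizeof(bits));
       const uint32_t lo = (uint32_t)(bits & 0xffffffffu);
       const uint32_t hi = (uint32_t)(bits >> 32);
       // Gen7 would write lo/hi to the low/high dword of each 64-bit
       // channel of the temporary; gen8+ can use the DF immediate.
       printf("low = 0x%08x, high = 0x%08x\n", lo, hi);
       return 0;
    }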
|
Reviewed-by: Matt Turner <[email protected]>
|
v2: use a MOV with a conditional_mod instead of a CMP, like we do in d2b, to skip
loading a double immediate.
v3: Fix comment (Matt)
Reviewed-by: Matt Turner <[email protected]>
|
v2 (Curro):
- Generate the flag register with a MOV with conditional_mod instead of a CMP
instruction, which has the benefit that we can skip loading a DF
0.0 constant.
- Avoid the PICK_LOW_32BIT + MOV by using the flag result and a
SEL to set the boolean result.
v3:
- Fix comment (Matt)
Reviewed-by: Matt Turner <[email protected]>
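A scalar standalone model of the lowering (this is the semantics, not
the emitted IR):

    #include <cassert>

    // A MOV with a nonzero conditional modifier sets the flag (no 0.0
    // immediate needed), and a SEL turns the flag into the boolean.
    static unsigned d2b_model(double src)
    {
       const bool flag = (src != 0.0); // MOV.nz writing the flag
       return flag ? ~0u : 0u;         // SEL picks true/false
    }

    int main()
    {
       assert(d2b_model(0.0) == 0u);
       assert(d2b_model(-2.5) == ~0u);
       return 0;
    }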
|
Reviewed-by: Matt Turner <[email protected]>
|
From the BDW PRM, Workarounds chapter:
"DF->f format conversion for Align16 has wrong emask calculation when
source is immediate."
Notice that Broadwell and later are strictly scalar at the moment though, so
this is not really necessary.
v2: Instead of moving the immediate to a vgrf and converting from there, just
convert the double immediate to float in the compiler and move the result
to the destination (Matt)
Reviewed-by: Matt Turner <[email protected]>
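A minimal sketch of the v2 approach in plain C++ (hypothetical
wrapper; the real change folds the conversion when emitting the MOV):

    #include <cassert>

    // Fold the conversion at compile time and move a float immediate
    // to the destination, so the broken DF->F immediate case never
    // reaches the hardware.
    static float fold_d2f_imm(double imm)
    {
       return (float)imm; // done in the compiler, not in the EU
    }

    int main()
    {
       assert(fold_d2f_imm(0.5) == 0.5f);
       return 0;
    }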
|
Use these helpers to implement d2f and f2d. We will reuse these helpers when
we implement things like d2i or i2d as well.
v2:
- Rename the helpers (Matt)
Reviewed-by: Matt Turner <[email protected]>
|
The opcodes are not specific to conversions to/from float, since we need
the same for conversions to/from other 32-bit types. Rename the opcodes
accordingly and change the asserts to check the size of the types involved
instead.
v2:
- Rename to VEC4_OPCODE_TO_DOUBLE and VEC4_OPCODE_FROM_DOUBLE (Matt)
Reviewed-by: Matt Turner <[email protected]>
|
Reviewed-by: Francisco Jerez <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
The pass does not support doubles in its current form.
Reviewed-by: Matt Turner <[email protected]>
|
v2: Make dst_reg_for_nir_reg() handle this for nir_register since we
want to have the correct type set before we call offset().
Reviewed-by: Matt Turner <[email protected]>
|
Reviewed-by: Matt Turner <[email protected]>
|
v2:
- Added newline before if() (Matt)
Reviewed-by: Matt Turner <[email protected]>
|
Reviewed-by: Matt Turner <[email protected]>
|
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
Basically, ALIGN1 mode will ignore swizzles on the input vectors, so we
don't want the copy propagation pass to mess with them.
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
These align1 opcodes do partial writes of 64-bit data. The problem is that
we want to use them to write to the same register to implement
packDouble2x32, but from the point of view of DCE both opcodes write to the
same register, so only the last one stands and DCE decides to eliminate the
first, which is not correct. Prevent this from happening.
v2: Make a helper in vec4_instruction to know if the instruction is an
align1 partial write. This will come in handy when we implement a
simd splitting pass in a later patch.
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
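A standalone sketch of the shape such a helper could take (enum and
struct here are made up for illustration):

    #include <cassert>

    // DCE can use this to tell that an align1 opcode only writes part
    // of each 64-bit channel, so an earlier write to the same
    // register is still live.
    enum Opcode { OP_MOV, OP_SET_LOW_32BIT, OP_SET_HIGH_32BIT };

    struct Inst {
       Opcode opcode;

       bool is_align1_partial_write() const
       {
          return opcode == OP_SET_LOW_32BIT ||
                 opcode == OP_SET_HIGH_32BIT;
       }
    };

    int main()
    {
       assert(Inst{OP_SET_LOW_32BIT}.is_align1_partial_write());
       assert(!Inst{OP_MOV}.is_align1_partial_write());
       return 0;
    }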
|
These opcodes will set the low/high 32-bit in each 64-bit data element
using Align1 mode. We will use this to implement packDouble2x32.
We use Align1 mode because in order to implement this in Align16 mode
we would need to use 32-bit logical swizzles (XZ for low, YW for high),
but the IR works in terms of 64-bit logical swizzles for DF operands
all the way up to codegen.
v2:
- use suboffset() instead of get_element_ud()
- no need to set the width on the dst
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
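A toy standalone model of the dword layout this relies on (indices
only; the real opcodes express it with a UD destination region of
stride 2). The same layout underlies the PICK_LOW/HIGH opcodes in the
next commit:

    #include <cstdio>

    // Low dwords of 64-bit elements sit at even 32-bit slots, high
    // dwords at odd ones, so a stride-2 UD write touches one half of
    // every element.
    int main()
    {
       const bool high = false; // SET_LOW_32BIT; true for SET_HIGH
       for (int elem = 0; elem < 4; elem++)
          printf("64-bit elem %d <- dword slot %d\n",
                 elem, 2 * elem + (high ? 1 : 0));
       return 0;
    }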
|
These opcodes will pick the low/high 32-bit in each 64-bit data element
using Align1 mode. We will use this, for example, to do things like
unpackDouble2x32.
We use Align1 mode because in order to implement this in Align16 mode
we would need to use 32-bit logical swizzles (XZ for low, YW for high),
but the IR works in terms of 64-bit logical swizzles for DF operands
all the way up to codegen.
v2:
- use suboffset() instead of get_element_ud()
- no need to set the width on the dst
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
Reviewed-by: Francisco Jerez <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
Add asserts so we remember to address this when we enable 64-bit
integer support, as suggested by Connor and Jason.
Reviewed-by: Francisco Jerez <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
Reviewed-by: Francisco Jerez <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
For 32-bit instructions we want to use <4,4,1> regions for VGRF
sources so we should really set a width of 4 (we were setting 8).
For 64-bit instructions we want to use a width of 2 because the
hardware uses 32-bit swizzles, meaning that we can only address 2
consecutive 64-bit components in a row. Also, Curro suggested that
the hardware is probably fixing the width to 2 for 64-bit instructions
anyway, so just go with that and use <2,2,1>.
v2:
- No need to explicitly set the vertical stride of 64-bit regions to 2,
brw_vecn_grf with a width of 2 will do that for us.
- No need to adjust the width of dst registers.
v3 (Ian):
- Make type_size and width const.
Signed-off-by: Connor Abbott <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
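A minimal standalone distillation of the width rule (helper
hypothetical):

    #include <cassert>

    // <4,4,1> regions for 32-bit VGRF sources, <2,2,1> for 64-bit
    // ones, since 32-bit swizzles only address two consecutive DF
    // components.
    static unsigned region_width(unsigned type_sz)
    {
       return type_sz == 8 ? 2 : 4;
    }

    int main()
    {
       assert(region_width(4) == 4); // float: <4,4,1>
       assert(region_width(8) == 2); // double: <2,2,1>
       return 0;
    }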
|
Reviewed-by: Francisco Jerez <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
Reviewed-by: Matt Turner <[email protected]>
|
These need to be emitted as align1 MOVs, since they need to have a
stride of 2 on the float register (whether src or dest) so that data
from another thread doesn't cross the middle of a SIMD8 register.
v2 (Iago):
- The float-to-double needs to align 32-bit data to 64-bit before doing the
conversion. This was doable in align16 when we tried to use an execsize
of 4, but with an execsize of 8 we would need another align1 opcode to do
that (since we need data to cross the middle of a SIMD register). Just
making the opcode handle this internally seems more practical than adding
another opcode just for this purpose and having the caller know about this
before converting.
- The double-to-float conversion produces 32-bit elements aligned to 64-bit
so we make the opcode re-pack the result to 32-bit and fit in one register,
as expected by SIMD4x2 operation. This still requires that callers reserve
two registers for the float data destination because we need to produce
64-bit aligned data first, and repack it later on the same destination
register, but it saves the need for a re-pack opcode just to achieve
this, making the operation complete in a single opcode. Hopefully that is worth
the weirdness of the double register allocation...
Signed-off-by: Connor Abbott <[email protected]>
Signed-off-by: Iago Toral Quiroga <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
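A toy standalone model of the two steps the d2f opcode performs
internally (plain arrays standing in for register dwords):

    #include <cstdio>

    // Convert with a stride of 2 (results aligned to 64 bits), then
    // repack to consecutive dwords in the same destination, which is
    // why callers reserve two registers.
    int main()
    {
       float tmp[8] = {};                    // two registers of dwords
       const double src[4] = {1.0, 2.0, 3.0, 4.0};
       for (int i = 0; i < 4; i++)
          tmp[2 * i] = (float)src[i];        // 64-bit aligned results
       for (int i = 0; i < 4; i++)
          tmp[i] = tmp[2 * i];               // repack into one register
       printf("%g %g %g %g\n", tmp[0], tmp[1], tmp[2], tmp[3]);
       return 0;
    }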
|
Reviewed-by: Francisco Jerez <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
Reviewed-by: Matt Turner <[email protected]>
|
Basically, this involves considering the bit-size information to set
the appropriate type on both operands and destination.
v2 (Curro):
- Don't use two temporaries (and write one of them twice) to obtain
the nir_alu_type.
Reviewed-by: Francisco Jerez <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
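A standalone sketch of the idea, assuming an encoding like nir's where
an alu type is a base type combined with a bit size (the constants
here are made up):

    #include <cassert>

    // Derive the full type in a single expression instead of going
    // through two temporaries, then pick the operand/destination
    // types from it.
    enum { BASE_FLOAT = 0x100 };

    static unsigned alu_type(unsigned base_type, unsigned bit_size)
    {
       return base_type | bit_size;
    }

    int main()
    {
       const unsigned t = alu_type(BASE_FLOAT, 64); // a 64-bit float op
       assert((t & 0xff) == 64);       // bit size picks register width
       assert((t & ~0xffu) == BASE_FLOAT);
       return 0;
    }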
|