| Commit message | Author | Age | Files | Lines |
| |
Gen7 hardware does not support double immediates, so these need
to be moved in 32-bit chunks to a regular vgrf instead. Rather
than open-coding this every time we need to create a DF immediate,
add a helper function that does the right thing depending on the
hardware generation.
v2:
- Define setup_imm_df() as an independent function (Curro)
- Create a specific builder to get rid of some instruction field
assignments (Curro).
v3:
- Get devinfo from builder (Kenneth)
Signed-off-by: Samuel Iglesias Gonsálvez <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
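A minimal sketch of the 32-bit split such a helper has to do on Gen7.
This is illustrative only, showing the bit manipulation rather than
Mesa's actual setup_imm_df(); the function name is an assumption:

    #include <cstdint>
    #include <cstring>

    // Split a 64-bit double immediate into the two 32-bit chunks that a
    // Gen7 path would move into a regular vgrf one at a time.
    static void split_double_imm(double v, uint32_t &lo, uint32_t &hi)
    {
        uint64_t bits;
        std::memcpy(&bits, &v, sizeof(bits)); // reinterpret without UB
        lo = static_cast<uint32_t>(bits & 0xffffffffu);
        hi = static_cast<uint32_t>(bits >> 32);
    }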
| |
|
| |
|
| |
|
|
|
|
|
|
|
| |
Previously, NIR registers occasionally appeared in our programs, but
they were only ever used in an SSA-like way. Now that we're trying to
support control flow, we need to conditionally move to registers based
on whether channels are active.
|
|
|
|
|
| |
For now it's still always false, but I need it in place for kernel
backwards compat support as I extend the backend for control flow.
|
|
|
|
|
|
| |
This uses the branch condition code in inst->cond to jump to either
successor[0] (condition matches) or successor[1] (condition doesn't
match).
|
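A tiny sketch of the successor selection described above; the structure
and field names are illustrative, not vc4's actual ones:

    struct block {
        struct block *successors[2];
    };

    // Pick the next block from the branch condition: successors[0] when
    // the condition matches, successors[1] when it doesn't.
    static struct block *next_block(struct block *b, bool cond_matches)
    {
        return cond_matches ? b->successors[0] : b->successors[1];
    }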
| |
We're already checking that branch instructions are within the
contents of the shader and that the proper PROG_END sequence is
present. The other thing we need in the presence of branching is to
verify that the shader doesn't read past the end of the uniforms
stream. To do that, we require that any basic block reading uniforms
start with the following instructions:
load_imm temp, <offset within uniform stream>
add unif_addr, temp, unif
These instructions are generated by userspace, and the kernel verifies
that the load_imm is of the expected offset and that the add adds it
to a uniform. We track which uniform in the stream that is, and at
draw call time fix up the uniform stream to contain the address of the
start of the shader's uniforms for that draw call.
Signed-off-by: Eric Anholt <[email protected]>
|
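A hedged sketch of the per-block check described above. The real kernel
validator's state and helpers differ; every name below is an assumption:

    #include <cstdint>

    struct uniform_reset_state {
        uint32_t expected_offset; // tracked offset into the uniform stream
    };

    // At the start of a basic block that reads uniforms, verify that the
    // userspace-generated load_imm carries the tracked offset and that
    // the following add writes the uniform address register.
    static bool validate_uniform_reset(const uniform_reset_state &s,
                                       uint32_t load_imm_value,
                                       bool add_writes_unif_addr)
    {
        return load_imm_value == s.expected_offset && add_writes_unif_addr;
    }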
|
|
|
|
|
| |
This isn't used yet; it's just a first step toward loop validation.
During the main parsing of instructions, we need to know when we hit a new
basic block so that we can reset validated state.
|
|
|
|
|
| |
This reduces how much we need to pass around as arguments, which was
becoming more of a problem with looping validation.
|
| |
|
|
|
|
|
| |
This only happens when live variables are set up, which is not part of
the normal dump but is done when we've failed to register allocate.
|
|
|
|
|
| |
Right now our CFG is always a trivial single basic block, but that will
change when we enable loops.
|
|
|
|
|
|
|
|
| |
Basically we just treat each block independently. The only inter-block
scheduling I can think of that would be interesting would be to move
texture result collection to after a short loop/if block that doesn't do
texturing. However, the kernel disallows that as part of its security
validation.
|
|
|
|
|
|
|
|
|
| |
We still decide which uniform to lower based on how many
instructions-that-need-lowering use that uniform, but now we emit a new
temporary uniform load in each of the basic blocks containing an
instruction being lowered.
This commit is best reviewed with diff -b.
|
|
|
|
|
|
| |
We need to apply the peephole pass to each of the blocks in the program.
We don't do dataflow analysis for SF across blocks, but we also don't
generate code that would need us to do so.
|
|
|
|
|
|
|
| |
The optimization passes and scheduling aren't actually ready for multiple
blocks with control flow yet (as seen by the "cur_block" references in
them instead of iterating over blocks), but this creates the structures
necessary for converting them.
|
|
|
|
|
|
|
|
| |
We already use list_foreach() all over the code, but I need to move
where instructions live as part of adding support for control flow.
Start by just converting to a helper iterator macro. (The simpler
"qir_for_each_inst()" will be used for the per-block iterator macro
later.)
|
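A self-contained sketch of the kind of wrapper macro meant here. The
real qir_for_each_inst_inorder() is built on vc4's own list macros, so
the types below are simplified stand-ins:

    struct qinst {
        struct qinst *next;
    };

    struct compile {
        struct qinst *instructions; // head of the instruction list
    };

    // Iterate every instruction in emission order, hiding the list
    // implementation behind one macro so it can change later.
    #define qir_for_each_inst_inorder(inst, c) \
        for (struct qinst *inst = (c)->instructions; inst; inst = inst->next)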
|
|
|
|
|
|
|
| |
This avoids a bunch of code gen regressions when enabling loops in vc4.
Prior to that, the GLSL that would have generated these optimizable phi
nodes was being lowered to csels between either (undef, a) or (a, a), and
those were being dealt with by nir_opt_undef and nir_opt_algebraic.
|
|
|
|
|
|
|
|
| |
The allocation has succeeded by that point, so it needs to be freed.
CovID: 1358929
Signed-off-by: Eric Engestrom <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
|
|
|
|
| |
We're passed in a freshly dup()ed fd on screen create, so we should close
it on exit. Debugged by Hugh Cole-Baker.
|
| |
This was appearing in vc4 VS/CS in mupen64, due to vertex attrib lowering
producing some constants that were getting compared.
total instructions in shared programs: 112276 -> 112198 (-0.07%)
instructions in affected programs: 2239 -> 2161 (-3.48%)
total estimated cycles in shared programs: 283102 -> 283038 (-0.02%)
estimated cycles in affected programs: 2365 -> 2301 (-2.71%)
Reviewed-by: Jason Ekstrand <[email protected]>
|
|
|
|
| |
Signed-off-by: Tim Rowley <[email protected]>
|
|
|
|
| |
Signed-off-by: Tim Rowley <[email protected]>
|
|
|
|
| |
Signed-off-by: Tim Rowley <[email protected]>
|
|
|
|
|
|
|
|
| |
Small API cleanup: make all API functions call GetContext() instead
of locally casting the handle. This makes debugging easier by
providing a single point to track context changes.
Signed-off-by: Tim Rowley <[email protected]>
|
|
|
|
|
|
| |
v2: use signed compare, remove unneeded vmask
Signed-off-by: Tim Rowley <[email protected]>
|
|
|
|
|
|
|
| |
d3d97f8 broke llvm-3.7, which has a mismatched API for
setDataLayout/getDataLayout.
Signed-off-by: Tim Rowley <[email protected]>
|
|
|
|
| |
Signed-off-by: Brian Paul <[email protected]>
|
| |
with encode tunneling
The idea of encode tunneling is to use the video buffer directly for
the encoder, but the encoder doesn't currently support interlaced
surfaces, so the OMX decoder used to set a progressive surface for
that purpose.
Since we now poll the driver for the decoder's interlacing
information, we get interlaced as the preference, just like the other
APIs (VDPAU, VA-API), which breaks transcode with tunneling.
The solution is, when a tunnel is detected, to re-allocate progressive
target buffers and then convert the interlaced decoder results into
them.
This has been tested, with the transcode results matching bit for bit
those of the previous progressive-to-progressive path.
Signed-off-by: Leo Liu <[email protected]>
Acked-by: Christian König <[email protected]>
Tested-by: Julien Isorce <[email protected]>
|
|
|
|
|
|
| |
Signed-off-by: Leo Liu <[email protected]>
Acked-by: Christian König <[email protected]>
Tested-by: Julien Isorce <[email protected]>
|
|
|
|
|
|
|
|
| |
This shader converts interlaced YUV to progressive YUV.
Signed-off-by: Leo Liu <[email protected]>
Acked-by: Christian König <[email protected]>
Tested-by: Julien Isorce <[email protected]>
|
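A CPU-side sketch of the weave idea for a single plane, assuming
byte-per-sample fields. The actual change is a shader in the vl code,
so this is purely illustrative:

    #include <cstring>

    // Weave two fields into a progressive frame: even output lines come
    // from the top field, odd output lines from the bottom field.
    static void weave_plane(const unsigned char *top,
                            const unsigned char *bottom,
                            unsigned char *out,
                            int width, int field_height)
    {
        for (int y = 0; y < field_height; y++) {
            std::memcpy(out + (2 * y) * width, top + y * width, width);
            std::memcpy(out + (2 * y + 1) * width, bottom + y * width, width);
        }
    }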
|
|
|
|
|
|
|
| |
We'll use the weave shader in a later patch.
Signed-off-by: Leo Liu <[email protected]>
Acked-by: Christian König <[email protected]>
Tested-by: Julien Isorce <[email protected]>
|
|
|
|
|
|
|
|
| |
This bug was uncovered by glsl/lower_if_to_cond_assign.
I don't know if it can be reproduced in any other way.
Cc: <[email protected]>
Reviewed-by: Ilia Mirkin <[email protected]>
|
|
|
|
|
| |
Reviewed-by: Serge Martin <[email protected]>
Tested-by: Jan Vesely <[email protected]>
|
|
|
|
|
|
|
|
|
| |
[ Francisco Jerez: Use validate_build_common for error checking,
simplify control flow slightly and handle additional exception
types. ]
Reviewed-by: Francisco Jerez <[email protected]>
Tested-by: Jan Vesely <[email protected]>
|
|
|
|
|
| |
Reviewed-by: Serge Martin <[email protected]>
Tested-by: Jan Vesely <[email protected]>
|
|
|
|
|
|
|
|
| |
header_map was the only definition left in compiler.hpp; move it into
program.hpp, which is its only user in clover/core.
Reviewed-by: Serge Martin <[email protected]>
Tested-by: Jan Vesely <[email protected]>
|
|
|
|
|
|
|
| |
Superseded by compile_program() and link_program().
Reviewed-by: Serge Martin <[email protected]>
Tested-by: Jan Vesely <[email protected]>
|
|
|
|
|
|
|
|
| |
[ Serge Martin: Fix inverted opts and log build ctor args.
Keep the log related to the build. Fix indentation ]
Reviewed-by: Serge Martin <[email protected]>
Tested-by: Jan Vesely <[email protected]>
|
| |
This gets rid of the program::build_* query methods and replaces them
with the program::build() method that returns a single data structure
containing all parameters for the last build done on the given target
device (including build logs, options and the binary itself).
[ Serge Martin: Fix inverted opts and log build ctor args ]
Reviewed-by: Serge Martin <[email protected]>
Tested-by: Jan Vesely <[email protected]>
|
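A rough sketch of what such a per-device build record might hold; the
field names are assumptions rather than clover's actual layout:

    #include <string>
    #include <vector>

    struct build_record {
        std::vector<char> binary; // result of the last codegen
        std::string opts;         // options the last build was done with
        std::string log;          // log from the last build
    };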
|
|
|
|
| |
Reviewed-by: Francisco Jerez <[email protected]>
Tested-by: Jan Vesely <[email protected]>
|
|
|
|
|
| |
Reviewed-by: Serge Martin <[email protected]>
Tested-by: Jan Vesely <[email protected]>
|
| |
This partially reverts 7e0180d57d330bd8d3047e841086712376b2a1cc.
Having two different exception subclasses for compilation and linking
makes it more difficult to share or move code between the two
codepaths, because the exact same function under the same error
condition would need to throw one exception or the other depending on
what top-level API is being implemented with it. There is little
benefit anyway because clCompileProgram() and clLinkProgram() can tell
whether they are linking or compiling a program.
Reviewed-by: Serge Martin <[email protected]>
Tested-by: Jan Vesely <[email protected]>
|
|
|
|
|
|
|
|
| |
Return an API object from an intrusive reference to a Clover object,
incrementing the reference count of the object.
Reviewed-by: Francisco Jerez <[email protected]>
Tested-by: Jan Vesely <[email protected]>
|
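A self-contained sketch of that pattern: bump the intrusive reference
count, then hand out the raw pointer as the API handle. Clover's
actual helper and types differ; these names are illustrative:

    #include <atomic>

    struct ref_counted {
        std::atomic<unsigned> refs{1};
        void retain() { refs.fetch_add(1, std::memory_order_relaxed); }
    };

    // Return an object to the API caller, who now owns one extra
    // reference and must release it later.
    template <typename T>
    T *ret_object(T &obj)
    {
        obj.retain();
        return &obj;
    }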
|
|
|
|
| |
Reviewed-by: Serge Martin <[email protected]>
Tested-by: Jan Vesely <[email protected]>
|
|
|
|
|
|
|
| |
namespace.
Reviewed-by: Serge Martin <[email protected]>
Tested-by: Jan Vesely <[email protected]>
|
|
|
|
|
|
|
|
| |
[ Serge Martin: disable internalize pass when building a library.
Otherwise some functions may be inlined and removed ]
Reviewed-by: Serge Martin <[email protected]>
Tested-by: Jan Vesely <[email protected]>
|
|
|
|
|
| |
Reviewed-by: Serge Martin <[email protected]>
Tested-by: Jan Vesely <[email protected]>
|
| |
Split the work previously done by compile_program_llvm() into
compile_program() (which simply runs the front-end and serializes the
resulting LLVM IR) and link_program() (which takes care of everything
else down to binary codegen).
[ Serge Martin: allow LLVM IR dump after compilation ]
Reviewed-by: Serge Martin <[email protected]>
Tested-by: Jan Vesely <[email protected]>
|