This just moves the code for dealing with pipe_shader_state /
pipe_compute_state / iris_uncompiled_shader to the end of the file.
Now that those do precompiles, they want to call the actual compile
functions. Putting them at the end eliminates the need for a bunch
of prototypes.
---
v2 (Kenneth Graunke): split color/depthstencil cases, fix iris_clear
---
Caio noted that this is not necessary on Gen8+:
"Before Gen8, there was a historical configuration control field to
swizzle address bit[6] for in X/Y tiling modes. This was set in
three different places: TILECTL[1:0], ARB_MODE[5:4], and
DISP_ARB_CTL[14:13]. For Gen8 and subsequent generations, the
swizzle fields are all reserved, and the CPU's memory controller
performs all address swizzling modifications."
Since we don't support earlier hardware, we can skip it entirely.
---
A bit of irritating state cross-dependency here, but nothing too hard.
---
The Vulkan driver only sets this if color writes are disabled, which
is more conservative - but would require us to inspect blend state.
(If color writes are enabled, we don't need to force anything, because
the internal signal is already correct - but it shouldn't hurt to force
it anyway.)
---
I was misreading i965 - the 3DSTATE_WM::PixelShaderKillsPixel bit from
Gen < 8 needed all of this, but the 3DSTATE_PS_EXTRA bit only needs
prog_data->uses_kill.
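So on Gen8+ the packing can collapse to a single assignment - a minimal
sketch, assuming the usual genxml field name and iris' wm_prog_data
pointer:

    /* Gen8+: no blend-state or alpha-test gymnastics needed; the
     * NIR-level discard flag is enough (field name per genxml). */
    psx.PixelShaderKillsPixel = wm_prog_data->uses_kill;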
---
-1 is a little too bogus for most games ;)
Signed-off-by: Andre Heider <[email protected]>
---
Signed-off-by: Andre Heider <[email protected]>
---
Suggested by Chris Wilson. More obvious what's going on.
---
Suggested by Chris Wilson, if only to make it obvious to the human
readers that these are volatile reads. It may also be necessary for
the compiler in a few cases.
---
When switching from bo_wait to sync-points, I missed that we turned an
if (not landed) bo_wait into a while (not landed) check_syncpt(), which
has a timeout of 0. This meant, rather than sleeping until the batch
is complete, we'd busy-loop, continually asking the kernel "is the batch
done yet???". This is not what we want at all - if we wanted a busy
loop, we'd just loop on !snapshots_landed. We want to sleep.
Add an effectively infinite timeout so that we sleep.
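A minimal sketch of the two behaviors (helper and field names are
hypothetical stand-ins for the iris ones):

    /* Buggy: a timeout of 0 returns immediately, so this loop spins,
     * repeatedly asking the kernel whether the batch has finished. */
    while (!q->snapshots_landed)
       check_syncpt(screen, q->syncpt, 0);

    /* Fixed: an effectively infinite timeout lets the kernel put the
     * thread to sleep until the batch actually completes. */
    check_syncpt(screen, q->syncpt, INT64_MAX);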
---
I want to be able to wait with a non-zero timeout from elsewhere.
---
Instead of allocating a 4K BO per query object, we can create one large
blob of memory and split it into pieces as required.
With one BO backing multiple query objects, we don't want to wait on
all of them; instead, when we write the last snapshot, we create a sync
point, and check syncpoints while waiting on a particular object.
Signed-off-by: Sagar Ghuge <[email protected]>
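A rough sketch of the suballocation scheme (all names and sizes are
hypothetical; only the carving idea comes from the message):

    #define BLOB_SIZE (64 * 4096)   /* one large BO... */
    #define SLOT_SIZE 1024          /* ...split into per-query pieces */

    struct query_blob {
       struct iris_bo *bo;          /* shared backing storage */
       uint32_t next_offset;        /* bump allocator over the blob */
    };

    /* Hand each new query object a slice of the shared BO instead of
     * its own 4K allocation. */
    static uint32_t
    blob_alloc_slot(struct query_blob *blob)
    {
       if (blob->next_offset + SLOT_SIZE > BLOB_SIZE) {
          /* ...allocate a fresh BO and restart at offset 0... */
       }
       uint32_t offset = blob->next_offset;
       blob->next_offset += SLOT_SIZE;
       return offset;
    }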
---
Fixes gl-1.0-spot-light
---
res might be NULL, at which point this is an unbind.
---
I inherited this from i965. It would be nice to track the state size
so INTEL_DEBUG=color,bat decoding can print the right number of e.g.
binding table entries or blend states, but...without a single point
of entry for state, it's a little tricky to get right. Punt for now,
and drop the dead code in the meantime.
---
i965 re-emits 3DSTATE_CONSTANT_* on every batch, so there's no point in
restoring the constants from the context. Iris actually re-pins the
constant buffers properly across the batch, and avoids re-emitting the
constant packets unless it's necessary. So, we don't want ISP_DIS.
---
Previously I had a hack in st/mesa to make it stop remapping
VARYING_SLOT_* into the naively compacted slots, which aren't
what we want. But that wasn't very feasible, as we'd have to
update all drivers, or add capability bits, and it gets messy fast.
It turns out that I can map back to VARYING_SLOT_* in about 5 LOC,
so let's just do that. It removes the need for hacks, and is easy.
This also fixes KHR-GL46.enhanced_layouts.xfb_capture_struct, which
apparently with my hack was still getting the wrong slot info.
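For reference, the inverse mapping really is tiny. A sketch, assuming
the compacted slot order follows the set bits in outputs_written (which
is what the naive compaction produces); Mesa would use its own bit
helpers rather than ffsll():

    /* The i'th compacted slot corresponds to the i'th set bit in
     * outputs_written, so drop the low bits and read off the index. */
    uint64_t bits = info->outputs_written;
    for (unsigned i = 0; i < compacted_slot; i++)
       bits &= bits - 1;                /* clear lowest set bit */
    int varying_slot = ffsll(bits) - 1; /* back to VARYING_SLOT_* */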
---
The failing sequence of events was:
1. Set a render condition. We emit it immediately on the render
engine, and stash q->bo as ice->state.compute_predicate in case
the compute engine needs it.
2. Clear the render condition. We were incorrectly leaving a stale
compute_predicate kicking around...
3. Dispatch compute. We would then read the stale compute predicate,
and try to load it into MI_PREDICATE_DATA. But q->bo may have been
freed altogether, causing us to try and use garbage memory as a BO,
adding it to the validation list, failing asserts, and tripping
EINVALs in execbuf.
Huge thanks to Mark Janes for narrowing this sporadic GL CTS failure
down to a list of 48 tests I could easily run to reproduce it. Huge
thanks to the Valgrind authors for the memcheck tool that immediately
pinpointed the problem.
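The fix amounts to dropping the stashed pointer whenever the render
condition is cleared - a sketch built on Gallium's render_condition
hook, with the surrounding details elided:

    static void
    iris_render_condition(struct pipe_context *ctx,
                          struct pipe_query *query,
                          bool condition,
                          enum pipe_render_cond_flag mode)
    {
       struct iris_context *ice = (struct iris_context *) ctx;

       if (!query) {
          /* Clearing the condition must also drop the stashed compute
           * predicate, or a later dispatch dereferences a freed q->bo. */
          ice->state.compute_predicate = NULL;
          return;
       }

       /* ...emit MI_PREDICATE on the render engine and stash q->bo as
        * ice->state.compute_predicate for the compute path... */
    }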
---
In st_nir_lower_uniforms_to_ubo(), every UBO access in the shader has
its index incremented by one to make room for the uniforms in
constbuf0. So if we use UBOs at all, we always need to include the
extra binding table entry.
To avoid doing these checks both when compiling the shader and when
assigning binding tables, store num_cbufs in iris_compiled_shader.
Fixes a bunch of Piglit and CTS tests that use UBOs but don't use
uniforms or system values. Note that some tests fitting these criteria
were passing anyway, because their UBOs were promoted to push
constants (avoiding the problem).
Reviewed-by: Kenneth Graunke <[email protected]>
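A sketch of the counting rule the message implies (NIR field names
assumed):

    /* Uniforms live in constbuf0 and every real UBO is shifted up by
     * one, so any UBO use implies the extra binding table entry. */
    if (nir->info.num_ubos > 0)
       shader->num_cbufs = nir->info.num_ubos + 1;
    else
       shader->num_cbufs = nir->num_uniforms > 0 ? 1 : 0;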
---
iris_bufmgr allocates addresses across the entire screen, since buffers
may be shared between multiple contexts. There used to be a single
special address, IRIS_BINDER_ADDRESS, that was per-context - and all
contexts used the same address. When I moved to the multi-binder
system, I made a separate memory zone for them. I wanted there to be
2-3 binders per context, so we could cycle them to avoid the stalls
inherent in pinning two buffers to the same address in back-to-back
batches. But I figured I'd allow 100 binders just to be wildly
excessive/cautious.
What I didn't realize was that we need 2-3 binders per *context*,
and what I did was allocate 100 binders per *screen*. Web browsers,
for example, might have 1-2 contexts per tab, leading to hundreds of
contexts, and thus binders.
To fix this, we stop allocating VMA for binders in bufmgr, and let
the binder handle it itself. Binders are per-context, and they can
assign context-local addresses for the buffers by simply doing a
ringbuffer style approach. We only hold on to one binder BO at a
time, so we won't ever have a conflicting address.
This fixes dEQP-EGL.functional.multicontext.non_shared_clear.
Huge thanks to Tapani Pälli for debugging this whole mess and
figuring out what was going wrong.
Reviewed-by: Tapani Pälli <[email protected]>
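A sketch of the ringbuffer-style assignment (constants and names are
hypothetical; the point is that the address comes from a per-context
range rather than screen-wide VMA):

    #define BINDER_ZONE_SIZE (64ull * 1024 * 1024) /* per-context range */
    #define BINDER_BO_SIZE   (64 * 1024)

    /* Cycle through the context-local zone; since only one binder BO
     * is alive at a time, a recycled address can never conflict. */
    static uint64_t
    binder_assign_address(struct iris_binder *binder, uint64_t zone_start)
    {
       uint64_t address = zone_start + binder->next_offset;
       binder->next_offset = (binder->next_offset + BINDER_BO_SIZE)
                             % BINDER_ZONE_SIZE;
       return address;
    }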
---
We use > for IRIS_MEMZONE_DYNAMIC because IRIS_BORDER_COLOR_POOL_ADDRESS
lives at the very start of that zone. However, IRIS_MEMZONE_SURFACE and
IRIS_MEMZONE_BINDER are normal zones. They used to be a single zone
(surface) with a single binder BO at the beginning, similar to the
border color pool. But when I moved us to multiple binders, I made them
have a real zone (if a small one). So both zones should use >=.
Reviewed-by: Tapani Pälli <[email protected]>
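Sketched as the zone-classification helper might look after the fix
(layout simplified; zone names from the message):

    static enum iris_memory_zone
    memzone_for_address(uint64_t address)
    {
       /* The border color pool sits exactly at the start of DYNAMIC,
        * so DYNAMIC keeps the strict '>'... */
       if (address == IRIS_BORDER_COLOR_POOL_ADDRESS)
          return IRIS_MEMZONE_BORDER_COLOR_POOL;
       if (address > IRIS_MEMZONE_DYNAMIC_START)
          return IRIS_MEMZONE_DYNAMIC;

       /* ...but SURFACE and BINDER are ordinary zones whose start
        * addresses belong to the zone, hence '>='. */
       if (address >= IRIS_MEMZONE_SURFACE_START)
          return IRIS_MEMZONE_SURFACE;
       if (address >= IRIS_MEMZONE_BINDER_START)
          return IRIS_MEMZONE_BINDER;

       return IRIS_MEMZONE_SHADER;
    }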
---
Re-emitting 3DSTATE_SO_BUFFERS can be hazardous, as it could zero
offsets. Plus, it's just not necessary - BLORP doesn't change these.
---
INTEL_DEBUG=reemit was breaking streamout tests by re-emitting
3DSTATE_SO_BUFFER commands that tell the HW to zero the SO write
offsets. We would need to alter them to use 0xFFFFFFFF for the offset.
Also, have each upload function only flag bits relevant to its own
pipeline.
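For reference, the hardware offers an escape hatch for re-emission - a
sketch, with the 3DSTATE_SO_BUFFER field name assumed from genxml:

    /* When re-emitting, don't reset the write offsets: an offset of
     * 0xFFFFFFFF tells the hardware to reload the current value
     * instead of starting over at zero. */
    sob.StreamOffset = 0xFFFFFFFF;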
---
See commit 31e4c9ce400341df9b0136419b3b3c73b8c9eb7e in i965.
---
For combined depth/stencil formats, we may want to blit only one half.
If PIPE_MASK_Z is set in the blit mask, blit depth; if PIPE_MASK_S is
set, blit stencil.
---
st/mesa never asks for this today, but in theory someone might, and we
don't support it.
---
dEQP-GLES3.functional.rasterization.fbo.rbo_multisample_*.primitives.points
---
I think this was an attempt to work around various sample mask bugs I
had early on. It's not correct. A sample mask of 0 is legal and means
to disable all samples.
Fixes dEQP-GLES31.functional.texture.multisample.*.*sample_mask*
---
loader shouldn't try, but let's be paranoid
---
I had a hack in place earlier to pass the query type as q->index
for the regular statistics query, but we ended up adjusting the
interface and adding a new query type. Use that instead, fixing
pipeline statistics queries since the rebase.
---
We were dividing by 4 in calculate_result_on_gpu(), and also in
iris_get_query_result(). We should stop doing the latter, and instead
divide by 4 in calculate_result_on_cpu() as well.
Otherwise, if snapshots were available, and you hit the
calculate_result_on_cpu() path, but requested it be written to a QBO,
you'd fail to get a divide.
---
This is actually stored in ice->state, as it isn't gen-specific
---
This is an awkward corner case. We create batches in order, each of
which creates and pins a BO. The other batches may not be set up yet,
so it may not be safe to ask whether they reference a BO.
Just avoid this for now. We could avoid it for other context-local BOs
too, but we currently don't have a flag for that (and I'm not certain
whether it's worth it).
---
Various places in the transfer code need to know whether they must
read the existing resource's values. Rather than checking both flags
everywhere, just make PIPE_TRANSFER_DISCARD_WHOLE_RESOURCE also flag
PIPE_TRANSFER_DISCARD_RANGE - if we can discard everything, we can
discard a subrange, too.
Obviously, we can do better for PIPE_TRANSFER_DISCARD_WHOLE_RESOURCE,
but eventually u_threaded_context should handle swapping out buffers
for new idle buffers, anyway. In the meantime, this is at least better.
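The flag promotion is a one-liner at map time - a sketch using the
classic Gallium flag names:

    /* If we may discard the whole resource, we may certainly discard
     * the mapped subrange; set both bits so the rest of the transfer
     * code only ever has to test DISCARD_RANGE. */
    if (usage & PIPE_TRANSFER_DISCARD_WHOLE_RESOURCE)
       usage |= PIPE_TRANSFER_DISCARD_RANGE;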
---
BLORP uses the render engine to write to buffers, and we need to flush
that data out to the actual surface (finishing the write). Then, the
rest of this function invalidates any caches that might have stale data
which needs to be refetched.
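A sketch of the ordering (iris' pipe-control helper and flag names
assumed):

    /* Flush first: push BLORP's render-engine writes out to memory... */
    iris_emit_pipe_control_flush(batch,
                                 PIPE_CONTROL_RENDER_TARGET_FLUSH |
                                 PIPE_CONTROL_CS_STALL);

    /* ...then invalidate, so stale lines are refetched afterward. */
    iris_emit_pipe_control_flush(batch,
                                 PIPE_CONTROL_TEXTURE_CACHE_INVALIDATE |
                                 PIPE_CONTROL_CONST_CACHE_INVALIDATE);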
---
I don't know if this is required - surprisingly, I haven't seen it
matter - but I'd like to use it for multi-slice transfer maps. We may
as well do the right thing.
---
There are a variety of ways to fix this, many of which are simple, but
I could use some advice on which ones other people prefer, and so we'll
punt until after the holidays.
---
We have to use SURFTYPE_BUFFER and ISL_FORMAT_RAW for these.
---
We were relying on CSE/GVN/etc to coalesce all intrinsics that load the
same value, but that's a bad idea. We might have a couple intrinsics
that reload the same value. If so, we only want to set up the uniform
on the first one we see.
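A sketch of the dedup pattern (generic; the real pass keys on whatever
identifies the loaded value):

    struct slot_table {
       uint32_t key[64];    /* which value each uniform slot holds */
       unsigned count;
    };

    /* Reuse an existing slot if this value was already seen; only the
     * first sighting sets up a new uniform. */
    static unsigned
    get_uniform_slot(struct slot_table *t, uint32_t key)
    {
       for (unsigned i = 0; i < t->count; i++) {
          if (t->key[i] == key)
             return i;            /* reloaded value: reuse */
       }
       t->key[t->count] = key;    /* first load: set up the uniform */
       return t->count++;
    }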