| Commit message | Author | Age | Files | Lines |
|
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Cc: "9.2" <[email protected]>
|
Once the compiler properly checks for default precision qualifiers,
these shaders will cease to compile.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Cc: "9.2" <[email protected]>
|
Send it straight to the Department of Redundancy Department.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
For some reason, we didn't use this information even though the VS
backend has computed it (albeit poorly) for ages.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
Unlike the FS, the VS backend already computed the binding table size.
However, it did so poorly: after compilation, it looked to see if any
pull constants/textures/UBOs were in use, and set num_surfaces to the
maximum surface index for that category. If the VS only used a single
texture or UBO, this overcounted by quite a bit.
The shader time surface was also noted at state upload time (during
drawing), not at compile time, which is inefficient. I believe it also
had an off-by-one error.
This patch computes it accurately, while also simplifying the code.
It also renames num_surfaces to binding_table_size, since num_surfaces
wasn't actually the number of surfaces used. For example, a VS that
used one UBO and no other surfaces would have set num_surfaces to
SURF_INDEX_VS_UBO(1) == 18, rather than 1. A bit of a misnomer there.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
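As a rough illustration of the new approach (placeholder struct and field
names, not the actual i965 code), the idea is simply to record the largest
surface index the compiler actually referenced and derive the table size
from it:

    /* Hypothetical sketch: track the largest surface index the compiler
     * actually emitted a reference to, then size the binding table from it. */
    struct example_prog_data {
       unsigned binding_table_size;   /* surfaces 0..size-1 are in the table */
    };

    void note_surface_use(struct example_prog_data *prog_data,
                          unsigned surf_index)
    {
       /* The table must cover this index, so its size is index + 1. */
       if (surf_index + 1 > prog_data->binding_table_size)
          prog_data->binding_table_size = surf_index + 1;
    }

A single texture at index 0 then yields a table of size 1, instead of an
entry for every possible slot in that surface category.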
|
This will be useful for the next commit.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
Computing the minimum size was easy, and done at compile-time for no
extra overhead here. Making the binding table smaller wastes less batch
space.
Adding the CACHE_NEW_WM_PROG dirty bit isn't strictly necessary, since
other atoms depend on it and flag BRW_NEW_SURFACES. However, it's best
to add it for clarity and safety. It shouldn't add any new overhead.
v2: Use binding_table_size, rather than max_surface_index.
Signed-off-by: Kenneth Graunke <[email protected]>
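On the consumption side, the effect is roughly the following (illustrative
sketch only; the function and parameters are made up, not the driver's real
upload code): the upload loop runs over the compile-time size rather than a
fixed worst-case count, so a small shader emits a small table.

    #include <stdint.h>

    /* Copy only the entries the shader can actually reference. */
    unsigned emit_binding_table(uint32_t *out, const uint32_t *surf_offsets,
                                unsigned binding_table_size)
    {
       for (unsigned i = 0; i < binding_table_size; i++)
          out[i] = surf_offsets[i];
       return binding_table_size;   /* number of entries written */
    }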
|
By tracking the maximum surface index used by the shader, we know just
how small we can make the binding table.
Since it depends entirely on the shader program, we can just compute
it once at compile time, rather than at binding table emit time (which
happens during drawing).
v2: Store binding_table_size, rather than max_surface_index, for
consistency with the VS (which needs to be able to represent 0
surfaces).
Signed-off-by: Kenneth Graunke <[email protected]>
|
SURF_INDEX_DRAW() has been the identity function since the dawn of time,
and both the shader code and binding table upload code relied on that,
simply using X rather than SURF_INDEX_DRAW(X).
Even if that continues to be true, using the macro clarifies the code.
The comment about draw buffers needing to be first in order for
headerless render target writes to work turned out to be wrong; with
this change, SURF_INDEX_DRAW can be changed to arbitrary indices and
everything continues working.
The confusion was over the "Render Target Index" field in the FB write
message header. If it were a binding table index, then RT 0 would have
to be at index 0 for headerless FB writes to work. However, it's
actually an index into the blend state table, so there's no problem.
Signed-off-by: Kenneth Graunke <[email protected]>
Cc: Paul Berry <[email protected]>
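A trimmed-down sketch of the convention in question (simplified, not the
driver's real definitions) shows why the macro is worth writing even while
it stays the identity mapping:

    /* Render target i's binding table slot; currently the identity. */
    #define SURF_INDEX_DRAW(i) (i)

    /* before: table[rt] = offset;                  relies on the identity   */
    /* after:  table[SURF_INDEX_DRAW(rt)] = offset; states the intent, and
     *         keeps working if the layout ever changes                      */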
|
Now that we have the number of samplers available, we don't need to
iterate over all 16. This should be particularly helpful for vertex
shaders.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
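For illustration (hypothetical function, not the real state upload code),
the loops change shape roughly like this:

    /* Loop bounds shrink from a fixed 16 to the per-stage sampler count. */
    void upload_samplers(unsigned sampler_count)
    {
       for (unsigned s = 0; s < sampler_count; s++) {
          /* ... emit SAMPLER_STATE for sampler s ... */
       }
    }

A vertex shader that uses no textures then skips the loop entirely.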
|
Previously, we computed sampler counts when generating the SAMPLER_STATE
table. By computing it earlier, we should be able to shorten a bunch of
loops.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
This allows us to avoid uploading the VS sampler state table if only the
fragment program changes.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
Now, each shader stage has a sampler state table that only refers to the
samplers actually used by that stage. This should make the VS table
nonexistent or very small.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
This allows us to coalesce the brw_samplers and gen7_samplers atoms.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
Also upload separate sampler default/texture border color entries.
At the moment, this is completely idiotic: both tables contain exactly
the same contents, so we're simply wasting batch space and CPU time.
However, soon we'll only upload data for textures actually /used/ in
a particular stage, which will usually make the VS table empty and
very likely eliminate all redundancy. This is just a stepping stone.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
Like the previous patch, this simply pushes direct access to brw->wm up
one level in the call chain. Rather than passing the whole array, we
just pass a pointer to the correct spot in the array, similar to what we
do for the actual sampler state structure.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
When we begin uploading separate sampler state tables for VS and FS,
we won't be able to use &brw->wm.sdc_offset[ss_index]. By passing it in
as a parameter, we push the problem up to the caller.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
Currently, we only have a single sampler state table shared among all
stages, so we just copy wm.sampler_count into vs.sampler_count.
In the future, each shader stage will have its own SAMPLER_STATE table,
at which point we'll need these separate sampler counts.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
I believe the data flow analysis actually works now, and it should be
safe to re-enable global copy propagation. It even does things now.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
Since the initial value for livein is an overestimation (0xffffffff),
it's extremely likely that it will shrink, which means we can't simply
OR in new bits - we need to fully recompute it based on the current
liveout values.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
Since we start with an overestimation of livein (0xffffffff), successive
steps can actually take away values. This means we can't simply OR in
new liveout values; we need to recompute it from scratch at each
iteration of the fixed-point algorithm.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
The starting block always has livein = 0 and liveout = COPY. Since we
start with real data, not estimates, there's no need to refine it with
the fixed-point algorithm.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
The previous commit properly initialized liveout, so this older (and
incorrect) initialization is no longer necessary.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
Previously, livein was initialized to 0 for all blocks. According to
the textbook, it should be the universal set (~0) for all blocks except
the one representing the start of the program (which should be 0).
liveout also needs to be initialized to COPY for the initial block.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
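A minimal sketch of that initialization, using plain 32-bit sets as
stand-ins for the pass's per-block bitsets (block 0 is the start block
here; everything else is illustrative):

    #include <stdint.h>

    void init_dataflow(uint32_t *livein, uint32_t *liveout,
                       const uint32_t *copy, int num_blocks)
    {
       livein[0] = 0;            /* nothing reaches the start block */
       liveout[0] = copy[0];     /* only its own copies leave it    */
       for (int b = 1; b < num_blocks; b++) {
          livein[b] = ~0u;       /* universal set: refined downward later  */
          liveout[b] = 0;        /* recomputed on the first fixed-point pass */
       }
    }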
|
According to page 360 of the textbook, the proper formula for liveout
is:
CPout(i) = COPY(i) union (CPin(i) - KILL(i))
Previously, we omitted COPY.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
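Translated into code, one liveout update step looks roughly like this
(uint32_t sets standing in for the pass's real bitsets):

    #include <stdint.h>

    /* CPout(i) = COPY(i) union (CPin(i) - KILL(i)), as bit operations. */
    uint32_t compute_liveout(uint32_t copy, uint32_t livein, uint32_t kill)
    {
       return copy | (livein & ~kill);
    }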
|
Excluding the existing liveout bits is a deviation from the textbook
algorithm. The reason for doing so was to determine if the value
changed, which means the fixed-point algorithm needs to run for another
iteration.
The simpler way to do that is to save the value from step (N-1) and
compare it to the new value at step N.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
This is the "COPY" set from Muchnick's textbook, which is necessary
to do the dataflow algorithm correctly.
v2: Simplify initialization based on Paul Berry's observation that
out_acp contains exactly what needs to be in the COPY set.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
Although this function currently only initializes the KILL set, it will
soon initialize other data flow sets as well.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
To compute the actual liveout/livein data flow values, we start with
some initial values and apply a fixed-point algorithm until they settle.
Previously, we iterated through all blocks, updating both liveout and
livein together in one pass. This is awkward, since computing livein
for a block requires knowing liveout for all parent blocks. Not all
of those parent blocks may have been processed yet.
This patch separates the two. First, we update liveout for all blocks.
At iteration N of the fixed-point algorithm, this uses livein values
from iteration N-1. Secondly, we update livein for all blocks. At
step N, this uses the liveout information we just computed (in step N).
This ensures each computation has a consistent picture of the data,
rather than seeing a random mix of data from steps N-1 and N depending
on the order of the blocks in the CFG data structure.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
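A structural sketch of the two-pass iteration described above, with
uint32_t sets and a made-up block type rather than the pass's real data
structures:

    #include <stdbool.h>
    #include <stdint.h>

    struct block {
       int num_parents;
       int parent[4];              /* indices of predecessor blocks */
       uint32_t copy, kill;
       uint32_t livein, liveout;
    };

    void propagate(struct block *b, int n)
    {
       bool progress;
       do {
          progress = false;

          /* Pass 1: update liveout for every block, using livein values
           * from the previous iteration. */
          for (int i = 0; i < n; i++) {
             uint32_t new_out = b[i].copy | (b[i].livein & ~b[i].kill);
             if (new_out != b[i].liveout) {
                b[i].liveout = new_out;
                progress = true;
             }
          }

          /* Pass 2: recompute livein from scratch, as the intersection of
           * all parents' just-computed liveout. Block 0 is the start block
           * and keeps livein = 0. */
          for (int i = 1; i < n; i++) {
             uint32_t new_in = ~0u;
             for (int p = 0; p < b[i].num_parents; p++)
                new_in &= b[b[i].parent[p]].liveout;
             if (new_in != b[i].livein) {
                b[i].livein = new_in;
                progress = true;
             }
          }
       } while (progress);
    }

Progress is detected by comparing each newly computed set against its
previous value, matching the change described earlier in this series.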
|
This variable indicates that the fixed-point algorithm made changes to
the data at this step, so it needs to run for another iteration.
"progress" seems a nicer name for that.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
The fixed-point algorithm needs to run at least once, so a do-while loop
is more natural.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
The dataflow analysis used for global copy propagation is severely
broken, and I believe it doesn't actually do anything. Fixing it will
require a lot of changes, each of which might break things.
Once all the fixes land, we can re-enable this.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Emil Velikov <[email protected]>
|
Any decent compiler will do this for us, although doing it explicitly
will make grepping through the code a lot easier.
v2: In both mixer and query interface
v3: rebase
Reviewed-by: Christian König <[email protected]> [v1]
Signed-off-by: Emil Velikov <[email protected]>
|
The code should loop through and clean up all three (VL_NUM_COMPONENTS) idct
buffers, rather than cleaning up the first one three times.
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Emil Velikov <[email protected]>
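The shape of the fix is roughly this (placeholder types and field names;
the constant is defined locally here as a stand-in for the vl one):

    #include <stdlib.h>

    enum { VL_NUM_COMPONENTS = 3 };

    struct idct_buffer { void *data; };

    void cleanup_idct_buffers(struct idct_buffer *buf)
    {
       /* Index with the loop counter; before the fix this effectively
        * touched buf[0] on every iteration. */
       for (int i = 0; i < VL_NUM_COMPONENTS; ++i) {
          free(buf[i].data);
          buf[i].data = NULL;
       }
    }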
|
Check if we have successfully allocated memory.
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Emil Velikov <[email protected]>
|
Free any allocated memory and return BadAlloc if create_video_buffer()
has failed to create a buffer.
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Emil Velikov <[email protected]>
|
Not seen in the wild yet, but seems like a reasonable thing to do.
[suggested by Christian]
Signed-off-by: Emil Velikov <[email protected]>
Reviewed-by: Christian König <[email protected]>
|
I was looking into some minor 422 issues/discrepancies I noticed long
ago using vdpau on my rv790.
I noticed that there is code that is halving height rather than width -
422 is full height AFAIK.
Making the changes below doesn't actually make any noticeable difference
to what I was looking into.
Maybe there are more, but here are three I've found so far.
Reviewed-by: Christian König <[email protected]>
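For reference, the subsampling arithmetic involved (a generic helper, not
the driver's actual code): 4:2:2 keeps full vertical chroma resolution and
halves only the width, whereas 4:2:0 halves both dimensions.

    /* Chroma plane dimensions for common subsampling schemes. */
    void chroma_size(int is_422, int width, int height,
                     int *chroma_w, int *chroma_h)
    {
       if (is_422) {
          *chroma_w = (width + 1) / 2;   /* 4:2:2: half width ...  */
          *chroma_h = height;            /* ... but full height    */
       } else {
          *chroma_w = (width + 1) / 2;   /* 4:2:0: half width ...  */
          *chroma_h = (height + 1) / 2;  /* ... and half height    */
       }
    }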
|
Fixes "Uninitialized scalar variable" defect reported by Coverity.
Signed-off-by: Vinson Lee <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
|
We are getting close to the maximum number of BRW_NEW_* bits that can
be stored in brw->state.dirty.brw without overflowing 32 bits, and
geometry shaders are going to add more. Add a STATIC_ASSERT so that
we will be alerted when we need to switch to 64 bits.
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
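The guard is essentially a compile-time check of the following shape
(simplified macro and a placeholder flag count; the driver has its own
STATIC_ASSERT and the real BRW_NEW_* list):

    /* Classic negative-array-size trick: compiles only if cond is true. */
    #define STATIC_ASSERT(cond) \
       do { (void) sizeof(char [1 - 2 * !(cond)]); } while (0)

    #define BRW_NUM_STATE_BITS 30   /* placeholder count of BRW_NEW_* flags */

    void check_dirty_bits_fit(void)
    {
       /* Fails to compile once the flags no longer fit in a 32-bit field. */
       STATIC_ASSERT(BRW_NUM_STATE_BITS <= 32);
    }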
|
Signed-off-by: Christian König <[email protected]>
|
Signed-off-by: Christian König <[email protected]>
|
Signed-off-by: Christian König <[email protected]>
|
Signed-off-by: Christian König <[email protected]>
|
Signed-off-by: Christian König <[email protected]>
|
Tested by examining generated TGSI shaders from piglit/glsl-routing.
Cc: [email protected]
Reviewed-by: Henri Verbeet <[email protected]>
Tested-by: Henri Verbeet <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
Since we expose non-NV12 formats as supported when there is no decoder
profile selected, make sure that those formats are actually allowed to
be allocated.
Signed-off-by: Ilia Mirkin <[email protected]>
Tested-by: Emil Velikov <[email protected]>
Cc: "9.2" <[email protected]>
|
Previously, we were asserting that each driver specified an NConfigOptions
value exactly equal to the number of options it supplied, leading to frequent
bugs when people would forget to adjust the value when adjusting driver
options. Instead, just overallocate the table by a bit and leave sanity
checking to the assert in findOption().
Reviewed-by: Kenneth Graunke <[email protected]>
|
Consistently using a "The ___ driver hook." line at the top of each
function's comment block makes it easy to see at a glance what function
is being implemented.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
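For example, such a comment block might look like this (the function name
is made up, shown only to illustrate the convention):

    /**
     * The GenerateMipmap() driver hook.
     *
     * Generates the remaining mipmap levels for the given texture.
     */
    void example_generate_mipmap(void)
    {
       /* ... */
    }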
|