| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
| |
This will be used for shared, global and local memory areas.
Signed-off-by: Samuel Pitoiset <[email protected]>
Reviewed-by: Ilia Mirkin <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
|
|
|
|
|
|
|
|
|
| |
The size of shared variables needs to be stored in gl_compute_program
in order to set up pipe_compute_state::req_local_mem.
Signed-off-by: Samuel Pitoiset <[email protected]>
Reviewed-by: Ilia Mirkin <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
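As an illustration of the plumbing described above, here is a minimal standalone sketch in C; the structs are simplified stand-ins for the real gl_compute_program and pipe_compute_state types, and the SharedSize field name is taken from the commit's intent rather than verified against the tree.

    /* Simplified stand-ins, not the real Mesa types. */
    #include <stdio.h>

    struct gl_compute_program_sketch {
       unsigned SharedSize;     /* total bytes of declared shared variables */
    };

    struct pipe_compute_state_sketch {
       unsigned req_local_mem;  /* local (shared) memory requested, in bytes */
    };

    int main(void)
    {
       struct gl_compute_program_sketch prog = { .SharedSize = 2048 };
       struct pipe_compute_state_sketch cs = { .req_local_mem = prog.SharedSize };

       printf("req_local_mem = %u bytes\n", cs.req_local_mem);
       return 0;
    }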
|
|
|
|
|
|
|
|
|
|
| |
This will allow querying the underlying drivers for the maximum
total storage size of all variables declared as <shared>, via
PIPE_COMPUTE_CAP_MAX_LOCAL_SIZE.
Signed-off-by: Samuel Pitoiset <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
Reviewed-by: Ilia Mirkin <[email protected]>
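A rough sketch of how a state tracker might use such a cap; the helper below merely stands in for the real driver query, and the 32 KiB value is only an assumed example limit.

    #include <stdbool.h>
    #include <stdint.h>

    /* Stand-in for querying PIPE_COMPUTE_CAP_MAX_LOCAL_SIZE from the driver. */
    static uint64_t
    query_max_local_size_sketch(void)
    {
       return 32 * 1024;  /* assumed example limit, not a real driver value */
    }

    /* Reject a compute program whose shared variables exceed the limit. */
    static bool
    shared_storage_fits(uint64_t total_shared_bytes)
    {
       return total_shared_bytes <= query_max_local_size_sketch();
    }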
|
|
|
|
|
|
|
|
| |
Looks like the various maximums were never plumbed through.
Signed-off-by: Ilia Mirkin <[email protected]>
Signed-off-by: Samuel Pitoiset <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Until now there has been only one type of color buffer that needs
to be resolved, namely single-sampled fast clears. As even the
sampler engine in the GPU doesn't understand the associated metadata,
the color values always need to be resolved prior to reading them.
From SKL onwards there is a new scheme, lossless compression of
single-sampled color buffers. This is something that is understood
by the sampling engine, and therefore resolving these types of
buffers before sampling is not necessary.
This patch adds the means to make that distinction when considering
whether a resolve is needed.
Signed-off-by: Topi Pohjolainen <[email protected]>
Reviewed-by: Ben Widawsky <[email protected]>
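The distinction can be pictured with a small sketch; the enum and helper below are invented for illustration and are not the driver's actual identifiers.

    enum aux_usage_sketch {
       AUX_NONE_SKETCH,
       AUX_FAST_CLEAR_SKETCH,           /* sampler can't read it: resolve first */
       AUX_LOSSLESS_COMPRESSION_SKETCH  /* SKL+: sampler understands it as-is */
    };

    static int
    needs_resolve_before_sampling(enum aux_usage_sketch usage)
    {
       return usage == AUX_FAST_CLEAR_SKETCH;
    }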
|
|
|
|
|
| |
Signed-off-by: Topi Pohjolainen <[email protected]>
Reviewed-by: Ben Widawsky <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In addition to simply calling miptree_create(), the higher-level
call intel_miptree_create() also considers whether the buffer should
be associated with an auxiliary buffer based on the given format.
Here we are allocating an auxiliary buffer which in turn has a
format that would mislead intel_miptree_create_layout() later on
into trying to associate the auxiliary buffer with an auxiliary
buffer of its own.
To prevent this, the actual buffer creation logic was split out
into its own function. Let's invoke that instead.
v2 (Ben): Do not signal msaa layout with explicit argument but
using layout_flags instead.
Signed-off-by: Topi Pohjolainen <[email protected]>
Reviewed-by: Ben Widawsky <[email protected]>
|
|
|
|
|
|
|
|
| |
This allows ls and scripts to get the file names in the correct order of
optimization.
Signed-off-by: Ben Widawsky <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The L3 partitioning code tries to look at all programs - both render
programs (VS/TCS/TES/GS/FS) and compute (CS).
After calling brw_clear_cache, all prog_data pointers are invalid and
point to freed data. The intention was that flagging the dirty bits for
all programs would cause the next draw call to re-run the atoms for each
program stage, uploading new programs and installing new, valid pointers.
However, this doesn't quite work in our new multi-pipeline world. When
drawing or dispatching a compute workload, we only consider the programs
for the appropriate pipeline: drawing sets up VS/TCS/TES/GS/FS, but not
CS, and vice versa. This leaves pointers dangling a bit longer than
intended.
The L3 configuration code tries to inspect the prog_data for all shader
stages, so that we avoid having to reconfigure it when swapping back and
forth between render and compute workloads. So we can't have dangling
pointers.
The fix is simple: have brw_clear_cache NULL out stale prog_data
pointers, making them safe to inspect. The next L3 configuration pass
will see either the render shaders or the compute shader as missing
for one go-around, but will pick them up once both pipelines have run.
In other words, we'll simply reconfigure L3 twice, which is safe,
if a tiny bit wasteful - but then again, we just threw every compiled
shader we had on the floor and started recompiling them from scratch,
which is massively more wasteful, so it's not much of a concern.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=93790
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
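A sketch of the fix's shape; the struct is a simplified stand-in for brw_context and only illustrates the idea of clearing the cached pointers along with the cache itself.

    #include <stddef.h>

    struct brw_context_sketch {
       const void *vs_prog_data, *tcs_prog_data, *tes_prog_data;
       const void *gs_prog_data, *wm_prog_data, *cs_prog_data;
    };

    static void
    brw_clear_cache_sketch(struct brw_context_sketch *brw)
    {
       /* ... free the program cache and every compiled shader ... */

       /* Clear stale pointers so code that merely inspects them (like the
        * L3 partitioning pass) sees NULL instead of freed memory. */
       brw->vs_prog_data = brw->tcs_prog_data = brw->tes_prog_data = NULL;
       brw->gs_prog_data = brw->wm_prog_data = brw->cs_prog_data = NULL;
    }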
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If there is no pipe info log, we would unconditionally deref length,
which was only optionally there. _mesa_copy_string handles the source
being null, as well as the length, so we may as well just always call it.
Fixes a segfault in
dEQP-GLES31.functional.state_query.program_pipeline.info_log
Signed-off-by: Ilia Mirkin <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
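The behaviour relied on here can be sketched as follows; this is a simplified, self-contained stand-in for _mesa_copy_string, not the actual implementation.

    #include <stddef.h>
    #include <string.h>

    /* Tolerates src == NULL and length == NULL, like the real helper. */
    static void
    copy_string_sketch(char *dst, size_t max, size_t *length, const char *src)
    {
       size_t n = 0;

       if (dst && src && max > 0) {
          n = strlen(src);
          if (n >= max)
             n = max - 1;
          memcpy(dst, src, n);
          dst[n] = '\0';
       }
       if (length)   /* only written when the caller actually passed a pointer */
          *length = n;
    }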
|
|
|
|
|
|
|
|
|
|
|
|
| |
Similar to commit dd9d2963d6 (mesa: AtomicBufferBindings should be
initialized to zero.), we should reset these to zero when unbinding.
This fixes a number of dEQP failures due to cross-test pollution. The
tests properly unbound everything, but when querying the values again,
the expectation was that they would be 0.
Signed-off-by: Ilia Mirkin <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
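A minimal sketch of the idea; the field names are assumptions for the example rather than Mesa's actual binding-point layout.

    struct buffer_binding_sketch {
       unsigned buffer;   /* 0 == nothing bound */
       long     offset;
       long     size;
    };

    static void
    unbind_binding_point_sketch(struct buffer_binding_sketch *b)
    {
       b->buffer = 0;
       b->offset = 0;   /* queries after unbinding should read back 0 */
       b->size   = 0;
    }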
|
|
|
|
|
|
|
|
|
|
|
|
| |
As with AUX1-3, these enums aren't invalid (i.e. -1), but they're also not
supported by Mesa. Returning BUFFER_COUNT causes the proper error to be
returned by ReadBuffer and other functions. This resolves some failures
in
dEQP-GLES31.functional.debug.negative_coverage.get_error.buffer.read_buffer
Signed-off-by: Ilia Mirkin <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
This fixes
dEQP-GLES31.functional.debug.negative_coverage.get_error.buffer.clear_bufferfv
and brings the logic in line with the GL 4.5 spec.
Signed-off-by: Ilia Mirkin <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
This fixes
dEQP-GLES31.functional.debug.negative_coverage.get_error.buffer.clear_bufferuiv
and brings the logic in line with the GL 4.5 spec.
Signed-off-by: Ilia Mirkin <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
|
|
|
|
|
|
| |
Might as well handle everything in the same error call.
Signed-off-by: Ilia Mirkin <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
|
|
|
|
|
|
|
| |
There's a hunk above which sets INVALID_ENUM for GL_DEPTH
unconditionally.
Signed-off-by: Ilia Mirkin <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
This fixes
dEQP-GLES31.functional.state_query.texture.texture_2d_multisample.depth_stencil_mode_integer
and a few related tests.
Signed-off-by: Ilia Mirkin <[email protected]>
Reviewed-by: Nanley Chery <[email protected]>
|
|
|
|
| |
To get _mesa_num_tex_faces() prototype.
|
|
|
|
| |
To get _mesa_num_tex_faces() prototype.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We were implementing these the same way as
the default pool, which is sub-optimal.
The buffer is supposed to return a pointer to
a RAM copy when the user locks it, and automatically
update the VRAM copy when needed.
v2: Rename NineBuffer9_Validate to NineBuffer9_Upload
Rename validate_buffers to update_managed_buffers
Initialize the NineBuffer9 managed fields after the resource
is allocated. In case of allocation failure, when the dtor
is executed, This->base.pool is then correctly set.
Signed-off-by: Axel Davy <[email protected]>
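Conceptually the managed-pool behaviour looks like the sketch below; the names are invented for illustration, while the real logic lives in NineBuffer9_Upload / update_managed_buffers.

    #include <string.h>

    struct managed_buffer_sketch {
       void  *ram;     /* system-RAM shadow copy handed out by Lock() */
       size_t size;
       int    dirty;   /* VRAM copy needs an update before the next draw */
    };

    static void *
    buffer_lock_sketch(struct managed_buffer_sketch *buf, size_t offset)
    {
       buf->dirty = 1;                   /* remember that the RAM copy changed */
       return (char *)buf->ram + offset; /* user writes go to the RAM copy */
    }

    static void
    buffer_upload_if_needed_sketch(struct managed_buffer_sketch *buf)
    {
       if (!buf->dirty)
          return;
       /* ... copy buf->ram into the VRAM resource here ... */
       buf->dirty = 0;
    }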
|
|
|
|
|
|
|
|
|
|
| |
For 32-bit builds, the incoming stack is 4-byte aligned.
We need to realign the stack to 16 bytes at some point,
or issues show up later (crashes with SSE, LLVM, etc.).
This patch chooses to align the stack at the API entry points.
Signed-off-by: Axel Davy <[email protected]>
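One common way to achieve this on GCC/clang is the force_align_arg_pointer function attribute on the externally callable entry points; whether nine uses this exact mechanism is an assumption of the sketch.

    #if defined(__GNUC__) && defined(__i386__)
    #define REALIGN_STACK __attribute__((force_align_arg_pointer))
    #else
    #define REALIGN_STACK
    #endif

    /* Hypothetical entry point: the attribute realigns the 4-byte-aligned
     * incoming stack so SSE and LLVM-generated code see 16-byte alignment. */
    REALIGN_STACK void
    nine_api_entry_point_sketch(void)
    {
    }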
|
|
|
|
|
|
|
|
| |
Using MIN/MAX instead of CLAMP is fine.
NRM doesn't exist anymore.
Signed-off-by: Axel Davy <[email protected]>
Reviewed-by: Patrick Rudolph <[email protected]>
|
|
|
|
|
|
|
| |
SQRT is not supported everywhere, so replace
it with RSQ + MUL and handle the case where the input is <= 0.
Signed-off-by: Axel Davy <[email protected]>
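Numerically the lowering amounts to the following, written here with plain C stand-ins for the RSQ and MUL opcodes.

    #include <math.h>

    static float
    sqrt_via_rsq(float x)
    {
       if (x <= 0.0f)                 /* the "case <= 0" handled by the patch */
          return 0.0f;
       float rsq = 1.0f / sqrtf(x);   /* RSQ */
       return x * rsq;                /* MUL: x * rsq(x) == sqrt(x) */
    }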
|
|
|
|
|
|
|
|
| |
We had several crash issues with it.
This should fix them.
Signed-off-by: Axel Davy <[email protected]>
Reviewed-by: Patrick Rudolph <[email protected]>
|
|
|
|
|
|
|
|
|
| |
Add a new argument to d3d9_to_pipe_format_checked to
allow bypassing the format support checks. This argument
is set to TRUE when the requested Pool is SCRATCH.
Signed-off-by: Axel Davy <[email protected]>
Reviewed-by: Patrick Rudolph <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Return INVALIDCALL when trying to create a surface
of an unsupported format.
In practice, apps are supposed to check for format
support before trying to create a render target
of that format. However, some badly behaving apps
may just try to create the surface and, if it fails,
deduce that the format isn't supported.
Signed-off-by: Axel Davy <[email protected]>
Reviewed-by: Patrick Rudolph <[email protected]>
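The shape of the check, as a sketch; the return codes and helper are placeholders rather than the nine implementation.

    enum hr_sketch { HR_OK_SKETCH = 0, HR_INVALIDCALL_SKETCH = -1 };

    static int
    format_supported_sketch(int d3d_format)
    {
       /* stands in for d3d9_to_pipe_format_checked() + screen support query */
       return d3d_format != 0;
    }

    static enum hr_sketch
    create_render_target_sketch(int d3d_format)
    {
       if (!format_supported_sketch(d3d_format))
          return HR_INVALIDCALL_SKETCH;  /* fail up front, don't crash later */
       /* ... allocate the pipe resource and wrap it in a surface ... */
       return HR_OK_SKETCH;
    }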
|
|
|
|
|
|
|
|
|
| |
Texture and CubeTexture use common code,
and thus ATI1/ATI2 support is already implemented
for CubeTexture.
Signed-off-by: Axel Davy <[email protected]>
Reviewed-by: Patrick Rudolph <[email protected]>
|
|
|
|
|
|
|
| |
Clarify the behaviour and clean up the checks.
Signed-off-by: Axel Davy <[email protected]>
Reviewed-by: Patrick Rudolph <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
We had checks in both the Create*Texture functions
and in the ctors.
Move all the Create*Texture checks to the ctors.
Signed-off-by: Axel Davy <[email protected]>
Reviewed-by: Patrick Rudolph <[email protected]>
|
|
|
|
|
|
|
|
| |
This->base.base.resource is NULL
for SYSTEMMEM textures.
Signed-off-by: Axel Davy <[email protected]>
Reviewed-by: Patrick Rudolph <[email protected]>
|
|
|
|
|
|
|
|
| |
We do not support shared textures, so there is no need
to set the shared flag.
Signed-off-by: Axel Davy <[email protected]>
Reviewed-by: Patrick Rudolph <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
We do not create a resource for SYSTEMMEM textures,
so we do not need to set a resource usage.
The only exception is SYSTEMMEM vertex buffers, since
we do use a pipe resource for those.
Signed-off-by: Axel Davy <[email protected]>
Reviewed-by: Patrick Rudolph <[email protected]>
|
|
|
|
|
|
|
| |
So it's near the other cube map helper functions.
Reviewed-by: Roland Scheidegger <[email protected]>
Reviewed-by: Timothy Arceri <[email protected]>
|
|
|
|
|
| |
Reviewed-by: Roland Scheidegger <[email protected]>
Reviewed-by: Timothy Arceri <[email protected]>
|
|
|
|
|
| |
Reviewed-by: Roland Scheidegger <[email protected]>
Reviewed-by: Timothy Arceri <[email protected]>
|
|
|
|
|
| |
Reviewed-by: Roland Scheidegger <[email protected]>
Reviewed-by: Timothy Arceri <[email protected]>
|
|
|
|
|
|
|
| |
Just minor clean-up so we're consistent everywhere.
Reviewed-by: Roland Scheidegger <[email protected]>
Reviewed-by: Timothy Arceri <[email protected]>
|
| |
|
|
|
|
|
|
| |
Now that MSVC 2013 is required we can remove this.
Reviewed-by: Jose Fonseca <[email protected]>
|
|
|
|
|
|
|
|
| |
We already use these for gallium in
src/gallium/auxiliary/os/os_memory_stdc.h and it's always better to
minimize divergences between MinGW and MSVC.
Reviewed-by: Brian Paul <[email protected]>
|
|
|
|
|
|
| |
Spotted by Emil Velikov.
Trivial.
|
|
|
|
|
|
| |
Spotted by Emil Velikov.
Trivial.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Auxiliary buffers are always created with a sample count of zero,
which effectively prevents intel_miptree_create_layout() from trying
to associate auxiliary buffers with auxiliary buffers.
Now that a more direct path is available, let's start using it
instead and stop even checking for such an (im)possibility.
v2 (Ben): Do not signal msaa layout with explicit argument but
using layout_flags instead.
Signed-off-by: Topi Pohjolainen <[email protected]>
Reviewed-by: Ben Widawsky <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Currently the logic for allocating and setting up miptrees is closely
combined with the decision making on when to re-allocate buffers in
X-tiled layout and when to associate colors with auxiliary buffers.
These auxiliary buffers are in turn also represented as miptrees
and are created by the same miptree creation logic calling itself
recursively. This means considering in vain whether the auxiliary
buffers should be represented in X-tiled layout or whether they
should be associated with auxiliary buffers again.
While this is somewhat unnecessary, it doesn't cause any problems
currently. Miptrees for auxiliary buffers are created as
single-sampled, which short-circuits the consideration of
multi-sampled compression auxiliary buffers. Their format in turn
is such that it is not applicable for single-sampled fast clears
(which would require an accompanying auxiliary buffer).
But once the driver starts to support lossless compression of color
buffers, the auxiliary buffer will have a format that would itself
be applicable for lossless compression. This would be rather
difficult and ugly to detect in the current miptree creation logic,
and therefore this patch seeks to separate the association logic
from the general allocation and setup steps.
v2 (Ben):
- Do not reconsider for X-tiling in intel_miptree_create()
as it was just forced to Y-tiling in miptree_create().
- Do not drop checks for allocation failures.
Signed-off-by: Topi Pohjolainen <[email protected]>
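The overall shape of the separation can be sketched as follows; the function names echo the commit message while the bodies and types are placeholders, not the driver code.

    #include <stdlib.h>

    struct miptree_sketch {
       struct miptree_sketch *aux;   /* optional auxiliary buffer */
    };

    /* Low-level creator: allocation and layout only, no aux decisions. */
    static struct miptree_sketch *
    miptree_create_sketch(void)
    {
       return calloc(1, sizeof(struct miptree_sketch));
    }

    /* High-level wrapper: the only place that decides to attach an aux
     * buffer, so aux buffers themselves can never recurse into another
     * aux-buffer association. */
    static struct miptree_sketch *
    intel_miptree_create_sketch(int wants_aux)
    {
       struct miptree_sketch *mt = miptree_create_sketch();
       if (mt && wants_aux)
          mt->aux = miptree_create_sketch();
       return mt;
    }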
|
|
|
|
|
|
|
|
| |
This makes the logic a little more explicit and helps to keep
subsequent patches easier to read.
Suggested-by: Ben Widawsky <[email protected]>
Signed-off-by: Topi Pohjolainen <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Part of brw_try_draw_prims() is a check to validate textures
(brw_validate_textures()). In the case of textures that currently have
only level zero but are marked for mipmap generation, the i965 driver
will decide to replace the underlying buffer with a larger one
capable of holding the additional levels as well. This results in a
blit from the original buffer to the newly allocated one (see
intel_miptree_copy_teximage()). This blit is currently handled with
the blitter engine and hence it won't affect the ongoing draw
operation.
However, this blit in turn may trigger a color resolve on the source
buffer. In principle, this should be possible with fast-cleared
buffers, but I only started hitting it when I enabled lossless
compression (which requires a similar resolve to fast-cleared
buffers).
Now, the color resolve is a meta operation and uses the same drawing
path we are already in the middle of. After quite a bit of debugging
I realized that the resolve will modify the current vbo setup but
won't restore it afterwards, resulting in the original draw call
using the wrong vertex data.
When brw_try_draw_prims() gets called, the vbo logic in the Mesa
core (see vbo_draw_arrays()) has just bound the vbo (see
vbo_bind_arrays() and recalculate_input_bindings()). The color
resolve operation will overwrite the vbo setup by calling
vbo_bind_arrays() against the resolve rectangle (see
brw_draw_rectlist()). Once the color resolve is done the vbo setup
is left in the resolve-rectangle state and the original draw call
yields bogus results.
This patch aims to restore the original state after the color
resolve by calling vbo_bind_arrays() yet again, after the vertex
array state in the core context has been restored.
Now having said all this, I'd also like to state that I'm quite
uncomfortable with the nested meta operations. The original draw
call in this case is in fact a meta operation itself: it is a blit
from level zero to level one when generating the additional mipmap
levels (see _mesa_meta_GenerateMipmap()). Imagine the complexity
if the blit in the middle, from one buffer to another, also went
through the meta path instead of the blitter.
I would be very tempted to try to move all the resolves to happen
before a meta operation is started.
Additionally, I still feel that the work I did earlier in the
spring/summer, moving meta operations to use direct state upload
bypassing the core context, would make sense.
v2: Force input recalculation by setting the flag explicitly
v3: Do not attempt to restore vbo for opengles1 which doesn't
support vertex buffer objects.
Signed-off-by: Topi Pohjolainen <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
Validation may kick off copies and subsequently color resolves.
Color resolves (and the copies themselves, if they end up in the meta
path) will overwrite the internal driver state but are not prepared
to restore it. Instead of adding that capability, the validation can
simply be performed before the state is updated.
Signed-off-by: Topi Pohjolainen <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Setting brw->ctx.NewDriverState and brw->ctx.NewGLState affects
the dirty bits for the current pipeline. But, we need to flag
everything dirty on *both* pipelines, so that when we switch
back, we'll realize our programs are stale and re-upload them.
To accomplish this, flag the saved state for both pipelines.
Only one of them should matter, but this way we don't have to
check which we need to set. It's harmless to set the other.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=93790
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
Tested-by: Ilia Mirkin <[email protected]>
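A sketch of what "flag the saved state for both pipelines" means in practice; the enum and array layout are simplified stand-ins for brw_context's per-pipeline saved state.

    enum pipeline_sketch { PIPELINE_RENDER, PIPELINE_COMPUTE, PIPELINE_COUNT };

    struct context_sketch {
       unsigned long long saved_driver_state[PIPELINE_COUNT];
       unsigned           saved_gl_state[PIPELINE_COUNT];
    };

    static void
    flag_everything_dirty_sketch(struct context_sketch *ctx)
    {
       for (int p = 0; p < PIPELINE_COUNT; p++) {
          ctx->saved_driver_state[p] = ~0ull;  /* all driver bits dirty */
          ctx->saved_gl_state[p]     = ~0u;    /* all core-GL bits dirty */
       }
    }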
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Feedback from Khronos is that 'invariant' should be allowed on block
members for desktop OpenGL. Fix piglit regression added by fe1e89a0:
invariant-qualifier-in-out-block-01.vert
v2:
- Allow it for in/out blocks in OpenGL ES too, so that when OES_shader_io_blocks
is supported we don't need to make any changes (Timothy)
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=89330
Signed-off-by: Samuel Iglesias Gonsálvez <[email protected]>
Reviewed-by: Timothy Arceri <[email protected]>
|
|
|
|
|
|
|
|
| |
I think this was just missed; Curro and I were probably writing
code simultaneously and forgot to combine them at the end.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
|