| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
S8_UNORM was inadvertently supported together with Z16_UNORM.
I tried to update the code to accommodate stencil-only -- it seemed a simple
thing to do -- but "fbo-stencil clear GL_STENCIL_INDEX8" still fails,
and it's not worth debugging.
Therefore, although this change updates the code for S8_UNORM, it also
disables that format completely.
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
This special path hadn't been exercised by my earlier testing, and the mask
values weren't being properly truncated to match the stored values.
This change fixes that.
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
This is done simply by adjusting the vector element width after reading and
before writing the depth-stencil values.
Ran several GL_DEPTH_COMPONENT16 piglit tests without regressions.
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
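A minimal sketch of the idea using the LLVM C API (illustrative only -- the
real gallivm code uses its own builder helpers and different names): widen
the 16-bit depth values after loading and narrow them again before storing.

#include <llvm-c/Core.h>

/* Widen an <N x i16> depth vector to <N x i32> after loading, so the rest
 * of the depth-stencil test keeps operating on 32-bit lanes. */
static LLVMValueRef
widen_z16_to_z32(LLVMBuilderRef builder, LLVMValueRef z16, unsigned length)
{
   LLVMTypeRef i32_vec = LLVMVectorType(LLVMInt32Type(), length);
   return LLVMBuildZExt(builder, z16, i32_vec, "z32");
}

/* Truncate back to <N x i16> before storing the updated values. */
static LLVMValueRef
narrow_z32_to_z16(LLVMBuilderRef builder, LLVMValueRef z32, unsigned length)
{
   LLVMTypeRef i16_vec = LLVMVectorType(LLVMInt16Type(), length);
   return LLVMBuildTrunc(builder, z32, i16_vec, "z16");
}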
|
|
|
|
|
|
|
|
|
|
|
| |
Make it obvious what "unit" this is (no change in functionality).
draw still uses "unit" in places where it changes the shader by adding
texture sampling itself - it seems like this can't work with shaders
using dx10-style sample opcodes (can't mix gl-style and dx10-style
sample instructions in a shader).
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Jose Fonseca <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Split the sampler interface to use separate sampler and texture (sampler_view)
state. This is needed to support dx10-style sampling instructions.
This is not quite complete, since draw/llvmpipe don't really track
textures and samplers independently yet, and the gallivm code doesn't quite
use the right sampler or texture index respectively (but it should work
for the sampling code used by OpenGL).
We do, however, lose some optimizations in the process: apply_max_lod will
no longer work, and we could potentially end up with more (unnecessary)
recompiles (only when switching between textures with and without mipmaps,
so it shouldn't be too bad).
v2: don't use different callback structs for the sampler/sampler view functions
(which just complicates things), fix up the sampling code to actually use the
right texture or sampler index, and similarly for llvmpipe/draw actually
distinguish between samplers and sampler views.
v3: fix more of the PIPE_MAX_SAMPLERS / PIPE_MAX_SHADER_SAMPLER_VIEWS mismatches
(both in draw and llvmpipe); based on feedback from José, get rid of the unneeded
static sampler derived state (which also fixes the only 2 piglit regressions,
due to a forgotten assignment); fix comments based on Brian's feedback.
v4: remove some accidental unrelated whitespace changes
Reviewed-by: José Fonseca <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
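Roughly, the two objects now bound independently look like this (a trimmed
sketch, not the real p_state.h definitions): the sampler describes how to
filter, the sampler view describes which texture to read and how to
interpret it, which is exactly the pairing dx10-style sample instructions
expect.

struct sampler_state_sketch {
   unsigned wrap_s, wrap_t, wrap_r;               /* addressing modes */
   unsigned min_filter, mag_filter, mip_filter;   /* filtering */
   float lod_bias, min_lod, max_lod;
};

struct sampler_view_sketch {
   unsigned format;         /* pipe_format: how texels are interpreted */
   void *texture;           /* the underlying pipe_resource */
   unsigned first_level, last_level;              /* mip range */
   unsigned swizzle_r, swizzle_g, swizzle_b, swizzle_a;
};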
|
|
|
|
|
|
|
|
|
|
| |
Now that things mostly seem to work, enable those formats.
Some formats cause crashes (notably the RGB8 variants), so switch those off
(these crashes are not specific to the INT/UINT variants, but the state tracker
doesn't use them for UNORM etc. formats, so this went unnoticed so far).
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: José Fonseca <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
| |
Cast the fake floats back to ints, and make sure we don't try to do scaling
in format conversion (which only makes sense with normalized values).
We also need to disable blending and alpha test (as per the spec) for such
buffers.
This makes fbo-blending from the piglit ext_texture_integer tests work for most
formats (some crash, and the luminance and intensity variants have the GB or
GBA channels wrong, respectively).
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: José Fonseca <[email protected]>
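A sketch of the fixup (util_format_is_pure_integer() and the gallium state
structs are real; the helper itself is illustrative, not the actual llvmpipe
key setup):

#include "pipe/p_state.h"
#include "util/u_format.h"   /* util_format_is_pure_integer() */

/* For pure integer render targets the spec requires blending and alpha test
 * to be bypassed, so force them off before generating the fragment code. */
static void
disable_blend_for_int_rt(struct pipe_blend_state *blend,
                         struct pipe_alpha_state *alpha,
                         enum pipe_format cbuf_format)
{
   if (util_format_is_pure_integer(cbuf_format)) {
      blend->rt[0].blend_enable = 0;
      alpha->enabled = 0;
   }
}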
|
|
|
|
|
| |
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: José Fonseca <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We were passing in the rt index, however this was always 0 for the
non-independent blend case. (The format was only actually used to decide
whether the color mask covered all channels, so this went unnoticed and was
discovered by accident.)
Additionally, there was a second problem: because we do fixups in the key
based on the color buffer format, we cannot use non-independent blend anyway,
as the fixed-up values would never get used.
So always turn non-independent blending into independent blending.
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: José Fonseca <[email protected]>
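A sketch of the conversion (pipe_blend_state fields are the real gallium
ones; the helper is illustrative):

#include "pipe/p_state.h"

/* If blend state is not per render target ("independent"), replicate rt[0]
 * into every slot so later per-cbuf format fixups apply to the slot that is
 * actually used. */
static void
make_blend_independent(struct pipe_blend_state *blend, unsigned nr_cbufs)
{
   unsigned i;
   if (!blend->independent_blend_enable) {
      for (i = 1; i < nr_cbufs; i++)
         blend->rt[i] = blend->rt[0];
      blend->independent_blend_enable = 1;
   }
}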
|
|
|
|
| |
Reviewed-by: Brian Paul <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
v2: Andreas Boll <[email protected]>
- don't remove compatibility with scripts for the old build system
v3: Andreas Boll <[email protected]>
- remove more obsolete hacks
v4: Andreas Boll <[email protected]>
- add a previously removed TOP variable to fix vgapi build
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We get an int/uint clear color value in this case, and util_pack_color can't
handle these formats at all (even if it could, a float input color isn't what
we want).
Pass the color union through appropriately and handle the packing ourselves
(as I couldn't think of a good generic util solution).
This takes piglit fbo_integer_precision_clear and
fbo_integer_readpixels_sint_uint from the ext_texture_integer test group from
segfault to pass (which leaves only fbo-blending from that group not working).
v2: fix up comments
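For illustration, packing a clear color for a 32-bit-per-channel signed
integer format is just a copy of the union's integer members (the union is
the real gallium one; the helper and format choice are only an example):

#include <string.h>
#include "pipe/p_state.h"

/* For a pure integer format the clear color arrives in the i[]/ui[] members
 * of pipe_color_union rather than f[], and is packed manually instead of
 * going through util_pack_color(). */
static void
pack_clear_color_r32g32b32a32_sint(const union pipe_color_union *color,
                                   void *dst)
{
   memcpy(dst, color->i, sizeof color->i);   /* 4 x 32-bit signed channels */
}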
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Usage with pipe_context:
pipe->flush(pipe, NULL, PIPE_FLUSH_END_OF_FRAME);
Usage with st_context_iface:
st->flush(st, ST_FLUSH_END_OF_FRAME, NULL);
The flag is only a hint for drivers. Radeon will use it for buffer eviction
heuristics in the kernel (e.g. for queries like how many frames have passed
since a buffer was used).
The flag is currently only generated by st/dri on SwapBuffers.
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Stéphane Marchesin <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
It was slightly wrong: we were computing the longest duration of
the query among all the rasterizer tasks.
Regardless, for tile-based implementations such as llvmpipe, time differences
will never be very useful, because rendering before/during/after the query
is all interleaved. This is expected; see the ARB_timer_query spec, issue 10.
In particular, piglit ext_timer_query-time-elapsed still fails, because
it makes assumptions that don't hold true for tiled architectures. Not
sure how to fix that, though.
Reviewed-by: Dave Airlie <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
|
|
|
|
|
|
|
| |
To better reflect what is being advertised.
Reviewed-by: Dave Airlie <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This builds on the previous draw/softpipe patch.
llvmpipe does its streamout calls after the clip/viewport stages, but we
have the pre-clip position stored for later use, so when we are doing
transform feedback and the output is the position, grab the vertex from
the stored pre-clip position.
The perfect fix is probably to add a codegen transform feedback
stage between the shader and clip stages, but this is good enough
for now.
Reviewed-by: Roland Scheidegger <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
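A sketch of the selection being described (names are illustrative, not
draw's actual vertex structures):

/* When emitting a streamout attribute, use the position saved before the
 * clip/viewport transform for the position slot, and the regular
 * post-transform value for everything else. */
static void
emit_so_attrib(float dst[4], const float post_clip[4],
               const float pre_clip[4], int is_position)
{
   const float *src = is_position ? pre_clip : post_clip;
   int i;
   for (i = 0; i < 4; i++)
      dst[i] = src[i];
}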
|
|
|
|
|
|
|
|
|
|
| |
This is redundant since we're calling draw_bind_fragment_shader()
which already does a flush.
v2: the redundant flush in llvmpipe_set_constant_buffer() has
already been removed by commit 3427466e6dbbb8db7c1ecda6b3859ca1cc5827a3
Reviewed-by: José Fonseca <[email protected]>
|
|
|
|
|
|
| |
Not really used by anybody now.
Reviewed-by: Brian Paul <[email protected]>
|
|
|
|
|
|
|
| |
This fixes some use-after-free issues. I haven't measured any real
performance difference with a handful of Mesa demos.
Reviewed-by: Jose Fonseca <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Before this we only supported user-based constant buffers.
First, we basically plumb pipe_constant_buffer objects through llvmpipe
rather than pipe_resource objects.
Second, update llvmpipe_set_constant_buffer() and try_update_scene_state()
so they understand both resource- and user-based constant buffers.
The problem with user constant buffers is the potential for use-after-free,
as seen in some WebGL tests. The next patch will flip the switch for
resource-based const buffers.
Reviewed-by: Jose Fonseca <[email protected]>
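For reference, a trimmed sketch of what a pipe_constant_buffer carries
(field roles as in gallium; the struct here is simplified for illustration):

struct constant_buffer_sketch {
   void *buffer;              /* pipe_resource backing the buffer, or NULL */
   unsigned buffer_offset;    /* start of the bound range, in bytes */
   unsigned buffer_size;      /* size of the bound range, in bytes */
   const void *user_buffer;   /* raw user memory, or NULL */
};

Either buffer or user_buffer is set, so llvmpipe can route both kinds of
binding through the same object.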
|
|
|
|
| |
Signed-off-by: Dave Airlie <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
Support 16 (as defined by LP_MAX_TGSI_CONST_BUFFERS) as opposed to 32 (as
defined by PIPE_MAX_CONSTANT_BUFFERS), because the latter would make the jit
context unnecessarily large.
v2: Bump the limit from 4 to 16 to cover ARB_uniform_buffer_object needs,
per Dave Airlie.
Reviewed-by: Brian Paul <[email protected]>
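A sketch of why the limit matters (illustrative, not the real lp_jit.h
layout): the jit context stores a pointer and a size per constant buffer, so
the array bound directly scales the per-draw jit context.

#define LP_MAX_TGSI_CONST_BUFFERS 16

struct jit_context_sketch {
   const float *constants[LP_MAX_TGSI_CONST_BUFFERS];   /* data pointers */
   unsigned num_constants[LP_MAX_TGSI_CONST_BUFFERS];   /* element counts */
   /* ...other per-draw state... */
};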
|
|
|
|
|
|
|
| |
This fixes the gears regression since transform feedback.
Reported-by: Brian Paul <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
| |
I'd written most of this ages ago, but never finished it off.
This passes 115/130 piglit tests so far. I'll look into the
others as time permits.
v1.1: fix calloc return check as suggested by Jose.
Reviewed-by: Jose Fonseca <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
|
|
|
|
|
|
|
|
|
| |
My understanding of how the pixels are being fetched and the actual
implementation differed.
This fixes bug 57863.
Trivial.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This required an update to the query storage in llvmpipe: there
can now be an active query per query type, so an occlusion query
can run at the same time as a time-elapsed query.
Based on the PIPE_QUERY_TIME_ELAPSED patch from Dave Airlie.
v2: fix up piglits for timers (also from Dave Airlie)
a) if we don't render anything the result is 0, so just
return the current time
b) add the missing screen get_timestamp callback.
Signed-off-by: Dave Airlie <[email protected]>
Signed-off-by: José Fonseca <[email protected]>
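A sketch of the storage change (names are illustrative, not llvmpipe's
actual structs): instead of a single active query slot there is now one per
query type.

enum { QUERY_TYPE_COUNT_SKETCH = 8 };   /* occlusion, timestamp, ... */

struct active_queries_sketch {
   /* begin_query stores the query object at cur[q->type], end_query clears
    * it, so queries of different types can overlap. */
   void *cur[QUERY_TYPE_COUNT_SKETCH];
};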
|
|
|
|
|
|
|
| |
This fixes the "Source and destination overlap in memcpy" valgrind
warnings.
Reviewed-by: Roland Scheidegger <[email protected]>
|
|
|
|
|
|
|
|
| |
Tell LLVM the exact alignment we can guarantee, based on the fs block
dimensions, pixel format, and the alignment of the resource base pointer
and stride.
Reviewed-by: Roland Scheidegger <[email protected]>
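A sketch of the mechanism (LLVMBuildLoad/LLVMSetAlignment are the LLVM C
API; the helper names and the alignment derivation are illustrative):

#include <llvm-c/Core.h>

/* The row at y*stride is only as aligned as both the base pointer and the
 * stride allow; assuming base_align is a power of two, shrink it until it
 * divides the stride. */
static unsigned
row_alignment(unsigned base_align, unsigned stride)
{
   unsigned align = base_align;
   while (align > 1 && (stride % align) != 0)
      align /= 2;
   return align;
}

static LLVMValueRef
load_with_alignment(LLVMBuilderRef builder, LLVMValueRef ptr,
                    unsigned base_align, unsigned stride)
{
   LLVMValueRef load = LLVMBuildLoad(builder, ptr, "row");
   LLVMSetAlignment(load, row_alignment(base_align, stride));
   return load;
}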
|
|
|
|
|
|
|
|
|
|
|
| |
The fs shader now depends on the color buffer formats. The shader key was
extended to accommodate this, but llvmpipe_update_derived needs to be
updated to check the framebuffer dirty flag.
This fixes bug 57674.
Reviewed-by: Roland Scheidegger <[email protected]>
Reviewed-by: Dave Airlie <[email protected]>
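A sketch of the check being added (flag names are made up here, modeled on
llvmpipe's dirty-flag style rather than copied from it):

#define DIRTY_FS          (1 << 0)
#define DIRTY_BLEND       (1 << 1)
#define DIRTY_FRAMEBUFFER (1 << 2)   /* cbuf formats now enter the fs key */

static int
fs_variant_needs_update(unsigned dirty_flags)
{
   return (dirty_flags & (DIRTY_FS | DIRTY_BLEND | DIRTY_FRAMEBUFFER)) != 0;
}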
|
|
|
|
|
|
| |
Completely forgot about updating Makefile when removing it. Stephane
already fixed the make build, but there were a few mentions of
lp_tile_soa left in the tree.
|
|
|
|
|
|
|
| |
Fixes "sizeof not portable" defects reported by Coverity.
Signed-off-by: Vinson Lee <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
|
|
|
|
|
| |
The Makefile looks for a file which is gone (lp_tile_soa.c)
http://bugs.freedesktop.org/show_bug.cgi?id=57713
|
|
|
|
|
|
|
|
|
|
| |
This adds array (1d, 2d) texture support to llvmpipe.
We should probably do something about 1d array textures requiring gobs
of memory, though (this issue is not strictly limited to arrays, but it is
probably worse there).
Initial code by Jakob Bornecrantz <[email protected]>
Reviewed-by: José Fonseca <[email protected]>
|
|
|
|
|
|
| |
No longer used/necessary, as we always blend in AoS now.
Trivial.
|
|
|
|
|
|
|
|
|
|
|
|
| |
Now dead code.
Also had to remove the show_tiles/show_subtiles because now the color
buffers are always stored in their native format, so there is no longer
an easy way to paint the tile sizes.
Depth-stencil buffers are still swizzled.
Reviewed-by: Roland Scheidegger <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Update llvmpipe_is_format_supported and llvmpipe_is_format_unswizzled
so that only the formats that we can render without swizzling are
advertised.
We can still render all D3D10 required formats except
PIPE_FORMAT_R11G11B10_FLOAT, which will need to be implemented at a future
opportunity.
Removal of rendertarget swizzling will be done in a subsequent change.
Reviewed-by: Roland Scheidegger <[email protected]>
|
|
|
|
|
|
|
| |
It was forgotten in the previous patch series, but it is trivial to
implement, based on the SoA path.
This fixes glean logicOp failures.
|
|
|
|
|
| |
Unfortunately, for MSVC, arrays whose size is a const variable are still
considered dynamically sized.
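A small illustration of the limitation (not code from the patch):

static const unsigned size_var = 4;
enum { SIZE_ENUM = 4 };

void example(void)
{
   /* float a[size_var];  -- a VLA as far as C is concerned; MSVC rejects it */
   float b[SIZE_ENUM];    /* fine everywhere: a true compile-time constant */
   (void) b;
}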
|
|
|
|
| |
Reviewed-by: Jose Fonseca <[email protected]>
|
|
|
|
| |
Reviewed-by: Jose Fonseca <[email protected]>
|
|
|
|
| |
Reviewed-by: Jose Fonseca <[email protected]>
|
|
|
|
|
|
| |
Also updated lp_build_const_mask_aos_swizzled to reflect this.
Reviewed-by: Jose Fonseca <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This also adds some code to handle per-quad lods for more than 4-wide fetches,
because otherwise I'd have to integrate the texelFetch function into
the splitting stuff... (but it is not used outside texelFetch yet).
This passes piglit fs-texelFetch-2D, but fails fs-texelFetchOffset-2D due to
what I believe is a test error (results are undefined for out-of-bounds
fetches; we return whatever is at offset 0, whereas the test expects [0,0,0,1]).
Texel offsets are only handled by texelFetch for now, though the interface
can handle them for everything.
Reviewed-by: José Fonseca <[email protected]>
|
|
|
|
|
|
|
|
|
| |
This might have a slight overhead, but handling mip offsets more like
the width (and image) strides should make some things easier later (the mip
level then being just part of the offset calculation).
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: José Fonseca <[email protected]>
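A sketch of the resulting addressing (field names are illustrative): with a
per-level offset stored alongside the strides, a texel address becomes a
single computation off the base pointer rather than a per-level pointer
lookup.

static unsigned
texel_offset(const unsigned mip_offsets[], const unsigned row_stride[],
             unsigned level, unsigned x, unsigned y,
             unsigned bytes_per_texel)
{
   return mip_offsets[level] + y * row_stride[level] + x * bytes_per_texel;
}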
|
|
|
|
|
|
|
|
| |
This is preparation work for using mip level offsets + base_ptr for texture
sampling instead of per-mip pointers.
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: José Fonseca <[email protected]>
|
|
|
|
| |
Signed-off-by: Dave Airlie <[email protected]>
|