Commit message

We always allocate the maximum amount of space and never change it, so
it makes sense to do it once. Programming it on startup also lets us
skip re-programming it from BLORP.
This removes a tiny amount of overhead from our drawing loop.
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
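
As a rough C sketch of the pattern (all names here are illustrative stand-ins, not the actual i965 functions): invariant state moves out of the per-draw state-atom machinery and into one-time context setup.

```c
#include <stdbool.h>

/* Illustrative stand-in for the (much larger) driver context. */
struct hw_context {
   bool invariant_state_emitted;
};

static void emit_max_urb_allocation(struct hw_context *hw)
{
   /* Program the maximum URB size once; since nothing ever changes it,
    * neither the draw loop nor BLORP has to reprogram it. */
   (void)hw;
}

/* Called once at context creation instead of from a state atom. */
void init_invariant_state(struct hw_context *hw)
{
   emit_max_urb_allocation(hw);
   hw->invariant_state_emitted = true;
}
```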
This removes a tiny bit of code from our drawing loop.
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
Now that we emit invariant state at startup (and never select the media
pipeline), the 3D pipeline will always already be selected, even if BLORP
is the first operation. So this is unnecessary.
v2: Fix unused variable warning (intel_context is no longer used).
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
Now that we have hardware contexts, we can safely initialize our GPU
state once at startup, rather than needing a state atom with the
BRW_NEW_CONTEXT flag set.
This removes a tiny bit of code from our drawing loop.
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
These atoms don't actually exist.
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
The existing code already returned a boolean; this just clarifies that.
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
We already implemented this for ES3, so we just need to turn it on.
Fixes 6 Piglit tests:
spec/glsl-1.50/compiler/built-in-functions/determinant-mat[234].{vert,frag}
Reviewed-by: Matt Turner <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
Page 17 of the GLSL 1.50.11 specification states:
"There is a built-in macro definition for each profile the
implementation supports. All implementations provide the following
macro:
#define GL_core_profile 1"
Reviewed-by: Matt Turner <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
Previously we only supported "#version 150". This patch recognizes
"compatibility" to give the user a more descriptive error message.
Fixes Piglit's version-150-core-profile test.
Reviewed-by: Matt Turner <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
If we didn't successfully parse the #version line, there's no point in
continuing with parsing and compiling: it's already failed.
Furthermore, it can actually be harmful: right after handling #version,
we call _mesa_glsl_initialize_types(), which checks state->es_shader and
language_version. If it isn't valid, it hits an assertion failure.
Fixes Piglit's "invalid-version-es." When processing "#version 110 es",
our code set state->es_shader and state->language_version = 110. It
then properly determined that this was invalid and flagged an error.
Since we continued anyway, we hit the assertion mentioned above.
NOTE: This is a candidate for the 9.1 branch.
Reviewed-by: Matt Turner <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
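
A minimal sketch of the early-out described above (struct and helper names are stand-ins for Mesa's parser state, not the exact API):

```c
#include <stdbool.h>

struct parse_state {
   bool version_error;   /* set while handling a bad #version line */
   int language_version;
};

static void initialize_types(struct parse_state *state)
{
   /* Stand-in for _mesa_glsl_initialize_types(), which asserts on an
    * invalid es_shader/language_version combination. */
   (void)state;
}

void after_version_directive(struct parse_state *state)
{
   if (state->version_error)
      return;             /* compilation already failed; stop here */
   initialize_types(state);
}
```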
This was building the temporary array to pass to
save_SamplerParameteriv, and then not passing it.
Signed-off-by: Chris Forbes <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Vinson Lee <[email protected]>
Signed-off-by: Vinson Lee <[email protected]>
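
An illustrative reconstruction of the bug class (function names are hypothetical stand-ins for the display-list code): a widened copy of the scalar parameter is built, and the fix is to actually pass that copy.

```c
/* Hypothetical stand-in for the vector-valued save function. */
static void save_sampler_parameteriv(unsigned sampler, unsigned pname,
                                     const int *params)
{
   (void)sampler; (void)pname; (void)params;
}

void save_sampler_parameteri(unsigned sampler, unsigned pname, int param)
{
   int parray[4] = { param, 0, 0, 0 };
   /* The bug: parray was built, but &param was passed instead. */
   save_sampler_parameteriv(sampler, pname, parray);
}
```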
Fixes "Out-of-bounds access" defect reported by Coverity.
Signed-off-by: Vinson Lee <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Signed-off-by: Maarten Lankhorst <[email protected]>
Actually respect rasterizer state.
Signed-off-by: Rob Clark <[email protected]>
The GPU (at least a3xx, but I think also a2xx) can render directly to
memory, bypassing tiling, although it can't do this when blend, depth,
and a few other features of the pipeline are enabled. This direct
memory mode can be faster for some sorts of operations, such as simple
blits. In particular, it significantly speeds up XA by avoiding
pulling the entire dest pixmap into GMEM, rendering tiles, and writing
it all back out again. It should also speed up resource copy-region
and blit.
Signed-off-by: Rob Clark <[email protected]>
The adreno a3xx GPU is found in newer snapdragon devices, such as the
nexus4. The a3xx is GLESv3 and OpenCL capable, although that is not
enabled yet in gallium.
Compared to a2xx, it introduces an entirely new unified shader ISA and
re-shuffles all or nearly all of the registers. The good news is that
(for the most part) the registers are more orthogonal, no longer
combining unrelated state in a single register, and that there is a lot
more flexibility, so we don't need to patch and re-emit the shader like
we did on a2xx.
The shader compiler is currently quite dumb; there would be a lot of
room for improvement with an optimizing pass. Despite that, the a320 in
my nexus4 seems to be ~2-3x faster than the a220 in my HP touchpad.
Signed-off-by: Rob Clark <[email protected]>
Split the parts that are specific to adreno a2xx series GPUs from the
parts that will be in common with a3xx, so that a3xx support can be
added more cleanly.
Signed-off-by: Rob Clark <[email protected]>
We use 128-bit vector interleave for untwiddling in the blend code (with
256-bit vectors). llvm generates terrible code for this for some reason,
so instead of generating a shuffle for two 128-bit vectors, use an
extract/insert shuffle instead (it only seems to matter that we're not
using 128-bit-wide vectors for the shuffle). This decreases the
instruction count of the blend code generated for an rgba8 render target
without blending from 169 to 113 with llvm 3.1, and from 136 to 114 with
llvm 3.2/3.3, and I got a ~8% (llvm 3.1) and ~5% (3.2/3.3) performance
improvement in gears.
(The generated code is still not terribly good, as we could actually
avoid the interleaving completely, but llvm can't know this.)
Reviewed-by: Jose Fonseca <[email protected]>
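
To make the operation concrete, here is the interleave written with AVX2 intrinsics, purely as an illustration; the commit itself is about the shape of the LLVM IR llvmpipe emits, not about intrinsics.

```c
#include <immintrin.h>

/* Interleave the 128-bit halves of two 256-bit vectors:
 * a = A0|A1, b = B0|B1  ->  lo = A0|B0, hi = A1|B1. */
static inline void interleave_128bit_lanes(__m256i a, __m256i b,
                                           __m256i *lo, __m256i *hi)
{
   *lo = _mm256_permute2x128_si256(a, b, 0x20); /* low halves  */
   *hi = _mm256_permute2x128_si256(a, b, 0x31); /* high halves */
}
```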
We flush pending rendering before running CopySubBuffer, which
ensures that the right bits get to the screen.
NOTE: This is a candidate for stable release branches.
Reviewed-by: Brian Paul <[email protected]>
The coordinates need to be inverted between glX and gallium.
NOTE: This is a candidate for stable release branches.
Reviewed-by: Brian Paul <[email protected]>
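
A one-function sketch of the flip, assuming the usual conventions (glX measures y from the top-left of the drawable, gallium from the bottom-left):

```c
/* Convert the y origin of a sub-rectangle between the two conventions. */
static int flip_y(int glx_y, int rect_height, int drawable_height)
{
   return drawable_height - (glx_y + rect_height);
}
```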
Just like the output we produce from inside the Intel driver, this can
help quickly provide information about FBO incompleteness problems
(particularly when using apitrace replay).
Currently, in driver-marked incompleteness cases, you'll get both the
driver message and the core message on Intel. Until the other drivers
are fixed to produce output, I think this is better than not emitting a
message for driver-marked incompleteness.
Reviewed-by: Brian Paul <[email protected]>
Fixes piglit test "spec/!OpenGL 1.0/gl-1.0-front-invalidate-back"
Reviewed-by: Anuj Phogat <[email protected]>
When a fake front buffer is in use, if we request the front buffer
(using screen->dri2.loader->getBuffersWithFormat()), the X server
copies the real front buffer to the fake front buffer and returns the
fake front buffer. We sometimes make redundant requests for the front
buffer (due to using a single counter to track invalidates for both
the front and back buffers), so there's a danger of pending front
buffer rendering getting overwritten when the redundant front buffer
request occurs.
Prior to this patch, intel_update_renderbuffers() worked around
that problem by sometimes doing intel_flush() and intel_flush_front()
before calling intel_query_dri2_buffers(). But it only did the
workaround when the front buffer was bound for drawing; it didn't do
it when the front buffer was bound for reading.
This patch moves the workaround code to intel_query_dri2_buffers(), so
that it happens in exactly the circumstances where it is needed.
This should fix some of the sporadic failures in Piglit tests
fbo-sys-blit and fbo-sys-sub-blit.
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
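
A sketch of where the workaround ends up (bodies are illustrative stubs; only the call ordering matters): the flushes sit inside the buffer query itself, so they cover both read and draw bindings.

```c
struct context { int front_buffer_dirty; };

static void intel_flush(struct context *c)        { (void)c; }
static void intel_flush_front(struct context *c)  { c->front_buffer_dirty = 0; }
static void loader_get_buffers(struct context *c) { (void)c; }

void query_dri2_buffers(struct context *c)
{
   /* Push pending front-buffer rendering to the server first, so the
    * real->fake front-buffer copy triggered below can't clobber it. */
   intel_flush(c);
   intel_flush_front(c);
   loader_get_buffers(c);   /* may make the server copy real->fake */
}
```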
The patch that follows will fix a bug that prevents
intel_flush_front() from being called often enough. In doing so, it
will create a situation where intel_flush_front() is called during the
initial call to glXMakeCurrent(). In this circumstance,
ctx->DrawBuffer hasn't been initialized yet and is NULL. Fortunately,
intel->front_buffer_dirty is false, so intel_flush_front() doesn't
actually need to do anything. To avoid a segfault, swap the order of
terms in intel_flush_front()'s if statement.
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
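
A small sketch of the reordering (types are illustrative): with the dirty flag tested first, && short-circuits and the NULL draw buffer is never dereferenced.

```c
#include <stdbool.h>
#include <stddef.h>

struct framebuffer { bool is_winsys; };

static bool is_winsys_fbo(const struct framebuffer *fb)
{
   return fb->is_winsys;          /* would crash on a NULL fb */
}

void flush_front(const struct framebuffer *draw_buffer,
                 bool front_buffer_dirty)
{
   /* During the initial MakeCurrent, draw_buffer is still NULL but the
    * dirty flag is false, so the second term is never evaluated. */
   if (front_buffer_dirty && is_winsys_fbo(draw_buffer)) {
      /* ... actually flush the front buffer ... */
   }
}
```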
piglit OpenGL ES 3.0/minmax now passes. This was also one of the subcase
failures in OpenGL 3.2/minmax (and still is, because our value is too low
for 3.2, but at least we report what it is).
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Part of fixing piglit OpenGL ES 3.0/minmax.
v2: s/_gles3/_es3/ in extra name, for consistency (review by Matt).
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]> (v1)
Noticed by inspection when reviewing the next commit.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Part of fixing piglit OpenGL ES 3.0/minmax.
v2: s/_gles3/_es3/ in extra name, for consistency (review by Matt).
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]> (v1)
These functions must clear all bound layers, not just the first.
Reviewed-by: Jose Fonseca <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
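
A minimal sketch of the shape of the fix (names hypothetical): iterate over every bound layer instead of clearing only the first.

```c
struct surface { unsigned first_layer, last_layer; };

static void clear_one_layer(struct surface *s, unsigned layer,
                            const float color[4])
{
   /* ... clear a single slice of the surface ... */
   (void)s; (void)layer; (void)color;
}

void clear_all_layers(struct surface *s, const float color[4])
{
   for (unsigned layer = s->first_layer; layer <= s->last_layer; layer++)
      clear_one_layer(s, layer, color);
}
```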
Believe it or not, these two are actually the first two functions that
really belong in this file nowadays.
Reviewed-by: Brian Paul <[email protected]>
Mostly just make sure the layer parameter gets passed through to the
right places (and gets clamped; this can be done at setup time), fix up
clears to clear all layers, and disable the opaque optimization.
Luckily we don't need to touch the jitted code.
(Clears invoked via pipe's clear_render_target method will not work,
however, since the pipe_util_clear function used for it doesn't handle
clearing multiple layers yet.)
v2: per Brian's suggestion, prettify var initialization and add some comments,
add assertion for impossible layer specification for surface.
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Jose Fonseca <[email protected]>
Transfers always use z/depth for layers, no matter if it's a 1d or 2d
array texture; we don't follow OpenGL's craziness there. Luckily this
appears to only be a doc bug; everyone is doing the right thing already.
While here, also document the z/depth parameter for cube map arrays.
v2: fix typo spotted by Eric Anholt
Reviewed-by: Jose Fonseca <[email protected]>
We returned 0 instead of 1 for the number of layers when the array
texture is single-layered. This fixes it on GEN7+.
Checking if array_size is greater than 1 is not enough for single-layered
array textures.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
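
A sketch of the distinction (enum values illustrative): a one-layer array texture still has array semantics, so the test must look at the target type, not at array_size > 1.

```c
#include <stdbool.h>

enum tex_target { TEX_2D, TEX_1D_ARRAY, TEX_2D_ARRAY };

struct texture { enum tex_target target; unsigned array_size; };

bool is_array_texture(const struct texture *t)
{
   /* Wrong: t->array_size > 1 misclassifies single-layered arrays. */
   return t->target == TEX_1D_ARRAY || t->target == TEX_2D_ARRAY;
}
```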
Moved draw_arrays() to st_draw_feedback.c and removed draw_arrays_instanced().
draw_arrays() was used by nobody else. Now there's just one "draw" entrypoint
into the draw module.
Signed-off-by: Brian Paul <[email protected]>
This change came from the discovery that the STATIC_ASSERT meant to
check the number of register file strings didn't actually work.
Similar changes could be made for the other string arrays in
tgsi_strings.c.
Reviewed-by: Jose Fonseca <[email protected]>
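
A generic C illustration of the failure mode and the fix: when the array is declared with an explicit size, a sizeof-based assertion is vacuous, because the array is always that long no matter how many initializers are present; leaving the size implicit makes the assertion genuinely count the strings.

```c
#include <assert.h>

enum reg_file { FILE_NULL, FILE_TEMP, FILE_INPUT, FILE_COUNT };

/* Size left implicit on purpose: a missing entry changes the array
 * length and trips the assertion below. */
static const char *reg_file_names[] = { "NULL", "TEMP", "IN" };

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))
static_assert(ARRAY_SIZE(reg_file_names) == FILE_COUNT,
              "reg_file_names is missing an entry");
```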
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
Removes the special-case suppression of gl_ClipVertex in the VUE map.
Also calculate vertex outcodes for user clip planes based on
gl_ClipVertex if written; otherwise gl_Position.
Signed-off-by: Chris Forbes <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
When clipping triangles against a user clip plane, and gl_ClipVertex
is provided in the vertex, use it instead of hpos.
TODO: A similar change should be made at some point for line clipping.
Signed-off-by: Chris Forbes <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
It was supported but not advertised. Also remove TODO tag for
PIPE_CAP_MIN_MAP_BUFFER_ALIGNMENT, as it is not a TODO.
They were already supported, just being rejected in the TGSI translator.
Fixes "Uninitialized pointer field" defect reported by Coverity.
Signed-off-by: Vinson Lee <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
A slab allocator is a perfect fit for transfers. This improved
OpenArena performance by 1% across several casual runs.
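
A minimal freelist sketch of the idea (gallium's real slab helper is more elaborate): transfer objects are small, fixed-size, and allocated and freed on every map, so recycling blocks avoids malloc/free churn.

```c
#include <stdlib.h>

struct slab {
   void *free_list;     /* threaded through the freed blocks themselves */
   size_t block_size;   /* fixed; must be >= sizeof(void *) */
};

static void *slab_alloc(struct slab *s)
{
   if (s->free_list) {
      void *block = s->free_list;
      s->free_list = *(void **)block;   /* pop a recycled block */
      return block;
   }
   return malloc(s->block_size);        /* grow only when empty */
}

static void slab_free(struct slab *s, void *block)
{
   *(void **)block = s->free_list;      /* push; no free() call */
   s->free_list = block;
}
```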
We need to unreference resources that we referenced.
The BOs are mapped for their entire lifetimes on the chipsets we
support, so do not forget to unmap them.
This magical line of code must have got lost at some point in the history...
Add ilo_rasterizer_sf and initialize it in create_rasterizer_state().
Add ilo_rasterizer_clip and initialize it in create_rasterizer_state().