| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
| |
See the previous commit for the rationale.
Signed-off-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously, we would:
1. Emit the new indirect state.
2. Flag CACHE_NEW_BLEND_STATE.
3. Rely on later state atoms to notice CACHE_NEW_BLEND_STATE and emit a
pointer to the new indirect state.
This is rather cumbersome: it requires two state atoms instead of one,
and there's a strict ordering dependency in the list. Plus, the code
gets spread across two functions (or even files in the case of Gen7+).
Gen7+ has a packet to update just the blend state pointer, so it makes a
lot of sense to simply emit that right away. Gen6 has a combined packet
which updates blending, the color calculator, and depth/stencil state;
however, each can still be modified independently.
This drops the Gen6 micro-optimization where we tried to only emit one
packet that changed all three states. State updates are pretty cheap.
CACHE_NEW_BLEND_STATE is no longer necessary, so drop it.
Signed-off-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
This reverts commit 6c966ccf07bcaf64fba1a9b699440c30dc96e732.
Apparently causes GPU hangs.
Conflicts:
src/mesa/drivers/dri/i965/brw_state.h
src/mesa/drivers/dri/i965/brw_state_upload.c
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We do a lot of multiplies by 3 or 4 for skinning shaders, and we can avoid
the sequence if we just move them into the right argument of the MUL.
On pre-IVB, this means reliably putting a constant in a position where it
can't be constant folded, but that's still better than MUL/MACH/MOV.
Improves GLB 2.7 trex performance by 0.788648% +/- 0.23865% (n=29/30)
v2: Fix test for pre-sandybridge.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Matt Turner <[email protected]> (v1)
|
|
|
|
|
|
|
|
|
|
|
| |
This is a trivial port of 1d6ead38042cc0d1e667d8ff55937c1e32d108b1 from
the FS.
No significant performance difference on trex (misplaced the data, but it
was about n=20).
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is different from how we do it in the FS - we are using MAD even when
some of the args are constants, because with the relatively unrestrained
ability to schedule a MOV to prepare a temporary with that data, we can
get lower latency for the sequence of instructions.
No significant performance difference on GLB2.7 trex (n=33/34), though it
doesn't have that many MADs. I noticed MAD opportunities while reading
the code for the DOTA2 bug.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Now that Gen6+ relies on hardware contexts, we don't need to record an
occlusion query value at the end of each batch. That means we no longer
need to reserve space for the absurd number of PIPE_CONTROLs required to
do that on Sandybridge.
See commit 4e087de51ad0e7ba4a7199d3664e1d096f8dc510, which bumped this
up to 60 bytes. This is not quite a revert, as it uses 24 bytes instead
of 16, and saves the comments. As far as I can tell, the old value of
16 bytes was just wrong, so we shouldn't go back to that.
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
We always allocate the maximum amount of space and never change it, so
it makes sense to do it once. Programming it on startup also lets us
skip re-programming it from BLORP.
This removes a tiny amount of overhead from our drawing loop.
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
| |
This removes a tiny bit of code from our drawing loop.
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
Now that we emit invariant state at startup (and never select the media
pipeline), the 3D pipeline will always already be selected, even if BLORP
is the first operation. So this is unnecessary.
v2: Fix unused variable warning (intel_context is no longer used).
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
Now that we have hardware contexts, we can safely initialize our GPU
state once at startup, rather than needing a state atom with the
BRW_NEW_CONTEXT flag set.
This removes a tiny bit of code from our drawing loop.
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
| |
These atoms don't actually exist.
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
| |
The existing code already returned a boolean; this just clarifies that.
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
| |
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
| |
Fixes piglit test "spec/!OpenGL 1.0/gl-1.0-front-invalidate-back"
Reviewed-by: Anuj Phogat <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When a fake front buffer is in use, if we request the front buffer
(using screen->dri2.loader->getBuffersWithFormat()), the X server
copies the real front buffer to the fake front buffer and returns the
fake front buffer. We sometimes make redundant requests for the front
buffer (due to using a single counter to track invalidates for both
the front and back buffers), so there's a danger of pending front
buffer rendering getting overwritten when the redundant front buffer
request occurs.
Prior to this patch, intel_update_renderbuffers() worked around
that problem by sometimes doing intel_flush() and intel_flush_front()
before calling intel_query_dri2_buffers(). But it only did the
workaround when the front buffer was bound for drawing; it didn't do
it when the front buffer was bound for reading.
This patch moves the workaround code to intel_query_dri2_buffers(), so
that it happens in exactly the circumstances where it is needed.
This should fix some of the sporadic failures in Piglit tests
fbo-sys-blit and fbo-sys-sub-blit.
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The patch that follows will fix a bug that prevents
intel_flush_front() from being called often enough. In doing so, it
will create a situation where intel_flush_front() is called during the
initial call to glXMakeCurrent(). In this circumstance,
ctx->DrawBuffer hasn't been initialized yet and is NULL. Fortunately,
intel->front_buffer_dirty is false, so intel_flush_front() doesn't
actually need to do anything. To avoid a segfault, swap the order of
terms in intel_flush_front()'s if statement.
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
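A minimal sketch of the reordered check, in C with stand-in types (the real
code tests the driver's front_buffer_dirty flag and ctx->DrawBuffer): testing
the cheap flag first means the draw-buffer pointer is never dereferenced while
it is still NULL during the initial glXMakeCurrent().

    #include <stdbool.h>
    #include <stddef.h>

    struct fake_fb { int name; };  /* stand-in for gl_framebuffer; name == 0 means winsys */

    /* Short-circuit: the dirty flag is checked before anything touches the
     * (possibly NULL) draw buffer, so the make-current case is safe. */
    static void
    flush_front_sketch(bool front_buffer_dirty, const struct fake_fb *draw_buffer)
    {
       if (front_buffer_dirty && draw_buffer != NULL && draw_buffer->name == 0) {
          /* ... flush pending front-buffer rendering here ... */
       }
    }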
|
|
|
|
|
|
|
|
|
|
|
| |
Removes the special-case suppression of gl_ClipVertex in the VUE map.
Also calculate vertex outcodes for user clip planes based on
gl_ClipVertex if written; otherwise gl_Position.
Signed-off-by: Chris Forbes <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
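A small illustrative sketch of the outcode computation described above (plain
C, assumed names; the plane must be expressed in the same space as the vertex
it is tested against): the distance is taken against gl_ClipVertex when the
shader wrote it, otherwise against the position, and a negative distance marks
the vertex as outside that plane.

    #include <stdbool.h>

    static float
    user_clip_distance(const float plane[4], const float clip_vertex[4],
                       const float position[4], bool have_clip_vertex)
    {
       const float *v = have_clip_vertex ? clip_vertex : position;
       return plane[0] * v[0] + plane[1] * v[1] +
              plane[2] * v[2] + plane[3] * v[3];
    }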
|
|
|
|
|
|
|
|
|
|
|
| |
When clipping triangles against a user clip plane, and gl_ClipVertex
is provided in the vertex, use it instead of hpos.
TODO: A similar change should be made at some point for line clipping.
Signed-off-by: Chris Forbes <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Putting the human readable device names directly in the PCI ID list
consolidates things in one place. It also makes it easy to customize
the name on a per-PCI ID basis without a huge code explosion.
Based on a patch by Kristian Høgsberg.
v2: Fix 830M/845G names and #undef CHIPSET (caught by Emil Velikov).
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
In a DDX commit, Chris mentioned our tendency to find out about new
PCI IDs only when users report them. So let's add all the new reserved Haswell IDs.
NOTE: This is a candidate for stable branches.
Bugzilla: http://bugs.freedesktop.org/show_bug.cgi?id=63701
Signed-off-by: Rodrigo Vivi <[email protected]>
Acked-by: Kenneth Graunke <[email protected]>
|
|
|
|
| |
Signed-off-by: Brian Paul <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Blorp and the hardware blitter can't be used to implement
CopyTexSubImage when the image type is 1D_ARRAY, because of a
coordinate system mismatch (the Y coordinate in the source image is
supposed to be matched up to the Z coordinate in the destination
texture).
The hardware blitter path (intel_copy_texsubimage) contained a perf
debug warning for this case, but it failed to actually fall back. The
blorp path didn't even check.
Fixes piglit test "copyteximage 1D_ARRAY".
Reviewed-by: Eric Anholt <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Commit 045612c (intel: Add an assert for glCopyTexSubImage() being
called on MSAA buffers) added an assertion to intel_copy_texsubimage()
to make sure that multisampling was not in use, based on the
assumption that glCopyTexSubImage() can't legally be used with
multisampling.
However, there is one case where glCopyTexSubImage() can legally be
used with multisampling: when the source buffer is a multisampled
window system buffer. If the source and destination color formats
don't match, the blorp path will fail, so intel_copy_texsubimage()
will be called. In this case, we need intel_copy_texsubimage() to
return false so that we fall back to meta to do the copy. (The
multisampled source buffer won't cause a problem for the meta path,
because it uses glReadPixels, which forces a multisample resolve).
It's still safe to assert that the destination image is
single-sampled, because it's not legal to call glCopyTexSubImage() on
multisampled textures.
Fixes some failures with piglit tests "copyteximage
{1D,2D,CUBE,RECT,2D_ARRAY}" (with "samples=..." argument).
Reviewed-by: Eric Anholt <[email protected]>
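A minimal sketch of the resulting checks (C, illustrative parameter names; a
sample count of 0 means single-sampled, as in Mesa):

    #include <assert.h>
    #include <stdbool.h>

    static bool
    copy_texsubimage_blit_sketch(unsigned src_samples, unsigned dst_samples)
    {
       /* GL never allows CopyTexSubImage into a multisampled texture. */
       assert(dst_samples == 0);

       /* A multisampled window-system source is legal; decline so the meta
        * path (glReadPixels, which resolves) handles the copy. */
       if (src_samples > 0)
          return false;

       /* ... perform the single-sampled blit ... */
       return true;
    }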
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Okay, I now understand why Frank would want to run away. This is
my attempt at fixing the CVE: out-of-bounds access to constants
outside the valid range. This attempt aliases any illegal constant
access to constant 0, which the GL spec allows since the behaviour
is undefined.
A future patch should add some debug output so users can discover this,
but this one needs to be backported to stable branches.
CVE-2013-1872
v2: drop the last hunk, which was a separate fix (now in master);
hopefully fix the indentation.
v3: don't fail piglit. The whole 8/16 dispatch stuff was over
my head and I spent a while figuring it out, but this one is
definitely safe; one extra piglit pass on my Ironlake.
NOTE: This is a candidate for stable branches.
Signed-off-by: Dave Airlie <[email protected]>
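A minimal sketch of the aliasing described above (C, with assumed names rather
than the driver's actual ones):

    /* Any constant index outside [0, nr_params) is aliased to constant 0;
     * the GL spec leaves such accesses undefined, so this is a valid choice. */
    static unsigned
    sanitize_constant_index(int index, unsigned nr_params)
    {
       if (index < 0 || (unsigned) index >= nr_params)
          return 0;
       return (unsigned) index;
    }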
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We were copying the source stencil data onto the destination depth data.
Fixes piglit copyteximage other than 1D_ARRAY.
v2: Fix unintentional dropping of the "don't double-copy for packed
depth/stencil" check. While blorp is only supported on separate
stencil hardware at the moment, hopefully that will change soon.
Review by Jordan.
Reviewed-by: Jordan Justen <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When making v2 of da2880bea05bfc87109477ab026a7f5401fc8f0c, I carefully
checked all of the calls in that commit to see that I'd updated them, but
forgot to update the new calls in the later commits such as
e845c5cf7abce55759501a473459aff3bf25c9ca. As a result, we were getting
Y-tiled temporaries even though the whole point of the temporary was to
untile!
The steady state of the intro scene of lightsmark goes from 13 to 17 fps.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=65154
Reviewed-by: Chad Versace <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When a gl_client_array is created with glColorPointer,
gl_client_array::Normalized is true. This caused the translation from the
gl_client_array's type to a BRW_SURFACEFORMAT to fail an assertion.
Fixes the spinning cube's color in Android 4.2's ApiDemos.apk,
"Graphics > OpenGL ES".
Fixes assertion failure in mesa-demos/src/egl/opengles1/tri_x11 on Haswell
and Ivybridge:
brw_draw_upload.c:287: get_surface_type: Assertion `0' failed.
No Piglit regressions on Haswell.
Note: This is a candidate for the 9.1 branch.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=42182
Issue: AXIA-2954
Reviewed-by: Kenneth Graunke <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
Rather than pointing the surface_state directly at a single
sub-image of the texture for rendering, we now point the
surface_state at the top level of the texture, and configure
the surface_state as needed based on this.
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
|
|
|
|
|
| |
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
|
|
|
|
|
|
|
|
| |
Set the renderbuffer's Depth field to match the texture's
Depth when rendering to a texture.
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
|
|
|
|
|
| |
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
brw->ib.type is reset to -1 at the start of each batch. If there's no
index buffer, it won't get updated to a sensible value, resulting in
_mesa_primitive_restart_index's "Invalid index buffer type" assertion
tripping.
Fixes a regression since 7c87a3b5dac118697a9b67caa7b6d5cab60f316d.
NOTE: This is a candidate for the 9.1 branch (and should be squashed).
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=65195
Signed-off-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In traditional multisampled framebuffer rendering, color samples must be
explicitly resolved via BlitFramebuffer before doing the scaled blitting
of the framebuffer. So, scaled blitting of a multisample framebuffer
takes two separate calls to BlitFramebuffer.
This patch implements the functionality of doing multisampled scaled
resolve using just one BlitFramebuffer call. Important changes involved
in this patch are listed below:
- Use float registers to scale and offset texture coordinates.
- Change offset computation to consider float coordinates.
- Round the scaled coordinates down to nearest integer.
- Modify src texture coordinate clipping to account for scaling (see the
coordinate sketch after this entry).
- Linear filtering is not yet implemented in blorp, so don't use the
blorp engine to do single-sampled scaled blitting.
V3: Fix a nearest filtering issue in scaled blits. Makes the failing piglit
fbo-blit-stretch test and framebuffer_blit_functionality_magnifying_blit.test
in the gles3 CTS pass.
Observed no piglit or gles3 CTS regressions on Sandybridge and Ivybridge with
this patch.
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
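A small sketch of the per-pixel coordinate math from the list above (C,
illustrative variable names): scale and offset in float, then round down to
the nearest integer source texel, matching the nearest-filtering fix in V3.

    #include <math.h>

    static int
    scaled_src_coord(int dst_coord, float scale, float offset)
    {
       float src = (float) dst_coord * scale + offset;
       return (int) floorf(src);   /* round down to the nearest texel */
    }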
|
|
|
|
|
|
|
|
|
|
| |
These changes are required to implement scaled blitting in blorp
in my next patch.
No regressions observed in piglit quick-driver.tests with this patch.
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
|
|
|
|
|
|
|
| |
This reverts commit 98dfd59a0445666060c97b0dccaf0e9f030b547a.
The patch was clearly not Piglit tested, as it caused at least 225
tests to start crashing with assertion failures. That was before my
desktop tanked and the test run died completely.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is my attempt at fixing this, as the CVE is making the RH security team
care enough to make me look at it. (Please, upstream: security fixes are
more important than whatever else you are doing, if for no other reason than
it saves me having to fix stuff I've no real clue about.)
Since Frank's original fix was denied, here is my attempt to just
alias all constants that are out of bounds (< 0 or > nr_params) to constant 0;
hopefully this provides the undefined behaviour idr requires.
CVE-2013-1872
v2: drop the last hunk, which was a separate fix (now in master);
hopefully fix the indentation.
NOTE: This is a candidate for stable branches.
Reviewed-by: Kenneth Graunke <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
| |
Set fs_visitor::params_remap to NULL in the constructor.
This variable was potentially tested in fs_visitor::remove_dead_constants()
before being set.
NOTE: This is a candidate for stable release branches.
Signed-off-by: Frank Henigman <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Pre-Haswell hardware doesn't support an arbitrary restart index, and
instead compares the index buffer value against 0xFF for byte-size
buffers, 0xFFFF for short-size buffers, or 0xFFFFFFFF for unsigned
integer buffers.
OpenGL allows the restart index to be an arbitrary unsigned integer.
When comparing against byte/short types, the index buffer value should
be promoted to a full 32-bit integer before doing the comparison. The
restart index is /not/ supposed to be masked to byte/short size.
This means that with certain restart indexes, the comparison should
always fail. For example, a restart index of 0xF000FFFF should never
match any byte/short index buffer values due to the extra high bits.
We must not enable hardware primitive restart in such a case. For now,
fall back to software primitive restart as it's the simplest fix. In
the future, we could detect restart indexes that will never match and
skip both hardware and software primitive restart.
NOTE: This is a candidate for stable branches.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
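A minimal sketch of the capability test this implies (C, assumed names):
hardware primitive restart is only usable when the app's restart index is
exactly the all-ones value for the index size; anything else takes the
software path.

    #include <stdbool.h>
    #include <stdint.h>

    static bool
    can_use_hw_primitive_restart(uint32_t restart_index, unsigned index_size)
    {
       switch (index_size) {
       case 1:  return restart_index == 0xFF;         /* byte indices */
       case 2:  return restart_index == 0xFFFF;       /* short indices */
       case 4:  return restart_index == 0xFFFFFFFF;   /* int indices */
       default: return false;
       }
    }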
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The code that updates the ctx->Array._RestartIndex derived state mashed
it to 0xFFFFFFFF when GL_PRIMITIVE_RESTART_FIXED_INDEX was enabled
regardless of the index buffer type. It's supposed to be 0xFF for byte,
0xFFFF for short, or 0xFFFFFFFF for integer types.
The new _mesa_primitive_restart_index() helper gets this right.
The hardware appears to compare against the full 32-bit value some of
the time, causing primitive restart not to occur when it should. The
fact that it works some of the time is rather frightening.
Fixes sporadic failures in the ES 3 instanced_arrays_primitive_restart
conformance test when run in combination with other tests.
NOTE: This is a candidate for the 9.1 branch.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
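A standalone sketch of the derived-state rule the new helper encapsulates
(the real function is core Mesa's _mesa_primitive_restart_index(); this C
version is purely illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    static uint32_t
    effective_restart_index(bool fixed_index_enabled, unsigned index_size,
                            uint32_t user_restart_index)
    {
       if (!fixed_index_enabled)
          return user_restart_index;

       switch (index_size) {
       case 1:  return 0xFF;
       case 2:  return 0xFFFF;
       default: return 0xFFFFFFFF;
       }
    }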
|
|
|
|
|
|
|
|
|
|
|
| |
Previously it would fail an assertion in debug builds (though the correct
value was returned in a non-debug build). Marking it as a candidate for
stable even though it has no current consumers in the stable branches, in
case one shows up in a later backport.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=64727
NOTE: This is a candidate for stable branches.
Reviewed-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We were expanding the live range too far, breaking register_coalesce_2()
and compute_to_mrf() on 16-wide shaders. Turning it back on improves
GLB2.7 performance by 0.239355% +/- 0.0850649% (n=398). shader-db stats
are:
total instructions in shared programs: 1627211 -> 1609262 (-1.10%)
instructions in affected programs: 450351 -> 432402 (-3.99%)
While 33 new 16-wide shaders are gained, 70 are lost. Despite that,
tropics (the app that lost the most 16-wide) shows a .41% +/- .16%
(n=7/8, first-run outlier removed) performance improvement on my HSW.
Reviewed-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
|
|
| |
The scheduler didn't know about uniform-type accesses, and if a uniform
access was last in a 16-wide, we'd walk off the end of the array. This
never happened, because we'd never coalesce out all the GRFs, due to a bug
to be fixed in the next commit.
Reviewed-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
| |
Since the introduction of default-to-SARGB8 window system framebuffers,
non-blorp hardware lost blit acceleration for these two paths between the
window system and ARGB8888 textures. Since we shouldn't be doing any
conversion anyway, just compatibility-check the linear variants of the
formats.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=61954
Reviewed-by: Kenneth Graunke <[email protected]>
Tested-by: Tobias Jakobi <[email protected]>
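A minimal sketch of the compatibility test described above, assuming core
Mesa's _mesa_get_srgb_format_linear() helper (the wrapper name and the exact
format-type spelling are illustrative):

    #include <stdbool.h>
    #include "main/formats.h"   /* _mesa_get_srgb_format_linear() */

    static bool
    blit_formats_compatible(gl_format src, gl_format dst)
    {
       /* Compare linear variants so a SARGB8 window-system buffer still
        * matches an ARGB8888 texture. */
       return _mesa_get_srgb_format_linear(src) ==
              _mesa_get_srgb_format_linear(dst);
    }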
|
|
|
|
|
|
|
|
| |
Since the glBitmap() MRT change, it's unused. There was basically no way
to responsibly use this function since MRT was introduced.
Reviewed-and-tested-by: Ian Romanick <[email protected]>
Acked-by: Paul Berry <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
Any 32-bit format got ARGB8888 handling (including, say, GL_RG1616), and
anything else got 16-bit (including, say, GL_R8), which could potentially
hang the GPU by writing out of bounds.
NOTE: This is a candidate for the stable branches.
Reviewed-and-tested-by: Ian Romanick <[email protected]>
Acked-by: Paul Berry <[email protected]>
|
|
|
|
|
|
|
|
|
| |
We'd only hit color buffer 0 even if multiple draw buffers were bound.
NOTE: This is a candidate for the stable branches.
Reviewed-and-tested-by: Ian Romanick <[email protected]>
Acked-by: Paul Berry <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This will ensure that we have resolves if we ever extend this to
glTexSubImage(), and fixes missing image start offset handling.
The texture buffer alloc ended up getting moved up, because we want to
look at the format of the image's actual mt to see if we'll end up
blitting the right thing, in the case of packed depth/stencil uploads.
This is the last caller of intelEmitCopyBlit() on a miptree-wrapped BO.
Reviewed-and-tested-by: Ian Romanick <[email protected]>
Acked-by: Paul Berry <[email protected]>
|
|
|
|
|
|
|
|
|
| |
The previous code was missing depth resolves; that had only been masked by
the fact that we never blitted Y-tiled buffers. The pair of flip args in the new blit
function means that we can just drop the pack->Invert fallback.
Reviewed-and-tested-by: Ian Romanick <[email protected]>
Acked-by: Paul Berry <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
I needed to do this for the PBO blit cases to use intel_miptree_blit().
But this also actually partially fixes a bug in EGLImage handling: We
can't share regions across contexts, because regions have a refcount that
isn't protected by a mutex, and different contexts can be simultaneously
accessed from multiple threads. Now we just need to get regions out of
__DRIImage. There was also a missing use of image->offset in the EGLImage
renderbuffer storage code.
Reviewed-and-tested-by: Ian Romanick <[email protected]>
Acked-by: Paul Berry <[email protected]>
|