The format of the window system framebuffer changed from ARGB8888 to
SARGB8, but we're still supposed to render to it as if it were ARGB8888
unless the user has flipped the GL_FRAMEBUFFER_SRGB switch.
Reviewed-by: Kenneth Graunke <[email protected]>
NOTE: This is a candidate for stable branches.
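For reference, a minimal sketch of the sRGB encoding a SARGB8 destination
receives only when GL_FRAMEBUFFER_SRGB is enabled (an illustration, not the
driver code; with the switch off, values land in memory exactly as they
would for ARGB8888):

    #include <math.h>
    #include <stdint.h>

    /* Standard linear -> sRGB encoding (IEC 61966-2-1), applied per color
     * channel only when GL_FRAMEBUFFER_SRGB is enabled; alpha stays linear. */
    static uint8_t linear_to_srgb8(float c)
    {
        if (c <= 0.0f)
            c = 0.0f;
        else if (c <= 0.0031308f)
            c = 12.92f * c;
        else
            c = 1.055f * powf(c, 1.0f / 2.4f) - 0.055f;
        if (c > 1.0f)
            c = 1.0f;
        return (uint8_t)(c * 255.0f + 0.5f);
    }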
I believe this extension was enabled by accident. As far as I can tell,
there has never been any code in Mesa to actually support it. Not only
that, this extension is only useful in the common-lite profile, and Mesa
implements the common profile.
This "fixes" the piglit test oes_matrix_get-api.
Signed-off-by: Ian Romanick <[email protected]>
Cc: "9.1 9.2" <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
For some reason that I don't yet fully understand, Glaze does not work with
libEGL unless libEGL is linked with -Bsymbolic.[*]
Beyond that specific reason, all of the reasons for which libGL.so is linked
with -Bsymbolic (see the commit history) should also apply here.
[*] The specific behavior I am seeing is that when Glaze calls dlopen for
libEGL.so, ifunc resolvers within Glaze for EGL functions are called before
the dlopen returns. These resolvers cannot succeed, as they need the return
value from dlopen in order to find the functions to resolve to. I don't know
what's causing these resolvers to be called, but I have verified that linking
libEGL with -Bsymbolic causes this problematic behavior to stop.
CC: "9.1 and 9.2" <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
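For context, a self-contained example of the GNU ifunc pattern described
above (hypothetical code, not Glaze's): the resolver runs while the dynamic
linker is relocating the object, which is why it cannot rely on the handle
from a dlopen() that has not returned yet. Builds with GCC on an ELF system.

    #include <stdio.h>

    static int impl(void) { return 42; }

    /* The resolver is run by the dynamic linker when this object is
     * relocated, before ordinary code gets to run. If it needed a handle
     * from a dlopen() still in progress, it could not resolve correctly. */
    static int (*resolve_answer(void))(void)
    {
        return impl;
    }

    int answer(void) __attribute__((ifunc("resolve_answer")));

    int main(void)
    {
        printf("%d\n", answer());
        return 0;
    }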
From the bspec documentation of the SEND instruction:
"destination region cannot cross the 256-bit register boundary."
To avoid violating this restriction when executing SIMD16 texturing
operations (such as those used by blorp), we need to ensure that the
destination of the SEND instruction doesn't exceed 256 bits in size.
An easy way to do this is to set the type of the destination register
to UW (unsigned word), since 16 unsigned words can fit inside a
256-bit register. Fortunately, this has no effect on the sampling
operation, since the sampler always infers the destination data type
from the sampler message rather than from the type of the instruction
operand.
Previously, we did this for texturing operations issued by the vec4
and fs back-ends, but not for blorp. This patch makes blorp use the
same trick.
I haven't observed any behavioural difference on actual hardware due
to this patch, but it avoids a warning from the simulator so it seems
like the right thing to do.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Acked-by: Chad Versace <[email protected]>
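The arithmetic behind the UW trick, as a quick standalone check (not Mesa
code): a GRF register is 256 bits wide, so sixteen 16-bit words fit exactly,
while sixteen 32-bit floats would span two registers.

    #include <assert.h>

    int main(void)
    {
        const unsigned reg_bits   = 256; /* one GRF register           */
        const unsigned simd_width = 16;  /* SIMD16 texturing operation */

        assert(simd_width * 16 == reg_bits);     /* UW destination fits     */
        assert(simd_width * 32 == 2 * reg_bits); /* float would need two    */
        return 0;
    }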
We originally had a path that just did the loop and called
ctx->Driver.AllocTextureImageBuffer(), which I moved into Mesa core. But
we can do better, avoiding incorrect miptree size guesses and later
texture validations, by directly allocating the miptree and setting it
on all the images.
v2: drop debug printf.
Reviewed-by: Chad Versace <[email protected]>
I've rewritten a lot of this file.
Reviewed-by: Chad Versace <[email protected]>
No change in copies during a piglit run, but it's one fewer
first_level != 0 check in our codebase.
Reviewed-by: Chad Versace <[email protected]>
As long as the base level and max level still sit inside the range we had
previously validated, there's no need to reallocate the texture.
I also hope this makes our texture validation logic much more obvious;
it's taken me enough tries to write this change, that's for sure. Reduces
miptree copy count on a piglit run by 1.3%, though the change in the amount
of data moved is much smaller.
Reviewed-by: Chad Versace <[email protected]>
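The test described above amounts to a simple containment check; a
hypothetical sketch (levels_already_validated is illustrative, not the
actual Mesa helper):

    #include <stdbool.h>

    /* Reallocation can be skipped when the levels now being requested are
     * already covered by the range the miptree was validated for. */
    static bool levels_already_validated(unsigned mt_first_level,
                                         unsigned mt_last_level,
                                         unsigned base_level,
                                         unsigned max_level)
    {
        return base_level >= mt_first_level && max_level <= mt_last_level;
    }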
Given that a teximage that calls us with this flag set will immediately
proceed to allocate the other levels, we can probably just go ahead and
allocate those levels now.
Reduces miptree copies in piglit by about 0.05%.
Reviewed-by: Chad Versace <[email protected]>
If the caller shows up with GL_BASE_LEVEL != 0, it doesn't mean that the
texture will have that nonzero base level over the course of its lifetime;
it means that the caller is filling the texture from the bottom up for
some reason (one could imagine demand-loading detailed mipmap levels at
runtime, for example). If we allocate for just the current base level,
then when the caller comes along with the next level up, we'll have to
allocate a new miptree and copy all of our bits out of the first one.
Reviewed-by: Chad Versace <[email protected]>
Let's say you started allocating your 2D texture with level 2 of a tree as
a 1x1 image. The driver doesn't know if this means that level 0 is 4x4 or
4x1 or 1x4, so we would just allocate a single 1x1 and let it get copied
into the real location at texture validation time later.
Since this is just a temporary allocation that *will* get copied, the
extra space used by simply taking the normal path (which happens to produce
a 4x1 level 0, a 2x1 level 1, and a 1x1 level 2) is the right way to go,
as it reduces complexity in the normal case.
No change in miptree copies over the course of a piglit run.
Reviewed-by: Chad Versace <[email protected]>
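The ambiguity comes from the standard mipmap minification rule; a small
standalone illustration:

    /* Size of a mip level derived from a base size: halve and clamp to 1. */
    static unsigned minify(unsigned base_size, unsigned level)
    {
        unsigned size = base_size >> level;
        return size ? size : 1;
    }

    /* minify(4, 2) == 1 and minify(1, 2) == 1: a 1x1 image at level 2 is
     * consistent with a 4x4, 4x1, or 1x4 level 0, so the driver cannot
     * infer the base size from that single image alone. */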
This has no effect currently, because intel_finalize_mipmap_tree() always
makes mt->first_level == tObj->BaseLevel.
The change I made before to handle it
(b1080cfbdb0a084122fcd662cd27b4748c5598fd) got very close to working, but
after fixing some unrelated bugs in the series, it still left
tex-miplevel-selection producing errors when testing textureLod(). The
problem is that for explicit LODs, the sampler's LOD clamping is ignored,
and only the surface's MIP clamping is respected. So we need to use
surface mip clamping, which applies on top of the sampler's mip clamping,
so the sampler change gets backed out.
Now actually tested with a non-regressing series producing a non-zero
computed baselevel.
Reviewed-by: Chad Versace <[email protected]>
We know that the object's mt is equal to the firstimage's mt because it's
gone through intel_finalize_mipmap_tree(). Saves a lookup of firstimage
on pre-gen7.
v2: Merge in the warning fix that appeared later in the series (noted by
Chad)
Reviewed-by: Chad Versace <[email protected]>
Fixes "Deference before null check" defect reported by Coverity.
Signed-off-by: Vinson Lee <[email protected]>
Reviewed-by: Vadim Girlin <[email protected]>
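For reference, a generic example of this defect class (not the actual r600g
code): the pointer is dereferenced before the NULL check, so the check can
never protect the dereference.

    #include <stddef.h>

    struct buf { int len; };

    static int buf_len(const struct buf *b)
    {
        int len = b->len;   /* dereference ...                 */
        if (b == NULL)      /* ... before the null check       */
            return 0;       /* the fix is to check b first     */
        return len;
    }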
Accidentally broken by the consolidation.
It seems that the case with OpenCL enabled was forgotten.
Signed-off-by: Marek Olšák <[email protected]>
This has been very useful for tracking down bugs in libdrm.
The *_PRINT_TEXDEPTH environment variables were probably never used,
so I removed them.
The function r600_choose_tiling is new and needs a review.
The only change in functionality is that it enables 2D tiling for compressed
textures on SI. It was probably accidentally turned off.
v2: don't make scanout buffers linear
Textures can never have target==PIPE_BUFFER.
Also slightly optimize r600_buffer_map_sync_with_rings.
and the util_format_s3tc_init calls too.
More work needs to be done for this to be entirely shared with r600g.
I'm just trying to share r600_texture.c now.
The reason I put the implementation in si_descriptors.c is that the emit
function was already there.
This will be used in the next commit.
r600_texture.c is one step closer to r600g.
It's always 0.
This doesn't fix any known issue (I haven't run piglit with this yet),
but the code was obviously completely wrong. It looks like it was copy-pasted from CMP.
Reviewed-by: Tom Stellard <[email protected]>
v2: use CMP on drivers without native integer support
This patch fixes the MSVC build error introduced with commit
b2e327e08f8519da131dd382adcc99240d433404.
api_arrayelt.c
src\mesa\main/mtypes.h(1809) : error C2061: syntax error : identifier 'uint32_t'
src\mesa\main/mtypes.h(1810) : error C2059: syntax error : '}'
src\mesa\main/mtypes.h(1825) : error C2079: 'Minimum' uses undefined union 'gl_perf_monitor_counter_value'
src\mesa\main/mtypes.h(1828) : error C2079: 'Maximum' uses undefined union 'gl_perf_monitor_counter_value'
Signed-off-by: Vinson Lee <[email protected]>
If the argument to emit_bool_to_cond_code() is an ir_expression, we
loop over the operands, calling accept() on each of them, which
generates assembly code to compute that subexpression. We then emit
one or two final instructions that perform the top-level operation on
those operands.
If it's not an expression (say, a boolean-valued variable), we simply
call accept() on the whole value.
In commit 80ecb8f1 (i965/fs: Avoid generating extra AND instructions on
bool logic ops), Eric made logic operations jump out of the expression
path to the non-expression path.
Unfortunately, this meant that we would first accept() the two operands,
skip generating any code that used them, then accept() the whole
expression, generating code for the operands a second time.
Dead code elimination would always remove the first set of redundant
operand assembly, since nothing actually used them. But we shouldn't
generate it in the first place.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
This appears in Volume 1 Part 1 of the Sandybridge PRM on page 48.
Signed-off-by: Kenneth Graunke <[email protected]>
Ironlake's counters are always enabled; userspace can simply send a
MI_REPORT_PERF_COUNT packet to take a snapshot of them. This makes it
easy to implement.
The counters are documented in the source code for the intel-gpu-tools
intel_perf_counters utility.
v2: Adjust for core data structure changes. Add a table mapping buffer
object offsets to exposed counters (which changes each generation).
Finally, add report ID assertions to sanity check the BO layout
(thanks to Carl Worth).
v3: Update for core BeginPerfMonitor hook changes (requested by Brian).
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
This provides an interface for applications (and OpenGL-based tools) to
access GPU performance counters. Since the exact performance counters
available vary between vendors and hardware generations, the extension
provides an API the application can use to get the names, types, and
minimum/maximum values of all available counters. Counters are also
organized into groups.
Applications create "performance monitor" objects, select the counters
they want to track, and Begin/End monitoring, much like OpenGL's query
API. Multiple monitors can be in flight simultaneously.
v2: Pass ctx to all driver hooks (suggested by Christoph), and attempt
to fix overallocation of bitsets (caught by Christoph). Incomplete.
v3: Significantly rework core data structures. Store counters in groups
rather than in a global list. Use their array index in the group's
counter list as the ID rather than trying to store a globally unique
counter ID. Use bitsets for active counters within a group, and
also track which groups are active so that's easy to query.
v4: Remove _mesa_ prefix on static functions; detect out of memory
conditions in new_performance_monitor(); make BeginPerfMonitor hook
return a boolean rather than setting m->Active or raising an error.
Switch to GLuint/unsigned for NumGroups, NumCounters, and
MaxActiveCounters (which also means switching a bunch of temporary
variable types). All suggested by Brian Paul. Also, remove
commented out code at the bottom of the block. Finally, fix the
dispatch sanity test (noticed by Ian Romanick).
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Brian Paul <[email protected]> [v3]
Reviewed-by: Ian Romanick <[email protected]>
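A rough sketch of how an application might drive the extension, assuming a
current GL context that exposes GL_AMD_performance_monitor and its entry
points (otherwise fetch them with eglGetProcAddress or glXGetProcAddress);
error handling is omitted:

    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>

    static void sample_one_counter(void)
    {
        GLint num_groups = 0, num_counters = 0, max_active = 0, bytes = 0;
        GLuint group, counter, monitor, available = 0, data[16];

        /* Enumerate the first group and its first counter. */
        glGetPerfMonitorGroupsAMD(&num_groups, 1, &group);
        if (num_groups < 1)
            return;
        glGetPerfMonitorCountersAMD(group, &num_counters, &max_active,
                                    1, &counter);
        if (num_counters < 1)
            return;

        /* Create a monitor, select the counter, and bracket the workload. */
        glGenPerfMonitorsAMD(1, &monitor);
        glSelectPerfMonitorCountersAMD(monitor, GL_TRUE, group, 1, &counter);
        glBeginPerfMonitorAMD(monitor);
        /* ... issue the draw calls to be measured ... */
        glEndPerfMonitorAMD(monitor);

        /* Poll until the result is available, then read it back. */
        while (!available)
            glGetPerfMonitorCounterDataAMD(monitor,
                                           GL_PERFMON_RESULT_AVAILABLE_AMD,
                                           sizeof(available), &available,
                                           &bytes);
        glGetPerfMonitorCounterDataAMD(monitor, GL_PERFMON_RESULT_AMD,
                                       sizeof(data), data, &bytes);
        glDeletePerfMonitorsAMD(1, &monitor);
    }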
This is better than overriding the extension enable based on the
language version; it's robust against shaders that do:
#version 140
#extension GL_ARB_uniform_buffer_object : disable
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Explicit attribute locations are supported with GLSL 3.30, GLSL ES 3.00,
or "#extension GL_ARB_explicit_attrib_location: enable". Using a helper
function makes it easy to check for this.
This enables support in GLSL 3.30, which was previously missing.
Previously, we overrode the extension enable flag for ES 3.00. This is
not robust against a shader such as:
#version 330
#extension GL_ARB_explicit_attrib_location : disable
Disabling extensions should not remove core language functionality.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
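The check the helper performs boils down to something like this (a
hypothetical sketch, not the actual Mesa helper; the name
allow_explicit_attrib_location is made up):

    #include <stdbool.h>

    /* Explicit attribute locations are allowed if the extension is enabled
     * or the language version guarantees them (GLSL 3.30, GLSL ES 3.00). */
    static bool allow_explicit_attrib_location(bool is_es, unsigned version,
                                               bool ext_enabled)
    {
        return ext_enabled ||
               (is_es && version >= 300) ||
               (!is_es && version >= 330);
    }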
Every caller passed true.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Hardware requires the magnitude of the largest component to not exceed
1; brw_cubemap_normalize ensures that this is the case.
Unfortunately, we would previously multiply the array index for cube
arrays by the normalization factor. The incorrect array index would then
cause the sampler to attempt to access either the wrong cube, or memory
outside the cube surface entirely, resulting in garbage rendering or in
the worst case, hangs.
Alter the normalization pass to only multiply the .xyz components.
Fixes broken rendering in the arb_texture_cube_map_array-cubemap piglit
test, which was recently adjusted to provoke this behavior.
V2: Fix indent.
Signed-off-by: Chris Forbes <[email protected]>
Cc: "9.2" [email protected]
Reviewed-by: Eric Anholt <[email protected]>
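The effect of the corrected pass, shown on the CPU purely for illustration
(the real pass rewrites the shader IR): only the .xyz components are scaled,
never the array index.

    #include <math.h>

    /* c[0..2] are the cube face coordinates, c[3] is the array layer. */
    static void normalize_cube_array_coord(float c[4])
    {
        float m = fmaxf(fabsf(c[0]), fmaxf(fabsf(c[1]), fabsf(c[2])));
        if (m > 0.0f) {
            c[0] /= m;
            c[1] /= m;
            c[2] /= m;
        }
        /* c[3] is the cube array index and must NOT be divided. */
    }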
Compress empty triangles (don't emit more than one in a row) and
never emit empty triangles if we have already generated a triangle
covering a non-null area. We can't skip all null triangles,
because c_primitives expects the ones that were generated from vertices
exactly at the clipping plane to be emitted.
Signed-off-by: Zack Rusin <[email protected]>
Reviewed-by: José Fonseca <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
We need to count the clipper primitives before the rasterizer
discards the ones it considers to be null.
Signed-off-by: Zack Rusin <[email protected]>
Reviewed-by: José Fonseca <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
We need to subdivide triangles if either of the dimensions is
larger than the max edge length, not when both of them are larger.
Signed-off-by: Zack Rusin <[email protected]>
Reviewed-by: José Fonseca <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
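A minimal sketch of the corrected test (hypothetical code, not the actual
clipper code): subdivide when either edge exceeds the limit, where the old
code required both to exceed it.

    #include <stdbool.h>

    /* Subdivide when EITHER dimension exceeds the maximum edge length;
     * the previous logic only subdivided when both did. */
    static bool needs_subdivision(float dx, float dy, float max_edge)
    {
        return dx > max_edge || dy > max_edge;
    }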