| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Normally when a built-in array (such as gl_ClipDistance) is
redeclared, we call get_variable_being_redeclared() to do the
redeclaration, and it in turn calls check_builtin_array_max_size() to
make sure that the redeclared array size isn't too large.
However when a built-in array is redeclared as part of redeclaring
gl_in, we don't call get_variable_being_redeclared() (since the
individual built-ins aren't each represented by their own ir_variable
anymore). So we need to add an explicit call to
check_builtin_array_max_size() to make sure the new array size isn't
too large.
Note: at the moment this is redundant with a test that's done at link
time, so there's no change to piglit results. But the patch that
follows will prevent link errors from being reported if gl_PerVertex
isn't used, so in order to prevent that patch from causing
regressions, we need to add the compile check now. Besides, it's
nicer to report this error at compile time anyhow.
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The queries GEOMETRY_VERTICES_OUT, GEOMETRY_INPUT_TYPE, and
GEOMETRY_OUTPUT_TYPE (defined by GL 3.2) differ from the corresponding
queries in ARB_geometry_shader4 in the following ways:
- They use different enum values
- They can only be queried; they cannot be set.
- Attempting to query them yields INVALID_OPERATION if the program is
not linked, or lacks a geometry shader.
This patch switches us over from the ARB_geometry_shader4 behaviour to
the GL 3.2 behaviour.
Fixes piglit test query-gs-prim-types.
v2: Improve comment above has_core_gs.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
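For illustration, a minimal stand-alone sketch of the query rule described above; the struct and function names are invented and do not correspond to the actual Mesa query code:

    #include <stdbool.h>
    #include <stdio.h>

    /* GL 3.2 behaviour sketched above: GEOMETRY_VERTICES_OUT and friends can
     * only be queried, and only from a linked program that contains a
     * geometry shader; otherwise the query yields INVALID_OPERATION. */
    struct program_info {            /* hypothetical stand-in */
       bool link_status;
       bool has_geometry_shader;
       int  vertices_out;
    };

    /* Returns true and writes the value, or false for INVALID_OPERATION. */
    static bool
    query_geometry_vertices_out(const struct program_info *p, int *value)
    {
       if (!p->link_status || !p->has_geometry_shader)
          return false;
       *value = p->vertices_out;
       return true;
    }

    int main(void)
    {
       struct program_info unlinked = { false, true, 4 };
       int v;
       printf("query on unlinked program ok? %d\n",
              query_geometry_vertices_out(&unlinked, &v));   /* prints 0 */
       return 0;
    }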
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When program_resource_visitor visits variables that were created by
lower_named_interface_blocks, it needs to do extra work to undo the
effects of lower_named_interface_blocks and construct the proper API
names.
Fixes piglit test
spec/glsl-1.50/execution/interface-blocks-api-access-members.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
These variables will need to be treated specially by
program_resource_visitor, so that they can be addressed through the
API using their interface block name (and array index, for interface
block arrays).
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
Fixes piglit tests:
- interface-block-interpolation-{array,named,unnamed}
- glsl-1.50-interface-block-centroid {array,named,unnamed}
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
|
|
|
|
|
| |
Later patches will use this information to do proper error checking of
interpolation qualifiers that appear inside of interface blocks.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
|
|
|
|
|
| |
In future patches, we will need this in order to interpret
interpolation qualifiers that appear inside interface blocks.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
|
|
|
|
|
| |
Future patches will need to call this function when there isn't an
ir_variable present to refer to.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Although in principle there is no hardware limitation that prevents
gl_MaxGeometryInputComponents from being set to 128 on Gen7, we have
the following limitations in the vec4 compiler back end:
- Registers assigned to geometry shader inputs can't be spilled or
later re-used for any other purpose.
- The last 16 registers are set aside for the "MRF hack", meaning they
can only be used to send messages, and not for general purpose
computation.
- Up to 32 registers may be reserved for push constants, even if there
is sufficient register pressure to make this impractical.
A shader using 128 geometry input components, and having an input type
of triangles_adjacency, would use up:
- 1 register for r0 (which holds URB handles and various pieces of
control information).
- 1 register for gl_PrimitiveID.
- 102 registers for geometry shader inputs (17 registers per input
vertex, assuming DUAL_INSTANCED dispatch mode and allowing for one
register of overhead for gl_Position and gl_PointSize, which are
present in the URB map even if they are not used).
- Up to 32 registers for push constants.
- 16 registers for the "MRF hack".
That's a total of 152 registers, which is well over the 128 registers
the hardware supports.
Fortunately, the GLSL 1.50 spec allows us to reduce
gl_MaxGeometryInputComponents to 64. Doing that frees up 48
registers, bringing the total down to 104 registers, leaving 24
registers available to do computation.
Fixes piglit test
spec/glsl-1.50/execution/geometry/max-input-components.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
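As a stand-alone cross-check of the arithmetic above (the constants come from the commit message itself, not from driver headers):

    #include <stdio.h>

    int main(void)
    {
       const int r0             = 1;   /* URB handles and control information */
       const int primitive_id   = 1;   /* gl_PrimitiveID                      */
       const int vertices       = 6;   /* triangles_adjacency input           */
       const int regs_per_vert  = 17;  /* 16 regs for 128 components, plus 1
                                          reg of gl_Position/gl_PointSize
                                          overhead                            */
       const int push_constants = 32;  /* worst case                          */
       const int mrf_hack       = 16;  /* registers set aside for sends       */

       int total = r0 + primitive_id + vertices * regs_per_vert +
                   push_constants + mrf_hack;
       printf("GRFs needed: %d of 128\n", total);                   /* 152 */

       /* With gl_MaxGeometryInputComponents reduced to 64, each vertex needs
        * 8 fewer registers: 6 * 8 = 48 registers freed. */
       printf("GRFs needed at 64 components: %d of 128\n", total - 48); /* 104 */
       return 0;
    }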
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is similar to what we do for 16-wide vs 8-wide fragment shaders.
First we try compiling the geometry shader in DUAL_OBJECT mode. If we
can't do that without spilling, we fall back on DUAL_INSTANCED mode,
which should require less spilling (since it uses an interleaved
layout of payload registers).
In an ideal world we'd fall back to SINGLE mode, which would allow us
to interleave general-purpose registers too (resulting in even less
likelihood of spilling). But at the moment, the vec4 generator and
visitor classes don't have the infrastructure to interleave general
purpose registers, so DUAL_INSTANCED is the best we can do.
As a side benefit this paves the way for implementing instanced
geometry shaders (which are incompatible with DUAL_OBJECT mode).
Since most geometry shaders used in piglit testing are small,
DUAL_INSTANCED mode won't get exercised very much in a normal piglit
run. To force DUAL_INSTANCED mode to be used for all geometry
shaders, set INTEL_DEBUG=nodualobj.
Reviewed-by: Eric Anholt <[email protected]>
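A minimal sketch of the try/fall-back flow described above; the function and type names are invented (the real logic lives in the brw vec4 GS compile path), and the inner compile is stubbed out:

    #include <stdbool.h>
    #include <stddef.h>

    enum dispatch_mode { DISPATCH_DUAL_OBJECT, DISPATCH_DUAL_INSTANCED };

    /* Stub standing in for the real back-end compile.  Pretend DUAL_OBJECT
     * compilation would need to spill, so that the fallback path is taken. */
    static const unsigned *
    compile_gs(enum dispatch_mode mode, bool allow_spilling, size_t *size)
    {
       static const unsigned dummy_code[4];
       if (mode == DISPATCH_DUAL_OBJECT && !allow_spilling)
          return NULL;
       *size = sizeof(dummy_code);
       return dummy_code;
    }

    static const unsigned *
    compile_gs_with_fallback(bool debug_no_dual_object, size_t *size)
    {
       if (!debug_no_dual_object) {
          /* First attempt: DUAL_OBJECT mode, giving up instead of spilling. */
          const unsigned *code = compile_gs(DISPATCH_DUAL_OBJECT, false, size);
          if (code)
             return code;
       }
       /* Fallback: DUAL_INSTANCED interleaves payload registers and so is far
        * less likely to spill; allow spilling if it still has to. */
       return compile_gs(DISPATCH_DUAL_INSTANCED, true, size);
    }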
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Geometry shaders that run in "DUAL_INSTANCED" mode store their inputs
in vec4's. This means that when compiling gl_PointSize input
swizzling (a MOV instruction which uses a geometry shader input as
both source and destination), we need to do two things:
- Set force_writemask_all to ensure that the MOV happens regardless of
which channels are enabled.
- Set the source register region to <4;4,1> (instead of <0;4,1>) to
satisfy register region restrictions.
v2: move the source register region fixup to the top of
vec4_generator::generate_vec4_instruction(), so that it applies to all
instructions rather than just MOV.
Reviewed-by: Eric Anholt <[email protected]>
|
|
|
|
|
|
| |
Not yet enabled.
Reviewed-by: Eric Anholt <[email protected]>
|
|
|
|
|
|
|
|
|
| |
In future patches, this will allow us to first try compiling a
geometry shader in DUAL_OBJECT mode (which is more efficient but uses
more registers) and then if spilling is required, fall back on
DUAL_INSTANCED mode.
Reviewed-by: Eric Anholt <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
Otherwise the scheduler would be invoked with prog_data->total_grf ==
0, causing havoc.
In a future patch, this will allow us to try compiling a geometry
shader in DUAL_OBJECT mode with spilling disabled, and then fall back
to DUAL_INSTANCED mode if that failed.
Reviewed-by: Eric Anholt <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
| |
When geometry shaders are operated in "single" or "dual instanced"
mode, a single set of geometry shader inputs is interleaved into the
thread payload (with each payload register containing a pair of
inputs) in order to save register space.
This patch modifies vec4_visitor::lower_attributes_to_hw_regs so that
it can handle the interleaved format.
Reviewed-by: Eric Anholt <[email protected]>
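A rough illustration of the interleaved layout, under the assumption that two consecutive vec4 inputs share one 32-byte payload register (the helper and field names are made up; the real mapping is done in vec4_visitor::lower_attributes_to_hw_regs):

    /* Maps an attribute slot index to a payload register and a byte offset
     * within it for the interleaved (SINGLE / DUAL_INSTANCED) layout. */
    struct hw_attr_loc {
       int reg;            /* GRF number           */
       int subreg_offset;  /* byte offset into GRF */
    };

    static struct hw_attr_loc
    interleaved_attr_location(int payload_base_reg, int attr_slot)
    {
       struct hw_attr_loc loc;
       loc.reg           = payload_base_reg + attr_slot / 2; /* vec4 pair per reg */
       loc.subreg_offset = (attr_slot % 2) * 16;             /* second vec4 at +16B */
       return loc;
    }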
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
All geometry shaders begin with this instruction:
mov(1) g0.2<1>:ud 0x0:ud { align1 }
which sets up GRF0 properly for scratch reads and writes. Since this
instruction has a SIMD size of 1, it will only have an effect if the
first channel is enabled. In practice, the hardware seems to always
dispatch geometry shaders with the first channel enabled, but I can't
find anything in the docs to guarantee that.
So to be on the safe side, set force_writemask_all on the instruction,
which guarantees that it will have the desired effect regardless of
which channels are enabled.
Reviewed-by: Kenneth Graunke <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When lower_named_interface_blocks lowers a built-in interface block
member to an ir_variable, it needs to set explicit_location in the
ir_variable. Otherwise the linker gets confused and treats the
variable as a generic varying.
Fixes the following piglit tests, which were regressed by commit
63974c0 (glsl: Simplify the interface to
link_invalidate_variable_locations):
- clip-distance-bulk-copy
- clip-distance-in-bulk-read
- clip-distance-in-explicitly-sized
- clip-distance-in-param
- clip-distance-in-values
- core-inputs
- gs-redeclares-both-pervertex-blocks
- gs-redeclares-pervertex-in-only
- redeclare-pervertex-subset-vs-to-gs
- unsized-in-named-interface-block-gs
- unsized-in-named-interface-block-multiple
- unsized-in-unnamed-interface-block-gs
- unsized-in-unnamed-interface-block-multiple
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=70820
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
|
|
| |
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
|
|
|
|
| |
This will allow us to re-use it for precompiling geometry shaders.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
This should never have been in the program key in the first place,
since it's determined by the shader source, not by GL state. Change
the code to just refer to gl_program::UsesClipDistanceOut directly.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
| |
This will make it easier for back-ends to share code between geometry
shader and vertex shader compilation. Also, it is renamed to
"UsesClipDistanceOut" to clarify that (a) in geometry shaders, it
refers to the gl_ClipDistance output rather than the gl_ClipDistance
input, and (b) it is irrelevant in fragment shaders.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Since gl_ClipDistance is lowered from an array of floats to an array
of vec4's during compilation, transform feedback has special logic to
keep track of the pre-lowered array size so that attempting to perform
transform feedback on gl_ClipDistance produces a result with the
correct size.
Previously, this special logic always consulted the vertex shader's
size for gl_ClipDistance. This patch fixes it so that it uses the
geometry shader's size for gl_ClipDistance when a geometry shader is
in use.
Fixes piglit test spec/glsl-1.50/transform-feedback-type-and-size.
v2: Change the type of LastClipDistanceArraySize to "unsigned", and
clarify the comment above it.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
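The gist of the fix, as a sketch with invented names (the patch itself stores the value in a field called LastClipDistanceArraySize):

    /* Transform feedback must report gl_ClipDistance with the array size
     * declared in the last shader stage before the fragment shader. */
    struct linked_program {                 /* hypothetical stand-in */
       int      has_geometry_shader;
       unsigned vs_clip_distance_array_size;
       unsigned gs_clip_distance_array_size;
    };

    static unsigned
    last_clip_distance_array_size(const struct linked_program *prog)
    {
       return prog->has_geometry_shader ? prog->gs_clip_distance_array_size
                                        : prog->vs_clip_distance_array_size;
    }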
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We've always overridden
ctx->Const.{Vertex,Fragment}Program.MaxTextureImageUnits to reflect
the number of texture image units supported by the hardware (rather
than using the default values assigned by Mesa core) so it seems
sensible to do that for GeometryProgram.MaxTextureImageUnits too. We
set it to 0 if geometry shaders aren't supported.
Once that is done, we can just unconditionally add
GeometryProgram.MaxTextureImageUnits to MaxCombinedTextureImageUnits.
Fixes piglit test "spec/glsl-1.50/built-in
constants/gl_MaxCombinedTextureImageUnits".
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
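A simplified sketch of the constant setup described above (field names are illustrative, not the actual gl_constants layout):

    struct stage_limits {                      /* hypothetical stand-in */
       int vs_units, gs_units, fs_units, combined_units;
    };

    static void
    setup_texture_image_unit_limits(struct stage_limits *c,
                                    int hw_units_per_stage,
                                    int supports_geometry_shaders)
    {
       c->vs_units = hw_units_per_stage;
       c->fs_units = hw_units_per_stage;
       /* 0 when geometry shaders are unsupported, so the sum below needs no
        * special case. */
       c->gs_units = supports_geometry_shaders ? hw_units_per_stage : 0;
       c->combined_units = c->vs_units + c->gs_units + c->fs_units;
    }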
|
|
|
|
| |
Signed-off-by: Rob Clark <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The encoding of constant, relative, and relative-const src registers is
a bit more complex than originally thought, which gives an extra bit to
encode const reg # at the expense of taking a bit from relative offset.
In most cases a3xx seems to actually use a scheme whereby it can encode
an extra bit for const register. You have three possible encodings in
thirteen bits:
register: (11 bits for N.c)
00........... rN.c
relative: (10 bits for N)
010.......... r<a0.x + N>
011.......... c<a0.x + N>
const: (12 bits for N.c)
1............ cN.c
Which means we can deal w/ more consts than previously thought.
Signed-off-by: Rob Clark <[email protected]>
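A stand-alone decoder for the 13-bit layout quoted above; the split of N.c into register number and component is an assumption (component in the low two bits), and none of these names come from the freedreno source:

    #include <stdio.h>

    static void
    decode_src13(unsigned field)          /* field holds the 13 encoding bits */
    {
       if (field & (1u << 12)) {
          /* 1............  const, 12 bits of N.c */
          unsigned nc = field & 0xfff;
          printf("c%u.%c\n", nc >> 2, "xyzw"[nc & 3]);
       } else if (field & (1u << 11)) {
          /* 010 / 011  relative, 10 bits of N; one bit selects GPR vs const */
          unsigned n = field & 0x3ff;
          printf("%c<a0.x + %u>\n", (field & (1u << 10)) ? 'c' : 'r', n);
       } else {
          /* 00...........  plain GPR, 11 bits of N.c */
          unsigned nc = field & 0x7ff;
          printf("r%u.%c\n", nc >> 2, "xyzw"[nc & 3]);
       }
    }

    int main(void)
    {
       decode_src13((1u << 12) | 0x123);   /* a const register */
       decode_src13((1u << 11) | 7);       /* prints r<a0.x + 7> */
       return 0;
    }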
|
|
|
|
| |
Signed-off-by: Rob Clark <[email protected]>
|
|
|
|
|
|
| |
Fail more gracefully when buffer allocation/import fails.
Signed-off-by: Rob Clark <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
d3d10 requires that cube corners are filtered with accurate weights (that
is, the weight of the non-existing corner texel should be evenly distributed
to the other 3 texels). OpenGL does not require this (but recommends it).
This requires us to use different filtering code, since we need per-texel
weights which our 2d lerp doesn't (and can't) do. And of course the (now
per element) weights need to be adjusted too for it to work.
Invoke the new filtering code whenever there's an edge, to keep things
simpler: it works for edges too, not just corners, though of course it's only
needed for corners.
More ugly code for not much gain but at least a hacked up cubemap demo
shows very nice corners now... Not sure yet if and how this should be
configurable...
v2: incorporate feedback from Jose, only use the special corner filtering
code when there's a corner, not when there's only an edge (the corner
filtering code is slower, though a perf difference was only measurable when
always forcing the edge code). Plus some minor style fixes.
Reviewed-by: Jose Fonseca <[email protected]>
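A scalar sketch of the d3d10 corner rule described above, purely for illustration (not the llvmpipe filtering code): the missing corner texel's bilinear weight is split evenly among the three texels that do exist.

    /* texel[] and weight[] hold the four bilinear taps; 'missing' is the index
     * of the corner tap that has no texel.  Its weight is redistributed evenly
     * to the other three taps. */
    static float
    filter_cube_corner(const float texel[4], const float weight[4], int missing)
    {
       float share  = weight[missing] / 3.0f;
       float result = 0.0f;
       for (int i = 0; i < 4; i++) {
          if (i != missing)
             result += texel[i] * (weight[i] + share);
       }
       return result;
    }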
|
|
|
|
|
|
|
|
|
| |
No driver uses it any more, and it's been replaced by megadrivers.
v2: Remove always-on conditional for NEED_LIBPROGRAM (review by Emil)
Reviewed-by: Matt Turner <[email protected]> (v1)
Reviewed-by: Emil Velikov <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
| |
v2: drop dridir now that it's unused.
v3: Fix linking after rebase when building just swrast from classic but a
drm-using gallium driver.
v4: Consistently put spaces around += in the updated Makefile.am block.
v5: Set a global driverAPI variable so loaders don't have to update to
createNewScreen2() (though they may want to for thread safety).
Reviewed-by: Matt Turner <[email protected]> (v3)
Reviewed-by: Emil Velikov <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This required some reordering of headers to ensure that the symbol name
redefines happened before any prototypes.
v2: drop dridir now that it's unused.
v3: Consistently put spaces around += in the updated Makefile.am blocks.
v4: Set a global driverAPI variable so loaders don't have to update to
createNewScreen2() (though they may want to for thread safety).
Reviewed-by: Matt Turner <[email protected]> (v2)
Reviewed-by: Emil Velikov <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
i915 has symbols for formerly-shared code that conflict with i965, so we
define them away using gen-symbol-redefs.py. Options considered:
- This option. Downsides: The symbols in profiling and debugging don't
match the source. The symbol list may change in the future and we won't
notice without manually running the tool again.
- Use objcopy --localize-hidden to automatically demote our symbols to
locals. This didn't work on i965 due to c++ weak symbols (which can't
be localized), but could work on i915. We could do it on i915 only, but
it does produce libtool warnings at link time due to libtool not knowing
if the resulting .o file is safe to link (stupid libtool). Plus you end
up with different symbols of the same name, which is confusing for
debugging too. On the other hand, no future symbol conflicts long term.
- Write our own libelf tool that handles c++ weak symbols like we want and
apply it to all drivers. All the downsides of above, but applies
uniformly across drivers.
- Edit the files to just rename all the i915 or i965 symbols that
conflict. There are on the order of 100 that have a prefix we used to
share, so it would take a bit of typing. Fewest downsides, but still
can have conflicts long term.
Ultimately, this is the least invasive change at the moment, and we can
see if the "more symbol conflicts appear later" thing is a real concern or
not.
Note that the ability to compile a version of i915 without INTEL_DEBUG env
support is dropped; INTEL_DEBUG is too useful to make optional.
v2: drop dridir now that it's unused.
v3: Consistently put spaces around += in the updated Makefile.am block.
v4: Set a global driverAPI variable so loaders don't have to update to
createNewScreen2() (though they may want to for thread safety).
Reviewed-by: Matt Turner <[email protected]> (v2)
Reviewed-by: Emil Velikov <[email protected]>
|
|
|
|
|
| |
Acked-by: Matt Turner <[email protected]>
Reviewed-by: Emil Velikov <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
v2: drop dridir now that it's unused.
v3: Consistently put spaces around += in the updated Makefile.am block.
v4: Set a global driverAPI variable so loaders don't have to update to
createNewScreen2() (though they may want to for thread safety).
v5: Fix missed public symbol in nouveau. (caught by Emil)
Reviewed-by: Matt Turner <[email protected]> (v2)
Reviewed-by: Emil Velikov <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously, we split things such that Mesa core is in libdricore, exposing
the whole Mesa core interface in the global namespace, and the i965_dri.so
code all links against that. Along with polluting the application namespace
terribly, it requires extra PLT indirections and prevents LTO.
Instead, we can build all of the driver contents into the same .so with
just a few symbols exposed to be referenced from the actual driver .so
file, allowing LTO and reducing our exposed symbol count massively.
FPS improvement on GLB2.7 with INTEL_NO_HW=1: 2.61061% +/- 1.16957% (n=50)
(without LTO, just the PLT reductions from this commit)
Note that the X Server requires commit
7ecfab47eb221dbb996ea6c033348b8eceaeb893 to successfully load this driver!
v2: Set a global driverAPI variable so loaders don't have to update to
createNewScreen2() (though they may want to for thread safety).
v3: Drop AM_CPPFLAGS addition (Emil pointed out I'd missed some cflags
that would be necessary, though only if we actually relied on them).
v4: Fix install with DESTDIR set.
Reviewed-by: Matt Turner <[email protected]> (v1)
Reviewed-by: Emil Velikov <[email protected]> (v2)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
As we move to megadrivers, we are unable to build multiple drivers with
the same public global symbol per driver (Think an X Server with an intel
and a nouveau driver, and the X Server implementing indirect for both --
we have to actually talk to the right driver). By slipping the
driDriverAPI vtable into the driver's extension list, we can replace the
usage of the global symbol with usage of the loader-dlsym()ed driver
information.
v2: Pull in the hunk to avoid crashing on null driver_extensions. Thanks,
Emil!
Reviewed-by: Matt Turner <[email protected]> (v1)
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Emil Velikov <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
This will allow a megadrivers build to reference the actual driver being
loaded from the shared dri_util screen creation code.
v2: Fix indentation, fallback case in EGL (review by Emil).
Reviewed-by: Matt Turner <[email protected]> (v1)
Reviewed-by: Chad Versace <[email protected]> (v1)
Reviewed-by: Emil Velikov <[email protected]>
|
|
|
|
|
|
|
| |
v2: Fix uninitialized variable use in the old-ABI case.
Reviewed-by: Chad Versace <[email protected]> (v1)
Reviewed-by: Emil Velikov <[email protected]>
|
|
|
|
|
|
|
|
| |
v2: Fix asprintf error checking.
Reviewed-by: Matt Turner <[email protected]> (v1)
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Emil Velikov <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The previous interface relied on a static struct, which meant that the
driver didn't get a chance to edit the struct before it got used.
For megadrivers, I want a struct specific to the driver being loaded.
v2: Fix the prototype in the docs (caught by Marek). Since the driver
name was in the function, we didn't need to also pass it in.
v3: Fix asprintf error checking (caught by Matt's gcc).
Reviewed-by: Matt Turner <[email protected]> (v1)
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Emil Velikov <[email protected]>
|
|
|
|
|
|
|
|
|
| |
This way they aren't all sitting in the global namespace (with the same
name per driver).
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Emil Velikov <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Turns out we already have this nice mechanism for providing optional
things from the driver to the loader, and I was going to have to rename
the public global symbol to avoid conflicts when doing megadrivers.
While the former __driConfigOptions is technically loader interface, this
is the only loader that made use of that symbol. Continue paying
attention to it if we can't find the new option, to retain compatibility
with old drivers.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Emil Velikov <[email protected]>
|
|
|
|
|
|
|
|
|
| |
I'm planning on doing driver extension parsing from 3 places, and making
the extension loading step a bit longer.
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Emil Velikov <[email protected]>
|
|
|
|
|
|
|
|
| |
kernel object.
Based on a similar fix from Aaron Watry. It seems unlikely that we
will ever need a kernel-specific setting for this, and the Gallium API
doesn't support it. Remove kernel::max_block_size() altogether.
|
|
|
|
| |
Reviewed-by: Paul Berry <[email protected]>
|
|
|
|
|
|
|
|
| |
The gallium vbuf module, which we've been using for some time now, takes
care of uploading user-space vertex/index data into real buffers. The
upload code in the svga driver was unused.
Reviewed-by: José Fonseca <[email protected]>
|
|
|
|
|
|
|
|
| |
Print info about packing, format, type, and tiling. This will help debug
future issues with this fastpath.
Reviewed-by: Frank Henigman <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixes texture corruption of Weston clients on cairo-glesv2 backend.
Commit 49ed599 introduced the bug.
Corruption occurred when glTexSubImage called
intel_texsubimage_tiled_memcpy() with:
x,y=10,9
w,h=7,7
format=GL_ALPHA(0x1906)
type=GL_UNSIGNED_BYTE(0x1401)
gl_format=MESA_FORMAT_A8(0x18)
packing.alignment=4
The function miscalculated the source image's stride as w*cpp=7 without
taking into account the packing alignment. The actual stride was 8.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=70435
Reported-by: U. Artie Eoff <[email protected]>
Tested-by: Kristian Høgsberg <[email protected]>
Reviewed-by: Frank Henigman <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
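For reference, the row-stride rule the fix needs to honour, as a stand-alone helper (assuming a power-of-two GL_UNPACK_ALIGNMENT, which is all GL allows); for the case above, width 7, 1 byte per pixel (GL_ALPHA/GL_UNSIGNED_BYTE) and alignment 4 give 8, not w*cpp = 7:

    static unsigned
    src_row_stride(unsigned width, unsigned bytes_per_pixel, unsigned alignment)
    {
       unsigned unpadded = width * bytes_per_pixel;
       /* Round up to the unpack alignment (1, 2, 4 or 8 in GL). */
       return (unpadded + alignment - 1) & ~(alignment - 1);
    }

    /* src_row_stride(7, 1, 4) == 8 */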
|
|
|
|
|
|
| |
Small typo introduced in a3ed98f.
Signed-off-by: Rob Clark <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In commit 800610f (i965/fs: Improve accuracy of dFdy() to match
dFdx()) I unrolled the high-accuracy dFdy() computation from a single
SIMD16 instruction to two SIMD8 instructions because of text I found
in the i965 (gen4) PRM saying that instruction compression could not
be used in align16 mode. I couldn't find similar text in later
hardware docs, and I observed problems trying to use instruction
compression on align16 mode on Ivy Bridge, so I assumed that the
restriction still applied and the associated documentation had simply
been lost.
After consultation with the hardware engineers, it turns out this is
not the case. In point of fact, the restriction was dropped in gen5,
re-introduced in Ivy Bridge, and dropped again in Haswell. The reason
I didn't notice this is that in the Ivy Bridge documentation, the
restriction was in a different section, and described using different
language.
Now that we know that the restriction only applies to Gen4 and Ivy
Bridge, we can limit the unrolling to those platforms.
Tested on gen5, gen6, and gen7 (both Ivy Bridge and Haswell).
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
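A sketch of the resulting platform check (the field names are illustrative, not necessarily those used in the fs generator):

    #include <stdbool.h>

    struct gen_info { int gen; bool is_haswell; };

    /* Compressed align16 instructions are disallowed on Gen4 and on Ivy
     * Bridge (Gen7 non-Haswell), so only there must dFdy() be unrolled into
     * two SIMD8 instructions; elsewhere a single SIMD16 instruction is fine. */
    static bool
    needs_dfdy_unroll(const struct gen_info *devinfo)
    {
       return devinfo->gen == 4 ||
              (devinfo->gen == 7 && !devinfo->is_haswell);
    }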
|