| Commit message | Author | Age | Files | Lines |
|
Fixes piglit EXT_framebuffer_multisample/negative-copyteximage.
Reviewed-by: Brian Paul <[email protected]>
NOTE: This is a candidate for the 8.0 branch.
|
Fixes piglit EXT_framebuffer_multisample-negative-readpixels.
Reviewed-by: Brian Paul <[email protected]>
NOTE: This is a candidate for the 8.0 branch.
|
Fixes piglit EXT_framebuffer_multisample/renderbuffer-samples.
Reviewed-by: Brian Paul <[email protected]>
NOTE: This is a candidate for the 8.0 branch.
|
Fixes some _mesa_problem()s in oglconform.
Reviewed-by: Brian Paul <[email protected]>
NOTE: This is a candidate for the 8.0 branch.
|
Previously, we were saying that everything from the starting tile to
region width+height was part of the limits of our depthbuffer, even if
the tile was near the bottom of the depthbuffer. This meant that our
range was not clipped to the buffer bounds if the start tile was anything
but the start of the buffer.
In bebc91f0f3a1f2d19d36a7f1a4f7c992ace064e9, this was changed to
saying that we're just rendering to a region of the size of the
renderbuffer. This is great -- we get a range that should actually
match what we want. However, the hardware's range checking occurs
after the X/Y offset addition, so we were clipping out rendering to
small depth mip levels when an X/Y offset was present. Just add
tile_x/y to the width in that case -- the WM won't produce negative
x/y values pre-offset, so we just need to get the left/bottom sides of
the region to cover our buffer.
Fixes the following Piglit regressions on gen7:
spec/ARB_depth_buffer_float/fbo-clear-formats
spec/ARB_depth_texture/fbo-clear-formats
spec/EXT_packed_depth_stencil/fbo-clear-formats
NOTE: This is a candidate for the 8.0 branch.
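A rough sketch of the arithmetic described above, using hypothetical
variable names (the actual change lives in the i965 depth/stencil state
emission and differs in detail): because the hardware clips after adding
the X/Y offset, the limits are grown by the tile offset so rendering to
the small miplevel is not clipped away.

   /* Hypothetical illustration only: extend the clipping limits by the
    * tile offset so the offset renderbuffer region stays in range. */
   depth_limit_x = tile_x + rb_width;
   depth_limit_y = tile_y + rb_height;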
|
The array holds GLuint values, so remove the float cast.
Note, however, that to compute the average of four GLuints we really
want to do (a+b+c+d)/4 but that could overflow. This change doesn't
address that for now.
NOTE: This is a candidate for the 8.0 branch.
Reviewed-by: José Fonseca <[email protected]>
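To illustrate the overflow concern noted above, one possible
overflow-safe variant (shown only as a sketch; it is not what this
commit does) is to widen the sum to 64 bits:

   /* (a+b+c+d)/4 can overflow GLuint, so accumulate in 64 bits first. */
   GLuint avg = (GLuint) (((uint64_t) a + b + c + d) / 4);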
|
NOTE: This is a candidate for the 8.0 branch.
Reviewed-by: José Fonseca <[email protected]>
|
ir_variable is a class, not a struct.
NOTE: This is a candidate for the 8.0 branch.
Reviewed-by: José Fonseca <[email protected]>
|
In the first case, the newImage[] array contains GLuint values.
In the second case, the parameter type is GLuint, but the maxDepth
value is never used in this case (GL_FLOAT_32_UNSIGNED_INT_24_8_REV).
Pass ~0U just to be safe.
NOTE: This is a candidate for the 8.0 branch.
Reviewed-by: José Fonseca <[email protected]>
|
And pass integer width, height values.
NOTE: This is a candidate for the 8.0 branch.
Reviewed-by: José Fonseca <[email protected]>
|
Rather than testing the fbo's name against zero.
Reviewed-by: José Fonseca <[email protected]>
|
We include both imports.h and u_math.h in the state tracker. This
leads to multiple, conflicting definitions of ffs() with MSVC.
Use FFS_DEFINED to skip the ffs() in u_math.h.
Reviewed-by: José Fonseca <[email protected]>
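A minimal sketch of the guard pattern described above (the exact
placement of the macro in imports.h and u_math.h may differ):

   /* imports.h: provide ffs() and advertise it. */
   #define FFS_DEFINED 1

   /* u_math.h: skip its own ffs() when one was already provided. */
   #ifndef FFS_DEFINED
   static INLINE int ffs(int i) { /* ... */ }
   #endif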
|
Include Mesa headers before Gallium headers to avoid a problem with
ffs() being defined in u_math.h and then again in imports.h.
The next commit will add some #ifdefs to prevent multiple definitions
of ffs().
|
glsl_to_tgsi_visitor is defined earlier as a class, not a struct.
Fixes an MSVC warning.
NOTE: This is a candidate for the 8.0 branch.
|
Call ffs() and ffsll() everywhere. Define our own ffs(), ffsll()
functions when the platform doesn't have them.
v2: remove #ifdef _WIN32, __IBMC__, __IBMCPP_ tests inside ffs()
implementation. The #else clause was recursive.
Reviewed-by: Kenneth Graunke <[email protected]>
Tested-by: Alexander von Gluck <[email protected]>
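A minimal sketch of a fallback ffs(), assuming the platform does not
provide one (the actual definitions added to Mesa may differ):

   /* Return the 1-based position of the least significant set bit,
    * or 0 if no bits are set -- the usual ffs() contract. */
   static int
   ffs(int i)
   {
      unsigned v = (unsigned) i;
      int bit = 1;
      if (v == 0)
         return 0;
      while (!(v & 1)) {
         v >>= 1;
         bit++;
      }
      return bit;
   }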
|
Don't know how that slipped by.
|
Also, call vbo_sizeof_ib_type() once and fix argument cast in
MapBufferRange() call.
|
Introduce a vbo_get_minmax_indices() function to handle the min/max index
computation for nr_prims (>= 1). The old code computed only the first
prim's min/max index, which resulted in incorrect rendering if the user
called functions like glMultiDrawElements(). This patch fixes that issue.
Since the nr_prims = 1 case can simply pass 1 for the nr_prims parameter,
vbo_get_minmax_index() is now static.
v2: per Roland's suggestion, put the indices address computation into
vbo_get_minmax_index() instead.
Also combine where possible to reduce the map/unmap count.
v3: per Brian's suggestion, use a pointer for start_prim to avoid
a structure copy per loop.
Signed-off-by: Yuanhan Liu <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
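A rough sketch of the idea, with invented types and names (the real
vbo_get_minmax_indices() also handles index-buffer mapping, different
index types, and combining adjacent ranges):

   struct prim_range {
      unsigned start;   /* offset of the prim's first index */
      unsigned count;   /* number of indices in the prim */
   };

   /* Scan every primitive, not just the first, when computing the
    * overall min/max vertex index. */
   static void
   get_minmax_indices(const unsigned *ib, const struct prim_range *prims,
                      unsigned nr_prims, unsigned *min_out, unsigned *max_out)
   {
      unsigned min = ~0u, max = 0;
      unsigned p, i;

      for (p = 0; p < nr_prims; p++) {
         for (i = 0; i < prims[p].count; i++) {
            unsigned idx = ib[prims[p].start + i];
            if (idx < min) min = idx;
            if (idx > max) max = idx;
         }
      }

      *min_out = min;
      *max_out = max;
   }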
|
The args to _mesa_reference_shader_program() can't be const.
|
glDrawBuffer(GL_FRONT_AND_BACK) results in a segmentation fault if
intel->is_front_buffer_rendering is not enabled with GL_FRONT_AND_BACK.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=44153
Reported-by: Yi Sun <[email protected]>
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
Signed-off-by: Jakob Bornecrantz <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
Instead, do the uniform setting and input / output mapping directly in
brw_link_shader. Hurray for not generating Mesa IR! However, once
the i965 driver stops calling _mesa_ir_link_shader, UsesClipDistance
and UsesKill are no longer set.
Ideally gen6_upload_vs_push_constants should use the
gl_shader_program, but I don't see a way to propagate the information
there. The other alternative, since this is the only usage, is to
move gl_vertex_program::UsesClipDistance to brw_vertex_program.
The compile (and precompile) stages use UsesKill to determine the
cache key for the shader. This is then used to determine whether or
not to compile the shader. Calculating this data during compilation
is too late.
Signed-off-by: Ian Romanick <[email protected]>
Acked-by: Kenneth Graunke <[email protected]>
Acked-by: Eric Anholt <[email protected]>
|
This previously enabled some optimizations in the fragment shader
(interpolation, etc.) if some input components were always 0.0 or
1.0. However, this data was generated by analyzing Mesa IR. The
next patch in this series removes generation of Mesa IR for GLSL
paths. When we detect that case, just set the used mask to ~0 and
circumvent the optimizations.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
It used to be done in ir_to_mesa, and that was kind of a bad place.
I didn't change st_glsl_to_tgsi because there is some strange stuff
happening in the code that generates glDrawPixels shaders. It looked
like this would break horribly if I touched anything.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
Track the calculated data in gl_shader_program instead of the
individual assembly shaders.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
Rather than looking at the settings in individual assembly programs,
look at the settings in the top-level uniform values. The old code
was flawed because examining each shader stage in isolation could
allow inconsistent usage across stages (e.g., bind unit 0 to a
sampler2D in the vertex shader and sampler1DShadow in the fragment
shader).
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
Previously the fixed-function fragment shader was tracked as a
gl_program. This means that it shows up in the driver as a Mesa IR
program instead of as a GLSL IR program. If a driver doesn't generate
Mesa IR from the GLSL IR, that program is empty. If the program is
empty, there is either no rendering or a GPU hang.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Acked-by: Kenneth Graunke <[email protected]>
|
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
Poking directly at the backing resources works only by luck. Core
Mesa code should only know about the gl_uniform_storage structure.
Soon other code that looks at samplers will use the gl_uniform_storage
structures instead of the data in the gl_program.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Acked-by: Kenneth Graunke <[email protected]>
|
The gen7_urb atom depends on CACHE_NEW_VS_PROG and CACHE_NEW_GS_PROG,
causing gen7_upload_urb() to be called when switching to a new VS
program.
In addition to partitioning the URB space between the VS and GS,
gen7_upload_urb() also allocated space for VS and PS push constants.
Unfortunately, this meant that whenever CACHE_NEW_VS was flagged, we'd
reallocate the space for the PS push constants. According to the BSpec,
after sending 3DSTATE_PUSH_CONSTANT_ALLOC_PS, we must reprogram
3DSTATE_CONSTANT_PS prior to the next 3DPRIMITIVE.
Since our URB allocation for push constants is entirely static, it makes
sense to split it out into its own atom that only subscribes to
BRW_NEW_CONTEXT. This avoids reallocating the space and trashing
constants.
Fixes a rendering artifact in Extreme Tuxracer, where instead of a snow
trail, you'd get a bright red streak (affectionately known as the
"bloody penguin bug").
This also explains why adding VS-related dirty bits to gen7_ps_state
made the problem disappear: it made 3DSTATE_CONSTANT_PS be emitted after
every 3DSTATE_PUSH_CONSTANT_ALLOC_PS packet.
NOTE: This is a candidate for the 7.11 branch.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=38868
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
|
This shouldn't happen, because the DDX should only load this driver if
IS_965. But better to do something defined in that case.
|
Fixes piglit EXT_transform_feedback/buffer-usage.
|
We were naively emitting each component at a time, even if we were
emitting the same value to multiple channels. Improves on a codegen
regression from the old VS to the new VS on some unigine shaders
(because we emit constant vecs/matrices as immediates instead of
loading them as push constants, so we had over 4x the instructions for
using them).
shader-db results:
Total instructions: 58594 -> 58540
11/870 programs affected (1.3%)
765 -> 711 instructions in affected programs (7.1% reduction)
|
Useful when debugging to find the number of texture objects, shader
programs, etc.
|
It caused an X protocol error in some (rare) situations.
This is a follow-on to the previous commits, which fix a bug reported
by Wayne E. Robertz.
NOTE: This is a candidate for the 7.11 branch.
Reviewed-by: Adam Jackson <[email protected]>
|
as we do in Fake_glXChooseVisual(). This registers the MesaGLX
extension on the display so we can clean up buffers, etc. when
the display connection is closed.
Fixes a bug reported by Wayne E. Robertz.
NOTE: This is a candidate for the 7.11 branch.
Reviewed-by: Adam Jackson <[email protected]>
|
As suggested by Brian.
Signed-off-by: Dave Airlie <[email protected]>
|
This, along with the TGSI support, lets the piglit sampler-cube-shadow
test pass on softpipe.
Signed-off-by: Dave Airlie <[email protected]>
|
On drivers that set gl_shader_compiler_options::LowerClipDistance (for
example i965), we need to handle transform feedback of gl_ClipDistance
specially, to account for the fact that the hardware represents it as
an array of vec4's rather than an array of floats.
The previous way this was accounted for (translating the request for
gl_ClipDistance[n] to a request for a component of
gl_ClipDistanceMESA[n/4]) doesn't work when performing transform
feedback on the whole unsubscripted array, because we need to keep
track of the size of the gl_ClipDistance array prior to the lowering
pass. So I replaced it with a boolean is_clip_distance_mesa, which
switches on the special logic that is needed to handle the lowered
version of gl_ClipDistance.
Fixes Piglit tests "EXT_transform_feedback/builtin-varyings
gl_ClipDistance[{1,2,3,5,6,7}]-no-subscript".
Reviewed-by: Eric Anholt <[email protected]>
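For reference, the per-element translation described above amounts to the
following mapping from a gl_ClipDistance index n to the lowered vec4
array (illustrative arithmetic only):

   /* gl_ClipDistance[n] lives in gl_ClipDistanceMESA[n / 4],
    * in component n % 4 (x, y, z or w). */
   unsigned vec4_index = n / 4;
   unsigned component  = n % 4;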
|
This just fixes up the enables for native integers and EXT_texture_integer
support in st/mesa.
It also sets MaxClipPlanes to 8.
We should consider exposing caps for MCP vs. MCD, but since core
Mesa doesn't care yet, we can wait for now.
v2: use 32-bit formats as per Marek's mail.
v3: add calim's fix for INT_DIV_TO_MUL_RCP disabling.
Signed-off-by: Dave Airlie <[email protected]>
|
It doesn't look like the GLSL compiler will produce a sign op for an
unsigned value anyway (doing so would seem insane).
Signed-off-by: Dave Airlie <[email protected]>
|
Mesa shouldn't call into the drivers if there are no renderbuffers
bound to the attachments for the buffers to be cleared.
Fixes a number of the clearbuffer-* tests on softpipe.
Signed-off-by: Dave Airlie <[email protected]>
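A hypothetical sketch of the kind of guard described above (field names
follow core Mesa's framebuffer attachments; the actual check in the
clear-buffer path may differ):

   /* Skip the driver call when nothing is bound to the attachment
    * being cleared. */
   if (ctx->DrawBuffer->Attachment[buf].Renderbuffer == NULL)
      return;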
|
This fixes the test to allow cube/depth combinations on GL3
or EXT_gpu_shader4.
Signed-off-by: Dave Airlie <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
We're not quite ready to actually support it in the implementation,
but at least this allows GL 3.0 API-reliant applications to hopefully
run successfully, though they won't get multisampling.
Reviewed-by: Kenneth Graunke <[email protected]>
|
The EXT_texture_array spec required only 64, but GL 3.0 requires 256.
Since we're already exposing values that can get us way beyond our
ability to map the single object directly, go ahead and expose all the
way to hardware limits.
Tested with new piglit EXT_texture_array/maxlayers on gen7.
Reviewed-by: Kenneth Graunke <[email protected]>
|