Reduces the size from 2664 to 2656 bytes.
Acked-by: Brian Paul <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>

This reduces it from 1088 to 1080 bytes.
Acked-by: Brian Paul <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>

This just removes 4 bytes from this object.
Acked-by: Brian Paul <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>

This removes a hole and puts the large allocation at the end.
Acked-by: Brian Paul <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
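
The whole series above is the same struct-packing exercise, so a single
illustration may help. The sketch below is hypothetical (not the actual Mesa
structs): on an LP64 target, a 4-byte field sandwiched between pointers leaves
a 4-byte padding hole, and reordering removes it; that is where reductions
like 2664 to 2656 come from. Tools like pahole report these holes directly.

   #include <stdint.h>

   /* Hypothetical layout, before: the compiler pads "flags" out to
    * pointer alignment, wasting 8 bytes in total on LP64. */
   struct widget_before {
      void    *map;       /* offset 0,  8 bytes */
      uint32_t flags;     /* offset 8,  4 bytes + 4 bytes of padding */
      void    *bo;        /* offset 16, 8 bytes */
      uint32_t refcount;  /* offset 24, 4 bytes + 4 bytes of tail padding */
   };                     /* sizeof == 32 */

   /* After reordering, the two 4-byte fields share one 8-byte slot and
    * both holes disappear. */
   struct widget_after {
      void    *map;       /* offset 0,  8 bytes */
      void    *bo;        /* offset 8,  8 bytes */
      uint32_t flags;     /* offset 16, 4 bytes */
      uint32_t refcount;  /* offset 20, 4 bytes */
   };                     /* sizeof == 24 */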

These shrink from 392 to 388 bytes and from 72 to 68 bytes.
Acked-by: Brian Paul <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>

Reduces the size from 40 to 32 bytes, and reduces its use in the
context from 7680 to 6144 bytes.
Acked-by: Brian Paul <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>

gl_program: 1344->1336
gl_shader: 488->472
gl_shader_program: 352->344
Acked-by: Brian Paul <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>

Reduces size from 184 to 176 bytes.
Acked-by: Brian Paul <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>

Reduces size from 64 to 56 bytes.
Acked-by: Brian Paul <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>

Drops the size from 80 bytes to 72.
Acked-by: Brian Paul <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>

Drops the size from 56 to 48 bytes, which drops
gl_vertex_array_object from 4584 to 4320 bytes.
Acked-by: Brian Paul <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>

Drops the size from 520 to 512 bytes, which then makes
gl_texture_attrib go from 99984 to 98440 bytes.
Acked-by: Brian Paul <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>

This drops the size from 52 bytes to 48 bytes.
Acked-by: Brian Paul <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>

Drops the size from 28 bytes to 20.
Acked-by: Brian Paul <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>

Signed-off-by: Ian Romanick <[email protected]>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=89670
Reviewed-by: Matt Turner <[email protected]>
Cc: Vinson Lee <[email protected]>

Fixes a piglit regression
(shaders/glsl-fs-vec4-indexing-temp-dst-in-nested-loop-combined) with
my series for GVN.
Reviewed-by: Matt Turner <[email protected]>
Signed-off-by: Connor Abbott <[email protected]>

This is currently not a problem because the vec4 visitor happens to
mask out unused components from the destination, but it might become
an issue when we start using atomics without a writeback message. In
any case, it seems sensible to set it again here because the
consequences of setting the wrong writemask (random graphics memory
corruption) are difficult to debug and can easily go unnoticed.
Reviewed-by: Topi Pohjolainen <[email protected]>
Acked-by: Kenneth Graunke <[email protected]>

_surface_read.
And calculate the message response size based on the number of
components rather than the other way around. This simplifies their
interface somewhat and allows the caller to request a writeback
message with more than one vector component in SIMD4x2 mode.
Reviewed-by: Topi Pohjolainen <[email protected]>
Acked-by: Kenneth Graunke <[email protected]>

This was telling the sampler to do texture fetches for *all* channels
in the non-constant surface index case, which could have reduced
throughput unnecessarily when some of the channels were disabled by
control flow.
Reviewed-by: Topi Pohjolainen <[email protected]>
Acked-by: Kenneth Graunke <[email protected]>

descriptor.
This is going to be useful because the Gen7+ uniform and varying pull
constant, texturing, typed and untyped surface read, write, and atomic
generation code on the vec4 and fs back-ends all require the same logic
to handle conditionally indirect surface indices. In pseudocode:
| if (surface.file == BRW_IMMEDIATE_VALUE) {
|    inst = brw_SEND(p, dst, payload);
|    set_descriptor_control_bits(inst, surface, ...);
| } else {
|    inst = brw_OR(p, addr, surface, 0);
|    set_descriptor_control_bits(inst, ...);
|    inst = brw_SEND(p, dst, payload);
|    set_indirect_send_descriptor(inst, addr);
| }
This patch abstracts out this frequently recurring pattern so we can
now write:
| inst = brw_send_indirect_message(p, sfid, dst, payload, surface);
| set_descriptor_control_bits(inst, ...);
without worrying about handling the immediate and indirect surface
index cases explicitly.
v2: Rebase. Improve documentation and commit message. (Topi)
    Preserve UW destination type cargo-cult. (Topi, Ken, Matt)
Reviewed-by: Topi Pohjolainen <[email protected]>
Acked-by: Kenneth Graunke <[email protected]>

Both do_vs_prog and do_gs_prog initialize brw_stage_prog_data::nr_params
to the number of uniform *vectors* required by the shader rather than
the number of uniform components, contradicting the comment. This is
inconsistent with what the state upload code and scalar path expect, but
it happens to work until Gen8 because vec4_visitor interprets it as a
number of vectors on construction and later on overwrites its original
value with the number of uniform components referenced by the shader.
Also there's no need to add the number of samplers; they're not actually
passed in as uniforms.
Fixes a memory corruption issue on BDW with SIMD8 VS.
Cc: "10.5" <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>

Several steppings of Skylake fail when using SIMD16 with 3-source
instructions (such as MAD).
This implements WaDisableSIMD16On3SrcInstr and fixes ~190 Piglit
tests.
Based on a patch by Neil Roberts.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Neil Roberts <[email protected]>

The places that were checking whether 3-source instructions are
supported have now been combined into a small helper function. This
will be used in the next patch to add an additional restriction.
Based on a patch by Kenneth Graunke.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Matt Turner <[email protected]>

brwContextInit now queries the GPU revision number via a new parameter
for DRM_I915_GETPARAM. This new parameter requires a kernel patch and
a patch to libdrm. If the kernel doesn't support it then it will
continue but set the revision number to -1. The intention is to use
this to implement workarounds that are only needed on certain
steppings of the GPU.
Reviewed-by: Kristian Høgsberg <[email protected]>
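
A minimal sketch of the query described above, assuming the new GETPARAM is
the one exposed as I915_PARAM_REVISION (the exact name depends on the kernel
and libdrm patches the commit mentions); on older kernels the ioctl fails and
we fall back to -1:

   #include <string.h>
   #include <xf86drm.h>
   #include <i915_drm.h>

   /* Returns the GPU revision (stepping), or -1 if the kernel is too
    * old to know about the new parameter. */
   static int
   intel_get_revision(int fd)
   {
      struct drm_i915_getparam gp;
      int revision = -1;

      memset(&gp, 0, sizeof(gp));
      gp.param = I915_PARAM_REVISION;  /* assumed name of the new param */
      gp.value = &revision;

      if (drmIoctl(fd, DRM_IOCTL_I915_GETPARAM, &gp) != 0)
         revision = -1;                /* kernel too old: leave "unknown" */

      return revision;
   }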

Generate GL_INVALID_OPERATION and return NULL when the buffer object
hasn't been created. All callers expect this.
v2: Use a more concise error message.
Cc: Laura Ekstrand <[email protected]>
Reviewed-by: Laura Ekstrand <[email protected]>
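
A sketch of the lookup-with-error pattern the commit describes; the helper
name and message wording here are illustrative, not the tree's actual ones
(_mesa_lookup_bufferobj and _mesa_error are real Mesa internals):

   /* Look up a buffer object by name; on failure, record the error the
    * callers expect and return NULL so they can simply bail out. */
   static struct gl_buffer_object *
   lookup_bufferobj_err(struct gl_context *ctx, GLuint buffer,
                        const char *caller)
   {
      struct gl_buffer_object *bufObj = _mesa_lookup_bufferobj(ctx, buffer);

      if (!bufObj) {
         /* The name was never created (or has been deleted). */
         _mesa_error(ctx, GL_INVALID_OPERATION,
                     "%s(non-existent buffer %u)", caller, buffer);
         return NULL;
      }
      return bufObj;
   }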

This should improve the performance of any shaders using the KIL
instruction. I'm a bit surprised we missed this.
Unfortunately, I have not been able to measure any performance
improvements from this patch. It does make ARB_fragment_program
behave similarly to GLSL code.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Matt Turner <[email protected]>

This is already copied in two places, and I want to copy it to a third
place.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Carl Worth <[email protected]>
Reviewed-by: Matt Turner <[email protected]>

So it turns out that this doesn't actually fix any bugs or add any
features, strictly speaking. However, it does avoid a lot of
kludginess. Previously, if you called
   glCopyTextureSubImage3D(texcube, 0, 0, 0, zoffset = 3, ...
it would grab the texture image object for face = 0 in teximage.c
instead of the desired face = 3, but line 274 of brw_blorp_blit.cpp
would correct for this by updating the slice to 3.
This commit does the correct thing before calling any drivers,
which should make the functionality much more robust and uniform across
all drivers.
Reviewed-by: Anuj Phogat <[email protected]>
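
Schematically, the fix amounts to resolving the cube-map face from zoffset in
core Mesa before any driver sees the call. Names and signatures below are
approximated, not copied from teximage.c:

   /* For cube maps, each face is a distinct gl_texture_image; the
    * DSA-style entry points address faces through zoffset, so translate
    * up front instead of in each driver. */
   GLenum face_target = texObj->Target;
   if (texObj->Target == GL_TEXTURE_CUBE_MAP) {
      face_target = GL_TEXTURE_CUBE_MAP_POSITIVE_X + zoffset;
      zoffset = 0;   /* the selected face image is itself a 2D slice */
   }
   struct gl_texture_image *texImage =
      _mesa_select_tex_image(texObj, face_target, level);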

Reviewed-by: Martin Peres <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>

This allows it to be called from a loop.
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Kristian Høgsberg <[email protected]>

Reviewed-by: Jordan Justen <[email protected]>

Previously, we put all the uniforms into one big array. The problem with
this approach is that, as soon as there was one indirect array access,
the backend would decide that the entire large array should be pull
constants. This commit splits the array in half: first direct-only
uniforms and then potentially-indirect uniforms. This may not be
optimal, but it does let the backend promote things to push constants.
Shader-db results on HSW:
total instructions in shared programs: 4114840 -> 4112172 (-0.06%)
instructions in affected programs: 43316 -> 40648 (-6.16%)
helped: 116
HURT: 0
v2: Set param_size[num_direct_uniforms] only if we have indirect
uniforms; not doing so caused a bug that, strangely enough, only showed
up on Broadwell vertex shaders.
Reviewed-by: Connor Abbott <[email protected]>
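
A schematic of the split described above, with hypothetical arrays standing in
for the NIR data structures: direct-only uniforms are packed first,
potentially-indirect ones after, so only the tail of the parameter array ever
needs to become pull constants.

   #include <stdbool.h>

   /* Assign uniform locations in two passes so all indirectly-accessed
    * uniforms land in one contiguous block at the end.
    * Returns num_direct_uniforms. */
   static unsigned
   split_uniform_locations(unsigned num_uniforms,
                           const bool *accessed_indirectly,
                           unsigned *location)
   {
      unsigned loc = 0;

      for (unsigned i = 0; i < num_uniforms; i++)
         if (!accessed_indirectly[i])
            location[i] = loc++;      /* push-constant candidates */

      const unsigned num_direct_uniforms = loc;

      for (unsigned i = 0; i < num_uniforms; i++)
         if (accessed_indirectly[i])
            location[i] = loc++;      /* may be demoted to pull constants */

      return num_direct_uniforms;
   }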

Previously, we just assigned variable locations in nir_lower_io. Now, we
force the user to assign variable locations for us. This gives the
backend a bit more control over where variables are placed.
v2: Rename from _packed to _scalar
Reviewed-by: Connor Abbott <[email protected]>

As far as I could find, we never did a single hash table lookup in the
entire NIR code base, so there was no real benefit to doing it that way.
I suppose that for linking, we'll probably want to be able to look up by
name, but we can leave building that hash table to the linker. In the
meantime, this was causing problems with GLSL IR -> NIR because GLSL IR
doesn't guarantee us unique names of uniforms, etc. This was causing
massive rendering issues in the Unreal 4 Sun Temple demo.
Reviewed-by: Connor Abbott <[email protected]>

Different errors for type mismatches, size mismatches, and
matrix/non-matrix mismatches. Use a common format of
"uniformName"@location in the messages.
Reviewed-by: Martin Peres <[email protected]>
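
The common format presumably boils down to something like the call below; the
exact message text isn't shown in the log, so this is only a sketch of the
"uniformName"@location convention using Mesa's real _mesa_error reporter:

   /* Report a glUniform* type mismatch using the shared format. */
   _mesa_error(ctx, GL_INVALID_OPERATION,
               "glUniform(type mismatch for \"%s\"@%d)",
               uni->name, location);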

Reviewed-by: Jason Ekstrand <[email protected]>
Reviewed-by: Chris Forbes <[email protected]>

On platforms that do not natively generate 0u and ~0u for Boolean
results, b2f expressions that look like
   f = b2f(expr cmp 0)
will generate better code by pretending the expression is
   f = ir_triop_sel(0.0, 1.0, expr cmp 0)
This is because the last instruction of "expr" can generate the
condition code for the "cmp 0". This avoids having to do the "-(b & 1)"
trick to generate 0u or ~0u for the Boolean result. This means code like
   mov(16)        g16<1>F  1F
   mul.ge.f0(16)  null     g6<8,8,1>F   g14<8,8,1>F
   (+f0) sel(16)  m6<1>F   g16<8,8,1>F  0F
will be generated instead of
   mul(16)        g2<1>F   g12<8,8,1>F  g4<8,8,1>F
   cmp.ge.f0(16)  g2<1>D   g4<8,8,1>F   0F
   and(16)        g4<1>D   g2<8,8,1>D   1D
   and(16)        m6<1>D   -g4<8,8,1>D  0x3f800000UD
v2: When the comparison is either == 0.0 or != 0.0, use the knowledge
that the true (or false) case already results in zero, which allows
better code generation by possibly avoiding a load-immediate
instruction.
v3: Apply the optimization even when neither comparator is zero.
Shader-db results:
GM45 (0x2A42):
total instructions in shared programs: 3551002 -> 3550829 (-0.00%)
instructions in affected programs: 33269 -> 33096 (-0.52%)
helped: 121
Iron Lake (0x0046):
total instructions in shared programs: 4993327 -> 4993146 (-0.00%)
instructions in affected programs: 34199 -> 34018 (-0.53%)
helped: 129
No change on other platforms.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Tapani Palli <[email protected]>

Eric's initial patch adding constant expression evaluation for
ir_unop_round_even used nearbyint. The open-coded _mesa_round_to_even
implementation came about without much explanation after a reviewer
asked whether nearbyint depended on the application not modifying the
rounding mode. Of course (as Eric commented) we rely on the application
not changing the rounding mode from its default (round-to-nearest) in
many other places, including the IROUND function used by
_mesa_round_to_even!
Worse, IROUND() is implemented using the trunc(x + 0.5) trick, which
fails for x = nextafterf(0.5, 0.0).
Still worse, _mesa_round_to_even unexpectedly returns an int. I suspect
that could cause problems when rounding large integral values not
representable as an int in ir_constant_expression.cpp's
ir_unop_round_even evaluation. Its use of _mesa_round_to_even is clearly
broken for doubles (as noted during review).
The constant expression evaluation code for the packing built-in
functions also mistakenly assumed that _mesa_round_to_even returned a
float, as can be seen by the cast through a signed integer type to an
unsigned (since negative float -> unsigned conversions are undefined).
rint() and nearbyint() implement the round-half-to-even behavior we want
when the rounding mode is set to the default round-to-nearest. The only
difference between them is that nearbyint() raises the inexact
exception.
This patch implements _mesa_roundeven{f,}, a function similar to the
roundeven function added by an as-yet-unimplemented technical
specification (ISO/IEC TS 18661-1:2014), with a small difference in
behavior: we don't bother raising the inexact exception, which I don't
think we care about anyway.
At least recent Intel CPUs can quickly change a subset of the bits in
the x87 floating-point control register, but the exception mask bits are
not included. rint() does not need to change these bits, but nearbyint()
does (twice: save old, set new, and restore old) in order to raise the
inexact exception, which would incur some penalty.
Reviewed-by: Carl Worth <[email protected]>
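
The failure mode called out above is easy to reproduce. This small standalone
program (an illustration, not Mesa code) contrasts the trunc(x + 0.5) trick
with rint() under the default round-to-nearest mode:

   #include <math.h>
   #include <stdio.h>

   int main(void)
   {
      /* Largest float strictly below 0.5, i.e. 0.5 - 2^-25. */
      float x = nextafterf(0.5f, 0.0f);

      /* x + 0.5f rounds up to exactly 1.0f in float arithmetic, so the
       * trick yields 1 even though the correctly rounded result is 0. */
      printf("trunc(x + 0.5) = %g (wrong)\n", truncf(x + 0.5f));
      printf("rintf(x)       = %g (correct)\n", rintf(x));
      return 0;
   }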

total instructions in shared programs: 6263270 -> 6203091 (-0.96%)
instructions in affected programs: 2606529 -> 2546350 (-2.31%)
helped: 14301
GAINED: 5
LOST: 3
Reviewed-by: Jason Ekstrand <[email protected]>

Before, we enabled NIR if you set INTEL_USE_NIR to anything, which meant
that INTEL_USE_NIR=false would actually turn on NIR. In preparation for
turning NIR on by default, this commit makes it smarter by allowing the
INTEL_USE_NIR variable to work as either a force-enable or a
force-disable.
Reviewed-by: Mark Janes <[email protected]>
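
A sketch of the tri-state parsing this implies (the helper name is
hypothetical): unset means "use the default", and anything else is parsed as a
boolean, so INTEL_USE_NIR=false actually disables NIR.

   #include <stdbool.h>
   #include <stdlib.h>
   #include <string.h>
   #include <strings.h>

   static bool
   env_var_as_boolean(const char *name, bool default_value)
   {
      const char *str = getenv(name);
      if (str == NULL)
         return default_value;

      /* Accept the usual spellings of "false"; everything else enables. */
      if (strcmp(str, "0") == 0 ||
          strcasecmp(str, "false") == 0 ||
          strcasecmp(str, "no") == 0)
         return false;

      return true;
   }

Context setup could then do something like
use_nir = env_var_as_boolean("INTEL_USE_NIR", default_to_nir);
(names again hypothetical).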

As VARYING_SLOT_MAX can be bigger than 32.
I'll probably stop building swrast with MSVC in the near future, but
this seems to be a real bug regardless.
Reviewed-by: Brian Paul <[email protected]>
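
The underlying bug class, sketched below (the actual swrast field isn't shown
in the log): with more than 32 varying slots, a 32-bit mask can't hold the
bit, and shifting a 32-bit 1 by slot >= 32 is undefined behavior in C.

   #include <stdint.h>

   /* 32-bit mask: "1u << slot" is undefined once slot >= 32.
    * A 64-bit field is safe for VARYING_SLOT_MAX up to 64; Mesa wraps
    * this pattern in its BITFIELD64_BIT() macro. */
   static inline uint64_t
   varying_bit(unsigned slot)
   {
      return UINT64_C(1) << slot;
   }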

program/lex.yy.c and program/program_parse.tab.c are already included in
the PROGRAM_FILES variable.
We still need to specify the dependency relationship, though.
Reviewed-by: Brian Paul <[email protected]>

I wasn't aware of these _glapi_ stub functions when I committed
4bdbb588a9d385509f9168e38bfdb76952ba469c. Fixes "make check".
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=89662
Reviewed-by: Mark Janes <[email protected]>

Removing this block of pragmas doesn't seem to increase the number of
warnings generated by MSVC. Other than signed/unsigned comparison
warnings, there are very few other warnings nowadays.
Acked-by: Matt Turner <[email protected]>

Silences an MSVC warning where it's called from call_once().
Reviewed-by: Matt Turner <[email protected]>

Never called from outside of context.c
Reviewed-by: Jose Fonseca <[email protected]>

Use the new _glapi_new_nop_table() and _glapi_set_nop_handler() to
improve how we handle calling no-op GL functions.
If there's a current context for the calling thread, generate a
GL_INVALID_OPERATION error. This will happen if the app calls an
unimplemented extension function or it calls an illegal function
between glBegin/glEnd.
If there's no current context, print an error to stdout if it's a debug
build.
The dispatch_sanity.cpp file has some previous checks removed since
the _mesa_generic_nop() function no longer exists.
This fixes the piglit gl-1.0-dlist-begin-end and gl-1.0-beginend-coverage
tests on Windows.
Reviewed-by: Jose Fonseca <[email protected]>
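
A sketch of what the new no-op handler plausibly looks like; the Mesa-internal
names GET_CURRENT_CONTEXT and _mesa_error are real, the handler body is
approximated from the description above:

   /* Called whenever the application invokes a GL function that has no
    * real implementation in the current dispatch table. */
   static void
   nop_handler(const char *func_name)
   {
      GET_CURRENT_CONTEXT(ctx);

      if (ctx) {
         /* Unimplemented extension function, or a call that is illegal
          * between glBegin/glEnd. */
         _mesa_error(ctx, GL_INVALID_OPERATION,
                     "unsupported function (%s) called", func_name);
      }
   #ifdef DEBUG
      else {
         printf("GL User Error: %s called without a rendering context\n",
                func_name);
      }
   #endif
   }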

Currently, we throttle before the user begins preparing commands for the
next frame when we acquire the draw/read buffers. However, construction
of the command buffer can itself take significant time relative to the
frame time. If we move the throttle from the buffer acquire to the
command submit phase we can allow the user to improve concurrency
between the CPU and GPU (i.e. reduce the amount of time we waste inside
the throttle).
v2: Whitespace + delay throttling until after the next submission for
greater parallelism
Signed-off-by: Chris Wilson <[email protected]>
Cc: Daniel Vetter <[email protected]>
Cc: Kenneth Graunke <[email protected]>
Cc: Ben Widawsky <[email protected]>
Cc: Kristian Høgsberg <[email protected]>
Cc: Chad Versace <[email protected]>
Cc: Ian Romanick <[email protected]>
Reviewed-by: Chad Versace <[email protected]> [v1]

In order to facilitate the concurrency offered by triple buffering and
to offset the latency induced by swapping via an external process, which
may incur extra rendering itself, only throttle to the previous frame
and not the last. The second issue, which mostly affects swap benchmarks
but can also incur jitter in the throttling, is that the throttle bo is
closer to the next SwapBuffers than to the point immediately after the
previous SwapBuffers. Throttling to the previous frame doubles the
maximum possible latency, with the benefit of improving throughput and
reducing jitter.
v2: Rename the "first_post_swapbuffer" batches array to a plain
throttle_batch[], as the pluralisation was contorting the name and not
making it clear whether it was the first batch or the first_post_swap
batch. Not least because not all throttle points are SwapBuffers.
Signed-off-by: Chris Wilson <[email protected]>
Cc: Daniel Vetter <[email protected]>
Cc: Kenneth Graunke <[email protected]>
Cc: Ben Widawsky <[email protected]>
Cc: Kristian Høgsberg <[email protected]>
Cc: Chad Versace <[email protected]>
Cc: Ian Romanick <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
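
The two-deep queue this describes can be sketched as follows. The bo helpers
are real libdrm_intel calls; the surrounding structure is schematic, with
throttle_batch[] taken from the commit message. At each throttle point, wait
on the frame before last, then age the current frame's batch into that slot:

   /* Throttle to the frame *before* the last one, allowing up to two
    * frames of commands to be in flight at once. */
   if (brw->throttle_batch[1]) {
      drm_intel_bo_wait_rendering(brw->throttle_batch[1]);
      drm_intel_bo_unreference(brw->throttle_batch[1]);
   }
   brw->throttle_batch[1] = brw->throttle_batch[0];
   brw->throttle_batch[0] = NULL;   /* refilled at the next submission */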

When rendering to an fbo, even though it may be acting as a winsys
frontbuffer or just generally, we never throttle. However, when
rendering to an fbo, there is no natural frame boundary. Conventionally
we use SwapBuffers and glFinish, but potential callers often avoid
glFinish for being too heavy-handed (it waits on all outstanding
rendering to complete). The kernel provides a soft-throttling option for
this case that waits for rendering older than 20ms to be complete
(that's a little too lax to be used for swapbuffers, but it is a useful
safety net here). The remaining choice is then either to never throttle,
to throttle after every draw call, or to throttle after intermediate
user-defined points such as glFlush and thus all the implied flushes.
This patch opts for the latter, as that is the current method used for
flushing to front buffers.
v2: Defer the throttling from inside the flush to the next
intel_prepare_render() and switch non-fbo frontbuffer throttling over to
use the same lax method. The issue being that
glFlush()/intel_prepare_read() is just as likely to be called inside a
tight loop and not at "frame" boundaries.
v3: Rename from need_front_throttle to need_flush_throttle to avoid any
ambiguity between front buffer rendering and fbo rendering. (Chad)
v4: Whitespace
Signed-off-by: Chris Wilson <[email protected]>
Cc: Daniel Vetter <[email protected]>
Cc: Kenneth Graunke <[email protected]>
Cc: Ben Widawsky <[email protected]>
Cc: Kristian Høgsberg <[email protected]>
Cc: Chad Versace <[email protected]>
Cc: Ian Romanick <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
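
Per v2, glFlush itself only marks that a throttle is due. A schematic of the
consumer side, using the soft-throttle ioctl the message alludes to
(drmCommandNone and DRM_I915_GEM_THROTTLE are real libdrm/kernel interfaces;
the field names follow the commit, the rest is approximated):

   /* In intel_prepare_render() (schematic): apply the lax ~20ms soft
    * throttle if a flush requested one since the last draw. */
   if (brw->need_flush_throttle) {
      drmCommandNone(brw->fd, DRM_I915_GEM_THROTTLE);
      brw->need_flush_throttle = false;
   }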