| Commit message | Author | Age | Files | Lines |
| |
Signed-off-by: Ilia Mirkin <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
| |
Fixes another mistake in 144bbb7b78e.
Reviewed-by: Matt Turner <[email protected]>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=77502
| |
bitfieldInsert takes scalar integers for its last two arguments. Since
bitfieldInsert is lowered on i965 to two instructions that have more
flexible arguments, I didn't notice when I wrote this.
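For reference, here is a small C model of the GLSL semantics in question; the
vector struct and helper name are made up for illustration, but the key point
matches the builtin: offset and bits are single scalar ints shared by all
components of base/insert.

    #include <stdint.h>

    struct uvec4 { uint32_t v[4]; };   /* illustrative 4-component vector */

    /* Models GLSL bitfieldInsert(): base/insert may be vectors, but the
     * last two arguments are scalars applied to every component. */
    static struct uvec4
    bitfield_insert(struct uvec4 base, struct uvec4 insert, int offset, int bits)
    {
       struct uvec4 r;
       uint32_t mask = (bits == 32) ? ~0u : (((1u << bits) - 1u) << offset);
       for (int i = 0; i < 4; i++)
          r.v[i] = (base.v[i] & ~mask) | ((insert.v[i] << offset) & mask);
       return r;
    }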
Reviewed-by: Ilia Mirkin <[email protected]>
| |
-1.1372% +/- 0.858033% effect on cairo runtime on glamor (n=175).
Reviewed-by: Kenneth Graunke <[email protected]>
| |
Otherwise it fails to compile if the drm egl platform is disabled.
Cc: "10.0" "10.1" <[email protected]>
Reviewed-by: Emil Velikov <[email protected]>
| |
It looks like this bit of code is trying to disable the prime capability if
the driver doesn't support createImageFromFds. However, the logic is a bit
broken: what it actually does is disable all other capabilities apart from
prime. This patch fixes it to actually disable prime.
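A minimal sketch of the fix described above; the capability flag and variable
names are hypothetical, not the actual DRI ones:

    #include <stdbool.h>
    #include <stdint.h>

    #define CAP_PRIME (1u << 2)    /* hypothetical prime capability bit */

    static void
    update_caps(uint32_t *caps, bool have_createImageFromFds)
    {
       if (!have_createImageFromFds) {
          /* Broken logic: *caps &= CAP_PRIME;  -- keeps only prime and
           * drops every other capability.  The fix clears just prime: */
          *caps &= ~CAP_PRIME;
       }
    }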
Cc: "10.0" "10.1" <[email protected]>
Reviewed-by: Emil Velikov <[email protected]>
| |
This should give the caller some information about what caused the error.
For the gbm_bo_import() case, for instance, it is possible to know whether
the import is not supported or the error was caused by an invalid
parameter.
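For example, a caller could now distinguish the two cases roughly like this;
the specific errno values are an assumption for illustration, check the gbm
headers/documentation for the ones actually set:

    #include <errno.h>
    #include <stdio.h>
    #include <gbm.h>

    /* 'gbm' and 'wl_buffer' are assumed to come from the caller's setup. */
    static struct gbm_bo *
    import_or_report(struct gbm_device *gbm, void *wl_buffer)
    {
       struct gbm_bo *bo = gbm_bo_import(gbm, GBM_BO_IMPORT_WL_BUFFER,
                                         wl_buffer, GBM_BO_USE_SCANOUT);
       if (!bo) {
          if (errno == ENOSYS)
             fprintf(stderr, "buffer import not supported\n");
          else if (errno == EINVAL)
             fprintf(stderr, "invalid import parameter\n");
       }
       return bo;
    }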
Reviewed-by: Emil Velikov <[email protected]>
| |
Cc: "10.0" "10.1" <[email protected]>
Reviewed-by: Emil Velikov <[email protected]>
| |
In all fairness, we allow the gallium tests to be built with --disable-dri,
which will result in the appropriate winsys not being built, and thus the
build will fail.
./configure --disable-dri --with-gallium-drivers=svga --enable-gallium-tests
Cc: Brian Paul <[email protected]>
Signed-off-by: Emil Velikov <[email protected]>
| |
Rather than defining our own set of variables, use NEED_WINSYS_XLIB and,
based on it, include the sw/xlib winsys.
Signed-off-by: Emil Velikov <[email protected]>
| |
The function relies on the sw/dri winsys, which is built only when --enable-dri
is set. Fixes build issues with the following config:
./configure --disable-dri --with-gallium-drivers=svga --enable-xa
The issue can be reproduced with any hw gallium driver + state tracker that
uses the pipe-loader.
Cc: Brian Paul <[email protected]>
Reported-by: Brian Paul <[email protected]>
Signed-off-by: Emil Velikov <[email protected]>
| |
GL (3.0) allows you to clear individual color buffers in a fb. In fact,
for fbs containing both int and float/normalized color buffers this is
required (because the clear values are otherwise undefined if applied
to all buffers). The gallium interface was changed a while ago, but llvmpipe
ignored it, hence doing such individual clears always resulted in clearing
all buffers, plus some assorted asserts due to the mixed fbs.
So change the clear command to indicate the buffer to be cleared. Also, because
indicating the buffer to be cleared would have made lp_rast_arg_cmd larger,
which is unacceptable (we're trying to shrink it some day), allocate the clear
value in the scene and just pass a pointer.
There are several advantages and disadvantages here:
+ Clearing individual buffers works (we could also actually bin such clears now
if they came through clear_render_target() and the surface is in the current
fb, though we didn't do this before for the single-rb case and still don't try).
+ Since there's one clear per rb, we do the format conversion in setup rather
than per bin. Aside from the (drop in the ocean...) performance advantage, this
means that clearing to very small values (that is, denormal when converted to
the format) should work for small float (fp16 etc.) formats, as the util code
couldn't handle it correctly before (cpu denorms are disabled when executing
the bin commands, screwing up the magic conversion and flushing the values
to 0, though this was not verified).
- There's some overhead for traditional old-style clear-all MRT cases, since
there's one rast clear command per rb instead of one for all rbs.
This fixes https://bugs.freedesktop.org/show_bug.cgi?id=76976.
v2: get rid of the ugly manual memcpy stuff and just use union util_color.
This is 32 bytes instead of 16, but as the allocation is per scene we can live
with those additional 16 bytes (and the additional 128 bytes in the setup
context), and it makes the code much more obvious. Suggested by Brian.
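A rough sketch of the scheme described above; all struct and helper names here
are hypothetical rather than the actual llvmpipe ones, only util_pack_color()
and union util_color are real:

    #include "util/u_pack_color.h"

    struct scene;                                             /* hypothetical */
    void *scene_alloc(struct scene *s, unsigned size);        /* hypothetical */
    void scene_bin_clear(struct scene *s, unsigned cbuf,
                         const union util_color *uc);         /* hypothetical */

    /* One clear command per color buffer: the converted clear value lives
     * in scene storage and the bin command only carries a pointer to it,
     * so lp_rast_arg_cmd does not grow. */
    static void
    bin_clear_rb(struct scene *scene, unsigned cbuf,
                 const float rgba[4], enum pipe_format format)
    {
       union util_color *uc = scene_alloc(scene, sizeof *uc);
       util_pack_color(rgba, format, uc);    /* format conversion in setup */
       scene_bin_clear(scene, cbuf, uc);
    }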
Reviewed-by: Brian Paul <[email protected]>
| |
util_color often merely represents a collection of bytes; however, it is
inconvenient if those bytes can only be accessed as floats/doubles for int
formats exceeding 32 bits.
(Note that rgba8 formats use one uint, not 4 bytes, hence the byte and
short members were left as is.)
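Roughly, the union gains an integer view of the same storage, along these
lines; the member list is abbreviated/approximate, see u_pack_color.h for the
real definition:

    #include <stdint.h>

    union util_color {
       uint8_t  ub;        /* byte and short members stay scalar, since   */
       uint16_t us;        /* rgba8-style formats go through 'ui' anyway  */
       uint32_t ui[4];     /* integer view for int formats > 32 bits      */
       float    f[4];
       double   d[4];
    };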
| |
Currently it's the same value.
Reviewed-by: Brian Paul <[email protected]>
| |
Changing SX_MISC hangs RV740. While we're at it, let's use DX_RASTERIZATION_KILL
on all R700 and later chipsets.
Cc: 10.0 10.1 [email protected]
Reviewed-by: Alex Deucher <[email protected]>
| |
Cc: 10.0 10.1 [email protected]
Reviewed-by: Alex Deucher <[email protected]>
| |
Cc: [email protected]
Reviewed-by: Alex Deucher <[email protected]>
| |
Reviewed-by: Alex Deucher <[email protected]>
| |
This fixes broken rendering in DOTA 2.
Cc: 10.0 10.1 [email protected]
Reviewed-by: Alex Deucher <[email protected]>
| |
Cc: 10.0 10.1 [email protected]
Reviewed-by: Alex Deucher <[email protected]>
| |
Cc: 10.0 10.1 [email protected]
Reviewed-by: Alex Deucher <[email protected]>
| |
Cc: 10.0 10.1 [email protected]
| |
We forgot to set these bits.
Cc: 10.1 [email protected]
Reviewed-by: Alex Deucher <[email protected]>
| |
Cc: [email protected]
Reviewed-by: Brian Paul <[email protected]>
| |
Cc: [email protected]
| |
Broken by:
b2238b3452b0bcf3c1216c20c9918f9f0664b464
winsys/radeon: remove cs_write_reloc, add simpler cs_get_reloc
| |
Reviewed-by: Michel Dänzer <[email protected]>
| |
The current version checking wrongly refuses to create 3.3 contexts;
unsupported versions are checked elsewhere, and the DRI path doesn't do
this sort of checking either.
This enables the piglit glsl 3.30 tests to run without skipping.
Reviewed-by: Brian Paul <[email protected]>
| |
According to Roland all TGSI support is there in theory.
In practice there are a few piglit failures and crashes, as this hadn't
been tested before.
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
| |
The GLX_ARB_create_context_profile spec says:
"If version 3.1 is requested, the context returned may implement
any of the following versions:
* Version 3.1. The GL_ARB_compatibility extension may or may not
be implemented, as determined by the implementation.
* The core profile of version 3.2 or greater."
Mesa does not support GL_ARB_compatibility, and there are no plans to
ever support it, so the only way to honour a 3.1 context request is
through the core profile, i.e., the 2nd alternative from the spec.
This change does that, and with it piglit tests that require 3.1
contexts no longer skip.
Assuming there is no objection to this change, src/glx/dri_common.c
and src/gallium/state_trackers/wgl/stw_context.c should also be updated
accordingly, given they have the same logic.
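The idea can be sketched like this; the enum and helper are illustrative of
the logic, not an exact copy of the patch:

    enum ctx_profile { COMPAT_PROFILE, CORE_PROFILE };   /* illustrative */

    static enum ctx_profile
    profile_for_request(int major, int minor, enum ctx_profile requested)
    {
       /* Mesa has no GL_ARB_compatibility, so the only way to honour a
        * 3.1 request is a core context (the spec's 2nd alternative). */
       if (major == 3 && minor == 1)
          return CORE_PROFILE;
       return requested;
    }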
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
| |
Let's make the draw_get_option_use_llvm function available unconditionally
and use it to avoid useless allocations when the LLVM paths are active.
The TGSI machine is never used when we're using LLVM.
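A minimal sketch of how the option is meant to be used, assuming the existing
draw_get_option_use_llvm() and tgsi_exec_machine_create() helpers; the wrapper
itself is illustrative:

    #include "draw/draw_context.h"
    #include "tgsi/tgsi_exec.h"

    static struct tgsi_exec_machine *
    maybe_create_tgsi_machine(void)
    {
       /* The TGSI machine is only needed on the non-LLVM paths, so skip
        * the allocation entirely when LLVM is in use. */
       if (draw_get_option_use_llvm())
          return NULL;
       return tgsi_exec_machine_create();
    }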
Signed-off-by: Zack Rusin <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
Reviewed-by: José Fonseca <[email protected]>
| |
Reviewed-by: José Fonseca <[email protected]>
| |
There's no reason to compute texel size, stride, etc. if there's no
image data to map.
Reviewed-by: José Fonseca <[email protected]>
| |
And add a const qualifier.
Reviewed-by: José Fonseca <[email protected]>
| |
Fixes a segmentation fault in the conform divzero.c test.
This happens when glTexImage(level, width=0, height=0) is called. We
don't allocate texture memory in that case, so the ImageSlices array
was never allocated.
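The guard is essentially of this shape; a simplified sketch, with the helper
name and surrounding code illustrative rather than the exact swrast hunk:

    static void
    map_texture_slices(struct swrast_texture_image *swImage)
    {
       /* glTexImage with width=0/height=0 allocates no storage, so
        * ImageSlices stays NULL; bail out rather than dereference it. */
       if (!swImage->ImageSlices)
          return;
       /* ... set up the per-slice pointers as before ... */
    }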
Cc: "10.1" <[email protected]>
Reviewed-by: José Fonseca <[email protected]>
| |
Straightforward fix to properly load dest->color with color data, as
opposed to position data as previously implemented.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=27499
Cc: "10.1" <[email protected]>
Signed-off-by: Ville Syrjälä <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
| |
Courtesy of MSVC static code analyser.
Reviewed-by: Roland Scheidegger <[email protected]>
| |
Fixes assertion failures with radeonsi.
Tested-by: Marek Olšák <[email protected]>
| |
This prevents a buffer overflow with llvmpipe when running piglit
bin/gl-3.2-layered-rendering-clear-color-all-types 1d_array single_level -fbo -auto
v2: Compute the framebuffer size as the minimum size, as pointed out by
Brian; compacted code; ran piglit quick test list (with no
regressions.)
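A self-contained sketch of the v2 approach of taking the framebuffer size as
the minimum over the attachments; the attachment struct and helper are
illustrative only:

    struct attachment { unsigned width, height; };    /* illustrative */

    static void
    fb_min_size(const struct attachment *att, unsigned n,
                unsigned *width, unsigned *height)
    {
       /* The safely renderable area is bounded by the smallest attachment,
        * so take the minimum rather than the maximum over all of them. */
       *width = *height = ~0u;
       for (unsigned i = 0; i < n; i++) {
          if (att[i].width  < *width)  *width  = att[i].width;
          if (att[i].height < *height) *height = att[i].height;
       }
       if (n == 0)
          *width = *height = 0;
    }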
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
| |
Courtesy of Clang static analyzer.
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
| |
In cases where varying fetches are optimized away (just pass-through in
the vertex shader, but unused in the fragment shader), we need to calculate
the correct TOTALATTROVS based on the actual number of varyings fetched;
otherwise the GPU locks up.
Signed-off-by: Rob Clark <[email protected]>
| |
HiZ operations make the depth/render caches out of sync with the sampler
caches. We need to arrange for a TC flush to happen before the target
buffer is used by the sampler. Calling brw_render_cache_set_add_bo
makes that happen.
On previous generations, brw_blorp_exec took care of flushing the
texture cache by calling intel_batchbuffer_emit_mi_flush after doing
any rendering. If we were to use the normal drawing path, then
brw_postdraw_set_buffers_need_resolve would handle this.
On Broadwell, we don't use BLORP, and we don't emit a rectangle
primitive via the normal drawing path. The 3DSTATE_WM_HZ_OP and
PIPE_CONTROL implicitly make drawing happen. So, none of our existing
code makes this flush happen - we need to do it directly.
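In code terms the fix boils down to something of this shape; a sketch only,
where the wrapper name is made up and the emit code elided, but the key call
is the one named above:

    static void
    gen8_hiz_op_and_flush(struct brw_context *brw, drm_intel_bo *depth_bo)
    {
       /* ... emit 3DSTATE_WM_HZ_OP and the PIPE_CONTROL that makes the
        * implicit rectangle primitive happen ... */

       /* The depth/render caches are now ahead of the sampler caches;
        * record the BO as a render-cache write so a texture cache flush
        * is emitted before the sampler reads from it. */
       brw_render_cache_set_add_bo(brw, depth_bo);
    }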
Fixes 11 Piglit copyteximage subtests.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=77223
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=77226
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
| |
No need to use 32 bits to store 15 and 12.
Reviewed-by: Anuj Phogat <[email protected]>
| |
Reviewed-by: Kenneth Graunke <[email protected]>
| |
Reviewed-by: Kenneth Graunke <[email protected]>
|
| |
Signed-off-by: Ilia Mirkin <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
| |
Signed-off-by: Christian König <[email protected]>
| |
Signed-off-by: Leo Liu <[email protected]>
Reviewed-by: Christian König <[email protected]>
| |
The way cik_num_banks() was calculating the index only makes sense for
the CIK-specific macrotile mode array. For SI, we need to use the tile
mode index directly.
This happened to work most of the time because most of the SI tiling
modes use the same number of banks.
Reviewed-by: Marek Olšák <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>