| Commit message | Author | Age | Files | Lines |
| |
execinfo.h is not available on MinGW.
Signed-off-by: Vinson Lee <[email protected]>
Reviewed-by: Jose Fonseca <[email protected]>
|
| |
We are now seeing command streams (cs) that can go over the vram+gtt
size. To avoid failing, flush early any cs that goes over 70% of
(gtt+vram) usage; the 70% threshold leaves room for some fragmentation.
The idea is to compute a gross estimate of the memory requirement of
each draw call. After each draw call, memory is accounted precisely,
so the uncertainty only concerns the current draw call.
In practice this gives a very good estimate (+/- 10% of the target
memory limit).
v2: Remove leftovers from the testing version, remove useless NULL
checks. Improve commit message.
v3: Add a comment to the code on memory accounting precision.
Signed-off-by: Jerome Glisse <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
|
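A minimal sketch of the flush-early heuristic described in the entry above;
the names (mem_ctx, should_flush_early, the estimate argument) are hypothetical
stand-ins, not the actual r600g code.

    #include <stdbool.h>
    #include <stdint.h>

    struct mem_ctx {
       uint64_t vram_size; /* total VRAM available to the command stream */
       uint64_t gtt_size;  /* total GTT available to the command stream */
       uint64_t used;      /* memory precisely accounted after each draw */
    };

    /* Returns true if the cs should be flushed before the next draw call. */
    static bool should_flush_early(const struct mem_ctx *ctx,
                                   uint64_t draw_estimate)
    {
       /* 70% of (vram + gtt): the slack allows for fragmentation and for
        * the +/- 10% uncertainty of the current draw call's gross estimate. */
       uint64_t limit = (ctx->vram_size + ctx->gtt_size) * 70 / 100;

       return ctx->used + draw_estimate > limit;
    }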
| |
NOTE: This is a candidate for the 9.1 branch.
|
| |
Now that branch 9.1 is created, bump the minor version in
master.
Reviewed-by: Kenneth Graunke <[email protected]>
|
| |
This reverts commit 2906e2034c9d674601960a5b586b6e986e6ef04f.
Fixes a regression in the glean depthStencil test.
Reverting this does not affect any tests in es3conform, so a more recent
patch must have also fixed the failure this one was intended to fix.
Reported-by: lu hua <[email protected]>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=59494
|
| |
Christoph implemented this, so we should expect it to be present now.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=60082
|
| |
v2: Update to handle BufferSize being -1 and return a NULL sampler
view if the specified range would cause out of bounds access.
Reviewed-by: Brian Paul <[email protected]>
Acked-by: Ian Romanick <[email protected]>
|
| |
Reviewed-by: Brian Paul <[email protected]>
|
| |
v2: Record texObj.BufferSize as -1 in TexBuffer(non-Range) instead
of the buffer's current size so we know we always have to use the
full size of the buffer object (i.e. even if it changes without the
user calling TexBuffer again) for the texture.
Clarify invalid offset alignment error message.
v3: Use extra GL_CORE-only section in get_hash_params.py for
TEXTURE_BUFFER_OFFSET_ALIGNMENT.
v4: Remove unnecessary check for profile in _mesa_TexBufferRange.
Add check for extension enable in get_tex_level_parameter_buffer.
v5: Fix position in gl_API.xml.
Add comment about meaning of BufferSize == -1.
v6: Add back checks for core profile and add a note about it.
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
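A sketch of the BufferSize == -1 convention from the entry above: -1 means the
texture always tracks the buffer object's current size, so the effective size
is resolved at use time. Types and names are illustrative, not the actual Mesa
code.

    #include <stdint.h>

    struct buffer_texture {
       int64_t buffer_offset; /* set by TexBufferRange, 0 for TexBuffer */
       int64_t buffer_size;   /* -1: always use the whole buffer object */
    };

    static int64_t effective_size(const struct buffer_texture *tex,
                                  int64_t current_buffer_object_size)
    {
       if (tex->buffer_size == -1)
          return current_buffer_object_size - tex->buffer_offset;
       return tex->buffer_size;
    }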
| |
Reported-by: Lauri Kasanen <[email protected]>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=47248#c15
|
| |
Not used by any driver anymore.
Reviewed-by: Kenneth Graunke <[email protected]>
|
| |
Check that the return value from xcb_dri2_swap_buffers_reply is
non-NULL before accessing the struct members.
Note: This is a candidate for the 9.0 branch.
Reviewed-by: Brian Paul <[email protected]>
|
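The fix is the usual defensive pattern around xcb *_reply() calls, which
return NULL on error. A sketch assuming the standard xcb-dri2 API
(xcb_dri2_swap_buffers_reply and its swap_hi/swap_lo fields); the helper name
is made up.

    #include <stdint.h>
    #include <stdlib.h>
    #include <xcb/dri2.h>

    /* Returns the swap count, or -1 if the reply never arrived. */
    static int64_t get_swap_count(xcb_connection_t *c,
                                  xcb_dri2_swap_buffers_cookie_t cookie)
    {
       xcb_dri2_swap_buffers_reply_t *reply =
          xcb_dri2_swap_buffers_reply(c, cookie, NULL);
       int64_t count = -1;

       /* The reply is NULL on error; only touch its members when non-NULL. */
       if (reply) {
          count = ((int64_t) reply->swap_hi << 32) | reply->swap_lo;
          free(reply);
       }
       return count;
    }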
| |
S8_UNORM was inadvertently supported together with Z16_UNORM.
I tried to update the code to accommodate stencil-only -- it seemed a simple
thing to do -- but "fbo-stencil clear GL_STENCIL_INDEX8" still fails,
and it's not worth debugging.
Therefore, although this change updates the code for S8_UNORM, it also
disables it completely.
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
|
| |
This special path hadn't been exercised by my earlier testing, and mask
values weren't being properly truncated to match the values.
This change fixes that.
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
|
| |
The struct padding got broken by c789b981b244333cfc903bcd1e2fefc010500013.
This caused a serious performance regression because part of the key was
uninitialized, hence the shader was always recompiled (at least on release
builds...).
While here, also fix the key size calculation when the number of samplers
and the number of sampler views differ.
v2: add comment
Reviewed-by: Jose Fonseca <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
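The defect described above is the classic failure mode of memcmp/hash-keyed
shader caches: uninitialized padding makes logically equal keys compare
unequal, so every lookup misses and recompiles. A generic illustration of the
remedy (the struct is made up, not llvmpipe's actual key):

    #include <string.h>

    struct shader_key {
       unsigned nr_samplers;
       unsigned nr_sampler_views;
       unsigned char flags;
       /* implicit compiler padding may live here */
       unsigned state[8];
    };

    static void key_init(struct shader_key *key,
                         unsigned nr_samplers, unsigned nr_sampler_views)
    {
       /* Zero the whole struct first so padding bytes are deterministic;
        * otherwise memcmp()-based cache lookups see garbage and always miss. */
       memset(key, 0, sizeof(*key));
       key->nr_samplers = nr_samplers;
       key->nr_sampler_views = nr_sampler_views;
    }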
| |
We never really have multisampling with one sample per pixel.
See also http://bugs.freedesktop.org/show_bug.cgi?id=59873
Note: This is a candidate for the 9.0 branch.
Reviewed-by: Jose Fonseca <[email protected]>
|
| |
Note: This is a candidate for the 9.0 branch.
Reviewed-by: Jose Fonseca <[email protected]>
|
| |
See previous commit for more info.
Note: This is a candidate for the 9.0 branch.
Reviewed-by: José Fonseca <[email protected]>
|
| |
The swrast fragment program interpreter has trouble computing the
right texture LOD because it doesn't have easy access to input
derivatives. This causes the GLSL-based meta generate mipmap code
to fetch texels from the wrong mipmap level.
One possible fix would be to set the GL_TEXTURE_MIN/MAX_LOD parameters
to restrict sampling to the right level. But let's just use the
_mesa_generate_mipmap() fallback since it's a lot faster than using
the fragment shader interpreter.
Fixes http://bugs.freedesktop.org/show_bug.cgi?id=54240
Note: This is a candidate for the 9.0 branch.
Reviewed-by: José Fonseca <[email protected]>
|
| |
The gallium docs for pipe_screen::is_format_supported() say that
samples==0 or samples==1 both mean that multisampling is not supported.
Return GL_MAX_SAMPLES==0 instead of 1 for consistency with other drivers.
Note: This is a candidate for the 9.0 branch.
Reviewed-by: Marek Olšák <[email protected]>
|
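A sketch of the convention referenced above: samples == 0 and samples == 1
both mean "no multisampling", so the GL cap collapses to 0. The query callback
stands in for a pipe_screen::is_format_supported() check at a given sample
count; names are illustrative.

    /* supports_msaa(samples) is a placeholder for an is_format_supported()
     * style query of the screen at the given sample count. */
    static int query_max_samples(int (*supports_msaa)(unsigned samples))
    {
       unsigned samples;

       for (samples = 16; samples > 1; samples--) {
          if (supports_msaa(samples))
             return (int) samples; /* real multisampling is available */
       }
       /* 0 and 1 samples both mean "no multisampling": report 0 for
        * consistency with other drivers. */
       return 0;
    }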
| |
Reviewed-by: Ian Romanick <[email protected]>
|
| |
Reviewed-by: Ian Romanick <[email protected]>
|
| |
Simply by adjusting the vector element width after/before
reading/writing the depth-stencil values.
Ran several GL_DEPTH_COMPONENT16 piglit tests without regressions.
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
|
| |
The maximum number of URB entries comes from the 3DSTATE_URB_VS and
3DSTATE_URB_GS state packet documentation; the thread count information
comes from the 3DSTATE_VS and 3DSTATE_PS state packet documentation.
NOTE: This is a candidate for the 9.0 branch.
Signed-off-by: Kenneth Graunke <[email protected]>
Signed-off-by: Eugeni Dodonov <[email protected]>
|
| |
The packet length may change at some point in the future. Specifying it
explicitly (rather than hardcoding it in the command #define) allows us
to change it much more easily in the future.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
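The tradeoff is the usual one for GPU command packets: state the DWord count
at the emit site rather than baking it into the opcode #define. A made-up
illustration (the opcode and macro names are not real i965 definitions):

    #include <stdint.h>

    /* Made-up opcode; the length field follows the common "total DWords - 2"
     * convention, so growing the packet only changes the emit site. */
    #define _3DSTATE_EXAMPLE          (0x7900u << 16)
    #define PACKET_HEADER(op, dwords) ((op) | ((uint32_t)(dwords) - 2))

    static void emit_example_state(uint32_t *batch, uint32_t value)
    {
       batch[0] = PACKET_HEADER(_3DSTATE_EXAMPLE, 2); /* header + 1 payload DWord */
       batch[1] = value;
    }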
| |
Left behind by a8ab7e33.
|
| |
I did not list the *_get_program_binary extensions since they're not
useful to anyone with their current implementation (that supports 0
binary formats).
|
| |
Before, we were keeping a CPU-only buffer to accumulate the batchbuffer in,
which was an improvement over mapping the batch through the GTT directly
(since any readback or other failure to stream through write combining
correctly would hurt). However, on LLC-sharing architectures we can do better
by mapping the batch directly, which reduces the cache footprint of the
application since we no longer have this extra copy of a batchbuffer around.
Improves performance of GLBenchmark 2.1 offscreen on IVB by 3.5% +/- 0.4%
(n=21). Improves Lightsmark performance by 1.1% +/- 0.1% (n=76). Improves
cairo-gl performance by 1.9% +/- 1.4% (n=57).
No statistically significant difference in GLB2.1 on SNB (n=37). Improves
cairo-gl performance by 2.1% +/- 0.1% (n=278).
|
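A hedged sketch of the choice described above: on LLC-sharing parts the batch
buffer object is mapped and written directly, otherwise commands are
accumulated in a malloc'd shadow copy and uploaded at flush time. Struct and
function names are placeholders, not the i965 ones.

    #include <stdlib.h>

    struct batch {
       void  *map;      /* where commands are written */
       void  *cpu_copy; /* shadow buffer, used only on non-LLC parts */
       size_t size;
    };

    static int batch_init(struct batch *batch, void *bo_map, size_t size,
                          int has_llc)
    {
       batch->size = size;
       if (has_llc) {
          /* CPU and GPU share the last-level cache: write into the BO
           * directly and drop the extra copy (smaller cache footprint). */
          batch->cpu_copy = NULL;
          batch->map = bo_map;
       } else {
          /* Build the batch in cached memory and copy it into the BO at
           * flush time, avoiding reads through write-combined mappings. */
          batch->cpu_copy = malloc(size);
          batch->map = batch->cpu_copy;
       }
       return batch->map != NULL;
    }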
| |
Signed-off-by: Jerome Glisse <[email protected]>
|
| |
Signed-off-by: Jerome Glisse <[email protected]>
|
| |
Fixes "side effect in assertion" defects reported by Coverity.
Note: This is a candidate for the 9.1 branch.
Signed-off-by: Vinson Lee <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
|
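The Coverity defect class is worth a tiny generic illustration (not the actual
Mesa code): a call with a side effect inside assert() disappears entirely when
NDEBUG is defined, so the side effect must be hoisted out.

    #include <assert.h>

    static int update_state(void) { return 0; } /* placeholder side-effecting call */

    static void example(void)
    {
       /* Bad: the call vanishes on release (NDEBUG) builds.
        *   assert(update_state() == 0);
        */

       /* Good: perform the side effect unconditionally, assert on the result. */
       int ret = update_state();
       assert(ret == 0);
       (void) ret; /* keep release builds warning-free */
    }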
| |
Currently a gralloc internal structure is exposed to Mesa; use a
query function instead to maintain ABI compatibility.
Signed-off-by: Tapani Pälli <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
|
| |
Comment change only.
|
| |
Old kernels do not have DMA support, and the patch that was pushed was
missing some of the checks needed to avoid using DMA.
Signed-off-by: Jerome Glisse <[email protected]>
|
| |
Only drivers supporting DRI2 version >=4 support GLX_INTEL_swap_event.
So let's mark it as such; otherwise applications which use this extension
(i.e. everything based on Clutter, e.g. gnome-shell) break horribly on
drivers supporting DRI2 versions only up to 3.
Note: This is a candidate for the 9.0 branch.
Reviewed-by: Brian Paul <[email protected]>
|
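A sketch of the gating described above, with placeholder names: the
swap-complete events behind GLX_INTEL_swap_event are only available from
drivers whose DRI2 interface version is at least 4, so the extension must only
be advertised in that case.

    struct dri2_screen_info {
       int dri2_version; /* version of the driver's DRI2 interface */
    };

    static int supports_intel_swap_event(const struct dri2_screen_info *psc)
    {
       /* Versions <= 3 cannot deliver the swap-complete events that
        * GLX_INTEL_swap_event promises to clients such as Clutter. */
       return psc->dri2_version >= 4;
    }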
| |
Get rid of special handling for reserved regs.
Use one intrinsic for all kinds of interpolation.
v2[Vincent Lejeune]: Rebased against current master
Reviewed-by: Tom Stellard <[email protected]>
Signed-off-by: Vadim Girlin <[email protected]>
|
| |
r600_bytecode::ar_chan stores the register channel for the value that
will be loaded into the AR register.
At the moment, this field is only used by the LLVM backend. The default
backend always sets ar_chan = 0.
|
| |
Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=59588
Reviewed-by: Tom Stellard <[email protected]>
|
| |
https://bugs.freedesktop.org/show_bug.cgi?id=59877
|
| |
It shouldn't be needed and older kernels don't support
it.
v2: Replace with PS partial flush as before.
Fixes:
https://bugs.freedesktop.org/show_bug.cgi?id=59945
Signed-off-by: Alex Deucher <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
|
| |
v2: Add virtual address to dma src/dst offset for cayman
Signed-off-by: Jerome Glisse <[email protected]>
|
| |
We keep track of ring emission order in a stack; whenever we need to
flush, we empty the stack in FIFO order. A few helper functions for
bo mapping and other ring activities make sure that the ring stack is
properly flushed and submitted.
v2: fix the st flush path, and the other flush paths, to properly flush
    all rings if necessary
v3: - improve the names of the ring helpers
    - make sure that each time a cs is going to be written it ends up at
      the top of the stack, to avoid an issue such as:
      STACK[0] = dma (with bo A, B)
      STACK[1] = gfx (with bo C, D)
      If code now tries to emit a dma command relative to bo C or D,
      it will start writing the command stream into the cs, and once it
      reaches the point where it adds the relocation it will flush.
      At that point the cs will have commands without proper relocations
      in the relocation buffer and the kernel will just refuse to run it.
v4: - Drop the stack idea, as it turns out there is no way to use it
      or benefit from it. Any time the driver starts commands on another
      ring, it always needs to flush the previous ring first. So make
      the code simpler by not using a stack.
Signed-off-by: Jerome Glisse <[email protected]>
|
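A sketch of the final (v4) scheme, with hypothetical names: there is no stack,
the driver simply flushes whatever ring was active before emitting on a
different one, so the kernel always receives command streams with complete
relocations.

    enum ring_type { RING_GFX, RING_DMA };

    struct ring_ctx {
       enum ring_type current;
       void (*flush_ring)(struct ring_ctx *ctx, enum ring_type ring);
    };

    static void begin_ring(struct ring_ctx *ctx, enum ring_type ring)
    {
       if (ctx->current != ring) {
          /* The other ring may reference buffers the new commands touch,
           * so it must be flushed and submitted before switching. */
          ctx->flush_ring(ctx, ctx->current);
          ctx->current = ring;
       }
    }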
| |
Add ring support: you can create a cs for each ring. The DMA ring is a
bit special regarding relocations, as you must emit as many relocations
as there are uses of the buffer.
v2: - Improved comments on the relocation changes
    - Use a single thread to queue cs submissions; this simplifies the
      driver code without impacting performance. The rationale is that
      you have to wait for all previous submissions to complete, so
      there was never a case where two different threads could submit
      a command stream at the same time. This code just consolidates
      submission into one single thread per winsys.
v3: - Do not use a semaphore for empty-queue signaling; use a condition
      variable instead. This is because it's tricky to maintain a
      matching number of calls to semaphore wait and semaphore signal
      (the number of cs in the stack would, for instance, make that
      number vary).
Signed-off-by: Jerome Glisse <[email protected]>
|
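A minimal sketch of the single-submission-thread design described in v2/v3,
using pthreads and placeholder types (not the actual winsys code): one worker
drains a FIFO of command streams, and the same mutex/condition-variable pair
both wakes the worker and lets callers wait for an empty queue.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct radeon_cs { struct radeon_cs *next; };

    struct cs_queue {
       pthread_mutex_t   mutex;
       pthread_cond_t    cond;
       struct radeon_cs *head;     /* FIFO of pending command streams */
       bool              shutdown;
    };

    /* Placeholder for the actual kernel submission ioctl. */
    static void submit_cs(struct radeon_cs *cs) { (void) cs; }

    static void *submission_thread(void *arg)
    {
       struct cs_queue *q = arg;

       pthread_mutex_lock(&q->mutex);
       while (!q->shutdown) {
          while (q->head) {
             struct radeon_cs *cs = q->head;
             q->head = cs->next;
             pthread_mutex_unlock(&q->mutex);
             submit_cs(cs);                 /* submit outside the lock */
             pthread_mutex_lock(&q->mutex);
          }
          pthread_cond_broadcast(&q->cond); /* queue drained: wake any waiters */
          pthread_cond_wait(&q->cond, &q->mutex);
       }
       pthread_mutex_unlock(&q->mutex);
       return NULL;
    }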
| |
Make it obvious what "unit" this is (no change in functionality).
draw still uses "unit" in places where it changes the shader by adding
texture sampling itself - it seems like this can't work with shaders
using dx10-style sample opcodes (can't mix gl-style and dx10-style
sample instructions in a shader).
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Jose Fonseca <[email protected]>
|
| |
Split the sampler interface to use separate sampler and texture (sampler_view)
state. This is needed to support dx10-style sampling instructions.
This is not quite complete, since draw/llvmpipe don't really track
textures and samplers independently yet, and the gallivm code doesn't quite
use the right sampler or texture index respectively (but it should work
for the sampling code paths used by OpenGL).
We do, however, lose some optimizations in the process: apply_max_lod will
no longer work, and we could potentially end up with more (unnecessary)
recompiles (only when switching textures with/without mipmaps, so it
shouldn't be too bad).
v2: don't use different callback structs for the sampler/sampler-view
functions (which just complicates things), fix up the sampling code to
actually use the right texture or sampler index, and similarly make
llvmpipe/draw actually distinguish between samplers and sampler views.
v3: fix more PIPE_MAX_SAMPLER / PIPE_MAX_SHADER_SAMPLER_VIEWS mismatches
(both in draw and llvmpipe); based on feedback from José, get rid of the
unneeded static sampler derived state (which also fixes the only 2 piglit
regressions, due to a forgotten assignment); fix comments based on Brian's
feedback.
v4: remove some accidental unrelated whitespace changes
Reviewed-by: José Fonseca <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
|
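An illustrative sketch of the split described above (placeholder constants and
types, not the gallium interface itself): filtering state and texture
(sampler_view) state live in separate arrays with separate counts, since
dx10-style sample opcodes index them independently.

    #define MAX_SAMPLERS      16  /* placeholder limit for samplers */
    #define MAX_SAMPLER_VIEWS 32  /* placeholder limit for sampler views */

    struct sampler_state;         /* wrap/filter/LOD settings */
    struct sampler_view;          /* which texture, format and range to read */

    struct fragment_sampling_state {
       const struct sampler_state *samplers[MAX_SAMPLERS];
       unsigned                    num_samplers;
       const struct sampler_view  *views[MAX_SAMPLER_VIEWS];
       unsigned                    num_views;
    };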