Remove the local definition of RADEON_INFO_TILE_CONFIG and use the correct
macro provided by libdrm_radeon, RADEON_INFO_TILING_CONFIG.
The latter has been present since libdrm 2.4.22, circa 2010.
Signed-off-by: Emil Velikov <[email protected]>
Reviewed-by: Nicolai Hähnle <[email protected]>
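
A minimal sketch of the query this macro is used for, assuming libdrm's
drm_radeon_info interface; the variable and function names here are
illustrative, not the exact code touched by the patch:

    #include <string.h>
    #include <stdint.h>
    #include <xf86drm.h>
    #include <radeon_drm.h>   /* provides RADEON_INFO_TILING_CONFIG */

    static int query_tiling_config(int fd, uint32_t *tiling_config)
    {
       struct drm_radeon_info info;

       memset(&info, 0, sizeof(info));
       /* previously a local RADEON_INFO_TILE_CONFIG define was used here */
       info.request = RADEON_INFO_TILING_CONFIG;
       info.value = (uintptr_t) tiling_config;

       return drmCommandWriteRead(fd, DRM_RADEON_INFO, &info, sizeof(info));
    }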
Analogous to previous commit.
Cc: "12.0 13.0" <[email protected]>
Signed-off-by: Emil Velikov <[email protected]>
Signed-off-by: Boyan Ding <[email protected]>
[Emil Velikov: handle all the cases]
Signed-off-by: Emil Velikov <[email protected]>
Compile tested only.
Reviewed-by: Vinson Lee <[email protected]>
None of the drivers which implement this hook do anything with the
texture parameter value. Drivers just look at the pname and set a
dirty flag if needed.
We were doing some ugly casting and type conversion to set up the
argument, so all of that goes away.
Reviewed-by: Marek Olšák <[email protected]>
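
A sketch of what a driver-side hook looks like after this change; the
exact prototype and the dirty flag are assumptions based on the
description above:

    #include <stdbool.h>
    #include "main/mtypes.h"   /* struct gl_context, gl_texture_object */

    struct drv_texture {                 /* hypothetical driver wrapper */
       struct gl_texture_object base;
       bool dirty;
    };

    /* No parameter value is passed any more; the driver only inspects
     * pname and flags its state as dirty. */
    static void
    drv_tex_parameter(struct gl_context *ctx,
                      struct gl_texture_object *texObj, GLenum pname)
    {
       (void) ctx;
       switch (pname) {
       case GL_TEXTURE_MIN_FILTER:
       case GL_TEXTURE_MAG_FILTER:
       case GL_TEXTURE_WRAP_S:
       case GL_TEXTURE_WRAP_T:
          ((struct drv_texture *) texObj)->dirty = true;
          break;
       default:
          break;
       }
    }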
Some GPUs, notably nv3x/nv4x, can't render to mismatched color/zs
framebuffer depths. Fallbacks can be done by the driver, with shadow
surfaces, but there is no reason to encourage applications to select
non-matching GLX visuals.
Signed-off-by: Ilia Mirkin <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
Replaces a loop that iterates over every bit position in a bitmask and
tests each one with a loop that only visits the bits actually set in
the bitmask.
v2: Use _mesa_bit_scan{,64} instead of open coding.
v3: Use u_bit_scan{,64} instead of _mesa_bit_scan{,64}.
Reviewed-by: Brian Paul <[email protected]>
Signed-off-by: Mathias Fröhlich <[email protected]>
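
A sketch of the transformation, using u_bit_scan() from util/bitscan.h
as referenced in v3; the mask, bound and per-bit helper are placeholders:

    #include "util/bitscan.h"   /* u_bit_scan(), u_bit_scan64() */

    /* Before: walk every possible index and test its bit. */
    for (unsigned i = 0; i < MAX_UNITS; i++) {
       if (mask & (1u << i))
          update_unit(ctx, i);
    }

    /* After: visit only the set bits.  u_bit_scan() returns the index of
     * the lowest set bit and clears it from the local copy of the mask. */
    unsigned remaining = mask;
    while (remaining) {
       const int i = u_bit_scan(&remaining);
       update_unit(ctx, i);
    }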
Replaces a loop that iterates over all lights and tests which of them
are enabled with a loop that only visits the bits set in the enabled
bitmask.
v2: Use _mesa_bit_scan{,64} instead of open coding.
v3: Use u_bit_scan{,64} instead of _mesa_bit_scan{,64}.
Reviewed-by: Brian Paul <[email protected]>
Signed-off-by: Mathias Fröhlich <[email protected]>
This is more future-proof, plugs the memory leak of Label and properly
destroys the buffer mutex.
Reviewed-by: Marek Olšák <[email protected]>
Cc: "11.0 11.1" <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
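
The pattern this moves drivers to, roughly; everything except
_mesa_delete_buffer_object() is schematic:

    #include "main/bufferobj.h"   /* _mesa_delete_buffer_object() */

    static void
    drv_delete_buffer(struct gl_context *ctx, struct gl_buffer_object *obj)
    {
       /* ... driver-specific teardown (unreference the BO, etc.) ... */

       /* Frees obj->Label, destroys the buffer mutex and then frees the
        * object itself, instead of an open-coded free(obj). */
       _mesa_delete_buffer_object(ctx, obj);
    }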
Add missing `const` specifier for pointer pointing to a const struct.
Signed-off-by: Giuseppe Bilotta <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Emil Velikov <[email protected]>
Signed-off-by: Giuseppe Bilotta <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Emil Velikov <[email protected]>
Since d21320f6258b2e1780a15c1ca718963d8a15ca18 the same txformat table entries
are used for "normal" texturing as well as for blits. However, I forgot to put
in an entry for the bgrx8 (le) and xrgb8 (be) formats - the normal texturing
path can't hit them because the radeon tex format chooser will never choose
them, but we get that format from the dri buffers (at least I assume we got
it from there). This caused lots of piglit regressions (and probably lots of
trouble outside piglit too).
This fixes bug https://bugs.freedesktop.org/show_bug.cgi?id=92900.
Tested-by: Ian Romanick <[email protected]>
Acked-by: Alex Deucher <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
Cc: "11.0" <[email protected]>
We want to use intel_debug.c in code that doesn't link to dri common.
v2: Remove unnecessary stddef.h include (Topi), use util/debug.h
in all DRI drivers and remove driParseDebugString() (Iago).
Reviewed-by: Topi Pohjolainen <[email protected]>
Signed-off-by: Kristian Høgsberg Kristensen <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Using C99 initializers for the primitive arrays makes things more
readable.
Signed-off-by: Ian Romanick <[email protected]>
Suggested-by: Matt Turner <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
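
An illustration of the style in question; the table name and values are
made up, not taken from the patch:

    #include <GL/gl.h>

    /* With designated initializers the mapping from primitive enum to
     * entry is explicit and independent of declaration order. */
    static const unsigned example_verts_per_prim[GL_POLYGON + 1] = {
       [GL_POINTS]    = 1,
       [GL_LINES]     = 2,
       [GL_TRIANGLES] = 3,
       [GL_QUADS]     = 4,
    };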
Nothing sets it.
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Defined to 0 in a few places, but it's not used anywhere.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Two drivers use this file, and neither supports quads.
No piglit regressions on i915 (G33) or radeon (Radeon 7500).
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Two drivers use this file, and neither supports quad strips.
No piglit regressions on i915 (G33) or radeon (Radeon 7500).
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
The value passed in count previously was "vertex after the last vertex
to be processed." Calling that "count" was misleading and kind of mean.
Looking at the code, many functions immediately do "count-start" to get
back the true count. That's just silly.
If it is better for the loops to be 'for (j = start; j < (start +
count); j++)', GCC will do that transformation.
NOTE: There is some strange formatting left by this patch. That was
done to make it more obvious that the before and after code is
equivalent. These will be fixed in the next patch.
No piglit regressions on i915 (G33) or radeon (Radeon 7500).
v2: Fix a remaining (count-start) in render_quad_strip_verts.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Brian Paul <[email protected]> [v1]
Cc: "10.6 11.0" <[email protected]>
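
A before/after sketch of the interface change; the function and helper
names are hypothetical, only the meaning of "count" reflects the patch:

    /* Old meaning: count was really the end index (one past the last
     * vertex), so callees kept computing count - start. */
    static void render_points_old(struct gl_context *ctx,
                                  unsigned start, unsigned count)
    {
       for (unsigned j = start; j < count; j++)
          render_vertex(ctx, j);          /* render_vertex: placeholder */
    }

    /* New meaning: count is a true element count again. */
    static void render_points_new(struct gl_context *ctx,
                                  unsigned start, unsigned count)
    {
       for (unsigned j = 0; j < count; j++)
          render_vertex(ctx, start + j);
    }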
ARB_viewport_array specifies that DEPTH_RANGE consists of double-
precision parameters (corresponding commit d4dc35987), and a preparatory
commit (6340e609a) added _mesa_get_viewport_xform() which returned
double-precision scale[3] and translate[3] vectors, even though X, Y,
Width, and Height were still floats.
All users of _mesa_get_viewport_xform() immediately convert the double
scale and translation vectors into floats (which were floats originally,
but were converted to doubles in _mesa_get_viewport_xform(), sigh).
i965 at least cannot consume doubles (see SF_CLIP_VIEWPORT). If we want
to pass doubles to hardware, we should have a different function that
does that.
Acked-by: Mathias Froehlich <[email protected]>
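
The resulting interface, as implied above (prototype reconstructed from
the description, not copied from the patch):

    /* Scale and translation of viewport i, in floats, matching what the
     * callers and hardware viewport state (e.g. i965's SF_CLIP_VIEWPORT)
     * actually consume. */
    extern void
    _mesa_get_viewport_xform(struct gl_context *ctx, unsigned i,
                             float scale[3], float translate[3]);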
radeon_fbo.c: In function 'radeon_map_renderbuffer_s8z24':
radeon_fbo.c:162:9: warning: variable 'ret' set but not used [-Wunused-but-set-variable]
int ret;
^
radeon_fbo.c: In function 'radeon_map_renderbuffer_z16':
radeon_fbo.c:200:9: warning: variable 'ret' set but not used [-Wunused-but-set-variable]
int ret;
^
radeon_fbo.c: In function 'radeon_map_renderbuffer':
radeon_fbo.c:242:8: warning: variable 'ret' set but not used [-Wunused-but-set-variable]
int ret;
^
radeon_fbo.c: In function 'radeon_unmap_renderbuffer':
radeon_fbo.c:419:14: warning: variable 'ok' set but not used [-Wunused-but-set-variable]
GLboolean ok;
^
Signed-off-by: Vinson Lee <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
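
The usual shape of this kind of cleanup, shown here against libdrm_radeon's
radeon_bo_map(); the surrounding variable is an assumption, not the exact
code in radeon_fbo.c:

    #include <radeon_bo.h>   /* libdrm_radeon */

    static void *map_example(struct radeon_bo *bo)
    {
       /* before:  int ret;  ret = radeon_bo_map(bo, 1);   ret never read */
       if (radeon_bo_map(bo, 1) != 0)   /* after: check (or drop) the result */
          return NULL;
       return bo->ptr;
    }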
The original code only half considered hyperz as an option. As per the
previous commit, "major != 2" cannot occur, so we can simplify things and
allow users to set the option if they choose to do so.
Signed-off-by: Emil Velikov <[email protected]>
Reviewed-by: Michel Dänzer <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
As mentioned by Michel Dänzer:
"FWIW though, any code which is specific to radeon DRM major version 1
can be removed, because that's the UMS major version."
and Marek Olšák:
"'major != 2' can't occur. You don't have to check the major version at
all and you can just assume it's always 2."
Signed-off-by: Emil Velikov <[email protected]>
Reviewed-by: Michel Dänzer <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
Dead since at least 2009, with commit ccf7814a315 (radeon: major cleanups
removing old dead codepaths).
Signed-off-by: Emil Velikov <[email protected]>
Reviewed-by: Michel Dänzer <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
Reviewed-by: Roland Scheidegger <[email protected]>
Generated by sed; no manual changes.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
The formats chosen (both by the texture format chooser and by fbo storage
allocation) are different on big endian, not just for rgba8 but also for
lower bit width formats (why, I don't actually know). Even the function to
test for renderable formats used different formats; however, the actual
colorbuffer setup did not, and the blitter did not take that into account
either.
Untested (what could possibly go wrong...).
Acked-by: Marek Olšák <[email protected]>
Blit submits lots of packets which are usually handled by state atoms, so
these must be dirtied.
Not sure if this fixes anything, but it was a concern raised in bug 51658
(with this, all issues there that look like actual bugs should be fixed,
with the exception of the patch to upload unused texenv state atoms, which
I just don't understand).
Acked-by: Marek Olšák <[email protected]>
It is rather unfortunate that we don't know if a texture is going to be used
as a render target later, and we lack the means to do something about a
chosen format which we can't render to directly, so disable this and always
choose a renderable format for rgba8 textures.
This addresses an issue raised in the (old) bug
https://bugs.freedesktop.org/show_bug.cgi?id=51658 with gnome-shell; I don't
know if that's still applicable, but it might fix other things as well.
Acked-by: Marek Olšák <[email protected]>
Most of the data stored (duplicated) was unused; for the data that is used,
follow the approach set by other drivers.
This eliminates the use of legacy (dri1) types.
Cc: Marek Olšák <[email protected]>
Signed-off-by: Emil Velikov <[email protected]>
Reviewed-by: Michel Dänzer <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Commit b765119c changed the default value of all the counter bits to
64. However, older hardware only has 32 counter bits.
This has only been build-tested. We don't have any tests that verify
the advertised value against implementation behavior, so I don't know
what additional testing could be done.
NOTE: It appears that many Gallium drivers (at least r300 and i915g)
have the same problem, but I don't see a way for the state-tracker to
determine the counter size. Marek says, "For Gallium, a new PIPE_CAP or
new get_xxx_param function will be needed."
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
Cc: Alex Deucher <[email protected]>
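
A sketch of the kind of override described above; the condition is
hypothetical and the field names are assumed from Mesa's gl_constants:

    /* In the driver's context setup, after the defaults are applied: */
    if (!has_64bit_query_counters)   /* hypothetical per-generation check */
       ctx->Const.QueryCounterBits.SamplesPassed = 32;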
Reviewed-by: Fredrik Höglund <[email protected]>
Signed-off-by: Fredrik Höglund <[email protected]>
_mesa_update_framebuffer now operates on arbitrary read and draw framebuffers.
This allows BlitNamedFramebuffer to update the state of its arbitrary read and
draw framebuffers.
Reviewed-by: Fredrik Höglund <[email protected]>
Signed-off-by: Fredrik Höglund <[email protected]>
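
The reworked entry point implied above (prototype reconstructed from the
description, not copied from the patch):

    /* Instead of implicitly using ctx->ReadBuffer and ctx->DrawBuffer,
     * callers now pass both framebuffers explicitly. */
    extern void
    _mesa_update_framebuffer(struct gl_context *ctx,
                             struct gl_framebuffer *readFb,
                             struct gl_framebuffer *drawFb);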
Rename _mesa_framebuffer_renderbuffer to _mesa_FramebufferRenderbuffer_sw in
preparation for adding the ARB_direct_state_access backend function for
FramebufferRenderbuffer and NamedFramebufferRenderbuffer to share.
Reviewed-by: Fredrik Höglund <[email protected]>
Signed-off-by: Fredrik Höglund <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
Consistently just use C99's __func__ everywhere.
No functional changes.
Signed-off-by: Marius Predut <[email protected]>
Acked-by: Michel Dänzer <[email protected]>
Reviewed-by: Emil Velikov <[email protected]>
Instead of _WindowMap, just use the translation and scale of the
viewport transform directly, thereby avoiding dividing by _DepthMaxF
again.
v2:
Change order of assignments.
Reviewed-by: Brian Paul <[email protected]>
Signed-off-by: Mathias Froehlich <[email protected]>
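
A schematic of the replacement; the emit helper is hypothetical, and at
this point in history _mesa_get_viewport_xform() still returned doubles:

    double scale[3], translate[3];
    _mesa_get_viewport_xform(ctx, 0, scale, translate);

    emit_viewport_state(rmesa,                   /* hypothetical helper */
                        scale[0], translate[0],  /* x scale / offset */
                        scale[1], translate[1],  /* y scale / offset */
                        scale[2], translate[2]); /* z, no _DepthMaxF divide */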
Acked-by: Ilia Mirkin <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
Acked-by: Eric Anholt <[email protected]>
This avoids duplication of some macros and other definitions across the
tree.
Note that COPY_4FV switches from a memcpy-based implementation to an
assignment of 4 floats.
Reviewed-by: Jose Fonseca <[email protected]>
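
The two implementation styles that note contrasts (a sketch, not the
exact macros in the shared header):

    #include <string.h>

    #define COPY_4FV_BY_MEMCPY(DST, SRC) \
       memcpy((DST), (SRC), sizeof(float) * 4)

    #define COPY_4FV_BY_ASSIGN(DST, SRC) \
       do {                              \
          const float *_tmp = (SRC);     \
          (DST)[0] = _tmp[0];            \
          (DST)[1] = _tmp[1];            \
          (DST)[2] = _tmp[2];            \
          (DST)[3] = _tmp[3];            \
       } while (0)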
v2: Try to patch up the scons bits.
Reviewed-by: Jose Fonseca <[email protected]>
Nothing enables the extension yet, but the values are now available.
The spec calls for it to only be exposed for GL 3.3+, which is core-only
in mesa. Instead we allow any driver to enable it, including in a compat
context for any GL version.
Signed-off-by: Ilia Mirkin <[email protected]>
Reviewed-by: Glenn Kennard <[email protected]>
We have two copies of it in the tree; I'm going to delete one.
Reviewed-by: Marek Olšák <[email protected]>
NewBufferObject took a "target" parameter, which it blindly passed to
_mesa_initialize_buffer_object(), which ignored it.
Not much point in passing it around.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
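
An excerpt of the hook after the change (reconstructed from the
description above):

    /* dd_function_table member; previously it also took "GLenum target",
     * which _mesa_initialize_buffer_object() ignored. */
    struct gl_buffer_object *(*NewBufferObject)(struct gl_context *ctx,
                                                GLuint name);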