| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
| |
This is an internal project that Catalyst uses, and now the open source
driver will use it too.
v2: squashed these commits in:
- winsys/amdgpu: fix warnings in addrlib
- winsys/amdgpu: set PIPE_CONFIG and NUM_BANKS in tiling_flags
|
|
|
|
| |
Cc: 10.5 10.4 <[email protected]>
|
|
|
|
| |
Cc: 10.5 10.4 <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This fixes the GL_COMPRESSED_RED_RGTC1 part of piglit's rgtc-teximage-01
test as well as the precision part of Wine's 3dc format test (fd.o bug
89156).
The Z component seems to contain a lower precision version of the
result, probably a temporary value from the decompression computation.
The Y and W components contain different data that depends on the input
values as well, but I could not make sense of them (not that I tried
very hard).
GL_COMPRESSED_SIGNED_RED_RGTC1 still seems to have precision problems in
piglit, and both formats are affected by a compiler bug if they're
sampled by the shader with a swizzle other than .xyzw. Wine uses .xxxx,
which returns random garbage.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=89156
Signed-off-by: Marek Olšák <[email protected]>
Cc: 10.5 10.4 <[email protected]>
|
|
|
|
| |
Reviewed-by: Marek Olšák <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Since all buffers allocated by the DDX have so far been assumed to be
scanout, a new flag that says "this is not scanout" has to be added to
support non-scanout buffers while maintaining backward compatibility.
This fixes bad rendering on Wayland.
The flag is defined as:
#define RADEON_TILING_R600_NO_SCANOUT RADEON_TILING_SWAP_16BIT
AFAIK, RADEON_TILING_SWAP_16BIT is not used on SI.
Reviewed-by: Michel Dänzer <[email protected]>
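A minimal sketch of how a winsys might honour this overloaded bit when
importing a DDX-allocated buffer; the helper and struct names are
hypothetical, and the numeric flag value is illustrative only (the real
one lives in the kernel's radeon_drm.h):

    #define RADEON_TILING_SWAP_16BIT        0x4   /* illustrative value only */
    #define RADEON_TILING_R600_NO_SCANOUT   RADEON_TILING_SWAP_16BIT

    struct imported_buffer {
        unsigned tiling_flags;   /* from the kernel GET_TILING ioctl */
        int is_scanout;
    };

    /* Old DDX versions never set the flag, so the default remains
     * "scanout", which keeps backward compatibility. */
    static void classify_imported_buffer(struct imported_buffer *buf)
    {
        buf->is_scanout = !(buf->tiling_flags & RADEON_TILING_R600_NO_SCANOUT);
    }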
|
|
|
|
|
|
| |
flags to enforce no tiling.
Signed-off-by: Axel Davy <[email protected]>
|
| |
|
| |
|
| |
|
|
|
|
| |
Change DST_ALPHA to ONE.
|
|
|
|
|
|
|
|
|
| |
This, along with the latest drm-fixes branch, should help with the bad
performance of MSAA. Remember: Nx MSAA can't be more than N times slower
(where N = 2, 4, or 6).
Anyway, I recommend at least 512 MB of VRAM for Full HD 6x MSAA.
NOTE: This is a candidate for the 9.1 branch.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
These are optimizations which make MSAA a lot faster.
The MSAA work is complete with this commit (except for enabling the AA
optimizations for RGBA16F, for which a patch is ready and waiting until
the kernel CS checker fix lands).
MSAA can't be made any faster as far as hw programming is concerned.
The catch is that only one process and one colorbuffer can use the
optimizations at a time. There usually is only one MSAA colorbuffer,
so it shouldn't be an issue.
Also, there is a limit on the MSAA colorbuffer resolution in terms of
megapixels. If the limit is surpassed, the AA optimizations are disabled.
The limit is:
- 1 Mpix on low-end and some mid-level chipsets (1024x768 and 1280x720)
- 2 Mpix on some mid-level chipsets (1600x1200 and 1920x1080)
- 3 or 4 Mpix on high-end chipsets (2048x1536 or 2560x1600, respectively)
It corresponds to the number of raster pipes (= GB pipes) available;
each pipe can hold 1 Mpix of AA compression data.
If it's enabled, the driver prints to stdout:
radeon: Acquired access to AA optimizations.
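A rough sketch of the resolution check described above, using a
hypothetical helper; only the 1 Mpix per GB pipe figure comes from the
commit message:

    #include <stdbool.h>

    /* Each GB (raster) pipe can hold 1 Mpix of AA compression data, so
     * the AA optimizations are only usable if the whole colorbuffer fits. */
    static bool aa_optimizations_allowed(unsigned width, unsigned height,
                                         unsigned num_gb_pipes)
    {
        unsigned long long max_pixels =
            (unsigned long long)num_gb_pipes * 1024 * 1024;

        return (unsigned long long)width * height <= max_pixels;
    }

    /* e.g. with 2 pipes, 1920x1080 fits but 2048x1536 does not. */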
|
|
|
|
|
|
|
|
|
|
|
| |
but an MSAA resource is bound. This effectively makes the MSAA disable switch
not affect rasterization, but it still affects the alpha-to-one and
alpha-to-coverage states. This hardware just lacks a proper MSAA disable
switch.
This fixes graphics corruption in sauerbraten.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=59194
|
|
|
|
| |
Set: RADEON_DEBUG=msaa
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is not as optimized as r600g - the MSAA compression is missing,
so r300g needs a lot of bandwidth (more than r600g to do the same thing).
However, if the bandwidth is not an issue for you, you can enjoy this
unoptimized MSAA support.
The only other missing optimization for MSAA is the fast color clear.
MSAA is enabled on r500 only, because that's the only GPU family I tested.
That said, MSAA should work on r300 and r400 as well (but you must set
RADEON_MSAA=1 to allow it, then turn MSAA on in your app or set GALLIUM_MSAA=n,
n >= 2, n <= 6)
I will enable the support by default on r300-r400 once someone (other than me)
tests those chipsets with piglit.
The supported modes are 2x, 4x, 6x.
The supported MSAA formats are RGBA8, BGRA8, and RGBA16F (r500 only).
Those 3 formats are used for all GL internal formats.
Tested with piglit. (I have ported all MSAA tests to GL2.1)
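A rough sketch of how the RADEON_MSAA opt-in described above might be
read with Gallium's debug-option helper; the function name and its
placement in r300g are assumptions, not the actual code:

    #include "util/u_debug.h"   /* debug_get_bool_option() */

    /* MSAA stays enabled on r500; on r300/r400 it is off unless the user
     * explicitly sets RADEON_MSAA=1 (assumed helper, not the actual r300g
     * code). */
    static boolean msaa_allowed(boolean is_r500)
    {
        if (is_r500)
            return TRUE;

        return debug_get_bool_option("RADEON_MSAA", FALSE);
    }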
|
| |
|
|
|
|
|
|
| |
Not really used by anybody now.
Reviewed-by: Brian Paul <[email protected]>
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
"get_transfer + transfer_map" becomes "transfer_map".
"transfer_unmap + transfer_destroy" becomes "transfer_unmap".
transfer_map must create and return the transfer object and transfer_unmap
must destroy it.
transfer_map is successful if the returned buffer pointer is not NULL.
If transfer_map fails, the pointer to the transfer object remains unchanged
(i.e. doesn't have to be NULL).
Acked-by: Brian Paul <[email protected]>
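A caller-side sketch of what the merge means in practice; the parameter
lists follow the Gallium interface of that era and should be checked
against p_context.h, and the wrapper function itself is hypothetical:

    #include "pipe/p_context.h"
    #include "pipe/p_state.h"

    /* Old API:
     *   transfer = pipe->get_transfer(pipe, res, level, usage, box);
     *   map      = pipe->transfer_map(pipe, transfer);
     *   ...
     *   pipe->transfer_unmap(pipe, transfer);
     *   pipe->transfer_destroy(pipe, transfer);
     *
     * New API: one call creates the transfer object and maps it, and
     * transfer_unmap also destroys it. */
    static void *map_resource(struct pipe_context *pipe,
                              struct pipe_resource *res,
                              unsigned level, unsigned usage,
                              const struct pipe_box *box,
                              struct pipe_transfer **transfer)
    {
        void *map = pipe->transfer_map(pipe, res, level, usage, box, transfer);

        /* NULL means failure; *transfer is then left unchanged. */
        return map;
    }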
|
|
|
|
| |
NOTE: This is a candidate for the stable branches.
|
|
|
|
|
|
| |
Commit 11f056a3f0b87e86267efa8b5ac9d36a343c9dc1 broke the r300g build. Fix it
up, and reinstate some code which isn't needed by r600g and radeonsi but is
needed by r300g.
|
| |
|
| |
|
| |
|
|
|
|
| |
Just to be safe.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Tiled surfaces have all kinds of alignment constraints that need to
be met. Instead of having all this code duplicated between the DDX and
Mesa, use common code in libdrm_radeon; this also ensures that both the
DDX and Mesa compute those alignments in the same way.
v2 fix evergreen
v3 fix compressed textures and work around a cube texture issue by
disabling 2D array mode for cubemaps (need to check if r7xx and
newer are also affected by the issue)
v4 fix texture arrays
v5 fix evergreen and newer, split the surface value computation from
mipmap tree generation so that we can get the values directly from the
ddx
v6 final fix to the evergreen tile split value
v7 fix the mipmap offset to avoid using a random value, use color view
and depth view to address different layers as the hardware does some
magic rotation depending on the layer
v8 fix COLOR_VIEW on r6xx for linear array mode, use COLOR_VIEW on
evergreen, align bytes per pixel to a multiple of a dword
v9 fix handling of stencil on evergreen, half fix for compressed
textures
v10 fix evergreen compressed textures, proper support for the stencil
tile split. Fix a stencil issue when the array mode was cleared by
the kernel, always program the stencil bo. On evergreen the depth
buffer bo needs to be big enough to hold depth buffer + stencil
buffer, as even with stencil disabled things get written there.
v11 rebase on top of mesa, fix a pitch issue with 1D surfaces on evergreen,
the old ddx overestimates those. Fix the linear case when pitch*height < 64.
Fix r300g.
v12 Fix the linear case when pitch*height < 64 for the old path, adapt to
the libdrm API change
v13 add a libdrm check
Signed-off-by: Jerome Glisse <[email protected]>
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Changing pipe_resource was wrong, because it can be used by other contexts
at the same time. This fixes the last possible race condition in r300g
that I know of.
This also fixes blitting NPOT compressed textures. Random pixels sometimes
appeared at the right-hand edge of the texture.
Finally, this removes r300_texture_desc::stride_in_pixels. It makes little
sense with sampler views and surfaces being able to override width0, height0,
and the format entirely.
|
|
|
|
|
|
|
|
| |
This partially reverts commit 363ff844753c46ac9c13866627e096b091ea81f8.
It caused severe performance drops in Nexuiz. Reported by Phoronix.
Tested by me on r300g and by IRC people on r600g.
|
| |
|
|
|
|
|
|
|
|
|
| |
The DDX may allocate a buffer that is too small.
Instead of failing, let's pretend everything's alright.
Such bugs should be fixed in the DDX, of course.
NOTE: This is a candidate for the stable branches.
|
|
|
|
|
|
|
|
|
|
| |
These are never USCALED; they are always UINT in reality.
Taken from some work by Christoph Bumiller.
v2: fixup formatting of table + tabs
Signed-off-by: Dave Airlie <[email protected]>
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The drivers don't need to care about the domains. All they need to set
are the bind and usage flags. This simplifies the winsys too.
This also fixes on r600g:
- fbo-depth-GL_DEPTH_COMPONENT32F-copypixels
- fbo-depth-GL_DEPTH_COMPONENT16-copypixels
- fbo-depth-GL_DEPTH_COMPONENT24-copypixels
- fbo-depth-GL_DEPTH_COMPONENT32-copypixels
- fbo-depth-GL_DEPTH24_STENCIL8-copypixels
I can't explain it.
Reviewed-by: Alex Deucher <[email protected]>
|
|
|
|
|
|
| |
It's part of pb_buffer already.
Reviewed-by: Alex Deucher <[email protected]>
|
| |
|
|
|
|
| |
Some of those have been in drivers already.
|
| |
|
|
|
|
|
|
|
|
| |
Blending and maybe even alpha-test don't work with those formats.
Only supporting RGBA, BGRA, RGBX, BGRX.
NOTE: This is a candidate for the 7.10 and 7.11 branches.
|
|
|
|
|
| |
Renaming a few files, types, and functions.
Also make the winsys independent of r300g.
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The hardware should be set according to this table:
FORMAT -> R300 COLORFORMAT
-------------------------
X16 -> UV88
X16Y16 -> ARGB8888
X32 -> ARGB8888
X32Y32 -> ARGB16161616
US_OUT_FMT must contain the real format.
I wasn't able to make B3G3R2 and L4A4 work, but those aren't important.
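The pattern in the table seems to be that the COLORFORMAT only has to
match the pixel size of the output format, while US_OUT_FMT carries the
real format. A sketch of that idea with placeholder enum names (the real
macros live in r300_reg.h) might look like:

    enum r300_colorformat {          /* placeholder names, not register macros */
        COLORFORMAT_UV88,            /* 16 bits per pixel */
        COLORFORMAT_ARGB8888,        /* 32 bits per pixel */
        COLORFORMAT_ARGB16161616,    /* 64 bits per pixel */
    };

    /* Pick a colorformat of the same size as the output format
     * (X16, X16Y16, X32, X32Y32), per the table above. */
    static enum r300_colorformat colorformat_for_size(unsigned bits_per_pixel)
    {
        switch (bits_per_pixel) {
        case 16: return COLORFORMAT_UV88;         /* X16 */
        case 32: return COLORFORMAT_ARGB8888;     /* X16Y16, X32 */
        case 64: return COLORFORMAT_ARGB16161616; /* X32Y32 */
        default: return COLORFORMAT_ARGB8888;
        }
    }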
|
| |
|
|
|
|
| |
Only st/xorg used it, and even then incorrectly with regard to pipelined transfers.
|
| |
|
|
|
|
| |
Running any older kernel is not recommended anyway.
|