| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
| |
v2: added fprintf to r600_get_llvm_processor_name
Reviewed-by: Michel Dänzer <[email protected]>
Reviewed-by: Tom Stellard <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
radeonsi now reports PIPE_VIDEO_CAP_SUPPORTS_PROGRESSIVE = true if UVD support
isn't available. It's what all the other drivers do.
Also, some #include directives were missing in radeon_uvd.h.
Reviewed-by: Michel Dänzer <[email protected]>
Reviewed-by: Tom Stellard <[email protected]>
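For illustration, a minimal sketch of how such a cap query might look; the function and the UVD check below are assumptions, not the actual radeonsi code.
  /* Sketch only: report the progressive cap when no UVD decoder exists. */
  static int get_video_param(struct pipe_screen *screen,
                             enum pipe_video_profile profile,
                             enum pipe_video_entrypoint entrypoint,
                             enum pipe_video_cap param)
  {
     bool has_uvd = false;   /* assumed: UVD support not available */

     switch (param) {
     case PIPE_VIDEO_CAP_SUPPORTS_PROGRESSIVE:
        /* No hardware decoder, so behave like the other drivers. */
        return has_uvd ? 0 : 1;
     default:
        return 0;
     }
  }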
|
|
|
|
|
| |
Reviewed-by: Michel Dänzer <[email protected]>
Reviewed-by: Tom Stellard <[email protected]>
|
|
|
|
|
|
|
| |
This enables more queries for the Gallium HUD with radeonsi.
Reviewed-by: Michel Dänzer <[email protected]>
Reviewed-by: Tom Stellard <[email protected]>
|
|
|
|
|
| |
Reviewed-by: Michel Dänzer <[email protected]>
Reviewed-by: Tom Stellard <[email protected]>
|
|
|
|
|
|
|
| |
To follow the unwritten convention of r600g and radeonsi.
Reviewed-by: Michel Dänzer <[email protected]>
Reviewed-by: Tom Stellard <[email protected]>
|
|
|
|
|
| |
Reviewed-by: Michel Dänzer <[email protected]>
Reviewed-by: Tom Stellard <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
for (;;) {
} while ();
I was surprised to see such a statement.
Reviewed-by: Michel Dänzer <[email protected]>
Reviewed-by: Tom Stellard <[email protected]>
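For context, once the while gets a condition this construct is two unrelated statements: a plain for loop followed by a while loop whose body is just the empty statement. A small illustration (not the driver code):
  for (;;) {
     break;        /* without a break, this for loop never terminates */
  } while (0);     /* a second, separate while statement with an empty body */
A do { } while (condition); loop is presumably what was intended.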
|
|
|
|
|
| |
Reviewed-by: Michel Dänzer <[email protected]>
Reviewed-by: Tom Stellard <[email protected]>
|
|
|
|
|
| |
Reviewed-by: Michel Dänzer <[email protected]>
Reviewed-by: Tom Stellard <[email protected]>
|
|
|
|
|
| |
Reviewed-by: Michel Dänzer <[email protected]>
Reviewed-by: Tom Stellard <[email protected]>
|
| |
|
|
|
|
|
|
| |
Became apparent with the C11 thread changes. Unfortunately I didn't have
all the dependencies to build the driver, and only noticed this issue on
the build server.
|
|
|
|
|
|
|
| |
Also set the unsynchronized flag if the whole resource was discarded
to avoid doing buffer-busy checks again.
Reviewed-by: Michel Dänzer <[email protected]>
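Roughly, the fast path described above looks like this (a sketch with assumed names, not the exact r600g/radeonsi code):
  /* Sketch: the whole resource is discarded, so reallocate the storage and
   * map it without waiting on the GPU. Names below are assumptions. */
  if (usage & PIPE_TRANSFER_DISCARD_WHOLE_RESOURCE) {
     reallocate_buffer_storage(rbuffer);        /* assumed helper: fresh BO */
     usage |= PIPE_TRANSFER_UNSYNCHRONIZED;     /* skip buffer-busy checks  */
  }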
|
|
|
|
|
|
|
|
|
|
|
| |
The radeonsi code was not cleaning up either of these items, leading to
leaked memory.
v2: Move cleanup to r600_common_context_cleanup instead of duplicating
the logic for SI
CC: "10.0" <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
|
|
|
|
| |
Reviewed-by: Michel Dänzer <[email protected]>
|
|
|
|
|
|
| |
Reviewed-by: Tom Stellard <[email protected]>
CC: "10.0" <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously we were creating a new LLVMContext every time that we called
radeon_llvm_parse_bitcode, which caused us to leak the context every time
that we compiled a CL program.
Sadly, we can't dispose of the LLVMContext at the point where it was being
created, because evergreen_launch_grid (and possibly the SI equivalent)
assumed that the context used to compile the kernels was still available.
Now, we'll create a new LLVMContext when creating EG/SI compute state, store
it there, and pass it to all of the places that need it.
The LLVM Context gets destroyed when we delete the EG/SI compute state.
Reviewed-by: Tom Stellard <[email protected]>
CC: "10.0" <[email protected]>
|
| |
|
|
|
|
|
| |
This should make the differences and similarities between color and
depth buffer handling more clear.
|
|
|
|
|
|
|
|
| |
This adds 2 optimizations for radeonsi:
- handling of DISCARD_RANGE
- mapping an uninitialized buffer range is automatically UNSYNCHRONIZED
Reviewed-by: Michel Dänzer <[email protected]>
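Roughly, the two optimizations above fit into the buffer mapping path like this (a sketch; the range tracking and helper names are assumptions):
  /* Sketch of the two cases listed above; names are assumptions. */
  if (!(usage & PIPE_TRANSFER_UNSYNCHRONIZED)) {
     if (!util_ranges_intersect(&rbuffer->valid_buffer_range,
                                box->x, box->x + box->width)) {
        /* Nothing has been written to this range yet: map without syncing. */
        usage |= PIPE_TRANSFER_UNSYNCHRONIZED;
     } else if ((usage & PIPE_TRANSFER_DISCARD_RANGE) &&
                buffer_is_busy(rbuffer)) {            /* assumed check */
        /* The range is discarded anyway: write into a staging buffer and
         * copy it over later instead of stalling on the GPU. */
        use_staging = true;
     }
  }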
|
|
|
|
|
|
| |
This will be used by common code in the next commit.
Reviewed-by: Michel Dänzer <[email protected]>
|
|
|
|
| |
Reviewed-by: Michel Dänzer <[email protected]>
|
|
|
|
| |
Reviewed-by: Michel Dänzer <[email protected]>
|
|
|
|
|
|
|
|
| |
Reviewed-by: Michel Dänzer <[email protected]>
Reviewed-by: Christoph Brill <[email protected]>
v2: Renamed r600_buffer.c to r600_buffer_common.c. The stupid build system
doesn't allow 2 files of the same name in different directories.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Since we assume that all buffers allocated by the DDX are scanout, a new
flag that says "this is not scanout" has to be added to support non-scanout
buffers while maintaining backward compatibility.
This fixes bad rendering on Wayland.
The flag is defined as:
#define RADEON_TILING_R600_NO_SCANOUT RADEON_TILING_SWAP_16BIT
AFAIK, RADEON_TILING_SWAP_16BIT is not used on SI.
Reviewed-by: Michel Dänzer <[email protected]>
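For illustration, a consumer of the new flag might look like this (a sketch; only the #define is taken from the message above, and the libdrm radeon tiling defines are assumed to be in scope):
  #include <stdbool.h>
  #include <stdint.h>

  #define RADEON_TILING_R600_NO_SCANOUT RADEON_TILING_SWAP_16BIT

  static bool buffer_is_scanout(uint32_t tiling_flags)
  {
     /* Buffers from the DDX default to scanout unless the new flag is set,
      * which keeps older DDX versions working unchanged. */
     return !(tiling_flags & RADEON_TILING_R600_NO_SCANOUT);
  }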
|
| |
|
|
|
|
|
| |
Reviewed-by: Michel Dänzer <[email protected]>
Signed-off-by: Emil Velikov <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
| |
We need to do this until function calls are supported.
v2:
- Fix loop conditional
https://bugs.freedesktop.org/show_bug.cgi?id=64225
CC: "10.0" <[email protected]>
|
|
|
|
|
|
| |
There are also some changes to the printfs.
Reviewed-and-Tested-by: Michel Dänzer <[email protected]>
|
|
|
|
|
|
| |
libdrm does the DRM version check and decides if 2D tiling is used.
Reviewed-and-Tested-by: Michel Dänzer <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The above two variables are unused as of the following commit:
commit 024fe6852a76f33d7e2afc5621340e387c381bb0
Author: Tom Stellard <[email protected]>
Date: Tue Apr 2 10:42:50 2013 -0700
    radeon/llvm: Use LLVM C API for compiling LLVM IR to ISA v2
which removed the only cpp file from drivers/radeon but did not remove the
CXXFLAGS. The subsequent commit reintroduced an empty LLVM_CPP_FILES.
Let's clean up and remove both.
Signed-off-by: Emil Velikov <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
* minimise flags duplication
* distinguish between VISIBILITY C and CXX flags
* set only required flags - C and/or CXX
v2: add LLVM_CFLAGS back to AM_CFLAGS (add missing backslash)
Signed-off-by: Emil Velikov <[email protected]>
|
|
|
|
|
|
|
|
| |
Prevents a memory leak.
v2: Remove null check
CC: "10.0" <[email protected]>
|
|
|
|
|
|
|
|
| |
v2: Fix indentation
Reviewed-by: Tom Stellard <[email protected]>
CC: "10.0" <[email protected]>
|
|
|
|
|
|
|
|
| |
v2: Fix indentation
Reviewed-by: Tom Stellard <[email protected]>
CC: "10.0" <[email protected]>
|
|
|
|
|
|
| |
Reviewed-by: Tom Stellard <[email protected]>
CC: "10.0" <[email protected]>
|
| |
|
|
|
|
|
| |
Without a DataLayout, a lot of optimization passes aren't run, and the ones
that do run don't work as well.
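A rough example with the LLVM C API; the layout string below is a placeholder, not the real target layout.
  #include <llvm-c/Core.h>

  /* Sketch: attach a DataLayout to the module before running the pass
   * manager so the optimizers know pointer sizes and alignments. */
  static void set_module_layout(LLVMModuleRef mod)
  {
     LLVMSetDataLayout(mod, "e-p:32:32:32-i32:32:32-n32");   /* placeholder */
  }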
|
|
|
|
|
|
| |
Add a lot of missing fields as well.
Signed-off-by: Christian König <[email protected]>
|
|
|
|
| |
Signed-off-by: Christian König <[email protected]>
|
| |
|
|
|
|
|
|
|
|
| |
Textures that likely reside in VRAM, are mapped for reading, and don't
require direct mapping should be staged into GTT to avoid bad
performance. This fixes readback performance of VDPAU surfaces.
Reviewed-by: Marek Olšák <[email protected]>
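The decision described above boils down to roughly this (a sketch; the struct field and helper name are assumptions):
  /* Sketch of the staging decision described above; names are assumed. */
  static bool stage_read_through_gtt(const struct r600_texture *rtex,
                                     unsigned usage)
  {
     return (usage & PIPE_TRANSFER_READ) &&
            !(usage & PIPE_TRANSFER_MAP_DIRECTLY) &&
            rtex->likely_in_vram;      /* assumed flag */
  }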
|
|
|
|
|
|
|
|
| |
With code dump enabled, LLVM may generate disassembly during compilation.
Show this disassembly when available and prefer it to the SI bytecode dump.
Reviewed-by: Tom Stellard <[email protected]>
Signed-off-by: Jay Cornwall <[email protected]>
|
|
|
|
|
|
|
|
| |
It doesn't work (decodes to garbage) with most videos on UVD 3.0. Worse
yet, it often results in random memory corruption or GPU hangs. Rumor
has it only the newest UVD hardware could do it anyway.
Reviewed-by: Christian König <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The DPB size calculations seem to be off; various random corruption
happens, even with advanced profile. Always assuming a minimum number of
references appears to fix it, similarly to H.264. This might overallocate
the DPB. Also clean up the SPS/PPS field setup so that it matches the VC-1
specification better.
With these changes, all advanced profile VC-1 files I could get my hands on
work fine.
Reviewed-by: Christian König <[email protected]>
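The reference clamp amounts to roughly this (the minimum value and DPB sizing below are illustrative, not what the driver uses):
  /* Sketch: never size the DPB for fewer than a fixed number of reference
   * frames; the constant below is illustrative only. */
  #define VC1_MIN_REFERENCES 2
  max_references = MAX2(max_references, VC1_MIN_REFERENCES);
  dpb_size = image_size * max_references;       /* assumed DPB sizing */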
|
|
|
|
|
|
|
|
|
|
| |
UVD can only support NV12 in the case of hardware decoding, but we
can still use all other formats for software decoding. Use the UNKNOWN
profile to signal that we're not interested in hardware decoding.
v2: use profile instead of entrypoint
Reviewed-by: Christian König <[email protected]>
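Roughly, the format check described above (a sketch; the vl helper used for the software path is an assumption about the surrounding code):
  /* Sketch of the format check described above. */
  static bool video_format_supported(struct pipe_screen *screen,
                                     enum pipe_format format,
                                     enum pipe_video_profile profile)
  {
     /* UNKNOWN profile == software decoding: any vl buffer format is fine. */
     if (profile == PIPE_VIDEO_PROFILE_UNKNOWN)
        return vl_video_buffer_is_format_supported(screen, format, profile);

     /* Hardware (UVD) decoding only ever produces NV12. */
     return format == PIPE_FORMAT_NV12;
  }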
|
|
|
|
| |
Reviewed-by: Tom Stellard <[email protected]>
|
|
|
|
|
|
|
| |
No need to keep a copy of the message in system memory anymore,
since it should now be in GART memory on newer chips.
Signed-off-by: Christian König <[email protected]>
|
|
|
|
| |
Reviewed-by: Marek Olšák <[email protected]>
|