| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
| |
* We have a symbol conflict because rtasm in
Mesa collides with rtasm in gallium.
* Since linking gallium and Mesa together
is an edge case, let's just omit the rtasm
code from Mesa, as we should be moving to
llvmpipe soon :)
|
|
|
|
|
|
|
|
|
| |
Fixes cayman and TN with htile enabled. Should fix:
https://bugs.freedesktop.org/show_bug.cgi?id=59089
https://bugs.freedesktop.org/show_bug.cgi?id=58667
Possibly others.
Signed-off-by: Alex Deucher <[email protected]>
|
|
|
|
|
|
|
|
| |
Upcoming async DMA support relies on the winsys knowing about GPU families.
Signed-off-by: Jerome Glisse <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
|
|
|
|
|
|
| |
Signed-off-by: Jerome Glisse <[email protected]>
Reviewed-by: Alex Deucher <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is not as optimized as r600g - the MSAA compression is missing,
so r300g needs a lot of bandwidth (more than r600g to do the same thing).
However, if the bandwidth is not an issue for you, you can enjoy this
unoptimized MSAA support.
The only other missing optimization for MSAA is the fast color clear.
MSAA is enabled on r500 only, because that's the only GPU family I tested.
That said, MSAA should work on r300 and r400 as well (but you must set
RADEON_MSAA=1 to allow it, then turn MSAA on in your app or set GALLIUM_MSAA=n,
where 2 <= n <= 6).
I will enable the support by default on r300-r400 once someone (other than me)
tests those chipsets with piglit.
The supported modes are 2x, 4x, 6x.
The supported MSAA formats are RGBA8, BGRA8, and RGBA16F (r500 only).
Those 3 formats are used for all GL internal formats.
Tested with piglit. (I have ported all MSAA tests to GL2.1)
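For the application-side half of the setup above, a minimal sketch of
requesting a multisampled renderbuffer through the standard GL entry points
(4 samples and GL_RGBA8 are just examples of the modes and formats listed
above; width and height are placeholders):

    #include <GL/gl.h>   /* assumes the GL 3.x / EXT_framebuffer_multisample
                          * entry points are available, e.g. via a loader */

    static GLuint make_msaa_renderbuffer(GLsizei width, GLsizei height)
    {
       GLuint rb;
       glGenRenderbuffers(1, &rb);
       glBindRenderbuffer(GL_RENDERBUFFER, rb);
       /* 4 samples, GL_RGBA8: one of the mode/format pairs listed above */
       glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8,
                                        width, height);
       return rb;
    }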
|
|
|
|
| |
Preparation for MSAA and alpha-to-coverage.
|
| |
|
|
|
|
|
|
|
|
|
|
| |
gl_constants::MaxSamples is an integer, so setting it to 1.0 is just
silly.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Chris Forbes <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
|
|
|
|
|
|
| |
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
|
|
|
|
|
|
|
|
|
| |
v2: Move {RED,RG,RGB,RGBA}_SNORM changes from the previous commit to
this commit. Based on suggestions from Ken.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The OpenGL 3.2 core profile spec says:
"The following base internal formats from table 3.11 are
color-renderable: RED, RG, RGB, and RGBA. The sized internal formats
from table 3.12 that have a color-renderable base internal format
are also color-renderable. No other formats, including compressed
internal formats, are color-renderable."
The OpenGL 3.2 compatibility profile spec says (only ALPHA is added):
"The following base internal formats from table 3.16 are
color-renderable: ALPHA, RED, RG, RGB, and RGBA. The sized internal formats
from table 3.17 that have a color-renderable base internal format
are also color-renderable. No other formats, including compressed
internal formats, are color-renderable."
Table 3.12 in the core profile spec and table 3.17 in the compatibility
profile spec list SNORM formats as having a base internal format of RED,
RG, RGB, or RGBA. From this we infer that they should also be color
renderable.
The OpenGL ES 3.0 spec says:
"An internal format is color-renderable if it is one of the formats
from table 3.12 noted as color-renderable or if it is unsized format
RGBA or RGB. No other formats, including compressed internal
formats, are color-renderable."
In the OpenGL ES 3.0 spec, none of the SNORM formats have "color-
renderable" marked in table 3.12. The RGB I and UI formats also are not
color-renderable in ES3, but we'll save that change for another patch.
Both NVIDIA's closed-source driver (version 304.64) and AMD's
closed-source driver (Catalyst 12.6 on HD 3650) reject *all* SNORM
formats for renderbuffers in OpenGL 3.3 compatibility profiles.
v2: Move {RED,RG,RGB,RGBA}_SNORM changes from this commit to the
next commit. Based on suggestions from Ken.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
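To illustrate the rule described above, here is a hypothetical helper (not
Mesa's actual code; it assumes the _mesa_is_gles3() predicate) that rejects
sized SNORM formats for renderbuffers on ES 3.0 while allowing them on
desktop GL:

    static bool
    snorm_format_is_color_renderable(const struct gl_context *ctx,
                                     GLenum internalFormat)
    {
       switch (internalFormat) {
       case GL_R8_SNORM:
       case GL_RG8_SNORM:
       case GL_RGB8_SNORM:
       case GL_RGBA8_SNORM:
          /* Desktop GL: base format is RED/RG/RGB/RGBA, hence color-renderable.
           * OpenGL ES 3.0: table 3.12 does not mark SNORM formats as
           * color-renderable, so reject them there. */
          return !_mesa_is_gles3(ctx);
       default:
          return true;   /* other formats are handled elsewhere */
       }
    }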
|
|
|
|
|
|
| |
A Z32 pixel is 4 bytes, so multiply x by 4, not 2.
Note: This is a candidate for the stable branches.
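A hypothetical illustration of the addressing fix (map, stride, x and y are
placeholder names, not the actual code):

    uint8_t *row = map + y * stride;
    uint32_t *z  = (uint32_t *) (row + x * 4);   /* Z32 is 4 bytes/pixel; was x * 2 */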
|
|
|
|
|
|
| |
Fixes https://bugs.freedesktop.org/show_bug.cgi?id=58972
Note: This is a candidate for the stable branches.
|
|
|
|
|
|
| |
Bump limit from 32 to 128.
Fixes http://bugs.freedesktop.org/show_bug.cgi?id=58545
|
|
|
|
|
|
|
|
| |
Fixes piglit glx-dont-care-mask test.
Note: This is a candidate for the stable branches.
Reviewed-by: Chad Versace <[email protected]>
|
|
|
|
|
|
|
|
| |
Fixes piglit glx-dont-care-mask test.
Note: This is a candidate for the stable branches.
Reviewed-by: Chad Versace <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
| |
This code now lives in an external tree.
For the next Mesa release fetch the code from the master branch
of this LLVM repo:
http://cgit.freedesktop.org/~tstellar/llvm/
For all subsequent Mesa releases, fetch the code from the official LLVM
project:
www.llvm.org
|
|
|
|
|
|
| |
Tom Stellard:
- Backend now has same name for all LLVM versions
- Add missing LLVM_VERSION_INT definition
|
|
|
|
| |
Broken since commit 598cc1f74d7ae924e84dee801b456ab7b0b22f84
|
| |
|
| |
|
|
|
|
|
|
|
|
| |
- Skip the vertex buffer reallocation in flush and just use
the unsynchronized flag to get new memory (see the sketch below).
- Remove the cruft needed to get around the issues with the vertex buffer
reallocation in flush.
- Use pb_buffer instead of pipe_resource.
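The sketch below shows the idea behind the first point using the generic
transfer interface; the actual commit works on a pb_buffer through the
winsys, and the names here are placeholders:

    /* Append new vertex data after a flush without reallocating the buffer:
     * PIPE_TRANSFER_UNSYNCHRONIZED tells the driver not to wait for the GPU,
     * so previously flushed draws can keep reading the old range while we
     * write past it. */
    static void *
    append_vertices(struct pipe_context *pipe, struct pipe_resource *vbuf,
                    unsigned offset, unsigned size,
                    struct pipe_transfer **transfer)
    {
       struct pipe_box box;
       u_box_1d(offset, size, &box);
       return pipe->transfer_map(pipe, vbuf, 0,
                                 PIPE_TRANSFER_WRITE |
                                 PIPE_TRANSFER_UNSYNCHRONIZED,
                                 &box, transfer);
    }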
|
|
|
|
| |
Broken by e73bf3b805de78299f1a652668ba4e6eab9bac94.
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
This patch fixes intel_miptree_unmap_etc() (which decompresses ETC
textures to linear) to pay attention to map->x and map->y when writing
to the destination image. Previously these values were ignored,
causing the xoffset and yoffset parameters passed to
glCompressedTexSubImage2D() to be ignored.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
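A hedged sketch of the kind of change described (only map->x and map->y come
from the message; the other names are illustrative):

    /* When writing the decompressed texels into the destination image,
     * start at the offset recorded in the map instead of at (0, 0);
     * ignoring map->x/map->y is what dropped the xoffset/yoffset of
     * glCompressedTexSubImage2D. */
    uint8_t *dst = dst_base
                 + (map->y + y) * dst_stride   /* was: y * dst_stride */
                 + (map->x + x) * cpp;         /* was: x * cpp */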
|
|
|
|
|
| |
We used to have to jump through hoops to call glFlush at swap buffer time,
but the flush extension made that unnecessary a long time ago.
|
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
That means we can map and read multiple slices with one transfer_map call.
|
|
|
|
| |
Reviewed-by: Brian Paul <[email protected]>
|
|
|
|
| |
Reviewed-by: Brian Paul <[email protected]>
|
|
|
|
|
|
|
|
| |
- We should use a 3D transfer of size Width x 1 x NumLayers.
- We should use layer_stride instead of stride, even though they are likely
to be equal with 1D array textures. (A sketch of such a read follows below.)
Reviewed-by: Brian Paul <[email protected]>
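A sketch of what such a readback could look like with the Gallium transfer
interface (resource, level, width and num_layers are placeholders):

    struct pipe_transfer *transfer;
    struct pipe_box box;

    /* One 3D transfer of Width x 1 x NumLayers. */
    u_box_3d(0, 0, 0, width, 1, num_layers, &box);
    const uint8_t *map = pipe->transfer_map(pipe, resource, level,
                                            PIPE_TRANSFER_READ, &box, &transfer);
    for (unsigned layer = 0; layer < num_layers; layer++) {
       const uint8_t *src = map + layer * transfer->layer_stride;
       /* copy 'width' pixels of this layer out of 'src' */
    }
    pipe->transfer_unmap(pipe, transfer);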
|
|
|
|
|
|
|
| |
This uses a 3D blit to decompress the texture and then a 3D transfer
to read it.
Reviewed-by: Brian Paul <[email protected]>
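A sketch of the blit half of that, assuming a pre-created uncompressed RGBA
temporary ('tmp') the same size as the source ('src'); this is not the exact
st/mesa code:

    struct pipe_blit_info blit;
    memset(&blit, 0, sizeof(blit));
    blit.src.resource = src;
    blit.src.format   = src->format;
    blit.src.level    = level;
    u_box_3d(0, 0, 0, width, height, depth, &blit.src.box);
    blit.dst.resource = tmp;              /* uncompressed RGBA temporary */
    blit.dst.format   = tmp->format;
    blit.dst.level    = 0;
    blit.dst.box      = blit.src.box;
    blit.mask         = PIPE_MASK_RGBA;
    blit.filter       = PIPE_TEX_FILTER_NEAREST;
    pipe->blit(pipe, &blit);
    /* ...then read 'tmp' back with a Width x Height x Depth 3D transfer. */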
|
|
|
|
| |
Reviewed-by: Brian Paul <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
There was a fast path based on _mesa_format_matches_format_and_type
for GetTexImage, but it never worked, because the Mesa format we were testing
there was always compressed. Further testing showed that the fast path
had been completely broken.
In this commit, the somewhat limited helper util_create_rgba_texture is
no longer used; instead, custom code for the texture creation is added,
which tries to find the best matching RGBA8 format, so that we *always* hit
the fast path if the read format is a variant of RGBA8 and is supported
by the driver.
Reviewed-by: Brian Paul <[email protected]>
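A sketch of the format-selection idea (not the actual st/mesa code):

    static enum pipe_format
    choose_rgba8_readback_format(struct pipe_screen *screen)
    {
       static const enum pipe_format candidates[] = {
          PIPE_FORMAT_R8G8B8A8_UNORM,
          PIPE_FORMAT_B8G8R8A8_UNORM,
          PIPE_FORMAT_A8R8G8B8_UNORM,
          PIPE_FORMAT_A8B8G8R8_UNORM,
       };
       /* Pick the first RGBA8 variant the driver can render to, so the
        * later readback can match the user's format/type directly. */
       for (unsigned i = 0; i < sizeof(candidates) / sizeof(candidates[0]); i++) {
          if (screen->is_format_supported(screen, candidates[i],
                                          PIPE_TEXTURE_2D, 0,
                                          PIPE_BIND_RENDER_TARGET))
             return candidates[i];
       }
       return PIPE_FORMAT_NONE;
    }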
|
|
|
|
|
|
| |
I'll deal with 2D arrays later.
NOTE: This is a candidate for the stable branches.
|
|
|
|
|
|
| |
Scaling and flipping in the Z direction aren't allowed yet.
Reviewed-by: Brian Paul <[email protected]>
|
|
|
|
| |
Reviewed-by: Brian Paul <[email protected]>
|
|
|
|
| |
Reviewed-by: Brian Paul <[email protected]>
|
|
|
|
| |
Reviewed-by: Brian Paul <[email protected]>
|
|
|
|
| |
Reviewed-by: Brian Paul <[email protected]>
|
|
|
|
| |
Reviewed-by: Brian Paul <[email protected]>
|
|
|
|
| |
Reviewed-by: Brian Paul <[email protected]>
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Usage with pipe_context:
pipe->flush(pipe, NULL, PIPE_FLUSH_END_OF_FRAME);
Usage with st_context_iface:
st->flush(st, ST_FLUSH_END_OF_FRAME, NULL);
The flag is only a hint for drivers. Radeon will use it for buffer eviction
heuristics in the kernel (e.g. for queries like how many frames have passed
since a buffer was used).
The flag is currently only generated by st/dri on SwapBuffers.
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Stéphane Marchesin <[email protected]>
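For the driver side, a sketch of how a flush hook might consume the hint
(example_context and the winsys call are hypothetical):

    static void
    example_flush(struct pipe_context *pipe, struct pipe_fence_handle **fence,
                  unsigned flags)
    {
       struct example_context *ctx = (struct example_context *) pipe;

       if (flags & PIPE_FLUSH_END_OF_FRAME) {
          /* Purely a hint: e.g. advance a per-context frame counter that the
           * kernel can use for "frames since last use" eviction heuristics. */
          ctx->ws->end_of_frame(ctx->ws);
       }
       /* ...submit the command stream and fill in 'fence' as usual... */
    }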
|
| |
|
|
|
|
| |
NOTE: This is a candidate for the stable branches.
|