Signed-off-by: Maarten Lankhorst <[email protected]>
Actually respect rasterizer state.
Signed-off-by: Rob Clark <[email protected]>
The GPU (at least a3xx, but I think also a2xx) can render directly to
memory, bypassing tiling. It can't do this, however, if blend, depth,
or a few other pipeline features are enabled. This direct memory mode
can be faster for some sorts of operations, such as simple blits. In
particular, it significantly speeds up XA by avoiding pulling the
entire dest pixmap into GMEM, rendering tiles, and writing it all back
out again. It should also speed up resource copy-region and blit.
Signed-off-by: Rob Clark <[email protected]>
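As a rough illustration of the constraint described above, a
hypothetical sketch (the helper and parameter names are assumptions,
not the actual freedreno code):

    /* bypass (direct) rendering is only usable when the features
     * that need the tile buffer are disabled */
    static bool
    fd_can_render_direct(bool blend_enabled, bool depth_enabled,
                         bool stencil_enabled)
    {
        return !blend_enabled && !depth_enabled && !stencil_enabled;
    }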
The adreno a3xx GPU is found in newer Snapdragon devices, such as the
Nexus 4. The a3xx is GLESv3 and OpenCL capable, although that is not
enabled yet in gallium.
Compared to a2xx, it introduces an entirely new unified shader ISA and
re-shuffles all or nearly all of the registers. The good news is that
(for the most part) the registers are more orthogonal, no longer
combining unrelated state in a single register, and that there is a
lot more flexibility, so we don't need to patch and re-emit the shader
like we did on a2xx.
The shader compiler is currently quite dumb; there would be a lot of
room for improvement with an optimizing pass. Despite that, the a320
in my Nexus 4 seems to be ~2-3x faster than the a220 in my HP
TouchPad.
Signed-off-by: Rob Clark <[email protected]>
Split the parts that are specific to adreno a2xx series GPUs from the
parts that will be shared with a3xx, so that a3xx support can be added
more cleanly.
Signed-off-by: Rob Clark <[email protected]>
We use 128-bit vector interleave for untwiddling in the blend code
(with 256-bit vectors). llvm generates terrible code for this for some
reason, so instead of generating a shuffle for two 128-bit vectors,
generate an extract/insert shuffle (it only seems to matter that we're
not using 128-bit-wide vectors for the shuffle). This decreases the
instruction count of the blend code generated for an rgba8 render
target without blending from 169 to 113 with llvm 3.1, and from 136 to
114 with llvm 3.2/3.3, and I got a ~8% (llvm 3.1) and ~5% (3.2/3.3)
performance improvement in gears.
(The generated code is still not terribly good, as we could actually
avoid the interleaving completely, but llvm can't know this.)
Reviewed-by: Jose Fonseca <[email protected]>
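To make the shuffle shapes concrete, here is a minimal sketch with the
LLVM-C API (not the actual gallivm helpers) that interleaves the low
halves of two 8 x i32 vectors by extracting the 128-bit halves first
rather than shuffling across the full 256-bit inputs:

    #include <llvm-c/Core.h>

    static LLVMValueRef
    interleave2_half_lo(LLVMBuilderRef builder, LLVMValueRef a, LLVMValueRef b)
    {
       LLVMTypeRef i32 = LLVMInt32Type();
       LLVMValueRef half_mask[4], inter_mask[8];
       LLVMValueRef a_lo, b_lo;
       unsigned i;

       /* mask <0,1,2,3> extracts the low 128-bit half of an input */
       for (i = 0; i < 4; i++)
          half_mask[i] = LLVMConstInt(i32, i, 0);
       a_lo = LLVMBuildShuffleVector(builder, a, LLVMGetUndef(LLVMTypeOf(a)),
                                     LLVMConstVector(half_mask, 4), "a_lo");
       b_lo = LLVMBuildShuffleVector(builder, b, LLVMGetUndef(LLVMTypeOf(b)),
                                     LLVMConstVector(half_mask, 4), "b_lo");

       /* mask <0,4,1,5,2,6,3,7> interleaves the two 128-bit halves */
       for (i = 0; i < 4; i++) {
          inter_mask[2 * i + 0] = LLVMConstInt(i32, i, 0);
          inter_mask[2 * i + 1] = LLVMConstInt(i32, 4 + i, 0);
       }
       return LLVMBuildShuffleVector(builder, a_lo, b_lo,
                                     LLVMConstVector(inter_mask, 8), "lo_ilv");
    }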
We flush pending rendering before running CopySubBuffer, which
ensures that the right bits get to the screen.
NOTE: This is a candidate for stable release branches.
Reviewed-by: Brian Paul <[email protected]>
The coordinates need to be inverted between glX and gallium.
NOTE: This is a candidate for stable release branches.
Reviewed-by: Brian Paul <[email protected]>
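A minimal sketch of the conversion (names are assumptions, not the
actual code): GLX measures y from the top of the drawable while
gallium measures from the bottom, so the rectangle origin has to be
flipped.

    static int
    glx_to_gallium_y(int glx_y, int rect_height, int drawable_height)
    {
       return drawable_height - glx_y - rect_height;
    }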
These functions must clear all bound layers, not just the first.
Reviewed-by: Jose Fonseca <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Believe it or not, these two are actually the first two functions
that really belong in this file nowadays.
Reviewed-by: Brian Paul <[email protected]>
Mostly just make sure the layer parameter gets passed through to the
right places (and gets clamped, which can be done at setup time), fix
up clears to clear all layers, and disable the opaque optimization.
Luckily we don't need to touch the jitted code.
(Clears invoked via pipe's clear_render_target method will not work,
however, since the pipe_util_clear function used for it doesn't handle
clearing multiple layers yet.)
v2: per Brian's suggestion, prettify var initialization and add some
comments; add an assertion for impossible layer specification for a
surface.
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Jose Fonseca <[email protected]>
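A hedged sketch of the setup-time clamp using gallium's
util_max_layer() helper (the function name and surrounding code are
assumptions, not the actual llvmpipe code):

    #include "util/u_inlines.h"  /* util_max_layer() */
    #include "util/u_math.h"     /* MIN2 */

    static unsigned
    setup_clamped_layer(const struct pipe_surface *surf)
    {
       unsigned max_layer = util_max_layer(surf->texture, surf->u.tex.level);
       return MIN2(surf->u.tex.first_layer, max_layer);
    }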
Transfers always use z/depth for layers no matter whether it's a 1d
or 2d array texture; we don't follow OpenGL's craziness there. Luckily
this appears to be only a doc bug, with everyone doing the right thing
already.
While here, also document the z/depth parameter for cube map arrays.
v2: fix typo spotted by Eric Anholt
Reviewed-by: Jose Fonseca <[email protected]>
We returned 0 instead of 1 for the number of layers when the array
texture is single-layered. This fixes it on GEN7+.
Checking if array_size is greater than 1 is not enough for single-layered
array textures.
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
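A hedged sketch of the more robust check (the helper name is an
assumption): the resource target, not array_size, says whether a
texture is an array.

    #include <stdbool.h>
    #include "pipe/p_state.h"

    static bool
    resource_is_array(const struct pipe_resource *res)
    {
       return res->target == PIPE_TEXTURE_1D_ARRAY ||
              res->target == PIPE_TEXTURE_2D_ARRAY ||
              res->target == PIPE_TEXTURE_CUBE_ARRAY;
    }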
Moved draw_arrays() to st_draw_feedback.c and removed draw_arrays_instanced().
draw_arrays() was used by nobody else. Now there's just one "draw" entrypoint
into the draw module.
Signed-off-by: Brian Paul <[email protected]>
This change came from the discovery that the STATIC_ASSERT meant to
check the number of register file strings didn't actually work.
Similar changes could be made for the other string arrays in
tgsi_strings.c.
Reviewed-by: Jose Fonseca <[email protected]>
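A minimal sketch of the intended pattern, assuming Mesa's Elements
macro and the TGSI_FILE_COUNT enum value (the table is abbreviated,
not the full tgsi_strings.c array):

    static const char *tgsi_file_names[] = {
       "NULL",
       "CONST",
       "IN",
       "OUT",
       /* ...one string per TGSI_FILE_x enum... */
    };

    void
    tgsi_strings_check(void)
    {
       STATIC_ASSERT(Elements(tgsi_file_names) == TGSI_FILE_COUNT);
    }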
Signed-off-by: Chia-I Wu <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
It was supported but not advertised. Also remove TODO tag for
PIPE_CAP_MIN_MAP_BUFFER_ALIGNMENT, as it is not a TODO.
They were already supported, just being rejected in the TGSI translator.
The slab allocator is a perfect fit for transfers. This improved
OpenArena performance by 1% across several casual runs.
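A hedged sketch of gallium's u_slab usage for transfer objects (the
stand-in struct and function are assumptions, not the actual ilo
code):

    #include "pipe/p_state.h"
    #include "util/u_slab.h"

    struct my_transfer {
       struct pipe_transfer base;
    };

    static void
    demo_transfer_pool(void)
    {
       struct util_slab_mempool pool;
       struct my_transfer *xfer;

       util_slab_create(&pool, sizeof(struct my_transfer),
                        64, UTIL_SLAB_SINGLETHREADED);

       xfer = util_slab_alloc(&pool);  /* no malloc per transfer */
       /* ...fill in xfer->base and use it... */
       util_slab_free(&pool, xfer);

       util_slab_destroy(&pool);
    }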
We need to unreference resources that we referenced.
The BOs are mapped for their entire lifetimes on the chipsets we
support, so do not forget to unmap them.
This magical line of code must have got lost at some point in the history...
Add ilo_rasterizer_sf and initialize it in create_rasterizer_state().
Add ilo_rasterizer_clip and initialize it in create_rasterizer_state().
Introduce ilo_surface_cso and initialize it in create_surface(). With the
change, we can emit SURFACE_STATE directly from the CSO and remove
emit_surf_SURFACE_STATE(). We do not deal with depth/stencil surfaces yet.
Introduce ilo_cbuf_cso and initialize it in set_constant_buffer(). As
ilo_view_surface is embedded in ilo_cbuf_cso, switch to emit_SURFACE_STATE()
for constant buffers and remove emit_cbuf_SURFACE_STATE().
Introduce ilo_view_cso and initialize it in create_sampler_view(). Add
emit_SURFACE_STATE() to GPE, which can emit SURFACE_STATE from
ilo_view_surface.
Define struct ilo_view_surface for SURFACE_STATE construction and emission.
Moving the work to create time reduces the work at emit time, and
saves time overall since create-time work is only done once.
Also fix a compiler warning in gen7_pipeline_sol.
[olv: remember pipe_alpha_state instead of pipe_depth_stencil_alpha_state
in ilo_dsa_state]
Introduce ilo_ve_cso and initialize it in create_vertex_elements_state().
This commit goes a step further by setting up mappings from HW VB to PIPE VB,
which we failed to do previously. That allows us to support instanced
rendering.
Remove hiz and dsa from the parameters. We will know whether a HiZ
buffer exists from ilo_texture once it is supported. DSA state should
not affect 3DSTATE_DEPTH_BUFFER.
Introduce ilo_blend_cso and initialize it in create_blend_state(). This saves
us from having to construct hardware blend states in draw_vbo().
Introduce ilo_sampler_cso and initialize it in create_sampler_state(). This
saves us from having to perform CPU-intensive calculations to construct
hardware sampler states in draw_vbo().
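A hedged sketch of the create-time CSO pattern (the payload size and
the translate helper are assumptions, not the actual ilo code):

    #include <stdint.h>
    #include "pipe/p_context.h"
    #include "pipe/p_state.h"
    #include "util/u_memory.h"  /* CALLOC_STRUCT() */

    struct ilo_sampler_cso {
       uint32_t payload[4];  /* pre-built SAMPLER_STATE dwords */
    };

    /* hypothetical translation hook, run once at create time */
    static void translate_sampler(const struct pipe_sampler_state *state,
                                  uint32_t *dw);

    static void *
    create_sampler_state(struct pipe_context *pipe,
                         const struct pipe_sampler_state *state)
    {
       struct ilo_sampler_cso *cso = CALLOC_STRUCT(ilo_sampler_cso);

       /* translate here so draw_vbo() only has to copy dwords */
       translate_sampler(state, cso->payload);
       return cso;
    }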
This allows us to memcpy() the state in draw_vbo(). Add ilo_init_states() and
ilo_cleanup_states() that are called when contexts are created and destroyed
respectively, and properly set the initial scissor state in ilo_init_states().
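A hedged sketch of what embedding buys (names are assumptions): with
the state stored by value in the context, draw_vbo() can snapshot it
with a plain memcpy.

    #include <string.h>
    #include "pipe/p_state.h"

    struct demo_context {
       struct pipe_scissor_state scissor;  /* embedded by value */
    };

    static void
    snapshot_scissor(const struct demo_context *ctx,
                     struct pipe_scissor_state *out)
    {
       memcpy(out, &ctx->scissor, sizeof(*out));
    }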
Introduce ilo_viewport_cso and initialize it in set_viewport_states(). This
saves us from having to perform CPU-intensive calculations to construct
hardware viewport states in draw_vbo().
Define and use
struct ilo_sampler_state;
struct ilo_view_state;
struct ilo_cbuf_state;
struct ilo_resource_state;
struct ilo_global_binding;
in ilo_context.
Define and use
struct ilo_dsa_state;
struct ilo_blend_state;
struct ilo_fb_state;
in ilo_context.
Define and use
struct ilo_rasterizer_state;
in ilo_context.
Define and use
struct ilo_viewport_state;
struct ilo_scissor_state;
in ilo_context.
Define and use
struct ilo_so_state;
in ilo_context.
Define and use
struct ilo_vb_state;
struct ilo_ve_state;
struct ilo_ib_state;
in ilo_context.
Also report if a shader writes the layer semantic.
Reviewed-by: Brian Paul <[email protected]>
These should just work, and are required by d3d10. Too-large
resources will get thrown out separately anyway.
Reviewed-by: Brian Paul <[email protected]>
RADEON_GEM_WAIT_IDLE is declared DRM_IOW, but mesa uses it with
drmCommandWriteRead instead of drmCommandWrite, which leads to the
ioctl being unmatched and returning an error on at least OpenBSD.
The problem was originally noticed in libdrm by Mark Kettenis; Dave
Airlie pointed out that mesa has the same issue.
Signed-off-by: Jonathan Gray <[email protected]>
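A sketch of the corrected call (the bo/fd plumbing is assumed; the
ioctl and libdrm functions are real): because the ioctl is declared
DRM_IOW, it must go through drmCommandWrite.

    #include <errno.h>
    #include <stdint.h>
    #include <string.h>
    #include <xf86drm.h>
    #include <drm/radeon_drm.h>

    static void
    radeon_bo_wait_idle(int fd, uint32_t handle)
    {
       struct drm_radeon_gem_wait_idle args;

       memset(&args, 0, sizeof(args));
       args.handle = handle;

       /* write-only command matching the DRM_IOW declaration */
       while (drmCommandWrite(fd, DRM_RADEON_GEM_WAIT_IDLE,
                              &args, sizeof(args)) == -EBUSY)
          ;
    }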
Signed-off-by: Brian Paul <[email protected]>
The main change is to use MCJIT rather than the old JIT, which will
never be supported for System z. The endianness part follows existing
examples, since the patch was tested on a glibc system.
Signed-off-by: Brian Paul <[email protected]>