Now that we have a function to initialize states, initialize dirty flags there
too.
Even with hardware contexts, since we do not pin resources, we have to re-emit
the states so that the resources are referenced (by cp->bo) and their offsets
are updated in case they are moved. This also allows us to eliminate the cp
flush in is_bo_busy().
It has been broken since 17350ea979b883662573dac136cd9efb49938210.
We already downsampled (i.e. resolved) MSAA resources to make ReadPixels
work with MSAA GLX visuals, which was enough for read-only, color-only
transfers. This commit makes write color transfers and depth-stencil
transfers work in a similar manner: it does downsampling in transfer_map
and upsampling in transfer_unmap.
Reviewed-by: Brian Paul <[email protected]>
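
A minimal sketch of that map/unmap flow; all types and helper names below
are illustrative stand-ins, not the driver's actual API:

    /* keep a single-sampled staging copy next to the MSAA resource */
    struct msaa_resource {
       void *staging_map;   /* CPU-visible pointer to the resolved copy */
    };

    enum { XFER_READ = 1 << 0, XFER_WRITE = 1 << 1 };

    /* stand-ins for the GPU resolve (downsample) and replicate (upsample) blits */
    static void downsample_to_staging(struct msaa_resource *res) { (void) res; }
    static void upsample_from_staging(struct msaa_resource *res) { (void) res; }

    static void *
    msaa_transfer_map(struct msaa_resource *res, unsigned usage)
    {
       if (usage & XFER_READ)
          downsample_to_staging(res);   /* resolve samples before reading */
       return res->staging_map;
    }

    static void
    msaa_transfer_unmap(struct msaa_resource *res, unsigned usage)
    {
       if (usage & XFER_WRITE)
          upsample_from_staging(res);   /* push edits back into the MSAA surface */
    }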
There isn't any difference between 32_FLOAT and 32_*INT in vertex fetching.
Neither of them does any format conversion.
Reviewed-by: Brian Paul <[email protected]>
We can use the fragment shader TGSI property WRITES_ALL_CBUFS.
Reviewed-by: Brian Paul <[email protected]>
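
As a rough illustration of what the property implies (the struct below is a
stand-in, not Mesa's tgsi_shader_info; the real flag comes from scanning
TGSI_PROPERTY_FS_COLOR0_WRITES_ALL_CBUFS), a driver can route fragment
output 0 to every bound color buffer when the property is set:

    #include <stdbool.h>

    struct fs_info {
       bool color0_writes_all_cbufs;   /* set from the TGSI property */
    };

    /* Pick which shader output feeds each color buffer. */
    static void
    route_fs_outputs(const struct fs_info *info, unsigned nr_cbufs,
                     unsigned output_map[])
    {
       for (unsigned i = 0; i < nr_cbufs; i++)
          output_map[i] = info->color0_writes_all_cbufs ? 0 : i;
    }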
Use the new util_fill_box helper for util_clear_render_target.
(Also fix an off-by-one map error.)
v2: handle non-zero z correctly in new helper
Reviewed-by: Jose Fonseca <[email protected]>
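
A hedged sketch of how a clear can delegate to the helper; the util_fill_box
signature is assumed from util/u_surface.h of this series and may differ in
detail, and the wrapper function here is hypothetical:

    #include "util/u_pack_color.h"   /* util_pack_color(), union util_color */
    #include "util/u_surface.h"      /* util_fill_box() */

    /* Fill a mapped width x height region of a render target with one color;
     * map/stride/layer_stride would come from a transfer_map of the surface. */
    static void
    clear_mapped_rt(ubyte *map, enum pipe_format format,
                    unsigned stride, unsigned layer_stride,
                    unsigned width, unsigned height, const float rgba[4])
    {
       union util_color uc;
       util_pack_color(rgba, format, &uc);
       util_fill_box(map, format, stride, layer_stride,
                     0, 0, 0,            /* x, y, z of the box origin */
                     width, height, 1,   /* a single layer, hence depth 1 */
                     &uc);
    }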
The motivation is to kill tiling and pitch in struct intel_bo. That requires
us to make tiling and pitch not queryable, and to pass them around as
function parameters instead.
We are moving toward making struct intel_bo alias drm_intel_bo. As a first
step, we cannot have function tables.
buf->bo_size is readily available; no need to go via buf->bo->get_size().
Merge the bodies to tex_create_bo/buf_create_bo respectively.
Signed-off-by: Maarten Lankhorst <[email protected]>
Actually respect rasterizer state.
Signed-off-by: Rob Clark <[email protected]>
The GPU (at least a3xx, but I think also a2xx) can render directly to
memory, bypassing tiling, although it can't do this if blend, depth,
and a few other features of the pipeline are enabled. This direct
memory mode can be faster for some sorts of operations, such as simple
blits. In particular, it significantly speeds up XA by avoiding pulling
the entire dest pixmap into GMEM, rendering tiles, and writing it all
back out again. This should also speed up resource copy-region and
blit.
Signed-off-by: Rob Clark <[email protected]>
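
A toy sketch of the decision this implies, with hypothetical names (not
freedreno's actual code): direct rendering is only usable when the pipeline
features that need the tile buffer are off:

    #include <stdbool.h>

    struct pipeline_state {
       bool blend_enabled;
       bool depth_enabled;
       bool stencil_enabled;   /* "...and a few other features" per the text */
    };

    static bool
    can_render_direct(const struct pipeline_state *st)
    {
       /* bypass GMEM/tiling only when nothing needs the tile buffer */
       return !st->blend_enabled && !st->depth_enabled && !st->stencil_enabled;
    }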
The adreno a3xx GPU is found in newer snapdragon devices, such as the
nexus4. The a3xx is GLESv3 and OpenCL capable, although that is not
enabled yet in gallium.
Compared to a2xx, it introduces an entirely new unified shader ISA, and
re-shuffles all or nearly all of the registers. The good news is that
(for the most part) the registers are more orthogonal, not combining
unrelated state in a single register. And that there is a lot more
flexibility, so we don't need to patch and re-emit the shader like we
did on a2xx.
The shader compiler is currently quite dumb; there would be a lot of
room for improvement with an optimizing pass. Despite that, with the
a320 in my nexus4 it seems to be ~2-3x faster compared to the a220 in my
HP touchpad.
Signed-off-by: Rob Clark <[email protected]>
Split the parts that are specific to adreno a2xx series GPUs from the
parts that will be in common with a3xx, so that a3xx support can be
added more cleanly.
Signed-off-by: Rob Clark <[email protected]>
Believe it or not, these two are actually the first two functions that
really belong in this file nowadays.
Reviewed-by: Brian Paul <[email protected]>
Mostly just make sure the layer parameter gets passed through to the right
places (and gets clamped, which can be done at setup time), fix up clears to
clear all layers, and disable the opaque optimization. Luckily we don't need
to touch the jitted code.
(Clears invoked via pipe's clear_render_target method will not work, however,
since the pipe_util_clear function used for it doesn't handle clearing
multiple layers yet.)
v2: per Brian's suggestion, prettify var initialization, add some comments,
and add an assertion for impossible layer specifications for a surface.
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Jose Fonseca <[email protected]>
We returned 0 instead of 1 for the number of layers when the array texture
is single-layered. This fixes it on GEN7+.
This change came from the discovery that the STATIC_ASSERT meant to check
the number of register file strings didn't actually work.
Similar changes could be made for the other string arrays in tgsi_string.c
Reviewed-by: Jose Fonseca <[email protected]>
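
The pattern in question, sketched with local stand-ins for the TGSI enum and
Mesa's macros: make the build break whenever a string table and the enum it
mirrors get out of sync:

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))
    /* a negative array size makes this fail to compile when cond is false */
    #define STATIC_ASSERT(cond) do { (void) sizeof(char [1 - 2*!(cond)]); } while (0)

    enum reg_file { FILE_NULL, FILE_CONSTANT, FILE_INPUT, FILE_OUTPUT, FILE_COUNT };

    /* unsized on purpose: the length comes from the initializer list */
    static const char *reg_file_names[] = {
       "NULL", "CONST", "IN", "OUT",
    };

    static void
    check_string_tables(void)
    {
       STATIC_ASSERT(ARRAY_SIZE(reg_file_names) == FILE_COUNT);
    }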
It was supported but not advertised. Also remove the TODO tag for
PIPE_CAP_MIN_MAP_BUFFER_ALIGNMENT, as it is not a TODO.
They were already supported, just being rejected in the TGSI translator.
A slab allocator is a perfect fit for transfers. This improved OpenArena
performance by 1% across several casual runs.
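
A sketch of the shape this takes, assuming Mesa's util_slab API of the time
(util/u_slab.h); the transfer struct and function names are placeholders:

    #include "util/u_slab.h"

    struct my_transfer { int placeholder; };   /* would be the driver's transfer */

    static struct util_slab_mempool transfer_pool;

    static void
    transfers_init(void)
    {
       /* equally-sized objects, allocated and freed at high frequency:
        * exactly what a slab pool is good at */
       util_slab_create(&transfer_pool, sizeof(struct my_transfer),
                        64, UTIL_SLAB_SINGLETHREADED);
    }

    static struct my_transfer *
    transfer_get(void)
    {
       return util_slab_alloc(&transfer_pool);
    }

    static void
    transfer_put(struct my_transfer *xfer)
    {
       util_slab_free(&transfer_pool, xfer);
    }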
We need to unreference resources that we referenced.
The BOs are mapped for their entire lifetimes on the chipsets we support,
so do not forget to unmap them.
This magical line of code must have got lost at some point in the history...
Add ilo_rasterizer_sf and initialize it in create_rasterizer_state().
Add ilo_rasterizer_clip and initialize it in create_rasterizer_state().
Introduce ilo_surface_cso and initialize it in create_surface(). With this
change, we can emit SURFACE_STATE directly from the CSO and remove
emit_surf_SURFACE_STATE(). We do not deal with depth/stencil surfaces yet.
Introduce ilo_cbuf_cso and initialize it in set_constant_buffer(). As
ilo_view_surface is embedded in ilo_cbuf_cso, switch to emit_SURFACE_STATE()
for constant buffers and remove emit_cbuf_SURFACE_STATE().
Introduce ilo_view_cso and initialize it in create_sampler_view(). Add
emit_SURFACE_STATE() to GPE, which can emit SURFACE_STATE from
ilo_view_surface.
Define struct ilo_view_surface for SURFACE_STATE construction and emission.
Moving the work to create time reduces the work at emit time. This saves
time overall, as the create-time work is only done once.
Also fix a compiler warning in gen7_pipeline_sol.
[olv: remember pipe_alpha_state instead of pipe_depth_stencil_alpha_state in
ilo_dsa_state]
Introduce ilo_ve_cso and initialize it in create_vertex_elements_state().
This commit goes a step further by setting up mappings from HW VB to PIPE VB,
which we failed to do previously. That allows us to support instanced
rendering.
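
Roughly the idea, with made-up names (the real struct lives in the ilo
driver): record per element which gallium vertex buffer feeds each hardware
vertex buffer slot, along with the instance divisor needed for instanced
rendering:

    #define MAX_ELEMENTS 32

    struct ve_cso {
       unsigned count;
       unsigned vb_mapping[MAX_ELEMENTS];        /* HW VB slot -> pipe VB index */
       unsigned instance_divisors[MAX_ELEMENTS]; /* 0 = per-vertex data */
    };

    struct pipe_ve_like {                        /* stand-in for pipe_vertex_element */
       unsigned vertex_buffer_index;
       unsigned instance_divisor;
    };

    static void
    ve_cso_init(struct ve_cso *cso, unsigned count, const struct pipe_ve_like *ve)
    {
       cso->count = count;
       for (unsigned i = 0; i < count; i++) {
          cso->vb_mapping[i] = ve[i].vertex_buffer_index;
          cso->instance_divisors[i] = ve[i].instance_divisor;
       }
    }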
Remove hiz and dsa from the parameters. We will know whether a HiZ buffer
exists from ilo_texture once HiZ is supported. DSA state should not affect
3DSTATE_DEPTH_BUFFER.
Introduce ilo_blend_cso and initialize it in create_blend_state(). This saves
us from having to construct hardware blend states in draw_vbo().
Introduce ilo_sampler_cso and initialize it in create_sampler_state(). This
saves us from having to perform CPU-intensive calculations to construct
hardware sampler states in draw_vbo().
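
The general create-time-baking pattern, sketched with hypothetical names and
field encodings (the real translation is much more involved):

    #include <stdint.h>
    #include <stdlib.h>

    struct sampler_cso {
       uint32_t payload[4];   /* pre-baked hardware SAMPLER_STATE dwords */
    };

    struct sampler_template {  /* stand-in for pipe_sampler_state */
       unsigned min_filter, mag_filter;
       unsigned wrap_s, wrap_t;
    };

    /* Do the expensive translation once, at create time... */
    static void *
    create_sampler_state(const struct sampler_template *tmpl)
    {
       struct sampler_cso *cso = calloc(1, sizeof(*cso));
       if (!cso)
          return NULL;
       cso->payload[0] = (tmpl->min_filter << 0) | (tmpl->mag_filter << 4);
       cso->payload[1] = (tmpl->wrap_s << 0) | (tmpl->wrap_t << 8);
       return cso;
    }

    /* ...so that emission at draw time is just a cheap copy. */
    static void
    emit_sampler(uint32_t *batch, const struct sampler_cso *cso)
    {
       for (unsigned i = 0; i < 4; i++)
          batch[i] = cso->payload[i];
    }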
This allows us to memcpy() the state in draw_vbo(). Add ilo_init_states() and
ilo_cleanup_states() that are called when contexts are created and destroyed
respectively, and properly set the initial scissor state in ilo_init_states().
Introduce ilo_viewport_cso and initialize it in set_viewport_states(). This
saves us from having to perform CPU-intensive calculations to construct
hardware viewport states in draw_vbo().
Define and use
struct ilo_sampler_state;
struct ilo_view_state;
struct ilo_cbuf_state;
struct ilo_resource_state;
struct ilo_global_binding;
in ilo_context.
Define and use
struct ilo_dsa_state;
struct ilo_blend_state;
struct ilo_fb_state;
in ilo_context.
Define and use
struct ilo_rasterizer_state;
in ilo_context.
Define and use
struct ilo_viewport_state;
struct ilo_scissor_state;
in ilo_context.
Define and use
struct ilo_so_state;
in ilo_context.
Define and use
struct ilo_vb_state;
struct ilo_ve_state;
struct ilo_ib_state;
in ilo_context.
These should just work, as required by d3d10. Too-large resources will get
thrown out separately anyway.
Reviewed-by: Brian Paul <[email protected]>
This was always doing per-pixel alignment, which isn't necessary except
for the buffer case (due to the per-element offset). The disabled code
for calculating it was incorrect because it assumed that the full block
would always be fetched, which may not be the case, so fix this up.
The original code failed, for instance, for r10g10b10a2: the alignment would
have been calculated as 4 (block_width) * 4 (bytes), so 16, but the actual
fetch may have only fetched 2 values at a time, hence only alignment 8;
it is unclear what exactly would happen in this case (alignment larger
than the size to fetch).
So just use the (already calculated) fetch size instead and derive the
alignment from that, which should always work, no matter if fetching 1, 2
or 4 pixels.
Reviewed-by: Jose Fonseca <[email protected]>
For rendering to buffers, we cannot have any y alignment.
So make sure that tile clear commands only clear up to the fb width/height,
not more (do this for all resources actually, as clearing more seems
pointless for other resources too). For the jit fs function, skip execution
of the lower half of the fragment shader for the 4x4 stamp completely;
for depth/stencil, only load/store the values from the first row
(and replace the other row with undef).
For the blend function, also only load half the values from the fs output
and replace the rest with undefs, so that everything still operates on the
full 4x4 block to keep the code the same between 4x1 and 4x4 (except for
load/store, of course, which also needs to skip (store) or replace (load)
these values with undefs), at the cost of slightly less optimal code
being produced in some cases.
Also reduce 1d and 1d array alignment, because they can be handled the
same as buffers and so don't need to waste memory.
v2: don't try to run special blend code for 4x1; there is (very) slightly
less complexity if we just use the same code as for 4x4, which may or may
not make it easier to optimize in the future (as we care a lot more about
4x4 performance than 1d).
v2: don't use undef values for unused fs src outputs with llvm 3.1, as it
apparently can trigger a bug in llvm.
Reviewed-by: Jose Fonseca <[email protected]>
Some parameters were used inconsistently, for instance not using
block_width/block_height/block_size for deriving the number of pixels
but rather relying on guesses from the number of fragment shaders etc.,
so fix this up (no actual change in behavior since the block size stays
fixed). (Though most of the code would work with a different block_height,
with three exceptions: the hacked r11g11b10 conversions and the twiddle
code, which only work with block_height 2, not 1, and the blend vector
type not being 128 bits wide.)
Reviewed-by: Jose Fonseca <[email protected]>