| Commit message | Author | Age | Files | Lines |
| |
We use the new miptree offset to pick out the sub-image when we bind
the EGLImage to a texture.
Signed-off-by: Kristian Høgsberg <[email protected]>
|
| |
This lets us specify an offset into the bo where the miptree starts,
which will let us set up a texture for a single plane in a planar buffer.
Signed-off-by: Kristian Høgsberg <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
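As a rough illustration of what the offset buys us, here is a minimal, self-contained sketch of how per-plane byte offsets into a single bo might be computed for an NV12-style planar buffer. The struct and function names are hypothetical, not the actual Mesa types.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical NV12 layout: both planes live in one bo, the Y plane
     * at offset 0 and the interleaved CbCr plane right after it. */
    struct planar_layout {
        uint32_t height;
        uint32_t stride;      /* bytes per row of the Y plane */
        uint32_t offset[2];   /* byte offset of each plane within the bo */
    };

    static void nv12_plane_offsets(struct planar_layout *l)
    {
        l->offset[0] = 0;                     /* Y plane */
        l->offset[1] = l->stride * l->height; /* CbCr plane */
    }

    int main(void)
    {
        struct planar_layout l = { .height = 128, .stride = 256 };
        nv12_plane_offsets(&l);
        /* A per-plane miptree would be created from the shared bo plus
         * offset[plane], so sampling starts at the right byte. */
        printf("CbCr plane starts at byte %u\n", (unsigned)l.offset[1]);
        return 0;
    }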
|
| |
The code for growing the memory pool (which is used for storing all of
the global buffers) wasn't working. There seemed to be two separate issues
with the memory pool code. The first was the way it grew the pool.
When the memory pool needed more space, it would:
1. Copy the data from the memory pool's backing texture to system memory.
2. Delete the memory pool's texture
3. Create a bigger backing texture for the memory pool.
4. Copy the data from system memory into the bigger texture.
The copy operations didn't seem to be working, and I suspect that,
since they were using fragment shaders to do the copy, there might have
been a problem with the mixing of compute and 3D state.
The other issue is that the size of 1D textures is limited, and I was
having trouble getting 2D textures to work.
I think these problems will be easier to solve once more code is shared
between 3D and compute, which is why I decided to disable it for now
rather than continue searching for a fix.
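For reference, a minimal sketch of the four-step grow sequence described above, using plain heap memory as a stand-in for the backing texture (the real code does these copies through GPU resources, which is where it was failing):

    #include <stdlib.h>
    #include <string.h>

    /* Stand-in for the pool: the real code backs "texture" with a GPU
     * resource; plain heap memory is used here purely for illustration. */
    struct compute_memory_pool {
        char   *texture;   /* stand-in for the backing texture */
        size_t  size;      /* current size in bytes */
    };

    /* The grow sequence described above; new_size must be >= pool->size. */
    static int pool_grow(struct compute_memory_pool *pool, size_t new_size)
    {
        char *staging = malloc(pool->size);         /* 1. copy contents out  */
        if (!staging)
            return -1;
        memcpy(staging, pool->texture, pool->size);

        free(pool->texture);                        /* 2. delete old texture */
        pool->texture = calloc(1, new_size);        /* 3. create bigger one  */
        if (!pool->texture) {
            free(staging);
            return -1;
        }

        memcpy(pool->texture, staging, pool->size); /* 4. copy contents back */
        pool->size = new_size;
        free(staging);
        return 0;
    }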
|
| |
The original strategy for handling floating-point loads, which was to
lower (f32 load) to (f32 bitcast (i32 load)), wasn't really working. The
main problem was that the DAG legalizer couldn't handle replacing a node
with two results (load) with a node with only one result (bitcast).
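In C terms, the lowering that was attempted corresponds to something like the sketch below; the function name is illustrative, and the actual change is in the DAG legalization, not C code.

    #include <stdint.h>
    #include <string.h>

    /* (f32 load addr)  ->  (f32 bitcast (i32 load addr)) */
    static float load_f32_via_i32(const void *addr)
    {
        uint32_t bits;
        memcpy(&bits, addr, sizeof bits);  /* the i32 load */
        float f;
        memcpy(&f, &bits, sizeof f);       /* the bitcast back to f32 */
        return f;
    }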
|
| |
The IMM bit is already being set in SICodeEmitter.
|
| |
It didn't change performance on Lightsmark or Nexuiz, which both used
DYNAMIC_DRAW buffers, but it was killing performance (40% CPU wasted pwriting
buffers) on a closed-source app we're looking at.
Reviewed-by: Kenneth Graunke <[email protected]>
|
| |
Add the infrastructure required for this extension. There is no
xserver support and no driver support yet. Drivers can enable this by
advertising DRI2 version 4 and accepting the
__DRI_CTX_FLAG_ROBUST_BUFFER_ACCESS flag and the
__DRI_CTX_ATTRIB_RESET_STRATEGY attribute in create context.
Some additional Mesa infrastructure is needed before drivers can do
this. The GL_ARB_robustness spec, which all Mesa drivers already
advertise, requires:
"If the behavior is LOSE_CONTEXT_ON_RESET_ARB, a graphics reset
will result in the loss of all context state, requiring the
recreation of all associated objects."
It is necessary to land this infrastructure now so that the related
infrastructure can land in the xserver. The xserver has very long
release schedules, and the remaining Mesa parts should land long, long
before the next xserver merge window opens.
v2: Expose robustness as a DRI2 extension rather than bumping
__DRI_DRI2_VERSION.
v3: Add a comment explaining why dri2->base.version >= 3 is also
required for GLX_ARB_create_context_robustness.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
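To make the driver-facing contract concrete, here is a hedged sketch of how a driver might validate the reset-strategy attribute when creating a context. The constant names and values are illustrative stand-ins for the __DRI_CTX_* definitions in dri_interface.h, not the real values.

    #include <stdbool.h>
    #include <stdint.h>

    /* Stand-ins for the __DRI_CTX_* constants in dri_interface.h; the
     * values here are illustrative only. */
    #define CTX_ATTRIB_RESET_STRATEGY   3
    #define CTX_RESET_NO_NOTIFICATION   0
    #define CTX_RESET_LOSE_CONTEXT      1

    /* attribs is a list of (name, value) pairs, count is the number of
     * pairs, as in a createContextAttribs-style entry point. */
    static bool parse_reset_strategy(const uint32_t *attribs, unsigned count,
                                     unsigned *strategy)
    {
        *strategy = CTX_RESET_NO_NOTIFICATION;   /* default */
        for (unsigned i = 0; i < count; i++) {
            if (attribs[i * 2] != CTX_ATTRIB_RESET_STRATEGY)
                continue;
            uint32_t value = attribs[i * 2 + 1];
            if (value != CTX_RESET_NO_NOTIFICATION &&
                value != CTX_RESET_LOSE_CONTEXT)
                return false;                    /* reject unknown values */
            *strategy = value;
        }
        return true;
    }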
|
| |
This allows revising dri_interface.h separately from adding driver
support.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
| |
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
|
| |
Remove 'extern' from the functions declared in texcompress_etc.h.
Reviewed-by: Kenneth Graunke <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
|
| |
Use r600_resource_texture::flushed_depth_texture for GPU access, and
allocate it in VRAM. For transfers we'll allocate a texture in GTT
and store it in r600_transfer::staging.
This improves performance when the flushed depth texture is frequently
used by the GPU, e.g. in Lightsmark (~30%).
Signed-off-by: Vadim Girlin <[email protected]>
|
| |
With fixes and updates from Ben Widawsky and comments from Paul Berry.
v2: Use drm_intel_gem_context_destroy to destroy hardware context;
remove useless initialization of hw_ctx, both suggested by Eric.
Signed-off-by: Kenneth Graunke <[email protected]>
Signed-off-by: Ben Widawsky <[email protected]>
Acked-by: Paul Berry <[email protected]>
|
| |
4952caa caused the _EXT to fall off the name of this enum. This is
fine. Update the unit test to expect the new value.
Signed-off-by: Ian Romanick <[email protected]>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=51956
|
| |
The query type is already documented.
|
| |
PIPE_QUERY_TIMESTAMP is already implemented and working.
|
| |
This adds a new driver function to retrieve the timestamp.
Reviewed-by: Eric Anholt <[email protected]>
|
| |
Reviewed-by: Brian Paul <[email protected]>
|
| |
Reviewed-by: Brian Paul <[email protected]>
|
| |
Reviewed-by: Brian Paul <[email protected]>
|
| |
Reviewed-by: Brian Paul <[email protected]>
|
| |
This doesn't do anything with the uniform block declarations yet, so
usage of those uniforms finds them to be undeclared.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
| |
I've been trying to derive from this for UBO support, and the slightly
obfuscated types were putting me over the edge.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
| |
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
|
| |
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
| |
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
|
| |
The got_one variable was set iff one of the bits in flags.i was set.
v2: Fix incorrect dropping of the ARB_conservative_depth warning.
Reviewed-by: Kenneth Graunke <[email protected]> (v1)
Reviewed-by: Ian Romanick <[email protected]>
|
| |
This fixes a segfault in r600_screen_create() introduced by
eb065f5d9d1159af3a88a64a7606c9b6d67dc3.
Reported by tilman on IRC.
|
| |
This is the first step towards being able to use evergreen_cb to bind RATs.
|
| |
This function is used when dispatching a compute shader in order to avoid
mixing compute and 3D registers in the context's dirty list. This
allows the compute code to reuse 3D functions like evergreen_cb, which
return a struct r600_pipe_state, while still having control over when and
how the register writes are emitted.
|
| |
This allows the shader type bit to be set in the pm4 header when
emitting registers for compute shaders.
Reviewed-by: Marek Olšák <[email protected]>
|
| |
The start_compute_cs atom initializes some config and context registers
to the values needed for running compute shaders. When a compute shader
is dispatched, this atom is emitted after the start_cs_cmd atom, which
initializes registers that are common to both 3D and compute.
Reviewed-by: Marek Olšák <[email protected]>
|
| |
Some packets require the shader type bit (bit 1) to be set when
used for compute shaders. The pkt_flag will be initialized to
RADEON_CP_PACKET3_COMPUTE_MODE for any struct r600_command_buffer used
for dispatching compute shaders, and it will be OR'd with the result of
the PKT3 macro when adding a new packet to a struct r600_command_buffer.
Reviewed-by: Marek Olšák <[email protected]>
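A simplified sketch of the idea; the header bit layout below is illustrative, not the exact r600d.h encoding, and the struct name is deliberately not the real one.

    #include <stdint.h>

    /* Illustrative PM4 packet-3 header encoding (simplified). */
    #define PKT3_IT_OPCODE_S(op)           (((op) & 0xff) << 8)
    #define PKT3_COUNT_S(count)            (((count) & 0x3fff) << 16)
    #define PKT3(op, count)                (0xC0000000u | \
                                            PKT3_COUNT_S(count) | \
                                            PKT3_IT_OPCODE_S(op))

    /* Bit 1 of the header selects compute mode, as named in the commit. */
    #define RADEON_CP_PACKET3_COMPUTE_MODE 0x00000002

    struct r600_command_buffer_sketch {
        uint32_t *buf;
        unsigned  num_dw;
        uint32_t  pkt_flags;  /* 0 for 3D, COMPUTE_MODE for compute buffers */
    };

    static void emit_pkt3(struct r600_command_buffer_sketch *cb,
                          unsigned op, unsigned count)
    {
        /* The compute-mode bit is OR'd into every packet header so the
         * same emission helpers serve both 3D and compute. */
        cb->buf[cb->num_dw++] = PKT3(op, count) | cb->pkt_flags;
    }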
|