Commit messages
|
It also matches GL 4.2.
Further discussion:
http://lists.freedesktop.org/archives/mesa-dev/2013-August/042680.html
Cc: [email protected]
|
|
|
Spotted by okias on IRC.
|
|
|
|
Not only should we mark states dirty when the underlying resource is renamed,
we should also update the CSO bo when available.
|
|
|
We will need it in the following commit.
|
|
|
Reviewed-by: Marek Olšák <[email protected]>
|
v2:
- Preserve word boundaries.
v3:
- Use const and restrict.
- Fix indentation.
Reviewed-by: Michel Dänzer <[email protected]>
|
|
|
|
|
|
|
|
|
Most image functions are required to return a CL_INVALID_OPERATION
error when used on devices without image support.
v2:
- Simplified the code
Reviewed-by: Francisco Jerez <[email protected]>
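A minimal sketch of the check this implies, assuming a hypothetical device
flag and the standard CL error value; this is illustrative only, not
clover's actual code:

    #include <stdbool.h>

    enum { CL_SUCCESS = 0, CL_INVALID_OPERATION = -59 };

    struct device {
       bool image_support;   /* assumed flag, not clover's real field */
    };

    /* Guard used at the top of each image entry point: devices without
     * image support must fail with CL_INVALID_OPERATION. */
    static int
    validate_image_support(const struct device *dev)
    {
       return dev->image_support ? CL_SUCCESS : CL_INVALID_OPERATION;
    }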
|
|
|
|
|
|
v2: Add information about the item's starting point and size
v3: Rebased on top of master
Reviewed-by: Tom Stellard <[email protected]>
|
|
|
|
|
v2: Rebased on top of master
Reviewed-by: Tom Stellard <[email protected]>
|
|
|
This should allow a deeper pipeline.
|
|
|
|
|
|
When mapping a busy resource with PIPE_TRANSFER_DISCARD_RANGE or
PIPE_TRANSFER_FLUSH_EXPLICIT, we can avoid blocking by allocating and mapping
a staging bo, and emitting pipelined copies at the proper places. Since the
staging bo is never bound to the GPU, we give it a packed layout to save space.
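A rough sketch of that map path, under the assumption of made-up bo helpers
(bo_is_busy(), bo_create_packed() and friends are illustrative, not ilo's
real winsys interface):

    #include <stdbool.h>
    #include <stddef.h>

    enum {
       XFER_DISCARD_RANGE  = 1 << 0,   /* like PIPE_TRANSFER_DISCARD_RANGE */
       XFER_FLUSH_EXPLICIT = 1 << 1,   /* like PIPE_TRANSFER_FLUSH_EXPLICIT */
    };

    struct bo;
    bool  bo_is_busy(struct bo *bo);                              /* assumed */
    void *bo_map(struct bo *bo, bool blocking);                   /* assumed */
    struct bo *bo_create_packed(size_t size);                     /* assumed */
    void  record_pipelined_copy(struct bo *dst, struct bo *src);  /* assumed */

    /* Map for writing without stalling: if the bo is busy and the transfer
     * allows it, write into a packed, CPU-only staging bo and record a copy
     * to be emitted into the command stream at the proper place. */
    static void *
    map_no_stall(struct bo *dst, size_t size, unsigned usage,
                 struct bo **staging_out)
    {
       if (bo_is_busy(dst) &&
           (usage & (XFER_DISCARD_RANGE | XFER_FLUSH_EXPLICIT))) {
          *staging_out = bo_create_packed(size);
          record_pipelined_copy(dst, *staging_out);
          return bo_map(*staging_out, false);
       }

       *staging_out = NULL;
       return bo_map(dst, true);   /* fall back to a blocking map */
    }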
|
|
|
Enable PIPE_CAP_BUFFER_MAP_PERSISTENT_COHERENT and reorder caps a bit.
|
|
|
|
With the recent clean-ups, we can pass the mapped pointer around between
functions cleanly. Drop it to make ilo_transfer smaller.
|
|
|
|
It maps to drm_intel_gem_bo_map_unsynchronized(), which results in
unsynchronized GTT mapping.
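For reference, a minimal sketch of that path using libdrm_intel directly
(error handling reduced to a NULL return):

    #include <intel_bufmgr.h>

    /* Map the bo through the GTT without waiting for the GPU, as with
     * PIPE_TRANSFER_UNSYNCHRONIZED. */
    static void *
    map_unsynchronized(drm_intel_bo *bo)
    {
       if (drm_intel_gem_bo_map_unsynchronized(bo))
          return NULL;
       return bo->virtual;   /* CPU pointer to the GTT mapping */
    }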
|
|
|
Many of the transfer functions do not need an ilo_context. Drop it.
|
|
|
|
|
Add xfer_map() to replace map_bo_for_transfer(). Add xfer_unmap() and
xfer_alloc_staging_sys() to simplify texture and buffer mapping/unmapping, and
enable more code sharing between them.
|
|
|
|
|
Add a bunch of helper functions and a big comment for
choose_transfer_method(). This also fixes the handling of
PIPE_TRANSFER_MAP_DIRECTLY so that it no longer ignores tiling.
|
|
|
We used FREE() in one of the error paths.
|
|
|
Reviewed-by: Francisco Jerez <[email protected]>
|
|
|
|
|
|
[ Francisco Jerez: Check for devices not associated with the specified
context. Style fix. ]
Reviewed-by: Francisco Jerez <[email protected]>
|
|
|
Move fence creation to the new ilo_fence_create().
|
This gives us two things: we now need fewer item copies when we have
to defrag+grow the pool (just one copy per item), and even in the
case where we don't need to defrag the pool, we reduce the data copied
to just the useful data that the items use.
Note: The fallback path is a bit ugly now, but hopefully we won't need
it much.
Reviewed-by: Tom Stellard <[email protected]>
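Roughly, the combined defrag+grow described above walks the item list once
and copies each item straight to its new packed offset; the sketch below is
illustrative pseudo-C, not the actual compute_memory_pool code:

    struct resource;   /* stands in for the pool's backing resource */

    struct item {
       unsigned start;     /* current offset in the pool */
       unsigned size;      /* useful bytes actually used by the item */
       struct item *next;
    };

    struct pool {
       struct resource *res;
       struct item *items;
    };

    /* copy() is an assumed helper that copies size bytes between resources */
    static void
    defrag_and_grow(struct pool *pool, struct resource *new_res,
                    void (*copy)(struct resource *dst, unsigned dst_off,
                                 struct resource *src, unsigned src_off,
                                 unsigned size))
    {
       unsigned offset = 0;

       for (struct item *it = pool->items; it; it = it->next) {
          /* one copy per item, and only the bytes the item uses */
          copy(new_res, offset, pool->res, it->start, it->size);
          it->start = offset;
          offset += it->size;
       }

       pool->res = new_res;   /* the old resource can now be released */
    }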
|
Now, before moving everything to host memory, we try to create a
new resource to use as a pool. If we succeed, we just use this resource
and delete the previous one. If we fail, we fall back to using the
shadow.
This should make growing the pool faster, and we can also save the
64KB of memory that was allocated for the 'shadow', even if it
wasn't used.
Reviewed-by: Tom Stellard <[email protected]>
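The control flow is roughly the following (a hedged sketch;
create_pool_resource(), move_items() and grow_via_shadow() are made-up names
standing in for the real helpers):

    struct pool;
    struct resource;

    struct resource *create_pool_resource(unsigned size);      /* assumed */
    void move_items(struct pool *pool, struct resource *dst);  /* assumed */
    int  grow_via_shadow(struct pool *pool, unsigned size);    /* assumed */

    static int
    grow_pool(struct pool *pool, unsigned new_size)
    {
       struct resource *new_res = create_pool_resource(new_size);

       if (new_res) {
          /* fast path: move the items into the new resource and delete the
           * old one; no host-memory shadow is needed at all */
          move_items(pool, new_res);
          return 0;
       }

       /* fallback: stage everything through a host-memory shadow buffer */
       return grow_via_shadow(pool, new_size);
    }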
|
|
|
|
|
|
Oops, I should use larger fonts, I guess.
Reported-by: Ilia Mirkin <[email protected]>
Signed-off-by: Rob Clark <[email protected]>
|
Move the bits we want to share between generations from fd3_program to
ir3_shader. So the overall structure is:
  fdN_shader_stateobj -> ir3_shader -> ir3_shader_variant -> ir3
                                    |- ...
                                    \- ir3_shader_variant -> ir3
So the ir3_shader becomes the topmost generation-neutral object, which
manages the set of variants, each of which generates, compiles, and
assembles its own ir.
There is a bit of additional renaming, s/fd3_compiler/ir3_compiler/,
etc.
Keep the split between the gallium-level stateobj and the shader helper
object because it might be a good idea to pre-compute some generation
specific register values (i.e. anything that is independent of linking).
Signed-off-by: Rob Clark <[email protected]>
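In rough C terms, the ownership described above could be pictured as below;
the field names are illustrative, not the actual Mesa struct definitions:

    struct ir3;                        /* the per-variant IR */

    struct ir3_shader_variant {
       struct ir3 *ir;                 /* generated, compiled, assembled per variant */
       struct ir3_shader_variant *next;
    };

    struct ir3_shader {                /* generation-neutral, manages the variants */
       struct ir3_shader_variant *variants;
    };

    struct fd3_shader_stateobj {       /* gallium CSO; keeps gen-specific bits */
       struct ir3_shader *shader;
    };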
|
|
|
|
|
|
|
First step of the reorganization: split out the compiler (so it can be shared
between a3xx and a4xx). Rename ir3_shader -> ir3 (since we'll want
the name ir3_shader for a higher-level object).
Signed-off-by: Rob Clark <[email protected]>
|
|
|
|
|
|
The scheduler also needs to be aware of the predicate register (p0) in
addition to the address register (a0).
Signed-off-by: Rob Clark <[email protected]>
|
|
|
|
|
Remove some obsolete comments, rename deref->addr.
Signed-off-by: Rob Clark <[email protected]>
|
|
|
|
|
|
|
It seems like, for the most part, different behaviors, workarounds, etc.,
should be conditional on the GPU patch revision (i.e. a320.0 vs a320.2)
rather than the GPU id (a320 vs a330).
Signed-off-by: Rob Clark <[email protected]>
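As a hypothetical example of what that means in practice (the struct and
field names here are made up):

    #include <stdbool.h>

    struct gpu_info {
       unsigned gpu_id;      /* e.g. 320 or 330 */
       unsigned patch_rev;   /* e.g. 0 for a320.0, 2 for a320.2 */
    };

    static bool
    needs_workaround(const struct gpu_info *info)
    {
       /* prefer keying off the patch revision ... */
       return info->patch_rev < 2;
       /* ... rather than something like: return info->gpu_id == 320; */
    }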
|
The fixed-size heap is a remnant of the fdre-a3xx assembler, but it is
convenient for being able to free the entire data structure in one shot
without worrying about leaking nodes.
Change it to dynamically grow the heap (by adding chunks) as needed, so we
don't have an artificial upper limit on shader size (other than hw
limits) and don't always have to allocate the worst-case size.
Signed-off-by: Rob Clark <[email protected]>
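A minimal sketch of such a chunked, grow-only heap, assuming nothing about
the real ir3 allocator beyond what the message above says (oversized requests
and allocation failures are not handled here):

    #include <stdlib.h>

    #define CHUNK_SZ (64 * 1024)

    struct heap_chunk {
       struct heap_chunk *next;
       unsigned used;
       char mem[CHUNK_SZ];
    };

    /* Allocate from the current chunk, linking in a new chunk when it fills
     * up, so the heap grows as needed instead of having a fixed size. */
    static void *
    heap_alloc(struct heap_chunk **heap, unsigned sz)
    {
       struct heap_chunk *chunk = *heap;

       sz = (sz + 7) & ~7;   /* keep allocations aligned */

       if (!chunk || (chunk->used + sz) > CHUNK_SZ) {
          chunk = calloc(1, sizeof(*chunk));
          chunk->next = *heap;
          *heap = chunk;
       }

       void *ptr = &chunk->mem[chunk->used];
       chunk->used += sz;
       return ptr;
    }

    /* Free the whole IR in one shot by walking the chunk list. */
    static void
    heap_destroy(struct heap_chunk *heap)
    {
       while (heap) {
          struct heap_chunk *next = heap->next;
          free(heap);
          heap = next;
       }
    }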
|
|
|
|
|
|
The iteration variables go from 0 anyway.
Signed-off-by: Jan Vesely <[email protected]>
Reviewed-by: Tom Stellard <[email protected]>
|
|
|
Reviewed-by: Francisco Jerez <[email protected]>
|
|
|
|
Reviewed-by: Marek Olšák <[email protected]>
Reviewed-by: Francisco Jerez <[email protected]>
|
|
|
|
|
This will be used in the following patch to avoid duplicated code.
Reviewed-by: Tom Stellard <[email protected]>
|
|
|
|
|
v2: Remove unnecessary variables
Reviewed-by: Tom Stellard <[email protected]>
|
|
|
|
|
|
Can we please keep it clean and avoid ending up in a messy situation
like the ddx?
Signed-off-by: Jérôme Glisse <[email protected]>
|
|
|
Signed-off-by: Ilia Mirkin <[email protected]>
|
|
|
Signed-off-by: Ilia Mirkin <[email protected]>
|
|
|
Signed-off-by: Ilia Mirkin <[email protected]>
|
In a situation where double-register values are used, the phi nodes can
still end up being u32 values. They all get merged into one RA node
though. When fixing up the merge (which comes after the phi node), the
phi node's def would get fixed, but not its sources, which would remain
at the low register value.
This maintains the invariant that a phi node's defs and sources are
allocated the same register.
Signed-off-by: Ilia Mirkin <[email protected]>
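In pseudo-C terms, the fixup amounts to forcing every phi source into the
register the merged RA node was given; the types below are assumptions, not
the real compiler IR:

    struct value { int reg; };

    struct phi {
       struct value *def;
       struct value *src[4];
       int num_srcs;
    };

    static void
    fixup_phi_regs(struct phi *phi, int merged_reg)
    {
       phi->def->reg = merged_reg;

       /* previously only the def was updated; the sources kept the old
        * (low) register, breaking the phi invariant */
       for (int i = 0; i < phi->num_srcs; i++)
          phi->src[i]->reg = merged_reg;
    }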
|
|
|
Signed-off-by: Ilia Mirkin <[email protected]>
|
|
|
Signed-off-by: Ilia Mirkin <[email protected]>
|
|
|
Signed-off-by: Ilia Mirkin <[email protected]>
|
|
|
|
Signed-off-by: Ilia Mirkin <[email protected]>
Cc: <[email protected]>
|
|
|
Just to be cautious.
|
|
|
|
s/alloc_bo/rename_bo/ as that is what the functions do. Simplify bo
allocation and move the complexity to bo renaming.
|
|
|
|
Add resource_get_bo_name() and resource_get_bo_initial_domain() for use by
both functions.
|
|
|
GEN7.5 gains support for those formats natively.
|
|
|
Pass ilo_dev_info to all format translation functions.
|
|
|
|
|
|
Don't assert (in debug builds) or assign a random uninitialized value to the
predicate register (p0); that screws up kill, etc.
Signed-off-by: Rob Clark <[email protected]>