| Commit message | Author | Age | Files | Lines |
| |
Split out of the main fd6_emit() code, since it was already getting to
be a pretty giant function.
Signed-off-by: Rob Clark <[email protected]>
| |
We probably need to rethink how we detect which instruction first
defines higher register classes. But for now, this at least fixes
the symptom.
Signed-off-by: Rob Clark <[email protected]>
| |
The elements added to a vector should have the same type as the first
one; otherwise this hits an assertion in LLVM.
Fixes: 4b3549c0846 ("radv: reduce the number of loaded channels for vertex input fetches")
Reported-by: Philip Rebohle <[email protected]>
Signed-off-by: Samuel Pitoiset <[email protected]>
Reviewed-by: Bas Nieuwenhuizen <[email protected]>
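As a hedged illustration of the constraint being fixed (not the radv code
itself; 'builder', 'channels' and 'num_loaded' are stand-ins for the real
ac/radv state), a padding helper built on the LLVM C API must take the
padding value's type from the channels that were actually loaded:

   #include <llvm-c/Core.h>

   static LLVMValueRef
   pad_fetch_to_vec4(LLVMBuilderRef builder, LLVMValueRef *channels,
                     unsigned num_loaded)
   {
      /* Every element inserted into the vector must match the type of the
       * first one, or LLVM asserts; derive the zero padding from it. */
      LLVMTypeRef elem_type = LLVMTypeOf(channels[0]);
      LLVMValueRef vec = LLVMGetUndef(LLVMVectorType(elem_type, 4));

      for (unsigned i = 0; i < 4; i++) {
         LLVMValueRef elem = i < num_loaded ? channels[i]
                                            : LLVMConstNull(elem_type);
         vec = LLVMBuildInsertElement(builder, vec, elem,
                                      LLVMConstInt(LLVMInt32Type(), i, false),
                                      "");
      }
      return vec;
   }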
| |
After the previous changes to emulate the ETC/EAC formats using the
secondary shadow miptree, the etc_format field of the intel_mipmap_tree
struct became redundant and the remaining check that used it has been
replaced. (Nanley Chery)
Reviewed-by: Nanley Chery <[email protected]>
| |
The OES_copy_image extension was disabled on Gen7 due to the lack of
support for ETC2 images. Enable it again. (Kenneth Graunke)
v2:
- Removed the blank lines in the comments above OES_copy_image and
OES_texture_view extensions in intel_extensions.c (Nanley Chery)
Reviewed-by: Nanley Chery <[email protected]>
| |
For CopyImageSubData to copy the data during the first draw call, we need
to update the shadow tree right before rendering.
v2:
- Added assertion that the miptree doesn't need update at the time we
update the texture surface. (Nanley Chery)
v3:
- As we now update the tree before rendering, we don't need to copy
the data during the unmap anymore. Removed the unnecessary update from
intel_miptree_unmap in intel_mipmap_tree.c (Nanley Chery)
v4:
- Fixed unrelated empty line removal (Nanley Chery)
- As intel_update_etc_shadow in intel_mipmap_tree.c is now only
called from the function that follows it, we don't need to declare it at
the top of the file anymore. (Nanley Chery)
Reviewed-by: Nanley Chery <[email protected]>
| |
Gen < 8 GPUs cannot sample ETC2 formats. So far, they have converted the
compressed EAC/ETC2 images to non-compressed RGBA images. When
GetCompressed* functions were called, the pixels were returned in this
RGBA format and not in the compressed format that was expected.
To fix this problem, we use a secondary shadow miptree to store the
decompressed data for rendering and the main miptree to store the
compressed data so that the Get functions keep working. Each time the main
miptree is written with compressed data, we decompress the data to RGB and
update the shadow. Then we use the shadow for rendering.
v2:
- Fixes in the commit message (Nanley Chery)
- Reversed the changes in brw_get_texture_swizzle and swapped the b, g
values at the time that we decompress the data in the function:
intel_miptree_update_etc_shadow of intel_mipmap_tree.c (Nanley Chery)
- Simplified the format checks in the miptree_create function of the
intel_mipmap_tree.c and reserved the call of the
intel_lower_compressed_format for the case that we are faking the ETC
support (Nanley Chery)
- Removed the check for the auxiliary usage for the shadow miptree at
creation (miptree_create of intel_mipmap_tree.c) as we won't use
auxiliary buffers with these types of trees (Nanley Chery)
- Set the etc_format of the non-ETC miptrees to MESA_FORMAT_NONE and
removed the unnecessary checks (Nanley Chery)
- Fixed an unrelated indentation change (Nanley Chery)
- Modified the function intel_miptree_finish_write to set the
mt->shadow_needs_update to true to catch all the cases when we need to
update the miptree (Nanley Chery)
- In order to update the shadow miptree during the unmap of the
main one and always map the main one (Nanley Chery), the following change
was necessary: split the previous update function that was updating all
the mipmap levels into two functions: one that updates a single
level and one that updates all of them. Used the first during unmap
and the second before rendering.
- Removed the BRW_MAP_ETC_BIT flag and the mechanism to decide which
miptree should be mapped each time and reversed all the changes in the
higher level texture functions that upload data to textures as they
aren't needed anymore.
- Replaced the boolean needs_fake_etc with an inline function that
checks when we need to fake the ETC compression (Nanley Chery)
- Removed the initialization of the strides in the update function as
the values will be overwritten by the intel_miptree_map call (Nanley
Chery)
- Used minify instead of division in the new update function
intel_miptree_update_etc_shadow_levels in intel_mipmap_tree.c (Nanley
Chery)
- Removed the depth from the calculation of the number of slices in
the new update function (intel_miptree_update_etc_shadow_levels of
intel_mipmap_tree.c) as we don't need to support 3D ETC images.
(Nanley Chery)
v3:
- Renamed the rgba_fmt in function miptree_create
(intel_mipmap_tree.c) to decomp_format as the format is not always in
rgba order. (Nanley Chery)
- Documented the new usage for the shadow miptree in the comment above
the field in the intel_miptree struct in intel_mipmap_tree.h (Nanley
Chery)
- Removed the redundant flags from the mapping of the miptrees in
intel_miptree_update_etc_shadow of intel_mipmap_tree.c (Nanley Chery)
- Fixed the switch from surface's logical level to physical level in
the intel_miptree_update_etc_shadow_levels of intel_mipmap_tree.c
(Nanley Chery)
- Excluded the Baytrail GPUs from the check for the ETC emulation as
they support the ETC formats natively. (Nanley Chery)
- Simplified the check if the format is BGRA in
intel_miptree_update_etc_shadow of intel_mipmap_tree.c (Nanley Chery)
v4:
- Removed the functions intel_miptree_(map|unmap)_etc and the check if
we need to call them as with the new changes, they became unreachable.
(Nanley Chery)
- We'd rather calculate the level width and height using the shadow
miptree instead of the main in intel_miptree_update_etc_shadow_levels of
intel_mipmap_tree.c (Nanley Chery)
- Fixed the format in the mt_surface_usage, set at the miptree creation,
in miptree_create of intel_mipmap_tree.c (Nanley Chery)
v5:
- Fixed the levels calculations in intel_mipmap_tree.c (Nanley Chery)
- Update the flag shadow_needs_update outside the function
intel_miptree_update_etc_shadow (Nanley Chery)
- Fixed indentation error (Nanley Chery)
v6:
- Fixed typo in commit message (Nanley Chery)
- Simplified the assignment of the mt_fmt in the miptree_create of the
intel_mipmap_tree.c (Nanley Chery)
- Combined declarations and assignments where it was possible in the
intel_miptree_update_etc_shadow and
intel_miptree_update_etc_shadow_levels of the intel_mipmap_tree.c
(Nanley Chery)
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=81843
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=104272
Reviewed-by: Nanley Chery <[email protected]>
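For context, a minimal sketch of the kind of check this series introduces;
the helper name and exact conditions are assumptions based on the
description above (i965 driver headers assumed), not the code that landed:

   /* Illustrative only: Gen < 8 parts other than Baytrail cannot sample
    * ETC/EAC, so rendering uses the decompressed shadow miptree while the
    * main miptree keeps the compressed data for GetCompressed*. */
   static inline bool
   intel_miptree_needs_fake_etc(const struct brw_context *brw,
                                const struct intel_mipmap_tree *mt)
   {
      const struct gen_device_info *devinfo = &brw->screen->devinfo;
      bool hw_can_sample_etc = devinfo->gen >= 8 || devinfo->is_baytrail;

      return _mesa_is_format_etc2(mt->format) && !hw_can_sample_etc;
   }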
| |
Use more generic field names. We'll reuse these fields for a workaround
with ASTC miptrees.
Reviewed-by: Eleni Maria Stea <[email protected]>
| |
This was probably useful when it was first written; however, it
looks to be no longer necessary.
As far as I can tell, these days DCE is smart enough to remove useless
instructions from if branches. Once this is done,
nir_opt_peephole_select() will end up removing the empty if.
Removing this support reduces the dolphin uber shader compilation
time spent in nir_opt_dead_cf() by a little over 7x.
No shader-db changes on i965 or radeonsi.
Tested-by: Dieter Nützel <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
Reviewed-by: Connor Abbott <[email protected]>
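A hedged sketch of the pass interaction relied on here; the real
optimization loop lives in each driver, and the nir_opt_peephole_select()
argument list is my assumption of its signature at this point:

   bool progress;
   do {
      progress = false;
      /* DCE first removes the useless instructions from the if branches... */
      progress |= nir_opt_dce(shader);
      /* ...then peephole_select can delete the now-empty if entirely. */
      progress |= nir_opt_peephole_select(shader, 0, false, false);
   } while (progress);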
| |
Reviewed-by: Bruce Cherniak <[email protected]>
| |
Reduce stack space used by the clipper, which had led to crashes with some
versions of MSVC.
Reviewed-by: Bruce Cherniak <[email protected]>
| |
Reviewed-by: Bruce Cherniak <[email protected]>
| |
Reviewed-by: Bruce Cherniak <[email protected]>
| |
- Ensure all threads have optimal floating-point control state
- Disable auto-generation of fused FP ops for VERTEX shader stage
- Disable "fast" FP ops for VERTEX shader stage
Reviewed-by: Bruce Cherniak <[email protected]>
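As an illustration of what per-thread floating-point control state
typically means on x86 (flush-to-zero and denormals-are-zero); this is a
generic sketch, not necessarily the exact calls SWR makes:

   #include <xmmintrin.h>
   #include <pmmintrin.h>

   /* Must run on every worker thread: MXCSR is per-thread state. Disabling
    * denormal handling avoids slow microcoded FP paths. */
   static void thread_init_fp_state(void)
   {
      _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
      _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
   }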
| |
Reduces the amount of compile churn when testing different default values.
Reviewed-by: Bruce Cherniak <[email protected]>
| |
The intrinsic returns the number of leading zeros, not the index of the
first nonzero bit, so just flip it based on the mask size.
Reviewed-by: Bruce Cherniak <[email protected]>
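The arithmetic behind the flip, shown with the GCC/Clang builtin as a
stand-in for whatever count-leading-zeros intrinsic the JIT emits (sketch,
32-bit mask assumed):

   #include <stdint.h>

   /* clz counts the zero bits above the topmost set bit, so flipping the
    * count against the mask width recovers that bit's index. mask != 0. */
   static inline uint32_t highest_set_bit(uint32_t mask)
   {
      return 31u - (uint32_t)__builtin_clz(mask);
   }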
| |
Fixes crashes on some compute shaders when running on AVX512
Reviewed-by: Bruce Cherniak <[email protected]>
| |
- Was not useful to inline in release builds
- FORCEINLINE can be used if absolutely necessary
Reviewed-by: Bruce Cherniak <[email protected]>
| |
Fulfills an unused internal interface
Reviewed-by: Bruce Cherniak <[email protected]>
| |
Normalized and scaled formats also return floats.
Fixes: 4b3549c0846 ("radv: reduce the number of loaded channels for vertex input fetches")
Reviewed-by: Samuel Pitoiset <[email protected]>
| |
All of the affected shaders are from Unreal4 demos.
All Gen6+ platforms had similar results. (Skylake shown)
total instructions in shared programs: 15437170 -> 15437001 (<.01%)
instructions in affected programs: 21536 -> 21367 (-0.78%)
helped: 43
HURT: 0
helped stats (abs) min: 1 max: 4 x̄: 3.93 x̃: 4
helped stats (rel) min: 0.68% max: 1.01% x̄: 0.80% x̃: 0.80%
95% mean confidence interval for instructions value: -4.07 -3.79
95% mean confidence interval for instructions %-change: -0.83% -0.77%
Instructions are helped.
total cycles in shared programs: 383007896 -> 383007378 (<.01%)
cycles in affected programs: 158640 -> 158122 (-0.33%)
helped: 38
HURT: 4
helped stats (abs) min: 1 max: 48 x̄: 13.89 x̃: 6
helped stats (rel) min: 0.03% max: 1.01% x̄: 0.33% x̃: 0.19%
HURT stats (abs) min: 2 max: 3 x̄: 2.50 x̃: 2
HURT stats (rel) min: 0.06% max: 0.09% x̄: 0.08% x̃: 0.08%
95% mean confidence interval for cycles value: -16.90 -7.77
95% mean confidence interval for cycles %-change: -0.39% -0.19%
Cycles are helped.
Iron Lake and GM45 had similar results. (Iron Lake shown)
total instructions in shared programs: 8213746 -> 8213745 (<.01%)
instructions in affected programs: 127 -> 126 (-0.79%)
helped: 1
HURT: 0
total cycles in shared programs: 187734146 -> 187734144 (<.01%)
cycles in affected programs: 2132 -> 2130 (-0.09%)
helped: 1
HURT: 0
Reviewed-by: Jason Ekstrand <[email protected]>
| |
Section 5.4.1 (Conversion and Scalar Constructors) of the GLSL 4.60 spec
says:
It is undefined to convert a negative floating-point value to an
uint.
Assuming that (uint)some_float behaves like (uint)(int)some_float allows
some optimizations in the i965 backend to proceed.
This basically undoes the small amount of damage done by
"intel/compiler: Avoid propagating inequality cmods if types are
different".
v2: Replicate part of the commit message as a comment in the code.
Suggested by Jason.
shader-db results comparing *before* "intel/compiler: Avoid propagating
inequality cmods if types are different" and after this commit:
Skylake
total cycles in shared programs: 383007996 -> 383007896 (<.01%)
cycles in affected programs: 85208 -> 85108 (-0.12%)
helped: 13
HURT: 8
helped stats (abs) min: 2 max: 26 x̄: 10.77 x̃: 6
helped stats (rel) min: 0.09% max: 0.65% x̄: 0.28% x̃: 0.14%
HURT stats (abs) min: 2 max: 12 x̄: 5.00 x̃: 3
HURT stats (rel) min: 0.04% max: 0.32% x̄: 0.12% x̃: 0.07%
95% mean confidence interval for cycles value: -9.31 -0.21
95% mean confidence interval for cycles %-change: -0.24% <.01%
Cycles are helped.
Broadwell
total cycles in shared programs: 415251194 -> 415251370 (<.01%)
cycles in affected programs: 83750 -> 83926 (0.21%)
helped: 7
HURT: 13
helped stats (abs) min: 10 max: 12 x̄: 11.43 x̃: 12
helped stats (rel) min: 0.30% max: 0.30% x̄: 0.30% x̃: 0.30%
HURT stats (abs) min: 2 max: 36 x̄: 19.69 x̃: 22
HURT stats (rel) min: 0.05% max: 0.89% x̄: 0.44% x̃: 0.47%
95% mean confidence interval for cycles value: 0.76 16.84
95% mean confidence interval for cycles %-change: <.01% 0.37%
Inconclusive result (%-change mean confidence interval includes 0).
Haswell
total instructions in shared programs: 13823885 -> 13823886 (<.01%)
instructions in affected programs: 2249 -> 2250 (0.04%)
helped: 0
HURT: 1
total cycles in shared programs: 390094243 -> 390094001 (<.01%)
cycles in affected programs: 85640 -> 85398 (-0.28%)
helped: 15
HURT: 6
helped stats (abs) min: 4 max: 26 x̄: 18.53 x̃: 18
helped stats (rel) min: 0.09% max: 0.66% x̄: 0.47% x̃: 0.42%
HURT stats (abs) min: 2 max: 14 x̄: 6.00 x̃: 2
HURT stats (rel) min: 0.04% max: 0.37% x̄: 0.15% x̃: 0.04%
95% mean confidence interval for cycles value: -17.36 -5.69
95% mean confidence interval for cycles %-change: -0.44% -0.14%
Cycles are helped.
Ivy Bridge
total cycles in shared programs: 180986448 -> 180986552 (<.01%)
cycles in affected programs: 34835 -> 34939 (0.30%)
helped: 0
HURT: 10
HURT stats (abs) min: 2 max: 18 x̄: 10.40 x̃: 10
HURT stats (rel) min: 0.06% max: 0.36% x̄: 0.28% x̃: 0.30%
95% mean confidence interval for cycles value: 4.67 16.13
95% mean confidence interval for cycles %-change: 0.20% 0.35%
Cycles are HURT.
Sandy Bridge
total cycles in shared programs: 154603969 -> 154603970 (<.01%)
cycles in affected programs: 171514 -> 171515 (<.01%)
helped: 25
HURT: 14
helped stats (abs) min: 1 max: 4 x̄: 1.80 x̃: 1
helped stats (rel) min: 0.02% max: 0.10% x̄: 0.04% x̃: 0.04%
HURT stats (abs) min: 1 max: 8 x̄: 3.29 x̃: 3
HURT stats (rel) min: 0.03% max: 0.28% x̄: 0.10% x̃: 0.11%
95% mean confidence interval for cycles value: -0.91 0.96
95% mean confidence interval for cycles %-change: -0.02% 0.04%
Inconclusive result (value mean confidence interval includes 0).
No changes on Iron Lake or GM45.
Reviewed-by: Jason Ekstrand <[email protected]>
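A small hedged illustration of the assumption (the example values are
mine, not from the commit):

   /* GLSL 4.60 section 5.4.1 leaves uint(f) undefined for negative f, so
    * the backend may legally behave as if the conversion went through int. */
   static unsigned float_to_uint_as_assumed(float f)
   {
      return (unsigned)(int)f;   /* e.g. -3.5f -> -3 -> 0xfffffffd */
   }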
| |
v2 (idr): Move adding the test to after adding the fix. Reordering the
two commits prevents possible headaches for git-bisect with scripts that
always do 'ninja check'.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109404
Reviewed-by: Ian Romanick <[email protected]>
| |
v2: Fix silly bug in logic. s/||/&&/
All but one of the affected shaders is in an Unreal4 demo. The other is
in Tomb Raider. All of the cases that Ian investigated appear to be
sequences like the following
if (int(uint(some_float)) < 0) /* other relations too */
...
At least in Tomb Raider, it's not obvious that this sequence came from
the original shader.
In some of the Unreal demos, the shader contains code like
if (int(uint(textureLod(...))) > 0)
...
which explicitly generates the offending sequence.
All Gen6+ platforms had similar results (Skylake shown):
total instructions in shared programs: 15437170 -> 15437187 (<.01%)
instructions in affected programs: 4492 -> 4509 (0.38%)
helped: 0
HURT: 17
HURT stats (abs) min: 1 max: 1 x̄: 1.00 x̃: 1
HURT stats (rel) min: 0.05% max: 0.73% x̄: 0.66% x̃: 0.73%
95% mean confidence interval for instructions value: 1.00 1.00
95% mean confidence interval for instructions %-change: 0.57% 0.75%
Instructions are HURT.
total cycles in shared programs: 383007996 -> 383007992 (<.01%)
cycles in affected programs: 20542 -> 20538 (-0.02%)
helped: 6
HURT: 7
helped stats (abs) min: 2 max: 6 x̄: 5.33 x̃: 6
helped stats (rel) min: 0.11% max: 0.36% x̄: 0.32% x̃: 0.36%
HURT stats (abs) min: 4 max: 4 x̄: 4.00 x̃: 4
HURT stats (rel) min: 0.27% max: 0.27% x̄: 0.27% x̃: 0.27%
95% mean confidence interval for cycles value: -3.30 2.69
95% mean confidence interval for cycles %-change: -0.19% 0.19%
Inconclusive result (value mean confidence interval includes 0).
No changes on Iron Lake or GM45.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109404
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
Tested-by: [email protected]
Tested-by: Danylo Piliaiev <[email protected]>
| |
We emit an FBL instruction which only exists since Gen7. This prevents
the test from segfaulting when run with TEST_DEBUG=1.
Reviewed-by: Ian Romanick <[email protected]>
| |
Add compute shader initialization, assignment and cleanup to the
vl_compositor API. Set video compositor compute shader rendering as the
default when the pipe supports it.
Signed-off-by: James Zhu <[email protected]>
Reviewed-by: Christian König <[email protected]>
| |
Add a compute shader to support video compositor rendering.
Signed-off-by: James Zhu <[email protected]>
Acked-by: Christian König <[email protected]>
| |
Rename csc_matrix to shader_params, and increase the shader_params size
to store more constants for the compute shader.
Signed-off-by: James Zhu <[email protected]>
Reviewed-by: Christian König <[email protected]>
| |
Split the vl_compositor graphics shaders out of the vl_compositor API in
order to share the vl_compositor API with the vl_compositor compute shader
later.
Signed-off-by: James Zhu <[email protected]>
Reviewed-by: Christian König <[email protected]>
| |
Move the dirty define to the header file to share it with the compute shader.
Signed-off-by: James Zhu <[email protected]>
Reviewed-by: Christian König <[email protected]>
| |
In the opt_peel_initial_if optimization, when moving the continue list to
the end of the continue block, before the jump, it could happen that the
continue list itself also ends with a jump.
This would mean that we would have two jump instructions in a row: the
first one from the continue list and the second one from the continue
block.
As inserting an instruction after a jump is not allowed (and it does not
make sense, as it would not be executed), remove the jump from the
continue block and keep the one from the continue list, as it will be
executed first.
CC: Jason Ekstrand <[email protected]>
Reviewed-by: Caio Marcelo de Oliveira Filho <[email protected]>
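A hedged sketch of the fixup in NIR terms; the pass context is omitted and
the helper name is illustrative (nir.h assumed):

   /* Given the continue block and the last instruction of the continue list
    * being moved into it, drop the block's trailing jump when the list
    * already ends in one: two consecutive jumps are invalid, and only the
    * first would ever execute. */
   static void
   drop_duplicate_trailing_jump(nir_block *continue_block, nir_instr *list_last)
   {
      nir_instr *block_last = nir_block_last_instr(continue_block);

      if (list_last && list_last->type == nir_instr_type_jump &&
          block_last && block_last->type == nir_instr_type_jump)
         nir_instr_remove(block_last);
   }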
| |
opt_split_alu_of_phi moves an ALU instruction to the end of the continue
block. But if the continue block ends with a jump instruction (an explicit
"continue" instruction), then the ALU must be inserted before the jump,
as it is illegal to add instructions after the jump.
CC: Ian Romanick <[email protected]>
Fixes: 0881e90c099 ("nir: Split ALU instructions in loops that read phis")
Reviewed-by: Ian Romanick <[email protected]>
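A hedged sketch of the corrected insertion point (names illustrative,
nir.h assumed):

   /* Put the ALU at the end of the continue block, but before any trailing
    * jump, since NIR forbids instructions after a jump. */
   static void
   insert_alu_before_jump(nir_block *continue_block, nir_alu_instr *alu)
   {
      nir_instr *last = nir_block_last_instr(continue_block);

      if (last && last->type == nir_instr_type_jump)
         nir_instr_insert_before(last, &alu->instr);
      else
         nir_instr_insert_after_block(continue_block, &alu->instr);
   }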
| |
Instead of generating a GL_INVALID_ENUM error when the type or format
is incorrect while using glClear{Named}Buffer{Sub}Data, generate
GL_INVALID_VALUE.
From page 72 (page 94 of the PDF) of the OpenGL 4.6 spec:
" An INVALID_VALUE error is generated if type is not one of the
types in table 8.2.
An INVALID_VALUE error is generated if format is not one of the
formats in table 8.3."
Fixes the following test:
KHR-GL45.direct_state_access.buffers_errors
v2: correct the doxygen documentation.
Cc: Pi Tabred <[email protected]>
Cc: Brian Paul <[email protected]>
Signed-off-by: Andres Gomez <[email protected]>
Reviewed-by: Tapani Pälli <[email protected]>
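A hedged fragment of what the changed error path looks like; ctx, func and
the validity flags are assumed surrounding context, only _mesa_error and
the error tokens are real:

   if (!valid_format || !valid_type) {
      /* Previously GL_INVALID_ENUM; per OpenGL 4.6 p.72, a format or type
       * outside tables 8.3/8.2 is an INVALID_VALUE error. */
      _mesa_error(ctx, GL_INVALID_VALUE, "%s(invalid format or type)", func);
      return;
   }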
| |
We've noticed the Team Fortress 2 engine seems to do many small
calls to glSubData(..). Let's pick our heuristic based on the
resource base width, not the size of a particular upload.
This will cause transfers to be batched together in the transfer
queue.
Relevant glbench microbenchmark --
Before: buffer_upload_dynamic_element_array_131072 = 131.17 mbytes_sec
After: buffer_upload_dynamic_element_array_131072 = 6828.24 mbytes_sec
Reviewed-by: Gert Wollny <[email protected]>
| |
This improves Unigine Valley benchmark by 3 to 10 fps (depending
on the scene).
It also improves the Team Fortress 2 benchmark from 6 fps to 13
fps (host: 20 fps).
Reviewed-by: Gert Wollny <[email protected]>
| |
Transfers will be placed here at unmap time instead of incurring
a VM exit. There's an attempt to deduplicate intersecting 1D transfers,
which are surprisingly common.
This can also help with mipmapped texture upload and smaller
textures, where the majority of the time is spent in the guest
kernel / QEMU -- not virglrenderer. This is shown by the GLbench
texture upload benchmark:
Before:
texture_upload_rgba_teximage2d_32 = 64.23 mtexel_sec
After:
texture_upload_rgba_teximage2d_32 = 367.44 mtexel_sec
v2: Split up list iteration functions (@gerddie)
v3: Support for optimizing glBufferSubData
Reviewed-by: Gert Wollny <[email protected]>
| |
Let's encode the new protocol with new helper functions.
Reviewed-by: Gert Wollny <[email protected]>
| |
The idea is to have two command buffers:
1) One for transfers
2) One for commands, which can include transfers
At flush time, (2) will be filled. Otherwise, (1) will be
used to submit transfers if there are enough of them.
v2: Pass size directly to cmd_buf_create (@gerddie)
Reviewed-by: Gert Wollny <[email protected]>
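A hedged sketch of the arrangement described above; the field names are
illustrative rather than the actual virgl_context layout:

   struct virgl_context_cbufs {
      /* (2) regular command stream, may embed transfers; filled at flush. */
      struct virgl_cmd_buf *cbuf;
      /* (1) transfer-only buffer, submitted when enough transfers queue up. */
      struct virgl_cmd_buf *transfer_cbuf;
   };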
| |
This is motivated by the following scenario:
glSubBufferData(GL_ARRAY_BUFFER, ...)
glFlush(..)
glSubBufferData(GL_ARRAY_BUFFER, ...)
glSubBufferData(GL_ARRAY_BUFFER, ...)
glSubBufferData(GL_ARRAY_BUFFER, ...)
This increases @davidriley's Team Fortress 2 apitrace from
1 fps to 6 fps and helps with the Chromium glbench
microbenchmarks:
Before: texture_update_rgba_texsubimage2d_2048 = 554.96 mtexel_sec
buffer_upload_dynamic_array_12 = 0.02 mbytes_sec
buffer_upload_dynamic_array_576 = 1.07 mbytes_sec
After: texture_update_rgba_texsubimage2d_2048 = 612.29 mtexel_sec
buffer_upload_dynamic_array_12 = 2.22 mbytes_sec
buffer_upload_dynamic_array_576 = 164.89 mbytes_sec
Reviewed-by: Gert Wollny <[email protected]>
| |
Reviewed-by: Gert Wollny <[email protected]>
| |
It's good to keep track of these things.
Reviewed-by: Gert Wollny <[email protected]>
| |
Much of our logic is based around the idea that the upper 16 bits
of a command dword can encode the length of the command.
Now that the command buffer can be >= 2^16 - 1 dwords, we should check
for this.
v2: alignment, and only check VIRGL_ENCODE_MAX_DWORDS
Reviewed-by: Gert Wollny <[email protected]>
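For reference, the header packing that imposes the limit (the VIRGL_CMD0
macro matches virgl_protocol.h as I recall it; the exact definition of
VIRGL_ENCODE_MAX_DWORDS here is a guess, shown only for illustration):

   /* The length of a command lives in the upper 16 bits of its header dword,
    * so one command can carry at most 2^16 - 1 payload dwords. */
   #define VIRGL_CMD0(cmd, obj, len) ((cmd) | ((obj) << 8) | ((len) << 16))
   #define VIRGL_ENCODE_MAX_DWORDS ((1 << 16) - 1)   /* illustrative */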
| |
Let's define a helper function and use it.
This commit also allows resources to be emitted into different command
buffers.
Like the ioctls, send 0 for layer_stride and stride. If we actually
send the real values, there are various assumptions in virglrenderer
for non-1D buffers that may need to be modified.
Reviewed-by: Gert Wollny <[email protected]>
| |
Mostly similar to VIRGL_CCMD_RESOURCE_INLINE_WRITE. However, this
uses the resource's already attached iovecs rather than the command
buffer to transfer the data.
v2: Used (1 << 16) not (1 << 15) [@gerddie]
Reviewed-by: Gert Wollny <[email protected]>
| |
This will allow us to destroy transfers w/o having a pointer
to the context.
Reviewed-by: Gert Wollny <[email protected]>
| |
This should save some memory when allocating and freeing transfers.
Reviewed-by: Gert Wollny <[email protected]>
| |
Since we're just uploading to guest memory, let's just align to dword
size.
Fixes: e0f932 ("u_upload_mgr: pass alignment to u_upload_data manually")
Reviewed-by: Gert Wollny <[email protected]>
| |
This allows a minor optimization for texture upload.
Reviewed-by: Gert Wollny <[email protected]>
| |
The guest memory is still clean until host GL touches it,
which we should track elsewhere.
Reviewed-by: Gert Wollny <[email protected]>
| |
Reviewed-by: Gert Wollny <[email protected]>