path: root/src/intel
* intel/compiler: Add a file-level description of brw_eu_validate.c (Matt Turner, 2019-01-26, 1 file, -1/+13)
  Acked-by: Jose Maria Casanova Crespo <[email protected]>
  Reviewed-by: Iago Toral Quiroga <[email protected]>
  Reviewed-by: Francisco Jerez <[email protected]>
* android: fix build issues with libmesa_anv_gen* libraries (Tapani Pälli, 2019-01-25, 1 file, -0/+1)
  We need this include path to find nir/nir_xfb_info.h.
  Signed-off-by: Tapani Pälli <[email protected]>
  Reviewed-by: Eric Engestrom <[email protected]>
* intel/batch-decoder: fix a vb end address calculation (Andrii Simiklit, 2019-01-25, 1 file, -1/+3)
  The loop in the 'ctx_print_buffer' function advances dword by dword
  over the vertex buffer (vb), so the vb size should be aligned to
  4 bytes as well.
  Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109449
  Signed-off-by: Andrii Simiklit <[email protected]>
  Reviewed-by: Lionel Landwerlin <[email protected]>
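  To illustrate the alignment constraint above, a minimal C sketch of a
  dword-stepping decode loop (print_dwords and ALIGN are illustrative
  names, not the decoder's actual code):

      #include <stdint.h>
      #include <stdio.h>

      /* Round v up to the next multiple of a (a power of two). */
      #define ALIGN(v, a) (((v) + (a) - 1) & ~((a) - 1))

      static void print_dwords(const uint32_t *map, uint32_t vb_size)
      {
         /* One 32-bit dword is consumed per iteration, so a size that
          * is not a multiple of 4 would stop short of the final,
          * partially covered dword; rounding the size up fixes that. */
         for (uint32_t offset = 0; offset < ALIGN(vb_size, 4); offset += 4)
            printf("0x%08x\n", map[offset / 4]);
      }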
* intel/batch-decoder: fix vertex buffer size calculation for gen<8 (Andrii Simiklit, 2019-01-25, 1 file, -1/+1)
  The size should be incremented by one, matching how
  'emit_vertex_buffer_state' computes the end address:

      #if GEN_GEN < 8
         .BufferAccessType = step_rate ? INSTANCEDATA : VERTEXDATA,
         .InstanceDataStepRate = step_rate,
      #if GEN_GEN >= 5
         .EndAddress = ro_bo(bo, end_offset - 1),
      #endif
      #endif

  Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109449
  Signed-off-by: Andrii Simiklit <[email protected]>
  Reviewed-by: Lionel Landwerlin <[email protected]>
* anv: drop always-successful VkResult (Eric Engestrom, 2019-01-25, 1 file, -9/+4)
  Signed-off-by: Eric Engestrom <[email protected]>
  Reviewed-by: Jason Ekstrand <[email protected]>
* anv/allocator: Avoid race condition in anv_block_pool_map. (Rafael Antognolli, 2019-01-24, 2 files, -6/+27)
  Accessing bo->map and then pool->center_bo_offset without a lock is
  racy. One way of avoiding such a race is to store bo->map +
  center_bo_offset into pool->map at the time the block pool grows,
  which happens within a lock.

  v2: Only set pool->map if not using softpin (Jason).
  v3: Move things around and only update center_bo_offset if not using
      softpin too (Jason).

  Cc: Jason Ekstrand <[email protected]>
  Reported-by: Ian Romanick <[email protected]>
  Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109442
  Fixes: fc3f58832015cbb177179e7f3420d3611479b4a9
  Reviewed-by: Jason Ekstrand <[email protected]>
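  A hedged sketch of the pattern the fix describes (field names follow
  the commit text; the locking and growth logic around them are
  simplified assumptions):

      #include <stdint.h>

      struct anv_block_pool {
         void *map;                 /* cached bo_map + center_bo_offset */
         uint32_t center_bo_offset;
         void *bo_map;              /* stands in for bo->map */
      };

      /* Called while growing the pool, with the pool's lock held:
       * publish a single pointer so that readers no longer combine two
       * racily-read fields. */
      static void block_pool_update_map(struct anv_block_pool *pool)
      {
         pool->map = (char *)pool->bo_map + pool->center_bo_offset;
      }

      /* Readers perform one load of pool->map instead of reading
       * bo->map and center_bo_offset separately without the lock. */
      static void *block_pool_map(struct anv_block_pool *pool,
                                  int32_t offset)
      {
         return (char *)pool->map + offset;
      }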
* intel/compiler: Reset default flag register in brw_find_live_channel() (Matt Turner, 2019-01-23, 1 file, -2/+11)
  emit_uniformize() emits SHADER_OPCODE_FIND_LIVE_CHANNEL with its
  flag_subreg set, so that the IR knows which flag is accessed. However,
  the flag is only used on Gen7 in Align1 mode. To avoid setting
  unnecessary bits in the instruction words, get the information we need
  and reset the default flag register. This allows round-tripping
  through the assembler/disassembler.
  Reviewed-by: Francisco Jerez <[email protected]>
* anv: Implement transform feedback queries (Jason Ekstrand, 2019-01-22, 3 files, -2/+73)
  Reviewed-by: Lionel Landwerlin <[email protected]>
* genxml: Add SO_PRIM_STORAGE_NEEDED and SO_NUM_PRIMS_WRITTEN (Jason Ekstrand, 2019-01-22, 6 files, -0/+192)
  Reviewed-by: Lionel Landwerlin <[email protected]>
* anv: Implement CmdBegin/EndQueryIndexed (Jason Ekstrand, 2019-01-22, 1 file, -1/+20)
  Reviewed-by: Lionel Landwerlin <[email protected]>
* anv: Implement vkCmdDrawIndirectByteCountEXT (Jason Ekstrand, 2019-01-22, 2 files, -1/+148)
  Annoyingly, this requires that we implement integer division on the
  command streamer. Fortunately, we're only ever dividing by constants,
  so we can use the mulh+add+shift trick and it's not as bad as it
  sounds.
  Reviewed-by: Lionel Landwerlin <[email protected]>
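  For reference, the arithmetic behind the trick, as a hedged C sketch
  (the magic constant below is the standard one for dividing by 3; the
  commit's actual constants depend on the divisor in question):

      #include <assert.h>
      #include <stdint.h>

      /* Divide by the constant 3 using only a widening multiply
       * ("mulh", i.e. keeping the high bits of the product) and a
       * shift: n / 3 == (n * 0xAAAAAAAB) >> 33 for every 32-bit n.
       * Divisors whose magic constant does not fit in 32 bits also
       * need an "add" fixup step, hence mulh+add+shift. */
      static uint32_t div3(uint32_t n)
      {
         uint64_t t = (uint64_t)n * 0xAAAAAAABull;
         return (uint32_t)(t >> 33);
      }

      int main(void)
      {
         /* Exhaustively check the identity over all 32-bit values. */
         for (uint64_t n = 0; n <= UINT32_MAX; n++)
            assert(div3((uint32_t)n) == (uint32_t)n / 3);
         return 0;
      }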
* anv: Implement the basic form of VK_EXT_transform_feedback (Jason Ekstrand, 2019-01-22, 7 files, -2/+329)
  Reviewed-by: Lionel Landwerlin <[email protected]>
* anv: Add pipeline cache support for xfb_info (Jason Ekstrand, 2019-01-22, 4 files, -9/+52)
  Reviewed-by: Lionel Landwerlin <[email protected]>
* anv: Add but do not enable VK_EXT_transform_feedback (Jason Ekstrand, 2019-01-22, 1 file, -0/+1)
  Reviewed-by: Lionel Landwerlin <[email protected]>
* anv: Always emit at least one vertex element (Jason Ekstrand, 2019-01-22, 1 file, -3/+1)
  This seems to make the simulator happier. The early return wasn't
  really protecting anything, and the code that follows will happily
  initialize the dummy element to STORE_0 and emit it.
  Reviewed-by: Alejandro Piñeiro <[email protected]>
  Reviewed-by: Lionel Landwerlin <[email protected]>
* anv/pipeline: Add a pdevice helper variable (Jason Ekstrand, 2019-01-21, 1 file, -9/+10)
  Reviewed-by: Lionel Landwerlin <[email protected]>
  Reviewed-by: Iago Toral Quiroga <[email protected]>
* anv: Only parse pImmutableSamplers if the descriptor has samplers (Jason Ekstrand, 2019-01-21, 1 file, -12/+31)
  Cc: [email protected]
  Reviewed-by: Tapani Pälli <[email protected]>
  Reviewed-by: Lionel Landwerlin <[email protected]>
  Reviewed-by: Iago Toral Quiroga <[email protected]>
* nir: replace more nir_load_system_value calls with builder functions (Karol Herbst, 2019-01-21, 1 file, -2/+1)
  Signed-off-by: Karol Herbst <[email protected]>
  Reviewed-by: Jason Ekstrand <[email protected]>
  Reviewed-by: Bas Nieuwenhuizen <[email protected]>
* nir: rename nir_var_ssbo to nir_var_mem_ssbo (Karol Herbst, 2019-01-19, 1 file, -1/+1)
  Signed-off-by: Karol Herbst <[email protected]>
  Acked-by: Jason Ekstrand <[email protected]>
  Reviewed-by: Eric Anholt <[email protected]>
  Reviewed-by: Kenneth Graunke <[email protected]>
  Reviewed-by: Bas Nieuwenhuizen <[email protected]>
* nir: rename nir_var_ubo to nir_var_mem_ubo (Karol Herbst, 2019-01-19, 1 file, -1/+1)
  Signed-off-by: Karol Herbst <[email protected]>
  Acked-by: Jason Ekstrand <[email protected]>
  Reviewed-by: Eric Anholt <[email protected]>
  Reviewed-by: Kenneth Graunke <[email protected]>
  Reviewed-by: Bas Nieuwenhuizen <[email protected]>
* nir: rename nir_var_function to nir_var_function_temp (Karol Herbst, 2019-01-19, 2 files, -9/+9)
  Signed-off-by: Karol Herbst <[email protected]>
  Acked-by: Jason Ekstrand <[email protected]>
  Reviewed-by: Eric Anholt <[email protected]>
  Reviewed-by: Kenneth Graunke <[email protected]>
  Reviewed-by: Bas Nieuwenhuizen <[email protected]>
* intel/genxml: add missing MI_PREDICATE compare operations (Lionel Landwerlin, 2019-01-19, 7 files, -1/+12)
  Doesn't save us a great deal of lines, but at least they get decoded
  in aubinators.
  Signed-off-by: Lionel Landwerlin <[email protected]>
  Reviewed-by: Rafael Antognolli <[email protected]>
* anv: document cache flushes & invalidations (Lionel Landwerlin, 2019-01-19, 1 file, -0/+67)
  A little bit of explanation regarding how vkCmdPipelineBarrier()
  works.

  v2: Avoid referring to the data port cache when it's actually the
      sampler caches (Jason)
      Complete the explanation for indirect draws (Jason)
  v3: s/samplers/sampler/ (Jason)
      s/UBOs/data port/
      Add documentation for VK_ACCESS_CONDITIONAL_RENDERING_READ_BIT_EXT
      (Lionel)

  Signed-off-by: Lionel Landwerlin <[email protected]>
  Acked-by: Eric Engestrom <[email protected]> (v1)
  Reviewed-by: Jason Ekstrand <[email protected]> (v2)
* anv: narrow flushing of the render target to buffer writes (Lionel Landwerlin, 2019-01-19, 6 files, -20/+15)
  In commit 9a7b3199037ac4 ("anv/query: flush render target before
  copying results") we tracked all render target writes in order to
  apply a flush in vkCopyQueryResults(). But we can narrow this down to
  only the case where we write a buffer (which is the only input of
  vkCopyQueryResults).

  v2: Drop newer render target write flags introduced by 1952fd8d2ce905
      ("anv: Implement VK_EXT_conditional_rendering for gen 7.5+")

  Signed-off-by: Lionel Landwerlin <[email protected]>
  Reviewed-by: Jason Ekstrand <[email protected]> (v1)
* intel/fs: Promote execution type to 32-bit when any half-float conversion is needed (Francisco Jerez, 2019-01-18, 1 file, -0/+21)
  The docs are fairly incomplete and inconsistent about it, but this
  seems to be the reason why half-float destinations are required to be
  DWORD-aligned on BDW+ projects. This way the regioning lowering pass
  will make sure that the destination components of W to HF and HF to W
  conversions are aligned like the corresponding conversion operation
  with a 32-bit execution data type.
  Tested-by: Iago Toral Quiroga <[email protected]>
  Reviewed-by: Iago Toral Quiroga <[email protected]>
  Reviewed-by: Jason Ekstrand <[email protected]>
* anv: Implement VK_EXT_conditional_rendering for gen 7.5+ (Danylo Piliaiev, 2019-01-18, 7 files, -14/+265)
  Conditional rendering affects the following functions:
  - vkCmdDraw, vkCmdDrawIndexed, vkCmdDrawIndirect,
    vkCmdDrawIndexedIndirect
  - vkCmdDrawIndirectCountKHR, vkCmdDrawIndexedIndirectCountKHR
  - vkCmdDispatch, vkCmdDispatchIndirect, vkCmdDispatchBase
  - vkCmdClearAttachments

  The value from the conditional buffer is cached in a designated
  register; MI_PREDICATE is emitted every time conditional rendering is
  enabled and a command requires it.

  v2: by Jason Ekstrand
      - Use vk_find_struct_const instead of manually looping
      - Move draw count loading to prepare function
      - Zero the top 32 bits of MI_ALU_REG15
  v3: Apply pipeline flush before accessing the conditional buffer
      (the issue was found by Samuel Iglesias)
  v4: - Remove support for Haswell due to a possible hardware bug
      - Made TMP_REG_PREDICATE and TMP_REG_DRAW_COUNT defines to define
        registers in one place.
  v5: thanks to Jason Ekstrand and Lionel Landwerlin
      - Work around the fact that MI_PREDICATE_RESULT is not accessible
        on Haswell by manually calculating MI_PREDICATE_RESULT and
        re-emitting MI_PREDICATE when necessary.
  v6: suggested by Lionel Landwerlin
      - Instead of calculating the result of the predicate once, re-emit
        MI_PREDICATE to make it easier to investigate error states.
  v7: suggested by Jason
      - Make anv_pipe_invalidate_bits_for_access_flag add CS_STALL if
        VK_ACCESS_CONDITIONAL_RENDERING_READ_BIT is set.
  v8: suggested by Lionel
      - Precompute the conditional predicate's result to support
        secondary command buffers.
      - Make prepare_for_draw_count_predicate more readable.

  Signed-off-by: Danylo Piliaiev <[email protected]>
  Reviewed-by: Lionel Landwerlin <[email protected]>
  Reviewed-by: Jason Ekstrand <[email protected]>
* anv: Implement VK_KHR_draw_indirect_count for gen 7+ (Danylo Piliaiev, 2019-01-18, 2 files, -0/+148)
  v2: by Jason Ekstrand
      - Move the population of registers which aren't changed in the
        draw loop out of the loop.
      - Remove dependency on ALU registers.
      - Clarify usage of PIPE_CONTROL.
      - Without usage of ALU registers the patch works for gen7+.
  v3: set pending_pipe_bits |= ANV_PIPE_RENDER_TARGET_WRITES

  Signed-off-by: Danylo Piliaiev <[email protected]>
  Reviewed-by: Lionel Landwerlin <[email protected]>
  Reviewed-by: Jason Ekstrand <[email protected]>
* anv: Re-sort the extensions list (Jason Ekstrand, 2019-01-18, 1 file, -6/+6)
  I like to keep things in good order so that you can find them.
  Reviewed-by: Lionel Landwerlin <[email protected]>
* intel/fs: Don't touch accumulator destination while applying regioning alignment rule (Jason Ekstrand, 2019-01-18, 1 file, -1/+23)
  In some shaders, you can end up with a stride in the source of a
  SHADER_OPCODE_MULH. One way this can happen is if the MULH is acting
  on the top bits of a 64-bit value due to 64-bit integer lowering. In
  this case, the compiler will produce something like this:

      mul(8)   acc0<1>UD  g5<8,4,2>UD  0x0004UW      { align1 1Q };
      mach(8)  g6<1>UD    g5<8,4,2>UD  0x00000004UD  { align1 1Q AccWrEnable };

  The new region fixup pass looks at the MUL and sees a strided source
  and unstrided destination and determines that the sequence is illegal.
  It then attempts to fix the illegal stride by replacing the
  destination of the MUL with a temporary and emitting a MOV into the
  accumulator:

      mul(8)   g9<2>UD    g5<8,4,2>UD  0x0004UW      { align1 1Q };
      mov(8)   acc0<1>UD  g9<8,4,2>UD                { align1 1Q };
      mach(8)  g6<1>UD    g5<8,4,2>UD  0x00000004UD  { align1 1Q AccWrEnable };

  Unfortunately, this new sequence isn't correct because MOV accesses
  the accumulator with a different precision to MUL and, instead of
  filling the bottom 32 bits with the source and zeroing the top 32
  bits, it leaves the top 32 (or maybe 31) bits alone and full of
  garbage. When the MACH comes along and tries to complete the
  multiplication, the result is correct in the bottom 32 bits (which we
  throw away) and garbage in the top 32 bits which are actually returned
  by MACH.

  This commit does two things: First, it adds an assert to ensure that
  we don't try to rewrite accumulator destinations of MUL instructions
  so we can avoid this precision issue. Second, it modifies
  required_dst_byte_stride to require a tightly packed stride so that we
  fix up the sources instead and the actual code which gets emitted is
  this:

      mov(8)   g9<1>UD    g5<8,4,2>UD                { align1 1Q };
      mul(8)   acc0<1>UD  g9<8,8,1>UD  0x0004UW      { align1 1Q };
      mach(8)  g6<1>UD    g5<8,4,2>UD  0x00000004UD  { align1 1Q AccWrEnable };

  Fixes: efa4e4bc5fc "intel/fs: Introduce regioning lowering pass"
  Reviewed-by: Francisco Jerez <[email protected]>
* intel/eu: Stop overriding exec sizes in send_indirect_message (Jason Ekstrand, 2019-01-18, 1 file, -3/+0)
  For a long time, we based exec sizes on destination register widths.
  We've not been doing that since 1ca3a9442760b6f7, but a few remnants
  accidentally remained.
  Reviewed-by: Anuj Phogat <[email protected]>
* anv/tests: Adding test for the state_pool padding. (Rafael Antognolli, 2019-01-17, 2 files, -1/+75)
  Add a test that checks that we can use the extra space allocated as
  padding when allocating larger anv_states.
  Reviewed-by: Jason Ekstrand <[email protected]>
* anv/allocator: Add support for non-userptr. (Rafael Antognolli, 2019-01-17, 1 file, -46/+71)
  If softpin is supported, create new BOs for the required size and add
  the respective BO maps. The other main change of this commit is that
  anv_block_pool_map() now returns the map for the BO that the given
  offset is part of, so there's no block_pool->map access anymore when
  softpin is used.

  v3: - Set fd to -1 in the softpin case (Jason).

  Reviewed-by: Jason Ekstrand <[email protected]>
* anv: Remove state flush. (Rafael Antognolli, 2019-01-17, 10 files, -51/+2)
  We have all the state buffers snooped, so we don't need to clflush
  everything anymore.
  Reviewed-by: Jason Ekstrand <[email protected]>
* anv/allocator: Enable snooping on block pool and anv_bo_pool BOs. (Rafael Antognolli, 2019-01-17, 1 file, -10/+16)
  We are not going to use userptr for anv block pool BOs anymore.
  However, so far we have been relying on the fact that userptr BOs are
  snooped on non-LLC platforms. Let's make sure that the block pool BOs
  are still snooped, so that we can also remove the clflush'ing that we
  do on all state buffers.

  And since we plan to remove the flushes, set the anv_bo_pool BOs to
  cached (snooped on non-LLC platforms) too. On LLC platforms, all BOs
  are cached by default, so this becomes a no-op.

  v5: - Add snooping to anv_bo_pool BOs too (Jason).
      - Remove anv_gem_set_domain.

  Reviewed-by: Jason Ekstrand <[email protected]>
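  Snooping is requested through the i915 set-caching ioctl; a minimal
  sketch of such a helper (anv has its own wrapper; error handling and
  the include path for the i915 uAPI header are simplified assumptions
  here):

      #include <stdint.h>
      #include <string.h>
      #include <sys/ioctl.h>
      #include <drm/i915_drm.h>

      /* Mark a GEM buffer object as cacheable. On non-LLC platforms
       * this makes the BO snooped, so CPU writes become visible to the
       * GPU without explicit clflushes; on LLC platforms BOs are
       * effectively cached already. */
      static int gem_set_caching(int fd, uint32_t gem_handle)
      {
         struct drm_i915_gem_caching arg;
         memset(&arg, 0, sizeof(arg));
         arg.handle = gem_handle;
         arg.caching = I915_CACHING_CACHED;

         return ioctl(fd, DRM_IOCTL_I915_GEM_SET_CACHING, &arg);
      }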
* anv/allocator: Add padding information. (Rafael Antognolli, 2019-01-17, 3 files, -10/+49)
  It's possible that we still have some space left in the block pool,
  but we try to allocate a state larger than that space. Such a state
  would start somewhere within the range of the old block_pool and end
  after that range, within the range of the new size. That's fine when
  we use userptr, since the memory in the block pool is CPU-mapped
  contiguously. However, by the end of this series, the block_pool will
  be split into different BOs, with different CPU mapping ranges that
  are not necessarily contiguous. So we must avoid the case of a given
  state being part of two different BOs in the block pool.

  This commit solves the issue by detecting that we are growing the
  block_pool even though we are not at the end of the range. If that
  happens, we don't use the space left at the end of the old size, and
  consider it as "padding" that can't be used in the allocation. We
  update the size requested from the block pool to take the padding into
  account, and return the offset after the padding, which happens to be
  at the start of the new address range.

  Additionally, we return the amount of padding we used, so the caller
  knows that this happened and can return that padding back into a list
  of free states, which can be reused later. This way we hopefully don't
  waste any space, but also avoid having a state split between two
  different BOs.

  v3: - Calculate offset + padding at anv_block_pool_alloc_new (Jason).
  v4: - Remove extra "leftover".

  Reviewed-by: Jason Ekstrand <[email protected]>
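  The padding computation above reduces to a few lines; a hedged sketch
  of the idea (illustrative names, not the exact anv_block_pool_alloc
  code):

      #include <stdint.h>

      /* Allocating 'size' bytes at 'offset' in a pool whose old range
       * ends at 'pool_end': if the state would straddle the boundary
       * (and thus potentially two BOs), skip the leftover bytes and
       * report them as padding so the caller can recycle them as free
       * states. */
      static uint32_t alloc_offset_with_padding(uint32_t offset,
                                                uint32_t size,
                                                uint32_t pool_end,
                                                uint32_t *padding_out)
      {
         uint32_t padding = 0;
         if (offset < pool_end && offset + size > pool_end) {
            padding = pool_end - offset; /* unusable tail of old range */
            offset = pool_end;           /* state starts in new range */
         }
         *padding_out = padding;
         return offset;
      }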
* anv/allocator: Rework chunk return to the state pool. (Rafael Antognolli, 2019-01-17, 1 file, -23/+64)
  This commit reworks the code that splits and returns chunks back to
  the state pool, while keeping the same logic.

  The original code would get a chunk larger than we need and split it
  into pool->block_size chunks. Then it would return all but the first
  one, split that first one into alloc_size chunks, keep the first of
  those (for the allocation), and return the others back to the pool.

  The new anv_state_pool_return_chunk() function takes a chunk (with the
  alloc_size part removed) and a small_size hint. It splits that chunk
  into pool->block_size'd chunks, and if there's some space still left,
  splits that into small_size chunks. small_size in this case is the
  same size as alloc_size.

  The idea is to keep the same logic, but implement it in a way that we
  can reuse to return other chunks to the pool when we are growing the
  buffer.

  v2: - Include Jason's suggestions to the algorithm that returns
        chunks.
      - Update comments.
  v3: - Disallow returning 0 blocks (Jason).
      - Fix min_size in the loop (Jason).
      - Remove temporary variables (Jason).
  v4: - return_chunk() should never return blocks larger than
        pool->block_size.

  Reviewed-by: Jason Ekstrand <[email protected]>
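  A hedged sketch of the splitting order described above (push_free is
  a stand-in for returning a state of a given offset/size to the pool's
  free lists):

      #include <stdint.h>

      /* Return the chunk [offset, offset + size) to the pool: carve
       * off as many full block_size states as fit, then split any
       * remainder into small_size (== alloc_size) states. No returned
       * state is ever larger than block_size. */
      static void return_chunk(uint32_t offset, uint32_t size,
                               uint32_t block_size, uint32_t small_size,
                               void (*push_free)(uint32_t off, uint32_t sz))
      {
         uint32_t end = offset + size;

         while (offset + block_size <= end) {
            push_free(offset, block_size);
            offset += block_size;
         }
         /* The leftover is smaller than a block; hand it back in
          * small_size pieces so it can satisfy allocations of the
          * current size. */
         while (offset + small_size <= end) {
            push_free(offset, small_size);
            offset += small_size;
         }
      }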
* anv: Remove some asserts. (Rafael Antognolli, 2019-01-17, 1 file, -3/+0)
  They won't be true anymore once we add support for multiple BOs with
  non-userptr.
  Reviewed-by: Jason Ekstrand <[email protected]>
* anv: Validate the list of BOs from the block pool. (Rafael Antognolli, 2019-01-17, 1 file, -5/+49)
  We now have multiple BOs in the block pool, but sometimes we still
  reference only the first one in some instructions and use relative
  offsets in others. So we must be sure to add all the BOs from the
  block pool to the validation list when submitting commands.

  v2: - Don't add block pool BOs to the dependency list right before
        execbuf (Jason)
      - Call anv_execbuf_add_bo() for each BO in the block pools (Jason)
      - Use anv_execbuf_add_bo_set() to add surface state dependencies
        to execbuf.
  v3: - Add a comment to the non-softpin case (Jason).

  Reviewed-by: Jason Ekstrand <[email protected]>
* anv: Split code to add BO dependencies to execbuf. (Rafael Antognolli, 2019-01-17, 1 file, -23/+39)
  This part of the anv_execbuf_add_bo() code is totally independent of
  the BO being added. Let's split it out so we can reuse it later.

  v3: rename to anv_execbuf_add_bo_set (Jason).

  Reviewed-by: Jason Ekstrand <[email protected]>
* anv/allocator: Add support for a list of BOs in block pool. (Rafael Antognolli, 2019-01-17, 2 files, -11/+59)
  So far we use only one BO (the last one created) in the block pool.
  When we stop using the userptr API, we will need multiple BOs, so add
  code now to store multiple BOs in the block pool.

  This has several implications, the main one being that we can't use
  pool->map as before. For that reason we update the getter to find
  which BO a given offset is part of, and return the respective map.

  v3: - Simplify anv_block_pool_map (Jason).
      - Use a fixed-size array for the anv_bo's (Jason).
  v4: - Respect the order (item, container) in anv_block_pool_foreach_bo
        (Jason).

  Reviewed-by: Jason Ekstrand <[email protected]>
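  A hedged sketch of the updated getter with multiple BOs (types
  reduced to the essentials; the real anv_block_pool_map also has to
  cope with the pool's growth scheme):

      #include <assert.h>
      #include <stddef.h>
      #include <stdint.h>

      #define MAX_BOS 16

      struct pool_bo {
         void *map;      /* CPU mapping of this BO */
         uint32_t size;  /* bytes covered by this BO */
      };

      struct block_pool {
         struct pool_bo bos[MAX_BOS]; /* fixed-size array, per v3 */
         int nbos;
      };

      /* Find which BO covers 'offset' and return the CPU pointer into
       * that BO's mapping; with several BOs there is no single
       * pool->map to offset into anymore. */
      static void *block_pool_map(struct block_pool *pool, uint32_t offset)
      {
         for (int i = 0; i < pool->nbos; i++) {
            if (offset < pool->bos[i].size)
               return (char *)pool->bos[i].map + offset;
            offset -= pool->bos[i].size;
         }
         assert(!"offset out of range");
         return NULL;
      }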
* anv: Update usage of block_pool->bo. (Rafael Antognolli, 2019-01-17, 7 files, -31/+36)
  Change block_pool->bo to be a pointer, and update its usage
  everywhere. This makes it simpler to switch it later to a list of
  BOs.

  v3: - Use a static "bos" field in the struct, instead of malloc'ing
        it. This will later be changed to a fixed-length array of BOs.

  Reviewed-by: Jason Ekstrand <[email protected]>
* anv/allocator: Remove pool->map. (Rafael Antognolli, 2019-01-17, 3 files, -19/+7)
  After switching to anv_state_table, there are very few places left
  still using pool->map directly. We want to avoid that because it
  won't always be the right map once we split the pool into multiple
  BOs.
  Reviewed-by: Jason Ekstrand <[email protected]>
* anv/allocator: Rename anv_free_list2 to anv_free_list. (Rafael Antognolli, 2019-01-17, 2 files, -31/+30)
  Now that the original anv_free_list is gone, we can use its name.
  Reviewed-by: Jason Ekstrand <[email protected]>
* anv/allocator: Remove anv_free_list. (Rafael Antognolli, 2019-01-17, 2 files, -66/+0)
  The next commit renames anv_free_list2 -> anv_free_list, since the
  old one is gone.
  Reviewed-by: Jason Ekstrand <[email protected]>
* anv/allocator: Use anv_state_table on back_alloc too. (Rafael Antognolli, 2019-01-17, 2 files, -15/+22)
  Reviewed-by: Jason Ekstrand <[email protected]>
* anv/allocator: Use anv_state_table on anv_state_pool_alloc. (Rafael Antognolli, 2019-01-17, 2 files, -35/+48)
  Use anv_state_pool_return_blocks() to return blocks to the pool,
  instead of manually pushing them.

  v3: - Return blocks from the end of the chunk (Jason).

  Reviewed-by: Jason Ekstrand <[email protected]>
* anv/allocator: Add helper to push states back to the state table. (Rafael Antognolli, 2019-01-17, 2 files, -0/+35)
  The use of anv_state_table_add() combined with anv_state_table_push(),
  especially when adding a bunch of states to the table, is very
  verbose. So we add this helper that makes things easier to digest.

  We also add the anv_state_table member in this commit, so things can
  compile properly, even though it's not used yet.

  v2: assert that the states are always aligned to their size (Jason)
  v3: Add "table" member to anv_state_pool in this commit.

  Reviewed-by: Jason Ekstrand <[email protected]>
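  A hedged sketch of the shape of such a helper, over a deliberately
  tiny table (table_add and table_push are simplified stand-ins for the
  anv_state_table primitives, not their real signatures):

      #include <assert.h>
      #include <stdint.h>

      #define TABLE_SIZE 1024

      struct state { int32_t offset; uint32_t alloc_size; uint32_t next; };

      struct state_table {
         struct state states[TABLE_SIZE];
         uint32_t used;      /* next never-used slot */
         uint32_t free_head; /* head of the free list, or UINT32_MAX */
      };

      /* Reserve 'count' consecutive slots; return the first index. */
      static uint32_t table_add(struct state_table *t, uint32_t count)
      {
         assert(t->used + count <= TABLE_SIZE);
         uint32_t first = t->used;
         t->used += count;
         return first;
      }

      /* Link filled slots onto the free list ("push"). */
      static void table_push(struct state_table *t, uint32_t first,
                             uint32_t count)
      {
         for (uint32_t i = 0; i < count; i++) {
            t->states[first + i].next = t->free_head;
            t->free_head = first + i;
         }
      }

      /* The helper: return 'count' equally sized states starting at
       * 'offset' in one call, instead of open-coding add + fill + push
       * at every call site. */
      static void return_blocks(struct state_table *t, int32_t offset,
                                uint32_t count, uint32_t block_size)
      {
         uint32_t first = table_add(t, count);
         for (uint32_t i = 0; i < count; i++) {
            t->states[first + i].offset = offset + (int32_t)(i * block_size);
            t->states[first + i].alloc_size = block_size;
            /* v2: states are always aligned to their size */
            assert(t->states[first + i].offset % (int32_t)block_size == 0);
         }
         table_push(t, first, count);
      }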
* anv/allocator: Add getter for anv_block_pool. (Rafael Antognolli, 2019-01-17, 4 files, -5/+18)
  We will need anv_block_pool_map to find the map relative to some BO
  that is not at the start of the block pool.

  v2: just return a pointer instead of a struct (Jason)
  v4: update comment (Jason)

  Reviewed-by: Jason Ekstrand <[email protected]>
* anv/allocator: Add anv_state_table. (Rafael Antognolli, 2019-01-17, 2 files, -2/+283)
  Add a structure to hold anv_states. This table will initially be used
  to recycle anv_states, instead of relying on a linked list implemented
  in GPU memory. Later it could be used so that all anv_states just
  point into the contents of this struct, instead of copies of
  anv_states being made everywhere.

  One has to call anv_state_table_add(), which returns an index for the
  state in the table, then get a pointer to that index, and finally
  fill in the rest of the struct.

  TODO:
  1) There's a lot of common code between this table's backing-store
     memory and the anv_block_pool buffer, due to how we grow it. I
     think it's possible to refactor this and reuse code in both
     places.
  2) Add unit tests.

  v3: - Rename state table memfd (Jason)
      - Return VK_ERROR_OUT_OF_HOST_MEMORY in more places (Jason)
      - anv_state_table_grow returns VkResult (Jason)
      - Rename variables to be more informative (Jason)
      - Return errors on state table grow.
      - Rename anv_state_table_push/pop to anv_free_list_push2/pop2;
        this will be renamed again to remove the trailing "2" later.
  v4: - Remove exit(-1) from anv_state_table (Jason).
      - Use a uint32_t "next" field in anv_free_entry (Jason).

  Reviewed-by: Jason Ekstrand <[email protected]>
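  The recycling itself becomes an ordinary CPU-side free list of table
  indices; a hedged, single-threaded sketch (the real implementation
  uses atomics so push/pop stay lock-free):

      #include <stdint.h>

      #define EMPTY UINT32_MAX

      struct anv_free_entry {
         uint32_t next; /* index of the next free entry (uint32_t, v4) */
         /* ...the anv_state payload lives alongside this... */
      };

      struct anv_state_table {
         struct anv_free_entry *entries; /* memfd-backed, growable */
         uint32_t free_head;
      };

      /* No GPU-memory linked list needs to be walked: pushing and
       * popping free states touches only host memory, by index. */
      static void table_free_push(struct anv_state_table *t, uint32_t idx)
      {
         t->entries[idx].next = t->free_head;
         t->free_head = idx;
      }

      static uint32_t table_free_pop(struct anv_state_table *t)
      {
         uint32_t idx = t->free_head;
         if (idx != EMPTY)
            t->free_head = t->entries[idx].next;
         return idx; /* EMPTY if the list was empty */
      }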
* anv/tests: Fix block_pool_no_free test. (Rafael Antognolli, 2019-01-17, 1 file, -14/+14)
  There were two problems with this test.

  First, it was comparing "highest", which was -1, with a uint32_t, so
  the current value would never be higher than that and the assert
  would always be false. The test just never reached this point because
  of the next problem.

  Second, it was always looking for the highest value of each thread
  and storing it in thread_max. So a test case like this wouldn't work:

      [Thread]: [Blocks]
      [0]:      [0, 32, 64, 96]
      [1]:      [128, 160, 192, 224]
      [2]:      [256, 288, 320, 352]

  Not only would that skip values and iterate only over thread number
  2, instead of walking through all of them, but thread_max was also
  initialized to -1 and then compared to the unsigned
  blocks[i][next[i]].

  We fix that by taking the smallest value of each thread and checking
  whether it is lower than thread_min, which is initialized to
  INT32_MAX. That way we end up walking through all the blocks of all
  threads. We also change "blocks" to be int32_t instead of uint32_t,
  since in some places (alloc_blocks) it was already referenced as
  int32_t, and that fixes the comparison to -1.

  v2: - Keep "highest" initialized to -1, and change "blocks" to be
        int32_t.

  Reviewed-by: Jason Ekstrand <[email protected]>
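  A hedged sketch of the corrected validation loop, which is a k-way
  merge by minimum (simplified from the actual test):

      #include <assert.h>
      #include <stdint.h>

      #define NUM_THREADS 3
      #define BLOCKS_PER_THREAD 4

      /* Each thread records the block offsets it allocated, in order.
       * Merged across threads, the offsets must be strictly
       * increasing, i.e. no block may be handed out twice. */
      static void validate(int32_t blocks[NUM_THREADS][BLOCKS_PER_THREAD])
      {
         unsigned next[NUM_THREADS] = {0};
         int32_t highest = -1; /* blocks are int32_t, so -1 compares sanely */

         for (unsigned n = 0; n < NUM_THREADS * BLOCKS_PER_THREAD; n++) {
            /* Pick the smallest unconsumed block across all threads... */
            int32_t thread_min = INT32_MAX;
            int min_thread = -1;
            for (int i = 0; i < NUM_THREADS; i++) {
               if (next[i] < BLOCKS_PER_THREAD &&
                   blocks[i][next[i]] < thread_min) {
                  thread_min = blocks[i][next[i]];
                  min_thread = i;
               }
            }
            /* ...and require it to be above everything seen so far. */
            assert(thread_min > highest);
            highest = thread_min;
            next[min_thread]++;
         }
      }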