path: root/src/util/hash_table.h
* util: Implement a hash table cloning function  (Thomas Helland, 2018-03-14, 1 file, -0/+2)
  V2: Don't rzalloc; we are about to rewrite the whole thing (Vladislav)

  Reviewed-by: Eric Anholt <[email protected]>
  Reviewed-by: Emil Velikov <[email protected]>
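
  A minimal usage sketch for the new cloning entry point, assuming it takes the
  source table plus a destination ralloc memory context (the signature and the
  helper around it are illustrative, not quoted from the header):

    #include "util/hash_table.h"

    /* Hypothetical helper: duplicate a symbol table into a new ralloc
     * context, assuming a clone function of roughly this shape:
     *   struct hash_table *_mesa_hash_table_clone(struct hash_table *src,
     *                                             void *dst_mem_ctx);
     */
    static struct hash_table *
    duplicate_symbol_table(void *new_mem_ctx, struct hash_table *symbols)
    {
       /* The clone refers to the same key/data pointers as the source
        * table, but its own storage lives in (and is freed with)
        * new_mem_ctx. */
       return _mesa_hash_table_clone(symbols, new_mem_ctx);
    }
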
* util: hashtable: make hashing prototypes match  (Lionel Landwerlin, 2017-10-30, 1 file, -1/+1)
  It seems nobody's using the string hashing function. If you try to pass it
  directly to the hash table creation function, you'll get a compiler warning
  for non-matching prototypes. Let's make them match.

  Signed-off-by: Lionel Landwerlin <[email protected]>
  Reviewed-by: Kenneth Graunke <[email protected]>
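
  A sketch of the mismatch being fixed, with an illustrative string hash
  written against the generic const void * callback prototype (all names here
  are made up; only the prototype shape matters):

    #include <stdint.h>

    /* Shape of the hash callback the table-creation function expects
     * (typedef name is illustrative). */
    typedef uint32_t (*key_hash_func)(const void *key);

    /* A string hash declared as uint32_t f(const char *key) cannot be
     * passed as a key_hash_func without a prototype-mismatch warning;
     * declaring it over const void * and casting inside makes the
     * prototypes match. */
    static uint32_t
    hash_string(const void *key)
    {
       const char *str = key;
       uint32_t hash = 2166136261u;              /* FNV-1a offset bias */
       for (; *str != '\0'; str++)
          hash = (hash ^ (uint8_t)*str) * 0x01000193u;
       return hash;
    }

    int
    main(void)
    {
       key_hash_func fn = hash_string;           /* prototypes now match */
       uint32_t h = fn("example");
       return h == 0;                            /* use h to avoid warnings */
    }
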
* mesa/util: add a hash table wrapper which supports 64-bit keys  (Samuel Pitoiset, 2017-06-14, 1 file, -0/+25)
  Needed for bindless handles, which are represented using 64-bit unsigned
  integers. All hash table implementations should be unified later on.

  Signed-off-by: Samuel Pitoiset <[email protected]>
  Reviewed-by: Nicolai Hähnle <[email protected]>
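
  A short usage sketch for the 64-bit wrapper, assuming entry points following
  the _mesa_hash_table_u64_* naming pattern (the bindless-handle helpers
  around them are hypothetical):

    #include <stdint.h>
    #include "util/hash_table.h"

    /* Keys are plain 64-bit handles, so no hash/compare callbacks are
     * needed when working with the wrapper table. */
    static void
    register_bindless_object(struct hash_table_u64 *handles,
                             uint64_t handle, void *object)
    {
       _mesa_hash_table_u64_insert(handles, handle, object);
    }

    static void *
    lookup_bindless_object(struct hash_table_u64 *handles, uint64_t handle)
    {
       return _mesa_hash_table_u64_search(handles, handle);
    }
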
* util: Change the pointer hashing function  (Thomas Helland, 2017-05-22, 1 file, -1/+2)
  Use our knowledge that pointers are at least 4-byte aligned to remove the
  useless low bits. Then shift by 6, 10, and 14 bits and add this to the
  original pointer, effectively folding the entropy of the higher bits of the
  pointer into a 4-bit section. Stopping at 14 means we can add the entropy
  from 18 bits, or at least a 600 Kbyte section of memory. Assuming that
  ralloc allocates from a linearly allocated heap smaller than this, we can
  make a very efficient pointer hashing function for our use case.

  Even if we are not on an architecture that is 4-byte aligned, there is
  still a high chance that the thing we are allocating is at least 8 bytes
  in size, so even then we will have entropy into the third bit.

  The 4-bit increment on the shifts is chosen rather arbitrarily; if we had
  chosen a 3-bit increment we would need to add another xor to cover a
  decently sized memory pool. Increasing it to 5 bits would spread our
  entropy more, possibly hurting us with more collisions on hash tables of
  size less than 32. With a hash table of size 16 there are at most 11
  entries, and we can assume that with such a small table collisions are not
  that painful.

  This allows us to hash the whole 32- or 64-bit pointer at once, instead of
  running FNV-1a, looping through each byte and doing increments, decrements,
  muls, and xors on every byte. This cuts _mesa_hash_data from 1.5% on
  profiles to making _mesa_hash_pointer show up with a 0.09% share.
  Collisions on insertion actually seem to be ever so slightly lower with
  this hash function, as found by printing a loop counter and sorting the
  data.

  perf stat shows a 1.5% reduction in instruction count and a 5% reduction
  in stalled cycles. Shader-db runtime goes from 225 to 220 seconds. There
  are no instruction-count changes in shader-db, but there are some minor
  changes in cycle count, likely caused by NIR walking a set in some of its
  passes and getting a different ordering. That might eventually lead to a
  difference in register allocation. However, the effect is a net positive:

    total cycles in shared programs: 24739550 -> 24738482 (-0.00%)
    cycles in affected programs: 374468 -> 373400 (-0.29%)
    helped: 178
    HURT: 49

  Reviewed-by: Marek Olšák <[email protected]>
  Reviewed-by: Eric Anholt <[email protected]>
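
  A standalone sketch of the scheme described above: drop the two always-zero
  low bits of a 4-byte-aligned pointer, then fold in copies shifted by 6, 10,
  and 14 bits so the entropy of the higher bits reaches the low bits that
  small tables actually use (folding with xor is assumed here):

    #include <stdint.h>

    static inline uint32_t
    hash_pointer(const void *pointer)
    {
       uintptr_t num = (uintptr_t) pointer;

       /* >> 2 discards the alignment bits; the 4-bit-spaced shifts fold
        * higher-order entropy into the low bits. */
       return (uint32_t) ((num >> 2) ^ (num >> 6) ^ (num >> 10) ^ (num >> 14));
    }
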
* util: Move hash_table_call_foreach to util hash table  (Thomas Helland, 2016-09-12, 1 file, -0/+13)
  It is included through the util/hash_table include in the program
  hash_table, so this should be safe. This will be needed when we start
  converting each use of the program_hash_table, as some places need this
  function.

  Signed-off-by: Thomas Helland <[email protected]>
  Reviewed-by: Timothy Arceri <[email protected]>
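
  Roughly what such a wrapper looks like when built on the util hash table's
  iteration macro (a sketch; the callback shape is assumed):

    #include "util/hash_table.h"

    /* Sketch: invoke a callback on every (key, data) pair, passing an
     * opaque closure through, using the table's foreach iteration macro. */
    static void
    call_foreach(struct hash_table *ht,
                 void (*callback)(const void *key, void *data, void *closure),
                 void *closure)
    {
       hash_table_foreach(ht, entry)
          callback(entry->key, entry->data, closure);
    }
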
* util: fix new gcc6 warnings  (Rob Clark, 2016-02-18, 1 file, -1/+3)
    src/util/hash_table.h:111:23: warning: ‘_mesa_fnv32_1a_offset_bias’ defined but not used [-Wunused-const-variable]
     static const uint32_t _mesa_fnv32_1a_offset_bias = 2166136261u;
                           ^~~~~~~~~~~~~~~~~~~~~~~~~

  Signed-off-by: Rob Clark <[email protected]>
  Reviewed-by: Ian Romanick <[email protected]>
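
  One way to silence that warning for a constant defined in a header is to
  turn it into an enumerator, which has no object that could be "unused"
  (a sketch of the approach, not necessarily the exact change in this commit):

    /* Before: every translation unit that includes the header but never
     * references the constant trips -Wunused-const-variable.
     *
     *    static const uint32_t _mesa_fnv32_1a_offset_bias = 2166136261u;
     *
     * After: an enumerator carries the same value with no storage. */
    enum {
       _mesa_fnv32_1a_offset_bias = 2166136261u,
    };
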
* util/hash_table: add _mesa_hash_table_num_entries  (Nicolai Hähnle, 2016-02-03, 1 file, -0/+5)
  Reviewed-by: Marek Olšák <[email protected]>
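
  An illustrative use of the new entry-count query (the helper around it is
  made up; the return type is assumed to be an unsigned count):

    #include <stdbool.h>
    #include "util/hash_table.h"

    /* Hypothetical helper: cheap emptiness check without iterating. */
    static bool
    table_is_empty(struct hash_table *ht)
    {
       return _mesa_hash_table_num_entries(ht) == 0;
    }
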
* util/hash_table: add _mesa_hash_table_clear (v4)  (Nicolai Hähnle, 2016-02-03, 1 file, -0/+2)
  v4: coding style change (Matt Turner)

  Reviewed-by: Ian Romanick <[email protected]> (v3)
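
  A usage sketch, assuming _mesa_hash_table_clear takes a per-entry delete
  callback in the same style as the destroy function (the cache helper is
  hypothetical):

    #include <stdlib.h>
    #include "util/hash_table.h"

    /* Called for each live entry as the table is emptied. */
    static void
    delete_cached_value(struct hash_entry *entry)
    {
       free(entry->data);
    }

    /* Hypothetical helper: reset a cache for reuse without destroying
     * and re-creating the table itself. */
    static void
    reset_cache(struct hash_table *cache)
    {
       _mesa_hash_table_clear(cache, delete_cached_value);
    }
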
* hash_table: Rename insert_with_hash to insert_pre_hashed  (Jason Ekstrand, 2015-01-15, 1 file, -2/+2)
  We already have search_pre_hashed. This makes the APIs match better.

  Reviewed-by: Matt Turner <[email protected]>
  Reviewed-by: Eric Anholt <[email protected]>
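
  A sketch of the pattern the pre-hashed entry points enable: hash the key
  once, then reuse that hash for both the lookup and the insertion (the
  interning helper and the choice of string hash are illustrative):

    #include <stdint.h>
    #include "util/hash_table.h"

    /* Hypothetical helper: return the existing value for key, or install
     * fresh_value if the key is not present, hashing the key only once. */
    static void *
    intern_value(struct hash_table *ht, const char *key, void *fresh_value)
    {
       uint32_t hash = _mesa_hash_string(key);

       struct hash_entry *entry =
          _mesa_hash_table_search_pre_hashed(ht, hash, key);
       if (entry)
          return entry->data;

       _mesa_hash_table_insert_pre_hashed(ht, hash, key, fresh_value);
       return fresh_value;
    }
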
* util/hash_table: Pull the details of the FNV-1a into helpers  (Jason Ekstrand, 2015-01-15, 1 file, -0/+19)
  This way the basics of the FNV-1a hash can be reused to easily create
  other hashing functions.

  Reviewed-by: Eric Anholt <[email protected]>
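
  A sketch of composing a custom key hash out of the FNV-1a helpers (the key
  struct is invented; the helper names are assumed to follow the naming of
  the offset-bias constant seen elsewhere in this log):

    #include <stdint.h>
    #include "util/hash_table.h"

    /* Invented key type, purely for illustration. */
    struct sampler_key {
       uint32_t target;
       uint32_t levels;
       float lod_bias;
    };

    static uint32_t
    hash_sampler_key(const void *data)
    {
       const struct sampler_key *key = data;

       /* Start from the FNV-1a offset bias, then accumulate each field. */
       uint32_t hash = _mesa_fnv32_1a_offset_bias;
       hash = _mesa_fnv32_1a_accumulate(hash, key->target);
       hash = _mesa_fnv32_1a_accumulate(hash, key->levels);
       hash = _mesa_fnv32_1a_accumulate(hash, key->lod_bias);
       return hash;
    }
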
* util/hash_table: Rework the API to know about hashing  (Jason Ekstrand, 2014-12-14, 1 file, -4/+15)
  Previously, the hash_table API required the user to do all of the hashing
  of keys as it passed them in. Since the hashing function is intrinsically
  tied to the comparison function, it makes sense for the hash table to know
  about it. Also, it makes for a somewhat clumsy API, as the user is
  constantly calling hashing functions, many of which have long names. This
  is especially bad when the standard call looks something like

    _mesa_hash_table_insert(ht, _mesa_pointer_hash(key), key, data);

  In the above case, there is no reason why the hash table shouldn't do the
  hashing for you. We leave the option for you to do your own hashing if
  it's more efficient, but it's no longer needed. Also, if you do choose to
  do your own hashing, the hash table will assert that your hash matches
  what it expects out of the hashing function. This should make it harder to
  mess up your hashing.

  v2: change to call the old entrypoint "pre_hashed" rather than
  "with_hash", like cworth's equivalent change upstream (change by anholt,
  acked-in-general by Jason).

  Signed-off-by: Jason Ekstrand <[email protected]>
  Signed-off-by: Eric Anholt <[email protected]>
  Reviewed-by: Eric Anholt <[email protected]>
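
  What the reworked API looks like in use, as a sketch: the hash and equality
  callbacks are handed over once at creation time, and ordinary insert/search
  calls no longer take an explicit hash (the names of the pointer hash and
  compare callbacks are assumptions):

    #include <stddef.h>
    #include "util/hash_table.h"

    static struct hash_table *
    create_pointer_map(void *mem_ctx)
    {
       return _mesa_hash_table_create(mem_ctx, _mesa_hash_pointer,
                                      _mesa_key_pointer_equal);
    }

    static void *
    map_lookup(struct hash_table *ht, const void *key)
    {
       /* No caller-side hashing: the table applies its callback itself. */
       struct hash_entry *entry = _mesa_hash_table_search(ht, key);
       return entry ? entry->data : NULL;
    }
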
* util: include c99_compat.h in hash_table.h to get 'inline' definition  (Brian Paul, 2014-08-04, 1 file, -0/+1)
  Reviewed-by: Jason Ekstrand <[email protected]>

* util: Gather some common macros  (Jason Ekstrand, 2014-08-04, 1 file, -2/+2)
  This gathers macros that have been included across components into util so
  that the include chain can be more vertical. In particular, this makes
  util stand on its own without any dependence whatsoever on the rest of
  mesa.

  Signed-off-by: "Jason Ekstrand" <[email protected]>
  Reviewed-by: Marek Olšák <[email protected]>

* util: Move the open-addressing linear-probing hash_table to src/util.  (Kenneth Graunke, 2014-08-04, 1 file, -0/+106)
  This hash table is used in core Mesa, the GLSL compiler, and the i965
  driver, which makes it a good candidate for the new src/util module.

  It's much faster than program/hash_table.[ch] (see commit 6991c2922f5 for
  data), and José's u_hash_table.c has a comment saying Gallium should
  probably consider switching to a linear probing hash table at some point.
  So this seems like the best candidate for a shared data structure.

  Signed-off-by: Kenneth Graunke <[email protected]>

  v2 (Jason Ekstrand): Pick up another hash_table use and patch up scons

  Signed-off-by: Jason Ekstrand <[email protected]>
  Reviewed-by: Marek Olšák <[email protected]>
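
  For readers unfamiliar with the structure being moved, a deliberately tiny
  open-addressing table with linear probing (a toy sketch only, unrelated to
  the real hash_table.h internals, which also handle deleted-entry markers,
  resizing, and load-factor limits):

    #include <stdint.h>
    #include <stddef.h>

    #define TOY_TABLE_SIZE 64            /* power of two: & acts as modulo */

    struct toy_entry {
       const void *key;                  /* NULL marks an empty slot */
       void *data;
    };

    struct toy_table {
       struct toy_entry entries[TOY_TABLE_SIZE];
    };

    static struct toy_entry *
    toy_search(struct toy_table *ht, uint32_t hash, const void *key)
    {
       for (unsigned i = 0; i < TOY_TABLE_SIZE; i++) {
          /* Linear probing: try the hashed slot, then walk forward. */
          unsigned slot = (hash + i) & (TOY_TABLE_SIZE - 1);
          struct toy_entry *e = &ht->entries[slot];

          if (e->key == NULL)
             return NULL;                /* empty slot: key is absent */
          if (e->key == key)
             return e;                   /* found (pointer-equality keys) */
       }
       return NULL;                      /* table full, key not found */
    }
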