Quiet warnings in release builds where these look unused.
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/merge_requests/3866>
If we insert a NULL key, the insertion will appear to succeed but will mess up
entry counting. Similar errors can occur if someone accidentally
inserts the deleted key. The latter is highly unlikely but technically
possible, so we should guard against it too.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Caio Marcelo de Oliveira Filho <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
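A minimal sketch of what such guards can look like. The struct layout and the deleted_key name follow later commit messages in this log; the exact asserts Mesa adds here are an assumption, not the commit's diff.

    #include <assert.h>
    #include <stddef.h>

    /* Simplified stand-in for Mesa's struct set: NULL marks a free slot and
     * deleted_key marks a tombstone, so neither may be inserted as a real key. */
    struct set {
       const void *deleted_key;
    };

    static void
    assert_key_is_insertable(const struct set *ht, const void *key)
    {
       assert(key != NULL);             /* would be mistaken for a free slot */
       assert(key != ht->deleted_key);  /* would be mistaken for a tombstone */
    }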
Compilation times with my shader-db database:
Difference at 95.0% confidence
-1.22312 +/- 0.726033
-0.283979% +/- 0.168254%
(Student's t, pooled s = 1.02177)
Reviewed-by: Eric Anholt <[email protected]>
Acked-by: Jason Ekstrand <[email protected]>
A significant portion of the time spent in nir_opt_cse for the Dolphin
ubershaders was in resizing the set. When resizing a hash table, we know
in advance that each new element to be inserted will be different from
every other element, so we don't have to compare them, and there will be
no tombstone elements, so we don't have to worry about caching the
first-seen tombstone. We add a specialized add function which skips
these steps entirely, speeding up resizing.
Compile-time results from my shader-db database:
Difference at 95.0% confidence
-2.29143 +/- 0.845534
-0.529475% +/- 0.194767%
(Student's t, pooled s = 1.08807)
Reviewed-by: Eric Anholt <[email protected]>
Acked-by: Jason Ekstrand <[email protected]>
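A sketch of the specialized re-insertion path under those assumptions (simplified types, not Mesa's actual set.c code; the prime table size guarantees the double-hash probe visits every slot):

    #include <stdint.h>
    #include <stddef.h>

    struct entry { uint32_t hash; const void *key; };
    struct set {
       struct entry *table;
       uint32_t size;     /* prime table size */
       uint32_t rehash;   /* modulus for the secondary hash */
       uint32_t entries;
    };

    /* Re-insert an already-hashed key into a freshly allocated table: the key
     * is known to be unique and the table has no tombstones, so the probe loop
     * only needs to find the first free slot.  The new table is larger than the
     * entry count, so a free slot always exists. */
    static void
    set_add_rehash_sketch(struct set *ht, uint32_t hash, const void *key)
    {
       uint32_t double_hash = 1 + hash % ht->rehash;
       uint32_t addr = hash % ht->size;

       while (ht->table[addr].key != NULL) {
          addr += double_hash;
          if (addr >= ht->size)
             addr -= ht->size;
       }

       ht->table[addr].hash = hash;
       ht->table[addr].key = key;
       ht->entries++;
    }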
| |
Unfortunately, GCC can't do this for us, probably because we call the key
comparison function, which GCC can't prove won't modify arbitrary memory.
This is a pretty hot function, so do the optimization manually to be
sure the compiler gets it right.
While we're here, make the computation of the new probe address use a
single conditional subtract instead of a modulo, since we know the address
can never reach 2 * ht->size before the modulo. Modulo operations tend to
be pretty expensive.
shader-db compile time results for my database:
Difference at 95.0% confidence
-2.24934 +/- 0.69897
-0.516296% +/- 0.159993%
(Student's t, pooled s = 0.983684)
Reviewed-by: Eric Anholt <[email protected]>
Acked-by: Jason Ekstrand <[email protected]>
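Roughly the shape of the two changes described above, using simplified types and field names rather than the actual set.c code (tombstone handling omitted for brevity):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stddef.h>

    struct entry { uint32_t hash; const void *key; };
    struct set {
       struct entry *table;
       uint32_t size;     /* prime table size */
       uint32_t rehash;   /* size - 2 in Mesa's tables, so double_hash < size */
       bool (*key_equals)(const void *a, const void *b);
    };

    static struct entry *
    set_search_sketch(const struct set *ht, uint32_t hash, const void *key)
    {
       /* Hoist the loop-invariant loads into locals ourselves: the key_equals
        * callback could write anywhere, so the compiler has to assume ht->size
        * and ht->table may change between iterations. */
       const uint32_t size = ht->size;
       struct entry *const table = ht->table;
       const uint32_t double_hash = 1 + hash % ht->rehash;
       const uint32_t start = hash % size;
       uint32_t addr = start;

       do {
          struct entry *e = &table[addr];

          if (e->key == NULL)
             return NULL;                       /* free slot: key not present */
          if (e->hash == hash && ht->key_equals(e->key, key))
             return e;

          /* addr and double_hash are both < size, so their sum is always
           * < 2 * size: one conditional subtract replaces the modulo. */
          addr += double_hash;
          if (addr >= size)
             addr -= size;
       } while (addr != start);

       return NULL;
    }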
Unlike _mesa_set_search_and_add(), it doesn't replace an entry if one is
found, returning it instead. This is useful for nir_instr_set, where
we have to know both the original instruction and its equivalent.
Reviewed-by: Eric Anholt <[email protected]>
Acked-by: Jason Ekstrand <[email protected]>
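The semantics in a nutshell, as a simplified linear-probing sketch with made-up types; it only illustrates the "return the existing entry instead of replacing it" behaviour the commit describes, not the new function's real signature:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stddef.h>

    struct entry { uint32_t hash; const void *key; };

    /* If an equal key is already present, return that entry untouched so the
     * caller can still get at the original object; otherwise insert the new
     * key.  (Tombstones and resizing omitted.) */
    static struct entry *
    search_or_add_sketch(struct entry *table, uint32_t size, uint32_t hash,
                         const void *key,
                         bool (*equals)(const void *a, const void *b))
    {
       for (uint32_t i = 0; i < size; i++) {
          struct entry *e = &table[(hash + i) % size];

          if (e->key == NULL) {         /* not present yet: take the free slot */
             e->hash = hash;
             e->key = key;
             return e;
          }
          if (e->hash == hash && equals(e->key, key))
             return e;                  /* present: hand back the original */
       }
       return NULL;                     /* table full; real code would resize */
    }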
Often you don't know how big a set will be, and you want the code
to just grow it as needed. However, sometimes you do know, and you can
avoid a lot of rehashing if you specify a size up front.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Thomas Helland <[email protected]>
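For illustration only: the name and signature of the constructor this commit adds are not visible in this log, so the _mesa_set_create_with_size() below is hypothetical; the pointer hash/equality helpers are the usual ones from util/hash_table.h.

    #include "util/set.h"
    #include "util/hash_table.h"  /* _mesa_hash_pointer / _mesa_key_pointer_equal */

    /* Hypothetical usage: size the set for a known number of entries up front
     * so it never rehashes while being filled.  The helper name and its final
     * parameter are assumptions, not the commit's actual API. */
    static struct set *
    make_instr_set(unsigned expected_entries)
    {
       return _mesa_set_create_with_size(NULL, _mesa_hash_pointer,
                                         _mesa_key_pointer_equal,
                                         expected_entries);
    }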
This function is identical to _mesa_set_add except that it takes an
extra out parameter that lets the caller detect if a replacement
happened.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Thomas Helland <[email protected]>
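A small usage sketch based on the description above. Only _mesa_set_search_and_add() itself is named by the surrounding commits; the exact out-parameter shape and the string-hash helpers are inferred/assumed here.

    #include <stdbool.h>
    #include <stdio.h>
    #include "util/set.h"
    #include "util/hash_table.h"  /* _mesa_hash_string / _mesa_key_string_equal */

    int
    main(void)
    {
       struct set *s = _mesa_set_create(NULL, _mesa_hash_string,
                                        _mesa_key_string_equal);
       bool replaced = false;

       _mesa_set_search_and_add(s, "nir", &replaced);    /* first add */
       printf("replaced on first add: %d\n", replaced);  /* prints 0 */

       _mesa_set_search_and_add(s, "nir", &replaced);    /* equal key again */
       printf("replaced on second add: %d\n", replaced); /* prints 1 */

       _mesa_set_destroy(s, NULL);
       return 0;
    }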
These combinations are common enough and deserve a shortcut.
Reviewed-by: Jason Ekstrand <[email protected]>
Acked-by: Eric Engestrom <[email protected]>
Signed-off-by: Eric Engestrom <[email protected]>
Reviewed-by: Timothy Arceri <[email protected]>
v2: Add unit test. (Eric Anholt)
Reviewed-by: Eric Anholt <[email protected]>
v2: Add unit test. (Eric Anholt)
Reviewed-by: Eric Anholt <[email protected]>
Clear a set back to the state of having zero entries.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
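Presumably the helper is _mesa_set_clear(); the name and the delete-callback parameter below are assumed by analogy with the hash_table API, since the commit subject is not shown in this log.

    #include <assert.h>
    #include "util/set.h"

    static void
    reuse_set(struct set *s, void **keys, unsigned count)
    {
       /* Drop every entry but keep the set (and its table allocation) around
        * so it can be refilled without reallocating. */
       _mesa_set_clear(s, NULL);
       assert(s->entries == 0);

       for (unsigned i = 0; i < count; i++)
          _mesa_set_add(s, keys[i]);
    }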
This follows the same pattern as in the hash_table.
Reviewed-by: Jason Ekstrand <[email protected]>
This also silences the following clang warnings:
no previous extern declaration for non-static variable 'deleted_key' [-Werror,-Wmissing-variable-declarations]
const void *deleted_key = &deleted_key_value;
^
no previous extern declaration for non-static variable 'deleted_key_value'
[-Werror,-Wmissing-variable-declarations]
uint32_t deleted_key_value;
^
Reviewed-by: Nicolai Hähnle <[email protected]>
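For context, the usual way to satisfy -Wmissing-variable-declarations for file-local globals is to give them internal linkage; whether that is exactly what this commit does is not visible from the log, so treat this as a sketch of the pattern:

    #include <stdint.h>

    /* Internal to set.c, so make the symbols static; clang then no longer
     * expects a prior extern declaration for them. */
    static uint32_t deleted_key_value;
    static const void *deleted_key = &deleted_key_value;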
When we delete entries in the hash set, we mark them "deleted" by
setting their key to the deleted_key, which points to a dummy
deleted_key_value. When searching for an entry, we normally skip over
those, but set_add() had some code for searching for duplicate entries
which forgot to skip over deleted entries. This led to a segfault inside
the NIR vectorization pass, since its key comparison function
interpreted the memory where deleted_key_value resides as a pointer and
tried to dereference it.
v2:
- add better commit message (Timothy)
- use entry_is_deleted (Timothy)
Reviewed-by: Timothy Arceri <[email protected]>
Signed-off-by: Connor Abbott <[email protected]>
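The shape of the fix, using the helper named in the commit message but with simplified types rather than the literal set.c diff:

    #include <stdbool.h>
    #include <stdint.h>

    struct set_entry { uint32_t hash; const void *key; };
    struct set {
       const void *deleted_key;   /* points at the dummy deleted_key_value */
       bool (*key_equals_function)(const void *a, const void *b);
    };

    static bool
    entry_is_deleted(const struct set *ht, const struct set_entry *entry)
    {
       return entry->key == ht->deleted_key;
    }

    /* The duplicate check in set_add() must skip tombstones before calling the
     * key comparison; otherwise the comparison dereferences deleted_key_value. */
    static bool
    entry_matches(const struct set *ht, const struct set_entry *entry,
                  uint32_t hash, const void *key)
    {
       return !entry_is_deleted(ht, entry) &&
              entry->hash == hash &&
              ht->key_equals_function(key, entry->key);
    }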
Previously, the set_insert function would bail early if it found a deleted
slot that it could re-use. However, this is a problem if the key being
inserted is already in the set but further down the list. If this happens,
the element ends up getting inserted in the set twice. This commit makes
it so that we walk over all of the possible entries for the given key and
then, if we don't find the key, place it in the available free entry we
found.
Reviewed-by: Eric Anholt <[email protected]>
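A sketch of the corrected walk, simplified to linear probing (the real code uses double hashing and resizes when the table fills up):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stddef.h>

    struct entry { uint32_t hash; const void *key; };
    struct set {
       struct entry *table;
       uint32_t size;
       const void *deleted_key;
       bool (*equals)(const void *a, const void *b);
    };

    /* Remember the first reusable slot but keep probing, so that an equal key
     * further down the chain is still found and never inserted twice. */
    static struct entry *
    set_insert_sketch(struct set *ht, uint32_t hash, const void *key)
    {
       struct entry *available = NULL;

       for (uint32_t i = 0; i < ht->size; i++) {
          struct entry *e = &ht->table[(hash + i) % ht->size];

          if (e->key == NULL) {            /* end of the probe chain */
             if (!available)
                available = e;
             break;
          }
          if (e->key == ht->deleted_key) { /* reusable, but keep scanning */
             if (!available)
                available = e;
             continue;
          }
          if (e->hash == hash && ht->equals(key, e->key)) {
             e->key = key;                 /* already present: just replace */
             return e;
          }
       }

       if (available) {                    /* not found: use the slot we saw */
          available->hash = hash;
          available->key = key;
          return available;
       }
       return NULL;                        /* full; real code would resize */
    }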
v2: s/unsigned int/unsigned/ in prog_optimize.c
Signed-off-by: Jan Vesely <[email protected]>
Reviewed-by: David Heidelberg <[email protected]>
Reviewed-by: Jose Fonseca <[email protected]>
Previously, the set API required the user to do all of the hashing of keys
as they passed them in. Since the hashing function is intrinsically tied to
the comparison function, it makes sense for the hash set to know about
it. Also, it makes for a somewhat clumsy API, as the user is constantly
calling hashing functions, many of which have long names. This is
especially bad when the standard call looks something like
_mesa_set_add(ht, _mesa_pointer_hash(key), key);
In the above case, there is no reason why the hash set shouldn't do the
hashing for you. We leave the option for you to do your own hashing if
it's more efficient, but it's no longer needed. Also, if you do your own
hashing, the hash set will assert that your hash matches what it
expects out of the hashing function. This should make it harder to mess up
your hashing.
This is analogous to 94303a0750, where we did the same for hash_table.
Signed-off-by: Jason Ekstrand <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
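Roughly what the calling convention looks like after this change. The pointer-hash helpers and the pre-hashed entry point below are the ones I understand util/ to provide, so treat those exact names as assumptions rather than part of this commit:

    #include "util/set.h"
    #include "util/hash_table.h"  /* _mesa_hash_pointer / _mesa_key_pointer_equal */

    static void
    example(void *key)
    {
       /* The set now owns the hash function, supplied at creation time... */
       struct set *s = _mesa_set_create(NULL, _mesa_hash_pointer,
                                        _mesa_key_pointer_equal);

       /* ...so the common case no longer hashes by hand: */
       _mesa_set_add(s, key);

       /* Pre-hashed adds remain available when the caller already has the
        * hash; the set asserts it matches its own hash function's result. */
       _mesa_set_add_pre_hashed(s, _mesa_hash_pointer(key), key);

       _mesa_set_destroy(s, NULL);
    }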