path: root/src/mesa/main
authorNicolai Hähnle <[email protected]>2016-01-11 15:07:48 -0500
committerNicolai Hähnle <[email protected]>2016-02-03 14:04:06 +0100
commit761c7d59c4403832c33d931bb097d060ed07e555 (patch)
tree53db3635ba745e2e27131bf0f37159afb6f734b1 /src/mesa/main
parent115c643b1669bd050af8d890adbfc771d9ff8126 (diff)
vbo: disable the minmax cache when the hit rate is low
When applications stream their index buffers, the caches for those BOs become useless and add overhead, so we want to disable them. The tricky part is coming up with the right heuristic for *when* to disable them.

The first question is which hit rate to aim for. Since I'm not aware of any interesting borderline applications that do something like "draw two or three times for each upload", I just kept it simple.

The second question is how soon we should give up on the caching. Applications might have a warm-up phase where they fill a buffer gradually but then keep reusing it. For this reason, I count the number of indices that hit and miss (instead of the number of calls that hit or miss), since comparing that to the size of the buffer makes sense.

Reviewed-by: Marek Olšák <[email protected]>
Diffstat (limited to 'src/mesa/main')
-rw-r--r--src/mesa/main/mtypes.h2
1 file changed, 2 insertions, 0 deletions
diff --git a/src/mesa/main/mtypes.h b/src/mesa/main/mtypes.h
index 6099ae1c463..58064aac1cd 100644
--- a/src/mesa/main/mtypes.h
+++ b/src/mesa/main/mtypes.h
@@ -1286,6 +1286,8 @@ struct gl_buffer_object
/** Memoization of min/max index computations for static index buffers */
struct hash_table *MinMaxCache;
+ unsigned MinMaxCacheHitIndices;
+ unsigned MinMaxCacheMissIndices;
bool MinMaxCacheDirty;
};