| author | Kenneth Graunke <kenneth@whitecape.org> | 2017-07-01 02:04:50 -0700 |
| --- | --- | --- |
| committer | Kenneth Graunke <kenneth@whitecape.org> | 2017-07-10 15:55:26 -0700 |
| commit | c2c37f5185ef80a770a9614f0317ad91b7672450 | |
| tree | 23e1207c717a116fc83e8581a26bcd68e62457c2 | /src/intel |
| parent | 3e50607a40541de81ef008ee187c26dd03cd6c9e | |
intel: Fix clflushing on modern (Baytrail+) Atom CPUs.
Thanks to Chris Wilson for pointing this out.
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Diffstat (limited to 'src/intel')
| -rw-r--r-- | src/intel/common/gen_clflush.h | 12 |

1 file changed, 12 insertions, 0 deletions
diff --git a/src/intel/common/gen_clflush.h b/src/intel/common/gen_clflush.h
index 9b971cac37e..d39f15807eb 100644
--- a/src/intel/common/gen_clflush.h
+++ b/src/intel/common/gen_clflush.h
@@ -50,6 +50,18 @@ static inline void
 gen_invalidate_range(void *start, size_t size)
 {
    gen_clflush_range(start, size);
+
+   /* Modern Atom CPUs (Baytrail+) have issues with clflush serialization,
+    * where mfence is not a sufficient synchronization barrier.  We must
+    * double clflush the last cacheline.  This guarantees it will be ordered
+    * after the preceding clflushes, and then the mfence guards against
+    * prefetches crossing the clflush boundary.
+    *
+    * See kernel commit 396f5d62d1a5fd99421855a08ffdef8edb43c76e
+    * ("drm: Restore double clflush on the last partial cacheline")
+    * and https://bugs.freedesktop.org/show_bug.cgi?id=92845.
+    */
+   __builtin_ia32_clflush(start + size - 1);
    __builtin_ia32_mfence();
 }
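
For readers without the full header at hand, the sketch below shows how the patched invalidate path behaves end to end. It is a minimal, self-contained illustration, not the header's code: the CACHELINE_SIZE constant and the body of the range-flush loop are assumptions (a typical 64-byte x86 cacheline and a straightforward per-line clflush), and the function names are placeholders. Only the "flush the range, re-flush the last cacheline, then mfence" sequence is taken from the diff above.

```c
#include <stddef.h>
#include <stdint.h>

#define CACHELINE_SIZE 64   /* assumed x86 cacheline size */

/* Illustrative stand-in for the header's gen_clflush_range(): flush every
 * cacheline overlapping [start, start + size). */
static inline void
clflush_range_sketch(void *start, size_t size)
{
   char *p = (char *) ((uintptr_t) start & ~((uintptr_t) CACHELINE_SIZE - 1));
   char *end = (char *) start + size;

   for (; p < end; p += CACHELINE_SIZE)
      __builtin_ia32_clflush(p);
}

/* What the patched gen_invalidate_range() amounts to: flush the whole range,
 * then flush the last cacheline a second time before the fence.  On
 * Baytrail+ Atoms the second clflush is ordered after the earlier ones, so
 * the mfence that follows reliably stops prefetches from crossing the
 * flushed region. */
static inline void
invalidate_range_sketch(void *start, size_t size)
{
   clflush_range_sketch(start, size);
   __builtin_ia32_clflush((char *) start + size - 1);
   __builtin_ia32_mfence();
}
```

Note that the extra clflush targets `start + size - 1`, so it always lands in the final cacheline of the range regardless of how the range is aligned.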