author    Robert Bragg <[email protected]>  2016-10-27 22:08:19 +0100
committer Robert Bragg <[email protected]>  2017-03-17 15:45:19 +0000
commit    344d1a4015de94d27c20ea6f632be8e4c16b6a63 (patch)
tree      bd7b661388bc3f659bf1608e9f4d190751b1c0a7 /src/intel/common/gen_device_info.h
parent    28b134c75c1fa3b2aaa00dc168f0eca35ccd346d (diff)
i965: Allow a per gen timebase scale factor
Prior to Skylake the Gen HW timestamps were driven by a 12.5MHz clock with the convenient property of being able to scale by an integer (80) to nanosecond units. For Skylake the frequency is 12MHz, or a scale factor of 83.333333.

This updates gen_device_info to track a floating point timebase_scale factor and makes corresponding changes to the _queryobj.c code to no longer assume a scale factor of 80 works across all gens.

Although the gen6_ code could have been left alone, the changes keep the code more comparable, and it now shares a few utility functions for scaling raw timestamps and calculating deltas. The utility for calculating deltas takes into account 32 or 36 bit overflow depending on the current kernel version; see the sketch below.

Note: this leaves the timestamp handling of ARB_query_buffer_object untouched, which continues to use an incorrect scale of 80 on Skylake for now. This is more awkward to solve since the scaling is currently done using a very limited uint64 ALU available to the command parser that doesn't support multiply or divide, and it already takes a large number of instructions just to effectively multiply by 80.

This fixes piglit arb_timer_query-timestamp-get on Skylake.

v2: (Ken) Update timebase_scale for platforms past Skylake/Broxton too.

Signed-off-by: Robert Bragg <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
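[Editor's illustration] A minimal sketch of the delta and scaling utilities the message describes. This is not the actual Mesa code: the function names and the wrap_bits parameter are hypothetical, assuming the raw counter wraps at 32 or 36 bits depending on the kernel in use.

    #include <stdint.h>

    /* Hypothetical sketch: compute the delta between two raw timestamp
     * samples, allowing for the counter wrapping once between them.
     * wrap_bits would be 32 or 36 depending on how the kernel exposes
     * the TIMESTAMP register.
     */
    static uint64_t
    raw_timestamp_delta(uint64_t time0, uint64_t time1, unsigned wrap_bits)
    {
       if (time0 > time1) {
          /* The counter wrapped between the two reads. */
          return (1ULL << wrap_bits) + time1 - time0;
       } else {
          return time1 - time0;
       }
    }

    /* Scale a raw delta to nanoseconds with the per-gen factor, e.g.
     * 80.0 before Gen9 or 1000000000.0 / 12000000.0 (83.333...) on SKL.
     */
    static uint64_t
    scale_timestamp_delta(double timebase_scale, uint64_t raw_delta)
    {
       return (uint64_t)(raw_delta * timebase_scale);
    }

Doing the wrap correction on the raw integer values before applying the floating point scale keeps the subtraction exact and only introduces rounding in the final conversion.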
Diffstat (limited to 'src/intel/common/gen_device_info.h')
-rw-r--r--  src/intel/common/gen_device_info.h | 24
1 file changed, 24 insertions(+), 0 deletions(-)
diff --git a/src/intel/common/gen_device_info.h b/src/intel/common/gen_device_info.h
index f0e8750d0ea..80676d0e003 100644
--- a/src/intel/common/gen_device_info.h
+++ b/src/intel/common/gen_device_info.h
@@ -147,6 +147,30 @@ struct gen_device_info
*/
unsigned max_entries[4];
} urb;
+
+ /**
+ * For the longest time the timestamp frequency for Gen's timestamp counter
+ * could be assumed to be 12.5MHz, where the least significant bit neatly
+ * corresponded to 80 nanoseconds.
+ *
+ * Since Gen9 the numbers aren't so round, with a frequency of 12MHz for
+ * SKL (or scale factor of 83.33333333) and a frequency of 19200123Hz for
+ * BXT.
+ *
+ * For simplicity, to fit with the current code that scales by a single
+ * constant to map from raw timestamps to nanoseconds, we now do the
+ * conversion in floating point instead of integer arithmetic.
+ *
+ * In general it's probably worth noting that the documented constants we
+ * have for the per-platform timestamp frequencies aren't perfect and
+ * shouldn't be trusted for scaling and comparing timestamps with a large
+ * delta.
+ *
+ * E.g. with crude testing on my system using the 'correct' scale factor I'm
+ * seeing a drift of ~2 milliseconds per second.
+ */
+ double timebase_scale;
+
/** @} */
};
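[Editor's illustration] A hedged sketch of how a driver might initialize and consume the new field. The values mirror the frequencies quoted in the comment above, but the function and parameter names are hypothetical; the real values live in the per-platform gen_device_info tables.

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical per-gen initialization of timebase_scale, expressed
     * as nanoseconds per timestamp tick (1e9 / frequency in Hz).
     */
    static void
    init_timebase_scale(struct gen_device_info *devinfo,
                        bool is_skylake, bool is_broxton)
    {
       if (is_skylake)
          devinfo->timebase_scale = 1000000000.0 / 12000000.0; /* ~83.333 */
       else if (is_broxton)
          devinfo->timebase_scale = 1000000000.0 / 19200123.0; /* ~52.083 */
       else
          devinfo->timebase_scale = 80.0; /* exact, from the 12.5MHz clock */
    }

    /* Converting a raw timestamp to nanoseconds then becomes: */
    static uint64_t
    timestamp_to_ns(const struct gen_device_info *devinfo, uint64_t raw)
    {
       return (uint64_t)(raw * devinfo->timebase_scale);
    }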