author    Ben Widawsky <[email protected]>   2015-11-02 14:57:01 -0800
committer Anuj Phogat <[email protected]>    2015-12-07 18:47:04 -0800
commit    6ef8149bcd1f11c7e4b6e9191bfd9ba6d31170e1
tree      f11ecb7a44d4e59a04335b8bb9fe19fbf94a9b5e
parent    d5a5dbd71f0e8756494809025ba2119efdf26373
i965: Fix texture views of 2d array surfaces
It is legal to have a texture view of a single layer from a 2D array texture;
you can sample from it, or render to it. Intel hardware needs to be made aware
in the surface state when it is using a 2D array surface. The texture view is
just a 2D surface whose backing miptree is actually a 2D array surface.
Because of this, the previous code would not set the right bit in the surface
state, since the view wasn't considered an array texture.
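To illustrate the failure mode, here is a minimal sketch (not the actual Mesa
code; the enum, struct, and function names below are hypothetical stand-ins
for Mesa's GLenum targets and struct intel_mipmap_tree):

    #include <stdbool.h>

    /* Hypothetical stand-ins for the GL target enums, the backing
     * miptree, and the texture view. */
    enum tex_target { TEX_2D, TEX_2D_ARRAY };

    struct miptree  { enum tex_target target; };  /* backing storage */
    struct tex_view { enum tex_target target; struct miptree *mt; };

    /* Buggy array determination: it consults only the view's own target.
     * A single-layer view of a 2D array texture has target TEX_2D, so
     * the array bit in the surface state is never set even though the
     * backing surface is a 2D array. */
    static bool is_array_surface_buggy(const struct tex_view *view)
    {
       return view->target == TEX_2D_ARRAY;
    }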
I spotted this early on while debugging but brushed it off because the fix is
clearly not needed on other platforms (since they all pass). I have no idea how
this works properly on other platforms (I think gen7 introduced the bit in the
surface state, but I am too lazy to check). As such, I have opted not to modify
gen7, though I believe the current code is wrong there as well.
Thanks to Chris for helping me debug this.
v2: Just use the underlying mt's target type to make the array determination.
This fixes a bug in the first version of the patch, which incorrectly relied
only on non-zero depth (not sure how that produced no failures). (Ilia)
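Continuing the hypothetical sketch above, the v2 approach boils down to
consulting the backing miptree's target instead of the view's own target
(again a sketch, not the verbatim patch):

    /* Fixed determination: the backing miptree of a single-layer view of
     * a 2D array texture still has an array target, so the array bit
     * gets set in the surface state regardless of the view's own 2D
     * target. */
    static bool is_array_surface_fixed(const struct tex_view *view)
    {
       return view->mt->target == TEX_2D_ARRAY;
    }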
Cc: Chris Forbes <[email protected]>
Reported-by: Mark Janes <[email protected]> (Jenkins)
References: https://www.opengl.org/registry/specs/ARB/texture_view.txt
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=92609
Signed-off-by: Ben Widawsky <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>