Instead of plain snprintf(). To fix the MSVC 2013 build:
Compiling src\gallium\auxiliary\driver_ddebug\dd_draw.c ...
dd_draw.c
c:\projects\mesa\src\gallium\auxiliary\driver_ddebug\dd_util.h(60) : warning C4013: 'snprintf' undefined; assuming extern returning int
...
gallium.lib(dd_draw.obj) : error LNK2001: unresolved external symbol _snprintf
build\windows-x86-debug\gallium\targets\graw-gdi\graw.dll : fatal error LNK1120: 1 unresolved externals
scons: *** [build\windows-x86-debug\gallium\targets\graw-gdi\graw.dll] Error 1120
scons: building terminated because of errors.
Fixes: 6ff0c6f4ebc ("gallium: move ddebug, noop, rbug, trace to auxiliary to improve build times")
Cc: Marek Olšák <[email protected]>
Cc: Brian Paul <[email protected]>
Cc: Roland Scheidegger <[email protected]>
Cc: Nicolai Hähnle <[email protected]>
Signed-off-by: Andres Gomez <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Nicolai Hähnle <[email protected]>
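
The pattern behind this and the following snprintf fixes (a hedged
sketch, not the verbatim patch) is to route the call through Mesa's
util_snprintf() from util/u_string.h, which falls back to the MSVC
_vsnprintf family where no C99 snprintf exists. The helper name and
format string below are illustrative:

    #include "util/u_string.h"  /* util_snprintf(): C99 snprintf, or an
                                 * MSVC-compatible fallback */

    static void
    dd_example_filename(char *buf, size_t size, const char *dir, unsigned id)
    {
       /* util_snprintf() instead of snprintf(), so MSVC 2013 links. */
       util_snprintf(buf, size, "%s/dd_dump_%u.txt", dir, id);
    }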
Instead of plain snprintf(). To fix the MSVC 2013 build:
Compiling src\util\u_queue.c ...
u_queue.c
src\util\u_queue.c(325) : warning C4013: 'snprintf' undefined; assuming extern returning int
...
mesautil.lib(u_queue.obj) : error LNK2001: unresolved external symbol _snprintf
scons: building terminated because of errors.
Fixes: b238e33bc9d ("kutil/queue: add a process name into a thread name")
Cc: Marek Olšák <[email protected]>
Cc: Brian Paul <[email protected]>
Cc: Roland Scheidegger <[email protected]>
Cc: Timothy Arceri <[email protected]>
Cc: Eric Engestrom <[email protected]>
Signed-off-by: Andres Gomez <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Instead of plain snprintf(). To fix the MSVC 2013 build:
Compiling src\gallium\auxiliary\util\u_tests.c ...
u_tests.c
src\gallium\auxiliary\util\u_tests.c(624) : warning C4013: 'snprintf' undefined; assuming extern returning int
...
gallium.lib(u_tests.obj) : error LNK2019: unresolved external symbol _snprintf referenced in function _test_texture_barrier
build\windows-x86-debug\gallium\targets\graw-gdi\graw.dll : fatal error LNK1120: 1 unresolved externals
scons: *** [build\windows-x86-debug\gallium\targets\graw-gdi\graw.dll] Error 1120
scons: building terminated because of errors.
Fixes: 56342c97ee7 ("gallium/u_tests: test FBFETCH and shader-based blending with MSAA")
Cc: Marek Olšák <[email protected]>
Cc: Brian Paul <[email protected]>
Cc: Roland Scheidegger <[email protected]>
Cc: Dieter Nützel <[email protected]>
Signed-off-by: Andres Gomez <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Instead of plain snprintf(). To fix the MSVC 2013 build.
Fixes: 6ff0c6f4ebc ("gallium: move ddebug, noop, rbug, trace to auxiliary to improve build times")
Cc: Marek Olšák <[email protected]>
Cc: Brian Paul <[email protected]>
Cc: Roland Scheidegger <[email protected]>
Signed-off-by: Andres Gomez <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
During code review, Jason pointed out that commit 2b3064c0731
("i965, anv: Use INTEL_DEBUG for disk_cache driver flags") didn't
account for the INTEL_SCALAR_* environment variables.
To fix this, let the compiler return the disk_cache driver flags.
Another possible fix would be to pull the INTEL_SCALAR_* variables into
INTEL_DEBUG bits, but as we are currently using 41 of 64 bits, I
didn't think it was a good use of 4 more of them. (5, since
INTEL_PRECISE_TRIG needs to be accounted for as well.)
Cc: Jason Ekstrand <[email protected]>
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
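
A minimal sketch of the approach, with illustrative names (the helper
name and the 41-bit offset are made up for this sketch; brw_compiler's
scalar_stage[] and precise_trig fields are real): fold the compiler
configuration into the 64-bit value the disk cache is keyed on, instead
of burning more INTEL_DEBUG bits:

    /* Hedged sketch: combine INTEL_DEBUG with compiler settings that
     * also affect code generation, so cached binaries are keyed on both. */
    static uint64_t
    example_compiler_config_value(const struct brw_compiler *compiler)
    {
       uint64_t config = INTEL_DEBUG;  /* the 41 bits used so far */
       unsigned bit = 41;

       for (unsigned i = 0; i < MESA_SHADER_STAGES; i++) {
          if (compiler->scalar_stage[i])
             config |= 1ull << bit;
          bit++;
       }
       if (compiler->precise_trig)
          config |= 1ull << bit;

       return config;
    }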
Shader time hard codes an index of the shader time buffer within the
gen program.
In order to support shader time in the disk shader cache, we'd need to
add the shader time index into the program key. This should work, but
probably is not worth it for this particular debug feature.
Therefore, let's just disable the disk shader cache if the shader time
debug feature is used.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=106382
Fixes: 96fe36f7acc "i965: Enable disk shader cache by default"
Cc: Eero Tamminen <[email protected]>
Cc: Kenneth Graunke <[email protected]>
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
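
The guard itself is tiny (a hedged sketch of the idea; the function
name is illustrative, while INTEL_DEBUG and DEBUG_SHADER_TIME are the
real i965 debug flags):

    /* Hedged sketch: skip disk cache use when shader time is active,
     * since the shader time buffer index is not part of the program key. */
    static bool
    example_disk_cache_enabled(void)
    {
       if (INTEL_DEBUG & DEBUG_SHADER_TIME)
          return false;
       return true;
    }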
Fixes new piglit test:
tests/spec/glsl-1.20/execution/qualifiers/vs-out-conversion-int-to-float-vec4-index.shader_test
Reviewed-by: Ian Romanick <[email protected]>
They don't really do anything interesting, but it's more consistent this
way.
Reviewed-by: Timothy Arceri <[email protected]>
Reviewed-by: Timothy Arceri <[email protected]>
Instead of just looking at the number of color attachments, look at
which ones are actually used by the subpass. This lets us potentially
throw away chunks of the fragment shader. In DXVK, for example, all
subpasses have 8 attachments and most are VK_ATTACHMENT_UNUSED so this
is very helpful in that case.
Reviewed-by: Timothy Arceri <[email protected]>
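
Conceptually (a hedged sketch with illustrative field names on the anv
subpass side; color_outputs_valid is the real brw_wm_prog_key field),
the key gains a mask of the attachments the subpass really uses:

    /* Hedged sketch: build a render-target mask, skipping unused slots. */
    uint32_t rt_mask = 0;
    for (uint32_t i = 0; i < subpass->color_count; i++) {
       if (subpass->color_attachments[i].attachment != VK_ATTACHMENT_UNUSED)
          rt_mask |= 1u << i;
    }
    key.color_outputs_valid = rt_mask;
    key.nr_color_regions = util_last_bit(rt_mask);

With DXVK's 8 mostly-VK_ATTACHMENT_UNUSED attachments, most bits end up
clear, which is what lets whole chunks of the fragment shader be thrown
away.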
The back-end compiler emits the number of color writes specified by
wm_prog_key::nr_color_regions regardless of which nir_store_output
intrinsics are present. Once we've gone through and figured out which
render targets actually exist and are written by the shader, we should
restrict the key to avoid emitting extra RT write messages.
Reviewed-by: Timothy Arceri <[email protected]>
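
In other words (a hedged sketch, continuing the rt_mask from the sketch
above): clamp the key's RT count to what the shader actually writes:

    /* Hedged sketch: bits from FRAG_RESULT_DATA0 up in outputs_written
     * mark the render targets the shader writes; drop trailing ones. */
    const uint64_t written =
       shader->info.outputs_written >> FRAG_RESULT_DATA0;
    key.nr_color_regions = util_last_bit64(written & rt_mask);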
With the new deref instructions, we have to keep the modes consistent
between the derefs and the variables they reference. Since we remove
outputs by changing them to local variables, we need to run the fixup
pass to fix the modes.
Reviewed-by: Timothy Arceri <[email protected]>
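
The fixup is a single pass call after the mode change (hedged sketch;
nir_fixup_deref_modes() is the real pass, the bookkeeping lines are
illustrative):

    /* Hedged sketch: demote an output to a local, then rewrite the modes
     * stored on the deref instructions that reference it. */
    exec_node_remove(&var->node);
    var->data.mode = nir_var_local;
    exec_list_push_tail(&impl->locals, &var->node);

    nir_fixup_deref_modes(shader);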
There's no point in walking the program if we're never going to actually
lower anything.
Reviewed-by: Timothy Arceri <[email protected]>
Shader-db results on Kaby Lake:
total instructions in shared programs: 15166953 -> 15073611 (-0.62%)
instructions in affected programs: 2390284 -> 2296942 (-3.91%)
helped: 16469
HURT: 505
total loops in shared programs: 4954 -> 4951 (-0.06%)
loops in affected programs: 3 -> 0
helped: 3
HURT: 0
Reviewed-by: Timothy Arceri <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
The NIR nir_lower_io_arrays_to_elements pass attempts to split I/O
variables which are arrays or matrices into a sequence of separate
variables. This can help link-time optimization by allowing us to
remove varyings at a more granular level.
Shader-db results on Kaby Lake:
total instructions in shared programs: 15177645 -> 15168494 (-0.06%)
instructions in affected programs: 79857 -> 70706 (-11.46%)
helped: 392
HURT: 0
Reviewed-by: Timothy Arceri <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
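
Drivers opt in at link time; a hedged sketch of the producer/consumer
call (pass name and signature as in nir.h at the time, the surrounding
cleanup passes are illustrative):

    /* Hedged sketch: split arrayed/matrix I/O into elements, then let
     * the dead-variable pass remove varyings element by element. */
    nir_lower_io_arrays_to_elements(producer, consumer);
    nir_remove_dead_variables(producer, nir_var_shader_out);
    nir_remove_dead_variables(consumer, nir_var_shader_in);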
Otherwise, only the first vec4 of a matrix or other complex type will
get marked as flat and we'll interpolate the others. This was caught by
a dEQP test which started failing because it did an SSO vs. non-SSO
comparison. Previously, we did the interpolation wrong consistently in
both versions. However, with one of Tim Arceri's NIR linking patches, we
started splitting the matrix input into vectors at link time in the
non-SSO version and it started getting correctly interpolated which
didn't match the broken SSO version. As of this commit, they both get
correctly interpolated.
Fixes: e61cc87c757f8bc "i965/fs: Add a flat_inputs field to prog_data"
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Timothy Arceri <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Timothy Arceri <[email protected]>
Fixes: d1992255bb29054fa51763376d125183a9f602f3
("meson: Add build Intel "anv" vulkan driver")
Signed-off-by: Dylan Baker <[email protected]>
Reviewed-by: Eric Engestrom <[email protected]>
By including the proper headers for getpid and for mkdir.
Fixes: 6ff0c6f4ebcb87ea6c6fe5a4ba90b548f666067d
("gallium: move ddebug, noop, rbug, trace to auxiliary to improve build times")
Signed-off-by: Dylan Baker <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
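
The portable include set for those two calls looks roughly like this
(a hedged sketch of the pattern; dd_util.h's exact guards may differ):

    /* Hedged sketch: headers for getpid() and mkdir() on each platform. */
    #ifdef _WIN32
    #include <direct.h>    /* _mkdir() */
    #include <process.h>   /* _getpid() */
    #else
    #include <unistd.h>    /* getpid() */
    #include <sys/stat.h>  /* mkdir() */
    #include <sys/types.h>
    #endif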
On Windows, process.h is a system-provided header, and it's required
by include/c11/threads_win32.h. Our process.h interferes with searching
for that header, and results in build warnings on Windows with scons,
but errors with meson, which doesn't allow implicit function
declarations. Just rename process to u_process, which follows the
naming style of util anyway.
Fixes: 2e1e6511f76370870b5cde10caa9ca3b6d0dc65f
("util: extract get_process_name from xmlconfig.c")
Signed-off-by: Dylan Baker <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
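
Callers then pick up the util-prefixed header; a hedged usage sketch
(util_get_process_name() is the helper extracted from xmlconfig):

    #include "util/u_process.h"

    /* Hedged sketch: query the process name without clashing with the
     * system's process.h on Windows. */
    const char *name = util_get_process_name();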
To make dEQP-GLES31.functional.ssbo.layout.random.all_shared_buffer.23
finish sooner on older CPUs (otherwise it gets killed and we fail
the test).
Acked-by: Dave Airlie <[email protected]>
I missed an important part when porting the change over, fixing my
compiler warning but breaking -Werror=format-security.
Fixes: e6ff5ac4468e ("v3d: use snprintf(..., "%s", ...) instead of strncpy")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107443
CXXLD gallium_dri.la
../../../../src/gallium/drivers/vc4/.libs/libvc4.a(vc4_cl_dump.o): In function `vc4_dump_cl':
src/gallium/drivers/vc4/vc4_cl_dump.c:45: undefined reference to `clif_dump_init'
src/gallium/drivers/vc4/vc4_cl_dump.c:82: undefined reference to `clif_dump_destroy'
../../../../src/broadcom/cle/.libs/libbroadcom_cle.a(cle_libbroadcom_cle_la-v3d_decoder.o): In function `v3d_field_iterator_next':
src/broadcom/cle/v3d_decoder.c:902: undefined reference to `clif_lookup_bo'
Fixes: e92959c4e0 ("v3d: Pass the whole clif_dump structure to v3d_print_group().")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107423
CC: Eric Anholt <[email protected]>
Acked-by: Eric Anholt <[email protected]>
Reviewed-by: Andres Gomez <[email protected]>
There is a bug with scons 2.3, used in Travis, where it fails to detect
some C functions.
Reviewed-by: Andres Gomez <[email protected]>
The Ubuntu version provided by Travis is a bit old, and does not
correctly detect some C functions. Use a more modern scons version
instead.
Reviewed-by: Andres Gomez <[email protected]>
Gen10+ has an additional bit in MI_BATCH_BUFFER_END to signal the end
of the context image.
We select the largest size for the context image regardless of the
generation.
Signed-off-by: Lionel Landwerlin <[email protected]>
Reviewed-by: Rafael Antognolli <[email protected]>
Python 2 had string_escape and unicode_escape codecs. Python 3 only has
the latter. These work the same as far as we're concerned, so let's use
the future-proof one.
However, the rest of the code expects unicode strings, so we need to
decode them again.
Signed-off-by: Mathieu Bridon <[email protected]>
Reviewed-by: Dylan Baker <[email protected]>
Python 2 had two integer types: int and long. Python 3 dropped the
latter, as it made the int type automatically support bigger numbers.
As a result, Python 3 lost the 'L' suffix on integer literals.
This probably doesn't make much difference when compiling the generated
C code, but adding it explicitly means that both Python 2 and 3 generate
the exact same C code anyway, which makes it easier to compare and check
for discrepancies when moving to Python 3.
Signed-off-by: Mathieu Bridon <[email protected]>
Reviewed-by: Dylan Baker <[email protected]>
In both Python 2 and 3, zlib.Compress.compress() takes a byte string,
and returns a byte string as well.
In Python 2, the script was working because:
1. string literals were byte strings;
2. opening a file in unicode mode, reading from it, then passing the
unicode string to compress() would automatically encode to a byte
string;
On Python 3, the above two points are not valid any more, so:
1. zlib.Compress.compress() refuses the passed unicode string;
2. compressed_data, defined as an empty unicode string literal, can't be
concatenated with the byte string returned by compress();
This commit fixes this by explicitly using byte strings where
appropriate, so that the script works on both Python 2 and 3.
Signed-off-by: Mathieu Bridon <[email protected]>
Reviewed-by: Eric Engestrom <[email protected]>
The latter is a constructor for file objects, but when actually opening
a file, using the former is more idiomatic.
In addition, file() is not a builtin any more in Python 3, so this makes
the script compatible with both Python 2 and Python 3.
Signed-off-by: Mathieu Bridon <[email protected]>
Reviewed-by: Eric Engestrom <[email protected]>
Reviewed-by: Dylan Baker <[email protected]>
The XML parser wants byte strings, not unicode strings.
In both Python 2 and 3, opening a file without specifying the mode will
open it for reading in text mode ('r').
On Python 2, the read() method of the file object will return byte
strings, while on Python 3 it will return unicode strings.
Explicitly specifying the binary mode ('rb') makes the behaviour
identical in both Python 2 and 3, returning what the XML parser
expects.
Signed-off-by: Mathieu Bridon <[email protected]>
Reviewed-by: Eric Engestrom <[email protected]>
Reviewed-by: Dylan Baker <[email protected]>
The hex() builtin returns a string containing the hexadecimal
representation of an integer.
When the argument is not an integer, then the function calls that
object's __hex__() method, if one is defined. That method is supposed to
return a string.
While that's not explicitly documented, that string is supposed to be a
valid hexadecimal representation of a number. Python 2 doesn't enforce
this though, which is why we got away with returning things like
'NIR_TRUE' which are not numbers.
In Python 3, the hex() builtin instead calls an object's __index__()
method, which itself must return an integer. That integer is then
automatically converted to a string with its hexadecimal representation
by the rest of the hex() function.
As a result, we really can't make this compatible with Python 3 as it
is.
The solution is to stop using the hex() builtin, and instead use a hex()
object method, which can return whatever we want, in Python 2 and 3.
Signed-off-by: Mathieu Bridon <[email protected]>
Reviewed-by: Eric Engestrom <[email protected]>
Reviewed-by: Dylan Baker <[email protected]>
In Python 2, iterating over a byte-string yields single-byte strings,
and we can pass them to ord() to get the corresponding integer.
In Python 3, iterating over a byte-string directly yields those
integers.
Transforming the byte string into a bytearray gives us a list of the
integers corresponding to each byte in the string, removing the need to
call ord().
This makes the script compatible with both Python 2 and 3.
Signed-off-by: Mathieu Bridon <[email protected]>
Reviewed-by: Eric Engestrom <[email protected]>
Detect if the display (X-Server) GPU and the Prime render-offload GPU prefer
different channel ordering for color depth 30 formats ([X/A]BGR2101010
vs. [X/A]RGB2101010) and perform format conversion during the blitImage()
detiling op from tiled backbuffer -> linear buffer.
For this we need to find the visual (= red channel mask) for the
X-Drawable used to display on the server GPU. We use the same proven
logic for finding that visual as in commit "egl/x11: Handle both depth
30 formats for eglCreateImage()".
This is mostly to allow "NVidia Optimus" at depth 30: Intel and AMD
GPUs prefer xRGB2101010 ordering, whereas NVidia GPUs prefer
xBGR2101010 ordering, so we can offload to nouveau without getting
funky colors.
Tested on Intel single GPU, NVidia single GPU, and Intel + NVidia Prime
offload with DRI3/Present.
Note: An unintended but pleasant surprise of this patch is that it also
seems to make the modesetting-ddx of server 1.20.0 work at depth 30
on nouveau, at least with unredirected "classic" X rendering, and
with redirected desktop compositing under XRender accel, and with OpenGL
compositing under GLX. Only X11 compositing via OpenGL + EGL still gives
funky colors. modesetting-ddx + glamor are not yet ready to deal with
nouveau's ABGR2101010 format, and treat it as ARGB2101010, also exposing
X-visuals with ARGB2101010 style channel masks. Somehow this seems to
trigger the logic in this patch on modesetting-ddx + depth 30 + DRI3
buffer sharing, doing the "wrong" channel swizzling that then cancels
out the "wrong" swizzling of glamor, so we end up with the proper pixel
formatting in the scanout buffer :). This was so far tested on an NVA5
Tesla card under KDE5 Plasma as shipped with Ubuntu 16.04.4 LTS.
Signed-off-by: Mario Kleiner <[email protected]>
Cc: Ilia Mirkin <[email protected]>
Reviewed-by: Eric Engestrom <[email protected]>
We need to distinguish if the backing storage of a pixmap
is XRGB2101010 or XBGR2101010, as different GPU hardware supports
different formats. NVidia hardware prefers XBGR, whereas AMD and
Intel are happy with XRGB.
Use the red channel mask of the first depth 30 visual of
the x-screen to distinguish which hw format to choose.
This fixes desktop composition of color depth 30 windows
when the X11 compositor uses EGL.
v2: Switch from using the visual of the root window to simply
using the first depth 30 visual for the x-screen, as testing
shows that each driver only exports either xrgb ordering or
xbgr ordering for the channel masks of its depth 30 visuals,
so this should be unambiguous and avoid trouble if X ever
supports depth 30 pixmaps on screens with a non-depth 30 root
window visual. This is per Michel's suggestion.
v3: No change to v2, but spent some time testing this more on
AMD hw, with my software hacked up to intentionally choose
pixel formats/visual with the non-preferred xBGR2101010
ordering on the ati-ddx, also with a standard non-OpenGL
X-Window with depth 30 visual, to make sure that things show
up properly with the right colors on the screen when going
through EGL+OpenGL based compositing on KDE-5. Iow. to confirm
that my explanation to the v2 patch on the mailing list of why
it should work and the actual practice agree (or possibly that
i am good at fooling myself during testing ;).
v4: Drop the local `red_mask` and just `return visual->red_mask`/
`return 0`, as suggested by Eric Engestrom.
Rebased onto current master, to take the cleanup via the new
function dri2_format_for_depth() into account.
Signed-off-by: Mario Kleiner <[email protected]>
Reviewed-by: Eric Engestrom <[email protected]>
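
A hedged Xlib sketch of that lookup (Mesa's egl/x11 code sits on xcb,
and the helper name here is made up): scan for the first depth 30
visual and return its red channel mask:

    #include <X11/Xlib.h>
    #include <X11/Xutil.h>

    /* Hedged sketch: 0x3ff means red in the low bits (xBGR2101010),
     * 0x3ff00000 means red in the high bits (xRGB2101010). */
    static unsigned long
    first_depth30_red_mask(Display *dpy)
    {
       XVisualInfo templ = { .depth = 30 };
       int count = 0;
       XVisualInfo *vis = XGetVisualInfo(dpy, VisualDepthMask, &templ, &count);
       unsigned long red_mask = (vis && count > 0) ? vis[0].red_mask : 0;

       if (vis)
          XFree(vis);
       return red_mask;
    }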
Add support for XBGR2101010 and ABGR2101010 formats.
Signed-off-by: Daniel Stone <[email protected]>
Reviewed-by: Mario Kleiner <[email protected]>
Tested-by: Mario Kleiner <[email protected]>
Tested-by: Ilia Mirkin <[email protected]>
Reviewed-by: Eric Engestrom <[email protected]>
Add support for XBGR2101010 and ABGR2101010.
Signed-off-by: Daniel Stone <[email protected]>
Reviewed-by: Eric Engestrom <[email protected]>
Reviewed-by: Mario Kleiner <[email protected]>
Tested-by: Mario Kleiner <[email protected]>
Tested-by: Ilia Mirkin <[email protected]>
Fixes VK-GL-CTS CL#2567
Reviewed-by: Jason Ekstrand <[email protected]>
The hardware doesn't support byte immediates, so similar to setup_imm_df()
for doubles, these helpers work by loading the constant value into a
VGRF.
Reviewed-by: Jason Ekstrand <[email protected]>
Signed-off-by: Rhys Perry <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
Signed-off-by: Marek Olšák <[email protected]>
Signed-off-by: Rhys Perry <[email protected]>
Signed-off-by: Marek Olšák <[email protected]>
Signed-off-by: Rhys Perry <[email protected]>
Signed-off-by: Marek Olšák <[email protected]>
Signed-off-by: Rhys Perry <[email protected]>
Signed-off-by: Marek Olšák <[email protected]>
This commit does not add support for the opcodes in gallivm or tgsi_to_nir.c
Signed-off-by: Rhys Perry <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
Signed-off-by: Marek Olšák <[email protected]>
virgl now exposes GLES 3.1 and 3.2.
virgl with an up-to-date host renderer now exposes GL 4.3.
This fixes the following dEQP-GLES31 cases from NotSupported to
Pass for me:
- dEQP-GLES31.functional.blend_equation_advanced.state_query.*
- dEQP-GLES31.functional.blend_equation_advanced.basic.*
- dEQP-GLES31.functional.blend_equation_advanced.srgb.*
- dEQP-GLES31.functional.blend_equation_advanced.msaa.*
- dEQP-GLES31.functional.blend_equation_advanced.barrier.*
- dEQP-GLES31.functional.draw_buffers_indexed.overwrite_*advanced_blend_eq*
- dEQP-GLES31.functional.state_query.indexed.blend_equation_advanced_*
- dEQP-GLES31.functional.debug.negative_coverage.*.advanced_blend.*
Signed-off-by: Erik Faye-Lund <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
In gallium, supporting FBFETCH means supporting non-coherent fetches, but
in virglrenderer, due to technical reasons this is backed by coherent
fetches instead. This means we don't need to do anything for the barriers.
However, if we don't have a texture_barrier implementation, we get crashes
because the non-coherent extension is exposed.
So, let's leave this as a NOP for now.
[airlied: I've got a more complete impl of this somewhere, once we
land the host side].
Reviewed-by: Dave Airlie <[email protected]>
Signed-off-by: Erik Faye-Lund <[email protected]>
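
The stub amounts to an empty pipe_context hook (hedged sketch; the
gallium texture_barrier signature is real, the struct wiring is
abbreviated):

    /* Hedged sketch: virglrenderer backs FBFETCH with coherent fetches,
     * so the non-coherent barrier can safely do nothing for now. */
    static void
    virgl_texture_barrier(struct pipe_context *ctx, unsigned flags)
    {
       (void)ctx;
       (void)flags;
    }

    /* ...and in context creation: */
    vctx->base.texture_barrier = virgl_texture_barrier;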