Commit message log

Move it from intel_screen.c to intel_context.c. Redeclare as non-static.
A future commit will use it in multiple files.
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
v2: Use base-10 for versions like gl_context::Version. Suggested by Ken.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
v2: Use base-10 for versions like gl_context::Version. Suggested by Ken.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Signed-off-by: Ian Romanick <[email protected]>
In the i965 driver, the chipset must be i965.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
This forces the drivers to do at least some validation of context API
and version before creating the context. In the r100 and r200 drivers, this
means they no longer do any post-hoc validation.
v2: Actually reject compatibility profile 3.2+ contexts. Thanks Ken.
Signed-off-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
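
A minimal sketch of the up-front check this implies; the error code is from
dri_interface.h, but the names and placement are approximated, not the
commit's actual code:

    /* Reject a compatibility-profile 3.2+ context before any hardware
     * context is created. */
    if (api == API_OPENGL_COMPAT && version >= 32) {
       *error = __DRI_CTX_ERROR_BAD_API;
       return false;
    }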
v2: Fix a bad comment from before I gave up and decided to just use doubles.
Reviewed-by: Jordan Justen <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
As we get into supporting GL 3.x core, we come across more and more features
of the API that depend on the version number as opposed to just the extension
list. This will let us do version checks more sanely than
"(VersionMajor == 3 && VersionMinor >= 2) || VersionMajor >= 4".
v2: Fix a bad <= 30 check.
Reviewed-by: Kenneth Graunke <[email protected]>
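
With the base-10 encoding (Version = major * 10 + minor, so GL 3.3 is
stored as 33), the check above collapses to one comparison; a minimal
sketch:

    if (ctx->Version >= 32) {
       /* paths that require GL 3.2 or later */
    }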
This turns on window system MSAA.
This patch changes the ids of many GLX visuals and configs, but that
couldn't be prevented. I attempted to preserve the ids of extant configs
by appending the multisample configs to the end of the extant ones. But
somewhere, perhaps in the X server, the configs are reordered with
multisample configs interspersed among the singlesample ones.
Test results:
Tested with xonotic and `glxgears -samples 1` on Ivybridge.
No piglit regressions on Ivybridge.
On Sandybridge, passes 68/70 of oglconform's
winsys multisample tests. The two failing tests are:
multisample(advanced.pixelmap.depth)
multisample(advanced.pixelmap.depthCopyPixels)
These tests hang the GPU (on kernel 3.4.6) due to
a glDrawPixels/glReadPixels pair on an MSAA depth buffer. I don't expect
real-world apps to do that, so I'm not too concerned about the hang.
On Ivybridge, passes 69/70. The failing case is
multisample(advanced.line.changeWidth).
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
This function felt sloppy, so this patch cleans it up a little bit.
- Rename `color` to `i`. It is not a color value, only an iterator int.
- Move `depth_bits[0] = 0` into the non-accum loop because that is where
it is used. The accum loop later overwrites depth_bits[0].
- Rename `depth_factor` to `num_depth_stencil_bits`.
- Redefine `msaa_samples_array` as static const because it is never
modified. Rename to `singlesample_samples`.
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
If either argument to driConcatConfigs(a, b) is null or the empty list,
then simply return the other argument as the resultant list.
All callers were accomplishing that same behavior anyway, and each caller
accomplished it with the same pattern. So this patch moves that external
pattern into the function.
Reviewed-by: <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
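
A sketch of the resulting behavior; the signature is approximated from the
commit text, not copied from the source:

    __DRIconfig **
    driConcatConfigs(__DRIconfig **a, __DRIconfig **b)
    {
       if (a == NULL || a[0] == NULL)
          return b;
       if (b == NULL || b[0] == NULL)
          return a;
       /* ...concatenate the two lists as before... */
    }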
DRI2 configs were constructed in intelInitScreen2. That function already
does too much, so move the config-creation code, verbatim, into a new
function, intel_screen_make_configs.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Paul Berry <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
Add a new param, num_samples, to intel_create_renderbuffer and
intel_create_private_renderbuffer.
No multisample GL config is yet advertised, so the value of num_samples is
currently 0. For server-owned winsys buffers, gl_renderbuffer::NumSamples
is not yet used.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Paul Berry <[email protected]> (v1)
Signed-off-by: Chad Versace <[email protected]>
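
A sketch of the extended prototypes; only the num_samples parameter comes
from the commit message, the rest is approximated:

    struct intel_renderbuffer *
    intel_create_renderbuffer(gl_format format, unsigned num_samples);

    struct intel_renderbuffer *
    intel_create_private_renderbuffer(gl_format format, unsigned num_samples);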
The 16-bit depth case did not follow the function's prevalent pattern.
Reviewed-by: Paul Berry <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
Nearly the whole function body was contained in the 'else' branch. The
'if' branch did one thing: return early with an error. Clean things up by
moving all the code out of the 'else' branch. Decreases max nesting level
from 4 to 3.
Reviewed-by: Paul Berry <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
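
The shape of the refactor, as a generic illustrative sketch (not the
actual function):

    /* Before: the whole body nested under 'else'. */
    if (bad_input) {
       return error;
    } else {
       /* ...entire function body... */
    }

    /* After: early return; the body moves to the top level. */
    if (bad_input)
       return error;
    /* ...entire function body... */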
After commit "intel: Convert to using private depth/stencil buffers", we
request from DRI2GetBuffersWithFormat only the front left and back left
buffers. We no longer request depth and stencil buffers.
Assert that in intelAllocateBuffer and remove the related dead code.
Reviewed-by: Paul Berry <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
The error was being set on the non-error path, rather
than the error path.
NOTE: This is a candidate for the 8.0 branch.
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
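
The bug pattern, as an illustrative sketch with hypothetical names:

    /* Before (buggy): the error was assigned when allocation succeeded. */
    if (buffer != NULL)
       *error = __DRI_CTX_ERROR_NO_MEMORY;

    /* After: the error is assigned only when allocation fails. */
    if (buffer == NULL)
       *error = __DRI_CTX_ERROR_NO_MEMORY;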
Unigine Heaven (at least) has a bug where it incorrectly uses the
GL_ARB_blend_func_extended extension.
Dual source blending allows two color outputs per render target;
individual shader outputs can be assigned to be either the first or
second blending input by setting the 'index' via one of two methods:
- An API call: glBindFragDataLocationIndexed()
- The GLSL 'layout' qualifier provided by GL_ARB_explicit_attrib_location
Both of these only work on user defined fragment shader outputs; it's an
error to use either on built-in outputs like gl_FragData.
Unigine uses gl_FragData and gl_FragColor exclusively, and doesn't even
attempt to use either method to set index == 1. However, it does set
the blending function to SRC1 enums, which requires a fragment shader
output with index == 1 or else rendering is undefined.
In other words, enabling ARB_blend_func_extended causes Unigine to
render incorrectly, resulting in an apparent regression, even though our
driver code (as far as I can tell) is perfectly fine.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=50291
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
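
For reference, the two correct ways to route a user-defined output to
blend input 1, sketched with a hypothetical output name (Unigine does
neither):

    /* API route: color number 0, index 1. Takes effect at link time. */
    glBindFragDataLocationIndexed(program, 0, 1, "second_src_color");

    /* GLSL route, via GL_ARB_explicit_attrib_location:
     *    layout(location = 0, index = 1) out vec4 second_src_color;
     */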
It's been broken (using a NULL getBuffersWithFormat() instead of
getBuffers()) due to a copy-and-paste error for a year now.
GetBuffersWithFormat has been around since 2009, so I don't feel any
guilt in not supporting it.
Reviewed-by: Chad Versace <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
This means that GLX buffer sharing of these no longer works. On the
other hand, just *look* at this code reduction.
v2:
- [chad] Fix intelCreateBuffer for gen < 6. When the branch for
!screen->hw_has_separate_stencil was taken,
intel_create_private_renderbuffer was incorrectly not used.
- [chad] Remove all code in intel_process_dri2_buffer for processing
depth, stencil, and hiz buffers. That code is now dead.
CC: Eric Anholt <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
This generalizes and replaces gbm_bo_create_for_egl_image. gbm_bo_import
will create a gbm_bo from either an EGLImage or a struct wl_buffer.
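
A hedged usage sketch, assuming a gbm_device *gbm and an EGLImage already
in scope; the usage flag is illustrative:

    #include <gbm.h>

    /* Wrap an existing EGLImage in a gbm_bo. A struct wl_buffer imports
     * the same way with GBM_BO_IMPORT_WL_BUFFER. */
    struct gbm_bo *bo =
       gbm_bo_import(gbm, GBM_BO_IMPORT_EGL_IMAGE, egl_image,
                     GBM_BO_USE_SCANOUT);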
When we don't intend to texture from or render to a __DRIimage we
use __DRI_IMAGE_FORMAT_NONE. In that case, we just create the __DRIimage
to reference the underlying buffer, and will create usable __DRIimages
from it using createSubImage later.
If we try to use _mesa_get_format_bytes() on MESA_FORMAT_NONE in
a debug build, we hit an assertion, so let's not do that.
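
The guard this implies, sketched with approximated field names:

    /* MESA_FORMAT_NONE has no defined size; skip the query that would
     * assert in a debug build. */
    if (image->format != MESA_FORMAT_NONE)
       cpp = _mesa_get_format_bytes(image->format);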
We use the new miptree offset to pick out the sub-image when we bind
the EGLImage to a texture.
Signed-off-by: Kristian Høgsberg <[email protected]>
We have the same switch and allocation code in two places.
Signed-off-by: Kristian Høgsberg <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Signed-off-by: Kristian Høgsberg <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Signed-off-by: Kristian Høgsberg <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
This new gbm entry point allows writing data into a gbm bo. The bo has
to be created with the GBM_BO_USE_WRITE flag, and it's only required to
work for GBM_BO_USE_CURSOR_64X64 bos.
The gbm API is designed to be the glue layer between EGL and KMS, but there
was never a mechanism to initialize a buffer suitable for use with KMS
hw cursors. The hw cursor bo is typically not compatible with anything EGL
can render to, and thus there's no way to get data into such a bo.
gbm_bo_write() fills that gap while staying out of the business of
efficient CPU-to-GPU pixel transfers.
Reviewed-by: Ander Conselvan de Oliveira <[email protected]>
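
A hedged usage sketch for a hardware-cursor buffer, assuming a
gbm_device *gbm in scope:

    #include <gbm.h>
    #include <string.h>

    /* Create a cursor-capable, writable bo and upload 64x64 ARGB pixels. */
    uint32_t pixels[64 * 64];
    memset(pixels, 0xff, sizeof(pixels));   /* solid white, for example */

    struct gbm_bo *cursor =
       gbm_bo_create(gbm, 64, 64, GBM_FORMAT_ARGB8888,
                     GBM_BO_USE_CURSOR_64X64 | GBM_BO_USE_WRITE);
    if (cursor != NULL)
       gbm_bo_write(cursor, pixels, sizeof(pixels));   /* returns 0 on success */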
Enable MESA_FORMAT_RGBX8888_REV for RGBX. Android software
requires the RGBX8888 format to be supported for software rendering.
That requires EGL to be capable of generating images from this
format.
Signed-off-by: Sean V Kelley <[email protected]>
Signed-off-by: Chad Versace <[email protected]>
Only images created with intel_create_image() had the field properly
set. Set it also on intel_dup_image(), intel_create_image_from_name()
and intel_create_image_from_renderbuffer().
The param wasn't added until drm-intel-next for 3.4, so we were
missing our various LLC fast-paths.
When using a separate stencil buffer, i965 requires that the pitch of
the buffer (in the 3DSTATE_STENCIL_BUFFER command) be specified as 2x
the actual pitch.
Previously this was accomplished by doubling the "cpp" and "pitch"
values stored in the intel_region data structure, and halving the
height. However, this was confusing, and it led to a subtle (but
benign) bug: since a stencil buffer is W-tiled, its true height must
be aligned to a multiple of 64; we were accidentally aligning its faux
height to a multiple of 64, causing memory to be wasted.
Note that for window system stencil buffers, the DDX also doubles the
cpp and pitch values. To facilitate fixing this DDX server bug in the
future, we fix the cpp and pitch values we receive from the X server
only if cpp has the "incorrect" value of 2.
Acked-by: Kenneth Graunke <[email protected]>
Reviewed-by: Chad Versace <[email protected]>
v2: Clarify comments about the DDX.
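
The workaround, as a sketch with approximated names; the halving simply
mirrors the DDX's doubling described above:

    /* True S8 cpp is 1; the DDX reports 2. Un-double only when the bogus
     * value is seen, so a fixed DDX keeps working unmodified. */
    if (is_stencil && region->cpp == 2) {
       region->cpp = 1;
       region->pitch /= 2;
    }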
Certain applications don't call SwapBuffers before exiting. Yet, we'd
really like to see a bitmap containing the final rendered image even if
they choose never to present it.
In particular, Piglit tests (at least with -auto -fbo) fall into this
category. Many of them failed to dump any images at all.
Dumping one final image at context destruction time seems to work.
We may wish to pursue a more elegant solution later.
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Yuanhan Liu <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
It also asks for BMPs in the aub file at SwapBuffers time.
Reviewed-by: Yuanhan Liu <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
The force-enable option is dropped, now that the hardware we were
concerned about has HiZ on by default. Now, instead of doing
INTEL_HIZ=0 to test disabling hiz, you can set hiz=false.
v2: Disable separate stencil on gen6 when HiZ is turned off.
(Previously, this had to be done manually as well.)
Reviewed-by: Kenneth Graunke <[email protected]> (v1)
There's even a comment in the code containing the right swizzling
computations!
Previously this had not been noticed because we need to manually
enable swizzling on snb/ivb (kernel 3.4 will do that) and we
don't use the separate stencil on ilk (where the BIOS enables
swizzling). This fixes
piglit ./bin/fbo-stencil readpixels GL_DEPTH32F_STENCIL8 -auto
on recent drm-intel-next kernels.
Also remove the comment about ivb, it's stale now.
Swizzling detection is done by allocating a temporary X-tiled
buffer object. Unfortunately kernels before v3.2 lie on snb/ivb
because they claim that swizzling is enabled, but it isn't. The
kernel commit that fixes this for backport to pre-v3.2 is
commit acc83eb5a1e0ae7dbbf89ca2a1a943ade224bb84
Author: Daniel Vetter <[email protected]>
Date: Mon Sep 12 20:49:16 2011 +0200
drm/i915: fix swizzling on gen6+
But if the kernel doesn't lie, this now works on both swizzling
and non-swizzling machines.
NOTE: This is a candidate for the 8.0 branch.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
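
The detection technique, as a hedged libdrm sketch; the buffer size and
name are placeholders:

    #include <stdbool.h>
    #include <intel_bufmgr.h>
    #include <i915_drm.h>

    /* Allocate a small X-tiled bo and ask the kernel what bit-6 swizzle
     * mode it applies to it. */
    uint32_t tiling = I915_TILING_X, swizzle_mode;
    unsigned long pitch;
    drm_intel_bo *bo =
       drm_intel_bo_alloc_tiled(bufmgr, "swizzle probe", 64, 64, 4,
                                &tiling, &pitch, 0);
    drm_intel_bo_get_tiling(bo, &tiling, &swizzle_mode);
    bool swizzled = swizzle_mode != I915_BIT_6_SWIZZLE_NONE;
    drm_intel_bo_unreference(bo);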
To indicate support for the format query.
Reviewed-by: Kristian Høgsberg <[email protected]>
Signed-off-by: Jesse Barnes <[email protected]>
GBM needs the buffer format in order to communicate with DRM and clients
for things like scanout.
So track the DRI format requested in the various back ends and use it to
return the DRI format back to GBM when requested. GBM will then map
this into the GBM surface type (which is in turn based on the DRM fb
format list).
Signed-off-by: Jesse Barnes <[email protected]>
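
A hedged sketch of what a GBM consumer can now query, e.g. to set up a KMS
framebuffer:

    uint32_t format = gbm_bo_get_format(bo);   /* e.g. GBM_FORMAT_XRGB8888 */
    uint32_t stride = gbm_bo_get_stride(bo);
    uint32_t handle = gbm_bo_get_handle(bo).u32;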
It was concerned that the 4 pad bytes on LP64 were uninitialized.
Rely on the libdrm HAS_LLC parameter to verify whether the hardware
supports it. If the libdrm version does not support this check, fall back
to the older detection method, which assumed that GPUs newer than Gen6 have it.
Reviewed-by: Kenneth Graunke <[email protected]>
Signed-off-by: Eugeni Dodonov <[email protected]>
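
A hedged sketch of the detection logic; the fallback mirrors the older
generation check:

    #include <string.h>
    #include <stdbool.h>
    #include <xf86drm.h>
    #include <i915_drm.h>

    static bool
    detect_llc(int fd, int gen)
    {
       drm_i915_getparam_t gp;
       int has_llc = 0;

       memset(&gp, 0, sizeof(gp));
       gp.param = I915_PARAM_HAS_LLC;
       gp.value = &has_llc;

       /* Newer kernels report LLC support directly... */
       if (drmIoctl(fd, DRM_IOCTL_I915_GETPARAM, &gp) == 0)
          return has_llc != 0;

       /* ...older ones don't know the param; assume LLC on Gen6+. */
       return gen >= 6;
    }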
This can be used to work around broken application behavior, like in
Unigine where it attempts to use texture arrays without declaring
either "#extension GL_EXT_texture_array : enable" or "#version 130".
NOTE: This is a candidate for the 8.0 branch.
Reviewed-by: Kenneth Graunke <[email protected]>
Drivers that rely on swrast need to do this, as with swrast_texture_image.
The entry point is supposed to validate that the EGLImage is suitable for
the passed in usage flags, but that was never implemented.
It is better to test if (intel == NULL) and simply return in that case.
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Chad Versace <[email protected]>