Half float support already exists for desktop GL. Reuse the
ARB_half_float_vertex enable bit and account for the different enum to
enable the extension for ES2.
Signed-off-by: Kevin Strasser <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
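As a usage illustration (not part of the patch), an ES2 client relying on OES_vertex_half_float passes the ES-specific token where a desktop GL client would pass GL_HALF_FLOAT; the attribute location and buffer below are hypothetical:
#include <stdint.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
static void setup_half_float_attrib(GLuint attrib_loc, GLuint vbo)
{
   glBindBuffer(GL_ARRAY_BUFFER, vbo);
   /* GL_HALF_FLOAT_OES is the ES2 enum; desktop GL uses GL_HALF_FLOAT
    * via ARB_half_float_vertex. */
   glVertexAttribPointer(attrib_loc, 3, GL_HALF_FLOAT_OES, GL_FALSE,
                         3 * sizeof(uint16_t), (const void *) 0);
   glEnableVertexAttribArray(attrib_loc);
}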
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=97447
Signed-off-by: Jordan Justen <[email protected]>
Tested-by: Evan Odabashian <[email protected]>
Reviewed-by: Iago Toral Quiroga <[email protected]>
This patch should have been part of commit e592f7df.
When there are multiple render targets with alpha testing enabled and the
fragment shader doesn't write to draw buffer zero, the GPU hangs on SKL.
No GPU hang is seen on HSW. The simulator gives a warning for all gen6+ h/w:
"Illegal render target write message length 0xa expected 0xc"
This patch fixes the GPU hang as well as the simulator warning with the
new piglit test fbo-mrt-alphatest-no-buffer-zero-write:
https://patchwork.freedesktop.org/patch/118212
No regressions in Jenkins CI system.
Cc: "12.0 13.0" <[email protected]>
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Ben Widawsky <[email protected]>
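For reference, the failing situation can be sketched roughly as follows (an illustrative reduction of what the piglit test exercises, not the test itself, and assuming a compat context with loaded GL entry points): alpha test enabled, two draw buffers bound, and a fragment shader that never writes draw buffer zero.
static const char *fs_src =
   "#version 120\n"
   "void main() {\n"
   "   /* gl_FragData[0] (draw buffer zero) intentionally not written */\n"
   "   gl_FragData[1] = vec4(0.0, 1.0, 0.0, 0.5);\n"
   "}\n";
static void draw_without_buffer_zero_write(void)
{
   const GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
   glDrawBuffers(2, bufs);
   glEnable(GL_ALPHA_TEST);
   glAlphaFunc(GL_GREATER, 0.25f);
   /* ... bind a program built from fs_src and draw; before this fix the
    * render target write message was mis-sized and SKL hung. */
}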
For gen9+ this will indicate when we should allow hiz based sampling
during rendering.
Improves performance in:
- Synmark's OglDeferred by 2.2% (n=20)
- Synmark's OglShMapPcf by 0.44% (n=20)
v2 by Ben: Add spec reference, and make it fit with some of the changes made
in the previous patches.
Change the check from mt->aux_buf to mt->num_samples. The presence of an aux_buf
isn't enough to determine there isn't a HiZ buffer to use.
v3: It seems all depth surfaces end up with num_samples = 0 by default,
so allow sampling from depth HiZ if num_samples <= 1. (Lionel)
Allow sampling from HiZ only if all LODs are available from the HiZ
buffer. (Lionel)
Signed-off-by: Jordan Justen <[email protected]> (v1)
Signed-off-by: Ben Widawsky <[email protected]> (v2)
Signed-off-by: Lionel Landwerlin <[email protected]> (v3)
Reviewed-by: Topi Pohjolainen <[email protected]>
The original functionality this patch introduces was authored by a patch from
Ken (the commit subject was the same). Since I ended up changing so many patches
in the code before this one, I had some non-trivial decisions to make, and I
didn't feel it was appropriate to keep Ken's name as author (mostly because he
might not like what I've done). Ken's original patch was like 2 LOC :-)
In either case, some credit needs to go to Ken, and to Jordan for a few small
other changes in that original patch.
v2: Back to a smaller diff now that ISL handles most of the actual
programming (Lionel)
Signed-off-by: Ben Widawsky <[email protected]> (v1)
Signed-off-by: Lionel Landwerlin <[email protected]> (v2)
Reviewed-by: Topi Pohjolainen <[email protected]>
Currently it indicates that this is never supported, but soon it will
be supported for gen8+^w gen9+
v2 by Ben:
- Explicitly disable aux_hiz for gen < 9 (with comment)
- squashed in next patch to avoid unused and useless functions
i965: Support sampling with hiz during rendering
For gen8, we can sample from depth while using the hiz buffer. This
allows us to sample depth without resolving from hiz to the depth
texture.
To do this we must resolve to hiz before drawing so we can use the hiz
buffer to sample while rendering. Hopefully the hiz buffer will
already be resolved in most cases because it was previously rendered,
meaning the hiz resolve is a no-op.
Note that this is still controlled by the
intel_miptree_sample_with_hiz function, and we will enable hiz
sampling for gen8 in a separate patch.
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Topi Pohjolainen <[email protected]>
Signed-off-by: Jordan Justen <[email protected]> (v1)
Signed-off-by: Ben Widawsky <[email protected]> (v2)
Reviewed-by: Topi Pohjolainen <[email protected]>
This seems counter to the goal of consolidating hiz, mcs, and later ccs buffers.
Unfortunately, hiz on gen6 is a thing the code supports, and this wart will be
helpful to achieve that. Overall, I believe it does help unify AUX buffers on
gen7+.
I updated the size field which I introduced in the previous patch, even though
we have no use for it.
XXX: As I mentioned in the last patch, the height given to the MCS buffer
allocation in intel_miptree_alloc_mcs() looks wrong, but I don't claim to fully
understand how the MCS buffer is laid out.
v2: rebase on master (Lionel)
Signed-off-by: Ben Widawsky <[email protected]> (v1)
Signed-off-by: Lionel Landwerlin <[email protected]> (v2)
Reviewed-by: Topi Pohjolainen <[email protected]>
This patch will preserve the BO & offset, and not the miptree for the
aux_mcs buffer. Eventually it might make sense to pull the sizing
function out into miptree creation, but for now this should be sufficient
and not too hideous.
v2: Save BO's offset too (Lionel)
v3: Squash previous patch storing the size of the allocated aux buffer
(Lionel)
Fix memory leak with mcs_buf->bo (Lionel)
Signed-off-by: Ben Widawsky <[email protected]> (v1)
Signed-off-by: Lionel Landwerlin <[email protected]> (v2)
Reviewed-by: Topi Pohjolainen <[email protected]>
The next patch will change the map type, and this will make sure there are no
regressions as a result of the other stuff. Since the miptree is newly created,
I believe it is always safe to just map.
It is possible to CPU map this buffer on LLC platforms (it additionally requires
rounding up to tile size). I did experiment with that patch, and found no
performance gains to be had.
I've added in error handling while here. Generally GTT mapping is an operation
which is highly unlikely to fail, but we may as well handle it when it does.
v2: rebase on master (Lionel)
v3: print out error if gtt mapping fails (Topi)
Signed-off-by: Ben Widawsky <[email protected]> (v1)
Signed-off-by: Lionel Landwerlin <[email protected]> (v2)
Reviewed-by: Topi Pohjolainen <[email protected]>
This will allow us to treat HiZ and MCS the same when used as an
auxiliary surface buffer.
v2: (Ben) Minor rebase conflict resolution.
Rename mcs_buf to aux_buf to address upcoming change for hiz specific buffers.
That second thing is essentially a squash of:
i965/gen8: Use intel_miptree_aux_buffer for auxiliary buffer - which didn't need
to be separate in my opinion.
v3: rebase on master (Lionel)
Signed-off-by: Jordan Justen <[email protected]> (v1)
Signed-off-by: Ben Widawsky <[email protected]> (v2)
Signed-off-by: Lionel Landwerlin <[email protected]> (v3)
Reviewed-by: Topi Pohjolainen <[email protected]>
Signed-off-by: Anuj Phogat <[email protected]>
Signed-off-by: Anuj Phogat <[email protected]>
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Ben Widawsky <[email protected]>
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Ben Widawsky <[email protected]>
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Ben Widawsky <[email protected]>
Signed-off-by: Anuj Phogat <[email protected]>
Reviewed-by: Brian Paul <[email protected]>
Reviewed-by: Ben Widawsky <[email protected]>
This fixes a problem seen with gallium drivers vs android wallpaper.
Basically, what happens is:
EGLSurface tmpSurface = mEgl.eglCreatePbufferSurface(mEglDisplay, mEglConfig, attribs);
mEgl.eglMakeCurrent(mEglDisplay, tmpSurface, tmpSurface, mEglContext);
int[] maxSize = new int[1];
Rect frame = surfaceHolder.getSurfaceFrame();
glGetIntegerv(GL_MAX_TEXTURE_SIZE, maxSize, 0);
mEgl.eglMakeCurrent(mEglDisplay, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
mEgl.eglDestroySurface(mEglDisplay, tmpSurface);
... check maxSize vs frame size and bail if needed ...
mEglSurface = mEgl.eglCreateWindowSurface(mEglDisplay, mEglConfig, surfaceHolder, null);
... error checking ...
mEgl.eglMakeCurrent(mEglDisplay, mEglSurface, mEglSurface, mEglContext);
When the window-surface is created, it ends up with the same ptr address
as the recently freed tmpSurface pbuffer surface. Which after many
levels of indirection, results in st_framebuffer_validate() ending up with
the same/old framebuffer object, and in the end never calling the
DRIimageLoaderExtension::getBuffers(). Then in droid_swap_buffers(), the
dri2_surf is still the old pbuffer surface (with dri2_surf->buffer being
NULL, obviously, so when wallpaper app calls eglSwapBuffers() nothing
gets enqueued to the compositor). Resulting in a black/blank background
layer.
Note that at the EGL layer, when the context is unbound, EGL drops its
references to the draw and read buffer as well.
Signed-off-by: Rob Clark <[email protected]>
Tested-by: Robert Foss <[email protected]>
Acked-by: Tapani Pälli <[email protected]>
This fixes random crashes with MSVC release builds. It seems the
members are implicitly initialized to zero with gcc, but not MSVC.
In particular, the tex_offset_num_offset field was non-zero causing
a loop over the NULL tex_offsets array to crash.
Zero-init those fields and a few others to be safe.
The regression began with acc23b04cfd64e "ralloc: remove memset from
ralloc_size".
Reviewed-by: Marek Olšák <[email protected]>
GL_ARB_ES3_compatibility brings ETC2/EAC formats to desktop GL.
The meaning of the GL compressed format list is pretty vague - it's
supposed to return formats for "general-purpose usage". (GL 4.2
deprecates the list because of this.) Basically everyone interprets
this as "linear RGB/RGBA".
ETC2/EAC meets that criteria, so while we shouldn't be required to add
it to the list, there's also little harm in doing so, at least on
platforms with native support. I doubt anyone is using this list for
much anyway, so even on platforms without native support, it's probably
not a big deal.
Makes the following GL45-CTS.gtf43 tests pass:
* GL3Tests.eac_compression_r11.gl_compressed_r11_eac
* GL3Tests.eac_compression_rg11.gl_compressed_rg11_eac
* GL3Tests.eac_compression_signed_r11.gl_compressed_signed_r11_eac
* GL3Tests.eac_compression_signed_rg11.gl_compressed_signed_rg11_eac
* GL3Tests.etc2_compression_rgb8.gl_compressed_rgb8_etc2
* GL3Tests.etc2_compression_rgb8_pt_alpha1.gl_compressed_rgb8_pt_alpha1_etc2
* GL3Tests.etc2_compression_rgba8.gl_compressed_rgba8_etc2
Signed-off-by: Kenneth Graunke <[email protected]>
Reviewed-by: Eduardo Lima Mitev <[email protected]>
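For completeness, a sketch of how an application would observe the new entries (the query tokens and format enums are standard GL; the helper name is made up, and a GL context with loaded entry points is assumed):
#include <stdbool.h>
#include <stdlib.h>
static bool list_contains_etc2(void)
{
   GLint num = 0;
   glGetIntegerv(GL_NUM_COMPRESSED_TEXTURE_FORMATS, &num);
   GLint *formats = calloc(num > 0 ? num : 1, sizeof(GLint));
   glGetIntegerv(GL_COMPRESSED_TEXTURE_FORMATS, formats);
   bool found = false;
   for (GLint i = 0; i < num; i++) {
      if (formats[i] == GL_COMPRESSED_RGB8_ETC2 ||
          formats[i] == GL_COMPRESSED_RGBA8_ETC2_EAC)
         found = true;
   }
   free(formats);
   return found;
}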
A (latent) bug in VDPAU interop was exposed by commit
e5cc84dd43be066c1dd418e32f5ad258e31a150a.
Before that commit, the st_vdpau code created samplers with
first_layer == last_layer == 1 that the general texture handling code
would immediately delete and re-create, because the layer does not match
the information in the GL texture object.
This was correct behavior at least in the DMABUF case, because the imported
resource is supposed to have the correct offset already applied. In the
non-DMABUF case, this was just plain wrong but apparently nobody noticed.
After that commit, the state tracker assumes that an existing sampler is
correct at all times. Existing samplers are supposed to be deleted when
they may become invalid, and they will be created on-demand. This meant
that the sampler with first_layer == last_layer == 1 stuck around, leading
to rendering artefacts (on radeonsi), command stream failures (on r600), and
assertions (in debug builds everywhere).
This patch fixes the problem by simply not creating a sampler at all in
st_vdpau_map_surface. We rely on the generic texture code to do the right
thing, adding the layer_override to make the non-DMABUF case work.
v2: add the layer_override
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=98512
Cc: 13.0 <[email protected]>
Cc: Christian König <[email protected]>
Cc: Ilia Mirkin <[email protected]>
Reviewed-by: Marek Olšák <[email protected]> (v1)
Reviewed-by: Christian König <[email protected]>
When splitting up loads, we have to add 16 bytes to the offset for
the high components, just like already happens for stores.
Fixes arb_gpu_shader_fp64@shader_storage@layout-std140-fp64-shader.
Cc: 13.0 <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
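As a concrete example of the adjustment (taking a std140 dvec4 as the assumed case): the value occupies 32 bytes and the load is split into two 16-byte halves, so the xy components are read at the original offset and the zw components at offset + 16, which is the same 16-byte bump the store path already applied.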
Valgrind reports that we use cfg.cycle_count uninitialised, so zero the
cfg_t on construction.
Fixes: 52d2b28f7f10 ("ralloc: use rzalloc where it's necessary")
Signed-off-by: Chris Wilson <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
commit cc6aa1d161280f10ded7834d1ec2413bc97589fe changed gl_program creation
to use rzalloc, but one instance of program creation was still
using calloc.
Signed-off-by: Tapani Pälli <[email protected]>
Reviewed-by: Juan A. Suarez <[email protected]>
Reviewed-by: Iago Toral Quiroga <[email protected]>
Reviewed-by: Timothy Arceri <[email protected]>
This moves the delete linked shaders call to
_mesa_clear_shader_program_data() which makes sure we delete them
before returning due to any validation problems.
It also reduces some code duplication.
From the OpenGL 4.5 Core spec:
"If LinkProgram failed, any information about a previous link of
that program object is lost. Thus, a failed link does not restore
the old state of program.
...
If one of these commands is called with a program for which
LinkProgram failed, no error is generated unless otherwise noted.
Implementations may return information on variables and interface
blocks that would have been active had the program been linked
successfully. In cases where the link failed because the program
required too many resources, these commands may help applications
determine why limits were exceeded."
Therefore it's expected that we shouldn't be able to query the
program that failed to link and retrieve information about a
previously successful link.
Before this change the linker was doing validation before freeing
the previously linked shaders and therefore could exit on failure
before they were freed.
This change also fixes an issue in compat profile where a program
with no shaders attached is expected to fall back to fixed function
but was instead trying to relink IR from a previous link.
Reviewed-by: Tapani Pälli <[email protected]>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=97715
Cc: "13.0" <[email protected]>
This will allow us to use ralloc_parent() on the info field and fix
a regression in nir_sweep() caused by e1af20f18a8.
This is intended to be a temporary requirement that will be removed
when we finish separating shader_info from nir_shader.
Reviewed-by: Eric Anholt <[email protected]>
This allows us to use ralloc_parent() to see which data structure owns
shader_info, which lets us fix a regression in nir_sweep().
This will also allow us to move some fields from gl_linked_shader to
gl_program, which will allow us to do some clean-ups like storing
gl_program directly in the CurrentProgram array in gl_pipeline_object
enabling some small validation optimisations at draw time.
Also it is error prone to depend on the gl_linked_shader for
programs in current use because a failed linking attempt will free
information about the current program. In i965 we could be trying
to recompile a shader variant but may have lost some required fields.
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Jason Ekstrand <[email protected]>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=98012
Reviewed-by: Topi Pohjolainen <[email protected]>
Reviewed-by: Timothy Arceri <[email protected]>
Cc: "13.0" <[email protected]>
The emission of vertex attributes corresponding to dvec3 and dvec4
vertex shader input variables was not correct when the <size> passed
to the VertexAttribL* commands was <= 2.
This was because we were using the vertex array size when emitting vertices
to decide if we uploaded a 64-bit floating point attribute as 1 slot (128-bits)
for sizes 1 and 2, or 2 slots (256-bits) for sizes 3 and 4. This caused problems
when mapping the input variables to registers because, for deciding which
registers contain the values uploaded for a certain variable, we use the size
and type given to the variable in the shader, so we will be assigning 256-bits
to dvec3/4 variables, even if we only uploaded 128-bits for them, which happened
when the vertex array size was <= 2.
The patch uses the shader information to emit as 128-bits only those 64-bit floating
point variables that were declared as double or dvec2 in the vertex shader. dvec3 and
dvec4 variables will always be uploaded as 256-bits, independently of the <size> given
to the VertexAttribL* command.
From the ARB_vertex_attrib_64bit specification:
"For the 64-bit double precision types listed in Table X.1, no default
attribute values are provided if the values of the vertex attribute variable
are specified with fewer components than required for the attribute
variable. For example, the fourth component of a variable of type dvec4
will be undefined if specified using VertexAttribL3dv or using a vertex
array specified with VertexAttribLPointer and a size of three."
We are filling these unspecified components with zeros, which coincidentally is
also what the GL44-CTS.vertex_attrib_binding.basic-inputL-case1 expects.
v2: Do not use bitcount (Kenneth Graunke)
Fixes: GL44-CTS.vertex_attrib_binding.basic-inputL-case1 test
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=97287
Reviewed-by: Kenneth Graunke <[email protected]>
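To illustrate the case the spec quote covers (a sketch, not the CTS test; the variable and attribute location are hypothetical, and a GL 4.1+ context is assumed): a dvec4 shader input specified with only three components is still uploaded as two 256-bit slots, and the missing w component is filled with zero as described above.
/* Shader side (assumed): in dvec4 pos; */
static const GLdouble pos3[3] = { 1.0, 2.0, 3.0 };
static void specify_dvec4_with_three_components(GLuint loc)
{
   /* ARB_vertex_attrib_64bit entry point called with <size> = 3. */
   glVertexAttribL3dv(loc, pos3);
}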
3DSTATE_WM_CHROMAKEY isn't programmed anywhere else.
3DSTATE_WM_HZ_OP is programmed, then cleared by blorp during a
HZ op, so repeatedly clearing it after every blorp execution is
redundant.
Signed-off-by: Nanley Chery <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
This packet is non-pipelined and doesn't ever change across emissions.
Signed-off-by: Nanley Chery <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
No change in behavior. ralloc_size is equivalent to rzalloc_size.
That will change though.
Calls not switched to rzalloc_size:
- ralloc_vasprintf
- glsl_type::name allocation (it's filled with snprintf)
- C++ classes where valgrind didn't show uninitialized values
I switched most of non-glsl stuff to rzalloc without checking whether
it's really needed.
Reviewed-by: Edward O'Callaghan <[email protected]>
Tested-by: Edmondo Tommasina <[email protected]>
Reviewed-by: Nicolai Hähnle <[email protected]>
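For readers unfamiliar with the helpers, a minimal sketch of the distinction (ralloc.h is the real Mesa header; the struct and context here are hypothetical):
#include "util/ralloc.h"
struct example_state {
   int count;
   int *items;
};
static void allocate_examples(void *mem_ctx)
{
   /* ralloc_size: today this still zeroes the allocation, but a later
    * change removes the memset, after which fields may contain garbage. */
   struct example_state *a = ralloc_size(mem_ctx, sizeof(*a));
   /* rzalloc_size: always returns zero-initialized memory. */
   struct example_state *b = rzalloc_size(mem_ctx, sizeof(*b));
   (void) a;
   (void) b;
}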
Signed-off-by: Marek Olšák <[email protected]>
Signed-off-by: Juha-Pekka Heikkila <[email protected]>
Signed-off-by: Marek Olšák <[email protected]>
The address immediate field is only 9 bits and, since the value is in
bytes, the highest GRF we can point to with it is g15. This makes it
pretty close to useless for MOV_INDIRECT. There were already piles of
restrictions preventing us from using it prior to Broadwell, so let's get
rid of the gen8+ code path entirely.
Signed-off-by: Jason Ekstrand <[email protected]>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=97779
Cc: "12.0 13.0" <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
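The arithmetic behind the g15 limit, assuming the usual 32-byte GRF: a 9-bit byte offset covers 2^9 = 512 bytes, and 512 / 32 = 16 registers, i.e. g0 through g15.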
Commit 66fcfa6894ab6 changed the vec4 version of offset() to have 3
parameters instead of 2 but the vec4_cmod_propagation test was never
updated.
Signed-off-by: Jason Ekstrand <[email protected]>
The term "client array" is a legacy thing dating back to the pre-VBO
era when _all_ vertex arrays lived in client memory.
Nowadays, it only contains vertex array state which is derived from
gl_array_attributes and gl_vertex_buffer_binding. It's used by the
VBO module and some drivers.
Reviewed-by: Anuj Phogat <[email protected]>
Init vars where declared, use const qualifiers.
Reviewed-by: Anuj Phogat <[email protected]>
Was missed in an earlier renaming patch.
Reviewed-by: Anuj Phogat <[email protected]>
To be a little more understandable.
Reviewed-by: Anuj Phogat <[email protected]>
Reviewed-by: Jason Ekstrand <[email protected]>
An assert is currently raised, preventing decompression of a texture image into
a GL_TEXTURE_3D target. I have not found any spec wording that would explain
this, or any implementation detail that would prevent it. And in any case, the
driver should not crash on user-supplied arguments.
Fixes most failing subcases in CTS tests:
* GL44-CTS.gtf32.GL3Tests.packed_pixels.packed_pixels_pixelstore
* GL45-CTS.gtf32.GL3Tests.packed_pixels.packed_pixels_pixelstore
These tests were crashing the driver before. Now they just fail, but due
to an unrelated issue affecting 2 out of the 45 test subcases.
No regressions observed against piglit or CTS-GL.
Reviewed-by: Kenneth Graunke <[email protected]>
These restrictions existed because intel_miptree_blit couldn't handle
surfaces bigger than 32k. Now that we're chopping blits up into chunks, it
can handle any size we throw at it, so we can get rid of this restriction.
This improves the terrain tests in synmark by 25-30% on my Sky Lake gt3.
Signed-off-by: Jason Ekstrand <[email protected]>
Reported-by: Ben Widawsky <[email protected]>
Reviewed-by: Topi Pohjolainen <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
This allows us to blit much larger images than if we use the blitter
directly. In particular, it gives us an almost infinite image height
compared to the fairly limiting 32k. We do, however, still have a
restriction on stride of the image because handling larger strides, while
possible, is fairly difficult.
v2: Properly handle linear blit alignment restrictions
Signed-off-by: Jason Ekstrand <[email protected]>
Reviewed-by: Topi Pohjolainen <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
v2: Properly handle linear blit alignment restrictions
Signed-off-by: Jason Ekstrand <[email protected]>
Reviewed-by: Topi Pohjolainen <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
This assertion, while valid for linear buffers, doesn't work properly for
tiled memory. It used to work most of the time because the offset provided
was always to the left-hand edge of the image. However, if you use a byte
offset to get to the inside of the image, the height * stride calculation
may actually end up being too large.
Signed-off-by: Jason Ekstrand <[email protected]>
Reviewed-by: Topi Pohjolainen <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
Signed-off-by: Jason Ekstrand <[email protected]>
Reviewed-by: Topi Pohjolainen <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
The only actual user of this parameter was blorp and, since the conversion
to ISL, it no longer uses this function.
Signed-off-by: Jason Ekstrand <[email protected]>
Reviewed-by: Topi Pohjolainen <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
So that it has the same semantics as the scalar backend implementation. The
helper will now take a simd width (which is always 8 in vec4 mode) and step
as many scalar components as specified by that width, respecting the size of
the scalar channels.
v2 (Curro):
- Remove the assertion in offset(), byte_offset() has the same checks.
- Use byte_offset() directly instead of add_byte_offset().
- Make things more clear by explicitly including the vertical stride
in the byte offset expression.
Reviewed-by: Francisco Jerez <[email protected]>
In a later patch we want to change the semantics of offset() to be in terms
of SIMD width and scalar channels so it is consistent with the definition
of the same helper in the scalar backend. However, some uses of offset()
in the vec4 backend do not operate naturally in terms of these
semantics. In these cases it is more natural to use the byte_offset() helper
instead.
Reviewed-by: Francisco Jerez <[email protected]>
v2: wrap the helper in a namespace to make clear that it is an
implementation detail of byte_offset() and is not intended
to be used independently (Curro).
Reviewed-by: Francisco Jerez <[email protected]>