| Commit message | Author | Age | Files | Lines |
| |
In most places (glGetInteger, max_legal_texture_dimensions), we wanted the
number of pixels, not the number of levels. Number of levels is easily
recovered with util_next_power_of_two() and ffs(). More importantly, for
V3D we want to be able to expose a non-power-of-two maximum texture size
to cover 2x4k displays on HW that can't quite do 8192 wide.
Reviewed-by: Marek Olšák <[email protected]>
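A minimal sketch of the recovery mentioned above, using the existing util_next_power_of_two() and ffs() helpers; this is illustrative, not part of the patch:

    #include "util/u_math.h"   /* util_next_power_of_two() */

    /* Recover the number of mip levels from a maximum texture size given in
     * pixels, following the recipe in the message above. */
    static unsigned
    max_levels_from_max_size(unsigned max_size_px)
    {
       return ffs(util_next_power_of_two(max_size_px));
    }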
| |
We've already verified this by _mesa_legal_texture_dimensions() before
this call.
Reviewed-by: Marek Olšák <[email protected]>
| |
The shared function has some extension presence checks, but other than
that has the same switch statement contents.
Reviewed-by: Marek Olšák <[email protected]>
| |
glibc < 2.27 defines OVERFLOW in /usr/include/math.h.
This patch fixes this build error.
In file included from ../include/c99_math.h:37:0,
from ../src/util/u_math.h:44,
from ../src/mesa/main/macros.h:35,
from ../src/intel/compiler/brw_reg.h:47,
from ../src/intel/tools/i965_asm.h:32,
from ../src/intel/tools/i965_gram.y:29:
src/intel/tools/i965_gram.tab.c:562:5: error: expected identifier before numeric constant
OVERFLOW = 412,
^
Fixes: 70308a5a8a80 ("intel/tools: New i965 instruction assembler tool")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110656
Signed-off-by: Vinson Lee <[email protected]>
Acked-by: Eric Engestrom <[email protected]>
| |
Useful for dumping shaders.
Reviewed-by: Kenneth Graunke <[email protected]>
| |
The overall goal is to support unaligned loads from vertex buffers
natively on SI.
In the unaligned case, we fall back to the general case implementation in
ac_build_opencoded_load_format. Since this function is fully general,
we will also use it going forward for cases requiring fully manual format
conversions of dwords anyway.
This requires a different encoding of the fix_fetch array, which will now
contain the entire format information if a fixup is required.
Having to check the alignment of vertex buffers is awkward. To keep the
impact on the fast path minimal, the si_context will keep track of which
vertex buffers are (not) at least dword-aligned, while the
si_vertex_elements will note which vertex buffers have some (at most dword)
alignment requirement. Vertex buffers should be dword-aligned most of the
time, which allows a fast early-out in almost all cases.
Add the radeonsi_vs_fetch_always_opencode configuration variable for
testing purposes. Note that it can only be used reliably on LLVM >= 9,
because support for byte and short loads is required.
v2:
- add a missing check to si_bind_vertex_elements
Reviewed-by: Marek Olšák <[email protected]>
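A rough sketch of the fast early-out described above; the mask names are hypothetical, not the actual si_context/si_vertex_elements fields:

    /* One bit per vertex-buffer slot. */
    uint32_t vb_unaligned_mask;        /* buffers that are NOT dword-aligned */
    uint32_t ve_alignment_check_mask;  /* elements with an alignment requirement */

    /* Only fall back to the opencoded fetch path when an element that cares
     * about alignment reads from a buffer that is not dword-aligned. */
    bool need_fixup = (ve_alignment_check_mask & vb_unaligned_mask) != 0;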
| |
Purely as a shorthand in the remainder of the function.
Reviewed-by: Marek Olšák <[email protected]>
| |
Implement software emulation of buffer_load_format for all types required
by vertex buffer fetches.
Reviewed-by: Marek Olšák <[email protected]>
| |
The current SSA def validation we do in nir_validate validates three
things:
1. That each SSA def is only ever used in the function in which it is
defined.
2. That an nir_src exists in an SSA def's use list if and only if it
points to that SSA def.
3. That each nir_src is in the correct use list (uses or if_uses) based
on whether it's an if condition or not.
The way we were doing this before was that we had a hash table which
provided a map from SSA def to a small ssa_def_validate_state data
structure which contained a pointer to the nir_function_impl and two
hash sets, one for each use list. This meant piles of allocation and
creating of little hash sets. It also meant one hash lookup for each
SSA def plus one per use as well as two per src (because we have to look
up the ssa_def_validate_state and then look up the use.) It also
involved a second walk over the instructions as a post-validate step.
This commit changes us to use a single low-collision hash set of SSA
sources for all of this by being a bit more clever. We accomplish the
objectives above as follows:
1. The list is clear when we start validating a function. If the
nir_src references an SSA def which is defined in a different
function, it simply won't be in the set.
2. When validating the SSA defs, we walk the uses and verify that they
have is_ssa set and that they point to the SSA def we're
validating. This catches the case of a nir_src being in the wrong
list. We then put the nir_src in the set and, when we validate the
nir_src, we assert that it's in the set. This takes care of any
cases where a nir_src isn't in the use list. After checking that
the nir_src is in the set, we remove it from the set and, at the end
of nir_function_impl validation, we assert that the set is empty.
This takes care of any cases where a nir_src is in a use list but
the instruction is no longer in the shader.
3. When we put a nir_src in the set, we set the bottom bit of the
pointer to 1 if it's the condition of an if. This lets us detect
whether or not a nir_src is in the right list.
When running shader-db with an optimized debug build of mesa on my
laptop, I get the following shader-db CPU times:
With NIR_VALIDATE=0 3033.34 seconds
Before this commit 20224.83 seconds
After this commit 6255.50 seconds
Assuming shader-db is a representative sampling of GLSL shaders, this
means that making this change yields an 81% reduction in the time spent
in nir_validate. It still isn't cheap but enabling validation now only
increases compile times by 2x instead of 6.6x.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Thomas Helland <[email protected]>
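An illustrative sketch of point 3 above (tagging the bottom pointer bit); this is not the actual nir_validate code:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Encode "is this nir_src an if-condition?" in the bottom bit of the key
     * before inserting it into the set; pointer alignment guarantees that bit
     * is otherwise zero. */
    static void *
    src_set_key(void *src, bool is_if_condition)
    {
       uintptr_t v = (uintptr_t)src;
       assert((v & 1) == 0);
       return (void *)(v | (is_if_condition ? 1u : 0u));
    }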
| |
Oftentimes you don't know how big a set will be and you want the code
to just grow it as needed. However, sometimes you do know and you can
avoid a lot of rehashing if you just specify a size up-front.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Thomas Helland <[email protected]>
| |
This function is identical to _mesa_set_add except that it takes an
extra out parameter that lets the caller detect if a replacement
happened.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Thomas Helland <[email protected]>
| |
All of our hash tables and sets are already using ralloc. There's
really no good reason not to just make a ralloc context rather
than trying to remember to clean everything up manually.
Reviewed-by: Eric Anholt <[email protected]>
Reviewed-by: Thomas Helland <[email protected]>
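A minimal sketch of the pattern, using the existing ralloc and util helpers (the variable names are made up):

    void *mem_ctx = ralloc_context(NULL);

    /* Hang all validation-time allocations off one context... */
    struct hash_table *defs_ht = _mesa_pointer_hash_table_create(mem_ctx);
    struct set *uses = _mesa_pointer_set_create(mem_ctx);

    /* ...so tearing everything down is a single call. */
    ralloc_free(mem_ctx);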
| |
The H5 hardware variant requires a specific plb_max_blk number. This
value can't be probed at the hardware level.
Signed-off-by: Patrick Lerda <[email protected]>
Reviewed-by: Qiang Yu <[email protected]>
| |
Move plb_max_blk to lima_screen, and add a new debug option:
LIMA_PLB_MAX_BLK
Signed-off-by: Patrick Lerda <[email protected]>
Reviewed-by: Qiang Yu <[email protected]>
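A hedged sketch of how such an override is typically wired up in a gallium screen; the default value below is made up:

    #include "util/u_debug.h"   /* debug_get_num_option() */

    /* Let LIMA_PLB_MAX_BLK override the per-variant default. */
    screen->plb_max_blk = debug_get_num_option("LIMA_PLB_MAX_BLK", 0);
    if (!screen->plb_max_blk)
       screen->plb_max_blk = 512;   /* hypothetical default for this GPU */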
| |
While ImageFormatProperties returns the number of internal descriptors,
it turns out that applications do not need to actually allocate more
descriptors in the descriptor pool.
So if we make descriptors with more planes larger, we have to be
conservative and always allocate space for the larger descriptors,
which is a waste given the low usage of this extension.
So let us make use of the fact that 3-plane formats all have the
same formats & dimensions for the last two planes. This way we
only need the first half of the descriptor of the 3rd plane and
can share the second half of the second plane.
This allows us to use 16 bytes for the descriptor which nicely
fits into the 16 bytes that are unused right next to the sampler.
Fixes: 5564c38212a "radv: Update descriptor sets for multiple planes."
Reviewed-by: Samuel Pitoiset <[email protected]>
| |
Adds support for physical device functions unknown to the loader.
Acked-by: Samuel Pitoiset <[email protected]>
| |
We use an algebraic pass for the csel optimizations, and use proper
vectorized csel ops (i/fcsel_v) for mixed csel, rather than lowering it.
To avoid regressions along the way, we fix an issue with the copy
propagation pass (it should not attempt to propagate constants).
Similarly, we take care to break bundles when using csel to fix some
scheduler corner cases.
Signed-off-by: Alyssa Rosenzweig <[email protected]>
| |
iris_draw_vbo is divided into two functions to remove unnecessary
operations from the loop. This implementation of ARB_indirect_parameters
takes into account NV_conditional_render by saving MI_PREDICATE_RESULT
at the start of a draw call and restoring it at the end. The result
of NV_conditional_render is also taken into account when computing predicates
that limit draw calls for ARB_indirect_parameters, in a similar way
to 1952fd8d in ANV.
v2: Optimize indirect draws (suggested by Kenneth Graunke)
v3: (by Kenneth Graunke)
- Fix an issue where indirect draws wouldn't set patch information
before updating the compiled TCS.
- Move some code back to iris_draw_vbo to avoid duplicating it.
- Fix minor indentation issues.
Signed-off-by: Illia Iorin <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
| |
Shader draw parameters need updating on each iteration of a multidraw
loop, but the primitive based information only needs to be updated once.
Also, patch information needs to be recorded before filling out the TCS
program key, as it determines the number of HS instances.
| |
The nested fma calls were supposed to implement
x_new = x + x * (1 - x*src),
but instead the current code is equivalent to
x_new = x - x * (1 - x*src).
The result is that Newton-Raphson steps don't improve precision at all.
This patch fixes this problem.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110435
Reviewed-by: Kenneth Graunke <[email protected]>
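For clarity, a standalone C version of the intended refinement step (not the lowering code itself):

    #include <math.h>

    /* One Newton-Raphson step for y ~= 1/src:
     *   y_new = y + y * (1 - y * src)
     * The broken form, y - y * (1 - y * src), cancels the correction instead
     * of applying it, so extra iterations never improved precision. */
    static float
    rcp_newton_step(float y, float src)
    {
       return fmaf(y, 1.0f - y * src, y);
    }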
| |
Only vertex inputs accessed by the vertex shader must have valid buffers
bound.
Signed-off-by: Józef Kucia <[email protected]>
Reviewed-by: Bas Nieuwenhuizen <[email protected]>
Fixes: 5010436e09f "radv: bail out when binding the same vertex buffers"
| |
src/mesa/state_tracker/st_tgsi_lower_yuv.c:68: void reg_dst(struct
tgsi_full_dst_register *, const struct tgsi_full_dst_register *, unsigned
int): assertion "dst->Register.WriteMask" failed
The second crash was due to insufficient allocated size for TGSI
instructions.
Cc: 19.0 19.1 <[email protected]>
Reviewed-by: Rob Clark <[email protected]>
| |
Anuj fixed this in i965 and anv, but the fix never landed in iris.
Fixes tessellation corruption on Icelake. Thanks to Rafael for
bisecting this and tracking it down.
Fixes: d0996d5fab6 iris: Emit default L3 config for the render pipeline
Reviewed-by: Rafael Antognolli <[email protected]>
| |
Update various limits in
VkPhysicalDeviceDescriptorIndexingPropertiesEXT that were previously
zero to their values from VkPhysicalDeviceLimits. When using
VK_EXT_descriptor_indexing, the former limits will apply to all
descriptor set layouts -- not only those using the new feature bits.
For reference, VK_EXT_descriptor_indexing says:
"There are new descriptor set layout and descriptor pool creation
flags that are required to opt in to the update-after-bind
functionality, and there are separate maxPerStage* and
maxDescriptorSet* limits that apply to these descriptor set
layouts which may be much higher than the pre-existing limits. The
old limits only count descriptors in non-updateAfterBind
descriptor set layouts, and the new limits count descriptors in
all descriptor set layouts in the pipeline layout."
Fixes: 6e230d7607f "anv: Implement VK_EXT_descriptor_indexing"
Reviewed-by: Jason Ekstrand <[email protected]>
| |
The original implementation assumed that we could allocate the same
number of command buffers as the number of images in the swapchain.
But the application could potentially render much faster and rerender
into images that have been submitted for presentation but not yet
presented.
This change keeps allocating command buffers, vertex buffers, and vertex
indices, as well as a semaphore and a fence, for as long as we can't
reuse a previously submitted set.
This fixes rendering issues in the overlay at high frame rates.
v2: Don't recreate semaphores constantly (Józef)
v3: Drop useless surface & FreeCommandBuffers (Józef)
Signed-off-by: Lionel Landwerlin <[email protected]>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110655
Cc: 19.1 <[email protected]>
Reviewed-by: Józef Kucia <[email protected]>
| |
Non-dispatchable handles can be uint64_t. When compiling the layer on
a 32-bit platform, this leads to casting uint64_t into (void *),
which is 32-bit, so incorrect handles get mapped internally
in the layer.
v2: Use more HKEY() (Eric)
Signed-off-by: Lionel Landwerlin <[email protected]>
Reported-by: Józef Kucia <[email protected]>
Fixes: 2d2927938f074f ("vulkan/overlay-layer: fix cast errors")
Reviewed-by: Józef Kucia <[email protected]>
| |
This was taking a reference to the 64kB upload buffer and never
returning it, leaking a reference each time this atom triggered.
This leaked lots of 64kB upload BOs, eventually running us out
of VMA space. This would usually happen when using mpv to watch a
movie, after 20-40 minutes.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110134
Fixes: 63d7b33f516 i965/cs: Setup surface binding for gl_NumWorkGroups
Reviewed-by: Caio Marcelo de Oliveira Filho <[email protected]>
| |
This fixes video being rendered incorrectly.
The user wants a height of 360, but internally pipe_video_buffer's height
is 368 in the test below.
Test:
GST_GL_PLATFORM=egl gst-launch-1.0 videotestsrc ! video/x-raw, width=868, height=360, format=NV12 ! vaapipostproc ! glimagesink
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110443
Signed-off-by: Julien Isorce <[email protected]>
Reviewed-by: Marek Olšák <[email protected]>
Reviewed-by: Leo Liu <[email protected]>
| |
This avoids a conflict with the new (driver-agnostic) blend_func enum in
shader_enum.h, which broke the build of swrast (and i965 by extension).
My apologies :(
Signed-off-by: Alyssa Rosenzweig <[email protected]>
Fixes: f41be53a ("compiler: Add enums for blend state")
Cc: Caio Marcelo de Oliveira Filho <[email protected]>
Reviewed-by: Caio Marcelo de Oliveira Filho <[email protected]>
| |
This represents a float vec4 constant color, as passed to glBlendColor.
While the existing 4 shader sysvals are retained to minimize code churn,
a single vectorized intrinsic is required for efficient blending on
vector architectures. (This may also apply to archictectures like
Bifrost where ALU is scalar but load/store is vector; it largely depends
on how blending is implemented per-driver.)
Signed-off-by: Alyssa Rosenzweig <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
| |
Complementing the new API-agnostic shader_enum blending style, we add
helpers to translate between the two forms. Ideally, we could just use
PIPE blending directly, but that makes Vulkan support challenging.
Signed-off-by: Alyssa Rosenzweig <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
| |
We add enums corresponding to (GLES) blend state to shader_enums.h,
complementing the existing advanced blending enums in the file. This
allows us to represent blending state in a driver-agnostic, API-agnostic
way to permit lowering.
Signed-off-by: Alyssa Rosenzweig <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
Acked-by: Eric Anholt <[email protected]>
| |
This can be used by both etnaviv and freedreno/a2xx as they are both vec4
architectures with some instructions being scalar-only.
Signed-off-by: Jonathan Marek <[email protected]>
Reviewed-by: Christian Gmeiner <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
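A hedged usage sketch of a per-instruction scalarization filter; the callback signature is assumed to match today's nir_lower_alu_to_scalar() (the original commit may have used a different mechanism), and the opcode list is only an example:

    static bool
    scalar_only_filter(const nir_instr *instr, const void *data)
    {
       if (instr->type != nir_instr_type_alu)
          return false;

       switch (nir_instr_as_alu(instr)->op) {
       case nir_op_frcp:
       case nir_op_frsq:
       case nir_op_flog2:
       case nir_op_fexp2:
          return true;   /* scalar-only on this hypothetical vec4 ISA */
       default:
          return false;
       }
    }

    /* ... */
    NIR_PASS_V(s, nir_lower_alu_to_scalar, scalar_only_filter, NULL);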
| |
In order to set up KILL sets, the dataflow code was walking the entire
array of ACPs for every instruction. If you assume the number of ACPs
increases roughly with the number of instructions, this is O(n^2). As
it turns out, regions_overlap() is not nearly as cheap as one would like
and shows up as a significant chunk on perf traces.
This commit changes things around and instead first builds an array of
exec_lists which it uses like a hash table (keyed off ACP source or
destination) similar to what's done in the rest of the copy-prop code.
By first walking the list of ACPs and populating the table and then
walking instructions and only looking at ACPs which probably have the
same VGRF number, we can reduce the complexity to O(n). This takes the
execution time of the piglit vs-isnan-dvec test from about 56.4 seconds
on an unoptimized debug build (what we run in CI) with NIR_VALIDATE=0 to
about 38.7 seconds.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
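A conceptual sketch of the bucketing described above; the type and field names are hypothetical, not the actual copy-propagation structures:

    #define ACP_HASH_SIZE 64

    struct acp_table {
       struct exec_list by_dst[ACP_HASH_SIZE];   /* ACPs keyed by dst VGRF nr */
       struct exec_list by_src[ACP_HASH_SIZE];   /* ACPs keyed by src VGRF nr */
    };

    /* With this keying, an instruction only scans the buckets for the VGRFs it
     * actually reads or writes instead of every ACP in the block. */
    static inline unsigned
    acp_hash(unsigned vgrf_nr)
    {
       return vgrf_nr % ACP_HASH_SIZE;
    }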
| |
If the destination of an ACP entry exists only within this block, then
there's no need to keep it for dataflow analysis. We can delete it from
the out_acp table and avoid growing the bitsets any bigger than we
absolutely have to. This reduces the maximum number of global ACP
entries in the vs-isnan-dvec with software fp64 on Kaby Lake from 8630
to 3942 and takes the execution time of the piglit vs-isnan-dvec test
from about 1:16.2 on an unoptimized debug build (what we run in CI) with
NIR_VALIDATE=0 to about 56.4 seconds.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
| |
While the number of ACPs is generally not huge compared to the number of
blocks, 16 does seem a bit small. Bumping it to 64 takes the execution
time of the piglit vs-isnan-dvec test from about 1:18.1 on an unoptimized
debug build (what we run in CI) with NIR_VALIDATE=0 to about 1:16.2.
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Matt Turner <[email protected]>
| |
There is no user fence for JPEG; the bug triggers the
kernel WARN_ON(flags & AMDGPU_FENCE_FLAG_64BIT).
Signed-off-by: Leo Liu <[email protected]>
Acked-by: Christian König <[email protected]>
Reviewed-by: Bas Nieuwenhuizen <[email protected]>
Cc: [email protected]
| |
When width=4096 and shift_w=0, block_w=0x100, which overflows
the 8 bits that PLBU_CMD reserves for it.
Reviewed-by: Vasily Khoruzhick <[email protected]>
Signed-off-by: Qiang Yu <[email protected]>
| |
Just do what everybody else but Nouveau does and return 0.0f.
This prevents the repeated logging of these messages on startup:
Unexpected PIPE_CAPF 6 query
Unexpected PIPE_CAPF 7 query
Unexpected PIPE_CAPF 8 query
Signed-off-by: Tomeu Vizoso <[email protected]>
Reviewed-by: Emil Velikov <[email protected]>
Reviewed-by: Alyssa Rosenzweig <[email protected]>
| |
As the functions operate on 16-byte blocks.
Fixes this Valgrind error:
Invalid read of size 4
at 0x5857568: swizzle_bpp1_align16 (pan_swizzle.c:85)
by 0x585780F: panfrost_texture_swizzle (pan_swizzle.c:171)
by 0x584F587: panfrost_tile_texture (pan_resource.c:489)
by 0x584F641: panfrost_transfer_unmap (pan_resource.c:525)
by 0x587718D: u_transfer_helper_transfer_unmap (u_transfer_helper.c:516)
by 0x5875D85: pipe_transfer_unmap (u_inlines.h:515)
by 0x5875F13: u_default_texture_subdata (u_transfer.c:80)
by 0x53FFDC3: st_TexSubImage (st_cb_texture.c:1480)
by 0x54005BB: st_TexImage (st_cb_texture.c:1709)
by 0x5391353: teximage (teximage.c:3105)
by 0x5391353: teximage_err (teximage.c:3132)
by 0x5391B9B: _mesa_TexImage2D (teximage.c:3170)
by 0x5097A77: shared_dispatch_stub_183 (glapi_mapi_tmp.h:18833)
Address 0x1e94f1e8 is 0 bytes after a block of size 16 alloc'd
at 0x483F5C8: malloc (vg_replace_malloc.c:299)
by 0x584F47D: panfrost_transfer_map (pan_resource.c:467)
by 0x587694D: u_transfer_helper_transfer_map (u_transfer_helper.c:243)
by 0x5875EA7: u_default_texture_subdata (u_transfer.c:59)
by 0x53FFDC3: st_TexSubImage (st_cb_texture.c:1480)
by 0x54005BB: st_TexImage (st_cb_texture.c:1709)
by 0x5391353: teximage (teximage.c:3105)
by 0x5391353: teximage_err (teximage.c:3132)
by 0x5391B9B: _mesa_TexImage2D (teximage.c:3170)
by 0x5097A77: shared_dispatch_stub_183 (glapi_mapi_tmp.h:18833)
by 0x4DA8AB: glu::CallLogWrapper::glTexImage2D(unsigned int, int, int, int, int, int, unsigned int, unsigned int, void const*) (in /home/tomeu/deqp-build/modules/gles2/deqp-gles2)
Signed-off-by: Tomeu Vizoso <[email protected]>
Reviewed-by: Emil Velikov <[email protected]>
Cc: 19.1 <[email protected]>
| |
Valgrind was complaining of those.
NIR_PASS only sets progress to TRUE if there was progress.
nir_const_load_to_arr() only sets as many constants as the instruction
has components.
This was causing some dEQP tests to flip-flop, such as:
dEQP-GLES2.functional.fragment_ops.blend.equation_src_func_dst_func.add_src_color_constant_color
Signed-off-by: Tomeu Vizoso <[email protected]>
Reviewed-by: Alyssa Rosenzweig <[email protected]>
Fixes: 14531d676b11 ("nir: make nir_const_value scalar")
| |
These tests add too much time to the total run time, and some of them
even hang the DUTs, even though I haven't been able to reproduce that locally.
Signed-off-by: Tomeu Vizoso <[email protected]>
Reviewed-by: Alyssa Rosenzweig <[email protected]>
| |
There don't actually seem to be any noticeable memory leaks in Weston
when running dEQP. We do seem to leak quite a bit in the client, so we
still have to run the dEQP runner in batches.
This removes the risk of Weston not restarting properly and introducing
spurious failures.
Signed-off-by: Tomeu Vizoso <[email protected]>
Reviewed-by: Alyssa Rosenzweig <[email protected]>
| |
This matches the current state of things on both RK3288 and RK3399.
Hopefully, from now on we'll only remove stuff from this list.
Signed-off-by: Tomeu Vizoso <[email protected]>
Reviewed-by: Alyssa Rosenzweig <[email protected]>
| |
Signed-off-by: Tomeu Vizoso <[email protected]>
Reviewed-by: Alyssa Rosenzweig <[email protected]>
| |
Make sure we have only test case names in the list, excluding names of
test groups.
Signed-off-by: Tomeu Vizoso <[email protected]>
Reviewed-by: Alyssa Rosenzweig <[email protected]>
| |
To improve robustness, check that we got the expected number of results.
Right now we hard-code the expected number of tests run, but with some
effort we may be able to infer it.
Signed-off-by: Tomeu Vizoso <[email protected]>
Reviewed-by: Alyssa Rosenzweig <[email protected]>
| |
These tests aren't giving reliable results. Mask them for now.
Signed-off-by: Tomeu Vizoso <[email protected]>
Reviewed-by: Alyssa Rosenzweig <[email protected]>
| |
Build artifacts for armhf and schedule them on a Veyron Chromebook with
RK3288.
Signed-off-by: Tomeu Vizoso <[email protected]>
Reviewed-by: Alyssa Rosenzweig <[email protected]>