This is the recurring flake from the last week; it even spuriously
failed a whole pipeline once.
Tested-by: Marge Bot <https://gitlab.freedesktop.org/mesa/mesa/merge_requests/3937>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/merge_requests/3937>
|
These are the tests I've seen flake twice while logged in to the IRC
channel this year. Also include fragment_out.random.5, which recently
produced a completely spurious failure.
Closes: #2516
Tested-by: Marge Bot <https://gitlab.freedesktop.org/mesa/mesa/merge_requests/3862>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/merge_requests/3862>
|
This brings in the surfaceless fixes so we no longer need to check out
the whole repo to cherry-pick them (which was bothering me as I
debugged things late in the painfully slow ARM container build
process).
Reviewed-by: Michel Dänzer <[email protected]>
Tested-by: Marge Bot <https://gitlab.freedesktop.org/mesa/mesa/merge_requests/3662>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/merge_requests/3662>
|
On a daily basis I've been having to restart people's a630 jobs in the
front couple of pages of /merge_requests due to spurious failures from
our flaky tests, field reports of spurious failures from other
developers, and babysit my own Marge merges that fail because of our
flakes. Nobody should have to deal with that, especially not
non-freedreno developers, so just scrape the list of flakes reported
to #freedreno-ci over the last month and ban the tests that have
failed more than once, until we have a credible fix.
Acked-by: Kristian H. Kristensen <[email protected]>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/merge_requests/3662>
|
xfb + lines/points still flakes too frequently (and the problem isn't
even related to xfb), but we can add the rest back into the mix now.
Signed-off-by: Rob Clark <[email protected]>
|
We've had flakes on at least:
- r16f_to_r16f
- r16i_to_rg16i
Reviewed-by: Daniel Stone <[email protected]>
|
These have shown up with the new CTS runner, which has changed test
ordering.
Reviewed-by: Daniel Stone <[email protected]>
|
This runner is a little project by Bas, written in C++, that spawns
threads that loop grabbing chunks of the (randomly but consistently
shuffled) test list and handing them to a dEQP instance. As the
remaining list gets shorter, so do the chunks, so the threads should
all complete at effectively the same time. It also handles restarting
after crashes automatically. I've extended the runner a bit to do
what I was doing in the bash scripts before, like the skip-list and
expected-failures handling. This project should also be a good
baseline for extending to handle retesting of intermittent failures.
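Roughly, the chunk-grabbing scheme looks like the sketch below (an
illustrative toy, not the runner's actual code; the chunk-size formula
and all names here are assumptions):

    // Workers atomically claim the next slice of a consistently
    // shuffled test list; slices shrink as the list drains so the
    // threads finish at about the same time.
    #include <algorithm>
    #include <atomic>
    #include <cstdio>
    #include <random>
    #include <string>
    #include <thread>
    #include <vector>

    int main() {
        // Stand-in test list; the real runner reads a dEQP caselist.
        std::vector<std::string> tests;
        for (int i = 0; i < 1000; i++)
            tests.push_back("dEQP-GLES2.functional.case" + std::to_string(i));

        // "Randomly shuffled but consistently so": a fixed seed gives
        // the same order on every run and every machine.
        std::shuffle(tests.begin(), tests.end(), std::mt19937(42));

        const unsigned nthreads =
            std::max(1u, std::thread::hardware_concurrency());
        std::atomic<size_t> next{0};

        auto worker = [&](unsigned id) {
            for (;;) {
                size_t begin = next.load();
                size_t size;
                do {
                    size_t remaining = tests.size() - begin;
                    if (remaining == 0)
                        return;
                    // Take a fraction of what's left (formula assumed),
                    // so chunks shrink as the list gets shorter.
                    size = std::max<size_t>(1, remaining / (2 * nthreads));
                } while (!next.compare_exchange_weak(begin, begin + size));

                // The real runner would hand tests[begin, begin + size)
                // to a dEQP instance here, restarting it after crashes.
                std::printf("thread %u: %zu tests at offset %zu\n",
                            id, size, begin);
            }
        };

        std::vector<std::thread> pool;
        for (unsigned i = 0; i < nthreads; i++)
            pool.emplace_back(worker, i);
        for (auto &t : pool)
            t.join();
        return 0;
    }

The shrinking chunks are the point: with a fixed chunk size, one
thread that drew a slow chunk near the end could hold up the whole
job.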
By switching to it, we can have the swrast tests just take up one job
slot on the shared runners and keep their allotment of CPUs busy,
instead of taking up job slots with single-threaded dEQP jobs. It
will also let us (eventually, once I reprovision) switch the freedreno
runners over to threading within the job instead of running concurrent
jobs, so that memory scribbles in one pipeline don't affect unrelated
pipelines, and I can experiment with their parallelism (particularly
on a306 where we are frequently backed up) without trashing other
people's jobs.
What we lose in this process is per-test output in the log (not a big
loss, I think, since we summarize fails at the end, and reducing log
length keeps Chrome from choking on our logs so badly). We also drop
the renderer sanity checking, since the runner isn't saving qpa files
for us to go poke through. Given that all the drivers involved have
fail lists, if we got the wrong renderer somehow, we'd get a job
failure anyway.
v2: Rebase on the dropping of the autoscale cluster and the arm64
build/test split. Use a script to deduplicate the cts-runner
build.
v3: Rebase on the amd64 build/test container split.
Acked-by: Daniel Stone <[email protected]> (v1)
Reviewed-by: Tomeu Vizoso <[email protected]> (v2)
|
The bash scripts were using grep in its default mode, which matches
any substring of a line, but the new CTS runner matches the whole
line, and I think that's pretty good behavior. Given that some of the
skip lists were already written to match the full test name, just make
all of them do so consistently.
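To illustrate the difference (a toy sketch; the old check was grep in
a bash script, not C++, and the truncated entry is made up):

    #include <iostream>
    #include <string>

    // Old bash behavior: grep matched a skip-list entry appearing
    // anywhere in the line.
    static bool substring_match(const std::string &test,
                                const std::string &entry) {
        return test.find(entry) != std::string::npos;
    }

    // New runner behavior: the entry must match the whole test name.
    static bool whole_line_match(const std::string &test,
                                 const std::string &entry) {
        return test == entry;
    }

    int main() {
        const std::string test =
            "dEQP-GLES3.functional.fragment_out.random.5";
        const std::string entry = "fragment_out.random";  // truncated

        std::cout << substring_match(test, entry) << "\n"   // 1: skipped
                  << whole_line_match(test, entry) << "\n"; // 0: not skipped
    }

A truncated entry that used to silently skip a whole family of tests
now matches nothing, which is why the lists need full test names.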
Reviewed-by: Eric Engestrom <[email protected]>
Acked-by: Daniel Stone <[email protected]>
Acked-by: Michel Dänzer <[email protected]>
|
Some queries are still failing, and layered rendering needs more work.
Signed-off-by: Kristian H. Kristensen <[email protected]>
|
Reviewed-by: Bas Nieuwenhuizen <[email protected]>
Reviewed-by: Daniel Stone <[email protected]>
|
Seen a couple flakes on this one so far. Not sure if it is a real
driver problem or not, but skip it to unblock things.
Signed-off-by: Rob Clark <[email protected]>
|
It started showing up as unreliable post-merge. There's a valgrind
complaint, but even fixing that doesn't make it stable.
|
Since freedreno's kernel and GPU reset seem to be totally solid, we
don't need the complexity of the LAVA setup that panfrost has.
Instead, we can register some boards as shared gitlab runners and have
the jobs run out of a docker container, just like we do for llvmpipe.
Just make sure that the DRI device node is passed through to the
containers in the gitlab config ('devices = ["/dev/dri"]' under
runners.docker).
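For reference, that setting goes in gitlab-runner's config.toml
roughly like this (only the devices line comes from above; the runner
name and the rest of the snippet are illustrative):

    [[runners]]
      name = "freedreno-cheza-1"
      executor = "docker"
      [runners.docker]
        # Pass the DRM device nodes through so jobs can reach the GPU:
        devices = ["/dev/dri"]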
If a runner fails (networking dies, kernel panic, etc.), it'll take
out one build, but the rest can keep going, since gitlab-runner is
what pulls jobs. Pulling jobs also means that the runners can live
behind firewalls instead of needing some public address that
gitlab.fd.o can reach.
For now, enable it just on db410c (A307) and cheza (A630), as those
are the boards I have plenty of. A307 is only testing GLES2, since
running all of GLES3 takes too long for the number of boards I've
brought up.
Acked-by: Rob Clark <[email protected]>
Acked-by: Kenneth Graunke <[email protected]>
|