path: root/.gitlab-ci/build-cts-runner.sh

* gitlab-ci: Update CTS runner (Tomeu Vizoso, 2020-06-23, 1 file, -1/+1)

  We need a newer version to be able to successfully run the OpenGL
  suites in dEQP.

  Signed-off-by: Tomeu Vizoso <[email protected]>
  Reviewed-by: Eric Anholt <[email protected]>
  Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/5494>

* ci: Consistently use -j4 across x86 build jobs and -j8 on ARM. (Eric Anholt, 2020-04-01, 1 file, -1/+1)

  Our shared runners are set up for concurrent jobs ~= CPUs / 4 (x86) or
  8 (ARM). If you use more build processes than that, then jobs may be
  fighting each other for shared system resources, possibly to the point
  of failure (we've seen one of the runners OOM on some jobs before,
  though I'm not sure if this was the cause).

  To try to systematically prevent the problem, we make a ninja wrapper
  in the containers that passes the -j flags, and set MAKEFLAGS in the
  container builds. This doesn't cover make in non-container builds, but
  I believe we don't have any of those.

  Reviewed-by: Michel Dänzer <[email protected]>
  Tested-by: Marge Bot <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/3782>
  Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/3782>

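  For illustration, a minimal sketch of the kind of wrapper described
  above, assuming an x86 container (the wrapper path and the hard-coded
  -j4 are assumptions here, and the real container scripts may differ;
  the ARM containers would use -j8):

      # Shadow the real ninja with a wrapper that always passes -j4
      # (assumes /usr/local/bin precedes /usr/bin in PATH), and cap
      # make-based builds the same way via MAKEFLAGS.
      printf '#!/bin/sh\nexec /usr/bin/ninja -j4 "$@"\n' > /usr/local/bin/ninja
      chmod +x /usr/local/bin/ninja
      export MAKEFLAGS="-j4"
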
* gitlab-ci: add missing popd to the build-deqp-vk.sh script (Andres Gomez, 2020-03-04, 1 file, -4/+4)

  While we are at it, replace "cd" with pushd / popd and homogenize how
  VK-GL-CTS is built in comparison with other build scripts.

  Signed-off-by: Andres Gomez <[email protected]>
  Reviewed-by: Samuel Pitoiset <[email protected]>
  Reviewed-by: Alexandros Frantzis <[email protected]>

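  The pattern the build scripts converge on looks roughly like the
  following (the checkout directory and build options below are
  placeholders for illustration, not the script's exact contents):

      # Enter the VK-GL-CTS checkout, configure and build it, then return
      # to wherever the caller was, instead of a bare "cd" that leaves the
      # script in a different directory on exit.
      pushd VK-GL-CTS
      cmake -S . -B build -G Ninja -DCMAKE_BUILD_TYPE=Release
      ninja -C build
      popd
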
* gitlab-ci: Switch LAVA jobs to use shared dEQP runner (Tomeu Vizoso, 2020-01-06, 1 file, -2/+2)

  Take one step towards sharing code between the LAVA and non-LAVA jobs,
  with the goals of reducing maintenance burden and use of computational
  resources.

  The env var DEQP_NO_SAVE_RESULTS allows us to skip the processing of
  the XML result files, which can take a long time and is not useful in
  the LAVA case as we are not uploading artifacts anywhere at the moment.

  Signed-off-by: Tomeu Vizoso <[email protected]>
  Reviewed-by: Eric Anholt <[email protected]>

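  A hedged sketch of how a runner wrapper can honour such a variable
  (the result-processing command is a placeholder, not the shared
  script's actual code):

      # Skip the expensive XML post-processing when asked to, e.g. on
      # LAVA devices where no artifacts are uploaded anyway.
      if [ -z "$DEQP_NO_SAVE_RESULTS" ]; then
          process_deqp_xml results/*.xml   # placeholder for the real step
      fi
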
* ci: Use a tag from the parallel-deqp-runner repo. (Eric Anholt, 2019-11-22, 1 file, -1/+1)

  If the repo continues development, we don't want to accidentally pick
  up potentially breaking changes on our next container rebuild.

  Reviewed-by: Eric Engestrom <[email protected]>

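  Concretely, this means checking out a fixed tag during the container
  build instead of whatever the branch tip happens to be; a minimal
  sketch (the tag name is a placeholder, and the repository URL is
  assumed from the project name):

      git clone https://gitlab.freedesktop.org/mesa/parallel-deqp-runner.git
      pushd parallel-deqp-runner
      # Pin a known-good release so the next container rebuild cannot
      # silently pick up breaking changes from the default branch.
      git checkout v1.0   # placeholder tag, not the one used in the commit
      popd
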
* gitlab-ci/deqp: preserve caselists for blocks with fails (Rob Clark, 2019-11-22, 1 file, -3/+3)

  Bump cts_runner to pick up the change that preserves the .qpa and
  caselist .txt files for blocks of tests that contain fails. These are
  useful for reproducing failures that depend on the order in which
  tests run.

  Signed-off-by: Rob Clark <[email protected]>
  Acked-by: Eric Engestrom <[email protected]>

* ci: Use cts_runner for our dEQP runs. (Eric Anholt, 2019-11-12, 1 file, -0/+10)

  This runner is a little project by Bas, written in C++, that spawns
  threads that then loop grabbing chunks of the (randomly shuffled, but
  consistently so) test list and handing them to a dEQP instance. As the
  remaining list gets shorter, so do the chunks, so hopefully the
  threads all complete at effectively the same time. It also handles
  restarting after crashes automatically.

  I've extended the runner a bit to do what I was doing in the bash
  scripts before, like the skip list and expected failures handling.
  This project should also be a good baseline for extending to handle
  retesting of intermittent failures.

  By switching to it, we can have the swrast tests take up just one job
  slot on the shared runners and keep their allotment of CPUs busy,
  instead of taking up job slots with single-threaded dEQP jobs. It will
  also let us (eventually, once I reprovision) switch the freedreno
  runners over to threading within the job instead of running concurrent
  jobs, so that memory scribbles in one pipeline don't affect unrelated
  pipelines, and I can experiment with their parallelism (particularly
  on a306, where we are frequently backed up) without trashing other
  people's jobs.

  What we lose in this process is per-test output in the log (not a big
  loss, I think, since we summarize fails at the end, and reducing log
  length keeps Chrome from choking on our logs so badly). We also drop
  the renderer sanity checking, since the runner is not saving .qpa
  files for us to go poke through. Given that all the drivers involved
  have fail lists, if we got the wrong renderer somehow, we'd get a job
  failure anyway.

  v2: Rebase on dropping of the autoscale cluster and the arm64
      build/test split. Use a script to deduplicate the cts-runner build.
  v3: Rebase on the amd64 build/test container split.

  Acked-by: Daniel Stone <[email protected]> (v1)
  Reviewed-by: Tomeu Vizoso <[email protected]> (v2)

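  Based only on the description above, a build helper of this kind
  typically clones the runner, builds and installs it, and cleans up
  after itself; a hedged sketch follows (the URL, build options, and
  install step are assumptions for illustration, not the real ten-line
  script):

      set -ex
      git clone https://gitlab.freedesktop.org/mesa/parallel-deqp-runner.git
      pushd parallel-deqp-runner
      cmake -S . -B build -G Ninja -DCMAKE_BUILD_TYPE=Release
      ninja -C build install
      popd
      rm -rf parallel-deqp-runner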