It's currently only needed for the meson-main and meson-arm64 jobs, not
the other meson build jobs.
Also remove MESON_SHADERDB; just run .gitlab-ci/run-shader-db.sh
directly from the meson-main job.
v2:
* Also run prepare-artifacts.sh in meson-arm64 script
v3:
* Move tarball creation into the new script as well, as it prevented
ccache --show-stats from running in after_script
Reviewed-by: Eric Engestrom <[email protected]> # v1
Reviewed-by: Eric Anholt <[email protected]>
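A minimal sketch of what the meson-main job script might look like after
this change; the three script names all appear in this log, but the exact
sequence is an assumption:

    .gitlab-ci/meson-build.sh          # build as before
    .gitlab-ci/run-shader-db.sh        # run directly, no MESON_SHADERDB
    .gitlab-ci/prepare-artifacts.sh    # also creates the tarball (v3), so
                                       # after_script can run ccache --show-stats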
This reverts commit c9df92bf795af878c38538c85f781291c78ec513.
It turns out that gitlab-runner uses kubernetes all wrong, spawning Pods
and sshing into them to run the script instead of creating Jobs that
contain the script. This means that when anything goes wrong with the pod
(autoscale, preemption, VM maintenance, cluster reconfiguration), the job
fails and only sometimes gets handled as a runner system failure. Even
worse, due to bugs in either the runner or k8s itself, some classes of
timeout-related failure end up not being reported as failures, and the job
will incorrectly report success!
Disable using the "autoscale" cluster until we can do something else
(docker-machine instead of k8s, or the custom third-party k8s-native
runner).
Reviewed-by: Michel Dänzer <[email protected]>
Acked-by: Daniel Stone <[email protected]>
The GKE pool we're using consists of 1-3 32-core VMs, preemptible (to
keep costs down), with 8 concurrent jobs per system. We have plenty of
memory (4G/core), so we run make -j8 to try to keep the cores busy even
when one job is in a single-threaded step (docker image download, git
clone, artifacts processing, etc.). When all jobs are generating work
for all the cores, they'll be scheduled fairly.
The nodes in the pool have 300GB boot disks (over-provisioned in space
to provide enough iops and throughput) mounted at /ccache, with
CACHE_DIR pointing at them. This means that once a new
autoscaled-up node has run some jobs, it should have a hot ccache from
then on (instead of having to rely on the docker container cache
having our ccache lying around and not getting wiped out by some
other fd.o job). Local SSDs would provide higher performance, but
unfortunately are not supported with the cluster autoscaler.
For now, the softpipe/llvmpipe test runs are still on the shared
runners, until I can get them ported onto Bas's runner so they can be
parallelized in a single job.
Reviewed-by: Michel Dänzer <[email protected]>
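A sketch of the ccache wiring described above; CCACHE_DIR is ccache's
standard environment variable, but the exact variable name, subdirectory,
and size cap used in the real config are assumptions:

    export CCACHE_DIR=/ccache/mesa     # node-local 300GB boot disk mount
    ccache -M 10G                      # hypothetical cache size cap
    make -j8                           # 8 concurrent jobs x -j8 on 32 cores
    ccache --show-stats                # check hit rates in after_script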
Cross builds don't use the llvm-config path from the native file.
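Presumably the fix is to pin llvm-config in the cross file itself; a
sketch with a hypothetical path (the [binaries] llvm-config entry is
standard Meson machine-file syntax):

    cat > /cross_file.txt <<'EOF'
    [binaries]
    llvm-config = '/usr/lib/llvm-8/bin/llvm-config'
    EOF
    meson _build --cross-file /cross_file.txt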
This will prevent us from accidentally falling back to the wrap-db
instead of using locally installed versions.
Reviewed-by: Eric Engestrom <[email protected]>
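One way to get this behavior, assuming the change relies on Meson's wrap
mode (nofallback is a real value; whether this is the exact mechanism is
an assumption):

    # Never fall back to wrap-db subprojects for dependencies that are
    # installed locally.
    meson _build --wrap-mode=nofallback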
Yes, some tests fail, but we can turn those into XFAILs at meson time.
Better to keep the things that work working than not cover them at all.
Unfortunately XPASS results will not cause the build to fail until we
update CI to meson 0.51 or newer.
Reviewed-by: Daniel Stone <[email protected]>
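A sketch of the kind of meson.build change this implies, shown as a
heredoc; should_fail is a real test() keyword argument, while the test
name, executable, and path are hypothetical:

    cat >> src/gallium/tests/meson.build <<'EOF'
    # With Meson 0.51+, an unexpected pass of a should_fail test is an error.
    test('known-bad-case', test_exe, should_fail : true)
    EOF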
This is the start of doing CTS tests on merges to Mesa master. We use
the surfaceless platform so that we don't need to bother bringing up
weston or X11. The surface size is kept low to reduce runtime, but
this comes at the cost of many rendering tests being skipped due to
too-small render targets (as we see the impact of Mesa on the shared
runner pool, we can reevaluate this and what set of CTS tests we want
to run).
We split the job up across 4 runners (each at 4 llvmpipe threads), so
that the job can load-balance across our shared runners and finish
sooner (since dEQP is very single-thread-performance bound).
Reviewed-by: Eric Engestrom <[email protected]>
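A sketch of one of the four shards; GitLab's parallel keyword provides
CI_NODE_INDEX, and the dEQP surface options are real flags, but the exact
invocation and caselist naming are assumptions:

    export EGL_PLATFORM=surfaceless    # no weston or X11 required
    export LP_NUM_THREADS=4            # llvmpipe threads per shard
    deqp-gles2 --deqp-surface-type=pbuffer \
               --deqp-surface-width=64 --deqp-surface-height=64 \
               --deqp-caselist-file=shard-$CI_NODE_INDEX.txt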
Now that we're running the drivers we build, building with
optimization is important for keeping our runtime down. It shaves about
4 minutes of runtime off the GLES2 CTS run on llvmpipe at 64x64.
v2: Only switch meson-main until we enable CTS for other builds
on request by Michel.
Reviewed-by: Eric Engestrom <[email protected]>
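A sketch; debugoptimized (-O2 with debug info) is a standard Meson
buildtype, though whether the job uses it rather than release is an
assumption:

    meson _build -Dbuildtype=debugoptimized
    ninja -C _build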
If we don't set DESTDIR, then the DEFAULT_DRIVER_DIR built into the
libraries is correct and we don't need to use LIBGL_DRIVERS_PATH and
friends for CI usage. Incidentally, this moves our installed paths
from /builds/anholt/mesa/install/usr/local/lib (for example) to
/builds/anholt/mesa/install/lib for simplicity.
Reviewed-by: Eric Engestrom <[email protected]>
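A sketch of the change, assuming the prefix is now set at configure time
(CI_PROJECT_DIR is the standard GitLab variable behind /builds/anholt/mesa):

    # Before: DESTDIR=$CI_PROJECT_DIR/install ninja -C _build install
    # put drivers under install/usr/local/lib with a wrong built-in path.
    meson _build --prefix="$CI_PROJECT_DIR/install"
    ninja -C _build install    # DEFAULT_DRIVER_DIR is now already correct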
Signed-off-by: Eric Engestrom <[email protected]>
Reviewed-by: Jordan Justen <[email protected]>
This provides significant compiler coverage during CI at a fairly low
cost in CPU time (~17s per thread for 4 threads on
gst-gitlab-htz-runner3).
I'm leaving wget in the docker image, as once this is in master I'm
planning on adding an automatic shader-db comparison between master
and the branch, included in the artifacts. I also haven't done
freedreno yet, because it has some races when run in multithreaded
mode that I'm still tracking down.
Reviewed-by: Eric Engestrom <[email protected]>
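A sketch of what a run-shader-db.sh along these lines might do; the
paths, the output file, and the clone step are illustrative:

    git clone --depth 1 https://gitlab.freedesktop.org/mesa/shader-db
    make -C shader-db
    # -j4 matches the four threads mentioned above.
    ./shader-db/run -j4 shader-db/shaders > artifacts/shader-db-results.txt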
I introduced libdir for cross-builds so we could point at the
resulting drivers without per-arch dependencies, but I'd rather not
have to type x86_64-linux-whatever for non-cross-builds either.
Reviewed-by: Eric Engestrom <[email protected]>
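A sketch; libdir is a builtin Meson option, and a flat value is the point
here, though the exact spelling in the job is an assumption:

    # Drivers land in $prefix/lib rather than $prefix/lib/x86_64-linux-gnu.
    meson _build -Dlibdir=lib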
I don't particularly care about getting x86/ARM cross-build coverage
of all the window systems, but we do want to be building src/mesa/
(for x86 asm) and gallium drivers (for vc4 NEON asm). I'm also hoping
to use these build products for testing freedreno on actual HW (which
we do using surfaceless).
This increases the docker image from 1.4G to 1.5G.
Reviewed-by: Michel Dänzer <[email protected]>
Acked-by: Eric Engestrom <[email protected]>
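A sketch of the kind of cross job this adds; --cross-file,
-Dgallium-drivers, and -Dplatforms are real Meson/Mesa options, while the
file name and option values are assumptions:

    meson _build --cross-file /cross_file-armhf.txt \
          -Dgallium-drivers=vc4 -Dplatforms=surfaceless
    ninja -C _build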
Signed-off-by: Eric Engestrom <[email protected]>
Reviewed-by: Eric Anholt <[email protected]>
Signed-off-by: Eric Engestrom <[email protected]>
Reviewed-by: Lionel Landwerlin <[email protected]>
And consolidate it all into a single job.
It doesn't take much longer than a single version, thanks to ccache.
Overall, this single job might be faster or at least use fewer CPU
cycles than the two jobs it replaces, while covering thrice as many
of LLVM.
v2:
* Move "rm -rf _build" to meson-build.sh.
* Set GALLIUM_DRIVERS the same way both times in the meson-clover job,
for symmetry.
Reviewed-by: Eric Engestrom <[email protected]> # v1
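A sketch of the consolidated job, assuming a hypothetical LLVM_VERSION
knob; the version list is illustrative, and meson-build.sh is the script
named above:

    # ccache makes the second and third configure+build passes cheap.
    for llvm in 6.0 7 8; do
        LLVM_VERSION=$llvm .gitlab-ci/meson-build.sh   # script rm -rf's _build
    done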
No functional change intended (except for no longer running meson
--version separately, as the version appears early in meson's output
anyway).
Reviewed-by: Eric Engestrom <[email protected]>