This is a new feature in openQA that prevents worker hosts from
picking up new jobs while their load average is above a certain
threshold. It defaults to 40. Our big worker hosts tend to run
above that, so let's bump it on those.
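For the record, the knob in question looks something like this
(assuming the critical_load_avg_threshold key in workers.ini, per
current openQA docs; the value here is just illustrative):

    [global]
    # default is 40; our big boxes routinely sit above that
    critical_load_avg_threshold = 60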
Signed-off-by: Adam Williamson <awilliam@redhat.com>
I'm going to try splitting the tap jobs across multiple worker
hosts. We have quite a lot of tap jobs now, and I have sometimes
seen a situation where all non-tap jobs have been run and the
non-tap worker hosts are sitting idle, while the single tap
worker host still has a long queue of tap jobs to get through.
We can't just put multiple hosts per instance into the tap
class, because then we might get a case where jobs A and B from
the same tap group run on different hosts and can't communicate.
It's actually possible to set this up so it works, but it needs
yet more complex networking stuff I don't want to mess with. So
instead I'm just gonna split the tap job groups across two
classes, 'tap' and 'tap2'. That way we can have one 'tap' worker
host and one 'tap2' worker host per instance and arch, and each
will get about half the tap jobs.
Unfortunately, since we only have one aarch64 worker for lab, it
will still have to run all the tap jobs there; but in all other
cases we do have at least two workers, so we can split the load.
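Roughly, assuming the split is expressed via the usual
WORKER_CLASS key in each host's workers.ini (the class lists here
are illustrative), it looks like:

    # 'tap' host for a given arch/deployment
    [global]
    WORKER_CLASS = qemu_x86_64,tap

    # 'tap2' host for the same arch/deployment
    [global]
    WORKER_CLASS = qemu_x86_64,tap2

The lone lab aarch64 host would just carry both classes.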
Signed-off-by: Adam Williamson <awilliam@redhat.com>
The main goal of these changes is to give all workers in each
deployment NFS write access to the factory share. This is because
I want to try using os-autoinst's at-job-run-time decompression
of disk images instead of openQA's at-asset-download-time
decompression; it avoids some awkwardness with the asset file
name, and should, I think, also allow us to drop the
decompression code from openQA entirely.
I also rejigged various other things at the same time, as they
kinda logically go together. It's mostly cleanups and tweaks to
group variables. I tried to handle more things explicitly with
variables, which makes these plays easier to use outside of
Fedora infra.
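For illustration, the exports change amounts to something like
the following (host name and path are hypothetical; the real
values come from the group variables):

    # /etc/exports on the server: factory share now rw for workers
    /var/lib/openqa/share/factory  worker01.example.org(rw,sync)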
Signed-off-by: Adam Williamson <awilliam@redhat.com>
swtpm is a TPM emulator we want to use for testing Clevis on
IoT (and potentially other things in future). We're implementing
this by having os-autoinst just add the qemu args, while
expecting swtpm itself to be running already - that's counted as
the sysadmin's responsibility. My approach to this is to have
openQA tap worker hosts also be tpm worker hosts, meaning they
run one instance of swtpm per worker instance (as a systemd
service) and are added to a 'tpm' worker class which tests can
use to ensure they run on a suitably-equipped worker. This sets
up all of that.
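A minimal sketch of the kind of templated unit meant here (the
socket path follows os-autoinst's per-instance convention as I
understand it; exact paths and options in our role may differ):

    # swtpm@.service - one swtpm per worker instance, e.g. swtpm@1
    [Unit]
    Description=swtpm TPM emulator for openQA worker instance %i

    [Service]
    ExecStartPre=/usr/bin/mkdir -p /tmp/mytpm%i
    ExecStart=/usr/bin/swtpm socket --tpm2 --tpmstate dir=/tmp/mytpm%i --ctrl type=unixio,path=/tmp/mytpm%i/swtpm-sock

    [Install]
    WantedBy=multi-user.target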
We need a custom SELinux policy module to allow systemd to run
swtpm, as this is blocked by default.
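Something like the usual audit2allow workflow produces such a
module from the logged denials (the module name here is
arbitrary):

    ausearch -m AVC -ts recent | audit2allow -M swtpm_systemd
    semodule -i swtpm_systemd.pp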
Signed-off-by: Adam Williamson <awilliam@redhat.com>
openqa-aarch64-02.qa is broken in some very mysterious way:
https://pagure.io/fedora-infrastructure/issue/8750
Until we can figure that out, this should prevent it from picking
up normal jobs, while still letting us manually target a job at
it whenever we need to for debugging.
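A sketch of the idea, assuming we do it via WORKER_CLASS (class
name illustrative): give the host a class no test suite or
machine references, so the scheduler never matches it unless we
explicitly set WORKER_CLASS when posting a job:

    [global]
    WORKER_CLASS = aarch64_debug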
Signed-off-by: Adam Williamson <awilliam@redhat.com>
I thought just having it WantedBy=remote-fs.target would be
enough, but in fact this mount often fails on boot, and I forget
to check all the worker boxes until a bunch of tests fail and
everyone is sad. Let's try After=network-online.target and see
if that helps.
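Sketch of the drop-in (the unit name assumes the share is mounted
at /var/lib/openqa/share; note After= only orders units, so
Wants= is typically paired with it to actually pull
network-online.target in):

    # /etc/systemd/system/var-lib-openqa-share.mount.d/network-online.conf
    [Unit]
    After=network-online.target
    Wants=network-online.target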
Signed-off-by: Adam Williamson <awilliam@redhat.com>
OK, this GRE crap ain't working. Let's give up! Instead let's
have one tap-capable host per openQA deployment, so all the
tap jobs will go to it. This...should achieve that. Let's see
what blows up.
This adds openQA server, worker and dispatcher roles, and
applies them to the appropriate hosts. A few secret vars are
required. See trac #4958 for discussion.
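In outline (group and role names here are illustrative, not the
exact inventory layout):

    - hosts: openqa_servers
      roles:
        - openqa/server

    - hosts: openqa_workers
      roles:
        - openqa/worker

    - hosts: openqa_dispatchers
      roles:
        - openqa/dispatcher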