The old cloud-noc-os01 was for the old OpenStack we used to have and
wanted to re-set up in RDU, but never did.
So, let's just move this to our more usual naming convention.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
network-scripts-openvswitch was removed in f40 and network-scripts
is going away in f41; we really need to stop using them.
This attempts to implement the same setup using NetworkManager,
based on a few different NM/ovs references, and the source of
openQA upstream's os-autoinst-setup-multi-machine. It might
need a bit of tweaking, so for now we make it a separate task
and use it only on p09-worker01 for testing. This doesn't handle
tearing down the old network-scripts-based config, as that's
pretty complex and will only need to happen once; I'll do it
manually before trying this out.
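For reference, the NM/ovs pattern from the references boils down to
something like the following sketch. The bridge, port, and interface
names here are examples only, and the physical NIC name (eth0) is a
placeholder; the actual task uses whatever the worker host really has:

```shell
# Create an OVS bridge managed by NetworkManager (names are examples).
nmcli conn add type ovs-bridge conn.interface br0 con-name br0

# An OVS bridge holds ports; an internal interface on one port carries the IP.
nmcli conn add type ovs-port conn.interface port0 master br0 con-name ovs-port0
nmcli conn add type ovs-interface slave-type ovs-port conn.interface iface0 \
    master port0 con-name ovs-iface0 \
    ipv4.method manual ipv4.addresses 192.0.2.1/24

# Enslave the physical NIC via a second port so traffic reaches the wire.
nmcli conn add type ovs-port conn.interface port1 master br0 con-name ovs-port1
nmcli conn add type ethernet conn.interface eth0 master port1 \
    con-name ovs-port1-eth
```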
Signed-off-by: Adam Williamson <awilliam@redhat.com>
The interface name changed (thanks, 'predictable' names...sigh)
and this box *is* encrypted currently.
Signed-off-by: Adam Williamson <awilliam@redhat.com>
Trying to address:
[WARNING]: Unhandled error in Python interpreter discovery for host logdetective01.fedorainfracloud.org: unexpected output from Python interpreter discovery
[WARNING]: Platform unknown on host logdetective01.fedorainfracloud.org is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change the
meaning of that path. See https://docs.ansible.com/ansible-core/2.14/reference_appendices/interpreter_discovery.html for more information.
fatal: [logdetective01.fedorainfracloud.org]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "module_stderr": "", "module_stdout": "Please login as the user \"fedora\" rather than the user \"root\".\n\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 142}
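For the warning half of this, the interpreter discovery docs linked
above suggest pinning the interpreter per host or group, roughly like
the sketch below (the vars file location is hypothetical; note the
rc 142 MODULE FAILURE itself comes from the cloud image redirecting
root logins to the "fedora" user, which pinning alone won't fix):

```yaml
# e.g. in inventory or a group_vars file for this host (location is an example):
ansible_python_interpreter: /usr/bin/python3
# or, to keep auto-discovery but silence the warning:
# ansible_python_interpreter: auto_silent
```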
We need to get onto the mgmt network from here so we can put the rdu-cc
noc on this vmhost. I am not sure if it's on eth2, but if not we can
adjust it.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
We have these 7 eMAGs that were bvmhosts running 32-bit ARM builders.
Since we no longer need those, let's repurpose them as aarch64 buildhw.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
Let's retire these rhel7 VMs from ansible/running.
I will be saving off the disks and XML for all of the VMs, so in the
event we need to bring something back or look at something, we can do
so.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
This will be installed, then we will sync data from people02 to it and
finally cut over to using it tomorrow in an outage.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
We have 4 of the new Mt. Snow boxes that were bvmhosts before, but we
moved the VMs to the newer generation versions, so we should use these
as buildhw boxes. I plan to add 2 of them to the runroot channel for
composes and 2 of them as general builders in the heavybuild channel to
help with chromium builds.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
With the EOL of Fedora 38 yesterday, we are no longer building any
modules and can retire our module build service.
Note that toddlers still needs to be adjusted; that will happen after
this.
Thanks for all the modules!
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
Move these VMs off the old rhel7 virthost to the new rhel9 one.
It's faster, better, newer, etc.
Once these are all moved (pagure02 is still live migrating), we can
decommission virthost-cc-rdu02, as it's end of life.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>