fedora-infra_ansible/inventory/group_vars/all
Adam Williamson d9db9714d8 Handle systems where the main if is not eth0 a bit better
ifcfg.j2 has a pretty awkward assumption that the interface
connected to the infra network will be eth0 (or enc900) - it
only includes the GATEWAY, DOMAIN and DNS1/DNS2 lines if the
interface is one of those two. It seems we were trying quite
hard to make eth0 always be "the interface", but that's now
been broken on a few systems. enc900 was added as apparently
that's what the main interface is called on some s390 boxes;
on openqa-ppc64le-01 the interface that's connected is eth2 (eth0 is
present, but not connected), and on the new qa01 and qa02, it's
em3 (according to smooge, we have to use 'predictable' interface
names on those boxes as the old names really *do* get assigned
to different interfaces on each boot).

So since we now have several different cases where the 'eth0'
assumption doesn't hold, let's build a slightly better system
for handling it. This replaces ifcfg.j2's hard-coded list with
a variable, and sets the default value of the variable to the
two names ifcfg.j2 handled before: [ 'eth0', 'enc900' ]. This
allows the systems where the main interface is *not* one of
these to set the variable accordingly, and hopefully that'll
give them correct ifcfg files.

This *should* solve the problem of openqa-ppc64le-01.qa, qa01
and qa02 constantly dropping out of network connectivity any
time they were rebooted or the network plays were run.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
2018-12-15 11:09:49 -08:00

---
#######
# BEGIN: Ansible roles_path variables
#
# Background/reference about external repos pulled in:
# https://pagure.io/fedora-infrastructure/issue/5476
#
ansible_base: /srv/web/infra
# Path to the openshift-ansible checkout, an external git repo brought into
# Fedora Infra
openshift_ansible: /srv/web/infra/openshift-ansible/
#
# END: Ansible roles_path variables
#######
freezes: true
# most of our systems are in phx2
datacenter: phx2
postfix_group: "none"
# usually we do not want to enable nested virt, only on some virthosts
nested: false
# most of our systems are 64bit.
# Used to install various nagios scripts and the like.
libdir: /usr/lib64
# Most EL systems need default EPEL repos.
# Some systems (notably fed-cloud*) need to get their own
# EPEL files because EPEL overrides packages in their core repos.
use_default_epel: true
# example of ports for default iptables
# tcp_ports: [ 22, 80, 443 ]
# udp_ports: [ 110, 1024, 2049 ]
# multiple lines can be handled as below
# custom_rules: [ '-A INPUT -p tcp -m tcp --dport 8888 -j ACCEPT',
# '-A INPUT -p tcp -m tcp --dport 8889 -j ACCEPT' ]
# We default these to empty
udp_ports: []
tcp_ports: []
custom_rules: []
nat_rules: []
custom6_rules: []
# defaults for hw installs
install_noc: none
# defaults for virt installs
ks_url: http://infrastructure.fedoraproject.org/repo/rhel/ks/kvm-rhel-7
ks_repo: http://infrastructure.fedoraproject.org/repo/rhel/RHEL7-x86_64/
mem_size: 2048
num_cpus: 2
lvm_size: 20000
# on MOST infra systems, the interface connected to the infra network
# is eth0, but not on quite ALL of them: e.g. on s390 boxes it's enc900,
# on openqa-ppc64le-01.qa it's eth2 for some reason, and on qa01.qa and
# qa02.qa it's em3. currently this only affects whether GATEWAY, DOMAIN
# and DNS1/DNS2 lines are put into ifcfg-(device).
ansible_ifcfg_infra_net_devices: [ 'eth0', 'enc900' ]
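# e.g. a host whose infra interface is em3 (like qa01.qa) would override
# this in its host_vars with something like:
# ansible_ifcfg_infra_net_devices: [ 'em3' ]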
# Default netmask. Almost all our phx2 nets are /24's, with the
# exception of 10.5.124.128/25. Almost all of our non-phx2 sites are
# smaller than a /24.
eth0_nm: 255.255.255.0
eth1_nm: 255.255.255.0
br0_nm: 255.255.255.0
br1_nm: 255.255.255.0
# Default to managing the network; we don't want to do this on select hosts (like cloud nodes)
ansible_ifcfg_blacklist: false
# List of interfaces to explicitly disable
ansible_ifcfg_disabled: []
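# e.g. to stop a host from bringing up a second, unused interface
# (hypothetical device name):
# ansible_ifcfg_disabled: [ 'eth1' ]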
#
# The default virt-install works for rhel7 or fedora with 1 nic
#
virt_install_command: "{{ virt_install_command_one_nic }}"
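# a guest that also needs the nfs nic can point this at the two-nic
# variant defined below, e.g. in its host/group vars:
# virt_install_command: "{{ virt_install_command_two_nic }}"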
main_bridge: br0
nfs_bridge: br1
mac_address: RANDOM
mac_address1: RANDOM
virt_install_command_one_nic: virt-install -n {{ inventory_hostname }}
  --memory={{ mem_size }},maxmemory={{ max_mem_size }} --memballoon virtio
  --disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }}
  --vcpus={{ num_cpus }},maxvcpus={{ max_cpu }} -l {{ ks_repo }} -x
  'net.ifnames=0 ksdevice=eth0 ks={{ ks_url }} console=tty0 console=ttyS0
  hostname={{ inventory_hostname }} nameserver={{ dns }}
  ip={{ eth0_ip }}::{{ gw }}:{{ nm }}:{{ inventory_hostname }}:eth0:none'
  --network bridge={{ main_bridge }},model=virtio,mac={{ mac_address }}
  --autostart --noautoconsole --watchdog default --rng /dev/random --cpu host
virt_install_command_two_nic: virt-install -n {{ inventory_hostname }}
  --memory={{ mem_size }},maxmemory={{ max_mem_size }} --memballoon virtio
  --disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }}
  --vcpus={{ num_cpus }},maxvcpus={{ max_cpu }} -l {{ ks_repo }} -x
  'net.ifnames=0 ksdevice=eth0 ks={{ ks_url }} console=tty0 console=ttyS0
  hostname={{ inventory_hostname }} nameserver={{ dns }}
  ip={{ eth0_ip }}::{{ gw }}:{{ nm }}:{{ inventory_hostname }}:eth0:none
  ip={{ eth1_ip }}:::{{ nm }}:{{ inventory_hostname }}-nfs:eth1:none'
  --network bridge={{ main_bridge }},model=virtio,mac={{ mac_address }}
  --network=bridge={{ nfs_bridge }},model=virtio,mac={{ mac_address1 }}
  --autostart --noautoconsole --watchdog default --rng /dev/random
virt_install_command_aarch64_one_nic: virt-install -n {{ inventory_hostname }}
  --memory={{ mem_size }},maxmemory={{ max_mem_size }} --memballoon virtio
  --disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }}
  --vcpus={{ num_cpus }},maxvcpus={{ max_cpu }} -l {{ ks_repo }} -x
  'net.ifnames=0 ksdevice=eth0 ks={{ ks_url }}
  hostname={{ inventory_hostname }} nameserver={{ dns }}
  ip={{ eth0_ip }}::{{ gw }}:{{ nm }}:{{ inventory_hostname }}:eth0:none'
  --network bridge={{ main_bridge }},model=virtio,mac={{ mac_address }}
  --autostart --noautoconsole
virt_install_command_aarch64_two_nic: virt-install -n {{ inventory_hostname }}
  --memory={{ mem_size }},maxmemory={{ max_mem_size }} --memballoon virtio
  --disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }}
  --vcpus={{ num_cpus }},maxvcpus={{ max_cpu }} -l {{ ks_repo }} -x
  'net.ifnames=0 ksdevice=eth0 ks={{ ks_url }}
  hostname={{ inventory_hostname }} nameserver={{ dns }}
  ip={{ eth0_ip }}::{{ gw }}:{{ nm }}:{{ inventory_hostname }}:eth0:none
  ip={{ eth1_ip }}:::{{ nm }}:{{ inventory_hostname }}-nfs:eth1:none'
  --network bridge={{ main_bridge }},model=virtio,mac={{ mac_address }}
  --network=bridge={{ nfs_bridge }},model=virtio,mac={{ mac_address1 }}
  --autostart --noautoconsole --rng /dev/random
virt_install_command_armv7_one_nic: virt-install -n {{ inventory_hostname }} --arch armv7l
  --memory={{ mem_size }},maxmemory={{ max_mem_size }} --memballoon virtio
  --disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }}
  --vcpus={{ num_cpus }},maxvcpus={{ max_cpu }} -l {{ ks_repo }} -x
  'net.ifnames=0 ksdevice=eth0 ks={{ ks_url }} console=tty0 console=ttyAMA0
  hostname={{ inventory_hostname }} nameserver={{ dns }}
  ip={{ eth0_ip }}::{{ gw }}:{{ nm }}:{{ inventory_hostname }}:eth0:none'
  --network bridge={{ main_bridge }},model=virtio
  --autostart --noautoconsole --rng /dev/random
virt_install_command_rhel6: virt-install -n {{ inventory_hostname }}
  --memory={{ mem_size }},maxmemory={{ max_mem_size }}
  --disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }}
  --vcpus={{ num_cpus }},maxvcpus={{ max_cpu }} -l {{ ks_repo }} -x
  "ksdevice=eth0 ks={{ ks_url }} ip={{ eth0_ip }} netmask={{ nm }}
  gateway={{ gw }} dns={{ dns }} console=tty0 console=ttyS0
  hostname={{ inventory_hostname }}"
  --network=bridge=br0 --autostart --noautoconsole --watchdog default
max_mem_size: "{{ mem_size * 5 }}"
max_cpu: "{{ num_cpus * 5 }}"
# This is the wildcard certname for our proxies. It has a different name for
# the staging group and is used in the proxies.yml playbook.
wildcard_cert_name: wildcard-2017.fedoraproject.org
wildcard_crt_file: wildcard-2017.fedoraproject.org.cert
wildcard_key_file: wildcard-2017.fedoraproject.org.key
wildcard_int_file: wildcard-2017.fedoraproject.org.intermediate.cert
# This is the openshift wildcard cert. Until it exists, set it equal to the wildcard
os_wildcard_cert_name: wildcard-2017.app.os.fedoraproject.org
os_wildcard_crt_file: wildcard-2017.app.os.fedoraproject.org.cert
os_wildcard_key_file: wildcard-2017.app.os.fedoraproject.org.key
os_wildcard_int_file: wildcard-2017.app.os.fedoraproject.org.intermediate.cert
# Everywhere, always, we should sign messages and validate signatures.
# However, we allow individual hosts and groups to override this. Use this very
# carefully... and never in production (good for testing stuff in staging).
fedmsg_sign_messages: True
fedmsg_validate_signatures: True
# By default, nodes get no fedmsg certs. They need to declare them explicitly.
fedmsg_certs: []
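# a host/group that needs certs declares them itself; the entries are
# roughly of this shape (hypothetical example -- check the fedmsg roles
# and existing group_vars for the exact fields expected):
# fedmsg_certs:
# - service: shell
#   owner: root
#   group: sysadmin
#   can_send:
#   - logger.log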
# By default, fedmsg should not log debug info. Groups can override this.
fedmsg_loglevel: INFO
# By default, fedmsg sends error logs to sysadmin-datanommer-members@fp.o.
fedmsg_error_recipients:
- sysadmin-datanommer-members@fedoraproject.org
# By default, fedmsg hosts are in passive mode. External hosts are typically
# active.
fedmsg_active: False
# Other defaults for fedmsg environments
fedmsg_prefix: org.fedoraproject
fedmsg_env: prod
# Amount of time to wait for connections after a socket is first established.
fedmsg_post_init_sleep: 1.0
# A special flag that, when set to true, will disconnect the host from the
# global fedmsg-relay instance and set it up with its own local one. You can
# temporarily set this to true for a specific host to do some debugging -- so
# you can *replay real messages from the datagrepper history without having
# those broadcast to the rest of the bus*.
fedmsg_debug_loopback: False
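# e.g. while debugging, drop this into the host_vars of the host in question:
# fedmsg_debug_loopback: True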
# These are used to:
# 1) configure mod_wsgi
# 2) open iptables rules for fedmsg (per wsgi thread)
# 3) declare enough fedmsg endpoints for the service
#wsgi_fedmsg_service: bodhi
#wsgi_procs: 4
#wsgi_threads: 4
# By default, nodes don't backup any dbs on them unless they declare it.
dbs_to_backup: []
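# e.g. a database server would list the databases it wants dumped
# (hypothetical name):
# dbs_to_backup: [ 'somedb' ]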
# by default the number of procs we allow before we whine
nrpe_procs_warn: 250
nrpe_procs_crit: 300
# by default, the number of emails in queue before we whine
nrpe_check_postfix_queue_warn: 2
nrpe_check_postfix_queue_crit: 5
# env is staging or production; we default it to production here.
env: production
env_suffix:
# nfs mount options, override at the group/host level
nfs_mount_opts: "ro,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3"
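# e.g. a host that needs write access would override this with something like
# (same options, just rw instead of ro):
# nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3"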
# by default set become to false here. We can override it as needed.
# Note that if become is true, you need to unset requiretty for
# ssh controlpersist to work.
become: false
# default the root_auth_users to nothing.
# This should be set for cloud instances in their host or group vars.
root_auth_users: ''
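# e.g. a cloud instance might set (hypothetical usernames; check other
# host_vars for the exact expected format):
# root_auth_users: 'someuser otheruser'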
# These vars get shoved into /etc/system_identification by the base role.
# Groups and individual hosts should override them with specific info.
# See http://infrastructure.fedoraproject.org/csi/security-policy/
csi_security_category: Unspecified
csi_primary_contact: Fedora Admins - admin@fedoraproject.org
csi_purpose: Unspecified
csi_relationship: |
  Unspecified.
  * What hosts/services does this rely on?
  * What hosts/services rely on this?
  To update this text, add the csi_* vars to group_vars/ in ansible.
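# e.g. a group would set (hypothetical values):
# csi_security_category: Low
# csi_purpose: Run the example service
# csi_relationship: |
#   Relies on the proxies; nothing relies on it.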
#
# say if we want the apache role dependency for mod_wsgi or not
# In some cases we want mod_wsgi and no apache (for python3 httpaio stuff)
#
wsgi_wants_apache: true
# IPA settings
additional_host_keytabs: []
ipa_server: ipa01.phx2.fedoraproject.org
ipa_realm: FEDORAPROJECT.ORG
ipa_admin_password: "{{ ipa_prod_admin_password }}"
# Normal default sshd port is 22
sshd_port: 22
# List of names under which the host is available
ssh_hostnames: []
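# e.g. a host reachable under more than one name might set (hypothetical names):
# ssh_hostnames:
# - host01.fedoraproject.org
# - host01.phx2.fedoraproject.org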
# assume collectd apache
collectd_apache: true
# assume vpn is false
vpn: False
# assume createrepo is true and this builder has the koji nfs mount to do that
createrepo: True
# Nagios global variables
nagios_Check_Services:
  mail: true
  nrpe: true
  sshd: true
  named: false
  dhcpd: false
  httpd: false
  swap: true
  ping: true
# Set variable if we want to use our global iptables defaults
# Some things need to set their own.
baseiptables: True
# Most of our machines have manual resolv.conf files.
# These settings are for machines where NM is supposed to control resolv.conf.
nm_controlled_resolv: False
dns1: "10.5.126.21"
dns2: "10.5.126.22"
# This is a list of services that need to wait for VPN to be up before getting started.
postvpnservices: []
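# e.g. a host running a service that only listens on its VPN address
# might set (hypothetical):
# postvpnservices: [ 'httpd' ]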