25 Commits

Author SHA1 Message Date
Adam Williamson
560745435a openqa: require libsemanage-python for seboolean 2015-11-13 09:37:57 -08:00
Adam Williamson
9ee4338308 openqa: use seboolean module instead of re-inventing it 2015-11-13 09:36:36 -08:00
Adam Williamson
8f8686d43d move openqa roles to a subdirectory (per puiterwijk) 2015-11-13 09:16:59 -08:00
Adam Williamson
9a233d20e5 openqa: add a couple of username vars 2015-11-13 09:09:35 -08:00
Adam Williamson
75191fdd5b add openqa_dispatcher task to set up the scheduler
This sets up the script we have for downloading ISOs and
triggering openQA runs (nightly runs for Rawhide, Branched,
and the post-release nightly cloud images, and the 'current'
TC/RC service). Currently this role should only be deployed to
the same box as the server it will schedule jobs for, but that
can change in future.
2015-11-12 17:55:20 -08:00
Adam Williamson
fb42855ea6 openqa: don't check ownership of client config
It gets ping-ponged around depending on whether the host is
a worker as well or not, if we check it here, it gets changed
to root then right back to _openqa-worker when the worker
play runs.
2015-11-12 17:54:12 -08:00
Adam Williamson
b86fcfc054 openqa: drop some unnecessary indentation 2015-11-12 16:22:18 -08:00
Adam Williamson
d6070b9081 make 'localhost' the openqa_worker default for openqa_hostname 2015-11-12 16:09:38 -08:00
Adam Williamson
749efbba97 create asset dirs, have geekotest own tests, create hdd images 2015-11-12 15:51:11 -08:00
Adam Williamson
4b347a9214 openqa: don't check ownership of shared data mount
after mounting it's not owned by root any more, and chowning
a mount doesn't work, so if we try and check ownership it blows
up
2015-11-12 15:50:21 -08:00
Adam Williamson
d4750a4f55 openqa: fix NFS mount so it works on boot 2015-11-12 14:42:07 -08:00
Kevin Fenzi
209c8a9a5d Switch repo2json to the new repo format we are using since the move to the batcave. 2015-11-12 14:02:38 -08:00
Adrian Reber
8b8606a0d6 Enable mirrorlist-server logging
With the logs from the mirrorlist-server logging it is possible
to create country/repository/architecture statistics.

The code which creates the actual statistics is partially already
included into mirrormanager.

Signed-off-by: Adrian Reber <adrian@lisas.de>
2015-11-12 14:02:38 -08:00
Patrick Uiterwijk
04c6a4fbf8 stg didnt need to be bumped
Signed-off-by: Patrick Uiterwijk <puiterwijk@redhat.com>
2015-11-12 14:02:38 -08:00
Patrick Uiterwijk
6f8ff306b5 Bump crawler memory up to 40G
Signed-off-by: Patrick Uiterwijk <puiterwijk@redhat.com>
2015-11-12 14:02:38 -08:00
Adrian Reber
3f26628ece Decrease number of parallel crawlers to 20 2015-11-12 14:02:38 -08:00
Kevin Fenzi
7e49120c7a Move yum repos to pre tasks 2015-11-12 14:02:38 -08:00
Kevin Fenzi
c775539707 Switch back to inventory_hostname_short for debugging 2015-11-12 14:02:38 -08:00
Kevin Fenzi
92cfa47aad Try this as inventory_hostname as it doesn't like _short 2015-11-12 14:02:38 -08:00
Kevin Fenzi
e1582cdd56 Drop duplicate when, should be handled by the first one. 2015-11-12 14:02:38 -08:00
Kevin Fenzi
76c2f6fd35 Another duplicate variable 2015-11-12 14:02:38 -08:00
Kevin Fenzi
0c793c3513 Fix a bunch of duplicate variables. 2015-11-12 14:02:38 -08:00
Kevin Fenzi
7fd931b389 Drop duplicate buildmaster_dir definitions 2015-11-12 14:02:38 -08:00
Kevin Fenzi
f024b35147 Drop duplicate slave_user definitions. 2015-11-12 14:02:38 -08:00
Adam Williamson
56246a62ec set up openQA
This adds openqa_server and openqa_worker roles, and applies
them to the appropriate host groups. Note that servers also
act as worker hosts for themselves.
2015-11-12 11:32:41 -08:00
5439 changed files with 79669 additions and 367220 deletions


@@ -9,7 +9,7 @@ Playbook naming
===============
The top level playbooks directory should contain:
* Playbooks that are generic and used by several groups/hosts playbooks
* Playbooks that are generic and used by serveral groups/hosts playbooks
* Playbooks used for utility purposes from command line
* Groups and Hosts subdirs.
@@ -95,7 +95,7 @@ We would like to get ansible running over hosts in an automated way.
A git hook could do this.
* On commit:
If we have a way to determine exactly what hosts are affected by a
If we have a way to detemine exactly what hosts are affected by a
change we could simply run only on those hosts.
We might want a short delay (10m) to allow someone to see a problem
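The hook idea above can be sketched in a few lines. This is a hypothetical post-commit hook, not anything that exists in the repo: the path filter and the idea that a playbook file maps to a set of hosts are assumptions.

```python
import subprocess

def changed_files(rev="HEAD"):
    """Files modified by commit `rev` (requires running inside a git checkout)."""
    out = subprocess.check_output(
        ["git", "diff-tree", "--no-commit-id", "--name-only", "-r", rev])
    return out.decode().splitlines()

def affected_playbooks(paths):
    """Keep only top-level playbook files; mapping a playbook to its
    affected hosts is the open problem the text mentions."""
    return [p for p in paths if p.startswith("playbooks/") and p.endswith(".yml")]

# Demo on a fixed list so this runs outside a git checkout:
sample = ["playbooks/groups/openqa.yml", "roles/openqa/server/tasks/main.yml"]
print(affected_playbooks(sample))
```

A real hook would then queue an ansible run against only those hosts, possibly after the short delay the text suggests.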


@@ -169,13 +169,7 @@ and traceroute and friends).
=== TERMINATING INSTANCES ===
For transient:
1. source /srv/private/ansible/files/openstack/novarc
2. export OS_TENANT_NAME=transient
2. nova list | grep <ip of your instance or name of your instance>
3. nova delete <name of instance or ID of instance>
1. source /srv/private/ansible/files/openstack/transient-admin/keystonerc.sh
- OR -


@@ -22,11 +22,6 @@ import pwd
import fedmsg
import fedmsg.config
try:
from ansible.plugins.callback import CallbackBase
except ImportError:
# Ansible v1 compat
CallbackBase = object
def getlogin():
try:
@@ -36,7 +31,7 @@ def getlogin():
return user
class CallbackModule(CallbackBase):
class CallbackModule(object):
""" Publish playbook starts and stops to fedmsg. """
playbook_path = None


@@ -1,116 +0,0 @@
# (C) 2012, Michael DeHaan, <michael.dehaan@gmail.com>
# based on the log_plays example
# skvidal@fedoraproject.org
# rbean@redhat.com
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
import os
import pwd
import fedmsg
import fedmsg.config
try:
from ansible.plugins.callback import CallbackBase
except ImportError:
# Ansible v1 compat
CallbackBase = object
try:
from ansible.utils.hashing import secure_hash
except ImportError:
from ansible.utils import md5 as secure_hash
def getlogin():
try:
user = os.getlogin()
except OSError, e:
user = pwd.getpwuid(os.geteuid())[0]
return user
class CallbackModule(CallbackBase):
""" Publish playbook starts and stops to fedmsg. """
CALLBACK_NAME = 'fedmsg_callback2'
CALLBACK_TYPE = 'notification'
CALLBACK_VERSION = 2.0
CALLBACK_NEEDS_WHITELIST = True
playbook_path = None
def __init__(self):
config = fedmsg.config.load_config()
config.update(dict(
name='relay_inbound',
cert_prefix='shell',
active=True,
))
# It seems like recursive playbooks call this over and over again and
# fedmsg doesn't like to be initialized more than once. So, here, just
# catch that and ignore it.
try:
fedmsg.init(**config)
except ValueError:
pass
self.play = None
self.playbook = None
super(CallbackModule, self).__init__()
def set_play_context(self, play_context):
self.play_context = play_context
def v2_playbook_on_start(self, playbook):
self.playbook = playbook
def v2_playbook_on_play_start(self, play):
# This gets called once for each play.. but we just issue a message once
# for the first one. One per "playbook"
if self.playbook:
# figure out where the playbook FILE is
path = os.path.abspath(self.playbook._file_name)
# Bail out early without publishing if we're in --check mode
if self.play_context.check_mode:
return
if not self.playbook_path:
fedmsg.publish(
modname="ansible", topic="playbook.start",
msg=dict(
playbook=path,
userid=getlogin(),
extra_vars=play._variable_manager.extra_vars,
inventory=play._variable_manager._inventory._sources,
playbook_checksum=secure_hash(path),
check=self.play_context.check_mode,
),
)
self.playbook_path = path
def v2_playbook_on_stats(self, stats):
if not self.playbook_path:
return
results = dict([(h, stats.summarize(h)) for h in stats.processed])
fedmsg.publish(
modname="ansible", topic="playbook.complete",
msg=dict(
playbook=self.playbook_path,
userid=getlogin(),
results=results,
),
)


@@ -15,20 +15,12 @@
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import absolute_import
import os
import time
import json
import pwd
from ansible import utils
try:
from ansible.plugins.callback import CallbackBase
except ImportError:
# Ansible v1 compat
CallbackBase = object
TIME_FORMAT="%b %d %Y %H:%M:%S"
MSG_FORMAT="%(now)s\t%(count)s\t%(category)s\t%(name)s\t%(data)s\n"
@@ -160,15 +152,10 @@ class LogMech(object):
logmech = LogMech()
class CallbackModule(CallbackBase):
class CallbackModule(object):
"""
logs playbook results, per host, in /var/log/ansible/hosts
"""
CALLBACK_NAME = 'logdetail'
CALLBACK_TYPE = 'notification'
CALLBACK_VERSION = 2.0
CALLBACK_NEEDS_WHITELIST = True
def __init__(self):
self._task_count = 0
self._play_count = 0


@@ -1,278 +0,0 @@
# (C) 2012, Michael DeHaan, <michael.dehaan@gmail.com>
# based on the log_plays example
# skvidal@fedoraproject.org
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import absolute_import
import os
import time
import json
import pwd
try:
from ansible.utils.hashing import secure_hash
except ImportError:
from ansible.utils import md5 as secure_hash
try:
from ansible.plugins.callback import CallbackBase
except ImportError:
# Ansible v1 compat
CallbackBase = object
TIME_FORMAT="%b %d %Y %H:%M:%S"
MSG_FORMAT="%(now)s\t%(count)s\t%(category)s\t%(name)s\t%(data)s\n"
LOG_PATH = '/var/log/ansible'
def getlogin():
try:
user = os.getlogin()
except OSError, e:
user = pwd.getpwuid(os.geteuid())[0]
return user
class LogMech(object):
def __init__(self):
self.started = time.time()
self.pid = str(os.getpid())
self._pb_fn = None
self._last_task_start = None
self.play_info = {}
self.logpath = LOG_PATH
if not os.path.exists(self.logpath):
try:
os.makedirs(self.logpath, mode=0750)
except OSError, e:
if e.errno != 17:
raise
# checksum of full playbook?
@property
def playbook_id(self):
if self._pb_fn:
return os.path.basename(self._pb_fn).replace('.yml', '').replace('.yaml', '')
else:
return "ansible-cmd"
@playbook_id.setter
def playbook_id(self, value):
self._pb_fn = value
@property
def logpath_play(self):
# this is all to get our path to look nice ish
tstamp = time.strftime('%Y/%m/%d/%H.%M.%S', time.localtime(self.started))
path = os.path.normpath(self.logpath + '/' + self.playbook_id + '/' + tstamp + '/')
if not os.path.exists(path):
try:
os.makedirs(path)
except OSError, e:
if e.errno != 17: # if it is not dir exists then raise it up
raise
return path
def play_log(self, content):
# record out playbook.log
# include path to playbook, checksums, user running playbook
# any args we can get back from the invocation
fd = open(self.logpath_play + '/' + 'playbook-' + self.pid + '.info', 'a')
fd.write('%s\n' % content)
fd.close()
def task_to_json(self, task):
res = {}
res['task_name'] = task.name
res['task_module'] = task.action
res['task_args'] = task.args
if self.playbook_id == 'ansible-cmd':
res['task_userid'] = getlogin()
for k in ("delegate_to", "environment", "with_first_found",
"local_action", "notified_by", "notify",
"register", "sudo", "sudo_user", "tags",
"transport", "when"):
v = getattr(task, k, None)
if v:
res['task_' + k] = v
return res
def log(self, host, category, data, task=None, count=0):
if not host:
host = 'HOSTMISSING'
if type(data) == dict:
name = data.get('module_name',None)
else:
name = "unknown"
# we're in setup - move the invocation info up one level
if 'invocation' in data:
invoc = data['invocation']
if not name and 'module_name' in invoc:
name = invoc['module_name']
#don't add this since it can often contain complete passwords :(
del(data['invocation'])
if task:
name = task.name
data['task_start'] = self._last_task_start
data['task_end'] = time.time()
data.update(self.task_to_json(task))
if 'task_userid' not in data:
data['task_userid'] = getlogin()
if category == 'OK' and data.get('changed', False):
category = 'CHANGED'
if self.play_info.get('check', False) and self.play_info.get('diff', False):
category = 'CHECK_DIFF:' + category
elif self.play_info.get('check', False):
category = 'CHECK:' + category
# Sometimes this is None.. othertimes it's fine. Othertimes it has
# trailing whitespace that kills logview. Strip that, when possible.
if name:
name = name.strip()
sanitize_host = host.replace(' ', '_').replace('>', '-')
fd = open(self.logpath_play + '/' + sanitize_host + '.log', 'a')
now = time.strftime(TIME_FORMAT, time.localtime())
fd.write(MSG_FORMAT % dict(now=now, name=name, count=count, category=category, data=json.dumps(data)))
fd.close()
logmech = LogMech()
class CallbackModule(CallbackBase):
"""
logs playbook results, per host, in /var/log/ansible/hosts
"""
CALLBACK_NAME = 'logdetail2'
CALLBACK_TYPE = 'notification'
CALLBACK_VERSION = 2.0
CALLBACK_NEEDS_WHITELIST = True
def __init__(self):
self._task_count = 0
self._play_count = 0
self.task = None
self.playbook = None
super(CallbackModule, self).__init__()
def set_play_context(self, play_context):
self.play_context = play_context
def v2_runner_on_failed(self, result, ignore_errors=False):
category = 'FAILED'
logmech.log(result._host.get_name(), category, result._result, self.task, self._task_count)
def v2_runner_on_ok(self, result):
category = 'OK'
logmech.log(result._host.get_name(), category, result._result, self.task, self._task_count)
def v2_runner_on_skipped(self, result):
category = 'SKIPPED'
res = {}
res['item'] = self._get_item(getattr(result._result, 'results', {}))
logmech.log(result._host.get_name(), category, res, self.task, self._task_count)
def v2_runner_on_unreachable(self, result):
category = 'UNREACHABLE'
res = {}
res['output'] = result._result
logmech.log(result._host.get_name(), category, res, self.task, self._task_count)
def v2_runner_on_async_failed(self, result):
category = 'ASYNC_FAILED'
logmech.log(result._host.get_name(), category, result._result, self.task, self._task_count)
def v2_playbook_on_start(self, playbook):
self.playbook = playbook
def v2_playbook_on_task_start(self, task, is_conditional):
self.task = task
logmech._last_task_start = time.time()
self._task_count += 1
def v2_playbook_on_setup(self):
self._task_count += 1
def v2_playbook_on_import_for_host(self, result, imported_file):
res = {}
res['imported_file'] = imported_file
logmech.log(result._host.get_name(), 'IMPORTED', res, self.task)
def v2_playbook_on_not_import_for_host(self, result, missing_file):
res = {}
res['missing_file'] = missing_file
logmech.log(result._host.get_name(), 'NOTIMPORTED', res, self.task)
def v2_playbook_on_play_start(self, play):
self._task_count = 0
if play:
# figure out where the playbook FILE is
path = os.path.abspath(self.playbook._file_name)
# tel the logger what the playbook is
logmech.playbook_id = path
# if play count == 0
# write out playbook info now
if not self._play_count:
pb_info = {}
pb_info['playbook_start'] = time.time()
pb_info['playbook'] = path
pb_info['userid'] = getlogin()
pb_info['extra_vars'] = play._variable_manager.extra_vars
pb_info['inventory'] = play._variable_manager._inventory._sources
pb_info['playbook_checksum'] = secure_hash(path)
pb_info['check'] = self.play_context.check_mode
pb_info['diff'] = self.play_context.diff
logmech.play_log(json.dumps(pb_info, indent=4))
self._play_count += 1
# then write per-play info that doesn't duplcate the playbook info
info = {}
info['play'] = play.name
info['hosts'] = play.hosts
info['transport'] = self.play_context.connection
info['number'] = self._play_count
info['check'] = self.play_context.check_mode
info['diff'] = self.play_context.diff
logmech.play_info = info
logmech.play_log(json.dumps(info, indent=4))
def v2_playbook_on_stats(self, stats):
results = {}
for host in stats.processed.keys():
results[host] = stats.summarize(host)
logmech.log(host, 'STATS', results[host])
logmech.play_log(json.dumps({'stats': results}, indent=4))
logmech.play_log(json.dumps({'playbook_end': time.time()}, indent=4))
print('logs written to: %s' % logmech.logpath_play)
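The tab-separated record that `LogMech.log` writes via `MSG_FORMAT` can be rendered standalone; the field values below are made up for illustration.

```python
import json

# Formats copied from the logdetail plugin above
TIME_FORMAT = "%b %d %Y %H:%M:%S"
MSG_FORMAT = "%(now)s\t%(count)s\t%(category)s\t%(name)s\t%(data)s\n"

# One example log record (illustrative values, not real output)
line = MSG_FORMAT % dict(
    now="Nov 12 2015 14:02:38",
    count=3,
    category="CHANGED",
    name="yum",
    data=json.dumps({"changed": True}),
)
print(line)
```

Each host gets its own `<host>.log` file of such lines under the timestamped play directory.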


@@ -0,0 +1,40 @@
import time
class CallbackModule(object):
"""
A plugin for timing tasks
"""
def __init__(self):
self.stats = {}
self.current = None
def playbook_on_task_start(self, name, is_conditional):
"""
Logs the start of each task
"""
if self.current is not None:
# Record the running time of the last executed task
self.stats[self.current] = time.time() - self.stats[self.current]
# Record the start time of the current task
self.current = name
self.stats[self.current] = time.time()
def playbook_on_stats(self, stats):
"""
Prints the timings
"""
# Record the timing of the very last task
if self.current is not None:
self.stats[self.current] = time.time() - self.stats[self.current]
# Sort the tasks by their running time
results = sorted(self.stats.items(), key=lambda value: value[1], reverse=True)
# Just keep the top 10
results = results[:10]
# Print the timings
for name, elapsed in results:
print "{0:-<70}{1:->9}".format('{0} '.format(name), ' {0:.02f}s'.format(elapsed))
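The bookkeeping trick in the timing plugin above is that the same dict entry holds first a start timestamp, then an elapsed time once the next task begins. Exercised outside Ansible (Python 3 print, task names invented):

```python
import time

stats = {}
current = None

def task_start(name):
    """Close out the previous task's timer, then start one for `name`."""
    global current
    if current is not None:
        stats[current] = time.time() - stats[current]  # timestamp -> elapsed
    current = name
    stats[current] = time.time()  # store start timestamp

task_start("install packages")
time.sleep(0.01)
task_start("restart service")
time.sleep(0.01)

# Close the final timer, then print the top tasks, longest first,
# in the same dotted-leader layout the plugin uses.
if current is not None:
    stats[current] = time.time() - stats[current]
results = sorted(stats.items(), key=lambda kv: kv[1], reverse=True)[:10]
for name, elapsed in results:
    print("{0:-<70}{1:->9}".format(name + " ", " {0:.02f}s".format(elapsed)))
```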


@@ -3,6 +3,8 @@ auth required pam_env.so
auth sufficient pam_url.so config=/etc/pam_url.conf
auth requisite pam_succeed_if.so uid >= 500 quiet
auth required pam_deny.so
auth include system-auth
account include system-auth
password include system-auth
session optional pam_keyinit.so revoke


@@ -1,30 +0,0 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1
mQINBFfZrzsBEADGLYtUW4YZNKSq/bawWYSg3Z8OAD3amoWx9BTdiBjWyIn7PzBQ
g/Y2QpTj9Sylhi4ZDqcP6eikrC2bqZdBeJyOAHSkV6Nvt+D/ijHOViEsSg+OwHmC
9axbsNHI+WKYPR7GBb40/hu7miHTOWd7puuJ000nyeHckicSHNYb+KxwoN9TTyON
utqTtzUb1v0f+GZ2E3XHCa/SgHG+syFbKhFiPRqSmwuhESgz7JIPx9UPz/pkg/rA
qHILJDt5PGaxhRNcK4rOVhpIBxTdjyYvtkCzlMr8ZaLqlQx2B5Ub9osYSv7CwQD5
tJTb9ed/p5HKuT9JEDSgtxV2yy6bxEMkBjlD5m4ISnOnZ8GGjPl434FdufusIwDX
vFUQDH5BSGV1xUcoCoNAMY+CUCoUaTBkv5PqLOgsCirSImvXhSCFBT1VVb2sPhuG
J6q9Nk18+i2sMtjflM9PzCblMe7C1gySiuH4q+hvB6IDnYirLLy0ctBvr3siY4hY
lTydy+4z7UuquLv02t5Zbw9jxqX1LEyiMvUppx5XgGyQ0cGQpkRHXRzQqI6bjUny
e8Ub2sfjidjqRWyycY4F7KGG/DeKE3UeclDjFlA+CTvgu88RGgzTMZym5NxgjgfJ
PYj+etPXth3PNzxd8FAC4tWP5b6kEVVJ2Oxiy6Z8dYQJVsAVP110bo/MFwARAQAB
tEBGZWRvcmEgSW5mcmFzdHJ1Y3R1cmUgKGluZnJhc3RydWN0dXJlKSA8YWRtaW5A
ZmVkb3JhcHJvamVjdC5vcmc+iQI4BBMBAgAiBQJX2a87AhsPBgsJCAcDAgYVCAIJ
CgsEFgIDAQIeAQIXgAAKCRCAWYFeR92O+RbAD/9QzUyyoDPvPjlxn341BdT1iG3s
BvKjNOAtQkHeDzRQ0rBXG40yoTjQ+s4X+3aNumy4C+xeGqUiFMcBED/5EdahWcXm
5dqEAysTpiWOaamVfvQaNuBZjKP6GXXUeAVvkEVXggTI18tpNR/xFqfvHMCYuRUJ
QERNDtEPweQn9U3ewr7VOIrF8OnxVEQe9xOPKnGr0yD22NHz5hCiIKXwt34I7m9j
IlKMETTUflmERzzzwWp9CwmwU2o+g9hILqtvLFV/9TDSiWTvr2Ynj/hlNZPG8MhB
K73S8oQADP/ogmwYkK3cx06CkaSEiQciAkpL4v7GzWfw3hTScIxbf/R5YU5i5qHj
N+XJRLoW4AdNRAtrJ1KsLrFhFso9o7cfUlGGDPOwwQu3etoY3t0vViXYanOJrXqA
DaHZ7Ynj7V5KNB97xbjohT+YiApBV1jmMbydAMhNxo2ZlAC9hmlDEwD9L9CSPt1s
PvjcY20/RjVrm62vmXI/Sqa1zPjjYaxceEZzDIcxVDAneeeAdV99zHRDjZLqucux
GGJWwUNyxnuA7ZNdD3ZQBJlefOCT4Tg2Yj2ssH6PdGBoWS2gibnGdUsc/LhIaES4
afRLHVbHRu1HJ3s7pAgxNRY5Cjc5GEqdvm+5LOt/usyyaUwds0cJp55KKovsqZ1v
+h4JFKdsC+6/ZUHRQQ==
=MNfm
-----END PGP PUBLIC KEY BLOCK-----


@@ -1,6 +0,0 @@
[infrastructure-tags-stg]
name=Fedora Infrastructure staging tag $releasever - $basearch
baseurl=https://kojipkgs.fedoraproject.org/repos-dist/f$releasever-infra-stg/latest/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://infrastructure.fedoraproject.org/repo/infra/RPM-GPG-KEY-INFRA-TAGS


@@ -1,6 +0,0 @@
[infrastructure-tags]
name=Fedora Infrastructure tag $releasever - $basearch
baseurl=https://kojipkgs.fedoraproject.org/repos-dist/f$releasever-infra/latest/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://infrastructure.fedoraproject.org/repo/infra/RPM-GPG-KEY-INFRA-TAGS


@@ -11,7 +11,7 @@ gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
[fedora-debuginfo]
name=Fedora $releasever - $basearch - Debug
failovermethod=priority
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/releases/$releasever/Everything/$basearch/debug/tree/
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/releases/$releasever/Everything/$basearch/debug/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-debug-$releasever&arch=$basearch
enabled=0
metadata_expire=7d
@@ -21,7 +21,7 @@ gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
[fedora-source]
name=Fedora $releasever - Source
failovermethod=priority
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/releases/$releasever/Everything/source/tree/
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/releases/$releasever/Everything/source/SRPMS/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-source-$releasever&arch=$basearch
enabled=0
metadata_expire=7d

View File

@@ -11,7 +11,7 @@ gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
[fedora-debuginfo]
name=Fedora $releasever - $basearch - Debug
failovermethod=priority
baseurl=http://infrastructure.fedoraproject.org/pub/fedora-secondary/releases/$releasever/Everything/$basearch/debug/tree/
baseurl=http://infrastructure.fedoraproject.org/pub/fedora-secondary/releases/$releasever/Everything/$basearch/debug/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-debug-$releasever&arch=$basearch
enabled=0
metadata_expire=7d
@@ -21,7 +21,7 @@ gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
[fedora-source]
name=Fedora $releasever - Source
failovermethod=priority
baseurl=http://infrastructure.fedoraproject.org/pub/fedora-secondary/releases/$releasever/Everything/source/tree/
baseurl=http://infrastructure.fedoraproject.org/pub/fedora-secondary/releases/$releasever/Everything/source/SRPMS/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-source-$releasever&arch=$basearch
enabled=0
metadata_expire=7d


@@ -1,4 +0,0 @@
[rhel7-aarch64-server]
name = rhel7 $basearch server
baseurl=http://infrastructure.fedoraproject.org/repo/rhel/rhel7/$basearch/rhel-7-server-rpms
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release


@@ -1,6 +0,0 @@
[infrastructure-tags-stg]
name=Fedora Infrastructure tag $releasever - $basearch
baseurl=https://kojipkgs.fedoraproject.org/repos-dist/epel$releasever-infra-stg/latest/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://infrastructure.fedoraproject.org/repo/infra/RPM-GPG-KEY-INFRA-TAGS


@@ -1,6 +0,0 @@
[infrastructure-tags]
name=Fedora Infrastructure tag $releasever - $basearch
baseurl=https://kojipkgs.fedoraproject.org/repos-dist/epel$releasever-infra/latest/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://infrastructure.fedoraproject.org/repo/infra/RPM-GPG-KEY-INFRA-TAGS


@@ -1,4 +0,0 @@
[rhel7-rhev]
name = rhel7 rhev $basearch
baseurl=http://infrastructure.fedoraproject.org/repo/rhel/rhel7/$basearch/rhel-7-for-rhev-power-agents-rpms
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release


@@ -1,4 +0,0 @@
[rhel7-atomic-host]
name = rhel7 Atomic Host $basearch
baseurl=http://infrastructure.fedoraproject.org/repo/rhel/rhel7/$basearch/rhel-7-server-atomic-host-rpms
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release


@@ -1,4 +1,3 @@
# run twice daily rsync of download. but lock it
MAILTO=smooge@gmail.com,root@fedoraproject.org
MAILTO=smooge@gmail.com
00 11,23 * * * root /usr/local/bin/lock-wrapper sync-up-downloads "/usr/local/bin/sync-up-downloads"


@@ -1,2 +0,0 @@
# Run quick mirror fedora every 10minutes
*/10 * * * * root flock -n -E0 /tmp/download-sync -c '/root/quick-fedora-mirror/quick-fedora-mirror -c /root/quick-fedora-mirror/quick-fedora-mirror.conf'


@@ -1,162 +0,0 @@
#!/usr/bin/python
# Copyright (C) 2014 by Adrian Reber
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import requests
import time
import sys
import getopt
fedora = 'org.fedoraproject.prod.bodhi.updates.fedora.sync'
epel = 'org.fedoraproject.prod.bodhi.updates.epel.sync'
branched = 'org.fedoraproject.prod.compose.branched.rsync.complete'
rawhide = 'org.fedoraproject.prod.compose.rawhide.rsync.complete'
base_url = 'https://apps.fedoraproject.org/datagrepper/raw'
topics = []
# default time interval to query for syncs: 1 day
delta = 86400
# return 0 and no output if a sync happened during <delta>
# if no sync happened 1 is returned
quiet = False
secondary = False
rawtime = False
def usage():
print
print "last-sync queries the Fedora Message Bus if new data is available on the public servers"
print
print "Usage: last-sync [options]"
print
print "Options:"
print " -a, --all query all possible releases (default)"
print " (fedora, epel, branched, rawhide)"
print " -f, --fedora only query if fedora has been updated during <delta>"
print " -e, --epel only query if epel has been updated"
print " -b, --branched only query if the branched off release"
print " has been updated"
print " -r, --rawhide only query if rawhide has been updated"
print " -q, --quiet do not print out any informations"
print " -t, --time print date in seconds since 1970-01-01"
print " -d DELTA, --delta=DELTA specify the time interval which should be used"
print " for the query (default: 86400)"
# -a -f -e -b -r -s -q -d
def parse_args():
global topics
global delta
global quiet
global secondary
global rawtime
try:
opts, args = getopt.getopt(sys.argv[1:], "afhebrsqtd:", ["all", "fedora", "epel", "rawhide", "branched", "secondary", "quiet", "time", "delta="])
except getopt.GetoptError as err:
print str(err)
usage()
sys.exit(2)
for option, argument in opts:
if option in ("-a", "--all"):
topics = [ fedora, epel, branched, rawhide ]
secondary = True
if option in ("-f", "--fedora"):
topics.append(fedora)
if option in ("-e", "--epel"):
topics.append(epel)
if option in ("-r", "--rawhide"):
topics.append(rawhide)
if option in ("-b", "--branched"):
topics.append(branched)
if option in ("-s", "--secondary"):
topics.append(rawhide)
secondary = True
if option in ("-q", "--quiet"):
quiet = True
if option in ("-t", "--time"):
rawtime = True
if option in ("-d", "--delta"):
delta = argument
if option in ("-h"):
usage();
sys.exit(0)
def getKey(item):
return item[1]
def create_url(url, topics, delta):
topic = ""
for i in topics:
topic += "&topic=%s" % i
return '%s?delta=%s%s' % (url, delta, topic)
parse_args()
if topics == []:
topics = [ fedora, epel, branched, rawhide ]
secondary = True
i = 0
data = None
while i < 5:
try:
data = requests.get(create_url(base_url, topics, delta), timeout=1).json()
break
except:
pass
if not data:
sys.exit(1)
repos = []
for i in range(0, data['count']):
try:
repo = "%s-%s" % (data['raw_messages'][i]['msg']['repo'], data['raw_messages'][i]['msg']['release'])
except:
# the rawhide and branch sync message has no repo information
arch = data['raw_messages'][i]['msg']['arch']
if arch == '':
arch = 'primary'
elif not secondary:
continue
repo = "%s-%s" % (data['raw_messages'][i]['msg']['branch'], arch)
repos.append([repo, data['raw_messages'][i]['timestamp']])
if quiet == False:
for repo, timestamp in sorted(repos, key=getKey):
if rawtime == True:
# this is useful if you want to compare the timestamp in seconds versus string
print "%s: %s" % (repo, timestamp)
else:
print "%s: %s" % (repo, time.strftime("%a, %d %b %Y %H:%M:%S +0000", time.gmtime(timestamp)))
if data['count'] > 0:
sys.exit(0)
else:
sys.exit(1)
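The datagrepper query URL the script above builds is simple enough to check in isolation; `create_url` and the topic names are copied from the script itself.

```python
base_url = 'https://apps.fedoraproject.org/datagrepper/raw'
fedora = 'org.fedoraproject.prod.bodhi.updates.fedora.sync'
epel = 'org.fedoraproject.prod.bodhi.updates.epel.sync'

def create_url(url, topics, delta):
    # Each requested topic becomes a repeated &topic= query parameter
    topic = "".join("&topic=%s" % t for t in topics)
    return '%s?delta=%s%s' % (url, delta, topic)

print(create_url(base_url, [fedora, epel], 86400))
```

The script retries this GET up to five times with a one-second timeout, then inspects `count` and `raw_messages` in the JSON reply.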


@@ -1,66 +0,0 @@
#!/bin/bash
##
## This script is used to sync data from main download servers to
## secondary server at ibiblio.
##
RSYNC='/usr/bin/rsync'
RS_OPT="-avSHP --numeric-ids "
RS_DEADLY="--delete --delete-excluded --delete-delay --delay-updates"
ALT_EXCLUDES=""
EPL_EXCLUDES=""
FED_EXCLUDES=""
DATE_EPEL='/root/last-epel-sync'
DATE_FED='/root/last-fed-sync'
DATE_ARCHIVE='/root/last-archive-sync'
DATE_ALT='/root/last-alt-sync'
DATE_SECOND='/root/last-second-sync'
for i in ${DATE_EPEL} ${DATE_FED} ${DATE_ARCHIVE} ${DATE_ALT} ${DATE_SECOND}; do
touch ${i}
done
LAST_SYNC='/usr/local/bin/last-sync'
SERVER=dl.fedoraproject.org
function sync_stuff() {
if [[ $# -ne 5 ]]; then
echo "Illegal number of arguments to sync_stuff: " $#
exit 1
fi
DATE_FILE=$1
LOGGER_NAME=$2
RSYNC_FROM=$3
RSYNC_TO=$4
FLAG="$5"
CURDATE=$( date +%s )
if [[ -s ${DATE_FILE} ]]; then
LASTRUN=$( cat ${DATE_FILE} | awk '{print int($NF)}' )
else
LASTRUN=$( date +%s --date="Jan 1 00:00:00 UTC 2007" )
fi
DELTA=`echo ${CURDATE}-${LASTRUN} | bc`
${LAST_SYNC} -d ${DELTA} -q ${FLAG}
if [ "$?" -eq "0" ]; then
${RSYNC} ${RS_OPT} ${RS_DEADLY} ${ALT_EXCLUDES} ${SERVER}::${RSYNC_FROM} ${RSYNC_TO} | tail -n2 | logger -p local0.notice -t ${LOGGER_NAME}
echo ${CURDATE} > ${DATE_FILE}
else
logger -p local0.notice -t ${LOGGER_NAME} "No change found. Not syncing"
fi
}
sync_stuff ${DATE_EPEL} rsync_epel fedora-epel0 /srv/pub/epel/ "-e"
sync_stuff ${DATE_FED} rsync_fedora fedora-enchilada0 /srv/pub/fedora/ "-f"
sync_stuff ${DATE_ARCHIVE} rsync_archive fedora-archive0 /srv/pub/archive/ "-f"
sync_stuff ${DATE_ALT} rsync_alt fedora-alt0 /srv/pub/alt/ "-f"
sync_stuff ${DATE_SECOND} rsync_second fedora-secondary0 /srv/pub/fedora-secondary/ "-f"
# Let MM know I'm all up to date
#/usr/bin/report_mirror


@@ -1,28 +0,0 @@
#!/bin/bash
##
## This script is used to sync data from main download servers to
## secondary server at ibiblio.
##
RSYNC='/usr/bin/rsync'
RS_OPT="-avSHP --numeric-ids"
RS_DEADLY="--delete --delete-excluded --delete-delay --delay-updates"
ALT_EXCLUDES=""
EPL_EXCLUDES=""
FED_EXCLUDES=""
LAST_SYNC='/usr/local/bin/last-sync'
SERVER=dl.fedoraproject.org
# Alt
${RSYNC} ${RS_OPT} ${RS_DEADLY} ${ALT_EXCLUDES} ${SERVER}::fedora-alt0/ /srv/pub/alt/ | tail -n2 | logger -p local0.notice -t rsync_alt
# Secondary
${RSYNC} ${RS_OPT} ${RS_DEADLY} ${ALT_EXCLUDES} ${SERVER}::fedora-secondary/ /srv/pub/fedora-secondary/ | tail -n2 | logger -p local0.notice -t rsync_2nd
# Archives
${RSYNC} ${RS_OPT} ${RS_DEADLY} ${ALT_EXCLUDES} ${SERVER}::fedora-archive/ /srv/pub/archive/ | tail -n2 | logger -p local0.notice -t rsync_archive
# Let MM know I'm all up to date
#/usr/bin/report_mirror

files/gnome/backup.sh (new file, +42 lines)

@@ -0,0 +1,42 @@
#!/bin/bash
# backup.sh will run FROM backup03 TO the various GNOME boxes on the set. (there's two set
# of machines, one being the ones with a public IP and the others being the IP-less ones that
# will forward their agent through bastion.gnome.org)
export PATH=$PATH:/bin:/usr/bin:/usr/local/bin
MACHINES='signal.gnome.org
webapps2.gnome.org
clutter.gnome.org
blogs.gnome.org
chooser.gnome.org
git.gnome.org
webapps.gnome.org
progress.gnome.org
clipboard.gnome.org
cloud-ssh.gnome.org
bastion.gnome.org
spinner.gnome.org
master.gnome.org
combobox.gnome.org
restaurant.gnome.org
expander.gnome.org
live.gnome.org
extensions.gnome.org
view.gnome.org
puppet.gnome.org
accelerator.gnome.org
range.gnome.org
pentagon.gimp.org
account.gnome.org
bugzilla-new.gnome.org
socket.gnome.org'
BACKUP_DIR='/fedora_backups/gnome/'
LOGS_DIR='/fedora_backups/gnome/logs'
for MACHINE in $MACHINES; do
rsync -avz -e 'ssh -F /usr/local/etc/gnome_ssh_config' --bwlimit=2000 $MACHINE:/etc/rsyncd/backup.exclude $BACKUP_DIR/excludes/$MACHINE.exclude
rdiff-backup --remote-schema 'ssh -F /usr/local/etc/gnome_ssh_config %s rdiff-backup --server' --print-statistics --exclude-device-files --exclude /selinux --exclude /sys --exclude /proc --exclude-globbing-filelist $BACKUP_DIR/excludes/$MACHINE.exclude $MACHINE::/ $BACKUP_DIR/$MACHINE/ | mail -s "Daily backup: $MACHINE" backups@gnome.org
rdiff-backup --remove-older-than 6M --force $BACKUP_DIR/$MACHINE/
done

8
files/gnome/ssh_config Normal file
View File

@@ -0,0 +1,8 @@
Host live.gnome.org extensions.gnome.org puppet.gnome.org view.gnome.org drawable.gnome.org
User root
IdentityFile /usr/local/etc/gnome_backup_id.rsa
ProxyCommand ssh -W %h:%p bastion.gnome.org -F /usr/local/etc/gnome_ssh_config
Host *.gnome.org pentagon.gimp.org
User root
IdentityFile /usr/local/etc/gnome_backup_id.rsa

View File

@@ -1,135 +0,0 @@
# -*- coding: utf-8 -*-
from datetime import datetime
import requests
import fedmsg.consumers
import fedfind.release
from sqlalchemy import exc
import autocloud
from autocloud.models import init_model, ComposeDetails, ComposeJobDetails
from autocloud.producer import publish_to_fedmsg
from autocloud.utils import is_valid_image, produce_jobs
import logging
log = logging.getLogger("fedmsg")
DEBUG = autocloud.DEBUG
class AutoCloudConsumer(fedmsg.consumers.FedmsgConsumer):
    """
    Fedmsg consumer for Autocloud
    """

    if DEBUG:
        topic = [
            'org.fedoraproject.dev.__main__.pungi.compose.status.change'
        ]
    else:
        topic = [
            'org.fedoraproject.prod.pungi.compose.status.change'
        ]

    config_key = 'autocloud.consumer.enabled'

    def __init__(self, *args, **kwargs):
        self.supported_archs = [arch for arch, _ in ComposeJobDetails.ARCH_TYPES]
        log.info("Autocloud Consumer is ready for action.")
        super(AutoCloudConsumer, self).__init__(*args, **kwargs)

    def consume(self, msg):
        """ This is called when we receive a message matching the topic. """
        log.info('Received %r %r' % (msg['topic'], msg['body']['msg_id']))

        STATUS_F = ('FINISHED_INCOMPLETE', 'FINISHED',)
        VARIANTS_F = ('CloudImages',)

        images = []
        compose_db_update = False
        msg_body = msg['body']
        status = msg_body['msg']['status']
        compose_images_json = None

        if status in STATUS_F:
            location = msg_body['msg']['location']
            json_metadata = '{}/metadata/images.json'.format(location)
            resp = requests.get(json_metadata)
            compose_images_json = getattr(resp, 'json', None)
            if compose_images_json is not None:
                compose_images_json = compose_images_json()
                compose_images = compose_images_json['payload']['images']
                compose_details = compose_images_json['payload']['compose']
                compose_images = dict((variant, compose_images[variant])
                                      for variant in VARIANTS_F
                                      if variant in compose_images)
                compose_id = compose_details['id']
                rel = fedfind.release.get_release(cid=compose_id)
                release = rel.release
                compose_details.update({'release': release})

                compose_images_variants = [variant for variant in VARIANTS_F
                                           if variant in compose_images]

                for variant in compose_images_variants:
                    compose_image = compose_images[variant]
                    for arch, payload in compose_image.iteritems():
                        if arch not in self.supported_archs:
                            continue
                        for item in payload:
                            relative_path = item['path']
                            if not is_valid_image(relative_path):
                                continue
                            absolute_path = '{}/{}'.format(location, relative_path)
                            item.update({
                                'compose': compose_details,
                                'absolute_path': absolute_path,
                            })
                            images.append(item)
                            compose_db_update = True

        if compose_db_update:
            session = init_model()
            compose_date = datetime.strptime(compose_details['date'], '%Y%m%d')
            try:
                cd = ComposeDetails(
                    date=compose_date,
                    compose_id=compose_details['id'],
                    respin=compose_details['respin'],
                    type=compose_details['type'],
                    status=u'q',
                    location=location,
                )
                session.add(cd)
                session.commit()

                compose_details.update({
                    'status': 'queued',
                    'compose_job_id': cd.id,
                })
                publish_to_fedmsg(topic='compose.queued',
                                  **compose_details)
            except exc.IntegrityError:
                session.rollback()
                cd = session.query(ComposeDetails).filter_by(
                    compose_id=compose_details['id']).first()
                log.info('Compose already exists %s: %s' % (
                    compose_details['id'],
                    cd.id
                ))
            session.close()

        num_images = len(images)
        for pos, image in enumerate(images):
            image.update({'pos': (pos + 1, num_images)})

        produce_jobs(images)
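The consumer's final loop stamps each queued image with its 1-based position in the batch before handing the list to `produce_jobs`. A standalone sketch of that bookkeeping:

```python
def tag_positions(images):
    """Annotate each image dict with 'pos': (index, total), 1-based,
    matching the consumer's final loop before produce_jobs()."""
    num_images = len(images)
    for pos, image in enumerate(images):
        image.update({'pos': (pos + 1, num_images)})
    return images

imgs = tag_positions([{'path': 'a.qcow2'}, {'path': 'b.qcow2'}])
assert imgs[0]['pos'] == (1, 2)
assert imgs[1]['pos'] == (2, 2)
```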

File diff suppressed because it is too large Load Diff

View File

@@ -1,243 +0,0 @@
RewriteEngine On
RewriteRule ^/fedora-commops/ticket/(.*) https://pagure.io/fedora-commops/issue/$1 [R=301]
RewriteRule ^/fedora-commops/report https://pagure.io/fedora-commops/issues [R=301]
RewriteRule ^/fedora-commops https://pagure.io/fedora-commops/ [R=301]
RewriteRule ^/marketing-team/report https://pagure.io/fedora-marketing/issues [R=301]
RewriteRule ^/marketing-team/ticket/(.*) https://pagure.io/fedora-marketing/issue/$1 [R=301]
RewriteRule ^/marketing-team https://pagure.io/fedora-marketing [R=301]
RewriteRule ^/fedora-project-schedule/report https://pagure.io/fedora-project-schedule/issues [R=301]
RewriteRule ^/fedora-project-schedule https://pagure.io/fedora-project-schedule [R=301]
RewriteRule ^/firewalld https://github.com/t-woerner/firewalld [R=301]
RewriteRule ^/fpaste https://pagure.io/fpaste [R=301]
RewriteRule ^/rpmfluff https://pagure.io/rpmfluff [R=301]
RewriteRule ^/weatheralert https://pagure.io/weatheralert [R=301]
RewriteRule ^/create-tx-configuration https://pagure.io/create-tx-configuration [R=301]
RewriteRule ^/fedora-magazine https://pagure.io/fedoramagazine-images [R=301]
RewriteRule ^/newt https://pagure.io/newt [R=301]
RewriteRule ^/releases/n/e/newt/(.*) https://pagure.io/releases/newt/$1 [R=301]
RewriteRule ^/releases/n/e/newt https://pagure.io/releases/newt [R=301]
RewriteRule ^/cloud/report https://pagure.io/atomic-wg/issues [R=301]
RewriteRule ^/cloud/ticket/(.*) https://pagure.io/atomic-wg/issue/$1 [R=301]
RewriteRule ^/cloud https://pagure.io/atomic-wg [R=301]
RewriteRule ^/imapsync https://pagure.io/imapsync [R=301]
RewriteRule ^/releases/i/m/imapsync/(.*) https://pagure.io/releases/imapsync/$1 [R=301]
RewriteRule ^/releases/i/m/imapsync https://pagure.io/releases/imapsync [R=301]
RewriteRule ^/released/i/m/imapsync/(.*) https://pagure.io/releases/imapsync/$1 [R=301]
RewriteRule ^/released/i/m/imapsync https://pagure.io/releases/imapsync [R=301]
RewriteRule ^/released/imapsync https://pagure.io/releases/imapsync [R=301]
RewriteRule ^/fedora-infrastructure/report https://pagure.io/fedora-infrastructure/issues [R=301]
RewriteRule ^/fedora-infrastructure/ticket/(.*) https://pagure.io/fedora-infrastructure/issue/$1 [R=301]
RewriteRule ^/fedora-infrastructure https://pagure.io/fedora-infrastructure [R=301]
RewriteRule ^/fesco/report https://pagure.io/fesco/issues [R=301]
RewriteRule ^/fesco/ticket/(.*) https://pagure.io/fesco/issue/$1 [R=301]
RewriteRule ^/fesco https://pagure.io/fesco [R=301]
RewriteRule ^/fedora-packager/report https://pagure.io/fedora-packager/issues [R=301]
RewriteRule ^/fedora-packager/ticket/(.*) https://pagure.io/fedora-packager/issue/$1 [R=301]
RewriteRule ^/fedora-packager https://pagure.io/fedora-packager [R=301]
RewriteRule ^/rpkg/report https://pagure.io/rpkg/issues [R=301]
RewriteRule ^/rpkg/ticket/(.*) https://pagure.io/rpkg/issue/$1 [R=301]
RewriteRule ^/rpkg https://pagure.io/rpkg [R=301]
RewriteRule ^/fedpkg/report https://pagure.io/fedpkg/issues [R=301]
RewriteRule ^/fedpkg/ticket/(.*) https://pagure.io/fedpkg/issue/$1 [R=301]
RewriteRule ^/fedpkg https://pagure.io/fedpkg [R=301]
RewriteRule ^/ELAPI/report https://pagure.io/ELAPI/issues [R=301]
RewriteRule ^/ELAPI/ticket/(.*) https://pagure.io/ELAPI/issue/$1 [R=301]
RewriteRule ^/ELAPI https://pagure.io/ELAPI [R=301]
RewriteRule ^/irc-support-sig/report https://pagure.io/irc-support-sig/issues [R=301]
RewriteRule ^/irc-support-sig/ticket/(.*) https://pagure.io/irc-support-sig/issue/$1 [R=301]
RewriteRule ^/irc-support-sig https://pagure.io/irc-support-sig [R=301]
RewriteRule ^/packager-sponsors/report https://pagure.io/packager-sponsors/issues [R=301]
RewriteRule ^/packager-sponsors/ticket/(.*) https://pagure.io/packager-sponsors/issue/$1 [R=301]
RewriteRule ^/packager-sponsors https://pagure.io/packager-sponsors [R=301]
RewriteRule ^/epel/report https://pagure.io/epel/issues [R=301]
RewriteRule ^/epel/ticket/(.*) https://pagure.io/epel/issue/$1 [R=301]
RewriteRule ^/epel https://pagure.io/epel [R=301]
RewriteRule ^/elfutils/wiki/(.*) https://sourceware.org/elfutils/$1 [R=301]
RewriteRule ^/elfutils https://sourceware.org/elfutils [R=301]
RewriteRule ^/releases/e/l/elfutils/(.*) https://sourceware.org/elfutils/ftp/$1 [R=301]
RewriteRule ^/rpmdevtools/report https://pagure.io/rpmdevtools/issues [R=301]
RewriteRule ^/rpmdevtools/ticket/(.*) https://pagure.io/rpmdevtools/issue/$1 [R=301]
RewriteRule ^/rpmdevtools https://pagure.io/rpmdevtools [R=301]
RewriteRule ^/fudcon-planning/report https://pagure.io/fudcon-planning/issues [R=301]
RewriteRule ^/fudcon-planning/ticket/(.*) https://pagure.io/fudcon-planning/issue/$1 [R=301]
RewriteRule ^/fudcon-planning https://pagure.io/fudcon-planning [R=301]
RewriteRule ^/gfs2-utils/report https://pagure.io/gfs2-utils/issues [R=301]
RewriteRule ^/gfs2-utils/ticket/(.*) https://pagure.io/gfs2-utils/issue/$1 [R=301]
RewriteRule ^/gfs2-utils https://pagure.io/gfs2-utils [R=301]
RewriteRule ^/elections/report https://pagure.io/elections/issues [R=301]
RewriteRule ^/elections/ticket/(.*) https://pagure.io/elections/issue/$1 [R=301]
RewriteRule ^/elections https://pagure.io/elections [R=301]
RewriteRule ^/fedocal/report https://pagure.io/fedocal/issues [R=301]
RewriteRule ^/fedocal/ticket/(.*) https://pagure.io/fedocal/issue/$1 [R=301]
RewriteRule ^/fedocal https://pagure.io/fedocal [R=301]
RewriteRule ^/FedoraReview/report https://pagure.io/FedoraReview/issues [R=301]
RewriteRule ^/FedoraReview/ticket/(.*) https://pagure.io/FedoraReview/issue/$1 [R=301]
RewriteRule ^/FedoraReview https://pagure.io/FedoraReview [R=301]
RewriteRule ^/packagedb-cli/report https://pagure.io/pkgdb-cli/issues [R=301]
RewriteRule ^/packagedb-cli/ticket/(.*) https://pagure.io/pkgdb-cli/issue/$1 [R=301]
RewriteRule ^/packagedb-cli https://pagure.io/pkgdb-cli [R=301]
RewriteRule ^/r2spec/report https://pagure.io/r2spec/issues [R=301]
RewriteRule ^/r2spec/ticket/(.*) https://pagure.io/r2spec/issue/$1 [R=301]
RewriteRule ^/r2spec https://pagure.io/r2spec [R=301]
RewriteRule ^/pkgdb2/report https://pagure.io/pkgdb2/issues [R=301]
RewriteRule ^/pkgdb2/ticket/(.*) https://pagure.io/pkgdb2/issue/$1 [R=301]
RewriteRule ^/pkgdb2 https://pagure.io/pkgdb2/ [R=301]
RewriteRule ^/tgcapcha22/report https://pagure.io/tgcapcha22/issues [R=301]
RewriteRule ^/tgcapcha22/ticket/(.*) https://pagure.io/tgcapcha22/issue/$1 [R=301]
RewriteRule ^/tgcapcha22 https://pagure.io/tgcapcha22 [R=301]
RewriteRule ^/fedora-gather-easyfix/report https://pagure.io/fedora-gather-easyfix/issues [R=301]
RewriteRule ^/fedora-gather-easyfix/ticket/(.*) https://pagure.io/fedora-gather-easyfix/issue/$1 [R=301]
RewriteRule ^/fedora-gather-easyfix https://pagure.io/fedora-gather-easyfix [R=301]
RewriteRule ^/389/report https://pagure.io/389-ds-base/issues [R=301]
RewriteRule ^/389/ticket/(.*) https://pagure.io/389-ds-base/issue/$1 [R=301]
RewriteRule ^/389 https://pagure.io/389-ds-base [R=301]
RewriteRule ^/ipsilon/report https://pagure.io/ipsilon/issues [R=301]
RewriteRule ^/ipsilon/ticket/(.*) https://pagure.io/ipsilon/issue/$1 [R=301]
RewriteRule ^/released/ipsilon/(.*) http://releases.pagure.org/ipsilon/$1 [R=301]
RewriteRule ^/released/ipsilon http://releases.pagure.org/ipsilon/ [R=301]
RedirectMatch ^/ipsilon https://pagure.io/ipsilon
RedirectMatch ^/ipsilon/ https://pagure.io/ipsilon/
RewriteRule ^/mod_nss/report https://pagure.io/mod_nss/issues [R=301]
RewriteRule ^/mod_nss/ticket/(.*) https://pagure.io/mod_nss/issue/$1 [R=301]
RewriteRule ^/mod_nss https://pagure.io/mod_nss [R=301]
RewriteRule ^/mod_revocator/report https://pagure.io/mod_revocator/issues [R=301]
RewriteRule ^/mod_revocator/ticket/(.*) https://pagure.io/mod_revocator/issue/$1 [R=301]
RewriteRule ^/mod_revocator https://pagure.io/mod_revocator [R=301]
RewriteRule ^/fpc/report https://pagure.io/packaging-committee/issues [R=301]
RewriteRule ^/fpc/ticket/(.*) https://pagure.io/packaging-committee/issue/$1 [R=301]
RewriteRule ^/fpc https://pagure.io/packaging-committee [R=301]
RewriteRule ^/certmonger/report https://pagure.io/certmonger/issues [R=301]
RewriteRule ^/certmonger/ticket/(.*) https://pagure.io/certmonger/issue/$1 [R=301]
RewriteRule ^/certmonger https://pagure.io/certmonger [R=301]
RewriteRule ^/publican https://sourceware.org/publican [R=301]
RewriteRule ^/fedora-apac/report https://pagure.io/ambassadors-apac/issues [R=301]
RewriteRule ^/fedora-apac/ticket/(.*) https://pagure.io/ambassadors-apac/issue/$1 [R=301]
RewriteRule ^/fedora-apac https://pagure.io/ambassadors-apac [R=301]
RewriteRule ^/sssd/report https://pagure.io/SSSD/sssd/issues [L,R]
RewriteRule ^/sssd/ticket/(.*) https://pagure.io/SSSD/sssd/issue/$1 [L,R]
RewriteRule ^/releases/s/s/sssd/(.*) https://releases.pagure.org/SSSD/sssd/$1 [L,R]
RewriteRule ^/releases/s/s/sssd https://releases.pagure.org/SSSD/sssd/ [L,R]
RewriteRule ^/released/sssd/(.*) https://releases.pagure.org/SSSD/sssd/$1 [L,R]
RewriteRule ^/released/sssd https://releases.pagure.org/SSSD/sssd/ [L,R]
#RewriteRule ^/sssd https://pagure.io/SSSD/sssd [L,R]
#RewriteRule ^/sssd/ https://pagure.io/SSSD/sssd/ [L,R]
RewriteRule ^/freeipa/changeset/(.*) https://pagure.io/freeipa/c/$1 [L,R]
RewriteRule ^/freeipa/report https://pagure.io/freeipa/issues [L,R]
RewriteRule ^/freeipa/ticket/(.*) https://pagure.io/freeipa/issue/$1 [L,R]
RewriteRule ^/freeipa https://pagure.io/freeipa [L,R]
RewriteRule ^/freeipa/(.*) https://pagure.io/freeipa [L,R]
RewriteRule ^/rel-eng/report https://pagure.io/releng/issues [R=301]
RewriteRule ^/rel-eng/ticket/(.*) https://pagure.io/releng/issue/$1 [R=301]
RewriteRule ^/rel-eng https://pagure.io/releng [R=301]
RewriteRule ^/fedora-badges/report https://pagure.io/Fedora-Badges/issues [R=301]
RewriteRule ^/fedora-badges/ticket/(.*) https://pagure.io/Fedora-Badges/issue/$1 [R=301]
RewriteRule ^/fedora-badges https://pagure.io/Fedora-Badges [R=301]
RewriteRule ^/bind-dyndb-ldap/wiki https://docs.pagure.org/bind-dyndb-ldap/ [R=301]
RewriteRule ^/bind-dyndb-ldap/wiki/ https://docs.pagure.org/bind-dyndb-ldap/ [R=301]
RewriteRule ^/bind-dyndb-ldap/wiki/(.*) https://docs.pagure.org/bind-dyndb-ldap/$1.html [R=301]
RewriteRule ^/bind-dyndb-ldap/wiki/(.*)/ https://docs.pagure.org/bind-dyndb-ldap/$1.html [R=301]
RewriteRule ^/bind-dyndb-ldap/report https://pagure.io/bind-dyndb-ldap/issues [R=301]
RewriteRule ^/bind-dyndb-ldap/ticket/(.*) https://pagure.io/bind-dyndb-ldap/issue/$1 [R=301]
RewriteRule ^/bind-dyndb-ldap/changeset/(.*) https://pagure.io/bind-dyndb-ldap/c/$1 [R=301]
RewriteRule ^/bind-dyndb-ldap https://pagure.io/bind-dyndb-ldap [R=301]
RewriteRule ^/released/bind-dyndb-ldap/(.*) https://releases.pagure.org/bind-dyndb-ldap/$1 [R=301]
RewriteRule ^/released/bind-dyndb-ldap https://releases.pagure.org/bind-dyndb-ldap [R=301]
RewriteRule ^/released/ding-libs/(.*) https://releases.pagure.org/SSSD/ding-libs/$1 [R=301]
RewriteRule ^/released/ding-libs https://releases.pagure.org/SSSD/ding-libs/ [R=301]
RewriteRule ^/webauthinfra/wiki/mod_lookup_identity https://www.adelton.com/apache/mod_lookup_identity/ [R]
RewriteRule ^/webauthinfra/wiki/mod_intercept_form_submit https://www.adelton.com/apache/mod_intercept_form_submit/ [R]
RewriteRule ^/webauthinfra/wiki/mod_authnz_pam https://www.adelton.com/apache/mod_authnz_pam/ [R]
RewriteRule ^/webauthinfra https://pagure.io/webauthinfra [R]
RewriteRule ^/spacewalk/wiki/(.*) https://github.com/spacewalkproject/spacewalk/wiki/$1 [R]
RewriteRule ^/spacewalk/wiki https://github.com/spacewalkproject/spacewalk/wiki [R]
RewriteRule ^/spacewalk https://github.com/spacewalkproject/spacewalk [R]
RewriteRule ^/famnarequests/report https://pagure.io/ambassadors-na/requests/issues [R=301]
RewriteRule ^/famnarequests/ticket/(.*) https://pagure.io/ambassadors-na/requests/issue/$1 [R=301]
RewriteRule ^/famnarequests https://pagure.io/ambassadors-na/requests [R=301]
RewriteCond %{HTTP_HOST} ^fedorahosted.org [NC,OR]
RewriteCond %{REQUEST_URI} ^/released/javapackages/doc/(.*)
RewriteRule ^/released/javapackages/doc/(.*)$ https://fedora-java.github.io/howto/latest/$1 [L,R=301,NC]
RewriteRule ^/liberation-fonts/report https://pagure.io/liberation-fonts/issues [L,R]
RewriteRule ^/liberation-fonts/ticket/(.*) https://pagure.io/liberation-fonts/issue/$1 [L,R]
RewriteRule ^/liberation-fonts/l/i/liberation-fonts/(.*) https://releases.pagure.org/liberation-fonts/$1 [L,R]
RewriteRule ^/liberation-fonts/l/i/liberation-fonts https://releases.pagure.org/liberation-fonts/ [L,R]
RewriteRule ^/liberation-fonts https://pagure.io/liberation-fonts [R=301]
RewriteRule ^/lohit/report https://pagure.io/lohit/issues [L,R]
RewriteRule ^/lohit/ticket/(.*) https://pagure.io/lohit/issue/$1 [L,R]
RewriteRule ^/lohit/l/o/lohit/(.*) https://releases.pagure.org/lohit/$1 [L,R]
RewriteRule ^/lohit/l/o/lohit https://releases.pagure.org/lohit/ [L,R]
RewriteRule ^/lohit/ https://pagure.io/lohit [R=301]
RewriteRule ^/aplaws/ https://aplaws.org/ [R=301]
RewriteRule ^/aplaws https://aplaws.org [R=301]
RewriteRule ^/fedora-medical/report https://pagure.io/fedora-medical/issues [L,R]
RewriteRule ^/fedora-medical/ticket/(.*) https://pagure.io/fedora-medical/issue/$1 [L,R]
RewriteRule ^/fedora-medical/ https://pagure.io/fedora-medical [R=301]
RewriteRule ^/libverto/ https://github.com/latchset/libverto/ [R=301]
RewriteRule ^/libverto https://github.com/latchset/libverto [R=301]
RewriteRule ^/pki/report https://pagure.io/dogtagpki/issues [L,R]
RewriteRule ^/pki/ticket/(.*) https://pagure.io/dogtagpki/issue/$1 [L,R]
RewriteRule ^/pki/p/k/pki/(.*) https://releases.pagure.org/dogtagpki/$1 [L,R]
RewriteRule ^/pki/p/k/pki https://releases.pagure.org/dogtagpki/ [L,R]
RewriteRule ^/pki https://pagure.io/dogtagpki [R=301]
# Ipsilon wiki is now moving content
RewriteCond %{REQUEST_URI} !^/ipsilon/.*
RewriteRule ^/.* https://fedoraproject.org/wiki/Infrastructure/Fedorahosted-retirement
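Each project's ticket rule above maps an old Trac path to the matching Pagure issue URL by capturing the ticket number. A small sketch of that mapping with Python's `re`, using patterns copied from the `fesco` rules (simplified: first matching rule wins here, whereas mod_rewrite keeps processing unless `[L]` is given):

```python
import re

# (pattern, target) pairs in rule order, like the RewriteRule lines above;
# \1 plays the role of mod_rewrite's $1 backreference
RULES = [
    (r'^/fesco/report', 'https://pagure.io/fesco/issues'),
    (r'^/fesco/ticket/(.*)', r'https://pagure.io/fesco/issue/\1'),
    (r'^/fesco', 'https://pagure.io/fesco'),
]

def rewrite(path):
    for pattern, target in RULES:
        m = re.match(pattern, path)
        if m:
            return m.expand(target)
    return None  # no rule matched

assert rewrite('/fesco/ticket/123') == 'https://pagure.io/fesco/issue/123'
assert rewrite('/fesco/report') == 'https://pagure.io/fesco/issues'
assert rewrite('/unrelated') is None
```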

View File

@@ -1,2 +0,0 @@
RewriteEngine on
RewriteRule ^/\.well-known/(.*) /srv/web/acme-challenge/.well-known/$1 [L]

View File

@@ -1,49 +0,0 @@
RewriteEngine on
RewriteRule ^/git/ipsilon.git(.*)$ https://pagure.io/ipsilon.git$1 [L,R]
RewriteRule ^/git/rpkg.git(.*)$ https://pagure.io/rpkg.git$1 [L,R]
RewriteRule ^/git/weatheralert.git(.*)$ https://pagure.io/weatheralert.git$1 [L,R]
RewriteRule ^/git/create-tx-configuration.git(.*)$ https://pagure.io/create-tx-configuration.git$1 [L,R]
RewriteRule ^/git/kernel-tests.git(.*)$ https://pagure.io/kernel-tests.git$1 [L,R]
RewriteRule ^/git/elfutils.git$ https://sourceware.org/git/?p=elfutils.git;a=summary [L,R]
RewriteRule ^/c*git/389/ds.git(.*)$ https://pagure.io/389-ds-base [L,R]
RewriteRule ^/c*git/389/lib389.git(.*)$ https://pagure.io/lib389 [L,R]
RewriteRule ^/c*git/389/console.git(.*)$ https://pagure.io/389-console [L,R]
RewriteRule ^/c*git/389/ds-console.git(.*)$ https://pagure.io/389-ds-console [L,R]
RewriteRule ^/c*git/389/dsgw.git(.*)$ https://pagure.io/389-dsgw [L,R]
RewriteRule ^/c*git/389/admin.git(.*)$ https://pagure.io/389-admin [L,R]
RewriteRule ^/c*git/389/adminutil.git(.*)$ https://pagure.io/389-adminutil [L,R]
RewriteRule ^/c*git/389/admin-console.git(.*)$ https://pagure.io/389-admin-console [L,R]
RewriteRule ^/c*git/idm-console-framework.git(.*)$ https://pagure.io/idm-console-framework [L,R]
RewriteRule ^/c*git/gss-ntlmssp.git(.*)$ https://pagure.io/gssntlmssp [L,R]
RewriteRule ^/c*git/mod_nss.git(.*)$ https://pagure.io/mod_nss [L,R]
RewriteRule ^/c*git/freeipa.git(.*)$ https://pagure.io/freeipa [L,R]
RewriteRule ^/c*git/certmonger.git(.*)$ https://pagure.io/certmonger [L,R]
RewriteCond %{REQUEST_URI} /cgit/sanlock\.git/commit/
RewriteCond %{QUERY_STRING} id=(.+)$
RewriteRule ^/.*$ https://pagure.io/sanlock/c/%1 [R,L,NE]
RewriteRule ^/git/sanlock.git$ https://pagure.io/sanlock.git [L,R]
RewriteCond %{REQUEST_URI} /cgit/dlm\.git/commit/
RewriteCond %{QUERY_STRING} id=(.+)$
RewriteRule ^/.*$ https://pagure.io/dlm/c/%1 [R,L,NE]
RewriteRule ^/git/dlm.git(.*)$ https://pagure.io/dlm.git$1 [L,R]
RewriteCond %{REQUEST_URI} /cgit/lvm2\.git/commit/
RewriteCond %{QUERY_STRING} id=(.+)$
RewriteRule ^/.*$ https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=%1 [R,L,NE]
RewriteCond %{REQUEST_URI} /cgit/lvm2\.git/patch/
RewriteCond %{QUERY_STRING} id=(.+)$
RewriteRule ^/.*$ https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=%1 [R,L,NE]
RewriteCond %{REQUEST_URI} /cgit/lvm2\.git(.*)$
RewriteRule ^/.*$ https://sourceware.org/git/?p=lvm2.git [R,L,NE]
RewriteRule ^/git/lvm2.git https://sourceware.org/git/?p=lvm2.git [L,R]
# redirect vdsm to ovirt git server - since ?p == querystring we have to match that sanely
RewriteCond %{QUERY_STRING} ^.*p=(.*vdsm\.git.*)$
RewriteRule ^.*$ http://gerrit.ovirt.org/gitweb\?p=%1 [R,L,NE]
RedirectMatch permanent ^/.* https://fedoraproject.org/wiki/Infrastructure/Fedorahosted-retirement

View File

@@ -15,12 +15,13 @@
# SSL Protocol support:
# List the enable protocol levels with which clients will be able to
# connect. Disable SSLv2 access by default:
SSLProtocol {{ ssl_protocols }}
SSLProtocol all -SSLv2
# SSL Cipher Suite:
# List the ciphers that the client is permitted to negotiate.
# See the mod_ssl documentation for a complete list.
SSLCipherSuite {{ ssl_ciphers }}
#SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5
# Server Certificate:
# Point SSLCertificateFile at a PEM encoded certificate. If

View File

@@ -1,27 +0,0 @@
# this is meant for proxied stuff only, hence the lack of ssl
<VirtualHost *:80>
# Change this to the domain which points to your host.
ServerName {{ item.external_name }}
ServerAlias {{ item.name }}
DocumentRoot {{ item.document_root }}
ErrorLog "/var/log/httpd/{{ item.name }}.error_log"
CustomLog "/var/log/httpd/{{ item.name }}.access_log" common
<Directory "{{ item.document_root }}">
Options Indexes FollowSymLinks
Require all granted
</Directory>
<Location "/">
Options +Indexes
DirectoryIndex default.html
</Location>
<Location "/docs">
DirectoryIndex index.html
</Location>
</VirtualHost>

46
files/iptables/iptables Normal file
View File

@@ -0,0 +1,46 @@
# {{ ansible_managed }}
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
# allow ping and traceroute
-A INPUT -p icmp -j ACCEPT
# localhost is fine
-A INPUT -i lo -j ACCEPT
# Established connections allowed
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
# allow ssh - always
-A INPUT -m conntrack --ctstate NEW -m tcp -p tcp --dport 22 -j ACCEPT
# for nrpe - allow it from nocs
-A INPUT -p tcp -m tcp --dport 5666 -s 192.168.1.10 -j ACCEPT
# FIXME - this is the global nat-ip and we need the noc01-specific ip
-A INPUT -p tcp -m tcp --dport 5666 -s 209.132.181.102 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 5666 -s 209.132.181.35 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 5666 -s 10.5.126.41 -j ACCEPT
# if the host/group defines incoming tcp_ports - allow them
{% for port in tcp_ports %}
-A INPUT -p tcp -m tcp --dport {{ port }} -j ACCEPT
{% endfor %}
# if the host/group defines incoming udp_ports - allow them
{% for port in udp_ports %}
-A INPUT -p udp -m udp --dport {{ port }} -j ACCEPT
{% endfor %}
# if there are custom rules - put them in as-is
{% for rule in custom_rules %}
{{ rule }}
{% endfor %}
# otherwise kick everything out
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
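The `{% for port in tcp_ports %}` and `{% for port in udp_ports %}` loops in the template above expand to one ACCEPT rule per declared port. A sketch of that expansion in plain Python (standing in for Jinja, purely for illustration):

```python
def expand_port_rules(tcp_ports, udp_ports):
    """Render the per-port ACCEPT rules the Jinja loops produce."""
    lines = []
    for port in tcp_ports:
        lines.append('-A INPUT -p tcp -m tcp --dport {} -j ACCEPT'.format(port))
    for port in udp_ports:
        lines.append('-A INPUT -p udp -m udp --dport {} -j ACCEPT'.format(port))
    return lines

rules = expand_port_rules([80, 443], [53])
assert rules[0] == '-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT'
assert rules[2] == '-A INPUT -p udp -m udp --dport 53 -j ACCEPT'
```

Hosts that define no ports simply get an empty expansion, so traffic falls through to the final REJECT rules.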

View File

@@ -0,0 +1,14 @@
# {{ ansible_managed }}
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
# Allow connections from client/server
-A INPUT -p tcp -m tcp --dport 44333:44334 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

View File

@@ -0,0 +1,58 @@
# {{ ansible_managed }}
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
# Redirect staging attempts to talk to the external proxy to an internal ip.
# This is primarily for openid in staging which needs to get around proxy
# redirects.
-A OUTPUT -d 209.132.181.5 -j DNAT --to-destination 10.5.126.88
COMMIT
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
# allow ping and traceroute
-A INPUT -p icmp -j ACCEPT
# localhost is fine
-A INPUT -i lo -j ACCEPT
# Established connections allowed
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
# allow ssh - always
-A INPUT -m conntrack --ctstate NEW -m tcp -p tcp --dport 22 -j ACCEPT
# for nrpe - allow it from nocs
-A INPUT -p tcp -m tcp --dport 5666 -s 192.168.1.10 -j ACCEPT
# FIXME - this is the global nat-ip and we need the noc01-specific ip
-A INPUT -p tcp -m tcp --dport 5666 -s 209.132.181.102 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 5666 -s 209.132.181.35 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 5666 -s 10.5.126.41 -j ACCEPT
# if the host/group defines incoming tcp_ports - allow them
{% for port in tcp_ports %}
-A INPUT -p tcp -m tcp --dport {{ port }} -j ACCEPT
{% endfor %}
# if the host/group defines incoming udp_ports - allow them
{% for port in udp_ports %}
-A INPUT -p udp -m udp --dport {{ port }} -j ACCEPT
{% endfor %}
# if there are custom rules - put them in as-is
{% for rule in custom_rules %}
{{ rule }}
{% endfor %}
# otherwise kick everything out
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

View File

@@ -11,7 +11,7 @@
SSLCertificateKeyFile /etc/pki/tls/private/localhost.key
#SSLCertificateChainFile /etc/pki/tls/cert.pem
SSLHonorCipherOrder On
SSLCipherSuite {{ ssl_ciphers }}
SSLProtocol {{ ssl_protocols }}
SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK
SSLProtocol -All +TLSv1 +TLSv1.1 +TLSv1.2
</VirtualHost>

View File

@@ -1 +0,0 @@
/var/log/hosts

View File

@@ -1,17 +0,0 @@
[Unit]
Description=loopabull worker #%i
After=network.target
Documentation=https://github.com/maxamillion/loopabull
[Service]
ExecStart=/usr/bin/loopabull $CONFIG_FILE
User=root
Group=root
Restart=on-failure
Type=simple
EnvironmentFile=-/etc/sysconfig/loopabull
PrivateTmp=yes
[Install]
WantedBy=multi-user.target

View File

@@ -1 +0,0 @@
config = { "rabbitmq.serializer.enabled": True }

View File

@@ -1,22 +0,0 @@
[rhel7-openshift-3.4]
name = rhel7 openshift 3.4 $basearch
baseurl=http://infrastructure.fedoraproject.org/repo/rhel/rhel7/$basearch/rhel-7-openshift-3.4-rpms/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
[rhel7-openshift-3.5]
name = rhel7 openshift 3.5 $basearch
baseurl=http://infrastructure.fedoraproject.org/repo/rhel/rhel7/$basearch/rhel-7-openshift-3.5-rpms/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
{% if env == 'staging' %}
[rhel7-openshift-3.6]
name = rhel7 openshift 3.6 $basearch
baseurl=http://infrastructure.fedoraproject.org/repo/rhel/rhel7/$basearch/rhel-7-openshift-3.6-rpms/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
# OpenShift 3.6 needs this for new openvswitch
[rhel7-fast-datapath]
name = rhel7 fast datapath $basearch
baseurl=http://infrastructure.fedoraproject.org/repo/rhel/rhel7/$basearch/rhel-7-fast-datapath/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
{% endif %}

View File

@@ -0,0 +1,8 @@
[atomic-reactor]
name=Copr repo for atomic-reactor owned by maxamillion
baseurl=https://copr-be.cloud.fedoraproject.org/results/maxamillion/atomic-reactor/epel-7-$basearch/
skip_if_unavailable=True
gpgcheck=1
gpgkey=https://copr-be.cloud.fedoraproject.org/results/maxamillion/atomic-reactor/pubkey.gpg
enabled=1
enabled_metadata=1

View File

@@ -1,9 +0,0 @@
FROM registry.fedoraproject.org/fedora
ADD ./infra-tags.repo /etc/yum.repos.d/infra-tags.repo
RUN dnf -y install --refresh dnf-plugins-core && dnf -y install docker git python-setuptools e2fsprogs koji python-backports-lzma osbs-client python-osbs-client gssproxy fedpkg python-docker-squash atomic-reactor python-atomic-reactor* go-md2man
RUN sed -i 's|.*default_ccache_name.*| default_ccache_name = DIR:/tmp/ccache_%{uid}|g' /etc/krb5.conf
ADD ./krb5.osbs_{{osbs_url}}.keytab /etc/
ADD ./ca.crt /etc/pki/ca-trust/source/anchors/osbs.ca.crt
RUN update-ca-trust
CMD ["python2", "/usr/bin/atomic-reactor", "--verbose", "inside-build"]

View File

@@ -1,8 +0,0 @@
FROM registry.fedoraproject.org/fedora
ADD ./infra-tags.repo /etc/yum.repos.d/infra-tags.repo
RUN dnf -y install --refresh dnf-plugins-core && dnf -y install docker git python3-docker-py python3-setuptools e2fsprogs koji osbs-client gssproxy fedpkg python3-docker-squash atomic-reactor python3-atomic-reactor* go-md2man
RUN sed -i 's|.*default_ccache_name.*| default_ccache_name = DIR:/tmp/ccache_%{uid}|g' /etc/krb5.conf
ADD ./krb5.osbs_{{osbs_url}}.keytab /etc/
ADD ./ca.crt /etc/pki/ca-trust/source/anchors/osbs.ca.crt
RUN update-ca-trust
CMD ["python3", "/usr/bin/atomic-reactor", "--verbose", "inside-build"]

View File

@@ -1,5 +0,0 @@
SHELL=/bin/bash
MAILTO=maxamillion@fedoraproject.org
5 0 * * * root for i in $(docker ps -a | awk '/Exited/ { print $1 }'); do docker rm $i; done && for i in $(docker images -q -f 'dangling=true'); do docker rmi $i; done
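The cron entry above prunes exited containers by grepping `docker ps -a` output for `Exited` and taking the first column. A sketch of the same column extraction in Python (the sample output lines are made up):

```python
def exited_container_ids(ps_output):
    """Return the first column (container ID) of lines mentioning 'Exited',
    mimicking: docker ps -a | awk '/Exited/ { print $1 }'"""
    ids = []
    for line in ps_output.splitlines():
        if 'Exited' in line:
            ids.append(line.split()[0])
    return ids

sample = (
    'abc123  fedora  "bash"  Exited (0) 2 hours ago\n'
    'def456  fedora  "bash"  Up 3 hours\n'
)
assert exited_container_ids(sample) == ['abc123']
```

Each returned ID would then be fed to `docker rm`, exactly as the cron pipeline does.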

View File

@@ -1,4 +0,0 @@
SHELL=/bin/bash
MAILTO=maxamillion@fedoraproject.org
0 0 * * * root oadm prune builds --orphans --keep-younger-than=720h0m0s --confirm

View File

@@ -1 +0,0 @@
VG="vg-docker"

View File

@@ -1 +0,0 @@
STORAGE_DRIVER="overlay2"

View File

@@ -1,8 +0,0 @@
# Ansible managed
[Unit]
Wants=iptables.service
After=iptables.service
[Service]
ExecStartPost=/usr/local/bin/fix-docker-iptables

View File

@@ -1,32 +0,0 @@
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target
Wants=docker-storage-setup.service
[Service]
Type=notify
NotifyAccess=all
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
ExecStart=/usr/bin/docker daemon \
--exec-opt native.cgroupdriver=systemd \
$OPTIONS \
$DOCKER_STORAGE_OPTIONS \
$DOCKER_NETWORK_OPTIONS \
$INSECURE_REGISTRY
ExecStartPost=/usr/local/bin/fix-docker-iptables
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
MountFlags=slave
StandardOutput=null
StandardError=null
TimeoutStartSec=0
Restart=on-abnormal
[Install]
WantedBy=multi-user.target

View File

@@ -1,2 +0,0 @@
server=/fedoraproject.org/10.5.126.21
server=/fedoraproject.org/10.5.126.22


@@ -1,2 +0,0 @@
server=/fedoraproject.org/10.5.126.21
server=/fedoraproject.org/10.5.126.22


@@ -1,74 +0,0 @@
#!/bin/bash -xe
# Note: this is done as a script because it needs to be run after
# every docker service restart.
# And just doing an iptables-restore is going to mess up kubernetes'
# NAT table.
# And it gets even better with openshift! It thinks I'm stupid and need
# to be corrected by automatically adding the "allow all" rules back at
# the top as soon as I remove them.
# To circumvent that, we're just adding a new chain for this, as it seems
# that it doesn't do anything with the firewall if we keep its rules in
# place. (it doesn't check the order of its rules, only that they exist)
if [ "`iptables -nL | grep FILTER_FORWARD`" == "" ];
then
iptables -N FILTER_FORWARD
fi
if [ "`iptables -nL | grep 'FILTER_FORWARD all'`" == "" ];
then
iptables -I FORWARD 1 -j FILTER_FORWARD
iptables -I FORWARD 2 -j REJECT
iptables -I DOCKER-ISOLATION 1 -j FILTER_FORWARD
fi
# Delete all old rules
iptables --flush FILTER_FORWARD
# Re-insert some basic rules
iptables -A FILTER_FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A FILTER_FORWARD --src 10.1.0.0/16 --dst 10.1.0.0/16 -j ACCEPT
# Now insert access to allowed boxes
# docker-registry
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.125.56 --dport 443 -j ACCEPT
#koji.fp.o
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.125.61 --dport 80 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.125.61 --dport 443 -j ACCEPT
# pkgs
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.125.44 --dport 80 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.125.44 --dport 443 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.125.44 --dport 9418 -j ACCEPT
# DNS
iptables -A FILTER_FORWARD -p udp -m udp -d 10.5.126.21 --dport 53 -j ACCEPT
iptables -A FILTER_FORWARD -p udp -m udp -d 10.5.126.22 --dport 53 -j ACCEPT
# mirrors.fp.o
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.51 --dport 443 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.52 --dport 443 -j ACCEPT
# Kerberos
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.51 --dport 1088 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.52 --dport 1088 -j ACCEPT
# dl.phx2
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.93 --dport 80 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.93 --dport 443 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.94 --dport 80 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.94 --dport 443 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.95 --dport 80 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.95 --dport 443 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.96 --dport 80 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.96 --dport 443 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.97 --dport 80 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.97 --dport 443 -j ACCEPT
# Docker is CRAZY and forces Google DNS upon us.....
iptables -A FILTER_FORWARD -p udp -m udp -d 8.8.8.8 --dport 53 -j ACCEPT
iptables -A FILTER_FORWARD -p udp -m udp -d 8.8.4.4 --dport 53 -j ACCEPT
iptables -A FORWARD -j REJECT --reject-with icmp-host-prohibited
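The script above hand-writes one ACCEPT rule per host/port pair (see the repeated dl.phx2 block). A hypothetical sketch of generating those commands from a table instead; the chain, hosts, and ports below are taken from the script, but the generator itself is illustrative, not part of the repo:

```python
# Sketch: generate per-host ACCEPT rules like the dl.phx2 block above
# from a (hosts, ports) table instead of writing each line by hand.

def accept_rules(chain, hosts, ports, proto="tcp"):
    """Return one iptables command string per (host, port) pair."""
    return [
        f"iptables -A {chain} -p {proto} -m {proto} -d {host} --dport {port} -j ACCEPT"
        for host in hosts
        for port in ports
    ]

# Two of the dl.phx2 hosts from the script above.
rules = accept_rules("FILTER_FORWARD", ["10.5.126.93", "10.5.126.94"], [80, 443])
```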


@@ -1,81 +0,0 @@
#!/bin/bash -xe
# Note: this is done as a script because it needs to be run after
# every docker service restart.
# And just doing an iptables-restore is going to mess up kubernetes'
# NAT table.
# And it gets even better with openshift! It thinks I'm stupid and need
# to be corrected by automatically adding the "allow all" rules back at
# the top as soon as I remove them.
# To circumvent that, we're just adding a new chain for this, as it seems
# that it doesn't do anything with the firewall if we keep its rules in
# place. (it doesn't check the order of its rules, only that they exist)
if [ "`iptables -nL | grep FILTER_FORWARD`" == "" ];
then
iptables -N FILTER_FORWARD
fi
if [ "`iptables -nL | grep 'FILTER_FORWARD all'`" == "" ];
then
iptables -I FORWARD 1 -j FILTER_FORWARD
iptables -I FORWARD 2 -j REJECT
iptables -I DOCKER-ISOLATION 1 -j FILTER_FORWARD
fi
# Delete all old rules
iptables --flush FILTER_FORWARD
# Re-insert some basic rules
iptables -A FILTER_FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A FILTER_FORWARD --src 10.1.0.0/16 --dst 10.1.0.0/16 -j ACCEPT
# Now insert access to allowed boxes
# osbs
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.128.177 --dport 443 -j ACCEPT
# docker-registry
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.128.123 --dport 443 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.128.124 --dport 443 -j ACCEPT
#koji.fp.o
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.128.139 --dport 80 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.128.139 --dport 443 -j ACCEPT
# pkgs.stg
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.128.175 --dport 80 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.128.175 --dport 443 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.128.175 --dport 9418 -j ACCEPT
# DNS
iptables -A FILTER_FORWARD -p udp -m udp -d 10.5.126.21 --dport 53 -j ACCEPT
iptables -A FILTER_FORWARD -p udp -m udp -d 10.5.126.22 --dport 53 -j ACCEPT
# mirrors.fp.o
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.51 --dport 443 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.52 --dport 443 -j ACCEPT
# dl.phx2
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.93 --dport 80 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.93 --dport 443 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.94 --dport 80 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.94 --dport 443 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.95 --dport 80 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.95 --dport 443 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.96 --dport 80 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.96 --dport 443 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.97 --dport 80 -j ACCEPT
iptables -A FILTER_FORWARD -p tcp -m tcp -d 10.5.126.97 --dport 443 -j ACCEPT
# Docker is CRAZY and forces Google DNS upon us.....
iptables -A FILTER_FORWARD -p udp -m udp -d 8.8.8.8 --dport 53 -j ACCEPT
iptables -A FILTER_FORWARD -p udp -m udp -d 8.8.4.4 --dport 53 -j ACCEPT
# proxy
iptables -A FILTER_FORWARD -p tcp --dst 10.5.128.177 --dport 443 -j ACCEPT
# Kerberos
iptables -A FILTER_FORWARD -p tcp --dst 10.5.128.177 --dport 1088 -j ACCEPT
iptables -A FILTER_FORWARD -j REJECT --reject-with icmp-host-prohibited


@@ -1,8 +0,0 @@
[maxamillion-atomic-reactor]
name=Copr repo for atomic-reactor owned by maxamillion
baseurl=https://copr-be.cloud.fedoraproject.org/results/maxamillion/atomic-reactor/epel-7-$basearch/
skip_if_unavailable=True
gpgcheck=1
gpgkey=https://copr-be.cloud.fedoraproject.org/results/maxamillion/atomic-reactor/pubkey.gpg
enabled=1
enabled_metadata=1


@@ -1,8 +0,0 @@
[maxamillion-atomic-reactor]
name=Copr repo for atomic-reactor owned by maxamillion
baseurl=https://copr-be.cloud.fedoraproject.org/results/maxamillion/atomic-reactor/fedora-$releasever-$basearch/
skip_if_unavailable=True
gpgcheck=1
gpgkey=https://copr-be.cloud.fedoraproject.org/results/maxamillion/atomic-reactor/pubkey.gpg
enabled=1
enabled_metadata=1

files/osbs/osbs.conf Normal file

@@ -0,0 +1,18 @@
[general]
build_json_dir = /usr/share/osbs/
[default]
openshift_uri = https://losbs.example.com:8443/
# if you want to get packages from koji (koji plugin in dock)
# you need to setup koji hub and root
# this sample is for fedora
koji_root = http://koji.fedoraproject.org/
koji_hub = http://koji.fedoraproject.org/kojihub
# in case of using artifacts plugin, you should provide a command
# how to fetch artifacts
sources_command = fedpkg sources
# from where should be images pulled and where should be pushed?
# registry_uri = your.example.registry
registry_uri = localhost:5000
verify_ssl = false
build_type = simple
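osbs.conf is standard INI syntax, so osbs-client reads it with a config parser. A minimal sketch of consuming the settings above (the inline string is a trimmed copy of the file; this is illustrative, not osbs-client's actual loading code):

```python
# Sketch: parsing osbs.conf-style settings with configparser.
import configparser

conf = """
[general]
build_json_dir = /usr/share/osbs/

[default]
openshift_uri = https://losbs.example.com:8443/
registry_uri = localhost:5000
verify_ssl = false
build_type = simple
"""

parser = configparser.ConfigParser()
parser.read_string(conf)

registry = parser.get("default", "registry_uri")
# getboolean understands "false"/"true" strings in the file.
verify_ssl = parser.getboolean("default", "verify_ssl")
```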


@@ -1,5 +0,0 @@
SHELL=/bin/bash
MAILTO=maxamillion@fedoraproject.org
*/5 * * * * root cd /var/lib/reg-server/ && reg-server -r registry.fedoraproject.org --once


@@ -1,5 +0,0 @@
SHELL=/bin/bash
MAILTO=maxamillion@fedoraproject.org
*/5 * * * * root cd /var/lib/reg-server/ && reg-server -r registry.stg.fedoraproject.org --once


@@ -1,69 +0,0 @@
{{define "repositories"}}
<!DOCTYPE html>
<!--[if lt IE 7]> <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]-->
<!--[if IE 7]> <html class="no-js lt-ie9 lt-ie8"> <![endif]-->
<!--[if IE 8]> <html class="no-js lt-ie9"> <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]-->
<head>
<meta charset="utf-8">
<base href="/" >
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
<title>{{ .RegistryDomain }}</title>
<link rel="icon" type="image/ico" href="/favicon.ico">
<link rel="stylesheet" href="/css/styles.css" />
</head>
<body>
<h1>{{ .RegistryDomain }}</h1>
<form>
<input name="filter" type="search"><a class="clear">clear</a>
</form>
<div class="wrapper">
<table>
<tr>
<th>Repository Name</th>
<th>Pull Command</th>
</tr>
{{ range $key, $value := .Repositories }}
<tr>
<td valign="top">
<a href="/repo/{{ $value.Name | urlquery }}/tags">
{{ $value.Name }}
</a>
</td>
<td align="right" nowrap>
<a href="/repo/{{ $value.Name | urlquery }}/tags">
<code>docker pull {{ $value.URI }}</code>
</a>
</td>
</tr>
{{ end }}
</table>
</div>
<div class="footer">
<p>Last Updated: {{ .LastUpdated }}</p>
<p>
Fedora Container Layered Images brought to you by the
<a href="https://fedoraproject.org/wiki/Atomic_WG">Fedora Atomic Working
Group</a>
</p>
<p>
<a href="https://github.com/jessfraz/reg/tree/master/server">reg-server
was originally written and is maintained upstream by</a>
<a href="https://twitter.com/jessfraz">@jessfraz</a>
</p>
</div><!--/.footer-->
<script src="/js/scripts.js"></script>
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-29404280-12', 'jessfraz.com');
ga('send', 'pageview');
</script>
</body>
</html>
{{end}}


@@ -1,74 +0,0 @@
{{define "tags"}}
<!DOCTYPE html>
<!--[if lt IE 7]> <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]-->
<!--[if IE 7]> <html class="no-js lt-ie9 lt-ie8"> <![endif]-->
<!--[if IE 8]> <html class="no-js lt-ie9"> <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]-->
<head>
<meta charset="utf-8">
<base href="/" >
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
<title>{{ .RegistryDomain }}/{{ .Name }}</title>
<link rel="icon" type="image/ico" href="/favicon.ico">
<link rel="stylesheet" href="/css/styles.css" />
</head>
<body>
<h1>{{ .RegistryDomain }}/{{ .Name }}</h1>
<div class="wrapper">
<table>
<tr>
<th>Name</th>
<th>Tag</th>
<th>Created</th>
</tr>
{{ range $key, $value := .Repositories }}
<tr>
<td valign="top" nowrap>
{{ $value.Name }}
</td>
<td align="right" nowrap>
{{ $value.Tag }}
</td>
<td align="right" nowrap>
{{ $value.Created.Format "02 Jan, 2006 15:04:05 UTC" }}
</td>
</tr>
{{ end }}
</table>
</div>
<div class="footer">
<p>
Fedora Container Layered Images brought to you by the
<a href="https://fedoraproject.org/wiki/Atomic_WG">Fedora Atomic Working
Group</a>
</p>
<p>
<a href="https://github.com/jessfraz/reg/tree/master/server">reg-server
was originally written and is maintained upstream by</a>
<a href="https://twitter.com/jessfraz">@jessfraz</a>
</p>
</div><!--/.footer-->
<script src="/js/scripts.js"></script>
<script type="text/javascript">
var ajaxCalls = [
{{ range $key, $value := .Repositories }}
'/repo/{{ $value.Name | urlquery }}/tag/{{ $value.Tag }}/vulns.json',
{{ end }}
];
window.onload = function() {
Array.prototype.forEach.call(ajaxCalls, function(url, index){
loadVulnerabilityCount(url);
});
};
</script>
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-29404280-12', 'jessfraz.com');
ga('send', 'pageview');
</script>
</body>
</html>
{{end}}


@@ -1,198 +0,0 @@
#!/usr/bin/python
from __future__ import print_function
# A simple script to generate a file list in a format easily consumable by a
# shell script.
# Originally written by Jason Tibbitts <tibbs@math.uh.edu> in 2016.
# Donated to the public domain. If you require a statement of license, please
# consider this work to be licensed as "CC0 Universal", any version you choose.
import argparse
import hashlib
import os
import stat
import sys
# Get scandir from whatever module provides it today
try:
from os import scandir
except ImportError:
from scandir import scandir
# productmd is optional, needed only for the imagelist feature
try:
from productmd.images import SUPPORTED_IMAGE_FORMATS
except ImportError:
SUPPORTED_IMAGE_FORMATS = []
class SEntry(object):
"""A simpler DirEntry-like object."""
def __init__(self, direntry, restricted=False):
self.direntry = direntry
self.restricted = restricted
self.path = direntry.path
self.name = direntry.name
info = direntry.stat(follow_symlinks=False)
self.modtime = max(info.st_mtime, info.st_ctime)
self.readable_group = info.st_mode & stat.S_IRGRP
self.readable_world = info.st_mode & stat.S_IROTH
self.size = info.st_size
ftype = 'f'
perm = ''
if direntry.is_symlink():
ftype = 'l'
elif direntry.is_dir():
ftype = 'd'
if self.restricted:
perm = '*'
# Note that we want an unreadable state to override the restricted state
if not self.readable_world:
perm = '-'
self.ftype = ftype + perm
def sha1(fname):
"""Return the SHA1 checksum of a file in hex."""
fh = open(fname, 'rb')
sha1 = hashlib.sha1()
block = fh.read(2 ** 16)
while len(block) > 0:
sha1.update(block)
block = fh.read(2 ** 16)
return sha1.hexdigest()
def recursedir(path='.', skip=[], alwaysskip=['.~tmp~'], in_restricted=False):
"""Like scandir, but recursively.
Will skip everything in the skip array, but only at the top level
directory.
Returns SEntry objects. If in_restricted is true, all returned entries will
be marked as restricted even if their permissions are not restricted.
"""
for dentry in scandir(path):
if dentry.name in skip:
continue
if dentry.name in alwaysskip:
continue
# Skip things which are not at least group readable
# Symlinks are followed here so that clients won't see dangling
# symlinks to content they can't transfer. It's the default, but to
# avoid confusion it's been made explicit.
if not (dentry.stat(follow_symlinks=True).st_mode & stat.S_IRGRP):
# print('{} is not group readable; skipping.'.format(dentry.path))
continue
se = SEntry(dentry, in_restricted)
if dentry.is_dir(follow_symlinks=False):
this_restricted = in_restricted
if not se.readable_world:
# print('{} is not world readable; marking as restricted.'.format(se.path), file=sys.stderr)
this_restricted = True
# Don't pass skip here, because we only skip in the top level
for re in recursedir(se.path, alwaysskip=alwaysskip, in_restricted=this_restricted):
yield re
yield se
def parseopts():
null = open(os.devnull, 'w')
p = argparse.ArgumentParser(
description='Generate a list of files and times, suitable for consumption by quick-fedora-mirror, '
'and (optionally) a much smaller list of only files that match one of the productmd '
'supported image types, for use by fedfind.')
p.add_argument('-c', '--checksum', action='store_true',
help='Include checksums of all repomd.xml files in the file list.')
p.add_argument('-C', '--checksum-file', action='append', dest='checksum_files',
help='Include checksums of all instances of the specified file.')
p.add_argument('-s', '--skip', action='store_true',
help='Skip the file lists in the top directory')
p.add_argument('-S', '--skip-file', action='append', dest='skip_files',
help='Skip the specified file in the top directory.')
p.add_argument('-d', '--dir', help='Directory to scan (default: .).')
p.add_argument('-t', '--timelist', type=argparse.FileType('w'), default=sys.stdout,
help='Filename of the file list with times (default: stdout).')
p.add_argument('-f', '--filelist', type=argparse.FileType('w'), default=null,
help='Filename of the file list without times (default: no plain file list is generated).')
p.add_argument('-i', '--imagelist', type=argparse.FileType('w'), default=null,
help='Filename of the image file list for fedfind (default: not generated). Requires '
'the productmd library.')
opts = p.parse_args()
if not opts.dir:
opts.dir = '.'
opts.checksum_files = opts.checksum_files or []
if opts.checksum:
opts.checksum_files += ['repomd.xml']
opts.skip_files = opts.skip_files or []
if opts.skip:
if not opts.timelist.name == '<stdout>':
opts.skip_files += [os.path.basename(opts.timelist.name)]
if not opts.filelist.name == '<stdout>':
opts.skip_files += [os.path.basename(opts.filelist.name)]
if not opts.imagelist.name == '<stdout>':
opts.skip_files += [os.path.basename(opts.imagelist.name)]
return opts
def main():
opts = parseopts()
if opts.imagelist.name != os.devnull and not SUPPORTED_IMAGE_FORMATS:
sys.exit("--imagelist requires the productmd library!")
checksums = {}
os.chdir(opts.dir)
print('[Version]', file=opts.timelist)
# XXX Technically this should be version 3. But old clients will simply
# ignore the extended file types for restricted directories, and so we can
# add this now and let things simmer for a while before bumping the format
# and hard-breaking old clients.
print('2', file=opts.timelist)
print(file=opts.timelist)
print('[Files]', file=opts.timelist)
for entry in recursedir(skip=opts.skip_files):
print(entry.path, file=opts.filelist)
# write to filtered list if appropriate
imgs = ['.{0}'.format(form) for form in SUPPORTED_IMAGE_FORMATS]
if any(entry.path.endswith(img) for img in imgs):
print(entry.path, file=opts.imagelist)
if entry.name in opts.checksum_files:
checksums[entry.path[2:]] = True
print('{0}\t{1}\t{2}\t{3}'.format(entry.modtime, entry.ftype,
entry.size, entry.path[2:]),
file=opts.timelist)
print('\n[Checksums SHA1]', file=opts.timelist)
# It's OK if the checksum section is empty, but we should include it anyway
# as the client expects it.
for f in sorted(checksums):
print('{0}\t{1}'.format(sha1(f), f), file=opts.timelist)
print('\n[End]', file=opts.timelist)
if __name__ == '__main__':
main()
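The `main()` function above emits the fullfiletimelist format consumed by quick-fedora-mirror: a `[Version]` header, a `[Files]` section of tab-separated `mtime  type  size  path` records, a `[Checksums SHA1]` section, and an `[End]` marker. A sketch reproducing that layout for one fabricated entry, so the format is easy to see (the renderer itself is illustrative, not part of the script):

```python
# Sketch: the fullfiletimelist layout written by main() above,
# rendered for fabricated entries. Fields per [Files] line are
# tab-separated: mtime, type flag, size, path.

def render_timelist(entries, checksums=()):
    lines = ["[Version]", "2", "", "[Files]"]
    for mtime, ftype, size, path in entries:
        lines.append(f"{mtime}\t{ftype}\t{size}\t{path}")
    lines += ["", "[Checksums SHA1]"]  # section kept even when empty
    for digest, path in checksums:
        lines.append(f"{digest}\t{path}")
    lines += ["", "[End]"]
    return "\n".join(lines)

out = render_timelist([(1500000000, "f", 1024, "repodata/repomd.xml")])
```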


@@ -1,165 +0,0 @@
#!/bin/bash
# Note: this is only an example of how you'd call create-filelist. Edit to fit
# your requirements. Note that you must supply a valid path for the lockfile,
# and it must be outside of your repository unless you want that lockfile to
# show up in your file lists.
# Takes a list of module names. Generates file lists for all of them and them
# moves them into place at once. If you are creating hardlinks between rsync
# modules, it is required that you update the file lists of both mirrors at the
# same time. Otherwise the clients may make separate copies of the files.
# The directory where all of the modules live
# Or pass it with -t
TOPD=/srv/mirror/pub
# The modules to process. Or pass them on the command line.
MODS=()
# Path to the create-filelist program.
# Or specify it with -p.
CREATE=/usr/local/bin/create-filelist
# These strings will be eval'ed later with $mod replaced by its value in
# context.
FILELIST=fullfilelist
TIMELIST='fullfiletimelist-$mod'
IMAGELIST='imagelist-$mod'
usage () {
echo
echo "Usage: $0 [-l lockfile] [-p creator path] [-t top directory] module [module ...]"
echo
echo " -l: Path to the lock file"
echo " -p: Path to the create-filelist program"
echo " -t: Path to directory containing modules"
echo
echo "At least one module to process must be provided."
echo "All paths must be absolute."
}
while [[ $# > 0 ]]; do
opt=$1
case $opt in
-l)
LOCKFILE=$(realpath $2)
shift
;;
-p)
CREATE=$(realpath $2)
shift
;;
-t)
TOPD=$(realpath $2)
shift
;;
-*)
(>&2 echo "Unknown option $opt."; usage)
exit 1
;;
*) # Remaining args are modules
MODS+=($opt)
;;
esac
shift
done
if [[ -z $LOCKFILE ]]; then
(>&2 echo "Must specify LOCKFILE, either by editing the source or via the -l option."; usage)
exit 2
fi
if [[ ! -d $(dirname $LOCKFILE) ]]; then
(>&2 echo "Given directory $(dirname $LOCKFILE) does not exist."; usage)
exit 2
fi
if [[ ! -f $CREATE ]]; then
(>&2 echo "Specified executable $CREATE does not exist."; usage)
exit 2
fi
if [[ ! -d $TOPD ]]; then
(>&2 echo "Provided directory $TOPD does not exist."; usage)
exit 2
fi
if [[ ${#MODS[@]} -eq 0 ]]; then
(>&2 echo "No modules specified"; usage)
exit 2
fi
tmpd=$(mktemp -d -t create-filelist.XXXXXXXXXX)
if [[ $? -ne 0 ]]; then
(>&2 echo "Creating temporary directory failed?")
exit 1
fi
trap "rm -rf $tmpd" EXIT
cd $tmpd
(
# We want to wait forever until we can do what we're asked
flock -x 9
# If you don't want to wait forever, try one of the following:
# flock -n 9 || exit 1 - Gives up immediately
# flock -w 120 9 || exit 1 - Waits 120 seconds and then gives up
# Don't change the '9', unless you change the last line of this script.
for mod in ${MODS[@]}; do
currentfl=$TOPD/$mod/${FILELIST/'$mod'/$mod}
currenttl=$TOPD/$mod/${TIMELIST/'$mod'/$mod}
currentil=$TOPD/$mod/${IMAGELIST/'$mod'/$mod}
flname=$(basename $currentfl)
tlname=$(basename $currenttl)
ilname=$(basename $currentil)
$CREATE -c -s -d $TOPD/$mod -f $flname -t $tlname -i $ilname
# If a file list exists and doesn't differ from what we just generated,
# delete the latter.
if [[ -f $currentfl ]] && diff -q $currentfl $flname > /dev/null; then
rm -f $flname
fi
if [[ -f $currenttl ]] && diff -q $currenttl $tlname > /dev/null; then
rm -f $tlname
fi
if [[ -f $currentil ]] && diff -q $currentil $ilname > /dev/null; then
rm -f $ilname
fi
done
# Now we have the new file lists but in a temporary directory which
# probably isn't on the same filesystem. Copy them to temporary files in
# the right place.
for mod in ${MODS[@]}; do
currentfl=$TOPD/$mod/${FILELIST/'$mod'/$mod}
currenttl=$TOPD/$mod/${TIMELIST/'$mod'/$mod}
currentil=$TOPD/$mod/${IMAGELIST/'$mod'/$mod}
flname=$(basename $currentfl)
fldir=$(dirname $currentfl)
tlname=$(basename $currenttl)
tldir=$(dirname $currenttl)
ilname=$(basename $currentil)
ildir=$(dirname $currentil)
if [[ -f $flname ]]; then
tmpf=$(mktemp -p $fldir $flname.XXXXXXXXXX)
cp -p $flname $tmpf
chmod 644 $tmpf
mv $tmpf $currentfl
fi
if [[ -f $tlname ]]; then
tmpf=$(mktemp -p $tldir $tlname.XXXXXXXXXX)
cp -p $tlname $tmpf
chmod 644 $tmpf
mv $tmpf $currenttl
fi
if [[ -f $ilname ]]; then
tmpf=$(mktemp -p $ildir $ilname.XXXXXXXXXX)
cp -p $ilname $tmpf
chmod 644 $tmpf
mv $tmpf $currentil
fi
done
) 9>$LOCKFILE
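The wrapper's final loop uses the classic atomic-update pattern: write to a `mktemp` file in the destination directory, then `mv` it over the target, so rsync clients never see a half-written file list. A hypothetical Python equivalent of that pattern (function name and paths are illustrative):

```python
# Sketch of the wrapper's mktemp + mv pattern: the temp file lives in the
# destination directory so the final rename stays on one filesystem and
# is atomic on POSIX.
import os
import tempfile

def atomic_write(path, data):
    """Replace `path` with `data` without readers ever seeing a partial file."""
    dirname = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname, prefix=os.path.basename(path) + ".")
    try:
        with os.fdopen(fd, "w") as fh:
            fh.write(data)
        os.chmod(tmp, 0o644)          # matches the script's `chmod 644`
        os.replace(tmp, path)         # atomic rename, like the script's `mv`
    except Exception:
        os.unlink(tmp)
        raise
```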

files/sign/bridge.conf.j2 Normal file

@@ -0,0 +1,30 @@
# This is a configuration for the sigul bridge.
[bridge]
# Nickname of the bridge's certificate in the NSS database specified below
bridge-cert-nickname: sign-bridge1 - Fedora Project
# Port on which the bridge expects client connections
client-listen-port: 44334
# Port on which the bridge expects server connections
server-listen-port: 44333
# A Fedora account system group required for access to the signing server. If
# empty, no Fedora account check is done.
required-fas-group: signers
# User name and password for an account on the Fedora account system that can
# be used to verify group memberships
fas-user-name: {{ fedoraDummyUser }}
fas-password: {{ fedoraDummyUserPassword }}
[daemon]
# The user to run as
unix-user: sigul
# The group to run as
unix-group: sigul
[nss]
# Path to a directory containing a NSS database
nss-dir: /var/lib/sigul
# Password for accessing the NSS database. If not specified, the bridge will
# ask on startup
# Currently no password is used
nss-password:


@@ -0,0 +1,45 @@
# This is a configuration for the sigul bridge.
#
[bridge]
# Nickname of the bridge's certificate in the NSS database specified below
bridge-cert-nickname: secondary-signer
# Port on which the bridge expects client connections
client-listen-port: 44334
# Port on which the bridge expects server connections
server-listen-port: 44333
# A Fedora account system group required for access to the signing server. If
# empty, no Fedora account check is done.
; required-fas-group:
# User name and password for an account on the Fedora account system that can
# be used to verify group memberships
; fas-user-name:
; fas-password:
#
[koji]
# Config file used to connect to the Koji hub
# ; koji-config: ~/.koji/config
# # Recognized alternative instances
koji-instances: ppc s390 arm sparc
#
# # Example configuration of alternative instances:
# # koji-instances: ppc64 s390
# # Configuration paths for alternative instances:
koji-config-ppc: /etc/koji-ppc.conf
koji-config-s390: /etc/koji-s390.conf
koji-config-arm: /etc/koji-arm.conf
koji-config-sparc: /etc/koji-sparc.conf
#
#
[daemon]
# The user to run as
unix-user: sigul
# The group to run as
unix-group: sigul
#
[nss]
# Path to a directory containing a NSS database
nss-dir: /var/lib/sigul
# Password for accessing the NSS database. If not specified, the bridge will
# ask on startup
# Currently no password is used
nss-password:


@@ -0,0 +1,6 @@
[builder-rpms]
name=Builder Packages from Fedora Infrastructure $releasever - $basearch
baseurl=http://infrastructure.fedoraproject.org/repo/builder-rpms/$releasever/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://infrastructure.fedoraproject.org/repo/RPM-GPG-KEY-INFRASTRUCTURE

files/sign/koji-arm.conf Normal file

@@ -0,0 +1,27 @@
[koji]
;configuration for koji cli tool
;url of XMLRPC server
server = http://arm.koji.fedoraproject.org/kojihub
;url of web interface
weburl = http://arm.koji.fedoraproject.org/koji
;url of package download site
topurl = http://armpkgs.fedoraproject.org/
;path to the koji top directory
;topdir = /mnt/koji
;configuration for SSL authentication
;client certificate
cert = ~/.fedora.cert
;certificate of the CA that issued the client certificate
ca = ~/.fedora-upload-ca.cert
;certificate of the CA that issued the HTTP server certificate
serverca = ~/.fedora-server-ca.cert

files/sign/koji-ppc.conf Normal file

@@ -0,0 +1,27 @@
[koji]
;configuration for koji cli tool
;url of XMLRPC server
server = http://ppc.koji.fedoraproject.org/kojihub
;url of web interface
weburl = http://ppc.koji.fedoraproject.org/koji
;url of package download site
topurl = http://ppc.koji.fedoraproject.org/
;path to the koji top directory
;topdir = /mnt/koji
;configuration for SSL authentication
;client certificate
cert = ~/.fedora.cert
;certificate of the CA that issued the client certificate
ca = ~/.fedora-upload-ca.cert
;certificate of the CA that issued the HTTP server certificate
serverca = ~/.fedora-server-ca.cert

files/sign/koji-s390.conf Normal file

@@ -0,0 +1,27 @@
[koji]
;configuration for koji cli tool
;url of XMLRPC server
server = http://s390.koji.fedoraproject.org/kojihub
;url of web interface
weburl = http://s390.koji.fedoraproject.org/koji
;url of package download site
topurl = http://s390pkgs.fedoraproject.org/
;path to the koji top directory
;topdir = /mnt/koji
;configuration for SSL authentication
;client certificate
cert = ~/.fedora.cert
;certificate of the CA that issued the client certificate
ca = ~/.fedora-upload-ca.cert
;certificate of the CA that issued the HTTP server certificate
serverca = ~/.fedora-server-ca.cert


@@ -0,0 +1,46 @@
# This is a configuration for the sigul server.
[server]
# Host name of the publicly accessible bridge to clients
bridge-hostname: sign-bridge1
# Port on which the bridge expects server connections
bridge-port: 44333
# Maximum accepted size of payload stored on disk
max-file-payload-size: 2073741824
# Maximum accepted size of payload stored in server's memory
max-memory-payload-size: 1048576
# Nickname of the server's certificate in the NSS database specified below
server-cert-nickname: sign-vault1 - Fedora Project
[database]
# Path to a directory containing a SQLite database
;database-path: /var/lib/sigul
[gnupg]
# Path to a directory containing GPG configuration and keyrings
gnupg-home: /var/lib/sigul/gnupg
# Default primary key type for newly created keys
gnupg-key-type: RSA
# Default primary key length for newly created keys
gnupg-key-length: 4096
# Default subkey type for newly created keys, empty for no subkey
gnupg-subkey-type:
# Default subkey length for newly created keys if gnupg-subkey-type is not empty
; gnupg-subkey-length: 2048
# Default key usage flags for newly created keys
gnupg-key-usage: encrypt, sign
# Length of key passphrases used for newly created keys
passphrase-length: 64
[daemon]
# The user to run as
unix-user: sigul
# The group to run as
unix-group: sigul
[nss]
# Path to a directory containing a NSS database
nss-dir: /var/lib/sigul
# Password for accessing the NSS database. If not specified, the server will
# ask on startup
; nss-password is not specified by default


@@ -0,0 +1,51 @@
# This is a configuration for the sigul server.
# FIXME: remove my data
[server]
# Host name of the publicly accessible bridge to clients
bridge-hostname: secondary-signer
# Port on which the bridge expects server connections
; bridge-port: 44333
# Maximum accepted size of payload stored on disk
max-file-payload-size: 2073741824
# Maximum accepted size of payload stored in server's memory
max-memory-payload-size: 1048576
# Nickname of the server's certificate in the NSS database specified below
server-cert-nickname: secondary-signer-server
signing-timeout: 4000
[database]
# Path to a SQLite database
; database-path: /var/lib/sigul/server.conf
[gnupg]
# Path to a directory containing GPG configuration and keyrings
gnupg-home: /var/lib/sigul/gnupg
# Default primary key type for newly created keys
gnupg-key-type: RSA
# Default primary key length for newly created keys
gnupg-key-length: 4096
# Default subkey type for newly created keys, empty for no subkey
#gnupg-subkey-type: ELG-E
# Default subkey length for newly created keys if gnupg-subkey-type is not empty
# gnupg-subkey-length: 4096
# Default key usage flags for newly created keys
gnupg-key-usage: encrypt, sign
# Length of key passphrases used for newly created keys
; passphrase-length: 64
[daemon]
# The user to run as
unix-user: sigul
# The group to run as
unix-group: sigul
[nss]
# Path to a directory containing a NSS database
nss-dir: /var/lib/sigul
# Password for accessing the NSS database. If not specified, the server will
# ask on startup
; nss-password is not specified by default


@@ -1,6 +0,0 @@
{% for key, value in virt_info.items() %}
{% if value and 'state' in value %}
{{inventory_hostname}}:{{key}}:{{value['state']}}:{{value['autostart']}}:{{value['nrVirtCpu']}}:{{value['memory']}}
{% else %}
{% endif %}
{% endfor %}


@@ -9,9 +9,9 @@ def invert_fedmsg_policy(groups, vars, env):
"""
if env == 'staging':
hosts = groups['staging'] + groups['fedmsg-qa-network-stg'] + groups['openshift-pseudohosts-stg']
hosts = groups['staging']
else:
hosts = [h for h in groups['all'] if h not in groups['staging'] + groups['openshift-pseudohosts-stg']]
hosts = [h for h in groups['all'] if h not in groups['staging']]
inverted = {}
for host in hosts:
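The diff above simplifies host selection in `invert_fedmsg_policy`: staging gets only the `staging` group, and production gets everything else. A standalone sketch of that selection, with fabricated group names standing in for the real Ansible inventory:

```python
# Sketch of the host-selection logic in the diff above. The inventory
# dict here is fabricated for illustration.

def select_hosts(groups, env):
    """Staging gets the staging group; production gets everything else."""
    if env == "staging":
        return groups["staging"]
    return [h for h in groups["all"] if h not in groups["staging"]]

groups = {"all": ["web01", "web01.stg"], "staging": ["web01.stg"]}
```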


@@ -0,0 +1,315 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# vim: expandtab:tabstop=4:shiftwidth=4
'''
Custom filters for use in openshift-ansible
'''
from ansible import errors
from operator import itemgetter
import pdb
import re
import json
class FilterModule(object):
''' Custom ansible filters '''
@staticmethod
def oo_pdb(arg):
''' This pops you into a pdb instance where arg is the data passed in
from the filter.
Ex: "{{ hostvars | oo_pdb }}"
'''
pdb.set_trace()
return arg
@staticmethod
def get_attr(data, attribute=None):
''' This looks up dictionary attributes of the form a.b.c and returns
the value.
Ex: data = {'a': {'b': {'c': 5}}}
attribute = "a.b.c"
returns 5
'''
if not attribute:
raise errors.AnsibleFilterError("|failed expects attribute to be set")
ptr = data
for attr in attribute.split('.'):
ptr = ptr[attr]
return ptr
@staticmethod
def oo_flatten(data):
''' This filter plugin will flatten a list of lists
'''
if not issubclass(type(data), list):
raise errors.AnsibleFilterError("|failed expects to flatten a List")
return [item for sublist in data for item in sublist]
@staticmethod
def oo_collect(data, attribute=None, filters=None):
''' This takes a list of dict and collects all attributes specified into a
list. If filter is specified then we will include all items that
match _ALL_ of filters. If a dict entry is missing the key in a
filter it will be excluded from the match.
Ex: data = [ {'a':1, 'b':5, 'z': 'z'}, # True, return
{'a':2, 'z': 'z'}, # True, return
{'a':3, 'z': 'z'}, # True, return
{'a':4, 'z': 'b'}, # FAILED, obj['z'] != filters['z']
]
attribute = 'a'
filters = {'z': 'z'}
returns [1, 2, 3]
'''
if not issubclass(type(data), list):
raise errors.AnsibleFilterError("|failed expects to filter on a List")
if not attribute:
raise errors.AnsibleFilterError("|failed expects attribute to be set")
if filters is not None:
if not issubclass(type(filters), dict):
raise errors.AnsibleFilterError("|failed expects filter to be a"
" dict")
retval = [FilterModule.get_attr(d, attribute) for d in data if (
all([d.get(key, None) == filters[key] for key in filters]))]
else:
retval = [FilterModule.get_attr(d, attribute) for d in data]
return retval
@staticmethod
def oo_select_keys(data, keys):
''' This returns a list, which contains the value portions for the keys
Ex: data = { 'a':1, 'b':2, 'c':3 }
keys = ['a', 'c']
returns [1, 3]
'''
if not issubclass(type(data), dict):
raise errors.AnsibleFilterError("|failed expects to filter on a dict")
if not issubclass(type(keys), list):
raise errors.AnsibleFilterError("|failed expects first param is a list")
# Gather up the values for the list of keys passed in
retval = [data[key] for key in keys]
return retval
@staticmethod
def oo_prepend_strings_in_list(data, prepend):
''' This takes a list of strings and prepends a string to each item in the
list
Ex: data = ['cart', 'tree']
prepend = 'apple-'
returns ['apple-cart', 'apple-tree']
'''
if not issubclass(type(data), list):
raise errors.AnsibleFilterError("|failed expects first param is a list")
if not all(isinstance(x, basestring) for x in data):
raise errors.AnsibleFilterError("|failed expects first param is a list"
" of strings")
retval = [prepend + s for s in data]
return retval
@staticmethod
def oo_combine_key_value(data, joiner='='):
'''Take a list of dict in the form of { 'key': 'value'} and
arrange them as a list of strings ['key=value']
'''
if not issubclass(type(data), list):
raise errors.AnsibleFilterError("|failed expects first param is a list")
rval = []
for item in data:
rval.append("%s%s%s" % (item['key'], joiner, item['value']))
return rval
@staticmethod
def oo_ami_selector(data, image_name):
''' This takes a list of amis and an image name and attempts to return
the latest ami.
'''
if not issubclass(type(data), list):
raise errors.AnsibleFilterError("|failed expects first param is a list")
if not data:
return None
else:
if image_name is None or not image_name.endswith('_*'):
ami = sorted(data, key=itemgetter('name'), reverse=True)[0]
return ami['ami_id']
else:
ami_info = [(ami, ami['name'].split('_')[-1]) for ami in data]
ami = sorted(ami_info, key=itemgetter(1), reverse=True)[0][0]
return ami['ami_id']
@staticmethod
def oo_ec2_volume_definition(data, host_type, docker_ephemeral=False):
''' This takes a dictionary of volume definitions and returns a valid ec2
volume definition based on the host_type and the values in the
dictionary.
The dictionary should look similar to this:
{ 'master':
{ 'root':
{ 'volume_size': 10, 'device_type': 'gp2',
'iops': 500
}
},
'node':
{ 'root':
{ 'volume_size': 10, 'device_type': 'io1',
'iops': 1000
},
'docker':
{ 'volume_size': 40, 'device_type': 'gp2',
'iops': 500, 'ephemeral': 'true'
}
}
}
'''
if not issubclass(type(data), dict):
raise errors.AnsibleFilterError("|failed expects first param is a dict")
if host_type not in ['master', 'node', 'etcd']:
raise errors.AnsibleFilterError("|failed expects etcd, master or node"
" as the host type")
root_vol = data[host_type]['root']
root_vol['device_name'] = '/dev/sda1'
root_vol['delete_on_termination'] = True
if root_vol['device_type'] != 'io1':
root_vol.pop('iops', None)
if host_type == 'node':
docker_vol = data[host_type]['docker']
docker_vol['device_name'] = '/dev/xvdb'
docker_vol['delete_on_termination'] = True
if docker_vol['device_type'] != 'io1':
docker_vol.pop('iops', None)
if docker_ephemeral:
docker_vol.pop('device_type', None)
docker_vol.pop('delete_on_termination', None)
docker_vol['ephemeral'] = 'ephemeral0'
return [root_vol, docker_vol]
elif host_type == 'etcd':
etcd_vol = data[host_type]['etcd']
etcd_vol['device_name'] = '/dev/xvdb'
etcd_vol['delete_on_termination'] = True
if etcd_vol['device_type'] != 'io1':
etcd_vol.pop('iops', None)
return [root_vol, etcd_vol]
return [root_vol]
@staticmethod
def oo_split(string, separator=','):
''' This splits the input string into a list
'''
return string.split(separator)
@staticmethod
def oo_filter_list(data, filter_attr=None):
''' This returns a list, which contains all items where filter_attr
evaluates to true
Ex: data = [ { a: 1, b: True },
{ a: 3, b: False },
{ a: 5, b: True } ]
filter_attr = 'b'
returns [ { a: 1, b: True },
{ a: 5, b: True } ]
'''
if not issubclass(type(data), list):
raise errors.AnsibleFilterError("|failed expects to filter on a list")
if not issubclass(type(filter_attr), str):
raise errors.AnsibleFilterError("|failed expects filter_attr is a str")
# Gather up the values for the list of keys passed in
return [x for x in data if x[filter_attr]]
@staticmethod
def oo_parse_heat_stack_outputs(data):
''' Formats the HEAT stack output into a usable form
The goal is to transform something like this:
+---------------+-------------------------------------------------+
| Property | Value |
+---------------+-------------------------------------------------+
| capabilities | [] | |
| creation_time | 2015-06-26T12:26:26Z | |
| description | OpenShift cluster | |
| … | … |
| outputs | [ |
| | { |
| | "output_value": "value_A" |
| | "description": "This is the value of Key_A" |
| | "output_key": "Key_A" |
| | }, |
| | { |
| | "output_value": [ |
| | "value_B1", |
| | "value_B2" |
| | ], |
| | "description": "This is the value of Key_B" |
| | "output_key": "Key_B" |
| | }, |
| | ] |
| parameters | { |
| … | … |
+---------------+-------------------------------------------------+
into something like this:
{
"Key_A": "value_A",
"Key_B": [
"value_B1",
"value_B2"
]
}
'''
# Extract the “outputs” JSON snippet from the pretty-printed array
in_outputs = False
outputs = ''
line_regex = re.compile(r'\|\s*(.*?)\s*\|\s*(.*?)\s*\|')
for line in data['stdout_lines']:
match = line_regex.match(line)
if match:
if match.group(1) == 'outputs':
in_outputs = True
elif match.group(1) != '':
in_outputs = False
if in_outputs:
outputs += match.group(2)
outputs = json.loads(outputs)
# Revamp the “outputs” to put it in the form of a “Key: value” map
revamped_outputs = {}
for output in outputs:
revamped_outputs[output['output_key']] = output['output_value']
return revamped_outputs
def filters(self):
''' returns a mapping of filters to methods '''
return {
"oo_select_keys": self.oo_select_keys,
"oo_collect": self.oo_collect,
"oo_flatten": self.oo_flatten,
"oo_pdb": self.oo_pdb,
"oo_prepend_strings_in_list": self.oo_prepend_strings_in_list,
"oo_ami_selector": self.oo_ami_selector,
"oo_ec2_volume_definition": self.oo_ec2_volume_definition,
"oo_combine_key_value": self.oo_combine_key_value,
"oo_split": self.oo_split,
"oo_filter_list": self.oo_filter_list,
"oo_parse_heat_stack_outputs": self.oo_parse_heat_stack_outputs
}
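In a playbook these are used as Jinja2 filters, e.g. `{{ data | oo_collect('a', {'z': 'z'}) }}`. Standalone recreations of two of the simpler filters (for illustration only; the real ones live in the `FilterModule` above) show the behavior concretely:

```python
# Illustrative standalone copies of oo_flatten and oo_collect.
def oo_flatten(data):
    # Flatten a list of lists into a single list.
    return [item for sublist in data for item in sublist]

def oo_collect(data, attribute, filters=None):
    # Collect one attribute from each dict, keeping only dicts
    # that match ALL filter key/value pairs (missing key = no match).
    if filters is not None:
        return [d[attribute] for d in data
                if all(d.get(k) == filters[k] for k in filters)]
    return [d[attribute] for d in data]

records = [{'a': 1, 'z': 'z'}, {'a': 2, 'z': 'z'}, {'a': 4, 'z': 'b'}]
print(oo_collect(records, 'a', {'z': 'z'}))  # [1, 2]
print(oo_flatten([[1, 2], [3]]))             # [1, 2, 3]
```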


@@ -0,0 +1,79 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# vim: expandtab:tabstop=4:shiftwidth=4
'''
Custom zabbix filters for use in openshift-ansible
'''
import pdb
class FilterModule(object):
''' Custom zabbix ansible filters '''
@staticmethod
def create_data(data, results, key, new_key):
'''Take a dict, filter through results and add results['key'] to dict
'''
new_list = [app[key] for app in results]
data[new_key] = new_list
return data
@staticmethod
def oo_set_zbx_trigger_triggerid(item, trigger_results):
'''Set zabbix trigger id from trigger results
'''
if isinstance(trigger_results, list):
item['triggerid'] = trigger_results[0]['triggerid']
return item
item['triggerid'] = trigger_results['triggerids'][0]
return item
@staticmethod
def oo_set_zbx_item_hostid(item, template_results):
''' Set zabbix host id from template results
'''
if isinstance(template_results, list):
item['hostid'] = template_results[0]['templateid']
return item
item['hostid'] = template_results['templateids'][0]
return item
@staticmethod
def oo_pdb(arg):
''' This pops you into a pdb instance where arg is the data passed in
from the filter.
Ex: "{{ hostvars | oo_pdb }}"
'''
pdb.set_trace()
return arg
@staticmethod
def select_by_name(ans_data, data):
''' Select the zabbix item whose key matches ans_data['name'],
set its params' hostid from ans_data['templateid'], and
return those params (or None if nothing matches)
'''
for zabbix_item in data:
if ans_data['name'] == zabbix_item:
data[zabbix_item]['params']['hostid'] = ans_data['templateid']
return data[zabbix_item]['params']
return None
@staticmethod
def oo_build_zabbix_list_dict(values, string):
''' Build a list of dicts with string as key for each value
'''
rval = []
for value in values:
rval.append({string: value})
return rval
def filters(self):
''' returns a mapping of filters to methods '''
return {
"select_by_name": self.select_by_name,
"oo_set_zbx_item_hostid": self.oo_set_zbx_item_hostid,
"oo_set_zbx_trigger_triggerid": self.oo_set_zbx_trigger_triggerid,
"oo_build_zabbix_list_dict": self.oo_build_zabbix_list_dict,
"create_data": self.create_data,
}
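The two `oo_set_zbx_*` filters exist because the Zabbix API modules return results in two shapes: a list of entity dicts, or a dict keyed by a plural-id list. A minimal standalone copy of one of them (illustrative only) makes the branching concrete:

```python
# Illustrative copy of oo_set_zbx_trigger_triggerid: pull a trigger id
# out of either result shape the Zabbix API modules can return.
def oo_set_zbx_trigger_triggerid(item, trigger_results):
    if isinstance(trigger_results, list):
        # Shape 1: a list of trigger dicts.
        item['triggerid'] = trigger_results[0]['triggerid']
        return item
    # Shape 2: a dict with a 'triggerids' list.
    item['triggerid'] = trigger_results['triggerids'][0]
    return item

print(oo_set_zbx_trigger_triggerid({}, [{'triggerid': '100'}]))   # {'triggerid': '100'}
print(oo_set_zbx_trigger_triggerid({}, {'triggerids': ['200']}))  # {'triggerid': '200'}
```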


@@ -2,6 +2,9 @@
# Handlers for restarting services
#
- name: restart auditd
action: service name=auditd state=restarted
- name: restart apache
command: /usr/local/bin/conditional-restart.sh httpd httpd
@@ -20,9 +23,9 @@
- name: restart fedmsg-hub
command: /usr/local/bin/conditional-restart.sh fedmsg-hub fedmsg-hub
# Note that we're cool with arbitrary restarts on bodhi-backend02, just
# not bodhi-backend01 or bodhi-backend03. 01 and 03 are where the releng/mash
# stuff happens and we don't want to interrupt that.
when: inventory_hostname not in ['bodhi-backend01.phx2.fedoraproject.org', 'bodhi-backend03.phx2.fedoraproject.org']
# not bodhi-backend01. 01 is where the releng/mash stuff happens and we
# don't want to interrupt that.
when: inventory_hostname != 'bodhi-backend01.phx2.fedoraproject.org'
- name: restart fedmsg-irc
command: /usr/local/bin/conditional-restart.sh fedmsg-irc fedmsg-irc
@@ -30,11 +33,8 @@
- name: restart fedmsg-relay
command: /usr/local/bin/conditional-restart.sh fedmsg-relay fedmsg-relay
- name: restart koji-sync-listener
action: service name=koji-sync-listener state=restarted
- name: reload httpd
command: /usr/local/bin/conditional-reload.sh httpd httpd
action: service name=httpd state=reloaded
- name: restart iptables
action: service name=iptables state=restarted
@@ -45,21 +45,45 @@
- name: restart jenkins
action: service name=jenkins state=restarted
- name: restart kojid
action: service name=kojid state=restarted
- name: restart koschei-polling
action: service name=koschei-polling state=restarted
- name: restart koschei-resolver
action: service name=koschei-resolver state=restarted
- name: restart koschei-scheduler
action: service name=koschei-scheduler state=restarted
- name: restart koschei-watcher
action: service name=koschei-watcher state=restarted
- name: restart libvirtd
action: service name=libvirtd state=restarted
- name: restart lighttpd
action: service name=lighttpd state=restarted
- name: restart mailman
action: service name=mailman state=restarted
- name: restart named
action: service name=named state=restarted
- name: restart nfs
action: service name=nfs state=restarted
- name: restart nfslock
action: service name=nfslock state=restarted
- name: restart ntpd
action: service name=ntpd state=restarted
- name: restart openvpn (Fedora)
when: ansible_distribution == "Fedora"
action: service name=openvpn-client@openvpn state=restarted
action: service name=openvpn@openvpn state=restarted
#notify:
#- fix openvpn routing
@@ -71,13 +95,28 @@
- name: restart openvpn (RHEL7)
when: ansible_distribution == "RedHat" and ansible_distribution_major_version|int == 7
action: service name=openvpn-client@openvpn state=restarted
action: service name=openvpn@openvpn state=restarted
#notify:
#- fix openvpn routing
- name: fix openvpn routing
action: shell /etc/openvpn/fix-routes.sh
- name: restart postfix
action: service name=postfix state=restarted
- name: restart rpcbind
action: service name=rpcbind state=restarted
- name: restart rpcidmapd
action: service name=rpcidmapd state=restarted
- name: restart rsyslog
action: service name=rsyslog state=restarted
- name: restart sshd
action: service name=sshd state=restarted
- name: restart xinetd
action: service name=xinetd state=restarted
@@ -87,18 +126,12 @@
- name: restart network
action: service name=network state=restarted
- name: restart unbound
action: service name=unbound state=restarted
- name: rebuild postfix transport
command: /usr/sbin/postmap /etc/postfix/transport
- name: rebuild postfix tls_policy
command: /usr/sbin/postmap /etc/postfix/tls_policy
- name: restart postfix
service: name=postfix state=restarted
- name: reload proxyhttpd
command: /usr/local/bin/proxy-conditional-reload.sh httpd httpd
- name: restart glusterd
service: name=glusterd state=restarted
@@ -131,52 +164,43 @@
ignore_errors: true
when: ansible_virtualization_role == 'host'
- name: restart pagure_ev
service: name=pagure_ev state=restarted
- name: restart fcomm-cache-worker
service: name=fcomm-cache-worker state=restarted
- name: restart haproxy
service: name=haproxy state=restarted
- name: restart varnish
service: name=varnish state=restarted
- name: restart keepalived
service: name=keepalived state=restarted
- name: restart mariadb
service: name=mariadb state=restarted
- name: restart squid
service: name=squid state=restarted
- name: "update ca-trust"
command: /usr/bin/update-ca-trust
- name: restart openstack-keystone
service: name=openstack-keystone state=restarted
- name: restart stunnel
service: name=stunnel state=restarted
- name: restart cinder api
- name: restart cinder
service: name=openstack-cinder-api state=restarted
- name: restart cinder scheduler
service: name=openstack-cinder-scheduler state=restarted
- name: restart cinder volume
service: name=openstack-cinder-volume state=restarted
- name: restart autocloud
service: name=autocloud state=restarted
- name: restart infinoted
service: name=infinoted state=restarted
- name: restart mirrorlist-server
service: name=mirrorlist-server state=restarted
- name: restart NetworkManager
service: name=NetworkManager state=restarted
- name: reload NetworkManager-connections
command: nmcli c reload
- name: restart basset-worker
service: name=basset-worker state=restarted
- name: apply interface-changes
command: nmcli con up {{ item.split()[1] }}
async: 1
poll: 0
with_items:
- "{{ if_uuid.stdout_lines }}"
- name: flush journald tmpfiles to persistent store
command: pkill -f -USR1 systemd-journald
- name: restart idmapd
service: name=nfs-idmapd state=restarted
- name: restart darkserver
service: name=darkserver state=restarted

handlers/semanage.yml (new file)

@@ -0,0 +1,8 @@
- name: semanage dns80
command: /usr/sbin/semanage port -m -t dns_port_t -p tcp 80
- name: semanage dns443
command: /usr/sbin/semanage port -m -t dns_port_t -p tcp 443
- name: semanage dns8953
command: /usr/sbin/semanage port -a -t dns_port_t -p tcp 8953


@@ -2,26 +2,26 @@
# This is the list of clients we backup with rdiff-backup.
#
[backup_clients]
collab03.fedoraproject.org
db01.phx2.fedoraproject.org
db03.phx2.fedoraproject.org
db-datanommer02.phx2.fedoraproject.org
db-fas01.phx2.fedoraproject.org
hosted03.fedoraproject.org
hosted-lists01.fedoraproject.org
batcave01.phx2.fedoraproject.org
infinote.fedoraproject.org
pagure01.fedoraproject.org
people02.fedoraproject.org
people01.fedoraproject.org
pkgs02.phx2.fedoraproject.org
log01.phx2.fedoraproject.org
qadevel.qa.fedoraproject.org:222
db-qa01.qa.fedoraproject.org
db-koji01.phx2.fedoraproject.org
#copr-be.cloud.fedoraproject.org
copr-be.cloud.fedoraproject.org
copr-fe.cloud.fedoraproject.org
copr-keygen.cloud.fedoraproject.org
#copr-dist-git.fedorainfracloud.org
copr-dist-git.fedorainfracloud.org
value01.phx2.fedoraproject.org
taiga.fedorainfracloud.org
taiga.cloud.fedoraproject.org
taskotron01.qa.fedoraproject.org
nuancier01.phx2.fedoraproject.org
magazine2.fedorainfracloud.org
communityblog.fedorainfracloud.org
upstreamfirst.fedorainfracloud.org


@@ -1,3 +1,4 @@
[buildvm]
buildvm-01.phx2.fedoraproject.org
buildvm-02.phx2.fedoraproject.org
@@ -26,140 +27,33 @@ buildvm-24.phx2.fedoraproject.org
buildvm-25.phx2.fedoraproject.org
buildvm-26.phx2.fedoraproject.org
buildvm-27.phx2.fedoraproject.org
buildvm-28.phx2.fedoraproject.org
buildvm-29.phx2.fedoraproject.org
buildvm-30.phx2.fedoraproject.org
buildvm-31.phx2.fedoraproject.org
buildvm-32.phx2.fedoraproject.org
[buildvm-stg]
buildvm-01.stg.phx2.fedoraproject.org
buildvm-02.stg.phx2.fedoraproject.org
buildvm-03.stg.phx2.fedoraproject.org
buildvm-04.stg.phx2.fedoraproject.org
buildvm-05.stg.phx2.fedoraproject.org
[buildvm-ppc64-stg]
buildvm-ppc64-01.stg.ppc.fedoraproject.org
[buildvm-ppc64]
buildvm-ppc64-02.qa.fedoraproject.org
buildvm-ppc64-03.qa.fedoraproject.org
buildvm-ppc64-04.qa.fedoraproject.org
buildvm-ppc64-06.qa.fedoraproject.org
buildvm-ppc64-07.qa.fedoraproject.org
buildvm-ppc64-08.qa.fedoraproject.org
[buildvm-ppc64le-stg]
buildvm-ppc64le-01.stg.ppc.fedoraproject.org
[buildvm-aarch64-stg]
buildvm-aarch64-01.stg.arm.fedoraproject.org
[buildvm-armv7-stg]
buildvm-armv7-01.stg.arm.fedoraproject.org
[buildvm-aarch64]
buildvm-aarch64-01.arm.fedoraproject.org
buildvm-aarch64-02.arm.fedoraproject.org
buildvm-aarch64-03.arm.fedoraproject.org
buildvm-aarch64-04.arm.fedoraproject.org
buildvm-aarch64-05.arm.fedoraproject.org
buildvm-aarch64-06.arm.fedoraproject.org
buildvm-aarch64-07.arm.fedoraproject.org
buildvm-aarch64-08.arm.fedoraproject.org
buildvm-aarch64-09.arm.fedoraproject.org
buildvm-aarch64-10.arm.fedoraproject.org
buildvm-aarch64-11.arm.fedoraproject.org
buildvm-aarch64-12.arm.fedoraproject.org
buildvm-aarch64-13.arm.fedoraproject.org
buildvm-aarch64-14.arm.fedoraproject.org
buildvm-aarch64-15.arm.fedoraproject.org
buildvm-aarch64-16.arm.fedoraproject.org
# these VMs are too slow to use; cause still under investigation
#buildvm-aarch64-17.arm.fedoraproject.org
buildvm-aarch64-18.arm.fedoraproject.org
buildvm-aarch64-19.arm.fedoraproject.org
buildvm-aarch64-20.arm.fedoraproject.org
buildvm-aarch64-21.arm.fedoraproject.org
buildvm-aarch64-22.arm.fedoraproject.org
buildvm-aarch64-23.arm.fedoraproject.org
buildvm-aarch64-24.arm.fedoraproject.org
[buildvm-armv7]
buildvm-armv7-01.arm.fedoraproject.org
buildvm-armv7-02.arm.fedoraproject.org
buildvm-armv7-03.arm.fedoraproject.org
buildvm-armv7-04.arm.fedoraproject.org
buildvm-armv7-05.arm.fedoraproject.org
buildvm-armv7-06.arm.fedoraproject.org
buildvm-armv7-07.arm.fedoraproject.org
buildvm-armv7-08.arm.fedoraproject.org
buildvm-armv7-09.arm.fedoraproject.org
buildvm-armv7-10.arm.fedoraproject.org
buildvm-armv7-11.arm.fedoraproject.org
buildvm-armv7-12.arm.fedoraproject.org
buildvm-armv7-13.arm.fedoraproject.org
buildvm-armv7-14.arm.fedoraproject.org
buildvm-armv7-15.arm.fedoraproject.org
buildvm-armv7-16.arm.fedoraproject.org
# these VMs are too slow to use; cause still under investigation
#buildvm-armv7-17.arm.fedoraproject.org
buildvm-armv7-18.arm.fedoraproject.org
buildvm-armv7-19.arm.fedoraproject.org
buildvm-armv7-20.arm.fedoraproject.org
buildvm-armv7-21.arm.fedoraproject.org
buildvm-armv7-22.arm.fedoraproject.org
buildvm-armv7-23.arm.fedoraproject.org
buildvm-armv7-24.arm.fedoraproject.org
[buildvm-s390]
buildvm-s390-01.s390.fedoraproject.org
[buildvm-s390x]
buildvm-s390x-01.s390.fedoraproject.org
buildvm-s390x-02.s390.fedoraproject.org
buildvm-s390x-03.s390.fedoraproject.org
buildvm-s390x-04.s390.fedoraproject.org
buildvm-s390x-05.s390.fedoraproject.org
buildvm-s390x-06.s390.fedoraproject.org
buildvm-s390x-07.s390.fedoraproject.org
buildvm-s390x-08.s390.fedoraproject.org
buildvm-s390x-09.s390.fedoraproject.org
buildvm-s390x-10.s390.fedoraproject.org
buildvm-s390x-11.s390.fedoraproject.org
buildvm-s390x-12.s390.fedoraproject.org
buildvm-s390x-13.s390.fedoraproject.org
buildvm-s390x-14.s390.fedoraproject.org
buildvm-s390x-15.s390.fedoraproject.org
[buildvm-ppc64le]
buildvm-ppc64le-02.qa.fedoraproject.org
buildvm-ppc64le-03.qa.fedoraproject.org
buildvm-ppc64le-04.qa.fedoraproject.org
buildvm-ppc64le-06.qa.fedoraproject.org
buildvm-ppc64le-07.qa.fedoraproject.org
buildvm-ppc64le-08.qa.fedoraproject.org
[buildvmhost]
buildvmhost-01.phx2.fedoraproject.org
buildvmhost-02.phx2.fedoraproject.org
buildvmhost-03.phx2.fedoraproject.org
buildvmhost-04.phx2.fedoraproject.org
buildvmhost-10.phx2.fedoraproject.org
buildvmhost-11.phx2.fedoraproject.org
buildvmhost-12.phx2.fedoraproject.org
ppc8-01.ppc.fedoraproject.org
ppc8-02.ppc.fedoraproject.org
ppc8-03.ppc.fedoraproject.org
ppc8-04.ppc.fedoraproject.org
aarch64-c01n1.arm.fedoraproject.org
aarch64-c02n1.arm.fedoraproject.org
aarch64-c03n1.arm.fedoraproject.org
aarch64-c04n1.arm.fedoraproject.org
aarch64-c05n1.arm.fedoraproject.org
aarch64-c06n1.arm.fedoraproject.org
aarch64-c07n1.arm.fedoraproject.org
aarch64-c08n1.arm.fedoraproject.org
aarch64-c09n1.arm.fedoraproject.org
aarch64-c10n1.arm.fedoraproject.org
aarch64-c11n1.arm.fedoraproject.org
aarch64-c12n1.arm.fedoraproject.org
aarch64-c13n1.arm.fedoraproject.org
aarch64-c14n1.arm.fedoraproject.org
aarch64-c15n1.arm.fedoraproject.org
aarch64-c16n1.arm.fedoraproject.org
aarch64-c17n1.arm.fedoraproject.org
aarch64-c18n1.arm.fedoraproject.org
aarch64-c19n1.arm.fedoraproject.org
aarch64-c20n1.arm.fedoraproject.org
aarch64-c21n1.arm.fedoraproject.org
aarch64-c22n1.arm.fedoraproject.org
aarch64-c23n1.arm.fedoraproject.org
aarch64-c24n1.arm.fedoraproject.org
aarch64-c25n1.arm.fedoraproject.org
ppc8-02.qa.fedoraproject.org
ppc8-03.qa.fedoraproject.org
ppc8-04.qa.fedoraproject.org
[buildhw]
buildhw-01.phx2.fedoraproject.org
@@ -172,83 +66,114 @@ buildhw-07.phx2.fedoraproject.org
buildhw-08.phx2.fedoraproject.org
buildhw-09.phx2.fedoraproject.org
buildhw-10.phx2.fedoraproject.org
#buildhw-11.phx2.fedoraproject.org
#buildhw-12.phx2.fedoraproject.org
buildhw-aarch64-01.arm.fedoraproject.org
buildhw-aarch64-02.arm.fedoraproject.org
buildhw-aarch64-03.arm.fedoraproject.org
buildhw-11.phx2.fedoraproject.org
buildhw-12.phx2.fedoraproject.org
#
# These are primary koji builders.
#
[buildvm-ppc64]
buildvm-ppc64-01.ppc.fedoraproject.org
buildvm-ppc64-02.ppc.fedoraproject.org
buildvm-ppc64-03.ppc.fedoraproject.org
buildvm-ppc64-04.ppc.fedoraproject.org
buildvm-ppc64-05.ppc.fedoraproject.org
buildvm-ppc64-06.ppc.fedoraproject.org
buildvm-ppc64-07.ppc.fedoraproject.org
buildvm-ppc64-08.ppc.fedoraproject.org
buildvm-ppc64-09.ppc.fedoraproject.org
buildvm-ppc64-10.ppc.fedoraproject.org
buildvm-ppc64-11.ppc.fedoraproject.org
buildvm-ppc64-12.ppc.fedoraproject.org
buildvm-ppc64-13.ppc.fedoraproject.org
#
# These are primary koji builders.
#
[buildvm-ppc64le]
buildvm-ppc64le-01.ppc.fedoraproject.org
buildvm-ppc64le-02.ppc.fedoraproject.org
buildvm-ppc64le-03.ppc.fedoraproject.org
buildvm-ppc64le-04.ppc.fedoraproject.org
buildvm-ppc64le-05.ppc.fedoraproject.org
buildvm-ppc64le-06.ppc.fedoraproject.org
buildvm-ppc64le-07.ppc.fedoraproject.org
buildvm-ppc64le-08.ppc.fedoraproject.org
buildvm-ppc64le-09.ppc.fedoraproject.org
buildvm-ppc64le-10.ppc.fedoraproject.org
buildvm-ppc64le-11.ppc.fedoraproject.org
buildvm-ppc64le-12.ppc.fedoraproject.org
buildvm-ppc64le-13.ppc.fedoraproject.org
#
# These are secondary arch builders.
#
[buildppc]
buildppc-01.ppc.fedoraproject.org
buildppc-02.ppc.fedoraproject.org
buildppc-03.ppc.fedoraproject.org
buildppc-04.ppc.fedoraproject.org
#buildppc-01.phx2.fedoraproject.org
#buildppc-02.phx2.fedoraproject.org
buildppc-03.phx2.fedoraproject.org
buildppc-04.phx2.fedoraproject.org
#
# These are secondary arch builders.
#
[buildppcle]
buildppcle-01.ppc.fedoraproject.org
buildppcle-02.ppc.fedoraproject.org
buildppcle-03.ppc.fedoraproject.org
buildppcle-04.ppc.fedoraproject.org
buildppcle-01.phx2.fedoraproject.org
buildppcle-02.phx2.fedoraproject.org
buildppcle-03.phx2.fedoraproject.org
buildppcle-04.phx2.fedoraproject.org
[buildppc64]
ppc8-01.qa.fedoraproject.org
[buildaarch64]
aarch64-02a.arm.fedoraproject.org
# Marked DEAD in pdu
#aarch64-03a.arm.fedoraproject.org
aarch64-03a.arm.fedoraproject.org
aarch64-04a.arm.fedoraproject.org
aarch64-05a.arm.fedoraproject.org
aarch64-06a.arm.fedoraproject.org
aarch64-07a.arm.fedoraproject.org
aarch64-08a.arm.fedoraproject.org
aarch64-09a.arm.fedoraproject.org
aarch64-10a.arm.fedoraproject.org
#aarch64-11a.arm.fedoraproject.org
#aarch64-12a.arm.fedoraproject.org
[bkernel]
bkernel01.phx2.fedoraproject.org
bkernel02.phx2.fedoraproject.org
[buildarm:children]
arm01
arm02
arm04
#
# These are secondary arch builders.
#
[arm01]
# 00 and 01 and 02 are in use as releng and retrace instances
#arm01-releng00.arm.fedoraproject.org
#arm01-retrace01.arm.fedoraproject.org
#arm01-releng02.arm.fedoraproject.org
arm01-builder03.arm.fedoraproject.org
arm01-builder04.arm.fedoraproject.org
arm01-builder05.arm.fedoraproject.org
arm01-builder06.arm.fedoraproject.org
arm01-builder07.arm.fedoraproject.org
arm01-builder08.arm.fedoraproject.org
arm01-builder09.arm.fedoraproject.org
arm01-builder10.arm.fedoraproject.org
# weird drive issue, needs chassis power cycle.
#arm01-builder11.arm.fedoraproject.org
arm01-builder12.arm.fedoraproject.org
arm01-builder13.arm.fedoraproject.org
arm01-builder14.arm.fedoraproject.org
arm01-builder15.arm.fedoraproject.org
arm01-builder16.arm.fedoraproject.org
arm01-builder17.arm.fedoraproject.org
arm01-builder18.arm.fedoraproject.org
arm01-builder19.arm.fedoraproject.org
arm01-builder20.arm.fedoraproject.org
arm01-builder21.arm.fedoraproject.org
arm01-builder22.arm.fedoraproject.org
arm01-builder23.arm.fedoraproject.org
[arm-stg]
arm01-builder22.arm.fedoraproject.org
arm01-builder23.arm.fedoraproject.org
#
# These are primary arch builders.
#
[arm02]
arm02-builder00.arm.fedoraproject.org
arm02-builder01.arm.fedoraproject.org
arm02-builder02.arm.fedoraproject.org
arm02-builder03.arm.fedoraproject.org
arm02-builder04.arm.fedoraproject.org
arm02-builder05.arm.fedoraproject.org
arm02-builder06.arm.fedoraproject.org
arm02-builder07.arm.fedoraproject.org
arm02-builder08.arm.fedoraproject.org
arm02-builder09.arm.fedoraproject.org
arm02-builder10.arm.fedoraproject.org
arm02-builder11.arm.fedoraproject.org
arm02-builder12.arm.fedoraproject.org
arm02-builder13.arm.fedoraproject.org
arm02-builder14.arm.fedoraproject.org
arm02-builder15.arm.fedoraproject.org
arm02-builder16.arm.fedoraproject.org
arm02-builder17.arm.fedoraproject.org
arm02-builder18.arm.fedoraproject.org
arm02-builder19.arm.fedoraproject.org
arm02-builder20.arm.fedoraproject.org
arm02-builder21.arm.fedoraproject.org
arm02-builder22.arm.fedoraproject.org
arm02-builder23.arm.fedoraproject.org
#
# These are misc
#
[arm03]
# These are in use as arm03-releng00 - 03
#arm03-builder00.arm.fedoraproject.org
#arm03-builder01.arm.fedoraproject.org
#arm03-builder02.arm.fedoraproject.org
@@ -277,43 +202,49 @@ bkernel02.phx2.fedoraproject.org
#arm03-builder22.arm.fedoraproject.org
#arm03-builder23.arm.fedoraproject.org
[arm04]
arm04-builder00.arm.fedoraproject.org
arm04-builder01.arm.fedoraproject.org
arm04-builder02.arm.fedoraproject.org
arm04-builder03.arm.fedoraproject.org
arm04-builder04.arm.fedoraproject.org
arm04-builder05.arm.fedoraproject.org
arm04-builder06.arm.fedoraproject.org
arm04-builder07.arm.fedoraproject.org
arm04-builder08.arm.fedoraproject.org
arm04-builder09.arm.fedoraproject.org
arm04-builder10.arm.fedoraproject.org
arm04-builder11.arm.fedoraproject.org
arm04-builder12.arm.fedoraproject.org
arm04-builder13.arm.fedoraproject.org
arm04-builder14.arm.fedoraproject.org
arm04-builder15.arm.fedoraproject.org
arm04-builder16.arm.fedoraproject.org
arm04-builder17.arm.fedoraproject.org
arm04-builder18.arm.fedoraproject.org
arm04-builder19.arm.fedoraproject.org
arm04-builder20.arm.fedoraproject.org
arm04-builder21.arm.fedoraproject.org
arm04-builder22.arm.fedoraproject.org
#arm04-builder23.arm.fedoraproject.org
# These hosts get the runroot plugin installed.
# They should be added to their own 'compose' channel in the koji db
# .. and they should not appear in the default channel for builds.
[runroot]
buildvm-01.stg.phx2.fedoraproject.org
buildvm-02.stg.phx2.fedoraproject.org
buildvm-01.phx2.fedoraproject.org
buildhw-01.phx2.fedoraproject.org
buildvm-aarch64-01.arm.fedoraproject.org
buildvm-aarch64-02.arm.fedoraproject.org
buildvm-armv7-01.arm.fedoraproject.org
buildvm-armv7-02.arm.fedoraproject.org
buildvm-armv7-03.arm.fedoraproject.org
aarch64-02a.arm.fedoraproject.org
buildvm-ppc64-01.ppc.fedoraproject.org
buildvm-ppc64-02.ppc.fedoraproject.org
buildvm-ppc64le-01.ppc.fedoraproject.org
buildvm-ppc64le-02.ppc.fedoraproject.org
buildvm-s390x-01.s390.fedoraproject.org
arm04-builder00.arm.fedoraproject.org
arm04-builder01.arm.fedoraproject.org
[builders:children]
buildhw
buildvm
buildvm-aarch64
buildvm-armv7
buildvm-ppc64
buildvm-ppc64le
buildppc
buildppcle
buildarm
buildaarch64
buildvm-s390
buildvm-s390x
bkernel
[builders-stg:children]
buildvm-stg
buildvm-ppc64-stg
buildvm-ppc64le-stg
buildvm-aarch64-stg
buildvm-armv7-stg
buildppc64


@@ -1,85 +0,0 @@
[cloud]
ansiblemagazine.fedorainfracloud.org
arm03-packager00.cloud.fedoraproject.org
arm03-packager01.cloud.fedoraproject.org
arm03-qa00.cloud.fedoraproject.org
arm03-qa01.cloud.fedoraproject.org
artboard.fedorainfracloud.org
cloud-noc01.cloud.fedoraproject.org
commops.fedorainfracloud.org
communityblog.fedorainfracloud.org
copr-be.cloud.fedoraproject.org
copr-be-dev.cloud.fedoraproject.org
copr-dist-git-dev.fedorainfracloud.org
copr-dist-git.fedorainfracloud.org
copr-fe.cloud.fedoraproject.org
copr-fe-dev.cloud.fedoraproject.org
copr-keygen.cloud.fedoraproject.org
copr-keygen-dev.cloud.fedoraproject.org
darkserver-dev.fedorainfracloud.org
developer.fedorainfracloud.org
eclipse.fedorainfracloud.org
el6-test.fedorainfracloud.org
el7-test.fedorainfracloud.org
f25-test.fedorainfracloud.org
f26-test.fedorainfracloud.org
f27-test.fedorainfracloud.org
faitout.fedorainfracloud.org
fas2-dev.fedorainfracloud.org
fas3-dev.fedorainfracloud.org
#fed-cloud01.cloud.fedoraproject.org
#fed-cloud02.cloud.fedoraproject.org
fed-cloud03.cloud.fedoraproject.org
fed-cloud04.cloud.fedoraproject.org
fed-cloud05.cloud.fedoraproject.org
fed-cloud06.cloud.fedoraproject.org
fed-cloud07.cloud.fedoraproject.org
fed-cloud08.cloud.fedoraproject.org
fed-cloud09.cloud.fedoraproject.org
fed-cloud10.cloud.fedoraproject.org
fed-cloud11.cloud.fedoraproject.org
fed-cloud12.cloud.fedoraproject.org
fed-cloud13.cloud.fedoraproject.org
fed-cloud14.cloud.fedoraproject.org
fed-cloud15.cloud.fedoraproject.org
#fed-cloud16.cloud.fedoraproject.org
#fed-cloud-ppc01.cloud.fedoraproject.org
fed-cloud-ppc02.cloud.fedoraproject.org
fedimg-dev.fedorainfracloud.org
fedora-bootstrap.fedorainfracloud.org
glittergallery-dev.fedorainfracloud.org
grafana.cloud.fedoraproject.org
graphite.fedorainfracloud.org
hubs-dev.fedorainfracloud.org
iddev.fedorainfracloud.org
insim.fedorainfracloud.org
java-deptools.fedorainfracloud.org
jenkins.fedorainfracloud.org
jenkins-slave-el6.fedorainfracloud.org
jenkins-slave-el7.fedorainfracloud.org
jenkins-slave-f26.fedorainfracloud.org
jenkins-slave-f25.fedorainfracloud.org
jenkins-slave-f25-ppc64le.fedorainfracloud.org
lists-dev.fedorainfracloud.org
magazine2.fedorainfracloud.org
modernpaste.fedorainfracloud.org
modularity.fedorainfracloud.org
modularity2.fedorainfracloud.org
ppc64le-test.fedorainfracloud.org
ppc64-test.fedorainfracloud.org
rawhide-test.fedorainfracloud.org
regcfp2.fedorainfracloud.org
respins.fedorainfracloud.org
shumgrepper-dev.fedorainfracloud.org
taiga.fedorainfracloud.org
taigastg.fedorainfracloud.org
testdays.fedorainfracloud.org
twisted-fedora24-1.fedorainfracloud.org
twisted-fedora24-2.fedorainfracloud.org
twisted-fedora25-1.fedorainfracloud.org
twisted-fedora25-2.fedorainfracloud.org
twisted-fedora26-1.fedorainfracloud.org
twisted-fedora26-2.fedorainfracloud.org
twisted-rhel7-1.fedorainfracloud.org
twisted-rhel7-2.fedorainfracloud.org
upstreamfirst.fedorainfracloud.org

@@ -1,37 +1,12 @@
---
#######
# BEGIN: Ansible roles_path variables
#
# Background/reference about external repos pulled in:
# https://pagure.io/fedora-infrastructure/issue/5476
#
ansible_base: /srv/web/infra
# Path to the openshift-ansible checkout as external git repo brought into
# Fedora Infra
openshift_ansible: /srv/web/infra/openshift-ansible/
#
# END: Ansible roles_path variables
#######
freezes: true
# most of our systems are in phx2
datacenter: phx2
postfix_group: "none"
# usually we do not want to enable nested virt, only on some virthosts
nested: false
# most of our systems are 64bit.
# Used to install various nagios scripts and the like.
libdir: /usr/lib64
# Most EL systems need default EPEL repos.
# Some systems (notably fed-cloud*) need to get their own
# EPEL files because EPEL overrides packages in their core repos.
use_default_epel: true
# example of ports for default iptables
# tcp_ports: [ 22, 80, 443 ]
# udp_ports: [ 110, 1024, 2049 ]
@@ -51,105 +26,28 @@ mem_size: 2048
num_cpus: 2
lvm_size: 20000
# Default netmask. Almost all our phx2 nets are /24's with the
# exception of 10.5.124.128/25. Almost all of our non phx2 sites are
# less than a /24.
eth0_nm: 255.255.255.0
eth1_nm: 255.255.255.0
br0_nm: 255.255.255.0
br1_nm: 255.255.255.0
# Default to managing the network, we want to not do this on select hosts (like cloud nodes)
ansible_ifcfg_blacklist: false
#
# The default virt-install works for rhel7 or fedora with 1 nic
#
virt_install_command: "{{ virt_install_command_one_nic }}"
# default virt install command is for a single nic-device
# define in another group file for more nics (see buildvm)
#virt_install_command: /usr/sbin/virt-install -n {{ inventory_hostname }} -r {{ mem_size }}
# --disk {{ volgroup }}/{{ inventory_hostname }}
# --vcpus={{ num_cpus }} -l {{ ks_repo }} -x
# "ksdevice=eth0 ks={{ ks_url }} ip={{ eth0_ip }} netmask={{ nm }}
# gateway={{ gw }} dns={{ dns }} console=tty0 console=ttyS0
# hostname={{ inventory_hostname }}"
# --network=bridge=br0 --autostart --noautoconsole
main_bridge: br0
nfs_bridge: br1
virt_install_command_one_nic: virt-install -n {{ inventory_hostname }}
--memory={{ mem_size }},maxmemory={{ max_mem_size }} --memballoon virtio
virt_install_command: virt-install -n {{ inventory_hostname }} -r {{ mem_size }}
--disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }}
--vcpus={{ num_cpus }},maxvcpus={{ max_cpu }} -l {{ ks_repo }} -x
'net.ifnames=0 ksdevice=eth0 ks={{ ks_url }} console=tty0 console=ttyS0
--vcpus={{ num_cpus }} -l {{ ks_repo }} -x
'ksdevice=eth0 ks={{ ks_url }} console=tty0 console=ttyS0
hostname={{ inventory_hostname }} nameserver={{ dns }}
ip={{ eth0_ip }}::{{ gw }}:{{ nm }}:{{ inventory_hostname }}:eth0:none'
--network bridge={{ main_bridge }},model=virtio
--autostart --noautoconsole --watchdog default
virt_install_command_two_nic: virt-install -n {{ inventory_hostname }}
--memory={{ mem_size }},maxmemory={{ max_mem_size }} --memballoon virtio
--disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }}
--vcpus={{ num_cpus }},maxvcpus={{ max_cpu }} -l {{ ks_repo }} -x
'net.ifnames=0 ksdevice=eth0 ks={{ ks_url }} console=tty0 console=ttyS0
hostname={{ inventory_hostname }} nameserver={{ dns }}
ip={{ eth0_ip }}::{{ gw }}:{{ nm }}:{{ inventory_hostname }}:eth0:none
ip={{ eth1_ip }}:::{{ nm }}:{{ inventory_hostname }}-nfs:eth1:none'
--network bridge={{ main_bridge }},model=virtio --network=bridge={{ nfs_bridge }},model=virtio
--autostart --noautoconsole --watchdog default
virt_install_command_aarch64_one_nic: virt-install -n {{ inventory_hostname }}
--memory={{ mem_size }},maxmemory={{ max_mem_size }} --memballoon virtio
--disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }}
--vcpus={{ num_cpus }},maxvcpus={{ max_cpu }} -l {{ ks_repo }} -x
'net.ifnames=0 ksdevice=eth0 ks={{ ks_url }} console=tty0 console=ttyAMA0
hostname={{ inventory_hostname }} nameserver={{ dns }}
ip={{ eth0_ip }}::{{ gw }}:{{ nm }}:{{ inventory_hostname }}:eth0:none'
--network bridge={{ main_bridge }},model=virtio
--network bridge=br0,model=virtio
--autostart --noautoconsole
virt_install_command_aarch64_two_nic: virt-install -n {{ inventory_hostname }}
--memory={{ mem_size }},maxmemory={{ max_mem_size }} --memballoon virtio
--disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }}
--vcpus={{ num_cpus }},maxvcpus={{ max_cpu }} -l {{ ks_repo }} -x
'net.ifnames=0 ksdevice=eth0 ks={{ ks_url }} console=tty0 console=ttyAMA0
hostname={{ inventory_hostname }} nameserver={{ dns }}
ip={{ eth0_ip }}::{{ gw }}:{{ nm }}:{{ inventory_hostname }}:eth0:none
ip={{ eth1_ip }}:::{{ nm }}:{{ inventory_hostname }}-nfs:eth1:none'
--network bridge={{ main_bridge }},model=virtio --network=bridge={{ nfs_bridge }},model=virtio
--autostart --noautoconsole
virt_install_command_armv7_one_nic: virt-install -n {{ inventory_hostname }} --arch armv7l
--memory={{ mem_size }},maxmemory={{ max_mem_size }} --memballoon virtio
--disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }}
--vcpus={{ num_cpus }},maxvcpus={{ max_cpu }} -l {{ ks_repo }} -x
'net.ifnames=0 ksdevice=eth0 ks={{ ks_url }} console=tty0 console=ttyAMA0
hostname={{ inventory_hostname }} nameserver={{ dns }}
ip={{ eth0_ip }}::{{ gw }}:{{ nm }}:{{ inventory_hostname }}:eth0:none'
--network bridge={{ main_bridge }},model=virtio
--autostart --noautoconsole
virt_install_command_rhel6: virt-install -n {{ inventory_hostname }}
--memory={{ mem_size }},maxmemory={{ max_mem_size }}
--disk bus=virtio,path={{ volgroup }}/{{ inventory_hostname }}
--vcpus={{ num_cpus }},maxvcpus={{ max_cpu }} -l {{ ks_repo }} -x
"ksdevice=eth0 ks={{ ks_url }} ip={{ eth0_ip }} netmask={{ nm }}
gateway={{ gw }} dns={{ dns }} console=tty0 console=ttyS0
hostname={{ inventory_hostname }}"
--network=bridge=br0 --autostart --noautoconsole --watchdog default
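The templates above are selected per group by overriding `virt_install_command`; as the comment near the top of the file notes, a group that needs a second (NFS-side) NIC points the variable at the two-NIC variant and supplies the extra values that template references. A minimal sketch, assuming a hypothetical group file and addresses:

```yaml
# group_vars/buildvm-example (hypothetical group file, not from this diff)
virt_install_command: "{{ virt_install_command_two_nic }}"
eth1_ip: 10.5.127.41   # consumed by the template's second ip=...:eth1:none argument
nfs_bridge: br1        # bridge used by the template's second --network argument
```

The same pattern applies to the aarch64 and armv7 variants, which differ only in the serial console (`ttyAMA0`) and architecture flags.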
max_mem_size: "{{ mem_size * 5 }}"
max_cpu: "{{ num_cpus * 5 }}"
# This is the wildcard certname for our proxies. It has a different name for
# the staging group and is used in the proxies.yml playbook.
wildcard_cert_name: wildcard-2017.fedoraproject.org
wildcard_crt_file: wildcard-2017.fedoraproject.org.cert
wildcard_key_file: wildcard-2017.fedoraproject.org.key
wildcard_int_file: wildcard-2017.fedoraproject.org.intermediate.cert
# This is the openshift wildcard cert. Until it exists set it equal to wildcard
os_wildcard_cert_name: wildcard-2017.app.os.fedoraproject.org
os_wildcard_crt_file: wildcard-2017.app.os.fedoraproject.org.cert
os_wildcard_key_file: wildcard-2017.app.os.fedoraproject.org.key
os_wildcard_int_file: wildcard-2017.app.os.fedoraproject.org.intermediate.cert
# Everywhere, always, we should sign messages and validate signatures.
# However, we allow individual hosts and groups to override this. Use this very
# carefully, and never in production (good for testing stuff in staging).
fedmsg_sign_messages: True
fedmsg_validate_signatures: True
wildcard_cert_name: wildcard-2014.fedoraproject.org
# By default, nodes get no fedmsg certs. They need to declare them explicitly.
fedmsg_certs: []
@@ -157,10 +55,6 @@ fedmsg_certs: []
# By default, fedmsg should not log debug info. Groups can override this.
fedmsg_loglevel: INFO
# By default, fedmsg sends error logs to sysadmin-datanommer-members@fp.o.
fedmsg_error_recipients:
- sysadmin-datanommer-members@fedoraproject.org
# By default, fedmsg hosts are in passive mode. External hosts are typically
# active.
fedmsg_active: False
@@ -169,9 +63,6 @@ fedmsg_active: False
fedmsg_prefix: org.fedoraproject
fedmsg_env: prod
# Amount of time to wait for connections after a socket is first established.
fedmsg_post_init_sleep: 1.0
# A special flag that, when set to true, will disconnect the host from the
# global fedmsg-relay instance and set it up with its own local one. You can
# temporarily set this to true for a specific host to do some debugging -- so
@@ -198,19 +89,19 @@ nrpe_procs_crit: 300
nrpe_check_postfix_queue_warn: 2
nrpe_check_postfix_queue_crit: 5
# env is staging or production, we default it to production here.
env: production
env_suffix:
# nfs mount options, override at the group/host level
nfs_mount_opts: "ro,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3"
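For reference, this option string is the fourth field of a standard NFS mount definition; in `/etc/fstab` form the default would look like the sketch below (server and paths are hypothetical, not taken from this diff):

```
# hypothetical fstab entry using the default nfs_mount_opts above
nfs01.example.phx2.fedoraproject.org:/export/data  /mnt/data  nfs  ro,hard,bg,intr,noatime,nodev,nosuid,nfsvers=3  0 0
```

Groups that need write access override `nfs_mount_opts` with `rw` in place of `ro`, as the autosign group file later in this diff does.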
# by default set become to false here. We can override it as needed.
# Note that if become is true, you need to unset requiretty for
# ssh controlpersist to work.
become: false
# by default set sudo to false here. We can override it as needed.
# Note that if sudo is true, you need to unset requiretty for
# ssh controlpersist to work.
sudo: false
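The requiretty caveat mentioned in the comments above is the stock sudoers default that refuses sudo without an attached terminal, which breaks ansible over SSH ControlPersist. A minimal sketch of the kind of drop-in that unsets it (the file name is hypothetical):

```
# /etc/sudoers.d/ansible (hypothetical drop-in) -- allow sudo without a tty
# so ansible's ControlPersist connections keep working
Defaults !requiretty
```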
# default the root_auth_users to nothing.
# This should be set for cloud instances in their host or group vars.
root_auth_users: ''
@@ -226,49 +117,3 @@ csi_relationship: |
* What hosts/services rely on this?
To update this text, add the csi_* vars to group_vars/ in ansible.
#
# say if we want the apache role dependency for mod_wsgi or not
# In some cases we want mod_wsgi and no apache (for python3 httpaio stuff)
#
wsgi_wants_apache: true
# IPA settings
additional_host_keytabs: []
ipa_server: ipa01.phx2.fedoraproject.org
ipa_realm: FEDORAPROJECT.ORG
ipa_admin_password: "{{ ipa_prod_admin_password }}"
# Normal default sshd port is 22
sshd_port: 22
# List of names under which the host is available
ssh_hostnames: []
# assume collectd apache
collectd_apache: true
# assume vpn is false
vpn: False
# assume createrepo is true and this builder has the koji nfs mount to do that
createrepo: True
# Nagios global variables
nagios_Check_Services:
nrpe: true
sshd: true
named: false
dhcpd: false
httpd: false
swap: true
# Set variable if we want to use our global iptables defaults
# Some things need to set their own.
baseiptables: True
# Most of our machines have manual resolv.conf files
# These settings are for machines where NM is supposed to control resolv.conf.
nm_controlled_resolv: False
dns1: "10.5.126.21"
dns2: "10.5.126.22"

@@ -16,20 +16,15 @@ custom_rules: [
# No other ports open. no web service running here.
#tcp_ports: []
fas_client_groups: sysadmin-noc,sysadmin-veteran
fas_client_groups: sysadmin-noc
freezes: false
# Don't use testing repos in production
testing: False
# These are consumed by a task in roles/fedmsg/base/main.yml
fedmsg_certs:
- service: shell
owner: root
group: sysadmin
can_send:
- logger.log
- service: anitya
owner: root
group: fedmsg

@@ -18,20 +18,15 @@ custom_rules: [
'-A INPUT -p tcp -m tcp -s 140.211.169.230 --dport 9941 -j ACCEPT',
]
fas_client_groups: sysadmin-noc,sysadmin-web,sysadmin-veteran
# Don't use testing repos in production
testing: False
fas_client_groups: sysadmin-noc,sysadmin-web
freezes: false
vpn: true
# These are consumed by a task in roles/fedmsg/base/main.yml
fedmsg_certs:
- service: shell
owner: root
group: sysadmin
can_send:
- logger.log
- service: anitya
owner: root
group: apache
@@ -42,8 +37,6 @@ fedmsg_certs:
- anitya.project.add
- anitya.project.add.tried
- anitya.project.edit
- anitya.project.flag
- anitya.project.flag.set
- anitya.project.map.new
- anitya.project.map.remove
- anitya.project.map.update

@@ -4,4 +4,3 @@ freezes: false
sudoers: "{{ private }}/files/sudo/arm-packager-sudoers"
sudoers_main: nopasswd
host_group: cloud
ansible_ifcfg_blacklist: true

@@ -5,4 +5,3 @@ sudoers: "{{ private }}/files/sudo/arm-qa-sudoers"
sudoers_main: nopasswd
libdir: /usr/lib
host_group: cloud
ansible_ifcfg_blacklist: true

@@ -7,13 +7,13 @@ num_cpus: 2
tcp_ports: [ 80, 443,
# This port is required by gluster
6996,
# These 12 ports are used by fedmsg. One for each wsgi thread.
3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008, 3009, 3010, 3011, 3012]
# These 8 ports are used by fedmsg. One for each wsgi thread.
3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007]
# Needed for rsync from log01 for logs.
custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ]
fas_client_groups: sysadmin-noc,sysadmin-ask,fi-apprentice,sysadmin-veteran
fas_client_groups: sysadmin-noc,sysadmin-ask,fi-apprentice
freezes: false
@@ -22,8 +22,6 @@ fedmsg_certs:
- service: shell
owner: root
group: sysadmin
can_send:
- logger.log
- service: askbot
owner: root
group: apache
@@ -34,7 +32,6 @@ fedmsg_certs:
- askbot.post.flag_offensive.delete
- askbot.tag.update
virt_install_command: "{{ virt_install_command_rhel6 }}"
# For the MOTD
csi_security_category: Low

@@ -13,7 +13,7 @@ tcp_ports: [ 80, 443,
# Needed for rsync from log01 for logs.
custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ]
fas_client_groups: sysadmin-noc,sysadmin-ask,fi-apprentice,sysadmin-veteran
fas_client_groups: sysadmin-noc,sysadmin-ask,fi-apprentice
freezes: false
@@ -22,8 +22,6 @@ fedmsg_certs:
- service: shell
owner: root
group: sysadmin
can_send:
- logger.log
- service: askbot
owner: root
group: apache
@@ -34,8 +32,6 @@ fedmsg_certs:
- askbot.post.flag_offensive.delete
- askbot.tag.update
virt_install_command: "{{ virt_install_command_rhel6 }}"
# For the MOTD
csi_security_category: Low
csi_primary_contact: Fedora admins - admin@fedoraproject.org

@@ -0,0 +1,9 @@
---
host_group: atomicbuilder
freezes: false
nrpe_procs_warn: 700
nrpe_procs_crit: 800
fas_client_groups: atomic,sysadmin-atomic
tcp_ports: [ 80, 443, 873 ]

@@ -15,36 +15,18 @@ tcp_ports: [
3000, 3001, 3002, 3003,
]
fas_client_groups: sysadmin-noc,sysadmin-fedimg,sysadmin-releng,sysadmin-veteran
sudoers: "{{ private }}/files/sudo/autocloud-backend"
# These are hw boxes and don't use the NM ifconfig setup
ansible_ifcfg_blacklist: true
# These people get told when something goes wrong.
fedmsg_error_recipients:
- sysadmin-fedimg-members@fedoraproject.org
fas_client_groups: sysadmin-noc,sysadmin-fedimg,sysadmin-releng
# These are consumed by a task in roles/fedmsg/base/main.yml
fedmsg_certs:
- service: shell
owner: root
group: sysadmin
can_send:
- logger.log
- service: autocloud
owner: root
group: fedmsg
can_send:
- autocloud.image
- autocloud.image.running
- autocloud.image.success
- autocloud.image.failed
- autocloud.image.queued
- autocloud.compose
- autocloud.compose.queued
- autocloud.compose.running
- autocloud.compose.complete
# For the MOTD
csi_security_category: Moderate

@@ -12,28 +12,18 @@ tcp_ports: [
3000, 3001, 3002, 3003,
]
fas_client_groups: sysadmin-noc,sysadmin-fedimg,sysadmin-releng,sysadmin-veteran
# These people get told when something goes wrong.
fedmsg_error_recipients:
- sysadmin-fedimg-members@fedoraproject.org
fas_client_groups: sysadmin-noc,sysadmin-fedimg,sysadmin-releng
# These are consumed by a task in roles/fedmsg/base/main.yml
fedmsg_certs:
- service: shell
owner: root
group: sysadmin
can_send:
- logger.log
- service: autocloud
owner: root
group: fedmsg
can_send:
- autocloud.image
- autocloud.image.running
- autocloud.image.success
- autocloud.image.failed
- autocloud.image.queued
# For the MOTD
csi_security_category: Moderate

@@ -1,7 +1,7 @@
---
# Define resources for this group of hosts here.
lvm_size: 20000
mem_size: 2048
mem_size: 1024
num_cpus: 2
# for systems that do not match the above - specify the same parameter in
@@ -14,7 +14,7 @@ wsgi_threads: 2
tcp_ports: [ 80 ]
fas_client_groups: sysadmin-noc,sysadmin-fedimg,sysadmin-releng,sysadmin-veteran
fas_client_groups: sysadmin-noc,sysadmin-fedimg,sysadmin-releng
# For the MOTD
csi_security_category: Moderate

@@ -1,7 +1,7 @@
---
# Define resources for this group of hosts here.
lvm_size: 20000
mem_size: 2048
mem_size: 1024
num_cpus: 1
# for systems that do not match the above - specify the same parameter in
@@ -14,7 +14,7 @@ wsgi_threads: 2
tcp_ports: [ 80 ]
fas_client_groups: sysadmin-noc,sysadmin-fedimg,sysadmin-releng,sysadmin-veteran
fas_client_groups: sysadmin-noc,sysadmin-fedimg,sysadmin-releng
# For the MOTD
csi_security_category: Moderate

@@ -1,2 +0,0 @@
# This var should never be set for more than one machine
autocloudreporter_prod: true

@@ -7,22 +7,9 @@ num_cpus: 2
# for systems that do not match the above - specify the same parameter in
# the host_vars/$hostname file
# Make connections from signing bridges stateless, they break sigul connections
# https://bugzilla.redhat.com/show_bug.cgi?id=1283364
custom_rules: ['-A INPUT --proto tcp --sport 44334 --source sign-bridge01.phx2.fedoraproject.org,secondary-bridge01.phx2.fedoraproject.org -j ACCEPT']
ansible_ifcfg_whitelist:
- eth0
- eth1
fas_client_groups: sysadmin-releng
host_group: autosign
fedmsg_error_recipients:
- puiterwijk@fedoraproject.org
nfs_mount_opts: "rw,hard,bg,intr,noatime,nodev,nosuid,sec=sys,nfsvers=3"
# For the MOTD
csi_security_category: High
csi_primary_contact: Release Engineering - rel-eng@lists.fedoraproject.org
@@ -35,4 +22,4 @@ csi_relationship: |
The script[1] currently runs in the foreground from a git checkout.
[1] https://pagure.io/releng/blob/master/f/scripts/autosigner.py
[1] https://git.fedorahosted.org/cgit/releng/tree/scripts/autosigner.py

@@ -1,6 +0,0 @@
---
# Make connections from signing bridges stateless, they break sigul connections
# https://bugzilla.redhat.com/show_bug.cgi?id=1283364
custom_rules: ['-A INPUT --proto tcp --sport 44334 --source sign-bridge01.phx2.fedoraproject.org,secondary-bridge01.phx2.fedoraproject.org -j ACCEPT']
host_group: autosign

@@ -10,20 +10,13 @@ freezes: false
tcp_ports: [ 3000, 3001, 3002, 3003,
3004, 3005, 3006, 3007 ]
fas_client_groups: sysadmin-noc,sysadmin-badges,sysadmin-veteran
# These people get told when something goes wrong.
fedmsg_error_recipients:
- sysadmin-badges-members@fedoraproject.org
fas_client_groups: sysadmin-noc,sysadmin-badges
# These are consumed by a task in roles/fedmsg/base/main.yml
fedmsg_certs:
- service: shell
owner: root
group: sysadmin
can_send:
- logger.log
- service: fedbadges
owner: root
group: fedmsg

@@ -10,19 +10,13 @@ num_cpus: 2
tcp_ports: [ 3000, 3001, 3002, 3003,
3004, 3005, 3006, 3007 ]
fas_client_groups: sysadmin-noc,sysadmin-badges,sysadmin-veteran
# These people get told when something goes wrong.
fedmsg_error_recipients:
- sysadmin-badges-members@fedoraproject.org
fas_client_groups: sysadmin-noc,sysadmin-badges
# These are consumed by a task in roles/fedmsg/base/main.yml
fedmsg_certs:
- service: shell
owner: root
group: sysadmin
can_send:
- logger.log
- service: fedbadges
owner: root
group: fedmsg

@@ -17,15 +17,13 @@ tcp_ports: [ 80 ]
# Needed for rsync from log01 for logs.
custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ]
fas_client_groups: sysadmin-noc,sysadmin-badges,sysadmin-veteran
fas_client_groups: sysadmin-noc,sysadmin-badges
# These are consumed by a task in roles/fedmsg/base/main.yml
fedmsg_certs:
- service: shell
owner: root
group: sysadmin
can_send:
- logger.log
- service: tahrir
owner: root
group: tahrir

@@ -1,7 +1,7 @@
---
# Define resources for this group of hosts here.
lvm_size: 20000
mem_size: 2048
mem_size: 1024
num_cpus: 2
# Defining these vars has a number of effects
@@ -17,15 +17,13 @@ tcp_ports: [ 80 ]
# Needed for rsync from log01 for logs.
custom_rules: [ '-A INPUT -p tcp -m tcp -s 10.5.126.13 --dport 873 -j ACCEPT', '-A INPUT -p tcp -m tcp -s 192.168.1.59 --dport 873 -j ACCEPT' ]
fas_client_groups: sysadmin-noc,sysadmin-badges,sysadmin-veteran
fas_client_groups: sysadmin-noc,sysadmin-badges
# These are consumed by a task in roles/fedmsg/base/main.yml
fedmsg_certs:
- service: shell
owner: root
group: sysadmin
can_send:
- logger.log
- service: tahrir
owner: root
group: tahrir

@@ -1,17 +0,0 @@
---
# Define resources for this group of hosts here.
lvm_size: 30000
mem_size: 4096
num_cpus: 2
custom_rules: [
# fas01, fas02, and fas03
'-A INPUT -p tcp -m tcp -s 10.5.126.25 --dport 80 -j ACCEPT',
'-A INPUT -p tcp -m tcp -s 10.5.126.26 --dport 80 -j ACCEPT',
'-A INPUT -p tcp -m tcp -s 10.5.126.30 --dport 80 -j ACCEPT',
# wiki01, wiki02
'-A INPUT -p tcp -m tcp -s 10.5.126.63 --dport 80 -j ACCEPT',
'-A INPUT -p tcp -m tcp -s 10.5.126.73 --dport 80 -j ACCEPT',
]
fas_client_groups: sysadmin-main

Some files were not shown because too many files have changed in this diff.