mirror of https://pagure.io/fedora-infra/ansible.git
synced 2026-02-03 13:13:22 +08:00

Comparing 4 commits (openvpn_ha...denyhosts):
d8f01f8b08, f458aec69e, 755e5e81ae, c6cbf75e92
.gitignore (vendored, 1 line removed)
@@ -1,2 +1 @@
*.swp
*.pyc
CONVENTIONS (112 lines removed)
@@ -1,112 +0,0 @@
This file describes some conventions we are going to try to use
to keep things organized and everyone on the same page.

If you find you need to diverge from this document for something,
please discuss it on the infrastructure list and see if we can
adjust this document for that use case.

Playbook naming
===============
The top level playbooks directory should contain:

* Playbooks that are generic and used by several groups/hosts playbooks
* Playbooks used for utility purposes from the command line
* Groups and Hosts subdirs.

Generic playbooks are included in other playbooks and perform
basic setup that is used by other groups/hosts.
Examples: cloud setup, collectd, webserver, iptables, etc.

Utility playbooks are used by sysadmins from the command line to perform some
specific function. Examples: host update, vhost update, vhost reboot.

The playbooks/groups/ directory should contain one playbook per
group. This should be used in the case of multiple machines/instances
in a group. It MUST include a hosts entry that describes the hosts in the group.
Examples: packages, proxy, unbound, virthost, etc.
Try to be descriptive with the name here.

The playbooks/hosts/ directory should contain one playbook per 'host'
for when a role is handled by only one host. Host playbooks
MUST be named FQDN.yml and MUST contain a hosts entry with the host or IP.
Examples: persistent cloud images, special hosts.

Where possible, groups should be used. Host playbooks should only
be used in specific cases where a generic group playbook would not work.

Both groups and hosts playbooks should always include:

  vars_files:
   - /srv/web/infra/ansible/vars/global.yml
   - "{{ private }}/vars.yml"
   - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml

Play naming
===========
Plays in playbooks should have a short, readable description of what the play
is doing. This will be displayed to the user and/or mailed out, so think
about what you would like to see if the play you are writing failed, so that
the description helps the reader fix it.

Inventory
=========
The inventory file should add all hosts to one (or more) groups.

When there are staging hosts for a role/service, they should be in the
main group for that role as well as a staging group for the role.
FIXME: will depend on how we do staging. (see below)

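The merging described above (all ini files in the inventory directory added together into one total inventory) can be sketched as follows. This is a simplification of what ansible's real inventory parser does, and the fragment contents and group names are hypothetical:

```python
import configparser

# Two hypothetical inventory fragments; each adds hosts to one or more
# groups, and all fragments are merged into the total inventory.
frag1 = "[proxy]\nproxy01.phx2.fedoraproject.org\nproxy02.phx2.fedoraproject.org\n"
frag2 = "[proxy]\nproxy03.phx2.fedoraproject.org\n\n[unbound]\nunbound01.phx2.fedoraproject.org\n"

inventory = {}
for text in (frag1, frag2):
    # allow_no_value lets bare hostnames stand alone as ini keys
    cp = configparser.ConfigParser(allow_no_value=True)
    cp.read_string(text)
    for group in cp.sections():
        inventory.setdefault(group, []).extend(cp.options(group))

print(sorted(inventory))        # ['proxy', 'unbound']
print(len(inventory["proxy"]))  # 3
```

Note that the same group name may appear in several fragments; the hosts simply accumulate.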
Tags
====
Tags allow you to run just a subset of plays with a specific tag (or tags).

We have some standard tags we should use on all plays:

packages - this play installs or removes packages.

config - this play installs config files.

check - we could use this tag for 'is everything running that should be'
type tasks.

FIXME: others?

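The selection behaviour described above can be illustrated with a small sketch (not Ansible's actual implementation): a play runs when its tag set intersects the requested tags. The play names here are hypothetical:

```python
# Hypothetical plays carrying the standard tags listed above.
plays = [
    {"name": "install packages", "tags": ["packages"]},
    {"name": "push config files", "tags": ["config"]},
    {"name": "verify services", "tags": ["check"]},
]

def select_plays(plays, wanted):
    """Keep the plays whose tags intersect the requested tag set."""
    wanted = set(wanted)
    return [p["name"] for p in plays if wanted & set(p["tags"])]

print(select_plays(plays, ["packages", "config"]))  # first two plays only
print(select_plays(plays, ["check"]))               # only the check play
```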
Production vs Staging vs Development
====================================
In the default state, we should strive to have production and staging using
exactly the same playbooks. Development can also do so, or just be a more
minimal free form for the developer.

When needing to make changes to test in staging, the following process should
be used:

FIXME... :)

Requirements:

1. shouldn't touch the prod playbook by default
2. should be easy to merge changes back to prod
3. should not require people to remember to do a bunch of steps
4. should be easy to see exactly what changes are pending only in stg

Cron job/automatic execution
============================

We would like to get ansible running over hosts in an automated way.
A git hook could do this.

* On commit:
  If we have a way to determine exactly what hosts are affected by a
  change we could simply run only on those hosts.

  We might want a short delay (10m) to allow someone to see a problem,
  or others to note one from the commit.

* Once a day: (more often? less often?)

  We may want to re-run on all hosts once a day and yell loudly
  if anything changed.

FIXME: perhaps we want a tag of items to run at this time?
FIXME: alternately we could have a util playbook that runs a
bunch of checks for us?

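The on-commit idea above can be sketched roughly as follows. The path-to-playbook mapping and the file names are hypothetical; a real hook would need a much richer mapping from changed files to affected hosts:

```python
def affected_playbooks(changed_paths):
    """Map files touched by a commit to the group playbooks to re-run."""
    hits = set()
    for path in changed_paths:
        if path.startswith("playbooks/groups/"):
            # a group playbook changed directly
            hits.add(path)
        elif path.startswith("files/"):
            # assume files/<service>/... is deployed by the matching
            # group playbook (a hypothetical convention)
            service = path.split("/")[1]
            hits.add("playbooks/groups/%s.yml" % service)
    return sorted(hits)

print(affected_playbooks(["playbooks/groups/proxy.yml",
                          "files/collectd/rrdtool.conf"]))
# ['playbooks/groups/collectd.yml', 'playbooks/groups/proxy.yml']
```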
README (37 lines changed)
@@ -1,15 +1,9 @@
-ansible repository/structure
+== ansible repository/structure ==

files - files and templates for use in playbooks/tasks
      - subdirs for specific tasks/dirs highly recommended

inventory - where the inventory and additional vars is stored
          - all files in this directory are in ini format
          - added together for total inventory
  group_vars:
          - per group variables set here in a file per group
  host_vars:
          - per host variables set here in a file per host

library - library of custom local ansible modules

@@ -17,10 +11,6 @@ playbooks - collections of plays we want to run on systems

tasks - snippets of tasks that should be included in plays

roles - specific roles to be used in playbooks.
        Each role has its own files/templates/vars

== Paths ==

public path for everything is:

@@ -30,11 +20,12 @@ private path - which is sysadmin-main accessible only is:

/srv/private/ansible

In general to run any ansible playbook you will want to run:

sudo -i ansible-playbook /path/to/playbook.yml

== Cloud information ==

cloud instances:
to startup a new cloud instance and configure for basic server use run (as

@@ -70,6 +61,9 @@ define these with:

--extra-vars="varname=value varname1=value varname2=value"

Name      Memory_MB  Disk  VCPUs
m1.tiny   512        0     1
m1.small  2048       20    1

@@ -130,7 +124,7 @@ description: some description so someone else can know what this is

The available images can be found by running::
  source /srv/private/ansible/files/openstack/persistent-admin/ec2rc.sh
-  euca-describe-images | grep emi
+  euca-describe-images | grep ami

4. setup a host playbook ansible/playbooks/hosts/$YOUR_HOSTNAME_HERE.yml
   Note: the name of this file doesn't really matter but it should normally

@@ -143,10 +137,10 @@ The available images can be found by running::

   vars_files:
    - /srv/web/infra/ansible/vars/global.yml
-    - ${private}/vars.yml
+    - "{{ private }}/vars.yml"

   tasks:
-    - include: $tasks/persistent_cloud.yml
+    - include: "{{ tasks }}/persistent_cloud.yml"

 - name: provision instance
   hosts: $YOUR_HOSTNAME/IP HERE

@@ -155,15 +149,15 @@ The available images can be found by running::

   vars_files:
    - /srv/web/infra/ansible/vars/global.yml
-    - ${private}/vars.yml
+    - "{{ private }}/vars.yml"
-    - ${vars}/${ansible_distribution}.yml
+    - /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml

   tasks:
-    - include: $tasks/cloud_setup_basic.yml
+    - include: "{{ tasks }}/cloud_setup_basic.yml"
    # fill in other actions/includes/etc here

   handlers:
-    - include: $handlers/restart_services.yml
+    - include: "{{ handlers }}/restart_services.yml"

5. add/commit the above to the git repo and push your changes

@@ -177,6 +171,10 @@ The available images can be found by running::

You should be able to run that playbook over and over again safely, it will
only setup/create a new instance if the ip is not up/responding.

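The idempotence described above — only creating an instance when the ip is not up/responding — comes down to a reachability check. A minimal sketch of that check (the real logic lives in the persistent_cloud tasks; the address below is a documentation-range placeholder):

```python
import socket

def is_up(host, port=22, timeout=3):
    """True if something accepts TCP connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# placeholder address (TEST-NET-3), never reachable in practice
if not is_up("203.0.113.10"):
    print("instance not responding; would provision it now")
```

Running the playbook repeatedly is then safe: a live instance answers on ssh and is left alone.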
SECURITY GROUPS
- to edit security groups you must either have your own cloud account or
  be a member of sysadmin-main

@@ -214,7 +212,6 @@ euca-create-group -d "group description here" groupname

To add a rule to a group:
  euca-authorize -P tcp -p 22 groupname
  euca-authorize -P icmp -t -1:-1 groupname

To delete a rule from a group:
  euca-revoke -P tcp -p 22 groupname

@@ -1,93 +0,0 @@
# (C) 2012, Michael DeHaan, <michael.dehaan@gmail.com>
# based on the log_plays example
# skvidal@fedoraproject.org
# rbean@redhat.com

# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

import os
import pwd

import fedmsg
import fedmsg.config


def getlogin():
    try:
        user = os.getlogin()
    except OSError, e:
        user = pwd.getpwuid(os.geteuid())[0]
    return user


class CallbackModule(object):
    """ Publish playbook starts and stops to fedmsg. """

    playbook_path = None

    def __init__(self):
        config = fedmsg.config.load_config()
        config.update(dict(
            name='relay_inbound',
            cert_prefix='shell',
            active=True,
        ))
        # It seems like recursive playbooks call this over and over again and
        # fedmsg doesn't like to be initialized more than once.  So, here, just
        # catch that and ignore it.
        try:
            fedmsg.init(**config)
        except ValueError:
            pass

    def playbook_on_play_start(self, pattern):
        # This gets called once for each play.. but we just issue a message once
        # for the first one.  One per "playbook"
        play = getattr(self, 'play', None)
        if play:
            # figure out where the playbook FILE is
            path = os.path.abspath(play.playbook.filename)

            # Bail out early without publishing if we're in --check mode
            if play.playbook.check:
                return

            if not self.playbook_path:
                fedmsg.publish(
                    modname="ansible", topic="playbook.start",
                    msg=dict(
                        playbook=path,
                        userid=getlogin(),
                        extra_vars=play.playbook.extra_vars,
                        inventory=play.playbook.inventory.host_list,
                        playbook_checksum=play.playbook.check,
                        check=play.playbook.check,
                    ),
                )
                self.playbook_path = path

    def playbook_on_stats(self, stats):
        if not self.playbook_path:
            return

        results = dict([(h, stats.summarize(h)) for h in stats.processed])
        fedmsg.publish(
            modname="ansible", topic="playbook.complete",
            msg=dict(
                playbook=self.playbook_path,
                userid=getlogin(),
                results=results,
            ),
        )
@@ -50,24 +50,24 @@ class LogMech(object):
                raise

        # checksum of full playbook?

    @property
    def playbook_id(self):
        if self._pb_fn:
            return os.path.basename(self._pb_fn).replace('.yml', '').replace('.yaml', '')
        else:
            return "ansible-cmd"

    @playbook_id.setter
    def playbook_id(self, value):
        self._pb_fn = value

    @property
    def logpath_play(self):
        # this is all to get our path to look nice ish
        tstamp = time.strftime('%Y/%m/%d/%H.%M.%S', time.localtime(self.started))
        path = os.path.normpath(self.logpath + '/' + self.playbook_id + '/' + tstamp + '/')

        if not os.path.exists(path):
            try:
                os.makedirs(path)
@@ -76,13 +76,13 @@ class LogMech(object):
                raise

        return path

    def play_log(self, content):
        # record out playbook.log
        # include path to playbook, checksums, user running playbook
        # any args we can get back from the invocation
        fd = open(self.logpath_play + '/' + 'playbook-' + self.pid + '.info', 'a')
        fd.write('%s\n' % content)
        fd.close()

    def task_to_json(self, task):
@@ -92,25 +92,25 @@ class LogMech(object):
        res['task_args'] = task.module_args
        if self.playbook_id == 'ansible-cmd':
            res['task_userid'] = getlogin()
-        for k in ("delegate_to", "environment", "first_available_file",
-                  "local_action", "notified_by", "notify", "only_if",
+        for k in ("delegate_to", "environment", "with_first_found",
+                  "local_action", "notified_by", "notify",
                  "register", "sudo", "sudo_user", "tags",
                  "transport", "when"):
            v = getattr(task, k, None)
            if v:
                res['task_' + k] = v

        return res

    def log(self, host, category, data, task=None, count=0):
        if not host:
            host = 'HOSTMISSING'

        if type(data) == dict:
            name = data.get('module_name', None)
        else:
            name = "unknown"

        # we're in setup - move the invocation info up one level
        if 'invocation' in data:
@@ -126,23 +126,21 @@ class LogMech(object):
            data['task_start'] = self._last_task_start
            data['task_end'] = time.time()
            data.update(self.task_to_json(task))

        if 'task_userid' not in data:
            data['task_userid'] = getlogin()

-        if self.play_info.get('check', False):
+        if category == 'OK' and data.get('changed', False):
+            category = 'CHANGED'
+
+        if self.play_info.get('check', False) and self.play_info.get('diff', False):
+            category = 'CHECK_DIFF:' + category
+        elif self.play_info.get('check', False):
            category = 'CHECK:' + category

        fd = open(self.logpath_play + '/' + host + '.log', 'a')
        now = time.strftime(TIME_FORMAT, time.localtime())
        fd.write(MSG_FORMAT % dict(now=now, name=name, count=count, category=category, data=json.dumps(data)))
        fd.close()


logmech = LogMech()

@@ -240,7 +238,7 @@ class CallbackModule(object):

    def playbook_on_play_start(self, pattern):
        self._task_count = 0

        play = getattr(self, 'play', None)
        if play:
            # figure out where the playbook FILE is
@@ -260,29 +258,27 @@ class CallbackModule(object):
            pb_info['inventory'] = play.playbook.inventory.host_list
            pb_info['playbook_checksum'] = utils.md5(path)
            pb_info['check'] = play.playbook.check
            pb_info['diff'] = play.playbook.diff
            logmech.play_log(json.dumps(pb_info, indent=4))

        self._play_count += 1
        # then write per-play info that doesn't duplicate the playbook info
        info = {}
        info['play'] = play.name
        info['hosts'] = play.hosts
        info['transport'] = play.transport
        info['number'] = self._play_count
        info['check'] = play.playbook.check
        info['diff'] = play.playbook.diff
        logmech.play_info = info
        logmech.play_log(json.dumps(info, indent=4))

    def playbook_on_stats(self, stats):
        results = {}
        for host in stats.processed.keys():
            results[host] = stats.summarize(host)
            logmech.log(host, 'STATS', results[host])
        logmech.play_log(json.dumps({'stats': results}, indent=4))
        logmech.play_log(json.dumps({'playbook_end': time.time()}, indent=4))
        print 'logs written to: %s' % logmech.logpath_play

@@ -1,40 +0,0 @@
import time


class CallbackModule(object):
    """
    A plugin for timing tasks
    """
    def __init__(self):
        self.stats = {}
        self.current = None

    def playbook_on_task_start(self, name, is_conditional):
        """
        Logs the start of each task
        """
        if self.current is not None:
            # Record the running time of the last executed task
            self.stats[self.current] = time.time() - self.stats[self.current]

        # Record the start time of the current task
        self.current = name
        self.stats[self.current] = time.time()

    def playbook_on_stats(self, stats):
        """
        Prints the timings
        """
        # Record the timing of the very last task
        if self.current is not None:
            self.stats[self.current] = time.time() - self.stats[self.current]

        # Sort the tasks by their running time
        results = sorted(self.stats.items(), key=lambda value: value[1], reverse=True)

        # Just keep the top 10
        results = results[:10]

        # Print the timings
        for name, elapsed in results:
            print "{0:-<70}{1:->9}".format('{0} '.format(name), ' {0:.02f}s'.format(elapsed))
@@ -1,27 +0,0 @@
pam_url:
{
  settings:
  {
{% if env == 'staging' %}
    url = "https://fas-all.stg.phx2.fedoraproject.org:8443/";  # URI to fetch
{% elif datacenter == 'phx2' %}
    url = "https://fas-all.phx2.fedoraproject.org:8443/";  # URI to fetch
{% else %}
    url = "https://fas-all.vpn.fedoraproject.org:8443/";  # URI to fetch
{% endif %}
    returncode = "OK";        # The remote script/cgi should return a 200 http code and this string as its only results
    userfield = "user";       # userfield name to send
    passwdfield = "token";    # passwdfield name to send
    extradata = "&do=login";  # extradata to send
    prompt = "Password+Token: ";  # password prompt
  };

  ssl:
  {
    verify_peer = true;  # Should we verify SSL?
    verify_host = true;  # Should we verify the CN in the SSL cert?
    client_cert = "/etc/pki/tls/private/totpcgi.pem";  # file to use as client-side certificate
    client_key = "/etc/pki/tls/private/totpcgi.pem";   # file to use as client-side key (can be same file as above if a single cert)
    ca_cert = "/etc/pki/tls/private/totpcgi-ca.cert";
  };
};
files/bacula/bacula-dir.conf.j2 (new file, 1148 lines)
File diff suppressed because it is too large.
files/bacula/bacula-fd.conf.j2 (new file, 45 lines)
@@ -0,0 +1,45 @@
#
# Default Bacula File Daemon Configuration file
#
# For Bacula release 2.0.3 (06 March 2007) -- redhat (Zod)
#
# There is not much to change here except perhaps the
# File daemon Name
#

#
# List Directors who are permitted to contact this File daemon
#
Director {
  Name = bacula-dir
  Password = "{{ bacula5PasswordDir }}"
}

#
# Restricted Director, used by tray-monitor to get the
# status of the file daemon
#
Director {
  Name = bacula-mon
  Password = "{{ bacula5PasswordDir }}"
  Monitor = yes
}

#
# "Global" File daemon configuration specifications
#
FileDaemon {                          # this is me
  Name = bacula-fd
  FDport = 9102                       # where we listen for the director
  WorkingDirectory = /var/spool/bacula
  Pid Directory = /var/run
  Maximum Concurrent Jobs = 10
  Heartbeat Interval = 10
  #Maximum Network Buffer Size = 131072
}

# Send all messages except skipped files back to Director
Messages {
  Name = Standard
  director = bacula-dir = all, !skipped, !restored
}
files/bacula/bacula-sd.conf.j2 (new file, 104 lines)
@@ -0,0 +1,104 @@
#
|
||||
# Default Bacula Storage Daemon Configuration file
|
||||
#
|
||||
# For Bacula release 2.0.3 (06 March 2007) -- redhat (Zod)
|
||||
#
|
||||
# You may need to change the name of your tape drive
|
||||
# on the "Archive Device" directive in the Device
|
||||
# resource. If you change the Name and/or the
|
||||
# "Media Type" in the Device resource, please ensure
|
||||
# that dird.conf has corresponding changes.
|
||||
#
|
||||
|
||||
Storage { # definition of myself
|
||||
Name = bacula-sd
|
||||
SDPort = 9103 # Director's port
|
||||
WorkingDirectory = "/var/spool/bacula"
|
||||
Pid Directory = "/var/run"
|
||||
Maximum Concurrent Jobs = 10
|
||||
Heartbeat Interval = 5
|
||||
}
|
||||
|
||||
#
|
||||
# List Directors who are permitted to contact Storage daemon
|
||||
#
|
||||
Director {
|
||||
Name = bacula-dir
|
||||
Password = "{{ bacula5PasswordDir }}"
|
||||
}
|
||||
|
||||
#
|
||||
# Restricted Director, used by tray-monitor to get the
|
||||
# status of the storage daemon
|
||||
#
|
||||
Director {
|
||||
Name = bacula-mon
|
||||
Password = "{{ bacula5PasswordDir }}"
|
||||
Monitor = yes
|
||||
}
|
||||
|
||||
#
|
||||
# Devices supported by this Storage daemon
|
||||
# To connect, the Director's bacula-dir.conf must have the
|
||||
# same Name and MediaType.
|
||||
#
|
||||
|
||||
Device {
|
||||
Name = FileStorage
|
||||
Media Type = File
|
||||
Archive Device = /bacula/
|
||||
LabelMedia = yes; # lets Bacula label unlabeled media
|
||||
Random Access = Yes;
|
||||
AutomaticMount = yes; # when device opened, read it
|
||||
RemovableMedia = no;
|
||||
AlwaysOpen = no;
|
||||
}
|
||||
|
||||
|
||||
Device {
|
||||
Name = FileStorage2
|
||||
Media Type = File
|
||||
Archive Device = /bacula2/
|
||||
LabelMedia = yes; # lets Bacula label unlabeled media
|
||||
Random Access = Yes;
|
||||
AutomaticMount = yes; # when device opened, read it
|
||||
RemovableMedia = no;
|
||||
AlwaysOpen = no;
|
||||
}
|
||||
|
||||
#
|
||||
# An autochanger device with two drives
|
||||
|
||||
Autochanger {
|
||||
Name = Autochanger
|
||||
Device = Drive-1
|
||||
Changer Command = "/usr/libexec/bacula/mtx-changer %c %o %S %a %d"
|
||||
Changer Device = /dev/sg1
|
||||
}
|
||||
|
||||
Device {
|
||||
Name = Drive-1 #
|
||||
Drive Index = 0
|
||||
Media Type = LTO-5
|
||||
Archive Device = /dev/nst0
|
||||
AutomaticMount = yes; # when device opened, read it
|
||||
AlwaysOpen = yes;
|
||||
RemovableMedia = yes;
|
||||
RandomAccess = no;
|
||||
AutoChanger = yes
|
||||
SpoolDirectory = /bacula/bacula/spool/;
|
||||
Maximum Spool Size = 1600G;
|
||||
# Label Media = yes
|
||||
# Enable the Alert command only if you have the mtx package loaded
|
||||
Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
|
||||
# If you have smartctl, enable this, it has more info than tapeinfo
|
||||
Alert Command = "sh -c 'smartctl -H -l error %c'"
|
||||
}
|
||||
#
|
||||
# Send all messages to the Director,
|
||||
# mount messages also are sent to the email address
|
||||
#
|
||||
Messages {
|
||||
Name = Standard
|
||||
director = bacula-dir = all
|
||||
}
|
||||
files/bacula/bconsole.conf.j2 (new file, 10 lines)
@@ -0,0 +1,10 @@
#
# Bacula User Agent (or Console) Configuration File
#

Director {
  Name = bacula-dir
  DIRport = 9101
  address = localhost
  Password = "{{ bacula5PasswordCon }}"
}
files/bacula/fedora_delete_catalog_backup (new executable file, 5 lines)
@@ -0,0 +1,5 @@
#!/bin/sh
#
# This script deletes a catalog dump
#
rm -f /bacula/bacula.sql
files/bacula/fedora_make_catalog_backup (new executable file, 3 lines)
@@ -0,0 +1,3 @@
#!/bin/sh
rm -f /bacula/bacula.sql
/usr/bin/mysqldump -u bacula -f bacula > /bacula/bacula.sql
@@ -1,5 +1,5 @@
LoadPlugin network

<Plugin "network">
	Server "log01"
	Server "log02"
</Plugin>
files/collectd/rrdtool.conf (new file, 8 lines)
@@ -0,0 +1,8 @@
LoadPlugin rrdtool

<Plugin rrdtool>
	CacheTimeout 160
	CacheFlush 1200
	WritesPerSecond 50
</Plugin>
@@ -1 +1,3 @@
#ansible root key
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAmS3g5fSXizcCqKMI1n5WPFrfMyu7BMrMkMYyck07rB/cf2orO8kKj5schjILA8NYJFStlv2CGRXmQlendj523FPzPmzxvTP/OT4qdywa4LKGvAxOkRGCMMxWzVFLdEMzsLUE/+FLX+xd1US9UPLGRsbMkdz4ORCc0G8gqTr835H56mQPI+/zPFeQjHoHGYtQA1wnJH/0LCuFFfU82IfzrXzFDIBAA5i2S+eEOk7/SA4Ciek1CthNtqPX27M6UqkJMBmVpnAdeDz2noWMvlzAAUQ7dHL84CiXbUnF3hhYrHDbmD+kEK+KiRrYh3PT+5YfEPVI/xiDJ2fdHGxY7Dr2TQ== root@lockbox01.phx2.fedoraproject.org
@@ -1,20 +0,0 @@
[epel]
name=Extras Packages for Enterprise Linux $releasever - $basearch
baseurl=http://infrastructure.fedoraproject.org/pub/epel/7/$basearch/
enabled=0
gpgcheck=1
gpgkey=http://infrastructure.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7

[epel-testing]
name=Extras Packages for Enterprise Linux $releasever - $basearch
baseurl=http://infrastructure.fedoraproject.org/pub/epel/testing/7/$basearch/
enabled=0
gpgcheck=1
gpgkey=http://infrastructure.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7

[epel-beta]
name=Extras Packages for Enterprise Linux beta $releasever - $basearch
baseurl=http://infrastructure.fedoraproject.org/pub/epel/beta/7/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://infrastructure.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7
@@ -1,26 +0,0 @@
[updates-testing]
name=Fedora $releasever - $basearch - Test Updates
failovermethod=priority
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/updates/testing/$releasever/$basearch/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-testing-f$releasever&arch=$basearch
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch

[updates-testing-debuginfo]
name=Fedora $releasever - $basearch - Test Updates Debug
failovermethod=priority
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/updates/testing/$releasever/$basearch/debug/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-testing-debug-f$releasever&arch=$basearch
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch

[updates-testing-source]
name=Fedora $releasever - Test Updates Source
failovermethod=priority
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/updates/testing/$releasever/SRPMS/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-testing-source-f$releasever&arch=$basearch
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
@@ -1,26 +0,0 @@
[updates-testing]
name=Fedora $releasever - $basearch - Test Updates
failovermethod=priority
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/updates/testing/$releasever/$basearch/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-testing-f$releasever&arch=$basearch
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$basearch

[updates-testing-debuginfo]
name=Fedora $releasever - $basearch - Test Updates Debug
failovermethod=priority
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/updates/testing/$releasever/$basearch/debug/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-testing-debug-f$releasever&arch=$basearch
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$basearch

[updates-testing-source]
name=Fedora $releasever - Test Updates Source
failovermethod=priority
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/updates/testing/$releasever/SRPMS/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-testing-source-f$releasever&arch=$basearch
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$basearch
@@ -1,26 +0,0 @@
[updates]
name=Fedora $releasever - $basearch - Updates
failovermethod=priority
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/updates/$releasever/$basearch/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-f$releasever&arch=$basearch
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch

[updates-debuginfo]
name=Fedora $releasever - $basearch - Updates - Debug
failovermethod=priority
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/updates/$releasever/$basearch/debug/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-debug-f$releasever&arch=$basearch
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch

[updates-source]
name=Fedora $releasever - Updates Source
failovermethod=priority
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/updates/$releasever/SRPMS/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-source-f$releasever&arch=$basearch
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
@@ -1,26 +0,0 @@
[updates]
name=Fedora $releasever - $basearch - Updates
failovermethod=priority
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/updates/$releasever/$basearch/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-f$releasever&arch=$basearch
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$basearch

[updates-debuginfo]
name=Fedora $releasever - $basearch - Updates - Debug
failovermethod=priority
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/updates/$releasever/$basearch/debug/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-debug-f$releasever&arch=$basearch
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$basearch

[updates-source]
name=Fedora $releasever - Updates Source
failovermethod=priority
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/updates/$releasever/SRPMS/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-source-f$releasever&arch=$basearch
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$basearch
@@ -1,29 +0,0 @@
[fedora]
name=Fedora $releasever - $basearch
failovermethod=priority
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/releases/$releasever/Everything/$basearch/os/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-$releasever&arch=$basearch
enabled=1
metadata_expire=7d
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch

[fedora-debuginfo]
name=Fedora $releasever - $basearch - Debug
failovermethod=priority
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/releases/$releasever/Everything/$basearch/debug/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-debug-$releasever&arch=$basearch
enabled=0
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch

[fedora-source]
name=Fedora $releasever - Source
failovermethod=priority
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/releases/$releasever/Everything/source/SRPMS/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-source-$releasever&arch=$basearch
enabled=0
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
@@ -1,29 +0,0 @@
[fedora]
name=Fedora $releasever - $basearch
failovermethod=priority
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/releases/$releasever/Everything/$basearch/os/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-$releasever&arch=$basearch
enabled=1
metadata_expire=7d
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$basearch

[fedora-debuginfo]
name=Fedora $releasever - $basearch - Debug
failovermethod=priority
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/releases/$releasever/Everything/$basearch/debug/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-debug-$releasever&arch=$basearch
enabled=0
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$basearch

[fedora-source]
name=Fedora $releasever - Source
failovermethod=priority
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/releases/$releasever/Everything/source/SRPMS/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-source-$releasever&arch=$basearch
enabled=0
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$basearch
@@ -1,24 +0,0 @@
[rhel7-dvd]
name = rhel7 base dvd
baseurl=http://infrastructure.fedoraproject.org/repo/rhel/RHEL7-$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[rhel7-base]
name = rhel7 base $basearch
baseurl=http://infrastructure.fedoraproject.org/repo/rhel/rhel7/$basearch/rhel-7-server-rpms
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[rhel7-optional]
name = rhel7 optional $basearch
baseurl=http://infrastructure.fedoraproject.org/repo/rhel/rhel7/$basearch/rhel-7-server-optional-rpms
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[rhel7-extras]
name = rhel7 extras $basearch
baseurl=http://infrastructure.fedoraproject.org/repo/rhel/rhel7/$basearch/rhel-7-server-extras-rpms
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[rhel7-ha]
name = rhel7 ha $basearch
baseurl=http://infrastructure.fedoraproject.org/repo/rhel/rhel7/$basearch/rhel-ha-for-rhel-7-server-rpms/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
@@ -1,17 +1,42 @@
#ausil
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAD9QDskl41P2f4wqBuDBRD3VJ7MfKD6gMetMEaOy2b/CzfxN1vzeoxEvUxefi4+uh5b5ht5+BhQVhvBV7sTxxYftEH+B7IRmWigqcS1Ndnw+ML6zCbSTCJOqDvTLxmkZic0NUBIBP907ztMCoZjaOW9SSCrdA9Vp87V3x/KEQaeSNntmnFqtnpQI/N0NlmqxB78p97W/QDpLuftqJ33sM0uyvxXSusThLSFBHjisezsWox49nEKY8HW+Kwkmw+k7EF4tsDWymPB+S0gMsMlTxzjutNASVDmn6H+lgkzns+5Xxii4/mZWrcjqfLuH7vCI2mWykZJ6ek0LiQea9tNN+KZomqX6NbTUK3riaDPrZPNexa4I83Fp+DYNmYgnGMInqn+cZ5PoUJ3u3LaqZGBQeuuONTw0yQ8Pkkn5xibpPO6qblHKcet0pfmWQ5ab+5BDrsyLcPXolMci5h45GNWebr7UMuXT6+q+EolnYgbgDzzGJ4xPohF04OW8CwflK64KEnYcqlGs+DF4TNgGFlhKiyCWfXSjizmQusxn17ayi6+yrkiGeqfz72qyZ1pSKlwA8XRYC2VkAAquJP6zAtAKjCUdmRTSyYgCpoIAlMwBO07BiPLLov6lKdphZYY1DI7pTXA98fhVU04PDqJJYR1GKkttmCsjbRWnxjkPl/Zka1+ei3k9DNidT6j4hFj+uTj8SS70qZUtKLNpc5IcedHaGEK0vcXJm9lIEKBIEnN0PCLZCa4kQZnfdsbuep1fbXNf4WYPXea29aRKJc4hiqsdrccTp4KueHgWt1Jj6CZDZcFgX+NlUVWwk6djgjRzHUryExtsjCcgGMPRJWdUnVcpgkQ1qJhEXng3W+nFFboArWfwU8u1pXEdeE1Z+m+ows3nJHdEgQevyy/cUx6BPNPZkBh10MWskSV8Z+vb02vJB+QikRMwQs3Ywf6RMaZFrBkWD4FfUaU24f4wgtPQN7j5xxJ2rWLJ/s9ZOWSl9yrytC6ZUQwmayLmiPUdm4u/7ZZmaly39K1YWqFDl3eUrRAZwf1L/NAqFu/qcQQ3Xf20K0nI55nVbZ8ODyx6BtfwoioblnTEcehK0uud5Vamc5mfpErFY0agEecsc0sMZO+ky9pf/gCUdM7je7kMDI2hdx61fOa8Wypb5u9WNBWKRKx8xT1XUKhb2uFumm3sR1iNm1Qhj92mo/NO2aETOA1lsYSL0XK571Yy0iFK3X1nOqp/gCsEGLI8OPQk6XuFqv8hmfiIXNKV8IwuDStw7eIvuQIgT7bmMkj+1Ca25foSmg3w5FqJux1gO9t5F018LeQZ6LVlYHZaQnaN+eTU7KfoCozhWw1H9pprDz Dennis Gilmore
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAACAEAstHxky7hl1inyHBy+q/9M+Aen2HSfy8IoW+sAO6HSuHEUT7qWB8AlSNjHhahjXx7sy/BUkUed+NB/177rjlThokZDJ0yoM9KKymp26ETGaamBSkWBxZatTj96BWfD0P2K9jc/9vxtgKBq3VK9UaOt6VtJ9q6mKY3DdWLZn+K6iGQAKMCAgd8cCMgD6epBB5/litz7WhYv+aYTyjZGUGbBojQUiWgXDv9lR7p0w+VP7pnZEeb3//k4pZhsPrKFwwRVRLxBvWgVKNvA6nMXmsdikHCLLj8YAevhEY1xAba+iCKOpTqT7Bu+1Fnb9St8u5iDod21gRmN7MGGWYsO+Iu2MNAW9sw2nsA/sdNR0HEEgBqJLhERjGv399fWKyiZaF90n59lg8Pb6EzE6wHRs6rSB+9uKApBzPk99BEHLvC6mhn6RjrOC+TWSTcmXojAwQYCadqIdgWUaBsxaugKEXBFcmRuDWtpDfsqmM1kjeGU6MiaMlqPW0KjsMaVVChLO5ZvB/T7qW4wr5ZjLri475MuHocCMP0ECSUk7I3YW2h8RU6FEFmTpuULFRQo01iPreY5XJ7l0+xy2eggAWo+X2h3nGjXhCPOelBg+LYe0WOmPgB5oc1m5HZtFTcFzYbhAE+xQKlbwNeYT8HmNmEMhPjVoNyOOV7NAap+ueS2u/7li5D59O5Iy8aa5n/WiuYfkqH4pG796nFyLr5L/LVudzyaYFb/Gk8C1j/NAWYw53D/9aOA277HHe5t0/daJhbo98u0asF5mvPld3swPuPqkEZzgUfmNgH5CkvcQcMzaOvj6qr6xNmQfgsHroCShb46kplQ2uSf1pMAqsjN7jGhk6l+Bu6hKHnJKhZJVLiuAZtgYvkCB1ahaO3wRVozA1VKCAlqHOqoCq4YLIobUL95H08Kwcz7vIRIadX1TkOoLb2EwPkE/xrhDp4BySh+j6YNklSBkiRHvJMBNnRIj8NTRjYyj2o1Om7kJ770lEdryg2og8QBaFWCmFkwzg1QVrBOuu0dN7kt2l7VI7Ib4lavKSVTrqUdxdSbthUlu/b4Qif+pbyEtUFgykRsHVs+5Ofg7FZpsgCJ8rLFjzeVF/hAYX7t3XaIPLu+DL8kzamb/CRy1b7+iAw9nJbd7ED2SGyU6+c2coMPG23y6+YxgEmNG/rkCLCypkEEDOZe4DuMerZQ/RxMo06+glC6HC/3VN2dHlVLtEEV33B04/6Z0plAhqtjG7PVs08f8a5msV/VYn5ifa4z0oIXX1r5CIg3Ejp1JguLhBHpWa7YbS2Mwu6GAbD+hQfCYrsUkFonoOLu5czpITLo7ceJFTQmAt7OxZEoZBfmtYfzADQsQVYQb6J4QwvM3iKJOn30dgtYnJOVlDZEn+0fivedxoBAt9jHJ8lVp2ov/dOFnimi5V+2QIMB0fKTkChsk10zsDZ/KUk6zfijjEju0WfjRHCd357KswNv3aXHazfRIw77S2UOenD+xmUDZ6WgnxservUSDNDz7NldLf/gdPOMO4uSwKZixzsoCNioeLEmQv4gomNK7DyZBLMHLlWlbliqP+QWuIJO1rfoH2vaxzzA7l5tJW1gfnxm87RrrwIf9v5kpdJM6gQZxqmBCRsKQd5VkrEJ/xaFfkv080pWNV0drWTZW8fAAgfUNYB260Hyk3rHsjQlVtQxGJ1aAcgjMi3eGKQMwptbUMYHqct75czX6xp6zgXPiC/glX6AtuiZQ5bOI07imil20ien/ks/dnel8L+dmYDasL9m0B2jZ3lbl3eR1Dy7UhqGyERx//vYQapEBuwFcqQ9UdIWCGGG2Pte1I39BSehUUGSCOOD38a/GCu0l7OWZKdwq80MK/Ixgz4neiZQZ7MD2wPy6vk6Num18PZPN7OynMrI2UG5MViQ0GAhRgxwbUCvc7uKnGRqZo9q2mCabCxLbv+hJ4bppxpHHJxMDDXilTKMfZb
0YRbvjBUi7LFKLN3MBMK2U1jHE+PjBgweqF8Jtuw04CQMxK3unajZOVkYAIq8IdMbw0oBVP4++eGB9z0x1eH+IsqL6IgknbbyoMgQqW9/8atm8HW2QYCX47oPd4FHs8rgJZk3bz8MwN3tp8WCRtYnJuwkWGWSq77ans0Ycl/tUfSSwUjnSvMsJnuSbxvdX0XbP5eRWikk0pJz5lM9sjYFOPHrQ44/U254yBa0N6UhyNTQnMGzRvY+fADE49b10hXZwCCrxpY9KvGr1XNJMnMcUke+4p9RS5LUwcZ8A6v7oWtZaZwnuBzvKk+HAn2gevD7Stjto+TnRCx1qcbx8iOhAEC6nvbLl+U313TmawrO/usrI5w3EFKP/4BnlKJDtNBeklJ0MpU3R1fmisqfegjuBW2bbaxq8Uo6m7uqPsYuAl7E6rOyZHLbtA8szvbQ46MSqAHezqxHJajWn2oZXMtbddgO5vlkxbRp3SSVKaPOeIj3XOGl78Owp4gFNRE0RY2EuUvrwUhXZR4wx1VHYjS6o9HAwOx3dH+pf1OiblUEanLQ9HLuOBkLhP8wn1M2slsSw+A1gyuI0ayjRujYFXdw6Mqp6XKTdU8vNue2c3d0I+TMifBypP0oJtxXmEoPp/VsU9yLKA2FF7Xvv/Xq1gtZcuZWAbSwMok/ENY1xeIFyjV+0yBidmax3jaf9yus/XEpyeBS3iIz63ymU10Kb2vrWjubg/sa2yd+q0y96dLdDRbnbwGwMmg6mXvTlVXf8c= ricky@padlock01.home.elrod.me
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDi5bNJQBrvT/YuvfLO0y6smZW5N+946uISkzmDi9myffLgHAZP4nBGeH/4GcB5ns9HJ19xVtbIwqOz4QwIqKh4gKU7DgaqND2Iu0bUUFL1KXPLGyAIW+9N3yHB+nKkH31alDnF4dpKkvO63DRkqh4ptxwEQbZDCFqn+vXuMnG4cPmDEweR3QZUt5m0Vc7HXzbehZxjUZ3xRWvT/pu+khBhJcRFkLlA60Fnqv7Q+MQP1C0Cpf3hiX1LcXUogXkNooAqx1YYRd8VqvI8e9yQW+a99x8FftnmXKlGCxP33ng6+U6Y2H7u3cRDrlRTbWqkry4SuUYo+6MtvZVgL0fw6PsZ jstanley@hawtness.rmrf.net
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJH1lA7WHRCbaFtvzbw0HxHYJstZjuXhax1+eL+SUJ5fFRGosEc4fLrSCP0gSFDfXmNzuspoBgcQTqnNO8FdIUwkJLDEu0vTQls1aT9YUXb+RVwKB7ULA3b1dqFkmOgLEjTJL9AplK4OJ9Su0kq6QBV4mXCxMsgEML/gn6r8muZmu2L/LdzUnxKKggyq7O5q1K/eW5Yy21fpvbHt2UPQX1f6gt4ty7E9Nnuhi7SHCI7fNIa+kHyIesfTm/SzeK/PY9rDwZKjuyS8o22GJXGEScJomK1cjMESH/J+t8Hffaj88BjGHNczvcnXAjq6y73VJQ9DiGLD4zmFquQMxDu0Tf kevin@jelerak.scrye.com
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDefONrBaBJlCxKtDwkYWVhf96lMhRQfwVJyBoBd4Pk6TqKMlAu2eST1xRZlV4cJSxAWgZpOaFgqJ5EGd6mq8PvVk+mKXdtX7CAoWm4f3c6otUFsFDCTw3gVvYSlEk23XBHuACsbAVNL4HmP+9C7PxQBePukbMBFD2smsyQkPcX7lZw+lDJW5lOTz3dHAA92bcopDycxRDI99gGkawzjlmxpm2C9nhRabKS6mpGw3N64d8hwHkkFbtHY7rS0/0Cka0geYYYv0NVki1IIctkhZE9LndcWbVcVe1pIlR0RyW2sorfgCgoa5fRZZhukUCtspdv981h/0b87RpRVUJKuRd1 lmacken@tomservo
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCsmLoA/97DrE7roCHOY7NdB5TV/g7oxAsk74HgHcFRYAbn/rkoa7r9ZsgR7qzwd6Z+5Z77qFqvl1Bs3XtJf+1vJ3kwdcNFdKTw1DgTdE/rNPI7QzUgXKKKv/WCiU6UDBX4HHWq8Yuq4tkr/yepS8sLzMz2e0pHU4uWFQuvr5ttP9ABGohhDnPr0IcaT5vm+uBTJItJBrhqGws2fnVxhWEm8Y96AZb2vFZVwiMdcKKqfVZby3/wTuEtaDbv0krQNtLJcjaOTWLHWnxJEvLWSdFgkuIDvoNKR7ZV2lsmh5UD/smStgf8TkORR59r63dp2kWAn0/Jl59ARsdXDXGCiduF3GamxglTUA+kYbkN/PBQbl6o+nNKy4Q5TI53WNmhpdsbEJWCjzT+V1ju5JejFEHIhnWyBoBUWB2NKxWaSlToI2B9E0iJ0HK68IlA7bO4X7SD8q5cZBVTKMByFxt9uQXFeZeG7QRCPIsg6bXsirnFn5028iz+RfVFe3Mavp18v1hObvH6SDTczQauuAhTwYOtphaPZj+iHbaKvKndvlOWdGoyrNxgcx+t4loyEEcEWD0Astdp0bZD39nag94PD7hnoENOC0oE6mbtyUuSCGrU6ogee8qxYAt0AP3Rq1LLaRWXqe/1rM5A9oaDNwNkWA/JWbJbZQf0vvWTZmTib3rfew== mdomsch@fedoraproject.org
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7U0WbKLL/D6iR03/vdDZJ8Lkj1jjAkindSvC4PkXVgi6qJo1YBZnIgsmoQopYcra2yzHFt58crygIh79P/rpQowWY99W+Sk4kB9UNuiAiX/LRi+1YdxwCKcRNTVOwuji6MGZoscACERmIjPY6P1oFPERoXhUkOuzPcrDK/0z/Bp9dpNRVZE/0zN6dvHA9QODLGvcFtgnX73SbZfoIbaVP/37IvOZvjGI1jxC5DwCmY+ihM13GpELP6BM8iihlnl1pjk1vtqPxD9g9Llr14Sc6cZJKl1WCulqhde4SEMOjpMJ8J8cGYBSsdh49hB36pdKQuTTnuCXpEt5Tl8PUKCrr mmcgrath@desktop.mmcgrath.net
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC3eVd6Ccegp1r1mhm7tPnlGUcw0zsAbR2p9hrFZ7RKxdIponuVV9ix4lgwpNEVDs0j4vxAApeLpJrsV8R8+YLUZO3Mzi+2s8nM8LXrKHtJT9wKKqoU3O/lC79drbWk3EMgETyP61Zpjkub0hwG2MjviPee63zCuRbxzxyalzk+AtwkRSxYaS2Ha0uKxGDiq1c/Iu6HRgm8HrtW+Pr6QbSSoHLhGUpR0HkgoC6852xXGhrRMkzXXbD9L6vaK9F39YmzD7Z8yey+xDTFW529avkEIWDeqBpbae+HjKqEQaBx71/rcmXhqKYrEagzUGpS8Bwskp3JMksd/v9tMuUhGQ2XaooCeKzvM0KnVUk/Q031ZtjNYxLpy/rEqbyt18+8wYOvVoGgnRZ/yJ/UVwYbGJrttYrrQmaJv7b357bkgDJobkIki+zGzi1xkvb85JWEt0mfh38H2vCnpwQtSAIyF/hmrS+1xsD/oAoc83IUhsVYcDhLbBEVKMX2IsJLMAPwCE6GexRYyVE5vEN4PMV9A8VmGuIC3IzkPEbStdtlbP4ttNKtfwS+MrY+ceAABDixls6xpedgT1he44R+7C1p+w4uj4TnYReLVce6+KgfJ6mz8CTXVULLWM4l2H3PylEUyoHGRDpVanGAvm7h2D0HgxErWIkjZkL79GFhzQc1xjzixQ== notting@nostromo.devel.redhat.com
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDDAeAohiRJ2v/RO7R9GS93TF92Gc9ixK6HM7wlbMdlZ4yYAbeoEX8VpeNaSTfo/Nw3zazr9VpmpHg+H70K8ljQsPgRwcgpetRVpF55M5FYjqM5oM+N94HV3nSGcnWbSIho1R31DaDH2ptxVqgh2m5DG7Bc45w9Bd4wjfdQ8nBrGv93tuH7X/cee4g6GvexLm5nXhAngdEmiyxw5MHuJAvj+54l4wMXRWpeF6XlI2iamW42nLSfRMCFkGNiXvBm8zkfkeH2L7I2cNKXXoP/cPCd3G/teIsI9FDqYpZ6CS0zMkWhlTuh7rlCjc9+nJsLdDLgwhb75skiUOOfimGvCCxWeHuCsSL+KpCu4AgI9UAVgO6xblDlmbQXxlGopep29U/s00W/0qv3Zp8Ks4Za0xHdoIwHiaLM0OYymFaNDd3ZqFG0FN23ZjcGqUmFGhGfUQRDt72+e9HtXlBJ0mUaCX9+e4wFGTVciG1/5CKsLHCaLRf+knsWXrv2zcv9BoZ9SCAK32zCZw05wjcmr7jYDCTLmtC6kEBNaOeE9Qqi2oomo4ji8ybg+Qq+1BwOtJKExvmZaooBZud0qd24HmCU0/0ysw732jGcqexzxsCR0VArd+7LKexOD7KwMW0VUss6fdOWac9gwCLx9FaKYh8mVvcQjKhKGI3aO2sXRUWSbBJw8w== ricky@alpha.rzhou.org
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAgEAxnzCHH11nDM1m7yvqo6Uanq5vcZjBcs/mr3LccxwJ59ENzSXwUgEQy/P8vby9VKMwsskoaqZcvJdOSZBFhNV970NTPb69OIXPQAl/xhaLwiJOn606fB+/S8WepeuntS0qLiebbEiA9vIQLteZ+bWl1s/didD/sFo3/wItoTGA4GuShUu1AyWJx5Ue7Y34rwGR+kIvDoy2GHUcunn2PjGt4r3v2vpiR8GuK0JRupJAGYbYCiMBDRMkR0cgEyHW6+QQNqMlA6nRJjp94PcUMKaZK6Tc+6h5v8kLLtzuZ6ZupwMMC4X8sh85YcxqoW9DynrvO28pzaMNBHm7qr9LeY9PIhXscSa35GAcGZ7UwPK4aJAAuIzCf8BzazyvUM3Ye7GPCXHxUwY0kdXk+MHMVKFzZDChNp/ovgdhxNrw9Xzcs4yw7XYambN9Bk567cI6/tWcPuYLYD4ZJQP0qSXVzVgFEPss1lDcgd0k4if+pINyxM8eVFZVAqU+BMeDC+6W8HUUPgv6LiyTWs+xTXTuORwBTSF1pOqWB4LjqsCGIiMAc6n/xdALBGUN7qsuKDU6Q7bwPppaxypi4KCvuJsqW+8sDtMUaZ34I5Zo1q7cu03wqnOljUGoAY6IDn3J66F2KlPPyb/q3PDV3WbY/jnH16L29/xUA73nFUW1p+WXutwmSU= ssmoogen@ponyo.int.smoogespace.com
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDFZ3AD/I0OfU84IrK573amZptucuBrDxHoue/c+PUsD3MGIA6QXRceq3ZkLuz25OAAu53hFxzCE4d6eVS299rVR8Cd+tVU8aqBdTHzdqv52Vs8zRfXMW69sV7fhwRLaQDcRTwY90Wmz2MbZmN996XmJDNtUIWI2mML+PBYEdO0PyiB2ttb7mmA3SwtC/rwEMJL2YHh+bTzlJ9W4BgFcFwizMXU3mk5uGp2/q3nKzEvgTROM8yWvqdM34cRYpjFKyOlpo6k3SPt76hgDUEIsAu6Ul1S0FHTCRMIihcxZOSN4frMtXVjX0NhW9mKcn1IRBpzd0Yon/gPB8OJ31ojIIop spot@pterodactyl
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAEAQDfgKJEBuHFlFc8/IHDeIpdprNnAFQHkicXAFfAzIJSkhUaOJFjsulmgPZn2TJJpYqFAxYUjhWJOdrOwx7AHSg6gWu4TT4a0sTay+Z0eqZOShf5UL/M587DxJk1JZU8g812yDKZMc7Sv7K6zdteONnCvno1kALSg0F2MVMJXFjE/tSontkIRH6IuG19R19NGEj1h56uGwdfe78xjOmv5wk6RZBjaOKqiPSQKNqCKbY9Kyz6yrem2M5uxRK45u3wSPJdmopo8l/nwf0p6ydrUSL5C/aXGh7LPqh31eTBDQUbWHw9LQMk1SibMGQPwJt59lLMlzc5OQZAJEbadsDAgl6VVA6MZkBQROiK9E087kvPesMoGWE0KBgvTqzpBZj0uHATP9i097dv80gjupMyaePsnQOxk0wRho9nRkxRo18Drt3QPVND4YGHzahMe/YR2N83MkbnGoP8K+GsFhLMAp3NKh6yUofFxTgRiB6H8ULKf3CV+hlk0Z9RJR3CpgMTKILYHPlaleJqoP6sXg6tJxI0rUE+0jUKvaTj+N2gX0MjKfUINk5mTbjD2mdVrPtKOBvos2luNhY5nTDpJuAHQqnFHPlPw8l3lXC2VBWOjqfTeeS+qD7ArKe6F7IO5ZNxJ2mTUuodhaPySta1MS37DWoz6UqeJu+wKIsHok90+EU4aAvUABh3RXSQA1E3IaxkooMhhrdIQO6K4L0M+CZ7lP35sW5pnwsN4sFlPec9Xn5e15LTlb9yFlx7Nm4DE2SX1s9QyMRE7z0LNO0X7wiihojuyQM6OQwc+ZaaDw5HerBisX/3LcC9osVLQQg1pt91YcCczUQ08qfUJV6aOD962K+EGzVFQGGauJDzgEH9BHQg7QwCWr0f3mu8/TNBzys2c0YsywDUc3AT1KP6TEJcR/dy6WbhJD3qyO/BLfCzRrHUOIaz+WbwmfTX8tGEQnVV5sEkZ39PWA1hRQ83b3MNV8cRJl+h/FnTk62yM4ZqGu73+x8JiEG3HAJp9/xYfNSwg8++PojJBXe+yM6DrTh5fTnBhxatLEKB658p8jTqJtF4+YD9D8+L39xEns6GQ7FphNqTC6IcpXyqq+zNuzF7vs/T+5n7978dUs3sK6YpBX4BlDxK6MsRF1WYqajEVeBJEMwdX2rfGkN9B5GfWdmdrzBjZQ6yyvlx5Dg++qgxpMiVOXSnw5v7H03PrT1we9wKre/2SQ1A2Oq/UDt/7tR2cMLoaPDNBpFT1W44LJB7o9iDT9YHUG3dC7R8JoeJ5YjyFmxbUQ5xg1oHnrBaPrGCuEYdQWhuDmp9Px2yRu8Agxzr9rNCZ/W8nWJVmvwvlXoldrum2rAECx0wiWqBhQ/+eX65 badger@unaka.lan
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAmS3g5fSXizcCqKMI1n5WPFrfMyu7BMrMkMYyck07rB/cf2orO8kKj5schjILA8NYJFStlv2CGRXmQlendj523FPzPmzxvTP/OT4qdywa4LKGvAxOkRGCMMxWzVFLdEMzsLUE/+FLX+xd1US9UPLGRsbMkdz4ORCc0G8gqTr835H56mQPI+/zPFeQjHoHGYtQA1wnJH/0LCuFFfU82IfzrXzFDIBAA5i2S+eEOk7/SA4Ciek1CthNtqPX27M6UqkJMBmVpnAdeDz2noWMvlzAAUQ7dHL84CiXbUnF3hhYrHDbmD+kEK+KiRrYh3PT+5YfEPVI/xiDJ2fdHGxY7Dr2TQ== root@lockbox01.phx2.fedoraproject.org
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDnI+8JwOdXUO6T7gI6oXHUG4oQJsMsCwEGnRBjU4po93i9g9C5sShgqJMvBI2wzDdgL/xOFJeHuo+WTP6W/oiv8KHEco3wXSI4OlsPanORGn2TajwzEaYlfxJNlQPvmuxFxcrfkPF8cOGa0DRNTLZK7abO3tKfZV7IJyNX3Z0LFZ+VwcJBy1ryg0GonMYkjEreiAgJyGCJ1crnKiRMPSu/QONb0MTytMlJRtc/Lfi/KkT8C/LQ/e3zA5DWo9Ykb79M1k4MmtmE8mIUlWUQ9hagMhCj3/6Uze04H48fpYzDPr6AHU6rqxLTdBGgLCeSIUkE1ReZpAk2E+QAB/fTliydT93ig5i2RDt3YHcAa994C85bc0D+A21u0H/LzR1wbIItx+MpOkZePHevDSe4y8ULx0cUiEHxmTTZ2C6j+1EqaP5PeWEqlU3iXTgiqOzTEwfEaH7nScBpGbFmPnzdgO7xLuKebnvWjGu6d8Jd41KN5dN5WNMJaNEXBl65ySfeQYCCX/JZ5bfvC/07zAKj0/RKOFMyS07rb0rKh3EBcRx/tHgCq0hJ23NwfkShchj7v2Zh+JjgHKBv1+ZiIwnx2/WuYwvKwyqXZ5Jpy+lgxcC7l11w1ZN3tCd66E6NdU8AJIOz0n+trIorsipQBY0In3ZBLUU0PUYwno73e7ZabgcE7Q== patrick-new@fedora.thuis.local
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDep2yv5JTFJ0IdCiqumMFfNdu3H5Ej/rVVDEotS+3n5+1plKvajPXOA9c/0RLrBC/vL8LqDVrxBaiCvPFCIRN9a3Y1ru3Dwg++NmcMEvYq/H3SMHhZsH1yjlCD2r38znpX+D+CBMQnn7F5jqYFAnaMeESrgGGFFANfJN9HdHjb6eIrBGJyUOJ2JnZnhLFT5y7ru2xRMDmgsO3U+crmecYAeX/4iUadUxit36defAniVOA/3Jwva4Gjz73vIDTHNy1mxB8Y2ZBBl9WcL4qHc6wnAyFaiULcT5++Gdjn+MIyL86G/7mIIgC+fcVk/5JrdwMBiAZYMUZO/pzPobOe0spF threebean@marat
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC2xAeq5uO72kY4mSFgFl9ZSveiAqe4tUv8hemrxwZH+w24RFOGrW1nOV+hjQhRpYVNwvqJkrd9N7VY/HXkd9df2AgQyYoiVfeMPTA7lB0/e/S1Bd6XGdWudvqRU1O6Rug0j3RQOuz7WDJgnanBVcBl8+X7EaPGpv9aILgh6CJDOVAO2GgaFdzI7CHtR99CMqNG7BsQF8C9Y8ALK+8HOPRE0R1wzgaAw85HTo0gyIWcrZqr4HI/QDuLjUQ6AZSgzE7dTiwZuFnUjLBnL0YP1bxJglt9IFx6r6jvdp/yMD+Bn/91WvmBL/AD+GIQ/ZydoeLo+JQW22ibiX/SzdAE4Cd3 pingou@FedoraProject
#codeblock
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAACAEAstHxky7hl1inyHBy+q/9M+Aen2HSfy8IoW+sAO6HSuHEUT7qWB8AlSNjHhahjXx7sy/BUkUed+NB/177rjlThokZDJ0yoM9KKymp26ETGaamBSkWBxZatTj96BWfD0P2K9jc/9vxtgKBq3VK9UaOt6VtJ9q6mKY3DdWLZn+K6iGQAKMCAgd8cCMgD6epBB5/litz7WhYv+aYTyjZGUGbBojQUiWgXDv9lR7p0w+VP7pnZEeb3//k4pZhsPrKFwwRVRLxBvWgVKNvA6nMXmsdikHCLLj8YAevhEY1xAba+iCKOpTqT7Bu+1Fnb9St8u5iDod21gRmN7MGGWYsO+Iu2MNAW9sw2nsA/sdNR0HEEgBqJLhERjGv399fWKyiZaF90n59lg8Pb6EzE6wHRs6rSB+9uKApBzPk99BEHLvC6mhn6RjrOC+TWSTcmXojAwQYCadqIdgWUaBsxaugKEXBFcmRuDWtpDfsqmM1kjeGU6MiaMlqPW0KjsMaVVChLO5ZvB/T7qW4wr5ZjLri475MuHocCMP0ECSUk7I3YW2h8RU6FEFmTpuULFRQo01iPreY5XJ7l0+xy2eggAWo+X2h3nGjXhCPOelBg+LYe0WOmPgB5oc1m5HZtFTcFzYbhAE+xQKlbwNeYT8HmNmEMhPjVoNyOOV7NAap+ueS2u/7li5D59O5Iy8aa5n/WiuYfkqH4pG796nFyLr5L/LVudzyaYFb/Gk8C1j/NAWYw53D/9aOA277HHe5t0/daJhbo98u0asF5mvPld3swPuPqkEZzgUfmNgH5CkvcQcMzaOvj6qr6xNmQfgsHroCShb46kplQ2uSf1pMAqsjN7jGhk6l+Bu6hKHnJKhZJVLiuAZtgYvkCB1ahaO3wRVozA1VKCAlqHOqoCq4YLIobUL95H08Kwcz7vIRIadX1TkOoLb2EwPkE/xrhDp4BySh+j6YNklSBkiRHvJMBNnRIj8NTRjYyj2o1Om7kJ770lEdryg2og8QBaFWCmFkwzg1QVrBOuu0dN7kt2l7VI7Ib4lavKSVTrqUdxdSbthUlu/b4Qif+pbyEtUFgykRsHVs+5Ofg7FZpsgCJ8rLFjzeVF/hAYX7t3XaIPLu+DL8kzamb/CRy1b7+iAw9nJbd7ED2SGyU6+c2coMPG23y6+YxgEmNG/rkCLCypkEEDOZe4DuMerZQ/RxMo06+glC6HC/3VN2dHlVLtEEV33B04/6Z0plAhqtjG7PVs08f8a5msV/VYn5ifa4z0oIXX1r5CIg3Ejp1JguLhBHpWa7YbS2Mwu6GAbD+hQfCYrsUkFonoOLu5czpITLo7ceJFTQmAt7OxZEoZBfmtYfzADQsQVYQb6J4QwvM3iKJOn30dgtYnJOVlDZEn+0fivedxoBAt9jHJ8lVp2ov/dOFnimi5V+2QIMB0fKTkChsk10zsDZ/KUk6zfijjEju0WfjRHCd357KswNv3aXHazfRIw77S2UOenD+xmUDZ6WgnxservUSDNDz7NldLf/gdPOMO4uSwKZixzsoCNioeLEmQv4gomNK7DyZBLMHLlWlbliqP+QWuIJO1rfoH2vaxzzA7l5tJW1gfnxm87RrrwIf9v5kpdJM6gQZxqmBCRsKQd5VkrEJ/xaFfkv080pWNV0drWTZW8fAAgfUNYB260Hyk3rHsjQlVtQxGJ1aAcgjMi3eGKQMwptbUMYHqct75czX6xp6zgXPiC/glX6AtuiZQ5bOI07imil20ien/ks/dnel8L+dmYDasL9m0B2jZ3lbl3eR1Dy7UhqGyERx//vYQapEBuwFcqQ9UdIWCGGG2Pte1I39BSehUUGSCOOD38a/GCu0l7OWZKdwq80MK/Ixgz4neiZQZ7MD2wPy6vk6Num18PZPN7OynMrI2UG5MViQ0GAhRgxwbUCvc7uKnGRqZo9q2mCabCxLbv+hJ4bppxpHHJxMDDXilTKMfZb
0YRbvjBUi7LFKLN3MBMK2U1jHE+PjBgweqF8Jtuw04CQMxK3unajZOVkYAIq8IdMbw0oBVP4++eGB9z0x1eH+IsqL6IgknbbyoMgQqW9/8atm8HW2QYCX47oPd4FHs8rgJZk3bz8MwN3tp8WCRtYnJuwkWGWSq77ans0Ycl/tUfSSwUjnSvMsJnuSbxvdX0XbP5eRWikk0pJz5lM9sjYFOPHrQ44/U254yBa0N6UhyNTQnMGzRvY+fADE49b10hXZwCCrxpY9KvGr1XNJMnMcUke+4p9RS5LUwcZ8A6v7oWtZaZwnuBzvKk+HAn2gevD7Stjto+TnRCx1qcbx8iOhAEC6nvbLl+U313TmawrO/usrI5w3EFKP/4BnlKJDtNBeklJ0MpU3R1fmisqfegjuBW2bbaxq8Uo6m7uqPsYuAl7E6rOyZHLbtA8szvbQ46MSqAHezqxHJajWn2oZXMtbddgO5vlkxbRp3SSVKaPOeIj3XOGl78Owp4gFNRE0RY2EuUvrwUhXZR4wx1VHYjS6o9HAwOx3dH+pf1OiblUEanLQ9HLuOBkLhP8wn1M2slsSw+A1gyuI0ayjRujYFXdw6Mqp6XKTdU8vNue2c3d0I+TMifBypP0oJtxXmEoPp/VsU9yLKA2FF7Xvv/Xq1gtZcuZWAbSwMok/ENY1xeIFyjV+0yBidmax3jaf9yus/XEpyeBS3iIz63ymU10Kb2vrWjubg/sa2yd+q0y96dLdDRbnbwGwMmg6mXvTlVXf8c= ricky@padlock01.home.elrod.me
#jstanley
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDi5bNJQBrvT/YuvfLO0y6smZW5N+946uISkzmDi9myffLgHAZP4nBGeH/4GcB5ns9HJ19xVtbIwqOz4QwIqKh4gKU7DgaqND2Iu0bUUFL1KXPLGyAIW+9N3yHB+nKkH31alDnF4dpKkvO63DRkqh4ptxwEQbZDCFqn+vXuMnG4cPmDEweR3QZUt5m0Vc7HXzbehZxjUZ3xRWvT/pu+khBhJcRFkLlA60Fnqv7Q+MQP1C0Cpf3hiX1LcXUogXkNooAqx1YYRd8VqvI8e9yQW+a99x8FftnmXKlGCxP33ng6+U6Y2H7u3cRDrlRTbWqkry4SuUYo+6MtvZVgL0fw6PsZ jstanley@hawtness.rmrf.net
#kevin
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJH1lA7WHRCbaFtvzbw0HxHYJstZjuXhax1+eL+SUJ5fFRGosEc4fLrSCP0gSFDfXmNzuspoBgcQTqnNO8FdIUwkJLDEu0vTQls1aT9YUXb+RVwKB7ULA3b1dqFkmOgLEjTJL9AplK4OJ9Su0kq6QBV4mXCxMsgEML/gn6r8muZmu2L/LdzUnxKKggyq7O5q1K/eW5Yy21fpvbHt2UPQX1f6gt4ty7E9Nnuhi7SHCI7fNIa+kHyIesfTm/SzeK/PY9rDwZKjuyS8o22GJXGEScJomK1cjMESH/J+t8Hffaj88BjGHNczvcnXAjq6y73VJQ9DiGLD4zmFquQMxDu0Tf kevin@jelerak.scrye.com
#lmacken
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDefONrBaBJlCxKtDwkYWVhf96lMhRQfwVJyBoBd4Pk6TqKMlAu2eST1xRZlV4cJSxAWgZpOaFgqJ5EGd6mq8PvVk+mKXdtX7CAoWm4f3c6otUFsFDCTw3gVvYSlEk23XBHuACsbAVNL4HmP+9C7PxQBePukbMBFD2smsyQkPcX7lZw+lDJW5lOTz3dHAA92bcopDycxRDI99gGkawzjlmxpm2C9nhRabKS6mpGw3N64d8hwHkkFbtHY7rS0/0Cka0geYYYv0NVki1IIctkhZE9LndcWbVcVe1pIlR0RyW2sorfgCgoa5fRZZhukUCtspdv981h/0b87RpRVUJKuRd1 lmacken@tomservo
#mdomsch
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCsmLoA/97DrE7roCHOY7NdB5TV/g7oxAsk74HgHcFRYAbn/rkoa7r9ZsgR7qzwd6Z+5Z77qFqvl1Bs3XtJf+1vJ3kwdcNFdKTw1DgTdE/rNPI7QzUgXKKKv/WCiU6UDBX4HHWq8Yuq4tkr/yepS8sLzMz2e0pHU4uWFQuvr5ttP9ABGohhDnPr0IcaT5vm+uBTJItJBrhqGws2fnVxhWEm8Y96AZb2vFZVwiMdcKKqfVZby3/wTuEtaDbv0krQNtLJcjaOTWLHWnxJEvLWSdFgkuIDvoNKR7ZV2lsmh5UD/smStgf8TkORR59r63dp2kWAn0/Jl59ARsdXDXGCiduF3GamxglTUA+kYbkN/PBQbl6o+nNKy4Q5TI53WNmhpdsbEJWCjzT+V1ju5JejFEHIhnWyBoBUWB2NKxWaSlToI2B9E0iJ0HK68IlA7bO4X7SD8q5cZBVTKMByFxt9uQXFeZeG7QRCPIsg6bXsirnFn5028iz+RfVFe3Mavp18v1hObvH6SDTczQauuAhTwYOtphaPZj+iHbaKvKndvlOWdGoyrNxgcx+t4loyEEcEWD0Astdp0bZD39nag94PD7hnoENOC0oE6mbtyUuSCGrU6ogee8qxYAt0AP3Rq1LLaRWXqe/1rM5A9oaDNwNkWA/JWbJbZQf0vvWTZmTib3rfew== mdomsch@fedoraproject.org
#mmcgrath
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7U0WbKLL/D6iR03/vdDZJ8Lkj1jjAkindSvC4PkXVgi6qJo1YBZnIgsmoQopYcra2yzHFt58crygIh79P/rpQowWY99W+Sk4kB9UNuiAiX/LRi+1YdxwCKcRNTVOwuji6MGZoscACERmIjPY6P1oFPERoXhUkOuzPcrDK/0z/Bp9dpNRVZE/0zN6dvHA9QODLGvcFtgnX73SbZfoIbaVP/37IvOZvjGI1jxC5DwCmY+ihM13GpELP6BM8iihlnl1pjk1vtqPxD9g9Llr14Sc6cZJKl1WCulqhde4SEMOjpMJ8J8cGYBSsdh49hB36pdKQuTTnuCXpEt5Tl8PUKCrr mmcgrath@desktop.mmcgrath.net
#notting
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC3eVd6Ccegp1r1mhm7tPnlGUcw0zsAbR2p9hrFZ7RKxdIponuVV9ix4lgwpNEVDs0j4vxAApeLpJrsV8R8+YLUZO3Mzi+2s8nM8LXrKHtJT9wKKqoU3O/lC79drbWk3EMgETyP61Zpjkub0hwG2MjviPee63zCuRbxzxyalzk+AtwkRSxYaS2Ha0uKxGDiq1c/Iu6HRgm8HrtW+Pr6QbSSoHLhGUpR0HkgoC6852xXGhrRMkzXXbD9L6vaK9F39YmzD7Z8yey+xDTFW529avkEIWDeqBpbae+HjKqEQaBx71/rcmXhqKYrEagzUGpS8Bwskp3JMksd/v9tMuUhGQ2XaooCeKzvM0KnVUk/Q031ZtjNYxLpy/rEqbyt18+8wYOvVoGgnRZ/yJ/UVwYbGJrttYrrQmaJv7b357bkgDJobkIki+zGzi1xkvb85JWEt0mfh38H2vCnpwQtSAIyF/hmrS+1xsD/oAoc83IUhsVYcDhLbBEVKMX2IsJLMAPwCE6GexRYyVE5vEN4PMV9A8VmGuIC3IzkPEbStdtlbP4ttNKtfwS+MrY+ceAABDixls6xpedgT1he44R+7C1p+w4uj4TnYReLVce6+KgfJ6mz8CTXVULLWM4l2H3PylEUyoHGRDpVanGAvm7h2D0HgxErWIkjZkL79GFhzQc1xjzixQ== notting@nostromo.devel.redhat.com
#ricky
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDDAeAohiRJ2v/RO7R9GS93TF92Gc9ixK6HM7wlbMdlZ4yYAbeoEX8VpeNaSTfo/Nw3zazr9VpmpHg+H70K8ljQsPgRwcgpetRVpF55M5FYjqM5oM+N94HV3nSGcnWbSIho1R31DaDH2ptxVqgh2m5DG7Bc45w9Bd4wjfdQ8nBrGv93tuH7X/cee4g6GvexLm5nXhAngdEmiyxw5MHuJAvj+54l4wMXRWpeF6XlI2iamW42nLSfRMCFkGNiXvBm8zkfkeH2L7I2cNKXXoP/cPCd3G/teIsI9FDqYpZ6CS0zMkWhlTuh7rlCjc9+nJsLdDLgwhb75skiUOOfimGvCCxWeHuCsSL+KpCu4AgI9UAVgO6xblDlmbQXxlGopep29U/s00W/0qv3Zp8Ks4Za0xHdoIwHiaLM0OYymFaNDd3ZqFG0FN23ZjcGqUmFGhGfUQRDt72+e9HtXlBJ0mUaCX9+e4wFGTVciG1/5CKsLHCaLRf+knsWXrv2zcv9BoZ9SCAK32zCZw05wjcmr7jYDCTLmtC6kEBNaOeE9Qqi2oomo4ji8ybg+Qq+1BwOtJKExvmZaooBZud0qd24HmCU0/0ysw732jGcqexzxsCR0VArd+7LKexOD7KwMW0VUss6fdOWac9gwCLx9FaKYh8mVvcQjKhKGI3aO2sXRUWSbBJw8w== ricky@alpha.rzhou.org
#skvidal
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDjlnCEiFMrKpkiIBjs5IW1+RXDald3aKvTszj0hUw9Gl6w3vt3RAiqTD/XRKcNdP0+pVIK/I4KexKfZzemNZ8UYmZ+a9EK+Gj7OQbJv7TQDeR0zyJ8ZgFXaWoN+CnWXLO2mp9poysUR6CILjaDJt4GDxJaD+bebRu+zxUQSlgrjObhIUTSfwsEJu++zK+fy4+xSEMG7SANEJHd+zOAw6+isLnnbp8qY2fs3reKpc8XPkyJscLU4BQV2cGXwlPUhzPVv/itUUV/uWHeAqoz2i5XG4C0/BXk6D85qkGIyE08Nl3COxn6giivrdTIH6W4dUtBdYgTMZ3RgMHL9ClLpS17 skvidal@opus
#smooge
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAgEAxnzCHH11nDM1m7yvqo6Uanq5vcZjBcs/mr3LccxwJ59ENzSXwUgEQy/P8vby9VKMwsskoaqZcvJdOSZBFhNV970NTPb69OIXPQAl/xhaLwiJOn606fB+/S8WepeuntS0qLiebbEiA9vIQLteZ+bWl1s/didD/sFo3/wItoTGA4GuShUu1AyWJx5Ue7Y34rwGR+kIvDoy2GHUcunn2PjGt4r3v2vpiR8GuK0JRupJAGYbYCiMBDRMkR0cgEyHW6+QQNqMlA6nRJjp94PcUMKaZK6Tc+6h5v8kLLtzuZ6ZupwMMC4X8sh85YcxqoW9DynrvO28pzaMNBHm7qr9LeY9PIhXscSa35GAcGZ7UwPK4aJAAuIzCf8BzazyvUM3Ye7GPCXHxUwY0kdXk+MHMVKFzZDChNp/ovgdhxNrw9Xzcs4yw7XYambN9Bk567cI6/tWcPuYLYD4ZJQP0qSXVzVgFEPss1lDcgd0k4if+pINyxM8eVFZVAqU+BMeDC+6W8HUUPgv6LiyTWs+xTXTuORwBTSF1pOqWB4LjqsCGIiMAc6n/xdALBGUN7qsuKDU6Q7bwPppaxypi4KCvuJsqW+8sDtMUaZ34I5Zo1q7cu03wqnOljUGoAY6IDn3J66F2KlPPyb/q3PDV3WbY/jnH16L29/xUA73nFUW1p+WXutwmSU= ssmoogen@ponyo.int.smoogespace.com
#spot
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDFZ3AD/I0OfU84IrK573amZptucuBrDxHoue/c+PUsD3MGIA6QXRceq3ZkLuz25OAAu53hFxzCE4d6eVS299rVR8Cd+tVU8aqBdTHzdqv52Vs8zRfXMW69sV7fhwRLaQDcRTwY90Wmz2MbZmN996XmJDNtUIWI2mML+PBYEdO0PyiB2ttb7mmA3SwtC/rwEMJL2YHh+bTzlJ9W4BgFcFwizMXU3mk5uGp2/q3nKzEvgTROM8yWvqdM34cRYpjFKyOlpo6k3SPt76hgDUEIsAu6Ul1S0FHTCRMIihcxZOSN4frMtXVjX0NhW9mKcn1IRBpzd0Yon/gPB8OJ31ojIIop spot@pterodactyl
#toshio
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAEAQDfgKJEBuHFlFc8/IHDeIpdprNnAFQHkicXAFfAzIJSkhUaOJFjsulmgPZn2TJJpYqFAxYUjhWJOdrOwx7AHSg6gWu4TT4a0sTay+Z0eqZOShf5UL/M587DxJk1JZU8g812yDKZMc7Sv7K6zdteONnCvno1kALSg0F2MVMJXFjE/tSontkIRH6IuG19R19NGEj1h56uGwdfe78xjOmv5wk6RZBjaOKqiPSQKNqCKbY9Kyz6yrem2M5uxRK45u3wSPJdmopo8l/nwf0p6ydrUSL5C/aXGh7LPqh31eTBDQUbWHw9LQMk1SibMGQPwJt59lLMlzc5OQZAJEbadsDAgl6VVA6MZkBQROiK9E087kvPesMoGWE0KBgvTqzpBZj0uHATP9i097dv80gjupMyaePsnQOxk0wRho9nRkxRo18Drt3QPVND4YGHzahMe/YR2N83MkbnGoP8K+GsFhLMAp3NKh6yUofFxTgRiB6H8ULKf3CV+hlk0Z9RJR3CpgMTKILYHPlaleJqoP6sXg6tJxI0rUE+0jUKvaTj+N2gX0MjKfUINk5mTbjD2mdVrPtKOBvos2luNhY5nTDpJuAHQqnFHPlPw8l3lXC2VBWOjqfTeeS+qD7ArKe6F7IO5ZNxJ2mTUuodhaPySta1MS37DWoz6UqeJu+wKIsHok90+EU4aAvUABh3RXSQA1E3IaxkooMhhrdIQO6K4L0M+CZ7lP35sW5pnwsN4sFlPec9Xn5e15LTlb9yFlx7Nm4DE2SX1s9QyMRE7z0LNO0X7wiihojuyQM6OQwc+ZaaDw5HerBisX/3LcC9osVLQQg1pt91YcCczUQ08qfUJV6aOD962K+EGzVFQGGauJDzgEH9BHQg7QwCWr0f3mu8/TNBzys2c0YsywDUc3AT1KP6TEJcR/dy6WbhJD3qyO/BLfCzRrHUOIaz+WbwmfTX8tGEQnVV5sEkZ39PWA1hRQ83b3MNV8cRJl+h/FnTk62yM4ZqGu73+x8JiEG3HAJp9/xYfNSwg8++PojJBXe+yM6DrTh5fTnBhxatLEKB658p8jTqJtF4+YD9D8+L39xEns6GQ7FphNqTC6IcpXyqq+zNuzF7vs/T+5n7978dUs3sK6YpBX4BlDxK6MsRF1WYqajEVeBJEMwdX2rfGkN9B5GfWdmdrzBjZQ6yyvlx5Dg++qgxpMiVOXSnw5v7H03PrT1we9wKre/2SQ1A2Oq/UDt/7tR2cMLoaPDNBpFT1W44LJB7o9iDT9YHUG3dC7R8JoeJ5YjyFmxbUQ5xg1oHnrBaPrGCuEYdQWhuDmp9Px2yRu8Agxzr9rNCZ/W8nWJVmvwvlXoldrum2rAECx0wiWqBhQ/+eX65 badger@unaka.lan
#ansible root key
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAmS3g5fSXizcCqKMI1n5WPFrfMyu7BMrMkMYyck07rB/cf2orO8kKj5schjILA8NYJFStlv2CGRXmQlendj523FPzPmzxvTP/OT4qdywa4LKGvAxOkRGCMMxWzVFLdEMzsLUE/+FLX+xd1US9UPLGRsbMkdz4ORCc0G8gqTr835H56mQPI+/zPFeQjHoHGYtQA1wnJH/0LCuFFfU82IfzrXzFDIBAA5i2S+eEOk7/SA4Ciek1CthNtqPX27M6UqkJMBmVpnAdeDz2noWMvlzAAUQ7dHL84CiXbUnF3hhYrHDbmD+kEK+KiRrYh3PT+5YfEPVI/xiDJ2fdHGxY7Dr2TQ== root@lockbox01.phx2.fedoraproject.org
@@ -1,28 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIEsTCCA5mgAwIBAgIQBOHnpNxc8vNtwCtCuF0VnzANBgkqhkiG9w0BAQsFADBs
MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
d3cuZGlnaWNlcnQuY29tMSswKQYDVQQDEyJEaWdpQ2VydCBIaWdoIEFzc3VyYW5j
ZSBFViBSb290IENBMB4XDTEzMTAyMjEyMDAwMFoXDTI4MTAyMjEyMDAwMFowcDEL
MAkGA1UEBhMCVVMxFTATBgNVBAoTDERpZ2lDZXJ0IEluYzEZMBcGA1UECxMQd3d3
LmRpZ2ljZXJ0LmNvbTEvMC0GA1UEAxMmRGlnaUNlcnQgU0hBMiBIaWdoIEFzc3Vy
YW5jZSBTZXJ2ZXIgQ0EwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2
4C/CJAbIbQRf1+8KZAayfSImZRauQkCbztyfn3YHPsMwVYcZuU+UDlqUH1VWtMIC
Kq/QmO4LQNfE0DtyyBSe75CxEamu0si4QzrZCwvV1ZX1QK/IHe1NnF9Xt4ZQaJn1
itrSxwUfqJfJ3KSxgoQtxq2lnMcZgqaFD15EWCo3j/018QsIJzJa9buLnqS9UdAn
4t07QjOjBSjEuyjMmqwrIw14xnvmXnG3Sj4I+4G3FhahnSMSTeXXkgisdaScus0X
sh5ENWV/UyU50RwKmmMbGZJ0aAo3wsJSSMs5WqK24V3B3aAguCGikyZvFEohQcft
bZvySC/zA/WiaJJTL17jAgMBAAGjggFJMIIBRTASBgNVHRMBAf8ECDAGAQH/AgEA
MA4GA1UdDwEB/wQEAwIBhjAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIw
NAYIKwYBBQUHAQEEKDAmMCQGCCsGAQUFBzABhhhodHRwOi8vb2NzcC5kaWdpY2Vy
dC5jb20wSwYDVR0fBEQwQjBAoD6gPIY6aHR0cDovL2NybDQuZGlnaWNlcnQuY29t
L0RpZ2lDZXJ0SGlnaEFzc3VyYW5jZUVWUm9vdENBLmNybDA9BgNVHSAENjA0MDIG
BFUdIAAwKjAoBggrBgEFBQcCARYcaHR0cHM6Ly93d3cuZGlnaWNlcnQuY29tL0NQ
UzAdBgNVHQ4EFgQUUWj/kK8CB3U8zNllZGKiErhZcjswHwYDVR0jBBgwFoAUsT7D
aQP4v0cB1JgmGggC72NkK8MwDQYJKoZIhvcNAQELBQADggEBABiKlYkD5m3fXPwd
aOpKj4PWUS+Na0QWnqxj9dJubISZi6qBcYRb7TROsLd5kinMLYBq8I4g4Xmk/gNH
E+r1hspZcX30BJZr01lYPf7TMSVcGDiEo+afgv2MW5gxTs14nhr9hctJqvIni5ly
/D6q1UEL2tU2ob8cbkdJf17ZSHwD2f2LSaCYJkJA69aSEaRkCldUxPUd1gJea6zu
xICaEnL6VpPX/78whQYwvwt/Tv9XBZ0k7YXDK/umdaisLRbvfXknsuvCnQsH6qqF
0wGjIChBWUMo0oHjqvbsezt3tkBigAVBRQHvFwY+3sAzm2fTYS5yh+Rp/BIAV0Ae
cPUeybQ=
-----END CERTIFICATE-----
@@ -1,2 +0,0 @@
[Boto]
https_validate_certificates = False
@@ -1,60 +0,0 @@
[backend]

# URL where are results visible
# default is http://copr
results_baseurl=http://copr-be.cloud.fedoraproject.org/results

# ??? What is this
# default is http://coprs/rest/api
#frontend_url=http://copr-fe.cloud.fedoraproject.org/backend
frontend_url=http://172.16.5.31/backend

# must have same value as BACKEND_PASSWORD from have frontend in /etc/copr/copr.conf
# default is PASSWORDHERE but you really should change it. really.
frontend_auth={{ copr_backend_password }}

# path to ansible playbook which spawns builder
# see /usr/share/copr*/playbooks/ for examples
# default is /etc/copr/builder_playbook.yml
spawn_playbook=/home/copr/provision/builderpb.yml

# path to ansible playbook which terminate builder
# default is /etc/copr/terminate_playbook.yml
terminate_playbook=/home/copr/provision/terminatepb.yml

terminate_vars=vm_name

# directory where jobs are stored
# no defaults
jobsdir=/var/lib/copr/jobs

# directory where results are stored
# should be accessible from web using 'results_baseurl' URL
# no default
destdir=/var/lib/copr/public_html/results

# default is 10
sleeptime=30

# default is 8
num_workers=8

# path to log file
# default is /var/log/copr/backend.log
logfile=/var/log/copr/backend.log

# default is /var/log/copr/workers/
worker_logdir=/var/log/copr/workers/

# exit on worker failure
# default is false
#exit_on_worker=false

# publish fedmsg notifications from workers if true
# default is false
#fedmsg_enabled=false
fedmsg_enabled=true

[builder]
# default is 1800
timeout=3600
@@ -1,57 +0,0 @@
|
||||
[backend]
|
||||
|
||||
# URL where are results visible
|
||||
# default is http://copr
|
||||
results_baseurl=http://copr-be-dev.cloud.fedoraproject.org/results
|
||||
|
||||
# ??? What is this
|
||||
# default is http://coprs/rest/api
|
||||
frontend_url=http://copr-fe-dev.cloud.fedoraproject.org/backend
|
||||
|
||||
# must have same value as BACKEND_PASSWORD from have frontend in /etc/copr/copr.conf
|
||||
# default is PASSWORDHERE but you really should change it. really.
|
||||
frontend_auth=PASSWORDHERE
|
||||
|
||||
# path to ansible playbook which spawns builder
|
||||
# see /usr/share/copr*/playbooks/ for examples
|
||||
# default is /etc/copr/builder_playbook.yml
|
||||
spawn_playbook=/home/copr/provision/builderpb.yml
|
||||
|
||||
# path to ansible playbook which terminates the builder
|
||||
# default is /etc/copr/terminate_playbook.yml
|
||||
terminate_playbook=/home/copr/provision/terminatepb.yml
|
||||
|
||||
# directory where jobs are stored
|
||||
# no default
|
||||
jobsdir=/var/lib/copr/jobs
|
||||
|
||||
# directory where results are stored
|
||||
# should be accessible from the web via the 'results_baseurl' URL
|
||||
# no default
|
||||
destdir=/var/lib/copr/public_html/results
|
||||
|
||||
# default is 10
|
||||
sleeptime=30
|
||||
|
||||
# default is 8
|
||||
num_workers=5
|
||||
|
||||
# path to log file
|
||||
# default is /var/log/copr/backend.log
|
||||
logfile=/var/log/copr/backend.log
|
||||
|
||||
# default is /var/log/copr/workers/
|
||||
worker_logdir=/var/log/copr/workers/
|
||||
|
||||
# exit on worker failure
|
||||
# default is false
|
||||
#exit_on_worker=false
|
||||
|
||||
# publish fedmsg notifications from workers if true
|
||||
# default is false
|
||||
#fedmsg_enabled=false
|
||||
|
||||
|
||||
[builder]
|
||||
# default is 1800
|
||||
timeout=3600
|
||||
@@ -5,6 +5,6 @@ if [ -f /etc/bashrc ]; then
|
||||
. /etc/bashrc
|
||||
fi
|
||||
|
||||
if [ -f /home/copr/cloud/ec2rc.sh ]; then
|
||||
. /home/copr/cloud/ec2rc.sh
|
||||
if [ -f /srv/copr-work/copr/cloud/ec2rc.sh ]; then
|
||||
. /srv/copr-work/copr/cloud/ec2rc.sh
|
||||
fi
|
||||
|
||||
@@ -1,5 +0,0 @@
|
||||
#!/usr/bin/bash
|
||||
|
||||
source /home/copr/cloud/ec2rc.sh
|
||||
/home/copr/delete-forgotten-instances.pl
|
||||
|
||||
@@ -1,28 +0,0 @@
|
||||
#!/usr/bin/perl
|
||||
# this script queries all running VMs and terminates those
|
||||
# which were not started by a currently running ansible script
|
||||
|
||||
while (chomp($a = qx(ps ax |grep -v 'sh -c ps ax' |grep /home/copr/provision/builderpb.yml | grep -v grep))) {
|
||||
# a VM is being started, so we cannot determine the correct list of running VMs
|
||||
sleep 5;
|
||||
}
|
||||
|
||||
#print qx(ps ax |grep ' 172.16.3.' |awk '{ print \$33 }');
|
||||
@IPs = split('\s+', qx(ps ax |grep ' 172.16.3.' |awk '{ print \$33 }'));
|
||||
|
||||
#print "Running instances\n";
|
||||
#print join(", ", @IPs), "\n";
|
||||
for my $i (@IPs) {
|
||||
$check{$i} = 1;
|
||||
}
|
||||
|
||||
@instances = split('\n', qx(/bin/euca-describe-instances));
|
||||
@TO_DELETE = ();
|
||||
for my $i (@instances) {
|
||||
my @COLUMNS = split('\s+', $i);
|
||||
next if $COLUMNS[0] eq 'RESERVATION';
|
||||
#print $COLUMNS[1], ", ", $COLUMNS[15], "\n";
|
||||
push(@TO_DELETE, $COLUMNS[1]) unless $check{$COLUMNS[15]};
|
||||
}
|
||||
$id_merged = join(" ", @TO_DELETE);
|
||||
qx|euca-terminate-instances $id_merged| if ($id_merged);
|
||||
@@ -1,33 +0,0 @@
|
||||
# Directory and files where is stored Copr database files
|
||||
DATA_DIR = '/var/lib/copr/data'
|
||||
DATABASE = '/var/lib/copr/data/copr.db'
|
||||
OPENID_STORE = '/var/lib/copr/data/openid_store'
|
||||
WHOOSHEE_DIR = '/var/lib/copr/data/whooshee'
|
||||
|
||||
SECRET_KEY = '{{ copr_secret_key }}'
|
||||
BACKEND_PASSWORD = '{{ copr_backend_password }}'
|
||||
|
||||
# restrict access to a set of users
|
||||
#USE_ALLOWED_USERS = False
|
||||
#ALLOWED_USERS = ['bonnie', 'clyde']
|
||||
|
||||
SQLALCHEMY_DATABASE_URI = '{{ copr_database_uri }}'
|
||||
|
||||
# Token length, defaults to 30 (max 255)
|
||||
#API_TOKEN_LENGTH = 30
|
||||
|
||||
# Expiration of API token in days
|
||||
#API_TOKEN_EXPIRATION = 180
|
||||
|
||||
# logging options
|
||||
#SEND_LOGS_TO = ['root@localhost']
|
||||
#LOGGING_LEVEL = logging.ERROR
|
||||
|
||||
DEBUG = False
|
||||
SQLALCHEMY_ECHO = False
|
||||
|
||||
CSRF_ENABLED = True
|
||||
WTF_CSRF_ENABLED = True
|
||||
|
||||
# send emails when user's perms change in project?
|
||||
SEND_EMAILS = True
|
||||
@@ -7,58 +7,15 @@ WSGISocketPrefix /var/run/wsgi
|
||||
|
||||
WSGIPassAuthorization On
|
||||
WSGIDaemonProcess 127.0.0.1 user=copr-fe group=copr-fe threads=5
|
||||
WSGIScriptAlias / /usr/share/copr/coprs_frontend/application
|
||||
WSGIScriptAlias / /srv/copr-fe/copr/coprs_frontend/application
|
||||
WSGIProcessGroup 127.0.0.1
|
||||
|
||||
#ErrorLog logs/error_coprs
|
||||
#CustomLog logs/access_coprs common
|
||||
ErrorLog logs/error_coprs
|
||||
CustomLog logs/access_coprs common
|
||||
|
||||
<Directory /usr/share/copr>
|
||||
<Directory /srv/copr-fe/copr>
|
||||
WSGIApplicationGroup %{GLOBAL}
|
||||
Require all granted
|
||||
Order deny,allow
|
||||
Allow from all
|
||||
</Directory>
|
||||
</VirtualHost>
|
||||
|
||||
<VirtualHost *:443>
|
||||
SSLEngine on
|
||||
SSLProtocol all -SSLv2
|
||||
#optimize for speed
|
||||
SSLCipherSuite RC4-SHA:AES128-SHA:HIGH:!aNULL:!MD5
|
||||
SSLHonorCipherOrder on
|
||||
|
||||
SSLCertificateFile /etc/pki/tls/ca.crt
|
||||
SSLCertificateKeyFile /etc/pki/tls/private/ca.key
|
||||
ServerName copr-fe.cloud.fedoraproject.org:443
|
||||
|
||||
WSGIPassAuthorization On
|
||||
#WSGIDaemonProcess 127.0.0.1 user=copr-fe group=copr-fe threads=5
|
||||
WSGIScriptAlias / /usr/share/copr/coprs_frontend/application
|
||||
WSGIProcessGroup 127.0.0.1
|
||||
|
||||
#ErrorLog logs/error_coprs
|
||||
#CustomLog logs/access_coprs common
|
||||
|
||||
<Directory /usr/share/copr>
|
||||
WSGIApplicationGroup %{GLOBAL}
|
||||
Require all granted
|
||||
</Directory>
|
||||
</VirtualHost>
|
||||
|
||||
<IfModule mod_status.c>
|
||||
ExtendedStatus On
|
||||
|
||||
<Location /server-status>
|
||||
SetHandler server-status
|
||||
Require all denied
|
||||
Require host localhost .redhat.com
|
||||
</Location>
|
||||
</IfModule>
|
||||
|
||||
<IfModule mpm_prefork_module>
|
||||
StartServers 8
|
||||
MinSpareServers 8
|
||||
MaxSpareServers 20
|
||||
MaxClients 50
|
||||
MaxRequestsPerChild 10000
|
||||
</IfModule>
|
||||
|
||||
|
||||
@@ -1,13 +0,0 @@
|
||||
local coprdb copr-fe md5
|
||||
host coprdb copr-fe 127.0.0.1/8 md5
|
||||
host coprdb copr-fe ::1/128 md5
|
||||
local coprdb postgres ident
|
||||
|
||||
# TYPE DATABASE USER ADDRESS METHOD
|
||||
|
||||
# "local" is for Unix domain socket connections only
|
||||
local all all peer
|
||||
# IPv4 local connections:
|
||||
host all all 127.0.0.1/32 ident
|
||||
# IPv6 local connections:
|
||||
host all all ::1/128 ident
|
||||
@@ -1,10 +0,0 @@
|
||||
[Copr]
|
||||
name=Copr
|
||||
failovermethod=priority
|
||||
#baseurl=http://copr-be.cloud.fedoraproject.org/results/msuchy/copr/fedora-19-x86_64/
|
||||
# 172.16.5.4 is copr-be.cloud.fedoraproject.org
|
||||
# see https://fedorahosted.org/fedora-infrastructure/ticket/4025
|
||||
baseurl=http://172.16.5.4/results/msuchy/copr/fedora-20-x86_64/
|
||||
enabled=1
|
||||
gpgcheck=0
|
||||
|
||||
@@ -1,4 +0,0 @@
|
||||
msuchy+coprmachine@redhat.com
|
||||
kevin@scrye.com
|
||||
nb@fedoraproject.org
|
||||
sgallagh@redhat.com
|
||||
@@ -1,2 +0,0 @@
|
||||
msuchy+coprmachine@redhat.com
|
||||
asamalik@redhat.com
|
||||
@@ -1,7 +0,0 @@
|
||||
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
|
||||
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
|
||||
172.16.5.31 copr-fe.cloud.fedoraproject.org
|
||||
172.16.5.31 copr.fedoraproject.org
|
||||
172.16.5.4 copr-be.cloud.fedoraproject.org
|
||||
172.16.5.5 copr-be-dev.cloud.fedoraproject.org
|
||||
172.16.5.15 copr-fe-dev.cloud.fedoraproject.org
|
||||
@@ -1,23 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# With the addition of Keystone, to use an openstack cloud you should
|
||||
# authenticate against keystone, which returns a **Token** and **Service
|
||||
# Catalog**. The catalog contains the endpoint for all services the
|
||||
# user/tenant has access to - including nova, glance, keystone, swift.
|
||||
#
|
||||
# *NOTE*: Using the 2.0 *auth api* does not mean that compute api is 2.0. We
|
||||
# will use the 1.1 *compute api*
|
||||
export OS_AUTH_URL=http://172.23.0.2:5000/v2.0
|
||||
|
||||
# With the addition of Keystone we have standardized on the term **tenant**
|
||||
# as the entity that owns the resources.
|
||||
|
||||
export OS_TENANT_ID={{ copr_tenant_id }}
|
||||
export OS_TENANT_NAME="copr"
|
||||
|
||||
# In addition to the owning entity (tenant), openstack stores the entity
|
||||
# performing the action as the **user**.
|
||||
export OS_USERNAME=msuchy
|
||||
|
||||
# With Keystone you pass the keystone password.
|
||||
export OS_PASSWORD={{ copr_nova_password }}
|
||||
@@ -90,7 +90,7 @@ server.port = 80
|
||||
##
|
||||
## Use IPv6?
|
||||
##
|
||||
server.use-ipv6 = "disable"
|
||||
server.use-ipv6 = "enable"
|
||||
|
||||
##
|
||||
## bind to a specific IP
|
||||
@@ -112,7 +112,7 @@ server.groupname = "lighttpd"
|
||||
##
|
||||
## Document root
|
||||
##
|
||||
server.document-root = "/var/lib/copr/public_html"
|
||||
server.document-root = "/srv/copr-repo"
|
||||
|
||||
##
|
||||
## The value for the "Server:" response field.
|
||||
@@ -445,11 +445,3 @@ server.upload-dirs = ( "/var/tmp" )
|
||||
#include_shell "cat /etc/lighttpd/vhosts.d/*.conf"
|
||||
##
|
||||
#######################################################################
|
||||
|
||||
$SERVER["socket"] == ":443" {
|
||||
ssl.engine = "enable"
|
||||
ssl.pemfile = "/etc/lighttpd/copr-be.fedoraproject.org.pem"
|
||||
ssl.ca-file = "/etc/lighttpd/DigiCertCA.crt"
|
||||
ssl.disable-client-renegotiation = "enable"
|
||||
ssl.cipher-list = "ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4-SHA:RC4:HIGH:!MD5:!aNULL:!EDH:!AESGCM"
|
||||
}
|
||||
|
||||
@@ -6,11 +6,11 @@
|
||||
|
||||
# location of inventory file, eliminates need to specify -i
|
||||
|
||||
hostfile = /home/copr/provision/inventory
|
||||
hostfile = /srv/copr-work/provision/inventory
|
||||
|
||||
# location of ansible library, eliminates need to specify --module-path
|
||||
|
||||
library = /home/copr/provision/library:/usr/share/ansible
|
||||
library = /srv/copr-work/provision/library:/usr/share/ansible
|
||||
|
||||
# default module name used in /usr/bin/ansible when -m is not specified
|
||||
|
||||
@@ -48,11 +48,7 @@ sudo_user=root
|
||||
|
||||
# connection to use when -c <connection_type> is not specified
|
||||
|
||||
#transport=paramiko
|
||||
transport=ssh
|
||||
|
||||
# this is needed for paramiko; ssh already has this set in .ssh/config
|
||||
host_key_checking = False
|
||||
transport=paramiko
|
||||
|
||||
# remote SSH port to be used when --port or "port:" or an equivalent inventory
|
||||
# variable is not specified.
|
||||
@@ -73,12 +69,11 @@ remote_user=root
|
||||
|
||||
# additional plugin paths for non-core plugins
|
||||
|
||||
action_plugins = /usr/lib/python2.7/site-packages/ansible/runner/action_plugins:/home/copr/provision/action_plugins/
|
||||
action_plugins = /usr/lib/python2.6/site-packages/ansible/runner/action_plugins:/srv/copr-work/provision/action_plugins/
|
||||
|
||||
|
||||
private_key_file=/home/copr/.ssh/id_rsa
|
||||
|
||||
[paramiko_connection]
|
||||
record_host_keys=False
|
||||
|
||||
# nothing to configure yet
|
||||
|
||||
@@ -88,6 +83,6 @@ record_host_keys=False
|
||||
# will result in poor performance, so use transport=paramiko on older platforms rather than
|
||||
# removing it
|
||||
|
||||
ssh_args=-o PasswordAuthentication=no -o ControlMaster=auto -o ControlPersist=60s
|
||||
ssh_args=-o PasswordAuthentication=no -o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r
|
||||
|
||||
|
||||
|
||||
@@ -1,4 +1,3 @@
|
||||
#jinja2:variable_start_string:'[%' , variable_end_string:'%]'
|
||||
---
|
||||
- name: check/create instance
|
||||
hosts: localhost
|
||||
@@ -6,73 +5,53 @@
|
||||
gather_facts: False
|
||||
|
||||
vars:
|
||||
- keypair: buildsys
|
||||
- image: ami-0000000e
|
||||
- instance_type: m1.builder
|
||||
- security_group: builder
|
||||
- OS_AUTH_URL: http://172.23.0.2:5000/v2.0
|
||||
- OS_TENANT_NAME: copr
|
||||
- OS_USERNAME: msuchy
|
||||
- OS_PASSWORD: [% copr_nova_password %]
|
||||
# rhel 6.4 2013-02-21 x86_64 - ami
|
||||
- image_id: cba0c766-84ac-4048-b0f5-6d4000af62f8
|
||||
|
||||
tasks:
|
||||
- name: generate builder name
|
||||
local_action: command echo "Copr builder {{ 999999999 | random }}"
|
||||
register: vm_name
|
||||
|
||||
- name: spin it up
|
||||
local_action: nova_compute auth_url={{OS_AUTH_URL}} flavor_id=6 image_id={{ image_id }} key_name=buildsys login_password={{OS_PASSWORD}} login_tenant_name={{OS_TENANT_NAME}} login_username={{OS_USERNAME}} security_groups={{security_group}} wait=yes name="{{vm_name.stdout}}"
|
||||
register: nova
|
||||
local_action: ec2 keypair=${keypair} image=${image} type=${instance_type} wait=true group=${security_group}
|
||||
register: inst_res
|
||||
|
||||
# should be able to use nova.private_ip, but it does not work with Fedora Cloud.
|
||||
- debug: msg="IP={{ nova.info.addresses.vlannet_3[0].addr }}"
|
||||
|
||||
- debug: msg="vm_name={{vm_name.stdout}}"
|
||||
- name: get its internal ip b/c openstack is sometimes stupid
|
||||
local_action: shell euca-describe-instances ${inst_res.instances[0].id} | grep INSTANCE | cut -f 18
|
||||
register: int_ip
|
||||
|
||||
- name: add it to the special group
|
||||
local_action: add_host hostname={{ nova.info.addresses.vlannet_3[0].addr }} groupname=builder_temp_group
|
||||
local_action: add_host hostname=${int_ip.stdout} groupname=builder_temp_group
|
||||
|
||||
- name: wait for the host to be hot
|
||||
local_action: wait_for host={{ nova.info.addresses.vlannet_3[0].addr }} port=22 delay=5 timeout=600
|
||||
local_action: wait_for host=${int_ip.stdout} port=22 delay=5 timeout=600
|
||||
|
||||
|
||||
- hosts: builder_temp_group
|
||||
user: root
|
||||
gather_facts: False
|
||||
vars:
|
||||
- files: files/
|
||||
|
||||
tasks:
|
||||
- name: edit hostname to be instance name
|
||||
action: shell hostname `curl -s http://169.254.169.254/2009-04-04/meta-data/instance-id`
|
||||
|
||||
- name: install pkgs
|
||||
action: yum state=present pkg={{ item }}
|
||||
with_items:
|
||||
- rsync
|
||||
- openssh-clients
|
||||
- libselinux-python
|
||||
- libsemanage-python
|
||||
|
||||
|
||||
- name: add repos
|
||||
action: copy src={{ files }}/{{ item }} dest=/etc/yum.repos.d/{{ item }}
|
||||
action: copy src=$files/$item dest=/etc/yum.repos.d/$item
|
||||
with_items:
|
||||
- builder.repo
|
||||
- epel6.repo
|
||||
|
||||
- name: install additional pkgs
|
||||
action: yum state=present pkg={{ item }}
|
||||
- name: install pkgs
|
||||
action: yum state=present pkg=$item
|
||||
with_items:
|
||||
- mock
|
||||
- createrepo
|
||||
- yum-utils
|
||||
- pyliblzma
|
||||
- rsync
|
||||
- openssh-clients
|
||||
|
||||
- name: make sure newest rpm
|
||||
action: yum name={{ item }} state=latest
|
||||
with_items:
|
||||
- rpm
|
||||
- glib2
|
||||
|
||||
- yum: name=mock enablerepo=epel-testing state=latest
|
||||
action: yum name=rpm state=latest
|
||||
|
||||
- name: mockbuilder user
|
||||
action: user name=mockbuilder groups=mock
|
||||
@@ -81,16 +60,16 @@
|
||||
action: file state=directory path=/home/mockbuilder/.ssh mode=0700 owner=mockbuilder group=mockbuilder
|
||||
|
||||
- name: mockbuilder authorized_keys
|
||||
action: authorized_key user=mockbuilder key='{{ lookup('file', '/home/copr/provision/files/buildsys.pub') }}'
|
||||
action: authorized_key user=mockbuilder key='$FILE(${files}/buildsys.pub)'
|
||||
|
||||
- name: put updated mock configs into /etc/mock
|
||||
action: copy src={{ files }}/mock/{{ item }} dest=/etc/mock
|
||||
action: copy src=$files/mock/$item dest=/etc/mock
|
||||
with_items:
|
||||
- site-defaults.cfg
|
||||
- epel-5-x86_64.cfg
|
||||
- epel-5-i386.cfg
|
||||
- fedora-20-x86_64.cfg
|
||||
- fedora-20-i386.cfg
|
||||
- epel-7-x86_64.cfg
|
||||
|
||||
- lineinfile: dest=/root/.bashrc line="ulimit -n 10240" insertafter=EOF
|
||||
- name: put updated mockchain into /usr/bin
|
||||
action: copy src=$files/mockchain dest=/usr/bin/mockchain mode=0755 owner=root group=root
|
||||
|
||||
|
||||
|
||||
@@ -5,19 +5,3 @@ enabled=1
|
||||
gpgcheck=1
|
||||
gpgkey=http://infrastructure.fedoraproject.org/repo/RPM-GPG-KEY-INFRASTRUCTURE
|
||||
|
||||
[msuchy-Mock]
|
||||
name=Copr repo for Mock owned by msuchy
|
||||
description=Mock for RHEL6 with patch from https://bugzilla.redhat.com/show_bug.cgi?id=1028438 and https://bugzilla.redhat.com/show_bug.cgi?id=1034805
|
||||
baseurl=http://172.16.5.4/results/msuchy/Mock/epel-6-$basearch/
|
||||
skip_if_unavailable=True
|
||||
gpgcheck=0
|
||||
enabled=1
|
||||
|
||||
[msuchy-scl-utils]
|
||||
name=Copr repo for scl-utils owned by msuchy
|
||||
description=scl-utils with patch from https://bugzilla.redhat.com/show_bug.cgi?id=985233
|
||||
baseurl=http://172.16.5.4/results/msuchy/scl-utils/epel-6-$basearch/
|
||||
skip_if_unavailable=True
|
||||
gpgcheck=0
|
||||
enabled=1
|
||||
|
||||
|
||||
@@ -3,12 +3,8 @@ config_opts['target_arch'] = 'i386'
|
||||
config_opts['legal_host_arches'] = ('i386', 'i586', 'i686', 'x86_64')
|
||||
config_opts['chroot_setup_cmd'] = 'install buildsys-build'
|
||||
config_opts['dist'] = 'el5' # only useful for --resultdir variable subst
|
||||
if not config_opts.has_key('macros'): config_opts['macros'] = {}
|
||||
config_opts['macros'] = {}
|
||||
config_opts['macros']['%__arch_install_post'] = '%{nil}'
|
||||
config_opts['macros']['%rhel'] = '5'
|
||||
config_opts['macros']['%dist'] = '.el5'
|
||||
config_opts['macros']['%el5'] = '1'
|
||||
config_opts['releasever'] = '5'
|
||||
|
||||
config_opts['yum.conf'] = """
|
||||
[main]
|
||||
|
||||
@@ -3,12 +3,8 @@ config_opts['target_arch'] = 'x86_64'
|
||||
config_opts['legal_host_arches'] = ('x86_64',)
|
||||
config_opts['chroot_setup_cmd'] = 'install buildsys-build'
|
||||
config_opts['dist'] = 'el5' # only useful for --resultdir variable subst
|
||||
if not config_opts.has_key('macros'): config_opts['macros'] = {}
|
||||
config_opts['macros'] = {}
|
||||
config_opts['macros']['%__arch_install_post'] = '%{nil}'
|
||||
config_opts['macros']['%rhel'] = '5'
|
||||
config_opts['macros']['%dist'] = '.el5'
|
||||
config_opts['macros']['%el5'] = '1'
|
||||
config_opts['releasever'] = '5'
|
||||
|
||||
config_opts['yum.conf'] = """
|
||||
[main]
|
||||
|
||||
@@ -1,45 +0,0 @@
|
||||
config_opts['chroothome'] = '/builddir'
|
||||
config_opts['basedir'] = '/var/lib/mock'
|
||||
config_opts['root'] = 'epel-7-x86_64'
|
||||
config_opts['target_arch'] = 'x86_64'
|
||||
config_opts['legal_host_arches'] = ('x86_64',)
|
||||
config_opts['chroot_setup_cmd'] = 'install bash bzip2 coreutils cpio diffutils findutils gawk gcc gcc-c++ grep gzip info make patch redhat-release-server redhat-rpm-config rpm-build sed shadow-utils tar unzip util-linux which xz'
|
||||
config_opts['dist'] = 'el7' # only useful for --resultdir variable subst
|
||||
config_opts['macros'] = {}
|
||||
config_opts['macros']['%dist'] = '.el7'
|
||||
config_opts['macros']['%rhel'] = '7'
|
||||
config_opts['macros']['%el7'] = '1'
|
||||
config_opts['macros']['%_topdir'] = '/builddir/build'
|
||||
config_opts['macros']['%_rpmfilename'] = '%%{NAME}-%%{VERSION}-%%{RELEASE}.%%{ARCH}.rpm'
|
||||
config_opts['releasever'] = '7'
|
||||
|
||||
config_opts['plugin_conf']['root_cache_enable'] = False
|
||||
config_opts['plugin_conf']['yum_cache_enable'] = False
|
||||
config_opts['plugin_conf']['ccache_enable'] = False
|
||||
|
||||
config_opts['yum.conf'] = """
|
||||
[main]
|
||||
cachedir=/var/cache/yum
|
||||
debuglevel=1
|
||||
logfile=/var/log/yum.log
|
||||
reposdir=/dev/null
|
||||
retries=20
|
||||
obsoletes=1
|
||||
gpgcheck=0
|
||||
assumeyes=1
|
||||
syslog_ident=mock
|
||||
syslog_device=
|
||||
|
||||
# repos
|
||||
|
||||
[beta]
|
||||
name=beta
|
||||
baseurl=http://kojipkgs.fedoraproject.org/rhel/beta/7/x86_64/os/
|
||||
|
||||
[epel]
|
||||
name=Extra Packages for Enterprise Linux 7 - $basearch
|
||||
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
|
||||
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
|
||||
failovermethod=priority
|
||||
enabled=1
|
||||
"""
|
||||
@@ -1,62 +0,0 @@
|
||||
config_opts['root'] = 'fedora-20-i386'
|
||||
config_opts['target_arch'] = 'i686'
|
||||
config_opts['legal_host_arches'] = ('i386', 'i586', 'i686', 'x86_64')
|
||||
config_opts['chroot_setup_cmd'] = 'groupinstall buildsys-build'
|
||||
config_opts['dist'] = 'fc20' # only useful for --resultdir variable subst
|
||||
config_opts['releasever'] = '20'
|
||||
|
||||
config_opts['yum.conf'] = """
|
||||
[main]
|
||||
cachedir=/var/cache/yum
|
||||
debuglevel=1
|
||||
reposdir=/dev/null
|
||||
logfile=/var/log/yum.log
|
||||
retries=20
|
||||
obsoletes=1
|
||||
gpgcheck=0
|
||||
assumeyes=1
|
||||
syslog_ident=mock
|
||||
syslog_device=
|
||||
|
||||
# repos
|
||||
|
||||
[fedora]
|
||||
name=fedora
|
||||
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=fedora-20&arch=i386
|
||||
failovermethod=priority
|
||||
|
||||
[updates]
|
||||
name=updates
|
||||
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=updates-released-f20&arch=i386
|
||||
failovermethod=priority
|
||||
|
||||
[updates-testing]
|
||||
name=updates-testing
|
||||
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=updates-testing-f20&arch=i386
|
||||
failovermethod=priority
|
||||
enabled=0
|
||||
|
||||
[local]
|
||||
name=local
|
||||
baseurl=http://kojipkgs.fedoraproject.org/repos/f20-build/latest/i386/
|
||||
cost=2000
|
||||
enabled=0
|
||||
|
||||
[fedora-debuginfo]
|
||||
name=fedora-debuginfo
|
||||
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=fedora-debug-20&arch=i386
|
||||
failovermethod=priority
|
||||
enabled=0
|
||||
|
||||
[updates-debuginfo]
|
||||
name=updates-debuginfo
|
||||
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=updates-released-debug-f20&arch=i386
|
||||
failovermethod=priority
|
||||
enabled=0
|
||||
|
||||
[updates-testing-debuginfo]
|
||||
name=updates-testing-debuginfo
|
||||
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=updates-testing-debug-f20&arch=i386
|
||||
failovermethod=priority
|
||||
enabled=0
|
||||
"""
|
||||
@@ -1,62 +0,0 @@
|
||||
config_opts['root'] = 'fedora-20-x86_64'
|
||||
config_opts['target_arch'] = 'x86_64'
|
||||
config_opts['legal_host_arches'] = ('x86_64',)
|
||||
config_opts['chroot_setup_cmd'] = 'groupinstall buildsys-build'
|
||||
config_opts['dist'] = 'fc20' # only useful for --resultdir variable subst
|
||||
config_opts['releasever'] = '20'
|
||||
|
||||
config_opts['yum.conf'] = """
|
||||
[main]
|
||||
cachedir=/var/cache/yum
|
||||
debuglevel=1
|
||||
reposdir=/dev/null
|
||||
logfile=/var/log/yum.log
|
||||
retries=20
|
||||
obsoletes=1
|
||||
gpgcheck=0
|
||||
assumeyes=1
|
||||
syslog_ident=mock
|
||||
syslog_device=
|
||||
|
||||
# repos
|
||||
|
||||
[fedora]
|
||||
name=fedora
|
||||
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=fedora-20&arch=x86_64
|
||||
failovermethod=priority
|
||||
|
||||
[updates]
|
||||
name=updates
|
||||
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=updates-released-f20&arch=x86_64
|
||||
failovermethod=priority
|
||||
|
||||
[updates-testing]
|
||||
name=updates-testing
|
||||
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=updates-testing-f20&arch=x86_64
|
||||
failovermethod=priority
|
||||
enabled=0
|
||||
|
||||
[local]
|
||||
name=local
|
||||
baseurl=http://kojipkgs.fedoraproject.org/repos/f20-build/latest/x86_64/
|
||||
cost=2000
|
||||
enabled=0
|
||||
|
||||
[fedora-debuginfo]
|
||||
name=fedora-debuginfo
|
||||
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=fedora-debug-20&arch=x86_64
|
||||
failovermethod=priority
|
||||
enabled=0
|
||||
|
||||
[updates-debuginfo]
|
||||
name=updates-debuginfo
|
||||
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=updates-released-debug-f20&arch=x86_64
|
||||
failovermethod=priority
|
||||
enabled=0
|
||||
|
||||
[updates-testing-debuginfo]
|
||||
name=updates-testing-debuginfo
|
||||
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=updates-testing-debug-f20&arch=x86_64
|
||||
failovermethod=priority
|
||||
enabled=0
|
||||
"""
|
||||
@@ -1,63 +0,0 @@
|
||||
config_opts['root'] = 'fedora-21-i386'
|
||||
config_opts['target_arch'] = 'i686'
|
||||
config_opts['legal_host_arches'] = ('i386', 'i586', 'i686', 'x86_64')
|
||||
config_opts['chroot_setup_cmd'] = 'install @buildsys-build'
|
||||
config_opts['dist'] = 'fc21' # only useful for --resultdir variable subst
|
||||
config_opts['extra_chroot_dirs'] = [ '/run/lock', ]
|
||||
config_opts['releasever'] = '21'
|
||||
|
||||
config_opts['yum.conf'] = """
|
||||
[main]
|
||||
cachedir=/var/cache/yum
|
||||
debuglevel=1
|
||||
reposdir=/dev/null
|
||||
logfile=/var/log/yum.log
|
||||
retries=20
|
||||
obsoletes=1
|
||||
gpgcheck=0
|
||||
assumeyes=1
|
||||
syslog_ident=mock
|
||||
syslog_device=
|
||||
|
||||
# repos
|
||||
|
||||
[fedora]
|
||||
name=fedora
|
||||
metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-$releasever&arch=$basearch
|
||||
failovermethod=priority
|
||||
|
||||
[updates]
|
||||
name=updates
|
||||
metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-f$releasever&arch=$basearch
|
||||
failovermethod=priority
|
||||
|
||||
[updates-testing]
|
||||
name=updates-testing
|
||||
metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-testing-f$releasever&arch=$basearch
|
||||
failovermethod=priority
|
||||
enabled=0
|
||||
|
||||
[local]
|
||||
name=local
|
||||
baseurl=http://kojipkgs.fedoraproject.org/repos/f21-build/latest/i386/
|
||||
cost=2000
|
||||
enabled=0
|
||||
|
||||
[fedora-debuginfo]
|
||||
name=fedora-debuginfo
|
||||
metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-debug-$releasever&arch=$basearch
|
||||
failovermethod=priority
|
||||
enabled=0
|
||||
|
||||
[updates-debuginfo]
|
||||
name=updates-debuginfo
|
||||
metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-debug-f$releasever&arch=$basearch
|
||||
failovermethod=priority
|
||||
enabled=0
|
||||
|
||||
[updates-testing-debuginfo]
|
||||
name=updates-testing-debuginfo
|
||||
metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-testing-debug-f$releasever&arch=$basearch
|
||||
failovermethod=priority
|
||||
enabled=0
|
||||
"""
|
||||
@@ -1,63 +0,0 @@
|
||||
config_opts['root'] = 'fedora-21-x86_64'
|
||||
config_opts['target_arch'] = 'x86_64'
|
||||
config_opts['legal_host_arches'] = ('x86_64',)
|
||||
config_opts['chroot_setup_cmd'] = 'install @buildsys-build'
|
||||
config_opts['dist'] = 'fc21' # only useful for --resultdir variable subst
|
||||
config_opts['extra_chroot_dirs'] = [ '/run/lock', ]
|
||||
config_opts['releasever'] = '21'
|
||||
|
||||
config_opts['yum.conf'] = """
|
||||
[main]
|
||||
cachedir=/var/cache/yum
|
||||
debuglevel=1
|
||||
reposdir=/dev/null
|
||||
logfile=/var/log/yum.log
|
||||
retries=20
|
||||
obsoletes=1
|
||||
gpgcheck=0
|
||||
assumeyes=1
|
||||
syslog_ident=mock
|
||||
syslog_device=
|
||||
|
||||
# repos
|
||||
|
||||
[fedora]
|
||||
name=fedora
|
||||
metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-$releasever&arch=$basearch
|
||||
failovermethod=priority
|
||||
|
||||
[updates]
|
||||
name=updates
|
||||
metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-f$releasever&arch=$basearch
|
||||
failovermethod=priority
|
||||
|
||||
[updates-testing]
|
||||
name=updates-testing
|
||||
metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-testing-f$releasever&arch=$basearch
|
||||
failovermethod=priority
|
||||
enabled=0
|
||||
|
||||
[local]
|
||||
name=local
|
||||
baseurl=http://kojipkgs.fedoraproject.org/repos/f21-build/latest/x86_64/
|
||||
cost=2000
|
||||
enabled=0
|
||||
|
||||
[fedora-debuginfo]
|
||||
name=fedora-debuginfo
|
||||
metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-debug-$releasever&arch=$basearch
|
||||
failovermethod=priority
|
||||
enabled=0
|
||||
|
||||
[updates-debuginfo]
|
||||
name=updates-debuginfo
|
||||
metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-debug-f$releasever&arch=$basearch
|
||||
failovermethod=priority
|
||||
enabled=0
|
||||
|
||||
[updates-testing-debuginfo]
|
||||
name=updates-testing-debuginfo
|
||||
metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-testing-debug-f$releasever&arch=$basearch
|
||||
failovermethod=priority
|
||||
enabled=0
|
||||
"""
|
||||
337
files/copr/provision/files/mockchain
Executable file
@@ -0,0 +1,337 @@
|
||||
#!/usr/bin/python -tt
|
||||
# by skvidal@fedoraproject.org
|
||||
# This program is free software; you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation; either version 2 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU Library General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with this program; if not, write to the Free Software
|
||||
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
|
||||
# copyright 2012 Red Hat, Inc.
|
||||
|
||||
# SUMMARY
|
||||
# mockchain
|
||||
# take a mock config and a series of srpms
|
||||
# rebuild them one at a time
|
||||
# adding each to a local repo
|
||||
# so they are available as build deps to next pkg being built
|
||||
|
||||
import sys
|
||||
import subprocess
|
||||
import os
|
||||
import optparse
|
||||
import tempfile
|
||||
import shutil
|
||||
from urlgrabber import grabber
|
||||
import time
|
||||
|
||||
mockconfig_path='/etc/mock'
|
||||
|
||||
def createrepo(path):
|
||||
if os.path.exists(path + '/repodata/repomd.xml'):
|
||||
comm = ['/usr/bin/createrepo', '--update', path]
|
||||
else:
|
||||
comm = ['/usr/bin/createrepo', path]
|
||||
cmd = subprocess.Popen(comm,
|
||||
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
|
||||
out, err = cmd.communicate()
|
||||
return out, err
|
||||
|
||||
def parse_args(args):
|
||||
parser = optparse.OptionParser('\nmockchain -r mockcfg pkg1 [pkg2] [pkg3]')
|
||||
parser.add_option('-r', '--root', default=None, dest='chroot',
|
||||
help="chroot config name/base to use in the mock build")
|
||||
parser.add_option('-l', '--localrepo', default=None,
|
||||
help="local path for the local repo, defaults to making its own")
|
||||
parser.add_option('-c', '--continue', default=False, action='store_true',
|
||||
dest='cont',
|
||||
help="if a pkg fails to build, continue to the next one")
|
||||
parser.add_option('-a','--addrepo', default=[], action='append',
|
||||
dest='repos',
|
||||
help="add these repo baseurls to the chroot's yum config")
|
||||
parser.add_option('--recurse', default=False, action='store_true',
|
||||
help="if more than one pkg and it fails to build, try to build the rest and come back to it")
|
||||
parser.add_option('--log', default=None, dest='logfile',
|
||||
help="log to the file named by this option, defaults to not logging")
|
||||
parser.add_option('--tmp_prefix', default=None, dest='tmp_prefix',
|
||||
help="tmp dir prefix - will default to username-pid if not specified")
|
||||
|
||||
|
||||
#FIXME?
|
||||
# figure out how to pass other args to mock?
|
||||
|
||||
opts, args = parser.parse_args(args)
|
||||
if opts.recurse:
|
||||
opts.cont = True
|
||||
|
||||
if not opts.chroot:
|
||||
print "You must provide an argument to -r for the mock chroot"
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
if len(sys.argv) < 3:
|
||||
print "You must specify at least 1 package to build"
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
return opts, args
|
||||
|
||||
def add_local_repo(infile, destfile, baseurl, repoid=None):
    """take a mock chroot config and add a repo to its yum.conf
       infile = mock chroot config file
       destfile = where to save out the result
       baseurl = baseurl of repo you wish to add"""

    try:
        config_opts = {}
        execfile(infile)
        if not repoid:
            repoid = baseurl.split('//')[1].replace('/', '_')
        localyumrepo = """
[%s]
name=%s
baseurl=%s
enabled=1
skip_if_unavailable=1
metadata_expire=30
cost=1
""" % (repoid, baseurl, baseurl)

        config_opts['yum.conf'] += localyumrepo
        br_dest = open(destfile, 'w')
        for k, v in config_opts.items():
            br_dest.write("config_opts[%r] = %r\n" % (k, v))
        br_dest.close()
        return True, ''
    except (IOError, OSError):
        return False, "Could not write mock config to %s" % destfile

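When no repoid is supplied, the function above derives one from the baseurl by stripping everything up to the scheme separator and turning slashes into underscores. A quick sketch of just that derivation (the function name here is illustrative, not from the script):

```python
def derive_repoid(baseurl):
    # Mirrors the fallback above: drop the scheme, turn '/' into '_'.
    return baseurl.split('//')[1].replace('/', '_')

print(derive_repoid('http://example.com/repo/x86_64'))  # → example.com_repo_x86_64
```

Note that a `file://` URL yields an id with a leading underscore, since the path itself begins with a slash.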
def do_build(opts, cfg, pkg):

    # returns 0, cmd, out, err = failure
    # returns 1, cmd, out, err = success
    # returns 2, None, None, None = already built

    s_pkg = os.path.basename(pkg)
    pdn = s_pkg.replace('.src.rpm', '')
    resdir = '%s/%s' % (opts.local_repo_dir, pdn)
    resdir = os.path.normpath(resdir)
    if not os.path.exists(resdir):
        os.makedirs(resdir)

    success_file = resdir + '/success'
    fail_file = resdir + '/fail'

    if os.path.exists(success_file):
        return 2, None, None, None

    # clean it up if we're starting over :)
    if os.path.exists(fail_file):
        os.unlink(fail_file)

    mockcmd = ['/usr/bin/mock',
               '--configdir', opts.config_path,
               '--resultdir', resdir,
               '--uniqueext', opts.uniqueext,
               '-r', cfg, ]
    print 'building %s' % s_pkg
    mockcmd.append(pkg)
    cmd = subprocess.Popen(mockcmd,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
    out, err = cmd.communicate()
    if cmd.returncode == 0:
        open(success_file, 'w').write('done\n')
        ret = 1
    else:
        open(fail_file, 'w').write('undone\n')
        ret = 0

    return ret, cmd, out, err

def log(lf, msg):
    if lf:
        now = time.time()
        try:
            open(lf, 'a').write(str(now) + ':' + msg + '\n')
        except (IOError, OSError), e:
            print 'Could not write to logfile %s - %s' % (lf, str(e))
    print msg

def main(args):
    opts, args = parse_args(args)

    # take mock config + list of pkgs
    cfg = opts.chroot
    pkgs = args[1:]
    mockcfg = mockconfig_path + '/' + cfg + '.cfg'

    if not os.path.exists(mockcfg):
        print "could not find config: %s" % mockcfg
        sys.exit(1)

    if not opts.tmp_prefix:
        try:
            opts.tmp_prefix = os.getlogin()
        except OSError, e:
            print "Could not find login name for tmp dir prefix; use --tmp_prefix"
            sys.exit(1)
    pid = os.getpid()
    opts.uniqueext = '%s-%s' % (opts.tmp_prefix, pid)

    # create a tempdir for our local info
    if opts.localrepo:
        local_tmp_dir = os.path.abspath(opts.localrepo)
        if not os.path.exists(local_tmp_dir):
            os.makedirs(local_tmp_dir)
    else:
        pre = 'mock-chain-%s-' % opts.uniqueext
        local_tmp_dir = tempfile.mkdtemp(prefix=pre, dir='/var/tmp')

    os.chmod(local_tmp_dir, 0755)

    if opts.logfile:
        opts.logfile = os.path.join(local_tmp_dir, opts.logfile)
        if os.path.exists(opts.logfile):
            os.unlink(opts.logfile)

    log(opts.logfile, "starting logfile: %s" % opts.logfile)
    opts.local_repo_dir = os.path.normpath(local_tmp_dir + '/results/' + cfg + '/')

    if not os.path.exists(opts.local_repo_dir):
        os.makedirs(opts.local_repo_dir, mode=0755)

    local_baseurl = "file://%s" % opts.local_repo_dir
    log(opts.logfile, "results dir: %s" % opts.local_repo_dir)
    opts.config_path = os.path.normpath(local_tmp_dir + '/configs/' + cfg + '/')

    if not os.path.exists(opts.config_path):
        os.makedirs(opts.config_path, mode=0755)

    log(opts.logfile, "config dir: %s" % opts.config_path)

    my_mock_config = opts.config_path + '/' + os.path.basename(mockcfg)

    # modify with localrepo
    res, msg = add_local_repo(mockcfg, my_mock_config, local_baseurl, 'local_build_repo')
    if not res:
        log(opts.logfile, "Error: Could not write out local config: %s" % msg)
        sys.exit(1)

    for baseurl in opts.repos:
        res, msg = add_local_repo(my_mock_config, my_mock_config, baseurl)
        if not res:
            log(opts.logfile, "Error: Could not add: %s to yum config in mock chroot: %s" % (baseurl, msg))
            sys.exit(1)

    # these files needed from the mock.config dir to make mock run
    for fn in ['site-defaults.cfg', 'logging.ini']:
        pth = mockconfig_path + '/' + fn
        shutil.copyfile(pth, opts.config_path + '/' + fn)

    # createrepo on it
    out, err = createrepo(opts.local_repo_dir)
    if err.strip():
        log(opts.logfile, "Error making local repo: %s" % opts.local_repo_dir)
        log(opts.logfile, "Err: %s" % err)
        sys.exit(1)

    download_dir = tempfile.mkdtemp()
    downloaded_pkgs = {}
    built_pkgs = []
    try_again = True
    to_be_built = pkgs
    while try_again:
        failed = []
        for pkg in to_be_built:
            if not pkg.endswith('.rpm'):
                log(opts.logfile, "%s doesn't appear to be an rpm - skipping" % pkg)
                failed.append(pkg)
                continue

            elif pkg.startswith('http://') or pkg.startswith('https://'):
                url = pkg
                cwd = os.getcwd()
                os.chdir(download_dir)
                try:
                    log(opts.logfile, 'Fetching %s' % url)
                    ug = grabber.URLGrabber()
                    fn = ug.urlgrab(url)
                    pkg = download_dir + '/' + fn
                except Exception, e:
                    log(opts.logfile, 'Error Downloading %s: %s' % (url, str(e)))
                    failed.append(url)
                    os.chdir(cwd)
                    continue
                else:
                    os.chdir(cwd)
                    downloaded_pkgs[pkg] = url
            log(opts.logfile, "Start build: %s" % pkg)
            ret, cmd, out, err = do_build(opts, cfg, pkg)
            log(opts.logfile, "End build: %s" % pkg)
            if ret == 0:
                if opts.recurse:
                    failed.append(pkg)
                    log(opts.logfile, "Error building %s, will try again" % os.path.basename(pkg))
                else:
                    log(opts.logfile, "Error building %s" % os.path.basename(pkg))
                    log(opts.logfile, "See logs/results in %s" % opts.local_repo_dir)
                    if not opts.cont:
                        sys.exit(1)

            elif ret == 1:
                log(opts.logfile, "Success building %s" % os.path.basename(pkg))
                built_pkgs.append(pkg)
                # createrepo with the new pkgs
                out, err = createrepo(opts.local_repo_dir)
                if err.strip():
                    log(opts.logfile, "Error making local repo: %s" % opts.local_repo_dir)
                    log(opts.logfile, "Err: %s" % err)
            elif ret == 2:
                log(opts.logfile, "Skipping already built pkg %s" % os.path.basename(pkg))

        if failed:
            if len(failed) != len(to_be_built):
                to_be_built = failed
                try_again = True
                log(opts.logfile, 'Trying to rebuild %s failed pkgs' % len(failed))
            else:
                log(opts.logfile, "Tried twice - following pkgs could not be successfully built:")
                for pkg in failed:
                    msg = pkg
                    if pkg in downloaded_pkgs:
                        msg = downloaded_pkgs[pkg]
                    log(opts.logfile, msg)

                try_again = False
        else:
            try_again = False

    # cleaning up our download dir
    shutil.rmtree(download_dir, ignore_errors=True)

    log(opts.logfile, "Results out to: %s" % opts.local_repo_dir)
    log(opts.logfile, "Pkgs built: %s" % len(built_pkgs))
    log(opts.logfile, "Packages successfully built in this order:")
    for pkg in built_pkgs:
        log(opts.logfile, pkg)


if __name__ == "__main__":
    main(sys.argv)
    sys.exit(0)
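The `--recurse` loop above keeps retrying the packages that failed, stopping only when a whole pass builds nothing new (so missing-dependency chains eventually resolve via the local repo). A condensed, hypothetical sketch of that control flow, with `build_ok` standing in for `do_build`:

```python
def chain_build(pkgs, build_ok):
    """Retry failed pkgs until a whole pass makes no progress."""
    built, to_be_built = [], list(pkgs)
    while to_be_built:
        failed = [p for p in to_be_built if not build_ok(p)]
        built.extend(p for p in to_be_built if p not in failed)
        if len(failed) == len(to_be_built):
            break  # no progress this pass - give up, like mockchain does
        to_be_built = failed
    return built, to_be_built

# 'b' only builds after 'a' has landed in the (simulated) local repo.
repo = set()
def build_ok(p):
    ok = (p == 'a') or ('a' in repo)
    if ok:
        repo.add(p)
    return ok

print(chain_build(['b', 'a'], build_ok))  # → (['a', 'b'], [])
```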
@@ -1,18 +1,16 @@
#jinja2:variable_start_string:'[%' , variable_end_string:'%]'
---
- name: terminate instance
  hosts: all
  user: root
  gather_facts: False

  vars:
  - OS_AUTH_URL: http://172.23.0.2:5000/v2.0
  - OS_TENANT_NAME: copr
  - OS_USERNAME: msuchy
  - OS_PASSWORD: [% copr_nova_password %]

  tasks:
  - name: find the instance id from the builder
    action: command curl -s http://169.254.169.254/latest/meta-data/instance-id
    register: instanceid

  - name: terminate it
    local_action: nova_compute auth_url={{OS_AUTH_URL}} login_password={{OS_PASSWORD}} login_tenant_name={{OS_TENANT_NAME}} login_username={{OS_USERNAME}} name="{{copr_task.vm_name}}" state=absent
    local_action: command euca-terminate-instances ${instanceid.stdout}

@@ -13,7 +13,6 @@
noc1.phx2.fedoraproject.org
10.5.126.41
192.168.1.10
209.132.181.35

# RDU NAT
66.187.233.202
@@ -26,5 +25,3 @@ noc1.phx2.fedoraproject.org
66.187.237.10
# brno RHT NAT
209.132.186.34
# IUD RHT NAT
66.187.233.203
@@ -1,3 +0,0 @@
# run twice daily rsync of download. but lock it
MAILTO=smooge@gmail.com
00 11,23 * * * root /usr/local/bin/lock-wrapper sync-up-downloads "/usr/local/bin/sync-up-downloads"
@@ -1,27 +0,0 @@
#!/bin/bash

##
## This script is used to sync data from main download servers to
## secondary server at ibiblio.
##

RSYNC='/usr/bin/rsync'
RS_OPT="-avSHP --numeric-ids"
RS_DEADLY="--delete --delete-excluded --delete-delay --delay-updates"
ALT_EXCLUDES="--exclude deltaisos/archive"
EPL_EXCLUDES=""
FED_EXCLUDES=""

SERVER=dl.fedoraproject.org

# http://dl.fedoraproject.org/pub/alt/stage/
${RSYNC} ${RS_OPT} ${RS_DEADLY} ${ALT_EXCLUDES} ${SERVER}::fedora-alt/stage/ /srv/pub/alt/stage/ | tail -n2 | logger -p local0.notice -t rsync_updates_alt_stg
# http://dl.fedoraproject.org/pub/alt/bfo/
${RSYNC} ${RS_OPT} ${RS_DEADLY} ${ALT_EXCLUDES} ${SERVER}::fedora-alt/bfo/ /srv/pub/alt/bfo/ | tail -n2 | logger -p local0.notice -t rsync_updates_alt_bfo
# http://dl.fedoraproject.org/pub/epel/
${RSYNC} ${RS_OPT} ${RS_DEADLY} ${EPL_EXCLUDES} ${SERVER}::fedora-epel/ /srv/pub/epel/ | tail -n2 | logger -p local0.notice -t rsync_updates_epel
# http://dl.fedoraproject.org/pub/fedora/
${RSYNC} ${RS_OPT} ${RS_DEADLY} ${FED_EXCLUDES} ${SERVER}::fedora-enchilada0/ /srv/pub/fedora/ | tail -n2 | logger -p local0.notice -t rsync_updates_fedora

# Let MM know I'm all up to date
#/usr/bin/report_mirror
@@ -1 +1 @@
*/30 * * * * root /usr/local/bin/lock-wrapper fasClient "/bin/sleep $(($RANDOM \% 180)); /usr/bin/fasClient -i | /usr/local/bin/nag-once fassync 1d 2>&1"
*/10 * * * * root /usr/local/bin/lock-wrapper fasClient "/bin/sleep $(($RANDOM \% 180)); /usr/bin/fasClient -i | /usr/local/bin/nag-once fassync 1d 2>&1"
@@ -1,10 +1,6 @@
[global]
; url - Location to fas server
{% if env == "staging" %}
url = https://admin.stg.fedoraproject.org/accounts/
{% else %}
url = https://admin.fedoraproject.org/accounts/
{% endif %}

; temp - Location to generate files while user creation process is happening
temp = /var/db
@@ -30,7 +26,7 @@ cla_group = cla_done
; in 'groups'

; groups that should have a shell account on this system.
{% if fas_client_groups is defined %}
{% if fas_client_groups %}
groups = sysadmin-main,{{ fas_client_groups }}
{% else %}
groups = sysadmin-main
@@ -44,7 +40,7 @@ restricted_groups =
; need to disable password based logins in order for this value to have any
; security meaning. Group types can be placed here as well, for example
; @hg,@git,@svn
{% if fas_client_ssh_groups is defined %}
{% if fas_client_ssh_groups %}
ssh_restricted_groups = {{ fas_client_ssh_groups }}
{% else %}
ssh_restricted_groups =
@@ -70,14 +66,14 @@ home_backup_dir = /home/fedora.bak
; is a powerful way to restrict access to a machine. An alternative example
; could be given to people who should only have cvs access on the machine.
; setting this value to "/usr/bin/cvs server" would do this.
{% if fas_client_restricted_app is defined %}
{% if fas_client_restricted_app %}
ssh_restricted_app = {{ fas_client_restricted_app }}
{% else %}
ssh_restricted_app =
{% endif %}

; ssh_admin_app - This is the path to an app that an admin is allowed to use.
{% if fas_client_admin_app is defined %}
{% if fas_client_admin_app %}
ssh_admin_app = {{ fas_client_admin_app }}
{% else %}
ssh_admin_app =
@@ -1,3 +1,4 @@

config = dict(
    # Set this to dev if you're hacking on fedmsg or an app locally.
    # Set to stg or prod if running in the Fedora Infrastructure.
@@ -7,20 +8,6 @@ config = dict(
    environment="prod",
{% endif %}

{% if not ansible_hostname.startswith('busgateway') %}
    # These options provide a place for hub processes to write out their last
    # processed message. This lets them read it in at startup and figure out
    # what kind of backlog they have to deal with.
    status_directory="/var/run/fedmsg/status",

    # This is the URL of a datagrepper instance that we can query for backlog.
{% if env == 'staging' %}
    datagrepper_url="https://apps.stg.fedoraproject.org/datagrepper/raw",
{% else %}
    datagrepper_url="https://apps.fedoraproject.org/datagrepper/raw",
{% endif %}
{% endif %}

    # This used to be set to 1 for safety, but it turns out it was
    # excessive. It is the number of seconds that fedmsg should sleep
    # after it has initialized, but before it begins to try and send any
@@ -58,19 +45,3 @@ config = dict(
    zmq_tcp_keepalive_idle=60,
    zmq_tcp_keepalive_intvl=5,
)

# This option adds an IPC socket by which we can monitor hub health.
try:
    import os
    import psutil

    pid = os.getpid()
    proc = [p for p in psutil.process_iter() if p.pid == pid][0]

    config['moksha.monitoring.socket'] = \
        'ipc:///var/run/fedmsg/monitoring-%s.socket' % proc.name
except (OSError, ImportError):
    # We run into issues when trying to import psutil from mod_wsgi on rhel7
    # but this feature is of no concern in that context, so just fail quietly.
    # https://github.com/jmflinuxtx/kerneltest-harness/pull/17#issuecomment-48007837
    pass
@@ -6,8 +6,8 @@ suffix = 'phx2.fedoraproject.org'

config = dict(
    endpoints={
        "summershum.summershum01": [
            "tcp://summershum01.%s:3000" % suffix,
        "fedbadges.badges-backend01": [
            "tcp://badges-backend01.%s:3000" % suffix,
        ],
    },
)
@@ -7,7 +7,6 @@ non_phx_suffix = 'fedoraproject.org'
vpn_suffix = 'vpn.fedoraproject.org'
{% endif %}


config = dict(
    # This is a dict of possible addresses from which fedmsg can send
    # messages. fedmsg.init(...) requires that a 'name' argument be passed
@@ -17,23 +16,47 @@ config = dict(
    # name of its calling module to determine which endpoint definition
    # to use. This can be overridden by explicitly providing the name in
    # the initial call to fedmsg.init(...).
    "bodhi.branched-composer": [
        "tcp://branched-composer.%s:3000" % suffix,
        "tcp://branched-composer.%s:3001" % suffix,
    ],
    "bodhi.rawhide-composer": [
        "tcp://rawhide-composer.%s:3000" % suffix,
        "tcp://rawhide-composer.%s:3001" % suffix,
    ],
    "bodhi.bodhi01": [
        "tcp://bodhi01.%s:300%i" % (suffix, i)
    "bodhi.app01": [
        "tcp://app01.%s:300%i" % (suffix, i)
        for i in range(8)
    ],
    "bodhi.bodhi02": [
        "tcp://bodhi02.%s:300%i" % (suffix, i)
    "bodhi.app02": [
        "tcp://app02.%s:300%i" % (suffix, i)
        for i in range(8)
    ],
    "bodhi.releng01": [
        "tcp://releng01.%s:3000" % suffix,
        "tcp://releng01.%s:3001" % suffix,
    ],
    "bodhi.releng02": [
        "tcp://releng02.%s:3000" % suffix,
        "tcp://releng02.%s:3001" % suffix,
    ],
{% if not env == 'staging' %}
    "bodhi.app03": [
        "tcp://app03.%s:300%i" % (suffix, i)
        for i in range(8)
    ],
    "bodhi.app04": [
        "tcp://app04.%s:300%i" % (suffix, i)
        for i in range(8)
    ],
    "bodhi.app05": [
        "tcp://app05.%s:300%i" % (non_phx_suffix, i)
        for i in range(8)
    ],
    "bodhi.app06": [
        "tcp://app06.%s:300%i" % (non_phx_suffix, i)
        for i in range(8)
    ],
    "bodhi.app07": [
        "tcp://app07.%s:300%i" % (suffix, i)
        for i in range(8)
    ],
    "bodhi.app08": [
        "tcp://app08.%s:300%i" % (non_phx_suffix, i)
        for i in range(8)
    ],
    "bodhi.releng04": [
        "tcp://releng04.%s:3000" % suffix,
        "tcp://releng04.%s:3001" % suffix,
@@ -43,39 +66,41 @@ config = dict(
        "tcp://relepel01.%s:3001" % suffix,
    ],
{% endif %}
    # FAS is a little out of the ordinary. It has 40 endpoints instead of
    # FAS is a little out of the ordinary. It has 32 endpoints instead of
    # the usual 8 since there are so many mod_wsgi processes for it.
    "fas.fas01": [
        "tcp://fas01.%s:30%02i" % (suffix, i)
        for i in range(40)
        for i in range(32)
    ],
{% if env != 'staging' %}
    "fas.fas02": [
        "tcp://fas02.%s:30%02i" % (suffix, i)
        for i in range(40)
        for i in range(32)
    ],
    "fas.fas03": [
        "tcp://fas03.%s:30%02i" % (suffix, i)
        for i in range(40)
        for i in range(32)
    ],
{% endif %}
    # fedoratagger needs 32 endpoints too, just like FAS.
    "fedoratagger.tagger01": [
        "tcp://tagger01.%s:30%02i" % (suffix, i)
    # Well, fedoratagger needs 32 endpoints too, just like FAS.
    "fedoratagger.packages01": [
        "tcp://packages01.%s:30%02i" % (suffix, i)
        for i in range(32)
    ],
{% if env != 'staging' %}
    "fedoratagger.tagger02": [
        "tcp://tagger02.%s:30%02i" % (suffix, i)
    "fedoratagger.packages02": [
        "tcp://packages02.%s:30%02i" % (suffix, i)
        for i in range(32)
    ],
{% endif %}

    # This used to be on value01 and value03.. but now we just have one
    "supybot.value01": [
        "tcp://value01.%s:3000" % suffix,
    "busmon_consumers.busgateway01": [
        "tcp://busgateway01.%s:3000" % suffix,
    ],

{% if env != 'staging' %}
    "supybot.value03": [
        "tcp://value03.%s:3000" % suffix,
    ],
{% endif %}
    # Askbot runs as 6 processes with 1 thread each.
    "askbot.ask01": [
        "tcp://ask01.%s:30%02i" % (suffix, i)
@@ -88,6 +113,18 @@ config = dict(
        for i in range(6)
    ],

{% if env != 'staging' %}
    # fedorahosted trac runs as 4 processes with 4 threads each.
    "trac.hosted03": [
        "tcp://hosted03.%s:30%02i" % (vpn_suffix, i)
        for i in range(16)
    ],
    "trac.hosted04": [
        "tcp://hosted04.%s:30%02i" % (vpn_suffix, i)
        for i in range(16)
    ],
{% endif %}

    # koji is not listed here since it publishes to the fedmsg-relay
    },
)
32
files/fedmsg/logging.py.j2
Normal file
@@ -0,0 +1,32 @@
# Setup fedmsg logging.
# See the following for constraints on this format http://bit.ly/Xn1WDn
config = dict(
    logging=dict(
        version=1,
        formatters=dict(
            bare={
                "format": "%(message)s",
            },
        ),
        handlers=dict(
            console={
                "class": "logging.StreamHandler",
                "formatter": "bare",
                "level": "DEBUG",
                "stream": "ext://sys.stdout",
            }
        ),
        loggers=dict(
            fedmsg={
                "level": "DEBUG",
                "propagate": False,
                "handlers": ["console"],
            },
            moksha={
                "level": "DEBUG",
                "propagate": False,
                "handlers": ["console"],
            },
        ),
    ),
)
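Once rendered, a template like the one above is just a stdlib dictConfig schema that fedmsg feeds to `logging.config`. A minimal sketch of consuming such a config (using the rendered dict directly, not the Jinja2 source):

```python
import logging
import logging.config

# Rendered form of the fedmsg logging template above (fedmsg logger only).
config = dict(
    logging=dict(
        version=1,
        formatters=dict(bare={"format": "%(message)s"}),
        handlers=dict(console={
            "class": "logging.StreamHandler",
            "formatter": "bare",
            "level": "DEBUG",
            "stream": "ext://sys.stdout",
        }),
        loggers=dict(fedmsg={
            "level": "DEBUG",
            "propagate": False,
            "handlers": ["console"],
        }),
    ),
)

logging.config.dictConfig(config['logging'])
log = logging.getLogger('fedmsg')
log.debug("hello from fedmsg")  # emitted bare on stdout, no timestamp prefix
```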
46
files/fedmsg/pkgdb.py.j2
Normal file
@@ -0,0 +1,46 @@
{% if env == 'staging' %}
suffix = 'stg.phx2.fedoraproject.org'
non_phx_suffix = 'stg.fedoraproject.org'
{% else %}
suffix = 'phx2.fedoraproject.org'
non_phx_suffix = 'fedoraproject.org'
{% endif %}

config = dict(
    endpoints={
        "pkgdb.app01": [
            "tcp://app01.%s:301%i" % (suffix, i)
            for i in range(6)
        ],
        "pkgdb.app02": [
            "tcp://app02.%s:301%i" % (suffix, i)
            for i in range(6)
        ],
{% if not env == 'staging' %}
        "pkgdb.app03": [
            "tcp://app03.%s:301%i" % (suffix, i)
            for i in range(6)
        ],
        "pkgdb.app04": [
            "tcp://app04.%s:301%i" % (suffix, i)
            for i in range(6)
        ],
        "pkgdb.app05": [
            "tcp://app05.%s:301%i" % (non_phx_suffix, i)
            for i in range(6)
        ],
        "pkgdb.app06": [
            "tcp://app06.%s:301%i" % (non_phx_suffix, i)
            for i in range(6)
        ],
        "pkgdb.app07": [
            "tcp://app07.%s:301%i" % (suffix, i)
            for i in range(6)
        ],
        "pkgdb.app08": [
            "tcp://app08.%s:301%i" % (non_phx_suffix, i)
            for i in range(6)
        ],
{% endif %}
    },
)
@@ -13,7 +13,10 @@ config = dict(
    # This is the output side of the relay to which all other
    # services can listen.
    "relay_outbound": [
        # Messages emerge here
        # Messages from inside phx2 and the vpn emerge here
        "tcp://app01.%s:3999" % suffix,

        # Messages from coprs and secondary arch composes emerge here
        "tcp://busgateway01.%s:3999" % suffix,
    ],
},
@@ -24,26 +27,13 @@ config = dict(
    # It is also used by the mediawiki php plugin which, due to the oddities of
    # php, can't maintain a single passive-bind endpoint of its own.
    relay_inbound=[
{% if 'persistent-cloud' in group_names or 'jenkins-cloud' in group_names %}

        # Stuff from the cloud has to go through our external proxy first..
        #"tcp://hub.fedoraproject.org:9941",

        # ...and normally, we'd like them to go through round-robin, but we're
        # not getting messages in from proxies across the vpn. So, only use
        # proxy01 for now.
        "tcp://209.132.181.16:9941",

{% else %}

        # Primarily, scripts from inside phx2 connect here.
        # Furthermore, scripts from outside (coprs, secondary arch koji) connect
        # here via haproxy.
        "tcp://busgateway01.%s:9941" % suffix,
        # Scripts inside phx2 connect here
        "tcp://app01.%s:3998" % suffix,

        # Scripts from the vpn (people03) connect here
        "tcp://busgateway01.vpn.fedoraproject.org:3998",
        "tcp://app01.vpn.fedoraproject.org:3998",

{% endif %}
        # Scripts from outside connect here (coprs, secondary arch composes)
        "tcp://busgateway01.%s:9941" % suffix,
    ],
)
325
files/fedmsg/ssl.py.j2
Normal file
@@ -0,0 +1,325 @@

{% if env == 'staging' %}
suffix = "stg.phx2.fedoraproject.org"
app_hosts = [
    "app01.stg.phx2.fedoraproject.org",
    "app02.stg.phx2.fedoraproject.org",
]
topic_prefix = "org.fedoraproject.stg."
{% else %}
suffix = "phx2.fedoraproject.org"
app_hosts = [
    "app01.phx2.fedoraproject.org",
    "app02.phx2.fedoraproject.org",
    "app03.phx2.fedoraproject.org",
    "app04.phx2.fedoraproject.org",
    "app05.fedoraproject.org",
    "app06.fedoraproject.org",
    "app07.phx2.fedoraproject.org",
    "app08.fedoraproject.org",
]
topic_prefix = "org.fedoraproject.prod."
{% endif %}

vpn_suffix = "vpn.fedoraproject.org"

config = dict(
    sign_messages=True,
    validate_signatures=True,
    ssldir="/etc/pki/fedmsg",

    crl_location="https://fedoraproject.org/fedmsg/crl.pem",
    crl_cache="/var/run/fedmsg/crl.pem",
    crl_cache_expiry=86400,  # Daily

    certnames=dict(
        [
            ("shell.app0%i" % i, "shell-%s" % app_hosts[i-1])
            for i in range(1, len(app_hosts) + 1)
        ] + [
            ("bodhi.app0%i" % i, "bodhi-%s" % app_hosts[i-1])
            for i in range(1, len(app_hosts) + 1)
        ] + [
            ("pkgdb.app0%i" % i, "pkgdb-%s" % app_hosts[i-1])
            for i in range(1, len(app_hosts) + 1)
        ] + [
            ("mediawiki.app0%i" % i, "mediawiki-%s" % app_hosts[i-1])
            for i in range(1, len(app_hosts) + 1)
        ] + [
            ("shell.fas0%i" % i, "shell-fas0%i.%s" % (i, suffix))
            for i in range(1, 4)
        ] + [
            ("fas.fas0%i" % i, "fas-fas0%i.%s" % (i, suffix))
            for i in range(1, 4)
        ] + [
            ("shell.packages0%i" % i, "shell-packages0%i.%s" % (i, suffix))
            for i in range(1, 3)
        ] + [
            ("fedoratagger.packages0%i" % i, "fedoratagger-packages0%i.%s" % (i, suffix))
            for i in range(1, 3)
        ] + [
            ("shell.pkgs0%i" % i, "shell-pkgs0%i.%s" % (i, suffix))
            for i in range(1, 2)
        ] + [
            ("scm.pkgs0%i" % i, "scm-pkgs0%i.%s" % (i, suffix))
            for i in range(1, 2)
        ] + [
            ("lookaside.pkgs0%i" % i, "lookaside-pkgs0%i.%s" % (i, suffix))
            for i in range(1, 2)
        ] + [
            ("shell.relepel01", "shell-relepel01.%s" % suffix),
            ("shell.releng01", "shell-releng01.%s" % suffix),
            ("shell.releng02", "shell-releng02.%s" % suffix),
            ("shell.releng03", "shell-releng03.%s" % suffix),
            ("shell.releng04", "shell-releng04.%s" % suffix),
            ("bodhi.relepel01", "bodhi-relepel01.%s" % suffix),
            ("bodhi.releng01", "bodhi-releng01.%s" % suffix),
            ("bodhi.releng02", "bodhi-releng02.%s" % suffix),
            ("bodhi.releng03", "bodhi-releng03.%s" % suffix),
            ("bodhi.releng04", "bodhi-releng04.%s" % suffix),
        ] + [
            ("busmon_consumers.busgateway01", "busmon-busgateway01.%s" % suffix),
            ("shell.busgateway01", "shell-busgateway01.%s" % suffix),
        ] + [
            ("shell.value01", "shell-value01.%s" % suffix),
            ("shell.value03", "shell-value03.%s" % suffix),
            ("supybot.value03", "supybot-value03.%s" % suffix),
        ] + [
            ("koji.koji04", "koji-koji04.%s" % suffix),
            ("koji.koji01", "koji-koji01.%s" % suffix),
            ("koji.koji03", "koji-koji03.%s" % suffix),
            ("shell.koji04", "shell-koji04.%s" % suffix),
            ("shell.koji01", "shell-koji01.%s" % suffix),
            ("shell.koji03", "shell-koji03.%s" % suffix),
        ] + [
            ("nagios.noc01", "nagios-noc01.%s" % suffix),
            ("shell.noc01", "shell-noc01.%s" % suffix),
        ] + [
            ("git.hosted03", "git-hosted03.%s" % vpn_suffix),
            ("git.hosted04", "git-hosted04.%s" % vpn_suffix),
            ("trac.hosted03", "trac-hosted03.%s" % vpn_suffix),
            ("trac.hosted04", "trac-hosted04.%s" % vpn_suffix),
            ("shell.hosted03", "shell-hosted03.%s" % vpn_suffix),
            ("shell.hosted04", "shell-hosted04.%s" % vpn_suffix),
        ] + [
            ("shell.lockbox01", "shell-lockbox01.%s" % suffix),
            ("announce.lockbox01", "announce-lockbox01.%s" % suffix),
        ] + [
            # These first two entries are here to placate a bug in
            # python-askbot-fedmsg-0.0.4. They can be removed once
            # python-askbot-fedmsg-0.0.5 hits town.
            ("askbot.ask01.phx2.fedoraproject.org", "askbot-ask01.%s" % suffix),
            ("askbot.ask01.stg.phx2.fedoraproject.org", "askbot-ask01.%s" % suffix),

            ("askbot.ask01", "askbot-ask01.%s" % suffix),
            ("shell.ask01", "shell-ask01.%s" % suffix),

            ("askbot.ask02", "askbot-ask02.%s" % suffix),
            ("shell.ask02", "shell-ask02.%s" % suffix),

            ("fedbadges.badges-backend01", "fedbadges-badges-backend01.%s" % suffix),
            ("shell.badges-backend01", "shell-badges-backend01.%s" % suffix),
        ]),
    routing_policy={
        # The gist here is that only messages signed by the
        # bodhi-app0{1,2,3,4,5,6,7,8} certificates may bear the
        # "org.fedoraproject.prod.bodhi.update.request.stable" topic, or else
        # they fail validation and are either dropped or marked as invalid
        # (depending on the consumer's wishes).
        #
        # There is another option that we do not set. If `routing_nitpicky` is
        # set to True, then a given message's topic *must* appear in this list
        # in order for it to pass validation. For instance, we have
        # routing_nitpicky set to False by default and no
        # "org.fedoraproject.prod.logger.log" topics appear in this policy,
        # therefore, any message bearing that topic and *any* certificate signed
        # by our CA may pass validation.
        #
        topic_prefix + "bodhi.update.request.stable": [
            "bodhi-%s" % app_hosts[i-1]
            for i in range(1, len(app_hosts) + 1)
        ],
        topic_prefix + "bodhi.update.request.testing": [
            "bodhi-%s" % app_hosts[i-1]
            for i in range(1, len(app_hosts) + 1)
        ],
        topic_prefix + "bodhi.update.request.unpush": [
            "bodhi-%s" % app_hosts[i-1]
            for i in range(1, len(app_hosts) + 1)
        ],
        topic_prefix + "bodhi.update.comment": [
            "bodhi-%s" % app_hosts[i-1]
            for i in range(1, len(app_hosts) + 1)
        ],
        topic_prefix + "bodhi.buildroot_override.tag": [
            "bodhi-%s" % app_hosts[i-1]
            for i in range(1, len(app_hosts) + 1)
        ],
        topic_prefix + "bodhi.buildroot_override.untag": [
            "bodhi-%s" % app_hosts[i-1]
            for i in range(1, len(app_hosts) + 1)
        ],
        topic_prefix + "bodhi.mashtask.mashing": [
            "bodhi-releng04.%s" % suffix,
            "bodhi-relepel01.%s" % suffix,
        ],
        topic_prefix + "bodhi.mashtask.complete": [
            "bodhi-releng04.%s" % suffix,
            "bodhi-relepel01.%s" % suffix,
        ],


        # Compose (rel-eng) messages (use the bodhi certs)
        topic_prefix + "compose.rawhide.start": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.rawhide.complete": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.rawhide.mash.start": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.rawhide.mash.complete": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.rawhide.rsync.start": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.rawhide.rsync.complete": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.branched.start": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.branched.complete": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.branched.pungify.start": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.branched.pungify.complete": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.branched.mash.start": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.branched.mash.complete": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.branched.rsync.start": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.branched.rsync.complete": [
            "bodhi-releng03.%s" % suffix,
        ],


        # FAS messages
        topic_prefix + "fas.user.create": [
            "fas-fas0%i.%s" % (i, suffix) for i in range(1, 4)
        ],
        topic_prefix + "fas.user.update": [
            "fas-fas0%i.%s" % (i, suffix) for i in range(1, 4)
        ],
        topic_prefix + "fas.group.edit": [
            "fas-fas0%i.%s" % (i, suffix) for i in range(1, 4)
        ],
        topic_prefix + "fas.group.update": [
            "fas-fas0%i.%s" % (i, suffix) for i in range(1, 4)
        ],
        topic_prefix + "fas.group.create": [
            "fas-fas0%i.%s" % (i, suffix) for i in range(1, 4)
        ],
        topic_prefix + "fas.role.update": [
            "fas-fas0%i.%s" % (i, suffix) for i in range(1, 4)
        ],
        topic_prefix + "fas.group.member.remove": [
            "fas-fas0%i.%s" % (i, suffix) for i in range(1, 4)
        ],
        topic_prefix + "fas.group.member.sponsor": [
            "fas-fas0%i.%s" % (i, suffix) for i in range(1, 4)
        ],
        topic_prefix + "fas.group.member.apply": [
            "fas-fas0%i.%s" % (i, suffix) for i in range(1, 4)
        ],

        # Git/SCM messages
        topic_prefix + "git.receive": [
            "scm-pkgs01.%s" % suffix,
        ],
        topic_prefix + "git.lookaside.new": [
            "lookaside-pkgs01.%s" % suffix,
        ],

        # Tagger messages
        topic_prefix + "fedoratagger.tag.update": [
            "fedoratagger-packages0%i.%s" % (i, suffix) for i in range(1, 3)
        ],
|
||||
topic_prefix + "fedoratagger.tag.create": [
|
||||
"fedoratagger-packages0%i.%s" % (i, suffix) for i in range(1, 3)
|
||||
],
|
||||
topic_prefix + "fedoratagger.user.rank.update": [
|
||||
"fedoratagger-packages0%i.%s" % (i, suffix) for i in range(1, 3)
|
||||
],
|
||||
|
||||
# Mediawiki messages
|
||||
topic_prefix + "wiki.article.edit": [
|
||||
"mediawiki-%s" % app_hosts[i-1]
|
||||
for i in range(1, len(app_hosts) + 1)
|
||||
],
|
||||
topic_prefix + "wiki.upload.complete": [
|
||||
"mediawiki-%s" % app_hosts[i-1]
|
||||
for i in range(1, len(app_hosts) + 1)
|
||||
],
|
||||
|
||||
# Pkgdb messages
|
||||
topic_prefix + "pkgdb.acl.update": [
|
||||
"pkgdb-%s" % app_hosts[i-1]
|
||||
for i in range(1, len(app_hosts) + 1)
|
||||
],
|
||||
topic_prefix + "pkgdb.acl.request.toggle": [
|
||||
"pkgdb-%s" % app_hosts[i-1]
|
||||
for i in range(1, len(app_hosts) + 1)
|
||||
],
|
||||
topic_prefix + "pkgdb.acl.user.remove": [
|
||||
"pkgdb-%s" % app_hosts[i-1]
|
||||
for i in range(1, len(app_hosts) + 1)
|
||||
],
|
||||
topic_prefix + "pkgdb.owner.update": [
|
||||
"pkgdb-%s" % app_hosts[i-1]
|
||||
for i in range(1, len(app_hosts) + 1)
|
||||
],
|
||||
topic_prefix + "pkgdb.package.new": [
|
||||
"pkgdb-%s" % app_hosts[i-1]
|
||||
for i in range(1, len(app_hosts) + 1)
|
||||
],
|
||||
topic_prefix + "pkgdb.package.update": [
|
||||
"pkgdb-%s" % app_hosts[i-1]
|
||||
for i in range(1, len(app_hosts) + 1)
|
||||
],
|
||||
topic_prefix + "pkgdb.package.retire": [
|
||||
"pkgdb-%s" % app_hosts[i-1]
|
||||
for i in range(1, len(app_hosts) + 1)
|
||||
],
|
||||
topic_prefix + "pkgdb.critpath.update": [
|
||||
"pkgdb-%s" % app_hosts[i-1]
|
||||
for i in range(1, len(app_hosts) + 1)
|
||||
],
|
||||
|
||||
# Planet/venus
|
||||
topic_prefix + "planet.post.new": [
|
||||
"planet-people03.vpn.fedoraproject.org",
|
||||
],
|
||||
|
||||
# Supybot/meetbot
|
||||
topic_prefix + "meetbot.meeting.start": [
|
||||
"supybot-value03.%s" % suffix,
|
||||
],
|
||||
|
||||
# Only @spot and @rbergeron can use this one
|
||||
topic_prefix + "announce.announcement": [
|
||||
"announce-lockbox01.phx2.fedoraproject.org",
|
||||
],
|
||||
},
|
||||
)
|
||||
|
||||
@@ -1,40 +0,0 @@
#!/bin/bash
# backup.sh runs FROM backup03 TO the various GNOME boxes in the set. (There are two sets
# of machines: those with a public IP, and the IP-less ones that
# will forward their agent through bastion.gnome.org.)

export PATH=$PATH:/bin:/usr/bin:/usr/local/bin

MACHINES='signal.gnome.org
webapps2.gnome.org
clutter.gnome.org
blogs.gnome.org
chooser.gnome.org
git.gnome.org
webapps.gnome.org
socket.gnome.org
bugzilla-web.gnome.org
progress.gnome.org
clipboard.gnome.org
cloud-ssh.gnome.org
bastion.gnome.org
spinner.gnome.org
master.gnome.org
combobox.gnome.org
restaurant.gnome.org
expander.gnome.org
live.gnome.org
extensions.gnome.org
view.gnome.org
puppet.gnome.org
accelerator.gnome.org
range.gnome.org
pentagon.gimp.org'

BACKUP_DIR='/fedora_backups/gnome/'
LOGS_DIR='/fedora_backups/gnome/logs'

for MACHINE in $MACHINES; do
rsync -avz -e 'ssh -F /usr/local/etc/gnome_ssh_config' --bwlimit=2000 $MACHINE:/etc/rsyncd/backup.exclude $BACKUP_DIR/excludes/$MACHINE.exclude
rdiff-backup --remote-schema 'ssh -F /usr/local/etc/gnome_ssh_config %s rdiff-backup --server' --print-statistics --exclude-device-files --exclude /selinux --exclude /sys --exclude /proc --exclude-globbing-filelist $BACKUP_DIR/excludes/$MACHINE.exclude $MACHINE::/ $BACKUP_DIR/$MACHINE/ | mail -s "Daily backup: $MACHINE" backups@gnome.org
done
@@ -1,8 +0,0 @@
Host live.gnome.org extensions.gnome.org puppet.gnome.org view.gnome.org drawable.gnome.org
    User root
    IdentityFile /usr/local/etc/gnome_backup_id.rsa
    ProxyCommand ssh -W %h:%p bastion.gnome.org -F /usr/local/etc/gnome_ssh_config

Host *.gnome.org pentagon.gimp.org
    User root
    IdentityFile /usr/local/etc/gnome_backup_id.rsa
@@ -4,7 +4,6 @@
10.5.125.36 kojipkgs.fedoraproject.org
10.5.126.23 infrastructure.fedoraproject.org
10.5.124.138 arm.koji.fedoraproject.org
10.5.124.138 armpkgs.fedoraproject.org
10.5.125.44 pkgs.fedoraproject.org pkgs
#
# This is proxy01.phx2.fedoraproject.org
@@ -0,0 +1,10 @@
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

10.5.126.89 admin.fedoraproject.org
10.5.126.88 proxy01.phx2.fedoraproject.org proxy1 proxy2 proxy3 proxy4 proxy5 proxy01 proxy02 proxy03 proxy04 prox05 fedoraproject.org
10.5.126.86 fas01.phx2.fedoraproject.org fas1 fas2 fas01 fas02 fas03 fas-all
10.5.126.23 infrastructure.fedoraproject.org

10.5.126.85 db-datanommer db-datanommer
10.5.126.85 db-tahrir db-tahrir
11 files/hosts/badges-web01.stg.phx2.fedoraproject.org-hosts Normal file
@@ -0,0 +1,11 @@
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

10.5.126.89 admin.fedoraproject.org
10.5.126.88 proxy01.phx2.fedoraproject.org proxy1 proxy2 proxy3 proxy4 proxy5 proxy01 proxy02 proxy03 proxy04 proxy05 fedoraproject.org
10.5.126.86 fas01.phx2.fedoraproject.org fas1 fas2 fas01 fas02 fas03 fas-all
10.5.126.23 infrastructure.fedoraproject.org

10.5.126.81 memcached03 memcached03.stg app01 app01.stg

10.5.126.85 db-tahrir db-tahrir
@@ -1,12 +1,12 @@
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.5.126.23 infrastructure.fedoraproject.org
10.5.126.52 admin.fedoraproject.org
10.5.126.53 admin.fedoraproject.org
#
# Here for historical reasons due to cert names.
#
10.5.125.75 sign-vault1
10.5.125.71 sign-bridge1
10.5.125.72 sign-bridge1
#
# Need to be able to talk to various kojis
#
@@ -1,430 +0,0 @@
# -*- test-case-name: openid.test.test_fetchers -*-
"""
This module contains the HTTP fetcher interface and several implementations.
"""

__all__ = ['fetch', 'getDefaultFetcher', 'setDefaultFetcher', 'HTTPResponse',
           'HTTPFetcher', 'createHTTPFetcher', 'HTTPFetchingError',
           'HTTPError']

import urllib2
import time
import cStringIO
import sys

import openid
import openid.urinorm

# Try to import httplib2 for caching support
# http://bitworking.org/projects/httplib2/
try:
    import httplib2
except ImportError:
    # httplib2 not available
    httplib2 = None

# try to import pycurl, which will let us use CurlHTTPFetcher
try:
    import pycurl
except ImportError:
    pycurl = None

USER_AGENT = "python-openid/%s (%s)" % (openid.__version__, sys.platform)
MAX_RESPONSE_KB = 1024

def fetch(url, body=None, headers=None):
    """Invoke the fetch method on the default fetcher. Most users
    should need only this method.

    @raises Exception: any exceptions that may be raised by the default fetcher
    """
    fetcher = getDefaultFetcher()
    return fetcher.fetch(url, body, headers)

def createHTTPFetcher():
    """Create a default HTTP fetcher instance

    prefers Curl to urllib2."""
    if pycurl is None:
        fetcher = Urllib2Fetcher()
    else:
        fetcher = CurlHTTPFetcher()

    return fetcher

# Contains the currently set HTTP fetcher. If it is set to None, the
# library will call createHTTPFetcher() to set it. Do not access this
# variable outside of this module.
_default_fetcher = None

def getDefaultFetcher():
    """Return the default fetcher instance
    if no fetcher has been set, it will create a default fetcher.

    @return: the default fetcher
    @rtype: HTTPFetcher
    """
    global _default_fetcher

    if _default_fetcher is None:
        setDefaultFetcher(createHTTPFetcher())

    return _default_fetcher

def setDefaultFetcher(fetcher, wrap_exceptions=True):
    """Set the default fetcher

    @param fetcher: The fetcher to use as the default HTTP fetcher
    @type fetcher: HTTPFetcher

    @param wrap_exceptions: Whether to wrap exceptions thrown by the
        fetcher with HTTPFetchingError so that they may be caught
        more easily. By default, exceptions will be wrapped. In general,
        unwrapped fetchers are useful for debugging of fetching errors
        or if your fetcher raises well-known exceptions that you would
        like to catch.
    @type wrap_exceptions: bool
    """
    global _default_fetcher
    if fetcher is None or not wrap_exceptions:
        _default_fetcher = fetcher
    else:
        _default_fetcher = ExceptionWrappingFetcher(fetcher)

def usingCurl():
    """Whether the currently set HTTP fetcher is a Curl HTTP fetcher."""
    fetcher = getDefaultFetcher()
    if isinstance(fetcher, ExceptionWrappingFetcher):
        fetcher = fetcher.fetcher
    return isinstance(fetcher, CurlHTTPFetcher)

class HTTPResponse(object):
    """XXX document attributes"""
    headers = None
    status = None
    body = None
    final_url = None

    def __init__(self, final_url=None, status=None, headers=None, body=None):
        self.final_url = final_url
        self.status = status
        self.headers = headers
        self.body = body

    def __repr__(self):
        return "<%s status %s for %s>" % (self.__class__.__name__,
                                          self.status,
                                          self.final_url)

class HTTPFetcher(object):
    """
    This class is the interface for openid HTTP fetchers. This
    interface is only important if you need to write a new fetcher for
    some reason.
    """

    def fetch(self, url, body=None, headers=None):
        """
        This performs an HTTP POST or GET, following redirects along
        the way. If a body is specified, then the request will be a
        POST. Otherwise, it will be a GET.

        @param headers: HTTP headers to include with the request
        @type headers: {str:str}

        @return: An object representing the server's HTTP response. If
            there are network or protocol errors, an exception will be
            raised. HTTP error responses, like 404 or 500, do not
            cause exceptions.

        @rtype: L{HTTPResponse}

        @raise Exception: Different implementations will raise
            different errors based on the underlying HTTP library.
        """
        raise NotImplementedError

def _allowedURL(url):
    return url.startswith('http://') or url.startswith('https://')

class HTTPFetchingError(Exception):
    """Exception that is wrapped around all exceptions that are raised
    by the underlying fetcher when using the ExceptionWrappingFetcher

    @ivar why: The exception that caused this exception
    """
    def __init__(self, why=None):
        Exception.__init__(self, why)
        self.why = why

class ExceptionWrappingFetcher(HTTPFetcher):
    """Fetcher that wraps another fetcher, causing all exceptions

    @cvar uncaught_exceptions: Exceptions that should be exposed to the
        user if they are raised by the fetch call
    """

    uncaught_exceptions = (SystemExit, KeyboardInterrupt, MemoryError)

    def __init__(self, fetcher):
        self.fetcher = fetcher

    def fetch(self, *args, **kwargs):
        try:
            return self.fetcher.fetch(*args, **kwargs)
        except self.uncaught_exceptions:
            raise
        except:
            exc_cls, exc_inst = sys.exc_info()[:2]
            if exc_inst is None:
                # string exceptions
                exc_inst = exc_cls

            raise HTTPFetchingError(why=exc_inst)

class Urllib2Fetcher(HTTPFetcher):
    """An C{L{HTTPFetcher}} that uses urllib2.
    """

    # Parameterized for the benefit of testing frameworks, see
    # http://trac.openidenabled.com/trac/ticket/85
    urlopen = staticmethod(urllib2.urlopen)

    def fetch(self, url, body=None, headers=None):
        if not _allowedURL(url):
            raise ValueError('Bad URL scheme: %r' % (url,))

        if headers is None:
            headers = {}

        headers.setdefault(
            'User-Agent',
            "%s Python-urllib/%s" % (USER_AGENT, urllib2.__version__,))

        req = urllib2.Request(url, data=body, headers=headers)
        try:
            f = self.urlopen(req)
            try:
                return self._makeResponse(f)
            finally:
                f.close()
        except urllib2.HTTPError, why:
            try:
                return self._makeResponse(why)
            finally:
                why.close()

    def _makeResponse(self, urllib2_response):
        resp = HTTPResponse()
        resp.body = urllib2_response.read(MAX_RESPONSE_KB * 1024)
        resp.final_url = urllib2_response.geturl()
        resp.headers = dict(urllib2_response.info().items())

        if hasattr(urllib2_response, 'code'):
            resp.status = urllib2_response.code
        else:
            resp.status = 200

        return resp

class HTTPError(HTTPFetchingError):
    """
    This exception is raised by the C{L{CurlHTTPFetcher}} when it
    encounters an exceptional situation fetching a URL.
    """
    pass

# XXX: define what we mean by paranoid, and make sure it is.
class CurlHTTPFetcher(HTTPFetcher):
    """
    An C{L{HTTPFetcher}} that uses pycurl for fetching.
    See U{http://pycurl.sourceforge.net/}.
    """
    ALLOWED_TIME = 20  # seconds

    def __init__(self):
        HTTPFetcher.__init__(self)
        if pycurl is None:
            raise RuntimeError('Cannot find pycurl library')

    def _parseHeaders(self, header_file):
        header_file.seek(0)

        # Remove the status line from the beginning of the input
        unused_http_status_line = header_file.readline().lower()
        while unused_http_status_line.lower().startswith('http/1.1 1'):
            unused_http_status_line = header_file.readline()
            unused_http_status_line = header_file.readline()

        lines = [line.strip() for line in header_file]

        # and the blank line from the end
        empty_line = lines.pop()
        if empty_line:
            raise HTTPError("No blank line at end of headers: %r" % (empty_line,))

        headers = {}
        for line in lines:
            try:
                name, value = line.split(':', 1)
            except ValueError:
                raise HTTPError(
                    "Malformed HTTP header line in response: %r" % (line,))

            value = value.strip()

            # HTTP headers are case-insensitive
            name = name.lower()
            headers[name] = value

        return headers

    def _checkURL(self, url):
        # XXX: document that this can be overridden to match desired policy
        # XXX: make sure url is well-formed and routeable
        return _allowedURL(url)

    def fetch(self, url, body=None, headers=None):
        stop = int(time.time()) + self.ALLOWED_TIME
        off = self.ALLOWED_TIME

        if headers is None:
            headers = {}

        headers.setdefault('User-Agent',
                           "%s %s" % (USER_AGENT, pycurl.version,))

        header_list = []
        if headers is not None:
            for header_name, header_value in headers.iteritems():
                header_list.append('%s: %s' % (header_name, header_value))

        c = pycurl.Curl()
        try:
            c.setopt(pycurl.NOSIGNAL, 1)

            if header_list:
                c.setopt(pycurl.HTTPHEADER, header_list)

            # Presence of a body indicates that we should do a POST
            if body is not None:
                c.setopt(pycurl.POST, 1)
                c.setopt(pycurl.POSTFIELDS, body)

            while off > 0:
                if not self._checkURL(url):
                    raise HTTPError("Fetching URL not allowed: %r" % (url,))

                data = cStringIO.StringIO()
                def write_data(chunk):
                    if data.tell() > 1024*MAX_RESPONSE_KB:
                        return 0
                    else:
                        return data.write(chunk)

                response_header_data = cStringIO.StringIO()
                c.setopt(pycurl.WRITEFUNCTION, write_data)
                c.setopt(pycurl.HEADERFUNCTION, response_header_data.write)
                c.setopt(pycurl.TIMEOUT, off)
                c.setopt(pycurl.URL, openid.urinorm.urinorm(url))

                c.perform()

                response_headers = self._parseHeaders(response_header_data)
                code = c.getinfo(pycurl.RESPONSE_CODE)
                if code in [301, 302, 303, 307]:
                    url = response_headers.get('location')
                    if url is None:
                        raise HTTPError(
                            'Redirect (%s) returned without a location' % code)

                    # Redirects are always GETs
                    c.setopt(pycurl.POST, 0)

                    # There is no way to reset POSTFIELDS to empty and
                    # reuse the connection, but we only use it once.
                else:
                    resp = HTTPResponse()
                    resp.headers = response_headers
                    resp.status = code
                    resp.final_url = url
                    resp.body = data.getvalue()
                    return resp

                off = stop - int(time.time())

            raise HTTPError("Timed out fetching: %r" % (url,))
        finally:
            c.close()

class HTTPLib2Fetcher(HTTPFetcher):
    """A fetcher that uses C{httplib2} for performing HTTP
    requests. This implementation supports HTTP caching.

    @see: http://bitworking.org/projects/httplib2/
    """

    def __init__(self, cache=None):
        """@param cache: An object suitable for use as an C{httplib2}
            cache. If a string is passed, it is assumed to be a
            directory name.
        """
        if httplib2 is None:
            raise RuntimeError('Cannot find httplib2 library. '
                               'See http://bitworking.org/projects/httplib2/')

        super(HTTPLib2Fetcher, self).__init__()

        # An instance of the httplib2 object that performs HTTP requests
        self.httplib2 = httplib2.Http(cache)

        # We want httplib2 to raise exceptions for errors, just like
        # the other fetchers.
        self.httplib2.force_exception_to_status_code = False

    def fetch(self, url, body=None, headers=None):
        """Perform an HTTP request

        @raises Exception: Any exception that can be raised by httplib2

        @see: C{L{HTTPFetcher.fetch}}
        """
        if body:
            method = 'POST'
        else:
            method = 'GET'

        if headers is None:
            headers = {}

        # httplib2 doesn't check to make sure that the URL's scheme is
        # 'http' so we do it here.
        if not (url.startswith('http://') or url.startswith('https://')):
            raise ValueError('URL is not a HTTP URL: %r' % (url,))

        httplib2_response, content = self.httplib2.request(
            url, method, body=body, headers=headers)

        # Translate the httplib2 response to our HTTP response abstraction

        # When a 400 is returned, there is no "content-location"
        # header set. This seems like a bug to me. I can't think of a
        # case where we really care about the final URL when it is an
        # error response, but being careful about it can't hurt.
        try:
            final_url = httplib2_response['content-location']
        except KeyError:
            # We're assuming that no redirects occurred
            assert not httplib2_response.previous

            # And this should never happen for a successful response
            assert httplib2_response.status != 200
            final_url = url

        return HTTPResponse(
            body=content,
            final_url=final_url,
            headers=dict(httplib2_response.items()),
            status=httplib2_response.status,
        )
@@ -1,13 +0,0 @@
/var/log/httpd/*log {
    daily
    rotate 7
    missingok
    ifempty
    compress
    compresscmd /usr/bin/xz
    uncompresscmd /usr/bin/xz
    compressext .xz
    dateext
    sharedscripts
    copytruncate
}
@@ -40,19 +40,13 @@
-A OUTPUT -p tcp -m tcp -d 10.5.126.23 --dport 80 -j ACCEPT
-A OUTPUT -p tcp -m tcp -d 10.5.126.23 --dport 443 -j ACCEPT

# rsyslog out to log01
# rsyslog out to log02
-A OUTPUT -p tcp -m tcp -d 10.5.126.29 --dport 514 -j ACCEPT

# SSH
-A INPUT -p tcp -m tcp -s 10.5.0.0/16 --dport 22 -j ACCEPT
-A OUTPUT -p tcp -m tcp -d 10.5.0.0/16 --sport 22 -j ACCEPT

# for ansible accelerate mode - allow port 5099 from lockbox and its ips
-A INPUT -p tcp -m tcp --dport 5099 -s 192.168.1.58 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 5099 -s 10.5.126.23 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 5099 -s 10.5.127.51 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 5099 -s 209.132.181.6 -j ACCEPT

# git to pkgs
-A OUTPUT -m tcp -p tcp --dport 9418 -d 10.5.125.44 -j ACCEPT
-A OUTPUT -m udp -p udp --dport 9418 -d 10.5.125.44 -j ACCEPT
@@ -68,8 +62,6 @@
# kinda necessary
-A INPUT -m tcp -p tcp -s 10.5.88.36 -j ACCEPT
-A OUTPUT -m tcp -p tcp -d 10.5.88.36 -j ACCEPT
-A INPUT -m udp -p udp -s 10.5.88.36 -j ACCEPT
-A OUTPUT -m udp -p udp -d 10.5.88.36 -j ACCEPT

# ntp
-A OUTPUT -m udp -p udp --dport 123 -d 66.187.233.4 -j ACCEPT
@@ -42,10 +42,6 @@ COMMIT
-A INPUT -p tcp -m tcp -s 192.168.100.0/24 --dport 22 -j REJECT --reject-with tcp-reset
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT

# for fireball mode - allow port 5099 from lockbox and its ips
-A INPUT -p tcp -m tcp --dport 5099 -s 10.5.126.23 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 5099 -s 10.5.127.51 -j ACCEPT

# Allow all netapp traffic
-A INPUT -p udp -m udp -s 10.5.88.36 -j ACCEPT
-A INPUT -p tcp -m tcp -s 10.5.88.36 -j ACCEPT
@@ -25,6 +25,9 @@ ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC3eVd6Ccegp1r1mhm7tPnlGUcw0zsAbR2p9hrFZ7RK
#ricky
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDDAeAohiRJ2v/RO7R9GS93TF92Gc9ixK6HM7wlbMdlZ4yYAbeoEX8VpeNaSTfo/Nw3zazr9VpmpHg+H70K8ljQsPgRwcgpetRVpF55M5FYjqM5oM+N94HV3nSGcnWbSIho1R31DaDH2ptxVqgh2m5DG7Bc45w9Bd4wjfdQ8nBrGv93tuH7X/cee4g6GvexLm5nXhAngdEmiyxw5MHuJAvj+54l4wMXRWpeF6XlI2iamW42nLSfRMCFkGNiXvBm8zkfkeH2L7I2cNKXXoP/cPCd3G/teIsI9FDqYpZ6CS0zMkWhlTuh7rlCjc9+nJsLdDLgwhb75skiUOOfimGvCCxWeHuCsSL+KpCu4AgI9UAVgO6xblDlmbQXxlGopep29U/s00W/0qv3Zp8Ks4Za0xHdoIwHiaLM0OYymFaNDd3ZqFG0FN23ZjcGqUmFGhGfUQRDt72+e9HtXlBJ0mUaCX9+e4wFGTVciG1/5CKsLHCaLRf+knsWXrv2zcv9BoZ9SCAK32zCZw05wjcmr7jYDCTLmtC6kEBNaOeE9Qqi2oomo4ji8ybg+Qq+1BwOtJKExvmZaooBZud0qd24HmCU0/0ysw732jGcqexzxsCR0VArd+7LKexOD7KwMW0VUss6fdOWac9gwCLx9FaKYh8mVvcQjKhKGI3aO2sXRUWSbBJw8w== ricky@alpha.rzhou.org

#skvidal
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDjlnCEiFMrKpkiIBjs5IW1+RXDald3aKvTszj0hUw9Gl6w3vt3RAiqTD/XRKcNdP0+pVIK/I4KexKfZzemNZ8UYmZ+a9EK+Gj7OQbJv7TQDeR0zyJ8ZgFXaWoN+CnWXLO2mp9poysUR6CILjaDJt4GDxJaD+bebRu+zxUQSlgrjObhIUTSfwsEJu++zK+fy4+xSEMG7SANEJHd+zOAw6+isLnnbp8qY2fs3reKpc8XPkyJscLU4BQV2cGXwlPUhzPVv/itUUV/uWHeAqoz2i5XG4C0/BXk6D85qkGIyE08Nl3COxn6giivrdTIH6W4dUtBdYgTMZ3RgMHL9ClLpS17 skvidal@opus

#smooge
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAgEAxnzCHH11nDM1m7yvqo6Uanq5vcZjBcs/mr3LccxwJ59ENzSXwUgEQy/P8vby9VKMwsskoaqZcvJdOSZBFhNV970NTPb69OIXPQAl/xhaLwiJOn606fB+/S8WepeuntS0qLiebbEiA9vIQLteZ+bWl1s/didD/sFo3/wItoTGA4GuShUu1AyWJx5Ue7Y34rwGR+kIvDoy2GHUcunn2PjGt4r3v2vpiR8GuK0JRupJAGYbYCiMBDRMkR0cgEyHW6+QQNqMlA6nRJjp94PcUMKaZK6Tc+6h5v8kLLtzuZ6ZupwMMC4X8sh85YcxqoW9DynrvO28pzaMNBHm7qr9LeY9PIhXscSa35GAcGZ7UwPK4aJAAuIzCf8BzazyvUM3Ye7GPCXHxUwY0kdXk+MHMVKFzZDChNp/ovgdhxNrw9Xzcs4yw7XYambN9Bk567cI6/tWcPuYLYD4ZJQP0qSXVzVgFEPss1lDcgd0k4if+pINyxM8eVFZVAqU+BMeDC+6W8HUUPgv6LiyTWs+xTXTuORwBTSF1pOqWB4LjqsCGIiMAc6n/xdALBGUN7qsuKDU6Q7bwPppaxypi4KCvuJsqW+8sDtMUaZ34I5Zo1q7cu03wqnOljUGoAY6IDn3J66F2KlPPyb/q3PDV3WbY/jnH16L29/xUA73nFUW1p+WXutwmSU= ssmoogen@ponyo.int.smoogespace.com
@@ -31,7 +31,7 @@ class="jenkins.model.ProjectNamingStrategy$DefaultProjectNamingStrategy"/>
  <clouds/>
  <slaves>
    <slave>
      <name>Fedora19</name>
      <name>Fedora18</name>
      <description></description>
      <remoteFS>/mnt/jenkins/</remoteFS>
      <numExecutors>2</numExecutors>
@@ -62,38 +62,6 @@ class="jenkins.model.ProjectNamingStrategy$DefaultProjectNamingStrategy"/>
      <label></label>
      <nodeProperties/>
    </slave>
    <slave>
      <name>Fedora20</name>
      <description></description>
      <remoteFS>/mnt/jenkins/</remoteFS>
      <numExecutors>2</numExecutors>
      <mode>NORMAL</mode>
      <retentionStrategy class="hudson.slaves.RetentionStrategy$Always"/>
      <launcher class="hudson.plugins.sshslaves.SSHLauncher"
                plugin="ssh-slaves@0.21">
        <host>172.16.5.23</host>
        <port>22</port>
        <credentialsId>950d5dd7-acb2-402a-8670-21f152d04928</credentialsId>
      </launcher>
      <label></label>
      <nodeProperties/>
    </slave>
    <slave>
      <name>EL7-beta</name>
      <description></description>
      <remoteFS>/mnt/jenkins/</remoteFS>
      <numExecutors>2</numExecutors>
      <mode>NORMAL</mode>
      <retentionStrategy class="hudson.slaves.RetentionStrategy$Always"/>
      <launcher class="hudson.plugins.sshslaves.SSHLauncher"
                plugin="ssh-slaves@0.21">
        <host>172.16.5.14</host>
        <port>22</port>
        <credentialsId>950d5dd7-acb2-402a-8670-21f152d04928</credentialsId>
      </launcher>
      <label></label>
      <nodeProperties/>
    </slave>
  </slaves>
  <quietPeriod>5</quietPeriod>
  <scmCheckoutRetryCount>0</scmCheckoutRetryCount>
Some files were not shown because too many files have changed in this diff.