mirror of https://pagure.io/fedora-infra/ansible.git
synced 2026-02-04 13:43:50 +08:00

Compare commits: letsencryp ... denyhosts (4 commits)

Commits in this comparison:
  d8f01f8b08
  f458aec69e
  755e5e81ae
  c6cbf75e92
.gitignore (vendored, 1 line changed)
@@ -1,2 +1 @@
*.swp
*.pyc
CONVENTIONS (112 lines changed)
@@ -1,112 +0,0 @@
This file describes some conventions we are going to try to use
to keep things organized and everyone on the same page.

If you find you need to diverge from this document for something,
please discuss it on the infrastructure list and see if we can
adjust this document for that use case.

Playbook naming
===============
The top level playbooks directory should contain:

* Playbooks that are generic and used by several groups/hosts playbooks
* Playbooks used for utility purposes from the command line
* Groups and Hosts subdirs.

Generic playbooks are included in other playbooks and perform
basic setup that is used by other groups/hosts.
Examples: cloud setup, collectd, webserver, iptables, etc.

Utility playbooks are run by sysadmins from the command line to perform some
specific function. Examples: host update, vhost update, vhost reboot.

The playbooks/groups/ directory should contain one playbook per
group. This should be used in the case of multiple machines/instances
in a group. It MUST include a hosts entry that describes the hosts in the group.
Examples: packages, proxy, unbound, virthost, etc.
Try to be descriptive with the name here.

The playbooks/hosts/ directory should contain one playbook per 'host',
for when a role is handled by only one host. Hosts playbooks
MUST be named FQDN.yml and MUST contain a hosts: entry with the host or IP.
Examples: persistent cloud images, special hosts.

Where possible, groups should be used. Hosts playbooks should only
be used in specific cases where a generic group playbook would not work.

Both groups and hosts playbooks should always include:
vars_files:
- /srv/web/infra/ansible/vars/global.yml
- "{{ private }}/vars.yml"
- /srv/web/infra/ansible/vars/{{ ansible_distribution }}.yml

Play naming
===========
Plays in playbooks should have a short, readable description of what the play
is doing. This will be displayed to the user and/or mailed out, so think
about what you would like to see if the play you are writing failed:
something descriptive enough to help the reader fix it.

Inventory
=========
The inventory file should add all hosts to one (or more) groups.

When there are staging hosts for a role/service, they should be in the
main group for that role as well as a staging group for the role.
FIXME: will depend on how we do staging. (see below)

Tags
====
Tags allow you to run just a subset of plays with specific tag(s).

We have some standard tags we should use on all plays:

packages - this play installs or removes packages.

config - this play installs config files.

check - we could use this tag for 'is everything running that should be'
type tasks.

FIXME: others?
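For example, to apply only the plays/tasks tagged 'config' from a group
playbook (a minimal sketch; the playbook path is illustrative, -t/--tags is
standard ansible-playbook usage):

    sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/groups/proxy.yml -t config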

Production vs Staging vs Development
====================================
In the default state, we should strive to have production and staging using
the exact same playbooks. Development can also do so, or just be a more
minimal free form for the developer.

When needing to make changes to test in staging, the following process should
be used:

FIXME... :)

Requirements:

1. shouldn't touch the prod playbook by default
2. should be easy to merge changes back to prod
3. should not require people to remember to do a bunch of steps.
4. should be easy to see exactly what changes are pending only in stg.

Cron job/automatic execution
============================

We would like to get ansible running over hosts in an automated way.
A git hook could do this.

* On commit:
If we have a way to determine exactly what hosts are affected by a
change, we could simply run only on those hosts.

We might want a short delay (10m) to allow someone to see a problem
or others to note one from the commit.

* Once a day: (more often? less often?)

We may want to re-run on all hosts once a day and yell loudly
if anything changed.

FIXME: perhaps we want a tag of items to run at this time?
FIXME: alternately we could have a util playbook that runs a
bunch of checks for us?
README (252 lines changed)
@@ -1,41 +1,16 @@
|
||||
== ansible repository/structure ==
|
||||
ansible repository/structure
|
||||
|
||||
files - files and templates for use in playbooks/tasks
|
||||
- subdirs for specific tasks/dirs highly recommended
|
||||
|
||||
inventory - where the inventory and additional vars are stored
- All files in this directory are in ini format
- added together for the total inventory
|
||||
group_vars:
|
||||
- per group variables set here in a file per group
|
||||
host_vars:
|
||||
- per host variables set here in a file per host
|
||||
|
||||
library - library of custom local ansible modules
|
||||
|
||||
playbooks - collections of plays we want to run on systems
|
||||
|
||||
groups: groups of hosts configured from one playbook.
|
||||
|
||||
hosts: playbooks for single hosts.
|
||||
|
||||
manual: playbooks that are only run manually by an admin as needed.
|
||||
|
||||
tasks - snippets of tasks that should be included in plays
|
||||
|
||||
roles - specific roles to be used in playbooks.
Each role has its own files/templates/vars
|
||||
|
||||
filter_plugins - Jinja filters
|
||||
|
||||
master.yml - This is the master playbook, consisting of all
current group and host playbooks. Note that the
daily cron doesn't run this; it runs even over
playbooks that are not yet included in master.
This playbook is useful for making changes over
multiple groups/hosts, usually with -t (tag).
|
||||
|
||||
== Paths ==
|
||||
|
||||
public path for everything is:
|
||||
|
||||
@@ -45,26 +20,223 @@ private path - which is sysadmin-main accessible only is:
|
||||
|
||||
/srv/private/ansible
|
||||
|
||||
|
||||
In general to run any ansible playbook you will want to run:
|
||||
|
||||
sudo -i ansible-playbook /path/to/playbook.yml
|
||||
|
||||
== Scheduled check-diff ==
|
||||
|
||||
Every night a cron job runs over all playbooks under playbooks/groups/ and
playbooks/hosts/ with the ansible --check --diff options. A report from this
is sent to sysadmin-logs. In the ideal state this report would be empty.
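Per playbook, the nightly run is roughly equivalent to (a sketch; the cron
wrapper and report delivery are not shown here, and somegroup.yml is a
placeholder):

    sudo -i ansible-playbook --check --diff /srv/web/infra/ansible/playbooks/groups/somegroup.yml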
|
||||
|
||||
== Idempotency ==
|
||||
cloud instances:
|
||||
to startup a new cloud instance and configure for basic server use run (as
|
||||
root):
|
||||
|
||||
All playbooks should be idempotent. That is, if run once they should bring the
machine(s) to the desired state, and if run again N times after that they should
make 0 changes (because the machine(s) are already in the desired state).
Please make sure your playbooks are idempotent.
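A quick way to check this is to run the same playbook twice in a row; the
second run should report no changes (a sketch, using the generic invocation
from above):

    sudo -i ansible-playbook /path/to/playbook.yml
    sudo -i ansible-playbook /path/to/playbook.yml   # expect changed=0 in the recap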
|
||||
el6:
|
||||
sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/el6_temp_instance.yml
|
||||
|
||||
== Can be run anytime ==
|
||||
f19:
|
||||
sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/f19_temp_instance.yml
|
||||
|
||||
|
||||
The -i is important - ansible's tools need access to root's ssh agent as well
as the cloud credentials to run the above playbooks successfully.

This will set up a new instance, provision it and email sysadmin-main that
the instance was created, its instance id (for terminating it, attaching
volumes, etc) and its ip address.
|
||||
|
||||
You will then be able to login, as root.
|
||||
|
||||
You can add various extra vars to the above commands to change the instance
|
||||
you've just spun up.
|
||||
|
||||
variables to define:
|
||||
instance_type=c1.medium
|
||||
security_group=default
|
||||
root_auth_users='username1 username2 @groupname'
|
||||
hostbase=basename for hostname - will have instance id appended to it
|
||||
|
||||
|
||||
define these with:
|
||||
|
||||
--extra-vars="varname=value varname1=value varname2=value"
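For example, to override the instance size and root access for the el6 temp
instance playbook mentioned above (a sketch; the values are illustrative):

    sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/el6_temp_instance.yml \
      --extra-vars="instance_type=c1.medium root_auth_users='username1 @groupname' hostbase=mytest-"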
|
||||
|
||||
|
||||
|
||||
|
||||
Name        Memory_MB   Disk   VCPUs
m1.tiny     512         0      1
m1.small    2048        20     1
m1.medium   4096        40     2
m1.large    8192        80     4
m1.xlarge   16384       160    8
m1.builder  5120        50     3
|
||||
|
||||
Setting up a new persistent cloud host:
|
||||
1. select an ip:
|
||||
source /srv/private/ansible/files/openstack/persistent-admin/ec2rc.sh
|
||||
euca-describe-addresses
|
||||
- pick an ip from the list that is not assigned anywhere
- add it into dns - normally in the cloud.fedoraproject.org domain, but it
  doesn't have to be
|
||||
|
||||
2. If needed create a persistent storage disk for the instance:
|
||||
source /srv/private/ansible/files/openstack/persistent-admin/ec2rc.sh
|
||||
euca-create-volume -z nova -s <size in gigabytes>
|
||||
|
||||
|
||||
3. set up the host/ip in ansible host inventory
|
||||
- add to ansible/inventory/inventory under [persistent-cloud]
|
||||
- either the ip itself or the hostname you want to refer to it as
|
||||
|
||||
4. setup the host_vars
|
||||
- create file named by the hostname or ip you used in the inventory
|
||||
- for adding persistent volumes add an entry like this into the host_vars file
|
||||
|
||||
volumes: ['-d /dev/vdb vol-BCA33FCD', '-d /dev/vdc vol-DC833F48']
|
||||
|
||||
for each volume you want to attach to the instance.
|
||||
|
||||
The device names matter - they start at /dev/vdb and increment. However,
they are not reliable IN the instance. You should find the device, partition
it, format it, and label the formatted device, then mount the device by label
or by UUID. Do not count on the device name being the same each time.
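For example, inside the instance you might do something like this after the
volume is attached (a sketch; the device, label and mount point are
illustrative):

    fdisk /dev/vdb                                        # create a partition, e.g. /dev/vdb1
    mkfs.ext4 -L srv_data /dev/vdb1                       # format it and give it a label
    mount LABEL=srv_data /srv                             # mount by label, not by device name
    echo 'LABEL=srv_data /srv ext4 defaults 0 0' >> /etc/fstab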
|
||||
|
||||
|
||||
Contents should look like this (remove all the comments)
|
||||
|
||||
---
|
||||
# 2cpus, 3GB of ram 20GB of ephemeral space
|
||||
instance_type: m1.large
|
||||
# image id
|
||||
image: emi-B8793915
|
||||
keypair: fedora-admin
|
||||
# what security group to add the host to
|
||||
security_group: webserver
|
||||
zone: fedoracloud
|
||||
# instance id will be appended
|
||||
hostbase: hostname_base-
|
||||
# ip should be in the 209.132.184.XXX range
|
||||
public_ip: $ip_you_selected
|
||||
# users/groups who should have root ssh access
|
||||
root_auth_users: skvidal bkabrda
|
||||
description: some description so someone else can know what this is
|
||||
|
||||
The available images can be found by running::
|
||||
source /srv/private/ansible/files/openstack/persistent-admin/ec2rc.sh
|
||||
euca-describe-images | grep emi
|
||||
|
||||
5. setup a host playbook ansible/playbooks/hosts/$YOUR_HOSTNAME_HERE.yml
Note: the name of this file doesn't really matter but it should normally
be the hostname of the host you're setting up.
|
||||
|
||||
- name: check/create instance
|
||||
hosts: $YOUR_HOSTNAME/IP HERE
|
||||
user: root
|
||||
gather_facts: False
|
||||
|
||||
vars_files:
|
||||
- /srv/web/infra/ansible/vars/global.yml
|
||||
- ${private}/vars.yml
|
||||
|
||||
tasks:
|
||||
- include: $tasks/persistent_cloud.yml
|
||||
|
||||
- name: provision instance
|
||||
hosts: $YOUR_HOSTNAME/IP HERE
|
||||
user: root
|
||||
gather_facts: True
|
||||
|
||||
vars_files:
|
||||
- /srv/web/infra/ansible/vars/global.yml
|
||||
- ${private}/vars.yml
|
||||
- ${vars}/${ansible_distribution}.yml
|
||||
|
||||
tasks:
|
||||
- include: $tasks/cloud_setup_basic.yml
|
||||
# fill in other actions/includes/etc here
|
||||
|
||||
handlers:
|
||||
- include: $handlers/restart_services.yml
|
||||
|
||||
|
||||
6. add/commit the above to the git repo and push your changes
|
||||
|
||||
|
||||
7. set it up:
sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/hosts/$YOUR_HOSTNAME_HERE.yml
|
||||
|
||||
8. login, etc
|
||||
|
||||
You should be able to run that playbook over and over again safely; it will
only set up/create a new instance if the IP is not up/responding.
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
SECURITY GROUPS
|
||||
- to edit security groups you must either have your own cloud account or
|
||||
be a member of sysadmin-main
|
||||
|
||||
This gives you the credential to change things in the persistent tenant
|
||||
- source /srv/private/ansible/files/openstack/persistent-admin/ec2rc.sh
|
||||
|
||||
|
||||
This lists all security groups in that tenant:
|
||||
- euca-describe-groups | grep GROUP
|
||||
|
||||
the output will look like this:
|
||||
euca-describe-groups | grep GROU
|
||||
GROUP d4e664a10e2c4210839150be09c46e5e default default
|
||||
GROUP d4e664a10e2c4210839150be09c46e5e jenkins jenkins instance group
|
||||
GROUP d4e664a10e2c4210839150be09c46e5e logstash logstash security group
|
||||
GROUP d4e664a10e2c4210839150be09c46e5e smtpserver list server group. needs web and smtp
|
||||
GROUP d4e664a10e2c4210839150be09c46e5e webserver webserver security group
|
||||
GROUP d4e664a10e2c4210839150be09c46e5e wideopen wideopen
|
||||
|
||||
|
||||
This lets you list the rules in a specific group:
|
||||
- euca-describe-group groupname
|
||||
|
||||
the output will look like this:
|
||||
|
||||
euca-describe-group wideopen
|
||||
GROUP d4e664a10e2c4210839150be09c46e5e wideopen wideopen
|
||||
PERMISSION d4e664a10e2c4210839150be09c46e5e wideopen ALLOWS tcp 1 65535 FROM CIDR 0.0.0.0/0
|
||||
PERMISSION d4e664a10e2c4210839150be09c46e5e wideopen ALLOWS icmp -1 -1 FROM CIDR 0.0.0.0/0
|
||||
|
||||
|
||||
To create a new group:
|
||||
euca-create-group -d "group description here" groupname
|
||||
|
||||
To add a rule to a group:
|
||||
euca-authorize -P tcp -p 22 groupname
|
||||
|
||||
To delete a rule from a group:
|
||||
euca-revoke -P tcp -p 22 groupname
|
||||
|
||||
Notes:
|
||||
- Be careful removing or adding rules to existing groups b/c you could be
|
||||
impacting other instances using that security group.
|
||||
|
||||
- You will almost always want to allow 22/tcp (sshd) and icmp -1 -1 (ping
|
||||
and traceroute and friends).
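Using the commands above, that usually means adding something like the
following to the group you are editing (groupname is whatever group you are
working on):

    euca-authorize -P tcp -p 22 groupname
    euca-authorize -P icmp -t -1:-1 groupname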
|
||||
|
||||
|
||||
|
||||
|
||||
TERMINATING INSTANCES
|
||||
|
||||
For transient:
|
||||
1. source /srv/private/ansible/files/openstack/transient-admin/ec2rc.sh
|
||||
|
||||
- OR -
|
||||
|
||||
For persistent:
|
||||
1. source /srv/private/ansible/files/openstack/persistent-admin/ec2rc.sh
|
||||
|
||||
2. euca-describe-instances | grep <ip of your instance>
|
||||
|
||||
3. euca-terminate-instances <the id, something like i-00000295>
|
||||
|
||||
When a playbook or change is checked into ansible you should assume
|
||||
that it could be run at ANY TIME. Always make sure the checked in state
|
||||
is the desired state. Always test changes when they land so they don't
|
||||
surprise you later.
|
||||
|
||||
README.cloud (186 lines changed)
@@ -1,186 +0,0 @@
|
||||
== Cloud information ==
|
||||
|
||||
The dashboard for the production cloud instance is:
|
||||
https://fedorainfracloud.org/dashboard/
|
||||
|
||||
You can download credentials via the dashboard (under security and access)
|
||||
|
||||
=== Transient instances ===
|
||||
|
||||
Transient instances are short term use instances for Fedora
|
||||
contributors. They can be terminated at any time and shouldn't be
|
||||
relied on for any production use. If you have an application
|
||||
or longer term item that should always be around
|
||||
please create a persistent playbook instead. (see below)
|
||||
|
||||
to startup a new transient cloud instance and configure for basic
|
||||
server use run (as root):
|
||||
|
||||
sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/transient_cloud_instance.yml -e 'name=somename'
|
||||
|
||||
The -i is important - ansible's tools need access to root's ssh agent as well
as the cloud credentials to run the above playbooks successfully.

This will set up a new instance, provision it and email sysadmin-main that
the instance was created and its ip address.
|
||||
|
||||
You will then be able to login, as root if you are in the sysadmin-main group.
|
||||
(If you are making the instance for another user, see below)
|
||||
|
||||
You MUST pass a name to it, ie: -e 'name=somethingdescriptive'
|
||||
You can optionally override defaults by passing any of the following:
|
||||
image=imagename (default is centos70_x86_64)
|
||||
instance_type=some instance type (default is m1.small)
|
||||
root_auth_users='user1 user2 user3 @group1' (default always includes sysadmin-main group)
|
||||
|
||||
Note: if you run this playbook with the same name= multiple times
|
||||
openstack is smart enough to just return the current ip of that instance
|
||||
and go on. This way you can re-run if you want to reconfigure it without
|
||||
reprovisioning it.
|
||||
|
||||
|
||||
Sizes options
|
||||
-------------
|
||||
|
||||
Name        Memory_MB   Disk   VCPUs
m1.tiny     512         0      1
m1.small    2048        20     1
m1.medium   4096        40     2
m1.large    8192        80     4
m1.xlarge   16384       160    8
m1.builder  5120        50     3
|
||||
|
||||
|
||||
=== Persistent cloud instances ===
|
||||
|
||||
Persistent cloud instances are ones that we want to always have up and
|
||||
configured. These are things like dev instances for various applications,
|
||||
proof of concept servers for evaluating something, etc. They will be
|
||||
reprovisioned after a reboot/maint window for the cloud.
|
||||
|
||||
Setting up a new persistent cloud host:
|
||||
|
||||
1) Select an available floating IP
|
||||
|
||||
source /srv/private/ansible/files/openstack/novarc
|
||||
nova floating-ip-list
|
||||
|
||||
Note that an "available floating IP" is one that has only a "-" in the Fixed IP
|
||||
column of the above `nova` command. Ignore the fact that the "Server Id" column
|
||||
is completely blank for all instances. If there are no IPs with "-", use:
|
||||
|
||||
nova floating-ip-create
|
||||
|
||||
and retry the list.
|
||||
|
||||
2) Add that IP addr to dns (typically as foo.fedorainfracloud.org)
|
||||
|
||||
3) Create persistent storage disk for the instance (if necessary.. you might not
|
||||
need this).
|
||||
|
||||
nova volume-create --display-name SOME_NAME SIZE_IN_GB
|
||||
|
||||
4) Add to ansible inventory in the persistent-cloud group.
|
||||
You should use the FQDN for this and not the IP. Names are good.
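The inventory entry would look something like this (the hostname is
illustrative, following the DNS naming from step 2):

    [persistent-cloud]
    foo.fedorainfracloud.org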
|
||||
|
||||
5) setup the host_vars file. It should look something like this:
|
||||
|
||||
instance_type: m1.medium
|
||||
image:
|
||||
keypair: fedora-admin-20130801
|
||||
security_group: default # NOTE: security_group MUST contain default.
|
||||
zone: nova
|
||||
tcp_ports: [22, 80, 443]
|
||||
|
||||
inventory_tenant: persistent
|
||||
inventory_instance_name: taiga
|
||||
hostbase: taiga
|
||||
public_ip: 209.132.184.50
|
||||
root_auth_users: ralph maxamillion
|
||||
description: taiga frontend server
|
||||
|
||||
volumes:
|
||||
- volume_id: VOLUME_UUID_GOES_HERE
|
||||
device: /dev/vdc
|
||||
|
||||
cloud_networks:
|
||||
# persistent-net
|
||||
- net-id: "67b77354-39a4-43de-b007-bb813ac5c35f"
|
||||
|
||||
6) setup the host playbook
|
||||
|
||||
7) run the playbook:
|
||||
sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/hosts/$YOUR_HOSTNAME_HERE.yml
|
||||
|
||||
You should be able to run that playbook over and over again safely; it will
only set up/create a new instance if the IP is not up/responding.
|
||||
|
||||
=== SECURITY GROUPS ===
|
||||
|
||||
FIXME: needs work for new cloud.
|
||||
|
||||
- to edit security groups you must either have your own cloud account or
|
||||
be a member of sysadmin-main
|
||||
|
||||
This gives you the credential to change things in the persistent tenant
|
||||
- source /srv/private/ansible/files/openstack/persistent-admin/ec2rc.sh
|
||||
|
||||
This lists all security groups in that tenant:
|
||||
- euca-describe-groups | grep GROUP
|
||||
|
||||
the output will look like this:
|
||||
euca-describe-groups | grep GROU
|
||||
GROUP d4e664a10e2c4210839150be09c46e5e default default
|
||||
GROUP d4e664a10e2c4210839150be09c46e5e logstash logstash security group
|
||||
GROUP d4e664a10e2c4210839150be09c46e5e smtpserver list server group. needs web and smtp
|
||||
GROUP d4e664a10e2c4210839150be09c46e5e webserver webserver security group
|
||||
GROUP d4e664a10e2c4210839150be09c46e5e wideopen wideopen
|
||||
|
||||
|
||||
This lets you list the rules in a specific group:
|
||||
- euca-describe-group groupname
|
||||
|
||||
the output will look like this:
|
||||
|
||||
euca-describe-group wideopen
|
||||
GROUP d4e664a10e2c4210839150be09c46e5e wideopen wideopen
|
||||
PERMISSION d4e664a10e2c4210839150be09c46e5e wideopen ALLOWS tcp 1 65535 FROM CIDR 0.0.0.0/0
|
||||
PERMISSION d4e664a10e2c4210839150be09c46e5e wideopen ALLOWS icmp -1 -1 FROM CIDR 0.0.0.0/0
|
||||
|
||||
|
||||
To create a new group:
|
||||
euca-create-group -d "group description here" groupname
|
||||
|
||||
To add a rule to a group:
|
||||
euca-authorize -P tcp -p 22 groupname
|
||||
euca-authorize -P icmp -t -1:-1 groupname
|
||||
|
||||
To delete a rule from a group:
|
||||
euca-revoke -P tcp -p 22 groupname
|
||||
|
||||
Notes:
|
||||
- Be careful removing or adding rules to existing groups b/c you could be
|
||||
impacting other instances using that security group.
|
||||
|
||||
- You will almost always want to allow 22/tcp (sshd) and icmp -1 -1 (ping
|
||||
and traceroute and friends).
|
||||
|
||||
=== TERMINATING INSTANCES ===
|
||||
|
||||
For transient:
|
||||
1. source /srv/private/ansible/files/openstack/novarc
|
||||
|
||||
2. export OS_TENANT_NAME=transient
|
||||
|
||||
2. nova list | grep <ip of your instance or name of your instance>
|
||||
|
||||
3. nova delete <name of instance or ID of instance>
|
||||
|
||||
- OR -
|
||||
|
||||
For persistent:
|
||||
1. source /srv/private/ansible/files/openstack/novarc
|
||||
|
||||
2. nova list | grep <ip of your instance or name of your instance>
|
||||
|
||||
3. nova delete <name of instance or ID of instance>
|
||||
@@ -1,98 +0,0 @@
|
||||
# (C) 2012, Michael DeHaan, <michael.dehaan@gmail.com>
|
||||
# based on the log_plays example
|
||||
# skvidal@fedoraproject.org
|
||||
# rbean@redhat.com
|
||||
|
||||
# Ansible is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# Ansible is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
import os
|
||||
import pwd
|
||||
|
||||
import fedmsg
|
||||
import fedmsg.config
|
||||
|
||||
try:
|
||||
from ansible.plugins.callback import CallbackBase
|
||||
except ImportError:
|
||||
# Ansible v1 compat
|
||||
CallbackBase = object
|
||||
|
||||
def getlogin():
|
||||
try:
|
||||
user = os.getlogin()
|
||||
except OSError, e:
|
||||
user = pwd.getpwuid(os.geteuid())[0]
|
||||
return user
|
||||
|
||||
|
||||
class CallbackModule(CallbackBase):
|
||||
""" Publish playbook starts and stops to fedmsg. """
|
||||
|
||||
playbook_path = None
|
||||
|
||||
def __init__(self):
|
||||
config = fedmsg.config.load_config()
|
||||
config.update(dict(
|
||||
name='relay_inbound',
|
||||
cert_prefix='shell',
|
||||
active=True,
|
||||
))
|
||||
# It seems like recursive playbooks call this over and over again and
|
||||
# fedmsg doesn't like to be initialized more than once. So, here, just
|
||||
# catch that and ignore it.
|
||||
try:
|
||||
fedmsg.init(**config)
|
||||
except ValueError:
|
||||
pass
|
||||
|
||||
|
||||
def playbook_on_play_start(self, pattern):
|
||||
# This gets called once for each play.. but we just issue a message once
|
||||
# for the first one. One per "playbook"
|
||||
play = getattr(self, 'play', None)
|
||||
if play:
|
||||
# figure out where the playbook FILE is
|
||||
path = os.path.abspath(play.playbook.filename)
|
||||
|
||||
# Bail out early without publishing if we're in --check mode
|
||||
if play.playbook.check:
|
||||
return
|
||||
|
||||
if not self.playbook_path:
|
||||
fedmsg.publish(
|
||||
modname="ansible", topic="playbook.start",
|
||||
msg=dict(
|
||||
playbook=path,
|
||||
userid=getlogin(),
|
||||
extra_vars=play.playbook.extra_vars,
|
||||
inventory=play.playbook.inventory.host_list,
|
||||
playbook_checksum=play.playbook.check,
|
||||
check=play.playbook.check,
|
||||
),
|
||||
)
|
||||
self.playbook_path = path
|
||||
|
||||
def playbook_on_stats(self, stats):
|
||||
if not self.playbook_path:
|
||||
return
|
||||
|
||||
results = dict([(h, stats.summarize(h)) for h in stats.processed])
|
||||
fedmsg.publish(
|
||||
modname="ansible", topic="playbook.complete",
|
||||
msg=dict(
|
||||
playbook=self.playbook_path,
|
||||
userid=getlogin(),
|
||||
results=results,
|
||||
),
|
||||
)
|
||||
@@ -1,116 +0,0 @@
|
||||
# (C) 2012, Michael DeHaan, <michael.dehaan@gmail.com>
|
||||
# based on the log_plays example
|
||||
# skvidal@fedoraproject.org
|
||||
# rbean@redhat.com
|
||||
|
||||
# Ansible is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# Ansible is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
import os
|
||||
import pwd
|
||||
|
||||
import fedmsg
|
||||
import fedmsg.config
|
||||
|
||||
try:
|
||||
from ansible.plugins.callback import CallbackBase
|
||||
except ImportError:
|
||||
# Ansible v1 compat
|
||||
CallbackBase = object
|
||||
|
||||
try:
|
||||
from ansible.utils.hashing import secure_hash
|
||||
except ImportError:
|
||||
from ansible.utils import md5 as secure_hash
|
||||
|
||||
def getlogin():
|
||||
try:
|
||||
user = os.getlogin()
|
||||
except OSError, e:
|
||||
user = pwd.getpwuid(os.geteuid())[0]
|
||||
return user
|
||||
|
||||
|
||||
class CallbackModule(CallbackBase):
|
||||
""" Publish playbook starts and stops to fedmsg. """
|
||||
|
||||
CALLBACK_NAME = 'fedmsg_callback2'
|
||||
CALLBACK_TYPE = 'notification'
|
||||
CALLBACK_VERSION = 2.0
|
||||
CALLBACK_NEEDS_WHITELIST = True
|
||||
|
||||
playbook_path = None
|
||||
|
||||
def __init__(self):
|
||||
config = fedmsg.config.load_config()
|
||||
config.update(dict(
|
||||
name='relay_inbound',
|
||||
cert_prefix='shell',
|
||||
active=True,
|
||||
))
|
||||
# It seems like recursive playbooks call this over and over again and
|
||||
# fedmsg doesn't like to be initialized more than once. So, here, just
|
||||
# catch that and ignore it.
|
||||
try:
|
||||
fedmsg.init(**config)
|
||||
except ValueError:
|
||||
pass
|
||||
self.play = None
|
||||
self.playbook = None
|
||||
|
||||
super(CallbackModule, self).__init__()
|
||||
|
||||
def set_play_context(self, play_context):
|
||||
self.play_context = play_context
|
||||
|
||||
def v2_playbook_on_start(self, playbook):
|
||||
self.playbook = playbook
|
||||
|
||||
def v2_playbook_on_play_start(self, play):
|
||||
# This gets called once for each play.. but we just issue a message once
|
||||
# for the first one. One per "playbook"
|
||||
if self.playbook:
|
||||
# figure out where the playbook FILE is
|
||||
path = os.path.abspath(self.playbook._file_name)
|
||||
|
||||
# Bail out early without publishing if we're in --check mode
|
||||
if self.play_context.check_mode:
|
||||
return
|
||||
|
||||
if not self.playbook_path:
|
||||
fedmsg.publish(
|
||||
modname="ansible", topic="playbook.start",
|
||||
msg=dict(
|
||||
playbook=path,
|
||||
userid=getlogin(),
|
||||
extra_vars=play._variable_manager.extra_vars,
|
||||
inventory=play._variable_manager._inventory._sources,
|
||||
playbook_checksum=secure_hash(path),
|
||||
check=self.play_context.check_mode,
|
||||
),
|
||||
)
|
||||
self.playbook_path = path
|
||||
|
||||
def v2_playbook_on_stats(self, stats):
|
||||
if not self.playbook_path:
|
||||
return
|
||||
|
||||
results = dict([(h, stats.summarize(h)) for h in stats.processed])
|
||||
fedmsg.publish(
|
||||
modname="ansible", topic="playbook.complete",
|
||||
msg=dict(
|
||||
playbook=self.playbook_path,
|
||||
userid=getlogin(),
|
||||
results=results,
|
||||
),
|
||||
)
|
||||
@@ -15,20 +15,12 @@
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
from __future__ import absolute_import
|
||||
|
||||
import os
|
||||
import time
|
||||
import json
|
||||
import pwd
|
||||
from ansible import utils
|
||||
|
||||
try:
|
||||
from ansible.plugins.callback import CallbackBase
|
||||
except ImportError:
|
||||
# Ansible v1 compat
|
||||
CallbackBase = object
|
||||
|
||||
TIME_FORMAT="%b %d %Y %H:%M:%S"
|
||||
|
||||
MSG_FORMAT="%(now)s\t%(count)s\t%(category)s\t%(name)s\t%(data)s\n"
|
||||
@@ -58,24 +50,24 @@ class LogMech(object):
|
||||
raise
|
||||
|
||||
# checksum of full playbook?
|
||||
|
||||
|
||||
@property
|
||||
def playbook_id(self):
|
||||
if self._pb_fn:
|
||||
return os.path.basename(self._pb_fn).replace('.yml', '').replace('.yaml', '')
|
||||
else:
|
||||
return "ansible-cmd"
|
||||
|
||||
|
||||
@playbook_id.setter
|
||||
def playbook_id(self, value):
|
||||
self._pb_fn = value
|
||||
|
||||
|
||||
@property
|
||||
def logpath_play(self):
|
||||
# this is all to get our path to look nice ish
|
||||
tstamp = time.strftime('%Y/%m/%d/%H.%M.%S', time.localtime(self.started))
|
||||
path = os.path.normpath(self.logpath + '/' + self.playbook_id + '/' + tstamp + '/')
|
||||
|
||||
|
||||
if not os.path.exists(path):
|
||||
try:
|
||||
os.makedirs(path)
|
||||
@@ -84,13 +76,13 @@ class LogMech(object):
|
||||
raise
|
||||
|
||||
return path
|
||||
|
||||
|
||||
def play_log(self, content):
|
||||
# record out playbook.log
|
||||
# include path to playbook, checksums, user running playbook
|
||||
# any args we can get back from the invocation
|
||||
fd = open(self.logpath_play + '/' + 'playbook-' + self.pid + '.info', 'a')
|
||||
fd.write('%s\n' % content)
|
||||
fd.write('%s\n' % content)
|
||||
fd.close()
|
||||
|
||||
def task_to_json(self, task):
|
||||
@@ -100,25 +92,25 @@ class LogMech(object):
|
||||
res['task_args'] = task.module_args
|
||||
if self.playbook_id == 'ansible-cmd':
|
||||
res['task_userid'] = getlogin()
|
||||
for k in ("delegate_to", "environment", "with_first_found",
|
||||
"local_action", "notified_by", "notify",
|
||||
"register", "sudo", "sudo_user", "tags",
|
||||
for k in ("delegate_to", "environment", "first_available_file",
|
||||
"local_action", "notified_by", "notify", "only_if",
|
||||
"register", "sudo", "sudo_user", "tags",
|
||||
"transport", "when"):
|
||||
v = getattr(task, k, None)
|
||||
if v:
|
||||
res['task_' + k] = v
|
||||
|
||||
|
||||
return res
|
||||
|
||||
|
||||
def log(self, host, category, data, task=None, count=0):
|
||||
if not host:
|
||||
host = 'HOSTMISSING'
|
||||
|
||||
|
||||
if type(data) == dict:
|
||||
name = data.get('module_name',None)
|
||||
else:
|
||||
name = "unknown"
|
||||
|
||||
|
||||
|
||||
# we're in setup - move the invocation info up one level
|
||||
if 'invocation' in data:
|
||||
@@ -134,41 +126,28 @@ class LogMech(object):
|
||||
data['task_start'] = self._last_task_start
|
||||
data['task_end'] = time.time()
|
||||
data.update(self.task_to_json(task))
|
||||
|
||||
|
||||
if 'task_userid' not in data:
|
||||
data['task_userid'] = getlogin()
|
||||
|
||||
|
||||
if category == 'OK' and data.get('changed', False):
|
||||
category = 'CHANGED'
|
||||
|
||||
if self.play_info.get('check', False) and self.play_info.get('diff', False):
|
||||
category = 'CHECK_DIFF:' + category
|
||||
elif self.play_info.get('check', False):
|
||||
|
||||
if self.play_info.get('check', False):
|
||||
category = 'CHECK:' + category
|
||||
|
||||
# Sometimes this is None.. other times it's fine. Other times it has
|
||||
# trailing whitespace that kills logview. Strip that, when possible.
|
||||
if name:
|
||||
name = name.strip()
|
||||
|
||||
sanitize_host = host.replace(' ', '_').replace('>', '-')
|
||||
fd = open(self.logpath_play + '/' + sanitize_host + '.log', 'a')
|
||||
|
||||
fd = open(self.logpath_play + '/' + host + '.log', 'a')
|
||||
now = time.strftime(TIME_FORMAT, time.localtime())
|
||||
fd.write(MSG_FORMAT % dict(now=now, name=name, count=count, category=category, data=json.dumps(data)))
|
||||
fd.close()
|
||||
|
||||
|
||||
|
||||
logmech = LogMech()
|
||||
|
||||
class CallbackModule(CallbackBase):
|
||||
class CallbackModule(object):
|
||||
"""
|
||||
logs playbook results, per host, in /var/log/ansible/hosts
|
||||
"""
|
||||
CALLBACK_NAME = 'logdetail'
|
||||
CALLBACK_TYPE = 'notification'
|
||||
CALLBACK_VERSION = 2.0
|
||||
CALLBACK_NEEDS_WHITELIST = True
|
||||
|
||||
def __init__(self):
|
||||
self._task_count = 0
|
||||
self._play_count = 0
|
||||
@@ -259,7 +238,7 @@ class CallbackModule(CallbackBase):
|
||||
|
||||
def playbook_on_play_start(self, pattern):
|
||||
self._task_count = 0
|
||||
|
||||
|
||||
play = getattr(self, 'play', None)
|
||||
if play:
|
||||
# figure out where the playbook FILE is
|
||||
@@ -279,29 +258,27 @@ class CallbackModule(CallbackBase):
|
||||
pb_info['inventory'] = play.playbook.inventory.host_list
|
||||
pb_info['playbook_checksum'] = utils.md5(path)
|
||||
pb_info['check'] = play.playbook.check
|
||||
pb_info['diff'] = play.playbook.diff
|
||||
logmech.play_log(json.dumps(pb_info, indent=4))
|
||||
|
||||
self._play_count += 1
|
||||
# then write per-play info that doesn't duplicate the playbook info
|
||||
|
||||
self._play_count += 1
|
||||
# then write per-play info that doesn't duplicate the playbook info
|
||||
info = {}
|
||||
info['play'] = play.name
|
||||
info['hosts'] = play.hosts
|
||||
info['transport'] = play.transport
|
||||
info['number'] = self._play_count
|
||||
info['check'] = play.playbook.check
|
||||
info['diff'] = play.playbook.diff
|
||||
logmech.play_info = info
|
||||
logmech.play_log(json.dumps(info, indent=4))
|
||||
|
||||
|
||||
def playbook_on_stats(self, stats):
|
||||
results = {}
|
||||
results = {}
|
||||
for host in stats.processed.keys():
|
||||
results[host] = stats.summarize(host)
|
||||
logmech.log(host, 'STATS', results[host])
|
||||
logmech.play_log(json.dumps({'stats': results}, indent=4))
|
||||
logmech.play_log(json.dumps({'playbook_end': time.time()}, indent=4))
|
||||
print 'logs written to: %s' % logmech.logpath_play
|
||||
|
||||
|
||||
|
||||
|
||||
@@ -1,278 +0,0 @@
|
||||
# (C) 2012, Michael DeHaan, <michael.dehaan@gmail.com>
|
||||
# based on the log_plays example
|
||||
# skvidal@fedoraproject.org
|
||||
|
||||
# Ansible is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# Ansible is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
from __future__ import absolute_import
|
||||
|
||||
import os
|
||||
import time
|
||||
import json
|
||||
import pwd
|
||||
|
||||
try:
|
||||
from ansible.utils.hashing import secure_hash
|
||||
except ImportError:
|
||||
from ansible.utils import md5 as secure_hash
|
||||
|
||||
try:
|
||||
from ansible.plugins.callback import CallbackBase
|
||||
except ImportError:
|
||||
# Ansible v1 compat
|
||||
CallbackBase = object
|
||||
|
||||
TIME_FORMAT="%b %d %Y %H:%M:%S"
|
||||
|
||||
MSG_FORMAT="%(now)s\t%(count)s\t%(category)s\t%(name)s\t%(data)s\n"
|
||||
|
||||
LOG_PATH = '/var/log/ansible'
|
||||
|
||||
def getlogin():
|
||||
try:
|
||||
user = os.getlogin()
|
||||
except OSError, e:
|
||||
user = pwd.getpwuid(os.geteuid())[0]
|
||||
return user
|
||||
|
||||
class LogMech(object):
|
||||
def __init__(self):
|
||||
self.started = time.time()
|
||||
self.pid = str(os.getpid())
|
||||
self._pb_fn = None
|
||||
self._last_task_start = None
|
||||
self.play_info = {}
|
||||
self.logpath = LOG_PATH
|
||||
if not os.path.exists(self.logpath):
|
||||
try:
|
||||
os.makedirs(self.logpath, mode=0750)
|
||||
except OSError, e:
|
||||
if e.errno != 17:
|
||||
raise
|
||||
|
||||
# checksum of full playbook?
|
||||
|
||||
@property
|
||||
def playbook_id(self):
|
||||
if self._pb_fn:
|
||||
return os.path.basename(self._pb_fn).replace('.yml', '').replace('.yaml', '')
|
||||
else:
|
||||
return "ansible-cmd"
|
||||
|
||||
@playbook_id.setter
|
||||
def playbook_id(self, value):
|
||||
self._pb_fn = value
|
||||
|
||||
@property
|
||||
def logpath_play(self):
|
||||
# this is all to get our path to look nice ish
|
||||
tstamp = time.strftime('%Y/%m/%d/%H.%M.%S', time.localtime(self.started))
|
||||
path = os.path.normpath(self.logpath + '/' + self.playbook_id + '/' + tstamp + '/')
|
||||
|
||||
if not os.path.exists(path):
|
||||
try:
|
||||
os.makedirs(path)
|
||||
except OSError, e:
|
||||
if e.errno != 17: # if it is not dir exists then raise it up
|
||||
raise
|
||||
|
||||
return path
|
||||
|
||||
def play_log(self, content):
|
||||
# record out playbook.log
|
||||
# include path to playbook, checksums, user running playbook
|
||||
# any args we can get back from the invocation
|
||||
fd = open(self.logpath_play + '/' + 'playbook-' + self.pid + '.info', 'a')
|
||||
fd.write('%s\n' % content)
|
||||
fd.close()
|
||||
|
||||
def task_to_json(self, task):
|
||||
res = {}
|
||||
res['task_name'] = task.name
|
||||
res['task_module'] = task.action
|
||||
res['task_args'] = task.args
|
||||
if self.playbook_id == 'ansible-cmd':
|
||||
res['task_userid'] = getlogin()
|
||||
for k in ("delegate_to", "environment", "with_first_found",
|
||||
"local_action", "notified_by", "notify",
|
||||
"register", "sudo", "sudo_user", "tags",
|
||||
"transport", "when"):
|
||||
v = getattr(task, k, None)
|
||||
if v:
|
||||
res['task_' + k] = v
|
||||
|
||||
return res
|
||||
|
||||
def log(self, host, category, data, task=None, count=0):
|
||||
if not host:
|
||||
host = 'HOSTMISSING'
|
||||
|
||||
if type(data) == dict:
|
||||
name = data.get('module_name',None)
|
||||
else:
|
||||
name = "unknown"
|
||||
|
||||
|
||||
# we're in setup - move the invocation info up one level
|
||||
if 'invocation' in data:
|
||||
invoc = data['invocation']
|
||||
if not name and 'module_name' in invoc:
|
||||
name = invoc['module_name']
|
||||
|
||||
#don't add this since it can often contain complete passwords :(
|
||||
del(data['invocation'])
|
||||
|
||||
if task:
|
||||
name = task.name
|
||||
data['task_start'] = self._last_task_start
|
||||
data['task_end'] = time.time()
|
||||
data.update(self.task_to_json(task))
|
||||
|
||||
if 'task_userid' not in data:
|
||||
data['task_userid'] = getlogin()
|
||||
|
||||
if category == 'OK' and data.get('changed', False):
|
||||
category = 'CHANGED'
|
||||
|
||||
if self.play_info.get('check', False) and self.play_info.get('diff', False):
|
||||
category = 'CHECK_DIFF:' + category
|
||||
elif self.play_info.get('check', False):
|
||||
category = 'CHECK:' + category
|
||||
|
||||
# Sometimes this is None.. other times it's fine. Other times it has
|
||||
# trailing whitespace that kills logview. Strip that, when possible.
|
||||
if name:
|
||||
name = name.strip()
|
||||
|
||||
sanitize_host = host.replace(' ', '_').replace('>', '-')
|
||||
fd = open(self.logpath_play + '/' + sanitize_host + '.log', 'a')
|
||||
now = time.strftime(TIME_FORMAT, time.localtime())
|
||||
fd.write(MSG_FORMAT % dict(now=now, name=name, count=count, category=category, data=json.dumps(data)))
|
||||
fd.close()
|
||||
|
||||
|
||||
logmech = LogMech()
|
||||
|
||||
class CallbackModule(CallbackBase):
|
||||
"""
|
||||
logs playbook results, per host, in /var/log/ansible/hosts
|
||||
"""
|
||||
CALLBACK_NAME = 'logdetail2'
|
||||
CALLBACK_TYPE = 'notification'
|
||||
CALLBACK_VERSION = 2.0
|
||||
CALLBACK_NEEDS_WHITELIST = True
|
||||
|
||||
def __init__(self):
|
||||
self._task_count = 0
|
||||
self._play_count = 0
|
||||
self.task = None
|
||||
self.playbook = None
|
||||
|
||||
super(CallbackModule, self).__init__()
|
||||
|
||||
def set_play_context(self, play_context):
|
||||
self.play_context = play_context
|
||||
|
||||
def v2_runner_on_failed(self, result, ignore_errors=False):
|
||||
category = 'FAILED'
|
||||
logmech.log(result._host.get_name(), category, result._result, self.task, self._task_count)
|
||||
|
||||
def v2_runner_on_ok(self, result):
|
||||
category = 'OK'
|
||||
logmech.log(result._host.get_name(), category, result._result, self.task, self._task_count)
|
||||
|
||||
def v2_runner_on_skipped(self, result):
|
||||
category = 'SKIPPED'
|
||||
res = {}
|
||||
res['item'] = self._get_item(getattr(result._result, 'results', {}))
|
||||
logmech.log(result._host.get_name(), category, res, self.task, self._task_count)
|
||||
|
||||
def v2_runner_on_unreachable(self, result):
|
||||
category = 'UNREACHABLE'
|
||||
res = {}
|
||||
res['output'] = result._result
|
||||
logmech.log(result._host.get_name(), category, res, self.task, self._task_count)
|
||||
|
||||
def v2_runner_on_async_failed(self, result):
|
||||
category = 'ASYNC_FAILED'
|
||||
logmech.log(result._host.get_name(), category, result._result, self.task, self._task_count)
|
||||
|
||||
def v2_playbook_on_start(self, playbook):
|
||||
self.playbook = playbook
|
||||
|
||||
def v2_playbook_on_task_start(self, task, is_conditional):
|
||||
self.task = task
|
||||
logmech._last_task_start = time.time()
|
||||
self._task_count += 1
|
||||
|
||||
def v2_playbook_on_setup(self):
|
||||
self._task_count += 1
|
||||
|
||||
def v2_playbook_on_import_for_host(self, result, imported_file):
|
||||
res = {}
|
||||
res['imported_file'] = imported_file
|
||||
logmech.log(result._host.get_name(), 'IMPORTED', res, self.task)
|
||||
|
||||
def v2_playbook_on_not_import_for_host(self, result, missing_file):
|
||||
res = {}
|
||||
res['missing_file'] = missing_file
|
||||
logmech.log(result._host.get_name(), 'NOTIMPORTED', res, self.task)
|
||||
|
||||
def v2_playbook_on_play_start(self, play):
|
||||
self._task_count = 0
|
||||
|
||||
if play:
|
||||
# figure out where the playbook FILE is
|
||||
path = os.path.abspath(self.playbook._file_name)
|
||||
|
||||
# tell the logger what the playbook is
|
||||
logmech.playbook_id = path
|
||||
|
||||
# if play count == 0
|
||||
# write out playbook info now
|
||||
if not self._play_count:
|
||||
pb_info = {}
|
||||
pb_info['playbook_start'] = time.time()
|
||||
pb_info['playbook'] = path
|
||||
pb_info['userid'] = getlogin()
|
||||
pb_info['extra_vars'] = play._variable_manager.extra_vars
|
||||
pb_info['inventory'] = play._variable_manager._inventory._sources
|
||||
pb_info['playbook_checksum'] = secure_hash(path)
|
||||
pb_info['check'] = self.play_context.check_mode
|
||||
pb_info['diff'] = self.play_context.diff
|
||||
logmech.play_log(json.dumps(pb_info, indent=4))
|
||||
|
||||
self._play_count += 1
|
||||
# then write per-play info that doesn't duplicate the playbook info
|
||||
info = {}
|
||||
info['play'] = play.name
|
||||
info['hosts'] = play.hosts
|
||||
info['transport'] = self.play_context.connection
|
||||
info['number'] = self._play_count
|
||||
info['check'] = self.play_context.check_mode
|
||||
info['diff'] = self.play_context.diff
|
||||
logmech.play_info = info
|
||||
logmech.play_log(json.dumps(info, indent=4))
|
||||
|
||||
|
||||
def v2_playbook_on_stats(self, stats):
|
||||
results = {}
|
||||
for host in stats.processed.keys():
|
||||
results[host] = stats.summarize(host)
|
||||
logmech.log(host, 'STATS', results[host])
|
||||
logmech.play_log(json.dumps({'stats': results}, indent=4))
|
||||
logmech.play_log(json.dumps({'playbook_end': time.time()}, indent=4))
|
||||
print('logs written to: %s' % logmech.logpath_play)
|
||||
|
||||
|
||||
@@ -1,21 +0,0 @@
|
||||
pam_url:
|
||||
{
|
||||
settings:
|
||||
{
|
||||
url = "https://fas-all.phx2.fedoraproject.org:8443/"; # URI to fetch
|
||||
returncode = "OK"; # The remote script/cgi should return a 200 http code and this string as its only results
|
||||
userfield = "user"; # userfield name to send
|
||||
passwdfield = "token"; # passwdfield name to send
|
||||
extradata = "&do=login"; # extradata to send
|
||||
prompt = "Password+Token: "; # password prompt
|
||||
};
|
||||
|
||||
ssl:
|
||||
{
|
||||
verify_peer = true; # Should we verify SSL ?
|
||||
verify_host = true; # Should we verify the CN in the SSL cert?
|
||||
client_cert = "/etc/pki/tls/private/totpcgi.pem"; # file to use as client-side certificate
|
||||
client_key = "/etc/pki/tls/private/totpcgi.pem"; # file to use as client-side key (can be same file as above if a single cert)
|
||||
ca_cert = "/etc/pki/tls/private/totpcgi-ca.cert";
|
||||
};
|
||||
};
|
||||
@@ -1,21 +0,0 @@
|
||||
pam_url:
|
||||
{
|
||||
settings:
|
||||
{
|
||||
url = "https://fas-all.phx2.fedoraproject.org:8443/"; # URI to fetch
|
||||
returncode = "OK"; # The remote script/cgi should return a 200 http code and this string as its only results
|
||||
userfield = "user"; # userfield name to send
|
||||
passwdfield = "token"; # passwdfield name to send
|
||||
extradata = "&do=login"; # extradata to send
|
||||
prompt = "Password+Token: "; # password prompt
|
||||
};
|
||||
|
||||
ssl:
|
||||
{
|
||||
verify_peer = true; # Should we verify SSL ?
|
||||
verify_host = true; # Should we verify the CN in the SSL cert?
|
||||
client_cert = "/etc/pki/tls/private/totpcgi.pem"; # file to use as client-side certificate
|
||||
client_key = "/etc/pki/tls/private/totpcgi.pem"; # file to use as client-side key (can be same file as above if a single cert)
|
||||
ca_cert = "/etc/pki/tls/private/totpcgi-ca.cert";
|
||||
};
|
||||
};
|
||||
@@ -1,27 +0,0 @@
|
||||
pam_url:
|
||||
{
|
||||
settings:
|
||||
{
|
||||
{% if env == 'staging' %}
|
||||
url = "https://fas-all.stg.phx2.fedoraproject.org:8443/"; # URI to fetch
|
||||
{% elif datacenter == 'phx2' %}
|
||||
url = "https://fas-all.phx2.fedoraproject.org:8443/"; # URI to fetch
|
||||
{% else %}
|
||||
url = "https://fas-all.vpn.fedoraproject.org:8443/"; # URI to fetch
|
||||
{% endif %}
|
||||
returncode = "OK"; # The remote script/cgi should return a 200 http code and this string as its only results
|
||||
userfield = "user"; # userfield name to send
|
||||
passwdfield = "token"; # passwdfield name to send
|
||||
extradata = "&do=login"; # extradata to send
|
||||
prompt = "Password+Token: "; # password prompt
|
||||
};
|
||||
|
||||
ssl:
|
||||
{
|
||||
verify_peer = true; # Should we verify SSL ?
|
||||
verify_host = true; # Should we verify the CN in the SSL cert?
|
||||
client_cert = "/etc/pki/tls/private/totpcgi.pem"; # file to use as client-side certificate
|
||||
client_key = "/etc/pki/tls/private/totpcgi.pem"; # file to use as client-side key (can be same file as above if a single cert)
|
||||
ca_cert = "/etc/pki/tls/private/totpcgi-ca.cert";
|
||||
};
|
||||
};
|
||||
@@ -3,6 +3,8 @@ auth required pam_env.so
|
||||
auth sufficient pam_url.so config=/etc/pam_url.conf
|
||||
auth requisite pam_succeed_if.so uid >= 500 quiet
|
||||
auth required pam_deny.so
|
||||
|
||||
auth include system-auth
|
||||
account include system-auth
|
||||
password include system-auth
|
||||
session optional pam_keyinit.so revoke
|
||||
|
||||
@@ -3,14 +3,7 @@
|
||||
|
||||
AllowOverride All
|
||||
|
||||
<IfModule mod_authz_core.c>
|
||||
# Apache 2.4
|
||||
Require all granted
|
||||
</IfModule>
|
||||
<IfModule !mod_authz_core.c>
|
||||
# Apache 2.2
|
||||
Order deny,allow
|
||||
Allow from all
|
||||
</IfModule>
|
||||
Order allow,deny
|
||||
Allow from all
|
||||
|
||||
</Directory>
|
||||
|
||||
files/bacula/bacula-dir.conf.j2 (new file, 1148 lines; diff suppressed because it is too large)
files/bacula/bacula-fd.conf.j2 (new file, 45 lines)
@@ -0,0 +1,45 @@
|
||||
#
|
||||
# Default Bacula File Daemon Configuration file
|
||||
#
|
||||
# For Bacula release 2.0.3 (06 March 2007) -- redhat (Zod)
|
||||
#
|
||||
# There is not much to change here except perhaps the
|
||||
# File daemon Name to
|
||||
#
|
||||
|
||||
#
|
||||
# List Directors who are permitted to contact this File daemon
|
||||
#
|
||||
Director {
|
||||
Name = bacula-dir
|
||||
Password = "{{ bacula5PasswordDir }}"
|
||||
}
|
||||
|
||||
#
|
||||
# Restricted Director, used by tray-monitor to get the
|
||||
# status of the file daemon
|
||||
#
|
||||
Director {
|
||||
Name = bacula-mon
|
||||
Password = "{{ bacula5PasswordDir }}"
|
||||
Monitor = yes
|
||||
}
|
||||
|
||||
#
|
||||
# "Global" File daemon configuration specifications
|
||||
#
|
||||
FileDaemon { # this is me
|
||||
Name = bacula-fd
|
||||
FDport = 9102 # where we listen for the director
|
||||
WorkingDirectory = /var/spool/bacula
|
||||
Pid Directory = /var/run
|
||||
Maximum Concurrent Jobs = 10
|
||||
Heartbeat Interval = 10
|
||||
#Maximum Network Buffer Size = 131072
|
||||
}
|
||||
|
||||
# Send all messages except skipped files back to Director
|
||||
Messages {
|
||||
Name = Standard
|
||||
director = bacula-dir = all, !skipped, !restored
|
||||
}
|
||||
files/bacula/bacula-sd.conf.j2 (new file, 104 lines)
@@ -0,0 +1,104 @@
|
||||
#
|
||||
# Default Bacula Storage Daemon Configuration file
|
||||
#
|
||||
# For Bacula release 2.0.3 (06 March 2007) -- redhat (Zod)
|
||||
#
|
||||
# You may need to change the name of your tape drive
|
||||
# on the "Archive Device" directive in the Device
|
||||
# resource. If you change the Name and/or the
|
||||
# "Media Type" in the Device resource, please ensure
|
||||
# that dird.conf has corresponding changes.
|
||||
#
|
||||
|
||||
Storage { # definition of myself
|
||||
Name = bacula-sd
|
||||
SDPort = 9103 # Director's port
|
||||
WorkingDirectory = "/var/spool/bacula"
|
||||
Pid Directory = "/var/run"
|
||||
Maximum Concurrent Jobs = 10
|
||||
Heartbeat Interval = 5
|
||||
}
|
||||
|
||||
#
|
||||
# List Directors who are permitted to contact Storage daemon
|
||||
#
|
||||
Director {
|
||||
Name = bacula-dir
|
||||
Password = "{{ bacula5PasswordDir }}"
|
||||
}
|
||||
|
||||
#
|
||||
# Restricted Director, used by tray-monitor to get the
|
||||
# status of the storage daemon
|
||||
#
|
||||
Director {
|
||||
Name = bacula-mon
|
||||
Password = "{{ bacula5PasswordDir }}"
|
||||
Monitor = yes
|
||||
}
|
||||
|
||||
#
|
||||
# Devices supported by this Storage daemon
|
||||
# To connect, the Director's bacula-dir.conf must have the
|
||||
# same Name and MediaType.
|
||||
#
|
||||
|
||||
Device {
|
||||
Name = FileStorage
|
||||
Media Type = File
|
||||
Archive Device = /bacula/
|
||||
LabelMedia = yes; # lets Bacula label unlabeled media
|
||||
Random Access = Yes;
|
||||
AutomaticMount = yes; # when device opened, read it
|
||||
RemovableMedia = no;
|
||||
AlwaysOpen = no;
|
||||
}
|
||||
|
||||
|
||||
Device {
|
||||
Name = FileStorage2
|
||||
Media Type = File
|
||||
Archive Device = /bacula2/
|
||||
LabelMedia = yes; # lets Bacula label unlabeled media
|
||||
Random Access = Yes;
|
||||
AutomaticMount = yes; # when device opened, read it
|
||||
RemovableMedia = no;
|
||||
AlwaysOpen = no;
|
||||
}
|
||||
|
||||
#
|
||||
# An autochanger device with two drives
|
||||
|
||||
Autochanger {
|
||||
Name = Autochanger
|
||||
Device = Drive-1
|
||||
Changer Command = "/usr/libexec/bacula/mtx-changer %c %o %S %a %d"
|
||||
Changer Device = /dev/sg1
|
||||
}
|
||||
|
||||
Device {
|
||||
Name = Drive-1 #
|
||||
Drive Index = 0
|
||||
Media Type = LTO-5
|
||||
Archive Device = /dev/nst0
|
||||
AutomaticMount = yes; # when device opened, read it
|
||||
AlwaysOpen = yes;
|
||||
RemovableMedia = yes;
|
||||
RandomAccess = no;
|
||||
AutoChanger = yes
|
||||
SpoolDirectory = /bacula/bacula/spool/;
|
||||
Maximum Spool Size = 1600G;
|
||||
# Label Media = yes
|
||||
# Enable the Alert command only if you have the mtx package loaded
|
||||
Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
|
||||
# If you have smartctl, enable this, it has more info than tapeinfo
|
||||
Alert Command = "sh -c 'smartctl -H -l error %c'"
|
||||
}
|
||||
#
|
||||
# Send all messages to the Director,
|
||||
# mount messages also are sent to the email address
|
||||
#
|
||||
Messages {
|
||||
Name = Standard
|
||||
director = bacula-dir = all
|
||||
}
|
||||
files/bacula/bconsole.conf.j2 (new file, 10 lines)
@@ -0,0 +1,10 @@
|
||||
#
|
||||
# Bacula User Agent (or Console) Configuration File
|
||||
#
|
||||
|
||||
Director {
|
||||
Name = bacula-dir
|
||||
DIRport = 9101
|
||||
address = localhost
|
||||
Password = "{{ bacula5PasswordCon }}"
|
||||
}
|
||||
files/bacula/fedora_delete_catalog_backup (new executable file, 5 lines)
@@ -0,0 +1,5 @@
|
||||
#!/bin/sh
|
||||
#
|
||||
# This script deletes a catalog dump
|
||||
#
|
||||
rm -f /bacula/bacula.sql
|
||||
files/bacula/fedora_make_catalog_backup (new executable file, 3 lines)
@@ -0,0 +1,3 @@
|
||||
#!/bin/sh
|
||||
rm -f /bacula/bacula.sql
|
||||
/usr/bin/mysqldump -u bacula -f bacula > /bacula/bacula.sql
|
||||
files/collectd/apache.conf (new file, 6 lines)
@@ -0,0 +1,6 @@
|
||||
LoadPlugin apache
|
||||
|
||||
<Plugin apache>
|
||||
URL "http://localhost/apache-status?auto"
|
||||
</Plugin>
|
||||
|
||||
@@ -39,18 +39,6 @@ LoadPlugin vmem
|
||||
IgnoreSelected false
|
||||
</Plugin>
|
||||
|
||||
<Plugin "interface">
|
||||
Interface "/^veth/"
|
||||
IgnoreSelected true
|
||||
</Plugin>
|
||||
|
||||
<Plugin "df">
|
||||
MountPoint "^/.*/.snapshot/"
|
||||
FSType "tmpfs"
|
||||
FSType "overlay"
|
||||
IgnoreSelected true
|
||||
</Plugin>
|
||||
|
||||
<Plugin hddtemp>
|
||||
TranslateDevicename false
|
||||
</Plugin>
|
||||
@@ -1,5 +1,4 @@
|
||||
#!/bin/bash
|
||||
export PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
|
||||
|
||||
host=$(hostname -s)
|
||||
pause=10
|
||||
@@ -1,4 +1,3 @@
|
||||
LoadPlugin exec
|
||||
<Plugin exec>
|
||||
Exec "nobody" "/usr/local/bin/collectd_mailq.sh"
|
||||
</Plugin>
|
||||
files/collectd/network-client.conf (new file, 5 lines)
@@ -0,0 +1,5 @@
|
||||
LoadPlugin network
|
||||
|
||||
<Plugin "network">
|
||||
Server "log02"
|
||||
</Plugin>
|
||||
files/collectd/rrdtool.conf (new file, 8 lines)
@@ -0,0 +1,8 @@
|
||||
LoadPlugin rrdtool
|
||||
|
||||
<Plugin rrdtool>
|
||||
CacheTimeout 160
|
||||
CacheFlush 1200
|
||||
WritesPerSecond 50
|
||||
</Plugin>
|
||||
|
||||
files/common-scripts/syncFiles.sh (new executable file, 50 lines)
@@ -0,0 +1,50 @@
|
||||
#!/bin/bash
|
||||
# this script lets us sync files off of lockbox via rsync with locking and relative niceness
|
||||
# look in rsyncd.conf on lockbox for what's available here
|
||||
|
||||
set +e
|
||||
|
||||
HOST=lockbox01.vpn.fedoraproject.org
|
||||
|
||||
function cleanlock()
|
||||
{
|
||||
/bin/rm -f /var/lock/$1.lock
|
||||
}
|
||||
|
||||
|
||||
function quit()
|
||||
{
|
||||
echo $1
|
||||
if [ $2 ]
|
||||
then
|
||||
cleanlock $2
|
||||
fi
|
||||
exit 2
|
||||
}
|
||||
|
||||
function newlock()
|
||||
{
|
||||
if [ -f /var/lock/$1.lock ]
|
||||
then
|
||||
quit "Lockfile exists.. Remove /var/lock/$1.lock"
|
||||
else
|
||||
touch /var/lock/$1.lock
|
||||
fi
|
||||
}
|
||||
|
||||
# General help
|
||||
if [ $3 ] || [ ! $2 ]
|
||||
then
|
||||
quit "$0 source dest"
|
||||
fi
|
||||
|
||||
lockname=`basename $1`
|
||||
newlock $lockname
|
||||
if [ ! -d $2 ]
|
||||
then
|
||||
mkdir $2
|
||||
fi
|
||||
/usr/bin/rsync -a $HOST::$1/* $2
|
||||
cleanlock $lockname
|
||||
|
||||
|
||||
@@ -1,30 +0,0 @@
|
||||
-----BEGIN PGP PUBLIC KEY BLOCK-----
|
||||
Version: GnuPG v1
|
||||
|
||||
mQINBFfZrzsBEADGLYtUW4YZNKSq/bawWYSg3Z8OAD3amoWx9BTdiBjWyIn7PzBQ
|
||||
g/Y2QpTj9Sylhi4ZDqcP6eikrC2bqZdBeJyOAHSkV6Nvt+D/ijHOViEsSg+OwHmC
|
||||
9axbsNHI+WKYPR7GBb40/hu7miHTOWd7puuJ000nyeHckicSHNYb+KxwoN9TTyON
|
||||
utqTtzUb1v0f+GZ2E3XHCa/SgHG+syFbKhFiPRqSmwuhESgz7JIPx9UPz/pkg/rA
|
||||
qHILJDt5PGaxhRNcK4rOVhpIBxTdjyYvtkCzlMr8ZaLqlQx2B5Ub9osYSv7CwQD5
|
||||
tJTb9ed/p5HKuT9JEDSgtxV2yy6bxEMkBjlD5m4ISnOnZ8GGjPl434FdufusIwDX
|
||||
vFUQDH5BSGV1xUcoCoNAMY+CUCoUaTBkv5PqLOgsCirSImvXhSCFBT1VVb2sPhuG
|
||||
J6q9Nk18+i2sMtjflM9PzCblMe7C1gySiuH4q+hvB6IDnYirLLy0ctBvr3siY4hY
|
||||
lTydy+4z7UuquLv02t5Zbw9jxqX1LEyiMvUppx5XgGyQ0cGQpkRHXRzQqI6bjUny
|
||||
e8Ub2sfjidjqRWyycY4F7KGG/DeKE3UeclDjFlA+CTvgu88RGgzTMZym5NxgjgfJ
|
||||
PYj+etPXth3PNzxd8FAC4tWP5b6kEVVJ2Oxiy6Z8dYQJVsAVP110bo/MFwARAQAB
|
||||
tEBGZWRvcmEgSW5mcmFzdHJ1Y3R1cmUgKGluZnJhc3RydWN0dXJlKSA8YWRtaW5A
|
||||
ZmVkb3JhcHJvamVjdC5vcmc+iQI4BBMBAgAiBQJX2a87AhsPBgsJCAcDAgYVCAIJ
|
||||
CgsEFgIDAQIeAQIXgAAKCRCAWYFeR92O+RbAD/9QzUyyoDPvPjlxn341BdT1iG3s
|
||||
BvKjNOAtQkHeDzRQ0rBXG40yoTjQ+s4X+3aNumy4C+xeGqUiFMcBED/5EdahWcXm
|
||||
5dqEAysTpiWOaamVfvQaNuBZjKP6GXXUeAVvkEVXggTI18tpNR/xFqfvHMCYuRUJ
|
||||
QERNDtEPweQn9U3ewr7VOIrF8OnxVEQe9xOPKnGr0yD22NHz5hCiIKXwt34I7m9j
|
||||
IlKMETTUflmERzzzwWp9CwmwU2o+g9hILqtvLFV/9TDSiWTvr2Ynj/hlNZPG8MhB
|
||||
K73S8oQADP/ogmwYkK3cx06CkaSEiQciAkpL4v7GzWfw3hTScIxbf/R5YU5i5qHj
|
||||
N+XJRLoW4AdNRAtrJ1KsLrFhFso9o7cfUlGGDPOwwQu3etoY3t0vViXYanOJrXqA
|
||||
DaHZ7Ynj7V5KNB97xbjohT+YiApBV1jmMbydAMhNxo2ZlAC9hmlDEwD9L9CSPt1s
|
||||
PvjcY20/RjVrm62vmXI/Sqa1zPjjYaxceEZzDIcxVDAneeeAdV99zHRDjZLqucux
|
||||
GGJWwUNyxnuA7ZNdD3ZQBJlefOCT4Tg2Yj2ssH6PdGBoWS2gibnGdUsc/LhIaES4
|
||||
afRLHVbHRu1HJ3s7pAgxNRY5Cjc5GEqdvm+5LOt/usyyaUwds0cJp55KKovsqZ1v
|
||||
+h4JFKdsC+6/ZUHRQQ==
|
||||
=MNfm
|
||||
-----END PGP PUBLIC KEY BLOCK-----
|
||||
3
files/common/ansible-pub-key
Normal file
@@ -0,0 +1,3 @@
|
||||
#ansible root key
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAmS3g5fSXizcCqKMI1n5WPFrfMyu7BMrMkMYyck07rB/cf2orO8kKj5schjILA8NYJFStlv2CGRXmQlendj523FPzPmzxvTP/OT4qdywa4LKGvAxOkRGCMMxWzVFLdEMzsLUE/+FLX+xd1US9UPLGRsbMkdz4ORCc0G8gqTr835H56mQPI+/zPFeQjHoHGYtQA1wnJH/0LCuFFfU82IfzrXzFDIBAA5i2S+eEOk7/SA4Ciek1CthNtqPX27M6UqkJMBmVpnAdeDz2noWMvlzAAUQ7dHL84CiXbUnF3hhYrHDbmD+kEK+KiRrYh3PT+5YfEPVI/xiDJ2fdHGxY7Dr2TQ== root@lockbox01.phx2.fedoraproject.org
|
||||
|
||||
@@ -1,20 +0,0 @@
[epel]
name=Extras Packages for Enterprise Linux $releasever - $basearch
baseurl=http://infrastructure.fedoraproject.org/pub/epel/7/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://infrastructure.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7

[epel-testing]
name=Extras Packages for Enterprise Linux $releasever - $basearch
baseurl=http://infrastructure.fedoraproject.org/pub/epel/testing/7/$basearch/
enabled=0
gpgcheck=1
gpgkey=http://infrastructure.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7

[epel-beta]
name=Extras Packages for Enterprise Linux beta $releasever - $basearch
baseurl=http://infrastructure.fedoraproject.org/pub/epel/beta/7/$basearch/
enabled=0
gpgcheck=1
gpgkey=http://infrastructure.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7
@@ -1,6 +0,0 @@
[infrastructure-tags-stg]
name=Fedora Infrastructure staging tag $releasever - $basearch
baseurl=https://kojipkgs.fedoraproject.org/repos-dist/f$releasever-infra-stg/latest/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://infrastructure.fedoraproject.org/repo/infra/RPM-GPG-KEY-INFRA-TAGS
@@ -1,6 +0,0 @@
[infrastructure-tags]
name=Fedora Infrastructure tag $releasever - $basearch
baseurl=https://kojipkgs.fedoraproject.org/repos-dist/f$releasever-infra/latest/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://infrastructure.fedoraproject.org/repo/infra/RPM-GPG-KEY-INFRA-TAGS
@@ -1,38 +0,0 @@
|
||||
[updates-testing]
|
||||
name=Fedora $releasever - $basearch - Test Updates
|
||||
failovermethod=priority
|
||||
{% if ansible_distribution_major_version|int >27 %}
|
||||
baseurl=https://infrastructure.fedoraproject.org/pub/fedora/linux/updates/testing/$releasever/Everything/$basearch/
|
||||
{% else %}
|
||||
baseurl=https://infrastructure.fedoraproject.org/pub/fedora/linux/updates/testing/$releasever/$basearch/
|
||||
{% endif %}
|
||||
#metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-testing-f$releasever&arch=$basearch
|
||||
enabled=0
|
||||
gpgcheck=1
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
|
||||
|
||||
[updates-testing-debuginfo]
|
||||
name=Fedora $releasever - $basearch - Test Updates Debug
|
||||
failovermethod=priority
|
||||
{% if ansible_distribution_major_version|int >27 %}
|
||||
baseurl=https://infrastructure.fedoraproject.org/pub/fedora/linux/updates/testing/$releasever/Everything/$basearch/debug/
|
||||
{% else %}
|
||||
baseurl=https://infrastructure.fedoraproject.org/pub/fedora/linux/updates/testing/$releasever/$basearch/debug/
|
||||
{% endif %}
|
||||
#metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-testing-debug-f$releasever&arch=$basearch
|
||||
enabled=0
|
||||
gpgcheck=1
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
|
||||
|
||||
[updates-testing-source]
|
||||
name=Fedora $releasever - Test Updates Source
|
||||
failovermethod=priority
|
||||
{% if ansible_distribution_major_version|int >27 %}
|
||||
baseurl=https://infrastructure.fedoraproject.org/pub/fedora/linux/updates/testing/$releasever/Everything/SRPMS/
|
||||
{% else %}
|
||||
baseurl=https://infrastructure.fedoraproject.org/pub/fedora/linux/updates/testing/$releasever/SRPMS/
|
||||
{% endif %}
|
||||
#metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-testing-source-f$releasever&arch=$basearch
|
||||
enabled=0
|
||||
gpgcheck=1
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
|
||||
@@ -1,38 +0,0 @@
|
||||
[updates-testing]
|
||||
name=Fedora $releasever - $basearch - Test Updates
|
||||
failovermethod=priority
|
||||
{% if ansible_distribution_major_version|int >27 %}
|
||||
baseurl=https://infrastructure.fedoraproject.org/pub/fedora-secondary/updates/testing/$releasever/Everything/$basearch/
|
||||
{% else %}
|
||||
baseurl=https://infrastructure.fedoraproject.org/pub/fedora-secondary/updates/testing/$releasever/$basearch/
|
||||
{% endif %}
|
||||
#metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-testing-f$releasever&arch=$basearch
|
||||
enabled=0
|
||||
gpgcheck=1
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
|
||||
|
||||
[updates-testing-debuginfo]
|
||||
name=Fedora $releasever - $basearch - Test Updates Debug
|
||||
failovermethod=priority
|
||||
{% if ansible_distribution_major_version|int >27 %}
|
||||
baseurl=https://infrastructure.fedoraproject.org/pub/fedora-secondary/updates/testing/$releasever/Everything/$basearch/debug/
|
||||
{% else %}
|
||||
baseurl=https://infrastructure.fedoraproject.org/pub/fedora-secondary/updates/testing/$releasever/$basearch/debug/
|
||||
{% endif %}
|
||||
#metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-testing-debug-f$releasever&arch=$basearch
|
||||
enabled=0
|
||||
gpgcheck=1
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
|
||||
|
||||
[updates-testing-source]
|
||||
name=Fedora $releasever - Test Updates Source
|
||||
failovermethod=priority
|
||||
{% if ansible_distribution_major_version|int >27 %}
|
||||
baseurl=https://infrastructure.fedoraproject.org/pub/fedora-secondary/updates/testing/$releasever/Everything/SRPMS/
|
||||
{% else %}
|
||||
baseurl=https://infrastructure.fedoraproject.org/pub/fedora-secondary/updates/testing/$releasever/SRPMS/
|
||||
{% endif %}
|
||||
#metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-testing-source-f$releasever&arch=$basearch
|
||||
enabled=0
|
||||
gpgcheck=1
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
|
||||
@@ -1,38 +0,0 @@
|
||||
[updates]
|
||||
name=Fedora $releasever - $basearch - Updates
|
||||
failovermethod=priority
|
||||
{% if ansible_distribution_major_version|int >27 %}
|
||||
baseurl=https://infrastructure.fedoraproject.org/pub/fedora/linux/updates/$releasever/Everything/$basearch/
|
||||
{% else %}
|
||||
baseurl=https://infrastructure.fedoraproject.org/pub/fedora/linux/updates/$releasever/$basearch/
|
||||
{% endif %}
|
||||
#metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-f$releasever&arch=$basearch
|
||||
enabled=1
|
||||
gpgcheck=1
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
|
||||
|
||||
[updates-debuginfo]
|
||||
name=Fedora $releasever - $basearch - Updates - Debug
|
||||
failovermethod=priority
|
||||
{% if ansible_distribution_major_version|int >27 %}
|
||||
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/updates/$releasever/Everything/$basearch/debug/
|
||||
{% else %}
|
||||
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/updates/$releasever/$basearch/debug/
|
||||
{% endif %}
|
||||
#metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-debug-f$releasever&arch=$basearch
|
||||
enabled=0
|
||||
gpgcheck=1
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
|
||||
|
||||
[updates-source]
|
||||
name=Fedora $releasever - Updates Source
|
||||
failovermethod=priority
|
||||
{% if ansible_distribution_major_version|int >27 %}
|
||||
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/updates/$releasever/Everything/SRPMS/
|
||||
{% else %}
|
||||
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/updates/$releasever/SRPMS/
|
||||
{% endif %}
|
||||
#metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-source-f$releasever&arch=$basearch
|
||||
enabled=0
|
||||
gpgcheck=1
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
|
||||
@@ -1,38 +0,0 @@
|
||||
[updates]
|
||||
name=Fedora $releasever - $basearch - Updates
|
||||
failovermethod=priority
|
||||
{% if ansible_distribution_major_version|int >27 %}
|
||||
baseurl=https://infrastructure.fedoraproject.org/pub/fedora-secondary/updates/$releasever/Everything/$basearch/
|
||||
{% else %}
|
||||
baseurl=https://infrastructure.fedoraproject.org/pub/fedora-secondary/updates/$releasever/$basearch/
|
||||
{% endif %}
|
||||
#metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-f$releasever&arch=$basearch
|
||||
enabled=1
|
||||
gpgcheck=1
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
|
||||
|
||||
[updates-debuginfo]
|
||||
name=Fedora $releasever - $basearch - Updates - Debug
|
||||
failovermethod=priority
|
||||
{% if ansible_distribution_major_version|int >27 %}
|
||||
baseurl=https://infrastructure.fedoraproject.org/pub/fedora-secondary/updates/$releasever/Everything/$basearch/debug/
|
||||
{% else %}
|
||||
baseurl=https://infrastructure.fedoraproject.org/pub/fedora-secondary/updates/$releasever/$basearch/debug/
|
||||
{% endif %}
|
||||
#metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-debug-f$releasever&arch=$basearch
|
||||
enabled=0
|
||||
gpgcheck=1
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
|
||||
|
||||
[updates-source]
|
||||
name=Fedora $releasever - Updates Source
|
||||
failovermethod=priority
|
||||
{% if ansible_distribution_major_version|int >27 %}
|
||||
baseurl=https://infrastructure.fedoraproject.org/pub/fedora-secondary/updates/$releasever/SRPMS/
|
||||
{% else %}
|
||||
baseurl=https://infrastructure.fedoraproject.org/pub/fedora-secondary/updates/$releasever/Everything/SRPMS/
|
||||
{% endif %}
|
||||
#metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-source-f$releasever&arch=$basearch
|
||||
enabled=0
|
||||
gpgcheck=1
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
|
||||
@@ -1,29 +0,0 @@
|
||||
[fedora]
|
||||
name=Fedora $releasever - $basearch
|
||||
failovermethod=priority
|
||||
baseurl=https://infrastructure.fedoraproject.org/pub/fedora/linux/releases/$releasever/Everything/$basearch/os/
|
||||
#metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-$releasever&arch=$basearch
|
||||
enabled=1
|
||||
metadata_expire=7d
|
||||
gpgcheck=1
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
|
||||
|
||||
[fedora-debuginfo]
|
||||
name=Fedora $releasever - $basearch - Debug
|
||||
failovermethod=priority
|
||||
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/releases/$releasever/Everything/$basearch/debug/tree/
|
||||
#metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-debug-$releasever&arch=$basearch
|
||||
enabled=0
|
||||
metadata_expire=7d
|
||||
gpgcheck=1
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
|
||||
|
||||
[fedora-source]
|
||||
name=Fedora $releasever - Source
|
||||
failovermethod=priority
|
||||
baseurl=http://infrastructure.fedoraproject.org/pub/fedora/linux/releases/$releasever/Everything/source/tree/
|
||||
#metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-source-$releasever&arch=$basearch
|
||||
enabled=0
|
||||
metadata_expire=7d
|
||||
gpgcheck=1
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
|
||||
@@ -1,29 +0,0 @@
|
||||
[fedora]
|
||||
name=Fedora $releasever - $basearch
|
||||
failovermethod=priority
|
||||
baseurl=https://infrastructure.fedoraproject.org/pub/fedora-secondary/releases/$releasever/Everything/$basearch/os/
|
||||
#metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-$releasever&arch=$basearch
|
||||
enabled=1
|
||||
metadata_expire=7d
|
||||
gpgcheck=1
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
|
||||
|
||||
[fedora-debuginfo]
|
||||
name=Fedora $releasever - $basearch - Debug
|
||||
failovermethod=priority
|
||||
baseurl=http://infrastructure.fedoraproject.org/pub/fedora-secondary/releases/$releasever/Everything/$basearch/debug/tree/
|
||||
#metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-debug-$releasever&arch=$basearch
|
||||
enabled=0
|
||||
metadata_expire=7d
|
||||
gpgcheck=1
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
|
||||
|
||||
[fedora-source]
|
||||
name=Fedora $releasever - Source
|
||||
failovermethod=priority
|
||||
baseurl=http://infrastructure.fedoraproject.org/pub/fedora-secondary/releases/$releasever/Everything/source/tree/
|
||||
#metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-source-$releasever&arch=$basearch
|
||||
enabled=0
|
||||
metadata_expire=7d
|
||||
gpgcheck=1
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
|
||||
@@ -1,13 +0,0 @@
[rhel-7-alt-for-arm-64-optional-rpms]
name = rhel7 $basearch server optional
baseurl=http://infrastructure.fedoraproject.org/repo/rhel/rhel7/$basearch/rhel-7-alt-for-arm-64-optional-rpms/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
enabled=1
gpgcheck=1

[rhel-7-alt-for-arm-64-rpms]
name = rhel7 $basearch server
baseurl=http://infrastructure.fedoraproject.org/repo/rhel/rhel7/$basearch/rhel-7-alt-for-arm-64-rpms/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
enabled=1
gpgcheck=1
@@ -1,6 +0,0 @@
[infrastructure-tags-stg]
name=Fedora Infrastructure tag $releasever - $basearch
baseurl=https://kojipkgs.fedoraproject.org/repos-dist/epel$releasever-infra-stg/latest/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://infrastructure.fedoraproject.org/repo/infra/RPM-GPG-KEY-INFRA-TAGS
@@ -1,6 +0,0 @@
[infrastructure-tags]
name=Fedora Infrastructure tag $releasever - $basearch
baseurl=https://kojipkgs.fedoraproject.org/repos-dist/epel$releasever-infra/latest/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://infrastructure.fedoraproject.org/repo/infra/RPM-GPG-KEY-INFRA-TAGS
@@ -1,4 +0,0 @@
[rhel7-rhev]
name = rhel7 rhev $basearch
baseurl=http://infrastructure.fedoraproject.org/repo/rhel/rhel7/$basearch/rhel-7-for-rhev-power-agents-rpms
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
@@ -1,24 +0,0 @@
|
||||
[rhel7-dvd]
|
||||
name = rhel7 base dvd
|
||||
baseurl=http://infrastructure.fedoraproject.org/repo/rhel/RHEL7-$basearch/
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
|
||||
|
||||
[rhel7-base]
|
||||
name = rhel7 base $basearch
|
||||
baseurl=http://infrastructure.fedoraproject.org/repo/rhel/rhel7/$basearch/rhel-7-server-rpms
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
|
||||
|
||||
[rhel7-optional]
|
||||
name = rhel7 optional $basearch
|
||||
baseurl=http://infrastructure.fedoraproject.org/repo/rhel/rhel7/$basearch/rhel-7-server-optional-rpms
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
|
||||
|
||||
[rhel7-extras]
|
||||
name = rhel7 extras $basearch
|
||||
baseurl=http://infrastructure.fedoraproject.org/repo/rhel/rhel7/$basearch/rhel-7-server-extras-rpms
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
|
||||
|
||||
[rhel7-ha]
|
||||
name = rhel7 ha $basearch
|
||||
baseurl=http://infrastructure.fedoraproject.org/repo/rhel/rhel7/$basearch/rhel-ha-for-rhel-7-server-rpms/
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
|
||||
@@ -1,4 +0,0 @@
[rhel7-atomic-host]
name = rhel7 Atomic Host $basearch
baseurl=http://infrastructure.fedoraproject.org/repo/rhel/rhel7/$basearch/rhel-7-server-atomic-host-rpms
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
@@ -1,7 +1,10 @@
208.75.88.4
216.93.242.12
107.170.242.27
108.166.189.70
199.223.248.98
# [clock.redhat.com]
66.187.233.4
# [time.nist.gov]
192.43.244.18
# [otc1.psu.edu]
128.118.25.5
# [clock.isc.org]
204.152.184.72
# [loopback]
127.127.1.0

@@ -1,17 +1,42 @@
|
||||
#ausil
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAD9QDskl41P2f4wqBuDBRD3VJ7MfKD6gMetMEaOy2b/CzfxN1vzeoxEvUxefi4+uh5b5ht5+BhQVhvBV7sTxxYftEH+B7IRmWigqcS1Ndnw+ML6zCbSTCJOqDvTLxmkZic0NUBIBP907ztMCoZjaOW9SSCrdA9Vp87V3x/KEQaeSNntmnFqtnpQI/N0NlmqxB78p97W/QDpLuftqJ33sM0uyvxXSusThLSFBHjisezsWox49nEKY8HW+Kwkmw+k7EF4tsDWymPB+S0gMsMlTxzjutNASVDmn6H+lgkzns+5Xxii4/mZWrcjqfLuH7vCI2mWykZJ6ek0LiQea9tNN+KZomqX6NbTUK3riaDPrZPNexa4I83Fp+DYNmYgnGMInqn+cZ5PoUJ3u3LaqZGBQeuuONTw0yQ8Pkkn5xibpPO6qblHKcet0pfmWQ5ab+5BDrsyLcPXolMci5h45GNWebr7UMuXT6+q+EolnYgbgDzzGJ4xPohF04OW8CwflK64KEnYcqlGs+DF4TNgGFlhKiyCWfXSjizmQusxn17ayi6+yrkiGeqfz72qyZ1pSKlwA8XRYC2VkAAquJP6zAtAKjCUdmRTSyYgCpoIAlMwBO07BiPLLov6lKdphZYY1DI7pTXA98fhVU04PDqJJYR1GKkttmCsjbRWnxjkPl/Zka1+ei3k9DNidT6j4hFj+uTj8SS70qZUtKLNpc5IcedHaGEK0vcXJm9lIEKBIEnN0PCLZCa4kQZnfdsbuep1fbXNf4WYPXea29aRKJc4hiqsdrccTp4KueHgWt1Jj6CZDZcFgX+NlUVWwk6djgjRzHUryExtsjCcgGMPRJWdUnVcpgkQ1qJhEXng3W+nFFboArWfwU8u1pXEdeE1Z+m+ows3nJHdEgQevyy/cUx6BPNPZkBh10MWskSV8Z+vb02vJB+QikRMwQs3Ywf6RMaZFrBkWD4FfUaU24f4wgtPQN7j5xxJ2rWLJ/s9ZOWSl9yrytC6ZUQwmayLmiPUdm4u/7ZZmaly39K1YWqFDl3eUrRAZwf1L/NAqFu/qcQQ3Xf20K0nI55nVbZ8ODyx6BtfwoioblnTEcehK0uud5Vamc5mfpErFY0agEecsc0sMZO+ky9pf/gCUdM7je7kMDI2hdx61fOa8Wypb5u9WNBWKRKx8xT1XUKhb2uFumm3sR1iNm1Qhj92mo/NO2aETOA1lsYSL0XK571Yy0iFK3X1nOqp/gCsEGLI8OPQk6XuFqv8hmfiIXNKV8IwuDStw7eIvuQIgT7bmMkj+1Ca25foSmg3w5FqJux1gO9t5F018LeQZ6LVlYHZaQnaN+eTU7KfoCozhWw1H9pprDz Dennis Gilmore
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAACAEAstHxky7hl1inyHBy+q/9M+Aen2HSfy8IoW+sAO6HSuHEUT7qWB8AlSNjHhahjXx7sy/BUkUed+NB/177rjlThokZDJ0yoM9KKymp26ETGaamBSkWBxZatTj96BWfD0P2K9jc/9vxtgKBq3VK9UaOt6VtJ9q6mKY3DdWLZn+K6iGQAKMCAgd8cCMgD6epBB5/litz7WhYv+aYTyjZGUGbBojQUiWgXDv9lR7p0w+VP7pnZEeb3//k4pZhsPrKFwwRVRLxBvWgVKNvA6nMXmsdikHCLLj8YAevhEY1xAba+iCKOpTqT7Bu+1Fnb9St8u5iDod21gRmN7MGGWYsO+Iu2MNAW9sw2nsA/sdNR0HEEgBqJLhERjGv399fWKyiZaF90n59lg8Pb6EzE6wHRs6rSB+9uKApBzPk99BEHLvC6mhn6RjrOC+TWSTcmXojAwQYCadqIdgWUaBsxaugKEXBFcmRuDWtpDfsqmM1kjeGU6MiaMlqPW0KjsMaVVChLO5ZvB/T7qW4wr5ZjLri475MuHocCMP0ECSUk7I3YW2h8RU6FEFmTpuULFRQo01iPreY5XJ7l0+xy2eggAWo+X2h3nGjXhCPOelBg+LYe0WOmPgB5oc1m5HZtFTcFzYbhAE+xQKlbwNeYT8HmNmEMhPjVoNyOOV7NAap+ueS2u/7li5D59O5Iy8aa5n/WiuYfkqH4pG796nFyLr5L/LVudzyaYFb/Gk8C1j/NAWYw53D/9aOA277HHe5t0/daJhbo98u0asF5mvPld3swPuPqkEZzgUfmNgH5CkvcQcMzaOvj6qr6xNmQfgsHroCShb46kplQ2uSf1pMAqsjN7jGhk6l+Bu6hKHnJKhZJVLiuAZtgYvkCB1ahaO3wRVozA1VKCAlqHOqoCq4YLIobUL95H08Kwcz7vIRIadX1TkOoLb2EwPkE/xrhDp4BySh+j6YNklSBkiRHvJMBNnRIj8NTRjYyj2o1Om7kJ770lEdryg2og8QBaFWCmFkwzg1QVrBOuu0dN7kt2l7VI7Ib4lavKSVTrqUdxdSbthUlu/b4Qif+pbyEtUFgykRsHVs+5Ofg7FZpsgCJ8rLFjzeVF/hAYX7t3XaIPLu+DL8kzamb/CRy1b7+iAw9nJbd7ED2SGyU6+c2coMPG23y6+YxgEmNG/rkCLCypkEEDOZe4DuMerZQ/RxMo06+glC6HC/3VN2dHlVLtEEV33B04/6Z0plAhqtjG7PVs08f8a5msV/VYn5ifa4z0oIXX1r5CIg3Ejp1JguLhBHpWa7YbS2Mwu6GAbD+hQfCYrsUkFonoOLu5czpITLo7ceJFTQmAt7OxZEoZBfmtYfzADQsQVYQb6J4QwvM3iKJOn30dgtYnJOVlDZEn+0fivedxoBAt9jHJ8lVp2ov/dOFnimi5V+2QIMB0fKTkChsk10zsDZ/KUk6zfijjEju0WfjRHCd357KswNv3aXHazfRIw77S2UOenD+xmUDZ6WgnxservUSDNDz7NldLf/gdPOMO4uSwKZixzsoCNioeLEmQv4gomNK7DyZBLMHLlWlbliqP+QWuIJO1rfoH2vaxzzA7l5tJW1gfnxm87RrrwIf9v5kpdJM6gQZxqmBCRsKQd5VkrEJ/xaFfkv080pWNV0drWTZW8fAAgfUNYB260Hyk3rHsjQlVtQxGJ1aAcgjMi3eGKQMwptbUMYHqct75czX6xp6zgXPiC/glX6AtuiZQ5bOI07imil20ien/ks/dnel8L+dmYDasL9m0B2jZ3lbl3eR1Dy7UhqGyERx//vYQapEBuwFcqQ9UdIWCGGG2Pte1I39BSehUUGSCOOD38a/GCu0l7OWZKdwq80MK/Ixgz4neiZQZ7MD2wPy6vk6Num18PZPN7OynMrI2UG5MViQ0GAhRgxwbUCvc7uKnGRqZo9q2mCabCxLbv+hJ4bppxpHHJxMDDXilTKMfZb0YRbvjBUi7LFKLN3MBMK2U1jHE+PjBgweqF8Jtuw04CQMxK3unajZOVkYAIq8IdMbw0oBVP4++eGB9z0x1eH+IsqL6IgknbbyoMgQqW9/8atm8HW2QYCX47oPd4FHs8rgJZk3bz8MwN3tp8WCRtYnJuwkWGWSq77ans0Ycl/tUfSSwUjnSvMsJnuSbxvdX0XbP5eRWikk0pJz5lM9sjYFOPHrQ44/U254yBa0N6UhyNTQnMGzRvY+fADE49b10hXZwCCrxpY9KvGr1XNJMnMcUke+4p9RS5LUwcZ8A6v7oWtZaZwnuBzvKk+HAn2gevD7Stjto+TnRCx1qcbx8iOhAEC6nvbLl+U313TmawrO/usrI5w3EFKP/4BnlKJDtNBeklJ0MpU3R1fmisqfegjuBW2bbaxq8Uo6m7uqPsYuAl7E6rOyZHLbtA8szvbQ46MSqAHezqxHJajWn2oZXMtbddgO5vlkxbRp3SSVKaPOeIj3XOGl78Owp4gFNRE0RY2EuUvrwUhXZR4wx1VHYjS6o9HAwOx3dH+pf1OiblUEanLQ9HLuOBkLhP8wn1M2slsSw+A1gyuI0ayjRujYFXdw6Mqp6XKTdU8vNue2c3d0I+TMifBypP0oJtxXmEoPp/VsU9yLKA2FF7Xvv/Xq1gtZcuZWAbSwMok/ENY1xeIFyjV+0yBidmax3jaf9yus/XEpyeBS3iIz63ymU10Kb2vrWjubg/sa2yd+q0y96dLdDRbnbwGwMmg6mXvTlVXf8c= ricky@padlock01.home.elrod.me
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDi5bNJQBrvT/YuvfLO0y6smZW5N+946uISkzmDi9myffLgHAZP4nBGeH/4GcB5ns9HJ19xVtbIwqOz4QwIqKh4gKU7DgaqND2Iu0bUUFL1KXPLGyAIW+9N3yHB+nKkH31alDnF4dpKkvO63DRkqh4ptxwEQbZDCFqn+vXuMnG4cPmDEweR3QZUt5m0Vc7HXzbehZxjUZ3xRWvT/pu+khBhJcRFkLlA60Fnqv7Q+MQP1C0Cpf3hiX1LcXUogXkNooAqx1YYRd8VqvI8e9yQW+a99x8FftnmXKlGCxP33ng6+U6Y2H7u3cRDrlRTbWqkry4SuUYo+6MtvZVgL0fw6PsZ jstanley@hawtness.rmrf.net
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJH1lA7WHRCbaFtvzbw0HxHYJstZjuXhax1+eL+SUJ5fFRGosEc4fLrSCP0gSFDfXmNzuspoBgcQTqnNO8FdIUwkJLDEu0vTQls1aT9YUXb+RVwKB7ULA3b1dqFkmOgLEjTJL9AplK4OJ9Su0kq6QBV4mXCxMsgEML/gn6r8muZmu2L/LdzUnxKKggyq7O5q1K/eW5Yy21fpvbHt2UPQX1f6gt4ty7E9Nnuhi7SHCI7fNIa+kHyIesfTm/SzeK/PY9rDwZKjuyS8o22GJXGEScJomK1cjMESH/J+t8Hffaj88BjGHNczvcnXAjq6y73VJQ9DiGLD4zmFquQMxDu0Tf kevin@jelerak.scrye.com
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDefONrBaBJlCxKtDwkYWVhf96lMhRQfwVJyBoBd4Pk6TqKMlAu2eST1xRZlV4cJSxAWgZpOaFgqJ5EGd6mq8PvVk+mKXdtX7CAoWm4f3c6otUFsFDCTw3gVvYSlEk23XBHuACsbAVNL4HmP+9C7PxQBePukbMBFD2smsyQkPcX7lZw+lDJW5lOTz3dHAA92bcopDycxRDI99gGkawzjlmxpm2C9nhRabKS6mpGw3N64d8hwHkkFbtHY7rS0/0Cka0geYYYv0NVki1IIctkhZE9LndcWbVcVe1pIlR0RyW2sorfgCgoa5fRZZhukUCtspdv981h/0b87RpRVUJKuRd1 lmacken@tomservo
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCsmLoA/97DrE7roCHOY7NdB5TV/g7oxAsk74HgHcFRYAbn/rkoa7r9ZsgR7qzwd6Z+5Z77qFqvl1Bs3XtJf+1vJ3kwdcNFdKTw1DgTdE/rNPI7QzUgXKKKv/WCiU6UDBX4HHWq8Yuq4tkr/yepS8sLzMz2e0pHU4uWFQuvr5ttP9ABGohhDnPr0IcaT5vm+uBTJItJBrhqGws2fnVxhWEm8Y96AZb2vFZVwiMdcKKqfVZby3/wTuEtaDbv0krQNtLJcjaOTWLHWnxJEvLWSdFgkuIDvoNKR7ZV2lsmh5UD/smStgf8TkORR59r63dp2kWAn0/Jl59ARsdXDXGCiduF3GamxglTUA+kYbkN/PBQbl6o+nNKy4Q5TI53WNmhpdsbEJWCjzT+V1ju5JejFEHIhnWyBoBUWB2NKxWaSlToI2B9E0iJ0HK68IlA7bO4X7SD8q5cZBVTKMByFxt9uQXFeZeG7QRCPIsg6bXsirnFn5028iz+RfVFe3Mavp18v1hObvH6SDTczQauuAhTwYOtphaPZj+iHbaKvKndvlOWdGoyrNxgcx+t4loyEEcEWD0Astdp0bZD39nag94PD7hnoENOC0oE6mbtyUuSCGrU6ogee8qxYAt0AP3Rq1LLaRWXqe/1rM5A9oaDNwNkWA/JWbJbZQf0vvWTZmTib3rfew== mdomsch@fedoraproject.org
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7U0WbKLL/D6iR03/vdDZJ8Lkj1jjAkindSvC4PkXVgi6qJo1YBZnIgsmoQopYcra2yzHFt58crygIh79P/rpQowWY99W+Sk4kB9UNuiAiX/LRi+1YdxwCKcRNTVOwuji6MGZoscACERmIjPY6P1oFPERoXhUkOuzPcrDK/0z/Bp9dpNRVZE/0zN6dvHA9QODLGvcFtgnX73SbZfoIbaVP/37IvOZvjGI1jxC5DwCmY+ihM13GpELP6BM8iihlnl1pjk1vtqPxD9g9Llr14Sc6cZJKl1WCulqhde4SEMOjpMJ8J8cGYBSsdh49hB36pdKQuTTnuCXpEt5Tl8PUKCrr mmcgrath@desktop.mmcgrath.net
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC3eVd6Ccegp1r1mhm7tPnlGUcw0zsAbR2p9hrFZ7RKxdIponuVV9ix4lgwpNEVDs0j4vxAApeLpJrsV8R8+YLUZO3Mzi+2s8nM8LXrKHtJT9wKKqoU3O/lC79drbWk3EMgETyP61Zpjkub0hwG2MjviPee63zCuRbxzxyalzk+AtwkRSxYaS2Ha0uKxGDiq1c/Iu6HRgm8HrtW+Pr6QbSSoHLhGUpR0HkgoC6852xXGhrRMkzXXbD9L6vaK9F39YmzD7Z8yey+xDTFW529avkEIWDeqBpbae+HjKqEQaBx71/rcmXhqKYrEagzUGpS8Bwskp3JMksd/v9tMuUhGQ2XaooCeKzvM0KnVUk/Q031ZtjNYxLpy/rEqbyt18+8wYOvVoGgnRZ/yJ/UVwYbGJrttYrrQmaJv7b357bkgDJobkIki+zGzi1xkvb85JWEt0mfh38H2vCnpwQtSAIyF/hmrS+1xsD/oAoc83IUhsVYcDhLbBEVKMX2IsJLMAPwCE6GexRYyVE5vEN4PMV9A8VmGuIC3IzkPEbStdtlbP4ttNKtfwS+MrY+ceAABDixls6xpedgT1he44R+7C1p+w4uj4TnYReLVce6+KgfJ6mz8CTXVULLWM4l2H3PylEUyoHGRDpVanGAvm7h2D0HgxErWIkjZkL79GFhzQc1xjzixQ== notting@nostromo.devel.redhat.com
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDDAeAohiRJ2v/RO7R9GS93TF92Gc9ixK6HM7wlbMdlZ4yYAbeoEX8VpeNaSTfo/Nw3zazr9VpmpHg+H70K8ljQsPgRwcgpetRVpF55M5FYjqM5oM+N94HV3nSGcnWbSIho1R31DaDH2ptxVqgh2m5DG7Bc45w9Bd4wjfdQ8nBrGv93tuH7X/cee4g6GvexLm5nXhAngdEmiyxw5MHuJAvj+54l4wMXRWpeF6XlI2iamW42nLSfRMCFkGNiXvBm8zkfkeH2L7I2cNKXXoP/cPCd3G/teIsI9FDqYpZ6CS0zMkWhlTuh7rlCjc9+nJsLdDLgwhb75skiUOOfimGvCCxWeHuCsSL+KpCu4AgI9UAVgO6xblDlmbQXxlGopep29U/s00W/0qv3Zp8Ks4Za0xHdoIwHiaLM0OYymFaNDd3ZqFG0FN23ZjcGqUmFGhGfUQRDt72+e9HtXlBJ0mUaCX9+e4wFGTVciG1/5CKsLHCaLRf+knsWXrv2zcv9BoZ9SCAK32zCZw05wjcmr7jYDCTLmtC6kEBNaOeE9Qqi2oomo4ji8ybg+Qq+1BwOtJKExvmZaooBZud0qd24HmCU0/0ysw732jGcqexzxsCR0VArd+7LKexOD7KwMW0VUss6fdOWac9gwCLx9FaKYh8mVvcQjKhKGI3aO2sXRUWSbBJw8w== ricky@alpha.rzhou.org
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAgEAxnzCHH11nDM1m7yvqo6Uanq5vcZjBcs/mr3LccxwJ59ENzSXwUgEQy/P8vby9VKMwsskoaqZcvJdOSZBFhNV970NTPb69OIXPQAl/xhaLwiJOn606fB+/S8WepeuntS0qLiebbEiA9vIQLteZ+bWl1s/didD/sFo3/wItoTGA4GuShUu1AyWJx5Ue7Y34rwGR+kIvDoy2GHUcunn2PjGt4r3v2vpiR8GuK0JRupJAGYbYCiMBDRMkR0cgEyHW6+QQNqMlA6nRJjp94PcUMKaZK6Tc+6h5v8kLLtzuZ6ZupwMMC4X8sh85YcxqoW9DynrvO28pzaMNBHm7qr9LeY9PIhXscSa35GAcGZ7UwPK4aJAAuIzCf8BzazyvUM3Ye7GPCXHxUwY0kdXk+MHMVKFzZDChNp/ovgdhxNrw9Xzcs4yw7XYambN9Bk567cI6/tWcPuYLYD4ZJQP0qSXVzVgFEPss1lDcgd0k4if+pINyxM8eVFZVAqU+BMeDC+6W8HUUPgv6LiyTWs+xTXTuORwBTSF1pOqWB4LjqsCGIiMAc6n/xdALBGUN7qsuKDU6Q7bwPppaxypi4KCvuJsqW+8sDtMUaZ34I5Zo1q7cu03wqnOljUGoAY6IDn3J66F2KlPPyb/q3PDV3WbY/jnH16L29/xUA73nFUW1p+WXutwmSU= ssmoogen@ponyo.int.smoogespace.com
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDFZ3AD/I0OfU84IrK573amZptucuBrDxHoue/c+PUsD3MGIA6QXRceq3ZkLuz25OAAu53hFxzCE4d6eVS299rVR8Cd+tVU8aqBdTHzdqv52Vs8zRfXMW69sV7fhwRLaQDcRTwY90Wmz2MbZmN996XmJDNtUIWI2mML+PBYEdO0PyiB2ttb7mmA3SwtC/rwEMJL2YHh+bTzlJ9W4BgFcFwizMXU3mk5uGp2/q3nKzEvgTROM8yWvqdM34cRYpjFKyOlpo6k3SPt76hgDUEIsAu6Ul1S0FHTCRMIihcxZOSN4frMtXVjX0NhW9mKcn1IRBpzd0Yon/gPB8OJ31ojIIop spot@pterodactyl
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAEAQDfgKJEBuHFlFc8/IHDeIpdprNnAFQHkicXAFfAzIJSkhUaOJFjsulmgPZn2TJJpYqFAxYUjhWJOdrOwx7AHSg6gWu4TT4a0sTay+Z0eqZOShf5UL/M587DxJk1JZU8g812yDKZMc7Sv7K6zdteONnCvno1kALSg0F2MVMJXFjE/tSontkIRH6IuG19R19NGEj1h56uGwdfe78xjOmv5wk6RZBjaOKqiPSQKNqCKbY9Kyz6yrem2M5uxRK45u3wSPJdmopo8l/nwf0p6ydrUSL5C/aXGh7LPqh31eTBDQUbWHw9LQMk1SibMGQPwJt59lLMlzc5OQZAJEbadsDAgl6VVA6MZkBQROiK9E087kvPesMoGWE0KBgvTqzpBZj0uHATP9i097dv80gjupMyaePsnQOxk0wRho9nRkxRo18Drt3QPVND4YGHzahMe/YR2N83MkbnGoP8K+GsFhLMAp3NKh6yUofFxTgRiB6H8ULKf3CV+hlk0Z9RJR3CpgMTKILYHPlaleJqoP6sXg6tJxI0rUE+0jUKvaTj+N2gX0MjKfUINk5mTbjD2mdVrPtKOBvos2luNhY5nTDpJuAHQqnFHPlPw8l3lXC2VBWOjqfTeeS+qD7ArKe6F7IO5ZNxJ2mTUuodhaPySta1MS37DWoz6UqeJu+wKIsHok90+EU4aAvUABh3RXSQA1E3IaxkooMhhrdIQO6K4L0M+CZ7lP35sW5pnwsN4sFlPec9Xn5e15LTlb9yFlx7Nm4DE2SX1s9QyMRE7z0LNO0X7wiihojuyQM6OQwc+ZaaDw5HerBisX/3LcC9osVLQQg1pt91YcCczUQ08qfUJV6aOD962K+EGzVFQGGauJDzgEH9BHQg7QwCWr0f3mu8/TNBzys2c0YsywDUc3AT1KP6TEJcR/dy6WbhJD3qyO/BLfCzRrHUOIaz+WbwmfTX8tGEQnVV5sEkZ39PWA1hRQ83b3MNV8cRJl+h/FnTk62yM4ZqGu73+x8JiEG3HAJp9/xYfNSwg8++PojJBXe+yM6DrTh5fTnBhxatLEKB658p8jTqJtF4+YD9D8+L39xEns6GQ7FphNqTC6IcpXyqq+zNuzF7vs/T+5n7978dUs3sK6YpBX4BlDxK6MsRF1WYqajEVeBJEMwdX2rfGkN9B5GfWdmdrzBjZQ6yyvlx5Dg++qgxpMiVOXSnw5v7H03PrT1we9wKre/2SQ1A2Oq/UDt/7tR2cMLoaPDNBpFT1W44LJB7o9iDT9YHUG3dC7R8JoeJ5YjyFmxbUQ5xg1oHnrBaPrGCuEYdQWhuDmp9Px2yRu8Agxzr9rNCZ/W8nWJVmvwvlXoldrum2rAECx0wiWqBhQ/+eX65 badger@unaka.lan
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAmS3g5fSXizcCqKMI1n5WPFrfMyu7BMrMkMYyck07rB/cf2orO8kKj5schjILA8NYJFStlv2CGRXmQlendj523FPzPmzxvTP/OT4qdywa4LKGvAxOkRGCMMxWzVFLdEMzsLUE/+FLX+xd1US9UPLGRsbMkdz4ORCc0G8gqTr835H56mQPI+/zPFeQjHoHGYtQA1wnJH/0LCuFFfU82IfzrXzFDIBAA5i2S+eEOk7/SA4Ciek1CthNtqPX27M6UqkJMBmVpnAdeDz2noWMvlzAAUQ7dHL84CiXbUnF3hhYrHDbmD+kEK+KiRrYh3PT+5YfEPVI/xiDJ2fdHGxY7Dr2TQ== root@lockbox01.phx2.fedoraproject.org
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDnI+8JwOdXUO6T7gI6oXHUG4oQJsMsCwEGnRBjU4po93i9g9C5sShgqJMvBI2wzDdgL/xOFJeHuo+WTP6W/oiv8KHEco3wXSI4OlsPanORGn2TajwzEaYlfxJNlQPvmuxFxcrfkPF8cOGa0DRNTLZK7abO3tKfZV7IJyNX3Z0LFZ+VwcJBy1ryg0GonMYkjEreiAgJyGCJ1crnKiRMPSu/QONb0MTytMlJRtc/Lfi/KkT8C/LQ/e3zA5DWo9Ykb79M1k4MmtmE8mIUlWUQ9hagMhCj3/6Uze04H48fpYzDPr6AHU6rqxLTdBGgLCeSIUkE1ReZpAk2E+QAB/fTliydT93ig5i2RDt3YHcAa994C85bc0D+A21u0H/LzR1wbIItx+MpOkZePHevDSe4y8ULx0cUiEHxmTTZ2C6j+1EqaP5PeWEqlU3iXTgiqOzTEwfEaH7nScBpGbFmPnzdgO7xLuKebnvWjGu6d8Jd41KN5dN5WNMJaNEXBl65ySfeQYCCX/JZ5bfvC/07zAKj0/RKOFMyS07rb0rKh3EBcRx/tHgCq0hJ23NwfkShchj7v2Zh+JjgHKBv1+ZiIwnx2/WuYwvKwyqXZ5Jpy+lgxcC7l11w1ZN3tCd66E6NdU8AJIOz0n+trIorsipQBY0In3ZBLUU0PUYwno73e7ZabgcE7Q== patrick-new@fedora.thuis.local
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDep2yv5JTFJ0IdCiqumMFfNdu3H5Ej/rVVDEotS+3n5+1plKvajPXOA9c/0RLrBC/vL8LqDVrxBaiCvPFCIRN9a3Y1ru3Dwg++NmcMEvYq/H3SMHhZsH1yjlCD2r38znpX+D+CBMQnn7F5jqYFAnaMeESrgGGFFANfJN9HdHjb6eIrBGJyUOJ2JnZnhLFT5y7ru2xRMDmgsO3U+crmecYAeX/4iUadUxit36defAniVOA/3Jwva4Gjz73vIDTHNy1mxB8Y2ZBBl9WcL4qHc6wnAyFaiULcT5++Gdjn+MIyL86G/7mIIgC+fcVk/5JrdwMBiAZYMUZO/pzPobOe0spF threebean@marat
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC2xAeq5uO72kY4mSFgFl9ZSveiAqe4tUv8hemrxwZH+w24RFOGrW1nOV+hjQhRpYVNwvqJkrd9N7VY/HXkd9df2AgQyYoiVfeMPTA7lB0/e/S1Bd6XGdWudvqRU1O6Rug0j3RQOuz7WDJgnanBVcBl8+X7EaPGpv9aILgh6CJDOVAO2GgaFdzI7CHtR99CMqNG7BsQF8C9Y8ALK+8HOPRE0R1wzgaAw85HTo0gyIWcrZqr4HI/QDuLjUQ6AZSgzE7dTiwZuFnUjLBnL0YP1bxJglt9IFx6r6jvdp/yMD+Bn/91WvmBL/AD+GIQ/ZydoeLo+JQW22ibiX/SzdAE4Cd3 pingou@FedoraProject
|
||||
|
||||
#codeblock
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAACAEAstHxky7hl1inyHBy+q/9M+Aen2HSfy8IoW+sAO6HSuHEUT7qWB8AlSNjHhahjXx7sy/BUkUed+NB/177rjlThokZDJ0yoM9KKymp26ETGaamBSkWBxZatTj96BWfD0P2K9jc/9vxtgKBq3VK9UaOt6VtJ9q6mKY3DdWLZn+K6iGQAKMCAgd8cCMgD6epBB5/litz7WhYv+aYTyjZGUGbBojQUiWgXDv9lR7p0w+VP7pnZEeb3//k4pZhsPrKFwwRVRLxBvWgVKNvA6nMXmsdikHCLLj8YAevhEY1xAba+iCKOpTqT7Bu+1Fnb9St8u5iDod21gRmN7MGGWYsO+Iu2MNAW9sw2nsA/sdNR0HEEgBqJLhERjGv399fWKyiZaF90n59lg8Pb6EzE6wHRs6rSB+9uKApBzPk99BEHLvC6mhn6RjrOC+TWSTcmXojAwQYCadqIdgWUaBsxaugKEXBFcmRuDWtpDfsqmM1kjeGU6MiaMlqPW0KjsMaVVChLO5ZvB/T7qW4wr5ZjLri475MuHocCMP0ECSUk7I3YW2h8RU6FEFmTpuULFRQo01iPreY5XJ7l0+xy2eggAWo+X2h3nGjXhCPOelBg+LYe0WOmPgB5oc1m5HZtFTcFzYbhAE+xQKlbwNeYT8HmNmEMhPjVoNyOOV7NAap+ueS2u/7li5D59O5Iy8aa5n/WiuYfkqH4pG796nFyLr5L/LVudzyaYFb/Gk8C1j/NAWYw53D/9aOA277HHe5t0/daJhbo98u0asF5mvPld3swPuPqkEZzgUfmNgH5CkvcQcMzaOvj6qr6xNmQfgsHroCShb46kplQ2uSf1pMAqsjN7jGhk6l+Bu6hKHnJKhZJVLiuAZtgYvkCB1ahaO3wRVozA1VKCAlqHOqoCq4YLIobUL95H08Kwcz7vIRIadX1TkOoLb2EwPkE/xrhDp4BySh+j6YNklSBkiRHvJMBNnRIj8NTRjYyj2o1Om7kJ770lEdryg2og8QBaFWCmFkwzg1QVrBOuu0dN7kt2l7VI7Ib4lavKSVTrqUdxdSbthUlu/b4Qif+pbyEtUFgykRsHVs+5Ofg7FZpsgCJ8rLFjzeVF/hAYX7t3XaIPLu+DL8kzamb/CRy1b7+iAw9nJbd7ED2SGyU6+c2coMPG23y6+YxgEmNG/rkCLCypkEEDOZe4DuMerZQ/RxMo06+glC6HC/3VN2dHlVLtEEV33B04/6Z0plAhqtjG7PVs08f8a5msV/VYn5ifa4z0oIXX1r5CIg3Ejp1JguLhBHpWa7YbS2Mwu6GAbD+hQfCYrsUkFonoOLu5czpITLo7ceJFTQmAt7OxZEoZBfmtYfzADQsQVYQb6J4QwvM3iKJOn30dgtYnJOVlDZEn+0fivedxoBAt9jHJ8lVp2ov/dOFnimi5V+2QIMB0fKTkChsk10zsDZ/KUk6zfijjEju0WfjRHCd357KswNv3aXHazfRIw77S2UOenD+xmUDZ6WgnxservUSDNDz7NldLf/gdPOMO4uSwKZixzsoCNioeLEmQv4gomNK7DyZBLMHLlWlbliqP+QWuIJO1rfoH2vaxzzA7l5tJW1gfnxm87RrrwIf9v5kpdJM6gQZxqmBCRsKQd5VkrEJ/xaFfkv080pWNV0drWTZW8fAAgfUNYB260Hyk3rHsjQlVtQxGJ1aAcgjMi3eGKQMwptbUMYHqct75czX6xp6zgXPiC/glX6AtuiZQ5bOI07imil20ien/ks/dnel8L+dmYDasL9m0B2jZ3lbl3eR1Dy7UhqGyERx//vYQapEBuwFcqQ9UdIWCGGG2Pte1I39BSehUUGSCOOD38a/GCu0l7OWZKdwq80MK/Ixgz4neiZQZ7MD2wPy6vk6Num18PZPN7OynMrI2UG5MViQ0GAhRgxwbUCvc7uKnGRqZo9q2mCabCxLbv+hJ4bppxpHHJxMDDXilTKMfZb0YRbvjBUi7LFKLN3MBMK2U1jHE+PjBgweqF8Jtuw04CQMxK3unajZOVkYAIq8IdMbw0oBVP4++eGB9z0x1eH+IsqL6IgknbbyoMgQqW9/8atm8HW2QYCX47oPd4FHs8rgJZk3bz8MwN3tp8WCRtYnJuwkWGWSq77ans0Ycl/tUfSSwUjnSvMsJnuSbxvdX0XbP5eRWikk0pJz5lM9sjYFOPHrQ44/U254yBa0N6UhyNTQnMGzRvY+fADE49b10hXZwCCrxpY9KvGr1XNJMnMcUke+4p9RS5LUwcZ8A6v7oWtZaZwnuBzvKk+HAn2gevD7Stjto+TnRCx1qcbx8iOhAEC6nvbLl+U313TmawrO/usrI5w3EFKP/4BnlKJDtNBeklJ0MpU3R1fmisqfegjuBW2bbaxq8Uo6m7uqPsYuAl7E6rOyZHLbtA8szvbQ46MSqAHezqxHJajWn2oZXMtbddgO5vlkxbRp3SSVKaPOeIj3XOGl78Owp4gFNRE0RY2EuUvrwUhXZR4wx1VHYjS6o9HAwOx3dH+pf1OiblUEanLQ9HLuOBkLhP8wn1M2slsSw+A1gyuI0ayjRujYFXdw6Mqp6XKTdU8vNue2c3d0I+TMifBypP0oJtxXmEoPp/VsU9yLKA2FF7Xvv/Xq1gtZcuZWAbSwMok/ENY1xeIFyjV+0yBidmax3jaf9yus/XEpyeBS3iIz63ymU10Kb2vrWjubg/sa2yd+q0y96dLdDRbnbwGwMmg6mXvTlVXf8c= ricky@padlock01.home.elrod.me
|
||||
|
||||
#jstanley
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDi5bNJQBrvT/YuvfLO0y6smZW5N+946uISkzmDi9myffLgHAZP4nBGeH/4GcB5ns9HJ19xVtbIwqOz4QwIqKh4gKU7DgaqND2Iu0bUUFL1KXPLGyAIW+9N3yHB+nKkH31alDnF4dpKkvO63DRkqh4ptxwEQbZDCFqn+vXuMnG4cPmDEweR3QZUt5m0Vc7HXzbehZxjUZ3xRWvT/pu+khBhJcRFkLlA60Fnqv7Q+MQP1C0Cpf3hiX1LcXUogXkNooAqx1YYRd8VqvI8e9yQW+a99x8FftnmXKlGCxP33ng6+U6Y2H7u3cRDrlRTbWqkry4SuUYo+6MtvZVgL0fw6PsZ jstanley@hawtness.rmrf.net
|
||||
|
||||
#kevin
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJH1lA7WHRCbaFtvzbw0HxHYJstZjuXhax1+eL+SUJ5fFRGosEc4fLrSCP0gSFDfXmNzuspoBgcQTqnNO8FdIUwkJLDEu0vTQls1aT9YUXb+RVwKB7ULA3b1dqFkmOgLEjTJL9AplK4OJ9Su0kq6QBV4mXCxMsgEML/gn6r8muZmu2L/LdzUnxKKggyq7O5q1K/eW5Yy21fpvbHt2UPQX1f6gt4ty7E9Nnuhi7SHCI7fNIa+kHyIesfTm/SzeK/PY9rDwZKjuyS8o22GJXGEScJomK1cjMESH/J+t8Hffaj88BjGHNczvcnXAjq6y73VJQ9DiGLD4zmFquQMxDu0Tf kevin@jelerak.scrye.com
|
||||
|
||||
#lmacken
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDefONrBaBJlCxKtDwkYWVhf96lMhRQfwVJyBoBd4Pk6TqKMlAu2eST1xRZlV4cJSxAWgZpOaFgqJ5EGd6mq8PvVk+mKXdtX7CAoWm4f3c6otUFsFDCTw3gVvYSlEk23XBHuACsbAVNL4HmP+9C7PxQBePukbMBFD2smsyQkPcX7lZw+lDJW5lOTz3dHAA92bcopDycxRDI99gGkawzjlmxpm2C9nhRabKS6mpGw3N64d8hwHkkFbtHY7rS0/0Cka0geYYYv0NVki1IIctkhZE9LndcWbVcVe1pIlR0RyW2sorfgCgoa5fRZZhukUCtspdv981h/0b87RpRVUJKuRd1 lmacken@tomservo
|
||||
|
||||
#mdomsch
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCsmLoA/97DrE7roCHOY7NdB5TV/g7oxAsk74HgHcFRYAbn/rkoa7r9ZsgR7qzwd6Z+5Z77qFqvl1Bs3XtJf+1vJ3kwdcNFdKTw1DgTdE/rNPI7QzUgXKKKv/WCiU6UDBX4HHWq8Yuq4tkr/yepS8sLzMz2e0pHU4uWFQuvr5ttP9ABGohhDnPr0IcaT5vm+uBTJItJBrhqGws2fnVxhWEm8Y96AZb2vFZVwiMdcKKqfVZby3/wTuEtaDbv0krQNtLJcjaOTWLHWnxJEvLWSdFgkuIDvoNKR7ZV2lsmh5UD/smStgf8TkORR59r63dp2kWAn0/Jl59ARsdXDXGCiduF3GamxglTUA+kYbkN/PBQbl6o+nNKy4Q5TI53WNmhpdsbEJWCjzT+V1ju5JejFEHIhnWyBoBUWB2NKxWaSlToI2B9E0iJ0HK68IlA7bO4X7SD8q5cZBVTKMByFxt9uQXFeZeG7QRCPIsg6bXsirnFn5028iz+RfVFe3Mavp18v1hObvH6SDTczQauuAhTwYOtphaPZj+iHbaKvKndvlOWdGoyrNxgcx+t4loyEEcEWD0Astdp0bZD39nag94PD7hnoENOC0oE6mbtyUuSCGrU6ogee8qxYAt0AP3Rq1LLaRWXqe/1rM5A9oaDNwNkWA/JWbJbZQf0vvWTZmTib3rfew== mdomsch@fedoraproject.org
|
||||
|
||||
#mmcgrath
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7U0WbKLL/D6iR03/vdDZJ8Lkj1jjAkindSvC4PkXVgi6qJo1YBZnIgsmoQopYcra2yzHFt58crygIh79P/rpQowWY99W+Sk4kB9UNuiAiX/LRi+1YdxwCKcRNTVOwuji6MGZoscACERmIjPY6P1oFPERoXhUkOuzPcrDK/0z/Bp9dpNRVZE/0zN6dvHA9QODLGvcFtgnX73SbZfoIbaVP/37IvOZvjGI1jxC5DwCmY+ihM13GpELP6BM8iihlnl1pjk1vtqPxD9g9Llr14Sc6cZJKl1WCulqhde4SEMOjpMJ8J8cGYBSsdh49hB36pdKQuTTnuCXpEt5Tl8PUKCrr mmcgrath@desktop.mmcgrath.net
|
||||
|
||||
#notting
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC3eVd6Ccegp1r1mhm7tPnlGUcw0zsAbR2p9hrFZ7RKxdIponuVV9ix4lgwpNEVDs0j4vxAApeLpJrsV8R8+YLUZO3Mzi+2s8nM8LXrKHtJT9wKKqoU3O/lC79drbWk3EMgETyP61Zpjkub0hwG2MjviPee63zCuRbxzxyalzk+AtwkRSxYaS2Ha0uKxGDiq1c/Iu6HRgm8HrtW+Pr6QbSSoHLhGUpR0HkgoC6852xXGhrRMkzXXbD9L6vaK9F39YmzD7Z8yey+xDTFW529avkEIWDeqBpbae+HjKqEQaBx71/rcmXhqKYrEagzUGpS8Bwskp3JMksd/v9tMuUhGQ2XaooCeKzvM0KnVUk/Q031ZtjNYxLpy/rEqbyt18+8wYOvVoGgnRZ/yJ/UVwYbGJrttYrrQmaJv7b357bkgDJobkIki+zGzi1xkvb85JWEt0mfh38H2vCnpwQtSAIyF/hmrS+1xsD/oAoc83IUhsVYcDhLbBEVKMX2IsJLMAPwCE6GexRYyVE5vEN4PMV9A8VmGuIC3IzkPEbStdtlbP4ttNKtfwS+MrY+ceAABDixls6xpedgT1he44R+7C1p+w4uj4TnYReLVce6+KgfJ6mz8CTXVULLWM4l2H3PylEUyoHGRDpVanGAvm7h2D0HgxErWIkjZkL79GFhzQc1xjzixQ== notting@nostromo.devel.redhat.com
|
||||
|
||||
#ricky
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDDAeAohiRJ2v/RO7R9GS93TF92Gc9ixK6HM7wlbMdlZ4yYAbeoEX8VpeNaSTfo/Nw3zazr9VpmpHg+H70K8ljQsPgRwcgpetRVpF55M5FYjqM5oM+N94HV3nSGcnWbSIho1R31DaDH2ptxVqgh2m5DG7Bc45w9Bd4wjfdQ8nBrGv93tuH7X/cee4g6GvexLm5nXhAngdEmiyxw5MHuJAvj+54l4wMXRWpeF6XlI2iamW42nLSfRMCFkGNiXvBm8zkfkeH2L7I2cNKXXoP/cPCd3G/teIsI9FDqYpZ6CS0zMkWhlTuh7rlCjc9+nJsLdDLgwhb75skiUOOfimGvCCxWeHuCsSL+KpCu4AgI9UAVgO6xblDlmbQXxlGopep29U/s00W/0qv3Zp8Ks4Za0xHdoIwHiaLM0OYymFaNDd3ZqFG0FN23ZjcGqUmFGhGfUQRDt72+e9HtXlBJ0mUaCX9+e4wFGTVciG1/5CKsLHCaLRf+knsWXrv2zcv9BoZ9SCAK32zCZw05wjcmr7jYDCTLmtC6kEBNaOeE9Qqi2oomo4ji8ybg+Qq+1BwOtJKExvmZaooBZud0qd24HmCU0/0ysw732jGcqexzxsCR0VArd+7LKexOD7KwMW0VUss6fdOWac9gwCLx9FaKYh8mVvcQjKhKGI3aO2sXRUWSbBJw8w== ricky@alpha.rzhou.org
|
||||
|
||||
#skvidal
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDjlnCEiFMrKpkiIBjs5IW1+RXDald3aKvTszj0hUw9Gl6w3vt3RAiqTD/XRKcNdP0+pVIK/I4KexKfZzemNZ8UYmZ+a9EK+Gj7OQbJv7TQDeR0zyJ8ZgFXaWoN+CnWXLO2mp9poysUR6CILjaDJt4GDxJaD+bebRu+zxUQSlgrjObhIUTSfwsEJu++zK+fy4+xSEMG7SANEJHd+zOAw6+isLnnbp8qY2fs3reKpc8XPkyJscLU4BQV2cGXwlPUhzPVv/itUUV/uWHeAqoz2i5XG4C0/BXk6D85qkGIyE08Nl3COxn6giivrdTIH6W4dUtBdYgTMZ3RgMHL9ClLpS17 skvidal@opus
|
||||
|
||||
#smooge
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAgEAxnzCHH11nDM1m7yvqo6Uanq5vcZjBcs/mr3LccxwJ59ENzSXwUgEQy/P8vby9VKMwsskoaqZcvJdOSZBFhNV970NTPb69OIXPQAl/xhaLwiJOn606fB+/S8WepeuntS0qLiebbEiA9vIQLteZ+bWl1s/didD/sFo3/wItoTGA4GuShUu1AyWJx5Ue7Y34rwGR+kIvDoy2GHUcunn2PjGt4r3v2vpiR8GuK0JRupJAGYbYCiMBDRMkR0cgEyHW6+QQNqMlA6nRJjp94PcUMKaZK6Tc+6h5v8kLLtzuZ6ZupwMMC4X8sh85YcxqoW9DynrvO28pzaMNBHm7qr9LeY9PIhXscSa35GAcGZ7UwPK4aJAAuIzCf8BzazyvUM3Ye7GPCXHxUwY0kdXk+MHMVKFzZDChNp/ovgdhxNrw9Xzcs4yw7XYambN9Bk567cI6/tWcPuYLYD4ZJQP0qSXVzVgFEPss1lDcgd0k4if+pINyxM8eVFZVAqU+BMeDC+6W8HUUPgv6LiyTWs+xTXTuORwBTSF1pOqWB4LjqsCGIiMAc6n/xdALBGUN7qsuKDU6Q7bwPppaxypi4KCvuJsqW+8sDtMUaZ34I5Zo1q7cu03wqnOljUGoAY6IDn3J66F2KlPPyb/q3PDV3WbY/jnH16L29/xUA73nFUW1p+WXutwmSU= ssmoogen@ponyo.int.smoogespace.com
|
||||
|
||||
#spot
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDFZ3AD/I0OfU84IrK573amZptucuBrDxHoue/c+PUsD3MGIA6QXRceq3ZkLuz25OAAu53hFxzCE4d6eVS299rVR8Cd+tVU8aqBdTHzdqv52Vs8zRfXMW69sV7fhwRLaQDcRTwY90Wmz2MbZmN996XmJDNtUIWI2mML+PBYEdO0PyiB2ttb7mmA3SwtC/rwEMJL2YHh+bTzlJ9W4BgFcFwizMXU3mk5uGp2/q3nKzEvgTROM8yWvqdM34cRYpjFKyOlpo6k3SPt76hgDUEIsAu6Ul1S0FHTCRMIihcxZOSN4frMtXVjX0NhW9mKcn1IRBpzd0Yon/gPB8OJ31ojIIop spot@pterodactyl
|
||||
|
||||
#toshio
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAEAQDfgKJEBuHFlFc8/IHDeIpdprNnAFQHkicXAFfAzIJSkhUaOJFjsulmgPZn2TJJpYqFAxYUjhWJOdrOwx7AHSg6gWu4TT4a0sTay+Z0eqZOShf5UL/M587DxJk1JZU8g812yDKZMc7Sv7K6zdteONnCvno1kALSg0F2MVMJXFjE/tSontkIRH6IuG19R19NGEj1h56uGwdfe78xjOmv5wk6RZBjaOKqiPSQKNqCKbY9Kyz6yrem2M5uxRK45u3wSPJdmopo8l/nwf0p6ydrUSL5C/aXGh7LPqh31eTBDQUbWHw9LQMk1SibMGQPwJt59lLMlzc5OQZAJEbadsDAgl6VVA6MZkBQROiK9E087kvPesMoGWE0KBgvTqzpBZj0uHATP9i097dv80gjupMyaePsnQOxk0wRho9nRkxRo18Drt3QPVND4YGHzahMe/YR2N83MkbnGoP8K+GsFhLMAp3NKh6yUofFxTgRiB6H8ULKf3CV+hlk0Z9RJR3CpgMTKILYHPlaleJqoP6sXg6tJxI0rUE+0jUKvaTj+N2gX0MjKfUINk5mTbjD2mdVrPtKOBvos2luNhY5nTDpJuAHQqnFHPlPw8l3lXC2VBWOjqfTeeS+qD7ArKe6F7IO5ZNxJ2mTUuodhaPySta1MS37DWoz6UqeJu+wKIsHok90+EU4aAvUABh3RXSQA1E3IaxkooMhhrdIQO6K4L0M+CZ7lP35sW5pnwsN4sFlPec9Xn5e15LTlb9yFlx7Nm4DE2SX1s9QyMRE7z0LNO0X7wiihojuyQM6OQwc+ZaaDw5HerBisX/3LcC9osVLQQg1pt91YcCczUQ08qfUJV6aOD962K+EGzVFQGGauJDzgEH9BHQg7QwCWr0f3mu8/TNBzys2c0YsywDUc3AT1KP6TEJcR/dy6WbhJD3qyO/BLfCzRrHUOIaz+WbwmfTX8tGEQnVV5sEkZ39PWA1hRQ83b3MNV8cRJl+h/FnTk62yM4ZqGu73+x8JiEG3HAJp9/xYfNSwg8++PojJBXe+yM6DrTh5fTnBhxatLEKB658p8jTqJtF4+YD9D8+L39xEns6GQ7FphNqTC6IcpXyqq+zNuzF7vs/T+5n7978dUs3sK6YpBX4BlDxK6MsRF1WYqajEVeBJEMwdX2rfGkN9B5GfWdmdrzBjZQ6yyvlx5Dg++qgxpMiVOXSnw5v7H03PrT1we9wKre/2SQ1A2Oq/UDt/7tR2cMLoaPDNBpFT1W44LJB7o9iDT9YHUG3dC7R8JoeJ5YjyFmxbUQ5xg1oHnrBaPrGCuEYdQWhuDmp9Px2yRu8Agxzr9rNCZ/W8nWJVmvwvlXoldrum2rAECx0wiWqBhQ/+eX65 badger@unaka.lan
|
||||
|
||||
#ansible root key
|
||||
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAmS3g5fSXizcCqKMI1n5WPFrfMyu7BMrMkMYyck07rB/cf2orO8kKj5schjILA8NYJFStlv2CGRXmQlendj523FPzPmzxvTP/OT4qdywa4LKGvAxOkRGCMMxWzVFLdEMzsLUE/+FLX+xd1US9UPLGRsbMkdz4ORCc0G8gqTr835H56mQPI+/zPFeQjHoHGYtQA1wnJH/0LCuFFfU82IfzrXzFDIBAA5i2S+eEOk7/SA4Ciek1CthNtqPX27M6UqkJMBmVpnAdeDz2noWMvlzAAUQ7dHL84CiXbUnF3hhYrHDbmD+kEK+KiRrYh3PT+5YfEPVI/xiDJ2fdHGxY7Dr2TQ== root@lockbox01.phx2.fedoraproject.org
|
||||
|
||||
|
||||
10
files/copr/copr_bashrc
Normal file
@@ -0,0 +1,10 @@
# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi

if [ -f /srv/copr-work/copr/cloud/ec2rc.sh ]; then
. /srv/copr-work/copr/cloud/ec2rc.sh
fi
1
files/copr/fe/README
Normal file
@@ -0,0 +1 @@
in this dir is where we put all the configs for the copr frontend
21
files/copr/fe/httpd/coprs.conf
Normal file
@@ -0,0 +1,21 @@
NameVirtualHost *:80
LoadModule wsgi_module modules/mod_wsgi.so
WSGISocketPrefix /var/run/wsgi

<VirtualHost *:80>
ServerName copr-fe.cloud.fedoraproject.org

WSGIPassAuthorization On
WSGIDaemonProcess 127.0.0.1 user=copr-fe group=copr-fe threads=5
WSGIScriptAlias / /srv/copr-fe/copr/coprs_frontend/application
WSGIProcessGroup 127.0.0.1

ErrorLog logs/error_coprs
CustomLog logs/access_coprs common

<Directory /srv/copr-fe/copr>
WSGIApplicationGroup %{GLOBAL}
Order deny,allow
Allow from all
</Directory>
</VirtualHost>
@@ -75,21 +75,7 @@ var.socket_dir = home_dir + "/sockets"
|
||||
#######################################################################
|
||||
##
|
||||
## Load the modules.
|
||||
#include "modules.conf"
|
||||
server.modules = (
|
||||
"mod_access",
|
||||
"mod_setenv",
|
||||
"mod_redirect",
|
||||
"mod_indexfile",
|
||||
"mod_cgi"
|
||||
)
|
||||
|
||||
cgi.assign = ( ".pl" => "/usr/bin/perl",
|
||||
".cgi" => "/usr/bin/perl",
|
||||
".rb" => "/usr/bin/ruby",
|
||||
".erb" => "/usr/bin/eruby",
|
||||
".py" => "/usr/bin/python",
|
||||
".php" => "/usr/bin/php" )
|
||||
include "modules.conf"
|
||||
|
||||
##
|
||||
#######################################################################
|
||||
@@ -104,7 +90,7 @@ server.port = 80
|
||||
##
|
||||
## Use IPv6?
|
||||
##
|
||||
server.use-ipv6 = "disable"
|
||||
server.use-ipv6 = "enable"
|
||||
|
||||
##
|
||||
## bind to a specific IP
|
||||
@@ -126,7 +112,7 @@ server.groupname = "lighttpd"
|
||||
##
|
||||
## Document root
|
||||
##
|
||||
server.document-root = "/var/lib/copr/public_html"
|
||||
server.document-root = "/srv/copr-repo"
|
||||
|
||||
##
|
||||
## The value for the "Server:" response field.
|
||||
@@ -220,7 +206,7 @@ server.network-backend = "linux-sendfile"
|
||||
##
|
||||
## With SELinux enabled, this is denied by default and needs to be allowed
|
||||
## by running the following once : setsebool -P httpd_setrlimit on
|
||||
server.max-fds = 2048
|
||||
#server.max-fds = 2048
|
||||
|
||||
##
|
||||
## Stat() call caching.
|
||||
@@ -311,8 +297,8 @@ server.max-connections = 1024
|
||||
## index-file.names = ( "index.php", "index.rb", "index.html",
|
||||
## "index.htm", "default.htm" )
|
||||
##
|
||||
index-file.names = (
|
||||
"/dir-generator.php"
|
||||
index-file.names += (
|
||||
"index.xhtml", "index.html", "index.htm", "default.htm", "index.php"
|
||||
)
|
||||
|
||||
##
|
||||
@@ -459,22 +445,3 @@ server.upload-dirs = ( "/var/tmp" )
|
||||
#include_shell "cat /etc/lighttpd/vhosts.d/*.conf"
|
||||
##
|
||||
#######################################################################
|
||||
|
||||
$SERVER["socket"] == ":443" {
|
||||
ssl.engine = "enable"
|
||||
ssl.pemfile = "/etc/lighttpd/copr.fedorainfracloud.org.pem"
|
||||
ssl.ca-file = "/etc/lighttpd/copr.fedorainfracloud.org.intermediate.crt"
|
||||
ssl.disable-client-renegotiation = "enable"
|
||||
ssl.use-sslv2 = "disable"
|
||||
ssl.use-sslv3 = "disable"
|
||||
ssl.cipher-list = "ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4-SHA:RC4:HIGH:!MD5:!aNULL:!EDH:!AESGCM"
|
||||
}
|
||||
|
||||
$HTTP["url"] =~ "\.log\.gz$" {
|
||||
setenv.add-response-header = ( "Content-Encoding" => "gzip")
|
||||
mimetype.assign = ("" => "text/plain" )
|
||||
}
|
||||
|
||||
url.redirect = ( "^/results/sgallagh/cockpit-preview/(.+)" => "/results/@cockpit/cockpit-preview/$1" )
|
||||
|
||||
url.redirect += ( "^/results/(.*)/(.*)/mageia-(.*)-i386(.*)" => "/results/$1/$2/mageia-$3-i586$4" )
|
||||
@@ -6,11 +6,11 @@
|
||||
|
||||
# location of inventory file, eliminates need to specify -i
|
||||
|
||||
hostfile = /home/copr/provision/inventory
|
||||
hostfile = /srv/copr-work/provision/inventory
|
||||
|
||||
# location of ansible library, eliminates need to specify --module-path
|
||||
|
||||
library = /home/copr/provision/library:/usr/share/ansible
|
||||
library = /srv/copr-work/provision/library:/usr/share/ansible
|
||||
|
||||
# default module name used in /usr/bin/ansible when -m is not specified
|
||||
|
||||
@@ -48,11 +48,7 @@ sudo_user=root
|
||||
|
||||
# connection to use when -c <connection_type> is not specified
|
||||
|
||||
#transport=paramiko
|
||||
transport=ssh
|
||||
|
||||
# this is needed for paramiko, ssh already have this said in .ssh/config
|
||||
host_key_checking = False
|
||||
transport=paramiko
|
||||
|
||||
# remote SSH port to be used when --port or "port:" or an equivalent inventory
|
||||
# variable is not specified.
|
||||
@@ -73,12 +69,11 @@ remote_user=root
|
||||
|
||||
# additional plugin paths for non-core plugins
|
||||
|
||||
action_plugins = /usr/lib/python2.7/site-packages/ansible/runner/action_plugins:/home/copr/provision/action_plugins/
|
||||
action_plugins = /usr/lib/python2.6/site-packages/ansible/runner/action_plugins:/srv/copr-work/provision/action_plugins/
|
||||
|
||||
|
||||
private_key_file=/home/copr/.ssh/id_rsa
|
||||
|
||||
[paramiko_connection]
|
||||
record_host_keys=False
|
||||
|
||||
# nothing to configure yet
|
||||
|
||||
@@ -88,6 +83,6 @@ record_host_keys=False
|
||||
# will result in poor performance, so use transport=paramiko on older platforms rather than
|
||||
# removing it
|
||||
|
||||
ssh_args=-o PasswordAuthentication=no -o ControlMaster=auto -o ControlPersist=60s
|
||||
ssh_args=-o PasswordAuthentication=no -o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r
|
||||
|
||||
|
||||
75
files/copr/provision/builderpb.yml
Normal file
@@ -0,0 +1,75 @@
|
||||
---
|
||||
- name: check/create instance
|
||||
hosts: localhost
|
||||
user: copr
|
||||
gather_facts: False
|
||||
|
||||
vars:
|
||||
- keypair: buildsys
|
||||
- image: ami-0000000e
|
||||
- instance_type: m1.builder
|
||||
- security_group: builder
|
||||
|
||||
tasks:
|
||||
- name: spin it up
|
||||
local_action: ec2 keypair=${keypair} image=${image} type=${instance_type} wait=true group=${security_group}
|
||||
register: inst_res
|
||||
|
||||
- name: get its internal ip b/c openstack is sometimes stupid
|
||||
local_action: shell euca-describe-instances ${inst_res.instances[0].id} | grep INSTANCE | cut -f 18
|
||||
register: int_ip
|
||||
|
||||
- name: add it to the special group
|
||||
local_action: add_host hostname=${int_ip.stdout} groupname=builder_temp_group
|
||||
|
||||
- name: wait for the host to be hot
|
||||
local_action: wait_for host=${int_ip.stdout} port=22 delay=5 timeout=600
|
||||
|
||||
|
||||
- hosts: builder_temp_group
|
||||
user: root
|
||||
vars:
|
||||
- files: files/
|
||||
|
||||
tasks:
|
||||
- name: edit hostname to be instance name
|
||||
action: shell hostname `curl -s http://169.254.169.254/2009-04-04/meta-data/instance-id`
|
||||
|
||||
- name: add repos
|
||||
action: copy src=$files/$item dest=/etc/yum.repos.d/$item
|
||||
with_items:
|
||||
- builder.repo
|
||||
- epel6.repo
|
||||
|
||||
- name: install pkgs
|
||||
action: yum state=present pkg=$item
|
||||
with_items:
|
||||
- mock
|
||||
- createrepo
|
||||
- yum-utils
|
||||
- rsync
|
||||
- openssh-clients
|
||||
|
||||
- name: make sure newest rpm
|
||||
action: yum name=rpm state=latest
|
||||
|
||||
- name: mockbuilder user
|
||||
action: user name=mockbuilder groups=mock
|
||||
|
||||
- name: mockbuilder .ssh
|
||||
action: file state=directory path=/home/mockbuilder/.ssh mode=0700 owner=mockbuilder group=mockbuilder
|
||||
|
||||
- name: mockbuilder authorized_keys
|
||||
action: authorized_key user=mockbuilder key='$FILE(${files}/buildsys.pub)'
|
||||
|
||||
- name: put updated mock configs into /etc/mock
|
||||
action: copy src=$files/mock/$item dest=/etc/mock
|
||||
with_items:
|
||||
- site-defaults.cfg
|
||||
- epel-5-x86_64.cfg
|
||||
- epel-5-i386.cfg
|
||||
|
||||
- name: put updated mockchain into /usr/bin
|
||||
action: copy src=$files/mockchain dest=/usr/bin/mockchain mode=0755 owner=root group=root
|
||||
|
||||
|
||||
7
files/copr/provision/files/builder.repo
Normal file
@@ -0,0 +1,7 @@
[builder-infrastructure]
name=Builder Packages from Fedora Infrastructure $releasever - $basearch
baseurl=http://infrastructure.fedoraproject.org/repo/builder-rpms/$releasever/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://infrastructure.fedoraproject.org/repo/RPM-GPG-KEY-INFRASTRUCTURE

13
files/copr/provision/files/epel6.repo
Normal file
@@ -0,0 +1,13 @@
[epel]
name=Extras Packages for Enterprise Linux $releasever - $basearch
baseurl=http://infrastructure.fedoraproject.org/pub/epel/6/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://infrastructure.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-6

[epel-testing]
name=Extras Packages for Enterprise Linux $releasever - $basearch
baseurl=http://infrastructure.fedoraproject.org/pub/epel/testing/6/$basearch/
enabled=0
gpgcheck=1
gpgkey=http://infrastructure.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-6
56
files/copr/provision/files/mock/epel-5-i386.cfg
Normal file
@@ -0,0 +1,56 @@
|
||||
config_opts['root'] = 'epel-5-i386'
|
||||
config_opts['target_arch'] = 'i386'
|
||||
config_opts['legal_host_arches'] = ('i386', 'i586', 'i686', 'x86_64')
|
||||
config_opts['chroot_setup_cmd'] = 'install buildsys-build'
|
||||
config_opts['dist'] = 'el5' # only useful for --resultdir variable subst
|
||||
config_opts['macros'] = {}
|
||||
config_opts['macros']['%__arch_install_post'] = '%{nil}'
|
||||
|
||||
config_opts['yum.conf'] = """
|
||||
[main]
|
||||
cachedir=/var/cache/yum
|
||||
debuglevel=1
|
||||
logfile=/var/log/yum.log
|
||||
reposdir=/dev/null
|
||||
retries=20
|
||||
obsoletes=1
|
||||
gpgcheck=0
|
||||
assumeyes=1
|
||||
syslog_ident=mock
|
||||
syslog_device=
|
||||
|
||||
# repos
|
||||
|
||||
[core]
|
||||
name=base
|
||||
mirrorlist=http://mirrorlist.centos.org/?release=5&arch=i386&repo=os
|
||||
|
||||
[update]
|
||||
name=updates
|
||||
mirrorlist=http://mirrorlist.centos.org/?release=5&arch=i386&repo=updates
|
||||
|
||||
[groups]
|
||||
name=groups
|
||||
baseurl=http://buildsys.fedoraproject.org/buildgroups/rhel5/i386/
|
||||
|
||||
[extras]
|
||||
name=epel
|
||||
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=epel-5&arch=i386
|
||||
|
||||
[testing]
|
||||
name=epel-testing
|
||||
enabled=0
|
||||
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=testing-epel5&arch=i386
|
||||
|
||||
[local]
|
||||
name=local
|
||||
baseurl=http://kojipkgs.fedoraproject.org/repos/dist-5E-epel-build/latest/i386/
|
||||
cost=2000
|
||||
enabled=0
|
||||
|
||||
[epel-debug]
|
||||
name=epel-debug
|
||||
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=epel-debug-5&arch=i386
|
||||
failovermethod=priority
|
||||
enabled=0
|
||||
"""
|
||||
56
files/copr/provision/files/mock/epel-5-x86_64.cfg
Normal file
@@ -0,0 +1,56 @@
|
||||
config_opts['root'] = 'epel-5-x86_64'
|
||||
config_opts['target_arch'] = 'x86_64'
|
||||
config_opts['legal_host_arches'] = ('x86_64',)
|
||||
config_opts['chroot_setup_cmd'] = 'install buildsys-build'
|
||||
config_opts['dist'] = 'el5' # only useful for --resultdir variable subst
|
||||
config_opts['macros'] = {}
|
||||
config_opts['macros']['%__arch_install_post'] = '%{nil}'
|
||||
|
||||
config_opts['yum.conf'] = """
|
||||
[main]
|
||||
cachedir=/var/cache/yum
|
||||
debuglevel=1
|
||||
logfile=/var/log/yum.log
|
||||
reposdir=/dev/null
|
||||
retries=20
|
||||
obsoletes=1
|
||||
gpgcheck=0
|
||||
assumeyes=1
|
||||
syslog_ident=mock
|
||||
syslog_device=
|
||||
|
||||
# repos
|
||||
|
||||
[core]
|
||||
name=base
|
||||
mirrorlist=http://mirrorlist.centos.org/?release=5&arch=x86_64&repo=os
|
||||
|
||||
[update]
|
||||
name=updates
|
||||
mirrorlist=http://mirrorlist.centos.org/?release=5&arch=x86_64&repo=updates
|
||||
|
||||
[groups]
|
||||
name=groups
|
||||
baseurl=http://buildsys.fedoraproject.org/buildgroups/rhel5/x86_64/
|
||||
|
||||
[extras]
|
||||
name=epel
|
||||
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=epel-5&arch=x86_64
|
||||
|
||||
[testing]
|
||||
name=epel-testing
|
||||
enabled=0
|
||||
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=testing-epel5&arch=x86_64
|
||||
|
||||
[local]
|
||||
name=local
|
||||
baseurl=http://kojipkgs.fedoraproject.org/repos/dist-5E-epel-build/latest/x86_64/
|
||||
cost=2000
|
||||
enabled=0
|
||||
|
||||
[epel-debug]
|
||||
name=epel-debug
|
||||
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=epel-debug-5&arch=x86_64
|
||||
failovermethod=priority
|
||||
enabled=0
|
||||
"""
|
||||
152
files/copr/provision/files/mock/site-defaults.cfg
Normal file
@@ -0,0 +1,152 @@
|
||||
# mock defaults
|
||||
# vim:tw=0:ts=4:sw=4:et:
|
||||
#
|
||||
# This config file is for site-specific default values that apply across all
|
||||
# configurations. Options specified in this config file can be overridden in
|
||||
# the individual mock config files.
|
||||
#
|
||||
# The site-defaults.cfg delivered by default has NO options set. Only set
|
||||
# options here if you want to override the defaults.
|
||||
#
|
||||
# Entries in this file follow the same format as other mock config files.
|
||||
# config_opts['foo'] = bar
|
||||
|
||||
#############################################################################
|
||||
#
|
||||
# Things that we recommend you set in site-defaults.cfg:
|
||||
#
|
||||
# config_opts['basedir'] = '/var/lib/mock/'
|
||||
# config_opts['cache_topdir'] = '/var/cache/mock'
|
||||
# Note: the path pointed to by basedir and cache_topdir must be owned
|
||||
# by group 'mock' and must have mode: g+rws
|
||||
# config_opts['rpmbuild_timeout'] = 0
|
||||
# config_opts['use_host_resolv'] = True
|
||||
|
||||
# You can configure log format to pull from logging.ini formats of these names:
|
||||
# config_opts['build_log_fmt_name'] = "unadorned"
|
||||
# config_opts['root_log_fmt_name'] = "detailed"
|
||||
# config_opts['state_log_fmt_name'] = "state"
|
||||
#
|
||||
# mock will normally set up a minimal chroot /dev.
|
||||
# If you want to use a pre-configured /dev, disable this and use the bind-mount
|
||||
# plugin to mount your special /dev
|
||||
# config_opts['internal_dev_setup'] = True
|
||||
#
|
||||
# internal_setarch defaults to 'True' if the python 'ctypes' package is
|
||||
# available. It is in the python std lib on >= python 2.5. On older versions,
|
||||
# it is available as an addon. On systems w/o ctypes, it will default to 'False'
|
||||
# config_opts['internal_setarch'] = False
|
||||
#
|
||||
# the cleanup_on_* options allow you to automatically clean and remove the
|
||||
# mock build directory, but only take effect if --resultdir is used.
|
||||
# config_opts provides fine-grained control. cmdline only has big hammer
|
||||
#
|
||||
# config_opts['cleanup_on_success'] = 1
|
||||
# config_opts['cleanup_on_failure'] = 1
|
||||
|
||||
# if you want mock to automatically run createrepo on the rpms in your
|
||||
# resultdir.
|
||||
# config_opts['createrepo_on_rpms'] = False
|
||||
# config_opts['createrepo_command'] = '/usr/bin/createrepo -d -q -x *.src.rpm'
|
||||
|
||||
#############################################################################
|
||||
#
|
||||
# plugin related. Below are the defaults. Change to suit your site
|
||||
# policy. site-defaults.cfg is a good place to do this.
|
||||
#
|
||||
# NOTE: Some of the caching options can theoretically affect build
|
||||
# reproducibility. Change with care.
|
||||
#
|
||||
config_opts['plugin_conf']['package_state_enable'] = True
|
||||
# config_opts['plugin_conf']['ccache_enable'] = True
|
||||
# config_opts['plugin_conf']['ccache_opts']['max_cache_size'] = '4G'
|
||||
# config_opts['plugin_conf']['ccache_opts']['compress'] = None
|
||||
# config_opts['plugin_conf']['ccache_opts']['dir'] = "%(cache_topdir)s/%(root)s/ccache/"
|
||||
# config_opts['plugin_conf']['yum_cache_enable'] = True
|
||||
# config_opts['plugin_conf']['yum_cache_opts']['max_age_days'] = 30
|
||||
# config_opts['plugin_conf']['yum_cache_opts']['dir'] = "%(cache_topdir)s/%(root)s/yum_cache/"
|
||||
# config_opts['plugin_conf']['root_cache_enable'] = True
|
||||
# config_opts['plugin_conf']['root_cache_opts']['max_age_days'] = 15
|
||||
# config_opts['plugin_conf']['root_cache_opts']['dir'] = "%(cache_topdir)s/%(root)s/root_cache/"
|
||||
# config_opts['plugin_conf']['root_cache_opts']['compress_program'] = "pigz"
|
||||
# config_opts['plugin_conf']['root_cache_opts']['extension'] = ".gz"
|
||||
# config_opts['plugin_conf']['root_cache_opts']['exclude_dirs'] = ["./proc", "./sys", "./dev",
|
||||
# "./tmp/ccache", "./var/cache/yum" ]
|
||||
#
|
||||
# bind mount plugin is enabled by default but has no configured directories to
|
||||
# mount
|
||||
# config_opts['plugin_conf']['bind_mount_enable'] = True
|
||||
# config_opts['plugin_conf']['bind_mount_opts']['dirs'].append(('/host/path', '/bind/mount/path/in/chroot/' ))
|
||||
#
|
||||
# config_opts['plugin_conf']['tmpfs_enable'] = False
|
||||
# config_opts['plugin_conf']['tmpfs_opts']['required_ram_mb'] = 1024
|
||||
# config_opts['plugin_conf']['tmpfs_opts']['max_fs_size'] = '512m'
|
||||
|
||||
#############################################################################
|
||||
#
|
||||
# environment for chroot
|
||||
#
|
||||
# config_opts['environment']['TERM'] = 'vt100'
|
||||
# config_opts['environment']['SHELL'] = '/bin/bash'
|
||||
# config_opts['environment']['HOME'] = '/builddir'
|
||||
# config_opts['environment']['HOSTNAME'] = 'mock'
|
||||
# config_opts['environment']['PATH'] = '/usr/bin:/bin:/usr/sbin:/sbin'
|
||||
# config_opts['environment']['PROMPT_COMMAND'] = 'echo -n "<mock-chroot>"'
|
||||
# config_opts['environment']['LANG'] = os.environ.setdefault('LANG', 'en_US.UTF-8')
|
||||
# config_opts['environment']['TZ'] = os.environ.setdefault('TZ', 'EST5EDT')
|
||||
|
||||
#############################################################################
|
||||
#
|
||||
# Things that you can change, but we don't recommend it:
|
||||
# config_opts['chroothome'] = '/builddir'
|
||||
# config_opts['clean'] = True
|
||||
|
||||
#############################################################################
|
||||
#
|
||||
# Things that must be adjusted if SCM integration is used:
|
||||
#
|
||||
# config_opts['scm'] = True
|
||||
# config_opts['scm_opts']['method'] = 'git'
|
||||
# config_opts['scm_opts']['cvs_get'] = 'cvs -d /srv/cvs co SCM_BRN SCM_PKG'
|
||||
# config_opts['scm_opts']['git_get'] = 'git clone SCM_BRN git://localhost/SCM_PKG.git SCM_PKG'
|
||||
# config_opts['scm_opts']['svn_get'] = 'svn co file:///srv/svn/SCM_PKG/SCM_BRN SCM_PKG'
|
||||
# config_opts['scm_opts']['spec'] = 'SCM_PKG.spec'
|
||||
# config_opts['scm_opts']['ext_src_dir'] = '/dev/null'
|
||||
# config_opts['scm_opts']['write_tar'] = True
|
||||
# config_opts['scm_opts']['git_timestamps'] = True
|
||||
|
||||
# These options are also recognized but usually defined in cmd line
|
||||
# with --scm-option package=<pkg> --scm-option branch=<branch>
|
||||
# config_opts['scm_opts']['package'] = 'mypkg'
|
||||
# config_opts['scm_opts']['branch'] = 'master'
|
||||
|
||||
#############################################################################
|
||||
#
|
||||
# Things that are best suited for individual chroot config files:
|
||||
#
|
||||
# MUST SET (in individual chroot cfg file):
|
||||
# config_opts['root'] = 'name-of-yum-build-dir'
|
||||
# config_opts['target_arch'] = 'i386'
|
||||
# config_opts['yum.conf'] = ''
|
||||
# config_opts['yum_common_opts'] = []
|
||||
#
|
||||
# CAN SET, defaults usually work ok:
|
||||
# config_opts['chroot_setup_cmd'] = 'install buildsys-build'
|
||||
# config_opts['log_config_file'] = 'logging.ini'
|
||||
# config_opts['more_buildreqs']['srpm_name-version-release'] = 'dependencies'
|
||||
# config_opts['macros']['%Add_your_macro_name_here'] = "add macro value here"
|
||||
# config_opts['files']['path/name/no/leading/slash'] = "put file contents here."
|
||||
# config_opts['chrootuid'] = os.getuid()
|
||||
|
||||
# If you change chrootgid, you must also change "mock" to the correct group
|
||||
# name in this line of the mock PAM config:
|
||||
# auth sufficient pam_succeed_if.so user ingroup mock use_uid quiet
|
||||
# config_opts['chrootgid'] = grp.getgrnam("mock")[2]
|
||||
|
||||
# config_opts['useradd'] = '/usr/sbin/useradd -m -u %(uid)s -g %(gid)s -d %(home)s -n %(user)s' # Fedora/RedHat
|
||||
#
|
||||
# Security related
|
||||
# config_opts['no_root_shells'] = False
|
||||
#
|
||||
# Proxy settings (https_proxy, ftp_proxy, and no_proxy can also be set)
|
||||
# config_opts['http_proxy'] = 'http://localhost:3128'
|
||||
337
files/copr/provision/files/mockchain
Executable file
@@ -0,0 +1,337 @@
|
||||
#!/usr/bin/python -tt
|
||||
# by skvidal@fedoraproject.org
|
||||
# This program is free software; you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation; either version 2 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU Library General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with this program; if not, write to the Free Software
|
||||
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
|
||||
# copyright 2012 Red Hat, Inc.
|
||||
|
||||
# SUMMARY
|
||||
# mockchain
|
||||
# take a mock config and a series of srpms
|
||||
# rebuild them one at a time
|
||||
# adding each to a local repo
|
||||
# so they are available as build deps to next pkg being built
|
||||
|
||||
import sys
|
||||
import subprocess
|
||||
import os
|
||||
import optparse
|
||||
import tempfile
|
||||
import shutil
|
||||
from urlgrabber import grabber
|
||||
import time
|
||||
|
||||
mockconfig_path='/etc/mock'
|
||||
|
||||
def createrepo(path):
|
||||
if os.path.exists(path + '/repodata/repomd.xml'):
|
||||
comm = ['/usr/bin/createrepo', '--update', path]
|
||||
else:
|
||||
comm = ['/usr/bin/createrepo', path]
|
||||
cmd = subprocess.Popen(comm,
|
||||
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
|
||||
out, err = cmd.communicate()
|
||||
return out, err
|
||||
|
||||
def parse_args(args):
|
||||
parser = optparse.OptionParser('\nmockchain -r mockcfg pkg1 [pkg2] [pkg3]')
|
||||
parser.add_option('-r', '--root', default=None, dest='chroot',
|
||||
help="chroot config name/base to use in the mock build")
|
||||
parser.add_option('-l', '--localrepo', default=None,
|
||||
help="local path for the local repo, defaults to making its own")
|
||||
parser.add_option('-c', '--continue', default=False, action='store_true',
|
||||
dest='cont',
|
||||
help="if a pkg fails to build, continue to the next one")
|
||||
parser.add_option('-a','--addrepo', default=[], action='append',
|
||||
dest='repos',
|
||||
help="add these repo baseurls to the chroot's yum config")
|
||||
parser.add_option('--recurse', default=False, action='store_true',
|
||||
help="if more than one pkg and it fails to build, try to build the rest and come back to it")
|
||||
parser.add_option('--log', default=None, dest='logfile',
|
||||
help="log to the file named by this option, defaults to not logging")
|
||||
parser.add_option('--tmp_prefix', default=None, dest='tmp_prefix',
|
||||
help="tmp dir prefix - will default to username-pid if not specified")
|
||||
|
||||
|
||||
#FIXME?
|
||||
# figure out how to pass other args to mock?
|
||||
|
||||
opts, args = parser.parse_args(args)
|
||||
if opts.recurse:
|
||||
opts.cont = True
|
||||
|
||||
if not opts.chroot:
|
||||
print "You must provide an argument to -r for the mock chroot"
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
if len(sys.argv) < 3:
|
||||
print "You must specifiy at least 1 package to build"
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
return opts, args
|
||||
|
||||
def add_local_repo(infile, destfile, baseurl, repoid=None):
|
||||
"""take a mock chroot config and add a repo to it's yum.conf
|
||||
infile = mock chroot config file
|
||||
destfile = where to save out the result
|
||||
baseurl = baseurl of repo you wish to add"""
|
||||
|
||||
try:
|
||||
config_opts = {}
|
||||
execfile(infile)
|
||||
if not repoid:
|
||||
repoid=baseurl.split('//')[1].replace('/','_')
|
||||
localyumrepo="""
|
||||
[%s]
|
||||
name=%s
|
||||
baseurl=%s
|
||||
enabled=1
|
||||
skip_if_unavailable=1
|
||||
metadata_expire=30
|
||||
cost=1
|
||||
""" % (repoid, baseurl, baseurl)
|
||||
|
||||
config_opts['yum.conf'] += localyumrepo
|
||||
br_dest = open(destfile, 'w')
|
||||
for k,v in config_opts.items():
|
||||
br_dest.write("config_opts[%r] = %r\n" % (k, v))
|
||||
br_dest.close()
|
||||
return True, ''
|
||||
except (IOError, OSError):
|
||||
return False, "Could not write mock config to %s" % destfile
|
||||
|
||||
return True, ''
|
||||
|
||||
def do_build(opts, cfg, pkg):
|
||||
|
||||
# returns 0, cmd, out, err = failure
|
||||
# returns 1, cmd, out, err = success
|
||||
# returns 2, None, None, None = already built
|
||||
|
||||
s_pkg = os.path.basename(pkg)
|
||||
pdn = s_pkg.replace('.src.rpm', '')
|
||||
resdir = '%s/%s' % (opts.local_repo_dir, pdn)
|
||||
resdir = os.path.normpath(resdir)
|
||||
if not os.path.exists(resdir):
|
||||
os.makedirs(resdir)
|
||||
|
||||
success_file = resdir + '/success'
|
||||
fail_file = resdir + '/fail'
|
||||
|
||||
if os.path.exists(success_file):
|
||||
return 2, None, None, None
|
||||
|
||||
# clean it up if we're starting over :)
|
||||
if os.path.exists(fail_file):
|
||||
os.unlink(fail_file)
|
||||
|
||||
mockcmd = ['/usr/bin/mock',
|
||||
'--configdir', opts.config_path,
|
||||
'--resultdir', resdir,
|
||||
'--uniqueext', opts.uniqueext,
|
||||
'-r', cfg, ]
|
||||
print 'building %s' % s_pkg
|
||||
mockcmd.append(pkg)
|
||||
cmd = subprocess.Popen(mockcmd,
|
||||
stdout=subprocess.PIPE,
|
||||
stderr=subprocess.PIPE )
|
||||
out, err = cmd.communicate()
|
||||
if cmd.returncode == 0:
|
||||
open(success_file, 'w').write('done\n')
|
||||
ret = 1
|
||||
else:
|
||||
open(fail_file, 'w').write('undone\n')
|
||||
ret = 0
|
||||
|
||||
return ret, cmd, out, err
|
||||
|
||||
|
||||
def log(lf, msg):
|
||||
if lf:
|
||||
now = time.time()
|
||||
try:
|
||||
open(lf, 'a').write(str(now) + ':' + msg + '\n')
|
||||
except (IOError, OSError), e:
|
||||
print 'Could not write to logfile %s - %s' % (lf, str(e))
|
||||
print msg
|
||||
|
||||
|
||||
|
||||
def main(args):
|
||||
opts, args = parse_args(args)
|
||||
|
||||
# take mock config + list of pkgs
|
||||
cfg=opts.chroot
|
||||
pkgs=args[1:]
|
||||
mockcfg = mockconfig_path + '/' + cfg + '.cfg'
|
||||
|
||||
if not os.path.exists(mockcfg):
|
||||
print "could not find config: %s" % mockcfg
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
if not opts.tmp_prefix:
|
||||
try:
|
||||
opts.tmp_prefix = os.getlogin()
|
||||
except OSError, e:
|
||||
print "Could not find login name for tmp dir prefix add --tmp_prefix"
|
||||
sys.exit(1)
|
||||
pid = os.getpid()
|
||||
opts.uniqueext = '%s-%s' % (opts.tmp_prefix, pid)
|
||||
|
||||
|
||||
# create a tempdir for our local info
|
||||
if opts.localrepo:
|
||||
local_tmp_dir = os.path.abspath(opts.localrepo)
|
||||
if not os.path.exists(local_tmp_dir):
|
||||
os.makedirs(local_tmp_dir)
|
||||
else:
|
||||
pre = 'mock-chain-%s-' % opts.uniqueext
|
||||
local_tmp_dir = tempfile.mkdtemp(prefix=pre, dir='/var/tmp')
|
||||
|
||||
os.chmod(local_tmp_dir, 0755)
|
||||
|
||||
if opts.logfile:
|
||||
opts.logfile = os.path.join(local_tmp_dir, opts.logfile)
|
||||
if os.path.exists(opts.logfile):
|
||||
os.unlink(opts.logfile)
|
||||
|
||||
log(opts.logfile, "starting logfile: %s" % opts.logfile)
|
||||
opts.local_repo_dir = os.path.normpath(local_tmp_dir + '/results/' + cfg + '/')
|
||||
|
||||
if not os.path.exists(opts.local_repo_dir):
|
||||
os.makedirs(opts.local_repo_dir, mode=0755)
|
||||
|
||||
local_baseurl="file://%s" % opts.local_repo_dir
|
||||
log(opts.logfile, "results dir: %s" % opts.local_repo_dir)
|
||||
opts.config_path = os.path.normpath(local_tmp_dir + '/configs/' + cfg + '/')
|
||||
|
||||
if not os.path.exists(opts.config_path):
|
||||
os.makedirs(opts.config_path, mode=0755)
|
||||
|
||||
log(opts.logfile, "config dir: %s" % opts.config_path)
|
||||
|
||||
my_mock_config = opts.config_path + '/' + os.path.basename(mockcfg)
|
||||
|
||||
# modify with localrepo
|
||||
res, msg = add_local_repo(mockcfg, my_mock_config, local_baseurl, 'local_build_repo')
|
||||
if not res:
|
||||
log(opts.logfile, "Error: Could not write out local config: %s" % msg)
|
||||
sys.exit(1)
|
||||
|
||||
for baseurl in opts.repos:
|
||||
res, msg = add_local_repo(my_mock_config, my_mock_config, baseurl)
|
||||
if not res:
|
||||
log(opts.logfile, "Error: Could not add: %s to yum config in mock chroot: %s" % (baseurl, msg))
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
# these files needed from the mock.config dir to make mock run
|
||||
for fn in ['site-defaults.cfg', 'logging.ini']:
|
||||
pth = mockconfig_path + '/' + fn
|
||||
shutil.copyfile(pth, opts.config_path + '/' + fn)
|
||||
|
||||
|
||||
# createrepo on it
|
||||
out, err = createrepo(opts.local_repo_dir)
|
||||
if err.strip():
|
||||
log(opts.logfile, "Error making local repo: %s" % opts.local_repo_dir)
|
||||
log(opts.logfile, "Err: %s" % err)
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
download_dir = tempfile.mkdtemp()
|
||||
downloaded_pkgs = {}
|
||||
built_pkgs = []
|
||||
try_again = True
|
||||
to_be_built = pkgs
|
||||
while try_again:
|
||||
failed = []
|
||||
for pkg in to_be_built:
|
||||
if not pkg.endswith('.rpm'):
|
||||
log(opts.logfile, "%s doesn't appear to be an rpm - skipping" % pkg)
|
||||
failed.append(pkg)
|
||||
continue
|
||||
|
||||
elif pkg.startswith('http://') or pkg.startswith('https://'):
|
||||
url = pkg
|
||||
cwd = os.getcwd()
|
||||
os.chdir(download_dir)
|
||||
try:
|
||||
log(opts.logfile, 'Fetching %s' % url)
|
||||
ug = grabber.URLGrabber()
|
||||
fn = ug.urlgrab(url)
|
||||
pkg = download_dir + '/' + fn
|
||||
except Exception, e:
|
||||
log(opts.logfile, 'Error Downloading %s: %s' % (url, str(e)))
|
||||
failed.append(url)
|
||||
os.chdir(cwd)
|
||||
continue
|
||||
else:
|
||||
os.chdir(cwd)
|
||||
downloaded_pkgs[pkg] = url
|
||||
log(opts.logfile, "Start build: %s" % pkg)
|
||||
ret, cmd, out, err = do_build(opts, cfg, pkg)
|
||||
log(opts.logfile, "End build: %s" % pkg)
|
||||
if ret == 0:
|
||||
if opts.recurse:
|
||||
failed.append(pkg)
|
||||
log(opts.logfile, "Error building %s, will try again" % os.path.basename(pkg))
|
||||
else:
|
||||
log(opts.logfile,"Error building %s" % os.path.basename(pkg))
|
||||
log(opts.logfile,"See logs/results in %s" % opts.local_repo_dir)
|
||||
if not opts.cont:
|
||||
sys.exit(1)
|
||||
|
||||
elif ret == 1:
|
||||
log(opts.logfile, "Success building %s" % os.path.basename(pkg))
|
||||
built_pkgs.append(pkg)
|
||||
# createrepo with the new pkgs
|
||||
out, err = createrepo(opts.local_repo_dir)
|
||||
if err.strip():
|
||||
log(opts.logfile, "Error making local repo: %s" % opts.local_repo_dir)
|
||||
log(opts.logfile, "Err: %s" % err)
|
||||
elif ret == 2:
|
||||
log(opts.logfile, "Skipping already built pkg %s" % os.path.basename(pkg))
|
||||
|
||||
if failed:
|
||||
if len(failed) != len(to_be_built):
|
||||
to_be_built = failed
|
||||
try_again = True
|
||||
log(opts.logfile, 'Trying to rebuild %s failed pkgs' % len(failed))
|
||||
else:
|
||||
log(opts.logfile, "Tried twice - following pkgs could not be successfully built:")
|
||||
for pkg in failed:
|
||||
msg = pkg
|
||||
if pkg in downloaded_pkgs:
|
||||
msg = downloaded_pkgs[pkg]
|
||||
log(opts.logfile, msg)
|
||||
|
||||
try_again = False
|
||||
else:
|
||||
try_again = False
|
||||
|
||||
# cleaning up our download dir
|
||||
shutil.rmtree(download_dir, ignore_errors=True)
|
||||
|
||||
log(opts.logfile, "Results out to: %s" % opts.local_repo_dir)
|
||||
log(opts.logfile, "Pkgs built: %s" % len(built_pkgs))
|
||||
log(opts.logfile, "Packages successfully built in this order:")
|
||||
for pkg in built_pkgs:
|
||||
log(opts.logfile, pkg)
|
||||
|
||||
if __name__ == "__main__":
|
||||
main(sys.argv)
|
||||
sys.exit(0)
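For orientation, the options parsed above map onto an invocation along these lines (the package names are hypothetical):

    # build two source RPMs in order; each result is added to the local repo
    # so it can satisfy build deps of the packages that follow
    mockchain -r epel-5-x86_64 -l /var/tmp/local-repo --recurse pkg-a-1.0-1.src.rpm pkg-b-2.0-1.src.rpm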
|
||||
16
files/copr/provision/terminatepb.yml
Normal file
@@ -0,0 +1,16 @@
---
- name: terminate instance
  hosts: all
  user: root
  gather_facts: False

  tasks:
  - name: find the instance id from the builder
    action: command curl -s http://169.254.169.254/latest/meta-data/instance-id
    register: instanceid

  - name: terminate it
    local_action: command euca-terminate-instances ${instanceid.stdout}
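One possible way to run this play against a single builder, assuming an ansible version that accepts an ad-hoc comma-separated inventory (the hostname is hypothetical):

    ansible-playbook -i 'builder.example.com,' files/copr/provision/terminatepb.yml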
27
files/denyhosts/allowed-hosts
Normal file
@@ -0,0 +1,27 @@
# We mustn't block localhost
127.0.0.1

#bastion
10.5.126.11
10.5.126.12
#lockbox
10.5.126.23
# don't block lockbox's remote addr, either
209.132.181.6

#noc1
noc1.phx2.fedoraproject.org
10.5.126.41
192.168.1.10

# RDU NAT
66.187.233.202
66.187.233.206
# RH NAT
66.187.230.200
# PHX2 NAT
209.132.181.102
# tlv RHT NAT
66.187.237.10
# brno RHT NAT
209.132.186.34
626
files/denyhosts/denyhosts.conf
Normal file
@@ -0,0 +1,626 @@
|
||||
############ THESE SETTINGS ARE REQUIRED ############
|
||||
|
||||
########################################################################
|
||||
#
|
||||
# SECURE_LOG: the log file that contains sshd logging info
|
||||
# if you are not sure, grep "sshd:" /var/log/*
|
||||
#
|
||||
# The file to process can be overridden with the --file command line
|
||||
# argument
|
||||
#
|
||||
# Redhat or Fedora Core:
|
||||
SECURE_LOG = /var/log/secure
|
||||
#
|
||||
# Mandrake, FreeBSD or OpenBSD:
|
||||
#SECURE_LOG = /var/log/auth.log
|
||||
#
|
||||
# SuSE:
|
||||
#SECURE_LOG = /var/log/messages
|
||||
#
|
||||
# Mac OS X (v10.4 or greater -
|
||||
# also refer to: http://www.denyhosts.net/faq.html#macos
|
||||
#SECURE_LOG = /private/var/log/asl.log
|
||||
#
|
||||
# Mac OS X (v10.3 or earlier):
|
||||
#SECURE_LOG=/private/var/log/system.log
|
||||
#
|
||||
########################################################################
|
||||
|
||||
########################################################################
|
||||
#
|
||||
# HOSTS_DENY: the file which contains restricted host access information
|
||||
#
|
||||
# Most operating systems:
|
||||
HOSTS_DENY = /etc/hosts.deny
|
||||
#
|
||||
# Some BSD (FreeBSD) Unixes:
|
||||
#HOSTS_DENY = /etc/hosts.allow
|
||||
#
|
||||
# Another possibility (also see the next option):
|
||||
#HOSTS_DENY = /etc/hosts.evil
|
||||
#######################################################################
|
||||
|
||||
|
||||
########################################################################
|
||||
#
|
||||
# PURGE_DENY: removes HOSTS_DENY entries that are older than this time
|
||||
# when DenyHosts is invoked with the --purge flag
|
||||
#
|
||||
# format is: i[dhwmy]
|
||||
# Where 'i' is an integer (eg. 7)
|
||||
# 'm' = minutes
|
||||
# 'h' = hours
|
||||
# 'd' = days
|
||||
# 'w' = weeks
|
||||
# 'y' = years
|
||||
#
|
||||
# never purge:
|
||||
#PURGE_DENY =
|
||||
#
|
||||
# purge entries older than 1 week
|
||||
#PURGE_DENY = 1w
|
||||
#
|
||||
# purge entries older than 5 days
|
||||
#PURGE_DENY = 5d
|
||||
#
|
||||
# For the default Fedora Extras install, we want timestamping but no
|
||||
# expiration (at least by default) so this is deliberately set high.
|
||||
# Adjust to taste.
|
||||
PURGE_DENY = 4w
|
||||
#######################################################################
|
||||
|
||||
#######################################################################
|
||||
#
|
||||
# PURGE_THRESHOLD: defines the maximum times a host will be purged.
|
||||
# Once this value has been exceeded then this host will not be purged.
|
||||
# Setting this parameter to 0 (the default) disables this feature.
|
||||
#
|
||||
# default: a denied host can be purged/re-added indefinitely
|
||||
PURGE_THRESHOLD = 4
|
||||
#
|
||||
# a denied host will be purged at most 2 times.
|
||||
#PURGE_THRESHOLD = 2
|
||||
#
|
||||
#######################################################################
|
||||
|
||||
|
||||
#######################################################################
|
||||
#
|
||||
# BLOCK_SERVICE: the service name that should be blocked in HOSTS_DENY
|
||||
#
|
||||
# man 5 hosts_access for details
|
||||
#
|
||||
# eg. sshd: 127.0.0.1 # will block sshd logins from 127.0.0.1
|
||||
#
|
||||
# To block all services for the offending host:
|
||||
#BLOCK_SERVICE = ALL
|
||||
# To block only sshd:
|
||||
BLOCK_SERVICE = sshd
|
||||
# To only record the offending host and nothing else (if using
|
||||
# an auxiliary file to list the hosts). Refer to:
|
||||
# http://denyhosts.sourceforge.net/faq.html#aux
|
||||
#BLOCK_SERVICE =
|
||||
#
|
||||
#######################################################################
|
||||
|
||||
|
||||
#######################################################################
|
||||
#
|
||||
# DENY_THRESHOLD_INVALID: block each host after the number of failed login
|
||||
# attempts has exceeded this value. This value applies to invalid
|
||||
# user login attempts (eg. non-existent user accounts)
|
||||
#
|
||||
DENY_THRESHOLD_INVALID = 15
|
||||
#
|
||||
#######################################################################
|
||||
|
||||
#######################################################################
|
||||
#
|
||||
# DENY_THRESHOLD_VALID: block each host after the number of failed
|
||||
# login attempts has exceeded this value. This value applies to valid
|
||||
# user login attempts (eg. user accounts that exist in /etc/passwd) except
|
||||
# for the "root" user
|
||||
#
|
||||
DENY_THRESHOLD_VALID = 15
|
||||
#
|
||||
#######################################################################
|
||||
|
||||
#######################################################################
|
||||
#
|
||||
# DENY_THRESHOLD_ROOT: block each host after the number of failed
|
||||
# login attempts has exceeded this value. This value applies to
|
||||
# "root" user login attempts only.
|
||||
#
|
||||
DENY_THRESHOLD_ROOT = 5
|
||||
#
|
||||
#######################################################################
|
||||
|
||||
|
||||
#######################################################################
|
||||
#
|
||||
# DENY_THRESHOLD_RESTRICTED: block each host after the number of failed
|
||||
# login attempts has exceeded this value. This value applies to
|
||||
# usernames that appear in the WORK_DIR/restricted-usernames file only.
|
||||
#
|
||||
DENY_THRESHOLD_RESTRICTED = 1
|
||||
#
|
||||
#######################################################################
|
||||
|
||||
|
||||
#######################################################################
|
||||
#
|
||||
# WORK_DIR: the path that DenyHosts will use for writing data to
|
||||
# (it will be created if it does not already exist).
|
||||
#
|
||||
# Note: it is recommended that you use an absolute pathname
|
||||
# for this value (eg. /home/foo/denyhosts/data)
|
||||
#
|
||||
WORK_DIR = /var/lib/denyhosts
|
||||
#
|
||||
#######################################################################
|
||||
|
||||
#######################################################################
|
||||
#
|
||||
# SUSPICIOUS_LOGIN_REPORT_ALLOWED_HOSTS
|
||||
#
|
||||
# SUSPICIOUS_LOGIN_REPORT_ALLOWED_HOSTS=YES|NO
|
||||
# If set to YES, if a suspicious login attempt results from an allowed-host
|
||||
# then it is considered suspicious. If this is NO, then suspicious logins
|
||||
# from allowed-hosts will not be reported. All suspicious logins from
|
||||
# ip addresses that are not in allowed-hosts will always be reported.
|
||||
#
|
||||
SUSPICIOUS_LOGIN_REPORT_ALLOWED_HOSTS=YES
|
||||
######################################################################
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# HOSTNAME_LOOKUP
|
||||
#
|
||||
# HOSTNAME_LOOKUP=YES|NO
|
||||
# If set to YES, for each IP address that is reported by Denyhosts,
|
||||
# the corresponding hostname will be looked up and reported as well
|
||||
# (if available).
|
||||
#
|
||||
HOSTNAME_LOOKUP=YES
|
||||
#
|
||||
######################################################################
|
||||
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# LOCK_FILE
|
||||
#
|
||||
# LOCK_FILE=/path/denyhosts
|
||||
# If this file exists when DenyHosts is run, then DenyHosts will exit
|
||||
# immediately. Otherwise, this file will be created upon invocation
|
||||
# and deleted upon exit. This ensures that only one instance is
|
||||
# running at a time.
|
||||
#
|
||||
# Redhat/Fedora:
|
||||
LOCK_FILE = /var/lock/subsys/denyhosts
|
||||
#
|
||||
# Debian
|
||||
#LOCK_FILE = /var/run/denyhosts.pid
|
||||
#
|
||||
# Misc
|
||||
#LOCK_FILE = /tmp/denyhosts.lock
|
||||
#
|
||||
######################################################################
|
||||
|
||||
|
||||
############ THESE SETTINGS ARE OPTIONAL ############
|
||||
|
||||
|
||||
#######################################################################
|
||||
#
|
||||
# ADMIN_EMAIL: if you would like to receive emails regarding newly
|
||||
# restricted hosts and suspicious logins, set this address to
|
||||
# match your email address. If you do not want to receive these reports
|
||||
# leave this field blank (or run with the --noemail option)
|
||||
#
|
||||
# Multiple email addresses can be delimited by a comma, eg:
|
||||
# ADMIN_EMAIL = foo@bar.com, bar@foo.com, etc@foobar.com
|
||||
#
|
||||
# ADMIN_EMAIL = ausil@fedoraproject.org
|
||||
#
|
||||
#######################################################################
|
||||
|
||||
#######################################################################
|
||||
#
|
||||
# SMTP_HOST and SMTP_PORT: if DenyHosts is configured to email
|
||||
# reports (see ADMIN_EMAIL) then these settings specify the
|
||||
# email server address (SMTP_HOST) and the server port (SMTP_PORT)
|
||||
#
|
||||
#
|
||||
# THEMOVE FIXME this needs to work from external non-VPN machines.
|
||||
SMTP_HOST = bastion
|
||||
SMTP_PORT = 25
|
||||
#
|
||||
#######################################################################
|
||||
|
||||
#######################################################################
|
||||
#
|
||||
# SMTP_USERNAME and SMTP_PASSWORD: set these parameters if your
|
||||
# smtp email server requires authentication
|
||||
#
|
||||
#SMTP_USERNAME=foo
|
||||
#SMTP_PASSWORD=bar
|
||||
#
|
||||
######################################################################
|
||||
|
||||
#######################################################################
|
||||
#
|
||||
# SMTP_FROM: you can specify the "From:" address in messages sent
|
||||
# from DenyHosts when it reports thwarted abuse attempts
|
||||
#
|
||||
SMTP_FROM = DenyHosts <denyhosts@fedoraproject.org>
|
||||
#
|
||||
#######################################################################
|
||||
|
||||
#######################################################################
|
||||
#
|
||||
# SMTP_SUBJECT: you can specify the "Subject:" of messages sent
|
||||
# by DenyHosts when it reports thwarted abuse attempts
|
||||
SMTP_SUBJECT = DenyHosts Report from $[HOSTNAME]
|
||||
#
|
||||
######################################################################
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# SMTP_DATE_FORMAT: specifies the format used for the "Date:" header
|
||||
# when sending email messages.
|
||||
#
|
||||
# for possible values for this parameter refer to: man strftime
|
||||
#
|
||||
# the default:
|
||||
#
|
||||
#SMTP_DATE_FORMAT = %a, %d %b %Y %H:%M:%S %z
|
||||
#
|
||||
######################################################################
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# SYSLOG_REPORT
|
||||
#
|
||||
# SYSLOG_REPORT=YES|NO
|
||||
# If set to yes, when denied hosts are recorded the report data
|
||||
# will be sent to syslog (syslog must be present on your system).
|
||||
# The default is: NO
|
||||
#
|
||||
#SYSLOG_REPORT=NO
|
||||
#
|
||||
#SYSLOG_REPORT=YES
|
||||
#
|
||||
######################################################################
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# ALLOWED_HOSTS_HOSTNAME_LOOKUP
|
||||
#
|
||||
# ALLOWED_HOSTS_HOSTNAME_LOOKUP=YES|NO
|
||||
# If set to YES, for each entry in the WORK_DIR/allowed-hosts file,
|
||||
# the hostname will be looked up. If your versions of tcp_wrappers
|
||||
# and sshd sometimes log hostnames in addition to ip addresses
|
||||
# then you may wish to specify this option.
|
||||
#
|
||||
#ALLOWED_HOSTS_HOSTNAME_LOOKUP=NO
|
||||
#
|
||||
######################################################################
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# AGE_RESET_VALID: Specifies the period of time between failed login
|
||||
# attempts that, when exceeded will result in the failed count for
|
||||
# this host to be reset to 0. This value applies to login attempts
|
||||
# to all valid users (those within /etc/passwd) with the
|
||||
# exception of root. If not defined, this count will never
|
||||
# be reset.
|
||||
#
|
||||
# See the comments in the PURGE_DENY section (above)
|
||||
# for details on specifying this value or for complete details
|
||||
# refer to: http://denyhosts.sourceforge.net/faq.html#timespec
|
||||
#
|
||||
AGE_RESET_VALID=5d
|
||||
#
|
||||
######################################################################
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# AGE_RESET_ROOT: Specifies the period of time between failed login
|
||||
# attempts that, when exceeded will result in the failed count for
|
||||
# this host to be reset to 0. This value applies to all login
|
||||
# attempts to the "root" user account. If not defined,
|
||||
# this count will never be reset.
|
||||
#
|
||||
# See the comments in the PURGE_DENY section (above)
|
||||
# for details on specifying this value or for complete details
|
||||
# refer to: http://denyhosts.sourceforge.net/faq.html#timespec
|
||||
#
|
||||
AGE_RESET_ROOT=25d
|
||||
#
|
||||
######################################################################
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# AGE_RESET_RESTRICTED: Specifies the period of time between failed login
|
||||
# attempts that, when exceeded will result in the failed count for
|
||||
# this host to be reset to 0. This value applies to all login
|
||||
# attempts to entries found in the WORK_DIR/restricted-usernames file.
|
||||
# If not defined, the count will never be reset.
|
||||
#
|
||||
# See the comments in the PURGE_DENY section (above)
|
||||
# for details on specifying this value or for complete details
|
||||
# refer to: http://denyhosts.sourceforge.net/faq.html#timespec
|
||||
#
|
||||
AGE_RESET_RESTRICTED=25d
|
||||
#
|
||||
######################################################################
|
||||
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# AGE_RESET_INVALID: Specifies the period of time between failed login
|
||||
# attempts that, when exceeded will result in the failed count for
|
||||
# this host to be reset to 0. This value applies to login attempts
|
||||
# made to any invalid username (those that do not appear
|
||||
# in /etc/passwd). If not defined, count will never be reset.
|
||||
#
|
||||
# See the comments in the PURGE_DENY section (above)
|
||||
# for details on specifying this value or for complete details
|
||||
# refer to: http://denyhosts.sourceforge.net/faq.html#timespec
|
||||
#
|
||||
AGE_RESET_INVALID=10d
|
||||
#
|
||||
######################################################################
|
||||
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# RESET_ON_SUCCESS: If this parameter is set to "yes" then the
|
||||
# failed count for the respective ip address will be reset to 0
|
||||
# if the login is successful.
|
||||
#
|
||||
# The default is RESET_ON_SUCCESS = no
|
||||
#
|
||||
RESET_ON_SUCCESS = yes
|
||||
#
|
||||
#####################################################################
|
||||
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# PLUGIN_DENY: If set, this value should point to an executable
|
||||
# program that will be invoked when a host is added to the
|
||||
# HOSTS_DENY file. This executable will be passed the host
|
||||
# that will be added as its only argument.
|
||||
#
|
||||
#PLUGIN_DENY=/usr/bin/true
|
||||
#
|
||||
######################################################################
|
||||
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# PLUGIN_PURGE: If set, this value should point to an executable
|
||||
# program that will be invoked when a host is removed from the
|
||||
# HOSTS_DENY file. This executable will be passed the host
|
||||
# that is to be purged as its only argument.
|
||||
#
|
||||
#PLUGIN_PURGE=/usr/bin/true
|
||||
#
|
||||
######################################################################
|
||||
|
||||
######################################################################
|
||||
#
|
||||
# USERDEF_FAILED_ENTRY_REGEX: if set, this value should contain
|
||||
# a regular expression that can be used to identify additional
|
||||
# hackers for your particular ssh configuration. This functionality
|
||||
# extends the built-in regular expressions that DenyHosts uses.
|
||||
# This parameter can be specified multiple times.
|
||||
# See this faq entry for more details:
|
||||
# http://denyhosts.sf.net/faq.html#userdef_regex
|
||||
#
|
||||
#USERDEF_FAILED_ENTRY_REGEX=
|
||||
#
|
||||
#
|
||||
######################################################################
|
||||
|
||||
|
||||
|
||||
|
||||
######### THESE SETTINGS ARE SPECIFIC TO DAEMON MODE ##########
|
||||
|
||||
|
||||
|
||||
#######################################################################
|
||||
#
|
||||
# DAEMON_LOG: when DenyHosts is run in daemon mode (--daemon flag)
|
||||
# this is the logfile that DenyHosts uses to report its status.
|
||||
# To disable logging, leave blank. (default is: /var/log/denyhosts)
|
||||
#
|
||||
DAEMON_LOG = /var/log/denyhosts
|
||||
#
|
||||
# disable logging:
|
||||
#DAEMON_LOG =
|
||||
#
|
||||
######################################################################
|
||||
|
||||
#######################################################################
|
||||
#
|
||||
# DAEMON_LOG_TIME_FORMAT: when DenyHosts is run in daemon mode
|
||||
# (--daemon flag) this specifies the timestamp format of
|
||||
# the DAEMON_LOG messages (default is the ISO8601 format:
|
||||
# ie. 2005-07-22 10:38:01,745)
|
||||
#
|
||||
# for possible values for this parameter refer to: man strftime
|
||||
#
|
||||
# Jan 1 13:05:59
|
||||
#DAEMON_LOG_TIME_FORMAT = %b %d %H:%M:%S
|
||||
#
|
||||
# Jan 1 01:05:59
|
||||
#DAEMON_LOG_TIME_FORMAT = %b %d %I:%M:%S
|
||||
#
|
||||
######################################################################
|
||||
|
||||
#######################################################################
|
||||
#
|
||||
# DAEMON_LOG_MESSAGE_FORMAT: when DenyHosts is run in daemon mode
|
||||
# (--daemon flag) this specifies the message format of each logged
|
||||
# entry. By default the following format is used:
|
||||
#
|
||||
# %(asctime)s - %(name)-12s: %(levelname)-8s %(message)s
|
||||
#
|
||||
# Where the "%(asctime)s" portion is expanded to the format
|
||||
# defined by DAEMON_LOG_TIME_FORMAT
|
||||
#
|
||||
# This string is passed to python's logging.Formatter constructor.
|
||||
# For details on the possible format types please refer to:
|
||||
# http://docs.python.org/lib/node357.html
|
||||
#
|
||||
# This is the default:
|
||||
#DAEMON_LOG_MESSAGE_FORMAT = %(asctime)s - %(name)-12s: %(levelname)-8s %(message)s
|
||||
#
|
||||
#
|
||||
######################################################################
|
||||
|
||||
|
||||
#######################################################################
|
||||
#
|
||||
# DAEMON_SLEEP: when DenyHosts is run in daemon mode (--daemon flag)
|
||||
# this is the amount of time DenyHosts will sleep between polling
|
||||
# the SECURE_LOG. See the comments in the PURGE_DENY section (above)
|
||||
# for details on specifying this value or for complete details
|
||||
# refer to: http://denyhosts.sourceforge.net/faq.html#timespec
|
||||
#
|
||||
#
|
||||
DAEMON_SLEEP = 30s
|
||||
#
|
||||
#######################################################################
|
||||
|
||||
#######################################################################
|
||||
#
|
||||
# DAEMON_PURGE: How often should DenyHosts, when run in daemon mode,
|
||||
# run the purge mechanism to expire old entries in HOSTS_DENY
|
||||
# This has no effect if PURGE_DENY is blank.
|
||||
#
|
||||
DAEMON_PURGE = 1h
|
||||
#
|
||||
#######################################################################
|
||||
|
||||
|
||||
######### THESE SETTINGS ARE SPECIFIC TO ##########
|
||||
######### DAEMON SYNCHRONIZATION ##########
|
||||
|
||||
|
||||
#######################################################################
|
||||
#
|
||||
# Synchronization mode allows the DenyHosts daemon the ability
|
||||
# to periodically send and receive denied host data such that
|
||||
# DenyHosts daemons worldwide can automatically inform one
|
||||
# another regarding banned hosts. This mode is disabled by
|
||||
# default, you must uncomment SYNC_SERVER to enable this mode.
|
||||
#
|
||||
# for more information, please refer to:
|
||||
# http://denyhosts.sourceforge.net/faq.html#sync
|
||||
#
|
||||
#######################################################################
|
||||
|
||||
|
||||
#######################################################################
|
||||
#
|
||||
# SYNC_SERVER: The central server that communicates with DenyHosts
|
||||
# daemons. Currently, denyhosts.net is the only available server
|
||||
# however, in the future, it may be possible for organizations to
|
||||
# install their own server for internal network synchronization
|
||||
#
|
||||
# To disable synchronization (the default), do nothing.
|
||||
#
|
||||
# To enable synchronization, you must uncomment the following line:
|
||||
#SYNC_SERVER = http://xmlrpc.denyhosts.net:9911
|
||||
#
|
||||
#######################################################################
|
||||
|
||||
#######################################################################
|
||||
#
|
||||
# SYNC_INTERVAL: the interval of time to perform synchronizations if
|
||||
# SYNC_SERVER has been uncommented. The default is 1 hour.
|
||||
#
|
||||
SYNC_INTERVAL = 1h
|
||||
#
|
||||
#######################################################################
|
||||
|
||||
|
||||
#######################################################################
|
||||
#
|
||||
# SYNC_UPLOAD: allow your DenyHosts daemon to transmit hosts that have
|
||||
# been denied? This option only applies if SYNC_SERVER has
|
||||
# been uncommented.
|
||||
# The default is SYNC_UPLOAD = yes
|
||||
#
|
||||
#SYNC_UPLOAD = no
|
||||
#SYNC_UPLOAD = yes
|
||||
#
|
||||
#######################################################################
|
||||
|
||||
|
||||
#######################################################################
|
||||
#
|
||||
# SYNC_DOWNLOAD: allow your DenyHosts daemon to receive hosts that have
|
||||
# been denied by others? This option only applies if SYNC_SERVER has
|
||||
# been uncommented.
|
||||
# The default is SYNC_DOWNLOAD = yes
|
||||
#
|
||||
#SYNC_DOWNLOAD = no
|
||||
#SYNC_DOWNLOAD = yes
|
||||
#
|
||||
#
|
||||
#
|
||||
#######################################################################
|
||||
|
||||
#######################################################################
|
||||
#
|
||||
# SYNC_DOWNLOAD_THRESHOLD: If SYNC_DOWNLOAD is enabled this parameter
|
||||
# filters the returned hosts to those that have been blocked this many
|
||||
# times by others. That is, if set to 1, then if a single DenyHosts
|
||||
# server has denied an ip address then you will receive the denied host.
|
||||
#
|
||||
# See also SYNC_DOWNLOAD_RESILIENCY
|
||||
#
|
||||
#SYNC_DOWNLOAD_THRESHOLD = 10
|
||||
#
|
||||
# The default is SYNC_DOWNLOAD_THRESHOLD = 3
|
||||
#
|
||||
#SYNC_DOWNLOAD_THRESHOLD = 3
|
||||
#
|
||||
#######################################################################
|
||||
|
||||
#######################################################################
|
||||
#
|
||||
# SYNC_DOWNLOAD_RESILIENCY: If SYNC_DOWNLOAD is enabled then the
|
||||
# value specified for this option limits the downloaded data
|
||||
# to this resiliency period or greater.
|
||||
#
|
||||
# Resiliency is defined as the timespan between a hacker's first known
# attack and its most recent attack. Example:
|
||||
#
|
||||
# If the centralized denyhosts.net server records an attack at 2 PM
|
||||
# and then again at 5 PM, specifying a SYNC_DOWNLOAD_RESILIENCY = 4h
|
||||
# will not download this ip address.
|
||||
#
|
||||
# However, if the attacker is recorded again at 6:15 PM then the
|
||||
# ip address will be downloaded by your DenyHosts instance.
|
||||
#
|
||||
# This value is used in conjunction with the SYNC_DOWNLOAD_THRESHOLD
|
||||
# and only hosts that satisfy both values will be downloaded.
|
||||
# This value has no effect if SYNC_DOWNLOAD_THRESHOLD = 1
|
||||
#
|
||||
# The default is SYNC_DOWNLOAD_RESILIENCY = 5h (5 hours)
|
||||
#
|
||||
# Only obtain hackers that have been at it for 2 days or more:
|
||||
#SYNC_DOWNLOAD_RESILIENCY = 2d
|
||||
#
|
||||
# Only obtain hackers that have been at it for 5 hours or more:
|
||||
#SYNC_DOWNLOAD_RESILIENCY = 5h
|
||||
#
|
||||
#######################################################################
|
||||
|
||||
@@ -1,4 +0,0 @@
# run twice-daily rsync of downloads, but lock it
MAILTO=smooge@gmail.com,root@fedoraproject.org
00 11,23 * * * root /usr/local/bin/lock-wrapper sync-up-downloads "/usr/local/bin/sync-up-downloads"
@@ -1,2 +0,0 @@
# Run quick-fedora-mirror every 10 minutes
*/10 * * * * root flock -n -E0 /tmp/download-sync -c '/root/quick-fedora-mirror/quick-fedora-mirror -c /root/quick-fedora-mirror/quick-fedora-mirror.conf'
@@ -1,162 +0,0 @@
|
||||
#!/usr/bin/python
|
||||
|
||||
# Copyright (C) 2014 by Adrian Reber
|
||||
#
|
||||
# Permission is hereby granted, free of charge, to any person obtaining
|
||||
# a copy of this software and associated documentation files (the
|
||||
# "Software"), to deal in the Software without restriction, including
|
||||
# without limitation the rights to use, copy, modify, merge, publish,
|
||||
# distribute, sublicense, and/or sell copies of the Software, and to
|
||||
# permit persons to whom the Software is furnished to do so, subject to
|
||||
# the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be
|
||||
# included in all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
||||
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
||||
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
||||
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
import requests
|
||||
import time
|
||||
import sys
|
||||
import getopt
|
||||
|
||||
fedora = 'org.fedoraproject.prod.bodhi.updates.fedora.sync'
|
||||
epel = 'org.fedoraproject.prod.bodhi.updates.epel.sync'
|
||||
|
||||
branched = 'org.fedoraproject.prod.compose.branched.rsync.complete'
|
||||
rawhide = 'org.fedoraproject.prod.compose.rawhide.rsync.complete'
|
||||
|
||||
base_url = 'https://apps.fedoraproject.org/datagrepper/raw'
|
||||
|
||||
|
||||
topics = []
|
||||
# default time interval to query for syncs: 1 day
|
||||
delta = 86400
|
||||
# return 0 and no output if a sync happened during <delta>
|
||||
# if no sync happened 1 is returned
|
||||
quiet = False
|
||||
secondary = False
|
||||
rawtime = False
|
||||
|
||||
def usage():
|
||||
print
|
||||
print "last-sync queries the Fedora Message Bus if new data is available on the public servers"
|
||||
print
|
||||
print "Usage: last-sync [options]"
|
||||
print
|
||||
print "Options:"
|
||||
print " -a, --all query all possible releases (default)"
|
||||
print " (fedora, epel, branched, rawhide)"
|
||||
print " -f, --fedora only query if fedora has been updated during <delta>"
|
||||
print " -e, --epel only query if epel has been updated"
|
||||
print " -b, --branched only query if the branched off release"
|
||||
print " has been updated"
|
||||
print " -r, --rawhide only query if rawhide has been updated"
|
||||
print " -q, --quiet do not print out any informations"
|
||||
print " -t, --time print date in seconds since 1970-01-01"
|
||||
print " -d DELTA, --delta=DELTA specify the time interval which should be used"
|
||||
print " for the query (default: 86400)"
|
||||
|
||||
|
||||
# -a -f -e -b -r -s -q -d
|
||||
def parse_args():
|
||||
global topics
|
||||
global delta
|
||||
global quiet
|
||||
global secondary
|
||||
global rawtime
|
||||
try:
|
||||
opts, args = getopt.getopt(sys.argv[1:], "afhebrsqtd:", ["all", "fedora", "epel", "rawhide", "branched", "secondary", "quiet", "time", "delta="])
|
||||
except getopt.GetoptError as err:
|
||||
print str(err)
|
||||
usage()
|
||||
sys.exit(2)
|
||||
|
||||
for option, argument in opts:
|
||||
if option in ("-a", "--all"):
|
||||
topics = [ fedora, epel, branched, rawhide ]
|
||||
secondary = True
|
||||
if option in ("-f", "--fedora"):
|
||||
topics.append(fedora)
|
||||
if option in ("-e", "--epel"):
|
||||
topics.append(epel)
|
||||
if option in ("-r", "--rawhide"):
|
||||
topics.append(rawhide)
|
||||
if option in ("-b", "--branched"):
|
||||
topics.append(branched)
|
||||
if option in ("-s", "--secondary"):
|
||||
topics.append(rawhide)
|
||||
secondary = True
|
||||
if option in ("-q", "--quiet"):
|
||||
quiet = True
|
||||
if option in ("-t", "--time"):
|
||||
rawtime = True
|
||||
if option in ("-d", "--delta"):
|
||||
delta = argument
|
||||
if option in ("-h"):
|
||||
usage();
|
||||
sys.exit(0)
|
||||
|
||||
|
||||
|
||||
def getKey(item):
|
||||
return item[1]
|
||||
|
||||
def create_url(url, topics, delta):
|
||||
topic = ""
|
||||
for i in topics:
|
||||
topic += "&topic=%s" % i
|
||||
return '%s?delta=%s%s' % (url, delta, topic)
|
||||
|
||||
parse_args()
|
||||
|
||||
if topics == []:
|
||||
topics = [ fedora, epel, branched, rawhide ]
|
||||
secondary = True
|
||||
|
||||
i = 0
|
||||
data = None
|
||||
while i < 5:
|
||||
try:
|
||||
data = requests.get(create_url(base_url, topics, delta), timeout=1).json()
|
||||
break
|
||||
except:
|
||||
pass
|
||||
|
||||
if not data:
|
||||
sys.exit(1)
|
||||
|
||||
repos = []
|
||||
|
||||
for i in range(0, data['count']):
|
||||
try:
|
||||
repo = "%s-%s" % (data['raw_messages'][i]['msg']['repo'], data['raw_messages'][i]['msg']['release'])
|
||||
except:
|
||||
# the rawhide and branch sync message has no repo information
|
||||
arch = data['raw_messages'][i]['msg']['arch']
|
||||
if arch == '':
|
||||
arch = 'primary'
|
||||
elif not secondary:
|
||||
continue
|
||||
repo = "%s-%s" % (data['raw_messages'][i]['msg']['branch'], arch)
|
||||
|
||||
repos.append([repo, data['raw_messages'][i]['timestamp']])
|
||||
|
||||
if quiet == False:
|
||||
for repo, timestamp in sorted(repos, key=getKey):
|
||||
if rawtime == True:
|
||||
# this is useful if you want to compare the timestamp in seconds versus string
|
||||
print "%s: %s" % (repo, timestamp)
|
||||
else:
|
||||
print "%s: %s" % (repo, time.strftime("%a, %d %b %Y %H:%M:%S +0000", time.gmtime(timestamp)))
|
||||
|
||||
if data['count'] > 0:
|
||||
sys.exit(0)
|
||||
else:
|
||||
sys.exit(1)
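Given the exit-code behaviour above, a hedged usage sketch (the flag names come from the option parser in this script):

    # exit 0 quietly if an EPEL sync was seen on the message bus in the last hour
    last-sync --epel --quiet --delta=3600 && echo 'epel changed, kick off the rsync'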
|
||||
@@ -1,27 +0,0 @@
#!/bin/bash

##
## This script is used to sync data from main download servers to
## secondary server at ibiblio.
##

RSYNC='/usr/bin/rsync'
RS_OPT="-avSHP --numeric-ids"
RS_DEADLY="--delete --delete-excluded --delete-delay --delay-updates"
ALT_EXCLUDES="--exclude deltaisos/archive --exclude 22_Alpha* --exclude 22_Beta*"
EPL_EXCLUDES=""
FED_EXCLUDES=""

SERVER=dl.fedoraproject.org

# http://dl.fedoraproject.org/pub/alt/stage/
${RSYNC} ${RS_OPT} ${RS_DEADLY} ${ALT_EXCLUDES} ${SERVER}::fedora-alt/stage/ /srv/pub/alt/stage/ | tail -n2 | logger -p local0.notice -t rsync_updates_alt_stg
# http://dl.fedoraproject.org/pub/alt/bfo/
${RSYNC} ${RS_OPT} ${RS_DEADLY} ${ALT_EXCLUDES} ${SERVER}::fedora-alt/bfo/ /srv/pub/alt/bfo/ | tail -n2 | logger -p local0.notice -t rsync_updates_alt_bfo
# http://dl.fedoraproject.org/pub/epel/
${RSYNC} ${RS_OPT} ${RS_DEADLY} ${EPL_EXCLUDES} ${SERVER}::fedora-epel/ /srv/pub/epel/ | tail -n2 | logger -p local0.notice -t rsync_updates_epel
# http://dl.fedoraproject.org/pub/fedora/
${RSYNC} ${RS_OPT} ${RS_DEADLY} ${FED_EXCLUDES} ${SERVER}::fedora-enchilada0/ /srv/pub/fedora/ | tail -n2 | logger -p local0.notice -t rsync_updates_fedora

# Let MM know I'm all up to date
#/usr/bin/report_mirror
@@ -1,66 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
##
|
||||
## This script is used to sync data from main download servers to
|
||||
## secondary server at ibiblio.
|
||||
##
|
||||
|
||||
RSYNC='/usr/bin/rsync'
|
||||
RS_OPT="-avSHP --numeric-ids "
|
||||
RS_DEADLY="--delete --delete-excluded --delete-delay --delay-updates"
|
||||
ALT_EXCLUDES=""
|
||||
EPL_EXCLUDES=""
|
||||
FED_EXCLUDES=""
|
||||
|
||||
DATE_EPEL='/root/last-epel-sync'
|
||||
DATE_FED='/root/last-fed-sync'
|
||||
DATE_ARCHIVE='/root/last-archive-sync'
|
||||
DATE_ALT='/root/last-alt-sync'
|
||||
DATE_SECOND='/root/last-second-sync'
|
||||
|
||||
for i in ${DATE_EPEL} ${DATE_FED} ${DATE_ARCHIVE} ${DATE_ALT} ${DATE_SECOND}; do
|
||||
touch ${i}
|
||||
done
|
||||
|
||||
LAST_SYNC='/usr/local/bin/last-sync'
|
||||
|
||||
SERVER=dl.fedoraproject.org
|
||||
|
||||
function sync_stuff() {
|
||||
if [[ $# -ne 5 ]]; then
|
||||
echo "Illegal number of arguments to sync_stuff: " $#
|
||||
exit 1
|
||||
fi
|
||||
DATE_FILE=$1
|
||||
LOGGER_NAME=$2
|
||||
RSYNC_FROM=$3
|
||||
RSYNC_TO=$4
|
||||
FLAG="$5"
|
||||
|
||||
CURDATE=$( date +%s )
|
||||
if [[ -s ${DATE_FILE} ]]; then
|
||||
LASTRUN=$( cat ${DATE_FILE} | awk '{print int($NF)}' )
|
||||
else
|
||||
LASTRUN=$( date +%s --date="Jan 1 00:00:00 UTC 2007" )
|
||||
fi
|
||||
DELTA=`echo ${CURDATE}-${LASTRUN} | bc`
|
||||
|
||||
${LAST_SYNC} -d ${DELTA} -q ${FLAG}
|
||||
|
||||
if [ "$?" -eq "0" ]; then
|
||||
${RSYNC} ${RS_OPT} ${RS_DEADLY} ${ALT_EXCLUDES} ${SERVER}::${RSYNC_FROM} ${RSYNC_TO} | tail -n2 | logger -p local0.notice -t ${LOGGER_NAME}
|
||||
echo ${CURDATE} > ${DATE_FILE}
|
||||
else
|
||||
logger -p local0.notice -t ${LOGGER_NAME} "No change found. Not syncing"
|
||||
fi
|
||||
}
|
||||
|
||||
|
||||
sync_stuff ${DATE_EPEL} rsync_epel fedora-epel0 /srv/pub/epel/ "-e"
|
||||
sync_stuff ${DATE_FED} rsync_fedora fedora-enchilada0 /srv/pub/fedora/ "-f"
|
||||
sync_stuff ${DATE_ARCHIVE} rsync_archive fedora-archive0 /srv/pub/archive/ "-f"
|
||||
sync_stuff ${DATE_ALT} rsync_alt fedora-alt0 /srv/pub/alt/ "-f"
|
||||
sync_stuff ${DATE_SECOND} rsync_second fedora-secondary0 /srv/pub/fedora-secondary/ "-f"
|
||||
|
||||
# Let MM know I'm all up to date
|
||||
#/usr/bin/report_mirror
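A worked sketch of the DELTA computation in sync_stuff above, with hypothetical timestamps:

    CURDATE=1700086400                        # "now", from date +%s
    LASTRUN=1700000000                        # timestamp saved by the previous successful run
    DELTA=$(echo ${CURDATE}-${LASTRUN} | bc)  # 86400 seconds, so last-sync is asked about the past day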
|
||||
@@ -1,28 +0,0 @@
#!/bin/bash

##
## This script is used to sync data from main download servers to
## secondary server at ibiblio.
##

RSYNC='/usr/bin/rsync'
RS_OPT="-avSHP --numeric-ids"
RS_DEADLY="--delete --delete-excluded --delete-delay --delay-updates"
ALT_EXCLUDES=""
EPL_EXCLUDES=""
FED_EXCLUDES=""

LAST_SYNC='/usr/local/bin/last-sync'

SERVER=dl.fedoraproject.org

# Alt
${RSYNC} ${RS_OPT} ${RS_DEADLY} ${ALT_EXCLUDES} ${SERVER}::fedora-alt0/ /srv/pub/alt/ | tail -n2 | logger -p local0.notice -t rsync_alt
# Secondary
${RSYNC} ${RS_OPT} ${RS_DEADLY} ${ALT_EXCLUDES} ${SERVER}::fedora-secondary/ /srv/pub/fedora-secondary/ | tail -n2 | logger -p local0.notice -t rsync_2nd
# Archives
${RSYNC} ${RS_OPT} ${RS_DEADLY} ${ALT_EXCLUDES} ${SERVER}::fedora-archive/ /srv/pub/archive/ | tail -n2 | logger -p local0.notice -t rsync_archive


# Let MM know I'm all up to date
#/usr/bin/report_mirror
1
files/fas-client/fas-client.cron
Normal file
@@ -0,0 +1 @@
*/10 * * * * root /usr/local/bin/lock-wrapper fasClient "/bin/sleep $(($RANDOM \% 180)); /usr/bin/fasClient -i | /usr/local/bin/nag-once fassync 1d 2>&1"
@@ -1,10 +1,6 @@
[global]
; url - Location to fas server
{% if env == "staging" %}
url = https://admin.stg.fedoraproject.org/accounts/
{% else %}
url = https://admin.fedoraproject.org/accounts/
{% endif %}

; temp - Location to generate files while user creation process is happening
temp = /var/db
@@ -30,7 +26,7 @@ cla_group = cla_done
; in 'groups'

; groups that should have a shell account on this system.
{% if fas_client_groups is defined %}
{% if fas_client_groups %}
groups = sysadmin-main,{{ fas_client_groups }}
{% else %}
groups = sysadmin-main
@@ -44,7 +40,7 @@ restricted_groups =
; need to disable password based logins in order for this value to have any
; security meaning. Group types can be placed here as well, for example
; @hg,@git,@svn
{% if fas_client_ssh_groups is defined %}
{% if fas_client_ssh_groups %}
ssh_restricted_groups = {{ fas_client_ssh_groups }}
{% else %}
ssh_restricted_groups =
@@ -70,14 +66,14 @@ home_backup_dir = /home/fedora.bak
; is a powerful way to restrict access to a machine. An alternative example
; could be given to people who should only have cvs access on the machine.
; setting this value to "/usr/bin/cvs server" would do this.
{% if fas_client_restricted_app is defined %}
{% if fas_client_restricted_app %}
ssh_restricted_app = {{ fas_client_restricted_app }}
{% else %}
ssh_restricted_app =
{% endif %}

; ssh_admin_app - This is the path to an app that an admin is allowed to use.
{% if fas_client_admin_app is defined %}
{% if fas_client_admin_app %}
ssh_admin_app = {{ fas_client_admin_app }}
{% else %}
ssh_admin_app =
47
files/fedmsg/base.py.j2
Normal file
@@ -0,0 +1,47 @@

config = dict(
    # Set this to dev if you're hacking on fedmsg or an app locally.
    # Set to stg or prod if running in the Fedora Infrastructure.
    {% if env == 'staging' %}
    environment="stg",
    {% else %}
    environment="prod",
    {% endif %}

    # This used to be set to 1 for safety, but it turns out it was
    # excessive. It is the number of seconds that fedmsg should sleep
    # after it has initialized, but before it begins to try and send any
    # messages. If set to a non-zero value, this will slow down one-off
    # fedmsg scripts like the git post-receive hook and pkgdb2branch.
    # If we are experiencing message-loss problems, one of the first things
    # to try should be to turn this number up to a non-zero value. '1' should
    # be more than sufficient.
    post_init_sleep=0.4,

    # This is the number of milliseconds to wait before timing out on
    # connections.. notably to the fedmsg-relay in the event that it has
    # crashed.
    zmq_linger=2000,

    # Default is 0
    high_water_mark=0,
    io_threads=1,

    # We almost always want the fedmsg-hub to be sending messages with zmq as
    # opposed to amqp or stomp. The only exception will be the bugzilla
    # amqp<->zmq bridge service.
    zmq_enabled=True,

    # When subscribing to messages, we want to allow splats ('*') so we tell the
    # hub to not be strict when comparing messages topics to subscription
    # topics.
    zmq_strict=False,

    # See the following
    #   - http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html
    #   - http://api.zeromq.org/3-2:zmq-setsockopt
    zmq_tcp_keepalive=1,
    zmq_tcp_keepalive_cnt=3,
    zmq_tcp_keepalive_idle=60,
    zmq_tcp_keepalive_intvl=5,
)
13
files/fedmsg/endpoints-fedbadges.py.j2
Normal file
@@ -0,0 +1,13 @@
{% if env == 'staging' %}
suffix = 'stg.phx2.fedoraproject.org'
{% else %}
suffix = 'phx2.fedoraproject.org'
{% endif %}

config = dict(
    endpoints={
        "fedbadges.badges-backend01": [
            "tcp://badges-backend01.%s:3000" % suffix,
        ],
    },
)
130
files/fedmsg/endpoints.py.j2
Normal file
@@ -0,0 +1,130 @@
{% if env == 'staging' %}
suffix = 'stg.phx2.fedoraproject.org'
non_phx_suffix = 'stg.fedoraproject.org'
{% else %}
suffix = 'phx2.fedoraproject.org'
non_phx_suffix = 'fedoraproject.org'
vpn_suffix = 'vpn.fedoraproject.org'
{% endif %}

config = dict(
    # This is a dict of possible addresses from which fedmsg can send
    # messages. fedmsg.init(...) requires that a 'name' argument be passed
    # to it which corresponds with one of the keys in this dict.
    endpoints = {
        # For message producers, fedmsg will try to guess the
        # name of its calling module to determine which endpoint definition
        # to use. This can be overridden by explicitly providing the name in
        # the initial call to fedmsg.init(...).
        "bodhi.app01": [
            "tcp://app01.%s:300%i" % (suffix, i)
            for i in range(8)
        ],
        "bodhi.app02": [
            "tcp://app02.%s:300%i" % (suffix, i)
            for i in range(8)
        ],
        "bodhi.releng01": [
            "tcp://releng01.%s:3000" % suffix,
            "tcp://releng01.%s:3001" % suffix,
        ],
        "bodhi.releng02": [
            "tcp://releng02.%s:3000" % suffix,
            "tcp://releng02.%s:3001" % suffix,
        ],
        {% if not env == 'staging' %}
        "bodhi.app03": [
            "tcp://app03.%s:300%i" % (suffix, i)
            for i in range(8)
        ],
        "bodhi.app04": [
            "tcp://app04.%s:300%i" % (suffix, i)
            for i in range(8)
        ],
        "bodhi.app05": [
            "tcp://app05.%s:300%i" % (non_phx_suffix, i)
            for i in range(8)
        ],
        "bodhi.app06": [
            "tcp://app06.%s:300%i" % (non_phx_suffix, i)
            for i in range(8)
        ],
        "bodhi.app07": [
            "tcp://app07.%s:300%i" % (suffix, i)
            for i in range(8)
        ],
        "bodhi.app08": [
            "tcp://app08.%s:300%i" % (non_phx_suffix, i)
            for i in range(8)
        ],
        "bodhi.releng04": [
            "tcp://releng04.%s:3000" % suffix,
            "tcp://releng04.%s:3001" % suffix,
        ],
        "bodhi.relepel01": [
            "tcp://relepel01.%s:3000" % suffix,
            "tcp://relepel01.%s:3001" % suffix,
        ],
        {% endif %}
        # FAS is a little out of the ordinary. It has 32 endpoints instead of
        # the usual 8 since there are so many mod_wsgi processes for it.
        "fas.fas01": [
            "tcp://fas01.%s:30%02i" % (suffix, i)
            for i in range(32)
        ],
        {% if env != 'staging' %}
        "fas.fas02": [
            "tcp://fas02.%s:30%02i" % (suffix, i)
            for i in range(32)
        ],
        "fas.fas03": [
            "tcp://fas03.%s:30%02i" % (suffix, i)
            for i in range(32)
        ],
        {% endif %}
        # Well, fedoratagger needs 32 endpoints too, just like FAS.
        "fedoratagger.packages01": [
            "tcp://packages01.%s:30%02i" % (suffix, i)
            for i in range(32)
        ],
        {% if env != 'staging' %}
        "fedoratagger.packages02": [
            "tcp://packages02.%s:30%02i" % (suffix, i)
            for i in range(32)
        ],
        {% endif %}
        "busmon_consumers.busgateway01": [
            "tcp://busgateway01.%s:3000" % suffix,
        ],
        {% if env != 'staging' %}
        "supybot.value03": [
            "tcp://value03.%s:3000" % suffix,
        ],
        {% endif %}
        # Askbot runs as 6 processes with 1 thread each.
        "askbot.ask01": [
            "tcp://ask01.%s:30%02i" % (suffix, i)
            for i in range(6)
        ],

        # Askbot runs as 6 processes with 1 thread each.
        "askbot.ask02": [
            "tcp://ask02.%s:30%02i" % (suffix, i)
            for i in range(6)
        ],

        {% if env != 'staging' %}
        # fedorahosted trac runs as 4 processes with 4 threads each.
        "trac.hosted03": [
            "tcp://hosted03.%s:30%02i" % (vpn_suffix, i)
            for i in range(16)
        ],
        "trac.hosted04": [
            "tcp://hosted04.%s:30%02i" % (vpn_suffix, i)
            for i in range(16)
        ],
        {% endif %}

        # koji is not listed here since it publishes to the fedmsg-relay
    },
)
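The list comprehensions above generate one ZeroMQ endpoint per worker process. As a quick illustration only (plain Python, with the production suffix assumed for the sketch), the "bodhi.app01" entry expands to eight consecutive ports:

    # Illustrative sketch: what one comprehension in endpoints.py.j2 evaluates
    # to once the template is rendered with the production suffix.
    suffix = 'phx2.fedoraproject.org'  # assumed value for this example

    bodhi_app01 = [
        "tcp://app01.%s:300%i" % (suffix, i)
        for i in range(8)
    ]

    # bodhi_app01 now holds:
    #   ['tcp://app01.phx2.fedoraproject.org:3000',
    #    ...
    #    'tcp://app01.phx2.fedoraproject.org:3007']
    print(bodhi_app01)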
32
files/fedmsg/logging.py.j2
Normal file
@@ -0,0 +1,32 @@
# Setup fedmsg logging.
# See the following for constraints on this format http://bit.ly/Xn1WDn
config = dict(
    logging=dict(
        version=1,
        formatters=dict(
            bare={
                "format": "%(message)s",
            },
        ),
        handlers=dict(
            console={
                "class": "logging.StreamHandler",
                "formatter": "bare",
                "level": "DEBUG",
                "stream": "ext://sys.stdout",
            }
        ),
        loggers=dict(
            fedmsg={
                "level": "DEBUG",
                "propagate": False,
                "handlers": ["console"],
            },
            moksha={
                "level": "DEBUG",
                "propagate": False,
                "handlers": ["console"],
            },
        ),
    ),
)
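The nested dict above follows the standard library's dictConfig schema (version 1), which is the format the linked document describes. fedmsg consumes it itself, but as a minimal sketch of what that structure means, the same dict could be applied directly with logging.config.dictConfig (the inline dict below is a trimmed copy for illustration, not the rendered file):

    # Minimal sketch: applying a dictConfig-style logging dict directly.
    import logging
    import logging.config

    config = dict(
        logging=dict(
            version=1,
            formatters=dict(bare={"format": "%(message)s"}),
            handlers=dict(console={
                "class": "logging.StreamHandler",
                "formatter": "bare",
                "level": "DEBUG",
                "stream": "ext://sys.stdout",
            }),
            loggers=dict(fedmsg={
                "level": "DEBUG",
                "propagate": False,
                "handlers": ["console"],
            }),
        ),
    )

    logging.config.dictConfig(config["logging"])
    logging.getLogger("fedmsg").debug("logging configured")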
46
files/fedmsg/pkgdb.py.j2
Normal file
@@ -0,0 +1,46 @@
{% if env == 'staging' %}
suffix = 'stg.phx2.fedoraproject.org'
non_phx_suffix = 'stg.fedoraproject.org'
{% else %}
suffix = 'phx2.fedoraproject.org'
non_phx_suffix = 'fedoraproject.org'
{% endif %}

config = dict(
    endpoints={
        "pkgdb.app01": [
            "tcp://app01.%s:301%i" % (suffix, i)
            for i in range(6)
        ],
        "pkgdb.app02": [
            "tcp://app02.%s:301%i" % (suffix, i)
            for i in range(6)
        ],
        {% if not env == 'staging' %}
        "pkgdb.app03": [
            "tcp://app03.%s:301%i" % (suffix, i)
            for i in range(6)
        ],
        "pkgdb.app04": [
            "tcp://app04.%s:301%i" % (suffix, i)
            for i in range(6)
        ],
        "pkgdb.app05": [
            "tcp://app05.%s:301%i" % (non_phx_suffix, i)
            for i in range(6)
        ],
        "pkgdb.app06": [
            "tcp://app06.%s:301%i" % (non_phx_suffix, i)
            for i in range(6)
        ],
        "pkgdb.app07": [
            "tcp://app07.%s:301%i" % (suffix, i)
            for i in range(6)
        ],
        "pkgdb.app08": [
            "tcp://app08.%s:301%i" % (non_phx_suffix, i)
            for i in range(6)
        ],
        {% endif %}
    },
)
39
files/fedmsg/relay.py.j2
Normal file
@@ -0,0 +1,39 @@
{% if env == 'staging' %}
suffix = 'stg.phx2.fedoraproject.org'
non_phx_suffix = 'stg.fedoraproject.org'
{% else %}
suffix = 'phx2.fedoraproject.org'
non_phx_suffix = 'fedoraproject.org'
{% endif %}

# This is just an extension of fedmsg.d/endpoints.py. This dict
# will get merged in with the other.
config = dict(
    endpoints={
        # This is the output side of the relay to which all other
        # services can listen.
        "relay_outbound": [
            # Messages from inside phx2 and the vpn emerge here
            "tcp://app01.%s:3999" % suffix,

            # Messages from coprs and secondary arch composes emerge here
            "tcp://busgateway01.%s:3999" % suffix,
        ],
    },
    # This is the address of an active->passive relay. It is used for the
    # fedmsg-logger command which requires another service with a stable
    # listening address for it to send messages to.
    # It is also used by the git-hook, for the same reason.
    # It is also used by the mediawiki php plugin which, due to the oddities of
    # php, can't maintain a single passive-bind endpoint of its own.
    relay_inbound=[
        # Scripts inside phx2 connect here
        "tcp://app01.%s:3998" % suffix,

        # Scripts from the vpn (people03) connect here
        "tcp://app01.vpn.fedoraproject.org:3998",

        # Scripts from outside connect here (coprs, secondary arch composes)
        "tcp://busgateway01.%s:9941" % suffix,
    ],
)
325
files/fedmsg/ssl.py.j2
Normal file
@@ -0,0 +1,325 @@

{% if env == 'staging' %}
suffix = "stg.phx2.fedoraproject.org"
app_hosts = [
    "app01.stg.phx2.fedoraproject.org",
    "app02.stg.phx2.fedoraproject.org",
]
topic_prefix = "org.fedoraproject.stg."
{% else %}
suffix = "phx2.fedoraproject.org"
app_hosts = [
    "app01.phx2.fedoraproject.org",
    "app02.phx2.fedoraproject.org",
    "app03.phx2.fedoraproject.org",
    "app04.phx2.fedoraproject.org",
    "app05.fedoraproject.org",
    "app06.fedoraproject.org",
    "app07.phx2.fedoraproject.org",
    "app08.fedoraproject.org",
]
topic_prefix = "org.fedoraproject.prod."
{% endif %}

vpn_suffix = "vpn.fedoraproject.org"

config = dict(
    sign_messages=True,
    validate_signatures=True,
    ssldir="/etc/pki/fedmsg",

    crl_location="https://fedoraproject.org/fedmsg/crl.pem",
    crl_cache="/var/run/fedmsg/crl.pem",
    crl_cache_expiry=86400,  # Daily

    certnames=dict(
        [
            ("shell.app0%i" % i, "shell-%s" % app_hosts[i-1])
            for i in range(1, len(app_hosts) + 1)
        ] + [
            ("bodhi.app0%i" % i, "bodhi-%s" % app_hosts[i-1])
            for i in range(1, len(app_hosts) + 1)
        ] + [
            ("pkgdb.app0%i" % i, "pkgdb-%s" % app_hosts[i-1])
            for i in range(1, len(app_hosts) + 1)
        ] + [
            ("mediawiki.app0%i" % i, "mediawiki-%s" % app_hosts[i-1])
            for i in range(1, len(app_hosts) + 1)
        ] + [
            ("shell.fas0%i" % i, "shell-fas0%i.%s" % (i, suffix))
            for i in range(1, 4)
        ] + [
            ("fas.fas0%i" % i, "fas-fas0%i.%s" % (i, suffix))
            for i in range(1, 4)
        ] + [
            ("shell.packages0%i" % i, "shell-packages0%i.%s" % (i, suffix))
            for i in range(1, 3)
        ] + [
            ("fedoratagger.packages0%i" % i, "fedoratagger-packages0%i.%s" % (i, suffix))
            for i in range(1, 3)
        ] + [
            ("shell.pkgs0%i" % i, "shell-pkgs0%i.%s" % (i, suffix))
            for i in range(1, 2)
        ] + [
            ("scm.pkgs0%i" % i, "scm-pkgs0%i.%s" % (i, suffix))
            for i in range(1, 2)
        ] + [
            ("lookaside.pkgs0%i" % i, "lookaside-pkgs0%i.%s" % (i, suffix))
            for i in range(1, 2)
        ] + [
            ("shell.relepel01", "shell-relepel01.%s" % suffix),
            ("shell.releng01", "shell-releng01.%s" % suffix),
            ("shell.releng02", "shell-releng02.%s" % suffix),
            ("shell.releng03", "shell-releng03.%s" % suffix),
            ("shell.releng04", "shell-releng04.%s" % suffix),
            ("bodhi.relepel01", "bodhi-relepel01.%s" % suffix),
            ("bodhi.releng01", "bodhi-releng01.%s" % suffix),
            ("bodhi.releng02", "bodhi-releng02.%s" % suffix),
            ("bodhi.releng03", "bodhi-releng03.%s" % suffix),
            ("bodhi.releng04", "bodhi-releng04.%s" % suffix),
        ] + [
            ("busmon_consumers.busgateway01", "busmon-busgateway01.%s" % suffix),
            ("shell.busgateway01", "shell-busgateway01.%s" % suffix),
        ] + [
            ("shell.value01", "shell-value01.%s" % suffix),
            ("shell.value03", "shell-value03.%s" % suffix),
            ("supybot.value03", "supybot-value03.%s" % suffix),
        ] + [
            ("koji.koji04", "koji-koji04.%s" % suffix),
            ("koji.koji01", "koji-koji01.%s" % suffix),
            ("koji.koji03", "koji-koji03.%s" % suffix),
            ("shell.koji04", "shell-koji04.%s" % suffix),
            ("shell.koji01", "shell-koji01.%s" % suffix),
            ("shell.koji03", "shell-koji03.%s" % suffix),
        ] + [
            ("nagios.noc01", "nagios-noc01.%s" % suffix),
            ("shell.noc01", "shell-noc01.%s" % suffix),
        ] + [
            ("git.hosted03", "git-hosted03.%s" % vpn_suffix),
            ("git.hosted04", "git-hosted04.%s" % vpn_suffix),
            ("trac.hosted03", "trac-hosted03.%s" % vpn_suffix),
            ("trac.hosted04", "trac-hosted04.%s" % vpn_suffix),
            ("shell.hosted03", "shell-hosted03.%s" % vpn_suffix),
            ("shell.hosted04", "shell-hosted04.%s" % vpn_suffix),
        ] + [
            ("shell.lockbox01", "shell-lockbox01.%s" % suffix),
            ("announce.lockbox01", "announce-lockbox01.%s" % suffix),
        ] + [
            # These first two entries are here to placate a bug in
            # python-askbot-fedmsg-0.0.4. They can be removed once
            # python-askbot-fedmsg-0.0.5 hits town.
            ("askbot.ask01.phx2.fedoraproject.org", "askbot-ask01.%s" % suffix),
            ("askbot.ask01.stg.phx2.fedoraproject.org", "askbot-ask01.%s" % suffix),

            ("askbot.ask01", "askbot-ask01.%s" % suffix),
            ("shell.ask01", "shell-ask01.%s" % suffix),

            ("askbot.ask02", "askbot-ask02.%s" % suffix),
            ("shell.ask02", "shell-ask02.%s" % suffix),

            ("fedbadges.badges-backend01", "fedbadges-badges-backend01.%s" % suffix),
            ("shell.badges-backend01", "shell-badges-backend01.%s" % suffix),
        ]),
    routing_policy={
        # The gist here is that only messages signed by the
        # bodhi-app0{1,2,3,4,5,6,7,8} certificates may bear the
        # "org.fedoraproject.prod.bodhi.update.request.stable" topic, or else
        # they fail validation and are either dropped or marked as invalid
        # (depending on the consumer's wishes).
        #
        # There is another option that we do not set. If `routing_nitpicky` is
        # set to True, then a given message's topic *must* appear in this list
        # in order for it to pass validation. For instance, we have
        # routing_nitpicky set to False by default and no
        # "org.fedoraproject.prod.logger.log" topics appear in this policy,
        # therefore, any message bearing that topic and *any* certificate signed
        # by our CA may pass validation.
        #
        topic_prefix + "bodhi.update.request.stable": [
            "bodhi-%s" % app_hosts[i-1]
            for i in range(1, len(app_hosts) + 1)
        ],
        topic_prefix + "bodhi.update.request.testing": [
            "bodhi-%s" % app_hosts[i-1]
            for i in range(1, len(app_hosts) + 1)
        ],
        topic_prefix + "bodhi.update.request.unpush": [
            "bodhi-%s" % app_hosts[i-1]
            for i in range(1, len(app_hosts) + 1)
        ],
        topic_prefix + "bodhi.update.comment": [
            "bodhi-%s" % app_hosts[i-1]
            for i in range(1, len(app_hosts) + 1)
        ],
        topic_prefix + "bodhi.buildroot_override.tag": [
            "bodhi-%s" % app_hosts[i-1]
            for i in range(1, len(app_hosts) + 1)
        ],
        topic_prefix + "bodhi.buildroot_override.untag": [
            "bodhi-%s" % app_hosts[i-1]
            for i in range(1, len(app_hosts) + 1)
        ],
        topic_prefix + "bodhi.mashtask.mashing": [
            "bodhi-releng04.%s" % suffix,
            "bodhi-relepel01.%s" % suffix,
        ],
        topic_prefix + "bodhi.mashtask.complete": [
            "bodhi-releng04.%s" % suffix,
            "bodhi-relepel01.%s" % suffix,
        ],


        # Compose (rel-eng) messages (use the bodhi certs)
        topic_prefix + "compose.rawhide.start": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.rawhide.complete": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.rawhide.mash.start": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.rawhide.mash.complete": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.rawhide.rsync.start": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.rawhide.rsync.complete": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.branched.start": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.branched.complete": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.branched.pungify.start": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.branched.pungify.complete": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.branched.mash.start": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.branched.mash.complete": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.branched.rsync.start": [
            "bodhi-releng03.%s" % suffix,
        ],
        topic_prefix + "compose.branched.rsync.complete": [
            "bodhi-releng03.%s" % suffix,
        ],


        #FAS messages
        topic_prefix + "fas.user.create": [
            "fas-fas0%i.%s" % (i, suffix) for i in range(1, 4)
        ],
        topic_prefix + "fas.user.update": [
            "fas-fas0%i.%s" % (i, suffix) for i in range(1, 4)
        ],
        topic_prefix + "fas.group.edit": [
            "fas-fas0%i.%s" % (i, suffix) for i in range(1, 4)
        ],
        topic_prefix + "fas.group.update": [
            "fas-fas0%i.%s" % (i, suffix) for i in range(1, 4)
        ],
        topic_prefix + "fas.group.create": [
            "fas-fas0%i.%s" % (i, suffix) for i in range(1, 4)
        ],
        topic_prefix + "fas.role.update": [
            "fas-fas0%i.%s" % (i, suffix) for i in range(1, 4)
        ],
        topic_prefix + "fas.group.member.remove": [
            "fas-fas0%i.%s" % (i, suffix) for i in range(1, 4)
        ],
        topic_prefix + "fas.group.member.sponsor": [
            "fas-fas0%i.%s" % (i, suffix) for i in range(1, 4)
        ],
        topic_prefix + "fas.group.member.apply": [
            "fas-fas0%i.%s" % (i, suffix) for i in range(1, 4)
        ],

        # Git/SCM messages
        topic_prefix + "git.receive": [
            "scm-pkgs01.%s" % suffix,
        ],
        topic_prefix + "git.lookaside.new": [
            "lookaside-pkgs01.%s" % suffix,
        ],

        # Tagger messages
        topic_prefix + "fedoratagger.tag.update": [
            "fedoratagger-packages0%i.%s" % (i, suffix) for i in range(1, 3)
        ],
        topic_prefix + "fedoratagger.tag.create": [
            "fedoratagger-packages0%i.%s" % (i, suffix) for i in range(1, 3)
        ],
        topic_prefix + "fedoratagger.user.rank.update": [
            "fedoratagger-packages0%i.%s" % (i, suffix) for i in range(1, 3)
        ],

        # Mediawiki messages
        topic_prefix + "wiki.article.edit": [
            "mediawiki-%s" % app_hosts[i-1]
            for i in range(1, len(app_hosts) + 1)
        ],
        topic_prefix + "wiki.upload.complete": [
            "mediawiki-%s" % app_hosts[i-1]
            for i in range(1, len(app_hosts) + 1)
        ],

        # Pkgdb messages
        topic_prefix + "pkgdb.acl.update": [
            "pkgdb-%s" % app_hosts[i-1]
            for i in range(1, len(app_hosts) + 1)
        ],
        topic_prefix + "pkgdb.acl.request.toggle": [
            "pkgdb-%s" % app_hosts[i-1]
            for i in range(1, len(app_hosts) + 1)
        ],
        topic_prefix + "pkgdb.acl.user.remove": [
            "pkgdb-%s" % app_hosts[i-1]
            for i in range(1, len(app_hosts) + 1)
        ],
        topic_prefix + "pkgdb.owner.update": [
            "pkgdb-%s" % app_hosts[i-1]
            for i in range(1, len(app_hosts) + 1)
        ],
        topic_prefix + "pkgdb.package.new": [
            "pkgdb-%s" % app_hosts[i-1]
            for i in range(1, len(app_hosts) + 1)
        ],
        topic_prefix + "pkgdb.package.update": [
            "pkgdb-%s" % app_hosts[i-1]
            for i in range(1, len(app_hosts) + 1)
        ],
        topic_prefix + "pkgdb.package.retire": [
            "pkgdb-%s" % app_hosts[i-1]
            for i in range(1, len(app_hosts) + 1)
        ],
        topic_prefix + "pkgdb.critpath.update": [
            "pkgdb-%s" % app_hosts[i-1]
            for i in range(1, len(app_hosts) + 1)
        ],

        # Planet/venus
        topic_prefix + "planet.post.new": [
            "planet-people03.vpn.fedoraproject.org",
        ],

        # Supybot/meetbot
        topic_prefix + "meetbot.meeting.start": [
            "supybot-value03.%s" % suffix,
        ],

        # Only @spot and @rbergeron can use this one
        topic_prefix + "announce.announcement": [
            "announce-lockbox01.phx2.fedoraproject.org",
        ],
    },
)
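The routing_policy above maps message topics to the certificate names allowed to sign them. As a minimal sketch (plain Python, with hypothetical topic and certificate names, not the real rendered config) of the check a validating consumer effectively performs when routing_nitpicky is left at False:

    # Illustrative sketch of the routing_policy lookup described in the
    # comments above; the values here are hypothetical examples.
    routing_policy = {
        "org.fedoraproject.prod.bodhi.update.comment": [
            "bodhi-app01.phx2.fedoraproject.org",
            "bodhi-app02.phx2.fedoraproject.org",
        ],
    }
    routing_nitpicky = False

    def signer_allowed(topic, signer):
        allowed = routing_policy.get(topic)
        if allowed is None:
            # Topic not covered by the policy: accept any CA-signed cert,
            # unless routing_nitpicky is True, in which case it fails.
            return not routing_nitpicky
        return signer in allowed

    print(signer_allowed("org.fedoraproject.prod.bodhi.update.comment",
                         "bodhi-app01.phx2.fedoraproject.org"))  # True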
@@ -1,23 +0,0 @@
-----BEGIN CERTIFICATE-----
MIID2DCCAsACCQCxRWmzwjSj6TANBgkqhkiG9w0BAQUFADCBrTELMAkGA1UEBhMC
VVMxCzAJBgNVBAgMAk5NMRAwDgYDVQQHDAdSYWxlaWdoMRAwDgYDVQQKDAdSZWQg
SGF0MRcwFQYDVQQLDA5GZWRvcmEgUHJvamVjdDEsMCoGA1UEAwwjZmVkLWNsb3Vk
MDkuY2xvdWQuZmVkb3JhcHJvamVjdC5vcmcxJjAkBgkqhkiG9w0BCQEWF2FkbWlu
QGZlZG9yYXByb2plY3Qub3JnMB4XDTE0MDkxODEwMjMxMloXDTE1MDkxODEwMjMx
Mlowga0xCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJOTTEQMA4GA1UEBwwHUmFsZWln
aDEQMA4GA1UECgwHUmVkIEhhdDEXMBUGA1UECwwORmVkb3JhIFByb2plY3QxLDAq
BgNVBAMMI2ZlZC1jbG91ZDA5LmNsb3VkLmZlZG9yYXByb2plY3Qub3JnMSYwJAYJ
KoZIhvcNAQkBFhdhZG1pbkBmZWRvcmFwcm9qZWN0Lm9yZzCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBALFOYDRhow6sEyCvm4jNlIAxs9vYDF07q3sEHzVj
zXy0NNlUgZPRCijWFyHRDwy383f7ZtRlqVCGXxm4l8ltQUU+jmXcnIY1xY2A1TPv
nWv+f1dGSv+SfWGAjqgwyajr6wyPAOnpwui2v03/xalAx6Xl7padfdlAEsNjAvNb
5uZkW7DLlDu3jSIroDSKsJUQW9kc1elT90W0mNgw3MpFA5zdj0QRxi2JpBth6PeT
CewN4r7QZ5cP4EzfHMLKT21kJzm+j5jlaQEak4yKWDEeLh4+RxgTnmss4zYKTUit
7H+j9KaxqVsneB8Sg7EtVnXafYLrSlr9fwOV5DWklLzvjBMCAwEAATANBgkqhkiG
9w0BAQUFAAOCAQEAHToeNGFaGlybHICw1ncLCmdu6vikPPn/UShfS25U54Q9eIMn
zqlhbbEyzuF4wKjV35W0BORWKJ+hQ2vpfk21jUMVOsdl7IMEXtIWotfO17ufWM28
zhwcPAlrs/Pr5dF7ihbOGKAHhEYVopSH8OTFayAQKWWKGv52lZsgwfrnDDu0TjIo
zmhCEmOWZf+CeEWT/AP7BJ6g4Apz9grUmaRvaQGft5y5sGC8tsV0im/C9WaMfVhF
wemG2KcOuKJDXtvd7DHNBoHcDrB1cN1i0uKhj0nxXsXpeag9Xh4BmkgHMU8rnegK
q7hOy15qVU/lOBZUtfx69aYHPpOGJ7Jc1xFIiQ==
-----END CERTIFICATE-----
@@ -1,2 +0,0 @@
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCv8WqXOuL78Rd7ZvDqoi84M7uRV3uueXTXtvlPdyNQBzIBmxh+spw9IhtoR+FlzgQQ1MN4B7YVLTGki6QDxWDM5jgTVfzxTh/HTg7kJ31HbM1/jDuBK7HMfay2BGx/HCqS2oxIBgIBwIMQAU93jBZUxNyYWvO+5TiU35IHEkYOtHyGYtTtuGCopYRQoAAOIVIIzzDbPvopojCBF5cMYglR/G02YgWM7hMpQ9IqEttLctLmpg6ckcp/sDTHV/8CbXbrSN6pOYxn1YutOgC9MHNmxC1joMH18qkwvSnzXaeVNh4PBWnm1f3KVTSZXKuewPThc3fk2sozgM9BH6KmZoKl
@@ -1 +0,0 @@
{{fed_cloud09_nova_public_key}}
@@ -1 +0,0 @@
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA1sBKROSJ3rzI0IlBkM926Dvpiw3a4wYSys0ZeKRohWZg369ilZkUkRhsy0g4JU85lt6rxf5JLwURF+fWBEohauF1Uvklc25LdZpRS3IBQPaXvWeM8lygQQomFc0Df6iUbCYFWnEWMjKd7FGYX3DgOZLnG8tV2vX7jFjqitsh5LRAbmghUBRarw/ix4CFx7+VIeKCBkAybviQIW828N1IqJC6/e7v6/QStpblYpCFPqMflXhQ/KS2D043Yy/uUjmOjMWwOMFS6Qk+py1C0mDU0TUptFYwDP5o9IK/c5HaccmOl2IyUPB1/RCtTfOn6wXPRTMUU+5w+TcPH6MPvvuiSQ== root@lockbox01.phx2.fedoraproject.org
@@ -1,135 +0,0 @@
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    log 127.0.0.1 local2

    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

    tune.ssl.default-dh-param 1024
    ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK


#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 3000

#frontend keystone_public *:5000
#    default_backend keystone_public
#frontend keystone_admin *:35357
#    default_backend keystone_admin
frontend neutron
    bind 0.0.0.0:9696 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined
    default_backend neutron
    # HSTS (31536000 seconds = 365 days)
    rspadd Strict-Transport-Security:\ max-age=31536000

frontend cinder
    bind 0.0.0.0:8776 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined
    default_backend cinder
    # HSTS (31536000 seconds = 365 days)
    rspadd Strict-Transport-Security:\ max-age=31536000

frontend swift
    bind 0.0.0.0:8080 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined
    default_backend swift
    # HSTS (31536000 seconds = 365 days)
    rspadd Strict-Transport-Security:\ max-age=31536000

frontend nova
    bind 0.0.0.0:8774 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined
    default_backend nova
    # HSTS (31536000 seconds = 365 days)
    rspadd Strict-Transport-Security:\ max-age=31536000

frontend ceilometer
    bind 0.0.0.0:8777 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined
    default_backend ceilometer
    # HSTS (31536000 seconds = 365 days)
    rspadd Strict-Transport-Security:\ max-age=31536000

frontend ec2
    bind 0.0.0.0:8773 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined
    default_backend ec2
    # HSTS (31536000 seconds = 365 days)
    rspadd Strict-Transport-Security:\ max-age=31536000

frontend glance
    bind 0.0.0.0:9292 ssl no-sslv3 no-tlsv10 crt /etc/haproxy/fedorainfracloud.org.combined
    default_backend glance
    # HSTS (31536000 seconds = 365 days)
    rspadd Strict-Transport-Security:\ max-age=31536000

backend neutron
    server neutron 127.0.0.1:8696 check

backend cinder
    server cinder 127.0.0.1:6776 check

backend swift
    server swift 127.0.0.1:7080 check

backend nova
    server nova 127.0.0.1:6774 check

backend ceilometer
    server ceilometer 127.0.0.1:6777 check

backend ec2
    server ec2 127.0.0.1:6773 check

backend glance
    server glance 127.0.0.1:7292 check

backend keystone_public
    server keystone_public 127.0.0.1:5000 check

backend keystone_admin
    server keystone_admin 127.0.0.1:35357 check
Some files were not shown because too many files have changed in this diff