When I switched DNS to use proxy110/proxy101 for src internally
in order to fix rust crate building, it broke auth on pkgs01/src.
The problem is that proxy01/10 are set up with a keytab that has
proxy01/proxy10 listed as principals, so they can accept auth via them.
However, 101/110 are not listed, and thus you get a permission denied.
We might look at a better way to fix this, but for now,
let's just override that here.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
There are about 7 million hits a day from sites passing a referrer
of forks/kernel or forks/firefox while fetching the same static content
over and over and over. This may be because, before they were blocked
from the forks themselves, they were also downloading the js and static
content, and now they are just too dumb to see the 403 and still
want to fetch the old static content. Fortunately, they send a
referrer we can match on.
So, this should cut load by another chunk.
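For reference, a referrer block along these lines can be done with SetEnvIf in apache 2.4 syntax; the match string follows the pattern described above, while the location and env-var name here are illustrative, not the exact rule being added:

```apache
# Flag requests whose Referer mentions the blocked fork repos,
# then deny them. Location and variable name are illustrative.
SetEnvIfNoCase Referer "forks/(kernel|firefox)" blocked_fork_referrer
<Location "/">
    <RequireAll>
        Require all granted
        Require not env blocked_fork_referrer
    </RequireAll>
</Location>
```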
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
Some scraper(s) were very, very aggressively crawling kernel fork repos
and causing all kinds of problems for koji and src.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
We are no longer going to force a different firewall driver for containers.
At the same time, the nftables service is disabled and stopped. We don't need it,
since firewalld uses nftables as a library anyway.
The rule opening port 8080 has been replaced with a rule for port 443,
as the service has moved to HTTPS.
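A sketch of what this amounts to in ansible terms; the task names are illustrative and this is not the literal playbook diff:

```yaml
# Stop and disable nftables; firewalld drives nftables itself.
- name: Disable and stop the nftables service
  service:
    name: nftables
    state: stopped
    enabled: false

# Open 443/tcp instead of 8080/tcp now that the service is HTTPS.
- name: Allow HTTPS through firewalld
  firewalld:
    port: 443/tcp
    state: enabled
    permanent: true
    immediate: true
```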
Signed-off-by: Jiri Podivin <jpodivin@redhat.com>
The large flatpak push (290 flatpaks) was hitting an occasional timeout,
which caused the entire compose to fail. Just making it retry gets it
through this.
This is an emergency fix, because without it all updates pushes would have
been blocked.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
Change maxSurge from 100% to 0 and maxUnavailable from 0 to 1 to ensure old pod terminates before new pod starts, preventing LevelDB queue lock conflicts when email notifications are enabled.
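In deployment YAML terms, the change described above looks like the following (surrounding fields elided):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0        # never start a new pod alongside the old one
      maxUnavailable: 1  # let the old pod stop first, freeing the LevelDB queue lock
```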
This seems to be a similar case to the kojipkgs one, where we see
occasional timeouts from the proxies to pkgs01.
If it's a health check, haproxy will mark the backend down.
If it's a user request they will get a timeout and a 503 back.
This will help mitigate the second problem and retry those.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
This adds 3 power9 machines in the new 'fedora-isolated' vlan
in rdu3. This is the vlan that's going to house the rdu2-cc
hardware being moved into rdu3. We already moved these 3 machines from iad2,
so we can use them to try out the new vlan, acls, and such.
This adds host vars for the 3 new machines (mac address, ips, etc).
It adds them to the copr_hypervisor group in inventory.
It adds their mgmt to the dhcpd config so they get known IPs for
their mgmt interfaces instead of dynamic ones.
It adds an 8-disk ppc64le kickstart to install them with.
It also fixes the dhcpd config for the bvmhost-p09-01-stg mgmt
interface, which was off by one.
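For illustration, a dhcpd static mapping for a mgmt interface has this shape; the hostname, MAC, and IP below are made up, not the real values:

```
# Pin a known address to a mgmt interface by MAC.
host bvmhost-p09-example-mgmt {
    hardware ethernet 00:11:22:33:44:55;
    fixed-address 10.0.0.42;
}
```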
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
We are having problems with connections from the proxies to kojipkgs
sometimes hanging. Let's try to mitigate that at the haproxy level and
hopefully improve things while we figure out what the underlying
cause is.
This should retry connections that failed for any 'retryable' output
(including timeout) and also it should try a _different_ backend than
the one that returned the error. This will not eliminate errors, but
should reduce them.
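A sketch of the relevant haproxy settings, assuming haproxy 2.0+ where `retry-on` is available; the retry count is illustrative:

```
backend kojipkgs
    retries 3
    option redispatch              # retry against a different backend server
    retry-on all-retryable-errors  # includes connection failures and timeouts
```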
Signed-off-by: Kevin Fenzi <kevin@scrye.com>