When I switched DNS to use proxy110/proxy101 for src internally
in order to fix Rust crate building, it broke auth on pkgs01/src.
The problem is that proxy01/proxy10 are set up with a keytab that has
proxy01/proxy10 listed as principals, so auth via them is accepted.
However, proxy101/proxy110 are not listed, so you get a permission
denied. We might look at a better way to fix this, but for now,
let's just override that here.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
There are about 7 million hits a day from sites passing a referrer
of forks/kernel or forks/firefox while fetching static content
over and over and over. This may be because, before they were blocked
from the forks themselves, they were also downloading the JS and static
content, and now they are just too dumb to see the 403 and still
want to fetch the old static content. Fortunately, they send a
referrer we can match on.
So, this should cut load by another chunk.
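The match could look something like the following Apache fragment (a
minimal sketch; the path and rule placement are illustrative, not the
actual proxy config):

```apache
# Deny static-content requests whose Referer points at the blocked
# fork repos. "/static" is an assumed path for illustration.
SetEnvIf Referer "forks/(kernel|firefox)" blocked_fork_referrer
<Location "/static">
    <RequireAll>
        Require all granted
        Require not env blocked_fork_referrer
    </RequireAll>
</Location>
```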
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
Some scraper(s) were very aggressively crawling kernel fork repos
and causing all kinds of problems for koji and src.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
We are no longer going to force a different firewall driver for containers.
At the same time, the nftables service is disabled and stopped. We don't
need it, since firewalld uses nftables as a library anyway.
The rule opening port 8080 has been replaced with a rule for 443,
as the service has moved to HTTPS.
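In firewall-cmd terms the change amounts to something like this (a
sketch assuming the default zone; the actual change is applied via
ansible):

```shell
# Swap the old 8080 rule for HTTPS (default zone assumed).
firewall-cmd --permanent --remove-port=8080/tcp
firewall-cmd --permanent --add-service=https
firewall-cmd --reload

# Stop the standalone nftables service; firewalld drives nftables itself.
systemctl disable --now nftables
```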
Signed-off-by: Jiri Podivin <jpodivin@redhat.com>
The large flatpak push (290 flatpaks) was hitting an occasional timeout,
which caused the entire compose to fail. Just making it retry gets it
through this.
This is an emergency fix, because without it all updates pushes would
have been blocked.
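The retry itself is a generic pattern; a minimal sketch (the function
and exception here are hypothetical, not the actual compose code):

```python
import time


def retry(func, attempts=3, delay=0, exceptions=(TimeoutError,)):
    """Call func, retrying on the given transient exceptions.

    Generic sketch of the retry-on-timeout pattern; the real push
    code wraps its flatpak push step similarly.
    """
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except exceptions:
            if attempt == attempts:
                raise  # out of attempts: let the failure propagate
            time.sleep(delay)


# Example: a push that times out twice, then succeeds on the third try.
calls = {"n": 0}


def flaky_push():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("registry timed out")
    return "pushed"


result = retry(flaky_push, attempts=3)
```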
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
Change maxSurge from 100% to 0 and maxUnavailable from 0 to 1 to
ensure the old pod terminates before the new pod starts, preventing
LevelDB queue lock conflicts when email notifications are enabled.
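In Deployment terms the change looks like this (field values from the
commit; surrounding structure is the standard Deployment shape):

```yaml
# With maxSurge: 0, no extra pod is started alongside the old one, so
# the old pod releases the LevelDB lock before its replacement runs.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0          # was 100%: never run old and new pods together
      maxUnavailable: 1    # was 0: allow the old pod to go down first
```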
These are internal proxies; they don't need to bother running anubis at
all, since they don't get any external traffic.
We're just doing this to rule out some problem with the additional
proxy layer and anubis causing the timeouts we are seeing.
Signed-off-by: Kevin Fenzi <kevin@scrye.com>
This seems to be a similar case to the kojipkgs one, where we see
timeouts from the proxies to pkgs01 from time to time.
If it's a health check, haproxy will mark the backend down.
If it's a user request, the user will get a timeout and a 503 back.
This will help mitigate the second problem by retrying those requests.
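On the haproxy side the retry could be expressed roughly like this (a
sketch; backend name, server address, and retry counts are assumptions,
and retry-on requires haproxy >= 2.0):

```
backend pkgs
    # Retry transient connection failures and response timeouts against
    # pkgs01 before handing the client a 503.
    retries 3
    retry-on conn-failure response-timeout
    option redispatch
    server pkgs01 pkgs01.example.com:443 check
```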
Signed-off-by: Kevin Fenzi <kevin@scrye.com>