The code forge is gitea/forgejo, and the proxy in front used to be traefik. I tried fail2ban in front for a while as well, but the issue was that everything appeared to come from different IPs.
The bots were also hitting my other public services pretty hard, but nowhere near as badly. I think it’s a combination of two things:
most things I host publicly besides git are smaller or static pages, so they’re served quickly and don’t drain resources as much
they try to hit every ‘exit node’ (i.e. link) off a page, and on a repo with a couple hundred or more commits, all the individual commit and diff pages that can be hit add up to a lot.
A small but interesting observation: they also seemed to ‘focus’ on specific projects. My guess is that you get unlucky once by having a large-ish repo picked up for crawling, and then the crawlers get stuck in there, lost in the maze of possible pages. On the other hand, that may make targeted blocking of certain routes more feasible…
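To put some numbers behind that: a quick-and-dirty way to check whether the crawlers really are lost in the commit/diff maze (and whether route-targeted blocking is worth the effort) is to tally the access log by path. A rough sketch, not what I actually ran: it assumes a common/combined-format access log, and the log path and route patterns are guesses you’d have to adapt to your own setup.

```python
#!/usr/bin/env python3
"""Rough sketch: count access-log hits per 'expensive' Forgejo route.

Assumes a common/combined-format access log; the log path and the
route patterns below are guesses and need adapting to your setup.
"""
import re
from collections import Counter

LOG_PATH = "/var/log/traefik/access.log"  # hypothetical path

# Route buckets the crawlers presumably get stuck in.
ROUTES = {
    "commit": re.compile(r"/commit/"),
    "compare/diff": re.compile(r"/compare/|\.diff|\.patch"),
    "blame": re.compile(r"/blame/"),
    "raw": re.compile(r"/raw/"),
}

# Common log format: IP - user [date] "METHOD /path HTTP/x.y" status size ...
LINE_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:\S+) (\S+) [^"]*"')

counts = Counter()
ips_per_route = {name: set() for name in ROUTES}

with open(LOG_PATH) as fh:
    for line in fh:
        m = LINE_RE.match(line)
        if not m:
            continue
        ip, path = m.groups()
        for name, pattern in ROUTES.items():
            if pattern.search(path):
                counts[name] += 1
                ips_per_route[name].add(ip)
                break
        else:
            counts["other"] += 1

for name, hits in counts.most_common():
    uniq = len(ips_per_route.get(name, ()))
    print(f"{name:>14}: {hits:>8} hits from {uniq} distinct IPs")
```

If nearly all the traffic lands on a handful of route patterns (from thousands of distinct IPs), rate-limiting or PoW-gating just those routes in the proxy would catch most of it without touching the rest of the site.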
I think there’s a lot to be gained here by everybody pooling their knowledge, but on the other hand it’s also an annoying topic and most selfhosting (including mine) is afaik done as a hobby, so most peeps will slap an Anubis-like PoW in front and call it a day.
Those are some very good and helpful insights, thank you very much for sharing. I was also hosting forgejo and used traefik as a reverse proxy. However, my forgejo was locked down, which is probably why I saw no bot attacks.
Some thoughts:
fail2ban works very well for malicious requests, i.e. things that actually get logged somewhere.
CrowdSec has an AI Bot Blocklist, which they offer for free if you host a FOSS project.
I am developing a tool that blocks CIDR ranges by country directly via ufw. Blocking countries might help in a case like this, but not everyone wants to block whole countries.
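To give an idea of the shape it takes: a minimal sketch, assuming a plain-text file with one CIDR range per line for the countries to block (the file path is hypothetical, and the real tool does more than this), which simply shells out to ufw.

```python
#!/usr/bin/env python3
"""Minimal sketch: add ufw deny rules for a list of CIDR ranges.

Assumes a plain-text file with one CIDR per line (hypothetical path).
Needs root, and note that ufw evaluates rules in order, so deny rules
may need to sit before broad allow rules to take effect.
"""
import ipaddress
import subprocess
import sys

CIDR_FILE = "/etc/blocklists/blocked-countries.txt"  # hypothetical path


def load_cidrs(path):
    """Read and validate CIDR ranges, skipping blank lines and comments."""
    cidrs = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            ipaddress.ip_network(line, strict=False)  # raises on garbage
            cidrs.append(line)
    return cidrs


def main():
    cidrs = load_cidrs(CIDR_FILE)
    for cidr in cidrs:
        # Appends a deny rule covering the whole range.
        subprocess.run(["ufw", "deny", "from", cidr, "to", "any"], check=True)
    print(f"added deny rules for {len(cidrs)} ranges", file=sys.stderr)


if __name__ == "__main__":
    main()
```

For more than a handful of ranges, loading them into an ipset or nftables set referenced by a single rule scales much better than adding thousands of individual ufw rules.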