Sorry for the alarming title, but admins, for real: go set up Anubis.
For context, Anubis is essentially a gatekeeper/rate limiter for small services. From them:
(Anubis) is designed to help protect the small internet from the endless storm of requests that flood in from AI companies. Anubis is as lightweight as possible to ensure that everyone can afford to protect the communities closest to them.
It puts forward a challenge that must be solved in order to gain access, and judges how trustworthy a connection is. The vast majority of real users will never notice it, or will only see a small delay the first time they access your site. Even smaller scrapers may get by relatively easily.
Big scrapers though, the AI scrapers and trainers, get hit with computational problems that waste their compute before they're let in. (Trust me, I worked for a company whose business was "scrape the internet", and compute is expensive and a constant worry for them, so it's a win for us!)
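To give a feel for why that hurts bulk scraping, here is a minimal sketch of the general hashcash-style idea. This is an illustration of the concept, not Anubis's actual code, and the challenge string and difficulty are made up: the client has to burn CPU finding a nonce whose hash meets the difficulty, while the server can verify the answer with a single cheap hash.

```go
// Sketch of a hashcash-style proof of work: find a nonce whose SHA-256 hash
// starts with a given number of zero hex digits. Finding it is expensive,
// checking it is cheap.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strconv"
	"strings"
)

// solve brute-forces a nonce for the challenge string at the given
// difficulty (number of leading zero hex characters required).
func solve(challenge string, difficulty int) (nonce int, hash string) {
	prefix := strings.Repeat("0", difficulty)
	for n := 0; ; n++ {
		sum := sha256.Sum256([]byte(challenge + strconv.Itoa(n)))
		h := hex.EncodeToString(sum[:])
		if strings.HasPrefix(h, prefix) {
			return n, h
		}
	}
}

func main() {
	// Each extra zero multiplies the expected work by 16, which is why
	// doing this on every request gets expensive fast for a mass scraper.
	nonce, hash := solve("example-challenge", 4)
	fmt.Println("nonce:", nonce, "hash:", hash)
}
```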
Anubis ended up taking maybe 10 minutes to set up. For Lemmy hosts you literally just point your UI proxy at Anubis and point Anubis at the Lemmy UI. Very easy, it slots right in with minimal setup.
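For docker-compose setups it looks roughly like this. Treat it as a sketch rather than a copy-paste config: the image path and the BIND/TARGET variable names are how I remember the Anubis docs, so double-check them against the current docs before deploying.

```yaml
services:
  anubis:
    image: ghcr.io/techarohq/anubis:latest   # check the docs for the current image path
    environment:
      BIND: ":8923"                    # where Anubis listens
      TARGET: "http://lemmy-ui:1234"   # where passing requests get forwarded
    restart: always

  lemmy-ui:
    image: dessalines/lemmy-ui:latest
    # ...your existing lemmy-ui config, unchanged...
```

After that, the only other change is pointing your reverse proxy's UI upstream at anubis:8923 instead of lemmy-ui:1234.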

These graphs are from less than an hour after I turned it on. I have a small instance, only a few people, and my CPU usage and requests per minute have already dropped. Thousands of requests have already been challenged; I had no idea I was being scraped this much! You can see them backing off in the charts.
(FYI, this only covers the web UI requests, so it does nothing to the API or federation. Those are proxied separately, so it really does only target web scrapers.)
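In nginx terms the split looks something like this. It's simplified (the real Lemmy nginx config has more routing rules, and the upstream names and ports here are just the usual defaults), but the point is that only the browser-facing location changes:

```nginx
# API and federation traffic: straight to the Lemmy backend, untouched by Anubis.
location ~ ^/(api|pictrs|feeds|nodeinfo|\.well-known) {
    proxy_pass http://lemmy:8536;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

# Browser-facing traffic: through Anubis, which forwards on to lemmy-ui.
location / {
    proxy_pass http://anubis:8923;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```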


RSS readers work fine with Anubis.
Anubis is fairly stupid in reality: it only challenges a request at all if it looks like it came from a regular browser (which is exactly how it catches the scrapers that pretend to be regular browsers to hide in normal traffic). If you use an RSS reader that doesn't hide the fact that it's an RSS reader, Anubis will send it right through.
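You can check this yourself with curl (the hostname here is a placeholder, and this assumes the default bot policy): an honest, non-browser User-Agent goes straight through, while anything claiming to be a browser gets handed the challenge page, which plain curl can't solve.

```sh
# Honest client User-Agent: passed straight through to the site.
curl -A "MyFeedReader/1.0" https://lemmy.example.com/

# Browser-style User-Agent: served the Anubis challenge page instead.
curl -A "Mozilla/5.0 (X11; Linux x86_64; rv:128.0) Gecko/20100101 Firefox/128.0" https://lemmy.example.com/
```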
Good to know. But most RSS readers already pretend to be browsers, because otherwise many publications with misconfigured reverse proxies will block them from accessing the RSS feed; cbc.ca is a good example of this. Deploying a web application firewall is neither easy nor trivial unless you know exactly who needs to access what, when, and why, and most people, in my experience, do not.