Incoherent rant.

Once again, I’ve noticed Amazon and Anthropic absolutely hammering my Lemmy instance, to the point of the lemmy-ui container crashing. Multiple IPs, all over the US.

So I’ve decided to do some restructuring of how I run things. I ditched Fedora on my VPS in favour of Alpine, just to start with a clean slate, and started looking into better options for fighting this off.

Behold, Anubis.

“Weighs the soul of incoming HTTP requests to stop AI crawlers”

From how I understand it, it works as a reverse proxy in front of each service. It took me a while to actually understand how it’s supposed to integrate, but once I figured it out, all bot activity instantly stopped. Not a single one has gotten through yet.

My setup is basically just: home server -> Tailscale tunnel (not Funnel) -> VPS -> Caddy reverse proxy, now with Anubis integrated.
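
In case it helps the next person piecing this together: Anubis sits between Caddy and the actual service, so Caddy only ever talks to Anubis. Here’s a minimal sketch of the Caddy side, with a made-up hostname and ports (not my actual config):

    lemmy.example.com {
        # Caddy terminates TLS and hands every request to Anubis first;
        # Anubis challenges the client and only proxies the ones that pass.
        reverse_proxy localhost:8923
    }

Anubis itself is then pointed at the real backend via environment variables, along the lines of BIND=:8923 and TARGET=http://home-server.tailnet.ts.net:80 (again, illustrative values; the Anubis docs cover the details).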

I’m not really sure why I’m posting this, but I hope at least one other goober trying to find a possible solution to these things finds this post.

Anubis GitHub, Anubis website

  • blob42@lemmy.ml · 1 day ago

    I am planning to try it out, but for Caddy users: after being bombarded by AI crawlers for weeks, I came up with a solution that works.

    It is a custom Caddy CEL expression filter coupled with caddy-ratelimit and caddy-defender.

    Now here’s the fun part: the defender plugin can produce garbage as a response, so when an AI crawler matches, it will poison their training dataset.

    Originally I relied only on the rate limiter, and noticed that the AI bots kept retrying whenever the limit was reset. Once I introduced data poisoning, they all stopped :)

    git.blob42.xyz {
        # Named matcher: suspicious Accept-Language or bot-ish User-Agent
        @bot <<CEL
            header({'Accept-Language': 'zh-CN'}) || header_regexp('User-Agent', '(?i:(.*bot.*|.*crawler.*|.*meta.*|.*google.*|.*microsoft.*|.*spider.*))')
        CEL

        # Drop matching requests outright
        abort @bot

        # caddy-defender: serve garbage to known AI/cloud IP ranges
        defender garbage {
            ranges aws azurepubliccloud deepseek gcloud githubcopilot openai 47.0.0.0/8
        }

        # caddy-ratelimit: per-IP limit on GET requests
        rate_limit {
            zone dynamic_botstop {
                match {
                    method GET
                    # to use with defender
                    #header X-RateLimit-Apply true
                    #not header LetMeThrough 1
                }
                key {remote_ip}
                events 1500
                window 30s
                #events 10
                #window 1m
            }
        }

        reverse_proxy upstream.server:4242

        handle_errors 429 {
            respond "429: Rate limit exceeded."
        }
    }
    

    If I am not mistaken, the 47.0.0.0/8 IP block is Alibaba Cloud’s.

  • dan@upvote.au · 2 days ago

    The Anubis site thinks my phone is a bot :/

    tbh I would have just configured a reasonable rate limit in Nginx and left it at that.

    Won’t the bots just hammer the API instead now?

  • NotSteve_@piefed.ca · 2 days ago

    I love Anubis just because the dev is from my city that’s never talked about (Ottawa)

      • lambalicious@lemmy.sdf.org · 22 hours ago

        What do you mean, how?

        Cute anime catgirl, a staple of the internet, without having to be showy or anything. And there are hooks to change it.

        (Was actually half-surprised they didn’t go with “anime!stereotypical egyptian priestess” given the context of the software, but I feel that would have ended up too thematically overloaded in the end)

    • AmbitiousProcess@piefed.social · 2 days ago

      Could you elaborate on how it’s ableist?

      As far as I’m aware, not only are they making a version that doesn’t require JS at all, but the JS is only needed for the challenge itself; once the browser solves it, the page(s) can be viewed entirely without JS being necessary to parse the content in any way. Things like screen readers should still do perfectly fine at parsing content after the browser solves the challenge.

  • Daniel Quinn@lemmy.ca · 2 days ago

    I’ve been thinking about setting up Anubis to protect my blog from AI scrapers, but I’m not clear on whether this would also block search engines. It would, wouldn’t it?

  • Possibly linux@lemmy.zip · 2 days ago

    It doesn’t stop bots.

    All it does is make clients do as much or more work than the server, which makes it less tempting to hammer the web.
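
    To make the idea concrete, here’s a minimal sketch of that kind of proof-of-work challenge (illustrative only, not Anubis’s actual code): the client brute-forces a nonce until a SHA-256 hash meets a difficulty target, while the server verifies the answer with a single hash.

        // Sketch of an Anubis-style proof-of-work challenge. The client
        // must find a nonce such that sha256(challenge + nonce) starts
        // with `difficulty` zero hex digits; verification costs one hash.
        // All names and values here are illustrative.
        package main

        import (
            "crypto/sha256"
            "encoding/hex"
            "fmt"
            "strconv"
            "strings"
        )

        // solve brute-forces a nonce for the given challenge string.
        func solve(challenge string, difficulty int) (int, string) {
            prefix := strings.Repeat("0", difficulty)
            for nonce := 0; ; nonce++ {
                sum := sha256.Sum256([]byte(challenge + strconv.Itoa(nonce)))
                hash := hex.EncodeToString(sum[:])
                if strings.HasPrefix(hash, prefix) {
                    return nonce, hash
                }
            }
        }

        func main() {
            // A real deployment sends the challenge to the browser and
            // checks the returned nonce server-side with one hash call.
            nonce, hash := solve("example-challenge", 4)
            fmt.Printf("nonce=%d hash=%s\n", nonce, hash)
        }

    The asymmetry is the point: at difficulty 4 a nonce takes tens of thousands of hashes on average to find, but only one hash to check.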

    • zoey@lemmy.librebun.com (OP) · 2 days ago

      Yeah, from what I understand it’s nothing crazy for any regular client, but it really messes with the bots.
      I don’t know, I’m just so glad and happy it works: it doesn’t mess with federation, and it’s barely visible when accessing the sites.