  • towerful@programming.dev to Selfhosted@lemmy.world · “Unifi Anonymous...?” · 11 days ago

    Pretty much any MikroTik is a fantastic piece of kit to have.
    It is so unbelievably versatile.
    I love the various MikroTik routers, switches and APs I have; I use them all the time for little ad-hoc networks and projects.
    You will learn a lot about networking by using them.

    But UniFi is a hell of a lot easier to use, and I have not found anything I can’t do on UniFi (though I don’t do BGP, MLAG, etc. at home).



  • Raspberry Pis are an easy intro to actually using computers (as opposed to just using something like Windows).
    Raspbian is great (it’s based on Debian), and there is a HUGE community around it.

    So yeah, it’s a great starter for $25, as long as you have a PSU and SD card, plus an HDMI cable, monitor and keyboard at your disposal (and a mouse if you are installing a desktop environment, i.e. something like Windows, whereas headless is a full-screen CLI).
    And don’t get your hopes up for a Windows replacement.

    But… why not run a virtual machine? If you have a Windows machine, run VirtualBox, create a VM and install Debian on it.
    That’s free, and you can tinker and play.
    The only things you are missing compared to an actual Raspberry Pi are that it isn’t a standalone device (i.e. your desktop has to be on for it to be running) and that it doesn’t have GPIO (i.e. hardware pins; if those are your goal, there are other ways to get them).
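
    A rough sketch of that route using VirtualBox’s VBoxManage CLI (the VM name, sizes and ISO path are all placeholders):

    ```bash
    # Create and register a 64-bit Debian VM
    VBoxManage createvm --name debian-lab --ostype Debian_64 --register

    # 2 GB RAM, 2 CPUs, NAT networking
    VBoxManage modifyvm debian-lab --memory 2048 --cpus 2 --nic1 nat

    # 20 GB virtual disk on a SATA controller
    VBoxManage createmedium disk --filename debian-lab.vdi --size 20480
    VBoxManage storagectl debian-lab --name SATA --add sata
    VBoxManage storageattach debian-lab --storagectl SATA --port 0 --device 0 --type hdd --medium debian-lab.vdi

    # Attach a Debian installer ISO (path is a placeholder) and boot
    VBoxManage storageattach debian-lab --storagectl SATA --port 1 --device 0 --type dvddrive --medium ~/Downloads/debian-netinst.iso
    VBoxManage startvm debian-lab
    ```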

    If you really, really want a computer that is on all the time running Linux (Debian, a derivative like Raspbian, or some other distro) - aka a server - then there are plenty of other options, where the only drawback is the lack of GPIO (which, in my experience, is rarely a drawback).
    And that is literally any computer you can get your hands on. The Raspberry Pi trades A LOT for its form factor: the ethernet speed is limited, the bus speed is limited (impacting USB and ethernet, and maybe RAM?), and the SD card is slower and will fail sooner than any HDD/SSD. The benefits are the GPIO, the very low power draw and the form factor - and those are rarely an actual benefit for a home server.

    I’d say: play around with some VirtualBox VMs. See what you want, beyond fear of missing out (things like Pi-hole? It runs on Debian, or even in a Docker container - see the sketch below). Then see if you actually want a home server, and what you want to run on it.
    It’s likely you won’t want a Raspberry Pi, but a $150 mini PC that can actually do what you want.
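
    For instance, a hedged sketch of Pi-hole in Docker (the timezone and volume path are placeholders, and the exact env vars differ between Pi-hole versions):

    ```bash
    # Run Pi-hole as a container: DNS on port 53, web UI on port 80
    docker run -d --name pihole \
      -p 53:53/tcp -p 53:53/udp -p 80:80/tcp \
      -e TZ=Europe/London \
      -v ./etc-pihole:/etc/pihole \
      --restart unless-stopped \
      pihole/pihole:latest
    ```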


  • “especially once a service does fail or needs any amount of customization.”

    A failed service gets killed and restarted. It should then work correctly.
    If it fails to recover after being killed, then it’s not a service that’s fully ready for containerisation.
    So, either build your recovery process to account for this… or fix it so it can recover.
    It’s often why databases are run separately from the services: databases can recover from this, and the services stay stateless - it doesn’t matter how many you run or restart.
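
    That kill-and-restart contract is just configuration in plain Docker - a sketch, with a hypothetical image and health endpoint:

    ```bash
    # Restart automatically on non-zero exit (up to 5 times), and flag
    # the container unhealthy if the HTTP check fails. Plain Docker only
    # marks it unhealthy; an orchestrator like k8s would also kill it.
    docker run -d --name myservice \
      --restart on-failure:5 \
      --health-cmd 'curl -fsS http://localhost:8080/healthz || exit 1' \
      --health-interval 30s --health-retries 3 \
      myimage:latest
    ```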

    As for customisation: if it isn’t exposed via env vars, then it can’t be altered at runtime.
    If you need something beyond the env vars, then you use that container as a starting point and make your customisation part of your container build process via a Dockerfile (or equivalent).
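
    A minimal sketch of that pattern (nginx and the config file are purely hypothetical stand-ins):

    ```bash
    # Bake the customisation into your own image, built on the upstream one
    cat > Dockerfile <<'EOF'
    FROM nginx:1.27
    COPY custom.conf /etc/nginx/conf.d/default.conf
    EOF

    docker build -t my-nginx .
    docker run -d -p 8080:80 my-nginx
    ```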

    It’s a bit like saying “chisels are great. But as soon as you need to cut a fillet steak, you need to sharpen a side of the chisel instead of the tip of the chisel”.
    It’s using a chisel incorrectly.


  • I would always run Proxmox, and set up Docker inside VMs on it.

    I found Talos Linux, which is a dedicated distro for Kubernetes - which aligned with my desire to learn k8s.
    It was great. I ran it bare-metal on a 3-node cluster. I learned a lot, I got my project completed, everything went fine.
    I will use Talos Linux again.
    However, next time I’m running Proxmox with 2 VMs per node - 3 Talos control-plane VMs and 3 Talos worker VMs.
    I imagine running 6 servers with Talos is the way to go. Running them hyperconverged was a massive pain. Separating the control plane and the worker plane (or whatever it’s called) makes sense - it’s the way k8s is designed.
    It wasn’t the hardware that had issues, but various workloads. Being able to restart or wipe a control node or a worker node independently would’ve made things so much easier.

    Also, why wouldn’t I run Proxmox?
    The overhead is minimal, and I get a nice overview, a nice UI, snapshots and backups.
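
    For reference, bootstrapping Talos is only a handful of talosctl commands (the cluster name and IPs here are placeholders):

    ```bash
    # Generate secrets plus controlplane.yaml / worker.yaml
    talosctl gen config homelab https://10.0.0.10:6443

    # Push a config to each node (repeat per control-plane / worker IP)
    talosctl apply-config --insecure --nodes 10.0.0.10 --file controlplane.yaml
    talosctl apply-config --insecure --nodes 10.0.0.20 --file worker.yaml

    # Bootstrap etcd on ONE control-plane node, then fetch the kubeconfig
    talosctl --talosconfig ./talosconfig --nodes 10.0.0.10 --endpoints 10.0.0.10 bootstrap
    talosctl --talosconfig ./talosconfig --nodes 10.0.0.10 --endpoints 10.0.0.10 kubeconfig
    ```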



  • I’d still run k8s inside a Proxmox VM, even if basically all of the host’s resources are dedicated to that VM. Proxmox gives you a huge amount of oversight and additional tooling.
    Proxmox doesn’t have to do much (or even anything), beyond provide a virtual machine.

    I’ve run Talos OS (a dedicated k8s distro) bare-metal. It was fine, but I wished I had a hypervisor. I was lucky that my project could be wiped and rebuilt with ease. With a hypervisor, I could’ve just rolled back to a snapshot, and separated worker/master nodes without running additional servers.
    That was sorely missed while I was learning both the deployment of k8s and k8s itself.
    For the next project like that, I’ll run Talos inside Proxmox VMs.
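
    That snapshot/rollback loop is a one-liner each way from the Proxmox shell (the VM ID and snapshot name are placeholders):

    ```bash
    # Snapshot VM 101 before a risky change
    qm snapshot 101 pre-upgrade

    # If the experiment goes sideways: list snapshots and roll back
    qm listsnapshot 101
    qm rollback 101 pre-upgrade
    ```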

    As far as “how does cloudflare work in k8s”… However you want?
    You could manually deploy the example manifests provided by cloudflare.
    Or perhaps there are some helm charts that can make it all a bit easier?

    Or you could install an operator, which will look for Custom Resource Definitions or specific metadata on standard resources, then deploy and configure the suitable additional resources in order to make it work.
    https://github.com/adyanth/cloudflare-operator seems popular?

    I’d look to reduce the amount of YAML you have to write/configure by hand - which is why I like operators.
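
    Even the manual route is small, though. A hedged sketch of a remotely-managed Cloudflare Tunnel (the token comes from the Cloudflare dashboard; names and replica count are placeholders, and for real use you’d mount the token from a Secret rather than passing it inline):

    ```bash
    # cloudflared dials out to Cloudflare, so no Ingress/LoadBalancer is needed
    kubectl create deployment cloudflared \
      --image=cloudflare/cloudflared:latest --replicas=2 \
      -- cloudflared tunnel --no-autoupdate run --token "$TUNNEL_TOKEN"
    ```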



  • So uplink is 500/500.
    LAN speed tests at 1000/1000.
    WAN is 100/400.
    VPN is 8/8.

    I’m guessing the VPN is part of your homelab? Or do you mean a generic commercial VPN (like PIA or Proton)?

    How does the domain resolve on the LAN? Is it split horizon (so a local IP on the LAN, the public IP on public DNS)?
    Is the homelab on a separate subnet/VLAN from the computer you ran the speed test from? Or the same subnet?
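
    An easy way to check the split-horizon question (the domain and resolver IPs are placeholders):

    ```bash
    # Ask the LAN resolver and a public resolver for the same name;
    # with split-horizon DNS the answers will differ
    dig +short service.example.com @192.168.1.1   # your LAN resolver
    dig +short service.example.com @1.1.1.1       # a public resolver
    ```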


  • “Servers: one. No need to make the log a distributed system; CT itself is a distributed system.

    The uptime target is 99% over three months, which allows for nearly 22h of downtime. That’s more than three motherboard failures per month.

    CPU and memory: whatever, as long as it’s ECC memory. Four cores and 2 GB will do.

    Bandwidth: 2-3 Gbps outbound.

    Storage: either 3-5 TB of usable redundant filesystem space on SSD, or 3-5 TB of S3-compatible object storage plus 200 GB of cache on SSD.

    People: at least two. The Google policy requires two contacts, and generally, who wants to carry a pager alone?”

    Seems beyond your typical homelab self-hoster, except in the countries that have 5 Gbps symmetric home broadband.
    If anyone can sneak 2-3 Gbps outbound past their employer, I imagine the rest is trivial.
    Although… “at least 2 [people]” isn’t the typical self-hosting setup.

    Edit:
    Tried to fix the copy/paste.

    Also will add:

    https://crt.sh/
    has a list of all publicly logged certificates, searchable by domain.
    If you are using LE for every subdomain of your homelab (including internal ones), maybe think about a wildcard cert instead?
    It’s one of those “obscurity isn’t security” things, but why advertise your endpoints? It also increases privacy (i.e. not advertising porn(dot)example(dot)com).
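
    A hedged certbot sketch (wildcards require the DNS-01 challenge; example.com is a placeholder, and a DNS plugin such as --dns-cloudflare can automate the TXT record instead of --manual):

    ```bash
    # One wildcard cert instead of one CT-logged cert per subdomain;
    # --manual prompts you to create a DNS TXT record for validation
    certbot certonly --manual --preferred-challenges dns \
      -d 'example.com' -d '*.example.com'
    ```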


  • Yeh it is.
    Proving that a scientific theory is wrong means we don’t understand enough about the thing, and that we need to look at other theories about it.
    Proving things wrong, and failed hypotheses, are as important (even if disappointing) as proving things correct and successful hypotheses: they rule theories out and guide further scientific study.
    With published papers, other scientists can hopefully see what the publishing scientists missed.
    Scientists can also repeat the experiments of successful papers to confirm the papers’ conclusions, and perhaps even make further observations that support further studies.