Nextcloud asked in a poll at https://mastodon.social/@nextcloud@mastodon.xyz/115095096413238457 what database its users are running. Interestingly, one fifth replied that they don’t know. Should people know better where their data is stored, or is it a good thing that everything runs so smoothly people don’t need to know what their software stack is built upon?

  • dustyData@lemmy.world · 2 days ago

    Yeah, that is the kind of concern for the service developer or a very opinionated sys admin. For self-hosting, few people will reach the workload where such a decision has any material or measurable impact.

    • StarDreamer@lemmy.blahaj.zone · 2 days ago (edited)

      Exactly. Unless you are actively doing maintenance, there is no need to remember what DB you are using. It took me 3 minutes just to remember my nextcloud setup since it’s fully automated.

      It’s the whole point of using tiered services. You look at stuff at the layer you are on. Do you also worry about your wifi link-level retransmissions when you are running curl?

    • u_tamtam@programming.dev · 2 days ago

      Self-hosting doesn’t mean “being wasteful and letting containers duplicate services”. I want to know which DB application X is using, so I can pool it for applications Y and Z.

        • u_tamtam@programming.dev · 22 hours ago

          I disagree. You are just entertaining the idea that servers must always and forever be oversized, which is the definition of wasteful (and environmentally irresponsible). Unless you are constantly firing up and throwing away services, nothing justifies this, or justifies sparing the relatively low effort it takes to deploy your infrastructure knowingly.

          • Ajen@sh.itjust.works · 22 hours ago

            Do you have the data to back that up? Have you measured how much of an impact on system load and power consumption having 2 separate DB processes has?

            Roughly the same amount of work is being done by the CPU whether you split your DBs between 2 servers or just use one. There might be a slight increase in memory usage, but that would only matter in a few niche applications and wouldn’t affect environmental impact.

        • absentbird@lemmy.world · 2 days ago

          And if it’s SQLite (which I believe is the default), it’s really just reading and writing a file on the file system.
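
          To illustrate that point (a minimal sketch; the path and table are made up, not Nextcloud’s actual schema), the entire SQLite “database server” really is one ordinary file on disk:

```python
import os
import sqlite3
import tempfile

# Hypothetical database path in a throwaway directory.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

con = sqlite3.connect(path)  # connecting creates the file on disk
con.execute("CREATE TABLE files (name TEXT)")
con.execute("INSERT INTO files VALUES ('photo.jpg')")
con.commit()
con.close()

# The whole database is just this one file; back it up by copying it.
print(os.path.isfile(path))       # True
print(os.path.getsize(path) > 0)  # True
```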

      • tburkhol@lemmy.world · 1 day ago

        This is one of my pet peeves with containerized services, like why would I want to run three or four instances of mariadb? I get it, from the perspective of the packagers, who want a ‘just works’ solution to distribute, but if I’m trying to run simple services on a 4 GB RPi or a 2 GB VPS, then replicating dbs makes a difference. It took a while, but I did, eventually, get those dockers configured to use a single db backend, but I feel like that completely negated the ‘easy to set up and maintain’ rationale for containers.
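
        For anyone attempting the same consolidation, one way it can look (a hedged sketch with made-up names; the second service and its DB_* variables are hypothetical, though the Nextcloud image does read MYSQL_HOST/MYSQL_DATABASE) is a single shared MariaDB service in a compose file:

```yaml
# Sketch: one MariaDB backend shared by two app containers,
# each using its own database on the same server.
services:
  db:
    image: mariadb:11
    environment:
      MARIADB_ROOT_PASSWORD: changeme
    volumes:
      - db-data:/var/lib/mysql

  nextcloud:
    image: nextcloud
    environment:
      MYSQL_HOST: db            # same server for both apps...
      MYSQL_DATABASE: nextcloud # ...separate database per app
    depends_on: [db]

  otherapp:                     # hypothetical second service
    image: example/otherapp
    environment:
      DB_HOST: db
      DB_NAME: otherapp
    depends_on: [db]

volumes:
  db-data:
```

        The trade-off the comment describes is real: you save a few hundred MB of RAM per dropped DB instance, at the cost of wiring up credentials and databases yourself instead of using each image’s bundled defaults.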

        • u_tamtam@programming.dev · 1 day ago

          Precisely what pre-devops sysadmins were saying when containers were becoming trendy. You are just pushing the complexity elsewhere and creating novel classes of problems for yourself (keeping your BoM minimal and under control is one of many concerns that got thrown away).