• 0 Posts
  • 13 Comments
Joined 3 years ago
Cake day: August 16th, 2023

  • This is highly dependent on what your needs are and how you plan to meet them. SATA-3 maxes out at 6 Gbit/s, which SAS-2 already matched in 2009. Most cards are PCIe x8 and have at least 4 full-speed SAS lanes (of whatever generation); with SAS-2 that means 24 Gbit/s aggregate. PCIe 2.0 x8 (from 2007) provides about 4 GB/s (32 Gbit/s). So if that meets your needs, you can run it on an ancient board.
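    A quick back-of-envelope check of that math (a sketch; the 4-lane count and PCIe 2.0 figures are the ones above, and these are raw line rates before encoding and protocol overhead):

    ```python
    # Aggregate bandwidth of 4 SAS-2 lanes vs. a PCIe 2.0 x8 slot.
    # Figures in Gbit/s; real throughput is lower after 8b/10b encoding.
    SAS2_LANE_GBIT = 6           # SAS-2 (2009) per-lane rate, same as SATA-3
    LANES = 4                    # typical minimum of full-speed lanes on these cards

    sas_aggregate = LANES * SAS2_LANE_GBIT   # 24 Gbit/s into the card
    pcie2_x8_gbit = 8 * 4                    # PCIe 2.0: ~4 Gbit/s usable per lane, x8 slot

    # Even the old PCIe 2.0 x8 slot has headroom over 4 SAS-2 lanes.
    assert sas_aggregate < pcie2_x8_gbit
    print(sas_aggregate, pcie2_x8_gbit)      # 24 32
    ```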

    However, if you need something more advanced, such as SAS-3, a SAS expander, or a card with more native lanes, then you would need to plan accordingly.

    I’ve been running on an LSI 9211-4i4e, which is only a PCIe 2.0 card, for many years. I did notice my speeds dropped when I expanded the 4e to a 15-bay DAS (plus the 4 internal SATA drives), but it’s still enough to meet my needs.


  • It’s not really about 24/7, but it is about quality of components. Enterprise gear is made using slightly better parts and tighter tolerances. Things like more expensive capacitors rated for more hours/cycles, better power filters, things like that.

    The end result (and this is easily verified) is the failure rate is much, much lower than comparable consumer-grade equipment.

    There is sometimes a blurry line between what counts as enterprise vs pro-sumer vs consumer gear, though.





  • This is part of a series frequently known as “Microsoft interview” questions. The most famous one is, “Why is a manhole cover round?” They are partially meant to gauge your problem-solving abilities, but more importantly to see how you react to a question you did not (and could not) prepare for. They’ve since fallen out of fashion, because they were always a terrible way to evaluate candidates for roles like software development.


  • Nollij@sopuli.xyz to Selfhosted@lemmy.world, “Am I corrupting my data?” (edited, 7 months ago)

    Kind of. They come in multiples of 4. Let’s say you got a gigantic (if unlikely) 8i8e card. That would (probably) have 2 internal and 2 external SAS connectors. Your standard breakout cables will split each connector into 4 SATA cables (up to 16 SATA ports if you used all 4 SAS connectors with breakout cables), each running at full (SAS) speed.
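    The connector-to-port arithmetic for that hypothetical 8i8e card looks like this (a sketch; the 2+2 connector layout is the guess from above):

    ```python
    # SAS connector/port arithmetic for a hypothetical 8i8e HBA.
    LANES_PER_CONNECTOR = 4      # each SFF-style SAS connector carries 4 lanes
    internal_connectors = 2      # 8i -> 8 internal lanes
    external_connectors = 2      # 8e -> 8 external lanes

    connectors = internal_connectors + external_connectors
    # One forward breakout cable per connector, 4 SATA plugs each:
    sata_ports = connectors * LANES_PER_CONNECTOR
    print(sata_ports)            # 16 SATA drives, each on its own full-speed lane
    ```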

    But what if you were running an enterprise file server with a hundred drives, as many of these cards once did? You can’t cram dozens of these cards into a server; there aren’t enough PCIe slots/lanes. Well, there are SAS expanders, which basically act as splitters. They share those 4 lanes, potentially creating a bottleneck. But this is where SAS and SATA speeds differ: these are SAS lanes, which are (depending on generation) double what SATA can do. So with an expander, you could attach 8 SATA drives to every 4 SAS lanes and still run at full speed. And if you need capacity more than speed, expanders let you split those 4 lanes across 24 drives. These are typically built into the drive backplane/DAS.
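    The expander trade-off can be sketched numerically (assuming, as above, SAS lanes at double the SATA rate, e.g. 12 Gbit/s SAS-3 uplinks feeding 6 Gbit/s SATA-3 drives; raw line rates only):

    ```python
    # Oversubscription when SATA drives share a SAS expander's uplink.
    UPLINK_LANES = 4
    SAS_GBIT = 12                # per SAS lane (double SATA, per the assumption above)
    SATA_GBIT = 6                # per SATA-3 drive

    uplink = UPLINK_LANES * SAS_GBIT         # 48 Gbit/s back to the HBA

    ratios = {}
    for drives in (8, 16, 24):
        # ratio > 1.0 means the drives can collectively outrun the uplink
        ratios[drives] = (drives * SATA_GBIT) / uplink
    print(ratios)                # {8: 1.0, 16: 2.0, 24: 3.0}
    ```

    At 8 drives the math breaks even (full speed); at 24 drives you trade per-drive throughput for capacity, which is often fine for bulk storage.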

    As for the fan, just about anything will do. The chip/heatsink gets hot, but the card is limited to the ~75 watts a PCIe slot can provide. I just have an old 80 or 90 mm fan pointing at it.



  • I don’t want to speak to your specific use case, as it’s outside of my wheelhouse. My main point was that SATA cards are a problem.

    As for LSI SAS cards, there are a lot of details that probably don’t (but could) matter to you: PCIe generation, connectors, lanes, etc. There are threads on various homelab forums, TrueNAS, Unraid, etc. Some models (like the 9212-4i4e, meaning it has 4 internal and 4 external lanes) have native SATA ports, which is convenient, but most will have a SAS connector or two. You’d need a matching (forward) breakout cable to connect to SATA. Note that there are several common connector types, with internal and external versions of each.

    You can use the external connectors (e.g. SFF-8088) as long as you have a matching (e.g. SFF-8088 SAS-SATA) breakout cable, and are willing to route the cable accordingly. Internal connectors are simpler, but might be in lower supply.

    If you just need a simple controller card to handle a few drives without major speed concerns, and it will not be the boot drive, here are the things you need to watch for:

    • MUST be LSI, though rebranded LSI cards count. This includes certain cards from Dell and IBM, but not all.
    • Must support Initiator Target (IT) mode. The alternative is Integrated RAID (IR) mode. This covers nearly all of them, since most can be flashed to IT mode regardless.
    • Watch for counterfeits! There are a bunch of these out there. My best advice is to find IT recyclers on eBay. These cards are a dime a dozen in old, decommissioned servers, and they’re eager to sell them to whoever wants them.

    Also, make sure you can point a fan at it. They’re designed for rackmount server chassis, so desktop-style cases don’t usually have the airflow needed.



  • I did this back in the days of Smoothwall, ~20 years ago. I used an old, dedicated PC, with 2 PCI NICs.

    It was complicated and took a long time to set up properly. It was loud, used a lot of power, and didn’t give me much beyond the standard $50 routers of the day (and is easily eclipsed by the standard $80 routers of today). But it ran reliably for a number of years without any interaction.

    I also didn’t learn anything useful that I could ever apply to something else, so it ended up just being a waste of time. 2/10; spend your time on something more useful.


  • It won’t officially work, but it’s not too hard to get it going. I just moved a similar box to 24H2 LTSC.

    OP, you’ll probably need to run “setup.exe /product server”, or follow a recent guide. You’ll also need to do this for every major upgrade (i.e. yearly).

    I agree, though, with the plan to use this as a test ground. I also recently upgraded a Lubuntu system with similar specs, and it runs pretty smoothly. But learning Linux takes a lot of time they don’t have.