• 0 Posts
  • 22 Comments
Joined 1 year ago
Cake day: August 2nd, 2023


  • In days past, some drive vendors used different sector layouts, which could cause issues with RAID. Pretty sure most drives nowadays use the same layout and you won’t run into any issues. I still look to get the same drive model anyway, just to be perfectly sure.

    Even then you may run into weird issues. One of my 1.2 TB enterprise SSDs was reporting 1.12 TiB rather than the 1.09 TiB the other 7 drives had; TrueNAS refused to build a vdev with that drive and I had to return it for a new one.
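    For context, the TB-vs-TiB gap is just unit math; a quick sketch of why a “1.2 TB” drive normally reports about 1.09 TiB:

    ```python
    # Vendors count "1.2 TB" in decimal bytes (10**12); the OS reports
    # binary tebibytes (2**40), so the same drive shows up smaller.
    marketed_tb = 1.2
    size_bytes = marketed_tb * 10**12
    size_tib = size_bytes / 2**40
    print(f"{marketed_tb} TB = {size_tib:.2f} TiB")  # -> 1.2 TB = 1.09 TiB
    ```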


  • Typically a fiber ISP will run fiber optics only to your demarcation point (DEMARC). This will usually be where your main cable line (before any splits) or DSL line used to come in (in the US they’ve been using orange conduit to indicate this, and it will usually run to a panel in some closet or laundry room). At the DEMARC they’ll install one of two things: a basic fiber-to-ethernet converter, which will provide you a single ethernet port and a pure tap to the internet, or a gateway device that will convert the fiber to multiple ethernet ports with NAT (usually providing other capabilities like TV, phone, etc.).

    If you have the latter, you may not get much say in what you can do with your connection and may be limited to a DMZ mode configured on the gateway. What you put behind the converter or gateway is up to you.


  • I’ve got my mom set up on their PC backup service, no complaints so far (on the Backblaze side, that is; she still insists that she doesn’t need continuous backups even though I’ve had to restore multiple times for her).

    I switched my backups from CrashPlan to B2 as it was significantly cheaper than going to AWS. B2 is more expensive than what I was paying for CrashPlan Pro Unlimited (about 8x for the amount of data I have), but I have more peace of mind with it not relying on CrashPlan’s terrible Java client.

    A reminder that the only good backup is a tested backup.


  • Yes, ULAs are one of the exceptions I mentioned. They cover fc00::/7, which spans fc00 to fdff, though I believe most people use just the fd00::/8 half (see the sketch below). I use one for an intermediate network between my edge router and my primary firewall so I don’t consume one of my limited /64 networks.

    I haven’t played with IPv6 NAT much. I know its use is a bit discouraged, as NAT was always designed as a stopgap measure for IPv4 exhaustion. It might be a good option if you need additional space and your ISP doesn’t support additional prefixes. Just keep in mind that if you use these addresses in DNS, they won’t be accessible externally.
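    If you want to poke at the ULA range yourself, Python’s ipaddress module makes it easy (the fd12:3456:789a::/48 prefix below is a made-up example; in practice you’d generate a random one per RFC 4193):

    ```python
    import ipaddress

    # The ULA block and its bounds: fc00:: through fdff:ffff:...:ffff.
    ula = ipaddress.ip_network("fc00::/7")
    print(ula[0], ula[-1])

    # Most people pick a random prefix in the fd00::/8 half and carve
    # it into /64 LAN segments.
    site = ipaddress.ip_network("fd12:3456:789a::/48")
    lan = next(site.subnets(new_prefix=64))
    print(lan)  # fd12:3456:789a::/64
    ```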


  • It’s a bit complicated and depends on your ISP’s support level.

    If your ISP supports basic IPv6, they will likely use SLAAC or DHCPv6 to advertise the /64 that any directly connected devices, like your router, can use (/64 being the default size for a single LAN segment, even between point-to-point connections). If you have devices behind that router that want to use IPv6, you will need additional prefixes. The most common method nowadays is Prefix Delegation (DHCPv6-PD), where your router asks the upstream router for an additional routable prefix to use on another of its interfaces. The RFC for prefix delegation recommends a /48, but many ISPs delegate far less; I only get half of a /60 from my ISP’s modem. (There’s a quick sketch of the subnetting math at the end of this comment.)

    If the ISP just provides you a static routable prefix, then you would assign that to your router’s interface and enable SLAAC/DHCPv6 to hand out that prefix. It only needs to be configured on a single device, which is why hardcoding IPv6 addresses on servers and workstations is not recommended.

    Keep in mind that your router will also need a firewall, as all of these IPv6 prefixes are routable and public. While finding a host in IPv6 space is like finding a needle in a haystack, you could still have a bad day if you treat it like private IPv4 space.

    The end result, though, is that you set up DNS so that devices register their IPv6 addresses and it just works. There’s also mDNS, which supports IPv6 and handles segment-local resolution of device names.
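    To make the delegation math concrete, here’s a small sketch using the documentation prefix 2001:db8::/32 as a stand-in for whatever your ISP delegates; a /60 splits into sixteen /64 LAN segments:

    ```python
    import ipaddress

    # Hypothetical delegated prefix (DHCPv6-PD): a /60 yields 16 /64s.
    delegated = ipaddress.ip_network("2001:db8:abcd:10::/60")
    lans = list(delegated.subnets(new_prefix=64))
    print(len(lans))         # 16
    print(lans[0], lans[1])  # 2001:db8:abcd:10::/64  2001:db8:abcd:11::/64
    ```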



  • On one hand you definitely don’t want to be assigning manual/static IPv6 addresses to all your devices, because if your prefix ever changes you’ll have to update it everywhere (IPv6 doesn’t really have a concept of private address space, with a few exceptions). On the other hand, most modern IPv6 stacks let you run a dynamic protocol like SLAAC while also pinning a static suffix under the published prefix (e.g. you want :0:0:1234:1 to go to your server, and SLAAC learns the prefix 200x:0:0:5678::/64, so your server assigns itself 200x::5678:0:0:1234:1).

    DHCPv6 fixes a lot of these headaches for managed networks by allowing you to reserve a specific IPv6 address for a given DUID.

    IMO, your network, do what you want. I have two jump-host Raspberry Pis with static suffixes so I always know where they are without relying on DNS or whatever. Edit: I apparently misremembered how I had these set up. I use a custom interface-up script that takes the SLAAC prefix and appends the custom suffix to it as a secondary IP (roughly the math in the sketch below).
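    The suffix-appending trick is just integer math on the /64. A hypothetical re-creation of what that interface-up script does (the prefix and suffix values are examples, not my real ones):

    ```python
    import ipaddress

    # Combine a SLAAC-learned /64 prefix with a fixed 64-bit host suffix.
    def with_suffix(prefix: str, suffix: int) -> ipaddress.IPv6Address:
        net = ipaddress.ip_network(prefix)
        return net[suffix]  # index into the network = prefix + suffix

    # e.g. prefix learned at runtime, suffix 0:0:1234:1 pinned by you
    addr = with_suffix("2001:db8:0:5678::/64", 0x0000_0000_1234_0001)
    print(addr)  # 2001:db8:0:5678::1234:1
    ```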


  • You’ll probably have to provide the netmask info for us to review. If you’re using /24, then those addresses all reside in the same network, so I would expect them to be in the same broadcast domain.

    If you have mismatched netmasks, some traffic could be routed to the gateway, which then reflects it back. Ensure your devices have the same network, netmask, and broadcast IP (e.g. 192.168.1.0/24 has a broadcast IP of 192.168.1.255; quick check below).
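    Python’s ipaddress module is a quick way to sanity-check what network and broadcast a given host config implies (192.168.1.42 is just a stand-in address):

    ```python
    import ipaddress

    # What network/broadcast does this host config imply?
    iface = ipaddress.ip_interface("192.168.1.42/24")
    print(iface.network)                    # 192.168.1.0/24
    print(iface.network.broadcast_address)  # 192.168.1.255

    # A mismatched mask puts the "same" host in a different network:
    other = ipaddress.ip_interface("192.168.1.42/25")
    print(other.network, other.network.broadcast_address)  # .0/25, .127
    ```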




  • For the disks, you may have a small issue with multiple models of disk in a single RAID10, as they might have slightly different physical attributes. ZFS is an option here: you can create two mirror vdevs, one per drive type, and add them to the same zpool, which effectively creates the RAID10 you’re looking for. You would typically not use LVM on top of ZFS, but if you go with a plain RAID10 instead, it would let you create logical partitions that can be expanded easily at a later time.

    Another ZFS option is RAIDZ1 with the 4 disks in a single vdev. The vdev uses one disk’s worth of space across all disks to maintain parity, so you’d have 12TB of usable storage from your 16TB of raw storage and can lose one drive with no data loss (capacity math sketched below).
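    Back-of-the-envelope usable capacity for the two layouts, assuming 4x4TB disks and ignoring ZFS metadata overhead:

    ```python
    # Usable capacity with n disks of size_tb each.
    n, size_tb = 4, 4

    # Two mirror vdevs (the "RAID10" layout): half the raw capacity.
    mirror_usable = (n // 2) * size_tb  # 8 TB

    # One RAIDZ1 vdev: one disk's worth of parity.
    raidz1_usable = (n - 1) * size_tb   # 12 TB

    print(f"mirrors: {mirror_usable} TB, raidz1: {raidz1_usable} TB")
    ```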


  • Since we don’t know what server or VM tech you’re using, the advice will be pretty generic. For self-hosting, you can likely get away with your iSCSI traffic sharing the LAN interface with your usual VM traffic, but if you need high throughput you will want iSCSI-optimized NICs and jumbo frames turned on (an MTU of 9000 is the standard here; see the check at the end of this comment). This requires a switch that supports jumbo frames as well.

    For Windows, I find the iSCSI support to be very lacking. Every time I have used it I have had sporadic loss of connectivity, failure to mount on boot, and other issues. I would avoid it.

    For ESXi, you can map an iSCSI LUN as a datastore and create VMDKs on top. This functions the same as with actual FC LUNs or NFS mounts, and I have had no reliability issues. There’s also RDM (Raw Device Mapping), which mounts the iSCSI LUN directly as a disk of the VM. If you’re using vSphere I would advise against this, as you lose the ability to vMotion or use DRS.
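    As a sanity check on the jumbo-frames point above: on Linux you can read the MTU straight from sysfs (eth0 is a placeholder for your storage-facing NIC, and the switch and target have to agree too):

    ```python
    from pathlib import Path

    # Read the configured MTU of a network interface from sysfs.
    def mtu(iface: str) -> int:
        return int(Path(f"/sys/class/net/{iface}/mtu").read_text())

    if mtu("eth0") < 9000:
        print("jumbo frames not enabled; iSCSI throughput may suffer")
    ```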


  • Cool. Yeah, as a professional I am constantly aware of data integrity and have most of my shit stored on redundant drives. I had a WoW guild officer who shared his home setup: 8x12TB drives in Windows Storage Spaces with no redundancy, about 80% full. I had to ask how he slept at night knowing he could lose 80TB of data at any time.

    Personally, my TrueNAS has 5x1.92TB SSDs set up as two mirror vdevs plus a hot spare for my iSCSI LUNs, and 8x1.2TB 10K drives in a RAIDZ2 (two-disk parity) for my NAS storage.


  • I believe ZFS works best when it has direct access to the disks, so putting md underneath it is not best practice. I’m not sure how well ZFS handles external disks, but that’s something to consider. As for the drive sizes and redundancy, each size should have its own vdev: a mirror vdev of the 2x6TB and a mirror vdev of the 2x12TB gives maximum redundancy against drive failure, totaling 18TB usable in your pool (quick math below). Later on, if you need more space, you can create new vdevs and add them to the pool.

    If you’re not worried about redundancy, you could bypass ZFS and just set up a RAID-0 through mdadm, or add the disks to an LVM VG, to use the full capacity, but remember that you may lose the whole volume if a single disk dies. Keep in mind that this includes accidentally unplugging an external disk.
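    The capacity-vs-redundancy trade-off in rough numbers:

    ```python
    # Usable capacity for the two approaches (2 x 6 TB + 2 x 12 TB).
    disks_tb = [6, 6, 12, 12]

    # Mirror vdev per matched pair: each pair contributes one disk.
    mirrored = 6 + 12        # 18 TB, survives one failure per pair

    # RAID-0 / LVM spanning: all capacity, one dead disk loses the volume.
    striped = sum(disks_tb)  # 36 TB, zero redundancy

    print(f"mirrors: {mirrored} TB usable, stripe: {striped} TB usable")
    ```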







  • Since you’ve probably been using the SMB protocol to access the NAS, there are a few things to understand about NFS, which functions differently. An NFS mount acts like a mapping for the entire system rather than for a specific user, so differences between the systems can produce access errors. For example, the default user in Synology DSM has a UID of 1024, but most client systems give their first user a UID of 1000. This means your user may not have access to the share or its files even though you have it mounted on the client.

    One thing to check is your Shared Folder’s NFS squash setting. This is found in Control Panel > Shared Folder, under the NFS Permissions tab. If it’s set to “No mapping” then UIDs must match. The easiest setup is “Map all users to admin”, but you may encounter issues with that later if you switch back to SMB, since new files will be owned by admin. (A quick client-side check is sketched below.)
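    From the client side you can see the mismatch directly; a quick sketch (/mnt/nas is a placeholder mount point):

    ```python
    import os
    from pathlib import Path

    # Compare your UID against the owner of a file on the NFS mount.
    mount = Path("/mnt/nas")
    st = mount.stat()
    print(f"my uid: {os.getuid()}, file owner uid: {st.st_uid}")
    # e.g. "my uid: 1000, file owner uid: 1024" -> squash/UID mapping needed
    ```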