{"id":9595,"date":"2025-05-29T20:00:00","date_gmt":"2025-05-30T00:00:00","guid":{"rendered":"https:\/\/nandgate.consulting\/website\/?p=9595"},"modified":"2025-05-29T19:54:14","modified_gmt":"2025-05-29T23:54:14","slug":"home-lab-server-storage-setup","status":"publish","type":"post","link":"https:\/\/nandgate.consulting\/website\/?p=9595","title":{"rendered":"Home Lab Server &#8211; Storage Setup"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\">Design<\/h2>\n\n\n\n<p>In 2025 there are so many options to consider for any one storage design that entire books could be (and probably have been) written on the topic. The primary goal of this project, though, will guide the design: increase storage space to 8TB or more. Because I have two 8TB drives and two 1TB drives, I have some flexibility in my design, but not much. I could maximize raw capacity and use all four drives independently for up to 18TB of space, or I could meet the goal of increased capacity while also meeting the goal of server resilience by mirroring drives with RAID1 (or a similar scheme) so that a single drive failure doesn&#8217;t cost me data. In my old setup I had four 1TB drives in a RAID 0+1 configuration, which worked well but really limited capacity.
The new design will provide much more space.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"596\" src=\"https:\/\/nandgate.consulting\/website\/wp-content\/uploads\/2025\/05\/orion_server_storage_redesign-3-1024x596.avif\" alt=\"\" class=\"wp-image-9608\" srcset=\"https:\/\/nandgate.consulting\/website\/wp-content\/uploads\/2025\/05\/orion_server_storage_redesign-3-1024x596.avif 1024w, https:\/\/nandgate.consulting\/website\/wp-content\/uploads\/2025\/05\/orion_server_storage_redesign-3-300x175.avif 300w, https:\/\/nandgate.consulting\/website\/wp-content\/uploads\/2025\/05\/orion_server_storage_redesign-3-768x447.avif 768w, https:\/\/nandgate.consulting\/website\/wp-content\/uploads\/2025\/05\/orion_server_storage_redesign-3-1536x894.avif 1536w, https:\/\/nandgate.consulting\/website\/wp-content\/uploads\/2025\/05\/orion_server_storage_redesign-3-2048x1193.avif 2048w, https:\/\/nandgate.consulting\/website\/wp-content\/uploads\/2025\/05\/orion_server_storage_redesign-3-416x242.avif 416w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>I chose to split the difference: provide redundancy where it really counts, and capacity where redundancy is less important. Family photos, videos, and favorite audio files take up a lot of space, cannot afford to be lost, and are hard to keep fully backed up externally due to bandwidth and cost limitations. Therefore, these files need to live on the 8TB drives, and both 8TB drives will be placed into a RAID1 configuration. RAID1 provides full mirroring, meaning that if one drive fails, a complete copy of the data remains on the partner drive.
This isn&#8217;t my backup solution, but it does provide some measure of reliability, since I can replace a single failing drive before needing to resort to restoring everything from backups.<\/p>\n\n\n\n<p>The two 1TB drives, however, will be placed into a RAID0 configuration, which provides data striping &#8211; splitting writes across both drives in parallel &#8211; meaning that I get maximum performance and maximum capacity (2TB) out of these two drives. But I don&#8217;t get reliability: if one drive fails, the data on both is likely unrecoverable. So how will I use the 2TB of space? Answer: as a Bitcoin blockchain storage location. Although it will likely take a few days (yes, days) to download and verify the entire Bitcoin blockchain on these two drives, I will have the space to store it without directly impacting the read\/write load on the two 8TB drives. Additionally, I could perform incremental backups of the blockchain to the 8TB drives if needed, or in preparation for replacing these two 1TB drives in the future. I also have a few spare 1TB drives (remember that old Drobo 5N I spoke of in the first article in this series? &#8211; they&#8217;re from that rig) to swap in should even one fail, which makes this a viable option for my situation.<\/p>\n\n\n\n<p>I considered other designs, such as mergerfs + SnapRAID, or ZFS, but in the end I decided to keep my design simpler for two reasons:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>If disaster strikes in the future or a drive fails, I want to be able to recover quickly with reliable, simple methods. RAID1 gives me that option.<\/li>\n\n\n\n<li>The less complicated the design, the less likely I am to mess it up!
I don&#8217;t want to rely on staying on top of the software updates, configuration changes, and operational details of software-defined storage stacks like ZFS or SnapRAID, so I&#8217;m going to use the much longer-lived, if less feature-rich, RAID approach.<\/li>\n<\/ol>\n\n\n\n<h3 class=\"wp-block-heading\">Drive Partitioning<\/h3>\n\n\n\n<p>Let us note now that I am not a complete newb when it comes to new and improved technology. I know that there are more drive partitioning schemes and file system types than FAT32. I will be partitioning my HDDs for RAID0 or RAID1 (as described above), and will be using BTRFS on the 8TB mirror; the 1TB stripe will get ext4, for reasons I explain below. BTRFS is a newer filesystem type, but it has some nice features for maintaining the reliability of a drive, such as data checksumming and snapshots, and it is supposed to perform well, too.<\/p>\n\n\n\n<p>I&#8217;ll begin by using the <code>parted<\/code> program on my NixOS install to reformat and partition my drives. I realized early on that drives over 2TB cannot use the old MBR (<code>msdos<\/code>) partition table, so they need a GPT table and a tool that handles it well &#8211; <code>parted<\/code> rather than the old standby of <code>fdisk<\/code>. You can use <code>parted<\/code> from within a Nix ad-hoc shell, but I chose to add it to <code>\/etc\/nixos\/configuration.nix<\/code> so that I could always have it available in the future. (I would like to continue to expand the drive space, and besides, it will come in handy when I build another server with even larger drives using my custom-rolled NixOS ISO &#8211; more on that later.)<\/p>\n\n\n\n<p>To partition my two 1TB drives, I followed this pattern:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>$ sudo parted \/dev\/sda\nGNU Parted 3.6\nUsing \/dev\/sda\nWelcome to GNU Parted! Type 'help' to view a list of commands.\n(parted) <\/code><\/pre>\n\n\n\n<p>At the <code>(parted)<\/code> prompt, the <code>print<\/code> command will display the state of the currently selected drive.
My drive is already formatted as of this writing, but this is what would be shown initially:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>(parted) print\nModel: ATA WDC WD10EZEX-60W (scsi)\nDisk \/dev\/sda: 1000GB\nSector size (logical\/physical): 512B\/512B\nPartition Table: unknown\nDisk Flags:<\/code><\/pre>\n\n\n\n<p>The first step is to set the partition table using the command <code>mklabel<\/code>. Although my 1TB drives could be partitioned with the <code>msdos<\/code> option, I chose the newer standard partition table option, <code>gpt<\/code>, like so:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>(parted) mklabel\nNew disk label type? gpt<\/code><\/pre>\n\n\n\n<p>Provided your drive is clean, when you search for free space using the following command, you should see that no partitions exist yet (only the partition <em>table<\/em> has been created at this point):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>(parted) print free\nModel: ATA WDC WD10EZEX-60W (scsi)\nDisk \/dev\/sda: 1000GB\nSector size (logical\/physical): 512B\/512B\nPartition Table: gpt\nDisk Flags:\n\nNumber  Start   End     Size    File system  Name     Flags\n        0.0kB   1000GB  1000GB  Free Space<\/code><\/pre>\n\n\n\n<p>Next we assign an actual partition to the drive using the <code>mkpart<\/code> subcommand within <code>parted<\/code>. If I were making multiple partitions, that could be done at this stage as well, but I have found that multiple partitions are generally less valuable than letting the Linux system manage directories on the biggest partitions possible.
The only notable exceptions are separating data from the main Linux system directories by placing \/home, \/var, or \/opt on drives separate from the main OS drive, but we can cover those details later.<\/p>\n\n\n\n<p>You can let <code>parted<\/code> prompt you interactively by calling <code>mkpart<\/code> without any options, but it saves time to supply the arguments listed in the help: <code>mkpart PART-TYPE [FS-TYPE] START END<\/code>.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>(parted) mkpart primary ext4 1M 1000GB<\/code><\/pre>\n\n\n\n<p>Notice above that I chose the file system type <code>ext4<\/code> instead of accepting the default of <code>ext2<\/code>, and that I did not choose another option like <code>btrfs<\/code>. This choice was deliberate, to make supporting these drives from any given distribution easy, even if my current NixOS install fails at some point. These drives will be constantly reading and writing the full Bitcoin blockchain, so simpler is better in this situation.<\/p>\n\n\n\n<p>Also notice that I specified a partition start of <code>1M<\/code>, a 1MiB (1,048,576 byte) offset from the beginning of the drive. Starting at a clean power-of-two boundary keeps the partition aligned to the 4096-byte physical sectors these drives use, so filesystem blocks never straddle two physical sectors. (Will the theoretical benefit of avoiding a few split reads and writes really be worth it? &#8230; who cares! &#8211; Correct me in the comments)<\/p>\n\n\n\n<p>Now check your partition to ensure you actually created the partition successfully:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>(parted) p\nModel: ATA WDC WD10EZEX-60W (scsi)\nDisk \/dev\/sda: 1000GB\nSector size (logical\/physical): 512B\/4096B\nPartition Table: gpt\nDisk Flags:\n\nNumber  Start   End     Size    File system  Name     Flags\n 1      1049kB  1000GB  1000GB  ext4         primary<\/code><\/pre>\n\n\n\n<p>I then repeated these steps for my <code>\/dev\/sdd<\/code> 1TB drive so that both drives appear identical.<\/p>\n\n\n\n<p>NOTE: I am not a drive partitioning expert, and the above steps may be unnecessary if you are proceeding on to build a RAID array, as I am. When creating the RAID arrays I get similar messages about wiping out data and re-partitioning the drives from the <code>mdadm<\/code> program, so take everything I do here as simply the steps I followed, which may or may not be necessary for your individual situation.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Building a RAID 0 drive array<\/h4>\n\n\n\n<p>Remember that I want these two 1TB drives to provide maximum capacity; redundancy is not important because of how I intend to use them.<\/p>\n\n\n\n<p>Now we need to switch over to using the <code>mdadm<\/code> command to initialize the RAID array. But first, I needed to create a mount point for the array. NixOS doesn&#8217;t automatically have a \/mnt directory, so I created one as root. Then I created a subdirectory in the \/mnt directory which I chose to name <code>md0-btc<\/code>.
You could choose to name it whatever you wish, like so:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo mkdir \/mnt\nsudo mkdir \/mnt\/md0-btc<\/code><\/pre>\n\n\n\n<p>Then I entered an ad-hoc nix-shell environment to run the <code>mdadm<\/code> program to create the RAID array:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo nix-shell -p mdadm<\/code><\/pre>\n\n\n\n<p>From within this ad-hoc shell environment, I ran the following:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo mdadm --verbose --create \/dev\/md0 --level=raid0 --raid-devices=2 \/dev\/sda1 \/dev\/sdd1<\/code><\/pre>\n\n\n\n<p>That step creates the device node under \/dev and assembles it as a RAID0 array from the \/dev\/sda1 and \/dev\/sdd1 partitions (on my 1TB drives). Now we can issue the command <code>sudo mkfs.ext4 \/dev\/md0<\/code> to create the filesystem that our system will need in order to mount the RAID array during system initialization.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>&#91;caveman@orion:~]$ sudo mkfs.ext4 \/dev\/md0\nmke2fs 1.47.1 (20-May-2024)\n\/dev\/md0 contains a ext4 file system\n        created on Sat May 24 06:09:25 2025\nProceed anyway?
(y,N) y\nCreating filesystem with 488314368 4k blocks and 122085376 inodes\nFilesystem UUID: 3167f30d-d160-4863-bc2f-ef1b21119e22\nSuperblock backups stored on blocks:\n        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,\n        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,\n        102400000, 214990848\n\nAllocating group tables: done\nWriting inode tables: done\nCreating journal (262144 blocks): done\nWriting superblocks and filesystem accounting information: done<\/code><\/pre>\n\n\n\n<p>Guides for Ubuntu and other distributions will tell you to edit <code>\/etc\/fstab<\/code> to mount the new \/dev\/md0 array during startup; on NixOS, however, <code>\/etc\/fstab<\/code> is generated from the Nix config at <code>\/etc\/nixos\/configuration.nix<\/code>, so manual edits &#8211; even by root &#8211; are overwritten on the next rebuild. For this project, that is a good thing.<\/p>\n\n\n\n<p>Now I edit the <code>\/etc\/nixos\/configuration.nix<\/code> file as root (<code>sudo vim \/etc\/nixos\/configuration.nix<\/code> &#8211; or <code>sudo nano \/etc\/nixos\/configuration.nix<\/code> if you don&#8217;t share my fondness for the <code>vim<\/code> text editor) to add the following to your configuration:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>fileSystems.\"\/mnt\/md0-btc\" =\n  { device = \"\/dev\/md0\";  # mount \/dev\/md0 RAID0 WD 1TB drives\n    fsType = \"ext4\";\n    options = &#91; \"nofail\" ];\n  };<\/code><\/pre>\n\n\n\n<p>We can test that our new configuration is working properly by rebuilding our NixOS system.
Technically, <code>sudo nixos-rebuild switch<\/code> activates the new configuration immediately, which is living dangerously; the safer path is to run <code>sudo nixos-rebuild test<\/code> first and, if all is successful, switch after that:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>$ sudo nixos-rebuild switch\nbuilding Nix\u2026\nbuilding the system configuration\u2026\nactivating the configuration\u2026\nsetting up \/etc\u2026\nreloading user units for caveman\u2026\nrestarting sysinit-reactivation.target<\/code><\/pre>\n\n\n\n<p>If you didn&#8217;t encounter errors, see whether your new RAID array is mounted using the <code>lsblk<\/code> command:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>$ sudo lsblk\nNAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS\nsda           8:0    0 931.5G  0 disk\n\u2514\u2500sda1        8:1    0 931.5G  0 part\n  \u2514\u2500md0       9:0    0   1.8T  0 raid0\nsdb           8:16   0   7.3T  0 disk\n\u2514\u2500sdb1        8:17   0   7.3T  0 part\nsdc           8:32   0   7.3T  0 disk\n\u2514\u2500sdc1        8:33   0   7.3T  0 part\nsdd           8:48   0 931.5G  0 disk\n\u2514\u2500sdd1        8:49   0 931.5G  0 part\n  \u2514\u2500md0       9:0    0   1.8T  0 raid0\nnvme0n1     259:0    0 238.5G  0 disk\n\u251c\u2500nvme0n1p1 259:1    0   512M  0 part  \/boot\n\u2514\u2500nvme0n1p2 259:2    0   238G  0 part  \/nix\/store\n                                       \/<\/code><\/pre>\n\n\n\n<p>I then built my two Seagate 8TB drives following the same steps, but I formatted their array with the BTRFS filesystem and mounted the RAID1 array I created on \/dev\/md1 at the mount point \/mnt\/md1.
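<\/p>\n\n\n\n<p>For reference, applying the same pattern to the 8TB pair (<code>\/dev\/sdb<\/code> and <code>\/dev\/sdc<\/code> in the <code>lsblk<\/code> output above) would look roughly like the following. This is a sketch by analogy with the RAID0 steps, not a transcript of my exact session, so adjust device names and mount points for your own system:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># create the mount point, assemble the mirror, and format it as BTRFS\nsudo mkdir \/mnt\/md1\nsudo mdadm --verbose --create \/dev\/md1 --level=raid1 --raid-devices=2 \/dev\/sdb1 \/dev\/sdc1\nsudo mkfs.btrfs \/dev\/md1<\/code><\/pre>\n\n\n\n<p>And the matching entry in <code>\/etc\/nixos\/configuration.nix<\/code>:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>fileSystems.\"\/mnt\/md1\" =\n  { device = \"\/dev\/md1\";  # RAID1 mirror of the two 8TB drives\n    fsType = \"btrfs\";\n    options = &#91; \"nofail\" ];\n  };<\/code><\/pre>\n\n\n\n<p>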
In the next article I will begin building some simple backend services on my server which will provide the basis for the user-level services everyone wants to use.<\/p>\n\n\n\n\n\n<h3 class=\"wp-block-heading\">References<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/linuxconfig.org\/how-to-partition-a-drive-on-linux\">https:\/\/linuxconfig.org\/how-to-partition-a-drive-on-linux<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/askubuntu.com\/questions\/350266\/how-can-i-create-a-raid-array-with-2tb-disks\">https:\/\/askubuntu.com\/questions\/350266\/how-can-i-create-a-raid-array-with-2tb-disks<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/seagate.com\/products\/nas-drives\/raid-calculator\/\">Seagate RAID Capacity Calculator<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Design In 2025 there are so many options available to consider for any one storage design that I am sure even multiple books can or have been written on the topic. The primary goal of this project though will guide the design, and that goal was to increase storage space to 8TB or more.
Because [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[12],"tags":[],"class_list":["post-9595","post","type-post","status-publish","format-standard","hentry","category-article"],"_links":{"self":[{"href":"https:\/\/nandgate.consulting\/website\/index.php?rest_route=\/wp\/v2\/posts\/9595","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/nandgate.consulting\/website\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/nandgate.consulting\/website\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/nandgate.consulting\/website\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/nandgate.consulting\/website\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=9595"}],"version-history":[{"count":10,"href":"https:\/\/nandgate.consulting\/website\/index.php?rest_route=\/wp\/v2\/posts\/9595\/revisions"}],"predecessor-version":[{"id":9616,"href":"https:\/\/nandgate.consulting\/website\/index.php?rest_route=\/wp\/v2\/posts\/9595\/revisions\/9616"}],"wp:attachment":[{"href":"https:\/\/nandgate.consulting\/website\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=9595"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/nandgate.consulting\/website\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=9595"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/nandgate.consulting\/website\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=9595"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}