Category: article

  • Home Lab Server – Storage Setup

    Design

    In 2025 there are so many options to consider for any one storage design that multiple books could be (and probably have been) written on the topic. The primary goal of this project, though, will guide the design, and that goal was to increase storage space to 8TB or more. Because I have two 8TB drives and two 1TB drives, I have some flexibility in my design, but not much. I could choose raw storage size and simply use every drive independently for a maximum of 18TB, OR I could meet the project goal of increased storage while ALSO meeting the goal of server resilience by pairing drives into RAID1 mirrors (or similar) so that a copy survives should one drive fail. In my old setup I had four 1TB drives in a RAID 0+1 configuration, which worked well but really limited capacity. The new design will provide much more space.

    I chose to split the difference, and provide redundancy where it really counts, and capacity where redundancy is less important. Family photos, videos, and favorite audio files take up a lot of space, cannot afford to be lost, and are hard to keep backed up externally due to bandwidth and cost limitations. Therefore, these files need to be placed on the 8TB drives, but both 8TB drives will be placed into a RAID1 configuration. RAID1 provides full mirroring, meaning that if one drive fails, a complete copy of the data lives on the partner drive. This isn’t my backup solution, but it does provide some measure of reliability, since I can replace a single failing drive before needing to resort to a full restoration from backups.

    The two 1TB drives, however, will be placed into a RAID0 configuration, which provides data striping – writing data across both drives in parallel – meaning that I gain maximum performance and maximum capacity (2TB) out of these two drives. But I don’t have reliability. If one drive fails, the data on both is likely unrecoverable. So how will I use the 2TB of space? Answer: as a Bitcoin blockchain storage location. Although it will likely take a few days (yes, days) to download and verify the entire Bitcoin blockchain onto these two drives, I will have the space to store it without directly impacting the read/write load on the two 8TB drives. Additionally, I could perform incremental backups of the blockchain to the 8TB drives if needed, or in preparation for replacing these two 1TB drives in the future. I also have a few spare 1TB drives (remember that old Drobo 5N I spoke of in the first article in this series? They’re from that rig) to swap in should even one fail, which makes this a viable option for my situation.

    I considered other designs, such as mergerfs + SnapRAID, or ZFS, but in the end I decided to keep my design simpler for two reasons:

    1. If disaster strikes in the future or if a drive fails, I want to be able to recover fairly quickly and with very reliable, simple methods. RAID1 gives me that option.
    2. The less complicated the design, the less likely I am to mess it up! I don’t want to rely on staying on top of software updates, configuration changes, and the details of software-defined storage options like ZFS or SnapRAID, so I’m going to use the much longer-lived, if less performant, RAID approach.

    Drive Partitioning

    Let us note now that I am not a complete newb when it comes to new and improved technology. I know that there are more drive partitioning schemes and filesystem types than FAT32. I will be partitioning my HDDs for RAID0 or RAID1 (as described above), and I will be using BTRFS on the 8TB pair (the 1TB pair ends up on ext4, for reasons explained below). BTRFS is a newer filesystem type, but it appears to have some nice features for maintaining the reliability of a drive, and it is supposed to perform better than older filesystem types as well.

    I’ll begin by using the parted program on my NixOS install to reformat and partition my drives. I realized early on that drives over 2TB in size require newer drive management tools than the old standby of fdisk, since MBR partition tables top out at 2TB. You can use parted from within a Nix ad-hoc shell, but I chose to add it to /etc/nixos/configuration.nix so that I will always have it available in the future. (I would like to continue to expand the drive space, and besides, it will come in handy when I build another server with even larger drives using my custom-rolled NixOS ISO – more on that later.)
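    For reference, here is roughly what that addition looks like in /etc/nixos/configuration.nix (a minimal sketch; the rest of your package list will differ):

    environment.systemPackages = with pkgs; [
      parted  # GPT-capable partitioning, needed for drives over 2TB
    ];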

    To partition my two 1TB drives, I followed this pattern:

    $ sudo parted /dev/sda
    GNU Parted 3.6
    Using /dev/sda
    Welcome to GNU Parted! Type 'help' to view a list of commands.
    (parted) 

    At the (parted) prompt, the print command will display the state of the currently selected drive. My drive is already formatted as of the writing of this guide, but this is what would be shown initially:

    (parted) print
    Model: ATA WDC WD10EZEX-60W (scsi)
    Disk /dev/sda: 1000GB
    Sector size (logical/physical): 512B/512B
    Partition Table: unknown
    Disk Flags:

    The first step is to set the Partition Table using the command mklabel. Although my 1TB drives could be partitioned with an msdos option, I chose to use the newer standard partition table option, gpt, like so:

    (parted) mklabel
    New disk label type? gpt

    Provided your drive is clean, when you search for the free space on it using the following command, you should see that no partitions yet exist (only the partition table has been created at this point in time):

    (parted) print free
    Model: ATA WDC WD10EZEX-60W (scsi)
    Disk /dev/sda: 1000GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Disk Flags:
    
    Number  Start   End     Size    File system  Name     Flags
            0.0kB   1000GB  1000GB  Free Space

    The next thing we will do is assign an actual partition to the drive using the mkpart subcommand within the parted command-line program. If I were making multiple partitions, that could be done at this stage as well, but I have found that this is generally less valuable than letting the Linux system manage folders directly on the biggest partitions possible. The only notable exceptions are separating data from the main Linux system folders by placing /home, /var, or /opt on drives separate from the main OS drive, but we can cover those details later.

    You can use parted‘s interactive prompts by just calling mkpart without any options, but it saves time to use the simple form listed in the help: mkpart PART-TYPE [FS-TYPE] START END.

    (parted) mkpart primary ext4 1M 1000GB

    Notice above that I chose the File System Type as ext4 instead of accepting the default of ext2, and that I did not choose another option like btrfs. (Note that parted’s FS-TYPE field only records a type hint on the partition; the filesystem itself is created later with mkfs.) This choice was deliberate, to make supporting these drives from any given distribution easy, even if my current NixOS install fails at some point. I want these drives to perform reliably and often, since they’ll be constantly working with the full Bitcoin blockchain, so simpler is better in this situation.

    Also notice that I specified a start of the partition at 1M, a 1MiB offset from the beginning of the drive. On an SSD or NVMe type drive this wouldn’t matter much, but on spinning disks such as my Western Digital or Seagate HDDs it keeps the partition aligned to the 4096-byte physical sectors, so reads and writes don’t straddle sector boundaries. (I think that is so… of course, will saving 100ms of time and maybe 1 extra hour of HDD lifetime or whatever the theoretical benefits are be worth it? … who cares! – Correct me in the comments)
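    As an aside, parted can also run these steps non-interactively and verify the alignment for you; here is a sketch of the equivalent scripted commands (assuming /dev/sda):

    sudo parted -s /dev/sda mklabel gpt mkpart primary ext4 1MiB 100%
    sudo parted /dev/sda align-check optimal 1    # should report "1 aligned"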

    Now check your partition to ensure you actually created the partition successfully:

    (parted) p
    Model: ATA WDC WD10EZEX-60W (scsi)
    Disk /dev/sda: 1000GB
    Sector size (logical/physical): 512B/4096B
    Partition Table: gpt
    Disk Flags:
    
    Number  Start   End     Size    File system  Name     Flags
     1      1049kB  1000GB  1000GB  ext4         primary

    I then repeat these steps for my /dev/sdd 1TB drive so that both drives appear identical.

    NOTE: I am not a drive partitioning expert, and the above steps may be unnecessary if you are proceeding on to build a RAID array, as I am. When creating the RAID arrays, mdadm gives similar messages about wiping out data and re-partitioning the drives, so take everything here as simply the steps I followed, which may or may not be necessary for your individual situation.
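    If you would rather clear leftover filesystem or RAID signatures explicitly before re-partitioning, the wipefs tool can do that. (This is destructive, so triple-check the device name first.)

    sudo wipefs -a /dev/sda    # erase all filesystem and RAID signatures on the drive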

    Building a RAID 0 drive array

    Remember that I want these two 1TB drives to provide maximum capacity, and that redundancy is not important because of how I intend to use them.

    Now we need to switch over to using the mdadm command to initialize the RAID array. But first, I needed to create mount points for my RAID array. NixOS doesn’t automatically have a /mnt directory, so I created one as root. Then I created a subdirectory in the /mnt directory which I chose to name md0-btc. You could choose to name it whatever you wish, like so:

    sudo mkdir /mnt
    sudo mkdir /mnt/md0-btc

    Now I entered an ad-hoc nix-shell environment to run the mdadm program to create the RAID array:

    sudo nix-shell -p mdadm

    From within this ad-hoc shell environment, I ran the following:

    sudo mdadm --verbose --create /dev/md0 --level=raid0 --raid-devices=2 /dev/sda /dev/sdd

    That step generates the device under /dev, set up as a RAID0 array consisting of the /dev/sda and /dev/sdd drives (my 1TB drives).
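    Before formatting, it is worth a quick sanity check that the array actually assembled; both of these are standard mdadm/Linux tools:

    cat /proc/mdstat                 # lists active arrays and their member drives
    sudo mdadm --detail /dev/md0     # shows RAID level, state, and member devices

    With the array healthy, we can issue the command sudo mkfs.ext4 /dev/md0 to generate the necessary filesystem accounting info that our system will need in order to mount our RAID array during system initialization.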

    [caveman@orion:~]$ sudo mkfs.ext4 /dev/md0
    mke2fs 1.47.1 (20-May-2024)
    /dev/md0 contains a ext4 file system
            created on Sat May 24 06:09:25 2025
    Proceed anyway? (y,N) y
    Creating filesystem with 488314368 4k blocks and 122085376 inodes
    Filesystem UUID: 3167f30d-d160-4863-bc2f-ef1b21119e22
    Superblock backups stored on blocks:
            32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
            4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
            102400000, 214990848
    
    Allocating group tables: done
    Writing inode tables: done
    Creating journal (262144 blocks): done
    Writing superblocks and filesystem accounting information: done

    Other guides for Ubuntu or other distributions will talk about editing your /etc/fstab file to add the new RAID device /dev/md0 as a mount during startup; however, NixOS generates /etc/fstab from the Nix config file at /etc/nixos/configuration.nix, so manual edits (even by root) are overwritten at the next rebuild, and this is a good thing for this project.

    Now I edit the /etc/nixos/configuration.nix file as root (sudo vim /etc/nixos/configuration.nix – or sudo nano /etc/nixos/configuration.nix if you haven’t installed vim, the editor I prefer) to add the following to the configuration:

    fileSystems."/mnt/md0-btc" =
      { device = "/dev/md0";  # mount /dev/md0 RAID0 WD 1TB drives
        fsType = "ext4";
        options = [ "nofail" ];  # don't hang the boot if the array is unavailable
      };
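    One hedge worth noting: on recent NixOS releases (23.11 and later, if I recall correctly) software RAID support must be enabled explicitly for arrays to be assembled at boot. A sketch of what that looks like, with the ARRAY line taken from your own sudo mdadm --detail --scan output:

    boot.swraid.enable = true;
    boot.swraid.mdadmConf = ''
      # replace with the actual line printed by `sudo mdadm --detail --scan`
      ARRAY /dev/md0 metadata=1.2 UUID=<your-array-uuid>
    '';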

    We can test that our new configuration works by rebuilding our NixOS system. Technically, switching fully activates the new configuration immediately, which is living dangerously, so you can run sudo nixos-rebuild test first and, if all is successful, switch after that:

    $ sudo nixos-rebuild switch
    building Nix…
    building the system configuration…
    activating the configuration…
    setting up /etc…
    reloading user units for caveman…
    restarting sysinit-reactivation.target

    If you didn’t encounter errors, then see if your new RAID array is mounted using the lsblk command:

    $ sudo lsblk
    NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
    sda           8:0    0 931.5G  0 disk
    └─sda1        8:1    0 931.5G  0 part
      └─md0       9:0    0   1.8T  0 raid0
    sdb           8:16   0   7.3T  0 disk
    └─sdb1        8:17   0   7.3T  0 part
    sdc           8:32   0   7.3T  0 disk
    └─sdc1        8:33   0   7.3T  0 part
    sdd           8:48   0 931.5G  0 disk
    └─sdd1        8:49   0 931.5G  0 part
      └─md0       9:0    0   1.8T  0 raid0
    nvme0n1     259:0    0 238.5G  0 disk
    ├─nvme0n1p1 259:1    0   512M  0 part  /boot
    └─nvme0n1p2 259:2    0   238G  0 part  /nix/store
                                           /

    I then followed up by building out my two Seagate 8TB drives using the same steps, but I formatted them with the BTRFS filesystem and mounted the RAID1 array I created on /dev/md1 at the mount point /mnt/md1.
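    For completeness, here is a condensed sketch of those same steps for the mirrored pair, assuming the 8TB drives are /dev/sdb and /dev/sdc as shown in the lsblk output above:

    sudo mdadm --verbose --create /dev/md1 --level=raid1 --raid-devices=2 /dev/sdb /dev/sdc
    sudo mkfs.btrfs /dev/md1
    sudo mkdir -p /mnt/md1

    And the matching entry in /etc/nixos/configuration.nix:

    fileSystems."/mnt/md1" =
      { device = "/dev/md1";  # mount /dev/md1 RAID1 Seagate 8TB drives
        fsType = "btrfs";
        options = [ "nofail" ];
      };

    In the next article I will begin building some simple backend services on my server which will provide the basis for the user-level services everyone wants to use.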


  • Home Lab Server – Hardware Setup

    The next step on the journey of building a home lab server is to consider hardware and storage needs. This article cannot possibly provide any one individual or family or organization with the necessary knowledge to make an optimal decision about computing hardware. Therefore, consider this article the exploration of one man’s decision-making process with project goals and limited funds in mind.

    The hardware “stack”

    There are simply too many options and configurations to provide a full written guide in any one place that can stand the test of time. Therefore, this article will be a straightforward listing of the hardware I am working with in this project. It is not intended to be a guide for what you *should* purchase or use in your own design.

    Hardware on the Network

    • Ubiquiti Dream Machine Special Edition. This single machine runs a variety of Ubiquiti switches and access points and can be connected to two separate WANs at once. (We will have fiber to the home soon, but I work remotely and the kids are homeschooled, so having both a cable and a fiber connection for redundant connectivity is becoming critical to daily life.) Having Ubiquiti networking gear simplified my ability to manage multiple VLANs, mesh networking, and connectivity to IoT devices around the home. Expect to pay as much for this type of gear as for a MacBook Pro 15″ or a high-end gaming desktop, but it is awfully nice to not have to troubleshoot WiFi problems every other month.
    • HP G600 with 1.2TB of disk space. This machine is primarily for core services, such as running an LXC with PiHole as my DNS filtering mechanism. (It helps cut down on the cybersecurity attack surface of my family’s devices, and allows some amount of control over general Internet browsing by completely dropping DNS queries for domains we know we wouldn’t want our kids exposed to on the Internet.)
    • Numerous laptops, gaming platforms, smartphones, and tablets connect to the primary WiFi network, with other IoT devices connecting to a separate VLAN I set up which can talk to the Internet but cannot access any of my general-purpose computing machines on the main WiFi network. Again, this design limits our exposure to cybersecurity threats via poorly designed or maintained IoT devices from manufacturers whose products primarily serve other purposes but include some sort of Internet-connected digital controls. For instance, my new furnace and A/C system is designed for Internet connectivity as an IoT device – you know, for saving the planet with eco-friendly stuff and such.

    Hardware Design of the New Server

    First, a quick way to display everything on my system directly from the command line:

    [nix-shell:~]$ hwinfo --short
    cpu:                                                            
                           AMD Ryzen 5 3600 6-Core Processor, 2200 MHz
                           AMD Ryzen 5 3600 6-Core Processor, 2800 MHz
                           AMD Ryzen 5 3600 6-Core Processor, 2800 MHz
                           AMD Ryzen 5 3600 6-Core Processor, 3410 MHz
                           AMD Ryzen 5 3600 6-Core Processor, 4203 MHz
                           AMD Ryzen 5 3600 6-Core Processor, 2200 MHz
                           AMD Ryzen 5 3600 6-Core Processor, 2200 MHz
                           AMD Ryzen 5 3600 6-Core Processor, 2200 MHz
                           AMD Ryzen 5 3600 6-Core Processor, 2200 MHz
                           AMD Ryzen 5 3600 6-Core Processor, 3715 MHz
                           AMD Ryzen 5 3600 6-Core Processor, 4200 MHz
                           AMD Ryzen 5 3600 6-Core Processor, 2800 MHz
    keyboard:
      /dev/input/event1    Logitech USB Keyboard
    mouse:
      /dev/input/mice      Logitech USB Optical Mouse
    monitor:
                           GTW KX2153
    graphics card:
                           nVidia VGA compatible controller
    sound:
                           nVidia Multimedia controller
                           AMD Multimedia controller
    storage:
                           Silicon Motion Mass storage controller
                           AMD Mass storage controller
    network:
      enp34s0              Ethernet controller
    network interface:
      tailscale0           Network Interface
      enp34s0              Ethernet network interface
      lo                   Loopback network interface
    disk:
      /dev/nvme0n1         Silicon Motion Inland TN320 NVMe SSD
      /dev/sdd             WDC WD10EZEX-60W
      /dev/sdb             ST8000DM004-2U91
      /dev/sdc             ST8000DM004-2U91
      /dev/sda             WDC WD10EZEX-60W
    partition:
      /dev/nvme0n1p1       Partition
      /dev/nvme0n1p2       Partition
      /dev/sdd1            Partition
      /dev/sdb1            Partition
      /dev/sdc1            Partition
      /dev/sda1            Partition
    usb controller:
                           AMD USB Controller
                           nVidia USB Controller
                           AMD USB Controller
    bios:
                           BIOS
    bridge:
                           AMD PCI bridge
                           AMD Host bridge
                           AMD Host bridge
                           AMD PCI bridge
                           AMD PCI bridge
                           AMD Host bridge
                           AMD Host bridge
                           AMD Host bridge
                           AMD ISA bridge
                           AMD Host bridge
                           AMD Host bridge
                           AMD Host bridge
                           AMD PCI bridge
                           AMD Host bridge
                           AMD Host bridge
                           AMD Host bridge
                           AMD PCI bridge
                           AMD PCI bridge
                           AMD Host bridge
                           AMD PCI bridge
                           AMD Host bridge
                           AMD Host bridge
                           AMD Host bridge
                           AMD Host bridge
    hub:
                           Linux 6.6.90 xhci-hcd xHCI Host Controller
                           Linux 6.6.90 xhci-hcd xHCI Host Controller
                           Linux 6.6.90 xhci-hcd xHCI Host Controller
                           Linux 6.6.90 xhci-hcd xHCI Host Controller
                           Linux 6.6.90 xhci-hcd xHCI Host Controller
                           Linux 6.6.90 xhci-hcd xHCI Host Controller
    memory:
                           Main Memory
    unknown:
                           FPU
                           DMA controller
                           PIC
                           Keyboard controller
                           unknown unknown
                           AMD Generic system peripheral
                           unknown unknown
                           AMD Encryption controller
                           nVidia Serial bus controller
                           AMD SMBus
                           Serial controller
      /dev/input/event2    Logitech USB Keyboard
    

    However, this doesn’t tell you much about how to look up specific information for the model of motherboard, CPU, drives, etc. You might also have some questions about the CPU report, because it says it is an “AMD Ryzen 5 3600 6-Core Processor” and yet 12 entries with their current clock speeds are displayed. My particular CPU really does have just 6 physical cores, but each core supports simultaneous multithreading (SMT), presenting two logical processors per core, hence the 12 entries shown.
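    If you want to confirm that core/thread topology yourself, lscpu (part of util-linux, present on a standard NixOS install) summarizes it:

    lscpu | grep -E 'Model name|Socket|Core|Thread'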

    Next, you might notice that I have 5 separate disks: one NVMe drive on which I have already installed NixOS, and four additional drives. To simplify things, I have listed more detailed information about my hardware below.

    Motherboard: MSI A520M-A PRO with dual-channel DDR4 RAM slots, an AMD AM4 CPU socket, 1x PCIe 4.0 slot (for the GPU), 1x PCIe 3.0 slot, a single NVMe M.2 slot, 4 SATA connectors, and 1Gb Ethernet.
    Memory: 32GB at 3600MHz
    CPU: AMD Ryzen 5 3600, 6 cores with SMT (12 logical CPUs total)
    GPU: ASUS NVIDIA GeForce GTX 1650 Super
    NVMe Drive: Kingston 256GB M.2
    HDDs: 2x Western Digital 1TB, 2x Seagate IronWolf 8TB
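    If you want to pull these details straight from the command line rather than from receipts and box labels, dmidecode and smartctl are the usual tools; a sketch using ad-hoc nix-shells (package names as found in nixpkgs; adjust the device path for the drive you want to inspect):

    sudo nix-shell -p dmidecode --run "dmidecode -t baseboard"     # motherboard vendor and model
    sudo nix-shell -p smartmontools --run "smartctl -i /dev/sda"   # drive model, serial, capacity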

    Remember, these components are half leftover parts from prior machines and half new parts meant to provide more storage and capability for my server. A decision about which hardware to use may often come down to total cost and total availability of parts; in my case, availability trumped cost, because using parts I already owned let me spend as little as possible to get a running server.

    I began by installing NixOS (Gnome desktop version) directly to my NVMe drive. Prior experience has taught me that (1) I often make mistakes, not being a full-time sysadmin, and (2) sometimes the easiest way to correct mistakes is to search the internet and apply solutions from forums and help files online. It’s a big pain to drag around a laptop to look up information online to solve problems, so I wanted an attached display and windowed desktop environment on my server for when troubleshooting needs to progress to on-server work (rather than SSH’ing into the server from an external laptop or desktop).

    In the next article I will detail how I built out the storage of my new server after installing NixOS, along with explanations behind each of my decisions.

  • Home Lab Server – Rebuild

    Abstract

    In this article I will outline the history of my previous home lab server designs and purposes, why I decided to rebuild my home lab server, and why a small business or home technologist might also want to pursue a home lab server build and setup. I discuss a few of the many, MANY options available to us today, but will focus on the primary goal of retaining sovereignty over my own data and files.

    History

    As my family has grown, both in the number of children and in the scope of data captured and maintained – children’s homeschool activities, the various organizations and clubs we are involved in, and media in the form of photos and videos – the need for more control and ownership over this data and media has grown as well. For some time I relied on SmugMug.com for photo and video sharing – because of its licensing terms, which give users complete retention of rights over their own media – and on various shared services for data and file management such as Microsoft OneDrive, Google services, Fastmail (for primary email services), and plenty of other online services. Then the COVID-19 situation happened. It became abundantly clear that (1) the vast majority of the populace worldwide was willing to abdicate personal responsibility and accountability to authority figures for any reason at all, whether real or invented, and (2) when that authority figure controls some aspect of your livelihood, personal information, or financial independence from “systems”, that authority figure retains great amounts of control over you.

    I had been using and playing with Linux servers for over twenty-five years, from the early days of the “new” Linux distributions of RedHat, Slackware, Mandrake, and SUSE; however, I had not seriously built and maintained a server for the purposes of running our household, a small business, or non-profit organizations in any meaningful way. COVID-19 made me rethink my stance on everything about how our life was organized in this digital age. At first I re-purposed a small Network Attached Storage (NAS) setup on an aging Drobo 5N which had been used primarily for computer backup so that it could serve audio, video and photo files to all of the family devices. The Drobo quickly began to deteriorate in capability as I outstripped its ability to keep up with the growing software services I was installing on it.

    The next step was to build two small servers using purchased and spare parts from used computers. I first built a small Raspberry Pi 4 system with an external 2TB LaCie drive (chosen for its speed and ruggedness) to serve as a Bitcoin node for further monetary self-custody and research. (This is an entirely different topic, but an important factor in later server builds.) Within a year I built a second machine running Proxmox on a refurbished HP G600 – a $140 special from MicroCenter – using additional drives harvested from the no-longer-supported Drobo 5N to provide both NAS and media services to all of the home devices. This setup worked well until it, too, became too limiting for a variety of purposes, including having a stable system that wouldn’t require a day’s worth of work to bring back online after a power outage. (After a few years of living in a neighborhood new to us, I found out that its electrical grid infrastructure was reaching end-of-life and was experiencing annual and even semi-annual outages – not good for computer equipment!) Most recently I built an even heftier server using Ubuntu 24.04 with a more recent AMD Ryzen 5 CPU and an NVIDIA 1650 Super GPU so that media could be served more reliably to the children’s devices (mostly for audiobooks and seasonal music listening – we watch very little TV). For several years these components performed well enough, but they still did not solve a variety of automation needs that I increasingly found necessary as my job and family duties required more of my time. These limitations and various problems led me to redesign my entire system and processes with new goals in mind.

    Goals of a Server Rebuild

    In order to focus the server rebuild on the most important tasks and to not get sidetracked on the many, MANY options available, the following goals will guide this project. As I complete individual tasks and goals I will post additional articles about the process.

    Primary Goals:

    • Increase Storage Space. My server needs to be capable of storing up to 8TB of data and media. This amount of storage will provide at least 2TB of Bitcoin node space (the full blockchain is several hundred gigabytes and always growing as of 2025, so 1.5–2TB leaves comfortable headroom); 4TB of media space for expansion from the current ~500GB used for photos, audio, and video (with video taking up the most space); and an additional 2TB of space for individual users and future expansion (self-hosted AI LLMs perhaps?).
    • Tailscale Integration. I run multiple services on my home network, and Tailscale greatly simplifies and alleviates my cybersecurity risks on the Internet, so it has become a critical component in my servers, laptops, and other devices throughout the home.
    • Home Assistant Capable. As additional equipment in the home becomes Internet-of-Things capable, I have found that most vendors’ smartphone apps, WiFi connectivity, and other integrations are built to barely work and often don’t satisfy half the needs of the user. So I need a Home Assistant control center which can simplify all of that nonsense for the less technical family members, and which simplifies my equipment monitoring and home maintenance routines.
    • Separate Server Maintenance from Data and Media. The server should be built so that changes to the software running the server (its operating system and supporting third-party programs) should be independent of the data such that any breakages of the server can be repaired quickly without impairing or impacting any of the data and media files.
    • Resilient Operating System. A server’s OS must be re-deployable on demand and capable of being restored to a prior snapshot in time within four hours. The somewhat arbitrary “4 hours” timeline is based on my need to have access to all family files and media within half a work day because of the many activities that occur both during the average 8-hour workday, as well as during evenings and weekends where such data might be critical to supporting non-computing activities such as a “home lab” for preparing teaching materials, planning children’s events, family travel needs, or financial decision-making.

    Secondary Goals:

    If some of these goals are met in the process of achieving the primary goals listed above, then it’s bonus points for me, but failing these goals is not a reason for making different decisions about the design.

    • Eliminate Cloud Vendor Lock-In. I don’t want to pay for cloud services, I don’t want 3rd parties owning my data, and I want to be in control of what my children see and hear on their digital devices. You wouldn’t let your children play with vipers and wolves, would you?
    • Reduce Screen Time. While counter-intuitive, the less time that my family and I can spend in front of screens because modern communication and information necessities are automated away from our everyday activities, the better. Playing music via an iPad should “Just Work” without requiring a series of hoops to jump through to filter out NSFW-level content. Home videos created by family members (and kids!) should be instantly available via Jellyfin and not stuck on YouTube with all of its advertising and subscription nonsense. Family financial information and monthly budget numbers can exist independent of Quicken and other online products; and for the more independently minded, fiat-money free. (We can talk about Bitcoin in a future post.)
    • Replace Existing Subscriptions. I have a variety of online services which are currently paid-for subscriptions. Usually these services are in place because of convenience and lack of effort on my part to build something better which costs far less in the long run. It would be nice to save some money by eliminating one or more of these services.
    • Create New Family Privacy and Safety Standards. While I don’t expect to invent all-new ways to protect families and their children from the many ills of the online digital world, it would be valuable to find new ways to do exactly that: protect others.
    • Grow Nandgate.Consulting LLC as an Advisory Business. A grand goal, but it is only secondary to the primary purposes of this project.

  • A Simple Method for Merging Multiple .vcf Files into a Single .vcf File

    Nearly all email providers include some method for managing contact information, including the ability to import and export contacts in the vCard (.vcf file extension) or CardDAV format. If you are required to import contacts to a new email provider using a single .vcf file, but your export produced a separate .vcf file for each contact, then the simplest method for combining many .vcf files into a single .vcf file is as follows:

    On Linux:

    First, download your individual contact files in vCard (.vcf) file format to your home directory. If the download was a zip file, extract the contents to a new temporary directory; in the example command line code below, we have placed them in a directory named ‘contacts’, and this folder exists in the home directory of the user.

    Next, we will run a command in a terminal window. The ‘cat’ command concatenates files together. The ‘>’ symbol is shell output redirection: everything printed by the command on its left-hand side is written into the file named on its right-hand side. In this example, we are concatenating all files ending in ‘.vcf’ (hence the ‘*’ wildcard) within our ‘contacts’ directory into a new file we decided to name ‘all_contacts.vcf’:

    cat ~/contacts/*.vcf > all_contacts.vcf

    Now we can use the new ‘all_contacts.vcf’ file to upload existing contacts to the new email provider or software tool which expects all contacts to be contained in a single file.
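    As an optional sanity check, you can count how many contacts made it into the combined file, since each vCard record begins with a BEGIN:VCARD line:

    grep -c "BEGIN:VCARD" all_contacts.vcf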

    On MacOS:

    Follow the same steps as “On Linux” above. The MacOS Terminal tool works largely the same as a Linux terminal, and shares many of the same command line tools as Linux.

    On Windows:

    First, download your individual contact files in vCard (.vcf) file format to your home directory. If the download was a zip file, extract the contents to a new temporary directory; in the example command line code below, we have placed them in a directory named ‘contacts’, and this folder exists in the home directory of the user.

    Next, we will run a command in a terminal (Command Prompt) window. The ‘copy’ command, given multiple source files, concatenates them together. In this example, we are concatenating all files ending in ‘.vcf’ (hence the ‘*’ wildcard) within the ‘contacts’ folder under C:\Users into a new file we decided to name ‘all_contacts.vcf’:

    copy C:\Users\<your-username>\Downloads\contacts\*.vcf all_contacts.vcf

    NOTE: Be sure to replace the <your-username> part of the command above with your actual username in Windows. Also, copy concatenates in ASCII mode by default, which can append a stray end-of-file (Ctrl+Z) character; if your import tool rejects the combined file, re-run the command with the /b flag (copy /b …) to concatenate in binary mode instead.

    Now we can use the new ‘all_contacts.vcf’ file to upload existing contacts to the new email provider or software tool which expects all contacts to be contained in a single file.
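    A similar sanity check works on Windows using the built-in find command, which counts the lines containing BEGIN:VCARD:

    find /c "BEGIN:VCARD" all_contacts.vcf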


    Remember, your computer system is a powerful tool that likely doesn’t require more software for you to use it effectively. You just need to learn more about how to use it to accomplish computational tasks more efficiently!

  • Communications Privacy


    As the governments and corporations of the world continue to be pressed to implement further regulations, rules, and restrictions to protect people from one another, we will continue to experience more unintended consequences that hurt our personal liberty and freedom. Where freedom and personal liberty must be restricted – prohibiting wanton acts of aggression and murder, preventing theft of goods and property, and restricting permanent harm to our neighbor – such restrictions are necessary. However, in far too many cases governments and corporations have grown so large that there may be no counter-balance to their overreach. Therefore, it is important to have a plan both for an organization’s protection of individuals’ information and for personal privacy from the organization itself.

    In this article I will primarily focus on a few IT software tools that can be used to improve the security of personal communications between individuals as well as between organizations.

    Encrypted Email Providers

    There are a variety of email service providers today that give the end user some level of encryption from the provider itself. I will not cover the features of each service here, but you have probably heard of several of them before: Proton Mail, Tutanota, Skiff, Hushmail, and the list grows each year. Why are such services growing in popularity and market share among individuals and businesses alike? The answer is quite simple, but perhaps not obvious to the average software user: the average individual has reason to be concerned about bad actors within the very organizations they have placed some level of trust in for a variety of technology products and services.

    Encrypted email providers usually encrypt the body of the email at rest as well as in transit. This means that although the provider runs all of the technology infrastructure for sending, storing, and delivering messages to and from the user’s account, it cannot access any of the content of the user’s account. This differs from Gmail, Yahoo! Mail, and other “free” services where the user IS the product. Many people have become aware of how dangerous this can be, especially where children or other vulnerable people can quickly become victims of internal bad actors.

    So who are the internal bad actors at email providers? In nearly all cases they are not the employees or third-party providers with whom the business contracts. They are the external hackers – some of whom may be foreign state actors or corporate spies – who gain access to internal platforms through some software vulnerability and then exploit them for personal or organizational gain at the expense of the end user. Sometimes the bad actor is a corporate policy or attempt to “monetize” the user that fails to protect users from wrongdoing, leaking information publicly through such policies and money-making schemes.

    Therefore, when considering how best to protect your personal, organizational, and end-user email communications, it is wise to consider alternatives to whatever free or cheap email provider is at hand. Protecting information is not just the cybersecurity expert’s job; everyone is responsible for cybersecurity, and using a provider that encrypts the data even from itself makes that work easier for employees, end users, and clients, and helps everyone improve personal privacy.

    Encrypted Chat & Text Messaging

    One of the most ubiquitous uses of the Internet today is the ability to send short, asynchronous text-based messages to one another from anywhere in the world, and to receive, respond, and engage in a text-based “chat” from any smartphone, laptop, tablet, or other personal computer. While this can be very valuable for many day-to-day activities, there are times when this ubiquitous, always-on nature of communications can compromise individuals’ ability to speak freely and protest unjust actions by others.

    The Arab Spring, the U.S. January 6th Capitol riot, and the 2014 Hong Kong protests are just a few of the more prominent situations showing that governments, whether for good or for ill depending on one’s political persuasion, have used various surveillance tools to identify participants in such activities. Often this has been done through interception of text messages during the events, triangulation of users’ locations, and then reconstruction of a user’s involvement in planning or carrying out acts which governments have deemed to be offenses against the state.

    These situations are why many have turned to encrypted messaging applications such as WhatsApp, iMessage (to some degree), or Signal, and I believe it is why demand will continue to grow for larger tools – video conferencing software, multi-mode communication platforms, and cellphone service providers – to encrypt more and more messaging capability by default. Using tools with end-to-end encryption protects each individual’s ability to conduct free trade, movement, and relationship building without fear of reprisal from much larger organizations who might seek to do individuals harm. Yes, this can go both ways, and such tools can be used for evil purposes just as much as for good. But consider how often the very large organizations (governments, mega-corporations, and global non-profit organizations) which can easily engage in graft and corrupt practices with impunity are more likely to decry end-to-end encryption than the individuals who very often gain only their own personal autonomy and freedom from oppression when using such encrypted messaging tools.

    As an IT business strategy, then, it would be wiser for the long-term growth and sustainability of the business to promote end-to-end encrypted messaging to protect against overreach and oppression by competitors. And competitors may not be just local or industry-specific, either.

    For instance, if a government wished to, let’s say, force the continuation of a war by increasing the money supply to afford ongoing war, and then suppressed anyone speaking out against the war, the easiest mechanism for suppression would be to scoop up anyone speaking out against it by querying the databases of providers of unencrypted messaging tools. A business which would be hurt by the increasing money supply would therefore be encouraged NOT to speak out against the government’s actions, whether publicly OR privately in internal digital communications, if end-to-end encrypted messaging did not exist. No strategy to combat the government’s overreach could be effectively crafted, and the business might cease to exist if the increased money supply harmed it enough to force it to close its doors. This could certainly be true for small to medium-sized growth businesses relying on a global supply chain of both talent and resources, where quick response to changing governmental conditions in one location might necessitate action by the business globally.

    Money privacy

    Last, but perhaps most importantly for growth businesses in 2023, the IT tools surrounding money management and government fiat might be the last frontier of personal communication privacy left to be tapped for improved profitability and sustainability. While money may not seem an obvious communication technology, let’s explore quickly how it is, and how privacy surrounding money can be so critical to every organization and person engaged in economic transactions today.

    First, if we consider that the prices of goods, products, and services are the primary means by which we communicate the value of a thing, then it logically follows that money is the tool by which prices are communicated. We could attempt to denominate every good, product, and service in every other good, product, and service, but then we would have a many-to-many relationship of values which would be impossible to keep track of across the millions of things for which prices could exist. It is obviously much easier to track how many US dollars, Mexican pesos, or Bitcoin sats a thing is valued at than how many cows, airplanes, or paper cups a used automobile is worth.

    Second, because we can use money to communicate prices efficiently to others, there is a need to secure the structure of that money such that it isn’t in constant flux or impossibly complex to understand – it needs to be a clear communication tool, not a confusing one. This is what inflation does to money: inflation increases the complexity of conveying prices in the marketplace. When the communication of prices increases in complexity, personal freedom and privacy are reduced, because individuals and businesses must increasingly rely on authorities (whether appointed or not) to make sense of the prices of various goods, products, and services.

    For instance, the cost of healthcare throughout Western countries is an enigma to most people in the year 2023. Certainly in the U.S. where we have a complex system of private and public health insurance intermediary payors, large and small healthcare provider systems, and a maze of government regulations it is impossible to fully determine the true cost in goods that it takes to produce a single drug pill, nor how much profit is gained by any one organization by producing that one pill. Reliance on the pricing of healthcare is therefore handed off by the actual consumer of healthcare to some larger organization who can negotiate in the aggregate of prices of providing goods, products, and services across a very large number of healthcare consumers. And so, the consumer hands off personal autonomy (and privacy) of making healthcare decisions to experts and authorities who may not have the same individual preferences and values of such healthcare decisions. If, however, inflation of healthcare costs did not exist because healthcare goods, products, and services prices did not fluctuate constantly, then healthcare consumers would be able to compare prices of these things consistently, and could accurately choose between health outcomes rather than relying on others to make decisions for them based primarily on prices.

    So what about money privacy? How can we move toward better privacy of our money, and why should we? The first example that springs to mind is that large banking and investment institutions such as Chase, Citibank, and Bank of America in the U.S. have already begun surveilling customers and shutting down the accounts of customers who do not fit prescribed norms which protect the banks’ bigger backer – the Fed. Since the Federal Reserve provides the money liquidity that large banks rely on to create new money (for things such as mortgages, loans, and credit issuance), large banks are more reliant on the Fed than on their own customers for the ongoing operational stability of their business. Therefore, banks’ incentives are aligned with appeasing federal government policies (even though the Fed is a quasi-government entity) more than with providing banking services to their customers, notably the smallest customers: individuals and small businesses. Therefore, for better privacy, businesses and individuals should increasingly seek to diversify fiat banking activities across multiple fiat transaction facilities – banks, credit cards, credit unions, and non-financial transactional opportunities such as trade agreements or contractual non-monetary arrangements – and investigate non-fiat alternatives. Bitcoin is really the only option for non-fiat financial transacting as a form of money, but that is a topic for another day.

    One last note on money privacy: Bitcoin does not offer privacy per se, but it does offer freedom from the pricing non-transparency of the inflation-driven fiat currencies of current money systems. There are new opportunities to grow a business and a personal money-flow system throughout the Bitcoin ecosystem, notably Podcasting 2.0, but learning how to do so requires far more reading, instruction, and understanding than can be accomplished in one article.


    For more information about how Nandgate.Consulting can help you or your business improve on these topics in your own work and life, please contact us today.