Black-Pixel.Net


Project Home Server Part 2: Initial ZFS setup for Nextcloud and Virtiofs for SMB

In Part 1 of this series, I covered the hardware selection and initial setup of my Proxmox home server, including the choice of a Ryzen 5 PRO 5650G with ECC RAM to ensure data integrity for my Nextcloud instance. Now that the hardware is ready, it’s time to focus on the storage layer. I decided to run Nextcloud on a dedicated ZFS mirror pool and share a data disk with backups and media via a Samba VM.


Why ZFS?

I chose a ZFS mirror for the following reasons:

  • Data Integrity: ZFS uses checksums to detect and correct silent data corruption, which is critical for storing irreplaceable files like photos and videos.
  • Redundancy: A mirror (RAID 1) setup with two 4TB SSDs provides protection against disk failure.
  • Encryption: Native encryption ensures that my data is secure at rest.

Step 1: Identify Disks by WWN

To ensure persistent device naming, I used the World Wide Name (WWN) of each disk. This avoids issues with device names like /dev/sda or /dev/sdb changing after a reboot.

ls -l /dev/disk/by-id/

Output:

lrwxrwxrwx 1 root root  9 Apr 20 14:17 wwn-0x500a0751e9a33671 -> ../../sdb
lrwxrwxrwx 1 root root  9 Apr 20 14:17 wwn-0x500a0751e9a337e0 -> ../../sda

I chose these two 4TB SSDs (Crucial BX500) for the mirror. Using /dev/disk/by-id/ ensures that ZFS will always reference the correct physical disks, regardless of their /dev/sdX assignment.
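To double-check which physical drive a given WWN belongs to, lsblk can print the WWN next to the model and size in one view (the WWN output column is part of util-linux's lsblk):

```shell
# Map kernel device names to model, size, and WWN in one table
lsblk -o NAME,MODEL,SIZE,WWN
```

This makes it easy to confirm that the two wwn-* links really point at the intended SSDs before handing them to ZFS.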


Step 2: Create an Encrypted ZFS Mirror

Proxmox’s web GUI does not support creating encrypted ZFS pools, so I created the pool on the command line:

zpool create \
  -O encryption=on \
  -O keylocation=prompt \
  -O keyformat=passphrase \
  -O mountpoint=/nextcloud_zfs_mirror \
  -O compression=lz4 \
  nextcloud_zfs_mirror \
  mirror \
    /dev/disk/by-id/wwn-0x500a0751e9a33671 \
    /dev/disk/by-id/wwn-0x500a0751e9a337e0

Explanation of Options:

  • -O encryption=on: Enables native ZFS encryption.
  • -O keylocation=prompt: Asks for the passphrase when the key is loaded (via zfs load-key or zpool import -l).
  • -O keyformat=passphrase: Uses a passphrase instead of a raw key file.
  • -O mountpoint=/nextcloud_zfs_mirror: Sets the mount point for the pool’s root dataset.
  • -O compression=lz4: Enables transparent LZ4 compression to save space.
  • mirror: Arranges the two disks as a RAID 1 mirror vdev.

Note: If you prefer to use a key file instead of a passphrase, replace keylocation=prompt with keylocation=file:///etc/zfs/keys/nextcloud_zfs_mirror.key and create the key file securely.
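For the key-file variant, the file simply contains the passphrase as plain text (since keyformat=passphrase is kept). A minimal sketch, using the example path from the note above and a placeholder passphrase:

```shell
# Key directory readable by root only
mkdir -p /etc/zfs/keys
chmod 700 /etc/zfs/keys

# With keyformat=passphrase, the file content IS the passphrase (example value!)
echo 'change-me-to-a-strong-passphrase' > /etc/zfs/keys/nextcloud_zfs_mirror.key
chmod 400 /etc/zfs/keys/nextcloud_zfs_mirror.key
```

Keep in mind that a key file stored on the boot disk weakens at-rest protection if that disk itself is unencrypted, which is why I stuck with an interactive prompt.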


Step 3: Verify the ZFS Pool

After creating the pool, I verified its status:

zpool status

Output:

  pool: nextcloud_zfs_mirror
 state: ONLINE
config:

        NAME                        STATE     READ WRITE CKSUM
        nextcloud_zfs_mirror        ONLINE       0     0     0
          mirror-0                  ONLINE       0     0     0
            wwn-0x500a0751e9a33671  ONLINE       0     0     0
            wwn-0x500a0751e9a337e0  ONLINE       0     0     0

errors: No known data errors

The pool is ONLINE and both disks are healthy. No errors detected.


Step 4: Integrate the ZFS Pool into Proxmox

To make the ZFS pool available as a storage backend in Proxmox:

pvesm add zfspool nextcloud_zfs_mirror --pool nextcloud_zfs_mirror

This allows you to create VMs or containers that use the ZFS pool for storage. In the Proxmox GUI, the pool will appear as an option when adding a new disk to a VM.


Step 5: Verify Available Space

To check the available space in the pool:

zfs list -o name,used,avail,quota,refquota

Example Output:

NAME                     USED  AVAIL  QUOTA  REFQUOTA
nextcloud_zfs_mirror     128K  3.62T    -        -

This confirms that the pool is ready to use, with ~3.6TB of available space (after accounting for ZFS overhead and mirroring).


Step 6: Schedule Regular Scrubs

To ensure data integrity, I scheduled a monthly scrub using a cron job:

crontab -e

0 3 1 * * /sbin/zpool scrub nextcloud_zfs_mirror
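Alternatively, the same schedule can live in a file under /etc/cron.d, which is easier to track with configuration management (the file name is my choice; note that cron.d entries need an extra user field, unlike a user crontab):

```shell
# System-wide cron entry: scrub on the 1st of every month at 03:00
cat > /etc/cron.d/zfs-scrub-nextcloud <<'EOF'
0 3 1 * * root /sbin/zpool scrub nextcloud_zfs_mirror
EOF
chmod 644 /etc/cron.d/zfs-scrub-nextcloud
```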

Step 7: Decrypt and Mount the Pool After a Reboot

Because the ZFS pool for Nextcloud is encrypted, it has to be unlocked manually after each Proxmox reboot. Two commands are required:

zfs load-key nextcloud_zfs_mirror
zfs mount nextcloud_zfs_mirror
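To avoid typing both commands after every reboot, they can be wrapped in a small helper script (path and name are my choice; it still prompts for the passphrase interactively):

```shell
# Write a small unlock-and-mount helper for use after reboots
cat > /usr/local/sbin/unlock-nextcloud-pool <<'EOF'
#!/bin/sh
set -e
# Prompts for the passphrase (keylocation=prompt)
zfs load-key nextcloud_zfs_mirror
zfs mount nextcloud_zfs_mirror
EOF
chmod 755 /usr/local/sbin/unlock-nextcloud-pool
```

After a reboot, a single `unlock-nextcloud-pool` then brings the pool online.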

Adding External Storage to an SMB VM via Virtiofs

To access files stored on an HDD through an SMB VM, I mounted the HDD on the Proxmox host and passed it into the VM as a Virtiofs share:

  1. Mount the HDD on the Proxmox host:

    mkdir -p /data_disk
    mount /dev/sdX1 /data_disk
    

    (Replace /dev/sdX1 with the actual device name of your external HDD.)

  2. Add the directory as a Virtiofs directory mapping in Proxmox:

    • Navigate to Datacenter → Resource Mappings and add a new Directory Mapping.
    • Point the mapping at the mounted directory (/data_disk) and give it an ID (e.g. data_disk).
  3. Add Virtiofs to the VM:

    • Go to the VM’s Hardware tab.
    • Click Add → Virtiofs and select the previously created directory mapping.
  4. Configure Virtiofs in the VM:

    • No extra package is needed inside the guest: virtiofsd runs on the Proxmox host (it ships with recent Proxmox VE; otherwise install it there with apt install virtiofsd), and any recent Linux kernel already includes the virtiofs driver.
      
    • Mount the Virtiofs share inside the VM:
      mkdir -p /data_disk
      mount -t virtiofs data_disk /data_disk
      

    (Replace data_disk with the tag you assigned in Proxmox.)

This setup allows the VM to access the HDD as if it were a local filesystem. I also had to change the ownership of the files to match my Samba share user.
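To make the Virtiofs mount survive VM reboots, an fstab entry inside the guest should do the trick (again assuming the Proxmox mapping tag is data_disk):

```
# /etc/fstab inside the SMB VM
data_disk  /data_disk  virtiofs  defaults  0  0
```

For the ownership fix mentioned above, something along the lines of `chown -R smbuser:smbuser /data_disk` inside the VM works; `smbuser` here is just a placeholder for whatever account your Samba share runs as.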