ByteGuardian
A Cyber Security Blog

Navigating IT and Cyber Security – Exploring the Digital Frontier Together!

#004 – My Homelab Setup – Hardware & Hypervisor

Goal

My plans for my homelab are as follows:

  • A NAS for important digital documents (ideally with redundancy, so if a drive fails the data isn't lost)
  • A NAS for media (redundancy is not important in this case, as the media is available via hard copy)
  • Media server apps (i.e. Plex)
  • An NVR for some CCTV cameras (this is more of a long-term goal further down the line)

With all these goals in mind, I realised I didn't need the most powerful machine. However, I plan on sharing the media with some friends and family, so a cheap GPU may help with hardware transcoding. I want to do this on the cheap, as I don't really know what I'm doing, but also I don't have endless amounts of cash to throw at this little project. So ideally an old decommissioned enterprise workstation or server off Gumtree or Facebook Marketplace.

Note: I also plan to deploy the server at my family’s house as they have a better internet connection which would make the media more accessible for all. This also means that I need a secure way of accessing the server remotely, ideally a VPN tunnel.
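To give an idea of what that tunnel could look like, here's a minimal WireGuard config sketch. Everything in it is a placeholder assumption on my part (keys, the 10.0.10.0/24 subnet, UDP port 51820), not my actual setup, and it assumes the family's router can forward one UDP port to the server:

```ini
# /etc/wireguard/wg0.conf on the homelab server (placeholder keys and addresses)
[Interface]
Address = 10.0.10.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

# One [Peer] section per remote device allowed to connect
[Peer]
PublicKey = <my-laptop-public-key>
AllowedIPs = 10.0.10.2/32
```

With something like this, bringing the tunnel up with `wg-quick up wg0` on both ends means only the VPN port is ever exposed to the internet, rather than the media server itself.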

The Hardware

Initially, I wanted to get an old Dell OptiPlex workstation, but I ended up finding a decommissioned HPE ProLiant ML30 Gen9 for a good bargain on Facebook Marketplace. I thought this was the perfect solution, as it already had hardware RAID installed and an 8×2.5-inch hot-swappable drive bay. In my head, all I needed to do was buy some hard drives, upgrade the 8 GB of DDR4 memory, and add a cheap GPU that could handle a few transcodes at a time (and even that felt like overkill). I didn't know how wrong I was.

Specs:

  • CPU: Intel Xeon E3-1230 v5
  • RAM: 8 GB @ 2133 MHz – Regular non-ECC DDR4 (it baffled me that this machine had hot-swappable drives and a small UPS built-in but no ECC memory)
  • GPU: Intel onboard Graphics
  • Storage: None
  • Additional: Dual-port NIC card, HPE hardware RAID

Problem

For a GPU, I picked up an Nvidia Quadro M4000. Very overkill for my use case, but I got it very cheap, so what the hell. The PSU that came with the ProLiant didn't have a PCIe connector to power a GPU, so I had to replace it. Note: I thought I needed a PSU with two EPS connectors, one to power the CPU and one to power the 8×2.5-inch hot-swappable drive bay. I went to my local PC parts shop and got my hands on the cheapest power supply that ticked all the boxes.

This is where I ran into my first major issue: yes, the CPU was powered by an EPS cable, but while the drive bay did have a female EPS connector, it was not pinned like one. Naturally, I only figured this out after I had installed the new power supply, attempted to power on the machine, and was met with no POST and LEDs signalling a boot error. A simple Google search later led me to this Reddit post, where in the comments Reddit user "Realistic_Wasabi2024" even managed to DIY their own connector using a spare SATA power connector from the PSU. I didn't have the skills to do this, so I paid my way out of the issue by ripping out the hardware RAID and drive bay and replacing them with a Silverstone ECS06 (a 6-port SATA Gen3 (6 Gbps) non-RAID PCI Express Gen3 x2 card). I understand that in doing this I would lose the hardware RAID, but in the little bit of mucking around I did with various OSes, I was running the controller in HBA mode and using ZFS software RAID within Linux anyway, more on why later.
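For anyone curious what that ZFS software RAID looks like in practice, here's a rough sketch. The pool names and device paths are made-up examples (in reality you'd want to reference drives by /dev/disk/by-id so the pool survives devices being renumbered), and the `-n` flag makes these dry runs that only print the would-be layout:

```shell
# Two-disk mirror for the documents NAS: either drive can fail without data loss.
# -n is a dry run: it shows the resulting pool layout without touching the disks.
zpool create -n docs mirror /dev/sda /dev/sdb

# Media pool with no redundancy (a plain stripe), since the media exists on hard copy.
zpool create -n media /dev/sdc /dev/sdd

# After creating pools for real, check their health with:
zpool status
```

This is exactly the kind of setup that wants the controller in HBA mode: ZFS needs direct access to the raw disks, and layering it on top of hardware RAID hides drive errors from it.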

Proxmox

My initial plan was to use Proxmox as a hypervisor and then deploy everything else I needed in various LXCs. I was (and still am) a complete newb to Proxmox, but I was keen to learn and there is no time pressure with this project. After a few days of playing around with Proxmox, I ran into a roadblock, which came in the form of struggling with user and group permissions for unprivileged LXCs: I couldn't give LXCs write permission to any bind-mounted directories. In non-nerd terms, I couldn't get any of my apps to interact with directories outside of their LXC. The solution was here, and I'm sure it wasn't a bug (I think), it was more of a me-being-a-newb kind of problem. The only way I could get it to work was to switch the LXC to a privileged container, which I didn't want to do.
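For context on why this bites so many newcomers: unprivileged LXCs shift user IDs, so UID 1000 inside the container shows up on the host as UID 101000 (by default), and a bind-mounted host directory owned by a normal host user isn't writable by that high-mapped ID. A common workaround, sketched below with a placeholder container ID (101) and path rather than my actual config, is to chown the host directory to the mapped IDs:

```
# On the Proxmox host: container UID/GID 1000 maps to host 101000 by default,
# so hand the bind-mounted directory to the mapped user.
chown -R 101000:101000 /tank/media

# /etc/pve/lxc/101.conf: bind-mount the host path into the container.
mp0: /tank/media,mp=/mnt/media
```

This keeps the container unprivileged; the alternative of a privileged container makes the permissions "just work", but at the cost of the container's root effectively being root on the host, which is exactly what I wanted to avoid.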

After doing some more research, I started thinking that Proxmox wasn't the best solution for me. I'm NOT saying Proxmox is a bad solution; I just don't need a lot of the advanced virtualisation features that come with it, and for my needs I could easily get away with using something like TrueNAS Scale as a hypervisor instead. So I started doing exactly that. That said, this post is getting plenty long enough, so I'll leave the configuration to the next post.
