Thursday, October 15, 2020

Running Openshift at Home - Part 1/4 Homelab Hardware

Part 1/4 - Homelab Hardware

Part 2/4 - DIY Silent Server Cabinet
Part 3/4 - Installing a Two-Node Proxmox VE Cluster 

Part 4/4 - Deploying Openshift 4 on Proxmox VE

This is a series of posts covering everything from building hardware infrastructure that you can run from the comfort of your home, to running your first application on Openshift 4.5/OKD 4.5.

Objective

About a month ago I decided to build a homelab. All I wanted was to run my own Openshift cluster at home so that I could build and deploy applications to rapidly test ideas, but also to develop my expertise in Openshift. One yak-shave led to another, and I ended up building a homelab and an IKEA server cabinet from scratch.

Previously, I had been running my experiments in the cloud, and it cost me quite a bit, with monthly bills ranging from $80 to $250. That was another motivating factor. So if I could run my experiments on a homelab that costs less than an iPhone 11 Pro, with only an additional $20 a month on my electricity bill, I would call it a success.

Disclaimer: I am not a systems engineer. My day job is Software Architect; I design and build software solutions.

Hardware


Some homelab setups of folks I know cost thousands of dollars, built on Intel NUCs or desktop-grade machines with multi-core AMD Ryzen CPUs. This is beyond the budget my wife agreed to for this project :). So I did some Googling and learned about a Reddit forum where people mostly use old servers that have been discontinued, are no longer supported (EOL) by their manufacturers, and have been disposed of by data centers. Quite a lot of these are listed on eBay. For less than $300, you can get an enterprise-grade server with 8 to 12 physical cores, which is amazing!

There is just one caveat: these servers are not designed to run at home. They are designed to sit in data centers away from people, so noise was not taken into consideration in their design. These servers are very loud. But this did not stop me, as I was already thinking of a solution to the noise problem. In the photo above, the entire homelab infrastructure is enclosed in an IKEA BESTÅ cabinet, which I will talk about in more detail in another post.

Being in Singapore, eBay or Amazon is not really an option. These servers weigh up to 25 kg, so shipping would have cost more than the items themselves. Thankfully, we have Carousell in Singapore. When I searched for "used server" I got a few results, and this is where I bought all my servers.

I used 3 servers for more compute power plus redundancy, but that's not mandatory; it's totally fine to start with one server. The 3 servers and the network switch were sourced from Carousell. A quick search for the "server" keyword on Carousell gives you results like this.

 

So here's what I got.

  • 1x HP Proliant DL360 G6

    • 12 Cores/24 threads

    • 128 GB RAM (originally 64GB)

    • 4x 300GB SAS HDD (RAID 5)

    • $300 from Carousell + $180 on upgrades (3 additional SAS disks + RAID cache module)

  • 1x Dell PowerEdge R710

    • 12 Cores/24 threads

    • 128 GB RAM (originally 64GB)

    • 2x 2TB SATA HDD (RAID 1)

    • $350 from Carousell + FREE Memory sticks from a Friend

  • 1x Dell PowerEdge R520

    • 16 Cores/32 threads

    • 96 GB RAM (originally 48GB)

    • 4x 2TB SAS HDD (RAID 5)

    • $450 from Carousell + FREE Memory sticks from a Friend

  • 1x Cisco SG95-24 Network Switch

    • 24-port unmanaged gigabit switch

    • $40 from Carousell

  • 1x ASUS Gigabit Router

    • I used my old Wifi router as a wired router to isolate the homelab from my home network

  • 4x Raspberry Pi 2B (everything from here down was free; I already had it in my junk stash)

    • I reused my 4 old Raspberry Pis (Model 2B) that were lying around in my stash of cables and electronics. I had used these Pis before when experimenting with Hadoop clusters.

  • 13x Patch cables

    • Cat 6 UTP patch cables, 1.5m

  • 2x 4-socket power strips

  • 1x USB power supply for the Raspberry Pis

       

HP Proliant DL360 G6

Dell PowerEdge R710

Some of the upgrade parts were sourced from China via AliExpress. I bought a RAID cache module for the HP DL360 because the unit I got did not come with one and disk writes were too slow; after adding the cache, disk writes went about 40x faster. I also bought server RAM modules from China via AliExpress, while some memory modules were given to me for free by a friend.

     

Problems

Internet Explorer and Windows

When these servers first arrived, I did not have a monitor with VGA input. These servers are 10+ years old; HDMI was not yet around at the time. But I thought this would not be a problem, because I knew these servers have a special Ethernet port for remote management: HP has iLO, while Dell has iDRAC. These are basically small embedded web servers running independently of the main board, and their purpose is to provide remote access to the server's management tools and screen, even before the server boots up. So you can literally see in a browser what would have been displayed on a monitor attached to the server. The problem is that 10 years ago most enterprises ran Windows everywhere and rarely Linux, so these management console web applications only work in old versions of Internet Explorer, using ActiveX controls to emulate the server's screen output. To make matters worse, I don't have a Windows machine. So what I did was run a Windows XP image on VirtualBox on my Mac and do all the initial setup from that VM.

Firmware

Another problem, particularly with HP, is that firmware updates only come as .EXE files, so you need Windows just to update the firmware. I had to create a bootable Windows 10 USB stick and boot the server from it; from there you can execute the .EXE firmware updates. You can download the Windows 10 ISO for free; you just get an "Activate Windows" watermark on the desktop.

Dell has firmware updates for RHEL, so I just booted from a CentOS Live USB stick, downloaded all the updated firmware from Dell Support, and executed it. There is a step-by-step guide video on YouTube that I followed.

Noise

I knew that these servers were loud; I never thought they would be this loud. Their fans spin at up to 12,000+ RPM, and with 3 of these servers running in my home office, the noise was just unbearable. They sound like hair dryers. I looked for software solutions but did not find any for HP. However, I found one for Dell: Dell allows you to control the fan speeds remotely via IPMI serial over LAN. You just need to install the ipmitool command-line tool and invoke the following commands.

The first command disables automatic fan control and allows you to control the fan speed manually. The second line sets the fan speed via the last hex argument: 0x10 is 16 in decimal, so this sets the fans to a 16% duty cycle (use 0x0a for 10%).

ipmitool -H <IP of iDRAC> -I lanplus -U <USERNAME> -P <PASSWORD> raw 0x30 0x30 0x01 0x00
ipmitool -H <IP of iDRAC> -I lanplus -U <USERNAME> -P <PASSWORD> raw 0x30 0x30 0x02 0xff 0x10

Be careful when doing this: you need to watch the temperature readings of your server and adjust the fan speed accordingly. You don't want your server to burn. I found this in a Reddit post.
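Since I will eventually want to automate this, here is a rough sketch of a watchdog script that reads the inlet temperature via ipmitool and picks a fan duty cycle. The iDRAC address, credentials, sensor name, and temperature thresholds below are placeholders I made up, not values from my actual setup, and the script falls back to automatic fan control whenever the server runs hot.

```shell
#!/bin/sh
# Sketch of a fan-speed watchdog for the Dell servers. The iDRAC address,
# credentials, sensor name, and temperature thresholds are placeholders.

IDRAC_IP=192.168.2.10          # placeholder: iDRAC address
IDRAC_USER=root                # placeholder
IDRAC_PASS=calvin              # Dell's factory default; change it!

IPMI="ipmitool -H $IDRAC_IP -I lanplus -U $IDRAC_USER -P $IDRAC_PASS"

# Map a temperature in degrees C to a fan duty cycle in percent,
# or "auto" to hand control back to the iDRAC when things get hot.
duty_for_temp() {
    t=$1
    if   [ "$t" -lt 40 ]; then echo 10
    elif [ "$t" -lt 50 ]; then echo 20
    elif [ "$t" -lt 60 ]; then echo 35
    else echo auto
    fi
}

# Convert a percentage to the hex byte ipmitool expects (e.g. 16 -> 0x10).
pct_to_hex() {
    printf '0x%02x' "$1"
}

if command -v ipmitool >/dev/null 2>&1; then
    # Read the inlet temperature sensor; keep only the numeric reading.
    temp=$($IPMI sdr type temperature | awk -F'|' '/Inlet/ {print $5+0; exit}')
    temp=${temp:-100}                      # if the read fails, fail safe
    duty=$(duty_for_temp "$temp")
    if [ "$duty" = auto ]; then
        $IPMI raw 0x30 0x30 0x01 0x01      # re-enable automatic fan control
    else
        $IPMI raw 0x30 0x30 0x01 0x00      # manual fan control
        $IPMI raw 0x30 0x30 0x02 0xff "$(pct_to_hex "$duty")"
    fi
fi
```

Something like this could run from cron every minute or so; if the inlet temperature climbs past the top threshold, control is handed back to the iDRAC's automatic fan curve.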

However, for the HP server there seems to be no way to control the fan speeds manually short of hacking the fans themselves. I saw someone online place an Arduino board inline to reduce the fan speed, but I decided not to go there. The only way to quiet down the HP server is to keep it cool, so I will deal with this physically by building a silent server cabinet, which I will talk about in Part 2. Or I will probably let go of the HP server and replace it with another Dell.

Hardware Topology

Here is how the hardware is connected together. There are 2 subnets: one with internet access, connected via the router, and another for the Proxmox VE cluster network.
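To make the two-subnet layout concrete, here is a rough sketch of what it could look like in /etc/network/interfaces on one of the Proxmox VE nodes. The addresses, subnets, and NIC names are placeholders for illustration, not my actual configuration: one bridge faces the routed subnet with internet access, and a second NIC carries the isolated cluster network.

```
# /etc/network/interfaces (sketch; addresses and NIC names are placeholders)
auto lo
iface lo inet loopback

# Bridge on the routed subnet (internet access via the ASUS router)
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.11/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

# Second NIC on the isolated Proxmox VE cluster subnet (no gateway)
auto eno2
iface eno2 inet static
    address 10.10.10.11/24
```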

Up next, I will probably use another Raspberry Pi as an LDAP server and another to automate the Dell fan control. Stay tuned for Part 2, the silent server cabinet build notes.

Part 2/4 - DIY Silent Server Cabinet
Part 3/4 - Installing a Two-Node Proxmox VE Cluster

Part 4/4 - Deploying Openshift 4 on Proxmox VE
