Saturday, July 15, 2023

3D-Printed Custom Guitar Pedal Board


Custom Pedalboard Design on Fusion 360

While finding my way back to playing electric guitar, I found myself buying a bunch of guitar pedals. After I had collected a few, they started making a mess on my desktop, and it was time to get a pedalboard. The plan was to make a pedalboard as compact as possible so that I could use it on the floor, but one that also fits and looks nice on top of my desk. Here's what it looked like on my desk without a pedalboard. Look how messy the cables are.


I also bought two more pedals to complete the usual effects that most guitarists need, and there was obviously no more space for those two on my desktop.

Browsing the internet, I couldn't find the pedalboard I was looking for. They were either too big or too small, and usually not the right shape. So I decided to design my own.

I spent a night designing the pedalboard. It was designed to exactly fit the pedals I have so that no space is wasted. The design had to consider the spacing between pedals to make room for the audio cables, TRS plugs, and power supply plugs. I created four different pedal layouts before I settled on the final one.

The first thing I did was model the boxes that serve as placeholders for each pedal, arrange the pedals in my desired layout, and then design the pedalboard around them.



I then set the opacity of these pedal placeholders to 30% and designed the pedalboard around them.


I then hid the placeholders and divided the object into smaller pieces that could fit on my 3D printer bed. Et voilà!


The 3D printing took almost three days, printing around 20 hours a day. Here are some photos from the printing process.





The pedals, as usual, are attached using Velcro tape. The cabling was done using solderless guitar patch cables to keep it clean looking. You can get them from Amazon, link below.

Solderless Guitar Patch Cables Set


The whole board is powered through USB using a USB to 9V buck converter so it can be powered by a regular power bank.

USB Buck Converter from Amazon



Here is the final outcome of the project with the pedals attached and the cabling done.




Look how it fits nicely on my desk now!


One thing I forgot to add to the design was a handle. Right now it's not easy to pick up the board from the floor without wiggling your fingers under it.

If you have the same set of pedals, you can 3D print this pedalboard as well. I published the STL files on Thingiverse under a Creative Commons license. Because the design is split into multiple parts, you can change different parts of the pedalboard without having to reprint the entire thing.

Here are the pedals I used; feel free to build your own.

I don't usually use guitar amps. You can plug the Mooer Preamp Live into the return input of any amp and turn off the cab simulator. In this setup, you have all the controls on the Preamp Live and you are only using the power amp section of your amp. This ensures your pedal setup will sound more or less the same regardless of which amp you use or which studio you are playing in.

Or you can plug the XLR output of the Mooer Preamp Live directly into a mixer or an audio interface.

Happy 3D printing!

Monday, April 3, 2023

Installing OpenShift on Any Infrastructure





As of this writing, Assisted Installer is the easiest way to install an OpenShift cluster on a custom/bespoke infrastructure. You do not need to manually deal with ignition files or manually configure the OpenShift installer. You do not even need to manually set up a bootstrap node.

In this post, I will walk you through how to create an OpenShift cluster using Assisted Installer on a bespoke infrastructure. To host VMs, I am using Proxmox VE, a lightweight open-source virtualization product. However, similar steps should also apply to other hypervisor products. If you want to know how to install Proxmox, see my previous post at https://blog.rossbrigoli.com/2020/10/running-openshift-at-home-part-34.html

First, we need to set up the infrastructure. We need to create VMs and then perform the following steps to configure the infrastructure.

  • Create and configure a DNS server
  • Configure the DHCP server of our router
  • Create and configure an HAProxy load balancer

We will run the supporting services in a dedicated VM called openshift-services.

The first step is to create a VM that will host all of the services required by the cluster.


Creating the Virtual Machines


1. Download the CentOS Stream image from https://www.centos.org/centos-stream/ and upload it to Proxmox. You may also let Proxmox download the image directly from a URL.

Download the correct architecture that matches your VM host. In my case, my VMs have x86_64 CPUs.

Upload the image to Proxmox's local storage as shown below.




2. Create a VM in Proxmox and name it "openshift-services".


3. Use the CentOS Stream ISO file as the DVD image to boot from.



4. Choose the disk size. 100GB should be enough, but you can make it bigger as well. You don't need a lot of resources for this VM, as it will only be doing DNS name resolution and load balancing, so 2 CPUs (or fewer) and 4GB of RAM will do.


5. Review the configuration and create the VM.




6. Start the VM and follow the CentOS Stream installer instructions. Then wait for the installation to complete.



7. While waiting for the installation to finish, create the cluster from the Red Hat Hybrid Cloud Console. Navigate to https://console.redhat.com > OpenShift, then click "Create Cluster".

If you don't have a Red Hat account yet, feel free to register for one; it's free.



8. Because we want to install OpenShift on our bespoke infrastructure, we need to choose the "Run it yourself" option and then "Platform agnostic".


9. You will be presented with 3 installation options. Let's use the recommended option, Interactive, which is the easiest one. 




10. Fill in the details according to the domain and cluster name you choose.

Take note that the cluster name will become part of the base domain of the resulting cluster. Also take note of the domain name, because you will need to configure the DNS server to resolve this domain name to the load balancer.

For example:

Cluster name = mycluster
Cluster base domain = mydomain.com

The resulting base domain of the cluster will be mycluster.mydomain.com
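With these values, the important endpoints of the cluster follow the standard OpenShift naming conventions:

API server:          api.mycluster.mydomain.com
Application routes:  *.apps.mycluster.mydomain.com

These are the names that the DNS server and load balancer we configure later need to resolve and forward.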

My plan is to let the VMs get their IP addresses from my router through DHCP, so I am selecting the DHCP option for the hosts' network configuration as shown below.




11. On the next page, there are optional operators that you can install. I don't need them for this installation, so I will not select any.


12. The third page is where you define the hosts/nodes. Click the Add Host button. You may also paste your SSH public key here so that you can SSH to the nodes for debugging later. Once everything is set, click Generate Discovery ISO. This will generate an ISO image.



13. Wait for the ISO to be generated, then download it. This will be the image that you will use to boot the VMs.



14. While waiting for the ISO to download, revisit the CentOS Stream installation on the openshift-services VM. By this time, the installation should have completed and you just need to reboot the system.


15. Once the OpenShift Discovery ISO is downloaded, upload it to Proxmox's local storage so that we can use it as the bootable installer image for our VMs.


16. Once uploaded, let's start creating the VMs for our OpenShift cluster. We will create a total of 5 VMs: 3 for master nodes and 2 for worker nodes. You can create more worker nodes if you wish.

Prepare the VMs according to the spec below.

Both Master and Worker Nodes: 

CPU: 8
Memory: 16 GB
Disk: 250 GB



Use the ISO image you downloaded as the image for the DVD drive. In the above screenshot, it is the value of the ide2 field.


17. After creating the VMs, do not start them yet, as we still need to install the supporting services on the openshift-services VM. Your list of VMs should look something like the screenshot below.



18. Using your router's built-in DHCP server, assign IP addresses to your VMs through the address reservation feature. This requires mapping MAC addresses to IP addresses. Do this before starting the VMs. Here is an example mapping I used in my router setup. I used x.x.x.210 as the IP address of the openshift-services VM. It is important to take note of this address because it will be the load balancer IP and DNS server IP.
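For readers following along, the reservation looks roughly like this. The .210 and .204 addresses are the ones referenced later in this post; the other hostnames and addresses are illustrative:

openshift-services     192.168.1.210   (DNS server + load balancer)
okd4-control-plane-1   192.168.1.201
okd4-control-plane-2   192.168.1.202
okd4-control-plane-3   192.168.1.203
okd4-compute-1         192.168.1.204
okd4-compute-2         192.168.1.205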




DNS Server


19. Now let's install and configure a DNS server on the openshift-services VM. You need to SSH into the openshift-services VM.

An OpenShift cluster requires a DNS server. We will use openshift-services as the DNS server by installing BIND (a.k.a. named) on it.

BIND can be installed by running the following command inside the openshift-services VM.

dnf install -y bind bind-utils

After installing, run the following commands to enable, start, and validate the status of the named service.

systemctl enable named
systemctl start named
systemctl status named

Once we have validated the installation, we need to configure the named service according to the IP address and DNS name mapping that we want to use for our nodes.


20. Configure the DNS server so that the names resolve to the IP addresses that we configured in the router's DHCP address reservation (step 18). I have prepared a named configuration for this setup, which is available on GitHub at https://github.com/rossbrigoli/okd4_files. Let's get these files and configure our DNS server. Note that you can choose to use your own DNS server configuration.

git clone https://github.com/rossbrigoli/okd4_files.git


21. I have prepared a script that will edit the files according to the chosen cluster name and base domain name. These should be the same names as the ones you chose when generating the OpenShift Discovery ISO from the Red Hat Hybrid Cloud Console. In the cloned directory, run the setdomain.sh script.

cd okd4_files
./setdomain.sh mycluster mydomain.com

This script will edit the named configuration files in place.
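To give you an idea of what the edited configuration contains, a forward zone for this setup looks roughly like the excerpt below. The record names and addresses follow the illustrative reservation table above; the actual files in the repo may differ in detail.

; excerpt of the forward zone for mycluster.mydomain.com (illustrative)
api             IN  A   192.168.1.210   ; load balancer
api-int         IN  A   192.168.1.210   ; load balancer
*.apps          IN  A   192.168.1.210   ; load balancer (application routes)
okd4-compute-1  IN  A   192.168.1.204   ; one A record per node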


22. Copy the named config files to the correct location. Ensure that the named service is running after the restart.

sudo cp named.conf /etc/named.conf
sudo cp named.conf.local /etc/named/
sudo mkdir /etc/named/zones
sudo cp db* /etc/named/zones
sudo systemctl restart named
sudo firewall-cmd --permanent --add-port=53/udp
sudo firewall-cmd --reload
sudo systemctl status named



23. Now that we have a DNS server running, we need to configure the router to use it as the primary DNS server so that all DHCP clients (all VMs) get configured with this DNS server as well. Depending on your router, the process of changing the primary DNS server may be a little different. Here's how it looks in mine.




24. Reboot the openshift-services VM. On the next boot, it should get the new primary DNS server from the DHCP server. Run the following command to check whether you can resolve DNS names.

nslookup okd4-compute-1.mycluster.mydomain.com

If you followed my DHCP address reservation configuration and DNS configuration, you should get a response with the IP address 192.168.1.204.


Load Balancer



Next, we need to route incoming traffic to the OpenShift nodes using an HAProxy load balancer. OpenShift also uses HAProxy internally.



25. On openshift-services VM, install HAProxy by running the following command.

dnf install -y haproxy



26. Copy the HAProxy config file from the okd4_files directory we cloned from Git earlier, and then start the service. Normally, you would use separate load balancers for control-plane traffic and workload traffic. However, for simplicity, in this example we are using the same load balancer for both.

cd 
sudo cp okd4_files/haproxy.cfg /etc/haproxy/haproxy.cfg
sudo setsebool -P haproxy_connect_any 1
sudo systemctl enable haproxy
sudo systemctl start haproxy
sudo systemctl status haproxy
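For reference, the relevant parts of haproxy.cfg look roughly like the excerpt below. The global and defaults sections are omitted, and the backend IPs follow the illustrative reservation above.

frontend api
    bind *:6443
    mode tcp
    default_backend api-backend

backend api-backend
    mode tcp
    balance roundrobin
    server okd4-control-plane-1 192.168.1.201:6443 check
    server okd4-control-plane-2 192.168.1.202:6443 check
    server okd4-control-plane-3 192.168.1.203:6443 check

frontend ingress-https
    bind *:443
    mode tcp
    default_backend ingress-https-backend

backend ingress-https-backend
    mode tcp
    balance roundrobin
    server okd4-compute-1 192.168.1.204:443 check
    server okd4-compute-2 192.168.1.205:443 check

Port 22623 (machine config server, control-plane nodes) and port 80 (HTTP ingress, worker nodes) follow the same pattern.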



27. Open up the firewall ports so that HAProxy can accept connections.

sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=22623/tcp
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload



28. Now that we have a load balancer pointing to the correct IP addresses, start all the VMs.



29. After starting the VMs, go back to the Red Hat Hybrid Cloud Console at https://console.redhat.com. In the cluster that you created earlier, the VMs/hosts should start showing up in the list as shown below.




30. You can then start assigning roles to these nodes. I assigned 3 of the nodes as control-plane nodes (masters) and 2 of the nodes as compute (worker) nodes. You can do this by clicking the values under the role column.



31. Click the Next buttons until you reach the Networking options page. Because we manually configured our load balancer and DNS server, we can select the User-Managed Networking option as shown below.




32. On the Review and Create page, click Install Cluster. This will start the installation of OpenShift to your VMs. You will be taken to the installation progress page, and from here, it's a waiting game.




This process will take around 40 minutes to an hour, depending on the performance of your servers. During the installation, the VMs will reboot several times.

There may be errors along the way, but they will self-heal. If you find a pod that is stuck, you can also delete it to help the installation heal faster.


Post Installation



Once the cluster installation is complete, ensure that your computer can resolve the base domain you used to the IP address of openshift-services so that you can access the cluster's web console and the Kubernetes API through the CLI.

You need to configure the following host mapping.

mycluster.mydomain.com should resolve to 192.168.1.210
*.mycluster.mydomain.com should resolve to 192.168.1.210
api.mycluster.mydomain.com should resolve to 192.168.1.210

You can add these entries to your /etc/hosts file or change the primary DNS server of your client computer to the IP address of openshift-services, 192.168.1.210, where the DNS server is running. On a Windows machine, the hosts file is at C:\Windows\System32\drivers\etc\hosts.
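If you go the hosts-file route, note that hosts files do not support wildcard entries, so each hostname you actually use needs its own line. On macOS or Linux, the entries would look roughly like this (the console and OAuth route names below are the OpenShift defaults):

192.168.1.210  api.mycluster.mydomain.com
192.168.1.210  console-openshift-console.apps.mycluster.mydomain.com
192.168.1.210  oauth-openshift.apps.mycluster.mydomain.com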

If everything is set correctly, you should be able to navigate to OpenShift's web console at https://console-openshift-console.apps.<clustername>.<base domain name>, for example, https://console-openshift-console.apps.mycluster.mydomain.com.

Alternatively, you can visit https://console.redhat.com, navigate to the OpenShift menu, and select the cluster you just created. Then click the Open console button.



OpenShift CLI



At this point, you won't be able to log in to the web console because you do not have the kubeadmin user's password and have not created any other user yet. However, you can access the cluster from the CLI by using a kubeconfig file, which we will download in a later step.

To access the cluster, we will need to install the OpenShift Client called oc.


33. Download the oc client from the Red Hat console. Go to https://console.redhat.com/openshift/downloads and download the OpenShift command-line interface (oc) that matches your client computer's OS and architecture.


34. After downloading, extract the tar file and copy kubectl and oc to a directory on your PATH. I am using macOS, so the following commands work for me after downloading the tar file.

cd ~/Downloads
tar -xvzf openshift-client-mac.tar.gz
cp ~/Downloads/oc /usr/local/bin
cp ~/Downloads/kubectl /usr/local/bin

On macOS, you might be prompted to allow the binaries to run or to access system directories. You need to grant them permission for them to work properly.
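If Gatekeeper blocks the binaries entirely, a common workaround (assuming you trust the downloaded files) is to clear the quarantine attribute:

sudo xattr -d com.apple.quarantine /usr/local/bin/oc
sudo xattr -d com.apple.quarantine /usr/local/bin/kubectl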


35. In the Red Hat Hybrid Cloud Console, download the kubeconfig file; this holds a certificate used to authenticate to your OpenShift cluster's API. Navigate to Clusters > your cluster name > Download kubeconfig.


This will download a kubeconfig file. To access your cluster, you need to set the KUBECONFIG environment variable to the path of this kubeconfig file. The example commands below work on macOS.

cp ~/Downloads/kubeconfig-noingress ~/
export KUBECONFIG=~/kubeconfig-noingress

36. To confirm that you can access the cluster, run the following oc command.

oc get nodes

This will list all the nodes of your newly created OpenShift cluster.

Note: Even before the installation is 100% complete, you can start accessing the cluster. This is useful for monitoring the progress of the installation. For example, you can run the following command to check the status of the Cluster Operators.

oc get co

If the installation goes well, you should see the following results. All operators should be Available=True.


Note: If you see errors in one or more Cluster Operator statuses, you can try deleting the pods of those operators; this will trigger self-healing and help unstick the installation process.


37. At the end of the installation, you will be presented with the kubeadmin password. Copy this password and use it to access the OpenShift console.

Using the username kubeadmin and the password you copied from the installation page of the Red Hat Hybrid Cloud Console, you should be able to log in to your OpenShift cluster.


Voila! Congratulations! You have just created a brand new OpenShift cluster.


There are also post-installation steps that should be done, such as configuring the internal image registry and adding an identity provider to get rid of the temporary kubeadmin login credentials. I will not cover these in this post because they are very well documented in the OpenShift documentation.




Wednesday, March 22, 2023

Running GitLab CI Jobs on OpenShift: The Easy Way

Rationale

Containers offer several benefits and an ideal environment for running GitLab CI jobs. They offer isolation to ensure a job's dependencies and configuration don't interfere with other jobs on the same machine. Containers also ensure that each GitLab CI job runs in the same environment, which makes the job's results reproducible. In addition, they offer portability and scalability, making it easy to run GitLab CI jobs on different infrastructures or cloud providers and to handle changing workloads. Finally, containers offer faster job start-up times, enabling quicker spin-up of GitLab CI job environments and faster execution of tests. Overall, containers provide a flexible, scalable, and efficient way to run CI/CD pipelines. So why not run your GitLab CI jobs on an OpenShift cluster?

GitLab Runner Operator

Regardless of where your GitLab instance is running, you can run your GitLab CI jobs in containers by setting up GitLab runners (agents) on OpenShift. The fastest way to do this is to install the GitLab Runner Operator.

The following steps will guide you through the installation of the GitLab Runner Operator on OpenShift.

1. There are prerequisites for installing the GitLab Runner Operator. One of them is that the OpenShift cluster must have cert-manager installed. This is used by the GitLab Runner Operator for requesting TLS certificates. The fastest way to install cert-manager is through an operator. Note that you must be a cluster administrator in order to install operators from OperatorHub. Navigate to OperatorHub and search for cert-manager. You may find two entries; you can install either one, but for this example, we will use the Red Hat version of cert-manager.


 

2. Install the Operator from the UI using all default configurations.

3. Once the cert-manager operator is installed, navigate back to OperatorHub and look for GitLab Runner. Select the certified version of the GitLab Runner Operator. Certified means it has been tested to work on OpenShift.



4. Install the operator using the default configurations as shown below.



5. After the installation is complete, verify the installation by making sure that the gitlab-runner-controller-manager pod is running in the openshift-operators namespace.




6. Create a project/namespace where you want the GitLab runners to run. Let's call it gitlab-runners.




7. Now that you have the operator running and a namespace for GitLab runners, you can create GitLab runner instances by creating "Runner" custom resources. But before we create our first GitLab runner, we need to create a secret that will hold the runner registration token. This is the token from your GitLab instance that runners use to register themselves.

Get the runner registration token from your GitLab instance by going to the Admin Area > CI/CD > Runners page. Then click the "Register an instance runner" button and copy the registration token.



8. Navigate to the gitlab-runners project. Create a secret called gitlab-dev-runner-secret by navigating to Workloads > Secrets > Create > Key/value secret as shown below.
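If you prefer the CLI, roughly the equivalent command is the following. This assumes the operator expects the token under the runner-registration-token key, which is what the operator documentation uses; replace <your-registration-token> with the token you copied.

oc create secret generic gitlab-dev-runner-secret \
  --from-literal=runner-registration-token=<your-registration-token> \
  -n gitlab-runners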



9. Once the secret is created, we can create our first GitLab runner instance. Navigate to Installed Operators > GitLab Runner > GitLab Runner tab in the gitlab-runners project and click the Create Runner button.

Give it a name. The GitLab URL field should be the base URL of your GitLab instance.

Leave the rest of the fields at their defaults and click the Create button.
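For reference, the YAML behind this form looks roughly like the sketch below. The runner name, GitLab URL, and the openshift tag are illustrative; the token field references the secret created earlier.

apiVersion: apps.gitlab.com/v1beta2
kind: Runner
metadata:
  name: dev-runner
  namespace: gitlab-runners
spec:
  gitlabUrl: https://gitlab.example.com
  token: gitlab-dev-runner-secret   # name of the secret holding the registration token
  tags: openshift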



10. Once the GitLab runner pod is running, verify that the runner has registered itself with GitLab by navigating to GitLab and checking that the new runner is listed, as shown below.



Conclusion

Et voilà! Now, all your GitLab CI jobs with the tag "openshift" will be executed by this new GitLab runner running on OpenShift.
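For example, a job in your .gitlab-ci.yml would target this runner roughly like this (the job name and script are illustrative):

build-job:
  stage: build
  tags:
    - openshift
  script:
    - echo "Running inside a container on OpenShift"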

You can create as many runners as you want. You may want a dedicated runner for front-end builds and another runner for back-end CI builds.

You can play around with the runner YAML and experiment with configurations such as setting up a dedicated service account for the runner. If your CI build is accessing the K8s API of OpenShift, you may want to use a service account that has access to the Kube API.

Enjoy!



Tuesday, March 7, 2023

Running Nexus Docker Registry on OpenShift

I have figured out how to make the Docker registry/repository of Nexus work on OpenShift. There are not a lot of resources out there that describe how to configure this. So if you are trying to make the Nexus Docker registry work on OpenShift, here is what you need to do.

1. Install the Nexus Repository Operator from OperatorHub.


 

2. Create an instance of Nexus Repository. Leave everything as the default unless you want to change things. It should look something like this.


3. The operator will create a route for the Nexus web app. However, the Docker endpoint does not work out of the box. We will get to this later. Now let's create a Docker hosted repository in Nexus. 



4. Configure the Docker repo to have an HTTP connector at the specified port, in this example 5003.




5. Test whether the container listens on this port by opening the pod terminal from the OpenShift UI and running curl localhost:5003. You should get a response like this, which means that the Docker endpoint is up.



6. Because Docker clients do not accept a URL path, the Docker API endpoint is exposed at the root of port 5003. However, this port is not exposed outside the pod. Typically, if Nexus is running on a VM, you must set up a reverse proxy to forward requests to port 5003. Luckily, in OpenShift, we can expose this port through a service and then a route.

Modify the existing service to expose another port, 5003, as shown below.
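In YAML terms, the change amounts to adding a second entry under spec.ports of the Nexus service, roughly like this excerpt. The port name imageregistry is the one the route will reference; the name of the existing 8081 entry is illustrative.

spec:
  ports:
    - name: http
      port: 8081
      targetPort: 8081
    - name: imageregistry
      port: 5003
      targetPort: 5003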



7. Finally, expose the service through another route. The existing route points to the service at port 8081. The new route must point to port 5003 (imageregistry) of the service. The route must use a different host/subdomain from the existing route and must use edge-terminated TLS, as shown below.
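From the CLI, roughly the equivalent would be the following; the route name, service name, and hostname are illustrative:

oc create route edge nexus-docker \
  --service=<nexus-service-name> \
  --port=imageregistry \
  --hostname=nexus-docker.apps.mycluster.mydomain.com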



8. Et voilà! You can now run a docker login command using the hostname you provided in the route. You can push images using host/imagename:tag. Take note that the repository URL displayed in the Nexus UI will not work; you need to use the host you defined in the route.
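For example, with the illustrative hostname from the previous step:

docker login nexus-docker.apps.mycluster.mydomain.com
docker tag myimage:latest nexus-docker.apps.mycluster.mydomain.com/myimage:latest
docker push nexus-docker.apps.mycluster.mydomain.com/myimage:latest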

There you go. I hope I have saved you some time. Enjoy!


