Running GitLab CI Jobs on OpenShift, the Easy Way


Containers offer several benefits and an ideal environment for running GitLab CI jobs. They provide isolation to ensure a job's dependencies and configuration don't interfere with other jobs on the same machine. Containers also ensure that each GitLab CI job runs in the same environment, making the job's results reproducible. In addition, they offer portability and scalability, making it easy to run GitLab CI jobs on different infrastructures or cloud providers and to handle changing workloads. Finally, containers offer faster job start-up times, enabling quicker spin-up of GitLab CI job environments and faster execution of tests. Overall, containers provide a flexible, scalable, and efficient way to run CI/CD pipelines. So why not run your GitLab CI jobs on an OpenShift cluster?

GitLab Runner Operator

Regardless of where your GitLab is running, you can run your GitLab CI jobs in containers by setting up GitLab runners (agents) in OpenShift. The fastest way to do this is to install the GitLab Runner Operator.

The following steps will guide you through the installation of the GitLab Runner Operator on OpenShift.

1. The GitLab Runner Operator has a prerequisite: the OpenShift cluster must have cert-manager installed, which the operator uses to request TLS certificates. The fastest way to install cert-manager is through an Operator. Note that you must be a cluster administrator in order to install operators from OperatorHub. Navigate to OperatorHub and search for cert-manager. You may find two entries, and either will work, but for this example we will use the Red Hat version of cert-manager.


2. Install the Operator from the UI using all default configurations.

3. Once the cert-manager operator is installed, navigate back to OperatorHub and look for GitLab Runner. Select the certified version of the GitLab Runner Operator. Certified means it has been tested to work on OpenShift.

4. Install the operator using the default configurations as shown below.

5. After the installation is complete, verify the installation by making sure that the gitlab-runner-controller-manager pod is running in the openshift-operators namespace.

6. Create a project/namespace where you want the GitLab runners to run. Let's call it gitlab-runners.

7. Now that you have the operator running and a namespace for the runners, you can create instances of GitLab runner by creating "Runner" custom resources. But before we create our first GitLab runner, we need to create a secret that will hold the runner registration token. This is the token from your GitLab instance that runners use to register themselves.

Get the runner registration token from your GitLab instance by going to the Admin Area > CI/CD > Runners page. Then click the "Register an instance runner" button and copy the registration token.

8. Navigate to the gitlab-runners project. Create a secret called gitlab-dev-runner-secret by navigating to Workloads > Secrets > Create > Key/Value Secret as shown below.
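Equivalently, the secret can be created from a manifest. A minimal sketch is shown below; the key name runner-registration-token is what the GitLab Runner Operator expects, and the token value is a placeholder you replace with the token copied from GitLab:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-dev-runner-secret
  namespace: gitlab-runners
type: Opaque
stringData:
  # Paste the registration token copied from the GitLab Admin Area here.
  runner-registration-token: "REPLACE_WITH_YOUR_TOKEN"
```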

9. Once the secret is created, we can now create our first GitLab runner instance. Navigate to Installed Operators > GitLab Runner > GitLab Runner tab in the gitlab-runners project and click the Create Runner button.

Give it a name. The GitLab URL field should be the base URL of your GitLab instance.

Leave the rest of the fields default. Click the Create button.
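For reference, the UI form produces a Runner custom resource behind the scenes. A minimal manifest might look like the following sketch; the API version and field names are taken from the GitLab Runner Operator, the URL is a placeholder, and the token field references the secret created earlier:

```yaml
apiVersion: apps.gitlab.com/v1beta2
kind: Runner
metadata:
  name: gitlab-dev-runner
  namespace: gitlab-runners
spec:
  # Base URL of your GitLab instance (placeholder).
  gitlabUrl: https://gitlab.example.com
  # Name of the secret holding the runner registration token.
  token: gitlab-dev-runner-secret
  # Jobs with this tag will be picked up by this runner.
  tags: openshift
```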

10. Once the GitLab runner pod is running, verify that the runner registered itself by navigating to GitLab and checking that the new runner is listed, as shown below.


Et voilà! Now, all your GitLab CI jobs with the tag "openshift" will be executed by this new GitLab runner running on OpenShift.
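To route a job to this runner, tag it in your project's .gitlab-ci.yml. The job name and script below are illustrative:

```yaml
# .gitlab-ci.yml (illustrative job)
build:
  tags:
    - openshift   # matches the tag configured on the OpenShift-hosted runner
  script:
    - echo "This job runs in a container on OpenShift"
```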

You can create as many runners as you want. You may want a dedicated runner for front-end builds and another runner for back-end CI builds.

You can play around with the runner YAML and experiment with configurations such as setting up a dedicated service account for the runner. If your CI build accesses the OpenShift Kubernetes API, you may want to use a service account that has the appropriate permissions.


Running Nexus Docker Registry on OpenShift

I have figured out how to make the Docker registry of Nexus work on OpenShift. There are not many resources out there that describe how to configure this. So if you are trying to make the Nexus Docker registry work on OpenShift, here is what you need to do.

1. Install the Nexus Repository Operator from OperatorHub.


2. Create an instance of Nexus Repository. Leave everything as the default unless you want to change things. It should look something like this.

3. The operator will create a route for the Nexus web app. However, the Docker endpoint does not work out of the box. We will get to this later. Now let's create a Docker hosted repository in Nexus. 

4. Configure the Docker repo to have an HTTP connector at the specified port, in this example 5003.

5. Test whether the container listens on this port by opening the pod terminal from the OpenShift UI and running curl localhost:5003. You should get a response like this, which means the Docker endpoint is up.

6. Because Docker clients do not accept a URL path, the Docker API endpoint is exposed at the root of its own port. However, this port is not exposed outside the pod. Typically, if Nexus is running on a VM, you must set up a reverse proxy to forward requests to port 5003. Luckily, in OpenShift we can expose this port through a service and then a route.

Modify the existing service to expose another port, 5003, as shown below.

7. Finally, expose the service through another route. The existing route points to the service at port 8081; the new route must point to port 5003 (imageregistry) of the service. The route must use a different host/subdomain from the existing route and must use edge-terminated TLS, as shown below.
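A sketch of both changes is below. The Service and Route names, and the hostname, are examples; your operator-generated Service name may differ, so adjust accordingly:

```yaml
# Extra port added to the existing Nexus Service.
apiVersion: v1
kind: Service
metadata:
  name: nexus-repository
spec:
  ports:
    - name: http
      port: 8081
      targetPort: 8081
    - name: imageregistry      # Docker HTTP connector
      port: 5003
      targetPort: 5003
---
# New edge-terminated Route pointing at the Docker connector port.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: nexus-docker
spec:
  host: docker.apps.example.com   # placeholder; must differ from the web UI route
  to:
    kind: Service
    name: nexus-repository
  port:
    targetPort: imageregistry
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
```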

8. Et voilà! You can now run a docker login command using the hostname you provided in the route, and push images using host/imagename:tag. Take note that the repository URL displayed in the Nexus UI will not work; you need to use the host you defined in the route.

There you go. I hope I have saved you some time. Enjoy!

TDD is not a Testing Approach

TDD stands for Test-Driven Development. Contrary to what I often hear from others, it is not a testing approach. Rather, it's a development practice where tests are used to determine not only the correctness but also the completeness of the code. We often hear TDD described as writing the tests before the code, which is only partially correct, because that's not all of it. Test-driven development (TDD) is a software development methodology in which tests are written for a new piece of code before the code itself is written. Each test is expected to fail initially; if it doesn't, the test is invalid or clearly not working. The implementation code is then written specifically to pass the tests. By the time the implementation code is complete, it is already tested.

If you are familiar with developer assessment platforms like HackerRank, Codility, and Coderbyte, or if you have attended algorithmic competitions like the Facebook Hacker Cup and Google Code Jam, you know that developers in these environments write their code and then press a button. That button press runs a series of tests in a couple of seconds and comes back with a report saying whether your code passed. TDD is actually very similar, except that the tests are also written by the same developer who is trying to solve the problem.

Another difference is that in TDD, developers don't write an entire 1000-line test suite first. Instead, TDD follows this cycle:

  1. Write tests that check for very specific behavior.
  2. The tests should fail because no implementation code has been written yet.
  3. Write just enough code to make the tests pass.
  4. Then refactor your code until you are satisfied with your design.
  5. Go back to step 1.
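To make the cycle concrete, here is a minimal sketch of one red-green-refactor pass in Python. The slugify function is hypothetical, invented purely for illustration:

```python
# Step 1: write tests for very specific behaviours, before any
# implementation exists.
def test_replaces_spaces_with_hyphens():
    assert slugify("hello world") == "hello-world"

def test_lowercases_the_result():
    assert slugify("Hello World") == "hello-world"

# Step 2: at this point in the cycle, running the tests fails
# (a NameError here, since slugify does not exist yet) -- the "red" step.

# Step 3: write just enough code to make the tests pass -- the "green" step.
def slugify(text):
    return text.lower().replace(" ", "-")

# Steps 4-5: refactor if needed, then loop back with the next behaviour
# (e.g. stripping punctuation).
test_replaces_spaces_with_hyphens()
test_lowercases_the_result()
print("all tests passed")
```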

This approach to development has gained popularity in recent years because it can help to ensure that code is well-designed, easy to maintain, and free of defects.

One of the key advantages of TDD is that it forces developers to think about the desired behavior of their code before they start writing it. This helps to ensure that the code is well-designed and easy to understand. It also helps to prevent developers from writing code that is difficult to maintain or modify in the future, and it prevents them from writing code that is not necessary, something developers often call "over-design." In other words, developers are unknowingly forced to write good-quality code.

Another advantage of TDD is that it can help to catch defects early in the development process. Because tests are written before the code, developers can identify and fix defects as soon as they are introduced. This can save time and effort in the long run, as it is much easier to fix a defect early on than it is to track it down and fix it later on.

The Challenges

TDD is not without its challenges, however, and there are common mistakes teams make when adopting it.

One common challenge is that writing tests can be time-consuming, and it can be tempting for developers to skip this step in the interest of saving time. However, skipping the testing step can lead to defects and other problems down the line. In the end, as quality degrades, developers end up spending more time fixing problems that are not captured by tests, and this time usually exceeds the time saved by not writing the tests.

Another challenge with TDD is that it can be difficult for developers who are new to the methodology to know how to write effective tests. Writing tests that are comprehensive enough to cover all possible scenarios can be a daunting task, and it can take time and experience to develop the skills needed to write effective tests.

Automating UI tests as part of the TDD approach may not always work, because UI tests typically take time to run and are often fragile. Also, because these tests are very visual and involve more than behavior checking, they are often left out of the TDD cycle. However, there are attempts to automate visual testing, such as the approach Gojko Adzic describes in this talk.

The test activity done by the QA/testing team may seem redundant. It is true that because the code is already tested, there is no need to manually execute the same tests written by the developers. However, some tests cannot be covered or are very difficult to implement in the TDD approach, including integration tests. TDD tests are mainly unit tests, and any interaction with external systems is usually mocked. One way to solve this challenge is to bring the testers closer to the developers: let the testers define which behaviors need to be tested, and the developers can then tell the testers which behaviors are already covered by the TDD tests so the testers can focus on the remaining ones. This problem is less prevalent in a microservices architecture, though, because each service is isolated and independent of the others, so the need for integration tests is smaller.

Despite these challenges, many developers and organizations have found that the benefits of TDD outweigh the challenges. By forcing developers to think about the desired behavior of their code and by catching defects early in the development process, TDD can help to ensure that code is of high quality and easy to maintain. As a result, TDD has become an increasingly popular approach to software development.

Five DevSecOps Myths Executives should Know

DevSecOps, a term coined from the combination of "development", "security", and "operations", is a set of practices that aim to integrate security and operations early on in the software development lifecycle. This approach is designed to address the increasing need for security in the fast-paced world of software development, where the frequent updates and deployments of applications make it difficult to incorporate security measures after the fact.

Traditionally, security was seen as an afterthought in the software development process. Developers would focus on building and deploying their applications, and security measures would be implemented later on by a separate team. This approach often led to security vulnerabilities that could have been avoided if security had been considered from the beginning.

With DevSecOps, the focus is on integrating security into the development process from the moment the code is written and/or committed. This means that security considerations are made at each stage of the development lifecycle, from planning and design to testing and deployment. This approach allows for the identification and resolution of security issues early on before they become a major problem in production.

The Myths

One of the reasons most executives don't get it is the following common misconceptions about DevOps and DevSecOps at the C-level:

  1. That it is just some practice for building software;
  2. That it is a team-level thing and does not concern the entire organization;
  3. That it's only about putting in place a set of fancy tools for the IT teams;
  4. That it's all about creating a new organizational unit called DevOps/DevSecOps, which is responsible for implementing and maintaining the fancy toolsets;
  5. That DevOps and DevSecOps are only for "unicorns." You must have often heard the phrase, "We are a bank; we are not Netflix! We are highly regulated. We have hundreds of applications to maintain, not one single front-facing application".

If we dig into these myths, the first and second are partially correct, because DevSecOps and DevOps are, by definition, practices that integrate operations and security into the software development team. But it's not just about CI/CD, and it's not just about being agile. It's about building the right thing the right way for the right customers at the right time, and this directly impacts the company as a whole. Building the right products that people want at the right time can directly impact revenue, while building the wrong product that nobody wants at the wrong time can break a company. DevOps and DevSecOps achieve this through continuous delivery, which allows for a faster feedback loop.

The third myth is false because DevSecOps is not only about the tools. It takes more than tools to give an organization DevOps and DevSecOps; it also requires changing processes and adopting certain practices. Nor is it about creating a new dedicated team for DevSecOps, as in myth number 4. It's about collaborating and breaking silos so that operations and security teams closely collaborate with developers. One very basic example of collaboration: instead of manually performing security tests that they alone defined and designed, the security team can share the security test definitions and designs with the developers, who can then take them into account when writing their code and even write code to automate the execution of those tests. As a result, you can cut the processing time by several orders of magnitude, which means cost savings and better quality because of reduced human error.

The fifth and last myth is particularly interesting because I have heard it many times while working with FSI clients. What those statements mean is often unclear, and they often don't like being asked, "why not?" After a conversation with a seasoned manager who has worked in a traditional bank for many years, I learned one thing, and you will be surprised: they are all just excuses. Tada! Excuses, because implementing DevSecOps requires a lot of work, and only some are up to the challenge. In fact, traditional organizations will benefit more from DevSecOps than startups.

DevSecOps is about optimizing the feedback loop from idea to end-users. By continuously delivering product increments and features, you will discover problems sooner and come up with solutions sooner. In the worst extreme, you might pivot your strategy or even abandon the idea early. Providing solutions sooner translates to happier customers. Every business needs that. Not just the unicorns.

The Missing Link

Most executives live in the ivory tower. And that's all right. That's the reality, and we must live with it unless we require every leader to go through an "undercover boss" mission. Not happening. Therefore, we need to help them understand the value of DevSecOps and DevOps, and the best tool we have for this is reporting. The fastest way to start is to use monitoring tools to gather data points that we can use to produce the DORA (DevOps Research and Assessment) metrics: a set of four metrics that measure the software delivery performance of a team and the entire organization. These are Deployment Frequency (how often you deploy to production), Mean Time to Recovery (the mean amount of time required to recover from failure), Change Failure Rate (how often deployments fail), and Lead Time for Change (the total amount of time from receiving the requirements to production). These metrics are a very good start because you can then connect other KPIs to them. For example, you can start measuring customer/end-user feedback and find correlations with the above metrics. Speed-to-market is directly proportional to, and can be measured by, Deployment Frequency and Lead Time for Change. When these metrics are properly reported back to upper management, the executives can relate these KPIs to other business KPIs, including revenue and customer feedback, ultimately understanding the business impact of the DevSecOps initiative.
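As an illustration, three of the four DORA metrics can be computed from simple deployment records. The data below is made up for the example; Mean Time to Recovery would additionally require incident open/close timestamps:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records over a 30-day reporting period:
# (finished_at, succeeded, lead time from requirement to production)
deployments = [
    (datetime(2023, 1, 2),  True,  timedelta(days=2)),
    (datetime(2023, 1, 9),  False, timedelta(days=5)),
    (datetime(2023, 1, 10), True,  timedelta(days=1)),
    (datetime(2023, 1, 16), True,  timedelta(days=3)),
]

period_days = 30

# Deployment Frequency: deployments per day over the period.
deployment_frequency = len(deployments) / period_days

# Change Failure Rate: fraction of deployments that failed.
change_failure_rate = sum(1 for _, ok, _ in deployments if not ok) / len(deployments)

# Lead Time for Change: average time from requirement to production.
avg_lead_time = sum((lt for *_, lt in deployments), timedelta()) / len(deployments)

print(deployment_frequency, change_failure_rate, avg_lead_time)
```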

DevOps, DevSecOps, DevBizOps, DevSecBizOps, NoOps?

I have used the terms DevOps and DevSecOps interchangeably above because I think they are essentially the same. Other terms that came out after DevOps are basically just DevOps with an emphasis on certain areas: DevSecOps emphasizes security, DevBizOps emphasizes business, and DevSecBizOps emphasizes both security and business, duh! There is also a term called NoOps, which I will leave for you to explore; it's interesting. All of these terms, however, revolve around applying agile software development practices, encouraging collaboration, and breaking silos to achieve continuous delivery.


To summarize, the key benefit of DevSecOps is that it allows for the continuous integration and deployment of secure software. Because security is integrated into the development process, it is possible to deploy updates and new features quickly and efficiently without sacrificing security. This allows organizations to stay competitive in a rapidly-changing market, where the ability to adapt and innovate quickly is key.

Another advantage of DevSecOps is that it encourages collaboration between the business, development, security, and operations teams. By working together, these teams can identify and address security concerns in a more efficient and effective manner. This can lead to a more secure and stable software development process, as well as a more positive work environment.

To implement DevSecOps effectively, organizations must embrace the core DevOps principles and be willing to make some changes to their existing processes and organizational structure. This may include adopting new technologies and tools, such as automation and orchestration platforms, as well as implementing new security protocols and processes. However, the long-term benefits of DevSecOps make it well worth the effort.

Overall, DevOps and/or DevSecOps is a powerful approach to software development that allows organizations to build and deploy secure software quickly and efficiently. By integrating security into the development process, organizations can stay competitive and protect themselves against security threats. It is not just for IT teams, it also impacts the organization as a whole. It's not just about the tools, it's also about faster feedback loops and better customer experience. And lastly, executives will see the value of DevSecOps initiatives when they have visibility of the software delivery performance.

AZ-GTi Equatorial Mode at Low Latitude Problem Solved

Just a few weeks ago, I decided to get back into my old hobby, amateur astronomy. I spoke to old astronomy friends to get some tips on buying an ultra-portable telescope and mount. This led me to Skywatcher's AZ-GTi mount and a 127 mm Maksutov-Cassegrain telescope. I bought a kit online, which arrived in just a couple of days. I knew this mount is an alt-az mount, but my friend told me it can be converted to an equatorial mount and used for some basic astrophotography. I looked online and found that, indeed, some people are using this mount in equatorial mode by putting it on a wedge and changing the firmware. I bought the wedge and the counterweight after watching this YouTube video.

The wedge and the mount arrived in a week, and this is when I found the problem. The whole setup looks cool and very compact. But the problem is, I can't configure it to my location's latitude. I am in a city that is only 1.4 degrees north of the equator, and the mount hits the base of the wedge starting at 7 degrees. But I needed 1.4 degrees.

The review videos I watched online were made by people living in Canada or the USA, far up in the northern hemisphere, so they do not have this problem.

Though I found people talking about the exact same problem, I never found a real solution. I guess there are not a lot of people in low-latitude countries who bought this mount and wedge. So I decided to fix this myself. As an Agile practitioner, I solve problems iteratively.

Iteration 1

In the first iteration, I tried flipping the pinion gear of the wedge so that the dovetail faces the opposite side. It somehow fixed the original problem, but it also created a new one: some bolts end up in odd positions and are difficult to access, like the altitude adjuster. Also, the centre of gravity is now too far off centre. It could actually tip the tripod over and break your precious telescope.


Iteration 2

In the second iteration, I designed and 3D printed a wedge to replace the Skywatcher wedge. The objective was to test feasibility first and see if the plastic could hold the weight. I designed a very simple L-bracket with a pre-configured angle of 91.4 degrees. This gives me a fixed altitude of 1.4 degrees.

I printed it at 40% honeycomb infill at a slightly higher temperature and higher extrusion rate for stronger layer adhesion. The printing process took 34 hours.

Although it looks cool, and it shows that it can handle the load, there are some problems with the design.

Problems / Lessons learned

  • The main problem was that this 40% semi-hollow plastic shell flexes under load. And when it does, that 1.4 degrees of altitude is lost, and the mount sometimes points below the horizon depending on the position of the telescope.
  • The second problem was that you cannot adjust the azimuth, so it's very difficult to polar-align, especially when you cannot see Polaris.
  • And lastly, the power adapter port is no longer accessible at a certain Right-Ascension angle. I forgot to take this into account while designing.

Iteration 3

Based on what I learned from the second iteration, this iteration was a complete pivot. I decided to reuse Skywatcher's wedge instead of replacing it, so that I still have an azimuth adjuster. The objective this time was not to replace the wedge with a 3D-printed one but to help the wedge alleviate the stress of a very-low-latitude configuration. So I designed an extension for the wedge. The extension tilts the wedge at a 12-degree angle and also moves the wedge's mount point off centre, just enough to give room for the counterweight before it hits the tripod legs. This gives a better centre of gravity. And because of the 12-degree offset, to achieve a 1.4-degree altitude the mount just has to be configured at the 13.4-degree position, which is above the 7-degree limit.

I printed this solid (100% infill) with a yellow PLA plastic that I already had. The printing process took almost 20 hours to complete. The result is amazing! It's very strong and heavy; it weighs almost a kilogram.

Problems / Lessons Learned

Although this iteration has proven that the new design works and the result is amazing and almost perfect, there are still very minor issues that need fixing.

  • The colour is yellow, which does not fit Skywatcher's colour palette. This is just because it was the only filament I had at the time of printing.
  • The thumbscrews at the bottom are quite difficult to tighten with your thumb, because the gap between the thumbscrew knobs and the base of the wedge extension is too tight, as shown in the following image.

With these minor problems, I decided to do another iteration with more experimentation.


Iteration 4

In this iteration, apart from fixing the Iteration 3 issues, I also experimented with a different material: ABS, an oil-based plastic. It is slightly stronger than what I usually use in 3D printing, and it does not decompose. I had not printed with this material before, and to do so I had to modify my printer by adding a heater to the print bed, which ABS requires. So, as extra work in this iteration, I hacked my 3D printer: I stuck a silicone heater to the glass print bed, installed a relay module, and updated the firmware.

But this led to a failed print. The material warped significantly even though the bed was heated. It seems that the bed temperature was not set correctly.

In this iteration, I also tweaked the printing configuration: changed the nozzle temperature, increased the print bed temperature to 110 degrees Celsius, and picked a slower printing speed. The print had less warping, but after just 2 hours of printing, strange issues came up. There were layer-shifting issues, and about 3 hours into the process, the printer crashed. The motherboard overheated, went into thermal shutdown, and reset itself, leaving me with an unfinished print. The layer shifting can also be explained by overheating.

This iteration was a failure, but it surfaced an issue with my 3D printer. I changed the design of my printer's print bed so that it leaves a gap where hot air can escape from underneath the bed, where the motherboard is.

The design change worked well; the printer no longer overheats. I tried a couple of huge prints without overheating issues, so I was ready to try again.

Iteration 5 

In this iteration, since the printer design was fixed, I tried printing in ABS again. There was less warping, but there were layer-adhesion problems: cracks in the print. I think I was still printing too fast. Or perhaps I should have enclosed the printer to keep a uniform temperature across the layers.

The resulting print was indeed super strong, but it looked ugly, with many cracks all over.

Iteration 6

I decided I'd had enough of ABS printing, so I bought a white PLA filament instead. I adjusted the design to fix the thumbscrew gap, added embossed branding, and printed with slight over-extrusion to make sure there are no gaps between layers and to ensure a stronger layer bond.

And here's the final result with the wedge and the AZ-GTI mount attached to it.

I'm happy with the final result and decided that this is the final version. Overall, I spent around $40 on filaments, a few hours of CAD design, and about 40 hours total of overnight printing. But the result is worth it. All problems are now solved. It looks cool and blends in very well with the Skywatcher colours. The only thing that worries me now is that in the next few years, PLA, the material I used, will start to degrade and eventually decompose, especially when exposed outdoors. But I guess I will deal with that in the next few years.

For anyone who has the same problem with AZ-GTi mounts in equatorial mode at low latitudes, feel free to download the 3D model STL file and print it yourself. Save yourself the trouble I had.

You can get the STL file here -->

Here are my print settings.

  • Filament - 3d-aura PLA Extreme (super strong PLA)
  • Nozzle diameter - 0.4 mm
  • Layer height - 0.2 mm
  • Infill - 100% solid
  • Printed support - yes
  • Bed temperature - 70 deg C
  • Nozzle temperature - 210 deg C
  • Print speed - 120 mm per second
  • Printing orientation - for maximum tension strength, as per below:

Happy 3d printing!

DIY Delta 3D Printer Rebuild


About 5 years ago, I built a Delta 3D printer from a kit I bought from AliExpress. You will find the details in this post. That same year, I was exploring 3D printing and building a 3D-printed electronic drum kit. The 3D models are open source and available on GitHub. The drum kit has been working very well, and I have used it to produce songs that you can now find on YouTube and Spotify. That's my other hobby.

Old Printer

Five years have passed, and the 3D-printed PLA (polylactic acid) parts of my old printer have degraded, becoming weak and brittle. The parts started to crumble and break apart one by one. PLA plastic is biodegradable; I guess it has reached the end of PLA's limited life span. Unfortunately, I had not printed spare parts for the printer, and even if I had, they would have degraded at the same time and would be unusable today. The images below show the broken parts that rendered the printer unusable.

Just before the year-end holidays, I decided to rebuild the printer and make it more robust. I wanted to replace most of the plastic parts with aluminium, but I could no longer find the exact spare parts. So I had to improvise and fit non-standard parts. The results were surprisingly amazing!

Spare Parts Sourcing

I could no longer find a supplier of the exact same parts, so I decided to find other parts and fit them to the printer. I got new aluminium extruder parts from Creality, originally designed to work with Creality 3D printers. I bought a generic carriage/rollers, effector plate, hot end, and nozzle from somewhere else and put them all together. Later on, I also replaced the cheap, mechanically unstable push rods with better ones. With these parts, this printer is never going to look the same.

Using some nylon spacers, I managed to attach the carriage to the rollers with minimal impact on the printer's geometry. The dimension is about 0.3 mm off from the original, but this is OK, as I can compensate for it in the firmware. I assembled a version 6 J-head hot end made of brass and the classic hammock effector plate.

The aluminium extruder from Creality looks beautiful. I also bought an aluminium version of the delta frames, which were originally made of injection-moulded orange plastic.

Knocking the Old Printer Down

Knocking this printer down was a kind of test of my memory. I had to remember how I assembled it to make the disassembly smoother. I assembled this printer 5 years ago without any assembly instructions; now it was time to knock it down to the last bolt and screw.

Dismantling was fun. It took less than an hour to disconnect every electronic part, remove every bolt and nut, disconnect all peripherals from the motherboard, and disassemble the effector.


Building the New Version

The sequence I followed was to start at the bottom triangle frame using the new aluminium corner frames, attach the motors, and build up to the top frame. During the assembly process, I made a few mistakes and had to redo some parts of the build. At this point, I realized that I may be wiser than my younger self, but I am definitely not smarter than my 5-years-younger self. Nevertheless, I managed to assemble the frame with the motors in it, the timing belts installed, well-greased bearings, etc., in less than a couple of hours.


The image below shows the complete mechanical assembly: no more orange plastic parts. The only remaining plastic part is the new effector plate, which is made of injection-moulded nylon. I liked the hammock auto-levelling design of the effector plate, but I could not find an aluminium version of it. The pushrods were not replaced and are still made of carbon fibre. The rest is now made of aluminium, painted black (not anodized).



Assembling the electronics was quite fast, except for one part: I realized that the Z-axis mechanical lever limit switch for automatic bed levelling cannot be attached to this type of effector plate, since the holes do not match.

Connected everything to the Arduino board

Loaded the Marlin firmware


After reading about this version 6 design of the hammock effector plate, I learned that it was designed around a particular industrial-automation optical sensor, the Omron EE-SX671. I found it on AliExpress and ordered it right away.

SX-671 Optical Sensor


I knew this order would take at least three weeks, but I wanted to use the printer immediately to start printing some of its remaining parts. So I made a crude plastic frame that let me mount the mechanical lever limit switch. It worked, and it should last long enough to print the remaining parts.

The plastic assembly bolted to support the lever limit switch.


Crude lever Z-axis limit switch assembly just so I can print the other parts while waiting for the optical sensor.

Once the electronics were complete, I flashed the modified firmware. The firmware configuration was updated to match the new geometry of the printer; the Z-axis height is slightly greater because of the shorter hot end and better endstop positioning. After flashing the firmware to the existing MKS Mini Arduino board, it just worked right away. With a minor adjustment of the Z height, the printer was then ready to print its own LCD panel housing plus other parts.
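For anyone curious what a geometry update like this involves, here is a hedged sketch of the relevant delta section of Marlin's Configuration.h. The option names exist in Marlin 2.x, but all the values below are placeholders, not this printer's actual measurements:

```cpp
// Illustrative delta-geometry excerpt from Marlin's Configuration.h.
// NOTE: all numeric values are placeholder examples, not real measurements.
#define DELTA
#if ENABLED(DELTA)
  #define DELTA_HEIGHT 320.00          // Nozzle-to-bed distance after homing (mm)
  #define DELTA_DIAGONAL_ROD 218.0     // Centre-to-centre pushrod length (mm)
  #define DELTA_RADIUS 100.6           // Horizontal tower-to-centre distance (mm)
  #define DELTA_PRINTABLE_RADIUS 85.0  // Usable bed radius (mm)
#endif
```

Small offsets like the 0.3 mm difference in the new carriage can be absorbed by tweaking these radius and rod-length values rather than changing the hardware.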

Printing its own LCD panel housing and motherboard chassis

I have also designed a cover for the carriage rollers. This part becomes important now that I have a kid that could stick her fingers between those rollers. Putting a cover makes sure nothing gets in between the rollers and adds a better look to the entire build.

To test the reliability of the machine I had just built and to break in the new bearings, I printed a "Baby Groot" for my little girl, which took 6.5 hours.

Finally, after two weeks, the optical sensor and the pushrods arrived. The pushrods are of good quality, with zero backlash. They are built for the FLSUN Q5 printer, but they fit perfectly. After a few test prints, I can see that the print quality has improved, with fewer mechanical errors caused by the worn-out ball-and-socket joints. And finally, here's the rebuilt 3D printer in action, printing a camera T-ring telescope adapter.

Including the wait for orders to arrive from China, the project took more than a month. But it only cost around $100 in total, a lot cheaper than buying a new printer. Overall, I am very happy with how it came out. I like the yellow chrome against the black metal frame.

I am now the proud owner of a unique 3D printer that no one can buy anywhere. I hope this inspires someone to build their own 3D printer by leveraging open-source hardware.


I decided to add a heater to the print bed. The issue was that my motherboard, an MKS Mini v1.2, has no support for a heated bed, so there is no port to attach a bed heater to. However, the bed temperature sensor input exists, and the D8 pin of the MCU is exposed; D8 is the pin the Marlin firmware uses to switch the print bed heater on or off. Another issue is that my power supply is only 72 watts, while some print bed heaters draw about 100 W. So how did I solve this?
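A quick back-of-the-envelope check shows why driving the heater from the mains through a relay sidesteps the 72 W limit. The 70 mA relay coil current below is an assumed figure for a typical 5 V relay module, not a measured value:

```python
def power(voltage_v, current_a):
    """Electrical power P = V * I, in watts."""
    return voltage_v * current_a

# The 220 V / 100 W silicone heater draws its current straight from the
# mains, so it never loads the printer's 72 W supply:
heater_current = 100 / 220  # ~0.45 A on the mains side

# The printer's PSU only has to energise the relay coil.
# Assuming a typical 5 V module coil drawing roughly 70 mA:
relay_load = power(5.0, 0.070)  # ~0.35 W of extra load

print(f"Heater mains current: {heater_current:.2f} A")
print(f"Extra PSU load from relay coil: {relay_load:.2f} W")
```

So the board and PSU see well under half a watt of extra load, while the full 100 W of heating power stays on the mains side of the relay.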


I bought a silicone heater rated 220 V and 100 W. Then I bought a relay module and connected it to the D8 pin of the motherboard. This adds only a tiny current load on the board, just enough to drive the small relay coil; the actual switching of the heater happens inside the relay, on the mains side. There is just a mechanical "tick" sound whenever the print bed turns on or off. The bed heats up to the target temperature in just a few seconds.
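On the firmware side, enabling the bed looks roughly like the Configuration.h sketch below. The option names are real Marlin 2.x settings, but the values are illustrative, and the actual D8 mapping for the bed heater lives in the board's pins file rather than here:

```cpp
// Illustrative Configuration.h changes for a relay-driven bed (Marlin 2.x).
// Values are examples only; the heater pin itself is set in the pins file.
#define TEMP_SENSOR_BED 1     // Use the existing bed thermistor input
#define BED_MAXTEMP 120       // Safety cutoff in degrees C

// A mechanical relay cannot be switched rapidly like a MOSFET, so leave
// bed PID disabled and use slow bang-bang control with hysteresis:
//#define PIDTEMPBED
#define BED_LIMIT_SWITCHING
#define BED_HYSTERESIS 2      // Only toggle when more than 2 degrees C off target
```

Keeping PID off and adding hysteresis matters here because every toggle is a physical relay click; slow switching greatly extends the relay's life.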

I also revised the print bed clamp design, raising the print bed to leave a gap between the frame and the bed. This lets air flow in and out instead of accumulating under the print bed and heating up the motherboard, which addresses an earlier overheating issue.

Now, with this revision, the printer can print materials other than PLA. I tried printing with ABS, and it works!

