Monday, April 3, 2023

Installing OpenShift on Any Infrastructure

As of this writing, the Assisted Installer is the easiest way to install an OpenShift cluster on custom/bespoke infrastructure. You do not need to deal with Ignition files by hand or manually configure the OpenShift installer. You do not even need to set up a bootstrap node.

In this post, I will walk you through how to create an OpenShift cluster using the Assisted Installer on bespoke infrastructure. To host VMs, I am using Proxmox VE, a lightweight open-source virtualization product. However, similar steps should also apply to other hypervisor products. If you want to know how to install Proxmox, see my previous post.

First, we need to set up the infrastructure. We need to create VMs and then do the following steps to configure our infrastructure.

  • Create and configure a DNS server
  • Configure the DHCP server of our router
  • Create and configure an HAProxy load balancer

We will run the supporting services in a dedicated VM called openshift-services.

The first step is to create a VM that will host all of the services required by the cluster.

Creating the Virtual Machines

1. Download the CentOS Stream image and upload it to Proxmox. You may also let Proxmox download the image directly from a URL.

Download the correct architecture that matches your VM host. In my case, my VMs have x86_64 CPUs.

Upload the image to Proxmox's local storage as shown below.

2. Create a VM in Proxmox and name it "openshift-services".

3. Use the CentOS Stream ISO file as the DVD image to boot from.

4. Choose the disk size. 100 GB should be enough, but you can make it bigger if you wish. This VM does not need many resources since it only does DNS name resolution and load balancing, so 2 CPUs (or fewer) and 4 GB of RAM will do.

5. Review the configuration and create the VM.

6. Start the VM and follow the CentOS Stream installer instructions. Then wait for the installation to complete.

7. While waiting for the installation to finish, create the cluster from the Red Hat Hybrid Cloud Console. Navigate to OpenShift, then click "Create Cluster".

If you don't have a Red Hat account yet, register for one; it's free.

8. Because we want to install OpenShift on our bespoke infrastructure, choose the "Run it yourself" option and then "Platform Agnostic".

9. You will be presented with three installation options. Let's use the recommended option, Interactive, which is the easiest.

10. Fill in the details according to the domain and cluster name you choose.

Take note that the cluster name will become part of the base domain of the resulting cluster. Also take note of the domain name, because you will need to configure the DNS server to resolve it to the load balancer.

For example:

Cluster name = mycluster
Cluster base domain =

The resulting base domain of the cluster will be

My plan is to let the VMs get their IP addresses from my router through DHCP, so I am selecting the DHCP option for the hosts' network configuration as shown below.
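To make the naming concrete, here is a small sketch of how the cluster name and base domain combine into the endpoints your DNS server will later have to resolve. The base domain example.com is a stand-in, not from this post.

```shell
# Hypothetical values -- substitute your own cluster name and base domain.
CLUSTER_NAME=mycluster
BASE_DOMAIN=example.com

# The names the DNS server will later need to resolve to the load balancer:
API_NAME="api.${CLUSTER_NAME}.${BASE_DOMAIN}"
APPS_WILDCARD="*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}"

echo "${API_NAME}"        # Kubernetes API endpoint
echo "${APPS_WILDCARD}"   # wildcard for application ingress
```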

11. On the next page, there are optional operators that you can install. I don't need them for this installation so I will not select any.

12. The third page is where you define the hosts/nodes. Click the Add Host button. You may also paste your SSH public key here so that you can SSH into the nodes for debugging later. Once all is set, click Generate Discovery ISO. This will generate an ISO image.

13. Wait for the ISO to be generated, then download it. This will be the image that you will use to boot the VMs.

14. While waiting for the ISO to download, revisit the CentOS Stream installation on the openshift-services VM. By this time, the installation should have completed, and you just need to reboot the system.

15. Once the OpenShift Discovery ISO is downloaded, upload it to Proxmox local disk so that we can use it as the bootable installer for our VMs.

16. Once uploaded, let's create the VMs for our OpenShift cluster. We will create a total of 5 VMs: 3 master nodes and 2 worker nodes. You can create more worker nodes if you wish.

Prepare the VMs according to the spec below.

Both Master and Worker Nodes: 

CPU: 8
Memory: 16 GB
Disk: 250 GB

Use the ISO image you downloaded as the image for the DVD drive. In the above screenshot, it is the value of the ide2 field.

17. After creating the VMs, do not start them yet, as we still need to install the supporting services in the openshift-services VM. Your list of VMs should look something like the screenshot below.

18. Using your router's built-in DHCP server, assign IP addresses to your VMs through the address reservation feature. This requires mapping MAC addresses to IP addresses. Do this before starting the VMs. Here is an example mapping I used in my router setup. I used x.x.x.210 as the IP address of openshift-services. Take note of this address, because it will be both the load balancer IP and the DNS server IP.

DNS Server

19. Now let's install and configure a DNS server in the openshift-services VM. SSH into the openshift-services VM.

An OpenShift cluster requires a DNS server. We will use openshift-services as the DNS server by installing BIND (a.k.a. named) on it.

BIND can be installed by running the following command inside the openshift-services VM.

dnf install -y bind bind-utils

After installing, run the following commands to enable, start, and check the status of the named service.

systemctl enable named
systemctl start named
systemctl status named

Once we have validated the installation, we need to configure the named service according to the IP address and DNS name mapping that we want to use for our nodes.

20. Configure the DNS server so that the names resolve to the IP addresses we configured in the router's DHCP address reservation (step 18). I have prepared a named configuration for this setup, which is available on GitHub. Let's get these files and configure our DNS server. Note that you can choose to use your own DNS server configuration.

git clone

21. I have prepared a script that edits the files according to the chosen cluster name and base domain name. These should be the same names you chose when creating the cluster in the Red Hat Hybrid Cloud Console. In the same directory, run the script.

cd okd4_files
./ mycluster

This script will edit the named configuration files in place.
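For orientation, a BIND forward zone for this kind of setup looks roughly like the sketch below. The IPs, cluster name (mycluster), and domain (example.com) are illustrative stand-ins; your generated files will differ.

```
; Illustrative forward zone sketch -- not the actual okd4_files content
$TTL 604800
@   IN  SOA ns1.example.com. admin.example.com. ( 1 604800 86400 2419200 604800 )
@   IN  NS  ns1.example.com.
ns1                 IN  A  192.168.1.210  ; openshift-services (DNS + LB)
api.mycluster       IN  A  192.168.1.210  ; Kubernetes API -> load balancer
api-int.mycluster   IN  A  192.168.1.210  ; internal API -> load balancer
*.apps.mycluster    IN  A  192.168.1.210  ; application ingress -> load balancer
```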

22. Copy the named config files to their correct locations and restart the service. Ensure that the named service is running after the restart.

sudo cp named.conf /etc/named.conf
sudo cp named.conf.local /etc/named/
sudo mkdir /etc/named/zones
sudo cp db* /etc/named/zones
sudo systemctl restart named
sudo firewall-cmd --permanent --add-port=53/udp
sudo firewall-cmd --reload
sudo systemctl status named

23. Now that we have a DNS server running, we need to configure the router to use it as the primary DNS server so that all DHCP clients (all VMs) get configured with it as well. Depending on your router, the process of changing the primary DNS server may differ a little. Here's how it looks in mine.

24. Reboot the openshift-services VM; at the next boot, it should get the new primary DNS server from DHCP. Run the following command to check that you can resolve DNS names.


If you followed my DHCP address reservation configuration and DNS configuration, you should get a response with the IP address of openshift-services.

Load Balancer

Next, we need to route incoming traffic to the OpenShift nodes using an HAProxy load balancer. OpenShift itself also uses HAProxy internally.

25. On the openshift-services VM, install HAProxy by running the following command.

dnf install -y haproxy

26. Copy the HAProxy config file from the okd4_files directory we cloned from Git earlier, and then restart the service. Normally you would use separate load balancers for control-plane traffic and workload traffic; for simplicity, in this example we are using the same load balancer for both.

sudo cp okd4_files/haproxy.cfg /etc/haproxy/haproxy.cfg
sudo setsebool -P haproxy_connect_any 1
sudo systemctl enable haproxy
sudo systemctl start haproxy
sudo systemctl status haproxy
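For orientation, the heart of such a haproxy.cfg is a TCP frontend/backend pair per traffic type. The sketch below is illustrative (hypothetical node IPs), not the actual okd4_files configuration.

```
# Illustrative fragment only -- API traffic; similar pairs exist for
# ports 22623 (machine config), 80, and 443 (ingress).
frontend openshift-api
    bind *:6443
    mode tcp
    default_backend openshift-api
backend openshift-api
    mode tcp
    balance roundrobin
    server master1 192.168.1.201:6443 check
    server master2 192.168.1.202:6443 check
    server master3 192.168.1.203:6443 check
```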

27. Open up the firewall ports so that HAProxy can accept connections.

sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=22623/tcp
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload

28. Now that we have a load balancer pointing to the correct IP addresses, start all the VMs.

29. After starting the VMs, go back to the Red Hat Hybrid Cloud Console. In the cluster that you created earlier, the VMs/hosts should start coming up in the list as shown below.

30. You can then start assigning roles to these nodes. I assigned 3 nodes as control-plane (master) nodes and 2 as compute (worker) nodes. You can do this by clicking the values under the Role column.

31. Click the Next buttons until you reach the Networking options page. Because we manually configured our load balancer and DNS server, select the User-Managed Networking option as shown below.

32. On the Review and Create page, click Install Cluster. This will start the installation of OpenShift to your VMs. You will be taken to the installation progress page, and from here, it's a waiting game.

This process will take around 40 minutes to an hour, depending on the performance of your servers. During the installation, the VMs will reboot several times.

There may be errors along the way, but they will self-heal. If you find a pod that is stuck, you can delete it to help the installation recover faster.

Post Installation

Once the cluster installation is complete, ensure that your computer can resolve the base domain you used to the IP address of openshift-services so that you can access the cluster's web console and the Kubernetes API through the CLI.

You need to configure the following host mappings:

api.<clustername>.<base domain> should resolve to the openshift-services IP
*.apps.<clustername>.<base domain> should resolve to the openshift-services IP

You can add these entries to your /etc/hosts file, or change the primary DNS server of your client computer to the IP address of openshift-services, where the DNS server is running. On a Windows machine, the hosts file is at C:\Windows\System32\drivers\etc\hosts.
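For the /etc/hosts approach, the entries look like the sketch below (hypothetical IP and domain). Note that hosts files do not support wildcards, so each apps hostname you use must be listed individually.

```
# Illustrative /etc/hosts entries -- substitute your own IP and domain
192.168.1.210  api.mycluster.example.com
192.168.1.210  console-openshift-console.apps.mycluster.example.com
192.168.1.210  oauth-openshift.apps.mycluster.example.com
```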

If everything is set correctly, you should be able to navigate to OpenShift's web console at https://console-openshift-console.apps.<clustername>.<base domain name>.

Alternatively, you can open the Red Hat Hybrid Cloud Console, navigate to the OpenShift menu, and select the cluster you just created. Then click the Open console button.

OpenShift CLI

At this point, you won't be able to log in to the web console because you do not have the kubeadmin user's password and have not created any other users yet. However, you can access the cluster from the CLI by using the kubeconfig file, which we will download shortly.

To access the cluster, we will need to install the OpenShift Client called oc.

33. Download the oc client from the Red Hat console: download the OpenShift command-line interface (oc) build that matches your client computer's OS type and architecture.

34. After downloading, extract the tar file and copy kubectl and oc to a directory on your executable path. I am using macOS, so the following commands work for me after downloading the tar file.

cd ~/Downloads
tar -xvzf openshift-client-mac.tar.gz
cp oc /usr/local/bin
cp kubectl /usr/local/bin

On macOS, you might be prompted to allow this app to access the system directories. You need to grant it permission for it to work properly.

35. In the Red Hat Hybrid Cloud Console, download the kubeconfig file; it holds the certificate used to authenticate to your OpenShift cluster's API. Navigate to Clusters > your cluster name > Download kubeconfig.

This will download a kubeconfig file. To access your cluster, set the KUBECONFIG environment variable to the path of this file. The example commands below work on macOS.

cp ~/Downloads/kubeconfig-noingress ~/
export KUBECONFIG=~/kubeconfig-noingress

36. To confirm that you can access the cluster, run the following oc command.

oc get nodes

This will list all the nodes of your newly created OpenShift cluster.

Note: You can start accessing the cluster even before the installation is 100% complete. This is useful for watching the progress of the installation. For example, you can run the following command to check the status of the Cluster Operators.

oc get co

If the installation goes well, you should see the following results. All operators should be Available=True.

Note: If you see errors in one or more Cluster Operator statuses, you can try deleting the pods of those operators; this triggers self-healing and can unstick the installation process.

37. At the end of the installation, you will be presented with the kubeadmin password. Copy this password and use it to access the OpenShift console.

Using the username kubeadmin and the password you copied from the installation page of Red Hat's Hybrid Cloud Console, you should be able to log in to your OpenShift cluster.

Voila! Congratulations! You have just created a brand new OpenShift cluster.

There are also post-installation steps to do, such as configuring the internal image registry and adding an identity provider to get rid of the temporary kubeadmin credentials. I will not cover these in this post because they are well documented in the OpenShift documentation.

Wednesday, March 22, 2023

Running GitLab CI Jobs on OpenShift: The Easy Way


Containers offer several benefits and an ideal environment for running GitLab CI jobs. They provide isolation, ensuring that a job's dependencies and configuration don't interfere with other jobs on the same machine. They also ensure that each GitLab CI job runs in the same environment, making the job's results reproducible. In addition, they offer portability and scalability, making it easy to run GitLab CI jobs on different infrastructures or cloud providers and to handle changing workloads. Finally, containers offer faster job start-up times, enabling quicker spin-up of GitLab CI job environments and faster execution of tests. Overall, containers provide a flexible, scalable, and efficient way to run CI/CD pipelines. So why not run your GitLab CI jobs on an OpenShift cluster?

GitLab Runner Operator

Regardless of where your GitLab is running, you can run your GitLab CI jobs in containers by setting up GitLab runners (agents) in OpenShift. The fastest way to do this is to install the GitLab Runner operator.

The following steps will guide you through the installation of the GitLab Runner operator on OpenShift.

1. There are prerequisites for installing the GitLab Runner Operator. One of them is that the OpenShift cluster must have cert-manager installed; the GitLab Runner operator uses it to request TLS certificates. The fastest way to install cert-manager is through Operators. Note that you must be a cluster administrator to install operators from OperatorHub. Navigate to OperatorHub and search for cert-manager. You may find two entries; you can install either of the two, but for this example, we will use the Red Hat version of cert-manager.


2. Install the Operator from the UI using all default configurations.

3. Once the cert-manager operator is installed, navigate back to OperatorHub and look for GitLab Runner. Select the Certified version of the GitLab Runner Operator; Certified means it has been tested to work on OpenShift.

4. Install the operator using the default configurations as shown below.

5. After the installation is complete, verify the installation by making sure that the gitlab-runner-controller-manager pod is running in the openshift-operators namespace.

6. Create a project/namespace where you want the GitLab runners to run. Let's call it gitlab-runners.

7. Now that you have the operator running and a namespace for GitLab runners, you can create GitLab runner instances by creating Runner custom resources. But before we create our first GitLab runner, we need to create a secret that will hold the runner registration token. This is the token from your GitLab instance that runners use to register themselves.

Get the runner registration token from your GitLab instance by going to the Admin Area > CI/CD > Runners page. Then click the "Register an instance runner" button and copy the Registration Token.

8. Navigate to the gitlab-runners project. Create a secret called gitlab-dev-runner-secret by navigating to Workloads > Secrets > Create > Key/Value Secret as shown below.

9. Once the secret is created, we can create our first GitLab runner instance. Navigate to Installed Operators > GitLab Runner > GitLab Runner tab in the gitlab-runners project and click the Create Runner button.

Give it a name. The GitLab URL field should be the base URL of your GitLab instance.

Leave the rest of the fields at their defaults and click the Create button.
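Under the hood, the form creates a Runner custom resource that references the secret from the earlier step. A minimal sketch is below; the GitLab URL is a placeholder, and the field names reflect my understanding of the operator's v1beta2 API, so double-check them against the operator's documentation.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-dev-runner-secret
  namespace: gitlab-runners
stringData:
  runner-registration-token: "REPLACE_WITH_YOUR_REGISTRATION_TOKEN"
---
apiVersion: apps.gitlab.com/v1beta2
kind: Runner
metadata:
  name: dev-runner
  namespace: gitlab-runners
spec:
  gitlabUrl: https://gitlab.example.com   # base URL of your GitLab instance
  token: gitlab-dev-runner-secret         # name of the Secret above
  tags: openshift                         # CI jobs with this tag run here
```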

10. Once the GitLab runner pod is running, verify that the runner has registered itself by navigating to GitLab and checking that the new runner is listed, as shown below.


Et voilà! Now, all your GitLab CI jobs with the tag "openshift" will be executed by this new GitLab runner running on OpenShift.

You can create as many runners as you want. You may want a dedicated runner for front-end builds and another runner for back-end CI builds.

You can play around with the runner YAML and experiment with configurations, such as setting up a dedicated service account for the runner. If your CI build accesses OpenShift's Kubernetes API, you may want to use a service account that has access to the Kube API.


Tuesday, March 7, 2023

Running Nexus Docker Registry on OpenShift

I have figured out how to make the Docker registry of Nexus work on OpenShift. There are not a lot of resources out there describing how to configure this, so if you are trying to make the Nexus Docker registry work on OpenShift, here is what you need to do.

1. Install the Nexus Repository Operator from OperatorHub.


2. Create an instance of Nexus Repository. Leave everything at the defaults unless you want to change things. It should look something like this.

3. The operator will create a route for the Nexus web app. However, the Docker endpoint does not work out of the box; we will get to this later. For now, let's create a Docker hosted repository in Nexus.

4. Configure the Docker repo to have an HTTP connector at the specified port, in this example 5003.

5. Test that the container listens on this port by opening the pod terminal from the OpenShift UI and running
curl localhost:5003. You should get a response like this, which means the Docker endpoint is up.

6. Because Docker clients do not accept a URL path, the Docker API endpoint is exposed at the root of its own port. However, this port is not exposed by default. Typically, if Nexus is running on a VM, you must set up a reverse proxy to forward requests to port 5003. Luckily, in OpenShift, we can expose this port through a service and then a route.

Modify the existing service to expose another port, 5003, as shown below.

7. Finally, expose the service through another route. The existing route points to the service at port 8081; the new route must point to port 5003 (imageregistry) of the service. The route must use a different host/subdomain from the existing route and must use edge-terminated TLS, as shown below.
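In YAML terms, the change amounts to an extra port on the Service plus an edge-terminated Route targeting it. The sketch below is illustrative: the resource names depend on your Nexus instance, and the host is a placeholder.

```yaml
# Fragment of the existing Nexus Service with the Docker port added
kind: Service
apiVersion: v1
metadata:
  name: nexus-repository   # hypothetical name; use your instance's service
spec:
  ports:
    - name: nexus-ui
      port: 8081
      targetPort: 8081
    - name: imageregistry   # the Docker HTTP connector port
      port: 5003
      targetPort: 5003
---
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: nexus-docker
spec:
  host: nexus-docker.apps.mycluster.example.com   # placeholder host
  to:
    kind: Service
    name: nexus-repository
  port:
    targetPort: imageregistry
  tls:
    termination: edge
```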

8. Et voilà! You can now run a docker login command using the hostname you provided in the route, and push images using <host>/<imagename>:<tag>. Take note that the repository URL displayed in the Nexus UI will not work; you need to use the host you defined in the route.

There you go. I hope I have saved you some time. Enjoy!

Sunday, December 11, 2022

TDD is not a Testing Approach

TDD stands for Test-Driven Development. Contrary to what I often hear from others, it is not a testing approach. Rather, it is a development practice where tests are used to determine not only the correctness but also the completeness of the code. We often hear about TDD described as writing tests before the code, which is only part of the story. Test-driven development is a software development methodology in which tests are written for a new piece of code before the code itself. The tests are designed to fail initially; if they don't, either the test is invalid or it is clearly not working. As the implementation code is developed, it is written specifically to pass the tests. By the time the implementation code is complete, it is already tested.

If you are familiar with developer assessment platforms like HackerRank, Codility, and Coderbyte, or if you have attended algorithmic hackathons like the Facebook Hacker Cup and Google Code Jam, you know that developers in these environments write their code and then press a button. That button press runs a series of tests in a couple of seconds and comes back with a report saying whether your code passed. TDD is very similar, except that the tests are also written by the same developer who is solving the problem.

Another difference is that in TDD, developers don't write an entire 1000-line test suite first. Instead, TDD follows this cycle.

  1. Write tests that check for very specific behavior.
  2. The tests should fail because no implementation code has been written yet.
  3. Write just enough code to make the tests pass.
  4. Then refactor your code until you are satisfied with your design.
  5. Go back to step 1.
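To make the cycle concrete, here is a tiny, hypothetical example in shell; the slugify function and its test are invented for illustration.

```shell
# Steps 1-2: write the test first. Running test_slugify now would fail (red),
# because slugify does not exist yet.
test_slugify() {
  [ "$(slugify 'Hello World')" = "hello-world" ]
}

# Step 3: write just enough code to make the test pass (green).
slugify() {
  echo "$1" | tr '[:upper:]' '[:lower:]' | tr ' ' '-'
}

# Steps 4-5: refactor freely; as long as test_slugify keeps passing, the
# behavior is preserved. Then write the next failing test.
test_slugify && echo "PASS"
```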

This approach to development has gained popularity in recent years because it can help to ensure that code is well-designed, easy to maintain, and free of defects.

One of the key advantages of TDD is that it forces developers to think about the desired behavior of their code before they start writing it. This helps ensure that the code is well-designed and easy to understand. It also helps prevent developers from writing code that is difficult to maintain or modify in the future, or code that is not necessary at all, which developers often call "over-design." In other words, developers are unknowingly nudged into writing good-quality code.

Another advantage of TDD is that it can help to catch defects early in the development process. Because tests are written before the code, developers can identify and fix defects as soon as they are introduced. This can save time and effort in the long run, as it is much easier to fix a defect early on than it is to track it down and fix it later on.

The Challenges

TDD is not without its challenges, however, and there are common mistakes that teams make when adopting it.

One common challenge is that writing tests can be time-consuming, and it can be tempting for developers to skip this step to save time. However, skipping tests can lead to defects and other problems down the line. In the end, as quality degrades, developers end up spending more time fixing problems that the missing tests would have caught, and this time is usually more than the time saved by not writing them.

Another challenge with TDD is that it can be difficult for developers who are new to the methodology to know how to write effective tests. Writing tests that are comprehensive enough to cover all possible scenarios can be a daunting task, and it can take time and experience to develop the skills needed to write effective tests.

Automating UI tests as part of the TDD approach does not always work, because UI tests typically take a long time to run and are often fragile. Also, because these tests are very visual and go beyond behavior checking, they are often left out of the TDD cycle. However, there are attempts to automate visual testing, such as the one Gojko Adzic describes in this talk.

The test activity done by the QA/testing team may seem redundant. It is true that because the code is already tested, there is no need to manually execute the same tests the developers have written. However, there are tests that cannot be covered or are very difficult to implement in the TDD approach, including integration tests. TDD tests are mainly unit tests, and any interaction with external systems is usually mocked. One way to solve this challenge is to bring the testers closer to the developers: let the testers define which behaviors need to be tested, and the developers can tell them which behaviors are already covered by the TDD tests so the testers can focus on the rest. This problem is less prevalent in a microservices architecture, though; because each service is isolated and independent, the need for integration tests is smaller.

Despite these challenges, many developers and organizations have found that the benefits of TDD outweigh the challenges. By forcing developers to think about the desired behavior of their code and by catching defects early in the development process, TDD can help to ensure that code is of high quality and easy to maintain. As a result, TDD has become an increasingly popular approach to software development.

Wednesday, December 7, 2022

Five DevSecOps Myths Executives should Know

DevSecOps, a term coined from the combination of "development", "security", and "operations", is a set of practices that aim to integrate security and operations early on in the software development lifecycle. This approach is designed to address the increasing need for security in the fast-paced world of software development, where the frequent updates and deployments of applications make it difficult to incorporate security measures after the fact.

Traditionally, security was seen as an afterthought in the software development process. Developers would focus on building and deploying their applications, and security measures would be implemented later on by a separate team. This approach often led to security vulnerabilities that could have been avoided if security had been considered from the beginning.

With DevSecOps, the focus is on integrating security into the development process from the moment the code is written and/or committed. This means that security considerations are made at each stage of the development lifecycle, from planning and design to testing and deployment. This approach allows for the identification and resolution of security issues early on before they become a major problem in production.

The Myths

One of the reasons most executives don't get it is the following common misconceptions about DevOps and DevSecOps at the C-level:

  1. That it is just some practice for building software;
  2. That it is a team-level thing and does not concern the entire organization;
  3. That it's only about putting in place a set of fancy tools for the IT teams;
  4. That it's all about creating a new organizational unit called DevOps/DevSecOps, which is responsible for implementing and maintaining the fancy toolsets;
  5. That DevOps and DevSecOps are only for "unicorns." You must have often heard the phrase, "We are a bank; we are not Netflix! We are highly regulated. We have hundreds of applications to maintain, not one single front-facing application".

If we dig into these myths, the first and second are partially correct, because DevSecOps and DevOps, by definition, are practices that integrate operations and security into the software development team. But it's not just about CI/CD, and it's not just about being agile. It's about building the right thing the right way for the right customers at the right time. And this directly impacts the company as a whole. Building the right products that people want at the right time can directly drive revenue, while building the wrong product that nobody wants at the wrong time can break a company. DevOps and DevSecOps achieve this through continuous delivery, which allows for a faster feedback loop.

The third myth is false because DevSecOps is not only about the tools. Tools alone do not give an organization DevOps or DevSecOps; processes must change and certain practices must be adopted. Nor is it about creating a new dedicated team for DevSecOps, as in myth number 4. It's about collaborating and breaking silos so that operations and security teams work closely with developers. One very basic example of collaboration: instead of the security team manually performing security tests that they alone defined and designed, they can share the security test definitions and designs with the developers, who can then take those into account when writing their code and even write code to automate the execution of those tests. As a result, you can cut the processing time by several orders of magnitude, which means cost savings and better quality through reduced human error.

The fifth and last myth is particularly interesting because I have heard it many times while working with FSI clients. What those statements mean is often unclear, and they often don't like being asked, "why not?" After a conversation with a seasoned manager who has worked in a traditional bank for many years, I learned one thing, and you may be surprised: they are all just excuses. Tada! Excuses, because implementing DevSecOps requires a lot of work, and only some are up to the challenge. In fact, traditional organizations will benefit more from DevSecOps than startups.

DevSecOps is about optimizing the feedback loop from idea to end-users. By continuously delivering product increments and features, you will discover problems sooner and come up with solutions sooner. In the worst extreme, you might pivot your strategy or even abandon the idea early. Providing solutions sooner translates to happier customers. Every business needs that. Not just the unicorns.

The Missing Link

Most executives live in the ivory tower. And that's all right. That's the reality, and we must live with it unless we require every leader to go through an "undercover boss" mission. Not happening. Therefore, we need to help them understand the value of DevSecOps and DevOps, and the best tool we have for this is reporting. The fastest way to start is to use monitoring tools to gather data points that produce the DORA (DevOps Research and Assessment) metrics, a set of four metrics that measure the software delivery performance of a team and the entire organization: Deployment Frequency, Mean Time to Recovery (how long it takes to recover from a failure), Change Failure Rate (how often deployments fail), and Lead Time for Changes (the time from code commit to running in production). These metrics are a very good start because you can then connect other KPIs to them. For example, you can start measuring customer/end-user feedback and find correlations with the above metrics. Speed-to-market is directly proportional to, and can be measured by, Deployment Frequency and Lead Time for Changes. When these metrics are properly reported back to upper management, the executives can relate them to other business KPIs, including revenue and customer feedback, and ultimately understand the business impact of the DevSecOps initiative.
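As a hypothetical illustration (all numbers invented), turning raw pipeline counts into two of these metrics is simple arithmetic:

```shell
# One month of invented pipeline data
DEPLOYMENTS=40   # total production deployments
FAILED=6         # deployments that caused a failure in production
WORKDAYS=20      # working days in the month

# Change Failure Rate: share of deployments that failed
CFR=$(( FAILED * 100 / DEPLOYMENTS ))
echo "Change failure rate: ${CFR}%"

# Deployment Frequency: deployments per working day
echo "Deployment frequency: $(( DEPLOYMENTS / WORKDAYS ))/day"
```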

DevOps, DevSecOps, DevBizOps, DevSecBizOps, NoOps?

I have used the terms DevOps and DevSecOps interchangeably above because I think they are essentially the same. Other terms that came out after DevOps are basically just DevOps with an emphasis on certain areas: DevSecOps emphasizes security, DevBizOps emphasizes business, and DevSecBizOps emphasizes both security and business, duh! There is also a term called NoOps, which I will leave for you to explore; it's interesting. All of these terms revolve around applying agile software development practices, encouraging collaboration, and breaking silos to achieve continuous delivery.


To summarize, the key benefit of DevSecOps is that it allows for the continuous integration and deployment of secure software. Because security is integrated into the development process, it is possible to deploy updates and new features quickly and efficiently without sacrificing security. This allows organizations to stay competitive in a rapidly-changing market, where the ability to adapt and innovate quickly is key.

Another advantage of DevSecOps is that it encourages collaboration between the business, development, security, and operations teams. By working together, these teams can identify and address security concerns in a more efficient and effective manner. This can lead to a more secure and stable software development process, as well as a more positive work environment.

To implement DevSecOps effectively, organizations must embrace the core DevOps principles and be willing to make some changes to their existing processes and organizational structure. This may include adopting new technologies and tools, such as automation and orchestration platforms, as well as implementing new security protocols and processes. However, the long-term benefits of DevSecOps make it well worth the effort.

Overall, DevOps and/or DevSecOps is a powerful approach to software development that allows organizations to build and deploy secure software quickly and efficiently. By integrating security into the development process, organizations can stay competitive and protect themselves against security threats. It is not just for IT teams; it impacts the organization as a whole. It's not just about the tools; it's also about faster feedback loops and better customer experience. And lastly, executives will see the value of DevSecOps initiatives when they have visibility into software delivery performance.