
Sunday, December 11, 2022

TDD is not a Testing Approach

TDD stands for Test-Driven Development. Contrary to what I mostly hear from others, no, it is not a testing approach. Rather, it's a development practice where tests are used to determine not only the correctness but also the completeness of the code. We often hear TDD described as simply writing the tests before the code, which is only partially correct because there is more to it. Test-driven development (TDD) is a software development methodology in which tests are written for a new piece of code before the code itself. Each test is designed to fail initially; if it doesn't, the test is invalid or clearly not working. As the implementation code is developed, it is written specifically to make the tests pass. By the time the implementation code is complete, it is already tested.


If you are familiar with developer assessment platforms like HackerRank, Codility, and Coderbyte, or if you have attended algorithmic hackathons like the Facebook Hacker Cup and Google Code Jam, you know that developers in these environments write their code and then press a button. That button press runs a series of tests in ~2 seconds and comes back with a report that says whether your code passed. TDD is actually very similar, except that the tests are also written by the same developer who's trying to solve the problem.


Another difference is that in TDD, the developers don't write an entire 1000-line test suite first. Instead, the work follows a short cycle (a minimal code sketch follows the list):


  1. Write tests that check for very specific behavior.
  2. Watch the tests fail; no implementation code has been written yet, so they should.
  3. Write just enough code to make the tests pass.
  4. Then refactor your code until you are satisfied with your design.
  5. Go back to step 1.
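
To make the cycle concrete, here is a minimal sketch using Python's built-in unittest module. The leap-year example is mine, not a prescription; in a real TDD session you would write and run each failing test before touching the implementation, but both sides are shown together here for brevity.

    import unittest

    def is_leap_year(year):
        # Step 3: just enough code to make the current tests pass.
        if year % 400 == 0:
            return True
        if year % 100 == 0:
            return False
        return year % 4 == 0

    class LeapYearTest(unittest.TestCase):
        # Step 1: each test checks one very specific behavior.
        def test_ordinary_year_is_not_leap(self):
            self.assertFalse(is_leap_year(2023))

        def test_year_divisible_by_4_is_leap(self):
            self.assertTrue(is_leap_year(2024))

        def test_century_is_not_leap(self):
            self.assertFalse(is_leap_year(1900))

        def test_every_400th_year_is_leap(self):
            self.assertTrue(is_leap_year(2000))

    if __name__ == "__main__":
        unittest.main()  # Step 2: run this before is_leap_year() exists and watch it fail

Each pass through the cycle adds one small test like these, watches it fail, then extends the implementation just enough to go green before refactoring.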


This approach to development has gained popularity in recent years because it can help to ensure that code is well-designed, easy to maintain, and free of defects.


One of the key advantages of TDD is that it forces developers to think about the desired behavior of their code before they start writing it. This helps to ensure that the code is well-designed and easy to understand. It also helps to prevent developers from writing code that is difficult to maintain or modify in the future, and it prevents them from writing code that is not necessary, something developers often call "over-design." In other words, developers are unknowingly being forced to write good-quality code.

Another advantage of TDD is that it can help to catch defects early in the development process. Because tests are written before the code, developers can identify and fix defects as soon as they are introduced. This can save time and effort in the long run, as it is much easier to fix a defect early on than it is to track it down and fix it later on.


The Challenges


TDD is not without its challenges, however. There are also common mistakes teams make when adopting TDD.


One common challenge is that writing tests can be time-consuming, and it can be tempting for developers to skip this step in the interest of saving time. However, skipping the testing step can lead to defects and other problems down the line. In the end, as quality degrades, developers end up spending more time fixing problems that are not captured by tests, and that time is usually more than the time saved by not writing the needed tests.


Another challenge with TDD is that it can be difficult for developers who are new to the methodology to know how to write effective tests. Writing tests that are comprehensive enough to cover all possible scenarios can be a daunting task, and it can take time and experience to develop the skills needed to write effective tests.


Automating UI tests as part of the TDD approach does not always work because UI tests typically take some time to run, and they are often fragile. Another thing is that because these tests are very visual and go beyond behavior checking, they are often left out of the TDD cycle. However, there are attempts to automate visual testing by Gojko Adzic that he describes in this talk.


The test activity done by the QA/testing team may seem redundant. It is true that because the code is already tested, there is no need to manually execute the same tests written by the developers. However, there are tests that cannot be covered or are very difficult to implement in the TDD approach, and this includes integration tests. The TDD tests are mainly unit tests, and any interaction with external systems is usually mocked. One way to solve this challenge is by bringing the testers closer to the developers. Let the testers define which behavior needs to be tested. The developers can then tell the testers which behaviors are already covered by the TDD tests so that the testers can focus on the other tests. This problem is less prevalent in a microservices architecture, though: because each service is isolated and independent from the others, the need for integration tests is smaller.
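
To illustrate why TDD tests are mainly unit tests, here is a hedged sketch using Python's unittest.mock; the OrderService and PaymentGateway names are hypothetical, not from any particular codebase. The external payment system is mocked, so the test exercises only the unit's own behavior:

    import unittest
    from unittest.mock import Mock

    class OrderService:
        def __init__(self, gateway):
            self.gateway = gateway  # external system, injected so tests can replace it

        def place_order(self, amount):
            return "confirmed" if self.gateway.charge(amount) else "declined"

    class OrderServiceTest(unittest.TestCase):
        def test_order_is_confirmed_when_charge_succeeds(self):
            gateway = Mock()
            gateway.charge.return_value = True  # the real payment system is never called
            self.assertEqual(OrderService(gateway).place_order(100), "confirmed")

    if __name__ == "__main__":
        unittest.main()

Whether the real gateway actually behaves as the mock assumes is exactly the kind of question left for integration tests.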


Despite these challenges, many developers and organizations have found that the benefits of TDD outweigh the challenges. By forcing developers to think about the desired behavior of their code and by catching defects early in the development process, TDD can help to ensure that code is of high quality and easy to maintain. As a result, TDD has become an increasingly popular approach to software development.

Wednesday, December 7, 2022

Five DevSecOps Myths Executives Should Know

DevSecOps, a term coined from the combination of "development", "security", and "operations", is a set of practices that aim to integrate security and operations early on in the software development lifecycle. This approach is designed to address the increasing need for security in the fast-paced world of software development, where the frequent updates and deployments of applications make it difficult to incorporate security measures after the fact.


Traditionally, security was seen as an afterthought in the software development process. Developers would focus on building and deploying their applications, and security measures would be implemented later on by a separate team. This approach often led to security vulnerabilities that could have been avoided if security had been considered from the beginning.


With DevSecOps, the focus is on integrating security into the development process from the moment the code is written and/or committed. This means that security considerations are made at each stage of the development lifecycle, from planning and design to testing and deployment. This approach allows for the identification and resolution of security issues early on before they become a major problem in production.


The Myths


One of the reasons most executives don't get it is the following set of common misconceptions about DevOps and DevSecOps at the C-level:


  1. That it is just some practice for building software;
  2. That it is a team-level thing and does not concern the entire organization;
  3. That it's only about putting in place a set of fancy tools for the IT teams;
  4. That it's all about creating a new organizational unit called DevOps/DevSecOps, which is responsible for implementing and maintaining the fancy toolsets;
  5. That DevOps and DevSecOps are only for "unicorns." You must have often heard the phrase, "We are a bank; we are not Netflix! We are highly regulated. We have hundreds of applications to maintain, not one single front-facing application".


If we dig into these myths, the first and second ones are partially correct because DevSecOps and DevOps are, by definition, practices that integrate operations and security into the software development team. But it's not just about CI/CD, and it's not just about being agile. It's about building the right thing the right way for the right customers at the right time. And this directly impacts the company as a whole. Building the right products that people want at the right time can directly grow revenue, while building the wrong product that nobody wants at the wrong time can break a company. DevOps and DevSecOps achieve this through continuous delivery, which allows for a faster feedback loop.


The third myth is false because DevSecOps is not only about the tools. Tools alone do not give an organization DevOps or DevSecOps; the processes have to change and certain practices have to be adopted. It is also not about creating a new dedicated team for DevSecOps, as in myth number 4. It's about collaborating and breaking silos so that operations and security teams closely collaborate with developers. One very basic example of collaboration: instead of manually performing security tests that they alone defined and designed, the security team could share the security test definitions and designs with the developers, who can then take those into account when writing their code and even write code to automate the execution of those tests. As a result, you can cut down the processing time by several orders of magnitude. And this means cost savings and better quality because of reduced human error.
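
As a hedged illustration of that hand-off, a security check defined by the security team could be automated as a test the developers run in CI. This is a sketch only; the staging URL and the required header set are assumptions for the example:

    import requests  # third-party HTTP client; pip install requests

    # Headers the security team requires on every response (illustrative list).
    REQUIRED_HEADERS = [
        "Strict-Transport-Security",
        "X-Content-Type-Options",
        "Content-Security-Policy",
    ]

    def test_responses_include_security_headers():
        response = requests.get("https://staging.example.com/")  # assumed target
        # requests' header dict ignores casing, so the membership check is safe.
        missing = [h for h in REQUIRED_HEADERS if h not in response.headers]
        assert not missing, f"Missing security headers: {missing}"

Run under pytest, this turns a manual checklist item into a check that executes on every commit.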


The fifth and last myth is particularly interesting because I have heard it many times while working with many FSI clients. What those statements mean is often unclear, and they often don't like being asked, "why not?" After a conversation with a seasoned manager who has worked in a traditional bank for many years, there is one thing I learned, and you will be surprised. They are all just excuses. Tada! Excuses, because it requires a lot of work to implement DevSecOps, and only some are up to the challenge. In fact, traditional organizations will benefit more from DevSecOps than startups.


DevSecOps is about optimizing the feedback loop from idea to end users. By continuously delivering product increments and features, you will discover problems sooner and come up with solutions sooner. In the worst case, you might pivot your strategy or even abandon the idea early. Providing solutions sooner translates to happier customers. Every business needs that. Not just the unicorns.


The Missing Link


Most executives live in the Ivory Tower. And that's all right. That's the reality, and we must live with it unless we require every leader to go through an "undercover boss" mission. Not happening. Therefore, we need to help them understand the value of DevSecOps and DevOps, and the best tool we have for this is reporting. The fastest way to start is to use monitoring tools to gather data points that we can use to produce the DORA (DevOps Research and Assessment) metrics: a set of four metrics that measure the software delivery performance of a team and of the entire organization. These are Deployment Frequency (how often you deploy to production), Mean Time to Recovery (the mean amount of time required to recover from a failure), Change Failure Rate (how often deployments fail), and Lead Time for Change (the total amount of time from receiving the requirements to production). These metrics are a very good start because you can then connect other KPIs to them. For example, you can start measuring customer/end-user feedback and look for correlations with the above metrics. Speed-to-market can be read directly from Deployment Frequency and Lead Time for Change. When these metrics are properly reported back to upper management, the executives can relate them to other business KPIs, including revenue and customer feedback, ultimately understanding the business impact of the DevSecOps initiative.
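
To make the four metrics concrete, here is a minimal sketch of how they could be computed from deployment records. The data shape and field names are assumptions for illustration; in practice, the data points would come from your monitoring and CI/CD tooling:

    from datetime import timedelta
    from statistics import mean

    # Illustrative records for a 30-day window (normally pulled from tooling).
    deployments = [
        {"lead_time": timedelta(days=2), "failed": False, "recovery": None},
        {"lead_time": timedelta(days=5), "failed": True, "recovery": timedelta(hours=3)},
        {"lead_time": timedelta(days=1), "failed": False, "recovery": None},
    ]
    period_days = 30

    deployment_frequency = len(deployments) / period_days  # deployments per day
    change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)
    lead_time_days = mean(d["lead_time"].total_seconds() for d in deployments) / 86400
    mttr_hours = mean(
        d["recovery"].total_seconds() for d in deployments if d["failed"]
    ) / 3600

    print(f"Deployment Frequency:  {deployment_frequency:.2f}/day")
    print(f"Change Failure Rate:   {change_failure_rate:.0%}")
    print(f"Lead Time for Change:  {lead_time_days:.1f} days")
    print(f"Mean Time to Recovery: {mttr_hours:.1f} hours")

Even a rough report like this gives upper management a trend line to correlate with business KPIs.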


DevOps, DevSecOps, DevBizOps, DevSecBizOps, NoOps?


I have used the terms DevOps and DevSecOps interchangeably above because I think they are essentially the same. There are also other terms that came out after DevOps, and they are basically just DevOps with an emphasis on certain areas. DevSecOps emphasizes security. DevBizOps emphasizes business. DevSecBizOps emphasizes both security and business, duh! And there is also a term called NoOps, which I will leave for you to explore; it's interesting. All of those terms, however, revolve around applying agile software development practices, encouraging collaboration, and breaking silos to achieve continuous delivery.


Conclusion


To summarize, the key benefit of DevSecOps is that it allows for the continuous integration and deployment of secure software. Because security is integrated into the development process, it is possible to deploy updates and new features quickly and efficiently without sacrificing security. This allows organizations to stay competitive in a rapidly changing market, where the ability to adapt and innovate quickly is key.


Another advantage of DevSecOps is that it encourages collaboration between the business, development, security, and operations teams. By working together, these teams can identify and address security concerns in a more efficient and effective manner. This can lead to a more secure and stable software development process, as well as a more positive work environment.


To implement DevSecOps effectively, organizations must embrace the core DevOps principles and be willing to make some changes to their existing processes and organizational structure. This may include adopting new technologies and tools, such as automation and orchestration platforms, as well as implementing new security protocols and processes. However, the long-term benefits of DevSecOps make it well worth the effort.


Overall, DevOps and/or DevSecOps is a powerful approach to software development that allows organizations to build and deploy secure software quickly and efficiently. By integrating security into the development process, organizations can stay competitive and protect themselves against security threats. It is not just for IT teams; it impacts the organization as a whole. It's not just about the tools; it's also about faster feedback loops and better customer experience. And lastly, executives will see the value of DevSecOps initiatives when they have visibility into the software delivery performance.

Sunday, May 19, 2019

Architecting Systems Like A Rock Band

In software, orchestration often means the control, synchronization, mediation, and scheduling of decoupled application services in order to fulfill groups of tasks as part of a business process. In highly distributed systems, orchestration can be a nightmare to develop and maintain. If not designed carefully, it can lead to a scenario where the orchestrator becomes the bottleneck to future architecture changes. In traditional SOA, the orchestrator is often called the service bus or enterprise service bus. Even microservices architectures have an orchestrator; it hides behind names like "Gateway" and "Control Plane".

In an orchestra, if the conductor leaves in the middle of a symphony, chaos could eventually occur. The percussion section could drift out of sync with the woodwinds and the brass section over time. This is because each musician and each section rely heavily on the rules and patterns in the sheet music and on the tempo and instructions of the conductor. Remove the music sheets and the orchestra will simply stop playing, as the musicians will not know what to play next. But not in a rock band.

Rock bands don't need a conductor, and oftentimes they don't need sheet music. This is because the musicians do not rely on a single person to give them instructions. Instead, they listen to each other. They listen to the drum beat for the tempo. They listen to the singer to know which section of the song they are in. They listen to the guitarist to know when to bring down their instrument's volume to bring up the guitar solo, and so on and so forth. Each musician also knows what they need to do and when to do it without being told when and how. They decide when to do things based on what they hear. And because they don't follow sheet music, they can improvise and make the music sound better than expected. They can recover quickly from failure, i.e. a broken guitar string or a dropped drumstick, and improvise when it happens, but they never stop playing just because of it. This is why each live performance is a bit unique, as opposed to the performances of an orchestra. In other words, musicians in a rock band react to events instead of being conducted.

Event-Driven Architecture

Broker Event-Driven Architecture

Event-Driven Architecture has recently been catching the attention of software architects and engineers because of its simplicity and versatility. Components are loosely coupled. The interface contract does not have to be defined up front and can be easily changed. There is no more request-response (synchronous) communication; messages are pushed, not pulled. When pulling data, a service needs to know upfront where to pull the data from. But when you push events/data, you don't need to know which specific service to push the data to. This architecture has many advantages, except when the system is too large, say 1000 services/actors, in which case the risk of losing the enterprise-wide event broker becomes unacceptable.
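
A toy in-process sketch of that push model, with all names illustrative: the publisher pushes events to a broker without knowing which services, if any, are listening.

    from collections import defaultdict

    class EventBroker:
        # Minimal publish-subscribe broker, for illustration only.
        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, topic, handler):
            self._subscribers[topic].append(handler)

        def publish(self, topic, event):
            for handler in self._subscribers[topic]:
                handler(event)  # events are pushed to whoever subscribed

    broker = EventBroker()
    broker.subscribe("order.placed", lambda e: print("billing saw:", e))
    broker.subscribe("order.placed", lambda e: print("shipping saw:", e))
    broker.publish("order.placed", {"order_id": 42})  # no knowledge of consumers

Adding a new consumer is one subscribe call; the publisher's code never changes.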


The Rock Band Architecture

Similar to EDA, in a rock band architecture, the role of orchestration is delegated and distributed among the services. Independent services listen for events and react to those events whenever necessary. Services also broadcast events, but they don't need to know how other services would react to them or use those messages, or whether other services are even listening. Much like a rock band, each musician is responsible for playing his own instrument, but they also communicate by listening to each other's instruments and looking at each other from time to time so that, collectively, they can deliver the music as one pleasing rock symphony.

But there is a fundamental difference. Rock bands are small. When you put 1000 rock musicians together to play the same song, it's fun for sure, but it does not sound as good as a single 5-piece rock band on a small stage. This is because it is so difficult to synchronize a huge number of musicians, and that is the reason why orchestras need a conductor.


Sound travels at 342 m/s. So if you put a drummer and a bassist 100 meters apart, they will be desynchronized by roughly a third of a second (100 m ÷ 342 m/s ≈ 0.3 s), and that's what you are hearing in the video. The drums are not crisp, and the bass guitar loses its punch.

Similar principles apply to Event-Driven Architecture. If two closely related services need to send messages to each other but there are 5 topics to go through before reaching the other end of the channel, you are losing time, and it's not efficient.

Another important aspect to consider when deciding on an architecture is how well it fits the enterprise organization. Teams maintaining and building applications in a huge organization don't often speak to each other, and the bureaucratic processes behind this are counterproductive. No matter how you try to break the silos between teams, sections, and departments, you can only do so much. So when different silos share a middleware such as an event broker, what organizations usually do is create another silo, such as a dedicated middleware team, to maintain that middleware. And from then on, teams and departments have to go through them whenever they want to establish a communication interface. That's a new layer of red tape.

The Rock Band Architecture is a Federated Event-Driven Architecture. Because bands sound better when they are the only ones playing the music, on a small stage, closer to the audience.

Rock Band Architecture - Federated Event-Driven Architecture

Instead of one single event broker, each silo has its own event broker. This maximizes the freedom of the teams to choose how to implement their event broker. It also reduces the coordination work between the silos. Overall, this reduces the effort required for dependency management.

The Event Broker

You don't need an orchestrator or a conductor in a Rock Band. What the system needs is a channel of communication where services can broadcast and listen for events: the Event Broker. A rock band only needs a stage and their instrument monitors (the big speakers and guitar amps you see on stage). Events are not generic free-form messages; they are specific, small, real-time, and contextual. The messaging channel for the event broker must have the following properties (a minimal subscriber sketch follows the list):
  1. Events must be broadcast in real time or near real time; otherwise, the band would be out of sync.
  2. The communication channel must use a protocol that is platform agnostic so that services can be built on any platform and developed in any language.
  3. It supports any kind of endpoint, e.g. HTTP, HTTP/2, TCP, UDP, MQTT, etc.
  4. It implements a publish-subscribe or producer-consumer messaging pattern.
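
As a sketch of what a player's side of this channel could look like, here is a minimal subscriber using the paho-mqtt client (written against its 1.x callback API). The broker address and topic are illustrative assumptions; any broker that satisfies the four properties above would do:

    import paho.mqtt.client as mqtt  # third-party; pip install paho-mqtt

    def on_message(client, userdata, message):
        # React to events as they are pushed, rock-band style.
        print(f"{message.topic}: {message.payload.decode()}")

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("broker.example.com")  # assumed broker address
    client.subscribe("band/events/#")     # listen for all of the band's events
    client.loop_forever()                 # messages arrive in (near) real time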
The Players

The services in RBA are the band members, or the "Players". Each player plays a specific role that makes up the entire system or subsystem. A service can be any form of software component (or even hardware) that is subscribed to one or more topics, is publishing to topics, or both. Services in RBA are similar to the "Actors" of the Actor-Model architecture pattern in that they are completely isolated and not aware of each other's existence, but players in RBA are much richer and perform more than the primitive calculations of Actors. They are also "stateful" and are always up and running. A player can be as big as an entire application or as small as a serverless function. Players are monitored so that in the event of a crash, something or someone can act on it, such as an automatic restart of the player service or a severity 1 incident raised to the helpdesk. One way to do this is to design the player services to send heartbeats regularly to a specific topic on the event broker.
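
A hedged sketch of that heartbeat idea, again with paho-mqtt (1.x API) and illustrative names:

    import json
    import time
    import paho.mqtt.client as mqtt  # third-party; pip install paho-mqtt

    client = mqtt.Client(client_id="player-billing")  # assumed player name
    client.connect("broker.example.com")              # assumed broker address
    client.loop_start()  # network loop runs in a background thread

    while True:
        heartbeat = {"player": "player-billing", "ts": time.time()}
        client.publish("band/heartbeats", json.dumps(heartbeat))
        time.sleep(10)  # a monitor alerts (or restarts the player) if these stop

A separate monitor service subscribed to band/heartbeats can then trigger the restart or raise the severity 1 incident.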

The Band

A Band is a group of Players connected to a common Event Broker. These player services are grouped functionally into a Band. The grouping can be done in an organizational context or by a more detailed context like a subsystem or even an application. The goal is to minimize dependencies between systems and/or organizational units.

Edge Services

The services that publish messages outside of their own band are called Edge Services. I wanted to call it the "Groupie", but my wife doesn't approve. There can be one or more Edge Services in a band. Edge services must only publish messages to another band's event broker and must never subscribe to it. This simplifies the services, as they don't have to maintain a persistent connection with an event broker outside their perimeter.
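
Here is a sketch of an Edge Service under those rules, with illustrative broker addresses and topic names (paho-mqtt, 1.x API). It subscribes only to its own band's broker and uses one-shot, publish-only connections across the perimeter:

    import paho.mqtt.client as mqtt     # third-party; pip install paho-mqtt
    import paho.mqtt.publish as publish

    def forward(client, userdata, message):
        # Publish-only across the perimeter: a one-shot publish, no subscription
        # and no persistent connection to the other band's broker.
        publish.single(message.topic, message.payload,
                       hostname="broker.other-band.internal")  # assumed address

    home = mqtt.Client()
    home.on_message = forward
    home.connect("broker.own-band.internal")  # its own band's broker
    home.subscribe("band/public/#")           # only events meant for sharing
    home.loop_forever()

The one-shot publish matches the rule above: nothing outside the perimeter ever needs a standing connection from the edge service.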

Summary

Enterprises continue to implement SOA architectures and have also begun adopting event-driven architectures (SOA 2.0) as the standard for many of their applications. But organizations are also becoming more and more agile. Being able to decentralize event brokers in a federated manner can reduce or eliminate organizational dependencies between different silos, which in turn produces more empowered, agile teams who can make important architectural decisions on their own with minimal, isolated risk of impact.

