Testing Software - A Quick Overview
Software is an ever-changing interweaving of ideas expressed in code to solve various problems. In this day and age, the problems that software solves are expanding at an ever-increasing rate. The phrase “software is eating the world” grows more relevant every day.
Although software is being deployed in every area of our daily lives, the process by which teams develop software is extremely inconsistent. Much of this may be a function of maturing processes in a relatively new field. We are starting to see patterns emerge in the area of “devops” and better-managed development workflows. At the core of many of these topics is how we test our software.
As developers we are extremely interested in building, learning, and automating. A smaller (but growing) collection of us gets excited about testing. Personally I have struggled for years in various environments to pin down a good testing strategy. That is still a learning curve I am on… and most likely will never be off of. It is what is inspiring me to write this post.
So what is software testing?
If you ask an end user, and many developers, what they think software testing is, they would most likely say “Well, click (or tap) around the application and see what breaks”. This isn’t necessarily incorrect; it is simply one level of a deeper topic. Therefore, in order to define what software testing is, we first need to define what levels of software testing exist. So let's break this down from the bottom level (the single lines of code) to the top level (the system as a whole):
- Unit testing
- Integration testing (sometimes referred to as functional tests)
- System testing
- Acceptance testing
Each of these levels builds upon the previous to provide for a consistent and comprehensive testing environment for a given project.
How much testing do we need?
Does your organization or project require a comprehensive suite of tests at each level? This depends on many factors. Purists will most likely say YES, but the realist in me wants to say most likely not.
Organizations and the software developers they choose to work with should have honest conversations around the requirements of a feature, what should be tested, and how it will be tested. From these conversations we can map out a testing plan that will serve as a communication tool for both project stakeholders and developers. Ideally the testing plan would integrate into an overall CI / CD system to provide an organic view of the project's state. For smaller projects a simple Google doc will suffice.
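As a sketch of what that CI / CD integration might look like, a pipeline job could run each level of the test plan on every push. The job name, suite names, and paths below are illustrative assumptions, not a prescription for any particular CI system:

```yaml
# Hypothetical CI job (GitLab-CI-style syntax shown for illustration).
# Suite names and paths are assumptions for this sketch.
test:
  script:
    - composer install
    - ./vendor/bin/phpunit --testsuite unit
    - ./vendor/bin/phpunit --testsuite functional
```

A failing suite blocks the merge, which turns the test plan from a document into an enforced gate.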
So how do we actually test?
Armed with functional requirements and a solid understanding of what is important to test within the scope of those requirements, we can start to write our tests. For web applications, we have a slew of tools to choose from. For unit testing in the PHP world, our go-to framework should be PHPUnit. For functional tests we have a few options in the Drupal CMS. These tests are developed using a framework that allows us to mock up a full application in an isolated environment in which we can run tests.
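To make the unit-testing level concrete, here is a minimal sketch in the PHPUnit style. The `Slugifier` class and its behavior are hypothetical examples, and a tiny stand-in assertion is defined so the sketch runs on its own; in a real project the test class would extend `PHPUnit\Framework\TestCase` and use its built-in `assertSame()` instead:

```php
<?php
// Hypothetical unit under test: turns a page title into a URL-safe slug.
class Slugifier {
    public function slugify(string $title): string {
        $slug = strtolower(trim($title));
        // Collapse any run of non-alphanumeric characters into one dash.
        $slug = preg_replace('/[^a-z0-9]+/', '-', $slug);
        return trim($slug, '-');
    }
}

// PHPUnit-style test class. The assertSame() stand-in below mirrors
// PHPUnit's assertion so this sketch is self-contained and runnable.
class SlugifierTest {
    private function assertSame(string $expected, string $actual): void {
        if ($expected !== $actual) {
            throw new Exception("Expected '$expected', got '$actual'");
        }
    }

    public function testSlugifyNormalizesTitles(): void {
        $slugifier = new Slugifier();
        $this->assertSame('hello-world', $slugifier->slugify('Hello, World!'));
        $this->assertSame('testing-101', $slugifier->slugify('  Testing 101  '));
    }
}

(new SlugifierTest())->testSlugifyNormalizesTitles();
echo "ok\n";
```

Each test exercises one small unit in isolation; the higher levels (integration, system, acceptance) then verify how those units behave together.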
This allows us to run the entire suite of tests before we merge in a feature branch, ensuring that the new feature works and that it does not break any existing functionality. That last piece is critical. I’ve worked on some very large software projects that did not test for regressions. Each change we made caused a ripple of frustration through our user base. No bueno! With a well-developed test suite and clear communication we should be able to mitigate these risks!
In coming posts I would like to explore some of the base classes we have available for testing our solutions in the context of Drupal. This will hopefully give you a more concrete understanding of how we can take a test plan and translate it into executable tests.
What level of testing do you do with your clients / projects? I'd love to discuss in the comments!