Fundamentals of software testing: Key practices for success

Damaso Sanoja
September 5, 2024

All businesses want to stay ahead in competitive markets, comply with regulations, and prevent the costs associated with downtime and security breaches. And—in theory, at least—all software development teams exist to deliver the best possible user experience.

The natural conclusion, then, is that organizations would prioritize software testing as it helps ensure all those aims—except that very often, they don’t. In fact, according to SmartBear, 64% of organizations test less than half of their applications’ functionality. Why is that?

Well, testing complex applications that change rapidly gets incredibly tricky. When your team has only so much capacity, something has to give, and that something is usually thorough testing. However, neglecting testing inevitably leads to quality issues, client dissatisfaction, and increased costs. Inadequate coverage and flawed test design mean bugs remain undetected despite testing efforts.

If you and your team have gotten trapped in this cycle, it helps to remind yourself of the fundamentals again. Yes, there might be shiny new tactics and trends to try out. Still, you should always ask yourself if and how the flavor of the month helps you achieve the fundamental outcome: software that is usable, secure, accessible, and performant, with minimal bugs.

To help you do that, let’s revisit why we need software testing, the objectives testing seeks to achieve, how to do testing, the types of tests you can run, and when to implement testing. We also explore the reasons behind common challenges with testing—such as inadequate test coverage and the complexities of evolving software—and ways to overcome them.

Why software testing is important

The value of testing is evident when you consider what happens if you don’t do it. Its purpose is to catch errors, gaps, or missing requirements. If that doesn’t happen and your software ends up being buggy or not meeting users’ expectations, they will be unhappy.

Organizations want to avoid unhappy users because their issues cost more to fix, and because upset users may decide to take their business elsewhere. Development teams might not deal personally with disgruntled customers regularly, but they still feel the impact.

Buggy software and unhappy users result in a chaotic, high-pressure work environment where you’re constantly fixing bugs in production rather than improving your product.

Buggy software also increases the costs of development. Even though software testing isn’t cheap, it costs less to fix a defect during development compared to post-release because of the following:

  • Complexity. As development progresses, code becomes more interconnected. Late-stage fixes often require changes in multiple areas, increasing the risk of introducing new bugs.
  • User impact. Post-release fixes can disrupt users, potentially damaging reputation and requiring additional resources for communication and support.
  • Deployment costs. Releasing patches or updates after launch involves additional testing, distribution, and sometimes user training.
  • Opportunity cost. Time spent fixing old issues post-release is time not spent on new features or improvements.

The objectives of software testing

Based on this understanding of the purpose of testing, let’s examine what testing should aim to achieve.

Ensure quality software

Software testing ensures that software works as expected under various scenarios and conditions without any bugs. Beyond checking basic functionality, testing also evaluates software performance and identifies additional value and features that can benefit users.

Remember that value can vary across organizations. For example, processing queries at lightning-fast speeds is a top priority for Google Search. Continuous testing helps maintain these speeds when adding new features or managing growing amounts of data. However, this emphasis on performance might not be as crucial for other organizations. For instance, financial institutions prioritize testing for transaction security, data encryption, and compliance with financial regulations.

Identify and fix bugs

A key goal of software testing is to detect bugs early in the development cycle to avoid failures that can be expensive and difficult to fix later on. This proactive approach saves time and resources while providing a smoother user experience.

Enhance performance, usability, and accessibility

In addition to finding bugs or ensuring software works as intended, testing includes functional and non-functional testing to assess the software’s overall quality. Functional testing verifies that the software behaves according to its specified requirements, while non-functional testing evaluates aspects like performance, usability, and accessibility. Using both approaches ensures the software is not only functional but also user-friendly, inclusive, and able to meet performance benchmarks.

-> Read more: Automated Accessibility Testing: Updated for 2024

Ensure security

Because of the increasing threats to data security, another key objective of software testing is to ensure that the software is secure against potential attacks. Security testing helps identify vulnerabilities, ensuring that data integrity and privacy are maintained.

Types of software testing

In the early days of software development, teams performed all testing manually. Testers had to go through each step by hand and write down the results, which took a lot of time and was often prone to mistakes. As software got more complex, it became clear that manual testing wasn’t enough, so the industry began the move toward automated testing tools. These tools started simple but have become much more advanced, allowing testers to run thousands of tests quickly and accurately without needing a human to do it each time.

Automated testing is widely used because it makes testing faster and more reliable. However, manual testing is still essential for exploratory and usability testing, where a human touch is needed.

Manual testing

During manual testing, QA engineers, usability and accessibility experts, and end users manually execute test cases without the assistance of automated tools or scripts.

Because the tester acts as an end user, this type of testing helps identify usability and user experience issues that automated tests might overlook. Consider a photo-editing app. Automated tests verify that all filters apply correctly. However, a manual tester, acting as a user, can detect subtle problems such as an “enhance filter” that, while technically functional, makes portraits look unnatural or a “dark mode” that, although implemented correctly, causes eye strain during extended use.

Manual testing is also more flexible than automated testing. Testers can quickly adapt tests on the fly based on the results and insights they gather during the testing process. Continuing with the example of the photo-editing app, after detecting the issue with the “enhance filter,” the human tester could decide to test the filter on a broader range of photo types (landscape, night scenes, or group photos) to understand its limitations. The tester could also suggest investigating if specific demographics—such as age groups, professional photographers, or enthusiasts—have different perceptions of the filter’s effectiveness.

The downside is that manual testing is time-consuming, carries a high risk of human error (especially in complex or repetitive testing scenarios), and requires experienced, qualified personnel, which comes with a high ongoing cost.

Automated testing

Automated testing involves using specialized software tools to execute pre-scripted tests on a software application.

Automated tests can be highly efficient. They can be executed quickly and repeatedly, which is ideal for regression testing—verifying that recent code changes haven’t adversely affected existing functionality. Automated tests also reduce the chances of human error, provide precise results, and can save costs by detecting common usability, performance, and accessibility flaws, which reduces the complexity and duration of manual testing.

However, automated testing involves upfront costs for setting up the right tools and creating test scripts. Additionally, maintaining these scripts requires regular updates to keep pace with changes in the software. In-house teams often struggle with this ongoing maintenance due to limited resources or expertise, leading to gaps in testing and missed bugs. That’s why many companies prefer to partner with specialists like QA Wolf, who handle all aspects of automated testing—from initial setup to continuous script maintenance—ensuring thorough and effective test coverage without the hassle of managing it internally.

-> Read more: Five misbeliefs people have about automated testing, and the truth of our experience

Levels of software testing

Software testing is applied at different stages of the software development lifecycle (SDLC), with each level validating different aspects of your software. These levels build on each other to ensure quality software and comprehensive coverage.


Unit tests

Unit tests are designed to test individual components or sections of code independently, ensuring that each part functions as expected in isolation. They are maintained within the product’s code base. This type of testing is usually automated and carried out by development teams. Unit tests are crucial because they help detect issues early in the development process, significantly reducing the time and cost required to fix bugs later.
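
To make this concrete, here’s a minimal sketch of a unit test written with Jest. The calculateDiscount helper is hypothetical, a stand-in for any small, isolated function in your own code base:

```typescript
// A minimal unit-test sketch using Jest. calculateDiscount is a
// hypothetical helper; substitute any small, isolated function.
import { calculateDiscount } from "./pricing";

test("applies a percentage discount to the subtotal", () => {
  expect(calculateDiscount(100, 10)).toBe(90);
});

test("rejects negative discounts", () => {
  expect(() => calculateDiscount(100, -5)).toThrow();
});
```

Each test checks exactly one behavior, which keeps failures quick to diagnose.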

Best practices:

  • Write small, focused tests: Each unit test should focus on a small piece of functionality to ensure clarity and effectiveness.
  • Automate tests: Automating unit tests allows them to be run frequently and consistently, catching bugs as soon as they are introduced.
  • Run tests continuously: Integrate unit tests into the continuous integration pipeline to catch issues early.

Common pitfalls:

  • Over-mocking: Relying too much on mock objects can make tests brittle and less representative of real-world scenarios.
  • Neglecting edge cases: Failing to test edge cases can lead to unexpected errors in production.
  • Poor maintenance: Outdated or poorly maintained unit tests can lead to false positives or negatives, reducing trust in the test suite.

-> Read more: Catching bugs with regression testing is not solving your real quality problem.

Component integration tests

Component integration tests are conducted after individual parts of the software have been unit-tested and then combined. These tests verify how well the integrated components work together and ensure that data flows smoothly between them. Integration testing is critical because it identifies problems that may not be detected during unit testing, such as issues with data formats or communication between components.
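
As an illustration, here’s a hedged sketch of a component integration test in Jest. OrderService and InMemoryOrderRepository are hypothetical components; the point is that the test exercises the seam between them rather than either part alone:

```typescript
// A component integration test sketch (Jest). OrderService and
// InMemoryOrderRepository are hypothetical parts of your application.
import { OrderService } from "./order-service";
import { InMemoryOrderRepository } from "./order-repository";

test("OrderService persists orders through the repository", async () => {
  const repository = new InMemoryOrderRepository();
  const service = new OrderService(repository);

  const order = await service.placeOrder({ sku: "A-1", quantity: 2 });

  // Verify the seam between the two components: data written by the
  // service must be readable back through the repository.
  expect(await repository.findById(order.id)).toEqual(order);
});
```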

Best practices:

  • Test interfaces between components: Focus on the points where different components interact to ensure smooth data flow and functionality.
  • Use realistic test data: Use data that closely mirrors what would be encountered in a production environment to catch more realistic issues.
  • Automate integration tests: Automating these tests ensures they are run consistently with every code change, catching integration issues early.

Common pitfalls:

  • Lack of clear boundaries: Failing to define what is being tested can lead to broad tests that don’t effectively catch specific issues.
  • Overlapping with unit tests: Ensure integration tests do not duplicate the checks already covered by unit tests.
  • Neglecting environment configuration: Overlooking environment setup can lead to false positives or negatives in tests.

E2E, a.k.a. system integration tests

E2E tests focus on the complete functionality of an application in an environment that mimics real-world conditions. This type of testing, also known as system integration testing, ensures that all components of the system work together seamlessly from the user’s perspective. E2E tests validate that the system meets defined specifications and user expectations in a simulated real-life scenario. E2E tests are usually maintained outside of the code base of a given application since they exercise pieces of multiple applications.
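
For example, here’s a minimal E2E sketch using Playwright. The URL, labels, and credentials are hypothetical placeholders for your own application:

```typescript
// A minimal E2E sketch using Playwright. The URL, labels, and
// credentials below are hypothetical placeholders.
import { test, expect } from "@playwright/test";

test("user can sign in and reach the dashboard", async ({ page }) => {
  await page.goto("https://app.example.com/login");
  await page.getByLabel("Email").fill("user@example.com");
  await page.getByLabel("Password").fill("hunter2");
  await page.getByRole("button", { name: "Sign in" }).click();

  // The journey passes only if the UI, API, and database all cooperate.
  await expect(page.getByRole("heading", { name: "Dashboard" })).toBeVisible();
});
```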

Best practices:

  • Test user journeys: Focus on testing complete user workflows to ensure the application behaves as expected from start to finish.
  • Regularly update tests: Ensure that E2E tests are updated with changes in the application to maintain relevance and accuracy.

Common pitfalls:

  • High maintenance costs: E2E tests can be expensive and time-consuming to maintain, especially as the application evolves.
  • Slow execution: Due to their comprehensive nature, E2E tests can be slow, which may delay feedback if not appropriately managed.
  • Over-reliance on E2E tests: Depending too much on E2E tests instead of a balanced test strategy can lead to missed bugs in the lower layers of the application.
  • Low coverage: While automation is helpful, E2E tests can be slow and brittle, leading some teams to skimp on E2E coverage.

-> Read more: End-to-end testing 101

Acceptance tests

Acceptance testing, or user acceptance testing (UAT), is usually the final testing phase to ensure that an application meets business-wide requirements and is ready for deployment. Typically performed by end users or their proxies (i.e., software testers or product managers), acceptance tests validate that the software functions as expected in real-world scenarios and meets the user’s needs.

Best practices:

  • Involve real users: Though this happens less frequently than it should, whenever possible, involve actual users (or their proxies) in acceptance testing to get genuine feedback on usability and functionality.
  • Define clear acceptance criteria: Establish clear, measurable criteria to ensure tests align with business requirements before testing.
  • Simulate real-world conditions: Test in an environment that closely resembles the production setting to ensure accurate results.

Common pitfalls:

  • Vague acceptance criteria: Ambiguous criteria can lead to misunderstandings and incomplete testing.
  • Inadequate user involvement: Lack of actual user involvement can result in missing critical feedback on usability or functionality.
  • Insufficient test coverage: Focusing too narrowly on specific areas can miss broader issues that impact the user experience.

When to implement software testing

The team should build unit and integration tests early in the software lifecycle. Incorporating tests from the beginning allows you to detect issues early, reduces the complexity of debugging, and prevents the accumulation of technical debt. Waiting until the later stages of the development cycle to integrate tests can lead to higher costs and longer delays as problems are more deeply embedded and more challenging to isolate.

In addition to those broad recommendations, here are specific scenarios where you want to initiate software testing.

When new functionality or features are developed

New features or updates always come with some risks. Thorough and timely testing makes sure these new additions work as expected without causing problems with existing features. By being proactive with testing, you can catch issues early that might impact how the software functions or the user experience.

When a feature is modified due to a change in requirements

Changes in requirements happen a lot in fast-paced development environments. Whenever a feature gets updated or modified, thorough testing makes sure those changes meet the new requirements and don’t cause problems elsewhere in the system. This helps keep everything working smoothly and ensures the software meets its intended goals.

When a bug fix is implemented

After a known bug has been fixed, testing is needed to make sure the fix actually solves the problem and doesn’t create new issues. This is also an excellent time to do regression testing, which checks that other parts of the software still work correctly after the changes.
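
One lightweight way to do this is to encode the bug’s reproduction steps as a permanent test so the defect can never silently return. Here’s a hedged sketch, assuming a hypothetical parseCurrency helper that once truncated values at the thousands separator:

```typescript
// A regression-test sketch (Jest) that pins a fixed bug in place.
// parseCurrency is hypothetical; the (fixed) bug was that "1,000.50"
// parsed as 1 instead of 1000.5.
import { parseCurrency } from "./currency";

test("regression: thousands separators are not truncated", () => {
  expect(parseCurrency("1,000.50")).toBe(1000.5);
});
```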

Before refactoring a piece of functionality to prevent regression

Refactoring is all about making code more efficient and easier to read, but it can sometimes accidentally introduce new errors. Having thorough tests in place before you refactor gives you a safety net: run them afterward, and you’ll know immediately whether anything broke, preventing problems from slipping through the cracks.

-> Read more: Tech debt is preventing your team from shipping and innovating - this is what to do about it

Common challenges in software testing

The “must-dos” of software testing are pretty obvious. However, several challenges and limitations can hinder effective testing. By understanding these challenges, you can devise strategies to overcome them.

Complex applications

Complex applications with intricate designs and tightly interconnected components can make it difficult to detect bugs. The complex interactions between various parts of the system can create unexpected behaviors that are hard to predict and test. This complexity often leads to hidden bugs that go unnoticed during testing, resulting in incomplete test coverage and potential issues that could surface later in production.

Focusing on testability during the planning and design phases is essential to address this. By designing with testability in mind and planning comprehensive testing early on, you can ensure every part of the application is thoroughly tested, reducing the risk of hidden bugs that could escalate into more significant problems.

For legacy applications, enhancing testability may involve refactoring sections of the code to make them more test-friendly. Aim to achieve 80% test coverage as quickly as possible, prioritizing the most critical parts of the application first. This approach helps ensure that major bugs are caught early, minimizing the risk of severe issues and maintaining the application’s stability.

Rapidly changing applications

Keeping test cases current is challenging when applications are frequently updated or modified. Even if your team is doing the right thing by implementing testing on every PR, rapid changes can quickly outpace or overwhelm testing efforts, resulting in gaps in test coverage and potential quality problems.

Continuous integration and continuous deployment (CI/CD) pipelines help manage frequent updates by integrating testing into the development workflow. CI/CD requires that tests are continuously updated and run with each code change, ensuring that gaps in test coverage are minimized. This process quickly identifies issues, helping teams keep software quality in check even when changes occur rapidly.

-> Read more: The complete and indispensable guide to reaching continuous deployment.

Failure investigation and debugging

Figuring out why an app isn’t working correctly and finding the root cause can be time-consuming and tricky. Debugging often means trying multiple times to reproduce the issue, carefully reviewing the code, analyzing logs, and tracking how data moves and changes through the app.

Things get even more complicated with flaky tests—those that sometimes pass and sometimes fail without any code changes. Flaky tests make it hard to identify real issues because they create noise, leading to confusion and wasted time. This is where automatic flake detection comes in as a powerful solution. Automatic flake detection tools can identify unstable tests by running them multiple times to see if they produce inconsistent results. By flagging flaky tests, these tools help developers focus on actual bugs and reduce the time wasted on unreliable tests.
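
To make the idea concrete, here’s a deliberately naive sketch of the core loop behind flake detection. Real tools are far more sophisticated, but the principle is the same: rerun an unchanged test and look for inconsistent outcomes:

```typescript
// A deliberately naive sketch of flake detection: run the same test
// repeatedly with no code changes and flag inconsistent outcomes.
type TestFn = () => Promise<void>;

async function isFlaky(run: TestFn, attempts = 10): Promise<boolean> {
  const outcomes = new Set<"pass" | "fail">();
  for (let i = 0; i < attempts; i++) {
    try {
      await run();
      outcomes.add("pass");
    } catch {
      outcomes.add("fail");
    }
  }
  // Observing both a pass and a fail across identical runs means flaky.
  return outcomes.size > 1;
}
```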

In addition to flaky tests, problems come from complex interactions between different parts of the app, specific conditions in various environments, and issues that are hard to reproduce. For example, if a web app crashes when uploading large files, debugging might involve testing different file types, checking server logs and memory usage, reviewing file-processing code, and investigating how the database handles the data.

Because dealing with flakes and other complex debugging requires time and expertise, this work is best handled by experienced QA engineers.

-> Read more: Flaky test coverage is fake test coverage

Inadequate coverage and flawed test case design

Often, test cases are not comprehensive enough—they miss critical aspects of functionality.

Consider an e-commerce checkout process. A flawed test design might only verify successful purchases with valid credit cards while missing scenarios like purchases with insufficient funds, expired cards, or a payment gateway outage. These oversights could lead to critical issues, potentially causing lost sales and eroding customer trust.

Effective software testing requires thorough coverage and well-designed test cases.

-> Read more: Guide to planning meaningful test coverage

Time and resource constraints

Testing takes a lot of resources, including time and personnel. As software evolves, keeping test cases up-to-date requires ongoing effort and expense. Many organizations have limited resources, making it difficult to conduct thorough testing.

While you can’t always solve these resource limitations, you can manage the risk by optimizing your available resources. One practical approach is full parallelization, which allows multiple tests to run simultaneously. This speeds up the testing process, reduces the time required, and better uses available resources.
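
How you enable this depends on your test runner. If you happen to use Playwright, for instance, full parallelization is a configuration choice:

```typescript
// playwright.config.ts: one way to run tests fully in parallel.
// The worker count shown is illustrative; tune it to your hardware.
import { defineConfig } from "@playwright/test";

export default defineConfig({
  fullyParallel: true, // parallelize tests within each file, too
  workers: process.env.CI ? 8 : undefined, // undefined = Playwright's default
});
```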

Additionally, focusing on core functionality and components with a higher risk of failure can help ensure that the most critical parts are tested thoroughly. Prioritizing automated tests can free up manual testers to handle more complex, exploratory scenarios. Outsourcing testing to external experts is another way to extend testing capabilities without straining internal resources. By combining these strategies with full parallelization, teams can maximize their testing efficiency and maintain high-quality standards even with limited resources.

-> Read more: Doing more with less: Five ways to increase velocity by removing the bottlenecks in your QA process

Measuring testing effectiveness/KPIs

Tracking the right metrics and key performance indicators (KPIs) to optimize testing efforts and maximize limited resources is essential. Different teams within the organization may be responsible for monitoring and improving specific metrics. Here’s how these metrics can be broken down by team:

QA team

  • Test coverage: Measures the percentage of the codebase or feature set covered by tests. The QA team is responsible for maintaining high test coverage to minimize the risk of undetected bugs.
  • Flaky test rate: Monitors how often tests fail or pass inconsistently. The QA team focuses on identifying and reducing flaky tests through automatic flake detection to ensure the reliability of test results.
  • Time to fulfill coverage requests: Tracks how quickly the QA team can implement new test cases for additional features or areas. Faster fulfillment indicates agility and responsiveness to evolving project requirements.
  • Percentage of tests skipped: Identifies the number of tests skipped during a test run. The QA team works to minimize skipped tests to ensure comprehensive coverage and detect potential issues.
  • Time spent triaging test failures and reproducing bugs: Measures the time spent analyzing test failures and reproducing bugs. The QA team aims to lower this time by improving test case design and using better debugging tools.

Development team

  • Time spent babysitting test runs: Measures the time engineers spend monitoring automated test runs. Reducing this time allows developers to focus more on strategic testing and improving overall test efficiency.
  • Defect density: Calculates the number of bugs per unit of code, such as per thousand lines (see the sketch after this list). The development team is responsible for writing cleaner code and reducing the number of defects introduced during coding.
  • Bug resolution time: Tracks the average time required to fix a bug after it’s reported. The development team works on reducing this metric by quickly addressing reported issues and collaborating effectively with the QA team.
  • Mean time to recovery (MTTR): Measures the average time it takes to recover from a test or system failure. The development team focuses on reducing MTTR by improving the robustness of the codebase and implementing faster recovery strategies.
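
As a worked example of the defect density metric, here’s the calculation in code; the numbers are illustrative:

```typescript
// Defect density = bugs per thousand lines of code (KLOC).
function defectDensity(bugCount: number, linesOfCode: number): number {
  return bugCount / (linesOfCode / 1000);
}

// Illustrative numbers: 30 bugs in a 60,000-line codebase.
console.log(defectDensity(30, 60_000)); // 0.5 defects per KLOC
```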

DevOps team

  • Test suite execution time: Measures the entire duration of a test suite until all tests are completed. The DevOps team optimizes the CI/CD pipeline to reduce test execution time, leveraging full parallelization and efficient resource management. While developers and QA can do some optimization work, the most significant gains can be achieved through infrastructure improvements.
  • Escaped defects: Tracks the number of bugs missed during testing that are discovered in production. The DevOps team collaborates with QA and development to improve automated testing and prevent defects from reaching production.

Product and business teams

  • Revenue damage from bugs: Estimates the financial impact of bugs found in production, including lost sales or increased support costs. The product and business teams use this metric to prioritize testing efforts and justify investments in quality assurance.

By dividing these metrics among the responsible teams, organizations can better allocate resources, improve their testing processes, and enhance overall software quality. Focusing on the most relevant metrics allows each team to make data-driven decisions that optimize testing strategies, prioritize tasks, and efficiently address critical issues.

-> Read more: Measuring what matters: QA metrics that improve products, teams, and businesses

Why testing fundamentals matter more than ever

It might seem tempting to cut corners on testing to save time and resources, but this shortcut is a risky move. In the long run, skipping thorough testing often leads to buggy software, unhappy users, and skyrocketing costs.

Instead, focusing on software testing fundamentals—the why, how, and when—helps you develop a testing strategy that fits your organization’s unique needs and constraints. By mastering the basics, you can optimize your testing process, maximize resources, and deliver high-quality software more efficiently.

If your team finds it hard to stay on top of the core testing principles, QA Wolf offers a way to achieve thorough test coverage with minimal effort. Our testing experts handle the complexities of creating and maintaining tests, allowing your engineering team to focus on building and improving your product.
