The Standish Group's research has consistently shown that approximately 80% of software defects originate in the requirements phase — not in coding, testing, or deployment. The cost of fixing a requirements defect found during testing is 10-100 times higher than fixing it during requirements gathering itself. And a requirements defect that escapes to production can cost 100-1000 times more to resolve.
These are not abstract statistics. At Pepla, we have seen projects where ambiguous requirements led to months of rework, where missing non-functional requirements caused production failures on launch day, and where scope creep — fed by poorly managed requirements — turned six-month projects into eighteen-month ordeals. Requirements engineering is not glamorous work, but it is the work that determines whether everything that follows succeeds or fails.
Functional vs Non-Functional Requirements
Functional requirements describe what the system should do. They define specific behaviours, features, and functions. "The system shall allow users to reset their password via email" is a functional requirement. It is testable — either the system does this or it does not.
Non-functional requirements (NFRs) describe how the system should perform. They define quality attributes: performance, security, availability, scalability, usability, and maintainability. "The login page shall load in under 2 seconds on a 4G connection" is a non-functional requirement. "The system shall be available 99.9% of the time, excluding scheduled maintenance" is another.
The most common failure in requirements engineering is neglecting non-functional requirements. Teams document what the system should do in exhaustive detail but say nothing about how fast, how secure, how available, or how scalable it should be. The result is a system that functionally works but is too slow, too fragile, or too insecure for production use.
Non-functional requirements must be specific and measurable. "The system should be fast" is not a requirement — it is a wish. "API responses shall return within 500ms at the 95th percentile under a load of 1,000 concurrent users" is a requirement that can be tested, verified, and contractually enforced.
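A measurable NFR like this one can be checked mechanically. The sketch below, in Python, turns the 500ms / 95th-percentile requirement from the text into an automated assertion; the percentile helper and the sample latency figures are illustrative, not real measurements.

```python
def p95(samples):
    """Return the 95th-percentile value of a list of latency samples."""
    ordered = sorted(samples)
    index = max(0, int(len(ordered) * 0.95) - 1)
    return ordered[index]

def nfr_satisfied(latencies_ms, threshold_ms=500):
    """True if the p95 latency is within the contractual threshold."""
    return p95(latencies_ms) <= threshold_ms

# Illustrative measurements from a hypothetical load test.
sample_latencies_ms = [120, 180, 210, 250, 310, 330, 410, 460, 480, 900]
print(nfr_satisfied(sample_latencies_ms))  # p95 here is 480 ms, so True
```

The point is not the implementation but the shift in mindset: once the threshold is numeric, the requirement becomes a test that can run in a CI pipeline after every build.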
Common categories of NFRs that every project should address:
- Performance: Response times, throughput, resource utilisation.
- Scalability: Maximum users, data volumes, growth expectations.
- Security: Authentication, authorisation, data encryption, compliance standards.
- Availability: Uptime targets, disaster recovery, backup requirements.
- Usability: Accessibility standards, browser/device support, localisation.
- Maintainability: Code quality standards, documentation requirements, technology constraints.
User Stories and the INVEST Criteria
In Agile environments, requirements are often expressed as user stories — short descriptions of functionality from the user's perspective. The standard format is: "As a [role], I want [capability] so that [benefit]."
The value of user stories is not the format — it is the conversation they enable. A user story is a placeholder for a discussion between the product owner, the development team, and stakeholders. The story title captures the intent; the detail emerges through conversation and is captured in acceptance criteria.
Good user stories follow the INVEST criteria:
- Independent: The story can be developed and delivered independently of other stories. Dependencies between stories create scheduling constraints and increase risk.
- Negotiable: The story is not a contract. The detail can be negotiated between the product owner and the development team. This allows for technical input on implementation approach.
- Valuable: The story delivers value to the user or the business. If you cannot articulate the value, the story may not be necessary.
- Estimable: The team can estimate the effort required. If a story is too large or too vague to estimate, it needs to be split or refined.
- Small: The story can be completed within a single sprint. Stories that span multiple sprints are epics that need decomposition.
- Testable: There are clear criteria for determining whether the story is complete. If you cannot define done, you cannot build it.
A common anti-pattern is writing user stories that are actually technical tasks disguised in user story format. "As a developer, I want to refactor the authentication module so that the code is cleaner" is a technical task, not a user story. There is nothing wrong with technical tasks, but they should be tracked differently and justified by the user-facing value they enable.
Acceptance Criteria: Defining Done
Acceptance criteria are the specific conditions that must be met for a user story to be considered complete. They are the bridge between a general statement of intent (the user story) and a specific, testable definition of done.
Effective acceptance criteria follow the Given/When/Then format:
Given I am a registered user on the login page
When I enter an incorrect password three times
Then my account is locked for 30 minutes
And I receive an email notification about the lock
And the login page displays a message explaining the lock
This format is powerful because it is simultaneously readable by business stakeholders and directly translatable into automated tests. BDD (Behaviour-Driven Development) frameworks like SpecFlow, Cucumber, and Behave can execute these scenarios as automated acceptance tests, closing the loop between requirements and verification.
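To make the mapping concrete, here is a minimal Python sketch of the lockout scenario above, written so each Given/When/Then clause corresponds to observable behaviour. The class and method names (LoginService, attempt_login) are illustrative, not taken from any real framework.

```python
from datetime import datetime, timedelta

class LoginService:
    MAX_FAILURES = 3
    LOCK_DURATION = timedelta(minutes=30)

    def __init__(self):
        self.failures = 0
        self.locked_until = None
        self.notifications = []

    def attempt_login(self, password, correct_password, now=None):
        now = now or datetime.utcnow()
        if self.locked_until and now < self.locked_until:
            return "locked"  # Then: login page explains the lock
        if password == correct_password:
            self.failures = 0
            return "ok"
        self.failures += 1
        if self.failures >= self.MAX_FAILURES:
            self.locked_until = now + self.LOCK_DURATION  # Then: locked 30 min
            self.notifications.append("account-lock email sent")  # Then: email
            return "locked"
        return "wrong-password"

svc = LoginService()
for _ in range(3):            # When: three incorrect passwords
    result = svc.attempt_login("wrong", "s3cret")
print(result)                 # "locked"
print(svc.notifications)      # ["account-lock email sent"]
```

In a BDD framework, the same scenario text would drive step definitions that exercise this logic, so the acceptance criteria and the acceptance tests never drift apart.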
Every acceptance criterion should be:
- Specific: "The system handles errors gracefully" is not specific. "When a payment fails, the system displays error code, error description, and a retry button" is specific.
- Testable: Can a tester (human or automated) verify this criterion unambiguously? If the answer is no, it needs refinement.
- Achievable: Does the team have the capability to implement this? Criteria that require unavailable technology or unrealistic performance targets set the team up for failure.
- Agreed upon: The product owner, development team, and testers should all agree that the criteria correctly define done for this story.
Requirements Traceability Matrix
A requirements traceability matrix (RTM) tracks the relationship between requirements, design decisions, implementation, and test cases. For each requirement, the RTM records: the requirement ID, description, priority, the design component that addresses it, the code module that implements it, and the test cases that verify it.
The RTM serves several purposes:
- Completeness verification: Every requirement has a corresponding test. If a row in the matrix has no test case, that requirement is unverified.
- Impact analysis: When a requirement changes, the RTM shows which design components, code modules, and test cases are affected. This prevents changes from having undetected ripple effects.
- Coverage reporting: The RTM provides a clear answer to "what percentage of requirements are tested?" — a question that auditors, regulators, and project stakeholders frequently ask.
The RTM does not need to be elaborate. A spreadsheet with columns for requirement ID, description, priority, status, implementation reference, and test case reference is sufficient for most projects. The discipline of maintaining it matters more than the sophistication of the tool.
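As a sketch of how little tooling this takes, the spreadsheet described above can live as structured data with the completeness and coverage checks automated. Requirement IDs, file paths, and test-case names below are illustrative.

```python
# A minimal RTM: one row per requirement, mirroring the spreadsheet columns.
rtm = [
    {"id": "REQ-001", "description": "Password reset via email",
     "priority": "Must", "implementation": "auth/reset.py",
     "tests": ["TC-101", "TC-102"]},
    {"id": "REQ-002", "description": "p95 API latency under 500 ms",
     "priority": "Must", "implementation": "gateway/",
     "tests": []},  # no linked test case: unverified
]

def unverified(matrix):
    """Return the IDs of requirements with no linked test case."""
    return [row["id"] for row in matrix if not row["tests"]]

def coverage(matrix):
    """Percentage of requirements with at least one test case."""
    tested = len(matrix) - len(unverified(matrix))
    return 100 * tested / len(matrix)

print(unverified(rtm))  # ['REQ-002']
print(coverage(rtm))    # 50.0
```

Run at every release gate, a check like this answers the auditor's coverage question in seconds and flags unverified requirements before they ship.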
MoSCoW Prioritisation
Not all requirements are equally important, and no project has unlimited time and budget. Prioritisation determines what gets built first, what gets built later, and what does not get built at all. MoSCoW is the most widely used prioritisation framework in Agile and traditional project management:
- Must Have: Requirements that are non-negotiable for the current release. Without these, the product cannot launch. The system must authenticate users. The system must process payments. These are must-haves.
- Should Have: Important requirements that are not critical for launch but should be included if possible. A password strength indicator. Email notifications for order status changes. These add significant value but the product is viable without them.
- Could Have: Nice-to-have requirements that will be included only if time and budget permit. User profile avatars. Dark mode. Export to PDF. These enhance the experience but do not affect core functionality.
- Won't Have (this time): Requirements that are explicitly out of scope for the current release. This category is as important as the others — it sets expectations and prevents scope creep by documenting what was deliberately excluded.
The critical discipline in MoSCoW is ensuring that must-haves do not exceed 60% of total estimated effort. If everything is a must-have, nothing is prioritised, and the framework provides no value. At Pepla, we facilitate prioritisation workshops where stakeholders assign MoSCoW categories through discussion and trade-off — not through a top-down decree from a single product owner.
Requirements Review Process
Requirements reviews are structured inspections of requirements documents to identify defects before development begins. The principle is simple: finding a requirement that is ambiguous, incomplete, or incorrect during a review meeting costs hours. Finding the same defect during development costs days. Finding it in production costs weeks.
An effective requirements review involves:
- Diverse reviewers: Business stakeholders verify business intent. Developers verify technical feasibility. Testers verify testability. UX designers verify usability implications. Each perspective catches different classes of defects.
- A checklist of common defects: Ambiguous language ("the system should handle this appropriately"), missing error cases, undefined terms, untestable criteria, conflicting requirements, and missing NFRs.
- Time-boxed sessions: Reviews longer than 2 hours lose effectiveness. Review in focused sessions with breaks.
- Tracked outcomes: Every issue raised should be logged, assigned, and resolved. A review without follow-up is a waste of everyone's time.
Managing Scope Creep
Scope creep is the gradual, uncontrolled expansion of project scope — new requirements added after the scope baseline is set, without corresponding adjustments to timeline, budget, or resource allocation. It is the single most common cause of project overruns.
Scope creep is not the same as scope change. Scope change is a deliberate, documented decision to modify the project scope, with impact analysis and stakeholder approval. Scope creep happens when "can we also..." and "while you're at it..." requests accumulate without formal evaluation.
Managing scope creep requires:
- A clear scope baseline. Before development begins, the agreed requirements are documented and signed off. This is the measuring stick against which all change is evaluated.
- A change control process. Every new requirement or modification to an existing requirement goes through a defined process: document the change, assess the impact on schedule, budget, and resources, present the trade-off to stakeholders, and obtain explicit approval before proceeding.
- The courage to say no — or at least "not now." New ideas are not inherently bad. But every addition has a cost, and that cost must be visible to the person requesting it. "We can add that feature, but it will push the delivery date by two weeks" is a legitimate response that enables an informed decision.
- Regular scope reviews. At the end of each sprint or phase, compare actual scope to baseline scope. If scope has grown, make it visible and discuss the implications with stakeholders.
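The sprint-end scope review above reduces to a set comparison. In this illustrative sketch, requirement IDs are invented; the point is that drift becomes visible the moment the current requirement set is diffed against the signed-off baseline.

```python
# Signed-off baseline vs. the requirement set at the end of the sprint.
baseline = {"REQ-001", "REQ-002", "REQ-003"}
current = {"REQ-001", "REQ-002", "REQ-003", "REQ-007", "REQ-008"}

added = sorted(current - baseline)    # creep candidates to discuss
removed = sorted(baseline - current)  # descoped items to confirm

print(added)    # ['REQ-007', 'REQ-008']
print(removed)  # []
```

Anything in the added list either goes through change control retroactively or comes out of scope; either way, the drift is now a deliberate decision.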
At Pepla, we treat scope management as a core delivery discipline, not an administrative overhead. Our project managers track scope changes in real time, quantify their impact, and ensure that every addition is a deliberate decision rather than an unnoticed drift.
Requirements engineering is the cheapest place to find defects and the most expensive place to skip. Invest the time upfront — in clear requirements, thorough acceptance criteria, rigorous reviews, and disciplined scope management — and the rest of the project becomes dramatically more predictable.
At Pepla, our BAs follow the INVEST criteria religiously. Every user story we write includes testable acceptance criteria, and every requirements document includes a traceability matrix.
Get the requirements right, and you have a roadmap. Get them wrong, and you have an expensive discovery process that masquerades as development.