The most common objection to user research is cost. "We don't have the budget for a research lab." "We can't afford a dedicated researcher." "User testing takes too long — we need to ship." These objections share a common misconception: that user research requires expensive infrastructure, specialised staff, and months of lead time.
It does not. Some of the most valuable user research methods cost almost nothing, can be conducted in days rather than months, and produce insights that fundamentally change product direction. At Pepla, we build lightweight research into every project, not as a separate phase with a separate budget, but as an integrated activity that happens alongside design and development.
The 5-User Rule
Jakob Nielsen's research at the Nielsen Norman Group demonstrated that testing with just five users uncovers approximately 85% of usability problems in an interface. This finding, replicated across hundreds of studies, is the single most important insight for teams doing research on a budget.
Five users. Not fifty. Not five hundred. Five carefully selected participants who represent your target audience, each spending 30-60 minutes with your product, will reveal the vast majority of issues that matter. If you have a budget for five one-hour sessions — even compensated at R500 per participant — your total cost is R2,500 plus the time of one observer. That is a fraction of the cost of a single sprint, and the insights will prevent waste many times that amount.
The math works because usability problems are not evenly distributed. A few critical issues affect most users. After the third or fourth participant, you start seeing the same problems repeated. By the fifth, diminishing returns set in. If you have budget for more participants, run additional rounds of five after you have fixed the issues from the first round — you will find the next tier of problems.
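That diminishing-returns curve can be sketched numerically. Nielsen and Landauer modelled discovery as N x (1 - (1 - p)^n), where p is the probability that a single user encounters a given problem (roughly 0.31 in their data). The sketch below uses that published model; the exact percentages are properties of the model, not of any particular product.

```python
def problems_found(n_users, n_problems=100, p=0.31):
    """Expected number of usability problems uncovered by n_users,
    using the Nielsen-Landauer discovery model:
    found = N * (1 - (1 - p)^n). With n_problems=100 the result
    reads directly as a percentage."""
    return n_problems * (1 - (1 - p) ** n_users)

for n in range(1, 8):
    print(f"{n} users: {problems_found(n):.0f}% of problems found")
# 5 users lands at ~84%, and each user after that adds less
```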
Guerrilla Testing
Guerrilla testing is usability testing stripped to its essentials. You take a prototype or live product to a public place — a coffee shop, a co-working space, a university campus — and ask strangers to complete a few tasks while you observe. Sessions last 5-15 minutes. Compensation is often just a coffee.
The trade-off is demographic precision. You cannot control who you recruit, so guerrilla testing works best for products with broad audiences — consumer apps, public websites, general-purpose tools. For specialised products (B2B enterprise software, medical applications, financial tools), you need targeted recruitment even on a budget.
Guerrilla testing is most valuable during early design phases when you are testing navigation structures, information architecture, and core task flows. The feedback is directional rather than statistically rigorous — it tells you whether people can find things and complete tasks, not how they feel about your brand. That directional signal, obtained in an afternoon, is infinitely more valuable than no signal at all.
Card Sorting
Card sorting is a technique for understanding how users categorise and label information. You write each piece of content or feature on a card (physical or digital) and ask participants to sort them into groups that make sense to them.
In an open card sort, participants create their own group labels. This reveals how users think about your content — which items they consider related, what language they use to describe categories, and where their mental model diverges from your information architecture.
In a closed card sort, you provide predefined categories and participants place cards into them. This tests whether your existing navigation structure matches user expectations.
Card sorting can be done remotely using tools like Optimal Workshop or UXtweak, often for free at small sample sizes. Five to fifteen participants is sufficient for clear patterns to emerge. The output directly informs navigation design, menu structure, and content organisation — decisions that are expensive to change after development begins.
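A simple way to find the patterns in open-sort data is a co-occurrence count: cards that most participants place in the same group probably belong together in your navigation. A minimal sketch, with hypothetical card names and groupings from three invented participants:

```python
from itertools import combinations
from collections import Counter

def cooccurrence(sorts):
    """Count how often each pair of cards lands in the same group.
    `sorts` holds one participant's result each: a list of groups,
    where each group is a set of card names."""
    pairs = Counter()
    for groups in sorts:
        for group in groups:
            for a, b in combinations(sorted(group), 2):
                pairs[(a, b)] += 1
    return pairs

# Hypothetical results from three participants sorting four cards:
sorts = [
    [{"Invoices", "Payments"}, {"Profile", "Password"}],
    [{"Invoices", "Payments", "Profile"}, {"Password"}],
    [{"Invoices", "Payments"}, {"Profile", "Password"}],
]
pairs = cooccurrence(sorts)
print(pairs.most_common(3))  # strongest pairings first
```

With real data, pairs grouped together by most participants become candidates for the same navigation section.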
Tree Testing
Tree testing is the complement to card sorting. Where card sorting asks "how would you organise this?" tree testing asks "given this organisation, can you find what you need?"
You present participants with a text-only representation of your navigation hierarchy — no visual design, no branding, just labels — and ask them to find specific items. "Where would you click to change your password?" "Where would you find last month's invoice?" The text-only format isolates navigation structure from visual cues, giving you a pure signal about whether your information architecture works.
Tree testing is particularly valuable for validating the output of card sorting exercises. Sort first to understand user mental models, restructure your navigation based on the findings, then tree test the new structure to verify it works.
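Scoring a tree test can be as simple as a per-task success rate. A minimal sketch, assuming each participant's attempt is recorded as a pass or fail; the task names and results below are invented for illustration:

```python
def tree_test_success(results):
    """Per-task success rates from tree test results.
    `results` maps a task name to a list of booleans, one per
    participant: True if they ended at the correct node."""
    return {task: sum(hits) / len(hits) for task, hits in results.items()}

# Hypothetical results for two tasks, eight participants each:
results = {
    "change password": [True, True, False, True, True, True, False, True],
    "find last invoice": [True, False, False, True, False, False, True, False],
}
for task, rate in tree_test_success(results).items():
    print(f"{task}: {rate:.0%}")
```

Tasks with low success rates point at the branches of the hierarchy that need another round of restructuring.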
Heuristic Evaluation
A heuristic evaluation is an expert review of an interface against established usability principles. It does not involve users at all — instead, one or more evaluators systematically examine the interface using a set of heuristics (Nielsen's 10 Usability Heuristics are the most common framework).
This method is valuable precisely because it is fast and cheap. A single experienced evaluator can review an interface in 1-2 hours and identify dozens of potential issues. Three evaluators working independently will collectively catch 60-75% of usability problems. The expertise cost is real — you need someone who understands usability principles and can apply them — but the time and participant recruitment costs are zero.
Heuristic evaluation is best used as a complement to user testing, not a replacement. Expert evaluators catch issues that users might not articulate (inconsistent patterns, missing error prevention), while users reveal problems that experts might overlook (confusing terminology, unexpected workflows, real-world context issues).
Survey Design
Surveys are the most commonly used and most commonly misused research method. A well-designed survey efficiently gathers quantitative data from a large number of participants. A poorly designed survey produces misleading data that is worse than no data at all.
Key principles for useful surveys:
- Ask one thing per question. "How satisfied are you with the speed and reliability of the application?" asks two things. Split it into two questions.
- Avoid leading questions. "How much did you enjoy our new feature?" presupposes enjoyment. "How would you describe your experience with the new feature?" is neutral.
- Use validated scales. The System Usability Scale (SUS) is a 10-question standardised questionnaire that produces a reliable usability score. It is free, takes 2 minutes to complete, and provides a benchmark-comparable number.
- Keep it short. Survey completion rates drop sharply after 5 minutes. Prioritise ruthlessly — every question must earn its place.
- Include one open-ended question. "Is there anything else you would like to tell us?" often produces the most valuable individual insights.
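SUS scoring follows a fixed recipe: odd-numbered (positively worded) items score as the rating minus 1, even-numbered (negatively worded) items as 5 minus the rating, and the sum is multiplied by 2.5 to give a 0-100 score, where the commonly cited average is 68. A sketch of that calculation:

```python
def sus_score(responses):
    """System Usability Scale score for one participant.
    `responses` is a list of ten ratings, each 1-5, in question order.
    Odd items (index 0, 2, ...) contribute rating - 1; even items
    contribute 5 - rating; the total is scaled by 2.5 to 0-100."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten ratings between 1 and 5")
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([4, 2, 4, 1, 5, 2, 4, 1, 4, 2]))  # 82.5
```

A score of 82.5 sits comfortably above the 68 average; scores below 68 signal usability work to do.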
Interview Techniques
User interviews are the richest source of qualitative insight, but they require skill to conduct effectively. The goal is to understand user behaviour, motivations, and pain points — not to validate your existing assumptions.
Effective interview techniques include:
- Ask open-ended questions. "Tell me about the last time you..."
- Ask for specific examples rather than generalisations. "Can you walk me through exactly what happened?"
- Listen more than you talk. The 80/20 rule: participants should talk 80% of the time.
- Probe with follow-up questions. "Why was that frustrating?" "What did you do next?" "What would have been better?"
Remote interviews via video call are perfectly effective and eliminate travel costs. Record sessions (with permission) so the team can review findings without everyone attending live. At Pepla, we typically conduct 5-8 interviews per research round, each lasting 30-45 minutes.
Synthesising Findings
Research produces data. Synthesis transforms data into insights. The most effective synthesis technique for small-scale research is affinity mapping: write each observation on a sticky note (physical or digital, using tools like Miro or FigJam), group related observations into themes, and label the themes. Patterns emerge quickly — usually within an hour of collaborative synthesis.
The output should be actionable. Not "users find the navigation confusing" but "users expect account settings to be accessible from the profile icon in the top right, not from the hamburger menu." Specific, actionable insights lead to specific, actionable design changes.
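When the sticky notes are digital, the affinity-mapping output reduces to a theme-to-observations grouping that can be sorted by weight. A minimal sketch, with invented theme labels and observation notes:

```python
from collections import defaultdict

def affinity_map(observations):
    """Group tagged observations into themes, ordered by how many
    notes support each theme. `observations` is a list of
    (theme, note) pairs produced during synthesis."""
    themes = defaultdict(list)
    for theme, note in observations:
        themes[theme].append(note)
    return sorted(themes.items(), key=lambda kv: len(kv[1]), reverse=True)

# Hypothetical observations tagged during synthesis:
observations = [
    ("settings discoverability", "P1 looked for settings under profile icon"),
    ("settings discoverability", "P3 tapped profile icon first"),
    ("settings discoverability", "P4 never opened the hamburger menu"),
    ("checkout friction", "P2 missed the checkout button"),
]
for theme, notes in affinity_map(observations):
    print(f"{theme}: {len(notes)} observations")
```

The heaviest themes, backed by their verbatim notes, become the prioritised findings you present to stakeholders.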
Presenting to Stakeholders
Research findings must be communicated in formats that stakeholders will engage with. A 40-page report will not be read. Instead:
- Lead with the top 3-5 findings. Prioritise by business impact, not by how interesting they are to the research team.
- Include video clips. A 30-second clip of a real user struggling with your interface is more persuasive than any number of bullet points.
- Connect findings to business metrics. "Users cannot find the checkout button" becomes "this usability issue is likely contributing to our 68% cart abandonment rate."
- Recommend specific next steps. Research without recommendations is incomplete.
The best user research is not the most rigorous or the most expensive. It is the research that actually gets done, produces actionable insights, and changes how the team builds the product. Start small, start now, and iterate.
You do not need a research lab, a dedicated researcher, or a six-figure budget. You need five users, a clear set of tasks, and the discipline to listen. The return on that minimal investment will reshape how your team thinks about the people who use your software.