What are you testing?
Although you’ll be testing for it, usability should be considered a basic requirement: something to build from, not work towards. Beyond that, you’re ideally testing whether your users want to use your product, find value in its functionality and, sometimes, even enjoy using it!
Whether your sprints are two weeks long or not, the testing process should run in parallel to the sprint, sharing key milestones that feed into each work-stream. This is an intensive workflow that may even require a dedicated member of the team, but testing frequently is more cost-effective in the long run and allows you to carry learnings from one sprint to the next, helping to mitigate the diminishing returns that come from repeating the same experiment.
Prioritising changes that result from testing should become part of sprint planning and backlog refinement, and playing back your testing results should pair with the sprint demo. These milestones allow the two work-streams to sync up with each other.
The chart below illustrates the sprint and testing workflows and where they intersect, with everything trickling out towards frequent releases.
Shouldn’t we test it before we build it?
Validating design with testing before allowing it to be built introduces too much waterfall into an agile process — and that kind of bottleneck risks impacting cadence. The cost of maintaining momentum is the potential for some development debt, as built solutions may require immediate refactoring. These changes are less likely to slow delivery as there is the opportunity to prioritise them as part of the sprint process.
During the early sprints testing with clickable prototypes may be necessary, but it shouldn’t be a blocker for development.
Because of this, testing should begin in the second sprint once we have something of substance to validate.
When should we stop testing?
You should be testing right into the last sprint, and beyond. Even if the results go into the backlog it’s useful to document issues and prioritise them for when you’re able to address them.
At Etch we’re strong believers in continuous improvement, with offerings dedicated to this mindset. It’s how you evolve a product past MVP and ensure you’re delivering as much value to your customers as possible. Listening and responding to user feedback is a cornerstone of this improvement.
The 5 steps to testing success
1. Identify your objectives
Does the functionality you’re testing satisfy the user story?
Although we’re referring to ‘usability testing’, there are other metrics you can test. For example, you might be interested in how a visual design or brand resonates with its intended audience, or you might want to use the test to explore competing approaches. A higher-converting design isn’t necessarily more ‘usable’.
Desire paths are important to capture during testing too — are people achieving their goal via a different or unexpected method than the one you intended? Occasionally this demonstrates a need to educate the user; sometimes, though, it’s better to adapt to their behaviour than to fight against it.
2. Prepare the sessions
Each session should be part interview, part task. Even the most task-heavy test benefits from the context provided by some basic supporting questions, whether they be demographic (age, gender, location) or usage (device, preferences, habits).
Outline your test in bullet-list format. Remember, though, that the outline is there for consistency: you should wander off the script and adapt based on the participant’s responses.
We use Figma at Etch, which allows us to collaborate remotely and create clickable prototypes; these are useful when we’re not able to test on a staging or test environment.
- Remind your participant that you’re not testing them, but the product; they can’t make mistakes, only illuminate issues.
- Introduce yourself as a researcher and distance yourself from the product you’re testing, where appropriate.
- If you’re recording the session, get the participant’s permission and tell them how the data will be used (GDPR yo!).
3. Find and book participants
Although 5 participants doesn’t sound like a lot, it’s enough to hear most of what you need before you start running into diminishing returns. With continuous improvement you’ll have a new opportunity to test every sprint — and with changes come new observations and fresh feedback.
Participants are generally difficult to find, so treat them as a precious resource — don’t be in a rush to use them all at once!
Tools like Calendly and HubSpot can make booking participants a breeze. They allow you to plug in your calendar and have participants book themselves into a slot based on the criteria you set. This saves a lot of back and forth, which can be a time-consuming (costly!) exercise.
4. Run the sessions
Video conferencing software is ideal for running remote sessions, which right now are probably your only option! We use Zoom, which allows us to record picture-in-picture direct to the cloud, along with automatic transcription that’s useful for finding conversations later. Let your participants know in advance how you’ll be running the sessions, so that they use the right device and can meet your expectations. Tell them how you plan on capturing the video too; consent is important.
Face-to-face sessions are often ideal as they remove some technology from the equation, which is frequently the biggest hurdle to overcome during the test. However it’s important that the participant is comfortable and in a familiar environment. One to consider when we're allowed to interact with one another again.
All of this is so that you can really engage with the participant and observe their behaviour. A user will sometimes give a positive verbal response while showing visible confusion, or claim something is difficult while completing it with ease. They’re not trying to trick you, and that information is valuable, but being able to observe the user is critical to reading between the lines of the feedback you’re receiving. It’s also why it’s important for you to parse and compile the feedback rather than delivering it verbatim.
Capturing notes is a matter of preference, but consider how they contribute to the wider workflow. Trello and Confluence are often suitable digital solutions, with the trusted Post-it as a more lo-fi option. We’ve even created bespoke sheets in the past to help us capture insight quickly and easily out in the field.
If you can get hold of extra resource, a second person to take notes is very useful, as this allows you to focus entirely on the interview and on observing behaviour. Watching the videos back while writing up the results can be a good way to keep the information fresh.
What about unmoderated tests?
Rarely a first choice, unmoderated tests can be useful when quantitative data is required, but they lack the detail and insight of a test you’re directly involved in.
5. Prioritise and deliver the results
This is best achieved using an effort/impact matrix or a RICE (Reach, Impact, Confidence, Effort) table.
Make this a collaborative exercise with stakeholders and team members to allow everyone to feed into the prioritisation — using a decider to make final decisions on placement.
Measure complexity consistently with your sprint process, e.g. t-shirt sizing or sprint points, whatever you’re already using. This way the output of the testing cycle can feed directly into sprint planning and the cycle can begin anew.
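As a rough illustration of how a RICE table turns findings into a ranked backlog, here’s a minimal sketch in Python. The findings, reach numbers and effort estimates are entirely hypothetical; the formula itself (Reach × Impact × Confidence ÷ Effort) is the standard RICE calculation, with effort expressed in sprint points so the output slots straight into planning.

```python
findings = [
    # (name, reach: users affected per sprint, impact: 0.25-3 scale,
    #  confidence: 0-1, effort: sprint points) -- all values hypothetical
    ("Checkout button hard to find", 500, 2.0, 0.8, 3),
    ("Signup copy unclear",          200, 1.0, 0.5, 1),
    ("Filter resets on back",        350, 3.0, 1.0, 5),
]

def rice_score(reach, impact, confidence, effort):
    """RICE = (Reach x Impact x Confidence) / Effort."""
    return reach * impact * confidence / effort

# Highest score first: the top of this list is the top of the backlog.
ranked = sorted(findings, key=lambda f: rice_score(*f[1:]), reverse=True)
for name, *factors in ranked:
    print(f"{rice_score(*factors):7.1f}  {name}")
```

Because effort sits in the denominator, a cheap fix with modest impact can outrank an expensive one, which is exactly the trade-off the effort/impact conversation with stakeholders is meant to surface.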
Interested in what we do at Etch Products? Our mission is to partner with leaders and innovators who want to activate their people, untangle process and bring ideas to life. Etch is the product team that drives behaviour change, positive business outcomes and delivers radical impact. Getting you further faster. To find out more about our offering or any subjects discussed, drop an email to Jamie, Head of Etch Products.
Hero image by Lidya Nada