How to run an unmoderated remote usability test

November 4, 2019

Read time: 5 minutes

What is an unmoderated remote usability test?

In this article, I'll explain how to run an unmoderated remote usability test, or URUT for short. This is a technique that evaluates the usability of an interface or product via an online testing platform such as usertesting.com. It's similar to in-person usability testing, except that participants complete tasks in their own environment without a facilitator present. There are two broad methods for collecting participant behavior, and it's essential to understand the difference. The first is via video recordings of participants interacting with an interface. The second is via clickstream data, much like web analytics.

So why use unmoderated remote usability testing?

Firstly, it's valuable when you need a large sample and/or a high degree of confidence in your findings. An unmoderated remote usability test can also be useful when your audience is geographically dispersed or hard to access. Because it's conducted online, much like a survey, it can be taken in the participant's own time and at a location of their choosing.

It's also valuable when speed is crucial: an unmoderated remote usability study can be run entirely in a couple of days. It's valuable where a specific environment or context is critical, and also when budgets are tight, since an unmoderated remote usability study can be relatively inexpensive. It's also helpful in cases where you need to compare two or more products or interfaces. An unmoderated remote usability study is perfect for benchmarking studies comparing either competitive products or different iterations of your own product. The ability to capture large sample sizes means that statistically significant differences between interfaces can be identified.
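
As a rough illustration, here's a minimal sketch of how you might check whether a difference in task completion rates between two interfaces is statistically significant, using a simple two-proportion z-test. The completion counts below are made up purely for illustration:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical benchmark data: task completions out of total attempts
# for two interface versions (numbers are illustrative only).
success_a, n_a = 62, 80   # version A: 62 of 80 participants completed the task
success_b, n_b = 45, 80   # version B: 45 of 80 participants completed the task

p_a, p_b = success_a / n_a, success_b / n_b
p_pooled = (success_a + success_b) / (n_a + n_b)

# Two-proportion z-test for a difference in task completion rates.
se = sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
z = (p_a - p_b) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed

print(f"Completion: A={p_a:.0%}, B={p_b:.0%}, z={z:.2f}, p={p_value:.3f}")
```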

So how do we run an unmoderated remote usability test?

There are a couple of things to consider before you start. Firstly, define the project objectives and identify your research questions. Setting the objectives of the project will help with designing the study and picking the right tool.

Identify your sample

Ideally, participants are representative of the product's audience. You can source participants through a database of existing customers, by running an intercept on a website, by utilizing your social media presence, or by paying for a sample via a pre-existing panel. The tasks you develop need to be clear and provide enough detail for participants to complete them on their own. Try to include any information they would require to complete the task. For example, if a task requires credit card details, providing fictitious card details will be necessary. Also, look at using questions in addition to tasks to collect further information; questions can be used to verify that tasks have been completed correctly.
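
One simple way to keep tasks and their supporting details organized before entering them into your testing platform is to draft them as structured data. The sketch below is purely illustrative; the task, card details, and questions are all hypothetical:

```python
# A hypothetical task specification, drafted before loading it into a testing platform.
# All names, card details, and questions are made up for illustration.
tasks = [
    {
        "id": "purchase-flow",
        "instructions": (
            "You want to buy a pair of running shoes as a gift. "
            "Find a pair under $100 and complete the checkout."
        ),
        # Fictitious details the participant will need to finish the task.
        "provided_data": {
            "card_number": "4111 1111 1111 1111",
            "expiry": "12/25",
            "cvv": "123",
        },
        # A follow-up question to verify the task was completed correctly.
        "verification_question": "What was the order confirmation number shown at the end?",
        # A post-task ease rating (1 = very difficult, 7 = very easy).
        "ease_question": "Overall, how easy or difficult was this task to complete?",
    },
]

for task in tasks:
    print(task["id"], "-", task["instructions"])
```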

Include questions after each task to measure ease of task completion. Questions can also be asked after the test as a whole to gauge an overall assessment of the experience. And finally, open-ended questions allow participants to expand on their answers and give you more detail. Piloting the study with either a subset of participants or in a preview mode will allow issues with the prototype, technology, tasks, or questions to be sorted out.

When you are testing, it's essential to monitor the data and be available to offer help to participants. Monitoring the data will ensure everything is working as planned and that you are receiving data that meets the objectives of your study. That is another reason it is essential to run a pilot: to make sure you're getting data that meets your goals.

Once you've collected your results, it's time for analysis.

Start by looking at some overarching metrics, such as overall task completion, the System Usability Scale (SUS) score, and customer satisfaction. These will give you a sense of the overall performance of the product.
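
For example, the SUS score is calculated with a standard formula: odd-numbered (positively worded) items contribute their rating minus 1, even-numbered (negatively worded) items contribute 5 minus their rating, and the sum is multiplied by 2.5 to give a 0-100 score. Here's a minimal sketch with made-up responses:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 item responses.

    Odd-numbered items are positively worded (rating - 1);
    even-numbered items are negatively worded (5 - rating).
    The summed contributions are multiplied by 2.5 to give a 0-100 score.
    """
    assert len(responses) == 10
    total = 0
    for i, rating in enumerate(responses, start=1):
        total += (rating - 1) if i % 2 == 1 else (5 - rating)
    return total * 2.5

# Made-up responses from one participant to the ten SUS statements.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
```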

Next, look into the individual tasks and identify those that are causing issues and why. Watch videos of specific tasks to observe behavior patterns and identify the elements of the interface that are causing the issues. For clickstream services, focus on the combinations of pages visited during the tasks to identify behavior and the pages where the issues have occurred.
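
If your platform lets you export clickstream data, a simple aggregation can highlight which pages participants visit most often during failed attempts at a task. The sketch below assumes a hypothetical CSV export format; the file name and column names are not from any specific tool:

```python
import csv
from collections import Counter

# Assumed export format: one row per page view, with participant, task, page, and outcome.
# Column names are hypothetical; adapt them to your platform's actual export.
page_visits_on_failed_tasks = Counter()

with open("clickstream_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["task_id"] == "purchase-flow" and row["task_outcome"] == "failed":
            page_visits_on_failed_tasks[row["page_url"]] += 1

# Pages visited most often during failed attempts often point at where the trouble is.
for page, visits in page_visits_on_failed_tasks.most_common(10):
    print(f"{visits:4d}  {page}")
```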

Now that you have an idea of how to run an unmoderated remote usability test, here are some final tips.

Choose the testing platform after you've identified the objectives of your study; it is crucial to select a tool that is fit for purpose and will support those objectives.

Set clear expectations for participants.

Obtaining useful data depends on participants understanding what is expected of them. Remember, participants won't receive any assistance from a facilitator during the study, so it is crucial to ensure that tasks are clear and easy to follow. Try to avoid questions that start with "do" or "does" and change them to "how" or "what" questions. If a question can be answered with yes or no, it's generally not a good question, and participants won't elaborate even if you ask them to.

Avoid bias: randomize the order of tasks and pay attention to task wording.
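
Here's a minimal sketch of per-participant task randomization, assuming you can control the task order yourself or configure it in your platform; the task IDs are hypothetical:

```python
import random

task_ids = ["find-product", "purchase-flow", "contact-support", "update-profile"]

def task_order_for(participant_id: str) -> list:
    """Return a shuffled copy of the task list, seeded per participant so the
    assignment is reproducible when you analyze the results later."""
    rng = random.Random(participant_id)
    order = task_ids.copy()
    rng.shuffle(order)
    return order

print(task_order_for("participant-001"))
```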

Keep participants engaged: avoid participants dropping out of your study by keeping it interesting and short.

Unmoderated remote usability testing is a technique that can offer quick, inexpensive, and robust usability testing. It is particularly valuable for benchmarking and context-sensitive studies. It's a great tool to have in your bag of research techniques. Exploring the different tools on free trials and experimenting with the technique is the best way to learn and develop expertise.

If you have any questions about remote usability studies, feel free to leave them in the comments, and I will try to get to every one of them. If you need usability testing services, reach out and I'll see how I can help!
