Usability testing for the real world
How to ensure that your usability project is successful, makes an impact, and isn’t perceived as a theoretical research exercise
Usability research is complex. How people feel about it – whether it's respected, funded, and effectively used within an organization – varies widely: sometimes there's a shared point of view, but often there is not.
As part of the process of engaging Crux Collaborative for our usability research services, many of our clients ask if there's a way to ensure that a usability project will be successful. While there's no guaranteed magic formula for success, there are several "do's and don'ts" followed by organizations where usability research is conducted effectively.
The staff at Crux Collaborative has been conducting usability tests for nearly two decades. We’ve learned a lot over the years about what makes a usability project succeed and what can make it fail.
Here are the “do’s and don’ts” for project success we’ve identified from our collective experience of working on usability projects:
- DON’T use usability as a way to prove yourself right or prove someone else wrong. If you’re trying to win an organizational or political battle, usability research is the wrong weapon.
- DO use usability research to identify how to improve an experience or identify whether a hypothesis (new idea) proves effective, or if it can be optimized to perform even better.
- DON’T ask questions that can’t be answered in a usability lab. Or rather, don’t ask questions that can’t be answered by watching users attempt to complete tasks. This is another key tenet of successful usability projects. If you want to know whether the new blue of the logo “resonates” – qualitative research with a small sample set is NOT the way to go.
- DO make sure you identify meaningful research objectives. Can users find the location of the store nearest to them and find directions for how to get there? That question, a usability study can answer!
- DON’T go after more data than you can act upon. This one can be particularly hard to stick to, but it is critical. Often when usability research has a bad rap within an organization, it’s because one or more studies have been completed that no one has actually done anything with. The report has either been too dense (or boring) to read, or not actionable enough.
- DO make sure you limit the scope of the project to what the team can implement in the next year. Projects that are looked back upon as failures often produce an un-prioritized list of items the facilitator or research team has identified as being “wrong” – without indicating what would make them “right” – making the report, and the results, fairly useless.
- DON’T skimp on the recruiting process. It can be easy to want to make compromises here: “We don’t want to ‘bother’ our end users!” “We can’t get a list of customers fast enough.” “It’s going to take too long to find the right people.” “We don’t want to offer a large gratuity, because we don’t want people to participate just for the money.”
- DO allow the time and budget to recruit the right audience. Even if it will take longer. Even if you have to pay them a bigger gratuity to show up. It’s worth it. No really, it’s absolutely worth it. The more specialized or specific the target audience, the more critical it is that you have the right audience at the study.
- DON’T limit participation to just one or two people. Participating in user research shouldn’t be a VIP-only event. It may seem like a hassle to include people from project and product management, marketing, and technology. It might seem daunting to go through the effort of gaining consensus and approval from each stakeholder and department – but it’s worth it.
- DO include team members from all relevant departments throughout the duration of the project. It takes a team of people to create and maintain a site, and it takes a team of people to improve it. If you don’t have access to the right people at the right time, then you could miss a critical piece of information. There may be a legal or regulatory restriction you don’t know about, a data constraint, or a new campaign that will impact timing and change the home page – rendering your current test plan useless. Sure, it can take more time to involve multiple team members… but do it anyway.
- DON’T write test plans that lead the user through the site in an artificial way. Don’t start by telling participants what to do. Instead, ask them what they would do, and let them show you how they would use the site or prototype to accomplish their goal. If you have recruited well, most participants will complete the task without you having to ask or lead them to it. If they don’t, you can always redirect them.
- DO create a test plan that is fluid and allows you to let the user take the lead. After you have decided what questions you want to be able to answer when the research is complete (your research objectives), write a test plan that allows participants to take the lead. This will enable you to gain valuable insights about how they think about your site, product, or service without having tainted their approach or perception.
- DON’T let members of the project team skip the real-time observation. Sure, there are always session video recordings, but let’s be honest… other than the facilitator and the analysis team, who really reviews the usability videos after the fact? Almost no one. And even when they do, it doesn’t have the same impact as seeing it in real time with the rest of the team.
- DO require attendance from the project team on research days. Nothing can replace the experience of watching an actual user try to use a website, app, or prototype. And having the entire team be present and see (and hear) the same things is critical when it comes to understanding findings and implementing recommendations.
- DON’T accept analysis that comes from a single perspective. A common factor of usability projects that fail is the expectation that the facilitator can and should identify meaningful and actionable recommendations without input from the team. This approach is weak because the facilitator represents just one of many relevant perspectives. Many perspectives are required in order to ensure that the analysis is accurate, and not rooted in a lack of contextual understanding.
- DO confirm that the report reflects input from the full team. The facilitator of the usability study has valuable experience and a relevant perspective to share – but ultimately, it is the team who understands the technology constraints, the brand message, and marketing approach, as well as the organizational and contextual history as to why certain decisions around content and functionality were made.
- DON’T accept a report that only identifies problems. Nothing can be more demoralizing, or cause a project to stall more quickly, than receiving a long list of everything that’s wrong with the hard work completed to date. Especially when that’s all it is, and it doesn’t include any realistic or meaningful suggestions for how to address the problems that are identified.
- DO make sure the report includes the strengths as well as the weaknesses. Reading a report that doesn’t include recommended improvements is tantamount to reading a list of complaints, and no organization or team is going to have an ongoing appetite for the type of project that uncovers myriad issues and offers no solutions. Plus, having a clear understanding of what IS working well is valuable, too. (If it ain’t broke, don’t fix it.)
- DON’T accept a report that presents unrealistic or un-implementable “solutions”. Almost as bad as a report that presents no recommendations is a report that presents recommendations that cannot be implemented. This is why it is critical to have the folks who manage the site on a day-to-day basis present during both the user research as well as the initial analysis sessions. And this is why recommendations are the result of a collaborative effort between the facilitator and the project team.
- DO insist on collaborative, realistic solutions. Our team may recommend something that sounds simple, such as renaming a field, without understanding the data implications. But, we are happy to brainstorm with you to identify an alternate solution when we gain a better understanding of your real-world limitations. When your technology team is present, they can help the research team come up with a recommendation that meets the needs of both the business and the end users.
Finally (and for the record): a usability report should be easy to read and understand. Successful reports are concise, prioritized, and visual for easy skimming and comprehension.
As you can see, if there is one theme woven throughout each of these “do’s and don’ts”, it’s collaboration. Whether you are hiring us to conduct research with you, or whether you are doing it within your own organization, it is critical to leverage the strengths of the whole team.
More than any other factor, collaboration is the key to our success as your research partner. When the clients who hire us are engaged and available to collaborate throughout the span of the project, we can work together to plan, design, conduct, and analyze usability research that is effective, actionable, and results in measurable improvements that can be implemented quickly.