Good Design is Objectively Good.
Whether you agree with this sentiment or not, we all inevitably engage with clients or stakeholders who believe otherwise. So, how do you move past personal opinions—and paralysis by analysis—to achieve consensus and make progress?
First, let’s establish a shared understanding/definition of design in general:
- Design is intentional
- It has a core objective of defining and solving problems. At its very least, it communicates effectively and efficiently. At its best, it improves people’s lives and the human condition.
- Design is functional
- It performs an intended and expected function.
- Design is aesthetic
- It’s not only useful, but beautiful and delightful. Its form matches its function. Design can be art, but that’s not its end goal.
- Design is measurable
- It can be qualified and quantified against an objective. This is perhaps design’s biggest strength—we know when we’ve got it right and when we’ve got it wrong.
If you embrace frequent validation, you’ll be more confident in defending the rationale behind design decisions and will ultimately ship better work.
So why don’t we validate more often?
Timing is Everything
It’s not easy to sell clients and stakeholders on weeks of research, discovery, and user testing. Most are understandably concerned about exceeding timelines and budget. It’s your job to educate them on the value of this critical part of the design process.
If you fail to buy more time, do the validation anyway. Just do it faster. Do more of the heavy lifting early in a project to free up a day or two for focused validation efforts near the end. When a client raises a concern with your design, especially if you disagree, let them know that you’d like to validate whether the concern is legitimate as quickly as you can. Do it because it’s valuable, but more because it’s responsible.
Lean in. Learn as quickly and as cheaply as you can. Embrace guerrilla validation.
No, not gorilla. [Guerrilla](https://www.dictionary.com/browse/guerrilla "guerrilla"). Scrappy. Quick. Successful.
In other words, guerrilla validation is not:
- Weeks worth of research
- Think in days or hours, not weeks.
- True user testing
- User testing means showing prototypes and doing focused usability and solution/value testing with actual customers of the product or service. It’s often longer, filmed, and has an observer taking notes. This is not that.
- Super clean and organized
- Be okay with messy. Your prototype likely won’t be perfect. You may not have all your tasks and objectives documented; you may not have a clear participant list. These are not bad things.
- Mindful of context
- Good user testing carefully considers the context in which the testing will take place so that the results are as relevant as possible: the situations users are in, their mental state, their devices and software, and more. Guerrilla validation rarely has that luxury.
Different Types of Guerrilla
It’s important to note that all of these methods could be done in under a day and shouldn’t cost more than your time in setup and some gift cards to reimburse people for their time (more on that later).
- Simple reaction testing
- Show a screen to a participant and ask them to simply tell you what they think they’re seeing and why that is. This allows you to get some quick “gut checks” on what your work is communicating.
- General problem discovery
- Find trends in usability problems across the board by having participants click through a prototype while trying to accomplish a few tasks. See how many they can accomplish while gathering useful feedback along the way. This method is proactive and covers a lot of ground, and it’s especially useful when stakeholders aren’t convinced a flow is going to work well.
- Preference/competitive testing
- Create a series of screens or simple prototypes, have the participant look at and/or use 2-3 options or versions (sometimes a competitor’s version), and see which they prefer and why. This is very useful when stakeholders are making assumptions about which option might perform better, especially if it’s their option.
- Findability & learnability testing
- Create a series of screens or simple prototypes and ask participants pointed questions about how they would do something or where they would go to do it. This helps validate that the UI has clear signifiers about how the interaction works.
- Recall testing
- Display a screen or simple walkthrough for just 5-10 seconds, then ask your participants what they remember about it, what stood out to them, and why. This is quite useful for validating first impressions, particularly because it emulates the speed at which most users browse and make decisions. It can also be good for validating that brand collateral and content communicate well.
Prepping for Success
- Focus on speed
- Try not to spend more than a day on this. Spend your morning planning, your afternoon talking to participants, and wrap it up with some quick takeaways.
- Know what you’re validating
- Clearly identify what you want to validate and the specific ways you’ll gather data that gives you signal one way or the other.
- Using a clickable prototype is ideal
- This is a good way to gain better feedback on interaction and design flow. Depending on what you’re trying to validate, you could also simply show one or two screens.
- Use the right tools for speed
- As we established, the name of the game is speed. With that in mind, use whatever tool allows you to create a prototype or series of screens the quickest and easiest: use the tool you know. This could be Sketch, it could be InVision, it could even be Keynote (way undervalued for prototypes).
- Be objective
- Get comfortable with being wrong—and ready to let go of your favorite concepts. This isn’t about your personal preferences.
- Test interaction over solution/value
- Because of the nature of guerrilla validation, it’s much easier to test interaction and visual design than it is to get accurate, relevant signal on value (does this actually solve a problem?). Unless, of course, you’re able to test with actual users; that will always give you better signal.
Finding Your People
- Stick to 5-6 participants
- With user testing, more participants is not always better. The same trends and patterns usually start to emerge by the fourth or fifth participant, so you’ll see diminishing returns after six or so.
- Stay in the office (good)
- The quickest and dirtiest method is to simply show a few screens to your coworkers, particularly those who have no context or understanding of your project. You could do this with designers and non-designers. Each group will yield interesting results.
- Get out of the office (better)
- Go to a coffee shop, the library—anywhere you can find a diverse group of people. Ask them if you can have 5-10 minutes of their time in exchange for a snack or beverage.
- Tap into your (and your friends’) network
- Quickly recruit people who best resemble your users. While this approach might feel less daunting than talking to strangers, be mindful that it could take longer to pull off.
Running the Test and Getting the Data
- They are not the test subjects
- Let participants know you’re testing the design, not them. There are no wrong answers, and they should be encouraged to be completely honest.
- Repeat after them
- “I see you clicked on that link” or “I notice you skipped over that section…” This will ideally prompt them to vocalize their thoughts.
- The struggle is good
- Try not to help. If the silence or awkwardness becomes unbearable, feel free to ask a question to move it forward, but let them work through any challenges that pop up.
- Duration varies
- Time spent with participants could range from just a few minutes to well over 30; it depends on what you’re testing and how the conversation unfolds. Since guerrilla testing is all about speed, if you’ve got the data you need or the conversation veers off topic too often, feel free to end the session early and thank them for their time.
- Take notes (but be sly about it)
- Make sure you’re not distracting your participant. Ideally, make mental notes during the test, then jot them down immediately afterward.
- Plan to reimburse
- You should always thank participants for their time, but even better, say thanks with a simple gift (for example, a $5 gift card). Give it to them after the test so their actions aren’t reward-driven.
Recap and Move Forward
- Rule of three
- Try to focus on the top three trends and learnings. Anything beyond that is likely less common across participants and potentially less valuable.
- Share the data
- Obviously you’ll want to share this data with your stakeholders in a way they can understand and rally behind. Be sure to include the goal of the validation sessions, the method and approach, the results, and your top three takeaways.
- Plan next steps (iterate)
- This could mean anything from a casual follow-up conversation to a drastic overhaul of your original designs. Act accordingly.
Alternatives to IRL
There are a handful of online tools that can help offset some of your own time when it comes to talking directly with participants and getting signal. These tools all cost money, though, so be prepared to pony up a couple hundred bucks to get enough signal to make decisions. Here are two tools that I’ve found most useful:
- A well-known tool, usertesting.com is useful for self-moderated tests with a handful of different types of tasks and activities built in. Not a cheap option, but if you have the budget, go for it.
- This tool is really built for guerrilla validation, as its tests are lightweight and focused on a few key formats: preference testing, first-click testing, five-second testing, etc. If you go this direction, it makes more sense to test with more people, probably at least 20-30, though ideally closer to 100 (which could cost over $100).
- While I don’t have direct experience using this tool, I have heard from several people I trust that it’s been helpful in validating their work. They have a free personal plan, with paid tests costing $35 each.
Disclaimer: It’s Not All Unicorns and Butterflies
Guerrilla validation is a very useful way to get quick feedback. That said, be mindful of common challenges:
- Stakeholder buy-in
- Some may doubt the results if your testing and overall process isn’t rigorous enough, or if it’s misaligned with their expectations. Help them understand why guerrilla validation is always better than no validation.
- The data is not always legit
- While you’ll come away from this process with a fair amount of qualitative data, that data may not be reliable enough on its own to drive significant product decisions. Again, don’t take the results as gospel, but use them as valuable input for steering the direction of the product design.
- It can get awkward
- The more you do this, the easier it is. Try to pinpoint individuals that don’t look super busy, and simply be your friendly self. Most people are very willing to help, and often enjoy the experience.
- Your mileage may vary
- Your first or second attempt at guerrilla validation may not yield the precise results you expect, or exactly what’s documented here. Make it work for your purposes.
Hopefully this guide helped you understand the value of quick design validation. We promise that as soon as you do this once, you’ll see the payoff. And it gets easier and more exciting the more you do it.
Now, go out there and solve some problems!