Conducting research on your visual design concepts can be a very useful way of understanding user preferences. However, the data that determines your outcome is only as good as the quality of your testing.
Recently, we conducted visual design testing for a client whose website we are redesigning. We had conducted two rounds of research prior to this testing—including general background research and wireframe testing—and as a result felt we had a solid grasp of the target audience’s likes and dislikes. So imagine our surprise when the concept we thought would win ended up in last place.
Developing Quality User Research for Visual Design
Having your projected first-place winner fall to last place could have been a huge letdown, not to mention very confusing. But because we had designed the testing to avoid personal biases such as “I like blue better” or “I like sunsets but I hate the beach,” we were able to gather focused data about the design and layout. And by knowing which layout works for users, we can extend those design principles across the site in a more detailed way. Here’s how we did it:
A Level Playing Field
To ensure that participants weren’t zeroing in on a design simply because they liked one image over another, we used the same imagery on all four concepts. This forced participants to evaluate the layout and presentation of the content rather than the imagery, which can otherwise dominate people’s reactions. But because imagery is such an important part of the equation, we ran a separate exercise in which we presented a series of discrete images and asked participants to evaluate them in the context of a website similar to the one we were redesigning. Participants told us whether they found each image positive, negative, or neutral. In this way, we were able to identify a set of images to incorporate into the winning design direction.
Exercise 1: Quick Exposure Exercise
In our first exercise, we started on a laptop and showed each of the four concepts to each participant for ten seconds. After each quick exposure, we asked the participant what they recalled about the concept. This is an excellent way to gain feedback about first impressions and to determine what stands out in a particular layout.
Exercise 2: Design Reaction Exercise
Next, we asked the participant to select three words from a grid of potential words to describe how the design made them feel. Because these feelings were elicited after only ten seconds of exposure to the design, we were able to capture a true gut reaction. (More on the role these words played in our outcome later…)
Potential Design Reactions:
Exercise 3: Ranking Visual Design Preference
After seeing each of the concepts for a brief period, the participants were asked to view them all as side-by-side printouts on boards. After spending a few minutes looking at the designs, they were then asked to rank them in order of their preference from 1 to 4.
Next, we moved back to the computer, where we asked a series of questions about the participant’s chosen design, including:
- Why did you make the choice that you did?
- What are the three things that you like most about the design that you selected?
- Does seeing the design on the computer change your mind about your choice? If so, why?
- Does seeing the design on the tablet change your mind about your choice? If so, why?
- How do you feel about the font size? (If prompting is needed: Is it too small? Too big? Or just right?)
Getting the answers to these questions starts to dig into the designs in a deeper way, providing qualitative data about what works and what doesn’t. It also helps to uncover whether or not device preference affects how the user interacts with and perceives the design.
Exercise 4: Design Reaction Exercise, Part Deux
Remember that grid of words we used in Exercise 2? We brought it back and asked the participant to choose three words to describe how their chosen design made them feel now that they’d had more time with it. Of the 20 participants we interviewed, only one circled the same words for their chosen design during both reaction exercises—after the initial quick reaction and when they had spent more time with the design. This speaks to the power of first impressions, whether accurately formed or not.
Gathering Focused Data from User Research
So how did we focus this data to come to a conclusion? First, we took the visual design rankings and assigned a point value as follows:
1st place = 6 points
2nd place = 4 points
3rd place = 2 points
4th place = 0 points
Then we applied the point values to all of the rankings and tallied them to determine an overall design score (the higher the score, the better). But as you can see, we had a tie for second place.
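The tallying step above is simple to sketch in code. This is an illustrative example only: the participant rankings and concept names below are hypothetical, not the study’s actual data.

```python
# Hypothetical rankings: each inner list is one participant's ordering of
# the four concepts, from 1st place down to 4th place.
rankings = [
    ["Concept 2", "Concept 4", "Concept 3", "Concept 1"],
    ["Concept 2", "Concept 3", "Concept 4", "Concept 1"],
    ["Concept 3", "Concept 2", "Concept 1", "Concept 4"],
]

# Points awarded by rank position: 1st = 6, 2nd = 4, 3rd = 2, 4th = 0.
POINTS = [6, 4, 2, 0]

# Tally an overall score per concept across all participants.
scores = {}
for ranking in rankings:
    for position, concept in enumerate(ranking):
        scores[concept] = scores.get(concept, 0) + POINTS[position]

print(scores)
```

With three participants the sample tally gives Concept 2 the clear lead, while the gap between the remaining concepts is exactly the kind of near-tie the point spread is meant to surface.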
Not to worry: we have more data to draw on, including all of the words that participants circled in the Design Reaction Exercise. These provide excellent context for each participant’s preference beyond the ranking itself.
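One simple way to turn those circled words into a tiebreaker is to classify each word as positive or negative and compute a net sentiment per concept. The word lists and reaction data below are hypothetical placeholders, not the words from the actual study grid.

```python
# Hypothetical classification of grid words (assumed, not from the study).
POSITIVE = {"clean", "fresh", "trustworthy", "inviting"}
NEGATIVE = {"busy", "dated", "boring", "cold"}

# Hypothetical circled words, pooled across participants, per concept.
reactions = {
    "Concept 2": ["clean", "fresh", "inviting", "clean", "busy"],
    "Concept 3": ["dated", "boring", "clean", "cold"],
    "Concept 4": ["fresh", "trustworthy", "busy", "clean"],
}

# Net sentiment = positive mentions minus negative mentions.
net_sentiment = {
    concept: sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    for concept, words in reactions.items()
}

print(net_sentiment)
```

A spread like this can break a tie in the ranking scores: the concept whose circled words skew more positive gets the edge.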
Further Analysis for the Top Three Scoring Designs:
The Design Reaction Analysis reiterates the first-place ranking for Concept 2, and provides some additional data to make a determination that gives the edge to Concept 4 over Concept 3. At this point, if the client wanted to move forward with refining two directions, they could feel confident about selecting Concepts 2 and 4. If the user research had been more limited, this decision would not have been as well supported by data.
There’s Always a Place for Data
As this example illustrates, there’s always an opportunity for data to help us make better decisions. The trick is having a firm grasp on what you want from the data and asking the right questions of it. At EXTRACTABLE, data is at the core of everything we do, and we’d be happy to help you ask the right questions so you can find the answers you’re seeking.