In the previous article we started an important topic, the importance of user research in UX design, and I promised to write more about it. This time I want to share some user research tips, the benefits you get from user research, the types of usability testing, and the methods of user research.
Let’s start with the benefits. Many people skip user research because of its cost, but in the end it is an investment that helps you reach ROI faster and build a successful long-term strategy with the customer at its center. For those who still think this is not for them because of budget constraints, I have included methods of user research that require no budget at all. But let’s start from the beginning: why should you spend your precious time on it?
KEY BENEFITS OF USER RESEARCH
- Decrease costs in the long term; save on development and redesign efforts
- Increase user satisfaction; deliver exactly what your target wants
- Get qualitative feedback that helps you improve the user’s interactive experience and increase conversion rates
- Understand what best supports user’s goals and motivations; focus on the features that really matter to your target
- Gain a competitive advantage; stand out from the crowd, be always 1 step ahead of your competitors, create a “wow” effect
- Shorten the learning curve for new users; make your product easy-to-use
- Have not only a beautiful design but a smart one, putting your user at the center of your design strategy
In the end, the UX design team works together with the marketing team to achieve common goals, but before moving to the user research itself, they always define their goals: what do they want to get out of this research? One of the most important practices in UX design actually happens before the UX design process even starts. Defining the goals is the key driver of a results-driven process.
Writing down goals before the UX process starts will help you set the right KPIs, keep your team focused, and save a lot of debate, time, and energy.
In order to choose the right method for user research, you need to know the dimensions that will help you pick the right tools to achieve your goals.
USER RESEARCH DIMENSIONS
ATTITUDE VS BEHAVIOR
Basically, it is about “what people say” versus “what people do”.
For example, card sorting provides insights about users’ mental model and can help determine the best information architecture for your product, application or website. Surveys measure and categorize attitudes that can help track and discover important issues to address. Focus groups provide information about what people think about a brand or product concept.
On the behavior side, UX designers seek to understand what people do. For example, A/B testing presents design changes to random samples of site visitors in order to see the effect of different design choices on behavior, while eye tracking seeks to understand how users visually interact with interface designs.
QUALITATIVE VS QUANTITATIVE
Which one you need depends a lot on your goals. For example, if you want to know which of two options, A or B, is better, you would probably use a survey, a quantitative method, to see what the majority of people prefer. If you are interested in qualitative data, open-ended questions are a good option. In usability studies, for example, the researcher directly observes how people use technology to meet their needs. This gives them the ability to ask questions, probe behavior, or even adjust the study material to better meet its objectives.
Qualitative methods are much better for answering questions about why or how to fix a problem, whereas quantitative methods do a much better job answering how many and how much types of questions. Having such numbers helps prioritize resources, for example to focus on issues with the biggest impact.
THE CONTEXT OF PRODUCT USE
Natural use of the product – the goal is to leave the user alone with the product and observe what he or she does. This provides greater validity but less control over which topics you learn about.
Scripted use of the product – done in order to focus the insights on specific usage aspects, for example a newly redesigned flow.
Not using the product during the study – studies where the product is not used are conducted to examine issues that are broader than usage and usability, such as a study of the brand, demand, etc.
A hybrid of the above – allows users to interact with and rearrange design elements that could be part of a product experience, in order to discuss how their proposed solutions would better meet their needs and why they made certain choices. Concept-testing methods employ an approximation of a product or service that gets at the heart of what it would provide, in order to understand whether users would want or need such a product or service.
19 USER RESEARCH METHODS
1. PARTICIPATORY DESIGN
Participants are given design elements or creative materials in order to construct their ideal experience in a concrete way that expresses what matters to them the most and why.
2. FOCUS GROUPS
Group sizes vary, but normally a focus group has 3–12 participants, who are led through a moderated discussion on a set of topics. It allows you to learn about user attitudes, ideas, and desires.
3. INDIVIDUAL INTERVIEWS
A researcher meets with participants one-on-one to discuss in depth what the participant thinks about a particular topic. This method gives you detailed information about a user’s attitudes, desires, and experiences. In individual interviews, an interviewer normally talks with one user for 30 minutes to an hour. Participants can be asked to rate or rank choices for site content.
4. EYE TRACKING
A good tool to precisely measure where participants look when they perform tasks or interact naturally with websites or mobile apps.
5. CUSTOMER FEEDBACK
Open-ended or closed-ended feedback, often collected through a feedback link, button, form, or email.
6. DESIRABILITY STUDIES
Participants are offered different visual-design alternatives and are expected to associate each alternative with a set of attributes selected from a closed list.
7. CARD SORTING
Ask users to organize items into groups and assign a category to each group. This method helps create or refine the information architecture of a site or mobile app based on users’ mental models, and helps ensure that the site structure matches the way users think. To conduct a card sort, you can use actual cards, pieces of paper, or one of several online card-sorting tools. Card sorting helps you understand your users’ expectations; knowing how your users group information can help you build the structure of your website, decide what to put on the homepage, and label categories and navigation. Depending on your needs, you may choose to run an open or a closed card sort.
Open Card Sort – participants are asked to organize topics from content within your website into groups that make sense to them and then name each group they created in a way that they feel describes the content. It is used to learn how users group content and the terms or labels they give each category.
Closed Card Sort – participants are asked to sort topics from content within your website into pre-defined categories. It works best when you are working with a pre-defined set of categories, and you want to learn how users sort content items into each category.
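Open card-sort results are usually analyzed by counting how often participants placed the same pair of cards in one group. As a rough sketch of that analysis, here is how you might compute pairwise co-occurrence counts; the session data and card names are invented for illustration.

```python
# Count, for each pair of cards, in how many open card-sort sessions
# participants placed them in the same group. The sessions below are
# made-up example data: each dict maps a participant-chosen group
# name to the cards placed in it.
from itertools import combinations
from collections import Counter

sessions = [
    {"Basics": ["pricing", "plans"], "Help": ["faq", "contact"]},
    {"Buy": ["pricing", "plans", "faq"], "Support": ["contact"]},
    {"Info": ["pricing", "plans"], "Support": ["faq", "contact"]},
]

pair_counts = Counter()
for groups in sessions:
    for cards in groups.values():
        for a, b in combinations(sorted(cards), 2):
            pair_counts[(a, b)] += 1

# Cards that co-occur in most sessions are strong candidates for
# the same navigation category.
for pair, n in pair_counts.most_common():
    print(f"{pair[0]} + {pair[1]}: grouped together in {n}/{len(sessions)} sessions")
```

Pairs grouped together by nearly all participants are the safest candidates for a shared category in your information architecture.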
8. CLICKSTREAM ANALYSIS
Analyzing the sequence of pages users visit and the areas they click on.
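As a minimal sketch of clickstream analysis, the snippet below counts the most common page-to-page transitions from per-visitor page sequences; the paths and page names are invented example data.

```python
# Count page-to-page transitions across a set of recorded visits.
# Each visit is the ordered list of pages one visitor viewed.
from collections import Counter

visits = [
    ["/home", "/pricing", "/signup"],
    ["/home", "/blog", "/pricing", "/signup"],
    ["/home", "/pricing", "/contact"],
]

transitions = Counter()
for pages in visits:
    # zip pairs each page with the page viewed next
    for src, dst in zip(pages, pages[1:]):
        transitions[(src, dst)] += 1

for (src, dst), n in transitions.most_common(3):
    print(f"{src} -> {dst}: {n}")
```

Frequent transitions reveal the dominant paths through the site; transitions you expected but rarely see can point to findability problems.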
9. A/B TESTING
A method of testing different designs on a site by assigning random groups of users to interact with each design and measuring the effect on user behavior. A/B testing aims to identify changes to web pages that increase or maximize an outcome of interest, for example the click-through rate. Two versions, A and B, are compared; they are identical except for one variation that might affect a user’s behavior. Significant improvements can come from testing elements like copy, layouts, images, and colors.
Basically this method helps you to choose the best option.
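Deciding whether B really beats A usually comes down to a simple statistical check. Here is a sketch using a two-proportion z-test; the visitor and conversion counts are made-up example numbers, not real data.

```python
# Compare two conversion rates with a two-proportion z-test,
# using only the standard library (erf gives the normal CDF).
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for the
    difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided
    return z, p_value

# Version A: 120 conversions out of 2400 visitors (5.0%)
# Version B: 156 conversions out of 2400 visitors (6.5%)
z, p = two_proportion_z(120, 2400, 156, 2400)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A p-value below your chosen threshold (commonly 0.05) suggests the difference is unlikely to be random noise; with small samples, keep the test running longer before declaring a winner.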
10. MULTIVARIATE TESTING OR BUCKET TESTING
Similar to A/B testing but tests more than two versions at the same time.
11. TRUE-INTENT STUDIES
Ask site visitors what their goal or intention is when they enter the site or mobile app, and whether the experience was successful in terms of achieving that goal.
12. SURVEYS
A survey triggered before, during, or after product use to learn users’ preferences and feelings. For example, questionnaires invite people to say who they are, what they do, and where they go. Creating one is very simple with tools such as Typeform.
13. FIRST CLICK TESTING
Focused on navigation; it can be performed on a functioning website, a prototype, or a wireframe. First-click testing examines what a participant would click on first in the interface in order to complete their intended task. First-click tests are used to evaluate the intuitiveness of buttons, links, and other on-page content within the context of the design. Click tests can be conducted on rough sketches, wireframes, polished design comps, or fully designed interfaces. If the representation of your navigation and content is in question, a click test can tell you whether the design is helping or hurting findability.
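Summarising first-click results is straightforward: per task, compute the share of participants whose first click hit the intended element. A sketch, with invented task names, element ids, and click logs:

```python
# For each task, count how many participants clicked the "correct"
# element first. Task names, element ids, and logs are made up.
first_clicks = {
    "find opening hours": {"correct": "footer-link",
                           "clicks": ["footer-link", "nav-about",
                                      "footer-link", "footer-link"]},
    "start a return":     {"correct": "help-menu",
                           "clicks": ["search-box", "help-menu",
                                      "search-box", "search-box"]},
}

for task, data in first_clicks.items():
    hits = sum(c == data["correct"] for c in data["clicks"])
    rate = hits / len(data["clicks"])
    print(f"{task}: {hits}/{len(data['clicks'])} correct first clicks ({rate:.0%})")
```

A low first-click success rate on a task is a strong signal that the element's label or placement needs rework, since users who start down the wrong path rarely recover quickly.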
14. EXPERT REVIEW
A group of usability experts evaluate your website against a list of established guidelines.
15. PARALLEL DESIGN
Involves several designers working on the same project/feature simultaneously, but independently, with the intention to combine the best aspects of each for the ultimate solution. With the parallel design method, several designers create an initial design from the same set of requirements. Each designer works independently and, when finished, shares his or her concepts with the group. Then, the design team considers each solution, and each designer uses the best ideas to further improve their own solution.
16. TREE TESTING
Measure the findability of elements within an existing or proposed information architecture. Tree tests measure the intuitiveness of label and link grouping, hierarchy and nomenclature.
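Tree-test sessions are typically scored on two simple measures: success (the participant ended at the correct node) and directness (they got there without backtracking). A sketch with an invented tree and invented navigation paths:

```python
# Score tree-test sessions against a known correct path.
# The category tree and recorded paths are invented example data.
correct_path = ["Home", "Account", "Billing"]

paths = [
    ["Home", "Account", "Billing"],                     # direct success
    ["Home", "Support", "Home", "Account", "Billing"],  # indirect success
    ["Home", "Support", "Contact"],                     # failure
]

success = sum(p[-1] == correct_path[-1] for p in paths)  # right destination
direct = sum(p == correct_path for p in paths)           # no detours
print(f"success: {success}/{len(paths)}, direct: {direct}/{len(paths)}")
```

High success with low directness often means the destination label is fine but an intermediate category label is misleading.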
17. PROMINENCE AND RECALL TESTING
These tests are conducted to determine which interface elements users notice, remember, and correctly interpret. User attention is brief, so identifying what users notice and how they interpret those elements is key to developing more effective design and content.
18. GOOGLE ANALYTICS
Google Analytics is an important tool for UX designers. It provides key information about site visitors, is immediately accessible, and gives you data about user flows, user preferences, and user profiles. Personally, I think it should be used by default: it is very helpful and very easy to use, and it lets you see user behavior broken down by device. You can understand existing performance and set goals for improvement.
19. USABILITY TESTING
Give participants goals and guidelines to accomplish with a site or prototype, for example ask them to book a hotel for a specific date, in a specific place, at a specific price. The idea is to see how they do it: what the user flow is and which aspects need to be improved. Observe what the users do, whether they succeed or not, and where they struggle with the user interface. Let the users do their thing; don’t help them, just observe. It’s important to test users individually and let them solve any problems on their own. If you help them or direct their attention to any particular part of the screen, you have contaminated the test results. The goal is to identify any usability problems, collect qualitative and quantitative data, and determine the participants’ satisfaction with the product.
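The quantitative side of a usability test usually boils down to a few simple metrics per task, such as completion rate and time on task. A sketch, with invented session records:

```python
# Summarise usability-test sessions for one task into completion
# rate and mean time on successful attempts. The records are
# made-up example data.
sessions = [
    {"completed": True,  "seconds": 95},
    {"completed": True,  "seconds": 140},
    {"completed": False, "seconds": 210},   # gave up
    {"completed": True,  "seconds": 120},
]

completed = [s for s in sessions if s["completed"]]
rate = len(completed) / len(sessions)
mean_time = sum(s["seconds"] for s in completed) / len(completed)
print(f"completion rate: {rate:.0%}, mean time on success: {mean_time:.0f}s")
```

These numbers complement, rather than replace, the qualitative observations: they tell you how big a problem is, while watching the sessions tells you why it happens.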
Usability testing is one of the most important user research methods, so I want to focus on it a little more. Let’s take a look at the three types of usability testing. Companies classify them in different ways, but this is the most common classification. Choose one based on your goals.
TYPES OF USABILITY TESTING
COMPARATIVE USABILITY TESTING
Compare the usability of one website with another. Comparative tests are usually used to compare your website or app against your competitors’, but they can also be used to compare your own design alternatives and identify the one that provides the best user experience.
EXPLORATIVE USABILITY TESTING
Explorative usability testing can identify what content and functionality a new product should include to meet users’ needs. Users test a range of different services through realistic scenarios, which helps highlight any gaps in the market that can be taken advantage of and shows where to focus your efforts.
ASSESSMENT USABILITY TESTING
This is a test of a new or updated service before launch. This usability test introduces users to the new design to ensure it is intuitive to use and provides a positive user experience. The aim of the usability evaluation is to ensure any potential issues are highlighted and fixed before the product is launched.
USABILITY TESTING TECHNIQUES
- Concurrent Think Aloud (CTA) is used to understand participants’ thoughts as they interact with a product by having them think aloud while they work. The goal is to encourage participants to keep a running stream of consciousness as they work.
- Retrospective Think Aloud (RTA) the moderator asks participants to retrace their steps when the session is complete. Often participants watch a video replay of their actions, which may or may not contain eye-gaze patterns.
- Concurrent Probing (CP) means that as participants work on tasks, the researcher asks follow-up questions whenever they say something interesting or do something unique.
- Retrospective Probing (RP) requires waiting until the session is complete and then asking questions about the participant’s thoughts and actions. When the participant makes comments or actions, the researcher takes notes and follows up with additional questions at the end of the session.
USER RESEARCH TIPS
1. MAKE THE TASK REALISTIC
To give you an example, let’s say you are interested in making users browse product offerings and purchase a product they are interested in.
Poor task: Purchase a pair of Gucci classic shoes.
Better task: Buy a pair of shoes for under 300 euros.
Asking participants to do something they wouldn’t normally do will make them try to complete the task without really engaging with the interface. Participants should have the freedom to compare products based on their own criteria and preferences.
2. MAKE THE TASK ACTIONABLE
Let’s say you are interested in making people find movies for a specific date.
Poor task: You want to watch a movie Friday afternoon. Go to www.test.com and tell me where you’d click next.
Better task: Go to www.test.com and find a movie you would like to watch on Friday afternoon.
It’s best to ask users to do the action rather than asking how they would do it. If you ask how they would do it, participants are likely to answer in words, not actions, and what people say and what people do are often quite different. Also, having them tell you how they would do it doesn’t let you observe the ease or frustration that comes with using the interface.
You can tell that the task isn’t actionable enough if the participant turns to the facilitator, takes her hand off the mouse, and says something like “I would first click here, and then there would be a link to where I want to go, and I’d click on that.”
3. AVOID CLUES
Let’s say you are interested in finding your grades.
Poor task: You want to see the results of your final exams. Go to the website, sign in, and tell me where you would click to get your transcript.
Better task: Find the results of your final exams.
Step descriptions often contain hidden clues as to how to use the interface. For example, if you tell someone to click on courses in the main menu, you won’t learn if that menu label is meaningful to the users.
I want to end this article with a quote that sums up my thoughts very well: “A good user experience, like a measurable ROI, doesn’t typically happen by accident. It is the result of careful planning, analysis, investment, and continuous improvement.” – Jeff Horvath