Selecting tools and methods
There are many tools and methods that can be used during the User-Centred Design process to achieve a successful result. A selection of pragmatic usability tools and methods commonly used in industry today is listed in the table below.
The table indicates the following (for consideration when selecting methods):
- The stage in the development process when they provide best results.
- The resource level required, which relates to effort and costs.
- The strength of a particular tool or method and the kind of information it provides.
What is User-Centred Design?
User-Centred Design is an approach to software or product development that focuses specifically on making products usable. It typically involves end-users throughout the development cycle: during requirements activities, when obtaining their feedback on early designs, and when redesigning prototypes in light of their feedback and comments.
Following this process to develop a product can result in a number of significant advantages for the developer, by producing products which:
- Are easier to understand and use, thus reducing training and support costs.
- Improve the quality of life of users by reducing stress and improving satisfaction.
- Significantly improve the productivity and operational efficiency of individual users and consequently the organisation.
Usability tools and methods
The three middle columns indicate the stage in development; an X marks the stage(s) at which each method applies.

| Tool/Method | Context and user requirements | Early design and prototyping | Test and evaluation | Resources required | Purpose/Strength |
| --- | --- | --- | --- | --- | --- |
| Affinity diagramming | X | X | | LOW | Helps structure concepts and content |
| Brainstorming | | X | | LOW | Generates design ideas |
| Card sorting | X | X | | MEDIUM | Helps structure interface content |
| Cognitive workload assessment | | | X | LOW | Assesses if mental effort is acceptable |
| Cognitive walkthrough | | X | X | MEDIUM | Checks structure and flow against user goals |
| Competitor analysis | X | | | LOW | Gathers design input from other products |
| Context of use analysis | X | | | LOW | Specifies vital user and product characteristics |
| Contextual inquiry | X | | | MEDIUM/HIGH | Provides information about users' work context |
| Cost-benefit analysis | X | | | MEDIUM | Directs design effort towards issues providing the best return |
| Diary keeping | X | | | MEDIUM | Captures day-to-day usage |
| Eye-tracking | | | X | HIGH | Analyses how users look at parts of an interface |
| Focus group | X | | | LOW | Elicits user requirements/views through discussion |
| Functionality matrix | X | X | | LOW | Specifies functions required to support tasks |
| Goal and effect analysis | X | | | LOW | Analyses usability goals to help prioritise development effort |
| Group discussion | X | X | X | LOW | Summarises user ideas/comments on design issues |
| Heuristic evaluation | | X | X | LOW | Provides expert feedback on user interfaces |
| Interactive/computer-based prototyping | | X | | MEDIUM/HIGH | Used for testing with users |
| Interview techniques | X | | X | LOW/MEDIUM | Provides detailed user experience of product usage |
| ISO 9241 conformance | | | X | MEDIUM | Assesses product conformance with ISO 9241 |
| Observation | X | | | MEDIUM | Describes user activity in detail |
| Paper prototyping | | X | | MEDIUM | Tests design ideas with users |
| Parallel design | | X | | HIGH | Provides one conceptual design idea from several |
| Participatory evaluation | | X | X | MEDIUM | Detects task-related usability problems early in design |
| Rapid prototyping | | X | | MEDIUM | Allows users to visualise and evaluate future systems |
| Remote evaluation | | | X | LOW/MEDIUM | Tests certain design aspects with users remotely |
| Scenarios and personas | X | X | | MEDIUM | Illustrates requirements and supports conceptual design |
| Storyboarding | | X | | MEDIUM | Visualises relationships between events and actions |
| Style guide conformance | | X | X | MEDIUM | Assesses conformity with product-specific style guidelines |
| SUMI (Software Usability Measurement Inventory) | | | X | LOW | Provides an objective way of assessing user satisfaction with software |
| Supportive evaluation | | | X | HIGH | Identifies usability problems in a collaborative forum |
| Surveys (through questionnaires) | X | | X | MEDIUM | Provides mass data from users |
| Task allocation | X | | | MEDIUM | Gives an understanding of an existing product and information flow |
| Task analysis | X | | | MEDIUM | Analyses current user work in depth |
| User-based testing (for design feedback) | | | X | MEDIUM | Provides recommendations for how a design can be improved |
| User-based testing (for metrics) | | | X | HIGH | Measures usability and identifies interaction problems |
| Video prototyping | | X | | HIGH | Presents design ideas realistically |
| Wizard of Oz | | X | X | HIGH | Used to test advanced interface design concepts |
Affinity diagramming is a method for categorising ideas/concepts, functionality or content. Users sort items into categories visually (typically on a blank wall using sticky notes). Usually the method is used in groups where several participants will work together to agree on a categorisation scheme based on the relationships they see between items.
Brainstorming brings together a set of experts to inspire each other in the creative, idea generation phase of the problem solving process. Brainstorming is used to generate new ideas by freeing the mind to accept any idea that is suggested, thus allowing freedom for creativity. The result of a brainstorming session is hopefully a set of good ideas, and a general feel for the solution area.
Card sorting techniques are used to analyse and explore the latent structure in an unsorted collection of information items, functions, statements or ideas. Each item is written on a small index card - in a typical card sorting exercise there may be anything from 30 to 80 cards in total. Participants, working on their own, sort these cards into groups or clusters. The data from each individual card sort are then combined and analysed statistically.
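The combining step can be illustrated with a co-occurrence matrix: for each pair of cards, count how many participants placed both cards in the same group. High counts suggest items that belong together in the interface structure. A minimal Python sketch (the card names and the three participants' sorts are invented for illustration):

```python
from itertools import combinations
from collections import Counter

def co_occurrence(sorts):
    """Count, for each pair of cards, how many participants
    placed both cards in the same group."""
    counts = Counter()
    for groups in sorts:          # one participant's card sort
        for group in groups:      # one group of cards
            for pair in combinations(sorted(group), 2):
                counts[pair] += 1
    return counts

# Three hypothetical participants sorting five cards
sorts = [
    [["Login", "Logout"], ["Search", "Browse", "Help"]],
    [["Login", "Logout", "Help"], ["Search", "Browse"]],
    [["Login", "Logout"], ["Search", "Browse"], ["Help"]],
]
matrix = co_occurrence(sorts)
print(matrix[("Login", "Logout")])   # 3 - all participants agree
print(matrix[("Help", "Search")])    # 1 - little agreement
```

In practice the resulting matrix would feed into a clustering or dendrogram analysis; the counting step above is the common starting point.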
Cognitive workload assessment
Measuring cognitive workload involves assessing how much mental effort a user expends whilst using a prototype. This can be obtained from questionnaires such as the Subjective Mental Effort Questionnaire (SMEQ) and the Task Load Index (TLX).
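The TLX, for example, collects ratings on six subscales; a common unweighted scoring approach (often called "Raw TLX") simply averages them. The full procedure also includes pairwise weighting of the subscales, which is omitted in this simplified sketch; the ratings below are invented:

```python
def raw_tlx(ratings):
    """Unweighted ('Raw TLX') workload score: the mean of the six
    NASA-TLX subscale ratings, each on a 0-100 scale."""
    expected = {"mental", "physical", "temporal",
                "performance", "effort", "frustration"}
    if set(ratings) != expected:
        raise ValueError("need exactly the six TLX subscales")
    return sum(ratings.values()) / len(ratings)

# Hypothetical ratings from one participant after one task
score = raw_tlx({"mental": 70, "physical": 10, "temporal": 55,
                 "performance": 30, "effort": 60, "frustration": 45})
print(score)  # 45.0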
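The TLX, for example, collects ratings on six subscales; a common unweighted scoring approach (often called "Raw TLX") simply averages them. The full procedure also includes pairwise weighting of the subscales, which this simplified sketch omits; the ratings below are invented:

```python
def raw_tlx(ratings):
    """Unweighted ('Raw TLX') workload score: the mean of the six
    NASA-TLX subscale ratings, each on a 0-100 scale."""
    expected = {"mental", "physical", "temporal",
                "performance", "effort", "frustration"}
    if set(ratings) != expected:
        raise ValueError("need exactly the six TLX subscales")
    return sum(ratings.values()) / len(ratings)

# Hypothetical ratings from one participant after one task
score = raw_tlx({"mental": 70, "physical": 10, "temporal": 55,
                 "performance": 30, "effort": 60, "frustration": 45})
print(score)  # 45.0
```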
Cognitive walkthrough
A cognitive walkthrough involves going step by step through a product or system design, getting reactions from relevant staff and, typically, users. Normally one or two members of the design team will guide the walkthrough, while one or more users will comment as the walkthrough proceeds.
Competitor analysis is used to identify strengths and weaknesses in the user interface designs of competing (or similar) products. Typically this method is used as input to new design/prototyping work - to help identify good ideas and avoid bad ones. Competitor analysis is often done using expert-based heuristic evaluation or in some cases user-based testing.
Context of Use Analysis
Context of Use Analysis is a structured method for eliciting detailed information about a product and how it will be used. Through a workshop attended by the product stakeholders important characteristics of the users (or groups of users), their tasks, their environment are identified. Context Analysis meetings should take place as early as possible in the design of a product. However the results of these meetings can be used throughout the lifecycle of the product, being continually updated and used for reference.
Contextual inquiry is one of the best methods to use when you really need to understand the users' work context. It is basically a structured field interviewing method, based on a few core principles that differentiate it from plain, journalistic interviewing. Contextual inquiry is more a discovery process than an evaluative process; more like learning than testing. This technique is best used in the early stages of development, to gain an understanding of how people feel about their jobs, how they carry out their work, how information flows through the organisation, etc.
Cost-benefit analysis of usability
Presents a generic framework for identifying the costs and benefits associated with a user-centred design activity. The first step is to identify the benefits to be gained by improving the usability of a system (including increased user productivity, decreased user errors and decreased training costs). The costs of the proposed usability effort (personnel and equipment overheads) must be taken into account and subtracted from the total projected benefit value.
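The arithmetic at the heart of the framework is simple: projected benefits minus projected costs. A minimal Python sketch, with entirely invented figures:

```python
def usability_net_benefit(benefits, costs):
    """Projected net benefit of a usability effort:
    total projected benefits minus total costs."""
    return sum(benefits.values()) - sum(costs.values())

# Hypothetical annual figures (the categories and amounts are
# illustrative, not drawn from any real project)
benefits = {"increased productivity": 120_000,
            "fewer user errors": 40_000,
            "reduced training": 25_000}
costs = {"usability personnel": 60_000,
         "lab equipment": 15_000}
print(usability_net_benefit(benefits, costs))  # 110000
```

A positive result argues for the proposed usability effort; the real analytical work lies in estimating the benefit figures credibly.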
Activity diaries require the informant to record the activities they are engaged in throughout a normal day. Diaries may vary from open-ended, where the informant writes in their own words, to highly structured tick-box forms, where the respondent gives simple multiple choice or yes/no answers to questions. Diary keeping is useful for capturing user behaviour over a period of time and it allows data to be captured about everyday tasks, without researcher intrusion.
Eye-tracking studies are used to investigate and analyse what users look at in different situations when interacting with a product (e.g. where they look on a computer screen). They can be used to explore a variety of user interface design aspects - which design elements receive attention, which texts users read, what areas they ignore, how they visually navigate an interface, etc.
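One basic analysis is the share of total fixation time spent on each area of interest (AOI) of the interface. A minimal Python sketch, with an invented fixation log:

```python
def attention_share(fixations):
    """Share of total fixation time spent on each area of
    interest (AOI) of an interface."""
    totals = {}
    for aoi, duration_ms in fixations:
        totals[aoi] = totals.get(aoi, 0) + duration_ms
    grand_total = sum(totals.values())
    return {aoi: t / grand_total for aoi, t in totals.items()}

# Hypothetical fixation log: (area of interest, duration in ms)
fixations = [("navigation", 400), ("headline", 900),
             ("headline", 300), ("ad banner", 100),
             ("navigation", 300)]
shares = attention_share(fixations)
print(round(shares["headline"], 2))  # 0.6
```

Real eye-tracking software produces far richer output (scan paths, heat maps, time to first fixation); duration share per AOI is one of the simplest summaries.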
A focus group brings together a cross-section of stakeholders in a discussion group format. Views on relevant topics are elicited by a facilitator. The meetings can be taped for later analysis. Focus groups are useful early in requirements specification but can also serve as a means of collecting feedback once a system has been in use or has been placed on field trials for some time. Focus groups help to provide a multi-faceted perspective on requirements and identify issues that may need to be tackled.
Functionality matrix
This process specifies the system functions that each user will require for the different tasks that they perform. The most critical task functions are identified so that more attention can be paid to them during usability testing later in the design process. It is important that input from different user groups is obtained in order to complete the matrix fully. This method is particularly useful for systems where the number of possible functions is high (e.g. a generic software package) and where the range of tasks that the user will perform is well specified.
Goal and effect analysis
Usability goals help focus design work on issues that have the most impact on users and their usage of an interface. They are formulated in relation to the overall business goals of a product or system and help guide the design process - aiding innovation and providing a basis for determining interface design tradeoffs in relation to other demands. Goals are defined together with key stakeholders in the product (e.g. during context analysis) and should be possible to follow-up (e.g. be measureable in user-based testing).
Group discussions are based on the idea of stakeholders within the design process discussing new ideas, design options, costs and benefits, screen layouts etc., when relevant to the design process. Group discussions help to summarise the ideas and comments held by individual members. Each participant acts to stimulate ideas and, through a process of discussion, a collective view is established that is greater than the sum of the individual parts.
Heuristic evaluation is an expert inspection method that identifies general usability problems that users can be expected to encounter when using a product or interface. Usually at least three usability experts evaluate the system with reference to established guidelines or principles, noting down their observations and often ranking them in order of severity. It is a quick and efficient method which can be completed in a few days.
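A simple way to combine the evaluators' findings is to merge the problem reports and rank problems by mean severity. The sketch below (in Python) uses invented problem reports; the 0-4 severity scale and aggregation by mean are illustrative assumptions, not a prescribed part of the method:

```python
from collections import defaultdict

def rank_by_severity(reports):
    """Merge problem reports from several evaluators and rank the
    problems by mean severity rating (higher = more severe)."""
    severities = defaultdict(list)
    for report in reports:                 # one evaluator's findings
        for problem, severity in report.items():
            severities[problem].append(severity)
    ranked = [(sum(ratings) / len(ratings), problem)
              for problem, ratings in severities.items()]
    return sorted(ranked, reverse=True)

# Invented findings from three evaluators (severity rated 0-4)
reports = [
    {"no undo on delete": 4, "inconsistent labels": 2},
    {"no undo on delete": 3, "slow feedback": 2},
    {"no undo on delete": 4, "inconsistent labels": 1},
]
for severity, problem in rank_by_severity(reports):
    print(f"{severity:.2f}  {problem}")
```

Problems found independently by several evaluators, and rated severe by all of them, rise to the top of the list and can be fixed first.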
Interactive/computer-based prototyping
Interactive prototyping utilises computer simulations to provide more realistic mock-ups of the system under development. The prototypes often have greater fidelity to the finished system than is possible with simple paper mock-ups. End-users interact with the prototype to accomplish set tasks and any problems that arise are noted.
Interview techniques
Expert and/or novice users are asked in-depth questions by an interviewer in order to gain specific knowledge, or to obtain subjective opinions based on experience of product usage. Interviews may follow a pre-specified list of items (structured) and/or may allow users to provide their views freely (unstructured). The type, detail and validity of the data gathered vary with the type of interview and the experience of the interviewer.
ISO 9241 conformity assessment
Assesses a product for conformance to the relevant requirements detailed in the ISO 9241 standard: Ergonomic requirements for office work with visual display terminals (VDTs). Developers provide documentary evidence regarding their development process, and one or more auditors examine these documents and visit the site to interview relevant staff. Auditors determine whether conformance is warranted; if not, feedback is provided on the non-conformances.
Observational methods involve an investigator viewing users as they work and taking notes on the activity that takes place. Observation may be either direct, where the investigator is actually present during the task, or indirect, where the task is viewed by some other means such as through use of a video recorder.
Paper prototyping
This method uses simple materials to create a paper-based simulation of an interface with the aim of exploring user requirements. When the paper prototype has been prepared, a member of the design team sits in front of the user and 'plays the computer' by moving interface elements around in response to the user's actions. The user makes selections and activates interface elements by using their finger for input actions. Users are given task instructions and encouraged to express their thoughts and impressions. The evaluator makes notes during the test. The method is most suitable where it is easy to simulate system behaviour or when the evaluation of detailed screen elements is not required.
Parallel design
Several different groups of interface designers create conceptual designs within a parallel process. The aim is to develop and evaluate different interface designs before settling on a single design. The groups of designers must work independently, since the goal is to generate as much diversity as possible. Designers may not discuss their designs with each other until after they have presented their design concepts. The final result may be one of the designs or a combination of designs with the best features from each. Although parallel design may at first seem expensive, as many ideas are generated without implementing them, it is in fact a very cheap way of exploring the range of possible design concepts and selecting the probable optimum design.
Participatory evaluation
A cost-effective technique for identifying usability problems in prototype products. The technique encourages the design team and users to collaborate in order to identify usability issues and their solutions. Qualitative information is provided about difficulties users experience when attempting to complete tasks, and about other interface elements that give rise to problems.
Rapid prototyping
This method quickly develops different concepts through software or hardware prototypes and evaluates them. The rapid development of a simulation/prototype of a future product allows users to visualise it and provide feedback. It can be used to clarify user requirements. Later during development, it can be used to specify details of the user interface.
Remote evaluation
Usability testing is typically best carried out with users in person. However, meeting users can be costly and time consuming, particularly when significant travel is involved. For some kinds of products it is possible to assess certain interface characteristics through remote user testing - using tools that allow the test facilitator to follow what the participant is doing.
Scenarios and personas
Scenarios and personas are used to characterise users and their tasks in a specific context. They offer concrete representations of a user working with a product in order to achieve a particular goal. The objective of user scenarios and personas in the early phases of development is to make end-user requirements and usability goals more accessible to the design team. Later they can be used to support design and evaluation activities.
Storyboards are sequences of images which demonstrate the relationship between individual events (e.g. screen outputs) and actions within a system. A typical storyboard will contain a number of images depicting features such as menus, dialogue boxes and windows. The storyboard can be shown to the design team as well as typical users, allowing them to visualise the composition and scope of possible interfaces and offer critical feedback.
Style guide conformance
An expert review of an interface to assess conformance with a style guide (e.g. Microsoft Windows User Experience guidelines).
SUMI - Software Usability Measurement Inventory
SUMI is a questionnaire designed to collect subjective feedback from users about a software product with which they have some experience. Users are asked to complete a standardised 50-statement psychometric questionnaire. Their answers are analysed with the aid of a computer program - SUMISCO. SUMI data provides a usability profile according to five scales: perceived efficiency, affect (likeability), control, learnability and helpfulness. It also provides a global assessment of usability. The international database with which the software is being compared contains some 300 software products. SUMI can be used to assess user satisfaction with high fidelity prototypes or with operational systems and can be used in conjunction with other usability tools and methods.
Supportive evaluation is a participatory form of evaluation similar to a 'principled focus group'. Users and developers meet together and the user representatives try to use the product to accomplish set tasks. The designers who observe can later explore the issues identified through a facilitated discussion. Several trials can be run to focus on different features or different versions of the product.
Surveys (through questionnaires)
A survey involves administering a set of written questions to a large sample population of users. Surveys can help determine information about customers, work practices and attitudes. There are two types of question: 'closed', where the respondent is asked to select from available responses, and 'open', where the respondent is free to answer as they wish.
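Closed questions are what make surveys suitable for mass data: the responses can be tallied automatically. A minimal Python sketch with invented answers to a single closed question:

```python
from collections import Counter

def tally(responses):
    """Tally closed-question answers and report each option's
    share of the total responses."""
    counts = Counter(responses)
    total = len(responses)
    return {option: count / total for option, count in counts.items()}

# Hypothetical answers to "How often do you use the search feature?"
answers = ["daily", "weekly", "daily", "never", "daily", "weekly"]
shares = tally(answers)
print(shares["daily"])  # 0.5
```

Open questions, by contrast, require qualitative coding before they can be summarised, which is part of why they cost more to analyse.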
A successful system depends on the effective allocation of tasks between the system and the users. Task allocation decisions determine the extent to which a given task is to be automated or assigned to a human user. The decisions are based on many factors, such as the relative capabilities and limitations of humans versus technology in terms of reliability, speed, accuracy, strength, flexibility of response, cost and the importance of successful or timely accomplishment of tasks. The approach is most useful for systems that affect whole work processes rather than single-user, single-task products. Designers often identify functions that the technology is capable of performing and allocate the remaining functions to users, relying on their flexibility to make the system work. This does not make best use of users' abilities and skills and can lead to unsatisfactory job design.
Task analysis defines what a user is required to do in terms of actions and/or cognitive processes to achieve a task. A detailed task analysis can be conducted to understand a system and the information flow within it. These information flows are important to the maintenance of the system. Failure to allocate sufficient resources to this activity increases the potential for costly problems arising in later phases of development. Task analysis makes it possible to design and allocate tasks appropriately within the new system. Once the tasks are defined, the functionality required to support the tasks can be accurately specified.
User-based evaluation for design feedback
This method offers a relatively quick and cheap way to conduct a user-based evaluation of a current product or prototype. The focus is on task completion and the acquisition of design feedback where users are unable to complete tasks or need assistance to do so. The emphasis is on a few typical users as participants, and detailed recordings are not essential. Observers make notes as users interact with the system to accomplish set tasks, and identify the most serious user-interface problems.
User-based evaluation for metrics
This form of user-based evaluation entails a detailed analysis of users interacting with the particular system being evaluated. It is suited for evaluating either high-fidelity prototypes or functional systems. The real world working environment and the product under development are simulated as closely as possible. Observers make notes, timings are taken and video and/or audio recordings made. The observations are subsequently analysed in detail, and appropriate usability metrics are calculated.
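Two metrics commonly derived from such observations are effectiveness (task completion rate) and efficiency (time on task), in the spirit of ISO 9241's usability definitions. A minimal Python sketch with invented session data:

```python
def usability_metrics(sessions):
    """Compute two common metrics from observed test sessions:
    effectiveness (task completion rate) and efficiency
    (mean time on task for completed attempts, in seconds)."""
    completed = [s for s in sessions if s["completed"]]
    effectiveness = len(completed) / len(sessions)
    efficiency = sum(s["seconds"] for s in completed) / len(completed)
    return effectiveness, efficiency

# Hypothetical timings for five participants on one task
sessions = [
    {"completed": True,  "seconds": 95},
    {"completed": True,  "seconds": 120},
    {"completed": False, "seconds": 300},
    {"completed": True,  "seconds": 85},
    {"completed": True,  "seconds": 100},
]
effectiveness, efficiency = usability_metrics(sessions)
print(effectiveness)  # 0.8
print(efficiency)     # 100.0
```

Satisfaction, the third common measure, is usually collected separately through questionnaires such as SUMI (described below).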
Video prototyping
This method allows designers to create a video-based simulation of interface functionality using simple materials and equipment. Interface elements are created using paper, pens, acetates etc. For example, using a camcorder, the movements of a mouse pointer over menus may be simulated by stopping and starting the camcorder as interface elements are moved, taken away and added. Users do not directly interact with the prototype, although they can view and comment on the completed video-based simulation.
WAMMI - Web site Analysis and Measurement Inventory
WAMMI is an evaluation tool for web sites. It is based on a questionnaire that visitors fill out, and which gives a measure of how easy to use they think a web site is. The questions in the WAMMI questionnaire have been carefully selected and refined to ascertain users' subjective rating of the ease of use of a web site.
Wizard of Oz
Wizard of Oz is a technique used to present advanced interaction concepts to users. In essence an expert (the wizard), possibly located behind a screen, processes input from a user and emulates system output. The aim is to demonstrate capabilities that the computer itself cannot yet deliver, whether for technical reasons or for lack of resources. It is highly applicable to "intelligent interfaces" which feature agents, advisors and/or natural language processing.