HCD Activity 4: Evaluate the designs against requirements
“a) allocating resources both to obtain early feedback in order to improve the product, and at a later stage, to determine whether the requirements have been satisfied;
b) planning the user-centred evaluation so that it fits the project schedule;
c) carrying out sufficiently comprehensive testing to provide meaningful results for the system as a whole;
d) analysing the results, prioritising issues and proposing solutions;
e) communicating the solutions appropriately so that they can be used effectively by the design team.”
The level of formality with which this is conducted increases as the project proceeds to completion. The final iteration of HCD Activity 4 is a summative evaluation; it confirms that the design meets the minimum level of the previously specified usability requirements (or the project fails). Prior iterations are formative; the results are used to shape the design.
Evaluation may proceed through the following stages:
4.1 Develop evaluation plan
This involves determining how to evaluate the design against the usability requirements. There are two aspects to this:
- Specifying the protocol for carrying out the evaluation: observational evaluation, survey, etc.
- Arranging the logistics of evaluation activities: checking the availability of users for test sessions, booking usability labs, etc.
4.2 Provide design feedback
This can be done in a number of ways:
- Analytic evaluation of designs by experts
- Involvement of the designers in observational evaluation sessions
- Video recordings of ‘critical incidents’ from evaluation sessions shown to designers
- Written reports from a separate evaluation team
4.3 Assess whether objectives have been achieved
After a number of design/evaluate iterations the designers will believe that they have succeeded in meeting the requirements and that their work is done. At this point a formal summative evaluation is carried out. The results are presented to the project sponsor, and agreement is sought that the usability objectives really have been met.
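The summative comparison of measured values against the previously specified requirement levels can be sketched as a simple pass/fail check. The metric names and threshold values below are hypothetical, purely for illustration; a real project would use the metrics and minimum levels fixed in its own usability requirements.

```python
# Illustrative only: metric names and thresholds are hypothetical examples,
# not values taken from any particular project's usability requirements.

def summative_check(measured, requirements):
    """Compare measured usability metrics against minimum requirement levels.

    Returns a dict mapping each metric to True (requirement met) or False.
    """
    verdicts = {}
    for metric, threshold in requirements.items():
        value = measured.get(metric)
        # Time-based metrics must not exceed their threshold; for the
        # others (e.g. completion rate), higher is better.
        if metric.endswith("_seconds"):
            verdicts[metric] = value is not None and value <= threshold
        else:
            verdicts[metric] = value is not None and value >= threshold
    return verdicts

requirements = {
    "task_completion_rate": 0.90,   # at least 90% of users complete the task
    "mean_task_time_seconds": 120,  # on average, no more than two minutes
}
measured = {"task_completion_rate": 0.93, "mean_task_time_seconds": 104}

results = summative_check(measured, requirements)
print(results)                # per-metric verdicts
print(all(results.values()))  # overall verdict presented to the sponsor
```

If any verdict is False, the design has not met the minimum level of the specified requirements and a further design/evaluate iteration (or project failure) follows.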
4.4 Field validate
This requires at least a working beta version of the software to be available. The actual usability of the product is then examined in use, in its target context.
4.5 Monitor long-term
A characteristic of software is that, with exposure to the working application, users change their behaviour and the product’s usability may change. The context of use may also change, and the task that the software supports is highly likely to change over time. Maintaining the usability of software is therefore an important aspect of the maintenance cycle.
4.6 Report results
There is a standard form of documentation to report the outcome of a summative usability evaluation test: ‘BS ISO/IEC 25062:2006 Software engineering – Software product Quality Requirements and Evaluation (SQuaRE) – Common Industry Format (CIF) for usability test reports’. The format for the report has four major sections:
- Executive Summary: A high-level overview of the test and results for decision makers who may not read the remainder of the report.
- Introduction: Gives a description of the product under test and the test objectives.
- Method: Provides sufficient information to allow replication of the procedure and details of the data recorded.
- Results: Contains an analysis of the data obtained and a presentation of the results under the headings of performance and satisfaction.
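The satisfaction figures in the Results section are commonly obtained from a standardised questionnaire; the CIF standard does not mandate any particular instrument, but the System Usability Scale (SUS) is a frequent choice. The following sketch applies the standard SUS scoring rule: for ten items rated 1–5, odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5 to give a 0–100 score.

```python
# Standard SUS scoring: ten items on a 1-5 scale; odd items contribute
# (response - 1), even items (5 - response); the sum is scaled by 2.5.
# The example responses below are invented for illustration.

def sus_score(responses):
    """Score one participant's ten SUS responses (each 1-5) on a 0-100 scale."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly ten items")
    total = 0
    for item, response in enumerate(responses, start=1):
        total += (response - 1) if item % 2 == 1 else (5 - response)
    return total * 2.5

# One participant's (invented) responses to items 1..10:
print(sus_score([4, 2, 4, 1, 4, 2, 5, 1, 4, 2]))  # -> 82.5
```

In a CIF report such per-participant scores would typically be summarised (mean and spread) alongside the performance data.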
Any additional material relevant to the report may be placed in appendices. The standard contains a template for formatting the summative evaluation report. Formative evaluation reports need not be anywhere near so formal; however, the rationale and background preparation of the kind documented in the standard report must in principle be available should the design team require it.
For some useful advice on report writing from another profession, see Report Writing.