Running Usability Lab Sessions

These notes are intended to help with preparing for a usability test, running the test sessions and securing the data obtained. They have been kept brief and are intended as a checklist rather than comprehensive coverage of the issues.


Preparation

This consists of specifying the objectives of the test, planning sessions, creating consent forms, carrying out a risk assessment, recruiting test users and assigning staff roles.

Specify Test Objectives

Before specifying the objectives for the planned testing session it is necessary to consider what it is possible to achieve in the lab. The main purpose of using a usability laboratory is to establish the usability of a software product. A widely accepted definition of usability is that of the International Organization for Standardization [ISO 9241-11:1998 Ergonomic requirements for office work with visual display terminals (VDTs) — Part 11: Guidance on usability]:

The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.

The three aspects of usability listed here can be assessed with differing levels of accuracy due to the very particular (and peculiar) context of a usability lab.

Context of use

The specified context of use of a software product has a number of dimensions including prominently: environmental, social, psychological and technological.

The environmental context covers such details as indoor or outdoor use, stationary or mobile use (e.g., in a vehicle), the presence of environmental contaminants (which may require the user to wear protective clothing), ambient noise, differing light levels, etc. Only a few of these aspects can be reproduced in a standard usability lab, which best simulates an office or domestic interior.

The social context covers such details as individual or group working, public or private use, using the product as part of an employment contract or voluntarily, the economic cost or consequences of use, etc. Whilst it is possible to simulate, or provide analogues for, some of these factors (e.g., for the economic factor: rewarding test performance proportionately), the degree of realism may be questionable.

The psychological context relates to the state of the user when engaging with the product: focused, anxious, excited, fearful, etc., and any intrinsic psychological objective of the product: for entertainment, relaxation or stimulation; for computer-based training, learning; etc. More extreme psychological contexts are difficult (and possibly unethical) to induce in a usability lab.

The technical context is the equipment and infrastructure configuration that the product will run on. Clearly it is important that the lab set-up replicates this accurately; in particular, it should not be better than that experienced by eventual real users (screen size and resolution, processor speed, communication bandwidth, etc.).

The extent to which the full context of use can be synthesised in the usability lab will influence significantly the accuracy with which the aspects of usability can be predicted in normal operation. The measurement of these aspects is discussed below.


Effectiveness

In gross terms, does it actually do the job? At the lowest level, are there any bugs in the software or its interface? At the highest level, does the software articulate appropriately with the overall tasks that the user is engaged in, both on and away from the computer?

This is usually assessed by having representative users carry out typical tasks in the lab. The bugs revealed together with the errors and mistakes made are recorded as a narrative. This demonstrates the ‘in principle’ effectiveness of the product, and is clearly valuable as part of the development process. However it must be borne in mind that the outcome is merely a prediction of the effectiveness; in real use, unforeseen factors may significantly influence it.


Efficiency

There are a number of aspects to efficiency, predominantly: speed of performance, error rate, time to learn and memorability. These lend themselves to quantification: essentially counting and timing, which can be done straightforwardly in the lab. However, the peculiar circumstances of the lab (the experience of being observed, the presence of a facilitator, the unfamiliar surroundings, etc.) may influence performance, and measurements may not predict real use accurately.
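The counting and timing described above can be sketched as a small logging helper. This is a minimal illustration only; the class name and record format are invented for this example, not part of any standard tooling:

```python
import time

class EfficiencyLog:
    """Records (task, duration in seconds, error count) for one Test User."""

    def __init__(self):
        self.records = []   # one (task_name, seconds, errors) tuple per task
        self._start = None
        self._errors = 0

    def start_task(self):
        self._start = time.perf_counter()
        self._errors = 0

    def note_error(self):
        # Called by an observer each time the Test User makes an error.
        self._errors += 1

    def finish_task(self, task_name):
        elapsed = time.perf_counter() - self._start
        self.records.append((task_name, round(elapsed, 1), self._errors))
```

Speed of performance and error rate fall straight out of such records; time to learn and memorability require repeated sessions with the same Test User.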


Satisfaction

The usability attribute of satisfaction is a product of the balance struck in the design between the conflicting requirements for effectiveness and efficiency, together with the extent to which the design fits the context of use. It is also influenced by presentational factors (a ‘cool’ visual design can sometimes mask other shortcomings, at least in the short term). It is not reasonable to expect a complete assessment of satisfaction from a lab test participant, as they will have had only minimal exposure to the product in an artificial setting. Satisfaction can only be established after the product has been put to real use. However, the lesser attribute of acceptability can be established, i.e., whether the Test User would be willing to use the product for a real task. A negative assessment of acceptability is a strong predictor of poor satisfaction.

In conclusion, testing in a usability lab is most appropriate for assessing the effectiveness of the product; some gross indications of efficiency may be obtained as well as an indication of acceptability.

This implies that the most productive approach will be to formulate crucial questions about the effectiveness of the product, and to orient the design of the test session to answering them. A by-product of the investigation may be indications of efficiency and acceptability.

Plan Test Sessions

This involves deciding on an appropriate test protocol, preparing test material, and assessing the length of session required.

Decide on test protocol

Usually the most effective testing protocol involves giving the Test User a context for the task, setting a goal, and providing appropriate information both externally (in documentation, etc.) and in the system under test, before asking them to carry it out.

As an example, in testing a shopping website, the facilitator might say:

“I would like you to imagine that you are using this site to do your weekly home shopping. Here is a list of items that I would like you to purchase. [Hand Test User a list of items and quantities.] You may also purchase other items if they seem good value. When you make a payment, please use the details on this credit card. [Hand Test User dummy credit card details.] For delivery details please use the name, address and phone numbers on this sheet. [Hand sheet to Test User.]”

In addition to collecting observation data, the Test User may be asked to comment as they carry out the task. The facilitator may prompt for information as the test proceeds, e.g., “you seemed puzzled just then.” It must be borne in mind that both these types of intervention may subtly affect performance.

Decisions must be made as to how much help will be provided and how queries will be handled. The facilitator should strongly resist the impulse to ‘help’ the Test User. However, if they are completely stuck and cannot proceed, it may be appropriate to give an explicit prompt (and note the shortcoming revealed in the design). It is a good idea to brief the Test User that if they hit a point where in real life they would give up, they should say so. If they are searching and trying out actions on the interface then, depending on the status of the system under test (prototype or finished product), it may be necessary to steer them away from areas that are not working.

Prepare test material

The test scenarios that will be used should be documented and information resources (e.g., the shopping list) prepared.  If a shallow prototype is being tested it will be necessary to ensure that appropriate data appears in the displayed fields.  Any database or data content required may need to be set up before each test run.
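Where the test data lives in a database, the reset can be scripted so that it runs identically before every session. A minimal sketch using SQLite; the table and product entries are invented for illustration and would be replaced by the real test content:

```python
import sqlite3

def reset_test_data(db_path):
    """Drop and recreate the illustrative product table; return the row count."""
    conn = sqlite3.connect(db_path)
    cur = conn.cursor()
    cur.execute("DROP TABLE IF EXISTS products")
    cur.execute(
        "CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price_pence INTEGER)"
    )
    cur.executemany(
        "INSERT INTO products (name, price_pence) VALUES (?, ?)",
        [("Milk (2 pints)", 115), ("Wholemeal bread", 95), ("Free-range eggs (6)", 180)],
    )
    conn.commit()
    count = cur.execute("SELECT COUNT(*) FROM products").fetchone()[0]
    conn.close()
    return count
```

Running such a script during the changeover guarantees each Test User sees the same starting state.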

Assess session length

Carrying out a task as a test subject may be more stressful than in a real situation. Bear in mind that if the test subject becomes stuck at some point, they will know that the facilitator could ‘help them out’ but is deliberately letting them flounder.  Additionally and despite being told that it is not the case, some Test Users may have a residual feeling that they are being tested as well as the product.  Also if a prototype is being tested, it is likely to have significant usability bugs that will add to the stress.

Depending on the nature of the system under test and the tasks it supports, a session lasting up to one hour is generally acceptable.  If longer sessions are required then the Test User may require breaks.

Carry out a dummy run of the protocol with a colleague acting as user to establish gross timings and ensure that it is not excessive.

Scheduling Sessions

Assuming that the lab is used for a full day, with testing sessions of one hour and a full complement of staff, an indicative schedule is shown below.

09:00 – 10:30 Set up the lab.

Run through test protocols.

Familiarise staff with the equipment and ensure it is working correctly.

10:30 – 11:30 Test session 1
11:45 – 12:45 Test session 2
13:00 – 14:00 Lunch break
14:15 – 15:15 Test session 3
15:30 – 16:30 Test session 4
16:30 – 17:30 Carry out a brief review meeting with all staff participants.

Finalize and collect notes.

Ensure videos are successfully recorded to portable media.

Shut-down the lab.

The 15-minute gap between sessions is used to reset the test computer, ensure video recordings are secured and set up for the next session, discharge the finished Test User and welcome the next, and allow staff to get refreshment and take a comfort break. Clearly, if using fewer than the full complement of staff, more time may be needed for the changeover.

Longer or shorter sessions may be appropriate, but a minimum 15-minute changeover should be scheduled, or delays may build up, causing the later sessions to overrun; this will not necessarily be acceptable to the Test Users.
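The changeover arithmetic above is easy to get wrong when re-planning on the day. A short sketch that generates start and end times from a first start time, session length and changeover gap (the function name and defaults are only a suggestion, chosen to match the indicative schedule):

```python
from datetime import datetime, timedelta

def build_schedule(first_start, sessions, session_mins=60, changeover_mins=15):
    """Return (start, end) time strings with a changeover gap between sessions."""
    slots = []
    start = datetime.strptime(first_start, "%H:%M")
    for _ in range(sessions):
        end = start + timedelta(minutes=session_mins)
        slots.append((start.strftime("%H:%M"), end.strftime("%H:%M")))
        start = end + timedelta(minutes=changeover_mins)
    return slots

# The morning of the indicative schedule:
# build_schedule("10:30", 2) → [("10:30", "11:30"), ("11:45", "12:45")]
```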

Create Test User Consent and Broadcast Release Forms

Both professional ethics and legislation (the Data Protection Act) require that Test Users are fully aware of the nature of the activities they will undertake, that their personal details may be recorded, and that their participation in a session will be videoed. They must agree to the specific use that will be made of their personal details and the video recording. This is confirmed by asking them to sign a consent form detailing their understanding. Here is an example Test User Consent Form that may be adapted appropriately.

Usually consent is obtained to personal details and video being viewed only for the purpose of analysis.  If it is intended that wider use should be made of the material, e.g., for presentation at a conference, in publicity or advertising, then a broadcast release is also required.  Example broadcast release forms may be found on the web (search using that term or ‘talent release’).  It may well be the case that a particular Test User is willing to take part in a test, but would not want the video more widely distributed.

Where Test Users are employed by the company on whose behalf testing is taking place, the status of voluntary participation may be ambiguous. To avoid problems the company human resources department should be consulted before testers are recruited.

Make a Risk Assessment

Common sense and health and safety legislation require that, before an activity is undertaken, an assessment of the risks involved is made and actions are taken to mitigate any that are identified. Carrying out usability testing of software intended for office or domestic environments is not an inherently risky activity; however, for different contexts, or when novel or prototype equipment is involved, safety issues may arise.

The Health and Safety Executive (HSE) recommend a five-step process for routine risk assessment.

Step 1: Identify the hazards
Step 2: Decide who might be harmed and how
Step 3: Evaluate the risks and decide on precautions
Step 4: Record your findings and implement them
Step 5: Review your assessment and update if necessary

These steps are incorporated in a risk assessment form.  This should be reviewed and amended as the circumstances of the planned test determine. For further details see the HSE’s website at:

A template entry refers to the University’s Display Screen Equipment Assessment Form. This is a very comprehensive document used to assess the computer working environment of permanent staff. It is not expected that it will be completed for each Test User, but its contents should be reviewed for relevance to the test set-up during the planning phase.

Recruit Test Users

A budget must be established to pay for recruitment and reward of testers.  Generally Test Users expect to be paid.  The exceptions to this are when they are employed by the company whose product is being tested, when they expect a direct benefit from the product, or when the product addresses some general good (medical or charity).  The rate of pay will depend on the test population required.  If your user profile fits the student demographic, then they can be recruited locally at rates of about £20 per hour.  If recruiting a wider demographic profile then an increased payment may be required, together with transport costs.  If recruiting from a specialised occupational category then rates may need to be congruent with their normal charges, and for the professions (lawyers, dentists, etc.) will be very high.

There are two approaches to Test User recruitment: do-it-yourself or pay a recruitment agency (market research companies).  Recruitment by colleagues of friends and relatives is sometimes possible, though overuse of this resource may influence results as they become ‘expert’ testers.  Recruitment from a broader population can be through advertisements in appropriate media.  To recruit students, arrangements can be made with the Student Union for posters to be put up.  For the wider population, adverts can be placed in local newspapers, community newsletters or shop windows.  To target a specific occupational group approaches can be made through employers or professional associations.  Alternatively invitations can be sent directly to identified individuals (though care must be taken that contact details are obtained from a legitimate source).  Identifying a key individual who can be paid a ‘bounty’ for recruiting Test Users is sometimes appropriate.

Recruited Test Users should be given a written invitation detailing the date and time to attend and for how long.  Clear transport details should be given (location of the building, bus, train and driving directions).  Here is a template invitation letter.

You should plan for the recruitment process to involve a considerable amount of effort and time.

Assign Staff Roles

Depending on the nature of the testing required, the roles listed below can be combined. The responsibilities of each role must in any case be covered.


Director

In overall charge of the testing sessions, ensuring:

  • The safety of Test Users and staff.
  • The focus of the session is on the test objectives.
  • Schedules are adhered to.
  • Agreed protocols are applied.
  • Appropriate notes are obtained from the observers.
  • Recording has been initiated at the appropriate time (monitoring the Recordist).
  • All consent forms, notes and data are secured at the end of the session.
  • A report of test findings is produced.

Facilitator

Liaises with Test Users and usually accompanies them in the Subject Room.

  • Ensuring safety and comfort of the Test User.  Briefing them on actions in the event of a fire.
  • Ensuring the computer is set up appropriately to begin each test, e.g., logged on, test application loaded, cache and cookies cleared, etc.
  • Welcoming the Test User and explaining the purpose and activities involved in the session.
  • Asking the Test User to switch off their mobile phone.
  • Obtaining the Test User’s signature on a Test User Consent Form and, if required, a Broadcast Release Form (if this has not already been done by the Receptionist).
  • Obtaining appropriate personal details, e.g., previous computer and/or task experience, relevant previous education or training, etc.
  • Paying the Test User or giving another reward, and obtaining a signed receipt.  Consideration should be given to when this is done: payment before the test may help confirm to the Test User that it does not depend on their performance.
  • Instructing Test Users in the tasks that they should perform.
  • Executing the agreed protocol, which may involve encouraging the Test User to comment on the task as they undertake it.
  • Attending to instructions from the Director via a walkie-talkie earpiece.
  • On completion of the session, thanking the Test User.

Recordist

Operates the camera controls and recording PC. They will require prior training in the equipment set-up and operation.

  • Arranging the Subject Room appropriately: position of workstation, cameras, etc.
  • Checking for trip hazards: routing all cables close to the walls.
  • Starting up the lab equipment and ensuring it is operating as required. A manual containing start-up and shut-down procedures is available in the lab.
  • Ensuring that sufficient disk space is available on the recording PC.
  • Starting and stopping the recording. N.B., the Director should double-check this, as it is easy to forget.
  • Ensuring good quality images are recorded. This may involve realigning the cameras remotely if testers change their position during a test.
  • Securing the video footage (burning to DVD).
  • Removing video files from the PC and backup storage (unless arrangements have been made for them to be deleted after an appropriate period).
  • Shutting down the lab equipment.

Observers

Observe the test sessions, recording details and issues as they occur.  They are often stakeholders in the project, including interaction designers.  Some discussion may be provoked during the tests, but the Director should take care that not too much attention is taken away from viewing the test.  Whilst it is possible for observers to monitor activity directly through the semi-transparent mirror window, attention is often directed to the display monitor where the screen image can be seen.  If there are more observers than can comfortably fit in the control room, it is possible to take a video feed to another room nearby.


Receptionist

Greets the Test Users as they arrive, offering refreshments and looking after them until their session begins.  While waiting, it may be appropriate for the Test Users to fill out a questionnaire on their personal details, etc., and possibly sign the consent forms.

Operating with a reduced team

Whilst having individual staff take each of the roles above is in some sense ideal, it is also expensive, so duties can be reallocated if necessary.  These roles can be combined:

  • Director, Recordist and Observer
  • Facilitator, Receptionist and Observer

The major compromise is that less detailed observation will be possible, making the subsequent analysis of the recorded video more crucial.

Combining the role of Receptionist with Facilitator works best if the sessions are planned with substantial breaks between them.  The problem to avoid is a Test User arriving before the previous session finishes, when nobody is available to greet them.  Some may be confused by this and leave.

Running Sessions

A typical schema for a testing session, where the Test User is accompanied by the Facilitator in the Subject Room would be:

    1. Invite the Test User into the Subject Room and sit them at the workstation.
    2. Briefly explain the purpose of the testing session, emphasising that it is the software being tested and not the user. Answer any ensuing questions.
    3. Obtain signatures on the Test User Consent Form and, if necessary, the Broadcast Release Form (unless this has been done previously by the Receptionist).
    4. If not previously obtained, take demographic and other relevant details.
    5. Ensure that the Test User is comfortable at the workstation; encourage them to adjust the seat if required. Point out the availability of any refreshments.
    6. Explain the test protocol.  State whether testing will involve one prolonged task or a series of shorter tasks.  The Test User may be asked to provide a commentary as they work.  Brief them to say if they reach a point where, in normal circumstances, they would give up on the task.
    7. Introduce the task or tasks: Set the context in which the Test User is to imagine the task being carried out, together with the goals they are aiming for.  For example: “You are working as a call-centre operator whose responsibility is to answer customer queries regarding the progress of their previously placed order. Customers will phone you with a query and your task is to use this system to answer it, taking as little time as you can.”
    8. Initiate the task: Provide the Test User the information they need to carry out the task.  In the example above, this may be by way of a phone call.  For other tasks hand them a data entry form, or other appropriate material.
    9. Support the Test User as they carry out the task: They may require further information or clarification, also they may get stuck. Care must be taken to minimise the direct guidance given, or the purpose of the test may be undermined.  At the same time, letting the Test User remain for a prolonged period in an impossibly stuck state is likely to be counter-productive.  Depending on the agreed protocol, it may be necessary to encourage the Test User to comment as they proceed, or to answer questions to clarify their rationale for observed actions.
    10. If further tasks and time remain, repeat with the next task at step 7.
    11. On completion of the tasks, it may be appropriate to ask the Test User for any general comments on the software.  This may assist in establishing its acceptability.  Also valuable insights or suggestions for improvement may be obtained.
    12. If payment or other reward is to be made, hand it over and if necessary, obtain a signed receipt.
    13. Thank the Test User and show them out.

It may not be appropriate to adhere rigidly to this schema. Particularly in early testing, problems with the software under test or deficiencies in the task initiation material may be revealed that require improvised workarounds to obtain the maximum value from the test.  In these situations the Facilitator should be guided by the Director who can talk to them via the walkie-talkie without the Test User being aware.

Securing the Data

The video that has been recorded on the Observer Room PC must be transferred to a portable medium (DVD disc, memory stick) or uploaded to a file-server.  Sufficient time to complete this task must be allowed.  Use can be made of the lunch break to save the morning’s files.

Remember that storage and transfer of the video files must be compliant with the Data Protection Act.

Make sure that the video files are given meaningful names so that they can be coordinated with notes made by the observers.
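One simple naming convention (the fields and their order are only a suggestion) builds the name from the date, session number, participant code and camera, so files sort chronologically and match observers’ notes:

```python
from datetime import date

def video_filename(session_no, participant_id, camera, when=None):
    """Compose a video filename observers can match against their notes."""
    when = when or date.today()
    return f"{when.isoformat()}_s{session_no:02d}_{participant_id}_{camera}.mp4"

# e.g. video_filename(2, "P05", "face", date(2024, 5, 13))
#      → "2024-05-13_s02_P05_face.mp4"
```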

Other Resources

Notes by the Interaction Design Foundation on ‘How to conduct user observations’.