A selection of things we're thinking about at the moment
Prototyping on mobile devices
When user testing for mobile devices, it is still commonplace to simulate the experience on a desktop computer screen, rather than on the mobile device itself (usually because it is easier to record the session on a computer screen than on a mobile device). The problem with this approach is that by not testing on a device like the one that will be used in the real world, the tests lose much of their validity.
While paper prototypes are fine for testing the early stages of a design, they cannot be used to test a finished application. We use two approaches for testing high-fidelity prototypes on mobile devices:
Static image prototype
A prototype made of static screenshots is loaded onto the mobile device. This gives the user an understanding of what each screen will look like. Hotspots can be added to the screens to allow users to interact with the prototype. Depending on the fidelity of the images, the experience can be a very realistic one. This is a very cost-effective solution during early stages of design.
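As an illustration of how hotspots work, a clickable area can be overlaid on a static screenshot with plain HTML; this is only a sketch, and the filenames and positions below are hypothetical:

```html
<!-- Hypothetical sketch: a full-width screenshot with an invisible
     tappable hotspot where the prototype's "Search" button appears. -->
<div style="position: relative;">
  <img src="screen-home.png" alt="Home screen mock-up" style="width: 100%;">
  <!-- Tapping the hotspot loads the screenshot of the next screen -->
  <a href="screen-search.html"
     style="position: absolute; top: 10%; left: 70%; width: 25%; height: 8%;">
  </a>
</div>
```

In practice, dedicated prototyping tools generate this kind of linked-screenshot structure automatically, but the principle is the same.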
HTML prototype
An HTML prototype is loaded on to the mobile device. This gives the most ‘true to life’ experience, as all the functionality is represented, but it can require a large amount of effort to create. This type of prototyping is most suitable later in the design process.
With our mobile testing kit, we are then able to record user testing sessions undertaken on a mobile device.
These days, design for the ‘web’ means more than making your site look smart in your favourite desktop browser. More than ever before, your website is likely to be viewed on a device such as a smart phone or a tablet. Some users may never see your site on a desktop browser at all (think of all the times you’ve quickly looked at the TfL website from your iPhone whilst on the move).
Rather than a ‘one layout fits all’ approach, responsive design and media queries allow us to optimise the layout of your website to best suit the multiple screen sizes represented by different devices. Media queries detect your device’s viewport size, and can change a three column layout best suited for viewing on a desktop into a single column layout that is more easily read and navigated on your smart phone. All this without the need to build and maintain a separate mobile site or app*.
This site utilises the principles of responsive design. If you’re reading this from your desktop, try resizing your browser to a smaller width – this gives you an idea of what a smart phone user would see. We’ve also considered how horizontal menus behave in a small viewport, as well as the sizing of images. Column widths are now coded in percentages rather than pixels to keep their proportions in a flexible layout.
Responsive design gives us the ability to optimise for mobile using the power of existing tools (CSS), with no need for detection scripts or separate URLs.
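As a minimal sketch of the idea (the class name and breakpoint are hypothetical choices, not a prescription), percentage-based columns can collapse into a single column below a chosen viewport width:

```css
/* Three columns sized in percentages so they flex with the viewport */
.column {
  float: left;
  width: 31%;
  margin-right: 2%;
}

/* Below 480px (a typical smart phone width), stack into one column */
@media screen and (max-width: 480px) {
  .column {
    float: none;
    width: 100%;
    margin-right: 0;
  }
}
```

The same principle extends to font sizes, image widths, and menu behaviour: one set of markup, with CSS adapting the presentation per viewport.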
*Of course, there are times when a separate mobile site or app is a better way to service your content. Content should always come first!
Eye tracking: yes or no?
Eye tracking is the process of measuring a user's point of gaze on a page. The idea is that by objectively analysing these points at which the user was looking, we can accurately understand how elements on that page perform.
Eye tracking has many proponents, and for some tasks it can be very useful. We are trained in the use of eye tracking, and believe that in certain situations it can provide useful data when used alongside qualitative research. However, when considering how it fits into studies of user experience, we rarely recommend it, due to its many limitations:
Eye tracking doesn’t tell you where the user has looked; it tells you where they fixated
Eye tracking records when the eye moves and rests (‘saccades’ and ‘fixations’), but a fixation could mean the user is consciously looking at a piece of information (and taking it in) or just resting their eyes (and not taking it in at all). Just because a user’s eye rests on a certain area of the page does not guarantee that they have actually seen the information, even though it produces nice-looking heat maps.
Eye tracking doesn’t tell you what the user has missed
The human eye can sometimes see something it is not directly looking at. We can perceive a lot of our environment through our peripheral vision. Eye tracking cannot say that a user didn’t see something; it can only say they didn’t fixate there.
Eye tracking is quantitative, not qualitative
Eye tracking for user experience research should always be used in addition to user testing, never as a replacement. Knowing where a user is looking does not tell us how an interface should be improved.
Eye tracking requires a scripted process
User testing with eye tracking must follow a script with no ‘think-aloud’ user feedback, which prevents the user from discovering a website on their own. This usually means that participants do not complete tasks that are realistic to them, and will not be able to explain their thinking as they go. It is very difficult to claim naturalistic validity in such tasks.
Eye tracking can't cope with certain technologies
Eye tracking does not work well with pages whose elements change dynamically, such as drop down menus and popup messages. For many websites, eye tracking data will either not be very useful, or will be unobtainable.
Eye tracking is expensive and unreliable
Setting up tasks and analysing data from eye tracking is extremely time-consuming and expensive. The technology is also notoriously unreliable, with many ways for data to be lost to technical issues.
Remote user testing
With clients based and operating in markets all over the world, we are often asked about remote usability testing. That is, can we conduct testing with users who are unable to come to our London office, even users who are located in other countries or across multiple locations?
The answer is: yes we can.
We can travel to the user's location and conduct user testing in person, or we can conduct user testing remotely. There are a number of options for the latter:
Unmoderated remote user testing
This option is ideal for testing very specific things such as a single page, or the navigation structure of a website. Users undertake the testing tasks at their own convenience. Because we will not be in attendance to facilitate, particular attention is paid to the setup and structure of the test, as well as clearly communicating the tasks to the user beforehand.
Facilitated remote user testing
Users in a remote location (we don't mean Antarctica, just those not in our office) undertake the testing tasks while communicating with us, the facilitators, via a webcam, microphone, or even over the phone. We can see what is happening on the user's screen via a screen-sharing application. It can range from something as simple as a telephone call while the participant uses the website or prototype, to something as in-depth as a test in a usability lab. It all comes down to the needs of the project.
A specific time will need to be agreed upon between all participants. While still adequate and reliable for research purposes, the recordings and the observation can vary greatly in quality, depending on the user’s computer setup. A good broadband connection is a must.
Using web analytics in UX
There is a common misconception that website analytics alone can help you understand your users and drive design changes. Whilst analytics can be a very valuable tool, misinterpreting the data can easily lead to catastrophic changes in the design.
Website analytics show you the 'what' but not the 'why' of user behaviours. You can see the common user journeys taken and the searches that have been performed, but they do not tell you why. In order to understand why, rich qualitative research must be conducted alongside the quantitative analysis of website analytics. Some examples:
Users spend a lot of time on a single page
- The page is very interesting and engaging
- The users cannot find what they’re looking for
Individual users visit a lot of pages within the site
- The website is very interesting and engaging
- The users cannot find what they’re looking for
A search term is popular
- Users cannot find what they’re looking for from the navigation
- Users just prefer searching to browsing
Website analytics should be carefully analysed and serve as guidance for qualitative user research by identifying the areas that should be the focus of the user research. They can also be used to track the traffic of the website before and after design changes to understand their effects.
Ethnographic research
Websites are sometimes used in circumstances we don't often think about when we create them. A user might use the website at the same time as having a conversation or watching TV. They might use it on their iPhone whilst walking down the street. Sometimes users do not give their full attention to the website they are using.
While there are methods to simulate such circumstances in a lab environment, they are impossible to recreate in full fidelity. If you want to observe user behaviour as close to reality as possible, we recommend an ethnographic approach. That is, observe the user in a realistic environment with all the distractions that naturally occur while they are interacting with the website. By placing the user in their natural environment, it is likely that they will be more relaxed, which will enhance the validity of the results.
One of the challenges of this approach is obtaining a good quality recording of the users’ interactions and facial expressions without disturbing the natural flow of the interaction.
Iterative design and test methodology
During a day of user testing, it is quite common to discover the same usability issues with different users over and over again. When we test an interactive prototype which we have designed, we always like to have the designer on hand. This is so when a usability issue is uncovered, the designer can take immediate action to resolve it (if it’s simple) or design a possible solution (if it’s more complicated). The updated version of the prototype is then immediately deployed for the next user testing session.
This eliminates the time spent in future testing sessions discussing a known issue, and provides feedback on the newly created solution to that issue.
More complex redesigns can be made in-between days when we’re conducting user testing sessions. For example:
- Mon: User testing
- Tue: Redesign 1 (informed by the user testing)
- Wed: User testing redesign 1
- Thu: Redesign 2 (informed by the user testing redesign 1)
- Fri: User testing redesign 2
Multivariate, A/B testing and usability testing
Multivariate testing and A/B testing are used to obtain quantitative results showing that one design decision performs better than another. They are invaluable techniques for testing subtle differences in a design and for optimising specific pages.
However, multivariate and split testing are often misused and the results misjudged. These tests produce quantitative results and, as such, do not help you understand why option A is better than option B. Similarly to eye tracking and website analytics, these methods in isolation cannot help you understand user behaviour. If you compare two radically different pages using A/B testing and find that one increases conversion, you still won't know what contributed to that change. It could be the different copy, the different images, the different layout, or a combination. This carries the risk of making the page worse in the future, as you will not know which features of the page were the successful ones.
Sometimes, the main goal of a website might not be to convert people with a button click, but to create a positive impression of the company and build loyal customers. These long-term goals cannot be measured with A/B or multivariate testing.
User experience is about user journeys and the flow of a website. This flow cannot be tested by comparing versions of a single page. The user journey in its entirety must be examined. This is something that can be done with user testing, by watching users as they use the website and explain their thinking as they go.
Only after the user experience of the website has been researched and designed does it become valuable to experiment with multivariate testing.
Creating a flexible testing environment
When testing in a relaxed environment, such as a lounge, we have to keep in mind that people take a less task-oriented approach. Multitasking is more common, as is switching between platforms and leaving tasks unfinished. People might be watching TV while using their iPad to tweet about the programme they’re watching. This is where ethnography comes in very handy, as the researcher has the opportunity to observe all these behaviours.
However, in some projects, ethnographic research might not be possible. In these cases, an easily adaptable testing environment must be used to conduct the user testing. The test environment should create a relaxed atmosphere and include several real-life distractions.
In our adaptable user testing rooms, we can rearrange the furniture and the setup to recreate different environments, such as a lounge. This can be used for:
- Tablet testing
- Video games testing
- TV testing
- Mobile phone testing
- Out-of-box experience testing
- A combination of the above
Conducting these tests in a controlled environment has the following benefits:
- Good quality audio and video recording
- Ability to observe the testing from the observation room
Use of video highlights reels
After a set of user testing sessions, deliverables traditionally come in the form of presentations or reports. These can sometimes be lengthy and time-consuming to digest. An alternative we offer is the highlights reel. We believe it is crucial for clients to get a real insight into users’ experiences of the site or application we’re testing.
The richest way of gaining this insight is a compilation of short recorded clips of the users in the test sessions, which helps to summarise the key findings of the research. A highlights reel typically lasts 10-15 minutes, so it is a perfect way to inform stakeholders who were unable to attend the user testing of its conclusions. This deliverable sits alongside a more detailed report, which can be referred to for a deeper understanding of the research.
The highlights reel is a useful summary of the results; however, it does not act as a substitute for seeing the research live. We always encourage our clients to come and observe user testing sessions when possible.