Usability Testing of Community Data and Mapping Systems
Denice Warren, Chief Information Systems Designer, [email protected]
Joy Bonaguro, Web and Data Production Coordinator, [email protected]
Abstract
Usability testing is the only way to ensure that a web site designed for the public is truly usable. Devoting resources to such testing pays off by making a web site more efficient, effective, and credible, and leaves a trail of satisfied users who come back to the site and recommend it to their colleagues. This document summarizes the research behind why usability testing is important, especially in web-based GIS, and concludes with a basic protocol for applying usability testing to community data and mapping systems. Detailed protocol resources, such as scripts, sample user tasks, release forms, and sample analyses of results, are available for download at www.gnocdc.org/usability.
Introduction
Web-based data and
mapping systems can be powerful tools for the public to access and visualize
complex information. However, the inherent complexity of these web systems
can itself create a barrier between the public and the information they
need.
Philosophy of user-centered design
A philosophy of user-centered
design, coupled with a culture of continuous quality improvement through
formal usability testing, can create a web system that meets the public
where they are and capitalizes on people's existing strengths in finding
and using information. A highly usable web site allows users to focus
on the information within the system, rather than struggling with how
to use the system itself.
Usability testing
is a natural extension of the community engagement process, and having
an easy-to-use site is an essential element in respecting the community
audience. IBM has published a "User Bill of Rights" that summarizes
the philosophy underlying true user-centered design (IBM).
User Rights (verbatim from http://www-3.ibm.com/ibm/easy/eou_ext.nsf/Publish/12)
- Perspective:
The user is always right. If there is a problem with the use of the
system, the system is the problem, not the user.
- Installation:
The user has the right to easily install and uninstall software and
hardware systems without negative consequences.
- Compliance:
The user has the right to a system that performs exactly as promised.
- Instruction:
The user has the right to easy-to-use instructions (user guides, online
or contextual help, error messages) for understanding and utilizing
a system to achieve desired goals and recover efficiently and gracefully
from problem situations.
- Control:
The user has the right to be in control of the system and to be able
to get the system to respond to a request for attention.
- Feedback:
The user has the right to a system that provides clear, understandable,
and accurate information regarding the task it is performing and the
progress toward completion.
- Dependencies:
The user has the right to be clearly informed about all systems requirements
for successfully using software or hardware.
- Scope:
The user has the right to know the limits of the system's capabilities.
- Assistance:
The user has the right to communicate with the technology provider and
receive a thoughtful and helpful response when raising concerns.
- Usability:
The user should be the master of software and hardware technology, not
vice-versa. Products should be natural and intuitive to use.
What is usability testing?
Usability is formally
defined as the effective, efficient, and satisfying completion of tasks
by users (Lee, 1999, p. 38). In a community data and mapping system,
task completion might include printing a thematic map of one's neighborhood,
downloading data about crime rates within a city, or analyzing statistical
phenomena within a specific community.
Usability testing
is a formal method of watching users interact with a system to complete
a task. In such testing, a naïve "typical" user is given
realistic tasks to complete on the web site. A variety of qualitative
and quantitative data is gathered while the user navigates to complete
the task. An analysis of the data from such testing then informs the iterative
design of these web systems to better meet the needs of the audience.
Rationale for usability testing
Why designing from usability guidelines isn't sufficient
The field of usability
has spawned a massive collection of web design guidelines based on sound
principles of human-computer interaction (HCI). With so much guidance
on how to design a web site, it almost seems that one could design a perfect
web site from the outset, one that did not require usability testing.
This is not the case.
HCI is based on human
behavior, which invariably produces "fuzzy" results (Nielsen,
2003). And, guidelines are generalizations based on the characteristics
of all users, not your target audience. As such, they can fail to address
the unique goals of your system and the specific needs of your users (Microsoft
Corporation, 2000). Guidelines are an excellent starting place for design,
but the only way to know if a particular guideline works is to test it
with members of your target audience.
What makes for poor usability?
Across the web, users
are most frustrated by slow download times, being unable to find specific
information, and confusing site design (Bernard, 2001). All of these annoyances
harm the effectiveness, efficiency and satisfaction of task completion
in different ways and all can be improved through usability testing and
redesign.
When users have to
wait for a long download, their subjective evaluation of the web site
suffers even though they may still be able to complete a given task (Bernard,
2001; Selvidge, 1999). This harms the satisfaction component of task completion,
and thus the usability. Willingness to wait for pages to download differs
with audience characteristics and bandwidth; as you would expect, younger
adults and those with faster Internet connections (cable/DSL) are less
willing to wait for pages to download (Selvidge, 2003).
When users cannot
find the information they seek, the underlying causes are typically in
navigation and organization, the sheer volume of information, or page
placement of content and links (Bernard, 2001). Difficulties in navigation
arise when the site is organized counter-intuitively and users cannot
predict where their actions will take them in the system. Users might
find a web site confusing if it employs a lot of jargon, has insufficiently
described links, or contains an overwhelming amount of content. Information
or links located in unconventional places on the page, or distracting content,
may also generate confusion. These types of problems reduce the likelihood
of task completion.
Special challenges in designing web-based mapping systems
Many of the issues discussed above are problems that all online systems
face. These problems are even more pronounced in online mapping systems
due to 1) the increased complexity that results when specialized functionality
is added to a conventional web browser environment, and 2) the inherent
complexity of the content itself.
Online community
mapping systems were adapted from software designed for expert users who
had common training in both GIS content and tools (Slocum, 2001, p. 12;
Cartwright, 2001, p. 13). Lay users of these systems are in the difficult
position of concurrently learning new content and new tools. (See Haklay
and Tobon, 2002, for a review.) Visitors to an e-commerce site, in contrast, have
an easier time learning the system because they can rely on their previous
experiences shopping in the real world.
Moreover, technical and system incompatibilities still exist in online
mapping systems. With a wide, uncontrolled user base, client-side technology may
not be capable of handling memory-intensive functionality, specialized
plug-ins, or high bandwidth requirements.
One reason client system requirements are often so high is that, as GIS
has been applied to a variety of disciplines, it has become very feature-rich.
The same flexibility that allows GIS to perform a wide range of tasks in
different fields of expertise also acts as a hindrance when adapting the
system for the lay public. Many data and mapping systems inadvertently
overwhelm users with functionality that requires specialized knowledge.
To avoid this, designers can employ such functionality sparingly and with
caution, implementing only those features that have been proven usable
(Cartwright, 2001, p. 5). The premise is that it is better to have a simple,
easy-to-use site than one with extravagant features and poor usability
(Fogg et al., 2001, p. 67).
Credibility & usability
Good usability can
increase the credibility of a web site and its content - an effect that
is especially positive for community data and mapping systems whose purpose
is to inform decisions that impact communities. Credibility also helps
persuade users to perform actions such as registering personal information,
participating in surveys, contributing content, returning to use the site
again, and referring the site to colleagues. Users are less likely to
perform these actions if they doubt the credibility of the site (Fogg
et al., 2002, p. 4). Not only will poor usability make it more difficult
for users to find information they need, it will also make them less likely
to trust it when they do find it.
Usability testing protocol for community mapping systems
Usability expert
Jakob Nielsen (Nielsen, 1997) notes that there is a gap between people
watching a demonstration or discussing a product and actually using it.
This is because people are not necessarily aware of how they work when
using something, and often do not notice the subtle techniques they use
to compensate for poor design (Usability.gov). Asking people what they
think of a site's usability will therefore yield potentially misleading answers.
The only way to truly understand a site's usability is to watch people
use it, and formal usability testing provides a structure for doing so.
For many of the reasons
outlined earlier in this document, testing the usability of community
data and mapping systems is notably different from testing other types
of web sites, especially commercial ones. Most usability testing protocols,
however, are designed for commercial web sites. We found that a customized
combination of the following usability techniques worked best in evaluating
a community data web site:
- Contextual
inquiry (to understand the users' work context)
- Ethnographic
study (to help refine user requirements)
- Usability
testing (experiments where users are given tasks and asked to think
aloud as they perform them).
The major steps in
designing and implementing this integrated protocol are:
- Generate research
questions from which to design user tasks
- Recruit users
to test the site, and then conduct the tests
- Analyze the results
of testing and make design changes accordingly
1. Generating research questions from which to design user tasks
Usability
testing requires a commitment of time, energy and resources. To get the
most out of this effort, you'll want to do two things in preparation.
One, maximize your time with the usability testers - catch and fix all
broken links, misspellings and other obvious problems before conducting
your tests. You want your usability testers to tell you about problems
that you can't discover on your own. Two, take a good look at what you
already know about user experiences with your site and use that knowledge
as a starting place for designing your testing protocol.
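As one way to catch broken links before a testing session, a small crawler can walk the site and report pages that fail to load. The following is a minimal sketch in Python; the START address is a placeholder for your own home page, and it assumes a small, publicly reachable site:

```python
# Minimal broken-link checker: crawl a small site, report links that fail.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen
from urllib.error import URLError, HTTPError

START = "http://www.gnocdc.org/"  # placeholder: swap in your site's home page

class LinkParser(HTMLParser):
    """Collect href values from every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start, max_pages=200):
    seen, queue, broken = set(), [start], []
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            with urlopen(url) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except (HTTPError, URLError) as err:
            broken.append((url, str(err)))
            continue
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)
            # Stay within the site being tested.
            if urlparse(absolute).netloc == urlparse(start).netloc:
                queue.append(absolute.split("#")[0])
    return broken

if __name__ == "__main__":
    for url, err in crawl(START):
        print(f"BROKEN: {url} ({err})")
```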
Good clues can lead
to good research questions, which set the stage for user tasks that will
generate usability data from which you can redesign your site. The more
specific your clues, the more specific your research questions will be,
and therefore the more actionable the results will be. On the other hand,
there is also tremendous value in more general research questions, because
the answers they elicit can then be the starting place for the next iteration
of usability testing. Within a few generations of testing, a vague usability
problem can turn into very specific design solutions. A good usability
testing protocol includes a mix of specific and general research questions.
Table 1: Formulating user tasks

| Clue to usability problem | Sample research question | Sample user task |
| --- | --- | --- |
| From the web site server statistics, the "most popular pages" viewed by visitors to the site do not include what you consider to be the most important pages in your site (such as the page that gives the technical definitions for the indicators you publish).* | Is the 'Definitions' link sufficiently visible? | How many blighted houses are in the Holy Cross neighborhood? (In order to answer this question, users must use the 'Definitions' link to learn the difference between blighted and vacant houses.) |
| Functionality that you spent a great deal of resources to create doesn't get used much (such as a feature that allows users to create custom neighborhood boundaries).*, ** | Is the 'Define your neighborhood boundaries' link visible? Once people get to the feature for defining their own neighborhood, is it clear what to do? Is the tool for selecting the custom neighborhood geography intuitive? Are there other barriers to using this feature? | Use the web site to create a map showing homeownership rates in the neighborhood your nonprofit serves. (To answer this question, users must use the 'Define your neighborhood boundaries' feature.) |
| People ask you how to find information that should be easy to find on the site (e.g., "Do y'all have data on teen births?"). | Is data on teen pregnancy in a predictable category? | What is the rate of teen pregnancies in St. Bernard Parish? (To answer this, users must click on the category that contains the teen pregnancy data.) |
| The site seems to generate the same questions in people's heads, and those questions aren't answered by the site (e.g., "What Census tract am I in?"). | Do the maps provide enough geographic detail to help users choose their area(s) of interest? | Download the data profile for the Census tract in which you live. (This task requires users to find their Census tract on the map.) |
| Server statistics show that one of the most common exit pages is a key navigation page (such as a required registration page). | What's happening when people land on the registration page? Is usability a barrier? Is the form too long? Are users suspicious of our intent? | Any user task that requires users to register would also answer this question. (You don't want to directly task people with registering because that lends false motivation; instead you want to see what happens when registering is a means to another end.) |
| User characteristics (domains, browser versions, operating systems, etc.) that show up in the server statistics don't match what you would expect for your target audience. | How do our pages look on the computers of our users? Do our pages download quickly enough? Do the pages print well on their printers? | These questions are answered a little bit by every task. (In order to be complete, conduct some basic "web site calisthenics" on the user's computer.) |
| Design decisions that you debated during the production of the site, or that you have a 'weird feeling' about. | Will people expect to find voter registration information under the "community participation" category? | How many registered voters are there in St. Tammany Parish compared to the state as a whole? [This puts the search term 'voters' in the user's head and will test whether they expect to find 'voters' in 'community participation.'] |
* When you're trying to determine whether a feature is getting used (or a page is being viewed), make sure that you filter out internal users. A programmer testing a feature, or another staffperson demonstrating it at workshops, will look like regular traffic unless you filter them out (see the sketch below).
** Depending on the type of feature this is, you may have to look at records of database queries, server statistics, or other indicators of "use."
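To make the first footnote concrete: before trusting "most popular pages" counts, strip internal traffic from the raw access log. A minimal sketch in Python, assuming a common-format Apache access log and a hypothetical list of staff IP addresses:

```python
# Filter internal users out of a server access log before counting views.
# The internal address list and log format are assumptions for illustration.
from collections import Counter

INTERNAL_IPS = {"192.168.1.10", "192.168.1.11"}  # hypothetical staff machines

def count_external_views(log_path):
    views = Counter()
    with open(log_path) as log:
        for line in log:
            fields = line.split()
            if len(fields) < 7:
                continue
            ip, path = fields[0], fields[6]  # common log format positions
            if ip not in INTERNAL_IPS:
                views[path] += 1
    return views

if __name__ == "__main__":
    for path, n in count_external_views("access.log").most_common(10):
        print(n, path)
```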
The user tasks are
printed and taped onto index cards. These cards help minimize cueing by
the interviewer, and provide some degree of standardization across testing
sessions.
2. Recruit users to test the site, and then conduct the tests
This
composite test protocol has the following major elements (more detailed
testing protocols and sample scripts and release forms are available for
download at www.gnocdc.org/usability/):
- Participants
are members of the target audience, but ideally naïve ones, meaning
that they have spent little or no time at your web site. (They, or their
organization, are compensated for their participation.) A sample of
3-5 users for each round of usability testing is enough to get actionable
data.
- Usability testing
is conducted in the field (rather than in a usability lab) at the place
where the user would typically access your site.
- Two staffpersons
(or consultants) conduct the testing. One is the interviewer, the other
is the notetaker. Two hours is a good amount of time for a field usability
test.
- The interviewer
has the user fill out a questionnaire, and interviews them briefly about
their work. Concurrently, the notetaker records the specifications of
the user's computer (OS, browser, bandwidth speed, screen resolution,
bit-depth) and the browser's default home page setting.
- The notetaker
visits key pages on the site being tested from the user's computer and
notes (subjectively) if the pages download in a reasonable amount of
time and whether they display well. The notetaker also prints out samples
of pages to see how they print on the user's printer.
- When the notetaker
is finished testing the computer, they log the computer into a tracking
system that sets a temporary cookie so that all pages clicked on the
web site are recorded for later analysis; a sketch of one such system
appears after this list. (One could note each page visited manually,
but there is a great risk of lost data, especially when a user hits a
page and backs out quickly.) The primary analysis for this data is
"number of clicks to target," which is a measure of efficiency. ("Time
to target" is a common usability metric for testing that occurs in a
controlled environment, but it does not work when testing occurs in
the user's environment.)
- When the usability
testing begins, the interviewer explains how the test will work. It
is emphasized that the web site is being tested, not the user. The interviewer
explains that she will hand the user a note card with a task written
on it. The user is then to use the web site to perform the task, thinking
aloud as they go. The interviewer will not answer questions during the
task, and can't help the user find the answer. (The notetaker takes
copious notes about what the user says and, especially, about behaviors
that will not be captured by the tracking system.)
- When the user
is performing a task, the interviewer prompts them to continue thinking
aloud if they get quiet, and may ask non-leading questions to elicit
explanations and motivations for the user's behavior.
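The paper does not specify how the tracking system is built; one possible shape is a small piece of server middleware that sets the temporary cookie and appends every page view to a log. A sketch in Python (WSGI), with illustrative names such as usability_session and clicks.log:

```python
# Sketch of a page-view tracker: WSGI middleware that tags each test
# session with a temporary cookie and logs every page the user visits.
# Names (usability_session, clicks.log) are illustrative, not from the paper.
import time
import uuid
from http.cookies import SimpleCookie

SESSION_COOKIE = "usability_session"

class ClickTracker:
    def __init__(self, app, log_path="clicks.log"):
        self.app = app
        self.log_path = log_path

    def __call__(self, environ, start_response):
        cookie = SimpleCookie(environ.get("HTTP_COOKIE", ""))
        morsel = cookie.get(SESSION_COOKIE)
        session = morsel.value if morsel else uuid.uuid4().hex
        # Record timestamp, session id, and path for later analysis.
        with open(self.log_path, "a") as log:
            log.write(f"{time.time()}\t{session}\t{environ.get('PATH_INFO', '/')}\n")

        def tracking_start_response(status, headers, exc_info=None):
            if morsel is None:  # first hit: set the temporary cookie
                headers = headers + [("Set-Cookie",
                                      f"{SESSION_COOKIE}={session}; Path=/")]
            return start_response(status, headers, exc_info)

        return self.app(environ, tracking_start_response)
```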
3. Analyze the results of testing and make design changes accordingly
Usability testing
will generate a combination of qualitative and quantitative data. You'll
want to craft the analysis so it answers the initial research questions
and leads to recommendations for redesign. For each research question,
consider the data gathered for all of the user tasks addressing that question.
From the quantitative data generated by user tracking, you can determine
the number of clicks it took users to get from the home page to task
completion. (This number can be compared to the minimum number of clicks
required to complete the task; a greater number of clicks in testing may
indicate a lack of efficiency in the design. A sketch of this comparison
appears below.) When you combine that with notes taken while the user was
thinking aloud, you may gain insight into where they encountered trouble
and whether they were frustrated by specific design features.
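To illustrate the clicks-to-target comparison: the observed count can be read off the tracking log, while the minimum can be computed as a shortest path over the site's link structure. A sketch in Python with an invented site graph:

```python
# Sketch of the "number of clicks to target" analysis: observed clicks come
# from the tracking log; the minimum comes from a breadth-first search over
# the site's link graph. The graph below is a made-up example.
from collections import deque

SITE_LINKS = {  # hypothetical site structure: page -> pages it links to
    "/": ["/data", "/maps", "/about"],
    "/data": ["/data/voting", "/data/housing"],
    "/maps": ["/maps/tracts"],
}

def min_clicks(start, target):
    """Fewest clicks from start to target (BFS shortest path)."""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        page, clicks = queue.popleft()
        if page == target:
            return clicks
        for nxt in SITE_LINKS.get(page, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, clicks + 1))
    return None  # target unreachable

def observed_clicks(session_pages, target):
    """Clicks a tester actually took before first reaching the target."""
    for i, page in enumerate(session_pages):
        if page == target:
            return i
    return None

# Example: the tester wandered through a wrong category first.
path = ["/", "/data", "/data/housing", "/data", "/data/voting"]
print(observed_clicks(path, "/data/voting"), "observed vs",
      min_clicks("/", "/data/voting"), "minimum")
```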
Table 2: Sample analysis of results and design recommendation

| Research question | User task | Analysis of results | Design recommendation |
| --- | --- | --- | --- |
| Will people expect to find voter registration information under the "community participation" category? | How many registered voters are there in St. Tammany Parish compared to the state as a whole? | Three users took an average of 7 clicks to find the target data (compared to only 3 clicks minimum required from the home page). All users went to the page with "People & Households" data rather than "Community Participation" data. One user commented, upon finding the data on the "Community Participation" page, "Hmm... I was expecting to find information on neighborhood watch groups and church activities here, not voting." | Create a link especially for "Voting," since nobody expected "Voting" data to be in the vaguely-titled category of "Community Participation." |
After redesigning
the site based on feedback from usability testing, you'll want to test
the site again, using the same tasks but new, naïve users roughly matched
to those in the first cohort. This way you can determine whether your
design change was indeed an improvement. Also, compiling results that
show a positive effect from usability testing helps justify its continued
role in your organization (and budget).
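A simple round-over-round comparison of average clicks-to-target is one way to show that positive effect; a sketch in Python with invented numbers:

```python
# Compare clicks-to-target across testing rounds, to check whether a
# redesign actually improved efficiency. The task name and numbers are
# invented for illustration.
round_1 = {"voting task": [7, 8, 6]}   # clicks per tester, before redesign
round_2 = {"voting task": [3, 4, 3]}   # clicks per tester, after redesign

for task in round_1:
    before = sum(round_1[task]) / len(round_1[task])
    after = sum(round_2[task]) / len(round_2[task])
    print(f"{task}: {before:.1f} -> {after:.1f} average clicks")
```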
Conclusions
Usability testing
is an essential technique in the continuous improvement of community data
and mapping systems. As described in this document, users benefit because
they are able to access the information they need more efficiently and
with less frustration. The organization responsible for the web site benefits
from added credibility. An additional side benefit is that the web site
development team gains a greater understanding of the audience, making
future development efforts more efficient and better targeted. And the
development team can more confidently make design decisions with the right
information in front of them. Users involved in usability testing often
turn out to be enthusiastic advocates of your site and can accelerate
word-of-mouth marketing to new visitors.
Return on investment
(ROI), the metric typically applied in commercial web sites to judge the
worth of a usability initiative, is difficult to measure in a community
web site that is a public good. However, the cost of not conducting usability
testing is unarguably too high - users are likely to feel disrespected,
not trust your site, and might not come back.
References
Bernard, Michael L., 2001. "Criteria for Optimal Web Design: How can I reduce the major
user annoyances on my site?" Software Usability Research Laboratory,
Wichita State University. Retrieved 9 June 2003: http://psychology.wichita.edu/optimalweb/annoyances.htm
Cartwright, W.,
Crampton, J., Gartner, G., Miller, S., Mitchell, K., Siekierska, E., and
Wood, J., 2001. "Geospatial Information Visualization User Interface
Issues." Cartography and Geographic Information Science, Vol. 28,
No. 1, January 2001. Retrieved 9 June 2003:
http://www.geovista.psu.edu/sites/icavis/agenda/PDF/Cartwright.pdf
Fogg, B.J., Marshall,
J., Laraki, O., Osipovich, A., Varma, C., Fang, N., Paul, J., Rangnekar,
A., Shon, J., Swani, P., and Treinen, M., 2001. "What Makes Web Sites
Credible? A Report on a Large Quantitative Study." Persuasive Technology
Lab. Stanford University. Available at www.webcredibility.org
Fogg, B.J., Kameda,
T., Boyd, J., Marshall, J., Sethi, R., Sockol, M., and Trowbridge, T.
2002. "Stanford-Makovsky Web Credibility Study 2002: Investigating
what makes Web sites credible today." A Research Report by the Stanford
Persuasive Technology Lab & Makovsky & Company. Stanford University.
Available at www.webcredibility.org
Haklay, M., and Tobon, C., 2002. "Usability Engineering and PPGIS: Towards
a Learning-improving Cycle." Presented at the 1st Annual Public Participation
GIS Conference, Rutgers University, New Brunswick, New Jersey, 21st-23rd
July. Available at:
http://www.casa.ucl.ac.uk/muki/pdf/Haklay-Tobon-URISA-PPGIS.pdf
IBM. "User
rights: The customer is always right." IBM Corporation. Retrieved
9 June 2003: http://www-3.ibm.com/ibm/easy/eou_ext.nsf/Publish/12
Lee, Alfred T., 1999. "Web Usability: A Review of the Research," in The SIGCHI
Bulletin, Vol. 31, No. 1, January 1999. Edited by Ayman Mukerji. Minneapolis,
MN. pp. 38-40. Retrieved 9 June 2003: http://www.acm.org/sigchi/bulletin/1999.1/lee.pdf
Microsoft Corporation,
2000. "UI Guidelines vs. Usability Testing." MSDN Library. Retrieved
9 June 2003: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnwui/html/uiguide.asp
Nielsen, Jakob,
1997. "The Use and Misuse of Focus Groups." Alertbox. Retrieved
9 June 2003: http://www.useit.com/papers/focusgroups.html
Nielsen, Jakob,
2003. "Employee Directory Search: Resolving Conflicting Usability
Guidelines." Alertbox. Retrieved 9 June 2003: http://www.useit.com/alertbox/20030224.html
Selvidge, Paula,
1999. "How Long is Too Long to Wait for a Website to Load?"
Usability News, Vol. 1, Issue 2. Retrieved 9 June 2003: http://psychology.wichita.edu/surl/usabilitynews/1s/time_delay.htm
Selvidge, Paula,
2003. "Examining Tolerance for Online Delays." Usability News
Vol. 5, Issue 1. Retrieved 9 June 2003: http://psychology.wichita.edu/surl/usabilitynews/51/delaytime.htm
Slocum, Terry
A., Blok, C., Jiang, B., Koussoulakou, A., Montello, D.R., Fuhrmann, S.,
and Hedley, N.R., 2001. "Cognitive and Usability Issues in Geovisualization."
Cartography and Geographic Information Science, Vol. 28, No. 1, January
2001. Retrieved 9 June 2003: http://www.geovista.psu.edu/sites/icavis/agenda/PDF/SlocumLong.pdf
Usability.gov.
"Methods for Designing Usable Web Sites." Retrieved 9 June 2003:
http://www.usability.gov/methods/data_collection.html