Research and Evaluation Process and Challenges

Joyce Ma
Senior Researcher
Toni Dancu
Evaluator

Evaluation of the Outdoor Exploratorium began in 2001, years before the project’s final realization as a set of exhibits at Fort Mason. The Exploratorium has a long history of evaluating visitor exhibit experiences to better understand visitor reactions and incorporate their feedback into exhibit development. But the Outdoor project presented both opportunities and challenges that we had rarely encountered within the more familiar walls of the museum—and each new phase of the project raised different questions to address.

Originally, the Outdoor Exploratorium was conceived as a space adjacent to the museum’s current location at the Palace of Fine Arts. As such, our early evaluation efforts focused on front-end studies designed to identify visitors’ outdoor behaviors and expectations. In addition to using traditional interview methods, we also experimented with other ways of learning about outdoor behavior and specific noticing techniques, including open-ended noticing activities to gauge noticing behaviors and expert-led “noticing tours” (e.g., a mushroom tour led by a naturalist; a writer-led poetry walk) to gain a focused look at visitors’ interests in particular content areas.

Although not all findings from these studies found direct application in the final exhibits, we learned valuable lessons about supporting outdoor noticing. In particular, we identified some of the reasons why visitors spent time noticing outside, including wanting to be in an attractive area, to participate in independent exploration, and to see things they hadn’t noticed before (or familiar things from new perspectives). We also discovered some impediments to exploring and noticing outside, such as limited time, worries about safety (outdoor environments not always being well-bounded or predictable), and self-consciousness about activities that might seem unusual (such as using a magnifying glass to examine dirt). Our findings also helped us redefine project content areas. For example, before our front-end studies, the Outdoor Exploratorium was primarily focused on natural phenomena, but evaluation results led us to broaden the project’s scope to include the built environment as well.

In 2004, as active exhibit prototyping began, the project’s final location was still uncertain. This (and the related fact that the eventual site would be a key factor in defining the exhibits themselves) spurred us to rethink our approach to prototype evaluation. Typically, iterative formative evaluations are used to inform and improve a particular prototype in a specific context, and these findings are often not generalizable. But given our uncertainty about site, we sought to use formative evaluation to study promising techniques to foster noticing that might later find broader application. For example, we looked at different ways of framing phenomena to help visitors notice particular aspects of the outdoors—and found that framing was not always effective. When framing worked, it tended to help people move into a good position to see something in their surroundings, to think about composition, or to focus on only one portion of the larger landscape. Some of the prototypes developed and evaluated during this period became part of the final set of exhibits installed at Fort Mason; some served to inspire the final exhibit collection; and others never led to complete exhibits but did generate ideas for learning about ways of encouraging visitors to notice and think about outdoor phenomena.

It was also during this period that we began to envision the Outdoor Exploratorium as a set of exhibits at one or more distributed sites away from the museum itself. We knew quite a bit about Exploratorium visitors, but considering remote locations required us to reevaluate our audience assumptions. The team thus began asking fundamental questions about the people who might use our exhibits: Who are they? What are they doing there? When are they there? We were particularly inspired by the work of William Whyte, who conducted observational studies in the 1970s of how people use New York City plazas [1], and we conducted our own set of informal observations to learn about the ‘social life’ of the candidate sites our exhibits might eventually occupy.

These observations taught the team two key lessons. First, the demographics of potential visitors would be different from the Exploratorium’s typical audience. For example, outdoor exhibits at new sites would likely be seen by a higher percentage of adults, and by a higher percentage of individual (rather than family group) visitors. Furthermore, visitors to outdoor exhibits away from the museum would likely have a wider variety of reasons to be outside, and thus to be pursuing a broader range of activities. Potential visitors could, for example, include people traveling to and from work, eating lunch during a break, sunbathing, sitting and chatting with others, or simply taking in the view. Some people might pass through the area every weekday; for others, the site could be a stop during their only San Francisco visit.

By 2006, it was clear that the main body of Outdoor Exploratorium exhibits would be installed at Fort Mason, allowing us to focus our primary development and evaluation work at that location. Accordingly, we refocused formative evaluation efforts on improving individual exhibit experiences. Initial formative evaluations at Fort Mason employed a rapid prototyping and evaluation technique [2] best suited for decision-making in early stages of exhibit development. This collaborative method allows developers and evaluators to address each visitor’s difficulties before the next visitor arrives, making rapid iterative changes to the prototype design. This type of formative evaluation helped the team identify critical issues with exhibit concepts and challenged our assumptions about text and label design.

The team began final exhibit installations at Fort Mason in the fall of 2008. In anticipation of a February 2009 completion date, we asked Beverly Serrell, principal at Serrell and Associates, to conduct a summative study beginning in October 2008. Ms. Serrell brought a wealth of expertise in exhibit evaluation, including conducting summative evaluations for The New York Hall of Science’s Science in the City, which placed museum exhibits in the streets of New York City. However, due to delays in exhibit approval and installation, only seven of the planned twenty exhibits were ready for summative evaluation by the time Serrell was to begin data collection. In the end, we decided to proceed with the evaluation, but also asked that the process identify areas for remediation. This summative evaluation of the first seven exhibits, therefore, (a) provided a preliminary understanding of how well a subset of the collection met the project’s visitor goals, and (b) informed final development of these exhibits as well as the remainder of the collection.

In terms of achieving key project goals, the first summative evaluation found that “[a]ll seven of the exhibits evaluated in this study succeeded to one degree or another at encouraging noticing and promoting noticing skills with visitors... Among the intended goals, noticing skills were the strongest outcome with the participants in this study. Enabling noticing skills was an unusual and exciting experience for many people. This goal is very suited to helping visitors feel competent and interested in outdoor natural phenomena—a worthy visitor outcome for many science museums. The OE exhibits can serve as good models for what is possible.” On the other hand, however, “[t]here was less evidence for the other two goals of helping visitors ‘explore complex systems and interactions at play in an outdoor environment’ and ‘come to a deeper understanding of the phenomena by applying scientific concepts and principles to the outdoor environment’.”

Serrell strongly recommended that the team focus remediation efforts on developing clearer information architecture and wayfinding systems and making sure that formative evaluation was conducted on exhibits not yet installed. The team took these evaluation recommendations to heart. During this period, formative evaluation focused on identifying potential visitor difficulties with using, accessing, or understanding each exhibit and iteratively improving those exhibit experiences. The team also returned to the previous set of exhibits to assess the effectiveness of remediation steps resulting from the first summative evaluation.

Building on these findings, Wendy Meluch of Visitor Studies Services (VSS) conducted a second evaluation in late spring of 2009. By this time, fourteen exhibits were installed at Fort Mason, and a new information architecture and wayfinding system was in place. VSS interviewed visitors who used most of these exhibits as well as visitors cued to use a cluster of three exhibits in close proximity. Evaluators also unobtrusively observed visitors at several exhibits. Overall, the second summative evaluation found the Outdoor Exploratorium “fun and engaging for users.” However, some goals were met more clearly than others. More specifically, VSS found that approximately 65% of those interviewed described the noticing skills they used at exhibits or as a result of their exhibit experiences. (This is consistent with other observations suggesting that visitors were purposefully engaged with looking and comparing.) A smaller percentage of visitors interviewed (42%) discussed how the exhibits encouraged them to notice their surroundings. More than 80% either described or articulated an awareness of the complex systems underlying exhibits, but only 32% were able to describe in some detail the relationships they noticed as a result of using the exhibits. Finally, almost half of those interviewed (48%) expressed an appreciation or understanding of the outdoor world as a result of using the exhibits; a smaller group (22%) mentioned science as a way of studying and understanding the world. In addition, although this summative evaluation was not intended to be a remediation study, visitor interviews did suggest that people wanted additional help finding the exhibits dispersed at Fort Mason, and that the collection could benefit from additional attention to the current wayfinding system.

At the Exploratorium, visitor evaluation has long been viewed as an integral part of the exhibit development process. However, evaluation was especially critical to addressing this project’s many unknowns; in particular, evaluation was key to learning how to foster and support visitors’ outdoor noticing skills, to characterizing new audiences, and to working with external partners in developing exhibits at a remote site. Addressing these challenges required us to experiment with methods that we rarely used inside the museum. In addition to informing this project, then, we anticipate that some of the evaluation approaches used for the Outdoor Exploratorium will find application in future projects—indoors and out, at the Exploratorium and elsewhere.


Notes

[1] Whyte, W. H. (1980). The Social Life of Small Urban Spaces. Washington, D.C.: Conservation Foundation.

[2] Medlock, M. C., Wixon, D., McGee, M., & Welsh, D. (2005). The Rapid Iterative Test and Evaluation Method: Better products in less time. In Bias, R. G. & Mayhew, D. (Eds.), Cost-Justifying Usability: An Update for an Internet Age (2nd ed.). San Francisco, CA: Morgan Kaufmann.


This project is supported by the National Science Foundation under Grant No. ESI-0104478. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the National Science Foundation.