Post-occupancy evaluation: benchmarking for health facility evaluation tools
A shortage of post-occupancy evaluations means that planners and architects do not have access to accurate findings regarding healthcare buildings’ performance. This paper proposes an alternative evaluation method that is robust, informed and easily shared
Adjunct Professor Ian Forbes, University of Technology, Sydney
When people talk about the health and social care system in any country, it is usually with an understanding that this is the most complex and rapidly changing organisational environment one can imagine. Built environments that provide for these services are equally complex. It is also acknowledged that this complexity is enhanced by the multitude of stakeholders who exercise power and preferences over it, concerning not only how services are delivered and by whom, but also what allocation of resources will enable that to happen.1,2 One of the most expensive and therefore problematic aspects of the delivery system is capital investment in buildings, equipment and health facilities of many kinds. It is well recognised that design decisions, when translated into physical facilities that accommodate health services and patient care environments, will need to be evaluated in order to determine if they are fit for purpose.3 It is also well accepted that some form of systematic evaluation is required, both to avoid repeating design mistakes and to confirm alignment with strategic business intent through Building Performance Evaluation.4 Further, as funding for the major public hospitals in developed countries, and in almost all developing countries, comes from government sources, the long-term responsibility for the maintenance and functioning of the assets falls to governments. This concern for longer-term issues has resulted in the development of guidelines, often serving as regulatory controls, with evaluation processes used to feed findings back to those who prepare the guides so that their knowledge is kept current.
There is considerable research and material written about the conduct of health facility evaluations, and specifically about the methods used for Post Occupancy Evaluation (POE).5 It is not my intention here to go over the details of history or methods, although it has been recognised6 that the early evaluation studies undertaken for health facilities were of academic interest and were initially conducted by academic researchers who investigated select issues in institutional settings (eg examination of different ward designs, operating theatre systems, logistics handling approaches, etc).
What was achieved by these approaches? Through the 1970s and 1980s this evaluative research was developed as a systematic methodological process, giving better scope and rigour to facility studies. Initially derived from architectural concerns with social and behavioural issues as opposed to aesthetic ones,7 POE has now become an internationally accepted approach to learning from building experiences. Variables such as task performance, privacy, communication, safety and thermal comfort would all be considered. Evaluations were conducted on site by an individual or by teams. They followed a specified format, which could range from a simple to a complex investigation. Performance was typically measured on three dimensions: technical, functional and behavioural.8
Significantly, the concern was with technical issues3 relating to the capability of the building and its engineering services systems, as well as with functional aspects such as the ability to achieve operational and clinical tasks efficiently and effectively. However, it was the behavioural aspects that drew continued attention.5 These were the psychological and social aspects of user satisfaction, and concerns related to better understanding the general wellbeing of building inhabitants, which had not previously been considered in the more typical areas of evaluation.8
The POE process itself
Today, POE is a well-developed field of scholarship and the process is taken for granted as the generally accepted approach to finding evidence about what should or should not be repeated in building solutions. This fits within the current wisdom surrounding evidence-based design, using quasi-scientific POEs to establish a more valid base for evidence-based design decisions.1 The learning aspects of POE also fit within the current context of Continuous Quality Improvement.9
However, literature reviews show there is no industry standard or standardised methodology for building evaluation.5 The Higher Education Funding Council for England has identified six accepted methods for conducting POEs in its Guide to Post Occupancy Evaluation.10
Unfortunately, POE has been identified as having several major shortfalls. These include an unwillingness to participate on the part of design teams and owners, because POEs might discover a failure that could lead to litigation. Essentially, POEs are not being undertaken in sufficient quantity because of the large cost of carrying them out.3 Early POEs were done as individual research activities and were funded as such. In recent years, although many governments have made POE an essential part of the process of planning new facilities, they do not actually fund it sufficiently. In addition, several problems were found with Scotland's POEs.11 It was difficult to obtain sufficient responses because of the time and participant effort involved in carrying them out, and there was a lack of comparability between old and new facilities, as most patients and many staff had not experienced both. It was also hard to avoid distortion of the results when low-level concerns, such as parking and the availability of magazines, were not distinguished from more significant environmental concerns. Frequently the standardised questionnaires generated responses that were not usable in establishing how, or if, improvements had been made. The survey team was often hampered because the questionnaire format left little opportunity to capture comments, particularly if these did not refer to a specific question or issue.
Other barriers to sustained POE activity noted in the study conducted by the American Federal Facilities Council (FFC) of the National Research Council12 are the fear among organisations and professionals of costly changes arising from POEs, and the lack of in-house expertise to carry them out. Interestingly, architects have not taken ownership in pushing for funding, conducting POEs, or making sure their buildings are evaluated for effectiveness or user satisfaction. Zimring et al13 suggest that, historically, large organisations such as the US court system, Disney, the US General Services Administration and other government bodies in the US and internationally have all carried out extensive POE programmes. However, their research shows that organisational learning, defined as the ability to constantly improve routine activities, especially the inputs to operational processes, was not achieved. Members of project teams, project managers and clients were unaware of these POEs unless a special evaluation had been conducted to address a specific problem they were facing. The implication of these concerns, whether right or wrong, is that evaluations are not being done and, as a consequence, changes that should be made in future designs are not included. Too often anecdotal ideas and poorly assessed information are used to determine what should be included in future design solutions. Decisions are based on the so-called expertise of senior planners or clinical staff. We might call this eminence-based design, not evidence-based design.
Understanding the evaluation methodology required
Several objectives have been described as driving the need for POEs in health facilities. The main concern is whether a building fulfils its design requirements.3,6 Perhaps what is missing in all the rigorous evaluation methodologies and processes developed for POE is continuing discussion around why things have happened. These evaluations can provide a rich picture generated from the many cases reviewed. Dialogue among the participants involved in the review is perhaps the most important aspect of the evaluation. No single, simplistic methodology can answer all aspects of health facility design.
The question we need to address is whether there is an alternative evaluation method that will move design decisions forward and be more informed, yet robust, so that the findings can be implemented quickly and, most importantly, shared with a network of designers and clients.
Benchmarking as a method for design-in-use
Preiser, Rabinowitz and White8 suggested that we need a variety of evaluation approaches reflecting the degree of effort involved. The types of evaluation they described are indicative, investigative and diagnostic, each with a different objective:
Indicative: a very general, short-term evaluation in which the presence, frequency and location of factors that support or impede activities are identified and compared with the expert's knowledge. This is intended to indicate whether further work is required.
Investigative: a longer and larger evaluation, with more extensive surveying and interviews, that includes a literature review and comparisons with similar facilities to achieve a more comprehensive understanding of what has occurred and what can be adjusted.
Diagnostic: a large research activity with multi-phasic studies over longer periods of time. These require a large team of investigators who employ triangulation or multi-level strategies for gathering data on numerous variables; they use basic scientific research designs; and they employ representative samples, which allow the results to be generalised to similar buildings and situations. In essence, they are intended to develop new ideas.
In the review of evaluations undertaken at the University of Technology, Sydney (UTS), we believed it would be possible to develop a more responsive evaluation method to overcome the time and cost issues of the full POE. Several POEs had been commissioned by the NSW Health Department over many years but, as a consequence of the department's reluctance to release findings, none was able to provide learning to the wider health facility design community. They may, however, have influenced changes in the Australasian Health Facility Guidelines (AHFG) issued by the Australian state health departments and generally used for both public and private acute hospitals across Australia.
There was, however, no feedback into the general design knowledge base that would benefit the many private firms engaged in public health facility design. If the only new knowledge is to reside within the AHFG, this is very problematic. A debate has already occurred about how useful the guidelines are beyond being a regulatory control, essentially for cost control, and a useful introduction for inexperienced designers and user groups. There are serious concerns that their rigid imposition limits the possibility for change and innovation in functional and physical solutions. In order to provide faster feedback and some indication of factors that would improve aspects of spatial design, a combination of the indicative and investigative evaluation approaches seemed possible, leaving diagnostic evaluation to more specific research. This approach required the setting to be the focal point, rather than a generalised evaluation tool. An examination of the various evaluation methods shows that they all assume a project-by-project analysis in which the questioning process becomes broad enough to be used for any kind of health facility or department.
Good examples of current tools created to do evaluations in this vein are the British AEDET Evolution (Achieving Excellence Design Evaluation Toolkit)14 and ASPECT (A Staff and Patient Environment Calibration Toolkit).15 The AEDET toolkit enables the user to evaluate a design by posing a series of clear, non-technical statements encompassing three key areas: impact, build quality and functionality. ASPECT is used either as an individual evaluation or in conjunction with AEDET, to evaluate the quality of design of staff and patient environments in healthcare buildings. It delivers a profile that indicates the strengths and weaknesses of a design or an existing building.
These tools cover two very important elements of evaluation: they are simple to use (although experienced users are recommended) and they lead to a discussion of what is found in the evaluation, especially around the scores derived from them. In this way, scores give a measurable value to what would otherwise be arguable and subjective interpretations. However, the tools attempt to cover a very generic set of health facility situations and, although many elements are of value, they do not address the many specific concerns of each department.
Consistent with the desire to focus on these concerns as requested by specialist nursing staff, the research team from the Faculty of Health and the Faculty of Design, Architecture and Building at UTS developed a series of evaluation tools. They were based on the following principles:
1. To focus on the design-in-use of a specific department, rather than use a generic tool
2. To identify whether the important elements of the space considered essential to the operational philosophy were present
3. To identify from literature reviews a benchmarked solution that would address the operational concerns of the members of staff and other users of the specific location
4. To create a simple tool that compared the benchmarked solution with what was there, to identify any deficits or benefits observed in the space
5. That the tool could be used by non-experts for self-assessment
6. That the tool would have a high level of inter-rater reliability after a short training time
7. The use of a scoring system that identified overall scores and sub-scores, to enable discussion as to what might be changed or which aspect could be avoided in future design solutions.
In addition, the results could be shared as case studies for further research and the findings would stand as a hypothesis for others to test or challenge.

The dementia EAT tool
The first tool to be developed was for dementia-specific aged care design solutions. The opportunity to explore this arose from a request by the NSW Health Department for an assessment of, and solution to, problems in small rural acute hospitals, where facilities designed for acute patients at times had 80% of patients presenting with dementia co-morbidity and behavioural issues. Richard Fleming and Ian Forbes16 undertook the study and, based on an international literature review as well as the findings from a pilot study of three rural facilities, prepared a draft tool. The elements included were specific to dementia facility design; the tool required that a facility should:
1. Be safe and secure
2. Be small
3. Be simple, with good visual access
4. Have unnecessary stimulation reduced
5. Have helpful stimuli highlighted
6. Provide for planned wandering
7. Be familiar
8. Provide opportunities for a range of private to communal social interactions
9. Encourage links with the community
10. Be domestic in nature, providing opportunities for engagement in the ordinary tasks of daily living.
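To make the scoring principle (point 7 in the list of tool principles above) concrete, the following is a minimal sketch, in Python, of how a benchmark audit of this kind can yield sub-scores and an overall score. The subscale names are taken from the dementia principles listed here, but the individual items, ratings and equal weighting are invented placeholders for illustration; this is not the published EAT instrument.

```python
# Minimal sketch of benchmark-audit scoring (hypothetical items, not the
# published EAT instrument). Each subscale groups yes/no observations;
# sub-scores are percentages, so facilities of different sizes can be
# compared against the same benchmark.

from dataclasses import dataclass

@dataclass
class Item:
    text: str       # observable statement the rater answers yes/no
    present: bool   # the rater's observation at this facility

# Subscales named after the dementia design principles; the two
# illustrative items per subscale are invented for this sketch.
audit = {
    "Safe and secure": [
        Item("Exits are unobtrusive from resident areas", True),
        Item("Outdoor space is securely enclosed", False),
    ],
    "Good visual access": [
        Item("Toilet is visible from the bed", False),
        Item("Common areas are visible from staff work points", True),
    ],
    "Familiar": [
        Item("Furniture and fittings are domestic in character", True),
        Item("Residents' own possessions are displayed", True),
    ],
}

def sub_score(items):
    """Percentage of benchmark items observed as present."""
    return 100.0 * sum(i.present for i in items) / len(items)

sub_scores = {name: sub_score(items) for name, items in audit.items()}
overall = sum(sub_scores.values()) / len(sub_scores)  # equal weighting assumed

for name, score in sub_scores.items():
    print(f"{name:20s} {score:5.1f}%")
print(f"{'Overall':20s} {overall:5.1f}%")
```

Low sub-scores then become the natural focus for discussion with managers and care staff, which is the pattern described for the PerCen interventions below.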
An opportunity to further develop the tool occurred when it was needed for a major research project undertaken by combined teams from UTS, the University of New South Wales (UNSW), the University of Wollongong (UW) and Sydney University (USyd). The study, called PerCen, deliberately separated built environment changes from staff training in Person Centred Care (PCC) in order to determine the separate effects of these interventions. Psychometric tests were used to determine before-and-after changes in quality of life and quality of care. The study included 500 people living with dementia in 40 residential aged-care facilities that were randomly assigned to groups of 10: care as usual; person-centred changes only; physical changes only; and both changes.
The tool was then examined against other tools historically used internationally for evaluating dementia-specific and aged care facilities, and was found to be valid and reliable.17 The tool was labelled EAT (Environmental Audit Tool) and was specifically compared with a similar tool developed at Stirling University in Scotland. Both tools were used, and compared favourably, in evaluating the 40 facilities in the pre-test round of this study; EAT was used in the post-test and final rounds. Further use of the tool continued in other studies of dementia-specific units and minor modifications were made. Findings in the PerCen study showed the EAT was effective in evaluating the relationship between operations and space when compared with ideal residential care. The scores were used in determining the changes required to meet person-centred environmental principles at the different sites. Using the EAT scores, discussions were held with managers and care staff at each site to examine the dysfunctions identified by the tool and to gain agreement to the planned interventional changes. Because of limited funds in the grant, only minor modifications to the environment at each of the 20 intervention sites were possible.
In regard to the objectives of the evaluation tool, it has proved easy to use, requiring only a few hours of training for a variety of researchers, and has produced highly reliable scores. The results provide an overall score for comparison with other facilities and a series of sub-scores that highlight areas of concern, in order to make changes at individual sites.

The birthing unit BUDSET tool
While the EAT tool was being tested, researchers in the UTS Centre for Midwifery, Child and Family Health in the Faculty of Health believed that the approach taken with dementia care environments would yield similar results for birthing spaces. Their objective was to establish a birthing environment that was supportive yet unobtrusive, since the cues given to birthing women by the configuration of existing spaces were often frightening or uncomfortable. The aim was to develop a specific evaluation tool that would take the philosophy of ideal birthing spaces into account, and that would determine whether hospital birthing units in NSW were currently achieving spatial benchmarks for such facilities, or what changes could achieve them.
The conceptual framework that applied here was the theory of "birth territory", co-developed by UTS Professor Maralyn Foureur.18 Birth territory recognises the physical territory of the birth space, over which jurisdiction or power is claimed for the woman involved, and builds on the work of philosophers including Foucault. Birth territories affect how women feel and respond as embodied beings: safe and loved, or unsafe, fearful and self-protective.19 The resulting tool, called BUDSET (Birthing Unit Design Spatial Evaluation Tool), was developed to respond to these specific issues and to identify what aspects of birthing spaces were needed to support the women and carers involved. A considerable body of literature was reviewed and benchmarked elements of birthing design were drawn from this review.20 Some of the findings showed that key elements of spatial design were lacking and needed to be considered specifically for birthing units.21 These were:
• Many women did not have access to facilities they felt were essential
• Women wanted control of their environment – heat, light and especially who came into the room
• Women did not want to change rooms to give birth or to use a birth pool
• Women birthing in hospital were less likely to have helpful facilities than those birthing at home or in midwife-led birthing centres
• Women with good facilities were more likely to have a natural birth
• The objective was to remove the medicalisation of birthing.
These key principles underpinned the BUDSET, which included provision for each of the elements. A pilot study was undertaken to test the tool with seven facilities, new and old, large and small, in one Area Health Service of Sydney. The early results showed that some of the elements were not strong on inter-rater reliability, as midwives' views did not always match architects' views of adequacy. Changes were made to the tool to clarify the questions, and a PhD student carried out another study using the revised tool, which produced a more successful result. In regard to achieving the principles set for the evaluation tools, it was found that the BUDSET was easy to use, required little training for people unfamiliar with building design, and gave a clear indication of where design-in-use was not matching benchmarks. It produced scores that could be discussed when recommending changes to physical space, and the operational philosophy derived from the literature reviews was able to be accommodated.22 A further study has since been completed, using videos of seven births to show how the spaces are actually being used; these observations show use consistent with the benchmarked objectives covered in the tool.
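As an illustration of the kind of inter-rater check described above, the sketch below computes per-item agreement between two raters (say, a midwife and an architect) across several units, flagging items that fall below a threshold as candidates for rewording. The item labels, ratings and threshold are invented for the example; the BUDSET studies may well have used different reliability statistics.

```python
# Illustrative inter-rater agreement check (hypothetical data). Two
# raters score the same units on the same yes/no items; items with low
# agreement across units are flagged for rewording, mirroring the
# question-clarification step described for the BUDSET pilot.

ITEMS = ["Control of lighting", "Access to a birth pool", "Visual privacy"]

# One row per unit, one yes/no answer (1/0) per item; data is invented.
midwife   = [[1, 0, 1], [1, 1, 0], [0, 0, 1], [1, 1, 1]]
architect = [[1, 1, 1], [1, 1, 0], [1, 0, 1], [1, 0, 1]]

THRESHOLD = 0.75  # minimum acceptable proportion of agreement (assumed)

for i, item in enumerate(ITEMS):
    matches = [m[i] == a[i] for m, a in zip(midwife, architect)]
    agreement = sum(matches) / len(matches)
    note = "" if agreement >= THRESHOLD else "  <- candidate for rewording"
    print(f"{item:25s} agreement {agreement:.2f}{note}")
```

Simple percentage agreement is the crudest such measure; a chance-corrected statistic such as Cohen's kappa would usually be preferred once enough data has been collected.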
Further developments
The next evaluation tools are being developed for mental health units and emergency departments. A collaboration between members of the Faculty of Design, Architecture and Building at UTS, Psychiatric Nursing in the Faculty of Health at UTS, and a PhD student from the Australian Institute of Health Innovation (AIHI) at UNSW is currently undertaking a research project to develop a mental health evaluation tool. The aim of this research is to investigate the relationship between the mental healthcare built environment and safety, thereby furthering an understanding of how the physical habitat may support or hinder therapeutic objectives and the building of interpersonal trust in clinical settings. The main unit of analysis is a 10-year-old, 50-bed mental health unit, which is currently being refurbished (the intervention) to improve the physical, social and symbolic environments of care. The embedded unit of analysis is the staff, and the aim is to understand how the built environment affects their perceived safety climate and propensity to trust patients.
The physical environment in the unit is recognised as deficient in terms of high social density, poor noise control, minimal acoustic privacy, institutional aesthetics, little access to nature and poor functionality. These stress-inducing conditions are probably implicated in the intractably high seclusion rate in the unit's 20-bed observation ward: both the number of patients secluded more than once during an admission and the length of time for which people are secluded (more than four hours) have been above average. From this work, the underlying philosophy required of a benchmarked mental health facility will lead to an evaluation tool that complies with the principles for design-in-use evaluations.

An opportunity to develop a further tool, for emergency departments (EDs), followed a workshop on the design implications of the four-hour turnover rule introduced in NSW. In the case of EDs, designers look for factors that affect the processing and movement of patients. From a facility perspective, the planning should seek to identify the essential physical resources, particularly treatment spaces and a variety of waiting areas. Reviews of the literature show that, regardless of geographic location and the various external demand patterns, there are some key elements involved in benchmarked EDs. These involve the various patient flow models and how patients are moved in peak hours to holding, waiting and treatment areas. Access points and triaging are considered critical, as are the implications for information gathering and for continuous access to avoid repeated patient data collection, using digital information technology, which is now a major part of contemporary EDs. Through further work in this area, a new tool will be useful for quickly assessing the current situation in NSW and the areas where simple or larger changes to spaces will be necessary.
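Where patient flow is the benchmark concern, even a toy queueing model can make the link between treatment-space numbers and the four-hour target concrete. The sketch below is a minimal single-queue simulation with invented arrival and treatment parameters; it illustrates the kind of flow model referred to above, and is not a tool used in the NSW work.

```python
# Toy patient-flow model (illustrative only): patients arrive at random,
# wait for one of `spaces` treatment spaces, are treated and leave.
# We report the share of patients whose total time exceeds four hours.
# Arrival and treatment parameters are invented for the example.

import heapq
import random

def fraction_over_target(spaces, n_patients=10_000, mean_gap_min=12.0,
                         mean_treat_min=60.0, target_min=240.0, seed=1):
    random.seed(seed)
    free_at = [0.0] * spaces          # time each treatment space frees up
    heapq.heapify(free_at)
    t, breaches = 0.0, 0
    for _ in range(n_patients):
        t += random.expovariate(1.0 / mean_gap_min)   # next arrival
        start = max(t, heapq.heappop(free_at))        # wait if all busy
        finish = start + random.expovariate(1.0 / mean_treat_min)
        heapq.heappush(free_at, finish)
        if finish - t > target_min:
            breaches += 1
    return breaches / n_patients

for spaces in (5, 6, 7, 8):
    print(f"{spaces} treatment spaces: "
          f"{fraction_over_target(spaces):.1%} of patients exceed 4 hours")
```

Varying the assumed peak arrival rate then shows how the provision of holding and waiting areas trades off against the number of treatment spaces.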
Conclusion
Although POE has been used for over 50 years and is accepted as the gold standard, we believe that POEs are not being used enough, or effectively. A short-form evaluation tool of the kind developed at UTS can achieve a great deal of what is found with full POEs, by comparing design-in-use assessments against benchmarked information. This approach might normally be shunned as time consuming, requiring each department to be examined and the information needed to benchmark every health unit to be developed. However, we have found that literature reviews abound, and that the underlying philosophy needed to establish best practice for these benchmarks has usually already been developed for the Briefs of Requirements. Various methods of evaluation are needed to achieve a variety of measured outcomes; we offer this as another in the range.
Author
Professor Ian Forbes is managing director of Forbes Associates International Health Facility Planning Consultants, and an adjunct professor at the University of Technology, Sydney, New South Wales.
References
1. Ulrich, RS, Zimring, CM, Zhu, X, Dubose, J, Seo, HB, Choi, YS, Quan, X, Joseph, A. A review of the research literature on evidence-based healthcare design. White Paper Series 5/5, Evidence-Based Design Resources for Healthcare Executives. The Center for Health Design; 2008.
2. Ulrich, RS, Zimring, C. The role of the physical environment in the hospital of the 21st century: A once-in-a-lifetime opportunity. Concord, CA: The Center for Health Design; 2004.
3. Preiser, W, Vischer, J. Assessing Building Performance. Burlington, MA: Elsevier; 2005.
4. Steinke, C, Webster, L, Fontaine, M. Evaluating building performance in healthcare facilities: An organisational perspective. Health Environments Research & Design Journal, Vol 3, No 2; 2010.
5. Zeisel, J. Towards a POE paradigm. In Preiser, W (Ed). Building Evaluation, pp167-180. New York: Plenum; 1989.
6. Zimring, C. Post-occupancy evaluation and implicit theory: An overview. In Preiser, W (Ed). Building Evaluation, pp113-125. New York: Plenum; 1989.
7. Wener, R. Advances in evaluation of the built environment. In Zube, E, Moore, G (Eds). Advances in Environment, Behavior and Design, Vol 2, pp287-313. New York: Plenum; 1989.
8. Preiser, W, Rabinowitz, H, White, E. Post-Occupancy Evaluation. New York: John Wiley & Sons; 1988.
9. Sollecito, WA, Johnson, JK (Eds). McLaughlin and Kaluzny's Continuous Quality Improvement in Health Care (4th Ed). Burlington: Jones and Bartlett Learning; 2013.
10. Barlex, MJ, Blyth, A, Gilby, A. Guide to Post Occupancy Evaluation. London; 2006. Accessed at www.aude.ac.uk/info-centre/goodpractice/AUDE_POE_guide
11. Nicholson, K. Exploring Post Occupancy Evaluation in Healthcare. Architecture and Design Scotland (A+DS), Government of Scotland, Edinburgh; 2011.
12. Learning from Our Buildings: A State-of-the-Practice Summary of Post-Occupancy Evaluation. Federal Facilities Council Technical Report No 145. Washington, DC: National Academy Press; 2001.
13. Zimring, C, Rashid, M, Kampschroer, K. Facility Performance Evaluation (FPE). Paper for Whole Building Design Guide, National Institute of Building Sciences, Washington, DC; 2005.
14. National Health Service. AEDET Evolution: Design evaluation toolkit; 2007. Retrieved 20 March 2013 from www.dh.gov.uk/en/Publicationsandstatistics/Publications/PublicationsPolicyAndGuidance/DH_082089
15. National Health Service. ASPECT: Staff and patient environment calibration toolkit; 2007. Retrieved 20 March 2013 from www.dh.gov.uk/en/Publicationsandstatistics/Publications/PublicationsPolicyAndGuidance/DH_082087
16. Fleming, R, Forbes, I, Bennett, K. Adapting the Ward for People with Dementia. Sydney: NSW Department of Health; 2003.
17. Forbes, I, Fleming, R. Dementia care: Determining an environmental audit tool for dementia-specific research. Design & Health Scientific Review. World Health Design, London; 2009.
18. Fahy, K, Foureur, M, Hastie, C. Birth Territory and Midwifery Guardianship: Theory for Practice, Education and Research. Oxford: Elsevier; 2008.
19. Foureur, M. Creating birth space to enable undisturbed birth. In Fahy, K, Foureur, M, Hastie, C. Birth Territory and Midwifery Guardianship: Theory for Practice, Education and Research, pp57-77. Oxford: Elsevier; 2008.
20. Lepori, B, Foureur, M, Hastie, C. Mindbodyspirit architecture: Creating birth space. In Fahy, K, Foureur, M, Hastie, C. Birth Territory and Midwifery Guardianship: Theory for Practice, Education and Research, pp95-112. Oxford: Elsevier; 2008.
21. Forbes, I, Homer, CSE, Foureur, M, Leap, N. Birthing unit design: Researching new principles. Design & Health Scientific Review 1, 47-53. World Health Design, London; 2008.
22. Foureur, M, et al. The relationship between birth unit design and safe, satisfying birth: Developing a hypothetical model. Midwifery. London: Elsevier; 2010.