
What Are High Reliability Organizations In Healthcare?

High-reliability organizations (HROs) in healthcare are defined as those that operate in complex, high-hazard situations for extended periods while managing to avoid serious failures. These organizations continually evolve their operations to maintain this high standard, and technology is an essential part of that.

What does a high reliability organization do?

A high reliability organization (HRO) is defined as an organization that has maintained high levels of safety, quality, and efficiency over an extended period. The concept of high reliability is growing in health care, due to the complexity of operations and the risk of significant consequences when failures occur. However, the five principles of high reliability go beyond standardization:

  • preoccupation with failure
  • reluctance to simplify
  • sensitivity to operations
  • commitment to resilience
  • deference to expertise

High reliability organizations cultivate resilience by relentlessly prioritizing safety over other performance pressures. Becoming a high reliability organization will undoubtedly improve safety within your organization and improve all other areas of performance as well.

Which are examples of high reliability organizations?

Defining HROs – Researchers at Berkeley set out to create a definition of high-reliability organizations. They did extensive research on United States nuclear aircraft carriers, the Federal Aviation Administration’s Air Traffic Control system, and nuclear power operations at Pacific Gas and Electric’s Diablo Canyon reactor. Their work identified the following shared characteristics:

  • Hypercomplexity
  • Tight coupling with interdependence across units and levels
  • Extreme hierarchical differentiation
  • Large numbers of decision makers in complex communication networks
  • Redundancy in control and information systems
  • A degree of accountability that does not exist in most organizations
  • Frequent and immediate feedback about decisions
  • Compressed time factors, with major activities measured in seconds
  • More than one critical outcome that must happen simultaneously

While many organizations have some of these traits, HROs display them all at the same time. The types of organizations that can be considered HROs are diverse, but there are remarkable similarities. Their technologies are risky, and the potential for error is inherent.

What are the 5 principles of high reliability organization?

Hospital leaders who choose to establish a high reliability organization (HRO) embark on a mission to commit to zero harm and realize true, purposeful work in health care. Applying high reliability concepts in an organization does not require a huge campaign or a major resource investment. It begins with leaders at all levels thinking about how the care they provide could be safer. A commitment to support and sustain a system of high reliability principles and safety should be an organization’s overarching strategy.

  • Culture is the foundation for vision and strategy.
  • At the core of an HRO, there are five key principles, which are essential for any improvement initiative to succeed: deference to expertise, reluctance to simplify, sensitivity to operations, commitment to resilience and preoccupation with failure.
  • This online toolkit is designed to guide the process of transforming an organization into one where safe, timely, effective, efficient, equitable and patient-centered care is received by all patients.

Each HRO principle and technique is described, along with practical application strategies and resources to guide organizations on their HRO journey.

HRO Principle: Deference to Expertise

When confronted by a new risk, it is essential to have mechanisms in place to identify the individuals with the greatest expertise relevant to managing the situation. A high reliability culture requires staff at every level to be comfortable sharing information and concerns with others, and to be commended when they do.

Anyone can ask questions without looking unwise. Anyone can ask for feedback without looking incompetent. Anyone can be respectfully critical without appearing negative. Anyone can suggest innovative ideas without being perceived as disruptive.

Achieving transparent communication requires a flat hierarchy and a strong learning environment that supports a climate in which people feel they can comfortably make suggestions. Effective teams develop communication expectations that lead to shared understanding, that anticipate needs and problems, and that use agreed-upon methods to manage situations – including those that involve conflict.

HRO Principle: Reluctance to Simplify

While simplifying work processes is highly desirable, it is risky to oversimplify explanations of what has happened or what might happen in the future. Being able to identify the often-subtle differences among safety risks may make the difference between early and late recognition.

  1. Simple processes are good, but simplistic explanations for why things work or fail are risky.
  2. Avoiding overly simple explanations of failure is essential to understand the true reasons patients are placed at risk.
  3. Health care cultures that are highly reliable understand that their systems may fail in ways that have never happened before and that they cannot begin to identify all the ways in which their systems could fail in the future.

One component of this principle involves using standard improvement tools to enhance work processes and patient outcomes. HROs build diverse teams and use the experiences of team members who understand the complex nature of their field to continually refine their processes.

Sometimes improvements are initiated in response to a defect; however, HROs focus on preventing problems before they arise by deeply understanding the processes of care and operations. Once defects are identified, a systematic approach enables teams to redesign processes and achieve outcomes that matter to patients, families and staff.

HRO Principle: Sensitivity to Operations

Keeping situational awareness is important for staff at all levels, because it is the only way anomalies, potential errors and actual errors can quickly be identified and addressed. Maintaining constant awareness by leaders and staff of the care delivery process is essential.

  • This awareness is key to recognizing risks and working to prevent them.
  • Sensitivity to operations and continuous learning entails the proactive and real-time identification and prevention of defects and harm.
  • Organizations need to be aware of errors occurring and correct them before they reach the patient.

More time and focus should be placed on being proactive rather than reactive. Nurturing a more reliable culture requires continuous feedback loops in the form of data and insights to prompt action, learning and outcomes. HROs exist in complex environments that depend on multi-team systems that must coordinate for safety.

HRO Principle: Commitment to Resilience

Despite the best efforts and past safety successes, health care errors will occur. Organizations with a highly reliable culture quickly contain errors, creating the ability to function despite setbacks. Leaders and staff should be trained to perform quick situational assessments and must be prepared to respond when system failures do occur.

  1. Resiliency in hospitals stems from accountable staff.
  2. HROs hold people accountable for their actions, but not for flaws in processes or systems.
  3. Individuals are responsible for embodying organizational values and, in return, the organization is responsible for treating individuals fairly and justly when things go wrong.

A commitment to resiliency involves constructing systems in a reliable fashion that take human factors into account. They make it hard to do the wrong thing and easy to do the right thing. Four components for making systems and processes more reliable over time include the following:

  • Standardize – Design processes so that people do the same thing the same way every time. Standardization makes it easier to train people on the processes, and it becomes apparent if the processes fail, and where they fail, enabling the organization to better target improvement.
  • Simplify – The more complex something is, the less likely it is to be successful. Staff may avoid following processes that are too difficult or time-consuming, which presents opportunities for mistakes. Simplified processes make it easy for people to do the right thing.
  • Reduce autonomy – Health care professionals historically have been autonomous, making decisions based on personal preference or an individualized belief. However, this can result in care variation and less consistent outcomes. To achieve greater reliability, organizations must set the expectation that care delivery follows evidence-based best practices, unless contraindicated for specific patients, and the contraindications are then documented.
  • Highlight deviation from practice – Clinicians sometimes have good reasons for departing from standardized processes. Smart health care organizations create environments in which clinicians can apply their expertise intelligently and deviate from protocols when necessary, but also relentlessly capture the deviations for analysis. Once analyzed, the new insights can lead to educating clinicians or altering the protocol. Both result in greater reliability.

HRO Principle: Preoccupation with Failure

Health care cultures that are highly reliable start with recognizing that harm or error can occur at any time, regardless of how long it has been since the last event. New safety risks may be developing, and unintended consequences can arise when changes are made.

Even when celebrating improved quality, reliable cultures are always vigilant and look for what could have gone wrong in order to learn from it. Near-misses are viewed as evidence of systems that should be improved to reduce potential harm to patients, and as opportunities to better understand what went wrong in earlier stages so it can be prevented in the future.

Transparently and openly discussing errors and near-misses supports the environment of HROs. Care should be taken to ensure these discussions remain systematic in nature and not devolve into finger-pointing. If staff feel they are being targeted, they will be reluctant to report future near-misses.

  • Operational transparency exists when leaders, staff, patients and their families, organizations, and the community are able to visibly see the activities involved in the learning process.
  • Having the courage to display work openly is a component of being an HRO.
  • Transparency is the key to changing culture.

HROs thrive on transparency and managing the unexpected. They are observant and focused on predicting and eliminating catastrophes rather than reacting to them.

What does reliability mean in healthcare?

An Introduction to Measuring Reliability

Measuring reliability can improve the quality and value of our health care systems and quality improvement projects. Reliability measures how consistently health care systems or processes perform, in terms of quality and safety, over a required period of time.

  • A highly reliable system has a lower risk of errors and process failures that can cause patients harm.
  • Routine anesthesia, for example, is considered very reliable.
  • It’s associated with very few errors and process failures, and there are seldom deaths resulting from those errors or failures.
  • Reliability matters in population health initiatives too.

For example, a reliable developmental screening system would result in a consistent process for providing developmental screenings to young children and then, when necessary, referring them to interventions. Ensuring reliable systems, whether at the hospital, state, or community level, requires measuring reliability.

Evaluating current processes

In order to measure reliability, teams must first observe and evaluate the system processes. In a developmental screening system, one process may be training health care professionals to conduct screenings during well-visits and ensuring those screenings align with the American Academy of Pediatrics (AAP) developmental milestones.

Reliability can be assessed across the Institute of Medicine’s (IOM) six dimensions of quality care:

  • Safe: Patients should not be harmed by the care that is intended to help them.
  • Effective: Care should be based on scientific knowledge and offered to all who could benefit, and not to those not likely to benefit.
  • Patient-Centered: Care should be respectful of and responsive to individual patient preferences, needs, and values.
  • Timely: Waits and sometimes-harmful delays in care should be reduced both for those who receive care and those who give care.
  • Efficient: Care should be given without wasting equipment, supplies, ideas, and energy.
  • Equitable: Care should not vary in quality because of personal characteristics such as gender, ethnicity, geographic location, and socio-economic status.

Focusing on processes rather than people is important, says NICHQ Quality Improvement Advisor Jane Taylor, EdD, MBA, MHA: “By focusing on processes, we avoid assigning blame. We shouldn’t be looking for who’s at fault for a failure because, from the patient’s perspective, it doesn’t matter who’s at fault.

All that matters is what happened or didn’t happen. We must look at systems and process design and their reliability.”

How is reliability measured?

The effectiveness of these processes and the occurrence of failures are the key to measuring the reliability of your system. One way of measuring reliability involves comparing the number of actions that achieved the intended results to the total number of actions taken.

One example of a failure may be a roadblock in your processes that prevents systems from achieving their intended outcomes. Returning to developmental screenings, you might discover that the current practices for screening children do not follow AAP recommendations on how often screening should take place.

If the screenings are not done, then the referral isn’t made. The result of fewer screenings and referrals than recommended is that children miss opportunities for early interventions and optimal outcomes. Every missed screening would be considered a failure. In the developmental screening example, you would track how many screenings and referrals were made.

To measure reliability, you would compare the number of infants who received a screening against the total number of infants due for a screening. This comparison gives you your failure rate, and the higher the failure rate, the less reliable your system.

For example, if 80% of infants are receiving the screening, then a failure occurs 20 out of 100 times, and the process is considered unreliable and chaotic. Most quality improvement projects therefore aim to operate above 80% reliability. If your system is only 80% reliable or less (referred to as “chaotic”), it’s important to make adjustments, especially if the process has catastrophic consequences such as death or serious harm.
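As a rough numerical sketch of the calculation just described, the snippet below computes reliability and failure rates from hypothetical screening counts (the function name and the numbers are illustrative, not taken from the article):

    def reliability_rate(successes: int, opportunities: int) -> float:
        """Share of opportunities in which the intended action actually occurred."""
        if opportunities == 0:
            raise ValueError("no opportunities recorded")
        return successes / opportunities

    # Hypothetical month of data: 100 infants due for a developmental screening,
    # 80 of whom actually received one.
    screened = 80
    due = 100

    reliability = reliability_rate(screened, due)  # 0.80 -> 80% reliable
    failure_rate = 1 - reliability                 # 0.20 -> a failure 20 times out of 100

    print(f"Reliability: {reliability:.0%}, failure rate: {failure_rate:.0%}")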

Steps to increase reliability

To increase reliability, it’s helpful to backtrack by identifying where a failure occurred, then analyzing each process and outcome that led to that failure. The first step is to collect data over time and display it in a run or control chart to understand the process or system performance, signaling when a system moves from stability to instability or vice versa.

  • During this process, it’s important to pay attention to human factors and poor system and process design.
  • The IOM dimensions may be useful in understanding the type of failure, such as a failure in safety, timeliness, etc.
  • Then the second step is to prevent future failures with tools like standard work, checklists, prompts, or memory aids.

Develop a way to identify failures in real time and use error-proofing strategies to mitigate harm from the failure. The third step is for critical processes that cause serious harm or injury. It requires redesigning the process using human factors engineering to build robust systems and processes that rarely fail.
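To illustrate the first step described above (plotting performance over time on a run or control chart), here is a minimal sketch that computes conventional three-sigma limits for a proportion, or p-chart. The monthly counts are invented, and the choice of a p-chart is an assumption; the article does not prescribe a particular chart type:

    import math

    # Hypothetical monthly data: infants screened / infants due, per month.
    screened = [78, 82, 75, 88, 80, 69]
    due = [100, 100, 95, 105, 100, 98]

    proportions = [s / n for s, n in zip(screened, due)]
    p_bar = sum(screened) / sum(due)   # center line: overall proportion screened
    n_avg = sum(due) / len(due)        # average subgroup size

    sigma = math.sqrt(p_bar * (1 - p_bar) / n_avg)
    ucl = min(1.0, p_bar + 3 * sigma)  # upper control limit
    lcl = max(0.0, p_bar - 3 * sigma)  # lower control limit

    for month, p in enumerate(proportions, start=1):
        flag = "  <-- outside limits" if not (lcl <= p <= ucl) else ""
        print(f"Month {month}: {p:.1%} (CL {p_bar:.1%}, LCL {lcl:.1%}, UCL {ucl:.1%}){flag}")

Points that fall outside the limits, or sustained runs on one side of the center line, signal that the process has shifted and is worth investigating.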

You may find you need a project team to support improving reliability. In doing so, you can begin by identifying causes of failures, developing options to improve these processes, testing the ideas, and tracking data to determine whether your changes are improving reliability. This strategy will help you create stronger processes, produce fewer failures, and create lasting change.


What are the 3 pillars of high reliability organizations?

2002 — VHA’s High Reliability Organization (HRO) Initiative: Identifying Strategic and Tactical Evaluation Goals – Lead/Presenter: Laura Damschroder, COIN – Ann Arbor. Objectives: The Veterans Health Administration (VHA) has launched a system-wide initiative to achieve High Reliability Organization (HRO) status throughout all levels of the VHA.

An HRO Summit was convened in February to kick off the initiative with executive directors from all 18 VISNs and directors from 18 medical centers, one within each VISN. Interest is so strong that another 18 facilities will be added to this forward vanguard, ready to begin their journey to HRO.

The HRO framework has three Pillars: (1) leadership commitment, (2) culture of safety, and (3) process improvement. It is guided by five Principles – (1) Sensitivity to Operations, (2) Preoccupation with Failure, (3) Reluctance to Simplify, (4) Commitment to Resilience, and (5) Deference to Expertise – and by seven Values: (1) It’s about the Veteran, (2) Support a Safety Culture, (3) Commit to Zero Harm, (4) Learn, Inspire, and Improve, (5) Duty to Speak Up, (6) Respect for People, and (7) Clear Communications.

The objective of this partnered forum is to convene operations partners and research experts to collaboratively develop strategic evaluation goals and tactical steps for how to achieve those goals. The ultimate aim for this forum is to map a path forward in a way that helps operations partners and researchers work together to achieve system impacts that will be greater than either group working alone.

Methods: This forum will be highly interactive, using brainstorming and problem-solving approaches commonly used in co-design (refer to https://gamestorming.com/) of innovations and processes. Laura Damschroder will be the lead facilitator and will engage participants in Affinity Mapping (aka “KJ Technique”) and similar exercises that are designed to generate ideas for how to address each of three key topics listed below.

  1. First, participants will engage in brainstorming with the goal of generating as many ideas as possible.
  2. Participants will then cluster ideas into categories and agree on labels for each category.
  3. They will then prioritize these categories based on key criteria, e.g., importance and feasibility.
  4. The session will be designed to move from generating ideas to articulating high-level prioritized roadmaps for addressing knowledge gaps and helping VHA achieve its HRO goals.

This process was used successfully in a recently convened State of the Art Conference (SOTA) related to Embedded Research with leaders from health systems and research entities in the U.S. The forum will be structured to address the following topics. First, results from a rapid review completed by Portland VA’s Evidence Synthesis Program Coordinating Center (ESP CC; led by Ms. Veazie) will be presented. The goals of this rapid review were to synthesize peer-reviewed literature on HRO frameworks, metrics, and implementation effects. Dr. Cox, Dr. Watts, Dr. Gunnar, and Mr. Mannozzi will each describe the VHA’s current state with respect to its HRO operational framework and performance metrics, and define HRO impact goals.

Participants will be engaged to identify high-priority knowledge gaps to address within each of three areas: 1) strategic frameworks to guide implementation; 2) metrics to assess progress toward system goals; 3) tracking impact of HRO on the system and for Veterans.

  • The last half-hour of the session will be dedicated to developing a summary to report back to HSR&D.
  • Implications: The target audience includes additional operational partners interested in supporting HRO, along with researchers with expertise and/or interest in organizational development, performance measurement, and/or maximizing quality and safety of health care delivery for Veterans within an HRO framework.

Impacts: Products from this forum will be designed to guide researchers in helping operational partners assess progress toward becoming an HRO. Outputs will include a summary of how research can directly help our operations partners achieve their goals, including impacts on national policies as well as Veteran and organizational impacts.

What are the 3 pillars of high reliability companies?

3 Essential Domains of High Reliability Organizations – High Reliability Organizations (HROs) achieve such a status through persistent and detailed efforts to improve outcomes, even seeking “perfect reliability.” But how do you get there from where you are? Chassin and Loeb, writing about healthcare, have summarized the requirements into three broad domains: leadership, process, and culture.

What are the characteristics of a high reliability organization?

Here, he expands on the five traits of high reliability organizations: sensitivity to operations, reluctance to oversimplify the reasons for problems, preoccupation with failure, deference to expertise and resilience.

What does highly reliable mean?

The quality of being able to be trusted or believed because of working or behaving well. (Definition of “high” and “reliability” from the Cambridge English Dictionary © Cambridge University Press)

What is the high reliability organization perspective?

A high-reliability organization (HRO) is an organization that has succeeded in avoiding catastrophes despite a high level of risk and complexity.

What is the difference between reliability and validity in healthcare?

Abstract – Reliability and validity are among the most important and fundamental domains in the assessment of any measuring methodology for data-collection in a good research. Validity is about what an instrument measures and how well it does so, whereas reliability concerns the truthfulness in the data obtained and the degree to which any measuring tool controls random error.

  • The current narrative review was planned to discuss the importance of reliability and validity of data-collection or measurement techniques used in research.
  • It describes and explores comprehensively the reliability and validity of research instruments and also discusses different forms of reliability and validity with concise examples.

An attempt has been made to give a brief literature review regarding the significance of reliability and validity in medical sciences. Keywords: Validity, Reliability, Medical research, Methodology, Assessment, Research tools.

Why it is important for high reliability organizations to be mindful?

Managing the Unexpected through Mindfulness – Weick, as a social psychologist, initially studied the effect of on performance. He subsequently studied the flow of action following surprise, enactment, sense making of the unexpected, and mindfulness. As a social psychologist he contributed the concept of collective mindfulness and collective enactment.

Managing the Unexpected, then, views HRO through the lens of the effect people have on other people’s behaviors. It is not simply mindfulness or enactment they describe, but collective mindfulness and collective enactment. Mindfulness is a mental orientation that continually evaluates the environment, as opposed to mindlessness, where a simple assessment leads to choosing a plan that is continued until the plan runs its course.

Mindfulness tracks small failures, resists oversimplification, remains sensitive to the operations in practice, maintains the capability for resilience, and takes advantage of changes in who has expertise. Enactment happens when the individual engages the circumstances and environment, creating a new situation and environment.

  • This can be a constraining action if it limits the ability of the person to think, or it can expand the possibilities of solutions.
  • Enacting involves shaping the world, which can be a self-fulfilling prophecy; by engaging the problem you change the problem and your perception of the problem. Weick and Sutcliffe studied diverse organizations that must maintain structure and function in uncertainty, where the potential for error and disaster can lead to catastrophe.

They found that not only do HROs have a unique structure but that HROs think and act differently from other organizations. HROs use mindful organizing for the unexpected as well as the expected. Expectations, as in repetition, can develop into blind spots where unexpected events can develop and become unmanageable.

  • Mindful organizing helps the organization maintain resilience during such events through anticipation and containment. Anticipation is more than sensing early events, as it also includes the efforts to stop the development of undesirable events.
  • Because we cannot anticipate all events, we have containment practices for those unanticipated or unexpected events that occur.

Containment directs activities toward events after an unexpected event has occurred while anticipation directs activities toward events before the unexpected event can occur.

Anticipation has three elements:

  1. Preoccupation with failure: To avoid failure we must look for it and be sensitive to early signs of failure.
  2. Reluctance to simplify: Labels and clichés can stop one from looking further into the events.
  3. Sensitivity to operations: Systems are not static and linear but rather dynamic and nonlinear in nature. As a result it becomes difficult to know how one area of the organization’s operations will act compared to another part.

Containment has two elements:

  4. Commitment to resilience: The organization must maintain function during high-demand events. Resilience has three components: (i) absorb strain and preserve function despite adversity; (ii) maintain the ability to return to service from untoward events; and (iii) learn and grow from previous episodes.
  5. Deference to expertise: This includes deference downward to lower-ranking members of the organization.

What are reliability examples?

Reliability and Validity

EXPLORING RELIABILITY IN ACADEMIC ASSESSMENT

Written by Colin Phelan and Julie Wren, Graduate Assistants, UNI Office of Academic Assessment (2005-06)

Reliability is the degree to which an assessment tool produces stable and consistent results.

Types of Reliability

Test-retest reliability is a measure of reliability obtained by administering the same test twice over a period of time to a group of individuals. The scores from Time 1 and Time 2 can then be correlated in order to evaluate the test for stability over time.

Example: A test designed to assess student learning in psychology could be given to a group of students twice, with the second administration perhaps coming a week after the first. The obtained correlation coefficient would indicate the stability of the scores.
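A minimal sketch of that scoring step follows; the student scores are invented, and the Pearson correlation is written out by hand so the example runs without external libraries:

    import math

    def pearson(x, y):
        """Pearson correlation between two equal-length lists of scores."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    # Hypothetical scores for the same ten students, one week apart.
    time1 = [72, 85, 90, 64, 78, 88, 95, 70, 82, 76]
    time2 = [75, 83, 92, 66, 74, 90, 93, 72, 80, 79]

    print(f"Test-retest reliability: r = {pearson(time1, time2):.2f}")

A coefficient near 1 indicates that students kept roughly the same relative standing across the two administrations, i.e., the scores are stable over time.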

Parallel forms reliability is a measure of reliability obtained by administering different versions of an assessment tool (both versions must contain items that probe the same construct, skill, knowledge base, etc.) to the same group of individuals. The scores from the two versions can then be correlated in order to evaluate the consistency of results across alternate versions.

Example: If you wanted to evaluate the reliability of a critical thinking assessment, you might create a large set of items that all pertain to critical thinking and then randomly split the questions up into two sets, which would represent the parallel forms.

Inter-rater reliability is a measure of reliability used to assess the degree to which different judges or raters agree in their assessment decisions. Inter-rater reliability is useful because human observers will not necessarily interpret answers the same way; raters may disagree as to how well certain responses or material demonstrate knowledge of the construct or skill being assessed.

Example: Inter-rater reliability might be employed when different judges are evaluating the degree to which art portfolios meet certain standards. Inter-rater reliability is especially useful when judgments can be considered relatively subjective. Thus, the use of this type of reliability would probably be more likely when evaluating artwork as opposed to math problems.
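The passage does not name a specific statistic, but one widely used measure of chance-corrected agreement between two raters is Cohen's kappa; the sketch below uses invented portfolio ratings:

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two raters labeling the same items."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (observed - expected) / (1 - expected)

    # Two judges rating ten art portfolios against a standard.
    judge1 = ["meets", "meets", "not", "meets", "not", "meets", "meets", "not", "meets", "not"]
    judge2 = ["meets", "not", "not", "meets", "not", "meets", "meets", "meets", "meets", "not"]

    print(f"Inter-rater agreement (Cohen's kappa): {cohens_kappa(judge1, judge2):.2f}")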

Internal consistency reliability is a measure of reliability used to evaluate the degree to which different test items that probe the same construct produce similar results.

Average inter-item correlation is a subtype of internal consistency reliability. It is obtained by taking all of the items on a test that probe the same construct (e.g., reading comprehension), determining the correlation coefficient for each pair of items, and finally taking the average of all of these correlation coefficients. This final step yields the average inter-item correlation.

Split-half reliability is another subtype of internal consistency reliability. The process of obtaining split-half reliability is begun by “splitting in half” all items of a test that are intended to probe the same area of knowledge (e.g., World War II) in order to form two “sets” of items. The entire test is administered to a group of individuals, the total score for each “set” is computed, and finally the split-half reliability is obtained by determining the correlation between the two total “set” scores.
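Here is a minimal sketch of that split-half procedure with invented right/wrong item scores; the final Spearman-Brown step is a common adjustment for full test length and goes beyond what the passage itself describes:

    import math

    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    # Hypothetical right/wrong (1/0) scores: five students on six items
    # that all probe the same area of knowledge.
    items = [
        [1, 1, 0, 1, 1, 0],
        [1, 0, 0, 1, 0, 0],
        [1, 1, 1, 1, 1, 1],
        [0, 0, 0, 1, 0, 0],
        [1, 1, 1, 0, 1, 1],
    ]

    # "Split in half": odd-numbered items form one set, even-numbered items the other.
    set_a = [sum(row[0::2]) for row in items]  # totals on items 1, 3, 5
    set_b = [sum(row[1::2]) for row in items]  # totals on items 2, 4, 6

    r_half = pearson(set_a, set_b)
    r_full = 2 * r_half / (1 + r_half)  # optional Spearman-Brown adjustment

    print(f"Split-half correlation: {r_half:.2f}; Spearman-Brown adjusted: {r_full:.2f}")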

Validity refers to how well a test measures what it is purported to measure. Why is it necessary? While reliability is necessary, it alone is not sufficient. For a test to be reliable, it also needs to be valid. For example, if your scale is off by 5 lbs, it reads your weight every day with an excess of 5 lbs; the scale reports your weight consistently (it is reliable), but it does not report your true weight (it is not valid).

What is an essential component of a high reliability organization?

TEAMS AND HROs – The characteristics of HROs dictate that teamwork is an essential component of such organizations. HROs will not achieve high reliability unless their members are able to effectively and efficiently coordinate their activities. In the previous section, we have spent considerable time clarifying what we mean by “teams” and “teamwork.” In this section we define the concept of high reliability, review the key characteristics of HROs, and demonstrate why teamwork is so critical in such organizations.

  • HROs are defined by their potential for causing failures that lead to catastrophic consequences.
  • If the potential is high (thousands of dramatic failures), but the actual number of failures is low, the organization is an HRO ( Roberts 1990a ).
  • For example, a nuclear power plant failure could result in horrific consequences for its surrounding community, although such failures are extremely rare.

The same can be said for many U.S. hospitals—there are thousands of opportunities for major accidents every day. Although the IOM estimated that 98,000 preventable deaths occur per year, the actual occurrence of medical error resulting in death is extremely low (Kohn et al. 1999).

  1. HROs are those organizations that function in hazardous, fast-paced, and highly complex technological systems essentially error-free for long periods of time ( Roberts 1990a, b ).
  2. Roberts and Rousseau (1989) identified eight characteristics of HROs: (1) hypercomplexity, (2) tightly coupled, (3) extreme hierarchical differentiation, (4) many decision makers working in complex communication networks, (5) high degree of accountability, (6) frequent, immediate feedback regarding decisions, (7) compressed time factors, and (8) synchronized outcomes ( Roberts and Rousseau 1989 ).

Below we review each of these characteristics, demonstrate how teamwork is an essential component of effective performance in such organizations, and provide a health care example, where appropriate. Hypercomplexity is defined as an extreme variety of components, systems, and levels, each having their own standard procedures, training routines, and command hierarchy (Roberts and Rousseau 1989).

Based on its definition alone, successful performance in hypercomplex environments relies upon multiteam systems, and teamwork is an essential component of such environments. For example, Roberts and Rousseau describe aircraft carrier operations as indicative of hypercomplexity. Pilots, air traffic controllers, dispatchers, ground crews, and many others must work collectively to launch and recover aircraft.

These interdependent teams (e.g., air traffic control team, aircrews, maintenance teams, etc.), must coordinate their activities and efficiently monitor each other’s performance. Similarly, the delivery of health care occurs in a hypercomplex environment that is dependent on multiteam systems.

  • Even though health care workers have historically operated in distinct silos, have been trained in separate professions, and possess distinct expertise, these individuals must coordinate to deliver safe care.
  • At the most basic level, physicians must accurately communicate treatment information to the nurse based, in part, on information the nurse presents to the physician regarding the patient’s condition.

Orders are written on the basis of this discussion and the physician’s examination of the patient. These orders are distributed to the pharmacy, X-ray, labs, physical therapy, etc., so that other health care professionals can collect additional information to provide insight regarding the patients or initiate treatment.

Tight coupling is defined as reciprocal interdependence across many units and levels. Tight coupling relates to task interdependence, which is the defining characteristic of teams. That is, tasks performed by one member of the team are dependent on tasks performed by other members of the team and the performance of these tasks must be coordinated among team members for effective team performance (delivery of safe care).

For example, in health care an emergency C-section is a tightly coupled event that involves several different members of the labor and delivery team. The nurse handling the case is typically the first to observe fetal distress and must communicate this information to the attending physician.

The doctor must decide if a C-section is necessary based upon the information the nurse provides and review of information collected from the fetal heart rate monitor. If the attending decides to operate, appropriate staff must be notified (anesthesiologist, neonatologist, or pediatrician), and the patient must be moved to surgery.

Before making the initial incision, the patient must be properly anesthetized and the staff should be briefed as to the status of the patient and the baby. This process, which involves a series of interdependent steps, can take place in a matter of minutes depending on the history of the case and the level of fetal distress observed.

  1. For such a sequence to run smoothly, teams must use effective communication and have a shared understanding of the mother’s and baby’s condition.
  2. The Joint Commission (JCAHO) reported that ineffective communication resulted in 70 percent of all preventable errors involving death or serious injury from 1995 to 2003 ( JCAHO 2004 ).

Extreme hierarchical differentiation is defined as an organizational structure in which levels and roles are clearly differentiated. This characteristic is also true of most health care teams. Physicians tend to be at the top of this hierarchy with the case or treatment resulting from their directions.

  1. Therefore, a great deal of coordination is necessary to keep physicians, nurses, and technicians working together as a cohesive unit.
  2. Unfortunately hierarchy often makes it more difficult for medical teams to achieve this level of coordination and cohesiveness.
  3. In fact, research suggests that the extreme hierarchical difference between physicians and nurses in particular can contribute to dysfunctional communication, yielding less than optimal patient care (Keenan, Cooke, and Hillis 1998; Knox and Simpson 2004).

Although most medical teams are hierarchical, high-reliability teams trained in teamwork exhibit characteristics such as assertiveness and mutual trust, which reduce the negative effects of hierarchy. Mutual trust, an essential teamwork KSA, involves a shared belief that team members will protect and support the interests of their team ( Sims et al.2004 ).

Team members with mutual trust are willing to admit to mistakes and accept and appreciate feedback (Bandow 2001; Webber 2002). This allows team members to firmly assert their concerns even to a higher-ranking team member without fear of reprisal. Another key characteristic of HROs is that they contain many decision makers working in complex communication networks (Roberts and Rousseau 1989).

This characteristic personifies most health care teams. First, team members continually need to make important decisions concerning patient care (e.g., start an IV, induce labor, administer narcotics, admit patient). Consequences of these decisions clearly have implications on the ultimate well being and safety of patients.

As most teams in health care are comprised of four to six unique individuals, however, decisions are not always unanimous. Second, as different team members are trained separately in their respective professions (e.g., medical school and nursing school), they have learned to communicate differently and have varying styles of conveying information depending on their role.

Fortunately, new and emerging techniques like the Situation Background Assessment Recommendation (SBAR) strategy have been used in health care to overcome such communication difficulties with positive results ( Leonard et al.2004 ). This particular strategy facilitates clear and concise communication among members of health care teams by providing an easy-to-remember acronym used for framing critical conversations.

A high degree of accountability in HROs is characterized by the severe consequences that can result from errors (Roberts and Rousseau 1989). Although severe consequences may be characteristic of all teams (e.g., project teams), in health care the consequence of a mistake can often be the death of the patient.

Preventable medical errors that result in loss of human life eternally affect the patient’s family, the staff that tended to the patient, the community, and the hospital’s reputation. However, even small mistakes resulting in patient harm yield grave consequences, yet not all medical team members are held equally accountable when errors do occur.

  • More often than not, it is the physician in charge of the patient’s care who gets the brunt of the blame for any mistakes made.
  • Malpractice lawsuits or the possibility of losing one’s license are very real outcomes that add to pressures felt by already stressed physicians.
  • Consequences may also be present for the hospital such as loss of accreditation, and negative media attention.

HROs are also characterized by “immediate feedback” resulting from their decisions; the plane crashes; there is a nuclear disaster; the patient is injured ( Roberts and Rousseau 1989 ). In other words, there is an identifiable, measurable outcome associated with HRO performance.

  • Such outcomes are typically an indicator of poor team and/or system processes within the HRO.
  • For example, in aviation 60–80 percent of all accidents are attributed to human error as opposed to anything technically wrong with the aircraft.
  • Similarly, the IOM report points to human error as a major contributor in patient deaths ( Foushee 1984 ; Kohn et al.1999 ).

Immediate feedback is also a characteristic of effective team performance. Team members must monitor each other and provide each other feedback to maximize team functioning. However, feedback here focuses on team process and its improvement rather than solely on team outcomes.

  • To ensure that feedback occurs, team members must be trained to deliver timely, behavioral, and specific feedback to one another (using such strategies as TDT).
  • The ability to monitor each other’s performance and effectively provide feedback to other team members is a critical facet of achieving higher reliability in health care and elsewhere.

Major HRO activities often occur under compressed time, as in the case of naval flight operations where aircraft are launched and recovered in 48–60-second intervals (Roberts and Rousseau 1989). Somewhat different than the other characteristics, the extent to which this variable is related to team performance is a function of the environment in which the team operates as opposed to the team itself.

Some teams operate under compressed time while others do not. The same is true of health care, with a slightly different twist. Routine procedures like childbirth can quickly become a stressful, time compressed situation should a problem arise with the mother or baby. In such cases, teams need to be able to quickly adapt.

Team members may have to be quickly added and integrated into the team and task—anesthesiologist, emergency response team—or existing members may have to take on new roles—OB-GYN converts from coaching the mother through a normal delivery to conducting an emergency C-section.

  1. The last characteristic of HROs is that critical outcomes occur simultaneously ( Roberts and Rousseau 1989 ).
  2. As discussed earlier, team members work together on interdependent tasks.
  3. This is what separates teams from groups or individuals working in isolation.
  4. Interdependency creates the need for synchronization of activities and outcomes.

For instance, when delivering a baby, each member of a labor and delivery team is actively engaged in different aspects of the process yet their actions are synchronized. As can be seen from the description above, the HRO environment demands teamwork.

Teamwork, or the KSAs that comprise it (refer to Table 1), are critical for successful performance in organizations that are hypercomplex, tightly coupled, hierarchical, time compressed, and rely upon synchronized outcomes (Sims et al. 2004). Aviation, the military, and now health care acknowledge the criticality of teamwork in achieving high reliability, despite limited data showing a direct relation between team training and the ultimate criterion, a reduction in errors (e.g., accidents, deaths, etc.) (Salas et al. 2001).

However, the science of teamwork and team training is still evolving, particularly in health care. The IOM report did much to stimulate research on health care teams, but much of this early work relied upon direct transitions from the commercial airlines to health care.

  1. Despite the work of Gaba and colleagues, it is only within the last 3 years that the science of health care teams has really begun to emerge and take hold ( Baker et al.2003 ).
  2. As a result, a number of questions remain that health care must address to have a firm understanding of teamwork and its relation to patient safety and high reliability.

Direct transitions from aviation without additional study are insufficient. In the next section, we outline a series of challenges the health care community must address to better understand health care team performance, how to maximize this performance, and ultimately improve patient safety.

What is the advantage of high reliability?

High Reliability Teams have the ability to promote safety through task-relevant knowledge, high levels of communication, and adapting to the environment. Such teams are effective, social in nature and feature team members with high task interdependency and shared, common values (Salas, Cooke & Rosen, 2008).

What is the main goal for becoming an HRO high reliability organization?

Evidence Brief: Implementation of High Reliability Organization Principles

The VA Evidence Synthesis Program (ESP) was established in 2007 to provide timely and accurate syntheses of targeted health care topics of importance to clinicians, managers, and policymakers as they work to improve the health and health care of Veterans. These reports help:

  • Develop clinical policies informed by evidence;
  • Implement effective services to improve patient outcomes and to support VA clinical practice guidelines and performance measures; and
  • Set the direction for future research to address gaps in clinical knowledge.

The program is comprised of four ESP Centers across the US and a Coordinating Center located in Portland, Oregon. Center Directors are VA clinicians and recognized leaders in the field of evidence synthesis with close ties to the AHRQ Evidence-based Practice Center Program and Cochrane Collaboration.

The Coordinating Center was created to manage program operations, ensure methodological consistency and quality of products, and interface with stakeholders. To ensure responsiveness to the needs of decision-makers, the program is governed by a Steering Committee comprised of health system leadership and researchers.

The program solicits nominations for review topics several times a year. Comments on this evidence report are welcome and can be sent to Nicole Floyd, Deputy Director, ESP Coordinating Center. The ESP Coordinating Center (ESP CC) is responding to a request from the VA National Center for Patient Safety for a rapid evidence review on implementing High Reliability Organization (HRO) principles into practice.

  • Findings from this review will be used to inform the implementation of the VA’s High Reliability Organization Initiative.
  • To identify studies, we searched MEDLINE®, PsycInfo, CINAHL, Cochrane Central Register of Controlled Trials, and other sources from January 2010 to January 2019.
  • We used prespecified criteria for study selection, data abstraction, and rating internal validity and strength of the evidence.

Full methods are available on the PROSPERO register of systematic reviews (CRD42019125602).

Objective: To systematically evaluate literature on frameworks for high reliability organization (HRO) implementation, metrics for evaluating a health system’s progress towards becoming an HRO, and effects of HRO implementation on process and patient safety outcomes.

  • We identified 5 common HRO implementation strategies across 8 frameworks. Based on those, the Joint Commission’s High Reliability Health Care Maturity Model (HRHCM) and the Institute for Healthcare Improvement’s Framework for Safe, Reliable, and Effective Care emerged as the most comprehensive, as they included all 5 strategies, contained sufficient detail to guide implementation, and were the most rigorously developed and widely applicable.
  • The Joint Commission’s HRHCM/Oro TM 2.0 is the most rigorously developed and validated tool available for evaluating health care organizations’ progress on becoming an HRO; however, it has some conceptual gaps that may be addressed by incorporating metrics from other evaluation tools.
  • Multicomponent HRO interventions delivered for at least years are associated with improved process outcomes ( eg, staff reporting of safety culture) and patient safety outcomes ( eg, serious safety events). However, the overall strength of evidence is low, as each HRO intervention was only supported by a single fair-quality study.

High Reliability Organizations (HROs) are organizations that achieve safety, quality, and efficiency goals by employing 5 central principles: (1) sensitivity to operations (ie, heightened awareness of the state of relevant systems and processes); (2) reluctance to simplify (ie, the acceptance that work is complex, with the potential to fail in new and unexpected ways); (3) preoccupation with failure (ie, to view near misses as opportunities to improve, rather than proof of success); (4) deference to expertise (ie, to value insights from staff with the most pertinent safety knowledge over those with greater seniority); and (5) practicing resilience (ie, to prioritize emergency training for many unlikely, but possible, system failures).

  1. Nuclear power and aviation are classic examples of industries that have applied HRO principles to achieve minimal errors, despite highly hazardous and unpredictable conditions.
  2. As deaths due to medical errors are estimated to be the third leading cause of death in the country, a growing number of health care systems are taking an interest in adopting HRO principles.

In 2008, the Agency for Healthcare Research and Quality (AHRQ) published a seminal white paper that described the application of the 5 key HRO principles in health care settings, including the specific challenges that threaten reliability in health care, such as higher workforce mobility and care of patients rather than machines.

Adoption of these HRO principles in health care offers promise of increased excellence; however, major barriers to widespread implementation include difficulty in adopting organization-level safety culture principles into practice; competing priorities between HRO and other large-scale organizational transformation initiatives such as electronic health records; and difficulty in creating and implementing process improvement tools and methods to address complex, system-level problems.

In February 2019, the Department of Veterans Affairs (VA) rolled out a new initiative outlining the definitive steps toward becoming an HRO. As literature has emerged to guide health systems in implementing and evaluating their HRO journey, an understanding of the quality and applicability of existing HRO resources is important to developing best practices, identifying barriers and facilitators to implementation, measuring progress, identifying knowledge gaps, and spreading implementation initiatives to other systems.

In this review, we evaluate literature on the frameworks for HRO implementation, metrics for evaluating a health system’s progress towards becoming an HRO, and effects of HRO implementation on process and patient safety outcomes. We identified 20 articles published on HRO frameworks, metrics, and evidence of effects.

Eight articles addressed frameworks, and of these, the Joint Commission’s High Reliability Health Care Maturity Model (HRHCM) and the Institute for Healthcare Improvement’s (IHI) Framework for Safe, Reliable, and Effective Care emerged as the most comprehensive, rigorously developed, applicable, and sufficiently detailed to guide implementation.

The most commonly reported implementation strategies across the 8 frameworks were: (1) developing leadership, (2) supporting a culture of safety, (3) building and using data systems to track progress, (4) providing training and learning opportunities for providers and staff, and (5) implementing interventions to address specific patient safety issues.

Most of these frameworks were developed via a consensus process – typically with a group of health system leaders and experts in patient safety – and were intended to be implemented by a variety of health care providers and staff. Articles varied in the depth of information provided on how to implement these frameworks, with some providing specific guidance on implementation activities such as workshops and time frames for implementation and others providing overarching, conceptual guidance.

Eight articles and 1 online tool described metrics for measuring a health system’s progress towards becoming an HRO. The Oro TM 2.0 tool emerged as the most rigorously designed and validated, as it was developed by a leading group in health care improvement, informed by industry leaders across HROs, and tested in a total of 52 US hospitals both within and outside of the VA.

Otherwise, metrics varied in terms of the concept measured, ranging from surveys on culture of safety to extent of integration of HRO principles into practice. The process for developing these metrics also varied by tool. Many groups relied on a literature review or expert consensus, whereas others underwent rounds of revisions and piloted their tool in multiple hospital settings.

  1. Seven articles evaluated the effects of HRO implementation, primarily in children’s hospitals.
  2. The most notable finding is that organizations experienced significant reductions in serious safety events (range, 55% to 100%) following the implementation of the 4 most comprehensive, multicomponent HRO initiatives.

Moreover, time since initiation and safety improvements appear to have a dose-response relationship. Only one of these studies explicitly discussed using a framework identified in this review (ie, the IHI framework). Common implementation activities included some form of basic error prevention training for staff and leadership training for leaders, enhanced root cause analysis processes using an electronic tracking system, provider peer safety coaches to coach their peers in the use of error prevention techniques, routine sharing of good catches and lessons learned, and increased communication through safety huddles.

  1. Successful facilitators of implementation include hiring an outside consultant (eg, Healthcare Performance Improvement), leadership commitment to implementing HRO principles, and enacting policies to facilitate data-sharing.
  2. Barriers to implementation include competing priorities (eg, wide-scale implementation of an Electronic Medical Record system) and high costs.

A major limitation of the literature is that none of these studies compared an HRO intervention to a concurrent control group. Therefore, it is difficult to determine whether these effects are due to HRO implementation or a concurrent intervention or secular trend.

  1. Studies also lacked information on whether intervention components were delivered with fidelity over time and whether the interventions were associated with unintended effects on provider workload or efficiency.
  2. Future HRO implementation research should utilize quasi-experimental designs, such as natural experiments that deliver HRO interventions at a group of sites with other sites serving as a wait list control, to evaluate the effects of specific intervention components and assess the mechanism of change driving outcomes.

The ESP Coordinating Center (ESP CC) is responding to a request from the Department of Veterans Affairs (VA) National Center for Patient Safety for a rapid evidence review on implementing High Reliability Organization (HRO) principles into practice. The purpose of this review is to evaluate the literature on frameworks, metrics, and evidence of effects of HRO implementation.

  1. Findings from this review will be used to inform the implementation of the VA’s HRO Initiative.
  2. In their 2000 report “To Err is Human,” the Institute of Medicine’s (IOM) Committee on Quality of Health Care in America cited deaths due to medical errors as more common than those due to motor vehicle accidents, breast cancer, or AIDS.

Despite continued widespread, discrete process improvement initiatives such as handwashing protocols, patient identification to reduce ‘wrong person’ procedures, protocols for clear communications between care teams and visual indicators for high risks such as fall injury or allergies, a 2016 British Medical Journal report estimated that medical errors continue to be the third leading cause of death in the US.

Additionally, the IOM Committee identified care fragmentation as a root cause of medical errors. In response, they called for a comprehensive, system-level approach to improve patient safety, that shifts the focus away from a culture of blame to one of error analysis and process improvement. Therefore, health care organizations have begun to explore system-level approaches to cultivating a culture of safety, with a focus on collaboration, communication, and coordination.

HRO is one such organizational approach to achieving safety, quality, and efficiency goals. An HRO is “an organization that experiences fewer than anticipated accidents or events of harm, despite operating in highly complex, high-risk environments.”

At the core of HRO is a culture of “collective mindfulness,” in which all workers look for, and report, small problems or unsafe conditions before they pose a substantial risk to the organization and when they are easy to fix. Use of HRO is designed to change thinking about patient safety through the following 5 principles: (1) sensitivity to operations ( ie, heightened awareness of the state of relevant systems and processes); (2) reluctance to simplify ( ie, the acceptance that work is complex, with the potential to fail in new and unexpected ways); (3) preoccupation with failure ( ie, to view near misses as opportunities to improve rather than proof of success); (4) deference to expertise ( ie, to value insights from staff with the most pertinent safety knowledge over those with greater seniority); and (5) resilience ( ie, to prioritize emergency training for many unlikely but possible system failures).

HRO was originally pioneered in extremely hazardous industries, such as nuclear power and commercial aviation, where even the smallest of errors can lead to tragic results. These industries have achieved and sustained extraordinary safety levels, thereby generating much interest in how to adapt HRO principles to health care and replicate this success.

In their 2007 book “Managing the Unexpected,” Weick and Sutcliffe define the 5 principles of HROs and describe how these principles can be applied to improve reliability across diverse industries. In their seminal 2008 Agency for Healthcare Research and Quality (AHRQ) white paper, Hines et al apply these 5 principles to health care settings and describe the specific challenges threatening health care reliability, such as higher workforce mobility and the care of patients rather than machines.

Implementation of HRO initiatives into health care settings is an inherently complex and costly process that involves organizing people, processes, and resource activities across often large organizations. For example, the Nationwide Children’s Hospital’s HRO journey involved increasing their quality improvement (QI) personnel from 8 in 2007 to 33 in 2012, with a budget increase from $690K to $3.3M.

External consultants, such as Healthcare Performance Improvement, LLC, can provide support to organizations undertaking an HRO journey. HRO interventions commonly include activities such as basic error prevention education; leadership training in reinforcement approaches; enhanced root cause analysis processes using an electronic tracking system; promotion of a ‘just culture’ – a culture in which providers and staff are treated fairly, rather than automatically blamed, for mistakes – that supports routine reporting of errors; sharing good catches and lessons learned; and training in error prevention techniques by provider peer safety coaches.

Examples of health systems’ successful adoption of HRO principles are already emerging. Providence St. Joseph Health – a national, not-for-profit Catholic health system comprising more than 50 hospitals, 800 clinics, and 5 million patients across 7 states – has had success implementing its HRO program, Caring Reliably.

Two years after implementing the program – which included partnering with an outside consulting firm to coach leaders through a culture-focused leader toolkit and an error-reduction toolkit for all staff – Providence St. Joseph Health experienced a 5% improvement in the safety climate domain of the Safety Attitudes Questionnaire and a 52% decrease in serious safety events (G. Battey, oral communication, February 2019).

The VA has also experienced HRO implementation successes. The Harry S. Truman Memorial Veterans Hospital began a 3-year HRO project in March 2015 by partnering with the VA National Center for Patient Safety to deliver Clinical Team Training to every inpatient and outpatient clinical service.

  • This included formal interactive classroom training, application of the principles in a project that was unique for each clinical area, and refresher classroom and simulation training after one year.
  • In May 2016, Truman VA augmented their HRO program using a 23-module HRO Toolkit provided by VISN 15, as part of its HRO initiative rolled out across all 7 of its medical centers.

According to Truman VA Associate Director Robert Ritter (R. Ritter, oral communication, February 2019), their HRO program has already resulted in remarkable improvements in staff attitudes and perceptions and significantly increased participation in morning multidisciplinary huddles.

However, despite the promise of increased excellence described in the Joint Commission’s 2013 HRO report, major barriers to widespread HRO implementation readiness at the VA and elsewhere include the complexity of incorporating safety culture principles and practices organization-wide and the difficulty of prioritizing the adoption of process improvement tools and methods among other competing priorities.

To reaffirm their commitment to high reliability and zero harm (working to “reduce errors and to ensure that any errors that may occur do not reach our patients and cause harm”), in February 2019, the VA rolled out a new initiative outlining the definitive steps for becoming an HRO.

  1. The first step is for HRO activities to begin at 18 lead facilities selected based on greater readiness as demonstrated by higher levels of safety performance, leadership commitment, and staff engagement.
  2. Initial HRO activities include establishing work groups, conducting performance readiness assessments, and delivering training events and programs.

Following analysis of lessons learned from these lead sites, the VA plans a national roll-out to achieve the goal of a VA-wide HRO transformation. To ensure success of HRO-related activities and consistent outcomes across the enterprise, VA is using resources from the Joint Commission Center for Transforming Healthcare resource library, including the Oro 2.0 High Reliability Assessment tool.

Additionally, VA is working on developing a standard set of HRO tools, including training, implementation models, and measures. Emerging literature can guide health systems in implementing and evaluating their HRO journey. However, understanding of available frameworks, metrics, and initiatives and their use is currently limited by their complexity and by the wide variability in their key characteristics, target participants ( eg, leadership, medical staff), foundations, structures, which of the 5 HRO principles they address, and health system setting type.

Understanding the quality and applicability of existing HRO resources is important to developing best practices, identifying barriers and facilitators to implementation, spreading implementation initiatives to other systems, measuring progress, and identifying knowledge gaps.

The key questions guiding this review were:

  • What are the frameworks for guiding HRO implementation? What are the main implementation strategies of these frameworks? What were the processes for developing these frameworks ( eg, consensus, literature review)? What are the intended settings of these frameworks? Who participates in implementing these frameworks? What are the processes for implementing these frameworks?
  • What are the metrics for measuring a health system’s progress towards becoming an HRO? What are the main characteristics ( ie, domains, scales) of these metrics? What were the processes for developing these metrics ( eg, consensus, literature review)? To what extent have these metrics been validated or used to inform health system decision-making?
  • What is the evidence on HRO implementation effects? On patient safety/organizational change goals ( eg, number of sites that met a goal of 50% reduction in serious safety events)? On patient safety/organizational change measures ( eg, mean change in number of serious safety events)? On process measures ( eg, mean change in inter-departmental communication, provider or patient satisfaction)?

The ESP included articles published from January 2010 to January 2019 that describe implementation frameworks, metrics for measuring progress towards becoming an HRO, and the effects of HRO implementation. The timeframe of 2010 and onward was selected because it is 2 years after the publication of AHRQ’s 2008 white paper, when one could reasonably expect publication of new research on implementing HRO principles in health care settings.

To be included, articles needed to be explicitly grounded in HRO theory and specifically seek to advance organizational or cultural change. We operationalized this by only including articles that evaluated HRO principles at the organization level or higher ( ie, we excluded articles of HRO implementation in individual departments).

Eligible outcomes include any that are linked to the pathway between the 5 principles of HROs ( ie, sensitivity to operations, reluctance to simplify, preoccupation with failure, deference to expertise, and resilience) and the ultimate goal of health care organizations: exceptionally safe, consistently high-quality care, as outlined in the AHRQ white paper.

  1. See below for the logic model linking the 5 HRO principles to the end goal of improved patient safety outcomes, based on the model described in Hines 2008.
  2. We prioritized articles using a best-evidence approach to accommodate the timeline ( ie, we considered meeting safety goals to be a higher priority than intermediate outcomes ).

We also prioritized evidence from systematic reviews and multisite comparative studies that adequately controlled for potential patient-, provider-, and system-level confounding factors. We only accepted inferior study designs ( eg, single-site, inadequate control for confounding, noncomparative) to fill gaps in higher-level evidence.

To identify articles relevant to the key questions, our research librarian searched MEDLINE, CINAHL, PsycINFO, and Cochrane Central Register of Controlled Trials (CCRT) using terms for high reliability and health care from January 2010 to January 2019 (see Supplemental Materials for complete search strategies).

Additional citations were identified by hand-searching reference lists and consultation with content experts. We limited the search to published and indexed articles involving human subjects available in the English language. Study selection was based on the eligibility criteria described above.

Titles, abstracts, and full-text articles were reviewed by one investigator and checked by another. All investigators have expertise in conducting systematic reviews of health services research. Any disagreements were resolved by consensus. No standard tool is currently available to assess the quality of complex interventions.

We therefore culled concepts from reporting checklists for complex interventions, QI initiatives, and implementation interventions – including the Standards for Quality Improvement Reporting Excellence (SQUIRE 2.0), Standards for Reporting Implementation Studies (StaRI), and Template for Intervention Description and Replication (TIDieR) – to develop a 7-item quality assessment checklist.

Through this checklist, we evaluated whether the study adequately reported on (1) the conceptual link between the intervention and HRO principles, (2) intervention components and delivery, (3) implementation fidelity, (4) evaluation of the intervention, (5) adverse events, (6) confounders, and (7) the use of a concurrent control group.

We considered items 1-4 to be basic criteria in determining whether the study was reported well enough to be reproduced. We considered items 5-7 to be advanced criteria that would increase our confidence that bias was minimized in the study results (see Supplemental Materials for detailed information on the quality assessment checklist).
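To make the structure of this checklist concrete, here is a minimal sketch that encodes the 7 items described above and tallies how many basic and advanced criteria a study reports; the item wording is paraphrased from this review, and the scoring function and example study are our own illustration rather than the review team’s actual instrument.

```python
# Paraphrased 7-item quality assessment checklist (items 1-4 basic, 5-7 advanced).
# Illustrative sketch only; not the review team's actual instrument.
CHECKLIST = {
    1: "Conceptual link between the intervention and HRO principles",
    2: "Intervention components and delivery",
    3: "Implementation fidelity",
    4: "Evaluation of the intervention",
    5: "Adverse events",
    6: "Confounders",
    7: "Concurrent control group",
}
BASIC, ADVANCED = {1, 2, 3, 4}, {5, 6, 7}

def summarize(items_reported):
    """Count how many basic and advanced reporting criteria a study meets."""
    basic_met = len(set(items_reported) & BASIC)
    advanced_met = len(set(items_reported) & ADVANCED)
    return f"{basic_met}/4 basic criteria, {advanced_met}/3 advanced criteria reported"

# Hypothetical study that reports items 1, 2, 4, and 6
print(summarize({1, 2, 4, 6}))
```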

All quality assessments were completed by one reviewer and then checked by another. We did not quantify inter-rater reliability through a kappa statistic; however, qualitatively, our agreement was generally high. Disagreements were generally limited to interpretation of individual risk of bias domains and not overall risk of bias ratings for a study.

We resolved all disagreements by consensus. We abstracted data from all studies and results for each included outcome. All data abstraction and internal validity ratings were first completed by one reviewer and then checked by another. We resolved all disagreements by consensus.

We informally graded the strength of the evidence based on the AHRQ Methods Guide for Comparative Effectiveness Reviews by considering study limitations (includes study design and aggregate quality), consistency, directness, and precision of the evidence. Ratings typically range from high to insufficient, reflecting our confidence that the evidence reflects the true effect.

Where studies were appropriately homogenous, we synthesized outcome data quantitatively using StatsDirect statistical software (StatsDirect Ltd. 2013, Altrincham, UK) to conduct random-effects meta-analysis to estimate pooled effects. We assessed heterogeneity using the Q statistic and the I² statistic.
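As a methodological aside, the review’s random-effects pooling and heterogeneity statistics can be illustrated with a short DerSimonian-Laird sketch; the effect estimates below are hypothetical log rate ratios, not data from the included studies, and the review itself used StatsDirect rather than this code.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooling (DerSimonian-Laird) with Cochran's Q and I^2."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                                   # inverse-variance (fixed-effect) weights
    pooled_fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - pooled_fixed) ** 2)         # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0   # % of variability from heterogeneity
    w_re = 1.0 / (variances + tau2)                       # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, q, i2

# Hypothetical log rate ratios and variances for 4 studies (illustrative only)
pooled, se, q, i2 = dersimonian_laird([-0.80, -1.05, -0.60, -1.70], [0.10, 0.08, 0.15, 0.30])
print(f"pooled log RR = {pooled:.2f} (SE {se:.2f}), Q = {q:.1f}, I2 = {i2:.0f}%")
```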

  1. Where meta-analysis was not suitable due to limited data or heterogeneity, we synthesized the evidence qualitatively.
  2. Throughout the report, we use the following terminology to describe different levels of HRO theory and implementation.
  3. The complete description of our full methods is available on the PROSPERO international prospective register of systematic reviews (registration number CRD42019125602).

A draft version of this report was reviewed by peer reviewers as well as clinical leadership. Their comments and our responses are presented in the Supplemental Materials. The literature flow diagram summarizes the results of search and study selection (see Supplemental Materials for full list of excluded studies).

Our search identified 525 unique, potentially relevant articles. Of these, we included 20 articles that addressed one or more of our key questions: 8 articles addressed the first key question, 8 addressed the second, and 7 addressed the third. We identified 8 frameworks that guide implementation of HRO principles into a health care system: the Joint Commission’s High Reliability Health Care Maturity Model (HRHCM); the Institute for Healthcare Improvement’s (IHI) Framework for Safe, Reliable and Effective Care; the American College of Healthcare Executives’ (ACHE) Culture of Safety framework; 2 frameworks developed at Johns Hopkins’ (JH) Armstrong Institute for Patient Safety and Quality, including an Operating Management System and a Safety and Quality framework; the Office of the Air Force Surgeon General’s Trusted Care framework; the Advancing Research and Clinical Practice through close Collaboration (ARCC) Model; and a framework focused on developing high reliability teams.

The Joint Commission’s HRHCM and IHI Framework for Safe, Reliable and Effective Care emerged as the most comprehensive, as they both covered all 5 strategies commonly reported across frameworks; were the most rigorously developed; were broadly applicable; and were sufficiently detailed to inform implementation. The first key strategy is developing leadership. The Joint Commission discussed the need for leadership ( eg, board members, CEO/management, and lead physicians) to commit to the goal of zero patient harm. IHI described the need for leaders to facilitate and mentor teamwork, improvement, respect, and psychological safety.

ACHE incorporated elements from both of these frameworks, including selecting, developing, and engaging a board; prioritizing safety in the selection and development of leaders; and establishing a compelling vision for safety. The JH Operating Management System framework and the Air Force emphasized the importance of leadership accountability.

The JH Safety and Quality framework encouraged QI leaders to pursue formal degrees to support their work. The ARCC and high reliability team models did not explicitly discuss leadership as a key strategy, although the ARCC model did discuss the importance of developing and using mentors to guide evidence-based decision-making. The second key strategy is supporting a culture of safety. The Joint Commission described building trust, accountability, identifying unsafe conditions, strengthening systems, and assessment as key activities within this strategy. The IHI listed culture – including psychological safety, accountability, teamwork and communication, and negotiation – as one of their 2 major domains.

The ACHE named their framework “culture of safety” and emphasized the need to both lead and reward a just culture and establish organizational behavior expectations. The Air Force described the importance of trust between leaders and staff, respectful communication, and willingness to admit errors within their culture of safety domain.

The ARCC model incorporated an assessment of culture as a key aspect of implementation, and the high reliability team model emphasized that responses to poor outcomes should be based on behavioral choices and not severity of outcome. Neither JH framework explicitly discussed culture of safety. The third key strategy is building and using data systems to measure progress. The Joint Commission discussed the need to track and display quality measures and to involve IT support in the development of solutions to quality problems. IHI described the need for open sharing of data and other information concerning safe, respectful, and reliable care and to continually improve work processes and measure progress over time.

The JH Operating Management System discussed the need to share and synthesize data to gain insights to make new discoveries and improve processes, and their Safety and Quality framework included a plan to evaluate processes. The Air Force described standardizing processes to gather and share information about patient care episodes, knowledge data, and processes to improve care delivery.

The ARCC model described data management and outcomes monitoring as one of their implementation workshops. The high reliability team model did not include a strategy related to measurement of progress. The fourth key strategy is providing training and learning opportunities for providers and staff. The Joint Commission discussed the importance of training all staff on robust process improvement ( eg, a blended performance improvement model aimed at improving patient safety in health care settings by integrating Lean Six Sigma and formal change management principles) as appropriate to their jobs.

IHI and the Air Force discussed developing learning systems, although the learning has more to do with implementing QI initiatives and learning from results than with learning how to implement HRO principles. The JH Safety and Quality framework listed examples of training that each type of staff member should receive.

The ARCC model described a workshop dedicated to evidence-based practice skills-building, and the high reliability team model discussed implementation of TeamSTEPPS, a teamwork curriculum for health care staff. ACHE and the JH Operating Management System did not specifically discuss training or learning opportunities. The fifth key strategy is implementing quality improvement interventions to address specific patient safety issues. This strategy is discussed in broad strokes as robust process improvement by the Joint Commission and the Air Force, and as improvement and measurement by the IHI.

In the ARCC model, participants complete a 12-month evidence-based practice implementation project focused on improving quality of care, safety, and patient outcomes. The JH Safety and Quality framework discussed the role of safety and quality experts in designing and directing system improvement efforts and provided examples of potential initiatives.

The high reliability team framework described simulation training where teams can practice briefing, huddles, and debriefing strategies. Neither the ACHE nor the JH Operating Management System explicitly discussed QI initiatives. In addition, we identified several complementary practices for strengthening implementation:

  • Incorporation of justice, equity and patient-centeredness: The ACHE describes building trust, respect and inclusion as a key domain of building a culture of safety. The framework encourages leaders to value diversity and inclusion when selecting leaders and staff and to work towards evaluating and eliminating disparities in patient care. The Air Force selected patient-centeredness as a key domain of its framework. This practice could be integrated into HRO delivery through activities such as hiring a diverse workforce or prioritizing QI initiatives that address safety issues that disproportionately affect patients from racial/ethnic minority groups.
  • Involvement of a variety of stakeholders involved in health care delivery, including patients and families: The JH Operating Management System described establishing patient and family advisory councils as an implementation activity that could be undertaken to advance one of their key implementation strategies. Other possible activities include assessing patient perspectives of culture of safety or inviting patients to serve on HRO leadership committees.
  • Assembling transdisciplinary teams: Several frameworks – including the JH Operating Management System, ARCC model, and high reliability team framework – discuss forming transdisciplinary teams as an important activity towards advancing HRO. This practice could be integrated into HRO delivery through activities like inviting providers from different specialties to attend daily safety huddles, or having nurses, physicians, and staff all attend the same HRO training sessions together.
  • Utilizing change management strategies such as Lean Six Sigma to promote change: Most frameworks recommended health systems use complementary change management strategies – such as Lean Six Sigma, IHI’s Model for Improvement, or a combination of strategies such as the Joint Commission’s robust process improvement – to implement HRO principles into practice. This complementary practice could be integrated into several aspects of HRO delivery, such as training staff on Lean Six Sigma, or applying Lean thinking to root cause analysis to identify what is contributing to patient safety events and to identify and implement solutions.

The Joint Commission’s HRHCM stood out as being the most rigorously developed framework, as the process involved a literature review, consensus among subject experts, pilot testing among an expert panel, and pilot testing with leadership at 7 US hospitals.

However, the latter pilot testing effort was primarily focused on evaluating the tool to measure a health system’s progress on the framework. The Air Force and JH Safety and Quality framework were developed through both a literature review and consultation with health care leaders and content experts.

The IHI framework was developed specifically for the IHI Patient Safety Executive Development Program curriculum and was informed by an analysis of high-performing, proactive, and generative work settings. The ACHE framework was developed through partnership between the ACHE, the IHI, and the National Patient Safety Foundation (NPSF) Lucian Leape Institute (LLI).

  1. It involved consensus-building with industry leaders and experts who have had success in transforming their organizations into system-wide cultures of safety.
  2. The ARCC model was initially developed through a strategic planning process on how to rapidly integrate research findings into clinical processes.

The 2 remaining articles did not discuss the process of how frameworks were developed (JH Operating Management System, high reliability teams). All frameworks were intended to be delivered in any health care delivery setting, except for the Air Force’s framework, which was designed specifically for the Air Force Medical Service.

  • IHI’s framework was initially developed for use in acute care settings, although it has since evolved to be applicable to other settings.
  • Most frameworks were intended to be implemented by a variety of health care leaders, providers, and staff, including frontline providers, local and middle managers, and high-level managers and executives, as well as safety and quality leaders, across a variety of service areas.

IHI’s framework also included components to be implemented by patients and families. Exceptions are the ACHE and the JH Operating Management System frameworks, which were specifically designed for health care leadership, and the high reliability team framework, which was designed for nursing professionals.

  • The ARCC model provided details on providing learning and training opportunities ( ie, 6 educational workshops, 8 days of educational and skills-building sessions over 1 year), as well as on implementing an intervention to address a specific patient safety issue ( ie, 12-month project focused on improving quality of care, safety, and/or patient outcomes).
  • The Joint Commission and IHI provided high-level recommendations for operationalizing HRO implementation, including building and using tools to measure progress ( ie, assess the current state of HRO maturity; develop tools to advance maturity), as well as specific examples of activities that could advance these strategies.

Other frameworks provided some guidance on how to operationalize implementation, although they were less comprehensive.

  • ACHE described 2 levels of implementation practices: foundational practices which focus on laying the groundwork for HRO implementation and sustaining practices which focus on spreading and embedding HRO concepts, specifically a culture of safety.
  • The JH Operating Management System suggests approaches to implementing the core concepts of the model, including developing and using data systems ( ie, providing leaders with a standardized reporting format to assist in reporting on department progress), using systems engineering methodology, and convening stakeholder groups.
  • The JH Safety and Quality initiative provided recommendations based on the role of a specific health care provider or staff member. For example, they have specific suggestions on training and learning opportunities ( ie, provide front line providers and staff with basic medical school education on safety and quality; provide managers with patient safety certificate programs and workshops on Lean Six Sigma and other change management processes).
  • The Air Force’s suggestions for operationalization include standardizing and stabilizing processes, engaging staff in behaviors to continuously improve these processes, mentoring staff, and leadership goal-setting, as well as a description of the desired future state of HRO integration into practice.
  • The high reliability team framework described specific approaches that touch on several implementation strategies, including learning and training opportunities ( ie, simulation training and provision of a structured HRO curriculum) and supporting a culture of safety ( ie, development of a just culture system to guide responses to staff when patient harm occurs).

We identified 8 articles on 6 tools for measuring progress toward becoming an HRO. The Joint Commission’s HRHCM/Oro 2.0 emerged as the most rigorously developed, validated, and applicable tool for VA settings. However, other tools such as the ACHE’s Culture of Safety Organizational Self-Assessment Tool may be useful in developing specific items missing from the Oro 2.0 framework, such as teamwork culture and system-focused tools for learning and improvement.

  • Four additional tools have unclear applicability to the VA, as they were developed in countries outside the US, did not report measurement items, or require qualitative expertise to analyze results.
  • Full details on these studies appear in the Supplementary Materials, and selected findings appear below.

The tool that most comprehensively addressed all 5 HRO implementation strategies was the HRHCM/Oro 2.0. As discussed above, the HRHCM is the Joint Commission’s framework for implementing HRO principles. This framework includes 4 levels (beginning, developing, advancing, approaching) for each of its 14 components (56 total) to guide health care leaders in assessing their system’s level of maturity on becoming an HRO.

The Oro 2.0 is a web-based application that uses branching logic to guide health care leaders through the HRHCM assessment and produces a visual report that synthesizes data from multiple respondents within a single hospital. Of note, the Oro 2.0 was designed to be used at the individual hospital, rather than at a system level, and is only available to Joint Commission-accredited organizations.

The tool outputs data into reports that could theoretically be shared between hospitals, but sharing is not an automatic feature. To develop the metrics used by the HRHCM/Oro 2.0, a team at the Joint Commission spent over 2 years engaging with high reliability experts from academia and industry, leading safety scholars outside of health care, and the published literature.

Iterative testing with hospital leaders – first among 5 individuals in executive leadership positions, then among leadership teams from 7 US hospitals – was conducted to finalize the framework and included metrics. The resultant tool has since been validated in peer-reviewed research studies, including 1 study that tested the content validity of the tool at 6 VA sites.

Another study tested the internal reliability and discriminative ability in detecting different levels of HRO maturity in 46 hospitals from the Children’s Hospitals’ Solutions for Patient Safety network. The VA study was a secondary analysis of qualitative data from 138 VA employees with patient safety expertise at various levels of leadership ( eg, patient safety managers, executive leadership and service chiefs, infection control nurses) from 6 VA sites.

The original study validated the AHRQ-developed patient safety indicator tool; the secondary analysis looked at how well responses mapped onto the Joint Commission’s HRHCM model. Researchers found that 12 of the 14 HRHCM components were represented, indicating good content validity. Two additional HRO components were identified through interviews that were not represented in the HRHCM model: teamwork culture and systems-focused tools for learning and improvement.

While less applicable to the VA, the study that tested the HRHCM in 46 children’s hospitals found that the HRHCM had good internal reliability (Cronbach’s alpha = 0.72 to 0.87, depending on the domain), good discriminative ability ( ie, health system average scores on beginning, developing, advancing, and approaching levels of maturity resembled a bell curve), and was responsive to change ( ie, safety culture decreased after major organizational changes), indicating it may perform well at detecting progress on becoming an HRO.
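For readers less familiar with the internal reliability statistic reported here, the sketch below computes Cronbach’s alpha for a small matrix of hypothetical item responses; the 0.72 to 0.87 range above comes from the children’s hospital study itself, not from this code.

```python
import numpy as np

def cronbachs_alpha(scores):
    """Cronbach's alpha for a respondents-by-items matrix of scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items in the domain
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 respondents rating 4 items on a 1-4 maturity scale
responses = [
    [2, 3, 2, 3],
    [3, 3, 4, 3],
    [1, 2, 2, 1],
    [4, 4, 3, 4],
    [2, 2, 3, 2],
]
print(round(cronbachs_alpha(responses), 2))
```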

While less comprehensive, rigorously developed, or evaluated than the HRHCM/Oro 2.0, the ACHE’s Culture of Safety Organizational Self-Assessment Tool is an additional metric for evaluating progress on becoming an HRO. It incorporates additional perspectives ( ie, patients, families) and specific items ( eg, teamwork culture) that may be informative to the VA.

The ACHE tool addresses 3 (leadership, culture of safety, and data systems) of the 5 key HRO implementation strategies. It consists of 18 items concerning an organization’s capabilities and processes scored on a 5-point Likert scale. Lower (worse) scores prompt a review of foundational tactics towards becoming an HRO, moderate scores prompt a review of both foundational and sustaining tactics, and higher (better) scores prompt a review of sustaining tactics.
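The review does not report the tool’s exact cut points, so the thresholds in the following sketch are hypothetical; it simply shows how 18 Likert-scale item scores could be rolled up into the foundational/sustaining review guidance described above.

```python
def ache_guidance(item_scores):
    """Map 18 item scores (1-5 Likert scale) to the review direction described above.

    The low/moderate/high cut points below are hypothetical; the published tool's
    exact thresholds are not reported in this review.
    """
    assert len(item_scores) == 18 and all(1 <= s <= 5 for s in item_scores)
    mean_score = sum(item_scores) / len(item_scores)
    if mean_score < 2.5:                 # lower (worse) scores
        return "Review foundational tactics"
    if mean_score < 4.0:                 # moderate scores
        return "Review foundational and sustaining tactics"
    return "Review sustaining tactics"   # higher (better) scores

# Hypothetical organization scoring mostly 3s and 4s across the 18 items
print(ache_guidance([3, 4, 3, 3, 4, 4, 3, 3, 4, 3, 3, 4, 4, 3, 3, 4, 3, 4]))
```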

  1. The ACHE tool was developed through partnership with the IHI/NPSF LLI and others, as described above.
  2. The tool has not undergone any formal validation processes.
  3. While limited in terms of the number of strategies covered and extent of validity testing, the ACHE tool offers 2 additional features not covered by the HRHCM/Oro 2.0.

First, it specifically seeks perspectives beyond leadership, including providers and staff, as well as patients and families. However, of note, patients and families may have difficulty completing many of the ACHE tool items, such as the amount of time board members spend discussing patient safety issues in meetings and the extent to which leadership performance assessments and incentives are aligned with patient safety metrics.

Second, the ACHE tool includes items related to teamwork and systems, such as the item: “My organization uses and regularly reviews a formal training program and defined processes for teamwork and communication.” We identified 4 additional tools that covered 2 or fewer of the 5 HRO implementation strategies.

They have more limited applicability to the VA due to their narrower focus, lack of reporting on the specific tool items, and/or development outside the US. The Cultural Assessment Survey (CAS) is a metric used to measure culture of patient safety and was designed specifically for use in obstetric units in Canada.

The CAS had a rigorous development process, including a literature review to develop a list of over 100 values and practices that support a culture of safety, a short list of prioritized values and practices developed after sending 300 surveys to employees at 8 hospitals, a pilot test of the short list at 10 hospitals, and testing of its internal reliability and content validity.

However, the article did not include a copy of the tool or the items included in the tool. The narrow focus on obstetric units also limits the applicability of the tool to the VA’s broad HRO implementation. The University of Tehran developed 2 metrics: The first is a 55-item survey assessing a health care system’s readiness for HRO implementation.

It was developed through a literature review and pilot-testing among 98 senior or middle managers from 15 hospitals. The second is a 24-item survey and checklist that assesses knowledge of HRO concepts and integration of HRO principles into practice. It was developed through interviews with managers and staff at 80 medical and nonmedical departments.

These metrics are notable as being the only ones specifically designed around the 5 HRO principles described by Hines et al in 2008. However, both metrics were limited in terms of the extent to which they covered HRO implementation strategies – with one assessing 2 out of 5 strategies and the other with unclear coverage, as it did not report any specific examples of its metric items.

Both of these were evaluated in terms of their content validity and performed well. However, the applicability of these tools to the VA is unclear, as they were developed for a specific health care system in Tehran, Iran. One additional metric developed by the Delft University of Technology in the Netherlands offers a qualitative framework for assessing level of reliability.

This framework resembled the HRHCM/Oro 2.0 in that it has 4 stages of maturity: craft, watchful professional, collective professionalism, and high reliability. It was developed through a literature review to identify the common domains that are essential to high reliability hospitals and did not undergo any validity testing.

This metric also has unclear applicability to the VA, due to significant differences between the US and Dutch health care systems. Delivering the framework in its current state at the VA would also be challenging, as it has open-ended items to promote thinking about the overall strengths and limitations of a health care system, rather than specific questions to which a provider or health care leader could concretely respond ( eg, under organizational culture, a less reliable hospital would have qualities of “learning by doing” while a more reliable hospital would have “a preoccupation with possible failure”).

We identified articles from 7 health care organizations, primarily children’s hospitals, on the effects of HRO initiative implementation on safety culture, HRO process, and patient safety measures.

Full details on these articles are available in the Supplementary Materials, and selected findings appear below. The most notable finding is that organizations experienced significant reductions in serious safety events (SSEs) (range, 55% to 100%) following the implementation of the 4 most comprehensive, multicomponent HRO initiatives.

  • Moreover, time since initiation and safety improvements appear to have a dose-response relationship, and the improvements were maintained for upwards of 9 years.
  • Of note, only one of these studies explicitly discussed using one of the identified frameworks ( ie, the IHI framework).
  • Two years after implementation, SSE reductions were 55% and 83%, respectively, in hospitals with a 12-month average of 0.9 (Ohio Children’s Hospital Association) and 1.15 (Nationwide Children’s Hospital) SSEs per 10,000 adjusted patient days.

At 4 years, Cincinnati Children’s Hospital Medical Center reported a 67% reduction in SSE rates, with a baseline 12-month average of 0.9 events per 10,000 adjusted patient days. After 9 years, Genesis Health System reported achieving its goal of zero SSEs (100% reduction).
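To make the units concrete, the short calculation below shows the follow-up SSE rates implied by the reported baselines and percent reductions; these implied rates are arithmetic consequences of the figures above, not separately reported results.

```python
def implied_followup_rate(baseline_rate, pct_reduction):
    """SSE rate implied by a baseline rate and a reported percent reduction."""
    return baseline_rate * (1 - pct_reduction / 100)

# Baselines are SSEs per 10,000 adjusted patient days, as reported above
examples = {
    "Ohio Children's Hospital Association (55% reduction)": (0.9, 55),
    "Nationwide Children's Hospital (83% reduction)": (1.15, 83),
    "Cincinnati Children's Hospital Medical Center (67% reduction)": (0.9, 67),
}
for name, (baseline, pct) in examples.items():
    rate = implied_followup_rate(baseline, pct)
    print(f"{name}: ~{rate:.2f} SSEs per 10,000 adjusted patient days")
```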

In these studies, SSE was typically defined as “the most serious harm events that occur in hospitals and are defined by serious patient harm events that directly results from a deviation in best practice or standard of care.” Improvements in safety culture were also reported, including improvement in safety attitudes and an increase in safety success story reporting, but changes across various other safety culture dimensions had mixed results.

At Cincinnati Children’s Hospital Medical Center, responses to the AHRQ Hospital Survey on Patient Safety Culture indicated improvements in organizational learning and continuous improvement, feedback and communication about error, and staffing. However, they reported no change in supervisor/manager expectations and actions promoting safety, teamwork within hospital units, or nonpunitive responses to error, and a decline in communication openness.

  • A commonality across the 4 hospitals that reported SSE reductions is that they implemented their HRO initiative with the help of the same external consultant, Healthcare Performance Improvement (HPI), LLC.
  • Although the components varied somewhat across these 4 hospitals, they generally aligned with the 5 strategies discussed above: (1) developing leadership ( eg, leadership training); (2) supporting a culture of safety ( eg, increased communication through safety huddles; routine sharing of good catches and lessons learned); (3) providing training and learning opportunities for providers and staff ( eg, error prevention training for staff; provider peer safety coaches who coached their peers in the use of error prevention techniques); (4) building and using data systems to track progress ( eg, enhanced root cause analysis processes using an electronic tracking system); and (5) implementing interventions to address specific patient safety issues ( eg, embedding “time outs” and “debriefs” into standard surgical processes, using standardized checklists).

Despite these similarities, initiatives conceptualized their goals of zero patient harm in different ways: one initiative’s board encouraged management to “aspire to eliminate preventable harm” by reducing the preventable harm index to zero; one aimed to reduce SSEs to zero; and 2 others aimed to reduce SSEs by 75%-80%.

In addition, the structure of the Ohio Children’s Hospital Association was unique in that it is a state-wide collaboration of 8 tertiary pediatric referral centers that specifically refuse to compete on matters related to patient safety. To promote transparent sharing of critical safety data among the collaborative to facilitate lessons learned without fear of undue liability, Ohio House Bill 153 was passed in 2010 to provide a legal framework expressly providing peer review protection for the 8 participating hospitals.

In addition to the 4 HPI-assisted initiatives, we also identified a similarly comprehensive initiative independently implemented by JH Hospital and Health System: the Operating Management System. Although the study did not report on SSEs, the authors reported improved compliance in Joint Commission process measures and a 79% reduction in potential preventable harms.

  • Finally, we found that process improvements are possible even with less intensive HRO initiatives that are more focused in scope.
  • When the Riley Hospital for Children at Indiana University Health implemented a Daily Safety Brief, they found improvement in communication, awareness, and working relationships, but not in comfort with sharing errors.

The Children’s National Medical Center experienced an increase in Apparent Cause Analysis (ACA) reliability scores following implementation of 13 interventions across education, process, and culture categories. They also reported an increase in efficiency (4 fewer days to turn around ACA) and increased satisfaction with the process.

While the results of these studies are promising, the overall strength of this evidence is low. Each initiative was only evaluated in a single study (consistency unknown), and each study was fair quality (common methodological weaknesses included lack of reporting on implementation fidelity and no concurrent control groups), with generally indirect outcomes and populations (few reported whether they met their goal of zero harm; none were conducted in Veterans).

The main strengths of these studies were that they generally provided sufficient detail on how the intervention is conceptually linked to HRO, their main intervention components, and how they evaluated effects. Their main limitation was that a cause-effect relationship could not be established between these HRO initiatives and outcomes, because no study used a concurrent control group that would have ruled out the possibility that the effect was due to concurrent interventions ( eg, implementation of an electronic medical record or improved specialty-specific disease management).

  1. To our knowledge, this is the first evidence review to systematically evaluate primary research on the effects of HRO implementation in health care settings.
  2. Furthermore, although much has been written about the concepts of HRO and individual health care systems’ experience with HRO implementation, few have looked across different systems to describe similarities and differences in frameworks and metrics, and what lessons might be learned based on the successes and challenges encountered using different approaches.

Gaining a better sense of how HRO has been successfully delivered is critical to informing the work of the VA and other health systems as each embarks on its HRO journey. Although a variety of frameworks for implementation of HRO principles are available, the Joint Commission’s HRHCM and the IHI’s Framework for Safe, Reliable, and Effective Care stand out as being the most comprehensive, applicable, and sufficiently descriptive to be used by the VA.

Both of these frameworks cover 5 common HRO implementation strategies seen across frameworks, including (1) developing leadership, (2) supporting a culture of safety, (3) building and using data systems to track progress, (4) providing training and learning opportunities for providers and staff, and (5) implementing interventions to address specific patient safety issues.

Complementary practices to strengthen implementation seen across these frameworks include the need to incorporate an awareness of justice, equity, and patient-centeredness into all elements of HRO implementation; the importance of involving a variety of stakeholders involved in health care delivery, including patients and families; and the value of integrating change management strategies into HRO delivery.

The selection of one of these frameworks – or development of a new framework – should be informed by the staff being targeted for HRO implementation ( eg, all providers and staff, only leadership, only nursing professionals); the approach desired ( eg, developing a high-level operations management system vs training staff and providers on HRO principles and practices); and the capacity of the system in implementing certain components of the HRO framework ( eg, a system that does not have strong leaders in evidence-based medicine may not want to implement the ARCC model).

Of the metrics available to evaluate a health system’s progress towards becoming an HRO, the Joint Commission’s HRHCM/Oro 2.0 is the most comprehensive, rigorously developed, and applicable to the VA HRO initiative, given that its content validity has been evaluated at 6 VA hospitals.

  1. This tool was not designed to facilitate sharing data across hospitals; however, the tool outputs data into reports that could be shared.
  2. Of note, findings from the VA validation study indicate that certain concepts (teamwork culture and system-focused tools) are missing from the HRHCM framework and should be added.

An example from the ACHE tool that might address these concepts is: “My organization uses and regularly reviews a formal training program and defined processes for teamwork and communication.” The VA HRO Initiative may consider adding these or similar concepts to the current tool being used to assess VA sites’ progress on becoming HROs.

  • Additionally, other tools published prior to 2010 may be appropriate for capturing process outcomes on the pathway between the 5 HRO concepts and the end-goal of improved safety outcomes, such as the Safety Attitudes Questionnaire and the Safety Organizing Scale.
  • Multicomponent HRO interventions that incorporate some of the 5 common HRO implementation strategies identified above and that are delivered for at least 2 years are associated with improved process outcomes ( eg, staff reporting of safety culture) and patient safety outcomes ( eg, SSEs).

However, the overall strength of evidence is low, as each HRO intervention was only evaluated in a single fair-quality study. Facilitators of successful implementation may include hiring an outside consultant ( eg, HPI) to assist in the implementation, enacting policies to facilitate data sharing ( eg, passage of a state house bill to enable a collaborative of children’s hospitals to share critical safety data), and leadership commitment to implementing HRO principles.

Barriers to implementation may include competing priorities, such as widescale implementation of an EMR system, and costs ( eg, one system increased quality improvement staff from 8 to 33, with a budget increase of over $2 million). HRO interventions and other complex interventions are inherently difficult to study, because many interventions are implemented by many different people across multiple time points.

Each hospital may also choose to implement different components of HRO interventions, depending on their individual needs and context. As a result, isolating the specific components of an HRO intervention that cause a specific effect on process and patient safety outcomes is difficult.

Furthermore, without a control group, we cannot conclude that the HRO intervention, rather than another concurrent intervention or secular trend, caused the change. One study commented that other simultaneously implemented interventions, including EMR implementation and improved specialty-specific disease management, may have contributed to improved outcomes.

EMR implementation is likely to be a confounder across multiple studies and could improve patient safety by making it easier to find and use patient health information, to collaborate with colleagues in other departments, and by building checklists and other automated processes into patient appointments.

Other plausible confounders include utilization of other change management strategies, such as Lean Six Sigma, before or during the HRO implementation. Therefore, while promising, evidence of improved outcomes after HRO implementation should be interpreted cautiously. Many studies commented that HRO was delivered among high-performing hospitals.

Whether or not lower-performing hospitals would have the same outcomes is unclear. In addition, few studies commented on the fidelity of implementation or compliance, such as whether providers attended the required number of trainings or continually maintained safety event reporting systems.

Therefore, we cannot determine whether health care staff continued to be invested in HRO implementation over time. Studies that did report compliance measures noted that staff response rates to culture surveys increased over time and reported the number (but not the percentage) of providers who completed trainings.

Only 1 study described a potential unintended consequence of HRO implementation ( ie, ACA turnaround time decreased). The study authors hypothesized that reasons for this increased efficiency included the availability of a standardized toolkit, clear rubrics to follow, and additional resources that facilitated completion of the process.

  • The effect of HRO implementation on provider and staff workload and efficiency is an important research question that should be the subject of future research.
  • First, a limitation of our review is that searching from 2010 forward means we did not include earlier publications on HRO framework design and implementation.
  • However, our search strategy and consultation with topic experts likely resulted in identification of the most recent and relevant articles that incorporated AHRQ’s conceptualization of the 5 HRO principles in healthcare settings.

Second, our use of a single investigator to review articles, with checking by a second reviewer, may also have resulted in missed eligible studies. However, we used objective criteria to minimize the potential for differences between investigators. Finally, our quality assessment checklist on complex interventions was not designed to conduct a comprehensive assessment of all areas of bias, but rather to ascertain whether the study authors reported enough information that the intervention and evaluation could be reproduced and to highlight common issues in reporting and methodology seen across studies.

Therefore, while it may not have captured all areas of bias seen in these studies, the use of another more formal tool would likely not have changed our conclusions. The biggest gaps in knowledge on HRO implementation are (1) whether the improvements in process and safety outcomes are truly caused by HRO interventions or due to concurrent interventions or secular trends; (2) if HRO does indeed lead to improved outcomes, which components of HRO interventions are causing the effects; (3) whether certain implementation frameworks lead to better outcomes; and (4) what are the contextual factors (such as barriers and facilitators) affecting successful HRO implementation.

Randomized controlled trial designs are not a practical option for evaluating HRO interventions due to the complexity of both the intervention and its delivery; therefore, other study designs, such as quasi-experimental designs or natural experiments, should be utilized instead.

  • The VA HRO initiative is in a unique position to conduct these types of experiments.
  • Implementing HRO principles at a select number of VA sites while other sites serve as a “wait-list” control would create a natural experiment to see if HRO implementation leads to improved outcomes.
  • If this approach is taken, consideration should be given to the extent to which wait-list control sites have already begun implementing HRO concepts on their own or whether they are implementing similar initiatives such as Lean Six Sigma; a minimal sketch of how such a comparison might be analyzed appears below.
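As a minimal sketch of how such a wait-list comparison could be analyzed, the code below applies a simple difference-in-differences calculation to hypothetical annual SSE rates for lead and wait-list sites; the data and the choice of estimator are assumptions for illustration, not a VA-specified analysis plan.

```python
import numpy as np

# Hypothetical SSE rates (per 10,000 adjusted patient days) before and after the
# lead sites implement HRO; wait-list sites have not yet implemented.
hro_before, hro_after = np.array([1.10, 0.90, 1.00]), np.array([0.60, 0.50, 0.55])
wait_before, wait_after = np.array([1.00, 1.20, 0.95]), np.array([0.95, 1.10, 0.90])

# Difference-in-differences: change at HRO sites minus change at wait-list sites.
# The wait-list change stands in for the secular trend the review says must be ruled out.
did = (hro_after.mean() - hro_before.mean()) - (wait_after.mean() - wait_before.mean())
print(f"Estimated HRO effect: {did:+.2f} SSEs per 10,000 adjusted patient days")
```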

In addition, the widescale implementation of HRO across different sites likely means that each site will deliver slightly different interventions based on their individual contexts. Careful recording of the intervention components, when they were delivered, where they were delivered ( eg, medical or surgical service areas), and whether they continued to be delivered may help to elucidate the effects of some of these individual intervention components on outcomes.

This can inform where to invest future resources and how to tailor HRO delivery to specific contexts. In addition, we were unable to determine the mechanism of change between HRO implementation and improvement in outcomes. While HRO delivery is theorized to lead to changes in thinking about patient safety, resulting in improved processes and outcomes, this was not empirically examined in any of our included studies.

Instead, some studies suggested that the impact of HRO on other process measures, such as safety culture, is mixed. This indicates that the mechanism of action driving changes in outcomes is more complex. Future studies should evaluate the mechanism of change, such as improved mindfulness or safety culture, to help answer both how and why HRO implementation may lead to improved patient safety outcomes.

  1. Future studies may also want to consider the extent to which HRO implementation overlaps – or doesn’t – with system redesign strategies, as these are complementary approaches to improving quality of care.
  2. A variety of frameworks and evaluation tools are available for HRO implementation and evaluation, with the Joint Commission’s High Reliability Health Care Maturity (HRHCM)/Oro 2.0 among the most rigorously developed and validated.

Multicomponent HRO interventions that include several of the 5 common implementation strategies and that are delivered for at least 2 years are associated with improved process outcomes, such as staff perceptions of safety culture, and important patient safety outcomes, such as reduced SSEs.

  1. Future research studies should incorporate concurrent control groups through quasi-experimental designs to rule out the possibility that the effects are due to other interventions or secular trends.
  2. Future research should also focus on identifying whether certain frameworks, metrics, or components of interventions lead to greater improvements.

This topic was developed in response to a nomination by the VA National Center for Patient Safety for the purpose of informing the implementation of the VA’s High Reliability Organization Initiative. The scope was further developed with input from the topic nominators ( ie, Operational Partners), the ESP Coordinating Center, and the technical expert panel.

  1. In designing the study questions and methodology at the outset of this report, the ESP consulted several technical and content experts.
  2. Broad expertise and perspectives were sought.
  3. Divergent and conflicting opinions are common and perceived as healthy scientific discourse that results in a thoughtful, relevant systematic review.

Therefore, in the end, study questions, design, methodologic approaches, and/or conclusions do not necessarily represent the views of individual technical and content experts. The authors gratefully acknowledge Emilie Chen and Julia Haskin for editorial support, Scott Grey for his expertise on HRO research, and the following individuals for their contributions to this project: Operational partners are system-level stakeholders who have requested the report to inform decision-making.

    • William Gunnar, MD, Executive Director, National Center for Patient Safety
    • Amy Kilbourne, PhD, MPH, Director, Quality Enhancement Research Initiative

To ensure robust, scientifically relevant work, the TEP guides topic refinement; provides input on key questions and eligibility criteria, advising on substantive issues or possibly overlooked areas of research; assures VA relevance; and provides feedback on work in progress. TEP members are listed below:

    • Laura Damschroder, MPH, MS, Center for Clinical Management Research, Ann Arbor, MI

The ESP sought input from 2 Key Informants with diverse experiences and perspectives in implementing HRO interventions into large, integrated health care systems.

  • Glenda J.L. Battey, PhD, Providence St Joseph Health, Renton, WA
  • Robert G. Ritter, FACHE, Harry S. Truman Memorial Veterans’ Hospital, Columbia, MO

The Coordinating Center sought input from external peer reviewers to review the draft report and provide feedback on the objectives, scope, methods used, perception of bias, and omitted evidence. Peer reviewers must disclose any relevant financial or non-financial conflicts of interest.

Because of their unique clinical or content expertise, individuals with potential conflicts may be retained. The Coordinating Center and the ESP Center work to balance, manage, or mitigate any potential nonfinancial conflicts of interest identified.

1. Committee on Quality of Health Care in America. To Err Is Human: Building a Safer Health System. Washington, DC: Institute of Medicine; 2000.
2. Makary MA, Daniel M. Medical error-the third leading cause of death in the US. BMJ. 2016;353:i2139.
3.
4. Hines S, Luna K, Lofthus J, Marquardt M, Stelmokas D. Becoming a High Reliability Organization: Operational Advice for Hospital Leaders. AHRQ Publication No. 08-0022. Rockville, MD: Agency for Healthcare Research and Quality; 2008.
5. Weick KE, Sutcliffe KM. Managing the Unexpected: Resilient Performance in the Age of Uncertainty. 2nd ed. San Francisco, CA: Jossey-Bass; 2007.
6. Brilli RJ, McClead RE Jr, Crandall WV, et al. A comprehensive patient safety program can significantly reduce preventable harm, associated costs, and hospital mortality. J Pediatr. 2013;163(6):1638–1645.
7. Meyer D, Battey G, Mezaraups L, Severs L, Feeney S. High Reliability + Value Improvement = Learning Organization. Paper presented at: IHI National Forum on Quality Improvement in Health Care; 2018; Orlando, FL.
8. Department of Veterans Affairs. Memorandum: Veterans Integrated Service Networks (VISN) Plan for High Reliability and High Reliability Organization (HRO) Lead Facilities (VIEWS #00167710). February 11, 2019.
9. Zero Harm: How to Achieve Patient and Workforce Safety in Healthcare. Press Ganey Associates, Inc.; 2018.
10. Weick KE, Sutcliffe KM. Managing the Unexpected: Sustained Performance in a Complex World. Wiley; 2015.
11. Ogrinc G, Davies L, Goodman D, Batalden P, Davidoff F, Stevens D. SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process. BMJ Qual Saf. 2015;25:986–992.
12.
13. Hoffmann TC, Glasziou PP, Milne R, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687.
14. Berkman ND, Lohr KN, Ansari M, et al. Grading the Strength of a Body of Evidence When Assessing Health Care Interventions for the Effective Health Care Program of the Agency for Healthcare Research and Quality: An Update. Methods Guide for Effectiveness and Comparative Effectiveness Reviews. Rockville, MD; 2013.
15. Aboumatar HJ, Weaver SJ, Rees D, Rosen MA, Sawyer MD, Pronovost PJ. Towards high-reliability organising in healthcare: a strategy for building organisational capacity. BMJ Qual Saf. 2017;26(8):663–670.
16. American College of Healthcare Executives. Leading a Culture of Safety: A Blueprint for Success. 2017.
17. Day RM, Demski RJ, Pronovost PJ, et al. Operating management system for high reliability: leadership, accountability, learning and innovation in healthcare. J Patient Saf Risk Manag. 2018;23(4):155–166.
18. Frankel A, Haraden C, Federico F, Lenoci-Edwards J. A Framework for Safe, Reliable, and Effective Care. White Paper. Cambridge, MA: Institute for Healthcare Improvement and Safe & Reliable Healthcare; 2017.
19. Office of the Air Force Surgeon General. Trusted Care Concept of Operations (CONOPS). 2015.
20. Melnyk BM. Achieving a high-reliability organization through implementation of the ARCC model for systemwide sustainability of evidence-based practice. Nurs Adm Q. 2012;36(2):127–135.
21. Riley W, Davis SE, Miller KK, McCullough M. A model for developing high-reliability teams. J Nurs Manag. 2010;18:556–563.
22. Ikkersheim DE, Berg M. How reliable is your hospital? A qualitative framework for analysing reliability levels. BMJ Qual Saf. 2011;20(9):785–790.
23. Kenneth MJ, Bendaly N, Bendaly L, Worsley J, FitzGerald J, Nisker J. A measurement tool to assess culture change regarding patient safety in hospital obstetrical units. J Obstet Gynaecol Can. 2010;32(6):590–597.
24. Mousavi SM, Dargahi H, Mohammadi S. A study of the readiness of hospitals for implementation of high reliability organizations model in Tehran University of Medical Sciences. Acta Med Iran. 2016;54(10):667–677.
25. Mousavi SMH, Jabbarvand Behrouz M, Zerati H, et al. Assessment of high reliability organizations model in Farabi Eye Hospital, Tehran, Iran. Iran J Public Health. 2018;47(1):77–85.
26. Randall KH, Slovensky D, Weech-Maldonado R, Patrician PA, Sharek PJ. Self-reported adherence to high reliability practices among participants in the children's hospitals' solutions for patient safety collaborative. Jt Comm J Qual Patient Saf. 2019;45(3):164–169.
27. Sullivan JL, Rivard PE, Shin MH, Rosen AK. Applying the high reliability health care maturity model to assess hospital performance: a VA case study. Jt Comm J Qual Patient Saf. 2016;42(9):389–411.
28.
29. Cropper DP, Harb NH, Said PA, Lemke JH, Shammas NW. Implementation of a patient safety program at a tertiary health system: a longitudinal analysis of interventions and serious safety events. J Healthc Risk Manag. 2018;37(4):17–24.
30. Lyren A, Brilli R, Bird M, Lashutka N, Muething S. Ohio children's hospitals' solutions for patient safety: a framework for pediatric patient safety improvement. J Healthc Qual. 2016;38(4):213–222.
31. Muething SE, Goudie A, Schoettker PJ, et al. Quality improvement initiative to reduce serious safety events and improve patient safety culture. Pediatrics. 2012;130(2):e423–e431.
32. Saysana M, McCaskey M, Cox E, Thompson R, Tuttle LK, Haut PR. A step toward high reliability: implementation of a daily safety brief in a children's hospital. J Patient Saf. 2017;13(3):149–152.
33.
34.
35. Sexton JB, Helmreich RL, Neilands TB, et al. The Safety Attitudes Questionnaire: psychometric properties, benchmarking data, and emerging research. BMC Health Serv Res. 2006;6:44.
36. Vogus TJ, Sutcliffe KM. The Safety Organizing Scale: development and validation of a behavioral measure of safety culture in hospital nursing units.

2. Systematic reviews currently under development (forthcoming reviews & protocols). Date searched: 1/31/19. Sources and strategies:
  • SR registry – Search: High-reliability
  • SR protocols – Search: High-reliability


3. Current primary literature. Date searched: 1/31/19. Sources and strategies:

MEDLINE
  • Database: Ovid MEDLINE(R) and Epub Ahead of Print, In-Process & Other Non-Indexed Citations, Daily and Versions(R)
  • Search strategy: 1 (High-reliability organization* or High-reliability practice* or High-reliability principle* or High-reliability healthcare or High-reliability health care).mp. (211)

CINAHL
  • Database: CINAHL Plus with Full Text
  • Search strategy: 1 TX (High-reliability organization* or High-reliability practice* or High-reliability principle* or High-reliability healthcare or High-reliability health care) (370); 2 Limit Source Type: Academic Journals (217)

PsycINFO
  • Database: PsycINFO
  • Search strategy: 1 (High-reliability organization* or High-reliability practice* or High-reliability principle* or High-reliability healthcare or High-reliability health care).mp. (175)

CCRCT
  • Database: EBM Reviews – Cochrane Central Register of Controlled Trials
  • Search strategy: 1 (High-reliability organization* or High-reliability practice* or High-reliability principle* or High-reliability healthcare or High-reliability health care).mp. (1)

Search: "High-reliability organization*" or "High-reliability practice*" or "High-reliability principle*" or "High-reliability healthcare" or "High-reliability health care"
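For readers who want to re-run or update the primary-literature search, the sketch below shows one way to issue a roughly equivalent query against PubMed through the NCBI E-utilities esearch endpoint. This is not part of the original ESP search protocol: the date limits simply mirror this appendix, and because Ovid MEDLINE and PubMed handle truncation and phrases differently, the hit count will not match the 211 records reported above.

import requests

# NCBI E-utilities search endpoint (public API).
ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# Query terms mirroring the strategy above; phrase and wildcard handling in
# PubMed differ from Ovid, so this is only an approximation of the search.
query = (
    '"high-reliability organization*" OR "high-reliability practice*" OR '
    '"high-reliability principle*" OR "high-reliability healthcare" OR '
    '"high-reliability health care"'
)

params = {
    "db": "pubmed",
    "term": query,
    "datetype": "pdat",
    "mindate": "2010/01/01",   # review start date per the eligibility criteria
    "maxdate": "2019/01/31",   # date searched per this appendix
    "retmax": 500,
    "retmode": "json",
}

response = requests.get(ESEARCH_URL, params=params, timeout=30)
response.raise_for_status()
result = response.json()["esearchresult"]

print("Records found:", result["count"])
print("First PMIDs:", result["idlist"][:10])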

Exclusion reasons: 1=Ineligible population, 2=Ineligible intervention, 3=Ineligible comparator, 4=Ineligible outcome, 5=Ineligible timing, 6=Ineligible study design, 7=Ineligible publication type, 8=Outdated or ineligible systematic review

Comment # Reviewer # Comment Author Response
Are the objectives, scope, and methods for this review clearly described?
1 1 Yes None.
2 2 Yes None.
3 3 Yes None.
4 4 Yes None.
5 5 Yes None.
6 6 Yes None.
7 7 Yes None.
Is there any indication of bias in our synthesis of the evidence?
8 1 No None.
9 2 No None.
10 3 No None.
11 4 No None.
12 5 No None.
13 6 No None.
14 7 No None.
Are there any published or unpublished studies that we may have overlooked?
15 1 No None.
16 2 No None.
17 3 No None.
18 4 No None.
19 5 Yes – Zero Harm: how to achieve patient and workforce safety in health care, 2019, Clapper, Merlino & Stockmeier (editors). Press Ganey associates inc. We added a reference to this book in the “Background” section.
20 6 No None
21 7 No – I think that ESP was very thorough in their literature search and found all the relevant articles for this review. There is a book titled Managing the Unexpected: Sustained performance in a complex world, by Karl Weick & Kathleen Sutcliffe, 3rd Edition, Wiley & Sons, New York, NY., that has a Mindful Organizing Scale (p.43) that is noteworthy. This scale was originally published in 2007, so it fell outside of the scope of this review. It is one of the few such scales and may be worth mentioning in the review. Added a description of the Safety Organizing Scale (as it is referred to by Weick & Sutcliffe in 2007) to the discussion.
Additional suggestions or comments can be provided below. If applicable, please indicate the page and line numbers from the draft report.
22 1 This is an excellent and well-written report of a difficult topic (because of its "fuzzy" definitions) produced in a quick timeframe. I would say it's quite responsive to our partners' request for the state of published knowledge on HRO. It will provide an excellent starting point to inform VA's push toward more mature HROs throughout the system. My comments below are suggested in the spirit of further strengthening the report. Thank you.
23 1 1. The authors seem to rely on the AHRQ report on HRO as the "core" or "standard" definition for HRO. This is implied by the timeframe for the review starting with 2010 (2 years after AHRQ's 2008 report). If this is the case, then this should be stated at the beginning and reinforced throughout. 1a. E.g., L40, p1: needs a citation AHRQ? Added a sentence describing the Hines 2008 paper as a seminal white paper describing the adaptation of HRO principles into healthcare settings. We do not include citations in the executive summary.
24 1 2. The authors need to more clearly differentiate the domains of HRO (as listed in the AHRQ report) versus the components (or strategies) for implementation. This language needs to be set forth early in the report. The KQs all relate to information about implementation (of the AHRQ-defined HRO framework with the 5 domains) and measurement. Changed terminology from "implementation domains" to "implementation strategies." The 5 components of the AHRQ HRO model are described as "principles."
25 1 2a. Starting L10, P2 and L8/P10 and elsewhere: Terminology around "implementation frameworks" needs clarification. For example, referring to five "domains" across the implementation frameworks and five domains of AHRQ's HRO. My suggestion is this: refer to implementation frameworks that are comprised of high-level strategies for implementing HRO (you could cite Nilsen 2015, who would characterize these as "prescriptive" frameworks, which are frameworks that help guide implementation). The five strategies listed all have active verbs except the first one, which should be reworded slightly to "Developing leadership." We used the Nilsen 2015 article to guide us in developing a table that defines the terminology we use throughout the report. This table appears in the "Methods" section and defines the terms: HRO principles, implementation strategies, implementation cross-cutting themes, and implementation activities. In the "Findings" section, we changed the terminology to indicate that "implementation frameworks" are comprised of "implementation strategies," or just "strategies." We also changed "leadership development" to "developing leadership."
26 1 2b. L47, P2: use the term “strategies” instead of “components” Nilsen P. Making sense of implementation theories, models and frameworks. Implementation science.2015 Dec;10(1):53. Changed “components” to “strategies.”
27 1 3. P1/L51 (and again later in the report): the authors cite lack of leadership commitment to "zero patient harm." Is this how the goal is worded/conceptualized in the literature? There is much discussion about how a singular focus on zero harm may cause unintended negative consequences. Some refer to this goal as "zero avoidable harm" or link it to key cultural goals (e.g., just or safety culture). Can something be said about this, or is the literature (the 20 articles) silent on this important point? Revised to say "leadership commitment to implement HRO principles" and framed it as a facilitator rather than a barrier, as it is more often framed this way in the literature. There is much variation in the literature on how "zero harm" is characterized. For clarification, we added the MA's definition of "zero harm" (reducing errors and ensuring that errors that do occur do not reach patients and cause harm) to the fifth paragraph in the introduction. Yes, we agree that we should add something about this variation in how "zero harm" is characterized, and to illustrate this variation, we also added a sentence about how the 4 most comprehensive HRO initiatives defined their goals of zero harm to the "Findings" section.
28 1 3a. What about leaders' lack of "managerial patience" – i.e., are leaders lacking commitment to zero harm as an end goal altogether, or do they focus at first and then lose interest? I ask this in the context of the finding related to the dose-response relationship with time. This linkage could be made more clear even in the EXEC SUMMARY bullets by acknowledging the 2-year outcomes based on the articles, but that 2-year horizon may be limited by the lack of literature; though there may have been good initial effects in focused areas, this timeframe may be too short for lasting, meaningful effects. Is 2 years realistic? Are there indications that a longer timeframe is needed to achieve more lasting effects, especially related to changes in culture? Of the studies >2 years long, there continued to be improvements over time in patient safety outcomes (i.e., SSE rates continued to decrease) or improvements were maintained (i.e., SSE rates plateaued at a rate lower than baseline). We have added a sentence in "Findings" to indicate improvements were maintained. There is no clear pattern in whether HRO interventions resulted in improvements in process outcomes (i.e., safety culture), which includes results >2 years after initiation of the intervention.
29 1 4. L42, p2 (and elsewhere). The authors refer to strategies working in "primarily children's hospitals." It's not that these findings only work in children's hospitals; rather, these findings come from studies done only in children's hospitals (a potential limitation). It's notable that a couple of different systems/networks of children's hospitals (Nationwide and the CHSPS Network) have led the way with HRO; they are early adopters. Edited throughout to indicate our identified studies were primarily conducted in children's hospitals, not that we only found improvements in children's hospitals.
30 1 5. L9/P3: authors should acknowledge the impracticality of RCTs to test HRO because of its complexity and complex implementation. Highlight the need for pragmatic, quasi-experimental study designs with full transparent reporting as a way to more feasibly build the knowledge base needed. Edited this section as well as “Future research needs” to speak to the impracticality of RCTs and how quasi-experimental designs with detailed reporting of intervention elements should be utilized instead.
31 1 6. L37/P4: Build the history of HRO more clearly. It started within the nuclear and aviation industries and then AHRQ is the seminal report introducing/defining HRO for healthcare, yes? Did AHRQ describe the same 5 domains as used in nuclear and aviation industries? Added a sentence on the 2007 Weick and Sutcliffe book that defined the 5 principles of HROs. The Hines 2008 paper built on this by applying the principles to health care settings.
32 1 7. L5/P5. Lists of “components” (should be strategies) seem to be differently described in different places. Be consistent Changed this sentence to indicate these are common HRO intervention activities. Activities are the actual tasks that a health care organization would take to implement the more overarching implementation strategies.
33 1 8. Paragraph starting L13/p5: suggest flipping the order of the Providence St Joseph case with the VA to better segue into the next paragraph about VA. Put the Providence St. Joseph Health example before the VA example.
34 1 9. L10/P13: I’m not sure how differs from the overall goal to ID frameworks to guide implementation of HRO. This paragraph muddles concepts: intervention, process, implementing. I think this can be clarified by providing more detailed descriptions for how to operationalize the 5 high-level strategies in the implementation frameworks. For example, educational workshops might be a way to “Provide training and learning.” Revised this paragraph to make it clear we are talking about how to operationalize HRO implementation and linked the 5 implementation strategies to the specific implementation activities described by each model.
35 1 10. We have found that it's impossible to use JC's ORO system for measurement because participants are told not to share with anyone outside their organization, and the questions seem to shift. Is there any reference to this in the literature? Sullivan's article seems to have the best open definitions/operationalization of their domains. This isn't explicitly discussed in the literature, but the ORO 2.0 website discusses how it's designed to be used at the individual hospital level rather than the health system level. We added a sentence and reference to this. In this section, we also discuss how the tool uses branching logic, which explains why the questions shift.
36 1 10a. Their “RPI” domain relies on a trademarked (proprietary?) program, I think. Noted.
37 1 10b. These are all limitations to using this system for measurement, though in its development and intent it is the best developed. Noted.
38 1 11. Love ! Thank you.
39 1 12. L60/P16: It would be clearer to refer to AHRQ HRO rather than Hines 2008 – this is first mention of Hines other than in the reference list In response to an earlier comment, we added a description of Hines 2008 to the introduction, so this sentence now refers back to that description.
40 1 EXECUTIVE SUMMARY

  1. Comments about the bullets
    • Add a bullet that identifies AHRQ source as the “seminal” (or core or foundational) definition of HRO which has 5 (fuzzily defined) domains
Added a sentence describing the 2008 AHRQ paper to the first paragraph of the ES.
41 1

Clarify that the current 1st bullet (L8+/p1) refers to ‘implementation* frameworks.

In response to an earlier comment, we edited this to indicate there were 5 implementation strategies across frameworks.
42 1

Also, 5 are listed here, but later in the report, 8 were identified.

Edited to indicate we identified 5 common HRO implementation strategies across 8 frameworks.
43 1

ORO 2.0 may be well-defined/developed but may have an issue of not being openly/publicly available (see comment above)

Edited the “Findings” section to indicate this tool is only available to Joint Commission-accredited organizations.
44 2 Great report! Thank you.
45 3 none None.
46 4 I thought this Evidence Brief was well written and describes my intuitive understanding of the current state of HRO frameworks, metrics, and effects. I thought the authors did a nice job of simplifying what can sometimes be very complicated concepts. I’ve provided several questions and clarifying comments below. None.
47 4 Page ii: title capitalization looks off Fixed this.
48 4 EXECUTIVE SUMMARY page 1 lines 40-52: might mention how HROs differ for health care (similar to background section) Added the healthcare-related definition of each HRO principle in parentheses and added a description of the unique challenges that threaten reliability in health care to this section.
49 4 page 2 line 3-5: may move “spreading implementation initiatives” to the last thing mentioned in the sentence. Moved “spreading implementation initiatives” to the end of the sentence.
50 4 lines 17-20: although there were 5 common domains, it would be interesting to mention some of the other domains not reported as commonly. Added a list of additional complementary practices that emerged from the literature.
51 4 line 21: might say a little more about what consensus process means here Added language to indicate this consensus process typically involved a group of health system leaders and experts in patient safety.
52 4 line 32: how many does multiple hospitals refer to? Clarified the tool/framework was tested in 52 hospitals.
53 4 line 33-34: might give an example of “the variation in concepts measured” also I think the phrase “types of measures” is missing from that sentence. I might also define what levels of practice refers to Revised to indicate the range of concepts measured and removed “levels of” before practice.
54 4 lines 55-58: It’s striking that there are so few barriers to implementation in the literature given all we know about implementation and organizing for quality. This seems like a major limitation. The identification of barriers and facilitators to HRO implementation was not a key aim of our review. Therefore, we did not do a thorough search or analysis of these outcomes, but instead provide a few examples that were discussed in our included articles. More details on these barriers we found are available in the “discussion” section.
55 4
  • INTRODUCTION
  • page 4
  • line 55-56: add “in health care organization” or hospitals after the phrase “Implementation of HRO initiatives is an
Added “into healthcare settings.”
56 4 page 5 lines 3-10: I do not see provide training in systems redesign (e.g. LEAN six sigma, Kaizen events, hFEMA, etc) or robust process improvement tools listed Our findings did not indicate that systems redesign training was a key component of HRO implementation success. However, we agree it is important to discuss these change management strategies in this report, so we added a description of which frameworks recommend which strategies to the “Findings” section.
57 4 lines29-33: Could the caring reliably program assess if it was the toolkit or the consulting which made the differences or was it bundled? It was a bundled initiative.
58 4 line 36-40: Were the barriers reported in a particular type of service (e.g. focus on medical or surgical) or more general? These are more general barriers.
59 4 page 7 line 13-15: might outline the 5 HRO principles again here. Defined the 5 HRO principles again here.
60 4 page 8 line 9-10: what was the rationale for hand-searching reference lists and consulting with content experts? These are both steps typically conducted in systematic reviews.
61 4 line 15-15: describe the types of expertise the investigators/staff had. Experience in health services research, HROs, evidence briefs, etc. Added that all investigators have expertise in conducting systematic reviews of health services research.
62 4 line 34-35: What was the level of disagreements which needed to be resolved by consensus? Agreement was generally high. We added a qualitative description of level of agreement to the report.
63 4 page 10 lines 37-60: seeing the table made me think about what were the other domains highlighted in the articles but not shown here. Added a list of additional complementary practices that emerged from the literature in response to an earlier comment.
64 4 page 11 lines 50-53: might define what robust process improvement means. It can be a confusing term. Added definition of robust process improvement.
65 4 page 12 line 58-59: say more about what variety of health care leaders, providers and staff means what service areas do they cover ? what type of managers? are safety and quality leaders executive level leaders or middle level managers? Added more detail here to indicate the range of leaders, providers, and staff targeted by these frameworks. Also added detail to indicate these frameworks target a variety of service areas.
66 4 page 13 lines 24-46: There is a lot of information in this paragraph and it’s easier to get lost in the details. It might be easier to comprehend it if it was provided in a bulleted format to allow easier comparison across frameworks. Added bullet points to this paragraph.
67 4 page 14 line 42-42: “VA sites were interviewed about integration of HRO into their health care systems” is not an accurate depiction of this study. I believe the study assessed patient safety practices aligned with HRO principles. It was a secondary analysis of data collected for a study focused on patient safety indicators. An important shortcoming of the ORO 2.0 tool is that it is not meant to compare results across multiple hospitals. As it has developed, I’m not sure if Joint Commission’s opinion has moved on this. I’m not certain if any of the tools presented have tried to compare cross-hospital progress. Revised this section to better describe the original study and the secondary data analysis. Also included the fact that the ORO 2.0 was designed to be used in a single hospital in response to an earlier comment.
68 4 page 17 line 38: the term SSE hasn't been used in a while; may want to define it here again. Added definition here.
69 4
  1. SUMMARY AND DISCUSSION
  2. A few discussion points come to mind as I read this section.
  3. 1) it is critical to think about context
Added a statement on how health care systems may implement different HRO interventions depending on their individual needs and contexts to our “Limitations” section in response to another comment.
70 4 2) How do these tools allow for cross-hospital comparisons? Is this the goal of VA’s HRO initiative? Correct, the VA is looking for tools that allow for cross-hospital comparisons. We added a statement to the “Findings” and “Discussion” sections to describe that the although the ORO 2.0 (the most comprehensive HRO evaluation tool) was not designed specifically for cross-hospital comparisons, the data is output in a way that it could be shared and analyzed between VHA hospitals.
71 4 3) the need for training on HRO principles may not be enough to move an organization. I did not see training on system redesign tools and methodologies listed Agreed that HRO training may not be enough to move an organization. We added a statement to “Gaps and future research” suggesting that future research may want to explore the extent to which HRO training does-or doesn’t- address/overlap with system redesign.
72 4 4) It’s unclear how HRO frameworks deal with differences in HRO practices across different service (e.g. medical, surgical). Should they? Have frameworks focused on this? We only included studies that assessed HRO implementation at a system level (ie, included both medical and surgical units as appropriate), so all our frameworks addressed multiple services and none conducted subgroup analyses by service type. We added a statement to the “Gaps and future research” suggesting that future research studies note where intervention components were delivered (eg, medical or surgical service areas) to help tailor HRO delivery to different contexts.
73 4 5) Have HRO frameworks been developed and aligned with organizational transformation models or other frameworks for improving quality? There may be other measures or concepts to assess which have not been presented in this evidence-brief. Yes, 6 out of 8 frameworks recommended using other change management strategies in HRO implementation. We added a description of which frameworks recommend utilization of which change management strategies to the “Findings” section in response to an earlier comment.
74 4
  • LIMITATIONS
  • page 23
  • lines 12-14:
  • I might mention HRO interventions are inherently difficult to study because they can have many different components (potentially with different foci across different hospitals)
Added that each hospital may also choose to implement different components of these interventions, depending on their individual needs and context.
75 4
  1. GAPS AND FUTURE RESEARCH
  2. page 24
  3. line 7-8: “3) whether certain implementation frameworks or facilitators lead to better outcomes” could be separated out to 3) whether implementation or other frameworks for improving quality frameworks are applied and lead to better outcomes and 4) what the factors affecting HRO implementation are.
Revised to split up #3 into 2 parts.
76 4 line 10-11: the wait-list control point is a good one BUT many facilities already have high reliability practices in place at baseline, which will need to be assessed. Many sites could also have already participated in initiatives, so they are more prepared for the journey (improvement capacity/adoption of Lean Six Sigma, old Clinical Teams training, etc). How do we account for these ongoing or older initiatives? Added language to indicate consideration should be given to how much wait-list control sites have begun implementing HRO on their own, or are delivering similar interventions such as Lean Six Sigma.
77 4 line 27-28: say more about mechanism for change. is this organizational transformation? something else? Added that the mechanism of change might involve improving mindfulness or safety culture, as this aligns with our conceptual model based on Hines 2008. We see organizational transformation as the end-goal, represented through improved patient safety outcomes.
78 4 CONCLUSION I might mention something about measurement here as it is a key aim of the brief. Changed “tools” to “metrics” in the last sentence of the conclusion.
79 5 P1 L47: I would be cautious in stating in the affirmative that medical error is the 3rd leading cause of death, and/or saying it continues to be, as the Makary & Daniel article was a commentary based on extrapolated data from current literature attempting to articulate how big a problem it is. Since medical errors are not listed as the cause of death, this number is difficult to find, and the assessment of death from harm is not as black and white in all cases. I would recommend stating something along the lines of: if we were to document medical error as a cause of death, Makary and Daniel have ascertained that it would be the 3rd leading cause of death in the country. Changed to indicate that deaths due to medical error are estimated to be the third leading cause of death in the country.
80 5 P2 L10: remove total, reads as if there are only a total of 20 articles published which is not the case. Removed “total.”
81 5 P5 L33: In review of the additional reviews of measurement, I don’t recall if I mentioned that we also improved on the Safety Climate Domain of the Safety Attitudes Questionnaire (SAQ) from 2016 to 2017 (during the time of everyone up and running on training) by 5 percentage points with a sample size greater than 68,000 respondents so it was found to be quite significant. In addition, when drilling down to our regions, all showed improvements from 3 to 10 percentage points. Your option to add if you so choose. Added that Providence St. Joseph Health had a 5% improvement in the safety climate domain of the Safety Attitudes Questionnaire.
82 5 P5 L37: "lack of leadership commitment to zero patient harm" I would revise to indicate that it is a lack of what it takes to get to zero harm. Most leaders would agree yes, we need to get to zero patient harm, and even indicate that they are doing work to do so. What doesn't happen, from my experience, is that they believe in it but do not provide the resources (people, money, skills) that it takes to get there. This sentence also needs a colon and some commas to separate the ideas of the list. Revised this section to remove "leadership commitment to zero patient harm" in response to an earlier comment and included additional detail on the barriers to incorporating safety culture principles and practices and adopting process improvement tools and methods.
83 5 P16 L35 Many who assess HRO use some form of Safety Climate survey as part of the assessment such as Safety Attitudes Questionnaire (SAQ) which was created by Sexton and team at Univ Texas and reflects similarities to the Flight Management Attitudes Questionnaire used in aviation to assess some of its HRO components. Something to consider adding as a measurement perhaps. Since this tool was published before 2010, we added a discussion of this tool to the “discussion” section
84 6 Thank you for providing me the opportunity to review this report. Excellent rapid review on a complex topic. See some suggested revisions below: For the Key Findings box contained within the Executive Summary, it would have been helpful to have an initial bullet that succinctly listed the goals of the report, such as the aims described in the last sentence of the second paragraph of the Executive Summary. It would have also been helpful to have the 5 domains listed in the first bullet of the Key Findings box. Added the “objective” of the report to the key findings box. Because we want to keep this section brief, and the 5 common implementation strategies appear shortly afterward in the executive summary, we did not add these to the key findings box.
85 6 In the Background section of the Introduction, in the fourth paragraph, the Joint Commission’s 2013 HRO report is noted, but should also be cited/referenced. Added citation.
86 6 In the Background section of the Introduction, in the sixth paragraph, the second sentence states that an understanding of available frameworks and their use is limited, but what about our understanding of available measures, and the impact of initiatives on those measures? Given the aims of this report, should note these areas as well. Added “metrics and initiatives” to this sentence as the description of variability actually applies to all 3 key questions.
87 6 Under Eligibility Criteria, why not extend the search from 2008 to present, instead of 2010? Seems like if AHRQ is publishing a white paper in 2008, others may have also begun publishing on this topic at this time. 2010 was chosen as a start date in consultation with the operational partner. We expected it would take at least 2 years for research integrating the 5 HRO principles discussed by Hines 2008 to be published.
88 6 In the Oro 2.0 section, third paragraph, last sentence, did safety culture decrease as described, or is this a mistake, and did it increase? Safety culture did indeed decrease. Study authors don’t note what these organizational changes were, but it appears they negatively affected safety culture.
89 6 In, please include abbreviation for PHI in the Table legend; in the third row of the Table, “zero SSE rate achieved in 2017” seems redundant with the statement directly above; in the fourth row of the Table, in the last column, please include Month and Year for the baseline
  • Added definition of PHI to table key.
  • Deleted “zero SSE rate achieved in 2017” from Cropper 2018 study.
  • Added dates used for baseline data in Lyren 2016 study.
90 6 In the first paragraph in the Summary and Discussion, in the second sentence, please change the order to “frameworks and metrics”, rather than “metrics and frameworks”, to better match the aims. Reordered “frameworks and metrics.”
91 6 In the Limitations section, in the first paragraph, second sentence, please consider citing: J Clin Epidemiol.2014;67(11):1181-91. PMID: 25438663 Added this citation.
92 6 In the Conclusions, please change the order of the first sentence to read: “frameworks and evaluation tools”. The second sentence should probably read “reduction in SSEs” rather than simply “SSEs”. Reordered “frameworks and metrics.” Revised to say, “reduced SSEs.”
93 7 Overall, I think this evidence brief is excellent. It is thorough, thoughtful, and very well done! The ESP team identified their Key Questions, which were tied to the request from the Office of the National Center for Patient Safety. The method was clearly laid out and executed. The Key Questions were answered, gaps identified, and plans for future research addressed. Thank you.
94 7 I found “. Common HRO implementation domains across 8 identified frameworks,” very useful. This table quickly identified all 8 HRO frameworks and their included components. Only 3 of the 8 contained all 5 HRO components. None.
95 7 . – Metrics for measuring progress on becoming an HRO – was also extremely enlightening. This side-by-side comparison of the 6 methods identified by the ESP group will be helpful for VHA Leadership to understand the differences between these methods, and then select the best one. None.
96 7 highlighted the challenge of comparing studies of disparate quality, methods, measures, and results reporting. This is a shortcoming in the HRO literature and was clearly communicated in this table. None.
97 7 I agree with the ESP assessment of the gaps in the research. It is theorized that the implementation of HRO principles leads to improved safety outcomes and a culture of safety. This has not been validated by the research, nor has the mechanism by which these changes and improvements occur. The secular trends mentioned on page 24, which cannot be ruled out as contributing to improvements in patient safety outcomes, could be expanded on. What are these secular trends, and how are they impacting patient safety outcomes? Added a sentence on the role that the EMR could play in improving patient safety outcomes. Also added a sentence that implementation of Lean Six Sigma before or during interventions could plausibly affect outcomes as well.
98 7 I also agree with the statement about the VA being in a unique position to conduct a natural experiment with the current HRO Initiative. This is an excellent insight on the part of the ESP team. I am not criticizing, only providing additional information. The HRO Initiative is limited to 18 lead sites, but many other sites are clamoring to be part of it. I am not clear on the criteria VISN Directors used to select the lead sites, but it is likely that other sites within their VISNs, and across the VHA, are not experimentally naive. I am aware of 2 other sites within VISN 15 that are on HRO journeys already and were not selected as the lead site for that VISN. I imagine that may be true for other VISNs as well. There is no "perfect" way to conduct this type of research, and all research has limitations of some kind. I personally would love the opportunity to be involved in that kind of research. Added that consideration should be given to the extent to which "wait-list control" sites are implementing HRO on their own or using other types of change management strategies in response to an earlier comment.

Prepared for: Department of Veterans Affairs, Veterans Health Administration, Health Services Research & Development Service, Washington, DC 20420
Prepared by: Evidence Synthesis Program (ESP) Coordinating Center, Portland VA Health Care System, Portland, OR; Mark Helfand, MD, MPH, MS, Director

Veazie S, Peterson K, Bourne D. Evidence Brief: Implementation of High Reliability Organization Principles. Washington, DC: Evidence Synthesis Program, Health Services Research and Development Service, Office of Research and Development, Department of Veterans Affairs. VA ESP Project #09-199; 2019. Available at:
This report is based on research conducted by the Evidence Synthesis Program (ESP) Center located at the Portland VA Health Care System, Portland, OR, funded by the Department of Veterans Affairs, Veterans Health Administration, Health Services Research and Development.

The findings and conclusions in this document are those of the author(s) who are responsible for its contents; the findings and conclusions do not necessarily represent the views of the Department of Veterans Affairs or the United States government. Therefore, no statement in this article should be construed as an official position of the Department of Veterans Affairs.

What is the high reliability organization perspective?

A high-reliability organization (HRO) is an organization that has succeeded in avoiding catastrophes despite a high level of risk and complexity.
