Developing an Effective ASC Quality Outcomes Monitoring Program: Q&A With Jean Day of Pinnacle III

November 29, 2011

Jean Ann Day, RN, CNOR, is director of clinical education for Pinnacle III. She previously served as administrator of the Boulder (Colo.) Community Musculoskeletal Surgery Center.

 

Q: You have overseen the development of a robust quality outcomes monitoring program at Pinnacle III. What brought about the development of this program?

 

Jean Day: One of my key focuses as an ASC administrator was making certain the staff members who were intimately involved in direct patient care understood how critical their role was in the reporting of unexpected outcomes. They were far better prepared and informed about the events than anyone in an administrative position. It took a fair amount of repeating the same message until personnel recognized their empowerment as well as their duty to report on those unexpected events — some of which ended up being designated as adverse.

 

Seven years ago, I identified the clinical outcomes I believed would be beneficial to report on in the centers under my watch. I shared this information with my clinical operations colleagues within Pinnacle III who indicated their desire to incorporate use of these outcomes throughout our operations. I simply took the information I had been reporting on at my own individual centers and, with assistance from the remainder of the team, enhanced it for use in both single- and multi-specialty facilities.

 

In 2010, Pinnacle III adopted an initiative requiring quality outcomes monitoring in all of our ASCs. Although we allowed centers to implement some facility-specific benchmarks, standardized materials were released and integrated into their programs.

 


 

Q: What are some examples of quality outcomes benchmarking data you're having your ASCs collect and why?


JD: One example is on-time starts. This critical element of time management tracks turnover time that exceeds the established threshold. We use 15 minutes as the benchmark for a normal turnover and allow the leniency of an additional 10 minutes to account for those times when "things happen." The 10-minute clock does not start ticking based on the printed schedule; the element is measured in real time. The rationale is that the schedule is merely a posting of the order of planned events, and OR personnel cannot control the schedule. Measuring in real time tracks a benchmark that is genuinely within the staff's control, which shifts the focus to performance improvement.

 

Many variables can interfere with getting the patient to the OR in a timely manner. In circumstances that require reporting (patient delays greater than 25 minutes), the circulating nurse who receives the patient is tasked with identifying why the delay occurred when the OR was ready.
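The turnover arithmetic described above can be sketched in a few lines. This is a minimal illustration, not Pinnacle III's actual system; the function names and the use of timestamps are assumptions, but the thresholds (a 15-minute turnover benchmark plus 10 minutes of leniency, so delays over 25 minutes are reportable) come from the interview.

```python
from datetime import datetime

# Thresholds from the interview: 15-minute normal turnover,
# plus a 10-minute grace period, so > 25 minutes is reportable.
BENCHMARK_MIN = 15
GRACE_MIN = 10
REPORTABLE_THRESHOLD_MIN = BENCHMARK_MIN + GRACE_MIN  # 25

def turnover_minutes(prev_case_end: datetime, next_case_start: datetime) -> float:
    """Real-time turnover: measured from when the prior case actually
    ends, not from the printed schedule."""
    return (next_case_start - prev_case_end).total_seconds() / 60

def is_reportable_delay(prev_case_end: datetime, next_case_start: datetime) -> bool:
    """True when real-time turnover exceeds the 25-minute reporting threshold."""
    return turnover_minutes(prev_case_end, next_case_start) > REPORTABLE_THRESHOLD_MIN
```

A 20-minute turnover would pass, while a 30-minute turnover would trigger the circulating nurse's delay report.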

 

 

Occurrence reporting in the patient accounting system is used to compile data throughout the month to allow for easy extraction of the information at month end via a system report.

 

Another measurement reviewed is surgical delays greater than 30 minutes. This measurement differs from on-time starts — which tracks "real-time" turnover between cases — because it tracks delays against the scheduled time. In essence, on-time starts are affected by processes personnel can and do control, while surgical delays reflect events personnel cannot control. For surgical delays, we analyze whether the delay is tied to a scheduling inaccuracy, a case-specific late start time or a cascade effect from a holdup earlier in the day. This element gives us a gross evaluation of individual providers or specialties where time estimates are less reliable secondary to unexpected pathology or other challenges/complications encountered during the course of surgical intervention.
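The distinction is that this measure compares the actual start against the *scheduled* start. A minimal sketch under the same illustrative assumptions as above (hypothetical function names, timestamp inputs), using the 30-minute threshold stated in the interview:

```python
from datetime import datetime

SURGICAL_DELAY_THRESHOLD_MIN = 30

def is_surgical_delay(scheduled_start: datetime, actual_start: datetime) -> bool:
    """Flags a case whose actual start trails its scheduled start by more
    than 30 minutes. Unlike on-time starts, which measure real-time
    turnover between cases, this is anchored to the posted schedule."""
    delay_min = (actual_start - scheduled_start).total_seconds() / 60
    return delay_min > SURGICAL_DELAY_THRESHOLD_MIN
```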

 

We also review overtime cases — defined as those that exceed the total time scheduled for the procedure by more than 30 minutes. This element helps us evaluate appropriate scheduling practices. For example, a surgeon indicates to the scheduling nurse in his office that a case will take 120 minutes. That information is relayed by the office's scheduling nurse to the scheduler in the ASC. But how long does it really take in the OR? Trust me, OR personnel know which surgeons squander time, but that "evidence" can feel anecdotal when it is not backed up with statistical data. In seeking out scheduling inefficiencies, trends by physician or by the type of service being provided are sometimes identified. Armed with this information, the administrator is better equipped to initiate a productive dialogue with physicians who have a habit of under-scheduling cases. The ripple effect of these overtime cases not only extends the overall workday, adding to the facility's labor costs, but may also delay physicians who are scheduled to use that OR next.
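Turning the anecdotal "which surgeons squander time" into statistical backing is a simple aggregation. The sketch below is illustrative only (the function names and the tuple-based data shape are assumptions); it flags cases that run more than 30 minutes past their scheduled duration, per the interview, and summarizes overtime rates by surgeon so trends can surface.

```python
from collections import defaultdict

OVERTIME_THRESHOLD_MIN = 30  # actual exceeds scheduled duration by > 30 minutes

def is_overtime(scheduled_min: int, actual_min: int) -> bool:
    """True when the case ran more than 30 minutes past its scheduled duration."""
    return actual_min - scheduled_min > OVERTIME_THRESHOLD_MIN

def overtime_rate_by_surgeon(cases):
    """cases: iterable of (surgeon, scheduled_min, actual_min) tuples.
    Returns {surgeon: fraction of that surgeon's cases that ran overtime},
    the kind of trend data an administrator could bring to a dialogue
    about under-scheduling."""
    totals = defaultdict(int)
    overtime = defaultdict(int)
    for surgeon, sched, actual in cases:
        totals[surgeon] += 1
        if is_overtime(sched, actual):
            overtime[surgeon] += 1
    return {s: overtime[s] / totals[s] for s in totals}
```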

 

We're capturing the standard National Quality Forum clinical outcomes — surgical site hair removal, prophylactic IV antibiotics, the availability of surgical supplies at time of service, and surgical site infections (as a CMS requirement). Unplanned admissions, unplanned return for medical care and any type of complication relating to anesthesia or surgery are standard for us as well.

 

Another element we monitor — and this is really more for business operations — is add-on cases. We measure this because all of the conditions for a "normal" admission must also be satisfied for an add-on patient. The same preoperative tasks must take place — verification of insurance, the pre-op interview, evaluation of the patient's suitability for the ASC environment. Has the patient arranged for a safe home environment post-op? Is this a patient who suffers from sleep apnea? Add-on cases compress the entire scheduling and patient preparation process. They tax human resources; suddenly one or two people are forced to stop their workflow for two to three hours to make this surgical event happen.

 

To help our centers track these outcomes, I created what we call our "quality outcomes report card."

 

Q: Can you describe this quality outcomes report card?

 

JD: It details each of the 20 measurable outcomes for which we are capturing data. It provides personnel in leadership roles at the centers with very detailed explanations of why we are collecting this information, defines the thresholds we are looking for and outlines how each outcome is to be reported. Leadership then has to, in turn, educate their personnel on the outcomes.

 

Q: How has this data gathering exercise helped Pinnacle III and its centers?

 

JD: What pleases me now, here in 2011, is that we have a full 18 months of consistent reporting [since the program started]. I am able to take the data reported to me by each of our individual facilities and evaluate it based on the numbers themselves, disregarding the unique personality of each center. In a mass email to all of our centers, I provide a spreadsheet that plots the external benchmarking data in graphs so they can see how their data compares with peer centers.

 

When appropriate, I provide objective editorial commentary such as:

 

"Data represented by some of our centers indicates a higher incidence of add-on cases. As we would expect some urgency in the need to add cases for some of the subspecialties we serve, you might determine if there are any patterns of excessive use of add-on case volume unique to your subspecialties, population served or physician habits. I encourage you to include the reasons for urgent add-on of cases when reporting this event to afford you the opportunity to drill down and look deeper. You may, for example, determine surgeons' offices are experiencing inefficiencies as a result of personnel turnover, whereby educational support and opportunity to re-build relationships with physician offices is presented as a performance improvement opportunity."

 

I provide that kind of feedback to our facilities in hopes that they take the lead by generating their own performance improvement studies and are then able to demonstrate an improvement in the noted trend.

 

Another great byproduct of collecting this type of data is providing tangible evidence of the improved direction in data trends. Sometimes the improvement shows up for individual facilities. At other times, though, we see improvement across the board for all of our facilities.

 

Q: Have you encountered any challenges with the data reporting?

 

JD: It's important to me, as an element of quality control, to make certain everybody is reporting their data the same way. The report card not only outlines the definition of the benchmark but details how the data should be collected, where to obtain the data, what types of cases are included in the monitoring of the benchmark and the format of the reporting value (numeric or percentage). Part of the work is ensuring the persons collecting the data are detail-oriented and report each element the same way every time.

With time-related events, like delays to the OR or extended stays, some of our facilities are gathering data manually because their patient accounting system platforms aren't equipped to provide that data automatically. Because I don't want anybody to have to resort to manual data extraction, I finally advised them to report "no data available" when manual data gathering is the only means of data compilation. If nothing else, we can impress upon the patient accounting system vendor that if their competitors can provide automatic data extraction, they should be able to as well.

 

Q: Outside of improved benchmarking and performance improvement opportunities, how else have you seen the quality outcomes monitoring program help your centers?

 

JD: It is important for personnel serving in a management position and staff members with delegated positions of importance (quality assurance, infection prevention and safety) to understand they are problem solving every day. When they grasp decision making is only one step in a complex process, they begin to appreciate the bigger picture which contributes to improved patient safety, infection prevention and/or sound business operations.

 

ASCs are dynamic and nimble. When facility personnel recognize they have encountered some kind of problem over the course of a day and they say it is an unexpected event, I respond with, "That's what I want you to look for! When those unexpected things happen, is there an opportunity [for improvement]? If it happened today, is it possible it happened last week and the week before that and nobody else mentioned it?" I now ask them to write up these unexpected events as an incident, which initiates the activation of a problem solving process.

 

Also, this isn't just about clinical outcomes. Opportunities for improvement happen across multi-disciplinary areas of service. The quality outcomes monitoring program served to heighten staff member awareness of unexpected outcomes and created recognition of potential study opportunities. When studies were undertaken, they could go through the steps of problem solving to reach a lasting solution. Prior to this shift in thinking, the solution wasn't cascading down to other people in their own department, let alone to all members in the facility.

 

Once the persons in leadership began to understand what the performance improvement process was really about, it increased their own awareness. They were then able to become opportunistic leaders in the organization so other people could turn to them for help. I helped them learn to take those stumbles or those challenges and say "Wow, if this is bothering and impacting me, it must be impacting my colleague who was struggling with the same thing yesterday."

 

Q: What does this new-found awareness do for ASC leadership?

 

JD: It makes them recognize their contribution. It affirms the part they play every day they show up at work — it matters. It matters not just to the patients they attend to and care for today, but it could just as well matter to the patients they are going to see three weeks from now. It also serves to change the culture of the ASC because empowerment is shifted to the staff. It makes reporting of those unexpected events something people want to share with, rather than hide from, leadership.

 

From a management point of view, it is important to impress upon personnel that this is not a punitive process. It is a process intended to allow us to take a closer look at ourselves, recognize those areas where we meet or exceed our own performance expectations, and then offer us great opportunities to improve in other customer service-related areas. After all, that is what we are in business to do — all of us are trying to meet or exceed our customers' expectations.

 

Q: Can you discuss how one of your ASCs might go from identifying an unexpected event to working toward a solution?

 

JD: We have adopted a simple QAPI format. For example, let's say a patient shows up and they are not on the schedule. The receptionist reports "there is a problem" to the appropriate leader. The leader responds with, "In one sentence, identify the problem" — typically a difficult task for personnel to accomplish. They can relay all of the details, but when asked to explain the problem in one sentence, it can be challenging for them to be able to respond with something like, "The patient's scheduled surgery date did not appear on the surgery schedule as expected by the patient." The leader then responds with, "If our scheduling were perfect, what would you expect to happen?" The one-sentence answer is, "The patient's date of service would be the same as what appears on our schedule." That response identifies the expected outcome.

 

In the scenario I'm describing, something didn't happen. Now the staff can search for the details, figuring out where the breakdown in communication occurred. They conduct their investigation; they come up with their conclusions. They seek to find out how something happened and determine how a similar occurrence can be prevented in the future. The final question ends up being, "What were the steps you took to develop a corrective action for the prevention of a like situation happening two weeks from today?"

 

When they detail the process — write it down — they learn about what it means to investigate a problem. That's important because it shows them they are critical thinkers. A lot of people might not think of themselves as a decision maker in the organization and, while they may not make the tough decisions, they do make decisions every day. That's why I use the term "empowerment."

 

Q: Are there any other significant benefits you have seen your ASCs take away from this program?

 

JD: From my perspective, I think the most dynamic transformation at our centers was the realization [by staff members] that quality assessment and performance improvement is not a task confined to administrators. It has been really great to see our personnel empowered to recognize that when things deviate from what they expect to happen, they report it, and they themselves can begin the process of finding a resolution that results in long-term improvement.

 
