Tuesday, March 17, 2015

6 Methods for CAPA Effectiveness Verification

Verifying the effectiveness of corrective and preventive actions closes the loop between identifying a problem and confirming that the actions taken actually solved it. It seems reasonable that if a problem is worth solving, it is also worth verifying that the problem is actually solved. But determining the best verification approach, and deciding when to conduct the verification, can be elusive given the wide range of problems that could occur.

Before we discuss CAPA effectiveness, we need to look at a few of the reasons why performing this check is often a challenge.

Why is it so difficult to determine an appropriate CAPA Effectiveness Verification method? Here are a few reasons:
  • The problem is not well defined. Sometimes breaking the mental logjam is as simple as asking "What problem were we trying to solve?" That sounds like an easy question, but when the answer is not well defined or simply stated, success is hard to measure.
  • The root cause is not determined. This is a natural consequence of the first reason. It is next to impossible to determine the root cause of a fuzzy problem, or one that seems too complicated to explain. Those who try end up with an equally fuzzy root cause.
  • It's not really a CAPA. In my experience, the CAPA system often becomes a quality work order system (a.k.a. dumping ground) because the common data management systems utilized, such as Trackwise, provide project management structure and visibility. But without a stated problem or a determined root cause, it is not a CAPA. It's just a project.
  • CAPA Effectiveness Verification is used for everything. CAPA Effectiveness Verification can be too much of a good thing when it is expected for every possible CAPA. This usually stems from the cascading problem of a CAPA being required for every deviation, and a deviation being required for every conceivable blip. Soon you become a drowning victim of your own making.
  • We overthink it. Rather than allowing reason to prevail, there are those who tend to complicate just about everything. Determining and applying the effectiveness method is no exception. Yes, we operate in a scientific environment, but not every method of verifying effectiveness has to be labor intensive. Major processes need not be applied to minor problems.
  • It's not considered important. There are those who believe that living with an ongoing problem is the path of least resistance when compared to conducting the same boilerplate investigation (same problem, different day) and getting on with production. A high tolerance for recurring problems is truly the root cause for many who are treading water in a deviation-swirling tide pool.
Assuming that we have a real CAPA, where an investigation was conducted on a well-defined problem to determine the root cause and product impact, we can turn to the regulatory requirements and business obligation to evaluate how well we spent our resources to permanently eliminate the problem. This brings us to methods for verifying CAPA effectiveness.

What are some examples of CAPA Effectiveness Verification Methods? Here are 6 examples:

  • Audit Method is used when the solution involves changes to a system, and a determination is made whether the changes are in place procedurally and in use behaviorally. An example is an audit of a new line clearance checklist to demonstrate that it has been effectively implemented.
  • Spot Check is used when random observations of performance or reviews of records provide immediate, but limited, feedback. An example is a spot check of batch records to ensure that the pH step was performed correctly after training on the new procedure.
  • Sampling is used for observations of variables or attributes per a defined sampling plan. An example is a statistical sample drawn at random from lot XYZ123 after implementation of a process improvement to confirm the absence of the defect (a minimal sizing sketch follows this list).
  • Monitoring is used for real-time observations over a defined period. An example is the real-time observation of operators to verify that changes to gowning practices were implemented.
  • Trend Analysis is the retrospective review of data to verify that expected results were achieved. An example is the review of environmental monitoring (EM) data for the period covering the last 30 batches to show a downward trend in EM excursions due to process improvements.
  • Periodic Product Review is a retrospective review, conducted at least annually, of trends across multiple parameters to confirm the state of control. An example is the review of data after major changes were made to the facility and equipment as part of a process technology upgrade following a recall.
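
Where the Sampling method is chosen, the sample size should follow from a stated confidence level rather than habit. The following is a minimal sketch in Python, assuming a zero-acceptance attribute plan (pass only if no defects are found); the 1% defect rate and 95% confidence are hypothetical inputs, and a recognized sampling standard or a statistician should govern an actual plan.

```python
import math

def zero_defect_sample_size(max_defect_rate: float, confidence: float) -> int:
    """Smallest n such that finding 0 defects in n randomly drawn units
    supports `confidence` that the true defect rate is below `max_defect_rate`.
    Solves (1 - max_defect_rate)**n <= (1 - confidence) for n."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - max_defect_rate))

# Hypothetical: units to draw from lot XYZ123, all of which must be
# defect-free, for 95% confidence that the defect rate is below 1%.
print(zero_defect_sample_size(0.01, 0.95))  # -> 299
```

This is the arithmetic behind the familiar rule of three: roughly 300 defect-free units support 95% confidence that the defect rate is below 1%.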
Now that we have a real CAPA and have selected a method to verify its effectiveness, we need to determine an appropriate timeframe for the verification. Timeframes are subjective, but there needs to be a basis for the decision. This brings us to points to consider when determining an appropriate timeframe for the CAPA Effectiveness Verification.

How do we select an appropriate CAPA Effectiveness Verification timeframe? Here are points to consider (a numerical sketch follows the list):

  • Less Time. Allow relatively less time after implementing the solution when:
    • Higher opportunity for occurrence / observation
    • Higher probability of detection
    • Engineered solution
    • Fewer observations needed for a high degree of confidence
  • More Time. Allow relatively more time after implementing the solution when:
    • Lower opportunity for occurrence / observation
    • Lower probability of detection
    • Behavioral / training solution
    • More observations needed for a high degree of confidence
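
These factors can be put on a numerical footing. The sketch below is a simple illustration, not a prescribed method: it assumes that if the CAPA had failed, the problem would recur in any given batch with some probability and then be detected with some probability, both of which are estimates you would have to justify in the CAPA record.

```python
import math

def batches_to_verify(recurrence_prob: float, detection_prob: float,
                      confidence: float) -> int:
    """Batches to observe with no detected recurrence before closing the
    Effectiveness Verification. Assumes that if the CAPA failed, the problem
    recurs in a given batch with `recurrence_prob` and is then detected with
    `detection_prob`. Solves (1 - r*d)**n <= (1 - confidence) for n."""
    per_batch = recurrence_prob * detection_prob
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - per_batch))

# Engineered fix, easily detected failure mode: a short window suffices.
print(batches_to_verify(0.5, 0.9, 0.95))  # -> 6 batches
# Behavioral fix, poorly detected failure mode: a much longer window.
print(batches_to_verify(0.2, 0.5, 0.95))  # -> 29 batches
```

Note how a well-detected engineered fix closes quickly, while a behavioral fix with poor detection demands far more observations--exactly the Less Time / More Time logic above.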

The following are several fictitious examples of CAPAs that require an Effectiveness Verification. What CAPA Effectiveness Verification method would you recommend?
What timeframe do you recommend?

Example 1.

Problem:
There are widespread errors in selecting an appropriate effectiveness verification and timeframe in the Trackwise fields when compared to the requirement in the new procedure.
Root Cause:
There is a general lack of understanding of acceptable CAPA Effectiveness Verification methods that would satisfy the procedural requirement.
CAPA:
Develop and deliver targeted training on CAPA Effectiveness Verification methods to CAPA system users who have the responsibility to make this determination.

Example 2.

Problem:
Transcription errors are being made when copying information from sample ID labels to laboratory notebooks.
Root Cause:
Labels made on the current label printer (make/model) are frequently unreadable.
CAPA:
Replace the current label printer with one that produces legible labels.

Example 3.

Problem:
An incorrect number of microbiological plates, relative to the requirement in SOP XYZ123, was delivered to the lab on two separate occasions by a newly trained operator after routine sanitization of Room A.

Root Cause:
The instructions in SOP XYZ123 are more open to interpretation than intended, which can mislead inexperienced operators about the correct number and placement of plates in Room A.

CAPA:
Revise SOP XYZ123 to add the specificity required for the correct number and specific placement of micro plates in Room A.

Example 4.

Problem:
Increased bioburden levels were noted in microfiltration process train A.

Root Cause:
The phosphate buffered saline (PBS) delivery piping system upstream of the microfilter exhibited high bioburden levels.

CAPA:
Revise the cleaning procedure to incorporate a water-for-injection (WFI) flush to remove residual harvest material from the process piping, and provide training on the flushing process.

Example 5.

Problem:
A statistically significant trend was observed in assay X results for 6 lots of the 25 mg vial manufactured at site A, but not for the 10 mg vial manufactured at site B over the same period.

Root Cause:
There was a difference in sample preparation techniques between the two sites.

CAPA:
Revise the sample preparation section of the test method for consistency between sites, and provide training on the revised test method.

Please share your experiences with CAPA Effectiveness Verification in the comment section below.


The QA Pharm

7 comments:

  1. I have had an ISO auditor insist that *every* CAPA requires an effectiveness check. And it had to be an actively defined and implemented check, not a retrospective review when a deviation occurs to determine if a previous CAPA was ineffective in preventing the recurrence.

    From that experience I have learned not to call an action a CAPA unless it was addressing the root cause of the deviation, so as to prevent burdening the CAPA/eff check system with other 'improvements' identified during an investigation.

    For consideration of the method to evaluate the effectiveness of a CAPA, I would propose that you have to account for the detectability of the recurrence in your quality system and the likelihood of recurrence based on your root cause. Also, you have to take into account the opportunities for the issue to recur. For example, does the opportunity arise with each manufacturing run - whenever those actually happen - or on a time-based schedule, such as every noon when the HVAC turns on/off?

    Then when you have an idea of what method you will use to detect a recurrence (normal operations or special project/operation/collection?), and the measurement of opportunities, you can set your checkpoint (3 batches, or 2 weeks, for example).

    And the acceptance criteria for saying it was effective have to be unambiguous, objective, and *prescribed*...do not let someone wave away unusual results or blips in measurements, or leave them open for interpretation by others.

    Some particularly pedantic players would say that if you set a *date* to perform the check, and your check plan didn't account for extensions (for example, no lots were made by that date due to schedule issues), then you have to follow your extension process, even though the conditions were vetted...so be sure to account for the condition check and auto-extension in your plan.

    Replies
    1. Good points, Robert. Congratulations on differentiating between a CAPA and garden variety improvement projects. Go to the head of the class.

      The ISO auditor saying that every CAPA requires an Effectiveness Check assumes that they are true CAPAs, and not items from an employee quality suggestion box. Some companies undertake no improvement activities without a CAPA because CAPAs get visibility. Unfortunately, such companies find it difficult to deploy resources toward the biggest risks--and are always plagued with backlogs.

      You are also correct to point out that Effectiveness Check timeframe must consider the probability of detection, which may be a function of production activity, etc. The point here is that the decision needs to be rational.

      Concerning blips: I am definitely jaded by what I see in practice. I would rather trend the blips to have half a chance at not chasing windmills. The blip could well be the result of normal random variation. Remember that Deming said that tweaking a process in reaction to a randomly distributed observation only adds to the unwanted variation. For certain, the detection systems that feed into the Deviation Management System need to define how blips are handled within that input system so that each one is not independently investigated. That's the road to nowhere.
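
      As a minimal sketch of that "trend the blips" idea (the EM counts and limits here are hypothetical), a simple c-chart flags only counts beyond the 3-sigma control limit; everything else is treated as common-cause variation and trended rather than individually investigated:

      ```python
      import math

      # Hypothetical environmental monitoring excursion counts per batch.
      excursions = [2, 0, 1, 3, 1, 0, 2, 1, 4, 1, 0, 2]

      mean = sum(excursions) / len(excursions)
      ucl = mean + 3 * math.sqrt(mean)  # 3-sigma upper control limit for counts (c-chart)

      for batch, count in enumerate(excursions, start=1):
          if count > ucl:
              print(f"Batch {batch}: {count} excursions exceeds UCL {ucl:.1f} -> investigate")
          else:
              print(f"Batch {batch}: {count} excursions within expected variation")
      ```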

      Thanks for sharing, Robert. Much appreciated!

    2. Not to go too far from the topic, but I would be interested in opening up the conversation to what "tracking tool" outside of CAPAs has worked for improvement projects or other quality system updates.

      I would imagine you could initiate and track quality system changes for documents in the Document Control system (the majority of changes taking place within documents), but I think Management wants something that carries some bite...something that has the stick of the stick-and-carrot approach to driving improvements. And Doc Control doesn't have that weight...

      You would have to implement some sort of categorization of those identified changes, with reports routinely generated as a tracker and escalated as needed.

      But that doesn't address training sessions needed, or development of solutions/improvements that are outside of controlled documents...

      In the past we had the CAPA system (the electronic system) designate actions with a different category, such as "continuous improvement action", and use the exact same elements, forms, and reporting to drive them to finish....but they could easily be filtered out of any requests for CAPA lists. We just had to rely on the QA Approver to recognize and classify them appropriately.

      As for every *true* CAPA implemented, it does appear that the FDA expects effectiveness to be measured as well...per 21 CFR 820 (medical devices)...

      (4) Verifying or validating the corrective and preventive action *to ensure that such action is effective* and does not adversely affect the finished device...

      So, it comes back to deciding how best to ensure this happens for properly identified CAPAs, as you summarize in your post.

  2. I'm new to CAPA and value any advice. A difficult aspect of a successful effectiveness check is that, even after procedures have been changed and engineering controls put in place, you can still have a failure due to human error. How can a CAPA be effective if there is no tolerance for human error?

    Replies
    1. The topic of Human Error is a difficult one. Yes, there will always be human error, and all human error will never be eliminated. However, what can be eliminated is opening CAPAs in the first place for human errors that have little consequence. There's nothing worse than a CAPA system clogged with trivial issues that are many degrees away from product, process, or systems. However, one might choose to bundle like human errors and treat them as many examples of the same root cause, and thus one CAPA.

  3. What are some options if your CAPA is found ineffective, per your effectiveness check?

    Replies
    1. When a CAPA is determined to be ineffective, it strongly suggests that the true root cause was not determined in the first place.
