Thursday, March 1, 2018

A Poster: CAPA Effectiveness Verification

This is the second in a series of posts in which I reduce a large concept into a single image that my readers can print as a poster or build into a presentation.

This post deals with CAPA Effectiveness Verification. I discussed this topic in my March 17, 2015 post, which has become one of my most popular posts with over 7000 views.

As I revisit this topic, I have reduced the concepts into the following image. Please feel free to contact me at for a higher-resolution image and PowerPoint slide.

From my March 17, 2015 post:

Verifying the effectiveness of corrective and preventive actions closes the loop between identifying a problem and completing the actions to solve a problem. It seems reasonable to expect that if a problem is worth solving, it is also worth verifying that the problem is actually solved. But, determining the best verification approach and deciding when to conduct the verification for the wide range of problems that could occur can be elusive.

Before we discuss CAPA effectiveness, we need to look at a few of the reasons why performing this check is often a challenge.

Why is it so difficult to determine an appropriate CAPA Effectiveness Verification method? Here are a few reasons:
  • The problem is not well defined. Sometimes breaking the mental logjam is as simple as asking "What problem were we trying to solve?" That sounds like an easy question, but when the answer is not well defined or simply stated, success is hard to measure.
  • The root cause is not determined. This is a natural consequence of the first reason. It is next to impossible to determine the root cause of a fuzzy problem, or one that seems too complicated to explain. Those who try get an equally fuzzy root cause.
  • It's not really a CAPA. It has long been my experience that the CAPA system becomes a quality work order system (a.k.a. dumping ground) because the common data management systems utilized, such as Trackwise, provide project management structure and visibility. But without a stated problem or the determination of a root cause, it is not a CAPA. It's just a project.
  • CAPA Effectiveness Verification is used for everything. CAPA Effectiveness Verification can be too much of a good thing when it is expected for every possible CAPA. This usually results from the cascading problem of a CAPA being required for every deviation, and a deviation being required for every conceivable blip. Soon you become a drowning victim of your own making.
  • We overthink it. Rather than allowing reason to prevail, there are those who tend to complicate just about everything. Determining and applying the effectiveness method is no exception. Yes, we operate in a scientific environment, but not every method of verifying effectiveness has to be labor intensive. Major processes need not be applied to minor problems.
  • It's considered not important. There are those who believe that living with an ongoing problem is the path of least resistance when compared to conducting the same boilerplate investigation (same problem, different day) and getting on with production. A high tolerance for recurring problems is truly the root cause for many who are treading water in a deviation-swirling tide pool.
Assuming that we have a real CAPA where an investigation was conducted on a well defined problem to determine the root cause and product impact, we can turn to the regulatory requirements and business obligation to evaluate how well we spent our resources to permanently eliminate the problem. This brings us to options for methods for verifying CAPA effectiveness.

What are some examples of CAPA Effectiveness Verification Methods? Here are 6 examples:

  • Audit Method is used when the solution involves changes to a system, where a determination is made whether the changes are in place procedurally and in use behaviorally. An example is an audit of a new line clearance checklist to demonstrate that it was effectively implemented.
  • Spot Check is used for random observations of performance or reviews of records that provide immediate, but limited, feedback. An example is a spot check of batch records to ensure that the pH step was performed correctly after training on the new procedure.
  • Sampling is used for observations of variables or attributes per a defined sampling plan. An example of sampling is a statistical sample drawn at random from lot XYZ123 after implementation of a process improvement to confirm the absence of the defect.
  • Monitoring is used for real-time observations over a defined period. An example of monitoring is the real-time observation of operators to verify that changes to gowning practices were implemented.
  • Trend Analysis is the retrospective review of data to verify that expected results were achieved. An example of trend analysis is the review of environmental monitoring (EM) data for the period covering the last 30 batches to show the downward trend in EM excursions due to process improvements.
  • Periodic Product Review is a retrospective review at least annually of trends of multiple parameters to confirm the state of control. An example of periodic product review is the review of data after major changes were made to the facility and equipment as part of a process technology upgrade post recall.
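As a rough illustration of how the statistical sample in the Sampling method above might be sized (this is my own sketch, not from the original post; the function name and numbers are hypothetical), the zero-acceptance "success-run" formula gives the number of consecutive defect-free units needed to claim a stated confidence that the defect rate is below a chosen limit:

```python
import math

def zero_defect_sample_size(confidence: float, max_defect_rate: float) -> int:
    """Smallest n such that n consecutive defect-free units support the
    stated confidence that the true defect rate is below max_defect_rate
    (zero-acceptance 'success-run' formula: n = ln(1-C) / ln(1-p))."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - max_defect_rate))

# 95% confidence that fewer than 1% of units in the lot are defective
print(zero_defect_sample_size(0.95, 0.01))  # → 299
```

The tighter the defect-rate limit, or the higher the confidence, the larger the sample must be, which is one reason the sampling plan needs a documented basis.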
Now that we have a real CAPA and selected a method to verify the effectiveness, we need to determine an appropriate timeframe to perform the verification. Timeframes are subjective, but there needs to be a basis for the decision. This brings us to points to consider when determining an appropriate timeframe for the CAPA Effectiveness Verification.

How do we select an appropriate CAPA Effectiveness Verification timeframe? Here are points to consider:

  • Less Time. Allow relatively less time after implementing the solution when:
    • Higher opportunity for occurrence/observation
    • Higher probability of detection
    • Engineered solution
    • Fewer observations needed for a high degree of confidence
  • More Time. Allow relatively more time after implementing the solution when:
    • Lower opportunity for occurrence/observation
    • Lower probability of detection
    • Behavioral/training solution
    • More observations needed for a high degree of confidence

The following are several fictitious examples of CAPAs that require an Effectiveness Verification. What CAPA Effectiveness Verification method would you recommend?
What timeframe do you recommend?

Example 1.

There are widespread errors in selecting an appropriate effectiveness verification method and timeframe in the Trackwise fields when compared to the requirements in the new procedure.
Root Cause:
There is a general lack of understanding of acceptable CAPA Effectiveness Verification methods that would satisfy the procedural requirement.
Develop and deliver targeted training on CAPA Effectiveness Verification methods to CAPA system users who have the responsibility to make this determination.

Example 2.

Transcription errors are being made when copying information from sample ID labels to laboratory notebooks.
Root Cause:
Labels made on the current label printer (make/model) are frequently unreadable.
Replace the current label printer with one that produces legible labels.

Example 3.

The incorrect number of microbiological plates required by SOP XYZ123 was delivered to the lab on two separate occasions by a newly trained operator after routine sanitization of Room A.

Root Cause:
The instructions in SOP XYZ123 are more open to interpretation than intended, which can mislead inexperienced operators about the correct number and placement of plates in Room A.

Revise SOP XYZ123 to add the specificity required for the correct number and specific placement of micro plates in Room A.

Example 4.

Increased bioburden levels were noted in the microfiltration process train A.

Root Cause:
The phosphate buffered saline (PBS) delivery piping system upstream of the microfilter exhibited high bioburden levels.

Revise the cleaning procedure to incorporate a water-for-injection flush to remove residual harvest material from the process piping, and provide training on the flushing process.

Example 5.

A statistically significant trend was observed in assay X results for 6 lots of the 25mg vial manufactured at site A, but not the 10mg vial manufactured at site B for the same period.

Root Cause:
There was a difference in sample preparation techniques between the two sites.

Revise the sample preparation section of the test method for consistency between sites and provide training on the revised test method.

Please share your experiences with CAPA Effectiveness Verification in the comment section below.

John E. Snyder
The QA Pharm

The QA Pharm is a publication of John Snyder and Company, Inc.

John Snyder and Company, Inc., provides consulting services to companies regulated by the Food and Drug Administration. We help our clients to build an effective Quality Management System to enable reliable supply of quality products to their patients. We also help our clients to develop corrective action plans to address regulatory compliance observations and communication strategies to protect against accelerated enforcement action.

Contact us at

Saturday, August 19, 2017

A Poster: Three Stages of Quality Management System Implementation and Oversight


If I could summarize in one page the most important lessons I have learned in pharmaceutical Quality Assurance over the last 40 years, this is it.

This acknowledges that putting words in a procedure does not mean they will be put into action or be effective.

The responsibility of Quality Assurance is to ensure that an effective Quality Management System (QMS) is put in place procedurally, is in use behaviorally, and is in control measurably.

The responsibility of Management is to enable the QMS by holding an identified owner accountable for each element of the QMS; to provide oversight through performance metrics; and to promote the QMS as a normal and valued part of the business---not as a way to make the FDA happy or pass an inspection.

As I reflect on my pharmaceutical career and the many clients I have served over the years, the best results and most rewarding experiences were with those who embraced this concept.

My hope is for all my followers to use this simple graphic as a way to communicate a QMS implementation strategy.

I would be pleased to support your effort with details behind each of these points.

If you would like a high-resolution image, please write to me at my personal email address: You have my permission to use it freely.

John E. Snyder
The QA Pharm

Friday, May 20, 2016

Good Metrics Practice for Quality Management Reviews

A Quality Management Review (QMR) of quality data with responsible company leadership is a CGMP requirement. QMR practices vary, but there seems to be a struggle with presenting data from across the Quality Management System in a meaningful and consistent manner when there are multiple contributors. Here are a few suggestions:

Report the opportunity for improvement
Reporting the opportunity helps to focus where to improve. For example: report that ten percent (10%) of investigations were overdue, rather than that ninety percent (90%) were completed on time.

A decrease shows improvement
A downward trend means improvement toward zero problems. For example: Following root cause analysis training, recurring deviations were reduced from twenty percent (20%) to five percent (5%) in six months.

Compare versus historical performance
Comparing current performance versus the previous period, or this time last year, helps to illustrate improvement over time. For example: This quarter, ninety-five percent (95%) of supplier audits were conducted versus plan, compared to five percent (5%) the previous quarter.

Index metrics for relative comparisons
Indexing eliminates the effect of differing production volumes and helps to make comparisons. For example, there were seven (7) Complaints per Billion Units Manufactured Year-to-Date versus eighteen (18) for the same period the previous year.
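The indexing above is simple arithmetic; this small sketch (the function name and production volumes are hypothetical, added here for illustration) shows the normalization:

```python
def per_billion_units(count: int, units_manufactured: int) -> float:
    """Index a raw event count to a rate per billion units manufactured,
    so periods with different production volumes can be compared."""
    return count * 1_000_000_000 / units_manufactured

# Hypothetical volumes: raw complaint counts alone would mislead here,
# because twice as many units were manufactured in the recent period.
print(per_billion_units(14, 2_000_000_000))  # → 7.0
print(per_billion_units(18, 1_000_000_000))  # → 18.0
```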

Report absolute numbers for critical issues
Indexing should be avoided when the issue is critical or the numbers are low. For example, report that two (2) batches were recalled, rather than that 0.2% of batches were recalled.

Note events with markers on the timeline
When data is reported versus time, it is helpful to note significant events that had an effect on the data. For example, the trend line for environmental monitoring excursions started to increase when building construction started.

Define an unacceptable trend
Trends should be defined for run chart performance data. For example, consider the statistical process control rules of five (5) consecutive movements in the same direction, or seven (7) consecutive points on the same side of the average.
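The two run rules above are easy to automate. A minimal sketch, assuming the points are in time order (the function name is my own, for illustration):

```python
def run_rules_violated(points, average):
    """Check the two common SPC run rules against points in time order:
    (1) five consecutive movements in the same direction,
    (2) seven consecutive points on the same side of the average.
    Returns (rule1_violated, rule2_violated)."""
    trend = same_side = False
    up = down = 0
    for prev, curr in zip(points, points[1:]):
        up = up + 1 if curr > prev else 0      # current run of upward movements
        down = down + 1 if curr < prev else 0  # current run of downward movements
        if up >= 5 or down >= 5:
            trend = True
    above = below = 0
    for p in points:
        above = above + 1 if p > average else 0
        below = below + 1 if p < average else 0
        if above >= 7 or below >= 7:
            same_side = True
    return trend, same_side

print(run_rules_violated([1, 2, 3, 4, 5, 6], 3.5))  # → (True, False)
```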

Report measure of variability with averages
When reporting averages, be certain that the data can be legitimately combined, and provide a measure of variability. For example, reporting an improvement because the average number of deviations per batch record decreased to seventeen (17) for the last ten batches from twenty-five (25) for the previous ten batches is misleading when the range of deviations widened to five (5) through forty-five (45) from twenty-three (23) through twenty-eight (28).
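The misleading-average point above can be reproduced with made-up numbers (these per-batch deviation counts are invented to match the averages and ranges in the example):

```python
from statistics import mean

# Hypothetical deviation counts per batch record, constructed to match
# the averages and ranges in the example above.
previous_ten = [23, 23, 24, 24, 25, 25, 26, 26, 26, 28]  # mean 25, range 23-28
last_ten     = [5, 6, 8, 10, 12, 14, 16, 20, 34, 45]     # mean 17, range 5-45

for label, data in (("previous ten", previous_ten), ("last ten", last_ten)):
    print(f"{label}: mean={mean(data)}, range={min(data)}-{max(data)}")
```

The average improved, but the widened range shows the process became less predictable, which is exactly what the average alone conceals.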

Chart scales must be sensitive for intended purpose
The scale of a chart should be narrow enough to make the range of normal variation visible, yet wide enough to include all excursions within the time frame depicted. For example, a chart scale of 0 to 100% for Percent Overdue Nonconformance Investigations is inappropriate for a 12-month performance chart with normal variation of 3-6%. A more appropriate scale would be 0-12%. If the same time frame included an excursion of 18%, a chart scale of 0-20% would be appropriate.

And remember---Data talk; opinions walk.

John Snyder
The QA Pharm


Friday, October 30, 2015

Management Responsibility for GMP Oversight and Control: A Review of Requirements

Supreme Court Cases
Historically, the FDA has cited the Supreme Court decisions of United States v. Dotterweich (1943) and United States v. Park (1975) as FDCA legal cases establishing that the manager of a corporation can be prosecuted under the Federal FDCA, even absent a showing of individual wrongdoing by the manager.
In the Dotterweich case, the jury found Dotterweich, the president and general manager of a drug repackaging company, guilty on two counts of shipping misbranded drugs in interstate commerce, and a third of shipping an adulterated drug. The Circuit Court of Appeals, over a dissent, reversed the decision on the grounds that only the corporation was the “person” subject to prosecution, thus protecting the president personally. But the Supreme Court reversed that decision, holding Dotterweich individually responsible, not just the manufacturer. Justice Frankfurter delivered the opinion of the Court: “... under § 301 a corporation may commit an offense and all persons who aid and abet its commission are equally guilty….”
In the Park case, the chief executive officer was found guilty on all counts involving food held in a building accessible to rodents and being exposed to contamination by rodents, resulting in the adulteration of the food within the meaning of the Federal Food, Drug, and Cosmetic Act (FDCA). Park’s defense was that he had an organizational structure responsible for certain functions to handle such matters. However, evidence from inspections of multiple locations indicated the same problems and inadequate system for which he had overall responsibility. Chief Justice Burger delivered the opinion of the Court, “... by reason of his position in the corporation, responsibility and authority either to prevent in the first instance, or promptly to correct, the violation complained of, and that he failed to do so... the imposition of this duty, and the scope of the duty, provide the measure of culpability...”
More recently, Public Law 112-144 (July 9, 2012), called the Food and Drug Administration Safety and Innovation Act (FDASIA), added to the definition of CGMP in the Food, Drug, and Cosmetic Act (Section 501, 21 U.S.C. 351) to explicitly include management oversight of manufacturing to ensure quality. Section 711 of FDASIA states:
  • For the purpose of paragraph (a)(2)(B), the term “current good manufacturing practice” includes the implementation of oversight and controls over the manufacturing of drugs to ensure quality, including managing the risk of and establishing the safety of raw materials, materials used in the manufacturing of drugs, and finished drug products.
The addition of oversight and controls to the definition of CGMP has strengthened the FDA's position with specific language making management's responsibility for oversight and control a requirement in the Act. The question remains how to perform this responsibility practically and operationally. The following model describes essential elements of a CGMP Management System for oversight and control.

Management System 

The implication of the impact of CGMP noncompliance on the business is not theoretical. There are ample examples in the pharmaceutical industry where ineffective implementation of CGMP systems resulted in the loss of control that materially affected product quality, which, in turn, affected inventory and patient supply. Establishing a Pharmaceutical Quality System (PQS) that effectively implements the CGMPs is the means for maintaining a state of control—the fundamental intent of these regulations.

Management does not assume positions of responsibility with the intent of neglecting CGMP compliance. However, management may not enter the top position fully equipped to assume responsibility for CGMPs in a practical way. Management may delegate all CGMP matters to the Quality Department and take a hands-off approach and rely on this function to bring matters to its attention at their discretion. Such passivity leads to hearing only the bad news when it is far too late to contain and resolve the problem in the most cost-effective way with least risk to public safety.

Likewise, some Quality Departments may not be adequately equipped to bridge the space between top management and daily operations with effective structures and processes that enable management to exercise its responsibility for CGMP oversight. Too often the default position is to rely upon the outcome of regulatory inspections. But as one might expect, a good outcome can give a false sense of security, and a poor outcome can be viewed as the exhaustive list of problems. As in any area of the business where risks must be managed, there is no better approach than having an intentional management system in place that provides actionable data to know internally where your daily operation stands at any given moment.

Assess, Improve, and Implement--and Perform

For nearly 20 years, John Snyder and Company has served the pharma industry to assess, improve, and implement the Pharmaceutical Quality System (PQS). Our Management Triad Model will help you to assess and develop the crucial structures, systems, and processes for Management Oversight and Control, to monitor your state of control, and to become an anticipating organization.

Please contact me at I want to partner with you.

John Snyder
The QA Pharm
