One of the first questions I ask a pharmaceutical executive is how he or she knows that the quality management system is effective. On one occasion, after a slight hesitation on his part, I had to rephrase my question. The conversation went something like this:
“What I mean is: How do you know that your product quality assurance/GMP compliance program is working well?”
“Oh, well—quality is a priority for me. I pay a lot of personal attention to our quality program. We have a lot of quality projects at this site.” He motioned out the window to the Six Sigma flag in the courtyard. “I’d love to give you a tour. We have posters of all our projects in the cafeteria.”
That wasn’t exactly the answer to my question, so I tried another approach. “How do you decide the projects to work on?”
Proudly, he replied, “We have self-directed work teams here. They are empowered to select their own projects that make their areas more efficient.”
“That sounds really special.” I tried to sound sincere. But now I was starting to worry. After all, the reason for my visit was to help him prepare for an FDA District meeting regarding his most recent FDA483.
I tried again, “How do you think you fared through your last inspection?”
“Well, that’s why I asked you here.” He pointed for me to take a seat. “Our FDA inspections have continuously improved.” He passed a trend chart to me and continued, “Our corporate risk management group keeps a rolling average of the number of 483 observations. We only had six observations this time—a greatly reduced number compared to last time. That brought down our average.”
I pointed to a dotted line running across the chart. “What’s this?”
“Oh, that’s the norm for our entire global network. These last inspection results brought us well below the corporate norm. So we’re looking good from a corporate perspective.”
“So this is how you get a picture of how well your quality management system is working?” I asked, trying to keep the tone of my voice flat.
“Sure,” he replied confidently. “But it’s the damnedest thing.” I let the comment hang a moment as I waited for him to complete his thought. “The FDA inspector was rather hostile. So I thought we’d better go pay a visit to smooth things over. Six—just six observations! The inspection was over in two days! It wasn’t but two years ago that the FDA camped out here for three weeks and gave us nineteen observations! That’s continuous improvement, isn’t it?”
I didn’t answer his question right away. Instead, I took a few minutes to review the FDA-483s from the previous inspections. It became abundantly clear what was going on between this company and its FDA district office.
The inspector went only as far as he needed to go to document that the same issues remained—then he simply quit and went home.
Management was high-fiving each other in the corridor over the relatively few observations, calling the one-pager “a light one.” Evidently the FDA inspector left with a different impression.
Although this is the year 2010, the kind of Paleozoic-era thinking depicted in this dialog is still around.
Here’s what I say to such fossilized specimens:
First, numbers of observations cannot be statistically treated. You can neither average them nor calculate a relative standard deviation, because the number of observations is not derived from a single, predictable, defined process. Inspectors differ; the scope of inspections differs; the length of inspections differs. So you cannot pool the data and treat them statistically to make any kind of meaningful inference. In this case, fewer definitely was not better.
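The trap is easy to sketch. In the snippet below the inspection records and themes are invented for illustration only (they loosely mirror the story above); the point is that a rolling average of observation counts can look like improvement while the qualitative record shows the same systemic issues citing again and again.

```python
# Hypothetical inspection records (illustrative data, not from any real firm):
# each entry holds the number of FDA-483 observations and the qualitative
# themes the observations pointed to.
inspections = [
    {"year": 2006, "count": 19, "themes": {"management control",
                                           "quality unit effectiveness",
                                           "corporate oversight"}},
    {"year": 2008, "count": 11, "themes": {"management control",
                                           "quality unit effectiveness"}},
    {"year": 2010, "count": 6,  "themes": {"management control",
                                           "quality unit effectiveness",
                                           "corporate oversight"}},
]

# The quantitative "trend" looks like continuous improvement...
counts = [i["count"] for i in inspections]
average = sum(counts) / len(counts)
print(f"observation counts: {counts}, rolling average: {average:.1f}")

# ...but the counts come from inspections of different scope, length, and
# inspector, so the average is not a meaningful statistic. Meanwhile the
# qualitative signal -- the same systemic themes recurring in every
# inspection -- is exactly what the count hides.
recurring = set.intersection(*(i["themes"] for i in inspections))
print("themes cited in every inspection:", sorted(recurring))
```

Run it and the "improving" count sits right next to a list of issues that never went away — which is the inference the executive's trend chart could not support.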
Second, the absence of a negative comment does not mean agreement. For example, the fact that an inspector walked through a warehouse that was not temperature- and humidity-controlled on the way to the receiving inspection area does not imply concurrence with operating an uncontrolled warehouse.
Third, inspection results are a seriously lagging performance indicator—if not lethargic. Much like the company stock price, by the time a serious lack of compliance has had its dastardly effect, it's too late to stop the shareholder lawsuits. Additionally, inspectors do not look favorably upon their observations being the measure of compliance—or the impetus for a quality plan. In fact, boilerplate language in Warning Letters points out that the list is not intended to be an all-inclusive list of deficiencies. The expectation is for company management to know for themselves the state of control through established internal review processes.
Fourth, efficiency “projects” may not address compliance issues but rather cause them. Efficiency and compliance are two different things. Efficiency deals with the most economical way of working. Compliance deals with a consistent way of working that meets regulatory requirements. For example, a survey that an active ingredient supplier conducts of its own operation is very cost effective, but it is inadequate for meeting the requirement that you know the supplier's capability of consistently meeting your specifications.
More important than the number of 483 observations is the qualitative message. Taken collectively, the qualitative themes in this example were: (a) inadequate management control, (b) inability to sustain compliance, (c) failure of the quality control unit to establish an effective quality system, and—what we are seeing more of lately—(d) lack of corporate oversight and action.
An appropriate response to the qualitative message should be considerably different from the response to the quantitative one. The qualitative response reads more like an organizational overhaul, a transformation, while the quantitative response appears more technical.