I am having difficulty justifying the hardness OOS at the compression stage for all three PV batches. All three batches are above the specification. The hardness was increased to achieve the desired thickness. My justification is that although hardness is OOS, the dissolution results and the rest of the test profile are still within specification.
But what makes this more complicated is that the subsequent routine batches (further monitoring) are fairly consistent within the hardness specification, yet a few batches are still slightly above it. So the hardness does not seem consistent: operators keep adjusting the hardness to hit the thickness target.
Under these conditions, it is very hard for me to recommend a "new specification" for hardness, because the hardness ranges of the earlier and current batches differ greatly from batch to batch.
I don't know what the possible root causes are for those three PV batches showing such a high OOS compared to the other batches. Could it be due to the machine settings? I have no evidence for this…
1 - It sounds like your company is repeatedly getting OOS's for hardness.
2 - It sounds like the operators are changing the hardness to achieve another characteristic (thickness).
3 - It sounds like there are big differences in hardness between batches.
4 - It sounds like during PV hardness was very high compared to other batches.
I think you are correct. You cannot change the specification to match your product. I also think you are correct that there is another factor (or factors) affecting your process which is currently uncontrolled.
Regarding root cause of the hardness changes, you might start here.
I think this is probably a much larger problem than you will be able to sort out without providing additional information.
I could easily see this investigation taking days/weeks, and a lot of resources.
Hi Jared, thanks a lot for your feedback. I have identified a few possible root causes, including particle size distribution, the machine, and the thickness range specification. I would think it might be due to the machine, since another product (same size, same diameter, same hardness spec) is compressed on a different machine and its readings are consistent.
If, let's say, we don't want to do a trial run on another machine, what else can we do?
The only thing I can recommend at this point is to gather the data you already have, organize it, and run statistical analysis on it. If you don't have a statistician in house, you might want to consider paying a consultant to go through your data.
But you’ll absolutely have to give your statistician a list of potential critical parameters. These are called CPPs (critical process parameters in QbD talk). You will also have to give your statistician a list of critical quality attributes (CQAs).
Then the statistician will sift through the parameters to determine what affects your attributes. Basically it is figuring out what inputs affect your outputs. This might seem like a step backward, but it actually also helps ensure product quality, and it has the added benefit of potentially optimizing the process (so that it is faster, with less waste, and appropriate levels of control).
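To make that concrete, here is a minimal sketch of the kind of analysis a statistician would run: regressing a CQA against candidate CPPs to see which inputs actually drive the output. The data, the choice of CPPs (compression force, fill depth), and the coefficient values are entirely hypothetical — your own batch records would go here.

```python
import numpy as np

# Hypothetical batch records: one row per batch.
# Columns (assumed CPPs): main compression force (kN), fill depth (mm).
cpps = np.array([
    [12.0, 8.1],
    [14.5, 8.0],
    [13.2, 8.3],
    [15.8, 7.9],
    [11.5, 8.4],
    [16.0, 7.8],
])
# Assumed CQA: mean tablet hardness (kp) for the same batches.
hardness = np.array([9.8, 12.1, 10.9, 13.4, 9.2, 13.8])

# Ordinary least squares: hardness ~ b0 + b1*force + b2*fill_depth.
# A large coefficient (relative to its noise) flags that CPP as influential.
X = np.column_stack([np.ones(len(cpps)), cpps])
coef, *_ = np.linalg.lstsq(X, hardness, rcond=None)
print(dict(zip(["intercept", "force", "fill_depth"], coef.round(3))))
```

In this made-up data set, the force coefficient comes out strongly positive, which would tell you compression force is the input to control. A real analysis would also report confidence intervals and check residuals, which is exactly why the guidance recommends a trained statistician.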
I think here it would be nice to avoid redoing some studies, and hopefully you won't have to generate more data.
Would it be possible to collect data during current manufacturing and then sort and sift it statistically? That way you can continue with what you are doing while performing ongoing "optimization studies." The various agencies I've worked with (in audits) have been pleased when a company collects extra data for further optimization; they call this process analytical technology (PAT) and continual improvement.
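A simple way to start that ongoing monitoring is an individuals (I-chart) trend of batch hardness, with control limits estimated from the average moving range. This is only a sketch with invented hardness values — the flagged batch and the limits are artifacts of the made-up data.

```python
import statistics

# Hypothetical routine hardness results (kp), one mean value per batch.
batch_hardness = [10.2, 10.3, 9.9, 10.1, 10.2, 10.0, 10.3, 10.1, 12.6, 10.1]

mean = statistics.fmean(batch_hardness)

# I-chart convention: estimate sigma from the average moving range (d2 = 1.128).
mr = [abs(b - a) for a, b in zip(batch_hardness, batch_hardness[1:])]
sigma = (sum(mr) / len(mr)) / 1.128
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma  # 3-sigma control limits

# Flag batches outside the control limits for investigation.
out_of_control = [(i + 1, x) for i, x in enumerate(batch_hardness)
                  if not (lcl <= x <= ucl)]
print(f"mean={mean:.2f}, LCL={lcl:.2f}, UCL={ucl:.2f}, flagged={out_of_control}")
```

Trending like this distinguishes a process that is drifting from one that took a single excursion (as the PV batches may have), without interrupting routine production.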
BUT, AND THIS IS CRITICAL… You cannot optimize a current process using ongoing data analytics if your process currently poses any kind of patient risk.
If your product is currently in control, but you'd like to improve it, work with a statistician and propose the inputs/CPPs and outputs/CQAs you would like to monitor. The statistician will help you determine the relation between inputs and outputs.
This is way over my head, as I am not qualified to speak as a statistician. But broadly it involves "multivariate analysis," which determines whether an input/variable affects an outcome, how large the effect is, and whether the response is linear or curved. It can also help optimize the process so that it meets quality standards while running faster or more efficiently.
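The curvature question can be illustrated with a toy example: fit both a straight line and a quadratic to a response, and compare how well each explains the data. The dissolution-vs-hardness numbers below are fabricated purely to show the mechanics.

```python
import numpy as np

# Hypothetical response data: dissolution at 30 min (%) vs tablet hardness (kp).
hard = np.array([8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0])
diss = np.array([92.0, 91.0, 89.5, 87.0, 83.5, 79.0, 73.5])

# Fit a straight line and a quadratic; compare residual sums of squares.
lin = np.polyfit(hard, diss, 1)
quad = np.polyfit(hard, diss, 2)
rss_lin = float(np.sum((diss - np.polyval(lin, hard)) ** 2))
rss_quad = float(np.sum((diss - np.polyval(quad, hard)) ** 2))
print(f"linear RSS = {rss_lin:.2f}, quadratic RSS = {rss_quad:.2f}")
```

If the quadratic fits markedly better (much lower residual error), the response is curved, which matters when you are deciding how far a parameter like hardness can drift before a CQA like dissolution falls out of specification. A statistician would formalize this comparison with a proper lack-of-fit test.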
The FDA 2011 process validation guidance "strongly recommends" consulting a trained statistician. Below is a link to the guidance; I would search for the word "statistic" when you open the document.
If the link doesn’t work google “FDA Validation Guidance 2011”
I apologize if you are not in the US, and the FDA guideline is not perfectly applicable. The World Health Organization and the EU (and PIC/S) are in alignment with the principles, but I’ve not included the links here.