Difference between Low, Medium and High

Hi,

I would appreciate some input for the following scenario

I am currently developing a Master Validation Plan (MVP) for a small laboratory (23 pieces of equipment). I am following my own risk assessment, which will give a final outcome of low, medium or high. That outcome will then determine the level of validation required to fully validate each piece of equipment.

My question is this: whether the outcome is low, medium or high, I assume that the basic URS-FS-IQ-OQ-PQ documents will have to be generated. So, in terms of document generation, what is the difference between low, medium and high?

Any input would be greatly appreciated.

Thank you

[quote=chandra]
My question is this: whether the outcome is low, medium or high, I assume that the basic URS-FS-IQ-OQ-PQ documents will have to be generated. So, in terms of document generation, what is the difference between low, medium and high?

Any input would be greatly appreciated.

Thank you[/quote]

It depends; there is no set way, but your company's Validation SOPs or methodology should provide guidance on such issues.

Assuming this is a "retrospective validation" project, there is no set way to do such a task, but that really doesn't matter as long as you document up front in your MVP your intended path to complete the validation project, and stick to it.

You can of course consult the GAMP4 guidelines and categorise your systems/equipment independently, but, assuming this is a retrospective validation project, as it sounds to be, I would also consider the following approach:

1). You MUST have, or develop, a URS for each piece of kit.

2). These days you MUST conduct, for each piece of kit, a Part 11 assessment for any ERES (electronic records/electronic signatures) implications, including any CSV predicate rule implications/requirements.

3). You may not have an FS for each piece of kit, especially if it's bench-top lab equipment. This is not a total 'show stopper': you simply have to do a Risk Assessment on the equipment's intended use to determine its level of criticality (also using the O&M manual), and map this alongside your URS requirements to identify the testing required in your test protocols. This will justify your testing approach, and you MUST document it!

4). For simple pieces of equipment, consider combining your IQ/OQ and even your PQ protocols to reduce the number of document deliverables. Also include your CSV test requirements in the same protocols, i.e. an integrated test protocol approach, but make sure that all of your instruments are calibrated up front in the IQ section before functional testing is conducted, or your OQ test results will be seen as invalid! Don't forget the Part 11 items from the ERES assessment either.

5). Produce a Traceability Matrix from your URS to your FS (or Risk Assessment) to your IQ/OQ/PQ protocols. This will help you defend yourself in front of an auditor by demonstrating that you have tested all requirements and design elements in your test protocols.

6). Produce a Validation Report covering what you've done, how, and what the results were, indicating whether each equipment item is suitable for GxP use. Your Validation Report should provide closure on what you said you were going to do in your MVP, and/or indicate what you intend to do with any outstanding deviations or actions.
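The traceability check in point 5 can be sketched in a few lines of code. This is purely an illustration of the idea (map each URS requirement to the protocol sections that test it, then flag gaps); the requirement IDs, protocol names, and data structure are all hypothetical assumptions, not part of any standard.

```python
# Minimal traceability-matrix sketch: map each URS requirement to the
# protocol(s) that claim to verify it, then flag anything left untested.
# All IDs and protocol names below are hypothetical examples.

urs_requirements = ["URS-01", "URS-02", "URS-03", "URS-04"]

# requirement ID -> list of test protocol references covering it
coverage = {
    "URS-01": ["IQ/OQ-01"],
    "URS-02": ["IQ/OQ-01", "PQ-01"],
    "URS-03": ["PQ-01"],
}

untested = [req for req in urs_requirements if not coverage.get(req)]
print("Untested requirements:", untested)
```

Here URS-04 would be reported as untested, which is exactly the kind of gap an auditor will look for; the real matrix is usually just a table in a document, but the cross-checking logic is the same.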

If it were all new equipment and/or systems, i.e. a 'prospective validation' environment, then things would have to be done differently. However, this is what I would do in a 'retrospective validation' laboratory environment; others may do things slightly differently, but I hope it helps.
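On the original low/medium/high question: one common (though by no means mandated) approach is to score severity of impact and likelihood of failure, then band the combined score into a risk level that drives the depth of testing. The scales and thresholds in this sketch are illustrative assumptions only; your own Validation SOP should define the real ones.

```python
# Illustrative risk banding: combine severity and likelihood scores (1-3 each)
# into a low/medium/high outcome. Thresholds are assumptions, not a standard.

def risk_level(severity: int, likelihood: int) -> str:
    score = severity * likelihood  # ranges 1..9
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

print(risk_level(3, 3))  # high
print(risk_level(2, 2))  # medium
print(risk_level(1, 2))  # low
```

The documentation difference then falls out of the band: a "low" item might justify a combined IQ/OQ with reduced testing, while a "high" item keeps the full URS-FS-IQ-OQ-PQ set with detailed challenge testing, as David describes above.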

Thank you David for your response.

I see what you mean; I had never thought of combining documents for simple, non-complex pieces of equipment. That sounds good.

Kind Regards

Chandra