How do you PQ?

Hi there,

Do you have difficulties finding things to put into the PQ protocol
for a basic data entry software system? Before putting a new piece of
software into production we do a supplier audit, DQ, FATs and a user
acceptance test, and then follow it all with an IQ, OQ and PQ.
The problem I am having is that by the time we get to the PQ all the
software has been tested, all the SOPs have been written, tested and
authorised, everyone has been trained and all the documentation is
present.

Hence I am having trouble finding things to put into the PQ which
haven't already been thoroughly tested. The usual V-shaped diagram,
where "Design = IQ", "Functional Spec = OQ" and "URS = PQ", doesn't
always work. To test the functions properly in the OQ you basically
have to simulate the user process, and therefore for most software
systems you end up testing all the user requirements as well.
Obviously for systems which control a production process or have a lot
of workflow a PQ is a must, but for a normal data entry system
everything is thoroughly tested in the IQ and OQ. So if we don't want
a one-page PQ we basically have to repeat the tests which were already
in the OQ.

To stay on topic, let's take the Part 11 aspects of the software.
Once I have tested that:

- the software conforms and performs technically,
- all the policies and SOPs have been written, tested and authorised,
- all the users have been trained,
- etc.

…then what tests would I put in the PQ which weren't already in the
OQ? Other than checking that the users are following the SOPs and
policies correctly, which is normally covered by internal audits
anyhow.

Any thoughts? An OPQ maybe?

In response to your comment, "If you have tested all related functions,
what's left in a PQ?", I believe there is a definite place and use for a
PQ. Testing all the related functions individually does not ensure that
they work together as intended, in the correct order, or, in the end,
produce the expected results.

In other words, for our CSV activities, an OQ is intended to
demonstrate that the equipment/software works within established ranges
or boundary requirements. We allow testing in almost any order as long
as the test currently being executed does not rely on any other test's
results. Usually we test security first, but from there you could test
any number of functions in any order (as long as the test prerequisites
are met).

Examples are: making sure alarms generate the correct responses,
ensuring valves activate at the appropriate times, etc. The 'testers'
can be anyone who has knowledge of what is being tested and what the
'system' or 'function' should and should not do. For us, the
operational procedures are not necessarily completed at this stage, as
the test scripts provide the usage guidance.

The purpose of a PQ is to demonstrate that the 'system' as a whole
operates as expected from start to finish, in the step-wise order in
which it was intended, that it can accomplish this in a repeatable
manner, and that the results/outcomes are accurate. In our vision of a
PQ, the system users are the 'testers' (if you will) and operate the
system using established procedures.
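
To make the distinction concrete, here is a minimal sketch in Python.
It is purely illustrative: the function names (log_in, enter_record,
review_and_approve, generate_report) are hypothetical stand-ins, and in
a real PQ these steps would normally be performed manually by trained
users against the approved procedures rather than by a script. The
point is only to show how an OQ exercises functions in isolation while
a PQ walks one complete scenario in its intended order and checks the
final outcome, repeatably.

# Illustrative only: hypothetical stand-ins for system actions.
# In a real PQ these steps would be performed by trained users
# following the approved SOP, with results recorded on the protocol.

def log_in(user, password):          # hypothetical
    return user == "analyst1" and password == "secret"

def enter_record(sample_id, value):  # hypothetical
    return {"sample": sample_id, "value": value, "status": "entered"}

def review_and_approve(record):      # hypothetical
    record["status"] = "approved"
    return record

def generate_report(record):         # hypothetical
    return f"Sample {record['sample']}: {record['value']} ({record['status']})"

# OQ-style checks: each function exercised on its own, in any order,
# against its specified boundaries.
assert log_in("analyst1", "secret") is True
assert log_in("analyst1", "wrong") is False
assert enter_record("S-001", 4.2)["status"] == "entered"

# PQ-style check: one complete business scenario, in the intended
# step-wise order, repeated to show the outcome is consistent.
for run in range(3):
    assert log_in("analyst1", "secret")
    record = enter_record("S-001", 4.2)
    record = review_and_approve(record)
    report = generate_report(record)
    assert report == "Sample S-001: 4.2 (approved)"

print("End-to-end scenario produced the expected result on every run.")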

Regards

Hi there

Like other commenters said earlier, don't get hung up on the names. I
think the key is knowing what you're intending to prove, not fitting
tests into boxes without knowing what you're trying to demonstrate.

In my opinion, the GAMP 4 V-model is really a developer's model, and it
leads to confusion if you don't understand that it applies to the most
complex category of software (in-house custom built)… For COTS systems
you aren't creating the software, so why are you generating a functional
requirements spec and design doc? Since we buy the majority of our
products in this industry (the customised in-house built portions should
be treated as GAMP category 5 and require an FRS, design doc, and
unit/integration tests by the developer), we have user requirements and
we 'configure' the software (I call it a configuration spec doc). So
the "FRS = OQ, URS = PQ" mapping can lead to confusion.

So what to do for a PQ? Here is my suggestion (this is simplified and
doesn't consider system complexity such as workstation systems, C/S
deployments, etc.):

OQ: I'd test my URS in the OQ (in the 'validation' environment).

PQ:
For instrument-related systems (e.g. a mass spec): if you've qualified
the instrument parameters in the OQ (pump, detectors, etc.) and tested
the software URS in the OQ… then you've tested everything, as you've
said. But what you haven't tested is that the system works as a
'whole'. So with a PQ that does a test run (nothing hard, it can be 5
injections of ethanol), you can ensure that the components all talk to
each other and surface any firmware issues, etc. You can also do load
tests if multiple systems are connected to one computer.

For enterprise software systems like a LIMS: you can do load tests in
your production environment, or maybe re-test critical user
requirements since this is a different instance/environment. If you've
taken a risk-based approach and argued that your validation and
production environments are equivalent, you may decide that isn't
necessary. A rough sketch of what such a load test might look like is
below.
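
Just to illustrate what I mean by a load test, here's a rough Python
sketch. Everything in it is a placeholder (the URL, the number of
concurrent users and the acceptance limit would come from your own
requirements and environment); it simply fires N requests at the system
at once and checks the worst response time against an agreed limit.

# Hypothetical load-test sketch: N concurrent "users" hit a LIMS-style
# web endpoint and we check response times stay within an agreed limit.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://lims.example.local/status"    # placeholder endpoint
CONCURRENT_USERS = 20                       # placeholder load level
MAX_ACCEPTABLE_SECONDS = 2.0                # placeholder acceptance criterion

def one_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    timings = list(pool.map(one_request, range(CONCURRENT_USERS)))

worst = max(timings)
print(f"Worst response time: {worst:.2f}s over {len(timings)} concurrent requests")
assert worst <= MAX_ACCEPTABLE_SECONDS, "response time exceeded the acceptance criterion"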

What I'm trying to say (and it probably doesn't come across that
clearly!) is that I'm not doing the same test in the same environment
again. The intent of the PQ is not to rehash what you did in the OQ, so
you need to know what your goal is and what you're trying to prove.

As a side note, it seems like a lot of people define the PQ as a test
where the end users execute against an SOP. I don't agree with this,
mainly because a PQ is not a verification that your SOP is written
correctly. You write an SOP after you've verified the system works
against your user requirements (you have requirements, you make sure
the system meets the requirements, then the SOP is put in place)… I
think of the SOP as your user requirements in a coherent format, once
you've proven the system does what you wanted.