PQ for software validation

PQ means we have to test the performance of the system's operations and output for consistency.

If so, what has to be tested in the PQ for the different sites of an organization?

We have already tested all the operations in the OQ. Then in the PQ they are to be tested again, this time for performance.

Please let me know the test cases or scenarios for the PQ, if possible.

Tarakam

Where the operational functionality is tested in the OQ, the PQ is usually performed to ensure that the throughput meets the expected outcomes.

So, for example, if your line is supposed to be generating 100,000 ±10% units per day, the PQ should test the performance of the line and demonstrate this.
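To make that concrete, here is a minimal sketch of what the acceptance check for such a throughput requirement could look like. The tolerance band mirrors the numbers above; the daily counts are invented illustration data, not real results.

```python
# Sketch: PQ acceptance check for a nominal output of 100,000 units/day, ±10%.
NOMINAL = 100_000
TOLERANCE = 0.10
LOWER = NOMINAL * (1 - TOLERANCE)   # 90,000
UPPER = NOMINAL * (1 + TOLERANCE)   # 110,000

def out_of_spec(daily_counts):
    """Return the (day, count) pairs that fell outside the tolerance band."""
    return [(day, n) for day, n in daily_counts if not LOWER <= n <= UPPER]

# Invented illustration data: three days of a PQ run.
print(out_of_spec([("day 1", 98_500), ("day 2", 104_200), ("day 3", 88_000)]))
# -> [('day 3', 88000)]  (below the 90,000 lower limit)
```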

[quote=chandra]Where the operational functionality is tested in the OQ, the PQ is usually performed to ensure that the throughput meets the expected outcomes.

So, for example, if your line is supposed to be generating 100,000 ±10% units per day, the PQ should test the performance of the line and demonstrate this.[/quote]

So, what would the PQ be for a LIMS, which is not related to the performance of a production outcome?

For a LIMS, and in fact for other systems too, there should be performance requirements stated in the URS.

The URS might be incomplete in such cases.

[quote=zol88]For a LIMS, and in fact for other systems too, there should be performance requirements stated in the URS.

The URS might be incomplete in such cases.[/quote]

So, in the URS I say that I need “this”, and in the PQ I test the system to verify it gives me “this”. E.g., the URS says the sample test data can be searched by its batch number; then in the PQ I test the search process to verify that I can indeed find the sample test data by entering the batch number as the query parameter.
Do you think this is an OQ test? And do you think the PQ is very similar to the OQ?
Thanks for your thoughts and replies.

In smaller system developments, you will often find the PQ is actually included in the OQ.

What I mean is that a physically separate PQ document might not exist, but the performance parameters should still be stated clearly in the URS.

If you look at the V-model, you will see URS -> FS -> DS, etc. Whether a given requirement is qualified in the PQ or the OQ will actually be specified in the FS and DS.

Nevertheless, the most important thing in the development is to ensure that everything in the URS is met in the qualification process and validated accordingly.

Coming back to your question: if you have a separate PQ and OQ, this URS requirement would normally be listed as an OQ item in the DS.
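Expressed as a sketch (with `lims_search` as a purely hypothetical placeholder for whatever search call your LIMS actually exposes), that OQ-style functional check is essentially:

```python
def oq_search_check(lims_search, batch_number, expected_sample_id):
    """OQ-style functional check: searching by batch number returns the
    expected record. `lims_search` is a hypothetical placeholder; wire in
    the real search call of your LIMS."""
    results = lims_search(batch_number)
    assert any(r.sample_id == expected_sample_id for r in results), (
        f"batch {batch_number} did not return sample {expected_sample_id}"
    )
```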

But if the URS stated that “the sample test data must be searchable by batch number within 60 seconds at an average of 40% CPU utilization”, then the DS would need to take note of these conditions, and they would need to be verified in the PQ testing.
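The corresponding PQ-style check then verifies the same operation against the stated performance limits. Here is one possible sketch; again, `lims_search` is hypothetical, and sampling system-wide CPU via the third-party psutil library is just one rough way to approximate average utilization:

```python
import time
import psutil  # third-party: pip install psutil

MAX_SECONDS = 60     # from the hypothetical URS clause above
MAX_AVG_CPU = 40.0   # percent, from the same clause

def pq_search_check(lims_search, batch_number):
    """PQ-style check: the same search, verified against the stated
    time and CPU-utilization limits."""
    psutil.cpu_percent(interval=None)            # reset the CPU counter
    start = time.perf_counter()
    results = lims_search(batch_number)
    elapsed = time.perf_counter() - start
    avg_cpu = psutil.cpu_percent(interval=None)  # avg CPU % since reset
    assert results, f"no records found for batch {batch_number}"
    assert elapsed <= MAX_SECONDS, f"search took {elapsed:.1f}s (limit {MAX_SECONDS}s)"
    assert avg_cpu <= MAX_AVG_CPU, f"avg CPU {avg_cpu:.0f}% (limit {MAX_AVG_CPU:.0f}%)"
    return elapsed, avg_cpu
```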

Like what Chandra posted: testing of upper/lower limits under specific conditions would normally be categorized as PQ.

I hope this helps.

The PQ should ensure the overall performance and fitness for intended use by focusing on the critical-to-quality attributes. The actual tests will vary depending on the type of system, the category of hardware and software, risk assessments, supplier assessments, the VMP, etc. The GAMP Good Practice Guide: Testing of GxP Systems is a good reference for computerized and software-based systems. It is available from ISPE.

Hi TARAKAM27

I have been following the replies people have been giving you, as I recently carried out a software qualification and am interested in others’ ideas. As usual with software qualification, the official answers from the regulations/guides are quite theoretical… “it should agree with what has been asked for in the URS”, etc. :confused:

As Zol88 and others have said, you have probably already tested the functionality of the program in the OQ and know the various modules/functions work correctly under normal and challenging use. The aim of the PQ is to show it works consistently over time.

Here is how I did this practically (a sketch of how the incident records could be structured follows the list):

  1. Recording all problems encountered and documenting the steps taken to overcome them. All incidents/problems encountered during routine use were recorded by the user on a copy of the Incident Record Form. All incidents were then assessed (by the validation engineer/leader/supervisor, etc.) against the following criteria to determine the cause of the problem:

Use: Misuse or misunderstanding of the system
Master: Master Data missing or incorrect
Connection: Problems encountered connecting or logon problems
Permissions: User unable to carry out transactions required by their job
Enhancement: Further configuration/development required to enable the task/process to be carried out
Transaction: Transaction failed to meet the requirements

  2. The criticality of the problem was then assessed against the following criteria: critical/major/minor, etc.

  3. Each form had a signature and a sign-off by the leader (or whoever) once the problem had been solved (be it by training, further development of the program, etc.); both user and supervisor signed off to confirm the problem had been resolved.

  4. Live running for a period of six months, initially, to demonstrate that the program meets all requirements in a live situation, recording any problems or failures to meet requirements. If, after six months, it is felt that the PQ has not demonstrated that all requirements are met, the PQ may be extended at the discretion of the Qualification Team.

  5. The incident sheets were freely issued to all users, who sent them back to the leader/supervisor as issues arose. This way there was documented feedback and resolution of any issues over the six months.

  6. The PQ report then analyses the incident sheets and forms conclusions, etc.
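As mentioned above, here is a minimal sketch of how such an incident record could be structured in code. The cause categories and criticality levels come straight from the list; the class and field names are my own invention.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Cause(Enum):
    USE = "Misuse or misunderstanding of the system"
    MASTER = "Master data missing or incorrect"
    CONNECTION = "Connection or logon problems"
    PERMISSIONS = "User unable to carry out required transactions"
    ENHANCEMENT = "Further configuration/development required"
    TRANSACTION = "Transaction failed to meet the requirements"

class Criticality(Enum):
    CRITICAL = "critical"
    MAJOR = "major"
    MINOR = "minor"

@dataclass
class IncidentRecord:
    description: str
    reported_by: str
    cause: Optional[Cause] = None              # assigned during assessment
    criticality: Optional[Criticality] = None  # critical/major/minor
    resolution: str = ""                       # training, development, etc.
    user_signed_off: bool = False
    supervisor_signed_off: bool = False

    @property
    def resolved(self) -> bool:
        """Closed only when both parties have signed off."""
        return self.user_signed_off and self.supervisor_signed_off
```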

Hope this approach is of help.

regards

Niamh

Don’t get so hung up on terminology that you do ineffective, inefficient, non-value-added things. The whole purpose of validation is to ensure the thing is fit for intended use. (By the way, your validation master plan should provide guidelines for you!) To me, PQ on software doesn’t make much sense. I would have, as part of my validation plan, statements on how I would approach validation (which, in this case, would likely not include a traditional PQ) with a rationale for why I believed this was sufficient.

Regarding the post where the PQ was stretched out over six months… do you not consider the item under test suitable for use until after the six-month period?

Hi TARAKAM27,
Testing the software during the OQ and the PQ might sound very similar. They are similar, except that the OQ is executed in the validation environment and all the functionalities are tested in depth. Example: in a LIMS, if a test involves a calculation, both positive and negative testing should be done on the calculation field.
To me, the PQ would cover the overall workflows in the production environment. Example: samples are logged, received, results are entered, and samples are reviewed and authorized.
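One way to picture that workflow-level PQ check as a sketch; every method name on `lims` here is hypothetical, so substitute the calls your actual LIMS client exposes:

```python
# Sketch of a PQ-style end-to-end workflow check for a LIMS running in
# the production environment. All method names are hypothetical.

def run_sample_lifecycle(lims, batch_number, result_value):
    sample_id = lims.log_sample(batch_number)   # sample logged
    lims.receive_sample(sample_id)              # sample received
    lims.enter_result(sample_id, result_value)  # results entered
    lims.review_sample(sample_id)               # sample reviewed
    lims.authorize_sample(sample_id)            # sample authorized
    # The workflow passes if the final status matches expectations.
    assert lims.status(sample_id) == "AUTHORIZED"
    return sample_id
```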

Hope this helps.