Re-Validation

I’d like to throw this out for discussion:

I disagree with the opinion that a computer system needs to be “re-validated” at certain points in its life. I feel that the system is validated once and then kept in a validated state through a risk-based process of careful change control. But auditors constantly ask me, “When was the last time the system was validated?”

This is why, when I install a major vendor upgrade on my configurable COTS system, I revise the Validation Summary document to reflect the change. I point to this revision when asked, “When was the last time the system was validated?”

How do others answer this question? What document do you show an auditor when he/she asks the question?

Good question for discussion. I’ll play!

If nothing ever changes, then there’s no reason for revalidation. But that’s not feasible; change happens all the time: you get all sorts of patches on PCs (OS patches, security tool updates, etc.); networks gain and lose nodes; PCs get new hardware (extra memory, etc.); unrelated software is added or updated on the host or server; and so on. No matter how carefully you control your changes (be they software or system), I don’t believe there’s any way you can guarantee that all of the above changes have NOT impacted the validated system.

So, IMO, frequency is irrelevant. Indeed, it’s all about risk. Any change to a validated system poses a risk that the system will no longer operate in the expected manner. How you mitigate that risk is really the concern. The approach you take to these efforts should be defined in your validation master plan; e.g. (and I’m simplifying greatly), “when non-functional changes are made, tests x, y, and z will be run to show the system remains in a validated state.”
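To make that concrete: a VMP rule like that boils down to a lookup from change category to mandated regression tests. Here’s a minimal sketch (the categories and test names are invented for illustration, not taken from any real VMP):

```python
# Purely illustrative sketch: the change categories and test IDs below
# are hypothetical, not from any real validation master plan.
REGRESSION_MATRIX = {
    "os_patch":        ["smoke_login", "audit_trail_check"],
    "security_update": ["smoke_login", "access_control_suite"],
    "config_change":   ["affected_workflow_test"],
    "major_upgrade":   ["full_oq_suite", "part11_suite"],
}

def tests_required(change_category):
    """Return the regression tests the (hypothetical) VMP mandates
    for a given category of change."""
    if change_category not in REGRESSION_MATRIX:
        # Anything unclassified falls back to a documented risk assessment.
        raise ValueError(
            "No VMP rule for %r; perform a documented risk assessment."
            % change_category)
    return REGRESSION_MATRIX[change_category]

print(tests_required("os_patch"))  # ['smoke_login', 'audit_trail_check']
```

The point isn’t the code, it’s that the decision rule is written down in advance rather than improvised per change.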

I believe the ‘last validation’ question is, essentially, a trap. You give a date, and then they ask for records regarding changes to the system. If there have been changes since it was last validated, you have some explaining to do and will likely not come out of it cleanly.

Can you clarify your approach? The way I read it, you update the validation summary with the latest change. Yet the test documentation shows an earlier date?

Again, good question. Hopefully we’ll get some good discussion.

I agree it depends on what the change is, what impact it has and what the risk is.

It might be possible to cover some changes (e.g. patches and minor upgrades - 5.1 to 5.2, but not 5.1 to 6.0) through procedures rather than “validation” - with appropriate consideration, testing, etc.?

However! If lots of minor changes are made - at what point do they roll up into a major change?

The work needed to ensure a changed system remains in, or comes into, a validated state may be significantly less than the work to validate it originally - in fact I would expect this to be the case for most changes. But I think each needs evaluating on its own merits.

Cat

Thanks for the feedback.

I just don’t like the word “re-validation” in the way auditors define it. They really mean “re-tested” (because they don’t understand that testing is just a subset of validation) and they use the term to mean “re-running ALL of your tests”.

If we upgrade to the vendor’s latest version, we consider this to be a major change and we will use a risk-based process to decide which of the tests need to be re-run. In this case, we revise the Validation Summary (VS). These re-run tests will have dates which coincide with the VS. If the new version doesn’t impact Part 11 functionality, for example, the Part 11 tests will not be re-run and they will have dates prior to the VS.

If I simply replace Mary’s name with Bob’s because Bob is the new workflow approver, I will show the state of the workflow approver property both before and after the change, and I will run a simple test to prove that it works as intended.
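In code terms, that before/after check plus the functional test could be as small as this sketch (the accessors like get_workflow_approver() are hypothetical stand-ins; your COTS system may only expose this through its admin UI or screenshots):

```python
# Hypothetical sketch of a before/after check for a configuration change.
def test_approver_change(system):
    # Documented state BEFORE the change
    assert system.get_workflow_approver() == "Mary"

    system.set_workflow_approver("Bob")  # the controlled change itself

    # Documented state AFTER the change
    assert system.get_workflow_approver() == "Bob"

    # Simple functional test: a new item routes to the new approver
    item = system.submit_for_approval("test record")
    assert item.pending_with == "Bob"
```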

But when asked, “When was the last time the system was validated?”, I don’t want to just show the auditor 5 pages of documentation on my most recent change (where I replaced Mary’s name with Bob’s). So, I show him/her: 1) the latest revision of the VS - which coincides with the last time we upgraded to the vendor’s most recent version - and 2) the 5 pages of documentation on the most recent change. But, of course, the dates are different, so I can’t give a simple answer to the question, “When was the last time the system was validated?” The answer involves a long discussion of the difference between “validation” and “testing”. IMO, you can test and re-test all the time, but your system should always be validated.

I agree that change happens all the time, and NO change goes undocumented in our system. We don’t handle any changes procedurally. I just don’t have the time to “re-validate” (re-run ALL tests) even once per year - and that’s how often the vendor issues a new version. So, we base all testing decisions on risk and re-run tests when we feel it’s appropriate.

What it sounds like you’re doing is exactly what the auditors should be looking for: risk-based decision making and activities. The risk analysis would be part of the “re-validation” (non-testing).

I expect that auditors get jaded - they probably see no such effort at so many sites that they come to expect the same old story everywhere they go. If you’re really doing the risk analysis, running the testing justified by that analysis, and documenting the effort, I can’t imagine you’d get dinged. And even if you were, a quick appeal should resolve the matter.

Have you been written up? If so, I’d be interested to hear what the rationale was and how it was resolved.

Yeah, we got written up once. Not sure if it was a true software glitch or user error, but something out of the ordinary happened during an audit - great timing. Anyhow, the finding said that our system “was not validated”. That shows the ignorance of some auditors. The system was validated, but an error occurred. These things happen.

Thanks for all your input. You’ve been a big help.

Hi.

As long as the process/system/item operates in a state of GMP control and no changes have been made to the process or output product, the process/system/item does not have to be revalidated.

This policy must be stated within the company’s practices and procedures documentation. A register of validatable processes/systems/items should indicate the validation policy for each one.

Alex Kennedy

Systems do not need to be ‘re-validated’, but they should be subject to periodic review to ensure they maintain compliance. The frequency of periodic review for a system should be specified within the original validation report, and should be risk-based. Assuming your change-control approach is robust, the risk from the cumulative effect of change should be mitigated. As a general rule, we tend to treat software sub-version changes, e.g. v5.9.1.6 to v5.9.2.1, as something that can be introduced using local change control, whilst major up-versioning, e.g. v5.9.1.2 to v6.0, requires some serious validation effort.
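That rule of thumb is easy to state mechanically. A minimal sketch, assuming version strings formatted like the examples above and assuming our own major/minor convention (it is a local convention, not any standard):

```python
def change_control_path(old, new):
    """Classify an upgrade by its major version number.
    Local rule of thumb only: a major-version change gets full
    validation effort; anything below that goes through local
    change control."""
    old_major = int(old.lstrip("v").split(".")[0])
    new_major = int(new.lstrip("v").split(".")[0])
    return "full validation" if new_major != old_major else "local change control"

print(change_control_path("v5.9.1.6", "v5.9.2.1"))  # local change control
print(change_control_path("v5.9.1.2", "v6.0"))      # full validation
```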

Using a ‘periodic review’ approach answers the question of “when was the system last validated?” by having a validation report as a deliverable. Effectively, you are reporting on the period between the last validation report (be it the original validation report or the last periodic review) and the present, focusing on system change and change to procedures/practices. This gives you a baseline from which you can confidently state the system is in compliance, and it provides traceability back to historic documentation (i.e. when the system was first validated).

Monkey.

However…

With that said, when you do operate a periodic review process, it is sometimes easier to ‘re-validate’ (start again) if the legacy documentation doesn’t give you the confidence you need on which to base your review.

Monkey