Good question for discussion. I’ll play!
If nothing ever changes, then there's no reason for revalidation. But that's not realistic; change happens all the time: you get all sorts of patches on PCs (OS patches, security tool updates, etc.); networks get and lose nodes; PCs get new hardware (extra memory, etc.); unrelated software is added or updated on the host or server; and so on. No matter how carefully you control your changes (be they software or system), I don't believe there's any way you can guarantee that all of the above changes have NOT impacted the validated system.
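You can't prevent that drift, but you can at least detect it. As a minimal sketch (Python, with a hypothetical baseline file name; a real system would capture far more, e.g. installed patches and security tool versions), you might record an environment fingerprint at validation time and flag any drift afterwards:

```python
import hashlib
import json
import platform
from pathlib import Path

# Hypothetical location for the baseline captured at validation time.
BASELINE = Path("validation_baseline.json")

def snapshot() -> dict:
    """Capture a coarse fingerprint of the environment the system runs in."""
    return {
        "os": platform.platform(),           # OS name/version; picks up patches
        "python": platform.python_version(), # interpreter updates
        "machine": platform.machine(),       # architecture/hardware class changes
        "node": platform.node(),             # hostname changes
    }

def fingerprint(snap: dict) -> str:
    """Stable hash of a snapshot, for a quick changed/unchanged check."""
    return hashlib.sha256(json.dumps(snap, sort_keys=True).encode()).hexdigest()

if __name__ == "__main__":
    current = snapshot()
    if BASELINE.exists():
        baseline = json.loads(BASELINE.read_text())
        if fingerprint(baseline) != fingerprint(current):
            # Report exactly which attributes drifted since last validation.
            changed = {k: (baseline.get(k), v) for k, v in current.items()
                       if baseline.get(k) != v}
            print("Environment drift since last validation:", changed)
        else:
            print("No drift detected in tracked attributes.")
    else:
        BASELINE.write_text(json.dumps(current, indent=2))
        print("Baseline recorded.")
```

The point isn't the specific attributes; it's that "has anything changed?" becomes an answerable, auditable question rather than a guess.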
So, IMO, frequency is irrelevant; it's all about risk. Any change to a validated system poses a risk to that system continuing to operate in the expected manner. How you mitigate the risk is the real concern. The approach you take should be defined in your validation master plan, e.g. (and I'm simplifying greatly), "when non-functional changes are made, tests x, y, and z will be run to show the system remains in a validated state." A sketch of what that kind of mapping might look like follows below.
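That kind of VMP clause is essentially a lookup table from change category to required regression tests. A toy sketch (all category and test names here are illustrative, not from any particular VMP):

```python
# Hypothetical mapping from the validation master plan: which regression
# tests must be re-run for a given category of change.
REVALIDATION_MATRIX = {
    "os_patch":          ["smoke_suite", "interface_tests"],
    "hardware_change":   ["smoke_suite", "performance_tests"],
    "security_update":   ["smoke_suite", "access_control_tests"],
    "functional_change": ["full_validation"],  # escalate to full revalidation
}

def tests_required(change_type: str) -> list[str]:
    """Look up the tests the VMP prescribes; unknown change types escalate."""
    return REVALIDATION_MATRIX.get(change_type, ["full_validation"])

print(tests_required("os_patch"))         # ['smoke_suite', 'interface_tests']
print(tests_required("firmware_update"))  # ['full_validation'] (unknown -> escalate)
```

Note the default: anything the plan didn't anticipate escalates to full revalidation, which is the conservative answer the risk-based approach demands.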
I believe the ‘last validation’ question is, essentially, a trap. You give a date, and then they ask for records of changes to the system since that date. If there have been changes since it was last validated, you have some explaining to do and will likely not come out cleanly.
Can you clarify your approach? The way I read it, you update the validation summary with the latest change. Yet the test documentation shows an earlier date?
Again, good question. Hopefully we’ll get some good discussion.