What are the common pitfalls experienced when Validating Computer systems?


In my company we are installing the TrackWise system, initially to manage our on-site Incident Report System. This is an electronic Quality Management System running on a web-based server.

As this is my first experience of being involved in a Computer Systems Validation project, I see it as a relevant starting point for the research I am undertaking towards a Masters degree, which I am studying for part time.

I am just beginning my research at this stage and hope shortly to formulate a questionnaire for distribution to industry colleagues, to identify the common pitfalls and challenges of computer systems validation. I may focus this on the validation of the TrackWise system, i.e. what are the common pitfalls when validating a TrackWise system; however, I am not entirely sure what scope and direction this will take yet, and may broaden the questionnaire and research to cover all aspects of Computer Systems Validation.

I would appreciate it if anyone could provide feedback and/or ideas from your own experiences of Computer Systems Validation.

Thank you.

Take a look at this thread, for starters:


See how far that takes you.


Some believe that Off The Shelf (OTS) software applications support a lighter validation strategy and effort, but this is often not the case. Without naming names, in two OTS application validation efforts I’ve been a part of, our work uncovered significant software bugs. One vendor ended up giving us another license because, by the end of the validation effort, they had rolled to the next version.

Additionally, the software validation packages that sometimes come with an application (usually at extra cost) are generally inadequate, in my experience. Organizations should expect to have to develop additional test cases to ensure a robust validation effort.

Good luck with your research!



We have done a number of projects in which COTS packages were involved. In addition to inadequate documentation, we have observed the following major gaps in validation.

  1. User Requirement Specification (URS) definition - Generally this document is not available, or expectations are not defined thoroughly. This leads to inadequate testing.
  2. Configuration Management - Configuration documents are often not available, either from the vendor’s commissioning engineer or from the user’s implementation team. This again leads to a mismatch between expectation and implementation, and it also affects change management and later requalification.
  3. SOPs - SOPs are not available or they do not cover such systems adequately.
  4. Test evidence - Typically a COTS vendor will provide basic IQ and OQ documents to demonstrate that their package works as specified. However, the main document missing is the PQ, which verifies the user’s configuration and performance against expectations. This document also rarely proves how electronic record creation, storage, etc. are handled for proper data authentication. Again, all of these should have been identified in the URS document, so that each can be linked to PQ tests.
  5. Certificates from vendors - Generally vendors provide two different certificates. The first is a “Conformance Certificate”, stating conformance of the product with some practice or with the vendor’s internal specifications. The other is a “21 CFR Part 11 Compliance Certificate”. Both are misleading, more so for Part 11. As a user, we should be more concerned with the performance of the system against the URS, and the Conformance Certificate does not address that. Secondly, a Part 11 compliance certificate means only that the system is capable of being configured accordingly. This is often sold to, and misunderstood by, the user as the system already being Part 11 compliant! Unless such features are actually configured and tested for compliance during testing, it will come as a rude shock during an audit.
  6. Network or Infrastructure Qualification - One of the most important aspects today, as most COTS systems such as TrackWise are client-server based, and actual data transfer depends upon an adequately designed and properly supported infrastructure.
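The URS-to-PQ linkage mentioned above is essentially a traceability exercise: every URS item should map to at least one PQ test case. As a loose illustration only (all requirement IDs and test names here are invented, not from any real TrackWise validation package), a minimal coverage check could be sketched like this:

```python
# Hypothetical traceability sketch: verify every URS item is covered
# by at least one PQ test case. IDs below are invented for illustration.

urs_items = {
    "URS-001": "Audit trail records user, timestamp, and action",
    "URS-002": "Incident records require e-signature on closure",
    "URS-003": "Role-based access restricts record deletion",
}

# Each PQ test case lists the URS items it exercises.
pq_tests = {
    "PQ-TC-01": ["URS-001"],
    "PQ-TC-02": ["URS-002", "URS-003"],
}

# Collect every URS ID referenced by any PQ test, then find the gaps.
covered = {req for reqs in pq_tests.values() for req in reqs}
untested = sorted(set(urs_items) - covered)

if untested:
    print("URS items with no PQ test coverage:", untested)
else:
    print("All URS items are traced to at least one PQ test.")
```

In practice this mapping usually lives in a requirements traceability matrix rather than code, but the principle is the same: a PQ gap shows up as a URS item with no linked test.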

Hope the above is useful. Good luck…


I think a more common pitfall is that validation testing ends up being the QA testing for the software. Validation testing has to be more SOP-specific and user-specific, and much smaller in scope than QA testing.


A good observation. Although verification and validation are both necessary, understanding the distinction is important so that one area or the other is not overlooked.