What are the risks of not doing computer validation?

If you do not validate your software, almost any product problem you can imagine could occur if you rely on data from that system. If the data are corrupt, incomplete or inaccurate, you could make incorrect decisions and release product that could harm patients. That is the basic premise of validation: you need to be able to trust the data in order to produce product appropriately.

What scenarios could cause risk to the business, such as recalls or
discontinued production?

  1. Being shut down because we are not complying with the law.
  2. Assuming a function is working when it has never been tested, for example what happens when there is an out-of-specification result; if the system does not do what our SOP says, or what we want it to do, then we have risk.
  3. Computerized analytical lab equipment may report false negatives or false positives if it is not validated at the boundaries (see the sketch after this list).
  4. Backups may not be performed; this is more common than most people think. There are many variables in 'automated' backups and disaster recovery.
  5. Incorrect setup of equipment may mean that results are incorrect. This could have disastrous consequences and would not be caught until the equipment fault was found.
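To make points 2 and 3 concrete, here is a minimal sketch (in Python, with hypothetical specification limits and result values, not taken from any real system) of the kind of boundary check that only testing at the specification limits would exercise:

```python
# Minimal sketch: spec limits and result values are hypothetical.
SPEC_LOW = 95.0    # hypothetical lower specification limit (% assay)
SPEC_HIGH = 105.0  # hypothetical upper specification limit (% assay)

def is_out_of_specification(result: float) -> bool:
    """Return True if the result falls outside the specification limits."""
    return result < SPEC_LOW or result > SPEC_HIGH

# Boundary cases are exactly where an unvalidated comparison goes wrong;
# for example, using '<=' instead of '<' would wrongly flag a passing 95.0.
assert is_out_of_specification(94.99) is True     # just below the lower limit
assert is_out_of_specification(95.0) is False     # exactly on the lower limit: in spec
assert is_out_of_specification(105.0) is False    # exactly on the upper limit: in spec
assert is_out_of_specification(105.01) is True    # just above the upper limit
```

A comparison like this passes every "typical" value you feed it and only misbehaves exactly at the limits, which is why untested boundaries lead to false positives and false negatives.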

I'm not quite sure about the purpose of your post, as the actual decision to "validate" is not really optional, whether it be computer validation or any other form of validation. Also, the 'software' is just one component of the end-to-end 'system' validation life-cycle.

Ultimately, if a company chooses not to do computer validation (and many would prefer this route if it were not enforced by the FDA/MHRA etc.), then the "risk" you run is that the regulator either does not grant you a product license in the first place, or withdraws that license later in the life-cycle in the belief that they are 'protecting the patient'. They don't give a damn about business risks to the company.

I may have misunderstood your question though?

I would prefer to ask about inadequate validation. Does anyone really implement software with no life-cycle or testing? Now that would stir a debate. What if you don't test backups? Is that really a quality issue, or is it more a matter of good IT practice? That would be a nice discussion. What if improper values are not tested? Security not tested? Mathematical functions not tested?

Hi David,

It wasn't really a question, but more of an interesting post I found on the FDA.com website that I thought might be of interest here.

Regards

Ah! Now, the 'level' of validation revolves entirely around company policy/procedure and cultural expectations for computer validation, rather than well-defined regulations (high-level guidance). However, I personally don't know of any company that does no software life-cycle/testing at all, although many "pretend" to do more than they actually do, up until they get caught out by a systems inspector. They live to fight another day and then have to do it properly :wink:

Although a passionate believer in computer validation, on the right
systems and to a risk-appropriate level of detail, let me play
devil’s advocate for a moment.

Much of the money spent on CSV in my experience (dealing with most
of the big-pharma firms and a few device manufacturers) is wasted.
Very little of our work has any bearing on quality, and I have even
encountered one situation where "compliance" was at odds with real
quality.

The light at the end of the tunnel was supposed to be risk-based
validation. However, the very deep risk aversion of both companies
and employees of drug/device companies has meant that in many
situations this is just another deliverable added to your existing
GAMP V-model. I have yet to see a company employ risk-based validation in
a thoughtful manner to provide more assurance in areas of real risk,
and to scale back on the burden in situations where the risk to patients
is bordering on non-existent.

An example: being a diligent validation guy, I want to provide
irrefutable proof that the version of code in a series of PLCs and
SCADA terminals is the same as was validated. I can do this very
quickly and without fear of contradiction using an SHA-1 hash
generator tool. The utility does not store electronic records and
is reliable. Instead of simply writing a very quick requirements and
testing document that confirms successful installation, I am having
to fight many people who want this utility to be subject to full
SDLC documentation, source code review, etc.
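Purely as an illustration of the kind of check described above (the file names and baseline digest below are hypothetical, and this uses Python's standard hashlib rather than any specific vendor tool):

```python
# Minimal sketch: verify deployed program files against validated SHA-1 baselines.
import hashlib
from pathlib import Path

def sha1_of_file(path: Path) -> str:
    """Compute the SHA-1 digest of a file, reading it in chunks."""
    digest = hashlib.sha1()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical baseline recorded at the time of validation.
validated_baseline = {
    "line1_plc_program.bin": "da39a3ee5e6b4b0d3255bfef95601890afd80709",
}

def verify_against_baseline(directory: Path, baseline: dict) -> list:
    """Return the names of files whose current hash differs from the baseline."""
    mismatches = []
    for name, expected in baseline.items():
        if sha1_of_file(directory / name) != expected:
            mismatches.append(name)
    return mismatches
```

The point is that the comparison itself is trivial and deterministic, which is why demanding full SDLC documentation for such a utility adds cost without adding assurance.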

So instead of using a tool that will allow us to rapidly provide
irrefutable proof of whether our change control is working, we face
either not verifying data integrity at all (like the vast majority
of firms), or doing it via an intrinsically error-prone manual
process. In the name of “validation” we stop ourselves from doing
the easy things that can make a measurable difference.

In this personal tale of woe (wahh!!) I see a microcosm of all that
ails CSV in general.

Validation is more than important: it is vital. However, many of
its practitioners place imaginary requirements (cultural myths) and
braindead template approaches ahead of "documented evidence that a
system performs as intended".

So to answer the question:
the risks of not validating are all of those listed below (i.e. possibly
very serious), but in a great many situations these risks will be
entirely procedural, i.e. your internal QA, your boss, etc. will take
a dim view, but the effect on product quality and patient health
will be negligible.

[quote=chandra]Much of the money spent on CSV in my experience (dealing with most of the big-pharma firms and a few device manufacturers) is wasted. Very little of our work has any bearing on quality, and I have even encountered one situation where "compliance" was at odds with real quality.

Validation is more than important: it is vital. However, many of its practitioners place imaginary requirements (cultural myths) and braindead template approaches ahead of "documented evidence that a system performs as intended".[/quote]

Generally I would agree with most of what you say. On the 2 points above:

  1. Lots of the money wasted was down to poor client project management and to ignoring the value of doing 'effective risk assessments' to determine the 'level' of validation required; a failure of many companies even today! What I would also say, though, is that the "validation revolution" has forced most companies to implement more effective quality systems and methodologies compared with what they had 15 years ago, if they had a quality system at all!

  2. Large numbers of so-called "industry leading practitioners", particularly within the USA and UK, have introduced 'imaginary requirements' and 'cultural myths' to, allegedly, advance their own business agendas. This is sad but true, and I would agree that the 'braindead template' and the various CSV development model approaches you refer to do 'cloud the issue' of "do you have documented evidence that a system performs as intended", i.e. a one-size-fits-all template does not fit all scenarios, far from it in fact. Again, many companies continue to be driven by a template and "set in stone" deliverables, when some good old-fashioned common sense should prevail!

To your question, the only study I have ever heard of along these
lines was at a major pharmaceutical company in Connecticut. At a
conference I attended, a speaker referenced a CSV effectiveness
review which showed that validated and unvalidated systems failed at
exactly the same rates.

Of course, this can all be rationalized away as the result of poor
validation. My earlier post on this thread goes to this very point-
that the vast majority of our industry’s efforts in this area are
wasted. Trivia such as document formatting and enforcing rigid
compliance with SOPs (that were poorly written in the first place)
seem to matter more than patient health and safety.

Indeed rigid compliance with GAMP or guidance documents (written by
FDA personnel who clearly have very limited experience with the
subject matter) seems to have become the goal. “Compliance over
quality” is strangling the industry.

I can immediately think of 2 situations within the last year where a
client has continued with (or implemented) a manual process to avoid
the perceived difficulties of CSV. The risk of error is massively
higher with the manual process than with an automated system.
Yet- CSV is seen as too time consuming and too expensive.

How did we get here? How do we get better?

The FDA itself bears much of the blame. The entire Part 11 debacle
is symptomatic of the problem. Ill-considered and poorly written
regulations (clearly designed for only a narrow subset of the
systems within their scope) are supported by even less
helpful guidance documentation.

A review of warning letters would seem to indicate that the agency
only slams those where they find evidence of widespread quality
systems failure. Failure to validate your Visual SourceSafe
application? I haven’t seen that one. Failure to use the correct
corporate template to present the necessary data? Nope- haven’t
seen that one either.

The FDA has very little incentive to clarify matters. All of the
over-shoot in compliance efforts can easily be misconstrued as being
better than just making it over the bar.
For manufacturers, the consequences of compliance failure (in terms
of 483s, commercially, and to a manager’s career) are massive. Risk
based validation is generally failing because the agency encourages
it in its publications but fails to do enough to alleviate fear.
(Are there enough people at the agency who understand the subject
well enough to really comment?)

The industry itself has failed to infuse its culture with sufficient
understanding and knowledge to make informed judgements. So staff
who feel that they lack sufficient knowledge either hire consultants
(who often lack sufficient hands-on experience themselves) or just
take a “do everything” approach to minimize the risk that they will
get fired for making a mistake.
How many times have we heard presenters giggle through a training
session “… of course, it is always good to have your validation
binders make a big thud when you drop them on the table…” If it
is true that inspectors are more impressed by countless reams of
paper than by a more thoughtful but "light-weight" alternative… then
we are all in a lot of trouble.

Both the agency and the industry must go back to basics. Less
prescriptive solutions, fewer (but more useful) SOPs, and more of an
emphasis on quality. (How many people in the industry use
statistics for anything other than product inspections? If CSV is
so important, surely it too merits measurement?)

The last part of this polemic addresses a sensitive subject:
validation and quality people. As a profession, validation and
quality often attract people who are obsessed with detail. In the
right measure this is no bad thing. However, the industry is awash
with people who cannot see the wood for the trees. Some were drawn
out of interest, others will have been pushed out of operations
roles where they were seen as slowing everything down. Validation
and quality can (and do) grind many perfectly good projects to a
halt simply out of nit-picking. Managers need to ensure that they
have the right mix of personalities to ensure that something as
important as drug/device quality does not become a dumping ground
for the obsessive-compulsive.

Please be in no doubt- I am not arguing for less quality assurance.
I am just pleading for an intelligent approach.

I agree with you to some extent. I think having a test system is key to finalizing your user requirements. Let's face it, most of the vendor audits done by clients are a task to check off without actually looking in detail at what and how the vendor tested their system. Some vendors are great, some aren't, so relying on the audit to determine the extent of validation can get you in trouble if you're using it as the basis for the extent of testing. Having a test system allows you to truly see whether the system meets your goals and to finalize your requirements.

As for the statement on testing what was already tested by the vendor, I both agree and disagree. I've been presented with a model where you classify your requirements by regulatory risk (hi/lo) and business need (hi/lo), put that in a matrix, and test the critical requirements (a small sketch of this follows below). I think that may make sense if you want to take that type of risk, but I'd only do this if I was 100% confident in my COTS vendor product.
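For illustration only, a minimal sketch of how such a classification matrix might be written down; the category names and test-extent outcomes are hypothetical, not taken from the model described above:

```python
# Minimal sketch: map regulatory risk x business need onto a test-extent decision.
from enum import Enum

class Level(Enum):
    LO = "low"
    HI = "high"

def test_extent(regulatory_risk: Level, business_need: Level) -> str:
    """Return a hypothetical test-extent decision for one requirement."""
    if regulatory_risk is Level.HI and business_need is Level.HI:
        return "full positive and negative testing"
    if regulatory_risk is Level.HI or business_need is Level.HI:
        return "positive testing of the requirement"
    return "rely on vendor testing; verify under change control"

# Example: a requirement with high regulatory risk but low business need.
print(test_extent(Level.HI, Level.LO))  # -> positive testing of the requirement
```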

I personally test all my user requirements (positive testing only, unless we've deemed that the application or vendor didn't sufficiently test their system) to ensure they are met in my system. I've worked on 'legit systems used across pharma' and found audit trail errors, login issues, etc. But you call the vendor, figure out what's wrong, assess the risks/impact and continue, like you said. At least you know something is wrong because you tested the requirement.

I think the time wasted isn't due to re-hashing testing; I've personally found that a lot of time is wasted on the GAMP risk assessment, which I've seen beaten to death at companies with zero value added. I don't see anything wrong with a risk-based validation for COTS systems that positively tests the requirements and moves on. I don't know why I'm assessing the probability of obscure events, etc., for something I didn't develop. And if those scenarios do happen, how are you going to test for them? That's why we have a problem reporting/change control mechanism in place once in production.

Here's something I have been kicking around for a while. It takes about 6-9 months to implement an off-the-shelf system, including validation. How long would it take to implement an unvalidated system? One month?
So I have to pay for software for about a year before I get to use it. That doesn't seem right.

My basis for using the unvalidated software is that, through the use of the software, I am actually more compliant. For instance, with an off-the-shelf document system I can control versions, approvals, etc. a lot better in an electronic system than in a paper-based one.