Multi-component chromatography system implementation

A question on validation methodology, and what should be in a user
requirements document…

We have a large chromatography client-server application in the lab,
where the system consists of 40 client computers, each with various
hardware components (the lab has 5 different types of HPLC
pumps/autosamplers and 4 different detectors, so there are many
different combinations per client; all models are supported by the
vendor software). Since the 'system' consists of hardware and
software, my user requirements would include the software application
requirements (reporting, audit trails, security, etc.) and hardware
requirements (operating parameters for each different pump model and
detector model).

In terms of timelines, we need to start rolling out the system as
fast as possible rather than validating it with all 40 clients as a
whole. I'm thinking of validating the software with 1 client
configuration (instead of all 40 clients), releasing the system, and
implementing the other 39 clients under change control. These
subsequent implementations wouldn't require any software testing,
since that was done on the first system, so each would need only a
software IQ and full equipment qualification for that client.

So here's my issue: if I do this, my user requirements document has
requirements for 9 pieces of hardware, but I've released a system
with 1 representative combination (pump A, detector A) of hardware.
Therefore my trace matrix for detectors B, C, and D will show no
testing. Those devices will be tested when the components are
implemented per change control; until then, what do I write in the
trace matrix?
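For concreteness, here's a minimal sketch of the gap I mean; the
requirement IDs, models, and protocol references are all made up:

```python
# Minimal sketch of the traceability gap at release. Requirement IDs,
# models, and test protocol references are hypothetical.
trace_matrix = [
    {"req": "URS-HW-01", "item": "Pump A",     "test": "OQ-PUMP-A-001"},
    {"req": "URS-HW-02", "item": "Detector A", "test": "OQ-DET-A-001"},
    {"req": "URS-HW-03", "item": "Detector B", "test": None},  # deferred to change control
    {"req": "URS-HW-04", "item": "Detector C", "test": None},  # deferred to change control
    {"req": "URS-HW-05", "item": "Detector D", "test": None},  # deferred to change control
]

# Requirements with no test reference at release time:
open_items = [row["req"] for row in trace_matrix if row["test"] is None]
print(open_items)  # ['URS-HW-03', 'URS-HW-04', 'URS-HW-05']
```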

Has anyone taken this approach? Any suggestions or criticism?

Why would you have to include HPLC requirements in the CDS documentation? If you're using HPLCs, each unit should have some kind of testing/qualification, so those requirements should already be documented somewhere. Assuming that each unit has been tested, all you need from the CDS is to demonstrate that each unit works correctly with the new software.

I recently worked on that same task (new CDS, 4 labs, 35+ instruments), and we started with 5 units. Now that we have finished the software part, we will add the remaining units in batches (by lab) under change control. For each unit, a few chromatographic runs demonstrate that the CDS-unit communication is working correctly. The HPLC requirements are the same no matter which software is used.

I suggest reducing the system scope and leaving the instruments themselves out of it. In that case, each HPLC (unit or model) would be a system in itself, but would still have to interact with the CDS: two systems working as one, but with independent purposes and requirements.

Hope this helps.

This sounds like a large-scale system and I would imagine that the cost/hour for any downtime is significant.

Here are some points to consider, but I’m sure there are plenty more!

In your email, you mentioned that all models of the analytical instruments are supported by the vendor software. A point to keep in mind here is that firmware is often key to whether something is supported: Model X at firmware version 1 may not be validated/supported on the system, while Model X at version 1.1 is. This can be critical to minimizing downtime, since discovering and remedying these things late in the game can be time-consuming. It also works into your options for traceability: you could define/test/approve each type of analytical instrument (model X, firmware Y.Y, etc.) for use within your system, and then qualify/calibrate (whatever your process requires) the individual machines as you put them on the system.
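As a rough illustration of what an approved-configuration check along those lines might look like (the models and firmware versions are placeholders, not from any real vendor support list):

```python
# Hypothetical approved-configuration matrix: (model, firmware) pairs that
# have been defined/tested/approved for use with the system.
APPROVED = {
    ("Pump-X", "1.1"),
    ("Pump-X", "1.2"),
    ("Detector-A", "3.0"),
}

def is_approved(model: str, firmware: str) -> bool:
    """True only if this exact model/firmware combination is approved."""
    return (model, firmware) in APPROVED

# Model X at firmware 1.0 fails even though the model itself is 'supported':
print(is_approved("Pump-X", "1.0"))  # False
print(is_approved("Pump-X", "1.1"))  # True
```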

I see many merits in your staged implementation plan, especially if you have a standardized client platform or 'image' that can be used. One thing to keep in mind is that you may have small sub-applications, services, Windows components, control cards, etc. that have to go onto particular clients in order to control particular analytical instruments, so a true 'client image' may only be a baselined starting point.
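A toy sketch of that idea, with invented client names and components:

```python
# A baselined client image plus per-client extras needed to control
# particular instruments. All names are hypothetical.
BASE_IMAGE = {"CDS client", "acquisition service"}

client_extras = {
    "LAB1-PC07": {"GPIB control card driver"},
    "LAB2-PC03": {"detector D firmware loader", "serial-port service"},
}

def install_list(client: str) -> set:
    """Everything that must be present (and verified in IQ) on a client."""
    return BASE_IMAGE | client_extras.get(client, set())

print(sorted(install_list("LAB1-PC07")))
```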

Current information. With the number of distinct components involved, you may find that new releases, patches, or communications about known challenges arise between the start of the implementation and the end. A mechanism to keep on top of all of this can be very effective: vendors often publish known limitations of their released products, and knowing these and staying current on new information may help keep that downtime low.

Definition of system. It sounds like you will have the whole works defined as the system (app/db, clients, analytical instruments). I'm guessing that this is an upgrade or application-change scenario and that you already have a defined quality management system that supports this structure. I imagine there are some challenges to working under this definition; change control and configuration management come to mind. If all system changes are handled through one process, it might be interesting to look at the historical frequency of changes, their scope/risk, and the overhead those changes require [high-frequency, low-risk normal analytical instrument operation vs. lower-frequency, higher-risk types such as main app/db changes].
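For illustration only, a back-of-the-envelope tally along those lines (all counts and hours are invented):

```python
# Rough look at change-control overhead by change type under one process.
history = [
    {"type": "instrument add/swap (low risk)", "per_change_h": 4,  "per_year": 30},
    {"type": "client patch (medium risk)",     "per_change_h": 10, "per_year": 6},
    {"type": "main app/db change (high risk)", "per_change_h": 80, "per_year": 1},
]
for h in history:
    print(h["type"], "->", h["per_change_h"] * h["per_year"], "h/yr")
```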

Personal opinion only, of course.

I'm not clear on this one. Are you saying that this giant client-server chromatography system controls each HPLC, including detectors, pumps, and autosamplers? Or is the client-server system a data historian?

If it does control them, I would recommend a family approach. Much like Graham suggested, qualify one member of the family and then add subsequent members via change control. The important issue is to demonstrate at installation that the two are equivalent. Then move to the next family. I would probably define the families in a validation plan.
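A minimal sketch of how those families might be laid out (model names are placeholders):

```python
# Family approach: one fully qualified representative per family; other
# members come on via change control plus an equivalence demonstration
# at installation. All model names are hypothetical.
FAMILIES = {
    "pumps":     {"representative": "Pump-A", "members": {"Pump-A", "Pump-B", "Pump-C"}},
    "detectors": {"representative": "Det-A",  "members": {"Det-A", "Det-B", "Det-C", "Det-D"}},
}

def qualification_route(model: str) -> str:
    """Return how a given model gets qualified under the validation plan."""
    for fam in FAMILIES.values():
        if model in fam["members"]:
            if model == fam["representative"]:
                return "full qualification (family representative)"
            return f"change control + equivalence to {fam['representative']}"
    return "not covered by the validation plan"

print(qualification_route("Pump-B"))  # change control + equivalence to Pump-A
```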

If it is just a data historian, then the hardware of the HPLCs doesn't really matter, does it?