Please add any comments you have about this test script, or feel free to upload your own test script example, so we can improve the test model we have and make it the world’s best test script.
Once this thread gets a sufficient response I will be sending our final draft to the guys at ISPE.org for inclusion in their next whitepaper or GAMP release.
I wouldn’t want to initial and date each test step. You already have the tester sign the page, so IMO that is sufficient.
The “Acceptance Criteria?” column is a bit confusing - is it intended for the tester to assess whether the results met the expected results, or is it for additional acceptance criteria?
If the ‘Acceptance Criteria’ column is an assessment, then I would require the tester to enter the incident log number there - primarily to allow more space for the actual results.
I wasn’t sure how/where you would record configuration information for the unit under test and for elements used in testing; e.g., software loads, consumables, etc.
Otherwise, it looks similar to what we already use!
I think I scrubbed this OK. I have a ton more information in my test scripts; they are treated as stand-alone documents. Deviations are listed in the comments field at the bottom of each page. I like to have more room for the test information than to lose space to a column that is not used a lot. Most of the information on your template is also in mine. I would skip the ‘tested by’ at the bottom, since the tester already signed each step.
Attached is a scrubbed copy of what we use. I eliminated all the intro data and just left in the script template.
Has anyone considered first writing the requirements, rules of thumb, and/or best practices? For example:
each test objective shall have a unique id;
all test equipment used in testing shall be uniquely identified;
all test equipment requiring calibration shall be in calibration prior to use;
it’s a good idea to record the last calibration date & calibration due date for any calibrated equipment used;
etc.
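As a thought experiment, the equipment rules above could be expressed as a simple record check. This is purely an illustrative sketch - the record fields and function names (`TestEquipment`, `calibration_ok`) are my own invention, not part of any template in this thread:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class TestEquipment:
    """One piece of equipment used in testing, per the rules of thumb above."""
    equipment_id: str               # rule: all test equipment uniquely identified
    requires_calibration: bool
    last_calibration: Optional[date] = None
    calibration_due: Optional[date] = None

def calibration_ok(eq: TestEquipment, test_date: date) -> bool:
    """Rule: equipment requiring calibration must be in calibration prior to use."""
    if not eq.requires_calibration:
        return True
    if eq.last_calibration is None or eq.calibration_due is None:
        # Recording both dates is itself one of the rules of thumb.
        return False
    return eq.last_calibration <= test_date <= eq.calibration_due

meter = TestEquipment("DMM-042", True, date(2024, 1, 10), date(2025, 1, 10))
print(calibration_ok(meter, date(2024, 6, 1)))  # True: within calibration window
```

Writing the rules down this way makes it obvious which fields a test script template has to capture (equipment ID, last calibration date, calibration due date) before the rule can even be checked.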
Hi all-
Regarding the test scripts arena: if one of your test script SET UP STEPS does not work (“fails”), how do you document that? An incident? Do you just N/A it and put a comment in?
If it’s in the setup steps and (I have to be careful here) has no bearing on the test (in other words, it corrects the setup to ensure the test is performed as really intended), then we redline the procedure, have a technical reviewer OK it (initial/date), and put a note in the report that it happened and will be corrected in the next revision of the procedure.
If it’s in an expected result section, we have to take much greater care. You can’t give the impression that you changed the expected results to match the actual results after you saw what happened. In this case, we either fail the test and then jump through hoops to explain what happened and why we think it could really be passed due to an error in the expected results, or we redline the expected results and go to great lengths to annotate the change, typically getting both technical and QA approval (initials/dates) on the spot.
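The two cases described above could be summarized as a small decision helper. This is only a sketch of the disposition logic as I read it from the post - the enum and function names (`StepKind`, `handle_failed_step`) are made up for illustration:

```python
from enum import Enum

class StepKind(Enum):
    SETUP = "setup"
    EXPECTED_RESULT = "expected_result"

def handle_failed_step(kind: StepKind, affects_test_outcome: bool) -> str:
    """Return the disposition for a test script step that did not work."""
    if kind is StepKind.SETUP and not affects_test_outcome:
        # Setup-only problem: redline the procedure, technical reviewer
        # initials/dates it, note it in the report, fix in next revision.
        return "redline + technical review + report note"
    # Expected-result problem (or setup problem that affects the outcome):
    # either fail the test and justify why it could really pass, or redline
    # with on-the-spot technical AND QA approval.
    return "fail-and-justify OR redline with technical + QA approval"

print(handle_failed_step(StepKind.SETUP, affects_test_outcome=False))
```

The point of the distinction is audit-trail integrity: a setup correction only needs to show the test still ran as intended, whereas touching an expected result needs independent approval so it can’t look like the criteria were rewritten after the fact.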