You may find this article useful in relation to the second part of your question.
The ability to provide timely, accurate, and reliable data is central to the role of analytical chemists and is especially true in the discovery, development, and manufacture of pharmaceuticals. Analytical data are used to screen potential drug candidates, aid in the development of drug syntheses, support formulation studies, monitor the stability of bulk pharmaceuticals and formulated products, and test final products for release. The quality of analytical data is a key factor in the success of a drug development program. The process of method development and validation has a direct impact on the quality of these data.
Although a thorough validation cannot rule out all potential problems, the process of method development and validation should address the most common ones. Examples of typical problems that can be minimized or avoided are synthesis impurities that coelute with the analyte peak in an HPLC assay; a particular type of column that no longer produces the separation needed because the supplier of the column has changed the manufacturing process; an assay method that is transferred to a second laboratory where they are unable to achieve the same detection limit; and a quality assurance audit of a validation report that finds no documentation on how the method was performed during the validation.
Problems increase as additional people, laboratories, and equipment are used to perform the method. When the method is used in the developer’s laboratory, a small adjustment can usually be made to make the method work, but the flexibility to change it is lost once the method is transferred to other laboratories or used for official product testing. This is especially true in the pharmaceutical industry, where methods are submitted to regulatory agencies and changes may require formal approval before they can be implemented for official testing. The best way to minimize method problems is to perform adequate validation experiments during development.
What is method validation?
Method validation is the process of proving that an analytical method is acceptable for its intended purpose. For pharmaceutical methods, guidelines from the United States Pharmacopeia (USP) (1), International Conference on Harmonisation (ICH) (2), and the Food and Drug Administration (FDA) (3, 4) provide a framework for performing such validations. In general, methods for regulatory submission must include studies on specificity, linearity, accuracy, precision, range, detection limit, quantitation limit, and robustness.
Although there is general agreement about what type of studies should be done, there is great diversity in how they are performed (5). The literature contains diverse approaches to performing validations (as in References 6–10). This Report presents an approach to performing validation studies that encompasses much of the current literature and provides practical guidance. This approach should be viewed with the understanding that validation requirements are continually changing and vary widely, depending on the type of drug being tested, the stage of drug development, and the regulatory group that will review the drug application. For our purposes, we will discuss validation studies as they apply to chromatographic methods, although the same principles apply to other analytical techniques.
In the early stages of drug development, it is usually not necessary to perform all of the various validation studies. Many researchers focus on specificity, linearity, accuracy, and precision studies for drugs in the preclinical through Phase II (preliminary efficacy) stages. The remaining studies are performed when the drug reaches the Phase III (efficacy) stage of development and has a higher probability of becoming a marketed product.
The process of validating a method cannot be separated from the actual development of the method conditions, because the developer will not know whether the method conditions are acceptable until validation studies are performed. The development and validation of a new analytical method may therefore be an iterative process. Results of validation studies may indicate that a change in the procedure is necessary, which may then require revalidation. During each validation study, key method parameters are determined and then used for all subsequent validation steps. To minimize repetitious studies and ensure that the validation data are generated under conditions equivalent to the final procedure, we recommend the following sequence of studies.
Establish minimum criteria
The first step in the method development and validation cycle should be to set minimum requirements, which are essentially acceptance specifications for the method. A complete list of criteria should be agreed on by the developer and the end users before the method is developed so that expectations are clear.
For example, is it critical that method precision (RSD) be ≤ 2%? Does the method need to be accurate to within 2% of the target concentration? Is it acceptable to have only one supplier of the HPLC column used in the analysis? During the actual studies and in the final validation report, these criteria will allow clear judgment about the acceptability of the analytical method.
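A precision criterion such as RSD ≤ 2% is straightforward to check numerically. The following sketch shows the calculation for a set of replicate assay results; the replicate values are hypothetical, chosen only to illustrate the arithmetic.

```python
import statistics

def rsd_percent(values):
    """Relative standard deviation (%): 100 * sample std dev / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Six replicate assay results (% of label claim) -- illustrative numbers only
replicates = [99.8, 100.4, 99.5, 100.1, 99.9, 100.3]
print(f"RSD = {rsd_percent(replicates):.2f}%")  # prints "RSD = 0.33%"
```

A result comfortably below the 2% criterion would support accepting the method's precision; a value near or above it would prompt investigation before the method is finalized.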
Examples of minimum criteria are provided throughout this article that indicate practical ways to evaluate the acceptability of data from each validation study. The statistics generated for making comparisons are similar to what analysts will generate later in the routine use of the method and therefore can serve as a tool for evaluating later questionable data. More rigorous statistical evaluation techniques are available and should be used in some instances, but these may not allow as direct a comparison for method troubleshooting during routine use.
Demonstrate specificity
For chromatographic methods, developing a separation involves demonstrating specificity, which is the ability of the method to accurately measure the analyte response in the presence of all potential sample components. The response of the analyte in test mixtures containing the analyte and all potential sample components (placebo formulation, synthesis intermediates, excipients, degradation products, process impurities, etc.) is compared with the response of a solution containing only the analyte. Other potential sample components are generated by exposing the analyte to stress conditions sufficient to degrade it to 80–90% purity. For bulk pharmaceuticals, stress conditions such as heat (50 °C), light (600 FC), acid (0.1 N HCl), base (0.1 N NaOH), and oxidant (3% H2O2) are typical. For formulated products, heat, light, and humidity (85%) are often used.
The resulting mixtures are then analyzed, and the analyte peak is evaluated for peak purity and resolution from the nearest eluting peak. If an alternate chromatographic column is to be allowed in the final method procedure, it should be identified during these studies. Once acceptable resolution is obtained for the analyte and potential sample components, the chromatographic parameters, such as column type, mobile-phase composition, flow rate, and detection mode, are considered set.
An example of specificity criteria for an assay method is that the analyte peak will have baseline chromatographic resolution of at least 1.5 from all other sample components. If this cannot be achieved, the unresolved components at their maximum expected levels will not affect the final assay result by more than 0.5%. An example of specificity criteria for an impurity method is that all impurity peaks that are ≥ 0.1% by area will have baseline chromatographic resolution from the main component peak(s) and, where practical, will have resolution from all other impurities.
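The resolution criterion above can be evaluated with the usual USP formula, Rs = 2(t2 − t1)/(w1 + w2), where t1 and t2 are the retention times of the two adjacent peaks and w1 and w2 are their baseline peak widths. A minimal sketch, with hypothetical retention times and widths:

```python
def resolution(t1, w1, t2, w2):
    """USP resolution between adjacent peaks: Rs = 2*(t2 - t1) / (w1 + w2).
    Retention times and baseline peak widths must be in the same units."""
    return 2.0 * (t2 - t1) / (w1 + w2)

# Analyte at 5.2 min, nearest eluting peak at 6.1 min (illustrative values)
rs = resolution(t1=5.2, w1=0.40, t2=6.1, w2=0.45)
print(f"Rs = {rs:.2f}")  # prints "Rs = 2.12" -- meets the >= 1.5 criterion
```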
Demonstrate linearity
A linearity study verifies that the sample solutions are in a concentration range where analyte response is linearly proportional to concentration. For assay methods, this study is generally performed by preparing standard solutions at five concentration levels, from 50 to 150% of the target analyte concentration. Five levels are required to allow detection of curvature in the plotted data. The standards are evaluated using the chromatographic conditions determined during the specificity studies.
Standards should be prepared and analyzed a minimum of three times. The 50 to 150% range for this study is wider than what is required by the FDA guidelines. In the final method procedure, a tighter range of three standards is generally used, such as 80, 100, and 120% of target; and in some instances, a single standard concentration is used.
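The linearity evaluation described above reduces to an ordinary least-squares fit of response against concentration, with the slope, intercept, and correlation coefficient reported and the residuals inspected for curvature. A minimal sketch using hypothetical peak areas for the five standard levels:

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y = a + b*x.
    Returns (slope, intercept, correlation coefficient r)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5
    return slope, intercept, r

# Five standards at 50-150% of the target concentration
conc = [50, 75, 100, 125, 150]          # % of target
area = [1010, 1495, 2020, 2490, 3015]   # peak areas -- illustrative numbers
slope, intercept, r = linear_fit(conc, area)
print(f"slope={slope:.2f}, intercept={intercept:.1f}, r={r:.4f}")
```

In practice the correlation coefficient alone is not sufficient evidence of linearity; plotting the residuals across the five levels is what reveals curvature, which is why five levels rather than three are used for this study.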