### Assay Linearity, Limit of Detection & Reportable Range

Linearity:
Linearity refers to the relationship between measured and expected values over the analytical measurement range. Linearity may be considered in relation to actual or relative analyte concentrations. For relative analyte concentrations, a dilution series of a sample may be studied; this approach is often used for immunoassays to verify that the measured concentration decreases with dilution as expected. Dilution is usually carried out with the appropriate sample matrix.

An alternative approach for detecting non-linearity is to assess the residuals of an estimated regression line and test whether the positive and negative deviations are randomly distributed. Whether the estimated regression line passes through zero should also be considered. Linearity is a prerequisite for a high degree of trueness.

Analytical Range:
The analytical range, or reportable range, is the analyte concentration range over which the measurement is within the declared tolerances for imprecision and bias of the method. In practice, the upper limit is often set by the linearity limit of the instrument response, and the lower limit corresponds to the lower limit of quantification (LoQ). It is usually presumed that the specifications of the method apply throughout the analytical measurement range.

Limit of detection:
Traditionally, the limit of detection (LoD) has been defined as the lowest value that significantly exceeds the measurement of a blank sample. The limit has thus been estimated from repeated measurements of a blank sample and reported as the mean plus 2 or 3 SDs of the blank measurements. ISO now recommends a formal procedure for estimating the LoD, because the traditional approach has two shortcomings.
First, the distribution of blank values is often asymmetric, making the application of parametric statistics inappropriate.
Second, repeated measurement of a sample with a true concentration exactly at the limit of statistical significance for blank measurements will, because of random error, yield a distribution with 50% of values below the limit and 50% exceeding it. Only if the true concentration of the sample is higher than the significance limit can one be sure that a measured value will exceed the limit with a probability higher than 50%.
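The 50% exceedance argument can be verified with a small simulation; the Gaussian blank model with unit SD and the 1.645 one-sided 5% cut-off are illustrative assumptions, not a prescribed model.

```python
import random

random.seed(0)

# Assume blank results follow Normal(0, 1); the one-sided 5% significance
# limit for blanks is then about 1.645 (the 95th percentile).
limit = 1.645

# Repeatedly measure a sample whose TRUE concentration sits exactly at
# that limit: random error pushes ~half the results below it.
n = 100_000
exceed = sum(random.gauss(limit, 1.0) > limit for _ in range(n))
print(f"fraction exceeding limit: {exceed / n:.3f}")  # close to 0.50
```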

In a statistical sense, one should take into account not only the Type I error (the significance test) but also the Type II error: the error of not detecting an analyte that is in fact present. Assay carryover must also be close to zero to ensure unbiased results.

Estimate of Type I error
Considering an asymmetric distribution of blank values, and applying a 5% significance level, a non-parametric estimate of the 95th percentile should be applied. Ranking the NB blank measurements, the 95th percentile may be calculated as the value of the (NB × (95/100) + 0.5)th ranked observation. The limiting percentile (Perc) of the blank distribution, which cuts off the fraction alpha of the upper tail, is called the limit of blank (LoB), as shown in the figure.
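A minimal sketch of the non-parametric percentile estimate, using hypothetical blank results; the linear interpolation used for a fractional rank is one common convention among several.

```python
def lob_nonparametric(blanks, alpha=0.05):
    """LoB as the (NB * (1 - alpha) + 0.5)th ranked blank observation."""
    ranked = sorted(blanks)
    nb = len(ranked)
    rank = nb * (1 - alpha) + 0.5      # e.g. NB * (95/100) + 0.5
    lo = int(rank) - 1                  # convert 1-based rank to 0-based index
    frac = rank - int(rank)
    if lo + 1 >= nb:                    # rank falls at or beyond the last value
        return ranked[-1]
    # Interpolate linearly between the two neighbouring ranked values.
    return ranked[lo] + frac * (ranked[lo + 1] - ranked[lo])

# Hypothetical blank measurements (right-skewed, as the text notes).
blanks = [0.0, 0.0, 0.1, 0.1, 0.1, 0.2, 0.2, 0.2, 0.3, 0.3,
          0.3, 0.4, 0.4, 0.5, 0.5, 0.6, 0.7, 0.8, 1.0, 1.4]
print("LoB =", lob_nonparametric(blanks))
```

With NB = 20 the rank is 20 × 0.95 + 0.5 = 19.5, so the estimate falls halfway between the 19th and 20th ranked blanks.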

Estimate of Type II error
To address the Type II error, it should be ensured that the measured concentration values of a low-concentration sample exceed the LoB with the specified probability. If the Type II error level beta is set to 5%, 95% of the measurements should exceed the LoB. Usually, the sample distribution can be estimated from the mean and standard deviation (SDs), giving

LoB = Mean(observed) - 1.65 × SDs
LoD = LoB + 1.65 × SDs
The LoD expresses the capability of the method and should not be used for direct comparison with actually measured sample values.
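The parametric step above can be sketched as follows; the replicate values for the low-concentration sample and the LoB carried over from a blank study are made-up illustrative numbers.

```python
import statistics

def lod_from_low_sample(lob, low_sample_results, z=1.65):
    """LoD = LoB + z * SD of a low-concentration sample (beta = 5%)."""
    sd = statistics.stdev(low_sample_results)
    return lob + z * sd

# Hypothetical replicate measurements of a sample near the detection limit.
low_sample = [1.4, 1.1, 1.6, 1.2, 1.5, 1.3, 1.7, 1.0, 1.4, 1.3]
lob = 1.2  # illustrative value, carried over from a blank study
print("LoD =", lod_from_low_sample(lob, low_sample))
```

Since the 5th percentile of the low sample's distribution sits 1.65 SDs below its mean, placing the LoD 1.65 SDs above the LoB ensures about 95% of measurements of a sample at the LoD exceed the LoB.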

What are type I and type II
A Type I error is the incorrect rejection of the null hypothesis when it is actually true. A Type I error typically leads to the conclusion that a purported effect or relationship exists when in fact it does not. An example is an assay that shows a positive result when the patient does not have the disease (a false positive). The rate of Type I error is the size of the test and is denoted by alpha; it usually equals the significance level of the test.

A Type II error is the failure to reject a false null hypothesis. A Type II error typically leads to the conclusion that an assay result is negative when the patient actually has the disease; it is a failure to detect what is present, a miss (a false negative). The rate of Type II error is denoted by beta and is related to the power of the test (1 - beta). The probability that an observed positive result is a false positive (as opposed to a true positive) may be calculated using Bayes' theorem. By Bayes' theorem, the true rates of false positives and false negatives are not a function of the accuracy of the test alone but also of the actual rate, or prevalence, of the condition within the tested population; often the dominant factor is the prevalence of the condition in the sample being tested.
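The Bayes' theorem calculation can be illustrated as follows; the 99% sensitivity, 99% specificity, and 0.1% prevalence figures are assumptions chosen to show how low prevalence inflates the false-positive fraction.

```python
def prob_false_positive(prevalence, sensitivity, specificity):
    """P(no disease | positive result) via Bayes' theorem."""
    p_pos_given_disease = sensitivity          # true-positive rate
    p_pos_given_healthy = 1 - specificity      # false-positive rate
    # Total probability of a positive result in the tested population.
    p_pos = (p_pos_given_disease * prevalence
             + p_pos_given_healthy * (1 - prevalence))
    return p_pos_given_healthy * (1 - prevalence) / p_pos

# Hypothetical assay: 99% sensitive, 99% specific, 0.1% prevalence.
print(f"{prob_false_positive(0.001, 0.99, 0.99):.2%}")
```

Even with a highly accurate assay, at this low prevalence roughly nine out of ten positive results are false positives, which is why prevalence, not test accuracy alone, drives the interpretation of a positive result.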