Background
Mass spectrometry (MS) experiments in biology and chemistry generate vast amounts of data, which are usually analyzed by an automated computational pipeline consisting of pre-processing, significance testing, classification, and clustering. Careful pre-processing is essential for a successful analysis with reliable results. One pre-processing step is the detection of outliers, observations that are extreme for technical reasons. Plausible outlying observations can then be examined carefully and corrected or eliminated if necessary. However, because manually examining all observations for outliers is time-consuming, plausible outlying observations must be detected automatically.
The identification of statistical outliers is a subject of some controversy in statistics[1]. Several outlier detection algorithms have been proposed for univariate data, including Grubbs’ test[2] and Dixon’s Q test[3]. These tests were designed under the normality assumption, so they may produce unreliable outcomes when there are few replicates; moreover, they are not applicable to duplicated samples. Another naive approach detects outliers by constructing lower and upper fences for the differences between two samples, Q1 - 1.5 IQR and Q3 + 1.5 IQR, where Q1 is the lower 25% quantile, Q3 is the upper 25% quantile, and IQR = Q3 - Q1. Observations are claimed to be outliers if they fall below the lower fence or above the upper fence. However, this may generate spurious results because variability is heterogeneous in high-throughput data, including data generated by MS experiments.
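As a concrete illustration, this naive rule can be written in a few lines of R; the replicate vectors x1 and x2 below are simulated stand-ins, not data from the paper.

```r
## Naive fences on the differences between two replicated samples.
## x1 and x2 are simulated log-intensity vectors for illustration only.
set.seed(1)
x1 <- rnorm(1000, mean = 10, sd = 2)
x2 <- x1 + rnorm(1000, sd = 0.3)

d   <- x1 - x2                      # differences between the two samples
q   <- quantile(d, c(0.25, 0.75))   # Q1 and Q3
iqr <- unname(q[2] - q[1])          # IQR = Q3 - Q1

lower <- q[1] - 1.5 * iqr           # lower fence
upper <- q[2] + 1.5 * iqr           # upper fence
outliers <- which(d < lower | d > upper)
```

Note that the fences here are constants: every peptide is judged against the same cutoffs regardless of its intensity level, which is exactly the weakness discussed next.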
Figure 1 shows a log-scale scatter plot of technically duplicated samples obtained under the same biological condition in an MS experiment. The variability differs according to intensity level, so the naive outlier detection method, which ignores this heterogeneity of variability, may often miss true outliers at high intensity levels and select false outliers at low levels. If a large number of technical replicates could be obtained for each peptide under the same biological condition, outliers could be examined peptide by peptide. In practice, however, MS experiments are run with few replicates because of their high cost and the limited supply of biological samples.
Cho et al.[4] proposed a more elaborate approach to detecting outliers in MS data with low false positive and false negative rates when the number of technical replicates is two. The algorithm utilizes quantile regression for duplicate MS experiments, and the accompanying R package (called OutlierD) can only be used for duplicate experiments. We therefore propose a new outlier detection algorithm for multiple high-throughput experiments, particularly those with few, but more than two, replicates.
Classical Approaches
Suppose that there are n replicated samples and p peptides in the MS data, and let $x_{ij}$ be the observation for peptide j in the i-th replicated sample from experiments under the same biological or experimental condition, where i = 1,…,n and j = 1,…,p. For convenience, let $\mathbf{x}_j = (x_{1j}, \ldots, x_{nj})^T$ denote the vector of observations for peptide j. Typically, n is small and p is very large in high-throughput data, i.e., p >> n. In addition, let $y_{(1)j} \le y_{(2)j} \le \cdots \le y_{(n)j}$ be the ordered samples for peptide j, where $y_{(1)j}$ and $y_{(n)j}$ are the smallest and largest observations, respectively.
Outliers are often detected by classical approaches such as Dixon’s range test and Grubbs’ test. Dixon’s range test, also known as Dixon’s Q test[3], utilizes order statistics as follows:

$$Q_j = \frac{y_{(2)j} - y_{(1)j}}{y_{(n)j} - y_{(1)j}} \quad \text{or} \quad \frac{y_{(n)j} - y_{(n-1)j}}{y_{(n)j} - y_{(1)j}} \qquad (1)$$

The denominator is the difference between the largest and smallest observations, and the numerator is the difference between the smallest two values or the largest two values. If the test statistic $Q_j$ is larger than the critical value given by Rorabacher[5], peptide j is flagged as an outlier. If n = 2, the statistic is always 1; thus, this test is applicable only for n ≥ 3.
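A minimal R sketch of the statistic for a single peptide follows; the function name dixon_q is ours, and the critical values tabulated by Rorabacher[5] are not reproduced here.

```r
## Dixon's Q statistic for the replicates of one peptide (n >= 3).
dixon_q <- function(x) {
  y <- sort(x)
  n <- length(y)
  rng <- y[n] - y[1]                 # largest minus smallest
  max((y[2] - y[1]) / rng,           # difference of the smallest two values
      (y[n] - y[n - 1]) / rng)       # difference of the largest two values
}

dixon_q(c(10.1, 10.2, 10.4, 12.9))   # a large Q points to 12.9
```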
Grubbs’ test[2, 6] also utilizes order statistics, and its test statistic is defined as follows:

$$T_{1j} = \frac{\bar{x}_j - y_{(1)j}}{s_j} \quad \text{and} \quad T_{nj} = \frac{y_{(n)j} - \bar{x}_j}{s_j} \qquad (2)$$

where $\bar{x}_j$ is the sample mean and $s_j$ the sample standard deviation for peptide j. The denominator is the standard deviation and the numerator is the difference between the smallest (or largest) value and the sample mean. If $T_{1j}$ or $T_{nj}$ is larger than the critical value, peptide j is flagged as an outlier. If n = 2, the statistic is always $1/\sqrt{2}$; thus, this test is also applicable only for n ≥ 3.
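The corresponding sketch for Grubbs’ statistics, again with an illustrative function name of our own:

```r
## Grubbs' statistics for one peptide: distances of the two extremes
## from the sample mean, in units of the sample standard deviation.
grubbs_t <- function(x) {
  m <- mean(x)
  s <- sd(x)
  c(T1 = (m - min(x)) / s,    # smallest observation vs. the mean
    Tn = (max(x) - m) / s)    # largest observation vs. the mean
}

grubbs_t(c(10.1, 10.2, 10.4, 12.9))
```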
Proposed Methods
In duplicated experiments (n = 2), the two observed values $x_{1j}$ and $x_{2j}$ for each peptide j should in theory be identical, but in practice they differ because of variability. Even so, they should not differ substantially. The tolerance for the difference between two observed values from the same condition is not constant, because their variability is heterogeneous: the variability of high-throughput data depends on the intensity level.
To detect outliers while accounting for this heterogeneity of variability, Cho et al.[4] proposed constructing lower and upper fences using quantile regression in an MA plot, with the M and A values on the vertical and horizontal axes, respectively, where $M_j = x_{1j} - x_{2j}$ is the difference between the replicated samples for peptide j and $A_j = (x_{1j} + x_{2j})/2$ is their average.
In multiple experiments (n ≥ 2), it is natural to investigate outliers based on all observed values in the n-dimensional space. An outlier lies a very large distance from the center of the distribution of the peptides, and the cutoff distance for classifying outliers depends on the degree of variability around the center. The degree of variability depends on the intensity level, and the center can be defined as the 45° line through the origin. More flexibly, the center can be obtained by principal component analysis (PCA), as seen in Figure 2. The first principal component (PC) becomes the center of each intensity level, i.e., a new axis for the intensity levels. Because the experiments are replicated under the same biological and technical conditions, most of the variation is explained by the first PC, so in practice it is enough to use the first PC alone. An outlier will have a large distance from its projection onto this axis. Following the notation used for applying quantile regression, we define the distance from peptide j to its projection as $M_j$ and the length of the projection on the new axis as $A_j$. The first and third quantiles can then be obtained by applying quantile regression on an MA plot with M and A on the vertical and horizontal axes, respectively; hence, the upper and lower fences can be constructed to classify the outliers.
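To illustrate, the first PC can be obtained with R’s prcomp. The data below are simulated replicates; with genuine technical replicates the first PC should likewise explain nearly all of the variance.

```r
## Simulated n = 3 technical replicates for p = 1000 peptides.
set.seed(1)
p <- 1000
signal <- rnorm(p, mean = 10, sd = 2)        # common intensity level
X <- cbind(x1 = signal + rnorm(p, sd = 0.2),
           x2 = signal + rnorm(p, sd = 0.2),
           x3 = signal + rnorm(p, sd = 0.2))

pc <- prcomp(X)                              # PCA on the peptide vectors
v  <- pc$rotation[, 1]                       # first PC: the new intensity axis
summary(pc)$importance["Proportion of Variance", "PC1"]  # close to 1
```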
Describing this projection approach in more detail, we first subtract the sample mean of each replicated sample from each of its observations, shifting the sample means to the origin, because the PCs pass through the sample means. The first PC vector $\mathbf{v}$ is found on this shifted sample space, and the projection $\operatorname{proj}_{\mathbf{v}}\tilde{\mathbf{x}}_j = (\tilde{\mathbf{x}}_j^T\mathbf{v})\mathbf{v}$ of each peptide’s shifted vector $\tilde{\mathbf{x}}_j$ onto $\mathbf{v}$ is obtained. We then calculate the length of the projection, $\|\operatorname{proj}_{\mathbf{v}}\tilde{\mathbf{x}}_j\|$, and the length of the difference between the peptide’s vector and its projection, $\|\tilde{\mathbf{x}}_j - \operatorname{proj}_{\mathbf{v}}\tilde{\mathbf{x}}_j\|$. The length of the projection is multiplied by the sign of the inner product $\tilde{\mathbf{x}}_j^T\mathbf{v}$ to distinguish the positive and negative directions. The signed length of the projection and the length of the difference are defined as $A_j$ and $M_j$ of peptide j, respectively. Outlying peptides will have unduly large M values, and judging whether an M value is undue depends on $A_j$ because the variability of the M values is heterogeneous. Like OutlierD, we obtain the first and third quantiles, Q1 and Q3, depending on the intensity level, and then construct the upper and lower fences to classify outliers from normal observations. Quantile regression[7] is utilized on an MA plot to obtain the first and third quantile estimates, Q1(A) and Q3(A), respectively, depending on the intensity levels A. The q-quantile linear quantile regression with $\{(A_j, M_j), j = 1,\ldots,p\}$ is used to find the parameters minimizing

$$\sum_{j=1}^{p} \rho_q\{M_j - g(A_j; \theta_0, \theta_1)\} \qquad (3)$$

where $\rho_q(u) = u\{q - I(u < 0)\}$ is the check function, 0 < q < 1, and $g(A_j; \theta_0, \theta_1) = \theta_0 + \theta_1 A_j$. Using Equation (3), the 0.25 and 0.75 quantile estimates, Q1(A) and Q3(A), are calculated depending on the levels A. Then the lower and upper fences are constructed as Q1(A) - k IQR(A) and Q3(A) + k IQR(A), where IQR(A) = Q3(A) - Q1(A) and k is a tuning parameter. We set k = 1.5 as the default value in our algorithm and software program because this value is often used in practice. A larger k selects fewer peptides as outliers, while a smaller k selects more; the value can be adjusted empirically according to the magnitude of the variation in the data.
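With the quantreg R package, the linear quantile fits and the resulting fences can be computed along the following lines; A and M are simulated here, standing in for the signed projection lengths and distances defined above.

```r
library(quantreg)

## Simulated stand-ins for the projection quantities A and M.
set.seed(1)
A <- runif(1000, min = 5, max = 15)       # intensity levels
M <- abs(rnorm(1000, sd = 0.05 * A))      # distances; spread grows with A

fit1 <- rq(M ~ A, tau = 0.25)             # Q1(A) = theta0 + theta1 * A
fit3 <- rq(M ~ A, tau = 0.75)             # Q3(A)

q1  <- fitted(fit1)
q3  <- fitted(fit3)
iqr <- q3 - q1                            # IQR(A)
k   <- 1.5                                # default tuning parameter

outlier <- (M > q3 + k * iqr) | (M < q1 - k * iqr)
sum(outlier)
```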
More flexible quantile estimates can be obtained by nonlinear and nonparametric quantile regression approaches[8]. For nonlinear quantile regression, the asymptotic function[9] can be employed:

$$g(A_j; \theta_1, \theta_2, \theta_3) = \theta_1\left[1 - \exp\{-\exp(\theta_2)(A_j - \theta_3)\}\right] \qquad (4)$$

where θ1 is the asymptote, θ2 is the log rate, and θ3 is the value of A at which the response becomes zero. In addition, Self-Starting, Frank, Asymptotic with Offset, and Copula functions can be employed. For nonparametric quantile regression, we incorporate into our algorithm a smoothing spline with total variation regularization for univariate data[10]. A smoothing parameter adjusts the degree of smoothness; we set it to 1 as the default, but it can be changed by the user.
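In R, the nonparametric alternative corresponds to quantreg’s total-variation-penalized quantile smoothing spline, rqss with a qss term; a sketch with the same simulated A and M as before:

```r
library(quantreg)

## Simulated stand-ins for A and M, as in the previous sketch.
set.seed(1)
A <- runif(1000, min = 5, max = 15)
M <- abs(rnorm(1000, sd = 0.05 * A))

## Quantile smoothing splines with total variation regularization;
## lambda = 1 mirrors the default smoothing parameter noted above.
fit1 <- rqss(M ~ qss(A, lambda = 1), tau = 0.25)
fit3 <- rqss(M ~ qss(A, lambda = 1), tau = 0.75)

q1 <- fitted(fit1)   # flexible Q1(A) curve
q3 <- fitted(fit3)   # flexible Q3(A) curve
```

The algorithm using projection can be summarized as follows.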
Proposed Algorithm
1. Shift the sample means to the origin (0,…,0), i.e., $\tilde{x}_{ij} = x_{ij} - \bar{x}_i$, where $\bar{x}_i = \frac{1}{p}\sum_{j=1}^{p} x_{ij}$.

2. Find the first PC vector $\mathbf{v}$ using PCA on the space of $\{\tilde{\mathbf{x}}_j = (\tilde{x}_{1j},\ldots,\tilde{x}_{nj})^T, j = 1,\ldots,p\}$.

3. Obtain the projection $\operatorname{proj}_{\mathbf{v}}\tilde{\mathbf{x}}_j = (\tilde{\mathbf{x}}_j^T\mathbf{v})\mathbf{v}$ of the vector of each peptide j on $\mathbf{v}$, where j = 1,…,p.

4. Compute the signed length of the projection, $A_j = \tilde{\mathbf{x}}_j^T\mathbf{v}$, and the length of the difference between the vector of peptide j and its projection, $M_j = \|\tilde{\mathbf{x}}_j - \operatorname{proj}_{\mathbf{v}}\tilde{\mathbf{x}}_j\|$, where j = 1,2,…,p.

5. Obtain the first and third quantile values Q1(A) and Q3(A) on an MA plot using a quantile regression approach. Then calculate IQR(A) = Q3(A) - Q1(A).

6. Construct the lower and upper fences, LB(A) = Q1(A) - k IQR(A) and UB(A) = Q3(A) + k IQR(A), where k is a tuning parameter.

7. Declare peptide j an outlier if it is located above the upper fence or below the lower fence.
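Putting the steps together, a compact R sketch of the whole procedure might look as follows; the function name detect_outliers is ours, and linear quantile regression from the quantreg package stands in for step 5.

```r
library(quantreg)

## Projection-based outlier detection: a sketch of steps 1-7.
## X is a p x n matrix (rows = peptides, columns = replicated samples).
detect_outliers <- function(X, k = 1.5) {
  ## Step 1: shift each replicate's sample mean to the origin.
  Xc <- scale(X, center = TRUE, scale = FALSE)

  ## Step 2: first principal component vector v.
  v <- prcomp(Xc, center = FALSE)$rotation[, 1]

  ## Steps 3-4: signed projection length A_j and distance M_j.
  A    <- as.vector(Xc %*% v)            # signed length on the new axis
  proj <- outer(A, v)                    # row j holds (x_j' v) v
  M    <- sqrt(rowSums((Xc - proj)^2))   # distance from the new axis

  ## Step 5: first and third quantile curves by quantile regression.
  q1  <- fitted(rq(M ~ A, tau = 0.25))
  q3  <- fitted(rq(M ~ A, tau = 0.75))
  iqr <- q3 - q1

  ## Steps 6-7: fences and the outlier flags.
  lb <- q1 - k * iqr
  ub <- q3 + k * iqr
  data.frame(A = A, M = M, outlier = M > ub | M < lb)
}
```

On data simulated as in the earlier sketches, res <- detect_outliers(X) returns the MA coordinates and a logical flag for each peptide; increasing k relaxes the fences exactly as described above.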
This projection approach utilizes all the replicates simultaneously, reducing a high-dimensional problem to a two-dimensional one that can easily be solved. Shifts caused by biased experiments can be ignored thanks to the use of PCA.