Log-transformation and its implications for data analysis
Summary
The log-transformation is widely used in biomedical and psychosocial research to deal with skewed data. This paper highlights serious problems in this classic approach for dealing with skewed data. Despite the common belief that the log transformation can decrease the variability of data and make data conform more closely to the normal distribution, this is usually not the case. Moreover, the results of standard statistical tests performed on log-transformed data are often not relevant for the original, non-transformed data. We demonstrate these problems by presenting examples that use simulated data. We conclude that if used at all, data transformations must be applied very cautiously. We recommend that in most circumstances researchers abandon these traditional methods of dealing with skewed data and, instead, use newer analytic methods that are not dependent on the distribution of the data, such as generalized estimating equations (GEE).
1. Introduction
The log transformation, a widely used method to address skewed data, is one of the most popular transformations used in biomedical and psychosocial research. Due to its ease of use and popularity, the log transformation is included in most major statistical software packages, including SAS, S-Plus and SPSS. Unfortunately, its popularity has also made it vulnerable to misuse – even by statisticians – leading to incorrect interpretation of experimental results.[1] Such misuse and misinterpretation is not unique to this particular transformation; it is a common problem in many popular statistical methods. For example, the two-sample t-test is widely used to compare the means of two independent samples with normally distributed (or approximately normal) data, but many researchers take this critical assumption for granted, using t-tests without bothering to check or even acknowledge this underlying assumption. Another example is the Cox regression model used in survival analysis; many studies apply this popular model without even being aware of the proportionality assumption (i.e., that the relative hazard of the groups of interest is constant over time) required for valid inference.
In this article we focus on the log-transformation and discuss major problems of using this method in practice. We use examples and simulated data to show that this method often does not resolve the original problem for which it is being used (i.e., non-normal distribution of primary data) and to show that using this transformation can introduce new problems that are even more difficult to deal with than the problem of non-normal distribution of data. We conclude with recommendations of alternative analytic methods that eliminate the need to transform non-normal data distributions prior to analysis.
2. Lognormal transformation
2.1. Using the log transformation to make data conform to normality
The normal distribution is widely used in basic and clinical research studies to model continuous outcomes. Unfortunately, this symmetric, bell-shaped distribution often does not adequately describe the observed data from research projects. Quite often, data arising in real studies are so skewed that standard statistical analyses of these data yield invalid results. Many methods have been developed to test the normality assumption of observed data. When the distribution of continuous data is non-normal, transformations are applied to make the data as “normal” as possible and, thus, increase the validity of the associated statistical analyses. The log transformation is, arguably, the most popular among the different types of transformations used to transform skewed data to approximately conform to normality.
If the original data follow a lognormal distribution, or approximately so, then the log-transformed data follow a normal or near-normal distribution. In this case, the log-transformation does remove or reduce skewness. Unfortunately, data arising from many studies do not approximate the lognormal distribution, so applying this transformation does not reduce the skewness of the distribution. In fact, in some cases applying the transformation can make the distribution more skewed than the original data.
To show how this can happen, we first simulated data u_{i} uniformly distributed between 0 and 1, and then constructed two variables as follows: x_{i} = 100(exp(u_{i}) − 1) + 1, y_{i} = log(x_{i}).
Shown in the left panel of Figure 1 is the histogram of x_{i}, while the right panel is the histogram of y_{i} (the log-transformed version of x_{i}), based on a sample size of n=10,000. While the distribution of x_{i} is right-skewed, the log-transformed data y_{i} is clearly left-skewed. In fact, the log-transformed data y_{i} is more skewed than the original x_{i}, since the skewness coefficient of y_{i} is −1.16, larger in absolute value than the 0.34 of x_{i}. Thus, the log-transformation actually exacerbated the problem of skewness in this particular example.
In general, for right-skewed data, the log-transformation may make it either right- or left-skewed. If the original data do follow a lognormal distribution, the log-transformed data will follow, or approximately follow, the normal distribution. In general, however, there is no guarantee that the log-transformation will reduce skewness and make the data a better approximation of the normal distribution.
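A minimal sketch of the simulation above, with our own random seed and a hand-rolled skewness coefficient (numpy is assumed to be available):

```python
import numpy as np

# Simulation from Section 2.1: u_i ~ Uniform(0, 1),
# x_i = 100*(exp(u_i) - 1) + 1, y_i = log(x_i).
# The constants come from the text; the seed is our own choice.
rng = np.random.default_rng(0)
n = 10_000
u = rng.uniform(0.0, 1.0, n)
x = 100 * (np.exp(u) - 1) + 1   # right-skewed original data
y = np.log(x)                   # log-transformed data

def skewness(z):
    """Sample skewness: third central moment over the cubed standard deviation."""
    z = np.asarray(z)
    return np.mean((z - z.mean()) ** 3) / z.std() ** 3

print(f"skew(x) = {skewness(x):+.2f}")   # positive: right-skewed
print(f"skew(y) = {skewness(y):+.2f}")   # negative: left-skewed, larger in magnitude
```

Running this reproduces the qualitative pattern described above: x_{i} is mildly right-skewed while log(x_{i}) is more strongly left-skewed.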
2.2. Using the log transformation to reduce variability of data
Another popular use of the log transformation is to reduce the variability of data, especially in data sets that include outlying observations. Again, contrary to this popular belief, the log transformation can often increase – not reduce – the variability of data, whether or not there are outliers.
For example, consider the following simple linear regression with only an intercept term: y_{i} = β_{0} + ε_{i}, ε_{i} ~ U(−0.5, 0.5)
Unlike ordinary regression analysis, where the error term is assumed to have a normal distribution, the error term in this regression is uniformly distributed between −0.5 and 0.5. Thus y_{i} in the above model does not follow a lognormal distribution, and the log-transformed y_{i} does not have a normal distribution. We then simulated data y_{i} for this model with a sample size of n=100 and a value of the β_{0} parameter ranging from 0.5 to 5.5. Note that β_{0} starts from 0.5, rather than from 0, to ensure y_{i}>0 and, thus, that log(y_{i}) is well defined when the log transformation is applied to the simulated data. We fit two different linear models to the same data: the first model used the data without transformation, the second used the log-transformed data. The ordinary least squares method was used to estimate the intercepts in both models.
Table 1 shows the estimates of β_{0} and its standard errors, averaged over 100,000 Monte Carlo (MC) simulations,[1] from fitting the linear model to the original and to the log-transformed data. We use a large MC sample size to help reduce the sampling variability in the standard error estimates; thus the differences between the estimates from fitting the original and the log-transformed data reflect true differences. The table shows that when β_{0}=0.5, the standard errors from the model fit to the original y_{i} were much smaller than those from fitting the log-transformed data. As β_{0} increased towards 5.5, the standard errors from fitting the original data remained the same, while their counterparts from fitting the log-transformed data decreased. When β_{0} increased past the value 1, the standard errors from fitting the log-transformed data became smaller than those from fitting the original data. Table 2 presents the same estimates of β_{0} as those in Table 1, except that we introduced four outlying points (4, 6, 8 and 10) into the simulated data, thereby increasing the sample size to 104. As can be seen in Table 2, the estimates of β_{0} and of its standard error changed after introduction of the outliers, but the pattern of differences between the model for the original data and the model for the log-transformed data remains the same. This example shows that the conventional wisdom about the ability of a log transformation to reduce variability, especially when the data include outliers, is not generally true. Whether the log transformation reduces such variability depends on the magnitude of the mean of the observations: the larger the mean, the smaller the variability of the log-transformed data.
Table 1.
Simulation results for simple linear regression without outliers (n=100; 100,000 simulations)
β_{0}  Original data  Log-transformed data
  Estimated Intercept  SE  Estimated Intercept  SE
0.50  0.5000  0.0288  -0.9999  0.0998
0.51  0.5100  0.0289  -0.9440  0.0887
0.55  0.5499  0.0289  -0.7993  0.0718
0.60  0.6001  0.0290  -0.6647  0.0608
0.70  0.7002  0.0289  -0.4591  0.0480
0.80  0.8000  0.0288  -0.2977  0.0401
0.90  0.8999  0.0288  -0.1626  0.0347
1.00  1.0001  0.0288  -0.0451  0.0307
1.50  1.5000  0.0289  0.3863  0.0198
5.50  5.5000  0.0289  1.7034  0.0053
Table 2.
Simulation results for simple linear regression with outliers (n=104; 100,000 simulations)
β_{0}  Original data  Log-transformed data
  Estimated Intercept  SE  Estimated Intercept  SE
0.50  0.7501  0.0277  -0.8886  0.0960
0.51  0.7599  0.0277  -0.8350  0.0849
0.55  0.7999  0.0277  -0.6956  0.0689
0.60  0.8500  0.0278  -0.5660  0.0585
0.70  0.9500  0.0287  -0.3678  0.0461
0.80  1.0499  0.0277  -0.2119  0.0386
0.90  1.1500  0.0278  -0.0811  0.0335
1.00  1.2501  0.0277  0.0323  0.0296
1.50  1.7499  0.0278  0.4497  0.0190
5.50  5.7501  0.0278  1.7328  0.0051
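The simulation behind Tables 1 and 2 can be sketched as follows. This is our own, smaller, Monte Carlo run (5,000 replications rather than 100,000), exploiting the fact that the intercept-only OLS estimate is simply the sample mean, so the standard error is the spread of that mean across replications:

```python
import numpy as np

# Sketch of the Table 1 setup: y_i = beta0 + eps_i, eps_i ~ Uniform(-0.5, 0.5),
# n = 100.  We compare the Monte Carlo spread of mean(y) (intercept fit on the
# original data) with that of mean(log y) (intercept fit on log-transformed data).
rng = np.random.default_rng(1)
n, nsim = 100, 5_000

def mc_standard_errors(beta0):
    eps = rng.uniform(-0.5, 0.5, size=(nsim, n))
    y = beta0 + eps                       # strictly positive for beta0 >= 0.5
    est_orig = y.mean(axis=1)             # OLS intercept on original data
    est_log = np.log(y).mean(axis=1)      # OLS intercept on log-transformed data
    return est_orig.std(), est_log.std()

se_orig_small, se_log_small = mc_standard_errors(0.5)   # ~0.029 vs ~0.100
se_orig_large, se_log_large = mc_standard_errors(5.5)   # ~0.029 vs ~0.005
print(se_orig_small, se_log_small)
print(se_orig_large, se_log_large)
```

As in Table 1, the standard error on the original scale stays near 0.029 regardless of β_{0}, while the log-scale standard error shrinks as β_{0} grows, crossing over near β_{0}=1.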
A more fundamental problem is that there is little value in comparing the variability of original versus logtransformed data because they are on totally different scales. In theory we can always find a transformation for any data to make the variability of the transformed version either smaller or larger than that of the original data. For example, if the standard deviation of variable x is σ, then the standard deviation of the scale transformation x/K (K>0) is σ/K; thus by selecting a sufficiently large or small K we can change the standard deviation of the transformed variable x/K to any desired level.
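The scale-transformation point can be checked in a few lines; the data and the value of K below are arbitrary illustrative choices:

```python
import numpy as np

# For any K > 0, sd(x / K) = sd(x) / K, so a suitable choice of K makes the
# "variability" of the transformed data as small or as large as desired.
rng = np.random.default_rng(2)
x = rng.exponential(scale=3.0, size=1_000)   # some right-skewed data
K = 50.0
sd_x, sd_scaled = np.std(x), np.std(x / K)
print(sd_x, sd_scaled)   # the second value is exactly sd_x / 50
```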
3. Difficulty of interpreting model estimates from log-transformed data
3.1. Estimation of model parameters
Once the data are log-transformed, many statistical methods, including linear regression, can be applied to model the resulting transformed data. For example, the sample mean of the log-transformed observations, m = (1/n)Σ log y_{i}, is often used to estimate the population mean of the original data by applying the antilog (i.e., exponential) function to obtain exp(m). However, this back-transformation does not usually yield an appropriate estimate of the mean of the original data. For example, as shown by Feng and colleagues,[2] if y_{i} follows a lognormal distribution with parameters (μ, σ^{2}), then the mean of y_{i} is given by E(y_{i}) = exp(μ+σ^{2}/2). If we log-transform y_{i}, the transformed log y_{i} follows a normal distribution with mean μ. Thus the sample mean m is an unbiased estimate of μ, and its exponential, exp(m), is an estimate of exp(μ), which is the median of y_{i}, not its mean exp(μ+σ^{2}/2). Thus, even in this ideal situation, estimating the mean of the original y_{i} using the antilog of the sample mean of the log-transformed data generates a biased (systematically too small) estimate of the true population mean of the original data.
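A short simulation of this point, with illustrative parameters of our own (μ=0, σ=1, for which exp(μ)=1 and exp(μ+σ²/2)≈1.649):

```python
import numpy as np

# y_i ~ lognormal(mu, sigma^2).  The back-transformed mean exp(mean(log y))
# estimates the median exp(mu), not the mean exp(mu + sigma^2 / 2).
rng = np.random.default_rng(3)
mu, sigma, n = 0.0, 1.0, 200_000
y = rng.lognormal(mean=mu, sigma=sigma, size=n)

back_transformed = np.exp(np.log(y).mean())   # ~ exp(mu)             = 1.000
sample_mean      = y.mean()                   # ~ exp(mu + sigma^2/2) = 1.649
print(back_transformed, sample_mean)
```

The back-transformed value undershoots the true mean by the factor exp(σ²/2), which grows rapidly with the variance of the log-scale data.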
3.2. Hypothesis testing with logtransformed data
It is also more difficult to perform hypothesis testing on log-transformed data. Consider, for example, the two-sample t-test, which is widely used to compare the means of two normal (or near-normal) samples. If the two samples have the same variance, the test statistic has a t-distribution. For skewed data (where the variances of the samples usually differ), researchers often apply the log-transformation to the original data and then perform the t-test on the transformed data. However, as demonstrated below, applying such a test to log-transformed data may not address the hypothesis of interest regarding the original data.
Let y_{1i} and y_{2i} denote the two samples. If the data from both samples follow lognormal distributions, with parameters (μ_{1}, σ_{1}^{2}) for the first sample and (μ_{2}, σ_{2}^{2}) for the second, then the first sample has mean exp(μ_{1}+σ_{1}^{2}/2) and the second has mean exp(μ_{2}+σ_{2}^{2}/2). If we apply the two-sample t-test to the original data, we are testing the null hypothesis that these two means are equal, H_{0}: exp(μ_{1}+σ_{1}^{2}/2)=exp(μ_{2}+σ_{2}^{2}/2)
If we log-transform the data, the transformed data have mean μ_{1} and variance σ_{1}^{2} for the first sample and mean μ_{2} and variance σ_{2}^{2} for the second sample. Thus, if we apply the two-sample t-test to the transformed data, the null hypothesis of the equality of the means becomes H_{0}: μ_{1}=μ_{2}.
The two null hypotheses are clearly not equivalent. Although the null hypothesis based on the log-transformed data does test the equality of the means of the two log-transformed samples, it does not test the equality of the means of the original samples, since the mean of the original data also involves the variance parameters σ_{1}^{2} and σ_{2}^{2}. Thus, even if no difference is found between the two means of the log-transformed data, this does not mean that there is no difference between the means of the original data of the two samples. For example, if the null hypothesis for the log-transformed data, H_{0}: μ_{1}=μ_{2}, is not rejected, this does not imply that the null hypothesis for comparing the means of the original data, H_{0}: exp(μ_{1}+σ_{1}^{2}/2)=exp(μ_{2}+σ_{2}^{2}/2), is true, unless the variances of the two samples are the same.
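The mismatch between the two hypotheses can be seen numerically. In this sketch (illustrative parameters of our own) the log-scale null hypothesis H_{0}: μ_{1}=μ_{2} is exactly true, yet the means of the original data differ substantially because the log-scale variances differ:

```python
import numpy as np

# Equal log-scale means (mu1 = mu2 = 0) but unequal log-scale variances:
# the original-scale means are exp(sigma^2 / 2), which differ.
rng = np.random.default_rng(4)
n = 200_000
y1 = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # population mean exp(0.125) ~ 1.13
y2 = rng.lognormal(mean=0.0, sigma=1.0, size=n)   # population mean exp(0.500) ~ 1.65

print(np.log(y1).mean(), np.log(y2).mean())   # both ~ 0: log-scale H0 holds
print(y1.mean(), y2.mean())                   # clearly different original-scale means
```

A t-test on the log-transformed samples would (correctly, for the log scale) tend not to reject, while the original-scale means differ by roughly 50%.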
3.3. Effect of adding a small constant to data when performing log transformations of data
Since the log transformation is defined only for positive values, it is common practice to add a small positive constant, M, to all observations before applying this transformation. Although this practice appears quite harmless, it can have a noticeable effect on the level of statistical significance in hypothesis testing.
We examine the behavior of the p-value resulting from transformed data using a simulation. We simulated data from two independent samples, each with sample size n=100. The data were generated in the following way: (1) generate two independent random numbers u_{i} and v_{i} (i=1, …, n), where u_{i} has a standard normal distribution and v_{i} has a normal distribution with a mean of 1 and a standard deviation of 2; (2) generate y_{1i} and y_{2i} according to the following formulas: y_{1i} = exp(u_{i}) + 15, y_{2i} = exp(v_{i}) + 13.
We then added a constant, M, to each observation of y_{1i} and y_{2i} before the data were log-transformed. Figure 2 shows the p-values from comparing the means of the log-transformed data of the two samples for different values of M. When M=0, the p-value for the difference in the means of the two samples of log-transformed data is 0.058; that is, the difference was not statistically significant at the usual type I error level of alpha=0.05. However, as M increased the p-values dropped, falling below the 0.05 threshold for statistical significance once M rose above 100. This simulation study indicates that the p-value of the test depends on the value added to the data before applying the log-transformation, potentially making conclusions about differences between groups dependent on the somewhat arbitrary decision of the researcher about the size of M used in the analysis.
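A sketch of this experiment is below. We reuse the paper's generating scheme but our own seed, and approximate the two-sided Welch t-test p-value with a normal tail (in practice one would use scipy.stats.ttest_ind with equal_var=False). Exact p-values therefore depend on the seed; what is reproducible is the qualitative pattern: the p-value changes with M, and for very large M it approaches the p-value of the untransformed comparison, because log(y + M) is then nearly an affine function of y:

```python
import math
import numpy as np

# u_i ~ N(0,1), v_i ~ N(1, 2^2), y_1i = exp(u_i) + 15, y_2i = exp(v_i) + 13
rng = np.random.default_rng(5)
n = 100
y1 = np.exp(rng.normal(0.0, 1.0, n)) + 15
y2 = np.exp(rng.normal(1.0, 2.0, n)) + 13

def welch_p(a, b):
    """Two-sided p-value from the Welch t statistic, normal approximation."""
    t = (a.mean() - b.mean()) / math.sqrt(a.var(ddof=1) / len(a)
                                          + b.var(ddof=1) / len(b))
    return math.erfc(abs(t) / math.sqrt(2))

p_raw = welch_p(y1, y2)
p_by_M = {M: welch_p(np.log(y1 + M), np.log(y2 + M))
          for M in [0, 10, 100, 1e4, 1e8]}
print(p_raw, p_by_M)
```

The dependence of the p-value on M is exactly the arbitrariness the text warns about: the analyst's choice of M can move the result across the significance threshold.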
4. Discussion
Using transformations in general, and the log transformation in particular, can be quite problematic. If such an approach is used, the researcher must be mindful of its limitations, particularly when interpreting the relevance of the analysis of transformed data for the hypothesis of interest about the original data. For example, we have demonstrated that in most circumstances the log transformation does not help make data less variable or more normal, and may, in some circumstances, make data more variable and more skewed. Furthermore, results on log-transformed data usually cannot support inferences about the original data, since the transformed data often share little in common with the original data.
For many applications, rather than trying to find an appropriate statistical distribution or transformation to model the observed data, it would probably be better to abandon this classic approach and switch to modern distribution-free methods. For example, a popular approach that avoids many of these problems is generalized estimating equations (GEE).[3],[4] This approach forgoes the distributional assumption, providing valid inference regardless of the distribution of the data. However, this advantage matters mainly for skewed data; if the data can be reasonably modeled by a parametric distribution such as the normal distribution, it is preferable to use classic statistical methods, which usually provide more efficient inference than GEE.
Biography
Changyong Feng received his BSc in 1991 from the University of Science and Technology of China and subsequently obtained a PhD in statistics from the University of Rochester in 2002. He is currently an associate professor in the Department of Biostatistics and Computational Biology at the University of Rochester. The main focus of his research is on survival analysis.
Funding Statement
This research was supported in part by the Novel Biostatistical and Epidemiologic Methodology grants from the University of Rochester Medical Center Clinical and Translational Science Institute Pilot Awards Program.
Footnotes
Conflict of Interest: The authors report no conflict of interest related to this manuscript.