Perhaps the most widely known procedure to account for multiple comparison errors in standard statistics is the Bonferroni correction. In its simplest form, the Bonferroni method merely divides the required Type I error level (α) by the number of independent tests (N) performed. Thus, if one wishes to maintain an α = 0.05 error level across 10 tests, the per-test p-value threshold would need to be set at 0.05/10 = 0.005. You can see that for an fMRI data set with N ≈ 100,000 voxels being tested, the required p-value would be on the order of 5 x 10⁻⁷, an extremely stringent requirement. Using such a strict criterion to avoid Type I errors would severely impact the power of the fMRI data analysis, leading to an increased number of false negative results (Type II errors). Accordingly, several Bonferroni variants (Holm, Hochberg, Simes) including step-wise sequential testing have been devised. An alternative and increasingly popular approach is to control the false discovery rate (FDR), the expected proportion of falsely rejected voxels.
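The arithmetic above, together with the FDR alternative, can be sketched in a few lines of Python. The Benjamini-Hochberg step-up procedure shown here is the classic way to control FDR; the p-values are made up for illustration.

```python
# Sketch of Bonferroni and Benjamini-Hochberg (FDR) thresholds for a
# toy set of p-values; the numbers are illustrative, not real fMRI data.

def bonferroni_threshold(alpha, n_tests):
    """Per-test threshold keeping the family-wise error rate at alpha."""
    return alpha / n_tests

def fdr_rejections(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure: return the p-values that are
    rejected while controlling the false discovery rate at level q."""
    p_sorted = sorted(p_values)
    n = len(p_sorted)
    # Find the largest k with p_(k) <= (k/n)*q; reject p_(1)..p_(k).
    k = 0
    for i, p in enumerate(p_sorted, start=1):
        if p <= (i / n) * q:
            k = i
    return p_sorted[:k]

# 10 tests at alpha = 0.05 -> per-test threshold 0.005, as in the text.
print(bonferroni_threshold(0.05, 10))
# ~100,000 voxels -> a threshold on the order of 5e-7.
print(bonferroni_threshold(0.05, 100_000))
# FDR is less conservative: here it keeps the two smallest p-values.
print(fdr_rejections([0.001, 0.008, 0.039, 0.041, 0.09, 0.2]))
```

Note that FDR rejects more tests than a Bonferroni cut at the same nominal level would, which is exactly why it preserves more statistical power.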
Although Bonferroni and related methods are statistically rigorous, they do not make use of the spatial structure of fMRI data nor the fact that signals from nearby voxels are commonly correlated. Random field theory (RFT) and permutation-based resampling methods overcome these limitations by capturing dependence of the data.
Random field theory was developed by Friston, Worsley, and others in the early 1990s for analysis of PET data. The RFT method begins with heavy spatial smoothing of the data. Through this process peaks due to random noise appearing in single voxels are averaged out (attenuated) by their neighbors. Instead of performing statistics on voxels, RFT performs them on resolution elements (resels). For example, if the fMRI data is smoothed with an 8 mm FWHM 3D Gaussian filter, the resel size would be 8 x 8 x 8 = 512 mm³. Statistical tests are then based on clusters, rather than individual voxels.
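The resel arithmetic is simple enough to show directly. This is a minimal sketch assuming isotropic 3D smoothing; the brain search volume used below is a rough illustrative figure, not a measured value.

```python
# Resel arithmetic sketch: an 8 mm FWHM Gaussian smooth in 3D gives one
# resel of 8 x 8 x 8 = 512 mm^3, and the resel count is simply the
# search volume divided by the resel volume. Illustrative numbers only.

def resel_volume_mm3(fwhm_mm):
    """Volume of one resolution element for isotropic 3D smoothing."""
    return fwhm_mm ** 3

def resel_count(search_volume_mm3, fwhm_mm):
    """Approximate number of resels in the search volume."""
    return search_volume_mm3 / resel_volume_mm3(fwhm_mm)

print(resel_volume_mm3(8))        # 512 mm^3 per resel
# A hypothetical ~1,200,000 mm^3 search volume at 8 mm FWHM:
print(resel_count(1_200_000, 8))
```

The point of the exercise: the effective number of independent elements (a few thousand resels) is far smaller than the ~100,000 voxels, which is why RFT thresholds are less punishing than a raw Bonferroni correction.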
In addition to spatial smoothing, RFT incorporates several additional metrics to account for local topology, including the Euler Characteristic, an index of "roughness" reflecting in part the relative numbers of signal peaks and valleys in a region. For practical purposes it is easiest to think of the Euler Characteristic as the number of activation "blobs" present in an image at a given level of thresholding. Notwithstanding the fact that practically no one except PhDs in imaging statistics really understands RFT, it is nevertheless widely used and has been incorporated into many popular fMRI analysis software packages (SPM, FSL, etc.). It integrates nicely with the General Linear Model formulation as well.
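The "blob-counting" intuition can be made concrete on a toy 2D activation map. The sketch below simply counts 4-connected components above a threshold with a pure-Python flood fill; it is an illustration of the intuition, not an actual Euler Characteristic computation as used by RFT.

```python
# Count activation "blobs" (4-connected components above threshold) in a
# toy 2D map; pure-Python flood fill, for intuition only.

def count_blobs(image, threshold):
    """Return the number of 4-connected components exceeding threshold."""
    rows, cols = len(image), len(image[0])
    seen = set()
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] > threshold and (r, c) not in seen:
                blobs += 1                     # found a new blob
                stack = [(r, c)]               # flood-fill its extent
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen:
                        continue
                    seen.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] > threshold):
                            stack.append((ny, nx))
    return blobs

toy = [
    [0, 3, 3, 0, 0],
    [0, 3, 0, 0, 2],
    [0, 0, 0, 0, 2],
    [1, 0, 0, 0, 0],
]
print(count_blobs(toy, 0.5))   # 3 blobs at a low threshold
print(count_blobs(toy, 2.5))   # only the strongest blob survives
```

Raising the threshold prunes weaker blobs, mirroring how the expected Euler Characteristic falls as the statistical threshold rises; RFT exploits that relationship to assign corrected p-values to clusters.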