Di Gessa, Giorgio
(2008)
Simple strategies for variance uncertainty in meta-analysis.
MSc(R) thesis, University of Glasgow.
Abstract
Reliability of the measure of precision of an estimate is crucial: a correct value for the standard error of the point estimate ensures that the stated significance of the analysis is correct and that confidence intervals have the right coverage probabilities. Stating an incorrect precision, on the contrary, can produce biased and misleading results. In particular, in fixed-effects meta-analysis the overall estimator usually used in practice tends to have a variance higher than the optimal one, even though its reported variance may appear lower just by chance.
In performing a fixed-effects meta-analysis, individual estimates are weighted in proportion to the precision of each study. Such weighting is optimal only under the assumption that the variances are known, which is never the case in practice. As a consequence, the estimator is suboptimal and the resulting meta-analysis overstates the significance of the results; the overstatement is dramatic when the studies being summarised have small numbers of patients. Focusing on the fixed-effects model, the main aim of this thesis is to investigate the behaviour of the precision of the overall estimator under different circumstances, in order to assess how biased and incorrectly reported the variance of the commonly used overall estimator is, and to highlight the circumstances in which improved estimates are needed.
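The inverse-variance weighting described above can be sketched as follows. This is a minimal illustration; the function name and the example figures are invented for exposition, and the variances are treated as known, which is exactly the assumption the thesis questions:

```python
import numpy as np

def fixed_effects_meta(estimates, variances):
    """Inverse-variance weighted fixed-effects pooling.

    estimates: per-study treatment-effect estimates
    variances: per-study variances of those estimates (assumed known)
    Returns the pooled estimate and its (reported) variance.
    """
    w = 1.0 / np.asarray(variances, dtype=float)   # optimal weights if variances are known
    estimates = np.asarray(estimates, dtype=float)
    theta_hat = np.sum(w * estimates) / np.sum(w)  # weighted average of study effects
    var_theta = 1.0 / np.sum(w)                    # reported variance of the pooled estimate
    return theta_hat, var_theta

# Hypothetical example with three studies:
theta, v = fixed_effects_meta([0.4, 0.6, 0.5], [0.04, 0.09, 0.02])
# theta ≈ 0.484, v ≈ 0.0116: the most precise study dominates the pooled estimate.
```

In practice the `variances` passed in are themselves estimates, which is why the reported `var_theta` can understate the true dispersion of `theta_hat`.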
In fixed-effects meta-analysis, the problems stem from poor estimates of the individual study variances: these values are imprecise, and both [theta] (the point estimator) and [V] (the overall variance estimator) depend upon them. Poorly estimated study variances can lead to the overall estimate of the variance of the treatment effect being badly underestimated. To evaluate the circumstances in which this imprecision badly affects [V], a number of simulations in different settings were performed. Under the assumptions of both common and uncommon variance of the observations at the patient level, the average number of patients per study plays an important role, and appears to matter more than the size of any single study. Moreover, the allocation of patients per arm does not seem to be decisive for the estimated overall variance of the estimator, although balanced allocation, as well as having roughly the same number of patients per study, yields better results. Furthermore, as expected, the higher the average number of patients per arm, the closer the estimator is to the optimal one; i.e. the fewer the patients, the less precise the estimates of the individual variances are and the greater their impact on the results. Given this imprecision, we may severely overstate the precision of [theta]. Better estimation of the variances is therefore investigated. Are there ways to account for the imprecise estimates of the within-study variances?
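The kind of simulation described here can be sketched in a toy two-arm, normal-data setting. All parameter values below are illustrative, not the thesis's actual simulation settings; the point is only to show the mechanism, namely that pooling with estimated rather than true variances makes the reported variance of the pooled estimate understate its actual dispersion when studies are small:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(n_per_arm=5, k_studies=10, mu=0.5, sigma=1.0, reps=2000):
    """Compare the average reported variance of the pooled fixed-effects
    estimate with its empirical variance when study variances are estimated
    from the data rather than known."""
    thetas, reported = [], []
    for _ in range(reps):
        w_sum, wy_sum = 0.0, 0.0
        for _ in range(k_studies):
            a = rng.normal(0.0, sigma, n_per_arm)   # control arm
            b = rng.normal(mu, sigma, n_per_arm)    # treatment arm
            d = b.mean() - a.mean()                 # study effect estimate
            # estimated (not true) variance of the study effect:
            v = a.var(ddof=1) / n_per_arm + b.var(ddof=1) / n_per_arm
            w_sum += 1.0 / v
            wy_sum += d / v
        thetas.append(wy_sum / w_sum)               # pooled estimate
        reported.append(1.0 / w_sum)                # its reported variance
    return np.var(thetas), np.mean(reported)

true_var, avg_reported = simulate()
# With only 5 patients per arm, the average reported variance
# understates the empirical variance of the pooled estimate.
```

With larger `n_per_arm` the two quantities move together, mirroring the finding that the average number of patients per study drives how close the estimator gets to the optimal one.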
Shrunk variances were considered in order to assess whether borrowing information across variances would produce an overall variance estimate whose 'real' and 'average' dispersion were both closer to the optimal value. Combining measurements minimises the total mean squared error. Therefore, particularly when the aim is not to estimate each quantity separately but rather to minimise the total error, shrinkage estimators represent a reasonable alternative to the classical estimators. This approach seems sensible here, since the goal of this thesis is to minimise the real dispersion of the overall variance estimator. Moreover, shrinkage approaches (which combine variance information across studies while remaining study-specific) usually perform well under a wide range of assumptions about variance heterogeneity, behaving well both when the variances are truly constant and when they vary extensively from study to study. In particular, this thesis uses the 'modified CHQBC estimator' suggested by Tong and Wang (where CHQBC stands for the James-type shrinkage estimator for variances initially proposed by Cui, Hwang, Qiu, Blades and Churchill).
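The idea of shrinking variances toward a common target can be illustrated with a deliberately simple sketch. This is NOT the modified CHQBC estimator of Tong and Wang, which chooses the shrinkage weight in a principled, data-driven way; here a fixed weight `lam` is assumed purely to show the mechanism of borrowing information across studies:

```python
import numpy as np

def shrink_variances(sample_vars, lam=0.5):
    """Toy linear shrinkage of per-study sample variances toward their
    common geometric mean, applied on the log scale.

    lam in [0, 1] controls the pooling: lam=0 keeps the raw variances,
    lam=1 fully pools them. This fixed-lam sketch is an illustrative
    stand-in for a proper James-type shrinkage estimator, not the
    modified CHQBC estimator itself.
    """
    log_v = np.log(np.asarray(sample_vars, dtype=float))
    target = log_v.mean()                        # common target on the log scale
    return np.exp((1 - lam) * log_v + lam * target)

# Hypothetical sample variances from three studies:
shrunk = shrink_variances([0.2, 1.0, 5.0], lam=0.5)
# Extreme values are pulled toward the middle, while the
# geometric mean of the variances is preserved.
```

Shrinking on the log scale keeps the shrunk variances positive, and pulling in the extreme values is precisely what stabilises the weights `1/v` that dominate the pooled estimator when a small study happens to report a tiny sample variance.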
Results obtained via simulations (with different patterns for the various variance schemes and diverse average numbers of patients per study) emphasise that the estimator based on the shrunk variances performs better than the one based on the estimated sample variances. Regardless of the variance structure across studies (homoscedasticity or uncommon variances), the estimator based on the shrunk variances performs well even with a small average number of patients per trial, achieving almost optimal results even when the variances are strongly heterogeneous, and without relying on computationally expensive procedures. Chapter 3 shows the results obtained when shrunk variances are used instead of the declared ones; moreover, this new approach is applied to some real datasets, showing how the resulting variance estimate tends to be higher than the declared one in all cases, and presumably closer to the 'real' optimal value.
Finally, chapter 4 highlights the merits of this new approach to the problem of imprecise precision estimates in fixed-effects methods, and also looks at the further work needed to improve results in this and other meta-analytic settings; this thesis, in fact, only considers the case of continuous, normally distributed data, leaving aside meta-analyses of binary, ordinal or survival data. Moreover, although the problem of estimating the within-study variances is particularly urgent and dramatic in the fixed-effects model, their estimation might also be expected to influence random-effects coverage probabilities, especially when all studies in the meta-analysis are small (Brockwell and Gordon, 2001).