Abstract:
In a simulation study, the output of the simulation is used to estimate target parameters. These estimates typically have a variance, which determines the precision of the estimation. In Monte Carlo simulations, the standard approach to increasing precision is to increase the sample size. However, due to resource restrictions, the simulation size can only be increased up to a certain point. This is where Variance Reduction (VR) techniques help: they take a simulation as input and return another simulation with approximately the same expected value but lower variance. VR techniques are widely studied in the context of Monte Carlo simulations; however, research on their application to Markov Chain Monte Carlo (MCMC) simulations is limited. In this thesis, we study the effectiveness of VR techniques for MCMC algorithms. We create an experimental environment that allows us to control the difficulty of the problems solved by the MCMC algorithms. In our study, we use the Random-Walk Metropolis-Hastings algorithm with different types of proposal distributions. We first evaluate the convergence of the MCMC chains, select chains with good convergence properties, and then apply the VR techniques. The VR techniques of interest are {\it general} VR techniques, namely Antithetic Variates, Control Variates, and Latin Hypercube Sampling. We study the effectiveness of these techniques on problems of varying difficulty. We address the problem of calculating the standard error of MCMC simulation outputs, evaluate the accuracy of the estimated values, and assess the correctness of the standard error calculations via the coverage probabilities of the confidence intervals and the RMSE/MAE values of the estimates. We show that the VR techniques successfully decrease the variance of MCMC simulations without introducing bias.
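To make the core idea concrete, the following is a minimal sketch of one of the general VR techniques named above, Antithetic Variates, in a plain Monte Carlo setting. The integrand $f(u) = e^u$ and the sample size are illustrative choices, not taken from the thesis; the point is only that the antithetic estimator keeps (almost) the same expected value while reducing the variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target: estimate E[f(U)] = e - 1 for f(u) = exp(u),
# U ~ Uniform(0, 1). (f and n are hypothetical example choices.)
f = np.exp
n = 100_000

# Plain Monte Carlo: average f over n independent uniform draws.
u = rng.uniform(size=n)
plain = f(u)

# Antithetic Variates: pair each draw v with its "mirror" 1 - v.
# Since f is monotone, f(v) and f(1 - v) are negatively correlated,
# so their average has the same mean but a lower variance.
v = rng.uniform(size=n // 2)
anti = 0.5 * (f(v) + f(1.0 - v))

print(plain.mean(), anti.mean())  # both close to e - 1
print(plain.var(), anti.var())    # antithetic variance is much smaller
```

In an MCMC setting the draws are no longer independent, which is exactly why the thesis investigates whether such general techniques still pay off there.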