Missing data are frequently encountered in high-dimensional data analysis, but they are usually difficult to handle with standard algorithms, such as the EM algorithm and its variants. When the number of parameters exceeds the sample size, the MLE of the parameters can be non-unique and inconsistent for many problems. We propose an attractive solution to this problem, which iterates between an imputation step and a consistency step. At the imputation step, the missing data are imputed given the observed data and the current parameter estimate; at the consistency step, a consistent estimate of the parameters is computed from the pseudo-complete data. The consistency of the estimate averaged over iterations can be established under mild conditions. The use of the proposed algorithm is illustrated with high-dimensional Gaussian graphical models, high-dimensional variable selection, and a random coefficient model. The proposed algorithm has strong implications for high-dimensional computational problems: based on it, we propose a general strategy for improving Bayesian computation for high-dimensional complex models. The proposed algorithm also facilitates data integration from multiple sources, which plays an important role in big data analysis.
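The imputation/consistency iteration described above can be sketched on a toy problem: estimating the mean of a Gaussian with unit variance when some observations are missing completely at random. This is a minimal illustrative setup, not an example from the paper; all names and the particular estimator below are assumptions made for the sketch.

```python
import numpy as np

# Toy data (illustrative, not from the paper): n draws from N(mu_true, 1),
# with roughly 30% of the entries missing completely at random.
rng = np.random.default_rng(0)
n, mu_true = 500, 2.0
x = rng.normal(mu_true, 1.0, size=n)
x_obs = x.copy()
x_obs[rng.random(n) < 0.3] = np.nan

def ic_estimate(x_obs, n_iter=200, burn_in=50, seed=1):
    """Sketch of the imputation/consistency loop:
    1) impute missing entries given the current parameter estimate,
    2) recompute a consistent estimate on the pseudo-complete data,
    3) average the estimates over iterations (after a burn-in)."""
    rng = np.random.default_rng(seed)
    missing = np.isnan(x_obs)
    x_full = x_obs.copy()
    mu = np.nanmean(x_obs)  # initialize from the observed data
    estimates = []
    for t in range(n_iter):
        # Imputation step: draw missing values from N(mu, 1),
        # the model conditional on the current estimate.
        x_full[missing] = rng.normal(mu, 1.0, size=missing.sum())
        # Consistency step: the sample mean is a consistent
        # estimator of mu on the pseudo-complete data.
        mu = x_full.mean()
        if t >= burn_in:
            estimates.append(mu)
    # Final estimate: average over the post-burn-in iterations.
    return float(np.mean(estimates))

mu_hat = ic_estimate(x_obs)
```

In this scalar problem the MLE is trivially available, so the sketch only shows the mechanics of the loop; the point of the algorithm is that the consistency step may use any consistent estimator (e.g. a regularized one) in settings where the MLE on the incomplete data is non-unique or inconsistent.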