Abstract
Randomized response scrambling techniques have existed for over fifty years. These methods are widely used in sample surveys that involve sensitive variables. With many scrambling techniques available, survey researchers must evaluate them to select the most suitable one for real-world surveys. In the current literature, only a limited number of model-evaluation metrics are available for assessing the performance of different scrambling methods. This leaves a substantial research gap for the development of new unified evaluation measures that can quantify all aspects of a scrambling technique. We develop a novel unified metric for evaluating randomized response models and compare it with the existing unified measure. The proposed measure quantifies both the efficiency and the level of respondent privacy of any scrambling technique. Being less sensitive to sample size than the existing unified measure, it can be used to evaluate models even with small samples.