Abstract
This study reviews 33 meta-analyses and systematic reviews on Computational Thinking (CT), focusing on research quality, intervention effectiveness, and content. The quality of the included studies was assessed with the AMSTAR 2 tool: meta-analyses scored an average of 10.9 out of a possible 16 points, while systematic reviews averaged 6.1 out of 11. The 15 meta-analyses covered diverse intervention strategies. Project-based learning, text-based programming, and game-based learning demonstrated the most pronounced effects in terms of effect size and practical outcomes, while curricular integration, robotics programming, and unplugged strategies offered additional value in certain contexts. Gender and disciplinary background were stable moderators, whereas grade level and educational stage had more conditional effects; intervention duration, sample size, instructional tools, and assessment methods were also significant moderators in several studies. The 18 systematic reviews were analyzed with a five-layer framework based on ecological systems theory, covering educational context (microsystem), tools and strategies (mesosystem), social support (exosystem), macro-level characteristics (macrosystem), and CT development over time (chronosystem). Future research should focus on standardizing meta-analytic procedures, unifying effect size indicators, and strengthening longitudinal studies with cognitive network analysis. Additionally, systematic reviews should improve the credibility of their evidence by integrating textual synthesis with data-driven reasoning to reduce redundancy and homogeneity.