Hybrid optimization technique for matrix chain multiplication using Strassen's algorithm


Abstract

BACKGROUND: Matrix Chain Multiplication (MCM) is a fundamental problem in computational mathematics and computer science, arising frequently in scientific computing, graphics, and machine learning. Traditional MCM optimization uses Dynamic Programming (DP) with memoization to determine the parenthesization that minimizes the number of scalar multiplications. However, standard matrix multiplication still runs in O(n³) time, which is inefficient for large matrices.

METHODS: In this paper, we propose a hybrid optimization technique that integrates Strassen's algorithm into MCM to further accelerate matrix multiplication. Our approach consists of two phases: (i) matrix chain order optimization, in which a top-down memoized DP computes the best multiplication sequence, and (ii) a hybrid multiplication strategy, in which Strassen's algorithm is applied selectively to large matrices (n ≥ 128), reducing the complexity from O(n³) to O(n^2.81), while standard multiplication is used for smaller matrices to avoid recursive overhead. We evaluate the hybrid method through computational experiments comparing execution time, memory usage, and numerical accuracy against traditional MCM and standalone Strassen multiplication.

RESULTS: Our results demonstrate that the proposed hybrid method achieves a significant speedup (4x-8x) and reduces memory consumption, making it well-suited for large-scale applications. This research opens pathways for further optimization via parallel computing and GPU-accelerated matrix operations.

CONCLUSION: This study presents a hybrid approach to Matrix Chain Multiplication that integrates Strassen's algorithm, reducing execution time and memory usage. By applying Strassen's method selectively to large matrices, the proposed technique improves efficiency while preserving accuracy. Future work can focus on parallel computing and GPU acceleration for further optimization.
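The two phases described above can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes NumPy, uses the paper's stated n ≥ 128 threshold (here as `STRASSEN_THRESHOLD`), and applies Strassen only to square power-of-two operands for simplicity (a full implementation would pad odd-sized matrices). Function names such as `chain_order` and `multiply_chain` are illustrative.

```python
import numpy as np
from functools import lru_cache

STRASSEN_THRESHOLD = 128  # below this size, fall back to standard multiplication


def strassen(A, B):
    """Strassen's algorithm for square matrices whose side is a power of two."""
    n = A.shape[0]
    if n < STRASSEN_THRESHOLD:
        return A @ B  # standard multiplication avoids recursive overhead
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven recursive products instead of eight -> O(n^2.81)
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C


def chain_order(dims):
    """Phase (i): top-down memoized DP. Matrix i has shape dims[i] x dims[i+1].
    Returns a function cost(i, j) -> (min scalar multiplications, best split k)."""
    @lru_cache(maxsize=None)
    def cost(i, j):
        if i == j:
            return 0, -1
        best = (float("inf"), -1)
        for k in range(i, j):
            c = (cost(i, k)[0] + cost(k + 1, j)[0]
                 + dims[i] * dims[k + 1] * dims[j + 1])
            if c < best[0]:
                best = (c, k)
        return best
    return cost


def multiply_chain(mats):
    """Phase (ii): multiply in the DP-optimal order, choosing Strassen or
    standard multiplication per product (the hybrid strategy)."""
    dims = tuple([mats[0].shape[0]] + [m.shape[1] for m in mats])
    cost = chain_order(dims)

    def mul(i, j):
        if i == j:
            return mats[i]
        k = cost(i, j)[1]
        A, B = mul(i, k), mul(k + 1, j)
        n = A.shape[0]
        # Hybrid rule: Strassen only for large, square, power-of-two operands
        if (A.shape == B.shape and A.shape[0] == A.shape[1]
                and n >= STRASSEN_THRESHOLD and n & (n - 1) == 0):
            return strassen(A, B)
        return A @ B

    return mul(0, len(mats) - 1)
```

A quick usage example: for matrices of shapes 5x7, 7x3, and 3x6, `multiply_chain` first computes the cheapest parenthesization via the memoized DP, then evaluates the products, dispatching each one to Strassen or standard multiplication by size.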
