Beyond Paterson–Stockmeyer: Advancing Matrix Polynomial Computation

For over fifty years, the Paterson–Stockmeyer method has been the benchmark for efficient matrix polynomial evaluation. In our recent open access article, we survey recent advances in this area and present a constructive scheme that evaluates a degree‑20 matrix polynomial using only 5 matrix multiplications, two fewer than the 7 required by Paterson–Stockmeyer.

We also show how the coefficients of this scheme can be derived from the solutions of a single equation in one unknown, and we document the full derivation in our supplementary materials.


Publication Details

  • Title: Beyond Paterson–Stockmeyer: Advancing Matrix Polynomial Computation
  • Authors: J. Sastre, J. Ibáñez, J. M. Alonso, E. Defez
  • Journal: WSEAS Transactions on Mathematics, Vol. 24, pp. 684–693, 2025
  • Conference: 5th Int. Conf. on Applied Mathematics, Computational Science and Systems Engineering (AMCSE), Paris, France, April 14–16, 2025
  • Open Access: https://doi.org/10.37394/23206.2025.24.68
  • Supplementary Material:

Main Contributions

  • Survey of recent advances in matrix polynomial evaluation.
  • Constructive result: A method to compute a degree‑20 matrix polynomial with just 5 matrix multiplications, compared with the 7 matrix products required by Paterson–Stockmeyer.
  • Coefficient derivation: All coefficients can be obtained by solving an equation in one unknown, documented step by step in the supplementary .txt file.
  • Generalization: We propose a framework for evaluation formulas of the type y_{k2}(A) with C_k^2 available variables, and state two conjectures for future research.
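For reference, the 7-product Paterson–Stockmeyer baseline for degree 20 can be sketched in a few lines (a minimal NumPy illustration, not the authors' code; the function and variable names are ours):

```python
import numpy as np

def ps_degree20(A, c):
    """Paterson-Stockmeyer evaluation of p(A) = sum_{i=0}^{20} c[i] A^i
    with block size s = 5, using 7 matrix-matrix products in total."""
    # Powers A^2..A^5: 4 matrix multiplications
    A2 = A @ A
    A3 = A2 @ A
    A4 = A3 @ A
    A5 = A4 @ A
    I = np.eye(A.shape[0])
    # Degree-<5 coefficient blocks: only scalar multiplications and additions
    B = [c[5*k]*I + c[5*k+1]*A + c[5*k+2]*A2 + c[5*k+3]*A3 + c[5*k+4]*A4
         for k in range(4)]
    # Horner scheme in A^5: 3 more matrix multiplications.
    # The leading term c[20]*A^5 is a scalar multiple, so it costs no product.
    P = c[20]*A5 + B[3]
    P = P @ A5 + B[2]
    P = P @ A5 + B[1]
    P = P @ A5 + B[0]
    return P
```

The count is 4 products for the powers A^2 through A^5 plus 3 Horner steps; the leading block needing only a scalar multiplication is what brings the total to 7 rather than 8.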

Why This Matters

Reducing matrix multiplications significantly lowers computational cost, which is crucial for:

  • Large-scale scientific computing
  • Numerical linear algebra
  • AI and machine learning models involving matrix functions

Next Steps

If you work with matrix functions or large-scale computations:

  • Try the 5-multiplication scheme for degree‑20 polynomials.
  • Benchmark against Paterson–Stockmeyer.
  • Explore adapting the rational-coefficient approach to other degrees.

We welcome collaboration on proving the conjectures and extending these ideas to broader polynomial families.

Polynomial approximations for the matrix logarithm with computation graphs

Polynomial approximations for the matrix logarithm with computation graphs, E. Jarlebring, J. Sastre, J. Ibáñez, Linear Algebra and its Applications, in press (open access), 2024. https://doi.org/10.1016/j.laa.2024.10.024, https://arxiv.org/abs/2401.10089, code.

In this article the matrix logarithm is computed using matrix polynomial approximations evaluated with matrix multiplications and additions. The most popular method for computing the matrix logarithm combines the inverse scaling and squaring method with a Padé approximation, sometimes accompanied by the Schur decomposition; the main computational effort lies in matrix–matrix multiplications and left matrix division. In this work we show that the number of such operations can be substantially reduced by using a graph-based representation of an efficient polynomial evaluation scheme. A technique to analyze the rounding error is proposed, and a backward error analysis is adapted. We provide substantial simulations illustrating competitiveness both in computation time and in rounding errors.
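To make the inverse scaling and squaring pattern concrete, here is a minimal NumPy sketch restricted to symmetric positive definite matrices (an assumption we add so the square roots can be taken by eigendecomposition; the paper replaces the plain Horner loop below with its graph-based evaluation schemes):

```python
import numpy as np

def logm_iss_taylor(A, s=5, m=16):
    """Inverse scaling and squaring with a truncated Taylor series.

    Sketch for symmetric positive definite A only.  Uses the identity
    log(A) = 2^s * log(A^(1/2^s)): after s square roots A^(1/2^s) is
    close to I, so log(I + X) = sum_{k>=1} (-1)^(k+1) X^k / k
    truncated at degree m converges quickly.
    """
    n = A.shape[0]
    for _ in range(s):
        w, V = np.linalg.eigh(A)       # A = V diag(w) V^T with w > 0
        A = (V * np.sqrt(w)) @ V.T     # principal square root
    X = A - np.eye(n)
    # Horner evaluation of the truncated Taylor polynomial in X
    L = ((-1) ** (m + 1) / m) * np.eye(n)
    for k in range(m - 1, 0, -1):
        L = X @ L + ((-1) ** (k + 1) / k) * np.eye(n)
    L = X @ L
    return (2 ** s) * L
```

For SPD inputs the logarithm could of course be read off the eigenvalues directly; the sketch only illustrates the scale, evaluate, rescale structure whose polynomial-evaluation step the paper accelerates.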

An Improved Taylor Algorithm for Computing the Matrix Logarithm

An Improved Taylor Algorithm for Computing the Matrix Logarithm, J. Ibáñez, J. Sastre, P. Ruiz, J. M. Alonso, E. Defez, Mathematics, Vol. 9 (17), 2021, 2018.

The matrix logarithm is used in many applications of science and engineering [2], such as machine learning [7,8,9,10], computer-aided design (CAD) [19], computer graphics [17], the analysis of the topological distances between networks [23], graph theory [11,12], quantum chemistry and mechanics [3,4], buckling simulation [5], biomolecular dynamics [6], the study of Markov chains [13], sociology [14], optics [15], mechanics [16], control theory [18], optimization [20], the study of viscoelastic fluids [21,22], the study of brain–machine interfaces [24], and also in statistics and data analysis [25], among other areas.

The most popular method for computing the matrix logarithm is the inverse scaling and squaring method in conjunction with a Padé approximation, sometimes accompanied by the Schur decomposition. In this work, we present a Taylor series algorithm, based on the transformation-free approach of the inverse scaling and squaring technique, that uses recent matrix polynomial formulas to evaluate the Taylor approximation of the matrix logarithm more efficiently than the Paterson–Stockmeyer method. Two MATLAB implementations of this algorithm, based respectively on relative forward and backward error analyses, were developed and compared with different state-of-the-art MATLAB functions. Numerical tests showed that the new implementations are generally more accurate than the previously available codes, with execution times intermediate among the codes compared.
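As a rough guide to the baseline these polynomial formulas improve on, the Paterson–Stockmeyer product count for a degree-m polynomial can be computed from the standard cost formula (a small helper we wrote for illustration; it reproduces the 7 products quoted above for degree 20):

```python
def ps_cost(m):
    """Minimum matrix products for Paterson-Stockmeyer at degree m.

    For block size s the cost is (s - 1) products for the powers
    A^2..A^s plus floor(m/s) Horner steps, one of which is free when
    s divides m (the leading block is then a scalar multiple of I).
    """
    return min((s - 1) + m // s - (1 if m % s == 0 else 0)
               for s in range(1, m + 1))
```

For example, `ps_cost(20)` returns 7, attained at block sizes s = 4 and s = 5; this is the count that the 5-product degree-20 scheme discussed above beats.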