What is the largest linear system that Maple can solve? You might be surprised to find out. In this article we present strategies for solving sparse linear systems over the rationals. An example implementation is provided, but first we present a bit of background.

Sparse linear systems arise naturally from problems in mathematics, science, and engineering. Typically many quantities are related, but because of an underlying structure only a small subset of the elements appear in most equations. Consider networks, finite element models, structural analysis problems, and linear programming problems. In all of these cases each element is connected to only a handful of neighbours, making the resulting linear system sparse. Many mathematical algorithms also produce sparse linear algebra problems. For example, Berlekamp's algorithm for factoring polynomials, the quadratic sieve algorithm for integer factorization, and the index calculus algorithm for computing discrete logarithms all use sparse linear algebra mod p or p^k-1. We could modify the code below to run mod p, but that's another article.

This article focuses on direct methods for solving linear systems, i.e. Gaussian elimination. Iterative methods are not considered, although we could implement them - see this post. The main problem encountered when using a direct method is "fill in". A poor choice of pivot will increase the number of non-zero entries in each row and subsequently the amount of work required to solve the rest of the system. Repeatedly choosing bad pivots will fill in the matrix entirely, producing a dense system that can be very hard to solve. Good pivot choices preserve sparsity and keep the elimination process from getting bogged down.

The first strategy, structured Gaussian elimination, was developed to solve very large systems coming from integer factorization and discrete logarithm computations. You can read about it in more detail here. The algorithm assumes that the non-zero elements are not evenly distributed among the columns. It first solves for variables that appear in only one equation, repeatedly, until all triangular blocks are removed from the system. Next it declares a small number of the densest columns "heavy" and uses equations with one "light" variable for elimination. This ensures that the light part of the matrix never fills in. At some point one can also drop a few of the densest rows and recurse on the rest of the system to quickly solve for most of the variables. Afterwards the dense equations are evaluated at the solutions to obtain a smaller system in the unsolved variables. Experience shows that extremely sparse systems will often collapse.

Markowitz pivoting is a simple and popular heuristic for reducing fill in. It selects a pivot to minimize the product of the number of non-zero row and column elements. That is, each non-zero entry A[i,j] is given weight (#row - 1)(#column - 1) and a pivot with minimal weight is selected. We subtract one from the row and column counts to force the selection of single elements in a row or column if they exist, since their choice produces no fill in at all. A common problem with the strategy is that the overhead required to select a pivot and update the row and column weights after each elimination step is O(|A|). To make our implementation efficient we select blocks of pivots at a time. The equations containing the pivots are triangularized and a block of variables is eliminated from the system in each step of the algorithm.

Eventually, after running these methods, one obtains a small dense linear system. To solve these systems we use the LinearAlgebra:-Modular:-IntegerLinearSolve command, which uses hardware arithmetic to solve the system modulo primes, with Chinese remaindering and rational reconstruction to recover the solutions. One advantage of using this routine is that it is output sensitive - the amount of computation it does depends mostly on the size of the result instead of the size of the input.
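To make the Markowitz weighting concrete, here is a small sketch in Python rather than Maple (the function name and the dict-of-dicts sparse representation are illustrative, not part of the implementation discussed above). It selects one pivot at a time by the weight (#row - 1)(#column - 1), eliminates, and back-substitutes over the rationals:

```python
from fractions import Fraction

def markowitz_solve(rows, nvars):
    """Sparse Gaussian elimination with Markowitz pivoting (sketch).

    rows: list of dicts mapping column index -> coefficient, with
          column `nvars` holding the right-hand side.
    Assumes the system has a unique solution.
    """
    rows = [{j: Fraction(v) for j, v in r.items()} for r in rows]
    active = set(range(len(rows)))   # rows not yet used as pivots
    pivots = []                      # (row, column) in the order chosen

    for _ in range(nvars):
        # count non-zeros per column among the active rows
        col_count = {}
        for i in active:
            for j in rows[i]:
                if j < nvars:
                    col_count[j] = col_count.get(j, 0) + 1
        # Markowitz weight: (#row non-zeros - 1) * (#column non-zeros - 1)
        best = None
        for i in active:
            r = sum(1 for j in rows[i] if j < nvars)
            for j in rows[i]:
                if j >= nvars:
                    continue
                w = (r - 1) * (col_count[j] - 1)
                if best is None or w < best[0]:
                    best = (w, i, j)
        if best is None:
            break
        _, pi, pj = best
        active.discard(pi)
        piv = rows[pi][pj]
        # eliminate the pivot column from every other active row
        for i in active:
            if pj in rows[i]:
                m = rows[i].pop(pj) / piv
                for j, v in rows[pi].items():
                    if j == pj:
                        continue
                    new = rows[i].get(j, Fraction(0)) - m * v
                    if new == 0:
                        rows[i].pop(j, None)
                    else:
                        rows[i][j] = new
        pivots.append((pi, pj))

    # back-substitute through the pivot rows in reverse order
    x = [Fraction(0)] * nvars
    for pi, pj in reversed(pivots):
        s = rows[pi].get(nvars, Fraction(0))
        for j, v in rows[pi].items():
            if j < nvars and j != pj:
                s -= v * x[j]
        x[pj] = s / rows[pi][pj]
    return x
```

Note the weight of a row or column singleton is zero, so such entries are always preferred, matching the observation above that they produce no fill in. Recomputing all weights each step is the O(|A|) overhead mentioned earlier; the block-pivot strategy amortizes it.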