MyraMath
Tutorial for /iterative routines

Overview

The /iterative folder contains various iterative solution algorithms. Generally speaking, direct solvers favor robustness over speed, whereas iterative solvers lean in the other direction. Often the two solution frameworks are combined. In finite element substructuring methods, for example, direct solvers are used to explicitly factor out interior subdomain unknowns, and the leftover interface-matching problem is then solved using iterative techniques. Another noteworthy example is mixed precision refinement, wherein a low (single) precision direct solver is used as a preconditioner to solve a high (double) precision A*x=b problem. By providing both direct and iterative solution routines, MyraMath seeks to be an attractive "ecosystem" for implementing these kinds of algorithms.

All the iterative algorithms in MyraMath take Action's as their inputs. Action is an abstraction for a "callback" to apply a matrix-vector product (b=A*x), the key step for Krylov-type linear solution and eigensolution algorithms. In addition to Krylov-type algorithms, simpler algorithms like backwards refinement and randomized low-rank sampling are also implemented in terms of Action.

This page is organized into the following sections: the Action interface, Refinement, Krylov linear solvers, Krylov eigensolvers, and two appendices on writing your own Action's.

Action interface

Most of the classes we've used so far have been encapsulations of data. For example, Matrix encapsulates a 2D table of numbers, PatternBuilder encapsulates a bitmask of nonzero entries, and SparseRCholeskySolver encapsulates a (permuted) sparse triangular factor L such that A = L*L'. Action represents a slightly different concept: it encapsulates behavior. Specifically, Action encapsulates the behavior of applying a linear operator A to a column vector x to produce another column vector b. This is a fundamental operation behind many iterative solution algorithms: given a candidate solution x, the Action A can be applied to it to measure a residual, which is used in turn to improve the solution (ad infinitum).

Many functions and methods can readily be adapted into Action's. For instance, diagonal scaling (think dense/dimm.h), a sparse matrix-vector multiply (think sparse/gemm.h), and even the .solve() method of a Solver all model the concept of an Action by mapping an input vector x into an output vector b. Within /iterative there are many "adaptor" routines for making atomic Action's, such as make_GemmAction(), make_SymmAction(), make_DimmAction(), make_SolveAction() and make_IdentityAction().

These atomic Action's can be combined using the usual "vector-space" operations via overloaded operators (+, -, *, / etc.), as sketched below.
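
As a rough illustration (not a verbatim excerpt from the library), the snippet below assumes a SparseMatrix K, a dense Matrix V, Vectors x and b, and some solver object already exist, and uses only the adaptor names that appear elsewhere on this page:

// Hedged sketch of Action composition; K, V, x, b and solver are assumed to exist already.
auto A = make_GemmAction(K);                              // atomic Action, applies b := K*x
auto M = make_SolveAction(solver);                        // atomic Action, applies the solver's .solve()
auto P = A + make_GemmAction(V)*make_GemmAction(V,'T');   // composite Action, applies b := (K + V*V')*x
auto r = P*x - b;                                         // composite Action's apply to Vectors just like matrices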

This language of composition can be used to rapidly prototype all sorts of preconditioning effects (additive/multiplicative combination, deflation, etc). Even third-party or end-user code can be encapsulated within an Action (see iterative/UserAction.h). Any process that maps input vectors to output vectors is a suitable model of the Action concept; see the Appendices below for working examples. Just as MatrixRange serves as the "interface type" to get external algebraic data into direct routines (like getrf()), Action serves as the "interface type" to get external algebraic behavior into iterative routines (like gmres()).

Refinement

Backwards refinement (see iterative/refine.h) is a simple process for improving the accuracy of a direct solver. Each step entails:

1. forming the residual r = b - A*x of the current iterate x,
2. solving M*dx = r for a correction dx, where M is the approximate inverse supplied by the direct solver,
3. updating x := x + dx.

Refinement delegates to two Action's: one to apply A when forming the residual, and another to solve against A when forming the correction (this "inverse" Action is traditionally called M). The example below (tutorial/iterative/refine.cpp) demonstrates how to use refine(). The forward multiply by A is accomplished by make_SymmAction(), and the backsolution M comes from wrapping make_SolveAction() around an existing instance of an RLDLTSolver.

// Note: the MyraMath #include directives (LowerMatrix, Vector, RLDLTSolver, refine(),
// SymmAction, SolveAction, euclidean) appeared as hyperlinks in the original listing
// and are elided here.

#include <tests/myratest.h>

#include <iostream>

using namespace myra;

ADD_TEST("iterative_refine","[iterative]") // int main()
  {
  // Form random symmetric A and column vector b.
  int N = 100;
  auto A = LowerMatrix<float>::random(N);
  auto b = Vector<float>::random(N);
  // Factor A using an RLDLTSolver, denoted M.
  auto M = RLDLTSolver<float>(A.add_const());
  // Find A*x1 = b, by calling M.solve()
  auto x1 = b; M.solve(x1.column());
  // Find A*x2 = b, by calling refine()
  auto x2 = b; refine(make_SymmAction(A), make_SolveAction(M), x2.column() );
  // Compare answers x1 and x2.
  std::cout << "|A*x-b| (solve) = " << euclidean( make_SymmAction(A)*x1 - b ) << std::endl;
  std::cout << "|A*x-b| (refine)= " << euclidean( make_SymmAction(A)*x2 - b ) << std::endl;
  }
|A*x-b| (solve) = 0.0124765
|A*x-b| (refine)= 1.37793e-05

Most of the sparse direct solvers in /multifrontal already encapsulate this sequence of steps into a solver.refine() method; see Iterative Refinement for details.

It's possible to mix precision types (float and double) when performing backwards refinement, typically by applying a low-precision solver (M) to a high-precision system (A*x=b). The advantage is that a low-precision solver requires roughly half the memory of a high-precision one, but there is some risk that a low-precision factorization will break down due to ill-conditioning. See iterative/RaiseAction.h and iterative/LowerAction.h for helper classes that adjust the precision of an Action, or iterative/mixed_refine.h for canned algorithms.
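
To sketch the idea only: the helper names make_RaiseAction() and make_LowerAction() below are hypothetical guesses inferred from the header names above (the supported entry points live in iterative/mixed_refine.h and may differ), but the overall shape of a hand-rolled mixed-precision refinement would look something like this:

// Hypothetical sketch, not verified API: assumes a double-precision LowerMatrix A,
// a float RLDLTSolver M_single factored from a single-precision copy of A, and the
// double-precision right hand side already loaded into x.column() (overwritten with
// the refined solution). make_RaiseAction() is a guess based on RaiseAction.h.
auto A_action = make_SymmAction(A);                             // double-precision forward multiply
auto M_action = make_RaiseAction(make_SolveAction(M_single));   // float backsolve, promoted to double
refine(A_action, M_action, x.column());                         // drives the double-precision residual down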

Krylov linear solvers

The conditions under which iterative refinement converges are fairly restrictive. In particular, the original factorization must be fairly accurate to guarantee convergence/improvement. Krylov schemes are an alternative framework for solving A*x=b, wherein the k'th iterate x_k is sought as a linear combination of {b, A*b, A^2*b, A^3*b, ..., A^k*b}, called the Krylov space of (A,b). Under some mild assumptions about A and b, this collection of vectors completely spans R^n after n iterations, guaranteeing that it captures the exact solution of A*x=b (in exact arithmetic, at least).

In practice n iterations is still far too long to wait, but a more detailed analysis of the convergence properties of Krylov methods (too advanced to pursue here) reveals that they can converge quickly if A has favorable spectral properties (clustered eigenvalues, well conditioned). This observation leads to preconditioned Krylov schemes, in which another Action M is introduced and the alternative problem M^-1*A*x = M^-1*b is iterated instead. The preconditioner M^-1 serves to improve the spectral properties of A and should be, in some sense, an approximation of A's inverse (such that M^-1*A is well conditioned and/or better clustered). Finding a good M^-1 is problem dependent, but preconditioning strategies often draw from (i) incomplete factorizations, (ii) classical smoothers, or (iii) cheap approximations of the underlying "physics" of A.

There are many Krylov-type solvers, but the most important ones are probably pcg() (for symmetric positive definite A), and bicgstab() and gmres() (both for general invertible A).

All of these routines take an Action to apply A and another Action to apply M^-1, and they overwrite your initial guess for x with the final solution. They also return profiling data (convergence history, etc); check the detailed source documentation for details. The example code below (tutorial/iterative/pcg.cpp) demonstrates pcg() on a symmetric positive definite A (a 2D graph laplacian), using an incomplete Cholesky factorization as the preconditioner M^-1. For comparison, the problem is also solved with no preconditioner (by setting M^-1 = I).

// Note: the remaining MyraMath #include directives (Vector, SparseMatrix, laplacian2(),
// ICholeskySolver, the Action adaptors, euclidean, etc.) appeared as hyperlinks in the
// original listing and are elided here.

#include <myramath/sparse/gemv.h>

#include <myramath/iterative/pcg.h>

#include <tests/myratest.h>

#include <iostream>

using namespace myra;

ADD_TEST("iterative_pcg","[iterative]") // int main()
  {
  // Build graph laplacian over 2D grid of size IxJ
  int I = 400;
  int J = 200;
  int N = I*J;
  auto A = laplacian2<double>(I,J);
  auto b = Vector<double>::random(N);
  // Solve using pcg() with no preconditioner.
  std::cout << "-------- no preconditioner -------" << std::endl;
  auto x1 = Vector<double>::zeros(N);
  auto output1 = pcg( make_IdentityAction<double>(N), make_GemmAction(A), b, x1, 1.0e-8);
  std::cout << " |A*x-b|/|b| = " << euclidean(A*x1-b) / euclidean(b) << std::endl;
  std::cout << " iterations = " << output1.history.size() << std::endl;
  std::cout << std::endl;
  // Solve using pcg() with incomplete cholesky preconditioner.
  std::cout << "------- with preconditioner ------" << std::endl;
  auto x2 = Vector<double>::zeros(N);
  auto M = ICholeskySolver<double>(A);
  auto output2 = pcg( make_SolveAction(M), make_GemmAction(A), b, x2, 1.0e-8);
  std::cout << " |A*x-b|/|b| = " << euclidean(A*x2-b) / euclidean(b) << std::endl;
  std::cout << " iterations = " << output2.history.size() << std::endl;
  std::cout << std::endl;
  }
-------- no preconditioner -------
|A*x-b|/|b| = 7.24724e-09
iterations = 28
------- with preconditioner ------
|A*x-b|/|b| = 3.55127e-09
iterations = 10

For the sake of completeness, tutorial/iterative/bicgstab_gmres.cpp (shown below) compares the performance of bicgstab() and gmres() on a small 50x50 unsymmetric system.

// Note: the remaining MyraMath #include directives (Matrix, Vector, DiagonalMatrix,
// bicgstab(), gmres(), the Action adaptors, euclidean, stlprint, etc.) appeared as
// hyperlinks in the original listing and are elided here.

#include <myramath/dense/diag.h>
#include <myramath/dense/gemv.h>

#include <tests/myratest.h>

#include <iostream>
#include <cmath>

using namespace myra;

ADD_TEST("iterative_bicgstab_gmres","[iterative]") // int main()
  {
  using myra_stlprint::operator<<;
  // Make A, a random matrix with exponentially decaying diagonal shifts.
  int N = 50;
  auto A = Matrix<double>::random(N,N);
  double e0 = 2.0;
  double e1 = 0.5;
  for (int n = 0; n < N; ++n)
    {
    double dn = n;
    double dN = N;
    double e = e0 + dn/dN*(e1-e0);
    A(n,n) += std::pow(10.0,e);
    }

  // Make forcing data b.
  auto b = Vector<double>::ones(N);
  // Extract diagonal of A, invert it to use as a preconditioner.
  auto D = diag(A);
  invert_inplace(D);
  // Solve A*x=b using bicgstab()
  std::cout << "-------- bicgstab() -------" << std::endl;
  auto x1 = Vector<double>::zeros(N);
  auto output1 = bicgstab(make_DimmAction(D), make_GemmAction(A), b, x1);
  std::cout << " |A*x-b|/|b| = " << euclidean(A*x1-b)/euclidean(b) << std::endl;
  std::cout << " history = " << output1.history << std::endl;
  std::cout << std::endl;
  // Solve A*x=b using gmres()
  std::cout << "--------- gmres() ---------" << std::endl;
  auto x2 = Vector<double>::zeros(N);
  auto output2 = gmres(make_DimmAction(D), make_GemmAction(A), b, x2);
  std::cout << " |A*x-b|/|b| = " << euclidean(A*x2-b)/euclidean(b) << std::endl;
  std::cout << " history = " << output2.history << std::endl;
  std::cout << std::endl;
  }
-------- bicgstab() -------
|A*x-b|/|b| = 4.97367e-05
history = [ 1 0.4513 1.60752 0.108491 0.0379676 0.011765 0.00958063 0.00847243 0.00931855 0.00127366 0.00110497 0.000371644 0.000362132 0.000269913 0.000223167 0.000212457 0.000279266 4.97367e-05 ] (18)
--------- gmres() ---------
|A*x-b|/|b| = 0.000400246
history = [ 0.893465 0.0607599 0.0262138 0.0099447 0.00348539 0.0011249 0.000349895 0.000106706 3.01123e-05 ] (9)

The performance of the two methods is difficult to compare directly: gmres() converges better (monotonically) but requires more internal workspace, while each bicgstab() iteration internally requires two applications of A and M. End user experimentation is encouraged. It is likely that more Krylov solvers will be added to MyraMath over time.

Regardless of the method, successful Krylov solution hinges on good preconditioning. The .schur() and .partialsolve() methods on MyraMath's /multifrontal Solver's are intended to be the building blocks of substructuring preconditioners. This would fall under strategy (iii), exploiting the physical intuition that "long range interactions" are less important: solving a collection of local subproblems instead can often strongly cluster the spectrum of a large sparse A that arises from discretizing a partial differential equation.

Krylov eigensolvers

Spans of Krylov vectors {b, A*b, A^2*b, A^3*b, ..., A^k*b} are also good spaces for approximating extremal eigenpairs (consider their similarity to the iterates generated by the classical power method). Eigensolvers are considerably harder to implement than linear solvers. However, unstructured nested dissection requires solving eigenproblems over sparse graph laplacians, so MyraMath does implement a few simple Krylov eigensolvers (as details of the internal reorder()'ing routine), and these algorithms are callable from user code.

Two algorithms are provided: lanczos1(), which finds one dominant eigenpair of a real symmetric Action, and lopcg1(), which finds one small eigenpair of a real symmetric Action with the help of a preconditioner.

The example below (tutorial/iterative/lanczos_lopcg.cpp) uses both these algorithms to find one large eigenpair and one small eigenpair of a sparse real symmetric A:

// Note: the remaining MyraMath #include directives (the sparse-matrix builder, Vector,
// ICholeskySolver, lanczos1(), lopcg1(), the Action adaptors, euclidean, etc.) appeared
// as hyperlinks in the original listing and are elided here.

#include <myramath/sparse/gemv.h>
#include <myramath/sparse/diag.h>

#include <tests/myratest.h>

#include <iostream>

using namespace myra;

ADD_TEST("iterative_lanczos_lopcg","[iterative]") // int main()
  {
  // Extract extremal eigenpairs of A =
  //
  //   [ +2 -1             -1 ]
  //   [ -1 +2 -1             ]
  //   [    -1 +2 -1          ]
  //   [          .           ]
  //   [            .         ]
  //   [          -1 +2 -1    ]
  //   [             -1 +2 -1 ]
  //   [ -1             -1 +2 ]
  //
  // (1D laplacian with periodic boundary condition)
  int N = 16;
  // (declaration of B, an N-by-N sparse-matrix builder populated via B(i,j), was a
  //  hyperlink in the original listing and is elided here)
  for (int n = 0; n < N-1; ++n)
    {
    B(n,n) += 1.0;
    B(n+1,n+1) += 1.0;
    B(n,n+1) -= 1.0;
    B(n+1,n) -= 1.0;
    }
  B(0,0) += 1.0;
  B(N-1,N-1) += 1.0;
  B(0,N-1) -= 1.0;
  B(N-1,0) -= 1.0;
  auto A = B.make_SparseMatrix();
  // Form shifted ICholeskySolver to use as a preconditioner in lopcg1()
  ICholeskySolver<double> M(A,1.0e-6);
  // Find smallest eigenpair using lopcg1()
  std::cout << "-------- lopcg1() -------" << std::endl;
  auto result0 = lopcg1(make_SolveAction(M),make_GemmAction(A));
  // The eigenvector x0 should be [+1 +1 +1 +1 +1 +1 ...]/sqrt(N)
  const Vector<double>& x0 = result0.first;
  std::cout << "x0 = " << x0;
  // The eigenvalue lambda0 should be 0
  double lambda0 = 1.0/result0.second;
  std::cout << "lambda0 = " << lambda0 << std::endl;
  // Check residual of eigenstatement.
  std::cout << "|A*x0 - lambda0*x0| = " << euclidean(A*x0-x0*lambda0) << std::endl;
  std::cout << std::endl;
  // Find largest eigenpair using lanczos1()
  std::cout << "------- lanczos1() ------" << std::endl;
  auto result1 = lanczos1(make_GemmAction(A));
  // The eigenvector x1 should be [+1 -1 +1 -1 +1 -1 ...]/sqrt(N)
  const Vector<double>& x1 = result1.first;
  std::cout << "x1 = " << x1;
  // The eigenvalue lambda1 should be 4
  double lambda1 = result1.second;
  std::cout << "lambda1 = " << lambda1 << std::endl;
  // Check residual of eigenstatement.
  std::cout << "A*x1 - x1*lambda1 = " << euclidean(A*x1-x1*lambda1) << std::endl;
  }
-------- lopcg1() -------
x0 = size 16 Vector of double:
[ -0.25 -0.25 -0.25 -0.25 -0.25 -0.25 -0.25 -0.25 -0.25 -0.25 -0.25 -0.25 -0.25 -0.25 -0.25 -0.25 ]
lambda0 = -2.36655e-18
|A*x0 - lambda0*x0| = 2.1403e-10
------- lanczos1() ------
x1 = size 16 Vector of double:
[ 0.25 -0.25 0.25 -0.25 0.25 -0.25 0.25 -0.25 0.25 -0.25 0.25 -0.25 0.25 -0.25 0.25 -0.25 ]
lambda1 = 4
A*x1 - x1*lambda1 = 1.23629e-15

Note that these methods are only suitable for real symmetric A (graph laplacians are real symmetric, which is all MyraMath needs internally). For general A, more sophisticated algorithms (like the Arnoldi method) are needed, and these are not implemented here. Fortunately there are robust open source packages in this space that can help, like ARPACK and SLEPc.

Appendix 1: Writing your own Action

The algorithms in /iterative can be used with any Action. A variety of common Action's (sparse matrix-vector multiply, diagonal scaling, etc) and a language for composing them (sums, cascades, etc) are available. However, advanced users may discover a need to inject their own code as an Action (most likely as a preconditioner). Adaptor routines for this use case can be found in iterative/UserAction.h.

User code should be encapsulated within a class with the following signatures: a Number typedef identifying the scalar type, a .size() method returning a std::pair<int,int> of (rows, columns), and a .multiply() method that applies b := A*x to (possibly multi-column) range arguments. The class must also be copy-constructible.
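
As a concrete sketch (this mirrors the MyAction example below rather than quoting iterative/UserAction.h), a minimal conforming class that just scales its input by 2 might look like:

// Minimal sketch of the interface expected by make_UserAction(); it applies b := 2*x.
class ScaleByTwo
  {
  public:
  // Scalar type of the operator.
  typedef double Number;
  // Maps N-vectors into N-vectors.
  std::pair<int,int> size() const
    { return std::make_pair(N,N); }
  // Applies b := 2*x; x and b may have multiple columns, row-wise updates handle them all.
  void multiply(const CMatrixRange<double>& x, const MatrixRange<double>& b) const
    {
    b.zero();
    for (int i = 0; i < N; ++i)
      { b.row(i) += x.row(i); b.row(i) += x.row(i); }   // add x twice, so b := 2*x
    }
  // Capture any needed state via constructor arguments; the class must be copy-constructible.
  ScaleByTwo(int in_N) : N(in_N) { }
  private:
  int N;
  };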

Then, call make_UserAction() to adapt an instance of your class into an Action. Any user code wrapped like this can be further composed (added, cascaded, scaled, etc) just like any other native Action. The following program (tutorial/iterative/action1.cpp) shows a working example of this process. The "user code" applies a 1D graph laplacian over N points and is encapsulated within the MyAction class. It is adapted into an Action using make_UserAction(), then composed with a rank-1 correction (a regularization), and finally fed into pcg() for linear solution.

// Note: the remaining MyraMath #include directives (Matrix, Vector, UserAction,
// GemmAction, IdentityAction, frobenius, etc.) appeared as hyperlinks in the
// original listing and are elided here.

#include <myramath/iterative/pcg.h>

#include <tests/myratest.h>

#include <iostream>
#include <typeinfo>
#include <cmath> // for std::sqrt (not visible in the original listing)

using namespace myra;

namespace {

// An example user routine to be adapted into an Action, it implements a 1D graph laplacian over N points:
//
//      [ +1 -1          ]
//      [ -1 +2 -1       ]
//      [    -1 +2 -1    ]
//      [        .       ]
// b := [         .      ] * x
//      [    -1 +2 -1    ]
//      [       -1 +2 -1 ]
//      [          -1 +1 ]
//
class MyAction
  {
  public:

  // User routines must provide a Number typedef. This one works over double's.
  typedef double Number;

  // User routines must report their size. This one maps from N-vectors into N-vectors.
  std::pair<int,int> size() const
    { return std::make_pair(N,N); }

  // User routines must provide a method to map b := A*x, don't forget that x and b may have multiple columns.
  void multiply(const CMatrixRange<double>& x, const MatrixRange<double>& b) const
    {
    b.zero();
    for (int n = 0; n < N-1; ++n)
      {
      b.row(n) += x.row(n);
      b.row(n+1) += x.row(n+1);
      b.row(n) -= x.row(n+1);
      b.row(n+1) -= x.row(n);
      }
    }

  // User routines should capture any needed environmental references via constructor arguments.
  MyAction(int in_N)
    : N(in_N) { }

  private:

  // User routines can carry member data, but they must be copy-constructible.
  int N;

  };

} // namespace

ADD_TEST("iterative_action1","[iterative]") // int main()
  {
  // Adapts user code (encapsulated in MyAction) into an Action U.
  int N = 7;
  auto U = make_UserAction( MyAction(N) );
  std::cout << "U.make_Matrix() = " << U.make_Matrix() << std::endl;
  // Compose a new Action A = U + VV' (a rank one correction to U, to move its 0 eigenvalue to 1)
  auto V = Matrix<double>::fill(N,1, std::sqrt(1.0/N) );
  auto A = U + make_GemmAction(V)*make_GemmAction(V,'T');
  std::cout << "A.make_Matrix() = " << A.make_Matrix() << std::endl;
  // Solve a linear system A*x=b, don't really need a preconditioner.
  auto b = Vector<double>::random(N);
  auto M = make_IdentityAction<double>(N);
  auto x = Vector<double>::zeros(N);
  auto result = pcg(M,A,b,x);
  // Examine final residual.
  double residual = frobenius(A*x-b);
  std::cout << "|(MyAction+VV')*x - b| = " << residual << std::endl;
  }
U.make_Matrix() = size 7 by 7 Matrix of double:
[ 1 -1 0 0 0 0 0 ]
[ -1 2 -1 0 0 0 0 ]
[ 0 -1 2 -1 0 0 0 ]
[ 0 0 -1 2 -1 0 0 ]
[ 0 0 0 -1 2 -1 0 ]
[ 0 0 0 0 -1 2 -1 ]
[ 0 0 0 0 0 -1 1 ]
A.make_Matrix() = size 7 by 7 Matrix of double:
[ 1.14286 -0.857143 0.142857 0.142857 0.142857 0.142857 0.142857 ]
[ -0.857143 2.14286 -0.857143 0.142857 0.142857 0.142857 0.142857 ]
[ 0.142857 -0.857143 2.14286 -0.857143 0.142857 0.142857 0.142857 ]
[ 0.142857 0.142857 -0.857143 2.14286 -0.857143 0.142857 0.142857 ]
[ 0.142857 0.142857 0.142857 -0.857143 2.14286 -0.857143 0.142857 ]
[ 0.142857 0.142857 0.142857 0.142857 -0.857143 2.14286 -0.857143 ]
[ 0.142857 0.142857 0.142857 0.142857 0.142857 -0.857143 1.14286 ]
|(MyAction+VV')*x - b| = 6.93334e-16

Appendix 2: Writing an Action that calls third party code

Another notable use case for make_UserAction() is adapting third party code (like an external preconditioning algorithm) into an Action. Often this third party code isn't something you can easily modify or paste into a MyAction.multiply() method, and it may only be available in compiled form. You can still make this work: write a class (similar to MyAction from before) that delegates to the external library. The following program (tutorial/iterative/action2.cpp) shows a working example of this process.

// Note: the remaining MyraMath #include directives (Matrix, UserAction, threshold, etc.)
// appeared as hyperlinks in the original listing and are elided here.

#include <tests/myratest.h>

#include <cmath>
#include <iostream>
#include <typeinfo>

using namespace myra;

namespace {

// A mock for "3rd party code", this function applies a discrete hartley transform to a real N-vector.
void hartley(const double* in, double* out, int N)
  {
  double pi = std::acos(-1.0);
  double q2 = std::sqrt(2.0);
  double qN = std::sqrt(N);
  for (int k = 0; k < N; ++k)
    {
    out[k] = 0.0;
    for (int n = 0; n < N; ++n)
      out[k] += in[n] * q2 * std::cos(2*pi/N*n*k - pi/4);
    out[k] /= qN;
    }
  }

// Example user code, calls into the "3rd party code", will be adapted into an Action
class MyAction
  {
  public:

  typedef double Number;

  std::pair<int,int> size() const
    { return std::make_pair(N,N); }

  void multiply(const CMatrixRange<double>& x, const MatrixRange<double>& b) const
    {
    // Our "3rd party code", hartley(), only works on 1 column at a time.
    for (int j = 0; j < x.J; ++j)
      hartley(x.column(j).begin, b.column(j).begin, N);
    }

  MyAction(int in_N)
    : N(in_N) { }

  private:

  int N;

  };

} // namespace

ADD_TEST("iterative_action2","[iterative]") // int main()
  {
  // Adapt 3rd party code (implemented by hartley(), encapsulated in MyAction) into an Action H.
  int N = 7;
  auto H = make_UserAction( MyAction(N) );
  std::cout << "H.make_Matrix() = " << H.make_Matrix() << std::endl;
  // The discrete hartley transform happens to be involutory (it is its own inverse).
  // Cascade two together and examine the dense view of it. Should be identity.
  auto HH = H*H;
  std::cout << "HH.make_Matrix() = " << threshold(HH.make_Matrix(),1.0e-12) << std::endl;
  }
H.make_Matrix() = size 7 by 7 Matrix of double:
[ 0.377964 0.377964 0.377964 0.377964 0.377964 0.377964 0.377964 ]
[ 0.377964 0.531162 0.284383 -0.176542 -0.504527 -0.452593 -0.0598475 ]
[ 0.377964 0.284383 -0.504527 -0.0598475 0.531162 -0.176542 -0.452593 ]
[ 0.377964 -0.176542 -0.0598475 0.284383 -0.452593 0.531162 -0.504527 ]
[ 0.377964 -0.504527 0.531162 -0.452593 0.284383 -0.0598475 -0.176542 ]
[ 0.377964 -0.452593 -0.176542 0.531162 -0.0598475 -0.504527 0.284383 ]
[ 0.377964 -0.0598475 -0.452593 -0.504527 -0.176542 0.284383 0.531162 ]
HH.make_Matrix() = size 7 by 7 Matrix of double:
[ 1 0 0 0 0 0 0 ]
[ 0 1 0 0 0 0 0 ]
[ 0 0 1 0 0 0 0 ]
[ 0 0 0 1 0 0 0 ]
[ 0 0 0 0 1 0 0 ]
[ 0 0 0 0 0 1 0 ]
[ 0 0 0 0 0 0 1 ]

This is essentially the Adaptor design pattern, where MyraMath is the Client, the third party code is the Adaptee, and you write MyAction as the Adaptor. The pre-canned Action's for calling BLAS routines (gemm(), etc) basically work like this. You are strongly encouraged to inject your own preconditioning routines this way, or to build new ones from the other building blocks in MyraMath.

Go back to API Tutorials