Solvers test300
===============
This test is located in `Tests/test300`.
Solvers should accept intervals independently of the order in which they are provided.
The test generates a simple matrix A = diag(10) and a right-hand side populated with the local indices of each processor, r = (local_index_begin, ..., local_index_end), and expects the solver to return the vector x = (local_index_begin/10, ..., local_index_end/10).
Local indices are computed from the current processor number, with a processor shift provided by the user.
The test takes two parameters: the first is the permutation value, the second indicates the solver type.
The solver type argument takes the following values:
* 0 - internal BiCGStab(L) method with ILU2 preconditioner
* 1 - internal BiCGStab(L) method with multilevel DDPQ-ILUC2 preconditioner
* 2 - PETSc solvers
* 3 - Trilinos Aztec solver with Aztec preconditioner
* 4 - Trilinos Aztec solver with Ifpack preconditioner
* 5 - Trilinos Aztec solver with ML preconditioner
* 6 - Belos solver
* 7 - BiCG method with ILU2 preconditioner from the ANI package
This test seeds test300_serial_* tests into CMake for all activated solvers. These tests check that the solvers correctly read the input and output the correct solution.
If USE_MPI is activated and the CMake variable ${MPIEXEC} is set up correctly, the test also seeds test300_parallel_* tests. These tests check that the solvers read the input in parallel and output the correct solution, and additionally verify that permutation of local intervals works correctly. The PETSc solver skips the permutation test, since it cannot handle that situation.
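A sketch of how one such parallel test might be registered in CMake (the test name, process count, and arguments below are illustrative assumptions, not the project's actual build code):

```cmake
if(USE_MPI AND MPIEXEC)
  # Hypothetical registration of one parallel variant: run test300 on
  # 4 processes with permutation enabled (1) and the internal
  # BiCGStab(L)/ILU2 solver (type 0).
  add_test(NAME test300_parallel_inner_ilu2
           COMMAND ${MPIEXEC} ${MPIEXEC_NUMPROC_FLAG} 4
                   $<TARGET_FILE:test300> 1 0)
endif()
```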
The test originated from [https://github.com/INM-RAS/INMOST/issues/6](https://github.com/INM-RAS/INMOST/issues/6).