1710 FVDiscr Example

Parallel Finite Volume Discretization

The code for this example is located in examples/FVDiscr

Brief

This example uses a simple two-point FVM scheme to solve Poisson's equation in a unit cube domain. The following classes are used: Mesh, Partitioner, Solver.

Description

This example solves the problem div(K grad U) = f with Dirichlet boundary conditions, where K is the unit tensor and the right-hand side f is computed from the exact solution U = sin(PI·x)·sin(PI·y)·sin(PI·z).
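
Since K is the identity, substituting the exact solution into the equation in the form written above (no leading minus sign; codes that state the problem as -div(K grad U) = f get the opposite sign) gives the right-hand side

f = div(grad U) = -3·PI²·sin(PI·x)·sin(PI·y)·sin(PI·z)

and the exact solution vanishes on the boundary of the unit cube, so the Dirichlet boundary condition is U = 0.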

This example can be run in either serial or parallel mode with NP processes.

The code loads the mesh for the unit cube domain. If INMOST is built with USE_PARTITIONER=ON and the input mesh is a serial mesh, the Inner_RCM partitioner is used to partition the mesh.
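
A minimal sketch of this step is shown below, assuming the usual INMOST Partitioner workflow; exact enum and method names may differ between INMOST versions, so treat this as illustrative rather than the verbatim code of examples/FVDiscr.

#if defined(USE_PARTITIONER)
Partitioner * p = new Partitioner(m);                          // m is the loaded Mesh *
p->SetMethod(Partitioner::Inner_RCM, Partitioner::Partition);  // select the Inner_RCM partitioner
p->Evaluate();                                                 // compute new cell owners
delete p;
m->Redistribute();                                             // move cells to their new owners
m->ReorderEmpty(CELL|FACE|EDGE|NODE);                          // compact local element storage
#endif
m->ExchangeGhost(1, FACE);                                     // one layer of ghost cells across faces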

One layer of ghost cells is created and exchanged. The simplest two-point FVM scheme is used to assemble the local matrices; the ghost cells effectively link the local matrices into the global matrix. Note that the two-point FVM scheme is valid only when cell faces are orthogonal to the segments connecting the centers of neighboring cells.
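
The flux coupling introduced by the scheme can be illustrated with the following schematic C++ sketch. It is not the actual code from examples/FVDiscr and does not use the INMOST API; the structures and names are illustrative, K is taken as the unit tensor, and the sign convention corresponds to assembling the operator -div(K grad U).

// Schematic two-point flux assembly (illustrative only, not the INMOST code).
#include <cmath>
#include <vector>

struct Cell { double center[3]; };               // cell barycenter
struct Face { int back, front; double area; };   // neighbor cell indices; front = -1 on the boundary

// Adds the two-point flux contribution of every interior face to a dense matrix A.
// For K = I the transmissibility is T = face area / distance between cell centers,
// and the flux through the face is T * (U_back - U_front).
void assemble_tpfa(const std::vector<Cell> & cells,
                   const std::vector<Face> & faces,
                   std::vector< std::vector<double> > & A)
{
  for (size_t k = 0; k < faces.size(); ++k)
  {
    const Face & f = faces[k];
    if (f.front < 0) continue;                   // boundary faces: Dirichlet data goes to the RHS
    const double * xb = cells[f.back].center;
    const double * xf = cells[f.front].center;
    double d = std::sqrt((xf[0]-xb[0])*(xf[0]-xb[0]) +
                         (xf[1]-xb[1])*(xf[1]-xb[1]) +
                         (xf[2]-xb[2])*(xf[2]-xb[2]));
    double T = f.area / d;                       // two-point transmissibility
    A[f.back][f.back]   += T;  A[f.back][f.front] -= T;
    A[f.front][f.front] += T;  A[f.front][f.back] -= T;
  }
}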

Optionally, the code saves the generated matrix and right-hand side to user-provided files. The distributed system is solved with the INNER_ILU2 solver. The solution is compared with the known exact solution, and the C and L₂ error norms are computed. The resulting mesh is saved to result.vtk or result.pvtk, depending on the number of processes NP.
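
For reference, one common way to define these discrete errors is (the exact volume weighting used by the example is an assumption here):

err_C  = max over cells |U_i - u(x_i)|
err_L2 = sqrt( (Σ over cells V_i·(U_i - u(x_i))²) / (Σ over cells V_i) )

where U_i is the computed cell value, u(x_i) the exact solution at the cell center, and V_i the cell volume.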

Arguments

Usage: ./FVDiscr mesh_file [A.mtx b.rhs]
  • The first parameter is the mesh file.
  • The two optional parameters are the output file names for the generated matrix and right-hand side.

Running the example

If you compiled INMOST with USE_PARTITIONER=OFF, you must provide a prepartitioned mesh; otherwise you can provide either a serial mesh or a prepartitioned mesh.

The input mesh /tmp/grid-32-32-32.pvtk used below can be generated with the GridGen example.

$ cd examples/FVDiscr
$ mpirun -np 4 ./FVDiscr /tmp/grid-32-32-32.pvtk /tmp/A.mtx /tmp/b.rhs
./FVDiscr
Processors: 4
Load(MPI_File): 0.34929
Assign id: 0.00495315
Exchange ghost: 0.0186219
Matrix assemble: 0.232133
Save matrix "/tmp/A.mtx" and RHS "/tmp/b.rhs": 0.368458
Solve system: 0.1323542e-06 | 1e-05
err_C  = 0.00237783
err_L2 = 0.000736251
Compute true residual: 0.561696
Retrieve data: 0.000573874
Exchange phi: 5.00679e-06
Save "result.pvtk": 0.150515

If you have ParaView installed, you can open the result mesh file:

$ paraview --data=result.pvtk

You can view the following tags:

  • Solution – the solution to the problem
  • K – the tensor K (constant, equal to 1 in this example)