Kirill Terekhov / INMOST / Commits / 519bb261

Commit 519bb261, authored Jan 03, 2015 by Alexander Danilov

Update examples and readme files

Install instructions from readme file were tested on Linux machine.

parent 86400a9a
Changes: 6 files
README (view file @ 519bb261)
...
@@ -31,7 +31,7 @@ rm -f petsc-3.4.5.tar.gz
 cd petsc-3.4.5
 export PETSC_DIR="`pwd`"
 export PETSC_ARCH=linux-gnu-opt
-./configure --download-f-blas-lapack --download-metis --download-parmetis --useThreads=0 --with-debugging=0 --with-mpi-dir=/usr --with-x=0 -with-shared-libraries=0
+./configure --download-f-blas-lapack --download-metis --download-parmetis --useThreads=0 --with-debugging=0 --with-mpi-dir=/usr --with-x=0 --with-shared-libraries=0
 make all
...
@@ -42,16 +42,15 @@ Download and unpack INMOST source archive.
 cd "$INMOST_LIBS"
 wget https://github.com/INM-RAS/INMOST/archive/master.tar.gz
-tar zxf INMOST-master.tar.gz
+tar zxf master.tar.gz
 rm -f INMOST-master.tar.gz
 We will create separate directory for INMOST compilation.
 Depending on your version of gcc compiler you may need one of these flags for CMAKE_CXX_FLAGS variable: "-std=c++11" or "-std=c++0x".
 mkdir -p INMOST-build
 cd INMOST-build
-cmake -DUSE_AUTODIFF=OFF -DUSE_SOLVER_PETSC=ON -DUSE_PARTITIONER_PARMETIS=ON -DCMAKE_CXX_FLAGS="-std=c++11" -DCOMPILE_EXAMPLES=ON ../INMOST-master
+cmake -DUSE_AUTODIFF=OFF -DUSE_SOLVER_PETSC=ON -DUSE_PARTITIONER=OFF -DCOMPILE_EXAMPLES=ON ../INMOST-master
 make
...
@@ -65,11 +64,11 @@ Each example may be executed in serial or parallel ways.
 Parallel Grid Generation
 ------------------------
 This example creates simple cubic or prismatic mesh. You can use ParaView to
 view the meshes.
 cd "$INMOST_LIBS/INMOST-build"
 cd examples/GridGen
-mpirun -np 4 GrdiGen 3 32 32 4
+mpirun -np 4 ./GridGen 4 32 32 32
 Generator parameters are: ng nx ny nz
 where ng=3 stands for Prismatic generator and
...
@@ -89,11 +88,11 @@ Parallel Finite Volume Discretization
 This example uses simple two-point FVM scheme to solve Laplace's equation in unit cube domain.
 cd ../FVDiscr
-mpirun -np 4 FVDiscr ../GridGen/grid.pvtk A.mtx b.rhs
+mpirun -np 4 ./FVDiscr ../GridGen/grid.pvtk A.mtx b.rhs
 Files result.pvtk (as well as result_X.vtk with X=0,1,2,3) and A.mtx b.rhs will appear in the current directory.
 Run
-paraview --data=grid.pvtk
+paraview --data=result.pvtk
 and try the following tags in objects to display:
 Solution - the solution to the problem
 K - tensor K (constant equal to 1 in this example)
...
@@ -103,8 +102,8 @@ Solve the Matrix stored in mtx format
 This example solves the linear system using different solvers.
-cd ../FVDiscr
-mpirun -np 4 MatSolve 0 ../FVDiscr/A.mtx ../FVDiscr/b.rhs
+cd ../MatSolve
+mpirun -np 4 ./MatSolve 0 ../FVDiscr/A.mtx ../FVDiscr/b.rhs
 Solution time and the true residual will output to the screen.
 The first parameter selects the solver:
...
README.md (view file @ 519bb261)
...
@@ -29,7 +29,7 @@ rm -f petsc-3.4.5.tar.gz
 cd petsc-3.4.5
 export PETSC_DIR="`pwd`"
 export PETSC_ARCH=linux-gnu-opt
-./configure --download-f-blas-lapack --download-metis --download-parmetis --useThreads=0 --with-debugging=0 --with-mpi-dir=/usr --with-x=0 –with-shared-libraries=0
+./configure --download-f-blas-lapack --download-metis --download-parmetis --useThreads=0 --with-debugging=0 --with-mpi-dir=/usr --with-x=0 --with-shared-libraries=0
 make all
 ```
...
@@ -39,20 +39,19 @@ Download and unpack INMOST source archive.
 ```
 cd "$INMOST_LIBS"
 wget https://github.com/INM-RAS/INMOST/archive/master.tar.gz
-tar zxf INMOST-master.tar.gz
+tar zxf master.tar.gz
 rm -f INMOST-master.tar.gz
 ```
 We will create separate directory for INMOST compilation.
 Depending on your version of gcc compiler you may need one of these flags for `CMAKE_CXX_FLAGS` variable: `"-std=c++11"` or `"-std=c++0x"`.
 ```
 mkdir -p INMOST-build
 cd INMOST-build
-cmake -DUSE_AUTODIFF=OFF -DUSE_SOLVER_PETSC=ON -DUSE_PARTITIONER_PARMETIS=ON -DCMAKE_CXX_FLAGS="-std=c++11" -DCOMPILE_EXAMPLES=ON ../INMOST-master
+cmake -DUSE_AUTODIFF=OFF -DUSE_SOLVER_PETSC=ON -DUSE_PARTITIONER=OFF -DCOMPILE_EXAMPLES=ON ../INMOST-master
 make
 ```
 ## Examples
 Several representative examples are provided in source archive.
 Here we will try three parallel steps: grid generation, FVM discretization and linear matrix solution.
...
@@ -60,11 +59,11 @@ Each example may be executed in serial or parallel ways.
 ### Parallel Grid Generation
 This example creates simple cubic or prismatic mesh. You can use ParaView to
 view the meshes.
 ```
 cd "$INMOST_LIBS/INMOST-build"
 cd examples/GridGen
-mpirun -np 4 GrdiGen 3 32 32 4
+mpirun -np 4 ./GridGen 4 32 32 32
 ```
 Generator parameters are: `ng nx ny nz`
 where `ng=3` stands for Prismatic generator and
...
@@ -83,11 +82,11 @@ and try the following tags in objects to display:
 This example uses simple two-point FVM scheme to solve Laplace's equation in unit cube domain.
 ```
 cd ../FVDiscr
-mpirun -np 4 FVDiscr ../GridGen/grid.pvtk A.mtx b.rhs
+mpirun -np 4 ./FVDiscr ../GridGen/grid.pvtk A.mtx b.rhs
 ```
 Files result.pvtk (as well as result_X.vtk with X=0,1,2,3) and A.mtx b.rhs will appear in the current directory.
 Run
-`paraview --data=grid.pvtk`
+`paraview --data=result.pvtk`
 and try the following tags in objects to display:
 - `Solution` – the solution to the problem
 - `K` – tensor K (constant equal to 1 in this example)
...
@@ -96,8 +95,8 @@ and try the following tags in objects to display:
 This example solves the linear system using different solvers.
 ```
-cd ../FVDiscr
-mpirun -np 4 MatSolve 0 ../FVDiscr/A.mtx ../FVDiscr/b.rhs
+cd ../MatSolve
+mpirun -np 4 ./MatSolve 0 ../FVDiscr/A.mtx ../FVDiscr/b.rhs
 ```
 Solution time and the true residual will output to the screen.
 The first parameter selects the solver:
...
examples/CMakeLists.txt (view file @ 519bb261)
 #add_subdirectory(DrawGrid)
 add_subdirectory(OldDrawGrid)
 #add_subdirectory(DrawMatrix)
-#add_subdirectory(MatSolve)
-#add_subdirectory(GridGen)
-#add_subdirectory(FVDiscr)
+add_subdirectory(MatSolve)
+add_subdirectory(GridGen)
+add_subdirectory(FVDiscr)
 #add_subdirectory(OctreeCutcell)
 #add_subdirectory(Solver)
examples/FVDiscr/main.cpp (view file @ 519bb261)
...
@@ -128,7 +128,7 @@ int main(int argc,char ** argv)
 	Solver::Matrix A;   // Declare the matrix of the linear system to be solved
 	Solver::Vector x,b; // Declare the solution and the right-hand side vectors
-	std::map<GeometricData,ElementType> table;
+	tiny_map<GeometricData,ElementType,5> table;
 	table[MEASURE] = CELL | FACE;
 	table[CENTROID] = CELL | FACE;
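`tiny_map` is a small fixed-capacity map (at most 5 entries here) used instead of `std::map` for the table of geometric data requested from the mesh. A minimal sketch of how the table is filled; handing it to the mesh afterwards (for example through a PrepareGeometricData-style call) happens outside the visible hunk and is an assumption here:

```cpp
// Request precomputed measures (areas/volumes) and centroids on cells and faces.
tiny_map<GeometricData, ElementType, 5> table;
table[MEASURE]  = CELL | FACE;
table[CENTROID] = CELL | FACE;
// m->PrepareGeometricData(table); // assumed follow-up call, not shown in this diff
```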
...
@@ -160,10 +160,10 @@ int main(int argc,char ** argv)
 	{
 		//~ std::cout << face->LocalID() << " / " << m->NumberOfFaces() << std::endl;
 		Element::Status s1, s2;
-		Cell * r1 = face->BackCell();
-		Cell * r2 = face->FrontCell();
-		if( ((r1 == NULL || (s1 = r1->GetStatus()) == Element::Ghost) ? 0 : 1) + ((r2 == NULL || (s2 = r2->GetStatus()) == Element::Ghost) ? 0 : 1) == 0 ) continue;
+		Cell r1 = face->BackCell();
+		Cell r2 = face->FrontCell();
+		if( ((!r1->isValid() || (s1 = r1->GetStatus()) == Element::Ghost) ? 0 : 1) + ((!r2->isValid() || (s2 = r2->GetStatus()) == Element::Ghost) ? 0 : 1) == 0 ) continue;
 		Storage::real f_nrm[3], r1_cnt[3], r2_cnt[3], f_cnt[3], d1[3], Coef;
 		Storage::real f_area = face->Area();     // Get the face area
 		Storage::real vol1 = r1->Volume(), vol2; // Get the cell volume
...
@@ -175,7 +175,7 @@ int main(int argc,char ** argv)
 		f_nrm[2] /= f_area;
 		r1->Barycenter(r1_cnt);  // Get the barycenter of the cell
 		face->Barycenter(f_cnt); // Get the barycenter of the face
-		if( r2 == NULL ) // boundary condition
+		if( !r2->isValid() ) // boundary condition
 		{
 			Storage::real bnd_pnt[3], dist;
 			make_vec(f_cnt, r1_cnt, d1);
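The two hunks above switch the example from raw `Cell *` pointers (compared against NULL) to INMOST `Cell` handles checked with `isValid()`. A minimal sketch of the updated pattern, assuming a `Face` handle `face` inside the face loop; only calls that appear in the hunks are used, and the helper function itself is hypothetical:

```cpp
// Decide whether a face can be skipped during assembly: true when neither
// neighbouring cell is a valid, locally owned (non-ghost) cell.
bool skip_face(Face face)
{
	Cell r1 = face->BackCell();   // cell behind the face
	Cell r2 = face->FrontCell();  // cell in front of the face (invalid on the boundary)
	int owned = ((!r1->isValid() || r1->GetStatus() == Element::Ghost) ? 0 : 1)
	          + ((!r2->isValid() || r2->GetStatus() == Element::Ghost) ? 0 : 1);
	return owned == 0; // at this point the example issues `continue`
}
```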
...
@@ -266,7 +266,7 @@ int main(int argc,char ** argv)
 	if( m->GetProcessorRank() == 0 ) std::cout << "Retrive data: " << Timer() - ttt << std::endl;
 	ttt = Timer();
-	m->ExchangeData(phi,CELL);   // Data exchange over processors
+	m->ExchangeData(phi,CELL,0); // Data exchange over processors
 	BARRIER
 	if( m->GetProcessorRank() == 0 ) std::cout << "Exchange phi: " << Timer() - ttt << std::endl;
...
examples/GridGen/main.cpp (view file @ 519bb261)
...
@@ -71,7 +71,7 @@ Mesh * ParallelCubeGenerator(INMOST_MPI_Comm comm, int nx, int ny, int nz)
 		localend[j] = localstart[j] + localsize[j];
 	}
-	std::vector<Node *> newverts;
+	ElementArray<Node> newverts(m);
 	newverts.reserve(localsize[0]*localsize[1]*localsize[2]);
 	for(int i = localstart[0]; i <= localend[0]; i++)
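The generator now collects created vertices in an `ElementArray<Node>` bound to the mesh instead of a `std::vector<Node *>` of raw pointers. A minimal sketch of the new usage, assuming an existing `Mesh *m`; the coordinates are made up for illustration:

```cpp
ElementArray<Node> newverts(m);          // element array tied to mesh m
newverts.reserve(8);                     // reserve works much like std::vector
Storage::real xyz[3] = {0.0, 0.0, 0.0};  // arbitrary node coordinates
newverts.push_back(m->CreateNode(xyz));  // CreateNode returns a handle the array can store
```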
...
@@ -83,7 +83,7 @@ Mesh * ParallelCubeGenerator(INMOST_MPI_Comm comm, int nx, int ny, int nz)
 				xyz[1] = j * 1.0 / (sizes[1]);
 				xyz[2] = k * 1.0 / (sizes[2]);
 				newverts.push_back(m->CreateNode(xyz)); // Create node in the mesh
-				if( ((int)newverts.size() - 1) != V_ID(i,j,k) )
+				if( newverts.size() != V_ID(i,j,k) + 1 )
 					printf("v_id = %ld, [%d,%d,%d] = %d\n", newverts.size() - 1, i, j, k, V_ID(i,j,k));
 			}
...
@@ -95,17 +95,17 @@ Mesh * ParallelCubeGenerator(INMOST_MPI_Comm comm, int nx, int ny, int nz)
 		for(int j = localstart[1] + 1; j <= localend[1]; j++)
 			for(int k = localstart[2] + 1; k <= localend[2]; k++)
 			{
-				const INMOST_DATA_ENUM_TYPE nvf[24] = { 0, 4, 6, 2, 1, 3, 7, 5, 0, 1, 5, 4, 2, 6, 7, 3, 0, 2, 3, 1, 4, 5, 7, 6 };
-				const INMOST_DATA_ENUM_TYPE numnodes[6] = { 4, 4, 4, 4, 4, 4 };
-				Node * verts[8];
-				verts[0] = newverts[V_ID(i - 1, j - 1, k - 1)];
-				verts[1] = newverts[V_ID(i - 0, j - 1, k - 1)];
-				verts[2] = newverts[V_ID(i - 1, j - 0, k - 1)];
-				verts[3] = newverts[V_ID(i - 0, j - 0, k - 1)];
-				verts[4] = newverts[V_ID(i - 1, j - 1, k - 0)];
-				verts[5] = newverts[V_ID(i - 0, j - 1, k - 0)];
-				verts[6] = newverts[V_ID(i - 1, j - 0, k - 0)];
-				verts[7] = newverts[V_ID(i - 0, j - 0, k - 0)];
+				const INMOST_DATA_INTEGER_TYPE nvf[24] = { 0, 4, 6, 2, 1, 3, 7, 5, 0, 1, 5, 4, 2, 6, 7, 3, 0, 2, 3, 1, 4, 5, 7, 6 };
+				const INMOST_DATA_INTEGER_TYPE numnodes[6] = { 4, 4, 4, 4, 4, 4 };
+				ElementArray<Node> verts(m);
+				verts.push_back(newverts[V_ID(i - 1, j - 1, k - 1)]);
+				verts.push_back(newverts[V_ID(i - 0, j - 1, k - 1)]);
+				verts.push_back(newverts[V_ID(i - 1, j - 0, k - 1)]);
+				verts.push_back(newverts[V_ID(i - 0, j - 0, k - 1)]);
+				verts.push_back(newverts[V_ID(i - 1, j - 1, k - 0)]);
+				verts.push_back(newverts[V_ID(i - 0, j - 1, k - 0)]);
+				verts.push_back(newverts[V_ID(i - 1, j - 0, k - 0)]);
+				verts.push_back(newverts[V_ID(i - 0, j - 0, k - 0)]);
 				m->CreateCell(verts, nvf, numnodes, 6).first; // Create the cubic cell in the mesh
 			}
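For reference, the connectivity arrays in this hunk describe the hexahedron handed to `CreateCell`: `numnodes` says the cell has six quadrilateral faces, and `nvf` lists, face by face, the positions in `verts` of the nodes bounding that face. A commented restatement of the same data, assuming `verts` holds the eight corner nodes collected above and `m` is the mesh:

```cpp
// Six faces, four nodes each.
const INMOST_DATA_INTEGER_TYPE numnodes[6] = { 4, 4, 4, 4, 4, 4 };
// For every face, the indices into verts of its corner nodes; with the vertex
// numbering used above this enumerates the x-min, x-max, y-min, y-max, z-min
// and z-max faces of one cubic cell.
const INMOST_DATA_INTEGER_TYPE nvf[24] =
{
	0, 4, 6, 2,
	1, 3, 7, 5,
	0, 1, 5, 4,
	2, 6, 7, 3,
	0, 2, 3, 1,
	4, 5, 7, 6
};
m->CreateCell(verts, nvf, numnodes, 6); // 6 = number of faces of the cell
```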
...
@@ -176,7 +176,7 @@ Mesh * ParallelCubePrismGenerator(INMOST_MPI_Comm comm, int nx, int ny, int nz)
 		localend[j] = localstart[j] + localsize[j];
 	}
-	std::vector<Node *> newverts;
+	ElementArray<Node> newverts(m);
 	newverts.reserve(localsize[0]*localsize[1]*localsize[2]);
 	for(int i = localstart[0]; i <= localend[0]; i++)
...
@@ -188,7 +188,7 @@ Mesh * ParallelCubePrismGenerator(INMOST_MPI_Comm comm, int nx, int ny, int nz)
 				xyz[1] = j * 1.0 / (sizes[1]);
 				xyz[2] = k * 1.0 / (sizes[2]);
 				newverts.push_back(m->CreateNode(xyz)); // Create node in the mesh
-				if( ((int)newverts.size() - 1) != V_ID(i,j,k) )
+				if( newverts.size() != V_ID(i,j,k) + 1 )
 					printf("v_id = %ld, [%d,%d,%d] = %d\n", newverts.size() - 1, i, j, k, V_ID(i,j,k));
 			}
...
@@ -202,23 +202,23 @@ Mesh * ParallelCubePrismGenerator(INMOST_MPI_Comm comm, int nx, int ny, int nz)
 		for(int j = localstart[1] + 1; j <= localend[1]; j++)
 			for(int k = localstart[2] + 1; k <= localend[2]; k++)
 			{
-				const INMOST_DATA_ENUM_TYPE NE_nvf1[18] = { 0, 4, 6, 2, 0, 3, 7, 4, 2, 6, 7, 3, 0, 2, 3, 4, 7, 6 };
-				const INMOST_DATA_ENUM_TYPE NE_nvf2[18] = { 0, 4, 7, 3, 1, 3, 7, 5, 0, 1, 5, 4, 0, 3, 1, 4, 5, 7 };
+				const INMOST_DATA_INTEGER_TYPE NE_nvf1[18] = { 0, 4, 6, 2, 0, 3, 7, 4, 2, 6, 7, 3, 0, 2, 3, 4, 7, 6 };
+				const INMOST_DATA_INTEGER_TYPE NE_nvf2[18] = { 0, 4, 7, 3, 1, 3, 7, 5, 0, 1, 5, 4, 0, 3, 1, 4, 5, 7 };
-				const INMOST_DATA_ENUM_TYPE NE_nvf3[18] = { 0, 4, 6, 2, 2, 6, 5, 1, 1, 5, 4, 0, 0, 2, 1, 4, 5, 6 };
-				const INMOST_DATA_ENUM_TYPE NE_nvf4[18] = { 1, 5, 6, 2, 1, 3, 7, 5, 7, 3, 2, 6, 1, 2, 3, 6, 5, 7 };
+				const INMOST_DATA_INTEGER_TYPE NE_nvf3[18] = { 0, 4, 6, 2, 2, 6, 5, 1, 1, 5, 4, 0, 0, 2, 1, 4, 5, 6 };
+				const INMOST_DATA_INTEGER_TYPE NE_nvf4[18] = { 1, 5, 6, 2, 1, 3, 7, 5, 7, 3, 2, 6, 1, 2, 3, 6, 5, 7 };
-				const INMOST_DATA_ENUM_TYPE numnodes[5] = { 4, 4, 4, 3, 3 };
+				const INMOST_DATA_INTEGER_TYPE numnodes[5] = { 4, 4, 4, 3, 3 };
-				Node * verts[8];
-				verts[0] = newverts[V_ID(i - 1, j - 1, k - 1)];
-				verts[1] = newverts[V_ID(i - 0, j - 1, k - 1)];
-				verts[2] = newverts[V_ID(i - 1, j - 0, k - 1)];
-				verts[3] = newverts[V_ID(i - 0, j - 0, k - 1)];
-				verts[4] = newverts[V_ID(i - 1, j - 1, k - 0)];
-				verts[5] = newverts[V_ID(i - 0, j - 1, k - 0)];
-				verts[6] = newverts[V_ID(i - 1, j - 0, k - 0)];
-				verts[7] = newverts[V_ID(i - 0, j - 0, k - 0)];
+				ElementArray<Node> verts(m);
+				verts.push_back(newverts[V_ID(i - 1, j - 1, k - 1)]);
+				verts.push_back(newverts[V_ID(i - 0, j - 1, k - 1)]);
+				verts.push_back(newverts[V_ID(i - 1, j - 0, k - 1)]);
+				verts.push_back(newverts[V_ID(i - 0, j - 0, k - 1)]);
+				verts.push_back(newverts[V_ID(i - 1, j - 1, k - 0)]);
+				verts.push_back(newverts[V_ID(i - 0, j - 1, k - 0)]);
+				verts.push_back(newverts[V_ID(i - 1, j - 0, k - 0)]);
+				verts.push_back(newverts[V_ID(i - 0, j - 0, k - 0)]);
 				// Create two prismatic cells in the mesh
 				if( (i + j) % 2 == 0 )
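Here `numnodes = {4,4,4,3,3}` describes a five-face prism: three quadrilateral side faces followed by two triangular caps, and each `NE_nvf*` array picks those faces out of the same eight collected vertices, with the `(i + j) % 2` test alternating the diagonal along which a cube is split into two prisms. A hypothetical call shape following the hexahedral case above; the actual `CreateCell` calls fall outside the visible hunk:

```cpp
// The two prisms filling one cube cell; on the alternate diagonal the
// NE_nvf3/NE_nvf4 arrays would be used instead of NE_nvf1/NE_nvf2.
m->CreateCell(verts, NE_nvf1, numnodes, 5); // 5 faces: 3 quads + 2 triangles
m->CreateCell(verts, NE_nvf2, numnodes, 5);
```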
...
@@ -280,10 +280,14 @@ int main(int argc, char *argv[])
 		filename += ".vtk";
 	else
 		filename += ".pvtk";
 #if defined(USE_MPI)
 	MPI_Barrier(mesh->GetCommunicator());
 #endif
 	tt = Timer();
 	mesh->Save(filename); // Save constructed mesh to the file
 #if defined(USE_MPI)
 	MPI_Barrier(mesh->GetCommunicator());
 #endif
 	tt = Timer() - tt;
 	if( mesh->GetProcessorRank() == 0 ) std::cout << "Save to file \"" << filename << "\" time: " << tt << std::endl;
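The hunk above brackets `Mesh::Save` with MPI barriers so that the reported time covers the slowest rank rather than whichever process reaches the timer first. The pattern as a standalone sketch, assuming `mesh` and `filename` exist as in the example and that `USE_MPI` is defined by the build:

```cpp
#if defined(USE_MPI)
	MPI_Barrier(mesh->GetCommunicator()); // start the clock only when every rank is ready
#endif
	double tt = Timer();
	mesh->Save(filename);                 // collective write of the .pvtk/.vtk output
#if defined(USE_MPI)
	MPI_Barrier(mesh->GetCommunicator()); // wait for all ranks before stopping the clock
#endif
	tt = Timer() - tt;                    // elapsed save time
```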
examples/MatSolve/main.cpp (view file @ 519bb261)
...
@@ -40,7 +40,7 @@ int main(int argc, char ** argv)
 	Solver::Vector b("rhs"); // Declare the right-hand side vector
 	Solver::Vector x("sol"); // Declare the solution vector
 	//std::cout << rank << " load matrix from " << std::string(argv[2]) << " ..." << std::endl;
-	long double t = Timer(), tt = Timer();
+	double t = Timer(), tt = Timer();
 	mat.Load(std::string(argv[2])); //if interval parameters not set, matrix will be divided automatically
 	BARRIER
 	if( !rank ) std::cout << "load matrix: " << Timer() - t << std::endl;
...