\page Examples_MPI Category : parallel/MPI

Geant4 MPI Interface
====================

Authors:
Koichi Murakami (KEK) / Koichi.Murakami@kek.jp
Andrea Dotti (SLAC) / adotti@slac.stanford.edu


About the interface
===================
G4MPI is a native interface to MPI libraries. The directory contains
a Geant4 UI library and a couple of parallelized examples.
Using this interface, user applications can be parallelized with
different MPI-compliant libraries, such as Open MPI, MPICH and so on.

System Requirements:
--------------------

### MPI Library

The MPI interface can work with MPI-compliant libraries,
such as Open MPI, MPICH, Intel MPI etc.

For example, information about Open MPI can be obtained from
http://www.open-mpi.org/

MPI support:
------------
G4mpi has been tested with the following MPI flavors:
* Open MPI 5.0.2 and 4.1.4
* ~~MPICH 3.2~~
* ~~Intel MPI 5.0.1~~

### CMake

CMake is used to build the <a href="./group__extended__parallel__MPI__libG4mpi.html">G4MPI</a> library,
which works together with the Geant4 build system.

### Optional (for exMPI02)

ROOT for histogramming/analysis

How to build G4MPI
==================
To build the G4MPI library, use CMake against a Geant4 installation that was
itself built and installed with CMake.

Follow these commands:

> mkdir build
> cd build
> cmake -DGeant4_DIR=<your Geant4 install path>/lib[64]/Geant4-V.m.n \
    -DCMAKE_INSTALL_PREFIX=<where-G4mpi-lib-will-be-installed> \
    <g4source>/examples/extended/parallel/MPI/source
> make
> make install

The cmake step will try to guess where MPI is installed; the MPI executables
should be in your PATH. You can set the CXX and CC environment variables to
your specific MPI wrappers if needed.

The library and header files will be installed in the installation directory
specified by CMAKE_INSTALL_PREFIX. A CMake configuration file will also be
installed (see the examples for how to compile an application using G4mpi).

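If CMake picks up the wrong compilers, you can point it at the MPI wrappers explicitly. A minimal sketch (the wrapper names `mpicxx`/`mpicc` may differ between MPI flavors):

```sh
# Use the MPI compiler wrappers for this build
export CXX=mpicxx
export CC=mpicc

mkdir build && cd build
cmake -DGeant4_DIR=<your Geant4 install path>/lib[64]/Geant4-V.m.n \
      -DCMAKE_INSTALL_PREFIX=<where-G4mpi-lib-will-be-installed> \
      <g4source>/examples/extended/parallel/MPI/source
make && make install
```
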
How to use
==========

How to make parallel applications
---------------------------------

An example of a main program:

```c++
#include "G4MPImanager.hh"
#include "G4MPIsession.hh"
#include "G4RunManager.hh"

int main(int argc, char** argv)
{
  // At first, G4MPImanager/G4MPIsession should be created.
  G4MPImanager* g4MPI = new G4MPImanager(argc, argv);

  // MPI session (G4MPIsession) instead of G4UIterminal
  G4MPIsession* session = g4MPI->GetMPIsession();

  // user application setting
  G4RunManager* runManager = new G4RunManager();

  // ... user initializations (detector, physics list, user actions) ...

  // After the user application setting, just start an MPI session.
  // G4MPIsession handles both interactive and batch modes.
  session->SessionStart();

  // Finally, terminate the program.
  delete g4MPI;
  delete runManager;
}
```

How to compile
--------------

Using CMake, assuming the G4mpi library is installed in path _g4mpi-path_ and
Geant4 is installed in _g4-path_:

> mkdir build
> cd build
> cmake -DGeant4_DIR=<g4-path>/lib[64]/Geant4-V.m.n \
    -DG4mpi_DIR=<g4mpi-path>/lib[64]/G4mpi-V.m.n \
    <source>
> make

See the provided examples under examples/extended/parallel/MPI/examples for an
example of a CMakeLists.txt file to be used.

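For reference, a minimal CMakeLists.txt for such an application might look like the following sketch (the target name `mympiapp` and its source file are illustrative; the `G4mpi_*` variable names are assumed to follow the installed G4mpi CMake configuration file, as used in the provided examples):

```cmake
# Minimal sketch: build one executable against Geant4 and G4mpi
cmake_minimum_required(VERSION 3.16)
project(mympiapp)

find_package(Geant4 REQUIRED)   # located via -DGeant4_DIR=...
find_package(G4mpi  REQUIRED)   # located via -DG4mpi_DIR=...

include(${Geant4_USE_FILE})

add_executable(mympiapp mympiapp.cc)
target_include_directories(mympiapp PRIVATE ${G4mpi_INCLUDE_DIR})
target_link_libraries(mympiapp ${G4mpi_LIBRARIES} ${Geant4_LIBRARIES})
```
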
### Notes about session shell

LAM/MPI users can use "G4tcsh" as an interactive session shell.
Other users (Open MPI/MPICH2) should use G4csh (the default).

In the case of Open MPI, *LD_LIBRARY_PATH* for the Open MPI runtime libraries
should be set at run time. Alternatively, you can add this path to the
dynamic linker configuration using `ldconfig`
(this needs sys-admin authorization).


MPI runtime environment
-----------------------
1. Make the hosts/cluster configuration of your MPI environment.
2. Launch the MPI runtime environment, typically by executing
   `lamboot` (LAM) / `mpdboot` (MPICH2) / `mpd` (Intel).

How to run
----------
For example,

> mpiexec -n <# of processes> <your application>

The `mpirun` command may be more convenient for LAM users.


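A concrete invocation might look like this (`exMPI01` and `run.mac` are illustrative names for the application binary and a macro file):

```sh
# launch 4 MPI processes, each running the application in batch mode
mpiexec -n 4 ./exMPI01 run.mac
```
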
MPI G4UI commands
-----------------
G4UI commands handling the G4MPI interface are placed in /mpi/.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Command directory path : /mpi/

Guidance :
MPI control commands

 Sub-directories :
 Commands :
   verbose *        Set verbose level.
   status *         Show mpi status.
   execute *        Execute a macro file. (=/control/execute)
   beamOn *         Start a parallel run w/ thread.
   .beamOn *        Start a parallel run w/o thread.
   masterWeight *   Set weight for master node.
   showSeeds *      Show seeds of MPI nodes.
   setMasterSeed *  Set a master seed for the seed generator.
   setSeed *        Set a seed for a specified node.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

### Notes:
The "/run/beamOn" and "/mpi/beamOn" commands invoke beam-on in the background,
so you can enter UI commands even while events are being processed. Note that
drawing tracks in OpenGL with these commands causes a crash; please use the
/mpi/.beamOn command instead.

The original "/control/execute" and "/run/beamOn" commands are overridden by
the "/mpi/execute" and "/mpi/beamOn" commands respectively, which are
customized for the MPI interface.

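A short macro exercising the commands above might look like this sketch (the event count is arbitrary):

```
# raise the verbosity of the MPI interface
/mpi/verbose 1
# print the MPI status
/mpi/status
# show the random seeds assigned to the MPI nodes
/mpi/showSeeds
# start a parallel run without a background thread
/mpi/.beamOn 1000
```
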

Examples
========
There are a couple of examples of Geant4 MPI applications.

For using the ROOT libraries (exMPI02), set:

- *ROOTSYS* : installation path of the ROOT package

\ref exMPI01
------------

A simple application.

**Configuration:**

- Geometry : chamber / calorimeter
- Primary : particle gun (200 MeV electron as default)
- Physics List : FTFP_BERT

**Features:**
- Particles are transported in a geometry without any scoring.
- Learn how to parallelize your G4 session.

\ref exMPI02 (ROOT application)
-------------------------------

An example of dosimetry in a water phantom.
Note: due to the limited multi-threading (MT) support in ROOT, MT is disabled
in this example, but the code has already been migrated and is ready for MT
once ROOT supports it.
For an example of MT+MPI, take a look at exMPI03.

**Configuration:**
- Geometry : water phantom
- Primary : broad beam (200 MeV proton)
- Physics List : FTFP_BERT
- Analysis : ROOT histogramming

**Features:**
- Score dose distribution in a water phantom.
- Learn how to parallelize your applications.
- Create a ROOT file containing histograms/trees in each node.

\ref exMPI03 (merging of histograms via MPI)
--------------------------------------------
This example is the same as exMPI02 with the following
differences:
- It uses Geant4 analysis instead of ROOT for histogramming.
- It shows how to merge histograms via MPI using g4tools,
  so that the entire statistics is accumulated in a single output file.
- It also shows how to merge G4Run objects from different ranks and
  how to merge scorers.
- MT is enabled.
- Root output files from an application run with `mpiexec -n 3`:
  - dose-merged.root - merged histograms
  - dose-rank0,1,2 - histogram data collected on ranks 0, 1 and 2 before merging

\ref exMPI04 (merging of ntuples via MPI)
-----------------------------------------
This example is the same as exMPI03 with an added ntuple.
- It uses Geant4 analysis for histogramming and ntuples.
- It shows how to merge ntuples via MPI using g4tools in sequential mode,
  so that the entire statistics is accumulated in a single output file.
- If MT is enabled, the ntuples are merged from threads into one
  file per rank.
- Combined MT + MPI merging is not yet supported.
- Merging ntuples is currently supported only with the Root output format.

- Root output files from an application run with `mpiexec -n 4`:
  - Sequential application:
    (3 working ranks, the last rank dedicated to collecting ntuple data)
    - dose-merged.root - merged histograms from ranks 0, 1 and 2
    - dose-rank1,2.root - histogram data collected on ranks 1 and 2 before merging
    - dose-rank3 - ntuples merged from ranks 0, 1 and 2
  - MT application:
    (4 working ranks)
    - dose-merged.root - merged histograms from ranks 0, 1, 2 and 3
    - dose-rank0,1,2,3.root - histogram data collected on rank N before merging;
      ntuples merged on each rank N from that rank's threads
0258