Geant4 MPI Interface
====================

Authors:
Koichi Murakami (KEK) / Koichi.Murakami@kek.jp
Andrea Dotti (SLAC) / adotti@slac.stanford.edu


About the interface
===================
G4MPI is a native interface to MPI libraries. The directory contains
a Geant4 UI library and a couple of parallelized examples.
Using this interface, user applications can be parallelized with
different MPI-compliant libraries, such as Open MPI, MPICH, and so on.

System Requirements:
--------------------

### MPI Library

The MPI interface can work with MPI-compliant libraries,
such as Open MPI, MPICH, Intel MPI, etc.

For example, information about Open MPI can be obtained from
http://www.open-mpi.org/

MPI support:
------------
G4mpi has been tested with the following MPI flavors:
* Open MPI 1.8.1
* MPICH 3.2
* Intel MPI 5.0.1

### CMake

CMake is used to build the G4MPI library, which works together with the Geant4
build system.

### Optional (for exMPI02)

ROOT for histogramming/analysis

How to build G4MPI
==================
To build the G4MPI library, use CMake against a Geant4 installation that was
itself built and installed with CMake.

Follow these commands:

> mkdir build
> cd build
> cmake -DGeant4_DIR=<your Geant4 install path>/lib[64]/Geant4-V.m.n \
    -DCMAKE_INSTALL_PREFIX=<where-G4mpi-lib-will-be-installed> \
    <g4source>/examples/extended/parallel/MPI/source
> make
> make install

The cmake step will try to guess where MPI is installed; the MPI executables
should be in PATH. You can set the CXX and CC environment variables to your
specific MPI compiler wrappers if needed.

The library and header files will be installed in the installation directory
specified by CMAKE_INSTALL_PREFIX. A CMake configuration file will also be
installed (see the examples for how to compile an application using G4mpi).

How to use
==========

How to make parallel applications
---------------------------------

An example of a main program:

```c++
#include "G4MPImanager.hh"
#include "G4MPIsession.hh"
#include "G4RunManager.hh"

int main(int argc, char** argv)
{
  // At first, G4MPImanager/G4MPIsession should be created.
  G4MPImanager* g4MPI = new G4MPImanager(argc, argv);

  // MPI session (G4MPIsession) instead of G4UIterminal
  G4MPIsession* session = g4MPI->GetMPIsession();

  // user application setting
  G4RunManager* runManager = new G4RunManager();

  // ....

  // After the user application setting, just start an MPI session.
  // G4MPIsession handles both interactive and batch modes.
  session->SessionStart();

  // Finally, terminate the program.
  delete g4MPI;
  delete runManager;
}
```
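
For multi-threaded builds (as used later in exMPI03), the same structure can be
kept while replacing G4RunManager with G4MTRunManager. The following is only a
minimal sketch under that assumption; depending on the G4mpi version, the
number of threads may also need to be passed to G4MPImanager:

```c++
#include "G4MPImanager.hh"
#include "G4MPIsession.hh"
#include "G4MTRunManager.hh"

int main(int argc, char** argv)
{
  G4MPImanager* g4MPI = new G4MPImanager(argc, argv);
  G4MPIsession* session = g4MPI->GetMPIsession();

  // Multi-threaded run manager: each MPI rank runs its own worker threads.
  G4MTRunManager* runManager = new G4MTRunManager();
  runManager->SetNumberOfThreads(4);

  // ... user initialization (detector construction, physics list, actions) ...

  session->SessionStart();

  delete g4MPI;
  delete runManager;
}
```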

How to compile
--------------

Using CMake, assuming the G4mpi library is installed in path _g4mpi-path_ and
Geant4 is installed in _g4-path_:

> mkdir build
> cd build
> cmake -DGeant4_DIR=<g4-path>/lib[64]/Geant4-V.m.n \
    -DG4mpi_DIR=<g4mpi-path>/lib[64]/G4mpi-V.m.n \
    <source>
> make

Check the provided examples under examples/extended/parallel/MPI/examples for
an example of the CMakeLists.txt file to be used.

### Notes about session shell

LAM/MPI users can use "G4tcsh" as an interactive session shell.
Other users (Open MPI/MPICH) should use G4csh (the default).

In the case of Open MPI, *LD_LIBRARY_PATH* should include the Open MPI runtime
libraries at run time. Alternatively, you can add this path to the dynamic
linker configuration using `ldconfig` (this requires system administrator
privileges).


MPI runtime environment
-----------------------
1. Set up the hosts/cluster configuration of your MPI environment.
2. Launch the MPI runtime environment, typically by executing
   `lamboot` (LAM) / `mpdboot` (MPICH2) / `mpd` (Intel).

How to run
----------
For example,

> mpiexec -n <# of processes> <your application>

Alternatively, the `mpirun` command is more convenient for LAM users.


MPI G4UI commands
-----------------
G4UI commands for controlling the G4MPI interface are placed in the /mpi/ directory.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Command directory path : /mpi/

Guidance :
MPI control commands

 Sub-directories :
 Commands :
   verbose *        Set verbose level.
   status *         Show mpi status.
   execute *        Execute a macro file. (=/control/execute)
   beamOn *         Start a parallel run w/ thread.
   .beamOn *        Start a parallel run w/o thread.
   masterWeight *   Set weight for master node.
   showSeeds *      Show seeds of MPI nodes.
   setMasterSeed *  Set a master seed for the seed generator.
   setSeed *        Set a seed for a specified node.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

### Notes:
The "/run/beamOn" and "/mpi/beamOn" commands invoke beam-on in the background,
so you can input UI commands even while events are being processed. Note that
drawing tracks in OpenGL with these commands causes a crash; please use the
/mpi/.beamOn command instead.

The original "/control/execute" and "/run/beamOn" commands are overridden by
the "/mpi/execute" and "/mpi/beamOn" commands respectively, which are
customized for the MPI interface.
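
These commands can also be issued from a macro file or directly from C++
through the standard G4UImanager. The following is a minimal sketch, not taken
from the examples, showing a few of the /mpi/ commands applied
programmatically (the command values are only illustrative):

```c++
#include "G4UImanager.hh"

// Sketch: drive the G4MPI interface through UI commands from C++.
void ConfigureAndRunMPI()
{
  G4UImanager* ui = G4UImanager::GetUIpointer();
  ui->ApplyCommand("/mpi/verbose 1");        // verbosity of the MPI interface
  ui->ApplyCommand("/mpi/showSeeds");        // print the seeds used on each rank
  ui->ApplyCommand("/mpi/masterWeight 0.5"); // reduce the event share of the master node
  ui->ApplyCommand("/mpi/beamOn 1000");      // parallel run with a background thread
}
```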

Examples
========
There are a couple of examples of Geant4 MPI applications.

For using the ROOT libraries (exMPI02):

- *ROOTSYS* : path to the ROOT installation

exMPI01
-------
A simple application.

**Configuration:**

- Geometry : chamber / calorimeter
- Primary : particle gun (200 MeV electron as default)
- Physics List : FTFP_BERT

**Features:**
- Particles are transported in a geometry without any scoring.
- Learn how to parallelize your G4 session.

exMPI02 (ROOT application)
--------------------------
An example of dosimetry in a water phantom.
Note: due to limited MT support in ROOT, MT is disabled in this example,
but the code has already been migrated to MT and is ready for when ROOT
supports it. For an example of MT+MPI, take a look at exMPI03.

**Configuration:**
- Geometry : water phantom
- Primary : broad beam (200 MeV proton)
- Physics List : FTFP_BERT
- Analysis : ROOT histogramming

**Features:**
- Score the dose distribution in a water phantom.
- Learn how to parallelize your applications.
- Create a ROOT file containing histograms/trees in each node (see the sketch below).
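
The per-node output is typically obtained by embedding the MPI rank in the
output file name. Below is a minimal sketch of that idea using raw MPI calls
rather than the actual exMPI02 code; the helper name is hypothetical:

```c++
#include <mpi.h>
#include <string>

// Hypothetical helper: build a per-rank output file name,
// e.g. "dose-rank2.root" on rank 2.
std::string PerRankFileName(const std::string& stem)
{
  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  return stem + "-rank" + std::to_string(rank) + ".root";
}
```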

exMPI03 (merging of histograms via MPI)
---------------------------------------
This example is the same as exMPI02 with the following differences:
- It uses Geant4 analysis instead of ROOT for histogramming.
- It shows how to merge histograms via MPI using g4tools, so that the entire
  statistics is accumulated in a single output file (see the sketch after this list).
- It also shows how to merge G4Run objects from different ranks and how to
  merge scorers.
- MT is enabled.
- ROOT output files from an application run with `mpiexec -n 3`:
  - dose-merged.root : merged histograms
  - dose-rank0,1,2 : histogram data collected on ranks 0, 1 and 2 before merging
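
The actual merging in exMPI03 is done with g4tools through the G4mpi library,
but conceptually it amounts to an element-wise sum of bin contents across
ranks. A minimal sketch of that underlying idea, using raw MPI only and not
the code used by the example:

```c++
#include <mpi.h>
#include <vector>

// Sum the bin contents of a histogram over all ranks; after the call,
// rootRank holds the merged (full-statistics) histogram.
void MergeBins(std::vector<double>& bins, int rootRank = 0)
{
  std::vector<double> merged(bins.size(), 0.0);
  MPI_Reduce(bins.data(), merged.data(), static_cast<int>(bins.size()),
             MPI_DOUBLE, MPI_SUM, rootRank, MPI_COMM_WORLD);

  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  if (rank == rootRank) bins = merged;
}
```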

exMPI04 (merging of ntuples via MPI)
------------------------------------
This example is the same as exMPI03 with an added ntuple.
- It uses Geant4 analysis for histogramming and ntuples.
- It shows how to merge ntuples via MPI using g4tools in sequential mode,
  so that the entire statistics is accumulated in a single output file
  (a conceptual sketch follows at the end of this section).
- If MT is enabled, the ntuples are merged from threads into one file per rank.
- Combined MT + MPI merging is not yet supported.
- Merging ntuples is currently supported only with the ROOT output format.

- ROOT output files from an application run with `mpiexec -n 4`:
  - Sequential application
    (3 working ranks; the last rank is dedicated to collecting ntuple data):
    - dose-merged.root : merged histograms from ranks 0, 1 and 2
    - dose-rank1,2.root : histogram data collected on rank N before merging
    - dose-rank3 : ntuples merged from ranks 0, 1 and 2
  - MT application
    (4 working ranks):
    - dose-merged.root : merged histograms from ranks 0, 1, 2 and 3
    - dose-rank0,1,2,3.root : histogram data collected on rank N before merging;
      ntuples merged on each rank N from that rank's threads
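
The "dedicated collector rank" layout described above can be pictured with the
following sketch, which uses raw MPI instead of the g4tools mechanism that
exMPI04 actually uses: working ranks send their ntuple rows (modelled here as
plain doubles) to the last rank, which concatenates them.

```c++
#include <mpi.h>
#include <vector>

// Sketch only: ship local ntuple rows to a collector rank (the last rank).
void ExchangeRows(const std::vector<double>& localRows, std::vector<double>& collected)
{
  int rank = 0, size = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  const int collector = size - 1;  // last rank collects the data

  if (rank != collector) {
    MPI_Send(localRows.data(), static_cast<int>(localRows.size()),
             MPI_DOUBLE, collector, 0, MPI_COMM_WORLD);
  } else {
    for (int src = 0; src < collector; ++src) {
      MPI_Status status;
      MPI_Probe(src, 0, MPI_COMM_WORLD, &status);
      int count = 0;
      MPI_Get_count(&status, MPI_DOUBLE, &count);
      std::vector<double> buffer(count);
      MPI_Recv(buffer.data(), count, MPI_DOUBLE, src, 0, MPI_COMM_WORLD,
               MPI_STATUS_IGNORE);
      collected.insert(collected.end(), buffer.begin(), buffer.end());
    }
  }
}
```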
0253