Topic: Getting started with mpich version 3 (2640 views)
25 January 2017 at 12:33pm Last edited: 25 January 2017 12:53pm
I feel like I'm missing a few simple steps to get going with MPI on Claritas 6.5. I see discussion on the forums from some previous versions of Claritas/MPI, but it all looks to be out of date now. The Claritas documentation gives a variety of instructions on how to install it, but (as the documentation notes) MPICH version 3.1 is already installed on CentOS 6. The documentation then points to the mpich.org website, which also details how to download and install MPICH, and how to program with MPI. Both the Claritas and MPI documentation refer to examples in the distribution, but there are no examples in the install that came with our CentOS 6... It seems to me that what I need are:
The path to mpi executables added to my PATH environment.
A way to tell Claritas/mpi what remote machines to use when executing.
Is there more to it than that? Is there a description of how to configure which machines to use? Can the user select which machines to use? Since I'm not entering mpiexec on the command line myself, this part is unclear.
Addendum: I see that when using MPISTART/MPIEND in a flow, it asks at runtime for a configuration file with a list of processors, and I got that to run. But for IMAGE_K3T I'm not clear how to configure it.
25 January 2017 at 2:39pm
Once MPICH version 3 is installed on your Linux machine, the only other step should be to set up your mpd.hosts file. The format of the mpd.hosts file is one line per node, with each line containing the node name and the number of cores for that node, separated by a colon.
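For example, a minimal mpd.hosts for two 8-core nodes could be created like this (the node names "node1" and "node2" are placeholders; substitute your own machine names):

```shell
# Create an example mpd.hosts: one "nodename:cores" line per node.
# node1/node2 are hypothetical names; use your own hosts here.
cat > mpd.hosts <<'EOF'
node1:8
node2:8
EOF
cat mpd.hosts
```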
Generally the mpd.hosts file is saved in the current working directory; if not, you will need to specify the path to the mpd.hosts file explicitly, e.g.
mpiexec -f /home/claritas/mpd.hosts
(rather than simply mpiexec -f mpd.hosts, which works when the file is saved in the current working directory)
MPI-capable modules such as IMAGE_K3T will have an MPI_COMMAND parameter where the 'mpiexec -f mpd.hosts' style command can be entered. This is followed by a TOTAL_PROCS parameter where you enter the number of cores you want to run the job with.
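Since TOTAL_PROCS is just the total core count across the nodes listed in mpd.hosts, one way to work it out is to sum the second colon-separated field of the file. This is only a sketch; the file path and node entries below are illustrative:

```shell
# Sum the core counts in an mpd.hosts-style file to get a TOTAL_PROCS value.
# The path /tmp/mpd.hosts and its entries are hypothetical examples.
printf 'node1:8\nnode2:4\n' > /tmp/mpd.hosts
awk -F: '{ total += $2 } END { print total }' /tmp/mpd.hosts   # prints 12
```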
PLEASE NOTE that on RHEL/CentOS 6 operating systems the following command may be needed to enable the mpiexec command:
module load mpich-x86_64
If you wish, you can add this command to your .cshrc file so that it is run automatically at login.