
Sample MPI GNU Batch Job on the HPC1 Intel Xeon E5-2670 Cores: mpignu.pbs


#PBS -m e
#PBS -M johndoe@bnl.gov
#PBS -S /bin/bash
#PBS -l nodes=1:ppn=16
#PBS -l walltime=00:02:00
#PBS -j oe
#PBS -N mpitest
#PBS -o $PBS_JOBID.out


## ASSUMPTIONS:
## 1. The user making use of this script knows how to read and
## edit shell scripts and will insert their own values where needed,
## in particular the "source_file" variable used below.
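##
## One way (among several) to supply "source_file" is with qsub's -v
## option, which passes environment variables into the job; the source
## file name below is only an example:
##
##     qsub -v source_file=hello_mpi.c /software/samples/qsub-openmpi-bash.pbs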


## Above are your PBS options. These options could also have
## been set on the command line as arguments to the qsub command.
## To run this job with the options listed above, just specify:
## qsub /software/samples/qsub-openmpi-bash.pbs
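##
## As an equivalent illustration, the same options could instead be
## given directly on the command line (the -o directive is omitted
## here because $PBS_JOBID is not defined in your submission shell):
##
##     qsub -m e -M johndoe@bnl.gov -S /bin/bash -l nodes=1:ppn=16 \
##          -l walltime=00:02:00 -j oe -N mpitest \
##          /software/samples/qsub-openmpi-bash.pbs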
##
## PBS -m e
## Send an email when the job terminates. By default, this will go to
## your mail spool on the cluster head node, unless you specify an
## alternate location with the -M option, for example:
## PBS -M johndoe@bnl.gov
## PBS -S /bin/bash
## use /bin/bash for the execution shell environment
## PBS -l nodes=1:ppn=16
## Select one node with 16 processors (cores) per node, as this job does.
## Alternatively, "-l procs=16" selects 16 processors on any nodes with
## available cores, and "-l nodes=2:ppn=8" selects two nodes with 8
## processors per node if you need your processing spread across
## dedicated nodes. The three forms are compared below.
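## All three forms reserve 16 cores in total; they differ only in how
## the cores are placed on nodes:
##     #PBS -l nodes=1:ppn=16    16 cores on a single node
##     #PBS -l nodes=2:ppn=8     8 cores on each of two nodes
##     #PBS -l procs=16          16 cores wherever they are free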
## PBS -l walltime=00:02:00
## The maximum run time before the scheduler terminates your job.
## The default is 01:00:00 (1 hour, 0 minutes, 0 seconds); this job
## is set to 00:02:00 (2 minutes).
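## Walltime uses HH:MM:SS format, so, for example, a day-and-a-half
## limit would be requested as "-l walltime=36:00:00".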
## PBS -j oe
## This command merges Standard Output and Standard Error, and
## sends the combined stream to Standard Out.
## PBS -N mpitest
## use "mpitest" as the arbitrary job name
## PBS -o $PBS_JOBID.out
## use the file name "$PBS_JOBID.out" as the destination for standard
## output. Using the variable $PBS_JOBID will ensure a uniquely named
## output file.



## Load an MPI implementation. Available implementations are various
## versions of mpich2, mvapich2, and openmpi. For a full list of
## available implementations, run "module avail -l mpich2 mvapich2 openmpi"
# Load the default openmpi
module load openmpi
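# To see what is installed, or to load a specific version rather than
# the default, something like the following works; the exact version
# string is site-dependent and shown only as a placeholder:
#     module avail openmpi
#     module load openmpi/<version>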

# When using the GNU compiler, no compiler module needs to be loaded, so none is loaded here.

## We requested nodes and processors (cores) per node. ${PBS_NODEFILE} lists
## these nodes. Count the number of processors selected.
NODECOUNT=`sort -u ${PBS_NODEFILE} | wc -l`
PROCCOUNT=`sort ${PBS_NODEFILE} | wc -w`
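## As a concrete illustration, with "-l nodes=1:ppn=16" the file named
## by ${PBS_NODEFILE} holds 16 lines, one per reserved core, all naming
## the same host (the hostname below is hypothetical):
##     node001
##     node001
##     ...
## so NODECOUNT evaluates to 1 and PROCCOUNT to 16.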

## Set and change to WORK directory based on Job ID, prevents duplication
## of files. You may also use PBS_O_WORKDIR, which is set by PBS to the
## directory where you executed qsub.
WORK=${PBS_O_WORKDIR}/${PBS_JOBID}
[ -d ${WORK} ] || mkdir -p ${WORK}
cp ${PBS_O_WORKDIR}/$source_file ${WORK}
cd ${WORK}


#######
####### YOUR SCRIPTING BELOW HERE
#######

## We can run any program here as if it were a regular shell script.
## First, prepare the environment for this run:
# Grab date for timestamp
DATE=`date +%F-%k:%M`

echo "Number of nodes is: ${NODECOUNT}"
echo "Number of processors is: ${PROCCOUNT}"
echo "The hostnames of the nodes: "
cat ${PBS_NODEFILE}
echo ""


## Compile program, provides test of mpicc
source_file_nosuffix="${source_file%.*}"
echo "Source filename stripped of its suffix is $source_file_nosuffix"
echo " "
mpicc -O2 $source_file -o $source_file_nosuffix


## Assume that we'll be running your binary with all the processors (cores) you
## reserved
mpiexec -n ${PROCCOUNT} ${WORK}/$source_file_nosuffix


#######
####### YOUR SCRIPTING ABOVE HERE
#######

# End-of-job cleanup goes here, if necessary
