Sample MPI Batch Jobs that Run on the Intel E5-2670 Processors and Use the GNU, Intel, and Portland Compilers
Sample GNU Compiler MPI Batch Job
Sample Intel Compiler MPI Batch Job
Sample PGI Compiler MPI Batch Job
To Run Sample GNU, Intel or PGI Compiler MPI Batch Job
Notes
Back to HPC1 Documentation
To Run the Sample GNU, Intel, or PGI Compiler MPI Batch Job (mpignu.pbs, mpiintel.pbs, or mpipgi.pbs)
cp /software/samples/mpi-batch/* ~johndoe/gpu-batch-test
Modify the email address in mpignu.pbs (or mpiintel.pbs, mpipgi.pbs for Intel, PGI) to be your own
qsub -v source_file=mpi_hello.c mpignu.pbs (or mpiintel.pbs or mpipgi.pbs)
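For reference, the submitted script might look roughly like the sketch below. This is a guess at the shape of mpignu.pbs, not the actual site script: the directives, compiler wrapper names, and echo text are assumptions based on the sample output further down.

```shell
#!/bin/bash
# Hypothetical sketch of a GNU-compiler MPI batch script; the real
# mpignu.pbs in /software/samples/mpi-batch may differ.
#PBS -N mpi_hello                      # job name
#PBS -l nodes=1:ppn=16                 # one E5-2670 node, all 16 cores
#PBS -j oe                             # merge stdout and stderr
#PBS -M your.name@example.com          # replace with your own email
#PBS -m abe                            # mail on abort/begin/end

cd "$PBS_O_WORKDIR"

# source_file is passed in via: qsub -v source_file=mpi_hello.c mpignu.pbs
exe=${source_file%.c}                  # strip the .c suffix, e.g. mpi_hello
echo "Source filename stripped of its suffix is $exe"

mpicc -o "$exe" "$source_file"         # compile with the MPI compiler wrapper
mpirun -np 16 "./$exe"                 # launch 16 ranks on the node
```

The Intel and PGI variants would differ mainly in which compiler module and MPI wrapper they load.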
Your batch output file should appear in the directory from which you submitted the job, with a filename like
3111.hpc1.csc.bnl.local.out. Sample output appears below. For this example, a subdirectory
3111.hpc1.csc.bnl.gov is also created, containing the files mpi_hello and mpi_hello.c.
3111.hpc1.csc.bnl.local.out :
Number of nodes is: 1
Number of processors is: 16
The hostnames of the nodes:
node15
node15
node15
node15
node15
node15
node15
node15
node15
node15
node15
node15
node15
node15
node15
node15
Source filename stripped of its suffix is mpi_hello
Hello, world. My rank is 7 and I am node15.
Hello, world. My rank is 8 and I am node15.
Hello, world. My rank is 9 and I am node15.
Hello, world. My rank is 10 and I am node15.
Hello, world. My rank is 11 and I am node15.
Hello, world. My rank is 13 and I am node15.
Hello, world. My rank is 14 and I am node15.
Hello, world. My rank is 15 and I am node15.
Hello, world. My rank is 1 and I am node15.
Hello, world. My rank is 2 and I am node15.
Hello, world. My rank is 3 and I am node15.
Hello, world. My rank is 4 and I am node15.
Hello, world. My rank is 5 and I am node15.
Hello, world. My rank is 6 and I am node15.
Hello, world. My rank is 12 and I am node15.
Hello, world. My rank is 0 and I am node15.
The host (node15 in the sample output above) has two Intel Xeon E5-2670 processors with 8 cores apiece,
16 cores in total; the batch job therefore requests one node and 16 cores.
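The sample output suggests source code along the following lines. This is a hedged reconstruction of what mpi_hello.c might contain, inferred from the printed lines above, not the actual file shipped in /software/samples/mpi-batch:

```c
/* Hypothetical reconstruction of mpi_hello.c, based on the sample
 * batch output above; the actual sample source may differ. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank;
    int len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* 0..15 for 16 ranks */
    MPI_Get_processor_name(host, &len);     /* e.g. "node15" */

    /* Each rank prints one line; ordering across ranks is arbitrary,
     * which matches the shuffled rank order in the sample output. */
    printf("Hello, world. My rank is %d and I am %s.\n", rank, host);

    MPI_Finalize();
    return 0;
}
```

Compiling and running it under an MPI environment (e.g. mpicc followed by mpirun -np 16) would produce one "Hello, world" line per rank, as in the sample output.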
