
Sample LoadLeveler Batch Job Control File

Back to Batch Job Submission

For Blue Gene technical or administrative assistance, please open a ticket with the ITD Helpdesk by email or by calling 631-344-5522.

Sample LoadLeveler batch job control file

Note: Do NOT specify a NY Blue predefined partition of compute nodes on which your job will be run. Instead, specify the number of nodes desired, as per below. See How to Specify Nodes for more details.

# @ job_type = bluegene
# @ class = normal
# The executable that will run your parallel application should always be specified as per the next line.
# @ executable = /usr/bin/mpirun
# Run job on 512 compute nodes. LoadLeveler will dynamically allocate the partition.
# @ bg_size = 512
# initialdir will be the initial directory. LoadLeveler changes to this
# directory before running the job. If you don't specify this, it defaults to your current working directory
# at the time the job was submitted.
# File names mentioned in the batch script that do not begin with a slash ( / ) are relative to the initial
# directory.
# The initial directory must exist on both the fen and the compute nodes.
# @ initialdir = /home/johndoe/app1/runs
# If for example your jobid is 82, your output and error will be written in
# directory /home/johndoe/app1/runs, to files 82.out and 82.err respectively.
# @ input = /dev/null
# @ output = $(jobid).out
# @ error = $(jobid).err
# Maximum wall clock time for job will be 5 minutes.
# @ wall_clock_limit = 00:05:00
# Send email when the job has completed.
# @ notification = complete
# @ notify_user =
# Specify the executable for your parallel application, and the arguments to that executable.
# Note that the arguments to specify will vary depending upon the executable.
# Specify any special environment variables that your application needs in the environment
# presented to the job on the compute nodes. These will vary depending upon the
# application - some applications will not require any - so delete or modify the
# -env specification below.
# @ arguments = -exe /home/johndoe/app1/exe/app1.exe \
-cwd /home/johndoe/app1/runs \
-mode VN \
-env "LOGNAME=/path/to/my.logfile DATAPATH=/path/to/my.datafile" \
-args "-i input.inp -o output.out"
# The next statement marks the end of the job step. This example is a one-job-step batch job,
# so this is equivalent to saying that the next statement marks the end of the batch job.
# @ queue
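
Putting the pieces together, a session like the following would submit the sample above. This is a sketch: the file name app1.job and the jobid 82 are illustrative, and the commands assume you are logged in to the fen with LoadLeveler available.

# Save the control file as, e.g., app1.job, then submit it from the fen:
llsubmit app1.job
# LoadLeveler replies with the assigned job identifier, for example:
#   llsubmit: The job "fen.82" has been submitted.
# Per the output and error statements above, stdout and stderr for jobid 82
# will then appear in /home/johndoe/app1/runs/82.out and /home/johndoe/app1/runs/82.err.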

Notes for Batch Job Control File Sample
(including how to monitor jobs)

  • Notice that you must specify:
    # @ class = normal
  • Notice that you should specify the full path to your executable, -exe /home/johndoe/app1/exe/app1.exe in this sample.
  • Notes for # @ arguments above:

    Specify -exe to provide the full path to the executable (in this example app1.exe) for your parallel application that contains MPI calls, and specify -args to provide any arguments to that executable.
    In this example, app1.exe has two arguments, the input file input.inp and the output file output.out.

    Use -cwd to specify the current working directory of the job as seen by the compute nodes.

    To specify any special environment variables that your application needs in the environment presented to the job on the compute nodes, use -env to set their values in your batch job. For example:

    -env "LOGNAME=/path/to/my.logfile DATAPATH=/path/to/my.datafile"

    NOTE: Of course, replace the names of the environment variables and their values in this example with whatever they are for your application! Note also that -mode in this example specifies execution process mode VN (virtual node mode).
  • # @ notification = complete
    # @ notify_user =
    in the batch script above causes email to be sent to you upon job completion. If these statements are not specified, no notification is sent. Other notification options include error, start, never, and always, described in the IBM LoadLeveler documentation.
  • On NY Blue, mpirun must be invoked in the batch job control file using the # @ executable statement, as shown in the sample above.
  • To submit the batch job control file: llsubmit filename (where filename is the name of your control file)
  • Help for llq: llq -H
  • To list all LoadLeveler jobs: llq -b (then note the jobid)
  • Regarding the output from llq -b:
    The meaning of the code in the LL column can be found in the table on the SUR Blue Gene page.

    See /opt/ibmll/LoadL/full/include/llapi.h on the fen for the meaning of the PT and BG columns.

    We believe typedef enum bg_partition_state_t in that file specifies the partition states that are displayed in the PT column, for example CF in the PT column means that the partition is configuring.

    We also believe that typedef enum bg_job_state_t in that file (llapi.h) specifies the Blue Gene job states that are listed in the BG column.
    If you know otherwise please send mail to
  • To delete a LoadLeveler job:
    llcancel jobid where jobid was determined by invoking llq -b.
  • To display info as to why a jobid remains in the Hold, NotQueued, Idle or Deferred state: llq -ls jobid
  • There is a 24-hour wall clock limit for all LoadLeveler jobs.
  • See the IBM LoadLeveler Manual for other LoadLeveler commands.
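
A typical monitoring session using the commands above might look like this. The jobid 82 is illustrative, and the commands assume LoadLeveler is available on the fen.

llq -b         # list all LoadLeveler jobs; suppose this shows jobid 82
llq -ls 82     # explain why job 82 remains in the Hold, NotQueued, Idle or Deferred state
llcancel 82    # delete job 82 if it is no longer wanted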
