For performance and stability reasons, we recommend using
xqsub commands for submitting batch jobs.
Moreover, when submitting multiple jobs, add a sleep delay between jobs or use job arrays for submitting identical jobs.
Be aware that there is a maximum size limit of 64 KB for scripts submitted to the queue system. Scripts submitted to the queuing system should mainly consist of parameters for the queuing system and executions of the "real" job.
The batch job queuing system on Computerome is based on TORQUE Resource Manager (generally q... type commands) and Moab Workload Manager (generally m... type commands). Additionally, we have xqsub and xmsub, Perl wrapper scripts for qsub and msub respectively, which build a job submission script for you. Extensive documentation is available here:
Submitting batch jobs
- Fat nodes with 40 CPU cores and 1.5 TB of memory
- Thin nodes with 40 CPU cores and 192 GB of memory
- GPU nodes with 40 CPU cores, 192 GB of memory and one NVIDIA Tesla V100 GPU card
You can submit jobs via the command
msub. We strongly encourage you to take advantage of modules in your pipelines, as it gives you better control of your environment. In order to submit jobs that will run on one node only, you only have to specify the following resources:
- How long you expect the job to run ⇒ '-l walltime=<time>'
- How much memory your job requires ⇒ '-l mem=xxxgb'
- How many CPUs and GPUs ⇒ '-l nodes=1:ppn=<number of CPUs>:gpus=<number of GPUs>'; CPUs can range from 1 to 40, GPUs from 0 to 1 (':gpus=...' can be left out if not used).
- The <group_NAME> for your current project ⇒ '-W group_list=<group_NAME> -A <group_NAME>' .
To run a job with 23 CPUs and 100 GB of memory, lasting an hour, you can use the command:
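A sketch of such a command, assuming your project group is <group_NAME> and your job script is called myscript.sh (a placeholder name):

```shell
msub -W group_list=<group_NAME> -A <group_NAME> -l nodes=1:ppn=23,mem=100gb,walltime=01:00:00 myscript.sh
```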
Same job as above, also using GPU:
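A sketch, adding ':gpus=1' to the node request (myscript.sh is a placeholder name):

```shell
msub -W group_list=<group_NAME> -A <group_NAME> -l nodes=1:ppn=23:gpus=1,mem=100gb,walltime=01:00:00 myscript.sh
```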
Example using msub:
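A minimal sketch; the script name myjob.sh and the resource values are placeholders:

```shell
msub -l nodes=1:ppn=4,mem=16gb,walltime=24:00:00 -W group_list=<group_NAME> -A <group_NAME> myjob.sh
```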
The parameters nodes, ppn and mem are just examples and should be changed to suit your specific job.
When you want to test something in the batch system, it is strongly recommended to run in an interactive job, by using the following:
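One way to request an interactive session is qsub's -I flag; the resource values below are only an illustration and should be adjusted to your needs:

```shell
qsub -W group_list=<group_NAME> -A <group_NAME> -l nodes=1:ppn=40,mem=120gb,walltime=1:00:00 -I
```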
This will give you access to a single compute node, where you can perform your testing without affecting other users.
Computerome is now offering an even more straightforward way to work interactively, the way you do on your own computer or a local Linux server, instead of having to submit everything through the queuing system. Just log in and type iqsub, and the system will ask you 3 simple questions, after which you'll be redirected to a full, private node.
Script file example
A script for a file to be submitted with qsub might begin with lines like:
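A sketch of such a header; the job name, group and resource values are placeholders:

```shell
#!/bin/sh
### Account information for your project
#PBS -W group_list=<group_NAME> -A <group_NAME>
### Job name
#PBS -N myjob
### Resource requests: 1 node, 4 cores, 16 GB memory, 12 hours walltime
#PBS -l nodes=1:ppn=4
#PBS -l mem=16gb
#PBS -l walltime=12:00:00
### Torque starts the job in $HOME; change to the directory the job was submitted from
cd $PBS_O_WORKDIR
### The "real" job follows here
```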
$PBS... variables are set for the batch job by Torque.
However, we recommend that you do it anyway, since it improves the portability of the jobscript and serves as a reminder of the requirements.
We also strongly advise against the use of the "-V" option, as it makes it hard to debug possible errors during runtime.
Specifying a different project account
If you run jobs under different projects, for instance pr_12345 and pr_54321, you must make sure that each project gets accounted for separately in the system's accounting statistics. You specify the relevant project account (for example, pr_54321) for each individual job by using these flags to the qsub command:
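For example (myscript.sh is a placeholder name):

```shell
qsub -W group_list=pr_54321 -A pr_54321 myscript.sh
```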
or in the job script file, add a line like this near the top:
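For example:

```shell
#PBS -W group_list=pr_54321
#PBS -A pr_54321
```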
Please use project names only by agreement with your project owner.
Estimating job resource requirements
The first time you run your script, you may not have a clear picture of its resource requirements. To get a rough estimate, you could submit a job to a full node with a large walltime:
Regular compute node (aka. 'thinnode'):
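For example (a sketch; mem=180gb leaves headroom below the 192 GB of physical RAM, and myscript.sh is a placeholder name):

```shell
qsub -W group_list=<group_NAME> -A <group_NAME> -l nodes=1:ppn=40,mem=180gb,walltime=99:00:00 myscript.sh
```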
To see the actual resource usage, see the output from the qstat command. You can add this line to the bottom of your script:
qstat -f -1 $PBS_JOBID
It will generate something like the following:
Look at resources_used.xyz for hints.
Requesting a maximum memory size
A number of node features can be requested, see the Torque Job Submission page. For example, you may require a minimum physical memory size by requesting:
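For example (myscript.sh is a placeholder name):

```shell
qsub -l nodes=2:ppn=16,mem=120gb myscript.sh
```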
i.e.: 2 entire nodes, 16 CPU cores on each, the total memory of all nodes >= 120 GB RAM.
To see the available RAM memory sizes on the different nodes types see the Hardware page.
Waiting for specific jobs
It is possible to specify that a job should only run after another job has completed successfully; please see the -W flags on the qsub page. To run <your script> after job 12345 has completed successfully:
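The standard Torque dependency syntax for this is afterok:

```shell
qsub -W depend=afterok:12345 <your script>
```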
Be sure that the exit status of job 12345 is meaningful: if it exits with status 0, your second job will run. If it exits with any other status, your second job will be cancelled. It is also possible to run a job if another job fails (``afternotok``) or after another job completes, regardless of status (``afterany``). Be aware that the keyword ``after`` (as in ``-W depend=after:12345``) means run after job 12345 has *started*.
Submitting jobs to 40-CPU fat nodes
The high-memory (1536 GB) nodes are defined to have the node property fatnode. You could submit a batch job like in these examples. 2 entire fatnodes, 32 CPUs each, total 64 CPU cores:
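For example (myscript.sh is a placeholder name):

```shell
qsub -l nodes=2:ppn=32:fatnode myscript.sh
```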
Explicitly the g-11-f0042 node, 40 CPU cores:
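For example (myscript.sh is a placeholder name):

```shell
qsub -l nodes=g-11-f0042:ppn=40 myscript.sh
```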
2 entire fatnodes, total memory of all nodes >= 2000 GB RAM:
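For example (myscript.sh is a placeholder name):

```shell
qsub -l nodes=2:ppn=40:fatnode,mem=2000gb myscript.sh
```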
Submitting jobs to 40-CPU thin nodes
The standard-memory (192 GB) nodes are defined to have the node property thinnode. You could submit a batch job like in these examples. 2 entire thinnodes, 40 CPUs each, total 80 CPU cores:
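For example (myscript.sh is a placeholder name):

```shell
qsub -l nodes=2:ppn=40:thinnode myscript.sh
```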
Explicitly the g-01-c0052 node, 40 CPU cores
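For example (myscript.sh is a placeholder name):

```shell
qsub -l nodes=g-01-c0052:ppn=40 myscript.sh
```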
Submitting 1-CPU jobs
You could submit a batch job like in this example:
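A sketch; the resource values and script name are placeholders:

```shell
qsub -l nodes=1:ppn=1,mem=4gb,walltime=12:00:00 myscript.sh
```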
Running parallel jobs using MPI
In order to optimize performance, the queuing system is configured to place jobs on nodes connected to the same InfiniBand switch (30 nodes per switch) if possible.
To get nodes close to each other, use procs=<number_of_procs> and leave out ppn=. To avoid interference with other jobs, procs= should be a multiple of cores per node (i.e. 28 for mpinode).
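For example, to request 56 processes (2 x 28) placed wherever complete nodes are free (mpijob.sh is a placeholder name, and the walltime is only an illustration):

```shell
qsub -l procs=56,walltime=12:00:00 mpijob.sh
```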
Submitting multiple identical jobs can be done using job arrays. Job arrays can be created by using the -t option in the qsub submission script, which allows many copies of the same script to be submitted at once. Additional information about the -t option can be found in the qsub command reference. The PBS_ARRAYID environment variable allows you to differentiate the different jobs in the array. The amount of resources requested in the qsub submission script is the amount of resources that each individual job will get.
For instance adding the line:
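With Torque's array syntax, '0-14' gives 15 copies and '%5' limits how many run at once:

```shell
#PBS -t 0-14%5
```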
in the qsub script will cause the job to run 15 times, with no more than 5 active jobs at any given time.
PBS_ARRAYID values will run from 0 to 14, as shown below:
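Inside the job script, PBS_ARRAYID can be used to select per-task input; a hypothetical sketch (the data_N.txt naming is invented for illustration):

```shell
#!/bin/sh
# Each array task receives its own PBS_ARRAYID (0, 1, ..., 14 in this example).
# Outside the batch system PBS_ARRAYID is unset; default to 0 for local testing.
PBS_ARRAYID=${PBS_ARRAYID:-0}
INPUT="data_${PBS_ARRAYID}.txt"
echo "task ${PBS_ARRAYID} processing ${INPUT}"
```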
Monitoring batch jobs
In addition, the Moab scheduler can be inquired using the showq command:
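For example:

```shell
showq              # list all jobs
showq -u <user>    # list only jobs belonging to <user>
```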
If you want to check the status of a particular jobid use checkjob command:
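For example:

```shell
checkjob <jobid>
```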
Adding -v flag(s) to this command will increase the verbosity.
Badly behaving jobs
pestat has not been maintained since 2018 and is unsupported.
As a result, it may not be up to date with current Moab and Torque versions, and you should only use the results as a guideline and pointer to further investigation using standard queueing system tools, such as checkjob, showq and qstat.
Another useful command for monitoring batch jobs is pestat, available as a module. Show status of badly behaving jobs, with bad fields marked by star (*)
An example of usage of pestat:
The example job above is behaving correctly. Please consult the script located at `which pestat` for the description of the fields. The most important fields are:

- state = Torque state (second column); a node can be free (not all the cores used), excl (all cores used) or down.
- load = CPU load average (third column)
- pmem = Physical memory (fourth column); the amount of physical RAM installed in the node
- ncpu = total number of CPU cores (fifth column)
- resi = Resident (used) memory (seventh column); the total memory in use on the given node (the one reported under RES by the "top" command)

If used memory exceeds physical RAM on the node, or the CPU load is significantly lower than the number of CPU cores, the job becomes a candidate to be killed. An example of a job exceeding physical memory:
An example of a job with incorrect CPU load:
Searching for free resources
Show what resources are available for immediate use (see `Batch_jobs#batch-job-node-properties`_ for more options). Fatnode:
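One option is Moab's showbf command, which reports resources available for immediate use; the -f flag (feature constraint, here the fatnode property) is an assumption based on standard Moab usage:

```shell
showbf -f fatnode
```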
pestat can also be used to check what resources are free:
The node risoe-r01-f010 is occupied by 1 job (9th column) and two users (8th column) each requesting 1 core. The node risoe-r02-f024 is totally free.
Canceling a given job:
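For example, with the standard Torque qdel command:

```shell
qdel <jobid>
```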
Force cancel job - try this if regular cancel fails
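A sketch using qdel's purge flag:

```shell
qdel -p <jobid>
```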
Canceling all jobs of a given user (privileged command):
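One way, sketched with standard Torque tools (qselect -u selects all jobs belonging to the user):

```shell
qdel $(qselect -u <username>)
```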
Re-queue a job (privileged command):
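The standard Torque command for this is qrerun:

```shell
qrerun <jobid>
```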
Change walltime (privileged command):
Changing the wallclock limit of a job by 10 hours 11 minutes and 12 seconds (request Computerome Support in good time to extend walltime for running job):
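A sketch using Moab's mjobctl; the wclimit+= modification syntax is an assumption based on standard Moab usage:

```shell
mjobctl -m wclimit+=10:11:12 <jobid>
```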
<jobex> is a regex(7) regular expression preceded by x:, e.g. "x:abc12[0-9]"