
COMSOL

COMSOL is a multiphysics solver that provides a unified workflow for electrical, mechanical, fluid, and chemical applications. For more information, see the COMSOL Homepage.

Warning

COMSOL is proprietary software. Make sure you meet the requirements for its use.

Available Modules

module load COMSOL/5.6
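Other versions may be installed over time. Assuming the standard Lmod environment module system used on the cluster, you can list the versions currently available with:

module spider COMSOL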

Licences

The following network licence servers can be accessed from the NeSI cluster.

Institution                          Faculty                             Uptime
University of Auckland               Physics                             98%
Auckland Bioengineering Institute    Implantable devices group           98%
University of Auckland               Faculty of Engineering              98%
University of Auckland               Department of Engineering Science   98%
University of Otago                                                      99%
University of Canterbury                                                 100%

If you do not have access, or would like a licence server connected, contact our Support Team.
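A licence token is requested per job through the Slurm --licenses option, as used in the example scripts below. A minimal sketch, assuming the licence name comsol@uoa_foe from those examples corresponds to the University of Auckland Faculty of Engineering server:

#SBATCH --licenses comsol@uoa_foe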

comsol --help

Will display a list of COMSOL batch commands.

Batch Submission

When using COMSOL batch, the following flags can be used to control how the job is distributed (an illustrative command follows the list).

-mpibootstrap slurm    Instructs COMSOL to get its settings from Slurm.
-np <cpus>             Number of CPUs to use in each task. Equivalent to the Slurm input --cpus-per-task or the environment variable ${SLURM_CPUS_PER_TASK}.
-nn <tasks>            Total number of tasks. --ntasks or ${SLURM_NTASKS}.
-nnhost <tasks>        Number of tasks per node. --ntasks-per-node or ${SLURM_NTASKS_PER_NODE}.
-f <path to hostlist>  Host file. You won't need to set this in most circumstances.
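For illustration, these flags can also be set explicitly from the Slurm environment variables rather than relying on -mpibootstrap slurm; the input file name below is a placeholder.

comsol batch -nn ${SLURM_NTASKS} -nnhost ${SLURM_NTASKS_PER_NODE} -np ${SLURM_CPUS_PER_TASK} -inputfile my_input.mph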

Example Scripts

Serial: a single process with a single thread. Usually submitted as part of an array, as in the case of parameter sweeps.

#!/bin/bash -e

#SBATCH --job-name      COMSOL-serial
#SBATCH --licenses      comsol@uoa_foe
#SBATCH --time          00:05:00          # Walltime
#SBATCH --mem           1512               # total mem

module load COMSOL/5.6
comsol batch -inputfile my_input.mph

Shared memory: a single task using multiple CPUs on one node.
#!/bin/bash -e
#SBATCH --job-name      COMSOL-shared
#SBATCH --licenses      comsol@uoa_foe
#SBATCH --time          00:05:00        # Walltime
#SBATCH --cpus-per-task 8
#SBATCH --mem           4G              # total mem
module load COMSOL/5.6
comsol batch -mpibootstrap slurm -inputfile my_input.mph

Distributed: multiple MPI tasks, each with a single CPU.
#!/bin/bash -e

#SBATCH --job-name      COMSOL-distributed
#SBATCH --licenses      comsol@uoa_foe
#SBATCH --time          00:05:00            # Walltime
#SBATCH --ntasks        8         
#SBATCH --mem-per-cpu   1500                # mem per cpu

module load COMSOL/5.6
comsol batch -mpibootstrap slurm -inputfile my_input.mph

Hybrid: multiple MPI tasks, each using multiple CPUs.
#!/bin/bash -e
#SBATCH --job-name         COMSOL-hybrid
#SBATCH --licenses         comsol@uoa_foe
#SBATCH --time             00:05:00          # Walltime
#SBATCH --ntasks           4                 
#SBATCH --cpus-per-task    16
#SBATCH --mem-per-cpu      1500              # mem per CPU

module load COMSOL/5.6
comsol batch -mpibootstrap slurm -inputfile my_input.mph

LiveLink for MATLAB: starts a COMSOL server and drives it from a MATLAB script.
#!/bin/bash -e
#SBATCH --job-name         COMSOL-livelink
#SBATCH --licenses         comsol@uoa_foe
#SBATCH --time             00:05:00
#SBATCH --cpus-per-task    16
#SBATCH --mem-per-cpu      1500

module purge

module load COMSOL/5.6
module load MATLAB/2021b

# Start the COMSOL server in the background, then run a MATLAB script that connects to it via mphstart
comsol mphserver -silent &
matlab -batch "addpath('/opt/nesi/share/COMSOL/comsol154/multiphysics/mli/');mphstart;MyScript"

Warning

If no output file is set with -outputfile, the input file will be updated in place instead.
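For example, to keep the input file untouched (both file names here are placeholders):

comsol batch -inputfile my_input.mph -outputfile my_results.mph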

Interactive Use

Provided you have set up X11 forwarding, you can open the COMSOL GUI by running the command comsol.

Large jobs should not be run on the login node.

If you are using COMSOL LiveLink, you will need to load a MATLAB module (in addition to the COMSOL module), e.g.

module load MATLAB/2021b

Then

comsol matlab -mlroot <path>

Where <path> is the root directory of the MATLAB version you are using (dirname $(dirname $(which matlab))).
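For example, assuming a MATLAB module is already loaded as above, the path can be supplied inline:

comsol matlab -mlroot $(dirname $(dirname $(which matlab)))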

Best Practice

COMSOL is relatively smart with its use of resources; where possible, it is preferable to use --cpus-per-task over --ntasks.

Memory requirements depend on the job type, but scale approximately linearly with the number of CPUs.

Multithreading will benefit jobs using fewer than 8 CPUs, but is not recommended for larger jobs.

Performance is highly dependent on the model used; the above should only be used as a rough guide.