This page briefly describes the different parallel versions of GAMESS-UK that are available.

In addition, a number of reports focused on GAMESS-UK parallel developments and performance can be found here, together with a number of more general reports covering benchmarking and performance analysis of parallel kernels and communication primitives, as well as application codes from chemistry, materials and engineering, available here.

Parallel implementations of GAMESS-UK

GAMESS-UK is currently available in two different versions for parallel machines:

  • A replicated data version that relies on the virtual shared memory model provided by the Global Array toolkit (the "GA version"). This is the default parallel version of the code and the one that should be used by most users of GAMESS-UK.
  • A largely distributed-data version that is parallelised using MPI (the "MPI version") and makes use of MPI-based tools such as BLACS and ScaLAPACK.
    • Within the MPI build it is also possible to configure GAMESS-UK to run in "taskfarming" mode, for batch processing numerous small jobs under the umbrella of a single GAMESS-UK job.

Although these versions are currently separate, they will be incorporated into a single parallel binary in the future.

For most users of the parallel code, the only version that will be of interest is the GA version, and they can safely ignore the references to the MPI version.

Both versions are usually built on top of an MPI implementation such as MPICH, although the GA version can be built with the TCGMSG message-passing library for those cases where MPI is unavailable.

An explanation of the different functionality available in the Global Array-based build and the MPI build is given below. For further information on the parallel builds, please see chapter 14 of the GAMESS-UK manual.

Global Array-based build

The Global Array-based version of GAMESS-UK is parallelised using the virtual shared memory model provided by the Global Array toolkit and the PeIGS parallel eigensolver library.

Most of the data is replicated, but whenever a parallel linear algebra operation is performed, the data is copied into a Global Array, the GA tools are used to perform the operation, and the result is copied back into a replicated object.
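The replicate-distribute-replicate pattern described above can be sketched in a few lines. The following is a minimal single-process illustration in plain Python; it models the "copy into a Global Array, operate, copy back" cycle with row blocks, and is not the actual Global Array toolkit API (the real code uses GA one-sided operations and the PeIGS eigensolver):

```python
# Conceptual sketch of the GA-version strategy: data is held replicated
# on every process; for a parallel linear-algebra step it is copied into
# a distributed "global array" (modelled here as per-process row blocks),
# each "process" operates on its own block, and the result is copied
# back into a replicated object. Illustration only, not GAMESS-UK code.

def distribute(matrix, nproc):
    """Copy a replicated matrix into nproc row blocks (like a GA 'put')."""
    n = len(matrix)
    size, rem = divmod(n, nproc)
    blocks, start = [], 0
    for p in range(nproc):
        stop = start + size + (1 if p < rem else 0)
        blocks.append(matrix[start:stop])
        start = stop
    return blocks

def parallel_scale(blocks, alpha):
    """Each 'process' works only on its local block of the global array."""
    return [[[alpha * x for x in row] for row in block] for block in blocks]

def collect(blocks):
    """Copy the distributed result back to a replicated matrix (like a GA 'get')."""
    return [row for block in blocks for row in block]

if __name__ == "__main__":
    fock = [[float(i * 4 + j) for j in range(4)] for i in range(4)]  # replicated
    blocks = distribute(fock, 2)              # replicated -> distributed
    blocks = parallel_scale(blocks, 2.0)      # parallel operation on blocks
    result = collect(blocks)                  # distributed -> replicated
    print(result[3][3])
```

In the real code the "operation" step is a parallel matrix multiply or diagonalisation carried out by the GA tools or PeIGS rather than a simple scaling.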

The following modules in GAMESS-UK have been parallelised using this strategy and are available for use in this version:

  • RHF, ROHF, UHF and GVB energies and gradients (conventional, in-core and direct), including effective core potentials.
  • Direct-MP2 energies and gradients (closed-shell).
  • Direct-SCF analytic 2nd derivatives.
  • Solvation using the Tomasi Polarizable Continuum Model.
  • Direct RPA.
  • Analysis options requiring the computation of properties on molecular grids.
  • Density Functional Theory (DFT):
    • Closed- and open-shell (UKS) energies and gradients, with both explicit and fitting treatments of the Coulomb term.
    • Analytic second derivatives.
  • Valence Bond module.
  • Zeroth Order Regular Approximation (ZORA).
  • Direct Configuration Interaction (CI) (although this code requires the presence of a parallel filesystem, accessible by all nodes on the system).

MPI build

The MPI version uses a largely distributed-data strategy that makes use of the MPI tools for operations on the distributed matrices and LAPACK tools for operations on the few matrices that are replicated.

To use this version, in addition to an MPI implementation such as MPICH, implementations of BLACS, LAPACK and ScaLAPACK (all freely available) must also be installed on the machine.

The MPI version has limited functionality, but scales well on large parallel machines and, due to its largely distributed-data strategy, is best used for extremely large calculations.
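The memory argument behind "best used for extremely large calculations" is easy to make concrete. A replicated-data code stores every full matrix on every process, while a distributed-data code stores only about 1/P of each matrix per process. The matrix dimension and process count below are illustrative values, not figures from the GAMESS-UK documentation:

```python
# Rough per-process memory for one N x N double-precision matrix.
# Replicated data: every process holds the full matrix.
# Distributed data: each process holds roughly 1/P of it.
N = 40_000            # basis-function count (illustrative)
P = 256               # number of MPI processes (illustrative)
BYTES_PER_DOUBLE = 8

full_matrix = N * N * BYTES_PER_DOUBLE       # bytes for one full matrix
replicated_gib = full_matrix / 2**30         # per process, replicated
distributed_gib = full_matrix / P / 2**30    # per process, distributed

print(f"replicated : {replicated_gib:.1f} GiB per process")
print(f"distributed: {distributed_gib:.3f} GiB per process")
```

At this size a single replicated matrix is already around 12 GiB per process, whereas distributing it over 256 processes brings the per-process cost below 50 MiB, which is why only the distributed strategy reaches very large problem sizes.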

The following functionality in GAMESS-UK is available for use in this version:

  • RHF and UHF energies and gradients.
  • Closed- and open-shell DFT energies and gradients.