
Run a reproducible GROMACS benchmark on an RTX 4090 using Compute’s GPU-optimized image. This guide shows how to verify GPU access, install GROMACS correctly, and run a basic GPU-offloaded benchmark without guesswork.
This article is not a troubleshooting playbook and not a promise of one-click setup. It is a practical, reproducible path that matches how Compute instances actually work today.
This guide assumes you understand one thing about the platform: Compute instances are containerized environments. Do not assume that “apt-get anything I want” or “build once and forget” will hold across restarts. If you need a fixed environment long-term, create a custom template.
GPU-optimized image
Do not assume GROMACS is preinstalled. It is not.
Launch the instance.
SSH into the instance using the command shown in the UI.
First check that the GPU is visible:
nvidia-smi
You should see the RTX 4090 listed.
If nvidia-smi fails or shows no GPU, stop here. Terminate the instance and retry. If it happens again, this is a platform issue and should go to Support.
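If you want to script this check rather than eyeball the table, the CSV name query below uses standard nvidia-smi flags; `has_rtx4090` is a hypothetical helper name, not part of any tool.

```shell
# Hypothetical helper: succeeds if an RTX 4090 appears in the GPU list.
# Feed it the output of nvidia-smi's CSV name query.
has_rtx4090() {
  grep -qi "RTX 4090"
}

# On the instance:
#   nvidia-smi --query-gpu=name --format=csv,noheader | has_rtx4090 \
#     && echo "GPU OK" || echo "GPU missing"
```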
You have two supported paths for getting GROMACS onto the instance. Pick one and stick to it; this avoids CUDA, compiler, and build mismatches.
Option 1: Use the official GROMACS container
Check that Docker or a compatible container runtime is available:
docker --version
Then run:
docker run --rm --gpus all gromacs/gromacs:2024.1 gmx --version
If this works and shows a GPU-enabled build, you are ready to run jobs using the container.
This is the safest option on Compute today.
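To avoid retyping the long docker invocation for every command, you can wrap it in a small function. `gmx_docker` is a hypothetical name; the mount point `/work` and the image tag are choices you can change.

```shell
# gmx_docker: hypothetical wrapper so every gmx command runs inside the
# official container with your working directory mounted at /work.
GROMACS_IMAGE="gromacs/gromacs:2024.1"

gmx_docker() {
  docker run --rm --gpus all \
    -v "$PWD:/work" -w /work \
    "$GROMACS_IMAGE" gmx "$@"
}

# Example:
#   gmx_docker --version
#   gmx_docker mdrun -s system.tpr -deffnm bench -nb gpu -pme gpu
```

Because the working directory is mounted, input files you create on the instance are visible inside the container and outputs land back on the instance.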
Option 2: Build GROMACS from source
Only do this if you know you need a custom build.
At a high level, this means:
Follow the official GROMACS installation guide and make sure CUDA support is enabled. Do not mix instructions from blogs or older guides.
Be aware: changes made this way are not guaranteed to persist across instance lifecycles unless you convert the result into a custom template.
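As a hedged sketch only: a typical CUDA-enabled build follows the shape below. The version, URL, and install prefix are illustrative; the `-DGMX_GPU=CUDA` and `-DGMX_BUILD_OWN_FFTW=ON` options come from the official GROMACS installation guide, which remains the authority.

```shell
# Write a build script rather than running it inline, so you can review
# it first. Version, URL, and install prefix are illustrative.
cat > build-gromacs.sh <<'EOF'
#!/bin/bash
set -euo pipefail
VERSION=2024.1
wget "https://ftp.gromacs.org/gromacs/gromacs-${VERSION}.tar.gz"
tar xzf "gromacs-${VERSION}.tar.gz"
cd "gromacs-${VERSION}"
mkdir -p build && cd build
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=CUDA \
         -DCMAKE_INSTALL_PREFIX="$HOME/gromacs-install"
make -j"$(nproc)" && make install
EOF
bash -n build-gromacs.sh   # syntax check only; run it on the instance
```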
Create a working directory:
mkdir -p ~/gromacs
cd ~/gromacs
You need a .tpr file to run mdrun.
If you already have one, copy it here.
If not, generate it from existing inputs:
gmx grompp -f md.mdp -c conf.gro -p topol.top -o system.tpr
Run GROMACS with explicit GPU flags:
gmx mdrun -s system.tpr -deffnm bench \
-nb gpu -pme gpu -update gpu -pin on
While it runs, confirm GPU activity in another shell:
nvidia-smi
You should see non-zero utilization.
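If you want this check in a script instead of a second terminal, the query flags below are standard nvidia-smi options; `util_ok` is a hypothetical helper for parsing one sampled line.

```shell
# Poll utilization once per second (standard nvidia-smi query flags):
#   nvidia-smi --query-gpu=utilization.gpu --format=csv,noheader -l 1

# util_ok: hypothetical check that a sampled line such as "97 %" is
# non-zero, so a script can fail fast if the GPU sits idle.
util_ok() {
  read -r line
  pct=$(printf '%s\n' "$line" | tr -dc '0-9')
  [ "${pct:-0}" -gt 0 ]
}
```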
When the run finishes, note the reported performance (ns/day).
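The figure lives in the mdrun log (bench.log when using `-deffnm bench`), on a line that reads like `Performance:      112.345        0.214`, with ns/day in the second column. A small awk one-liner pulls it out; `nsday_from_log` is a hypothetical helper name.

```shell
# nsday_from_log: hypothetical helper that extracts ns/day from an
# mdrun log. The "Performance:" line lists ns/day in column 2.
nsday_from_log() {
  awk '/Performance:/ {print $2}' "$1"
}

# Example: nsday_from_log bench.log
```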
gmx --version reports GPU support.
nvidia-smi shows activity during the run.
If those conditions are not met, this is not a valid benchmark.
“GROMACS not found”
You selected the GPU-optimized image. GROMACS is not preinstalled. Use the container or install it explicitly.
CUDA or GPU errors at runtime
You are mixing incompatible CUDA versions, or the container does not have GPU access. Verify with nvidia-smi and gmx --version.
Inconsistent performance between runs
You are changing instance size, CPU allocation, or container versions. Benchmarks are only meaningful if the environment is stable.
Performance depends on your simulation system, the instance size and CPU allocation, and the GROMACS version and build you run.
Numbers from this article are illustrative only. Always benchmark your own workload.
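Since benchmarks are only comparable when the environment is stable, it helps to snapshot the environment alongside each run. The script below is a hypothetical helper (file names are arbitrary); commands that are unavailable are skipped rather than failing the script.

```shell
# record-env.sh: hypothetical helper that snapshots everything needed to
# compare two benchmark runs fairly.
cat > record-env.sh <<'EOF'
#!/bin/bash
set -u
{
  date -u +"%Y-%m-%dT%H:%M:%SZ"
  nproc
  nvidia-smi --query-gpu=name,driver_version --format=csv,noheader 2>/dev/null || true
  gmx --version 2>/dev/null | head -n 5 || true
} > bench-env.txt
EOF
bash -n record-env.sh   # run it alongside each benchmark
```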
If you need a heavily customized build or an environment guaranteed to persist across instance lifecycles, this article will frustrate you. Create a custom template or contact Support instead.
Start a GPU instance with a CUDA-ready template (e.g., Ubuntu 24.04 LTS / CUDA 12.6) or your own GROMACS image. Enjoy flexible per-second billing with custom templates and the ability to start, stop, and resume your sessions at any time. Unsure about FP64 requirements? Contact Support for help selecting the ideal hardware profile for your computational needs.