Boltz-2
Boltz v2 structure and property prediction (jwohlwend/boltz), wrapped as a GPU job on subseq.bio.
Inputs and outputs
- Your files are mounted read-only under `/inputs` inside the container.
- Prediction outputs and metadata should be written to `/outputs`.
- A shared reference volume is mounted at `/ref`; Boltz-2 caches model weights under `/ref/cache` (enforced automatically).
- The container entrypoint is `boltz predict`, so the job arguments are passed directly to that CLI.
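Concretely, because the entrypoint is `boltz predict` and the platform appends the cache flag, an argument block like the one in Example 1 below ends up running inside the container roughly as:

```
boltz predict /inputs --use_msa_server --use_potentials --out_dir=/outputs --cache=/ref/cache
```

This is an illustration of how the pieces combine, not a command you type yourself; the platform assembles it from your argument block.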
Example 1 — basic MSA-backed run
Use the default pattern shown on the New Job form: point Boltz-2 at your inputs directory, enable the hosted MSA server, and turn on potentials.
/inputs
--use_msa_server
--use_potentials
--out_dir=/outputs
- Place your FASTA and any required config files under `/inputs` when uploading.
- `--use_msa_server` tells Boltz-2 to fetch MSAs via the configured remote server.
- `--use_potentials` enables the learned potentials during inference.
- `--out_dir=/outputs` directs all prediction artifacts to the job output volume.
Example 2 — local configuration file
You can also drive Boltz-2 with an explicit configuration file that lives under /inputs, while still caching weights under /ref/cache.
/inputs/my_config.yaml
--out_dir=/outputs/my_config_run
--use_msa_server
--use_potentials
- `/inputs/my_config.yaml` is a Boltz configuration file you prepare based on the upstream examples.
- Outputs for this run will be written under `/outputs/my_config_run`.
- The platform automatically appends `--cache=/ref/cache` to keep downloaded weights in the shared reference volume.
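As a starting point, a minimal Boltz YAML configuration might look like the sketch below. The chain IDs, protein sequence, and SMILES string are hypothetical placeholders; verify the schema against the examples in the Boltz GitHub repository before relying on it.

```yaml
version: 1            # config schema version expected by Boltz
sequences:
  - protein:
      id: A                        # chain identifier (placeholder)
      sequence: MVLSPADKTNVKAAW    # placeholder amino-acid sequence
  - ligand:
      id: B
      smiles: "CC(=O)Oc1ccccc1C(=O)O"   # placeholder ligand SMILES
```

Saved as `my_config.yaml` and uploaded under `/inputs`, this file would be passed as the first job argument, as shown in Example 2.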
Notes
- Boltz-2 jobs currently require a GPU; they are scheduled onto the GPU cluster with 1 GPU, 8 vCPU, and 32 GiB RAM as a baseline.
- For full CLI options and configuration schemas, see the Boltz GitHub repository (boltz-2 section).
- To start a job from the UI, go to New Job → Boltz-2 and paste one of the argument blocks above.