
Checkpointless training in Amazon SageMaker HyperPod

Checkpointless training on Amazon SageMaker HyperPod enables faster recovery from training infrastructure faults. The following documentation helps you get started with checkpointless training and fine-tuning for NeMo-supported models.

Checkpointless training on SageMaker HyperPod is built on top of the NVIDIA NeMo framework (see the NVIDIA NeMo Framework User Guide), so familiarity with NeMo is a prerequisite. You can run checkpointless training with pre-created SageMaker HyperPod recipes; if you're familiar with NeMo, the process of using the checkpointless training recipes is similar. With minor changes, you can start training a model with checkpointless training features that let you recover quickly from training faults.
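In practice, a "minor change" usually means editing the data paths in a recipe and re-running its launch script. The following Python sketch illustrates that flow under stated assumptions: the recipe path, the YAML key names, and the launch script name are hypothetical placeholders for illustration, not documented values.

```python
# Sketch only: customize a checkpointless recipe's data paths, then launch it.
# The recipe path, YAML keys, and launcher script name below are assumptions.
import subprocess

import yaml  # PyYAML

RECIPE = "recipes_collection/recipes/training/llama/llama3_70b_checkpointless.yaml"  # hypothetical

with open(RECIPE) as f:
    recipe = yaml.safe_load(f)

# Point the recipe at your own datasets (key names are assumptions).
recipe["data"]["train_dir"] = "/fsx/datasets/my-corpus/train"
recipe["data"]["val_dir"] = "/fsx/datasets/my-corpus/val"

with open(RECIPE, "w") as f:
    yaml.safe_dump(recipe, f)

# Run the launch script associated with the recipe (name is hypothetical).
subprocess.run(
    ["bash", "launcher_scripts/llama/run_llama3_70b_checkpointless.sh"],
    check=True,
)
```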

The following HyperPod recipes are pre-configured with checkpointless training optimizations. You can specify your data paths as part of each recipe and use the associated launch script to run training (see the quick start guide below, and the launch sketch after the table):

| Model   | Method                 | Size | Nodes | Instance    | Accelerator | Recipe | Script | Tutorial |
|---------|------------------------|------|-------|-------------|-------------|--------|--------|----------|
| GPT OSS | Full fine-tune example | 120b | 16    | p5.48xlarge | H100 GPU    | link   | link   | link     |
| GPT OSS | LoRA example           | 120b | 2     | p5.48xlarge | H100 GPU    | link   | link   | link     |
| Llama3  | Pretrain example       | 70b  | 16    | p5.48xlarge | H100 GPU    | link   | link   | link     |
| Llama3  | LoRA example           | 70b  | 2     | p5.48xlarge | H100 GPU    | link   | link   | link     |
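As a usage illustration, the sketch below shows how one row of the table (the 2-node GPT OSS 120b LoRA configuration) might map to a launch command with configuration overrides. The entry point, the Hydra-style override keys, and the recipe identifier are assumptions for illustration; refer to the linked recipe and script for the actual values.

```python
# Sketch only: launch the 2-node GPT OSS 120b LoRA configuration from the
# table above. The entry point, recipe id, and override keys are assumptions.
import subprocess

overrides = [
    "recipes=fine-tuning/gpt_oss/gpt_oss_120b_lora",  # hypothetical recipe id
    "recipes.trainer.num_nodes=2",                    # 2 x p5.48xlarge, per the table
    "recipes.run.name=gpt-oss-120b-lora-checkpointless",
]

# Hydra-style overrides passed to a hypothetical launcher entry point.
subprocess.run(["python3", "main.py", *overrides], check=True)
```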

The following quick-start guide provides tutorials for using checkpointless training recipes:

Getting started examples

If you’d like to pre-train or fine-tune custom models, see Tutorials - Amazon SageMaker HyperPod Checkpointless Pretraining or Finetuning Custom Models.

To learn more about incorporating specific checkpointless training components, see HyperPod checkpointless training features.