Checkpointless training in Amazon SageMaker HyperPod
Checkpointless training on Amazon SageMaker HyperPod enables faster recovery from training infrastructure faults. The following documentation helps you get started with checkpointless training and fine-tuning for NeMo-supported models.
Checkpointless training has the following prerequisites:
- Getting started with Amazon EKS support in SageMaker HyperPod
- Installing the training operator. You must install v1.2.0 or above (one way to verify the installed version is sketched after this list).
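The training operator is typically installed as an Amazon EKS add-on on your HyperPod cluster. As a rough, non-authoritative sketch, the following Python (Boto3) snippet lists the add-ons installed on a cluster along with their versions so you can confirm the training operator is at v1.2.0 or above; the cluster name is an assumption, and the add-on name to look for depends on how the operator was installed in your environment.

```python
import boto3

# Assumption: the HyperPod training operator was installed as an EKS add-on
# on this cluster; substitute your own cluster name.
eks = boto3.client("eks")
cluster_name = "my-hyperpod-eks-cluster"

# Print every installed add-on and its version; locate the HyperPod
# training operator entry and confirm it is v1.2.0 or above.
for addon_name in eks.list_addons(clusterName=cluster_name)["addons"]:
    addon = eks.describe_addon(clusterName=cluster_name, addonName=addon_name)["addon"]
    print(addon["addonName"], addon["addonVersion"])
```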
Checkpointless training on SageMaker HyperPod is built on top of the NVIDIA NeMo framework. For more information, see the NVIDIA NeMo Framework User Guide.
The following HyperPod recipes are pre-configured with checkpointless training optimizations. Specify your data paths as part of the recipe and use the associated launch script to run training (see the quick start guide below; a minimal, illustrative launch sketch follows the table):
| Model | Method | Size | Nodes | Instance | Accelerator | Recipe | Script | Tutorial |
|---|---|---|---|---|---|---|---|---|
| GPT OSS | Full fine-tune example | 120b | 16 | p5.48xlarge | GPU H100 | link | link | link |
| GPT OSS | LoRA example | 120b | 2 | p5.48xlarge | GPU H100 | link | link | link |
| Llama3 | Pretrain example | 70b | 16 | p5.48xlarge | GPU H100 | link | link | link |
| Llama3 | LoRA example | 70b | 2 | p5.48xlarge | GPU H100 | link | link | link |
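The exact launch mechanics are covered in the tutorials linked above. As a minimal, hedged sketch of the pattern described here, the following Python snippet sets data paths through environment variables and then invokes a recipe's launch script. The script path and environment variable names (TRAIN_DIR, VAL_DIR, EXP_DIR) are illustrative assumptions modeled on typical HyperPod recipe launch scripts, so check the recipe you use for the names it actually expects.

```python
import os
import subprocess

# Assumptions: the paths below point at your datasets on shared storage, and
# the launch script path matches the recipe you downloaded.
env = os.environ.copy()
env.update({
    "TRAIN_DIR": "/fsx/datasets/llama3/train",  # training data location (assumption)
    "VAL_DIR": "/fsx/datasets/llama3/val",      # validation data location (assumption)
    "EXP_DIR": "/fsx/experiments/llama3-70b",   # where logs and results are written (assumption)
})

# Hypothetical path to the launch script that ships with the recipe.
launch_script = "launcher_scripts/llama/run_hf_llama3_70b_pretrain.sh"

# Run the recipe's launch script with the data paths supplied above.
subprocess.run(["bash", launch_script], env=env, check=True)
```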
The following quick-start guide provides tutorials for using checkpointless training recipes:
Getting started examples
- Tutorials - Amazon SageMaker HyperPod Checkpointless Full Finetuning GPT OSS 120b
- Tutorials - Amazon SageMaker HyperPod Checkpointless PEFT-LoRA GPT OSS 120b
- Tutorials - Amazon SageMaker HyperPod Checkpointless Pretraining Llama 3 70b
- Tutorials - Amazon SageMaker HyperPod Checkpointless PEFT-LoRA Llama 3 70b
If you’d like to pre-train or fine-tune custom models, see Tutorials - Amazon SageMaker HyperPod Checkpointless Pretraining or Finetuning Custom Models.
To learn more about incorporating specific checkpointless training components, see HyperPod checkpointless training features.