Lab: Fine-Tuning Local Models

This lab provides a hands-on walkthrough of fine-tuning AI models using Docker Offload, Docker Model Runner, and Unsloth. Learn how to customize models for your specific use case, validate the results, and share them via Docker Hub.

What you'll learn

  • Use Docker Offload to fine-tune a model with GPU acceleration
  • Package and share the fine-tuned model on Docker Hub
  • Run the custom model with Docker Model Runner
  • Understand the end-to-end workflow from training to deployment
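As a preview of the last two steps, running a published model with Docker Model Runner follows the usual pull-then-run pattern. This is an illustrative sketch: the model reference `yourname/fine-tuned-model` is a placeholder for whatever repository you push to in the lab, not a real image.

```shell
# Pull the fine-tuned model from Docker Hub
# (replace "yourname/fine-tuned-model" with your own repository)
docker model pull yourname/fine-tuned-model

# Send a test prompt to the model to confirm it responds
docker model run yourname/fine-tuned-model "Summarize what Docker Offload does."
```

The lab walks through these commands in context in Module 3.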

Modules

| # | Module                           | Description                                              |
|---|----------------------------------|----------------------------------------------------------|
| 1 | Introduction                     | Overview of fine-tuning concepts and the Docker AI stack |
| 2 | Fine-Tuning with Docker Offload  | Run fine-tuning using Unsloth and Docker Offload         |
| 3 | Validate and Publish             | Test the fine-tuned model and publish to Docker Hub      |
| 4 | Conclusion                       | Summary, key takeaways, and next steps                   |

Prerequisites

  • Docker Desktop with Docker Offload enabled
  • GPU access with Docker Offload cloud resources

Launch the lab

Ensure you have Docker Offload running, then start the labspace:

$ docker compose -f oci://dockersamples/labspace-fine-tuning up -d

Then open your browser to http://localhost:3030.
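If the labspace fails to start because Offload is not active, you can start it and check its status first. These subcommands reflect the Docker Offload CLI as documented at the time of writing; verify against `docker offload --help` on your installation.

```shell
# Start a Docker Offload session (prompts for account/GPU options on first run)
docker offload start

# Confirm the session is connected before launching the labspace
docker offload status
```

Once `docker offload status` reports an active session, rerun the `docker compose` command above.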