# Lab: Fine-Tuning Local Models
This lab provides a hands-on walkthrough of fine-tuning AI models using Docker Offload, Docker Model Runner, and Unsloth. Learn how to customize models for your specific use case, validate the results, and share them via Docker Hub.
## What you'll learn
- Use Docker Offload to fine-tune a model with GPU acceleration
- Package and share the fine-tuned model on Docker Hub
- Run the custom model with Docker Model Runner
- Understand the end-to-end workflow from training to deployment
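The bullets above map onto a small set of CLI commands. The following is a hedged sketch of that flow, not the lab's exact steps: `<your-namespace>/my-model` is a placeholder tag, and the `docker model package` invocation assumes your fine-tuning run exports a GGUF file.

```shell
# Hypothetical end-to-end flow; names and paths are placeholders.
# 1. Fine-tune inside the lab (Module 2) using Unsloth on Docker Offload GPUs.
# 2. Package the exported GGUF as an OCI artifact and push it to Docker Hub
#    (assumes a GGUF export at ./model.gguf):
docker model package --gguf ./model.gguf --push <your-namespace>/my-model
# 3. Pull and run the custom model with Docker Model Runner:
docker model pull <your-namespace>/my-model
docker model run <your-namespace>/my-model "Summarize this release note in one sentence."
```

The lab modules walk through each of these stages in order, so treat this only as a preview of the shape of the workflow.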
## Modules
| # | Module | Description |
|---|---|---|
| 1 | Introduction | Overview of fine-tuning concepts and the Docker AI stack |
| 2 | Fine-Tuning with Docker Offload | Run fine-tuning using Unsloth and Docker Offload |
| 3 | Validate and Publish | Test the fine-tuned model and publish to Docker Hub |
| 4 | Conclusion | Summary, key takeaways, and next steps |
## Prerequisites
- Docker Desktop with Docker Offload enabled
- GPU access with Docker Offload cloud resources
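Before launching, it is worth confirming that an Offload session is active. A minimal check, assuming the Docker Offload CLI plugin is installed (verify the exact subcommands with `docker offload --help`):

```shell
# Start an Offload session (you may be prompted to enable GPU support),
# then confirm the session is active before running the lab.
docker offload start
docker offload status
```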
## Launch the lab

Ensure you have Docker Offload running, then start the labspace:

```shell
docker compose -f oci://dockersamples/labspace-fine-tuning up -d
```

Then open your browser to http://localhost:3030.
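If the page does not load, a quick sanity check is to confirm the labspace container is up and that the UI port answers. This assumes `curl` is available locally; the port comes from the URL above:

```shell
# List the labspace's services and their state.
docker compose -f oci://dockersamples/labspace-fine-tuning ps

# Probe the lab UI; prints the HTTP status code (expect a 2xx/3xx response).
curl -fsS -o /dev/null -w "%{http_code}\n" http://localhost:3030
```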