A complete, opinionated recipe for adapting Gemma 4 E2B to your domain — from multimodal dataset construction and QLoRA configuration through training loop debugging, evaluation, and production deployment.

April 2026 · ~28 min read · HuggingFace / TRL / PEFT

Recipe at a Glance

Serves: 1 fine-tuned model

Ingredients (hardware)
- NVIDIA GPU: ≥ 24 GB VRAM
- System RAM: ≥ 32 GB
- Storage (SSD): ≥ 50 GB free
- Training dataset: 1K–100K samples
- Python: ≥ 3.11
- CUDA: ≥ 12.1

Ingredients (libraries)
- transformers ≥ 4.51
- peft ≥ 0.12
- trl ≥ 0.12
- bitsandbytes ≥ 0.44
- accelerate ≥ 0.34
- wandb / mlflow: any

A quick way to verify these library minimums appears in the sketch after the contents list.

Contents
01. Why Fine-Tune E2B?
02. Dataset Construction
03. QLoRA Configuration
04. Multimodal Fine-Tuning
05. The Training Loop
06. Debugging ...
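Before starting, it is worth confirming your environment actually meets the version floors in the ingredients list. Here is a minimal sketch of such a check; the pinned minimums are taken from the list above, and the use of the `packaging` library (a standard transformers dependency) is my assumption, not something the recipe prescribes.

```python
# Sketch: verify the library "ingredients" against the minimums listed above.
from importlib.metadata import version
from packaging.version import Version

# Minimum versions copied from the Ingredients (libraries) list.
REQUIRED = {
    "transformers": "4.51",
    "peft": "0.12",
    "trl": "0.12",
    "bitsandbytes": "0.44",
    "accelerate": "0.34",
}

for pkg, minimum in REQUIRED.items():
    installed = Version(version(pkg))
    assert installed >= Version(minimum), f"{pkg} {installed} < required {minimum}"
    print(f"{pkg:>14} {installed}  ok")
```

Run this once at the top of your training script or notebook; a failed assertion up front is cheaper to debug than a cryptic error mid-training.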
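To make the overall shape of the recipe concrete before diving into the sections, here is a minimal sketch of the QLoRA stack this guide builds on: load the model 4-bit quantized with bitsandbytes, then attach LoRA adapters with PEFT. The hub id is a placeholder I have assumed (replace it with the actual Gemma 4 E2B checkpoint), the LoRA hyperparameters are illustrative defaults rather than the recipe's tuned values, and for brevity this uses the text-only causal-LM entry point; the multimodal path covered later would swap in the appropriate image-text model class.

```python
# Sketch: 4-bit QLoRA setup with transformers + bitsandbytes + peft.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

MODEL_ID = "google/gemma-4-e2b-it"  # assumption: placeholder hub id for Gemma 4 E2B

# NF4 quantization with bf16 compute, the usual QLoRA configuration.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=bnb,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)  # enable grads on the right layers

# Illustrative LoRA hyperparameters; the recipe's own values come in section 03.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # should report a small trainable fraction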