Google Gemma Masterclass: Deployment, Fine-Tuning & Agent Integration
Master Google's open-weight Gemma models from scratch, covering local deployment, Ollama inference, LoRA fine-tuning, multimodal applications, and deep integration with AI agent toolchains such as Hermes Agent.
Curriculum
01. Meet Gemma: The Google Open-Weight Model Family
02. Local Deployment: One-Click Setup with Ollama + Gemma
03. Inference with Hugging Face Transformers
04. Cloud Deployment on Google Cloud Vertex AI
05. Unlocking Multimodal: Image Understanding & Visual QA
06. 128K Context Window: Document Analysis & Code Review
07. Function Calling & Structured Output
08. LoRA Fine-Tuning with PEFT: Build Your Custom Gemma
09. QLoRA 4-bit Fine-Tuning on Consumer GPUs
10. Native JAX/Flax Fine-Tuning (Official Path)
11. Evaluation & Model Export After Fine-Tuning
12. Gemma + Hermes Agent: Full-Stack Local AI Agent
13. Gemma + LangChain/LlamaIndex: Building RAG Applications
14. Gemma + vLLM: High-Performance Inference Server
15
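For readers who want a head start on the local-deployment chapter, here is a minimal sketch of the Ollama quick start. It assumes Ollama is already installed and that the `gemma3` model tag is available in the Ollama library; substitute whichever Gemma tag fits your hardware.

```shell
# Download a Gemma model to the local Ollama model store
ollama pull gemma3

# Run a one-off prompt against the model from the command line
ollama run gemma3 "Explain LoRA fine-tuning in one sentence."
```

The same local model is then reachable by agent frameworks and RAG stacks through Ollama's HTTP API, which is the integration path the later chapters build on.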