Code Generation & Scaffolding in Practice

Updated on 4/15/2026


This is the 14th lesson in the Hermes Agent tutorial series.


Lesson 14 | Code Generation & Scaffolding in Practice

Subtitle: From API design to full-stack code generation. Master AI-assisted architectural exploration and TDD workflows.

Learning Objectives

In this lesson, you will delve into two core capabilities of Hermes Agent in the field of software engineering: Code Generation and Project Scaffolding. Upon completing this tutorial, you will be able to:

  1. Understand the Agent-Driven Development (ADD) Pattern: Learn how to integrate an AI Agent into your daily development workflow, transitioning from an instruction-giver to an architectural guide.
  2. Master the project_scaffolding Skill: Generate standardized project structures for frameworks like FastAPI, Express.js, and React with a single command, significantly boosting project kickoff efficiency.
  3. Become Proficient with the code_writer Skill: Master the art of directing the Agent to write, modify, refactor, and explain code through precise natural language instructions.
  4. Practice an AI-Assisted TDD Workflow: Learn how to combine Test-Driven Development (TDD) with the Agent by writing test cases first, then instructing the Agent to generate business logic that passes those tests, ensuring code quality.
  5. Explore Rapid Architectural Prototyping: Use the Agent to quickly generate prototypes with different tech stacks, enabling you to validate technology choices and architectural designs at a lower cost.

Core Concepts Explained

Before diving into the practical exercises, we must understand the core concepts underpinning this lesson. This is not just about "making AI write code"; it's about a new paradigm of human-computer collaborative software development.

1. Agent-Driven Development (ADD)

Agent-Driven Development (ADD) is a development methodology where the developer (human) acts as an architect, product manager, and quality assurance (QA) lead, while the AI Agent plays the role of a junior to mid-level developer.

  • Human's Role: Define requirements, design system architecture, break down tasks, write critical test cases, and review and merge the code generated by the Agent.
  • Agent's Role: Execute specific coding tasks based on clear instructions, such as creating project structures, writing module code, implementing API endpoints, generating unit test templates, and fixing bugs.

The advantage of this model is that it frees developers from tedious, repetitive "manual" coding, allowing them to focus on higher-level creative work. The Agent acts like a tireless pair-programming partner proficient in multiple languages and frameworks.

2. project_scaffolding Skill

Imagine the scenario of starting a new project: creating directory structures, configuring package.json or pyproject.toml, setting up a linter, writing a Dockerfile, creating a .gitignore... This process is both time-consuming and error-prone.

The project_scaffolding Skill was created to solve this exact problem. It is an advanced, built-in skill in Hermes Agent that comes pre-packaged with best-practice templates for various popular tech stacks. You simply provide the project type and name, and it will build a well-structured, ready-to-use initial project for you in seconds.

How It Works: The Skill encapsulates multiple project templates (e.g., FastAPI, React+Vite, Node.js+Express). When invoked, it selects the appropriate template based on the project_type parameter, copies the file structure, and dynamically replaces variables like the project name. This is far more powerful than simple mkdir and touch commands.
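The mechanism can be sketched in a few lines of Python. Everything below — the template directory layout and the `{{project_name}}` placeholder — is a hypothetical illustration of the idea, not Hermes Agent's actual implementation:

```python
import shutil
from pathlib import Path

def scaffold(template_dir: Path, project_type: str, project_name: str, dest: Path) -> Path:
    """Copy a project template and substitute the project-name placeholder."""
    src = template_dir / project_type      # e.g. templates/fastapi
    target = dest / project_name
    shutil.copytree(src, target)           # copy the whole file structure
    for path in target.rglob("*"):         # then replace template variables
        if path.is_file():
            text = path.read_text()
            path.write_text(text.replace("{{project_name}}", project_name))
    return target
```

With this sketch, a template file `templates/fastapi/README.md` containing `# {{project_name}}` would come out as `# todo_api` in the generated project.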

3. code_writer Skill

This is one of Hermes Agent's most powerful skills. The code_writer does more than just pass your prompt to an LLM and write the result to a file. It is an intelligent, context-aware writing tool.

Core Features:

  • File Operations: Can create new files, overwrite existing ones, or insert/modify code at specific locations (advanced usage).
  • Context Reading: Before generating code, it can be instructed to read the contents of one or more existing files. This is crucial as it allows the Agent to understand the current state of the project—such as defined functions, data models, or test cases—and generate new code that is compatible and consistent with the existing codebase.
  • Precise Instructions: Its effectiveness is highly dependent on the quality of your prompt. A good prompt should include:
    • Goal: What functionality needs to be implemented? (e.g., "Implement an API endpoint to create a user")
    • Constraints: What rules must be followed? (e.g., "Use a Pydantic model for data validation," "Must return a 201 Created status code")
    • Context: What existing code should be referenced? (e.g., "Reference the User model in app/models.py")
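The three ingredients above can be combined mechanically. Here is a hypothetical helper (not part of Hermes Agent) that assembles a prompt from a goal, a list of constraints, and a list of context files:

```python
def build_prompt(goal: str, constraints: list[str], context: list[str]) -> str:
    """Assemble a code_writer-style prompt from goal, constraints, and context hints."""
    lines = [f"Goal: {goal}"]
    lines += [f"Constraint: {c}" for c in constraints]
    lines += [f"Context: reference {c}" for c in context]
    return "\n".join(lines)
```

Even if you write prompts by hand, keeping this Goal/Constraint/Context shape in mind makes instructions easier for the Agent to follow.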

4. TDD and Agent Collaborative Workflow

The classic cycle of Test-Driven Development (TDD) is "Red-Green-Refactor":

  1. Red: Write a failing test case.
  2. Green: Write the minimum amount of code required to make the test pass.
  3. Refactor: Improve the code's structure without changing its external behavior.

When Hermes Agent is introduced, this process evolves into a human-AI collaborative dance:

  1. Red (Human): The developer writes a clear, specific, and failing test case. This is the crucial step of defining the "done" criteria.
  2. Green (Agent): The developer issues a command to the Agent with the goal of making the newly written test pass. For example: "The test_create_item test in tests/test_api.py is failing. Please implement the POST /items route in app/main.py to satisfy the test case requirements."
  3. Verify (Human): The developer runs the tests to confirm that the Agent's code has turned the test "green."
  4. Refactor (Human/Agent): The developer reviews the code. If refactoring is needed, they can do it themselves or issue a refactoring command to the Agent: "Please extract the database logic from app/main.py into a new app/crud.py module."

The beauty of this workflow is that the test case becomes the most precise and unambiguous communication language between the human and the Agent. It sets clear boundaries and a verifiable goal for the Agent's creativity, ensuring the quality and reliability of the generated code.
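In miniature, the contract looks like this: the human-written test pins down the behavior, and the Agent-written implementation only has to satisfy it. The `slugify` function here is invented purely for illustration:

```python
# Red: the human writes a failing test that defines "done".
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me  ") == "trim-me"

# Green: the Agent writes the minimum code that satisfies the test.
def slugify(title: str) -> str:
    return "-".join(title.strip().lower().split())

test_slugify()  # the test is the spec; if this passes, the task is done
```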


💻 Hands-on Demo

Now, let's walk through a complete end-to-end example to build a simple Todo List API based on FastAPI. We will strictly follow the AI-assisted TDD workflow.

Scenario: Building a Todo API

Our API needs to have the following basic functionalities:

  • Create a Todo item (POST /todos)
  • Get all Todo items (GET /todos)

Step 1: Initialize the Project with project_scaffolding

First, we'll have Hermes Agent create a standard FastAPI project structure for us.

# Instruct the Agent to create a FastAPI project named todo_api
hermes --skill project_scaffolding --args '{"project_type": "fastapi", "project_name": "todo_api"}'

After the Agent executes, you will see output similar to the following, and a todo_api directory will be generated:

> Executing Skill: project_scaffolding
> Arguments: {'project_type': 'fastapi', 'project_name': 'todo_api'}
> [project_scaffolding] Project 'todo_api' created successfully with 'fastapi' template.

Now, let's inspect the generated project structure:

cd todo_api
tree .

You will see a very professional structure:

.
├── app
│   ├── __init__.py
│   ├── main.py       # Main FastAPI application file
│   ├── models.py     # Data models
│   └── crud.py       # Database operations logic
├── tests
│   ├── __init__.py
│   └── test_main.py  # Test files
├── .gitignore
├── pyproject.toml    # Project dependencies and configuration (using Poetry)
└── README.md

Step 2: Define the Data Model

Before we start writing business logic, we need to define our data model. We want a Todo item to contain an id, title, description, and a done status.

We will use the code_writer Skill to create the Pydantic model.

# Instruct the Agent to define the Todo model in app/models.py
hermes --skill code_writer --args '{
  "file_path": "app/models.py",
  "prompt": "Create Pydantic models for our Todo application: a `TodoBase` model with `title` (str), `description` (str, optional), and `done` (bool, default=False), and a `Todo` model that inherits from `TodoBase` and adds an `id` (int) field."
}'

Check the app/models.py file. The Agent should have generated the following content:

# app/models.py
from typing import Optional
from pydantic import BaseModel

class TodoBase(BaseModel):
    title: str
    description: Optional[str] = None
    done: bool = False

class Todo(TodoBase):
    id: int

    class Config:
        orm_mode = True

Note: The Agent might add configuration like orm_mode based on its training data, which is useful when integrating with a database ORM like SQLAlchemy. Be aware that orm_mode is Pydantic v1 syntax; in Pydantic v2 the equivalent is model_config = ConfigDict(from_attributes=True).

Step 3: The TDD Flow - Implementing the "Create Todo" Endpoint

Now we enter the core TDD cycle.

3.1 (Red) Write a Failing Test

First, we'll write a test case in tests/test_main.py to test the Todo creation functionality. At this point, no relevant logic has been implemented in app/main.py, so this test is guaranteed to fail.

Open tests/test_main.py in your favorite editor and add the following content:

# tests/test_main.py
from fastapi.testclient import TestClient
from app.main import app

client = TestClient(app)

def test_read_todos_initial():
    # Defined first so it runs before the creation test mutates the shared list.
    response = client.get("/todos/")
    assert response.status_code == 200
    assert response.json() == []

def test_create_todo():
    response = client.post(
        "/todos/",
        json={"title": "Test Todo", "description": "This is a test description."}
    )
    assert response.status_code == 201, response.text
    data = response.json()
    assert data["title"] == "Test Todo"
    assert data["description"] == "This is a test description."
    assert data["done"] is False
    assert "id" in data

We've added two tests: one to confirm the list is initially empty and one for creation. The order matters: pytest runs tests in definition order, and the creation test adds an item to the shared in-memory list, so the "empty list" check must come first.

Now, install the dependencies and run the tests. We'll use Poetry (as configured by the scaffolding).

# Install project dependencies
poetry install

# Run pytest
poetry run pytest

You will see red output indicating test failures. Both tests fail with 404 Not Found errors, because neither the POST nor the GET /todos/ route exists yet. The failure for test_create_todo, for example, looks like this:

=========================== FAILURES ===========================
_________________________ test_create_todo _________________________

    def test_create_todo():
        response = client.post(
            "/todos/",
            json={"title": "Test Todo", "description": "This is a test description."}
        )
>       assert response.status_code == 201, response.text
E       AssertionError: {"detail":"Not Found"}
E       assert 404 == 201

tests/test_main.py:11: AssertionError

3.2 (Green) Instruct the Agent to Write the Code

Now we have a clear objective: make the test_create_todo test pass. We will hand this task over to Hermes Agent and have it read the test file for context.

# Instruct the Agent to implement the POST /todos/ route to pass the test
hermes --skill code_writer --args '{
  "file_path": "app/main.py",
  "prompt": "The test `test_create_todo` in `tests/test_main.py` is failing with a 404 error. Please implement the `POST /todos/` endpoint in `app/main.py`. It should: \n1. Accept a Todo item payload (without an ID). \n2. For now, use a simple in-memory list as a database. \n3. Assign a new ID to the created item. \n4. Return the complete Todo item (with ID) with a 201 Created status code. \n5. Also implement the `GET /todos/` endpoint to return the list of all todos.",
  "read_context_files": ["tests/test_main.py", "app/models.py"]
}'

Key points of this prompt:

  • A clear problem: "The test test_create_todo ... is failing".
  • A clear instruction: "implement the POST /todos/ endpoint".
  • Specific implementation details: "use a simple in-memory list," "assign a new ID," "Return ... with a 201 status code."
  • Contextual awareness: read_context_files tells the Agent to read the test case and data model files, so it knows to import the Todo model and understands the expected structure of the return data.
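Conceptually, read_context_files just prepends the referenced files' contents to the prompt before the model sees it. A rough sketch of that idea — the real Skill's internals are not documented here, so treat this as an assumption:

```python
from pathlib import Path

def with_context(prompt: str, context_files: list[str]) -> str:
    """Prepend each context file to the prompt so the model can stay consistent with it."""
    sections = []
    for name in context_files:
        # Label each file so the model knows where the code came from.
        sections.append(f"--- {name} ---\n{Path(name).read_text()}")
    sections.append(prompt)
    return "\n\n".join(sections)
```

This is why providing the test file and the models file matters: the model literally reads them before generating a single line.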

After the Agent executes, the contents of app/main.py will be updated, likely as follows:

# app/main.py
from typing import List
from fastapi import FastAPI, status
from .models import Todo, TodoBase

app = FastAPI()

# In-memory database
db: List[Todo] = []
next_id = 1

@app.post("/todos/", response_model=Todo, status_code=status.HTTP_201_CREATED)
def create_todo(todo: TodoBase):
    global next_id
    new_todo = Todo(id=next_id, **todo.dict())  # Pydantic v1 style; v2 uses todo.model_dump()
    db.append(new_todo)
    next_id += 1
    return new_todo

@app.get("/todos/", response_model=List[Todo])
def read_todos():
    return db

3.3 (Verify) Run the Tests Again

Now, let's run the tests again to verify the Agent's work.

poetry run pytest

This time, you should see a green output indicating that all tests have passed!

========================= 2 passed in 0.05s =========================

We have successfully implemented our first API endpoint with zero manual coding (besides the test case) by using the TDD + Agent workflow.

Step 4: Architectural Exploration and Full-Stack Generation

The power of this pattern doesn't stop here.

Architectural Exploration: Suppose you're now unsure whether to use an in-memory database or SQLite. You can quickly issue a new instruction to the Agent:

"Refactor the current app/main.py. Instead of an in-memory list, use Python's built-in sqlite3 module to store the todos in a file named todos.db. Create a function to initialize the database table if it doesn't exist."

The Agent can generate a new version using SQLite for you in minutes, allowing you to quickly evaluate and compare the pros and cons of different persistence solutions.

Front-End Code Generation: Now that the back-end API is ready, we can even ask the Agent to generate a simple front-end to consume it.

# First, create the front-end project structure
hermes --skill project_scaffolding --args '{"project_type": "react", "project_name": "frontend"}'

# Then, instruct the Agent to write a React component
hermes --skill code_writer --args '{
  "file_path": "frontend/src/components/TodoList.jsx",
  "prompt": "Create a React functional component named `TodoList`. It should use the `useState` and `useEffect` hooks to fetch data from our backend API at `http://localhost:8000/todos/`. It should then display the list of todos. For each todo, display its title and whether it is done or not. Handle loading and error states."
}'

The Agent will generate a fully functional React component in frontend/src/components/TodoList.jsx, saving you a significant amount of time writing boilerplate code for data fetching and state management.


Commands Used

  • hermes --skill project_scaffolding --args '{"project_type": "...", "project_name": "..."}'
    • Used to create a new project based on a preset template.
  • hermes --skill code_writer --args '{"file_path": "...", "prompt": "...", "read_context_files": [...]}'
    • The core code generation command. file_path specifies the target file, prompt provides detailed instructions, and read_context_files (optional) provides context.
  • poetry install
    • Installs the Python dependencies defined in pyproject.toml.
  • poetry run pytest
    • Runs the pytest test framework within the project's virtual environment.
  • tree .
    • Displays the files and folders in the current directory in a tree-like structure.

Key Takeaways

  1. Mindset Shift: Transition from a "code implementer" to a "system designer and instruction giver." Your core job becomes breaking down problems, defining interfaces, and writing high-quality tests.
  2. Structure First: Using project_scaffolding ensures that your project has a standard, robust structure from the very beginning, avoiding the repetitive labor of "setting up the scaffolding."
  3. Tests as Documentation, Tests as Contracts: In Agent-Driven Development, test cases are the most reliable bridge between human intent and AI execution. A good test is a perfect prompt.
  4. Context is Key: The read_context_files parameter of code_writer is central to its power. Always remember to provide sufficient context for your Agent's tasks to achieve higher-quality output.
  5. Iterate and Verify: Don't expect the Agent to generate perfect, final code in one go. Adopt a cycle of "instruct-generate-verify-iterate" to build out the project incrementally. This aligns perfectly with the principles of agile development.

Through this lesson, you have mastered a powerful workflow for modern software development using Hermes Agent. This will not only dramatically increase your development efficiency but also allow you to focus your energy on the truly challenging and creative aspects of architectural design and problem-solving.
