Phase 3 / Ep 14: The First Line of Local Code (TDD: The Battle of Practical Algorithms)
In Phase 2 of docs/task_plan.md, we left the first core-logic development task waiting:
[ ] (TDD) Establish the allocation logic for splitting local Task durations into TimeBlocks (block_allocator.spec.ts).
This is the core of the entire T-Block system: if a user creates a 2-hour "Write Weekly Report" Task, and it collides with the 12 PM lunch break, it must automatically split into a "1-hour morning block" and a "1-hour afternoon block".
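The expected outcome can be written down as plain data. Here is a minimal sketch in TypeScript; the TimeBlock shape and the minutes-since-midnight convention are illustrative assumptions, not the project's actual types:

```typescript
// Illustrative shape -- the real project's types may differ.
// Times are minutes since midnight to keep the arithmetic simple.
interface TimeBlock {
  start: number;
  end: number; // exclusive
}

// A 2-hour "Write Weekly Report" Task starting at 11:00, colliding with the
// 12:00-13:00 lunch break, should come out as two 1-hour blocks:
const expectedBlocks: TimeBlock[] = [
  { start: 11 * 60, end: 12 * 60 }, // 1-hour morning block
  { start: 13 * 60, end: 14 * 60 }, // 1-hour afternoon block
];

// The split must preserve the total duration: 60 + 60 = 120 minutes.
const totalMinutes = expectedBlocks.reduce((sum, b) => sum + (b.end - b.start), 0);
```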
1. Unleash the Agent: Triggering the Task
In your command box, enter this minimalist sentence:
"Execute the task 'Establish the allocation logic for splitting local Task durations into TimeBlocks' from task_plan.md in order."
Notice that we didn't even specify which algorithm to use or how to split the blocks. The Agent will first read task_plan.md for context, then look up the rules in PRD_System_Design.md, and finally bounce off the test-driven-development skill guardrail we created in the previous episode.
2. The Agent's Self-Orchestration (A Real-Time Record of the System's Fully Automated Backend Operation)
At this point, the Agent behind the screen began a series of intense self-iterations:
[Round 1: Red Light]
- It read the TDD rules.
- It dutifully created block_allocator.spec.ts under tests/.
- In the test, it mocked a 12:00-13:00 lunch-break block and inserted a two-hour Task.
- It executed the test! The terminal exploded, spewing red, because the main business file src/allocator.ts didn't even contain empty function stubs.
[Round 2: Striving for Green Light]
- The AI rushed into src/allocator.ts and rapidly typed fifty lines of an interval-merge algorithm to resolve the collisions.
- It then autonomously called the run_command tool to run the test again.
- Error! It had mishandled a critical boundary case (e.g., a block ending exactly on a one-minute boundary), and Vitest threw an AssertionError.
- The AI read the detailed error stack in the terminal and immediately modified the code, automatically initiating a chain of self-corrections!
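For reference, the kind of logic the Agent converged on can be sketched as follows. This is a minimal, simplified version assuming half-open [start, end) intervals and minutes-since-midnight timestamps; the allocate name and signature are assumptions for illustration, not the actual contents of src/allocator.ts:

```typescript
interface Interval {
  start: number; // minutes since midnight
  end: number;   // exclusive: the interval is [start, end)
}

// Split `durationMin` minutes of work, starting no earlier than `earliestStart`,
// into blocks that dodge the given busy intervals (assumed sorted and non-overlapping).
function allocate(durationMin: number, earliestStart: number, busy: Interval[]): Interval[] {
  const blocks: Interval[] = [];
  let cursor = earliestStart;
  let remaining = durationMin;

  for (const b of busy) {
    if (remaining <= 0) break;
    if (b.end <= cursor) continue;                       // busy interval is already behind us
    const free = Math.min(b.start - cursor, remaining);  // open gap before this busy block
    if (free > 0) {
      blocks.push({ start: cursor, end: cursor + free });
      remaining -= free;
    }
    cursor = Math.max(cursor, b.end);                    // jump past the busy interval
  }
  if (remaining > 0) {
    blocks.push({ start: cursor, end: cursor + remaining });
  }
  return blocks;
}

// The 2-hour Task starting at 11:00, with a 12:00-13:00 lunch break,
// splits into an 11:00-12:00 block and a 13:00-14:00 block:
const result = allocate(120, 11 * 60, [{ start: 12 * 60, end: 13 * 60 }]);
```

Treating intervals as half-open avoids emitting zero-length blocks when a task starts exactly where a busy interval begins, which is precisely the class of one-minute boundary bug that tripped the AI in Round 2.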
[Round 3: Refactoring and Archiving]
It completed the fixes in a non-stop troubleshooting cycle, and finally the terminal showed green: ✓ Task chunks correctly split.
Then the AI relaxed, applied the Refactor rules to reduce the cyclomatic complexity of its messy logic, recorded the success in progress.md, and marked the corresponding [ ] item in task_plan.md as [x].
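The archiving step is pure bookkeeping in the plan file: the Agent simply flips the checkbox on the item it was handed at the start of the episode:

```markdown
[x] (TDD) Establish the allocation logic for splitting local Task durations into TimeBlocks (block_allocator.spec.ts).
```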
3. Human Experience
All of the above took perhaps 3 minutes, during which you did nothing, maybe sipped some coffee. But this isn't because the AI is divine: without the frameworks we built in previous episodes to constrain it, it would have come asking you at the very first error: "Master, this code has an error, what should I do?"
Test guardrails are the only container that allows the AI to achieve a self-correction loop.
The algorithm works, everyone is happy. But what if, one day, the problem is truly beyond the AI's ability, and it fails a dozen-plus retries in a row, burning through a pile of Tokens on your bill? In the next lesson, we will teach the king of system safeguards: the Three Strikes Out defense mechanism.