
The Human-AI Peer Programming Process

The proven UCM methodology for collaborative software development with AI. Build better software through structured phases, continuous review, and incremental development.

📋 Source Process: This guide is based on the official UCM process documented in utaba/main/guidance/processes/ai-peer-process-software.md

Key Concepts

  • You and the AI work as peers in software development
  • Commit to incremental development and continuous review
  • Build and maintain documentation
  • Provide relevant context to the AI

The Three-Phase Process

The UCM Human-AI Peer Programming Process consists of three structured phases, each designed for continuous collaboration and incremental development.

Planning Phase

Understanding requirements and creating specifications

Step 1: Discuss High Level Requirements

Start by collaboratively understanding what you're building. This is a conversation, not a requirements dump.

Key Discussion Areas:

  • What problem are we solving?
  • Who are the target users?
  • What are the core features needed?
  • What are the business or technical constraints?
  • What does success look like?

Step 2: Question, Add Input, Review, Refine

The AI should ask clarifying questions and provide input based on experience. This is collaborative refinement, not passive note-taking.

AI Input Examples:

"Have you considered how this integrates with existing systems?"
"What about user authentication and data privacy requirements?"
"This feature might have scalability implications - should we plan for that?"

Iterative Refinement:

Expect multiple rounds of discussion. Each iteration should add clarity and detail while identifying potential issues early.

Step 3: Generate Product Specifications

Create formal documentation that captures the refined requirements and serves as the foundation for design and implementation.

Output: specifications/product-specification.md

Uses template: utaba/main/guidance/templates/product-specification-template.md

Contains: Clear problem statement, user personas, feature requirements, constraints, success criteria, and acceptance criteria.

Design Phase

Architecture planning and implementation strategy

Step 1: Define Architecture

Design the high-level system architecture based on the requirements. Focus on major components, data flow, and system boundaries.

Key Activities:

  • Identify major system components and services
  • Define data flow and integration points
  • Establish system boundaries and interfaces
  • Consider scalability and performance needs

Step 2: Reference Standards and Patterns

Select appropriate design patterns, coding standards, and best practices. Document these decisions for consistent implementation.

Output: docs/standards.md

References: utaba/main/guidance/development/development-guidelines.md
Contains coding standards, patterns to use, testing approach, and quality criteria.

Step 3: Define Constraints and Technology Stack

Make explicit decisions about technology choices, deployment constraints, performance requirements, and other technical boundaries.

Document Decisions:

  • Programming languages and frameworks
  • Database and data storage approach
  • Hosting and deployment strategy
  • Performance and scalability targets
  • Security and compliance requirements

Step 4: Generate Architecture Documentation

Create comprehensive architecture documentation that consolidates all design decisions, patterns, and technology choices into a formal reference document.

Output: docs/architecture.md

Uses template: utaba/main/guidance/templates/architecture-template.md

Consolidates all architectural decisions, system design, technology stack, and integration points into a single reference document.

Step 5: Generate Implementation Plan

Break down the project into phases and features. Map requirements to release versions and identify dependencies between components.

Output: plan/implementation-plan.md

Uses template: utaba/main/guidance/templates/phased-implementation-plan-template.md

Contains phased rollout, feature prioritization, dependencies, and timeline estimates.

Sprint/Increment Phase

Incremental development with continuous review

🎯 Core Principle: Build Small, Then STOP!

The fundamental rule of this phase is to implement small increments and pause for human review. This allows for course correction and ensures the AI doesn't build too much without oversight.

Step 1: Build a Small Part Then STOP!

Implement one feature or component at a time. Focus on getting something working rather than building everything at once.

✅ Good Increment Examples (one is sketched below):

  • Basic user registration form (no validation yet)
  • Simple data model for core entity
  • One API endpoint with happy path
  • Basic component rendering static data
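As a concrete illustration, here is a minimal TypeScript sketch of the "one API endpoint with happy path" increment, using Node's built-in http module. The Task entity and the /tasks route are hypothetical placeholders rather than part of the UCM guidance; validation, error handling, and persistence are deliberately left for later increments.

  // Small increment: one read-only endpoint, happy path only.
  import { createServer } from "node:http";

  interface Task {          // simple data model for a core entity
    id: number;
    title: string;
  }

  const tasks: Task[] = [{ id: 1, title: "Draft product specification" }];

  const server = createServer((req, res) => {
    if (req.method === "GET" && req.url === "/tasks") {
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify(tasks));
      return;
    }
    // Anything else is out of scope for this increment.
    res.writeHead(404);
    res.end();
  });

  server.listen(3000); // build this much, then STOP and review together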

❌ Too Large Examples:

  • Complete user management system
  • Full authentication with all edge cases
  • Entire feature with all business logic
  • Multiple related components at once

Step 2: Review, Question, Optimise, Iterate

After each increment, stop and review what was built. This is where human judgment and AI capabilities combine for optimal results.

Review Questions:

  • Does this match what we intended?
  • Are there any issues with the approach?
  • What should we adjust before continuing?
  • Does this integrate well with existing code?
  • Are we on track with the overall plan?

Step 3: When Things Go Wrong: "Reflect"

When implementation doesn't work as expected, pause and analyze what happened. This reflection improves both the current project and future development.

Reflection Process:

1. What went wrong? - Identify the specific issue
2. Why did it happen? - Root cause analysis
3. What should we do differently? - Strategy adjustment
4. How do we prevent this? - Process improvement

Step 4: Are Our Strategies Working?

Regularly evaluate whether your chosen approaches are delivering the expected results. Be willing to pivot when strategies aren't working.

Strategy Evaluation:

  • Is the architecture scaling as expected?
  • Are our development patterns efficient?
  • Is the chosen technology stack working well?
  • Are we meeting quality and performance goals?

Step 5: Why Did You Create X When Documentation Said Y?

Address gaps between documentation and implementation. This keeps the AI aligned with the intended approach and improves future development.

Common Alignment Issues:

  • Implementation differs from architectural design
  • Code doesn't follow documented standards
  • Features implemented differently than specified
  • Dependencies or constraints ignored

Step 6: Optimise the Documentation

Update documentation based on implementation learnings. This creates a feedback loop that improves both current and future projects.

Documentation Updates:

  • Refine architectural decisions based on implementation reality
  • Update standards based on what actually works
  • Adjust specifications based on user feedback
  • Document lessons learned and best practices

Step 7: Update the Implementation Plan

Keep the implementation plan current based on progress and learnings. The AI should proactively maintain this as a living document.

Plan Updates Include:

  • Mark completed features and components
  • Adjust timeline estimates based on actual progress
  • Update dependencies and prerequisites
  • Reprioritize remaining work based on learnings

🎯 Key Success Factors

📋 Document Everything

Maintain specifications, architecture, standards, and implementation plans throughout the project.

Files: specifications/, docs/, plan/
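Taken together, the outputs named in this guide form a small documentation tree, for example:

  specifications/product-specification.md   (Planning, Step 3)
  docs/standards.md                         (Design, Step 2)
  docs/architecture.md                      (Design, Step 4)
  plan/implementation-plan.md               (Design, Step 5)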

🔄 Small Increments

Build small parts and stop for review. This prevents over-building and enables course correction.

Principle: Build → Stop → Review → Iterate

🤝 True Collaboration

Work as peers, not as human-tool. Both parties contribute ideas, questions, and solutions.

Mode: Peer programming, not code generation

📈 Continuous Learning

Reflect on failures, optimize documentation, and improve processes throughout development.

Focus: Process improvement and knowledge capture

Git Integration

The process integrates with version control for tracking progress and enabling review:

Frequent Commits: One feature or fix per commit after peer review
Human Control: You decide the level of interaction with version control
Review Points: Git commits serve as checkpoints for code review
Documentation Sync: Keep docs and code in sync through version control
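As an illustration of this cadence, a sequence of commit messages might read as follows (the messages are invented for this example, reusing the increment examples above):

  • Add basic user registration form (no validation yet)
  • Add simple data model for core entity
  • Update docs/architecture.md to reflect the reviewed data model

Each commit lands only after its increment has been reviewed together.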

💡 Takeaway

This process is designed for small to medium software projects where the AI and the human work as true peers. The key is small incremental phases with human oversight, with context maintained through specifications, plans, and technical documents, so that the complementary strengths of AI speed and human judgment combine effectively.