Agentic AI · 9 min read · 2025-01-15

How I Trained 100+ Engineers to Code with AI Agents (Cursor + Claude)

The exact playbooks, workflows, and cultural shifts I used at Octdaily to get 100+ engineers shipping faster with Cursor IDE, Claude, and GitHub Copilot — without sacrificing code quality.

Cursor · Claude · GitHub Copilot · Agentic Workflows · Engineering Culture

Why Agentic AI, Not Just Autocomplete

Most engineers think "AI coding" means autocomplete on steroids. That's table stakes. What actually moves the needle is agentic AI — where the AI understands your codebase, plans multi-file changes, runs commands, reads test output, and iterates until it's done.
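That plan–act–observe loop is the core of what "agentic" means here. A minimal sketch of the control flow, with hypothetical `apply`/`runTests` helpers standing in for whatever the agent actually does (this is an illustration, not Cursor's internal API):

```typescript
// Minimal sketch of an agentic coding loop (hypothetical helpers, not Cursor's API).
// The agent applies a planned change, runs the tests, and iterates on failures.

type TestResult = { passed: boolean; output: string };

interface AgentStep {
  plan: string;          // what the agent intends to change
  apply: () => void;     // apply the multi-file edit
  runTests: () => TestResult;
}

function runAgentLoop(steps: AgentStep[], maxIterations = 5): string[] {
  const log: string[] = [];
  for (const step of steps) {
    step.apply();
    let result = step.runTests();
    let attempts = 1;
    // Iterate until tests pass or the retry budget is exhausted.
    while (!result.passed && attempts < maxIterations) {
      log.push(`retrying "${step.plan}": ${result.output}`);
      step.apply();
      result = step.runTests();
      attempts++;
    }
    log.push(`${step.plan}: ${result.passed ? "done" : "gave up"}`);
  }
  return log;
}
```

The key difference from autocomplete is the `while` loop: the agent reads its own test output and tries again, rather than handing a single suggestion back to the human.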

At Octdaily, I set out to train 100+ engineers not just to use AI tools, but to build entire daily workflows around them.

The Three-Layer AI Stack We Use

Layer 1 — Cursor IDE (Primary Workspace)

Cursor is our standard IDE. Every engineer uses it because:

  • The codebase is indexed and semantically searchable by the AI
  • Agent mode can make changes across dozens of files, run builds, and fix its own errors
  • Project-level rules (.cursorrules) enforce our coding standards automatically

Layer 2 — Claude (Anthropic) for Complex Reasoning

Claude handles tasks that require multi-document reasoning:

  • Breaking a Jira epic into technical sub-tasks with code stubs
  • Reviewing architecture decisions against HIPAA and FHIR compliance
  • Generating comprehensive test plans from acceptance criteria

Layer 3 — GitHub Copilot for Line-by-Line Assistance

Copilot remains valuable for fast, single-file completions, especially in boilerplate-heavy areas like FHIR resource mapping and Angular form generation.
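That mapping boilerplate is the kind of code completions handle well. A minimal sketch of a FHIR `Patient` → form-model mapper (the FHIR fields follow the R4 Patient resource; the flat form shape and `toPatientForm` name are hypothetical):

```typescript
// Subset of the FHIR R4 Patient resource relevant to the form (sketch).
interface FhirHumanName { family?: string; given?: string[]; }
interface FhirPatient {
  resourceType: "Patient";
  id?: string;
  name?: FhirHumanName[];
  birthDate?: string; // YYYY-MM-DD
  gender?: "male" | "female" | "other" | "unknown";
}

// Flat model consumed by an Angular reactive form (hypothetical shape).
interface PatientFormModel {
  firstName: string;
  lastName: string;
  birthDate: string;
  gender: string;
}

// The repetitive field-by-field mapping that Copilot typically autocompletes.
function toPatientForm(patient: FhirPatient): PatientFormModel {
  const name = patient.name?.[0];
  return {
    firstName: name?.given?.[0] ?? "",
    lastName: name?.family ?? "",
    birthDate: patient.birthDate ?? "",
    gender: patient.gender ?? "unknown",
  };
}
```

Each line of the return object follows the same shape, which is exactly the pattern an inline-completion model picks up after the first field or two.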

The Daily Workflow Playbook

Morning: AI-Assisted Sprint Planning

1. Paste Jira story into Claude:
   "Break this user story into .NET 8 API + Angular 17 tasks.
    Output: file paths to create, interfaces to define, test cases."
 
2. Claude outputs a structured task list with:
   - Files to create/modify
   - Interface contracts
   - Edge cases to handle
   - FHIR resources involved
 
3. Engineer reviews and turns this into a Cursor session
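The structured task list in step 2 is easiest to review when it has a fixed shape. A hypothetical TypeScript type an engineer might use to sanity-check Claude's output before opening a Cursor session (the field names mirror the bullets above; `isActionable` is an illustrative helper, not part of any tool):

```typescript
// Hypothetical shape for Claude's structured task-list output (step 2 above).
interface TaskBreakdown {
  filesToCreate: string[];
  filesToModify: string[];
  interfaces: string[];     // interface contracts to define
  edgeCases: string[];
  fhirResources: string[];  // e.g. "Patient", "Observation"
}

// Quick sanity check before handing the plan to a Cursor session:
// a breakdown with no files and no interfaces is not actionable.
function isActionable(plan: TaskBreakdown): boolean {
  return (plan.filesToCreate.length + plan.filesToModify.length) > 0
    && plan.interfaces.length > 0;
}
```

Asking Claude for JSON matching a schema like this, rather than free prose, makes the review step in 3 mechanical instead of interpretive.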

During Development: Cursor Agent Mode

Our .cursorrules file includes FHIR-specific context:

# .cursorrules
- All patient data models must implement IFhirResource
- Use Azure.Health.DataServices SDK for FHIR operations
- Every API endpoint must have a corresponding FHIR capability statement entry
- PHI fields must be annotated with [SensitiveData] attribute
- Tests use xUnit + NSubstitute, follow AAA pattern

The agent follows these rules automatically across every generated file.
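The AAA (Arrange-Act-Assert) pattern those rules mandate looks like this. The team's actual tests use xUnit + NSubstitute in C#; this TypeScript version with a hand-rolled stub only illustrates the shape, and every name in it is hypothetical:

```typescript
// Illustrative unit under test: deactivate a patient record via a repository.
interface PatientRepository {
  findById(id: string): { id: string; active: boolean } | undefined;
}

function deactivatePatient(repo: PatientRepository, id: string): boolean {
  const patient = repo.findById(id);
  if (!patient) return false;
  patient.active = false;
  return true;
}

function testDeactivatePatient(): void {
  // Arrange: stub the repository with one known patient (NSubstitute's role in C#).
  const stored = { id: "p1", active: true };
  const repo: PatientRepository = {
    findById: (id) => (id === "p1" ? stored : undefined),
  };

  // Act: call the unit under test.
  const result = deactivatePatient(repo, "p1");

  // Assert: verify both the return value and the side effect.
  if (!result || stored.active) throw new Error("deactivatePatient failed");
}
```

Keeping the three phases visually separated is what the rule enforces; the agent reproduces that structure in every test it generates.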

PR Review: AI Pre-Review Before Human Review

Before any PR is assigned to a human reviewer, our CI pipeline runs:

# .github/workflows/ai-review.yml
- name: Claude PR Review
  run: |
    gh pr diff $PR_NUMBER | \
    claude review \
      --check-hipaa-compliance \
      --check-fhir-standards \
      --suggest-tests \
      --output pr-review.md
    gh pr comment "$PR_NUMBER" --body-file pr-review.md
  env:
    PR_NUMBER: ${{ github.event.pull_request.number }}
    GH_TOKEN: ${{ github.token }}

This catches 70% of issues before a human even opens the diff.

Training Program Structure

Week 1 — Foundations

  • Prompt engineering for code tasks
  • How LLMs "see" a codebase (context windows, embeddings)
  • Cursor agent mode: what it can and can't do

Week 2 — Daily Workflow Integration

  • AI-first story breakdown
  • Pair programming with AI (you drive, AI co-pilots)
  • When NOT to use AI (security-sensitive, PHI-handling code needs extra review)

Week 3 — Advanced Patterns

  • Multi-agent orchestration (AI generates tests, AI fixes failures, AI documents)
  • Custom cursorrules for your team's patterns
  • Building your personal AI prompt library
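A personal prompt library can be as simple as a keyed map of template functions, so prompts are versioned in the repo instead of scattered across chat history. A minimal sketch (the template names and wording are hypothetical; the first template reuses the story-breakdown prompt from the morning playbook):

```typescript
// Hypothetical personal prompt library: named templates with typed parameters.
const promptLibrary = {
  breakdownStory: (story: string) =>
    "Break this user story into .NET 8 API + Angular 17 tasks.\n" +
    "Output: file paths to create, interfaces to define, test cases.\n\n" + story,
  explainFile: (path: string) =>
    `Explain what ${path} does, its public surface, and which FHIR resources it touches.`,
} as const;

// Usage: pick a template, fill it in, paste the result into Claude or Cursor.
const prompt = promptLibrary.breakdownStory(
  "As a clinician, I can view a patient's allergy list.",
);
```

Because the templates are plain functions, they can be reviewed in PRs and shared across the team like any other code.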

Measured Outcomes

  • 40% reduction in average feature delivery time
  • 60% fewer back-and-forth PR comments (pre-review catches them)
  • 3x faster onboarding for new engineers (AI explains the codebase)
  • Zero HIPAA/FHIR standard violations slipping into production since the program started