# Diachron: Provenance for AI-Written Code
## The Problem
AI is writing a meaningful chunk of new code, but the engineering workflow still assumes a human authored it. When something breaks, you can `git blame` the lines, but you can’t answer the questions that matter:
- **What was the intent?**
- **What actions actually changed the repo?**
- **Was anything verified after the change?**
- **What’s the rollback surface area?**
Most “LLM observability” tools trace **inference**, not **software state change**. For engineering teams (and even solo devs), that’s the gap.
## The Thesis
Software engineering needs provenance **from intent to evidence**, not just tokens and latency.
Diachron’s core idea is to bind each AI session to a concrete chain:
```
prompt → tool call(s) → diff hunks → commands/tests → verification → PR/merge
```
If you can reconstruct that chain, you can generate:
- PR narratives that ship with proof
- “semantic blame” that explains *why* a line exists
- targeted rollback patches with high confidence
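To make that chain concrete, here is a rough sketch of it as a data model. All names below are illustrative, not Diachron’s actual schema:

```typescript
// Illustrative data model for the intent → evidence chain.
// These names are hypothetical, not Diachron's actual schema.

interface Session {
  id: string;
  startedAt: string;          // ISO timestamp
  exchanges: Exchange[];
}

interface Exchange {
  prompt: string;             // what the user asked for
  toolCalls: ToolCall[];      // edits/commands the agent issued
}

interface ToolCall {
  kind: "edit" | "write" | "run";
  target: string;             // file path, or command line for "run"
  diffHunks?: string[];       // unified-diff hunks for edit/write
}

interface Verification {
  command: string;            // e.g. "npm test"
  exitCode: number;           // what actually happened, not what was claimed
  summary: string;            // e.g. "47 passed, 0 failed"
}

interface ProvenanceChain {
  session: Session;
  verification: Verification[];
  git: { branch: string; commits: string[]; pr?: number };
}
```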
## The Wedge: PR Evidence Packs
The first “can’t go back” feature is an evidence pack that turns a PR description into something closer to an audit log:
```markdown
## What Changed
- src/auth/login.ts: Added refresh token logic (+45 lines)
- src/auth/token.ts: New token storage module (+120 lines)
## Why (Intent)
User asked: "Fix the 401 errors on page refresh"
## Evidence Trail
- Commands executed: npm test, npm run build
- Tests: ✅ 47 passed, 0 failed
- Build: ✅ Success
```
The goal is to make “trust, but verify” cheap.
### PR Evidence Pack Example (Expanded)
Below is the level of detail Diachron is designed to generate as a **single PR comment**. It’s intentionally skimmable for reviewers, but structured enough to be auditable.
```markdown
## Diachron Evidence Pack
### Summary
This PR was created with AI assistance. Diachron captured the chain from intent → diff hunks → verification.
### Intent
- Prompt: "Fix the 401 errors on page refresh"
- Scope: src/auth/*
### What Changed (by file)
- src/auth/login.ts (+45 / -12)
- src/auth/token.ts (+120 / -0)
- src/auth/__tests__/token.test.ts (+96 / -3)
### Verification
- Commands: npm test, npm run build
- Tests: ✅ 47 passed, 0 failed
- Build: ✅ Success
### Coverage
- Matched events → commits: 87%
- Unmatched events: 3 (notes/log output)
### Rollback
- Generated reverse patch available for this session
```
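Generating that comment can be little more than a template over the chain. A minimal sketch, reusing the hypothetical types from the data-model sketch above (a real generator would also compute the coverage and rollback sections from event → commit matching):

```typescript
// Render a ProvenanceChain (hypothetical type above) into
// evidence-pack markdown.

function renderEvidencePack(chain: ProvenanceChain): string {
  const intent = chain.session.exchanges[0]?.prompt ?? "(no prompt captured)";
  const files = [
    ...new Set(
      chain.session.exchanges
        .flatMap((ex) => ex.toolCalls)
        .filter((tc) => tc.kind !== "run") // keep only file edits
        .map((tc) => tc.target)
    ),
  ].map((path) => `- ${path}`);
  const checks = chain.verification.map(
    (v) => `- ${v.command}: ${v.exitCode === 0 ? "✅" : "❌"} ${v.summary}`
  );
  return [
    "## Diachron Evidence Pack",
    "### Intent",
    `- Prompt: "${intent}"`,
    "### What Changed (by file)",
    ...files,
    "### Verification",
    ...checks,
  ].join("\n");
}
```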
## The Linking Graph (Moat)
Diachron isn’t a single log. It’s a linking graph that lets you trace any line of code back to the specific AI intent and evidence behind it.
```
session
├─ exchanges (prompts / responses)
├─ tool_calls (edit/write/run)
├─ events (diff hunks, file paths)
├─ verification (tests, builds)
└─ git_context (branch, commits, PR)
```
This graph is what enables “semantic blame” and high-confidence rollback.
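In graph terms, “semantic blame” is a reverse lookup: find the hunk that introduced a line, then walk back to the tool call and the prompt. A hand-rolled sketch, again reusing the hypothetical types above; hunk matching is simplified to parsing unified-diff headers:

```typescript
// Check whether a line number falls inside any hunk's new-file range,
// by parsing unified-diff headers ("@@ -a,b +c,d @@").
function hunkCoversLine(hunks: string[], line: number): boolean {
  return hunks.some((hunk) => {
    const m = /^@@ -\d+(?:,\d+)? \+(\d+)(?:,(\d+))? @@/.exec(hunk);
    if (!m) return false;
    const start = Number(m[1]);
    const count = m[2] ? Number(m[2]) : 1;
    return line >= start && line < start + count;
  });
}

interface BlameResult {
  prompt: string;      // the intent behind the line
  toolCall: ToolCall;  // the edit that introduced it
  verified: boolean;   // did any verification pass in this session?
}

function semanticBlame(
  chain: ProvenanceChain,
  file: string,
  line: number
): BlameResult | undefined {
  for (const exchange of chain.session.exchanges) {
    for (const call of exchange.toolCalls) {
      if (call.target === file && hunkCoversLine(call.diffHunks ?? [], line)) {
        return {
          prompt: exchange.prompt,
          toolCall: call,
          verified: chain.verification.some((v) => v.exitCode === 0),
        };
      }
    }
  }
  return undefined; // line predates this session, or nothing matched
}
```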
## Key Design Choices
### Local-first by default
The data that makes provenance valuable (prompts, diffs, repo context) is also the data developers least want to ship to a vendor. Diachron is designed so the default mode works locally, with exportable artifacts when needed.
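A local-first store can be as simple as an append-only log inside the repo, so nothing leaves the machine unless you export it. A sketch under that assumption (the `.diachron/` path and event shape are illustrative, not Diachron’s actual storage format):

```typescript
// Local-first sketch: append provenance events to a JSONL file under
// .diachron/ in the repo. Path and event shape are assumptions.

import { appendFileSync, mkdirSync } from "node:fs";
import { join } from "node:path";

function recordEvent(repoRoot: string, event: object): void {
  const dir = join(repoRoot, ".diachron");
  mkdirSync(dir, { recursive: true });
  const line = JSON.stringify({ ts: new Date().toISOString(), ...event });
  appendFileSync(join(dir, "events.jsonl"), line + "\n");
}

// recordEvent(".", { kind: "edit", target: "src/auth/login.ts" });
```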
### Evidence > vibes
Diachron favors concrete signals (tests executed, build succeeded, commands run) over “the model said it’s correct.” It’s not about generating confidence—it’s about recording proof.
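In code, that means actually running the check and recording what happened, rather than trusting a claim. A minimal sketch, reusing the hypothetical `Verification` type from the data-model sketch:

```typescript
// Run the check and record the real outcome as evidence.

import { spawnSync } from "node:child_process";

function verify(command: string, args: string[]): Verification {
  const result = spawnSync(command, args, { encoding: "utf8" });
  return {
    command: [command, ...args].join(" "),
    exitCode: result.status ?? -1, // -1: process failed to spawn
    summary: (result.stdout ?? "").trim().split("\n").at(-1) ?? "",
  };
}

// verify("npm", ["test"]) → { command: "npm test", exitCode: 0, summary: "..." }
```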
## What’s Next
- A CLI that posts PR evidence packs automatically
- A `diachron blame` command that explains code with confidence grading
- A one-click rollback patch per session (and a safety-first “dry run” diff preview)
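For the rollback piece, the session already stores the forward diff; the reverse patch is those hunks applied backwards. A speculative sketch of the dry-run flow, assuming stored hunks keep their `diff --git`/`---`/`+++` file headers and reusing the hypothetical `ProvenanceChain` type:

```typescript
// Speculative rollback dry run: write the session's forward patch,
// then ask git whether the reverse applies cleanly, touching nothing.

import { writeFileSync } from "node:fs";
import { spawnSync } from "node:child_process";

function rollbackDryRun(
  chain: ProvenanceChain,
  patchPath = ".diachron/session.patch" // assumed path
): boolean {
  const patch = chain.session.exchanges
    .flatMap((ex) => ex.toolCalls)
    .flatMap((tc) => tc.diffHunks ?? [])
    .join("\n");
  writeFileSync(patchPath, patch + "\n");
  // `git apply -R --check` validates the reverse patch without applying it.
  return spawnSync("git", ["apply", "-R", "--check", patchPath]).status === 0;
}
```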