Browser plugin tracks AI code contributions in GitHub pull requests

Open source tool refined-github-ai-pr adds AI attribution overlays to GitHub PR reviews, working with git-ai to flag which lines came from Cursor, Claude, or Copilot. It arrives as enterprises face rising questions about code accountability and compliance auditing of AI-generated work.

A new browser extension surfaces which code in GitHub pull requests came from AI tools, addressing a gap as enterprises grapple with accountability for generated contributions.

The refined-github-ai-pr plugin, built by developer rbby.dev, overlays AI attribution markers onto GitHub's PR diff view. It reads metadata from git-ai, an open source project that stores line-by-line AI authorship in Git Notes alongside commit history. The system records the generating model and original prompt, and the metadata survives Git operations such as squash merges and rebases.
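To make the idea concrete, here is a minimal sketch of what consuming line-level attribution metadata might look like. The JSON schema, field names, and the `ai_lines` helper are illustrative assumptions, not git-ai's actual format:

```python
import json

# Hypothetical note payload: the article says git-ai stores line-by-line
# AI authorship in Git Notes. This JSON layout is an assumption for
# illustration only, not git-ai's real on-disk format.
def ai_lines(note_json: str) -> dict[int, str]:
    """Map line numbers in a file to the model that generated them."""
    note = json.loads(note_json)
    attribution = {}
    for entry in note.get("attributions", []):
        start, end = entry["lines"]  # inclusive line range
        for line_no in range(start, end + 1):
            attribution[line_no] = entry["model"]
    return attribution

note = json.dumps({
    "attributions": [
        {"lines": [10, 12], "model": "claude", "prompt": "add retry logic"},
        {"lines": [30, 30], "model": "copilot", "prompt": "fix typo"},
    ]
})
# ai_lines(note) → {10: "claude", 11: "claude", 12: "claude", 30: "copilot"}
```

A browser overlay like the plugin's would then match these line numbers against the rendered diff to draw its attribution markers.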

The timing matters. Several high-profile open source projects have banned AI contributions outright after receiving low-quality PRs from contributors using Claude Code or Cursor. Zig, tldraw, and Ghostty now restrict or prohibit AI-generated code, citing maintainer burden. The plugin doesn't solve the quality problem, but it makes AI involvement visible.

For enterprises, the compliance angle is trickier. AI-generated code creates audit trail gaps that matter for SOC 2 Type II and ISO 27001 requirements. Current AI coding tools auto-attribute commits but don't preserve prompt history or model versions, which leaves teams unable to trace decisions when bugs surface months later. The git-ai approach of storing prompts alongside code addresses part of this, though accountability for human review remains an open question.

The broader pattern: Teams want AI coding productivity but need guardrails. Some are implementing percentage caps on AI contributions per PR. Others mandate human review flags. The tools are catching up to a workflow that's already happening at scale.
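A percentage cap like the one described above could be enforced as a simple CI gate once attribution data is available. This is a hypothetical sketch; the function name, counts, and the 50% default threshold are illustrative assumptions, not any team's actual policy:

```python
# Hypothetical CI-style check for a "percentage cap" policy on AI
# contributions per PR. Inputs would come from attribution metadata
# such as git-ai's; the threshold here is an assumed default.
def exceeds_ai_cap(ai_lines: int, total_lines: int, cap: float = 0.5) -> bool:
    """Return True if the share of AI-attributed lines exceeds the cap."""
    if total_lines == 0:
        return False  # empty diff: nothing to cap
    return ai_lines / total_lines > cap

exceeds_ai_cap(ai_lines=80, total_lines=100, cap=0.5)  # → True: 80% > 50%
exceeds_ai_cap(ai_lines=30, total_lines=100)           # → False
```

A CI job could run a check like this on each PR and fail or flag for mandatory human review when the cap is exceeded.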

Worth noting: GitHub's native tooling offers no AI detection. Copilot has audit logs for usage tracking but not line-level attribution in PRs. The plugin ecosystem is filling gaps that platform vendors haven't addressed.

The real question is enforcement. Visibility helps, but teams still need policies on what's acceptable and processes to act on the data. A browser extension that shows 80% AI contribution doesn't tell you whether that's appropriate for the code in question.