AI platforms are superhuman at some tasks and surprisingly bad at others that seem equally hard. Ethan Mollick at Wharton calls this the "jagged frontier" — an uneven capability edge where you can't predict what will work and what won't just by looking at task difficulty.
I've been building with Claude Code for months now. I've hit this edge repeatedly. The model writes flawless Python scripts but forgets everything when I restart a session. It can orchestrate complex multi-step workflows but can't roll back when something goes wrong across multiple files.
So I started tracking it. Systematically.
The pattern nobody is tracking
Here's what I noticed: when a platform falls short, builders create workarounds. And then — eventually — the platform absorbs those workarounds. The builders who bet on those tools scramble.
I call this the absorption cycle. It's the most important dynamic in AI right now, and almost nobody is tracking it systematically.
The Absorption Cycle
1. Gap appears — Platform can't do X. Users are frustrated.
2. Workarounds emerge — Community builds tools to fill the gap.
3. Adoption grows — Workaround gets traction, sometimes funding.
4. Platform absorbs — Platform ships native solution. Workaround becomes redundant.
OpenAI just announced they're acquiring Promptfoo — the open-source eval testing tool. That's absorption in action. If you built your quality pipeline around Promptfoo, your vendor just became your competitor's feature.
What I built
The system has three parts: a tracker, a monitor, and a brief generator.
1. The Jagged Edge Tracker
A structured document tracking 8 capability gaps across 6 categories, with 30+ community workarounds mapped across them. Every workaround carries an absorption risk assessment.
8 Gaps I'm Tracking
Memory & persistence: Session persistence across restarts · Cross-platform memory
Context management: Spec enforcement during generation · Context window utilization
Agentic rollback: Atomic rollback when agents make mistakes across files
Integration: Multi-agent coordination
Security: Agent output verification
UX: No-code/low-code agentic workflow automation
Each gap has workarounds. Session persistence, for example, has four: the CONTINUITY MCP server, Anthropic's auto memory, my own work file protocol, and PersistenceAI for enterprise. Each has different adoption levels and different absorption risk.
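To make that concrete, here's a sketch of what one tracker entry might look like. The gap, category, and workaround names come from the list above; the adoption levels and risk ratings shown are illustrative placeholders, not my actual assessments.

```markdown
## Gap: Session persistence across restarts
Category: Memory & persistence

| Workaround            | Adoption     | Absorption risk                          |
|-----------------------|--------------|------------------------------------------|
| CONTINUITY MCP server | Community    | High — overlaps with native auto memory  |
| Anthropic auto memory | Native       | n/a — this IS the absorption             |
| Work file protocol    | Personal     | Low — workflow-specific                  |
| PersistenceAI         | Enterprise   | Medium — depends on platform roadmap     |
```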
2. The RSS Monitor
A Python script on my Mac Mini checks 6 RSS feeds twice a day:
- Anthropic Blog + Engineering (via community-scraped feeds — they don't have official RSS)
- OpenAI Blog (moved from /blog to /news/rss.xml)
- Google AI Blog
- Nate's Newsletter (Substack)
- Latent Space (Substack)
When new entries appear, the script hashes each one, compares the hashes against the stored state, and flags anything new. First run today: 60 new entries.
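The core of that check is small. Here's a minimal sketch of the hash-and-compare step using the stdlib pieces from my stack (xml.etree, MD5, a 60-second fetch timeout). The function names and state handling are simplified for illustration, not the actual script.

```python
import hashlib
import urllib.request
import xml.etree.ElementTree as ET

def fetch(url: str) -> str:
    """Fetch raw feed XML with the same 60s timeout the monitor uses."""
    with urllib.request.urlopen(url, timeout=60) as resp:
        return resp.read().decode("utf-8", errors="replace")

def entry_ids(feed_xml: str) -> list[str]:
    """Derive a stable MD5 id per entry, for RSS 2.0 and Atom feeds."""
    root = ET.fromstring(feed_xml)
    ids = []
    for item in root.iter("item"):  # RSS 2.0
        key = (item.findtext("guid") or item.findtext("link")
               or item.findtext("title") or "")
        ids.append(hashlib.md5(key.encode()).hexdigest())
    ns = "{http://www.w3.org/2005/Atom}"  # Atom (namespaced)
    for entry in root.iter(f"{ns}entry"):
        key = entry.findtext(f"{ns}id") or entry.findtext(f"{ns}title") or ""
        ids.append(hashlib.md5(key.encode()).hexdigest())
    return ids

def new_entries(feed_xml: str, seen: set[str]) -> list[str]:
    """Return entry hashes not present in the previously stored state."""
    return [h for h in entry_ids(feed_xml) if h not in seen]
```

On each run, the hashes of flagged entries get merged back into the stored state, so the same entry is never surfaced twice.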
3. The Daily Brief
When changes are detected, Claude generates a personalized daily brief with 5 sections:
Daily Brief Format
What Changed — Platform announcements that affect my work
What You Should Try — Specific actions based on what I'm currently building
Workaround Watch — Community tools at risk of being absorbed
Content Opportunity — Newsletter angles from gaps and absorption events
System Improvement Ideas — Ways to upgrade my own workflows based on new capabilities
This isn't a news roundup. Those exist everywhere. This is a decision brief. It's personalized to what I'm building, what problems I'm hitting, and what workarounds I should adopt or abandon.
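The personalization comes from how the prompt is assembled before Claude ever sees it. A sketch of that assembly step, assuming the five section names above and treating the project context as a plain string (the real directive is a longer markdown SOP):

```python
SECTIONS = [
    "What Changed",
    "What You Should Try",
    "Workaround Watch",
    "Content Opportunity",
    "System Improvement Ideas",
]

def brief_prompt(updates: list[str], context: str) -> str:
    """Assemble the instruction passed to the LLM for the daily brief."""
    bullet_list = "\n".join(f"- {u}" for u in updates)
    headers = "\n".join(f"## {s}" for s in SECTIONS)
    return (
        f"Current projects and pain points:\n{context}\n\n"
        f"New feed entries:\n{bullet_list}\n\n"
        f"Write a decision brief with exactly these sections:\n{headers}"
    )
```

Because the project context rides along with every prompt, the same feed entry produces different advice depending on what I'm building that week.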
What the first brief caught
The most interesting signal from today's 60 entries:
Nate's Newsletter argues the jagged frontier is a measurement error. He says the frontier is actually smoothing — that AI capabilities are becoming more even, not more jagged. If he's right, absorption is accelerating. The workarounds are getting eaten faster than we think.
That's either a problem or an opportunity depending on what you're building.
Other signals: Anthropic published a guide on context engineering for agents (validates the architecture I've been building). OpenAI's Codex now has security scanning in research preview. Cursor launched cloud-hosted coding agents.
The architecture behind it
This system runs on the same 3-layer framework I built for my coaching transcript processing system: DOE (Directive → Orchestration → Execution).
- Directive: A markdown SOP that defines what to scan, how to classify updates, when to flag absorption events, and what the daily brief should contain.
- Orchestration: Claude reads the directive, checks feeds, classifies updates, updates reference docs, and generates the brief.
- Execution: A Python script handles RSS fetching, state management, and Claude CLI invocation. Deterministic flow, LLM-powered judgment.
The key: every error makes the system stronger. When a feed URL breaks (Anthropic returned 404 on their "RSS" endpoint — turns out they don't have one), the system logs the learning, updates the directive, and moves to an alternative. Self-annealing. The same loop that makes the coaching system reliable.
Why "daily"?
Most AI newsletters are weekly. They summarize what happened. By the time you read it, you've already seen the headlines.
Daily is different. Daily means I catch the Promptfoo acquisition the day it happens and immediately update the absorption risk on every eval tool in my tracker. Daily means when Anthropic publishes a context engineering guide, I ingest it that afternoon and update my own system.
The value isn't the news. The value is the decision: what does this mean for what I'm building today?
What this becomes
I'm turning this into a newsletter called The Jagged Edge.
Not "what's new in AI." That's covered. This tracks what it means — which workarounds to adopt, which to avoid, when to switch. Written by someone who's actually building with these tools, not just reporting on them.
The tracker, the feeds, the brief format, the absorption cycle framework — all of it will feed a weekly edition. The daily brief stays personal (for my own workflow). The newsletter distills the most important patterns for anyone building with AI.
Would you want this?
I'm genuinely asking. Would a daily brief like this be useful to you?
- If you're a founder — would knowing which capability gaps are about to close change what you build?
- If you're a developer — would tracking workaround absorption risk change which tools you depend on?
- If you're a technical leader — would a decision brief (not a news brief) change how you evaluate AI investments?
Connect with me on LinkedIn and DM me "daily." I'll add you to the early list when The Jagged Edge launches.
The Stack
Tracker: Structured markdown (8 gaps, 30+ workarounds, absorption risk ratings)
RSS Monitor: Python + xml.etree (6 feeds, MD5 state hashing, 60s timeout)
LLM: Claude Max via CLI on Mac Mini (always-on)
Brief Delivery: Google Docs (HITL review) + Resend (email notification)
Architecture: DOE (Directive → Orchestration → Execution)
Reference Data: 4 platform docs (Anthropic, OpenAI, Google, Apple) + synthesis + trusted sources registry
Walter Roth is a non-developer building production AI systems with Claude Code. He writes about what works, what breaks, and what he builds to fix it. Previous post: A coach in Costa Rica and an AI developer walk into a standing desk.