v1.18.0 — Now with Mermaid session charts

Know your context.
Act before Claude degrades.

Real-time context monitoring for Claude Code. Track token usage, catch degradation early, and understand your AI spend — all without leaving your terminal.

Claude Code — context-stats live statusline
$ pip install context-stats
Successfully installed context-stats-1.18.0

$ context-stats graph

Context usage — last 12 interactions
▓▓▓▓▓▓▓▓░░░░░░ 64,000 free (32.0%)

╔══ Session: abc-123 ══════════════╗
║ Model: Claude Opus 4.6           ║
║ Zone:  ◆ Code-only               ║
║ MI:    0.918 (high)              ║
║ Delta: +2,500 tokens this step   ║
╚══════════════════════════════════╝

Delta per interaction:
1│ ██
2│ ████
3│ ████████
4│ ████████████
5│ ████████████████
6│ ████████████████████

$
Zero dependencies · MIT licensed · Free forever

Claude Code doesn't tell you
when you're in trouble.

By the time you notice output quality dropping, you've already wasted tokens — and money.

🪫

Invisible context drain

Each tool call, each file read quietly consumes context. You only find out when Claude starts looping or forgetting what it just wrote.

💸

Surprise API bills

A single runaway session can burn thousands of tokens before you realize the model is operating at 20% intelligence. No warning, no way to see trends.

📉

Silent model degradation

As context fills, Claude's effective intelligence drops along a measurable curve. Running at 30% context free isn't the same as running at 80% free.

🌀

No session history

After a session ends, you can't audit what happened — which interactions consumed the most tokens or why a particular task ballooned the context.

Three levels of analytics,
one lightweight tool.

From real-time awareness to multi-week cost reports — context-stats gives you complete visibility without ever leaving your terminal.

Level 1 — Live

Real-Time Statusline

Persistent status line in your Claude Code session — always on, zero friction.

  • Context zone with color-coded action signal
  • Model Intelligence (MI) score per interaction
  • Token delta since last refresh
  • Git branch + project name
  • Live ASCII graph dashboard (context-stats graph)
Level 2 — Session

Per-Session Deep Dive

Export a detailed Markdown report for any completed session.

  • Executive snapshot with model, duration, interactions
  • Interaction timeline with per-step context + MI
  • Cache activity analysis (creation vs. read ratio)
  • Mermaid visual charts (context growth, zones, cache)
  • Cache keep-warm to maintain 5-min TTL
Level 3 — Report

Multi-Week Analytics

Understand cost trends, model mix, and efficiency across all your projects.

  • Cost breakdown by model family (Opus/Sonnet/Haiku)
  • Daily activity heatmaps
  • Cross-project efficiency metrics
  • Cache hit-ratio analysis (41%+ across sessions)
  • Cost optimization opportunities

What you actually see
in the terminal.

Three commands — each level of analytics, shown in full.

$ context-stats graph --type all

Context Stats (my-project • abc-1234)

Context usage — last 14 interactions
▓▓▓▓▓▓▓▓▓▓░░░░░░░░░░ 88,000 free (44.0%)

╔══ Session: abc-1234 ═════════════════════╗
║ Model: Claude Opus 4.6                   ║
║ Zone:  ◆ Code-only                       ║
║ MI:    0.712 (context pressure)          ║
║ Delta: +3,140 tokens this step           ║
║ Cost:  $0.2847                           ║
╚══════════════════════════════════════════╝

Context Growth Per Interaction
Max: 8,420 Min: 0 Points: 14

8,420 │                                  ●●●
2,810 │               ●●●●●●
    0 │●●●●●
└──────────────────────────────────────────
10:12 11:08 12:30

Model Intelligence (MI) Over Time
Calibrated against MRCR v2 8-needle benchmark

1.00 │●●
0.90 │ ●●●
0.80 │ ●●●
0.70 │ ●●●
0.60 │ ●●
0.50 │
└──────────────────────────────────────────
step 1 step 14

Cache Activity
Creation ░ Read ▓

3│ ░░░░░░░░▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓
2│ ░░░░░░░░░░░░▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓
1│ ░░░░░░░░░░░░░░░░▓▓▓▓▓▓▓▓▓▓▓▓
$ context-stats export abc-1234
Exported → session-abc-1234-2026-04-07.md

# Session Export — abc-1234
Generated: 2026-04-07 12:30:15 | context-stats v1.18.0

## Executive Summary

| Metric | Value |
|---------------------|------------------------|
| Model | Claude Opus 4.6 |
| Duration | 2h 18m 03s |
| Interactions | 14 |
| Peak Context Used | 112,000 / 200,000 (56%) |
| Final MI Score | 0.712 |
| Total Cost | $0.2847 |
| Cache Hit Ratio | 63.4% |
| Git Branch | feat/landing-page |
| Lines Changed | +284 / -41 |

## Interaction Timeline

| Step | Time | Context Used | Free | MI | Delta | Zone |
|------|-------|--------------|---------|-------|---------|-------------|
| 1 | 10:12 | 18,240 | 181,760 | 0.982 | +18,240 | Planning |
| 2 | 10:24 | 24,610 | 175,390 | 0.971 | +6,370 | Planning |
| 5 | 10:58 | 61,340 | 138,660 | 0.894 | +8,100 | Planning |
| 8 | 11:22 | 84,920 | 115,080 | 0.812 | +5,280 | Code-only |
| 11 | 11:51 | 99,480 | 100,520 | 0.764 | +4,100 | Code-only |
| 14 | 12:30 | 112,000 | 88,000 | 0.712 | +3,140 | Code-only |

## Cache Analysis

Cache creation tokens: 28,400
Cache read tokens: 49,120 (63.4% hit rate)
Cache savings: ~$0.087 (vs. uncached)

```mermaid
pie title Cache Efficiency
"Cache Hit" : 63.4
"Fresh tokens" : 36.6
```
$ context-stats report --since-days 30
Analyzing 312 sessions across 18 projects...

# Token Usage Analytics Report
Generated: 2026-04-07 12:00:00 | context-stats v1.18.0

## Executive Summary (last 30 days)

| Metric | Value |
|-------------------------|-------------------------------|
| Total Spend | $1,842.35 |
| Total Sessions | 312 |
| Projects Analyzed | 18 |
| Cache Hit Ratio | 41.2% |
| Avg Session Cost | $5.90 |
| Avg Session Duration | 3h 22m 14s |
| Most Expensive Session | a3f9c12d… ($87.40, 4.7%) |
| Most Expensive Project | project-alpha ($412.80, 22.4%) |

## Model Usage

| Model | Sessions | Cost | % Total |
|--------|----------|------------|---------|
| opus | 241 | $1,512.30 | 82.1% |
| sonnet | 42 | $218.75 | 11.9% |
| haiku | 29 | $111.30 | 6.0% |

## Weekly Spend Trend

W10 ███████████ $284.50 (48 sessions)
W11 ████████████████ $412.80 (74 sessions)
W12 █████████████ $348.20 (68 sessions)
W13 ████████████████████ $489.60 (82 sessions) ← peak
W14 ████████ $218.75 (31 sessions)
W15 ███ $88.50 (9 sessions)

## Top Projects by Cost

| # | Project | Sessions | Cost | Cache % | Dominant |
|---|-----------------|----------|-----------|----------|----------|
| 1 | project-alpha | 28 | $412.80 | 12.0% | opus |
| 2 | project-beta | 41 | $318.44 | 44.1% | opus |
| 3 | project-gamma | 35 | $264.17 | 38.4% | opus |
| 4 | project-delta | 27 | $198.53 | 49.2% | opus |
| 5 | project-epsilon | 22 | $187.62 | 31.0% | opus |

## Activity by Day of Week

Mon ############........ 52 sessions
Tue ##################... 68 sessions
Wed #################### 71 sessions ← busiest
Thu ############........ 48 sessions
Fri ###############..... 55 sessions
Sat ######.............. 22 sessions
Sun #######............. 26 sessions

Every context state, visualized.

Color-coded zones tell you exactly what to do — no guesswork required.

Planning Zone (green) — plenty of context, keep working
Code-Only Zone (yellow) — context tightening, finish current task
Dump Zone (orange) — quality declining, wrap up now
Live ASCII graph dashboard — context-stats graph

Five zones. One clear action each.

context-stats maps token usage to five zones so you always know what to do next.

| Zone      | Context used | Action                                                     |
|-----------|--------------|------------------------------------------------------------|
| Planning  | <25%         | Plenty of room — keep planning and coding freely           |
| Code-only | 25–40%       | Context tightening — finish current task, skip exploration |
| Dump      | 40–70%       | Quality declining — wrap up, prepare to export context     |
| ExDump    | 70–75%       | Near hard limit — start a new session immediately          |
| Dead      | ≥75%         | Context exhausted — stop, nothing productive remains       |
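The zone mapping above is simple enough to sketch in a few lines. This is a hypothetical illustration, not context-stats' actual internals: the `zone_for` helper and `ZONES` table are invented names, and the thresholds assume the 200,000-token Opus context window used throughout this page.

```python
# Hypothetical sketch of the five-zone mapping — not the tool's real API.
# Assumes the 200,000-token context window referenced on this page.
WINDOW = 200_000

# (exclusive upper bound as fraction of window used, zone name)
ZONES = [
    (0.25, "Planning"),
    (0.40, "Code-only"),
    (0.70, "Dump"),
    (0.75, "ExDump"),
    (1.00, "Dead"),
]

def zone_for(used_tokens: int) -> str:
    """Return the zone name for a given count of used context tokens."""
    frac = used_tokens / WINDOW
    for upper, name in ZONES:
        if frac < upper:
            return name
    return "Dead"  # at or past the hard limit

print(zone_for(32_480))   # → Planning (16.2% used)
print(zone_for(112_640))  # → Dump (56.3% used)
```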

What you see at every context zone.

The statusline updates live after each prompt. Here's exactly what it shows across all five zones.

Plan used < 50,000  ·  < 25% used
my-project | main [2] | 32,480 (16.2%) | Plan | Opus 4.6 (200k context) | 76f0c6e5-47e1-4656-9ffe-59ba12005967
→ Plenty of room. Keep planning and coding freely.
Code 50,000 ≤ used < 80,000  ·  25–40% used
my-project | feat/new-ui [5] | 65,210 (32.6%) | Code | +494 | Opus 4.6 (200k context) | e43c4908-3374-46e5-b8e4-73ab96e42d0f
→ Context tightening. Finish current task, skip exploration.
Dump 80,000 ≤ used < 140,000  ·  40–70% used
my-project | fix/bug-123 [8] | 112,640 (56.3%) | Dump | +198 | Opus 4.6 (200k context) | c58eab6e-9fd9-4a99-8fee-8de07b566b9a
→ Quality declining. Wrap up now, prepare to export context.
ExDump 140,000 ≤ used < 150,000  ·  70–75% used
my-project | fix/bug-123 [12] | 144,380 (72.2%) | ExDump | +1,240 | Opus 4.6 (200k context) | d4e5f6a7-b8c9-0123-defa-234567890123
→ Near hard limit. Start a new session immediately.
Dead used ≥ 150,000  ·  ≥ 75% used
my-project | fix/bug-123 [15] | 158,920 (79.5%) | Dead | +320 | Opus 4.6 (200k context) | e5f6a7b8-c9d0-1234-efab-345678901234
→ Context exhausted. Stop — nothing productive remains.

Up and running in 60 seconds.

No config files. No sign-up. No data leaving your machine.

1

Install the package

One pip command, zero dependencies.

pip install context-stats
2

Configure Claude Code

Add the status line to ~/.claude/settings.json.

{
  "statusLine": {
    "type": "command",
    "command": "claude-statusline"
  }
}
3

Start a session

Your status line is live. Context zone, MI score, token delta — all visible.

claude # or open Claude Code
4

Explore & analyze

Use any of these commands any time — no session ID needed.

# Live ASCII dashboard — context, MI, cache, delta
context-stats graph

# Export full session report to Markdown
context-stats export <session_id>

# Multi-project analytics (last 30 days)
context-stats report --since-days 30

# Explain raw JSON from Claude Code stdin
context-stats explain

Common questions

Does context-stats send any data to the cloud?

No. All session data stays local in ~/.claude/statusline/. There are no network requests, no telemetry, no external API calls of any kind. Your token usage and session data never leave your machine.

Will it slow down my Claude Code sessions?

No. The statusline script is a lightweight Python process that runs synchronously on each prompt. It reads from local CSV files and writes a single line to stdout. Typical execution is under 10ms. Git operations have a 5-second timeout to prevent hangs.
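To make the "read JSON, write one line" flow concrete, here is a minimal sketch of what a statusline command does. The field names in the sample payload are illustrative only — Claude Code's actual statusline JSON schema differs, and `format_statusline` is an invented helper, not part of context-stats.

```python
import json

def format_statusline(data: dict) -> str:
    """Render one status line from a session-info dict.

    Field names below are illustrative — Claude Code's real
    statusline JSON schema is different; see its documentation.
    """
    used = data["used_tokens"]
    window = data.get("window", 200_000)
    return (f"{data['project']} | {data['branch']} | "
            f"{used:,} ({used / window * 100:.1f}%) | {data['model']}")

# A real statusline command would read json.load(sys.stdin), since
# Claude Code pipes session JSON on stdin; here we parse a literal sample.
sample = json.loads(
    '{"project": "my-project", "branch": "main",'
    ' "used_tokens": 32480, "model": "Opus 4.6"}'
)
print(format_statusline(sample))
# → my-project | main | 32,480 (16.2%) | Opus 4.6
```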

Does it work on Windows?

Yes. context-stats is pure Python 3.9+ with zero external dependencies and runs on macOS, Linux, and Windows. The statusline script and CLI both work across all platforms.

What Python version do I need?

Python 3.9 or higher. No external packages are required — context-stats uses only the Python standard library.

Can I customize the colors and display?

Yes. Create ~/.claude/statusline.conf with key=value pairs. You can customize 18 named colors or use hex codes, toggle MI display, token detail, delta tracking, session ID, motion effects, and more. Full configuration reference in the docs.
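As a rough idea of the shape such a file takes — the key names below are invented for illustration; consult the configuration reference in the docs for the real ones:

```ini
# ~/.claude/statusline.conf — illustrative keys only, not the real reference
zone_color_planning=green
zone_color_dump=#ff8800
show_mi=true
show_delta=true
show_session_id=false
```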

How accurate are the token counts?

context-stats reads the token data directly from Claude Code's status line JSON. It does not estimate — it reports the exact values that Claude Code provides. The MI score is calibrated against the MRCR v2 8-needle benchmark.

What's the difference between the statusline and the graph dashboard?

The statusline (claude-statusline) is the persistent one-line display shown by Claude Code at each prompt — it's always on. The graph dashboard (context-stats graph) is an on-demand ASCII chart view you run manually to see usage history, MI trends, cache activity, and delta per interaction over the session.

Stop flying blind.
Start every session with a map.

Free. MIT licensed. Zero external dependencies.
All your data stays local — always.

No data sent anywhere Install in under 60s MIT licensed, free forever Zero dependencies