TL;DR
Claude Code now has an /insights command. It analyzes your recent usage patterns and generates a personalized report. When I ran it, I got surprisingly insightful feedback: it pinpointed issues like sessions ending in the exploration phase and completion-tracking problems with parallel tasks.
What is /insights?
It's a new Claude Code feature added on February 4th. Type /insights in the terminal, and it analyzes your recent sessions to generate an HTML report. I learned about it when Evan shared Nate Meyvis's blog post.
The report includes:
- Session breakdown by work area
- Tool usage statistics (Read, Bash, Task, etc.)
- What worked well, what didn't
- Specific improvement suggestions (with copy-paste-ready prompts)
- Workflows worth trying
My Analysis Results
Here's what came out of analyzing 844 sessions and 5,522 messages over 6 days.
Your User Profile

There's a section early in the report that summarizes your usage pattern, and it was surprisingly accurate.
"You are a power user who delegates extensively through autonomous task execution."
In other words, I start complex tasks and let Claude run on its own. I've been officially recognized as a power user by Claude Code lol.
The core pattern was described as:
"You operate as an orchestrator who launches autonomous, parallel task execution with detailed upfront specs, then monitors from a distance rather than engaging in interactive back-and-forth coding."
Rather than going back and forth with code changes, I operate as an orchestrator launching parallel tasks and monitoring from a distance. That's accurate.
Strengths Identified
Parallel Task Orchestration
I used Task-related tools extensively. Much of that work involved migrating multiple UI components simultaneously or having features implemented autonomously overnight.
Systematic QA Verification
One of my QA sessions passed all 7 test items. The checklist-based verification pattern—checking dev server, PWA icons, mobile layouts—worked well.
Problems Identified Too

75% of sessions ended with "unclear outcomes." Sessions often got cut off before completion, or ended after just exploring the codebase without actual implementation.
This probably happened because I recently started splitting workflows into smaller pieces when automating work with Claude Code.
Suggested Improvements
The report suggests immediately usable improvements: workflow rules to add to CLAUDE.md, Custom Skill templates, and Hook configuration examples, all provided in copy-paste-ready format.
Rules like "don't stop at exploration, complete the implementation" and Hook settings for automatic type-checking after edits looked particularly useful.
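As a rough illustration (not the report's exact output), a post-edit type-checking hook could be configured in `.claude/settings.json` using Claude Code's hooks mechanism. The `Edit|Write` matcher and the `npx tsc --noEmit` command below are assumptions for a TypeScript project; substitute your own tool matcher and check command:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npx tsc --noEmit"
          }
        ]
      }
    ]
  }
}
```

With a hook like this, every file edit is immediately followed by a type check, so type errors surface during the session instead of after it ends.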
Interesting Findings
The report characterized my work style as "fire and forget." Starting complex tasks and leaving without checking results.
There was an amusing line at the end of the report:
"User asked Claude to 'autonomously implement a feature roadmap overnight' - the AI equivalent of leaving your roommate a chore list while you sleep"
It's like leaving a chore list for your roommate and going to sleep. Fair point, and the report showed that this needs clear completion criteria to work well. Something worth adding to my workflow.
On the Horizon

The report also suggests "workflows worth trying." Things like Self-Healing Implementation (implement → test → fix loop), parallel migration by component, automatic QA per commit.
These almost exactly match the workflows I've been wanting to build while using Claude Code. Seeing that it understood my goals correctly made me feel I'm heading in the right direction.
Conclusion
/insights lets you see your Claude Code usage patterns objectively. Problems I experienced but couldn't clearly identify became obvious when I saw them as data. It felt like getting feedback from a manager who's always watching you.
If you're using Claude Code, I recommend running it once.