Over the last few months I've been working on squirrelscan - a CLI tool that audits websites for security, performance, SEO, and other issues. I built it to integrate with coding agents, so it can help clean up the many issues developers don't bother with, or don't know about.
It's been working well - and there are now over 240 different rules that it checks in 21 categories.
I seem to not be the only one who thinks so. I soft-launched the skill into the skills.sh directory, and it's now getting over 10k installs a week.
I'm finally getting around to writing a post on squirrelscan - what it does, the background and what's in its future.
Background
I've been building a lot of websites and webapps with Claude Code and other coding agents in the past year - prototypes, test projects, and actual real work. I've never felt more productive. But one part of the development loop with coding agents was still slow and felt antiquated: testing websites for security, performance, SEO, accessibility, image size, and other issues.
My previous method was to rely on audits from ahrefs and Google Lighthouse to find issues, which I would then fix. Lighthouse has a CLI, but there are a host of issues it doesn't cover. ahrefs requires manually triggering a crawl, and then waiting what felt like an eternity to get results back.
Designing an Audit Tool
None of the existing tools are coding-agent native, and the manual process doesn't really close the loop.
A good agent-native tool would be:
a) Fast. The tools that thrive in the coding-agent ecosystem are fast (see: uv, bun, oxfmt, et al.). An auditing tool also needs to be fast, and embedded within the coding agent loop for quick feedback.
b) Comprehensive. If you're capturing web pages from a black-box perspective, run as many checks as possible. Catch the most common mistakes developers are making - especially with vibe coding (i.e. not editing code yourself, or even looking at it).
c) LLM-Native. Both an interface and outputs suited to LLMs - that means no interactive prompts on input, an extensive CLI interface, and an output format tailored for LLMs.
d) Reliable. Coding agents need strict harnesses to guarantee outputs and to keep them from hallucinating or skipping over work. A good audit tool would force a coding agent to complete every fix by tracking issues and their completion status.
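To make point d) concrete, here's a minimal Python sketch of that kind of harness. This is illustrative only - the `Issue` and `AuditHarness` names are invented for this post, not squirrelscan's internals:

```python
from dataclasses import dataclass, field

@dataclass
class Issue:
    rule: str        # e.g. "missing-alt-text" (hypothetical rule id)
    url: str
    severity: str    # e.g. "critical", "warning", "info"
    fixed: bool = False

@dataclass
class AuditHarness:
    issues: list = field(default_factory=list)

    def mark_fixed(self, rule: str, url: str) -> None:
        # The agent reports a fix; the harness records it.
        for issue in self.issues:
            if issue.rule == rule and issue.url == url:
                issue.fixed = True

    def remaining(self) -> list:
        # The agent isn't "done" until this list is empty.
        return [i for i in self.issues if not i.fixed]

harness = AuditHarness([
    Issue("missing-alt-text", "/about", "warning"),
    Issue("no-https", "/", "critical"),
])
harness.mark_fixed("no-https", "/")
print(len(harness.remaining()))  # 1 issue still open
```

The point is that completion is verified against tracked state, not against the agent's claim that it finished.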
So I decided to write my own auditing tool. Hence, squirrelscan.
I've been writing web crawlers for more than a decade, and a lot of that domain knowledge went into the squirrelscan crawler. I've also spent months building out the rule set for coverage, and then integrating it via agent skills into coding agents.
There is still a lot more to do - but it's already a solid foundation, and running the audit-website skill in your code repo will fix a very large number of problems.
240+ rules and counting
There are a lot of rules here that you'd expect - it already covers ~98% of what most commercial tools check. But there are some more unusual rules that I've always wanted in a tool:
- Leaked secrets - Detects over 100 leaked secret types
- Video schema validation - Validates video schemas, and will create and include a thumbnail and generate captions for videos that are missing them.
- NAP consistency - Detects typos and inconsistencies in your contact details (name, address, phone) across the site.
- Render-blocking and DOM complexity - Performance rules pick up render-blocking resources and overly complicated DOM trees.
- noopener on external links - Flags external links missing noopener (I find this all the time).
- CAPTCHA hints - Warns on public forms that probably should have a CAPTCHA to prevent spam.
- Adblock and blocklist detection - Currently in the beta channel. Detects whether an element or included script will be blocked by adblock, privacy, or security filter lists.
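To show how small some of these checks are at their core, here's a rough Python sketch of the noopener rule using the standard library's HTML parser. This is not squirrelscan's implementation - the class and attribute handling are made up for illustration:

```python
from html.parser import HTMLParser

class NoopenerChecker(HTMLParser):
    """Flags <a target="_blank"> links missing rel="noopener"."""

    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        a = dict(attrs)
        rel = (a.get("rel") or "").lower().split()
        # target="_blank" without noopener lets the opened page
        # access window.opener (tab-nabbing risk).
        if a.get("target") == "_blank" and "noopener" not in rel:
            self.flagged.append(a.get("href"))

checker = NoopenerChecker()
checker.feed('<a href="https://example.com" target="_blank">out</a>'
             '<a href="https://ok.com" target="_blank" rel="noopener">ok</a>')
print(checker.flagged)  # ['https://example.com']
```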
The audit and rules engine covers more ground than you'd expect:
- SEO basics - meta tags, headings, canonical URLs, Open Graph, structured data, robots.txt, sitemaps.
- Performance - image optimization, lazy loading, render-blocking resources, compression.
- Security - HTTPS, mixed content, security headers, cookie attributes.
- Accessibility - alt text, ARIA labels, form labels, color contrast hints, keyboard navigation.
- Technical stuff - broken links, redirect chains, duplicate content, mobile-friendliness.
Each rule has a severity level. Critical issues (your site not being served over HTTPS) get flagged differently than minor warnings (a slightly long meta description). Your AI can prioritize accordingly.
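For illustration, prioritizing by severity can be as simple as a sort over a ranked mapping. The severity labels and rule ids below are hypothetical, not necessarily squirrelscan's actual names:

```python
# Hypothetical severity ordering -- real labels may differ.
SEVERITY_RANK = {"critical": 0, "error": 1, "warning": 2, "info": 3}

issues = [
    {"rule": "long-meta-description", "severity": "info"},
    {"rule": "no-https", "severity": "critical"},
    {"rule": "missing-alt-text", "severity": "warning"},
]

# Fix the most severe issues first.
ordered = sorted(issues, key=lambda i: SEVERITY_RANK[i["severity"]])
print([i["rule"] for i in ordered])
# ['no-https', 'missing-alt-text', 'long-meta-description']
```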
The report formats
Traditional audit tools love their dashboards. Colorful charts, trend graphs, comparison tables. Great for client presentations. Terrible for automation.
I spent a lot of time iterating on an LLM-native format - one that agents understand well, but that is also compact and saves tokens.
If you want fancy reports, you can output and share them as HTML.
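As an illustration of the idea (squirrelscan's actual report format isn't shown here), a token-efficient report can be as simple as one tab-separated line per issue instead of deeply nested JSON:

```python
# Illustrative only: not squirrelscan's real output format.
issues = [
    ("critical", "no-https", "/", "site served over HTTP"),
    ("warning", "missing-alt-text", "/about", "3 images missing alt text"),
]

# One issue per line: severity, rule id, URL, summary -- dense,
# and trivially parseable by an agent.
report = "\n".join(f"{sev}\t{rule}\t{url}\t{msg}"
                   for sev, rule, url, msg in issues)
print(report)
```

The same information survives, but without the brackets, quotes, and repeated keys that burn tokens in a JSON dump.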
Skills and integrations
If you're using a coding agent that supports the agent skills standard (most do now), there's a skill that makes integration smoother. Install once, then just say "audit my website" without remembering exact command syntax. Or you can trigger it with the slash command /audit-website (in Codex it's $audit-website).
The skill handles running the audit, parsing results, prioritizing by severity, and suggesting fixes. It turns "use this CLI tool" into "tell your AI what you want."
What's next
This is version 0.1. The foundation is solid, and there are releases almost every day with updates.
On the roadmap: diff reports to compare audits over time, so you can see what improved and what regressed. Custom rules for project-specific stuff (weird internal conventions the standard rules don't cover). Performance benchmarks with Core Web Vitals. More integrations - Cursor, Windsurf, and other AI coding tools.
It's also been designed to support plugins and hooks, which are coming soon. You'll be able to receive the context of an audit in your own scripts, trigger your own rules, or suppress others.
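Since the hook API hasn't shipped yet, here's a purely speculative Python sketch of what a hook receiving audit context might look like - every name in it is invented:

```python
# Speculative sketch: the plugin API doesn't exist yet; all names are invented.
def my_hook(audit_context):
    """Suppress standard rules on staging URLs and add a custom check."""
    issues = []
    for page in audit_context["pages"]:
        if "/staging/" in page["url"]:
            continue  # suppress rules for staging pages
        if "TODO" in page.get("body", ""):
            issues.append({"rule": "custom/no-todo-text", "url": page["url"]})
    return issues

ctx = {"pages": [{"url": "/home", "body": "TODO: fix header"},
                 {"url": "/staging/x", "body": "TODO"}]}
print(my_hook(ctx))  # one custom issue, on /home only
```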
The goal is to make squirrelscan the default way AI agents interact with website quality. Not just SEO, but the whole spectrum of "things that can go wrong with a website."
Try it
If you're building websites with AI assistance (at this point, who isn't?), give it a shot.
curl -fsSL https://squirrelscan.com/install | bash

Ask your AI to audit something. See if the workflow clicks like it did for me.
The tool is free, and the project is on GitHub. If you find bugs or want features, file an issue. If you build something cool with it, I'd love to hear about it.
The best way to keep up with updates is to follow along on Twitter.

