Membership-only assertions wouldn't catch phantom URLs added by future
build changes. Tighten back to an exact-list assertion now that the
fixture's output is known, and assert lastmod count tracks loc count.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
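The tightened assertion described above might be sketched like this (function and fixture names are hypothetical; the real test suite's helpers may differ):

```python
import xml.etree.ElementTree as ET

NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def parse_sitemap(xml_text):
    """Return parallel lists of <loc> and <lastmod> values, in document order."""
    root = ET.fromstring(xml_text)
    urls = root.findall(f"{NS}url")
    locs = [u.findtext(f"{NS}loc") for u in urls]
    lastmods = [u.findtext(f"{NS}lastmod") for u in urls]
    return locs, lastmods

def check_sitemap(xml_text, expected_locs):
    locs, lastmods = parse_sitemap(xml_text)
    # Exact-list assertion: a phantom URL from a future build change fails here,
    # where a membership-only check (all(e in locs for e in expected)) would pass.
    assert locs == expected_locs, f"unexpected sitemap entries: {locs}"
    # lastmod count tracks loc count: every <url> must carry a <lastmod>.
    assert all(m is not None for m in lastmods)
```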
Mirrors the .category-subtitle a underline style for visual cohesion in
the hero, and locks in the gating behavior with a negative assertion so
a regression that drops the page_kind guard would be caught.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Use the precomputed sub["url"] to identify which subcategories belong
to a category. Avoids parsing the "Cat > Sub" value string, which would
silently misfire if a category name ever contained " > ".
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Categories and groups will share the /categories/ URL namespace.
Fail the build with a clear error message if a future README change
introduces a collision.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Avoids a slug collision between the group "Miscellaneous" and the
category of the same name once both share the /categories/ URL
namespace introduced in the upcoming filter-URL refactor.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- Add llms.txt Jinja2 template with a categories_md placeholder
- Extract categories body from README and inject it into the template
- Annotate bullet-entry lines with GitHub star counts (N GitHub stars)
for the main index.md and bare numbers for llms.txt
- Add TestAnnotateEntriesWithStars unit tests
Co-Authored-By: Claude <noreply@anthropic.com>
* update gitignore
* feat: tighten homepage metadata
* fix: trim generated HTML whitespace
* feat(website): add discovery files and markdown alternate
* feat(website): add sitemap lastmod
* feat(seo): add Content-Signal directive to robots.txt
Signals search, ai-input, and ai-train to crawlers
via the experimental Content-Signal header in robots.txt.
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude <noreply@anthropic.com>
- Set exclude-newer to 3 days and only-binary/:all: in pyproject.toml to
limit dependency freshness window and block source builds
- Switch uv sync to --locked in Makefile, ci.yml, and deploy-website.yml
to enforce the lockfile rather than re-resolving on each install
- Regenerate uv.lock with exclude-newer snapshot recorded
Co-Authored-By: Claude <noreply@anthropic.com>
- Rising Star: reduce star-growth window from 2 years to 1 year
- Hidden Gem: reduce minimum repo age from 6 months to 3 months
- Rejection rule: reduce minimum repo age from 3 months to 1 month
Co-Authored-By: Claude <noreply@anthropic.com>
Note the editorial-independence policy so sponsor placements are never conflated with curated listings.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Entry Guidelines already defers to CONTRIBUTING.md, so keeping a parallel Entry Format section here creates drift risk (and the placeholder text was already inconsistent).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Rename PR Review Guidelines to Entry Guidelines and clarify that CONTRIBUTING.md rules apply to any entry addition or removal, not just PR reviews.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
One entry per commit when adding or deleting, but format or wording changes across entries can be bundled.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Split vibevoice and voxcpm out of Pre-trained Models and Inference (which now skews to LLMs and diffusion) into a dedicated Speech subcategory to make room for TTS/ASR growth.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Microsoft's open-source voice AI family (TTS + ASR) with 40k stars, ICLR 2026 Oral, and ASR integrated into Hugging Face Transformers.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
It is a pretrained neural TTS foundation model, not an audio manipulation library, so it fits better alongside transformers, diffusers, and vllm.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
OpenBMB's tokenizer-free TTS with multilingual voice design and cloning (15k stars, Apache 2.0).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Apple's ml-explore team library for running and fine-tuning LLMs on Apple Silicon with MLX (4.9k stars, MIT).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Google Research's pretrained time-series foundation model (18k stars, Apache 2.0).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>