# Testing
pytest with hypothesis, parallel execution, coverage, and golden-file regression tests.
## Stack
| Tool | Purpose |
|---|---|
| pytest | Test runner |
| pytest-xdist | Parallel test execution (`-n auto`) |
| hypothesis | Property-based / fuzz testing |
| pytest-cov | Coverage reporting |
| pytest-mock | Mock utilities |
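To illustrate the property-based style that hypothesis enables, a minimal sketch (the `normalize_tech_name` helper is hypothetical, not a project function):

```python
# Sketch of a hypothesis property test: assert an invariant over
# generated inputs instead of hand-picked examples.
from hypothesis import given, strategies as st


def normalize_tech_name(name: str) -> str:
    """Illustrative helper: trim and lower-case a technology name."""
    return name.strip().lower()


@given(st.text())
def test_normalize_is_idempotent(s):
    # Normalizing twice must equal normalizing once, for any string.
    once = normalize_tech_name(s)
    assert normalize_tech_name(once) == once
```

Run under pytest, hypothesis generates hundreds of inputs per property; `--hypothesis-show-statistics` (already in `addopts`) reports how they were drawn.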
## Run tests
```bash
# Full suite (parallel) via CLI
uv run readme dev test

# Direct pytest (equivalent)
uv run python -m pytest

# Single file, serial (useful for debugging)
uv run python -m pytest tests/test_cli.py -v -n 0

# Only non-integration tests
uv run python -m pytest -m "not integration" -v

# Filter by expression
uv run readme dev test -k "test_banner"

# Skip coverage
uv run readme dev test --no-coverage
```

## pytest configuration
From `pyproject.toml`:

```toml
[tool.pytest.ini_options]
addopts = "-n auto --verbose --hypothesis-show-statistics --html=logs/report.html ..."
testpaths = ["tests"]
timeout = 500
```

Coverage is written to `logs/coverage/` (HTML) and is excluded from git.
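For context on the `-m "not integration"` filter shown above, here is a hedged sketch of how a test opts into the `integration` marker; the test module and bodies below are illustrative, not taken from the suite:

```python
# Hypothetical test module showing the `integration` marker that the
# -m "not integration" filter deselects. Test bodies are placeholders.
import pytest


@pytest.mark.integration
def test_end_to_end_pipeline():
    # Would exercise the full pipeline; deselected by -m "not integration".
    assert True


def test_pure_unit():
    # Unmarked test: always selected.
    assert 2 + 2 == 4
```

Custom markers like `integration` are typically registered under `[tool.pytest.ini_options]` (`markers = [...]`) so `--strict-markers` runs do not reject them.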
## Test files
| File | What it tests |
|---|---|
| `tests/test_cli.py` | All CLI commands via `typer.testing.CliRunner` (imports from the `scripts.cli` package) |
| `tests/test_ink_garden.py` | `ink_garden.generate()` smoke + golden-file regression |
| `tests/test_word_clouds_red.py` | RED (xfail) stubs for unimplemented word-cloud features |
| `tests/test_living_art_media.py` | Skipped — requires pre-generated artifacts |
| `tests/test_readme_gfm_ux.py` | Skipped — requires full pipeline README |
| `tests/test_techs.py` | Technology parser: parse, validation, Hypothesis fuzz |
| `tests/test_banner.py` | Banner generation (patches `svgwrite`) |
| `tests/test_fetch_metrics.py` | Metrics fetcher (expanded coverage) |
| `tests/test_readme_sections.py` | README section assembler |
| `tests/test_readme_svg.py` | SVG rendering helpers |
| `tests/test_skills.py` | Skills badge generator |
| `tests/test_qr.py` | QR code generation (Cairo integration) |
| `tests/test_utils.py` | Logger and utility functions |
| `tests/test_card_contracts_blog_red.py` | RED — unimplemented blog card features |
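A minimal sketch of the `typer.testing.CliRunner` pattern used in `tests/test_cli.py` (the `app` and its command below are hypothetical stand-ins, not the real `scripts.cli` package):

```python
# Sketch of CLI testing with typer.testing.CliRunner.
# The app and command here are illustrative, not the project's CLI.
import typer
from typer.testing import CliRunner

app = typer.Typer()


@app.command()
def hello(name: str = "world"):
    typer.echo(f"hello {name}")


runner = CliRunner()


def test_hello_command():
    # invoke() runs the app in-process and captures exit code and output.
    result = runner.invoke(app, ["--name", "readme"])
    assert result.exit_code == 0
    assert "hello readme" in result.output
```

Because `invoke()` runs in-process, patches applied with `pytest-mock` or `unittest.mock` remain in effect inside the command under test.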
## Golden-file tests
`tests/test_ink_garden.py::TestGoldenFiles` compares deterministic SVG output against stored fixtures:
```text
tests/fixtures/ink_garden/
├── minimal_full.svg   # ~160 KB
├── rich_full.svg      # ~1.4 MB
└── rich_mid.svg       # ~1.4 MB
```

To regenerate after an intentional output change:
```bash
# Delete fixtures and re-run to regenerate
rm -rf tests/fixtures/ink_garden/
uv run python -c "
from scripts.art.ink_garden import generate, seed_hash
import pathlib
# ... (see test file for full snippet)
"
uv run python -m pytest tests/test_ink_garden.py -v
```

## Mocking conventions
- Heavy rendering (`svgwrite`, `cairo`) — patched via `@patch` at the source module
- Lazy imports — patch at `scripts.banner.generate_banner`, NOT `scripts.cli.generate_banner`
- Config I/O — use the `tmp_path` fixture + real YAML files (no config mocking)
- Do not mock the database (N/A) or the filesystem with `MagicMock` (use `tmp_path`)
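The lazy-import rule above can be demonstrated with stdlib-only stand-ins (module names `mod_a`/`mod_b` are illustrative): because the import happens inside the function body, it is resolved at call time, so only a patch at the source module takes effect.

```python
# Why "patch where it lives": mod_a lazily imports render from mod_b,
# so the patch target is mod_b.render. Modules built in-memory for the demo.
import sys
import types
from unittest.mock import patch

mod_b = types.ModuleType("mod_b")
mod_b.render = lambda: "real"
sys.modules["mod_b"] = mod_b

mod_a = types.ModuleType("mod_a")


def make_banner():
    from mod_b import render  # lazy import, resolved on each call
    return render()


mod_a.make_banner = make_banner
sys.modules["mod_a"] = mod_a


def test_patched_at_source():
    # Patching mod_b.render is seen by the lazy import inside make_banner.
    with patch("mod_b.render", return_value="fake"):
        assert mod_a.make_banner() == "fake"
```

Patching `mod_a.render` instead would fail: that attribute never exists, because the name is looked up in `mod_b` each time the function runs.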
## Loguru handler reset
`scripts/utils.py` calls `loguru_logger.remove()` at import time. Tests that import any `scripts.*` module need the `reset_loguru_handlers` autouse fixture (see `test_utils.py`).
## RED tests (xfail)
`tests/test_word_clouds_red.py` contains three `@pytest.mark.xfail(strict=True)` stubs for features not yet implemented:

- `palette_tokenization` field
- `layout_readability` knobs
- `style_variant` output differentiation
Because the stubs are `strict=True`, an unexpected pass (XPASS) is reported as a failure: the moment a feature lands, its stub fails loudly, signaling that the `xfail` marker should be removed and the test promoted to a real assertion.
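A sketch of what one such stub might look like; the test name mirrors the feature list, but the body and reason string are assumptions, not copied from the suite:

```python
# Illustrative strict-xfail stub in the style of test_word_clouds_red.py.
# The feature under test is hypothetical.
import pytest


@pytest.mark.xfail(strict=True, reason="palette_tokenization not implemented")
def test_palette_tokenization():
    # Raises until the feature lands; then the XPASS fails the suite,
    # prompting removal of the xfail marker.
    raise NotImplementedError
```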