# 🧪 ZSHAND Testing Guide

📘 This guide covers how to write tests, where they go, how to run them, and how to maintain coverage across the ZSHAND framework.
## 🧠 Philosophy

Every function should have a spec. Every edge case documented in a header should have a corresponding `It` block. Tests are living documentation: if a test doesn't exist, the behavior isn't guaranteed.
## 🏗️ Test Architecture

### 📁 Directory Structure
```text
tests/
├── 🎯 run_all.zsh               Master test runner (entry point)
├── 🔧 bootstrap_env.zsh         Test environment setup
├── 🔧 test_helpers.zsh          Common assertions & utilities (~1600 lines)
├── 🔧 ensure_bundles_fresh.zsh  Pre-test bundle validation
├── 📟 bin/                      Tests for bin/* CLI tools
├── 🏗️ build/                    Tests for build system
├── ⚙️ core/                     Tests for core modules
├── 📦 dependencies/             Dependency version checks
├── 📁 file-system/              Filesystem operation tests
├── 🛠️ functions/                Tests for user-facing functions
├── 🪨 hooks/                    Tests for integration hooks
├── 📥 installers/               Tests for system installers
├── 🔒 private_functions/        Tests for internal functions
├── 🧱 shared_functions/         Tests for shared utilities
├── 🚀 startup/                  Tests for startup scripts
├── 🧪 tests-tests/              Meta-tests (test infrastructure itself)
├── 🎨 ui/                       UI rendering tests
└── ⌨️ widgets/                  Tests for ZLE widgets
```
### 📋 Spec Mirroring Convention
Every source file maps to a spec file with a predictable path:
| Source | Spec |
|---|---|
| `functions/dcopy.zsh` | `tests/functions/test_dcopy.zsh` |
| `core/08_audit.zsh` | `tests/core/test_08_audit.zsh` |
| `shared_functions/01_stderr_error.zsh` | `tests/shared_functions/test_01_stderr_error.zsh` |
| `bin/aicontext` | `tests/bin/test_aicontext.zsh` |
| `widgets/aicmit.zsh` | `tests/widgets/test_aicmit.zsh` |
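Because the mapping is mechanical, it can be derived in a few lines of shell. A minimal sketch (the helper name and logic here are illustrative, not part of the framework):

```shell
# Illustrative only: derive the spec path for a source file, per the table above.
spec_path_for() {
  src="$1"
  dir=$(dirname "$src")              # e.g. functions
  base=$(basename "$src")            # e.g. dcopy.zsh
  case "$base" in
    *.zsh) ;;                        # already has the extension
    *) base="$base.zsh" ;;           # e.g. bin/aicontext -> test_aicontext.zsh
  esac
  printf 'tests/%s/test_%s\n' "$dir" "$base"
}

spec_path_for "functions/dcopy.zsh"  # -> tests/functions/test_dcopy.zsh
spec_path_for "bin/aicontext"        # -> tests/bin/test_aicontext.zsh
```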
> 💬 The `test_` prefix is the naming convention for test files in the framework's custom test runner. Each directory also has a `run_<dir>_tests.zsh` orchestrator.
## ▶️ Running Tests

### 🚀 Quick Start
```zsh
# Run everything
zsh tests/run_all.zsh

# Run specific test suites
zsh tests/run_all.zsh core functions

# Run with debug output
zsh tests/run_all.zsh --debug

# Run with full trace (command-level verbosity)
zsh tests/run_all.zsh --verbose

# Run with LLM context on failures (AI debugging)
zsh tests/run_all.zsh --llm
```
### 🎛️ Command-Line Options

| Flag | Short | Purpose |
|---|---|---|
| `--debug` | `-d` | Enable debug output during execution |
| `--trace` | `-t` | Enable trace output (command-level) |
| `--verbose` | `-v` | Enable both debug and trace |
| `--llm` | `-l` | Enable LLM context on failures |
| `--help` | `-h` | Display help information |
### 📦 Available Test Suites

```text
bin build core dependencies file-system functions
hooks installers private_functions shared_functions
startup tests-tests ui widgets
```
### 🧪 ShellSpec Tests (BDD)

For files that use ShellSpec (BDD-style specs in `spec/`):
```zsh
# Run all ShellSpec specs
just test

# Run specific spec file
just test spec/functions/dcopy_spec.sh

# Run specific directory
just test spec/core/

# Run with coverage (Kcov)
just test-coverage
```
## ✍️ Writing Tests

### 📄 Test File Structure

Each test file in the `tests/` directory follows this pattern:
```zsh
#!/usr/bin/env zsh
# Test file for: functions/my_function.zsh

# Source test helpers
source "${0:A:h}/../test_helpers.zsh"

# Source the function under test
source "${PROJECT_ROOT}/functions/my_function.zsh"

# ── Tests ───────────────────────────────────────────────────────────────────

test_my_function_happy_path() {
  local result
  result=$(my_function "valid_input")
  assert_equals "expected_output" "$result" "Should return expected output"
}

test_my_function_missing_arg() {
  my_function 2>/dev/null
  assert_equals 1 $? "Should exit 1 when argument missing"
}

test_my_function_edge_case() {
  local result
  result=$(my_function "  spaces  ")
  assert_not_empty "$result" "Should handle whitespace input"
}

# ── Run ─────────────────────────────────────────────────────────────────────

run_tests
```
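`run_tests` comes from `test_helpers.zsh`; its real implementation isn't shown here, but conceptually it discovers and invokes every `test_*` function defined in the file. A rough sketch of that idea (bash syntax shown; the actual zsh runner would more likely enumerate `${(k)functions}`):

```shell
# Conceptual sketch only -- not the framework's actual run_tests.
run_tests() {
  local fn
  # declare -F lists "declare -f <name>" for each defined function (bash).
  for fn in $(declare -F | awk '{print $3}' | grep '^test_'); do
    echo "RUN $fn"
    "$fn"
  done
}
```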
### 🧪 ShellSpec BDD Format

For ShellSpec tests (in the `spec/` directory):
```sh
#!/usr/bin/env shellspec
# ── my_function_spec: Tests for my_function ─────────────────────────────────

Describe "my_function"
  # Setup: source the function
  setup() {
    source "$PROJECT_ROOT/functions/my_function.zsh"
  }
  Before 'setup'

  Describe "happy path"
    It "returns 0 on valid input"
      When call my_function "valid_input"
      The status should be success
      The output should include "expected"
    End

    It "handles multiple arguments"
      When call my_function "arg1" "arg2"
      The status should be success
    End
  End

  Describe "error handling"
    It "exits 1 when argument missing"
      When call my_function
      The status should be failure
      The stderr should include "ERROR:"
    End

    It "exits 1 when file not found"
      When call my_function "/nonexistent/path"
      The status should be failure
    End
  End

  Describe "edge cases"
    It "handles empty string input"
      When call my_function ""
      The status should be failure
    End

    It "handles paths with spaces"
      When call my_function "/path/with spaces/file"
      The status should be success
    End
  End
End
```
### ✅ Common ShellSpec Matchers

| Matcher | Purpose | Example |
|---|---|---|
| `The status should be success` | Exit code 0 | Happy path |
| `The status should be failure` | Exit code non-zero | Error paths |
| `The output should include "text"` | stdout contains | Output verification |
| `The stderr should include "text"` | stderr contains | Error messages |
| `The output should eq "exact"` | Exact stdout match | Precise output |
| `The variable VAR should eq "val"` | Variable check | State verification |
| `The path "file" should be exist` | File exists | File operations |
| `The output should be blank` | Empty stdout | Silent operations |
### 🔧 Test Helpers (`test_helpers.zsh`)

The test helpers file provides common utilities:

| Helper | Purpose |
|---|---|
| `assert_equals` | Compare expected vs actual value |
| `assert_not_equals` | Values should differ |
| `assert_not_empty` | Value must not be empty |
| `assert_contains` | String contains substring |
| `assert_file_exists` | File must exist |
| `assert_exit_code` | Verify function exit code |
| `run_tests` | Execute all `test_*` functions |
| `setup_test_env` | Create temporary test environment |
| `teardown_test_env` | Clean up temporary files |
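The real helper file runs to ~1600 lines; as an illustration of the contracts in the table, stripped-down versions of two helpers might look like this (the bodies are hypothetical, only the names come from the table; the real helpers also track counts, colorize output, and report call sites):

```shell
# Hypothetical minimal helper implementations -- illustrative only.
_TEST_FAILURES=0

assert_equals() {   # assert_equals <expected> <actual> <message>
  if [ "$1" = "$2" ]; then
    printf 'PASS: %s\n' "$3"
  else
    printf 'FAIL: %s (expected: %s, actual: %s)\n' "$3" "$1" "$2"
    _TEST_FAILURES=$((_TEST_FAILURES + 1))
  fi
}

assert_not_empty() {   # assert_not_empty <value> <message>
  if [ -n "$1" ]; then
    printf 'PASS: %s\n' "$2"
  else
    printf 'FAIL: %s (value was empty)\n' "$2"
    _TEST_FAILURES=$((_TEST_FAILURES + 1))
  fi
}
```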
## 📊 Coverage

### 🎯 Coverage Targets

| Level | Target | Notes |
|---|---|---|
| 📊 Overall | >85% | Across all test suites |
| ⚙️ Core modules | >90% | Critical path code |
| 🛠️ Functions | >80% | User-facing behavior |
| 🧱 Shared functions | >95% | Low-level primitives |
### 📈 Measuring Coverage

Coverage is measured via Kcov in CI:

```zsh
# Run with coverage (generates HTML report)
just test-coverage

# View coverage report
open coverage/index.html
```
## 🗺️ Mapping Headers to Tests

The HEADER_STANDARD.md defines how header sections map to test scenarios:

| Header Section | Maps to Test |
|---|---|
| ▶️ USAGE | Signature, arguments, prerequisites |
| 🚪 EXIT CODES | Every exit code = a test case |
| ⚠️ EDGE CASES | Every row = a test case |
| 🔍 TROUBLESHOOTING | Every error message = a test case |
| 💻 EXAMPLES | Every example = a verifiable test |
| 📦 DEPENDENCIES | Required deps: missing → exit 1 |
| 🌍 ENVIRONMENT | Each var: unset, default, override |
> **Header-driven testing**
>
> When writing tests, open the file's header and work through each section. Every ⬜ in the TESTING must-cover list is a test waiting to be written. Update ⬜ → ✅ as you write each spec.
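The ENVIRONMENT row implies three tests per variable: unset (the default applies), explicitly set, and overridden. Sketched with a hypothetical `MYTOOL_COLOR` variable and `resolve_color` resolver (neither exists in the framework):

```shell
# Hypothetical variable and resolver -- illustrating unset/default/override.
resolve_color() { printf '%s' "${MYTOOL_COLOR:-auto}"; }

unset MYTOOL_COLOR
echo "unset    -> $(resolve_color)"   # default kicks in: auto

MYTOOL_COLOR=always
echo "set      -> $(resolve_color)"   # explicit value: always

MYTOOL_COLOR=never
echo "override -> $(resolve_color)"   # later override wins: never
```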
## 🐛 Debugging Test Failures

### 🔊 Verbose Output

```zsh
# Framework test runner with debug
zsh tests/run_all.zsh --debug core

# ShellSpec with verbose output
shellspec --format documentation spec/core/
```
### 📋 Common Failure Patterns

| Symptom | Likely Cause | Fix |
|---|---|---|
| "function not found" | Source file not loaded in test | Add `source` at top of test |
| "assertion failed: expected vs actual" | Logic bug or stale expectation | Check both code and test |
| "permission denied" | Test touching real filesystem | Use temp dirs via `setup_test_env` |
| "command not found: jq" | Optional dep not installed | Guard with `(( $+commands[jq] ))` skip |
| Tests pass locally, fail in CI | Environment difference | Check env vars, paths, installed tools |
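For the "command not found" row, the guard's general shape looks roughly like this; `(( $+commands[jq] ))` is the zsh-native check from the table, and `command -v` is the POSIX equivalent used here (the `run_jq_dependent_test` wrapper is illustrative):

```shell
# Skip-or-run guard for an optional dependency. POSIX form of the zsh
# (( $+commands[jq] )) check from the table above.
have() { command -v "$1" >/dev/null 2>&1; }

run_jq_dependent_test() {
  if ! have jq; then
    echo "SKIP: jq not installed"
    return 0            # a skip is not a failure
  fi
  echo "RUN: jq tests"
  # ...assertions that shell out to jq would go here...
}
```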
### 🔍 Isolating a Failure

```zsh
# Run just the failing suite
zsh tests/run_all.zsh --debug core

# Run just the failing test file
zsh tests/core/test_08_audit.zsh

# Run with zsh trace (shows every command)
zsh -x tests/core/test_08_audit.zsh
```
## 📝 Test Writing Checklist

For every new or modified function:

- ✅ **Happy path:** Does it work with valid input?
- ✅ **Missing arguments:** Does it exit cleanly with a usage hint?
- ✅ **Invalid arguments:** Does it reject bad input?
- ✅ **Missing dependencies:** Does it degrade gracefully?
- ✅ **Empty input:** Does it handle empty strings/files?
- ✅ **Paths with spaces:** Does it quote correctly?
- ✅ **Exit codes:** Does every documented code get tested?
- ✅ **Error messages:** Does every TROUBLESHOOTING entry trigger?
- ✅ **Edge cases:** Does every EDGE CASES row have a test?
- ✅ **Update header:** Mark ⬜ → ✅ in the TESTING must-cover list
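The "paths with spaces" item earns its place because unquoted expansions word-split. A two-line demonstration (the function names are illustrative):

```shell
# Why quoting matters: unquoted $1 word-splits on spaces; "$1" does not.
print_quoted()   { printf '%s\n' "$1"; }   # correct: one line out
print_unquoted() { printf '%s\n' $1;   }   # buggy: splits into two lines

print_quoted   "/path/with spaces/file"    # -> /path/with spaces/file
print_unquoted "/path/with spaces/file"    # -> /path/with
                                           #    spaces/file
```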
## 📚 Related Documents

| Document | Purpose |
|---|---|
| 📋 HEADER_STANDARD.md | TESTING section format and ShellSpec mapping |
| 🤝 CONTRIBUTING.md | Quality gate and submission checklist |
| 🏗️ ARCHITECTURE.md | Directory structure and test location |
| ⚡ PERFORMANCE.md | Benchmark testing |