
πŸ§ͺ ZSHAND Testing GuideΒΆ

πŸ“‹ This guide covers how to write tests, where they go, how to run them, and how to maintain coverage across the ZSHAND framework.

🧠 PhilosophyΒΆ

Every function should have a spec. Every edge case documented in a header should have a corresponding It block. Tests are living documentation β€” if a test doesn't exist, the behavior isn't guaranteed.


πŸ—οΈ Test ArchitectureΒΆ

πŸ“ Directory StructureΒΆ

tests/
β”œβ”€β”€ 🎯 run_all.zsh              Master test runner (entry point)
β”œβ”€β”€ πŸ”§ bootstrap_env.zsh        Test environment setup
β”œβ”€β”€ πŸ”§ test_helpers.zsh         Common assertions & utilities (~1600 lines)
β”œβ”€β”€ πŸ”§ ensure_bundles_fresh.zsh Pre-test bundle validation
β”œβ”€β”€ πŸ“Ÿ bin/                      Tests for bin/* CLI tools
β”œβ”€β”€ πŸ—οΈ build/                    Tests for build system
β”œβ”€β”€ βš™οΈ core/                     Tests for core modules
β”œβ”€β”€ πŸ“¦ dependencies/             Dependency version checks
β”œβ”€β”€ πŸ“‚ file-system/              Filesystem operation tests
β”œβ”€β”€ πŸ› οΈ functions/                Tests for user-facing functions
β”œβ”€β”€ πŸͺ¨ hooks/                    Tests for integration hooks
β”œβ”€β”€ πŸ“₯ installers/               Tests for system installers
β”œβ”€β”€ πŸ” private_functions/        Tests for internal functions
β”œβ”€β”€ 🧱 shared_functions/         Tests for shared utilities
β”œβ”€β”€ πŸš€ startup/                  Tests for startup scripts
β”œβ”€β”€ πŸ§ͺ tests-tests/              Meta-tests (test infrastructure itself)
β”œβ”€β”€ 🎨 ui/                       UI rendering tests
└── ⌨️ widgets/                   Tests for ZLE widgets

πŸ”„ Spec Mirroring ConventionΒΆ

Every source file maps to a spec file with a predictable path:

| Source | Spec |
| --- | --- |
| functions/dcopy.zsh | tests/functions/test_dcopy.zsh |
| core/08_audit.zsh | tests/core/test_08_audit.zsh |
| shared_functions/01_stderr_error.zsh | tests/shared_functions/test_01_stderr_error.zsh |
| bin/aicontext | tests/bin/test_aicontext.zsh |
| widgets/aicmit.zsh | tests/widgets/test_aicmit.zsh |
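Because the mapping is mechanical, it can be expressed as a small helper. The sketch below is hypothetical (spec_path is not part of ZSHAND); it only illustrates the rule:

```shell
# Hypothetical helper illustrating the spec-mirroring rule; not part of ZSHAND.
spec_path() {
  dir=${1%/*}          # e.g. functions
  base=${1##*/}        # e.g. dcopy.zsh
  base=${base%.zsh}    # strip extension if present (bin/* files have none)
  printf 'tests/%s/test_%s.zsh\n' "$dir" "$base"
}

spec_path "functions/dcopy.zsh"   # tests/functions/test_dcopy.zsh
spec_path "bin/aicontext"         # tests/bin/test_aicontext.zsh
```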

πŸ’¬ The test_ prefix is the naming convention for test files in the framework's custom test runner. Each directory also has a run_<dir>_tests.zsh orchestrator.
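A per-directory orchestrator of that kind plausibly reduces to a loop over the directory's test_* files. This is a sketch, not the real run_<dir>_tests.zsh; the interpreter argument exists only to keep the sketch portable:

```shell
# Hypothetical sketch of a run_<dir>_tests.zsh orchestrator.
# Usage: run_suite <suite_dir> [interpreter]
run_suite() {
  suite_dir=$1
  interp=${2:-zsh}
  failures=0
  for spec in "$suite_dir"/test_*.zsh; do
    [ -e "$spec" ] || continue            # directory has no spec files
    if "$interp" "$spec"; then
      printf 'PASS %s\n' "$spec"
    else
      printf 'FAIL %s\n' "$spec"
      failures=$((failures + 1))
    fi
  done
  return "$failures"                      # non-zero if anything failed
}
```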


▢️ Running TestsΒΆ

πŸš€ Quick StartΒΆ

# Run everything
zsh tests/run_all.zsh

# Run specific test suites
zsh tests/run_all.zsh core functions

# Run with debug output
zsh tests/run_all.zsh --debug

# Run with full trace (command-level verbosity)
zsh tests/run_all.zsh --verbose

# Run with LLM context on failures (AI debugging)
zsh tests/run_all.zsh --llm

πŸŽ›οΈ Command-Line OptionsΒΆ

| Flag | Short | Purpose |
| --- | --- | --- |
| --debug | -d | Enable debug output during execution |
| --trace | -t | Enable trace output (command-level) |
| --verbose | -v | Enable both debug and trace |
| --llm | -l | Enable LLM context on failures |
| --help | -h | Display help information |

πŸ“¦ Available Test SuitesΒΆ

bin  build  core  dependencies  file-system  functions
hooks  installers  private_functions  shared_functions
startup  tests-tests  ui  widgets

πŸ§ͺ ShellSpec Tests (BDD)ΒΆ

For files that use ShellSpec (BDD-style specs in spec/):

# Run all ShellSpec specs
just test

# Run specific spec file
just test spec/functions/dcopy_spec.sh

# Run specific directory
just test spec/core/

# Run with coverage (Kcov)
just test-coverage

✏️ Writing Tests¢

πŸ“ Test File StructureΒΆ

Each test file in the tests/ directory follows this pattern:

#!/usr/bin/env zsh
# Test file for: functions/my_function.zsh

# Source test helpers
source "${0:A:h}/../test_helpers.zsh"

# Source the function under test
source "${PROJECT_ROOT}/functions/my_function.zsh"

# ── Tests ───────────────────────────────────────────────────────────────────

test_my_function_happy_path() {
    local result
    result=$(my_function "valid_input")
    assert_equals "expected_output" "$result" "Should return expected output"
}

test_my_function_missing_arg() {
    # Run in a subshell so an explicit exit in the function
    # cannot terminate the whole test file
    ( my_function 2>/dev/null )
    assert_equals 1 $? "Should exit 1 when argument missing"
}

test_my_function_edge_case() {
    local result
    result=$(my_function "  spaces  ")
    assert_not_empty "$result" "Should handle whitespace input"
}

# ── Run ─────────────────────────────────────────────────────────────────────
run_tests
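The run_tests call at the bottom is what drives the file: conceptually it discovers and invokes every test_* function. The sketch below is hypothetical and bash-flavored (declare -F); the real helper in test_helpers.zsh uses zsh facilities and adds counters and formatting:

```shell
# Hypothetical sketch of a run_tests dispatcher; the real helper differs.
run_tests() {
  rc=0
  for fn in $(declare -F | awk '{ print $3 }' | grep '^test_'); do
    if "$fn"; then
      printf 'PASS %s\n' "$fn"
    else
      printf 'FAIL %s\n' "$fn"
      rc=1                               # remember that something failed
    fi
  done
  return "$rc"
}
```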

πŸ§ͺ ShellSpec BDD FormatΒΆ

For ShellSpec tests (in spec/ directory):

#!/usr/bin/env shellspec
# ── my_function_spec β€” Tests for my_function ──────────────────────────────

Describe "my_function"
  # Setup: source the function
  setup() {
    source "$PROJECT_ROOT/functions/my_function.zsh"
  }
  Before 'setup'

  Describe "happy path"
    It "returns 0 on valid input"
      When call my_function "valid_input"
      The status should be success
      The output should include "expected"
    End

    It "handles multiple arguments"
      When call my_function "arg1" "arg2"
      The status should be success
    End
  End

  Describe "error handling"
    It "exits 1 when argument missing"
      When call my_function
      The status should be failure
      The stderr should include "ERROR:"
    End

    It "exits 1 when file not found"
      When call my_function "/nonexistent/path"
      The status should be failure
    End
  End

  Describe "edge cases"
    It "handles empty string input"
      When call my_function ""
      The status should be failure
    End

    It "handles paths with spaces"
      When call my_function "/path/with spaces/file"
      The status should be success
    End
  End
End

βœ… Common ShellSpec MatchersΒΆ

| Matcher | Purpose | Example use |
| --- | --- | --- |
| The status should be success | Exit code 0 | Happy path |
| The status should be failure | Exit code non-zero | Error paths |
| The output should include "text" | stdout contains | Output verification |
| The stderr should include "text" | stderr contains | Error messages |
| The output should eq "exact" | Exact stdout match | Precise output |
| The variable VAR should eq "val" | Variable check | State verification |
| The path "file" should be exist | File exists | File operations |
| The output should be blank | Empty stdout | Silent operations |

πŸ”§ Test Helpers (test_helpers.zsh)ΒΆ

The test helpers file provides common utilities:

| Helper | Purpose |
| --- | --- |
| assert_equals | Compare expected vs actual value |
| assert_not_equals | Values should differ |
| assert_not_empty | Value must not be empty |
| assert_contains | String contains substring |
| assert_file_exists | File must exist |
| assert_exit_code | Verify function exit code |
| run_tests | Execute all test_* functions |
| setup_test_env | Create temporary test environment |
| teardown_test_env | Clean up temporary files |
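For orientation, an equality assertion of this kind typically reduces to something like the following. This is a hypothetical sketch, not the actual test_helpers.zsh implementation (which adds counters, colors, and integration with run_tests):

```shell
# Hypothetical sketch of an assert_equals helper; the real one differs.
assert_equals() {
  expected=$1
  actual=$2
  message=${3:-assert_equals}
  if [ "$expected" = "$actual" ]; then
    printf 'PASS: %s\n' "$message"
    return 0
  fi
  # Failures go to stderr so they stand out in captured output
  printf 'FAIL: %s (expected "%s", got "%s")\n' \
    "$message" "$expected" "$actual" >&2
  return 1
}
```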

πŸ“Š CoverageΒΆ

🎯 Coverage Targets¢

| Level | Target | Notes |
| --- | --- | --- |
| 🌐 Overall | >85% | Across all test suites |
| βš™οΈ Core modules | >90% | Critical-path code |
| πŸ› οΈ Functions | >80% | User-facing behavior |
| 🧱 Shared functions | >95% | Low-level primitives |

πŸ“ˆ Measuring CoverageΒΆ

Coverage is measured via Kcov in CI:

# Run with coverage (generates HTML report)
just test-coverage

# View coverage report
open coverage/index.html

πŸ—ΊοΈ Mapping Headers to TestsΒΆ

The HEADER_STANDARD.md defines how header sections map to test scenarios:

| Header Section | Maps to Test |
| --- | --- |
| ▢️ USAGE | Signature, arguments, prerequisites |
| πŸšͺ EXIT CODES | Every exit code = a test case |
| ⚠️ EDGE CASES | Every row = a test case |
| πŸ” TROUBLESHOOTING | Every error message = a test case |
| πŸ’» EXAMPLES | Every example = a verifiable test |
| πŸ“¦ DEPENDENCIES | Required deps: missing β†’ exit 1 |
| 🌍 ENVIRONMENT | Each var: unset, default, override |

Header-driven testing

When writing tests, open the file's header and work through each section. Every ⬜ in the TESTING section's must-cover list is a test waiting to be written; update ⬜ β†’ βœ… as you write each spec.


πŸ› Debugging Test FailuresΒΆ

πŸ” Verbose OutputΒΆ

# Framework test runner with debug
zsh tests/run_all.zsh --debug core

# ShellSpec with verbose output
shellspec --format documentation spec/core/

πŸ“ Common Failure PatternsΒΆ

| Symptom | Likely Cause | Fix |
| --- | --- | --- |
| "function not found" | Source file not loaded in test | Add source at top of test |
| "assertion failed: expected vs actual" | Logic bug or stale expectation | Check both code and test |
| "permission denied" | Test touching real filesystem | Use temp dirs via setup_test_env |
| "command not found: jq" | Optional dep not installed | Guard with (( $+commands[jq] )) and skip |
| Tests pass locally, fail in CI | Environment difference | Check env vars, paths, installed tools |
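The optional-dependency guard from the table can be sketched as follows. command -v is shown as the portable equivalent of zsh's (( $+commands[jq] )); have_cmd and test_json_roundtrip are hypothetical names used only for illustration:

```shell
# Skipping a test when an optional dependency (here jq) is absent.
# Portable form; in zsh, (( $+commands[jq] )) works the same way.
have_cmd() { command -v "$1" >/dev/null 2>&1; }

test_json_roundtrip() {
  if ! have_cmd jq; then
    printf 'SKIP: jq not installed\n'
    return 0                      # skip gracefully, do not fail the suite
  fi
  printf '{"ok":true}' | jq -e '.ok' >/dev/null
}
```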

πŸ”„ Isolating a FailureΒΆ

# Run just the failing suite
zsh tests/run_all.zsh --debug core

# Run just the failing test file
zsh tests/core/test_08_audit.zsh

# Run with zsh trace (shows every command)
zsh -x tests/core/test_08_audit.zsh

πŸ“ Test Writing ChecklistΒΆ

For every new or modified function:

  • βœ… Happy path β€” Does it work with valid input?
  • βœ… Missing arguments β€” Does it exit cleanly with usage hint?
  • βœ… Invalid arguments β€” Does it reject bad input?
  • βœ… Missing dependencies β€” Does it degrade gracefully?
  • βœ… Empty input β€” Does it handle empty strings/files?
  • βœ… Paths with spaces β€” Does it quote correctly?
  • βœ… Exit codes β€” Does every documented code get tested?
  • βœ… Error messages β€” Does every TROUBLESHOOTING entry trigger?
  • βœ… Edge cases β€” Does every EDGE CASES row have a test?
  • βœ… Update header β€” Mark ⬜ β†’ βœ… in the TESTING must-cover list

πŸ”— Related DocumentationΒΆ

| Document | Purpose |
| --- | --- |
| πŸ“ HEADER_STANDARD.md | TESTING section format and ShellSpec mapping |
| 🀝 CONTRIBUTING.md | Quality gate and submission checklist |
| πŸ—οΈ ARCHITECTURE.md | Directory structure and test locations |
| ⚑ PERFORMANCE.md | Benchmark testing |