Automated Testing
SK Wwise MCP has three layers of testing — unit, integration, and eval — all built on Python and pytest.
Unit Tests
~450 tests across 27 files in tests/unit/. All WAAPI calls are mocked, so no Wwise installation is required.
Coverage includes:
- Core modules — WAAPI dispatcher, query engine, audio conversion, CLI wrapper
- Business logic — pipeline operations, transport, profiling, object manipulation
- MCP servers — one test file per server (browse, objects, containers, pipeline, etc.)
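A typical test stubs the WAAPI dispatcher and asserts on the request it would have sent. A minimal sketch of the pattern, assuming hypothetical module and function names (sk_wwise_mcp.objects, waapi_call, get_object_name) rather than the project's actual API:

```python
from unittest.mock import MagicMock

# Hypothetical import; the real module layout may differ.
from sk_wwise_mcp import objects


def test_get_object_name_issues_one_waapi_query(monkeypatch):
    # Stub the dispatcher so no Wwise installation is needed.
    fake_call = MagicMock(return_value={"return": [{"name": "MySound"}]})
    monkeypatch.setattr(objects, "waapi_call", fake_call)

    name = objects.get_object_name("{SOME-GUID}")

    fake_call.assert_called_once()  # exactly one WAAPI round-trip
    assert name == "MySound"
```

Run the suite with: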
```
uv run pytest tests/unit/ -v
```

Integration Tests
8 test files in tests/integration/, marked with @pytest.mark.integration. These run against a live Wwise headless server with a test project.
What they test:
- Object queries, creation, deletion, and modification
- Switch/Blend Container assignments
- Audio import and SoundBank operations
- Transport/playback operations
- Audio peaks and Media Pool reads
- Generic WAAPI passthrough
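An integration case is an ordinary pytest test carrying the marker. A sketch, under the assumption that conftest.py exposes a connected client as a waapi fixture (the fixture name is illustrative; the WAAPI URIs are the standard ones):

```python
import pytest


@pytest.mark.integration
def test_create_and_delete_sound(waapi):
    # Create a temporary Sound in the live test project...
    obj = waapi.call("ak.wwise.core.object.create", {
        "parent": "\\Actor-Mixer Hierarchy\\Default Work Unit",
        "type": "Sound",
        "name": "MCP_Test_Sound",
    })
    assert obj["id"]

    # ...then remove it so the project stays clean for the next case.
    waapi.call("ak.wwise.core.object.delete", {"object": obj["id"]})
```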
The conftest.py fixtures handle setup automatically:
- Locate WwiseConsole via WWISEROOT or the Windows registry
- Start a headless WAAPI server on port 8080
- Wait up to 20 seconds for the connection
- Clean up leftover test objects before and after each run
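In outline, the session fixture behaves like the sketch below (simplified; the real conftest.py also checks the Windows registry, and the exact WwiseConsole arguments shown here are assumptions):

```python
import os
import subprocess
import time

import pytest
from waapi import WaapiClient  # waapi-client package


@pytest.fixture(scope="session")
def waapi():
    # Locate WwiseConsole via WWISEROOT (registry fallback omitted in this sketch).
    root = os.environ["WWISEROOT"]
    console = os.path.join(root, "Authoring", "x64", "Release", "bin", "WwiseConsole.exe")

    # Start a headless WAAPI server on the test project (port 8080 is the WAAPI default).
    proc = subprocess.Popen([console, "waapi-server", "tests/integration/TestProject/TestProject.wproj"])

    # Wait up to 20 seconds for the connection.
    client = None
    for _ in range(20):
        try:
            client = WaapiClient(url="ws://127.0.0.1:8080/waapi")
            break
        except Exception:
            time.sleep(1)
    if client is None:
        proc.kill()
        pytest.fail("WAAPI server did not come up within 20 seconds")

    yield client

    client.disconnect()
    proc.terminate()
```

Run the suite with: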
```
uv run pytest tests/integration/ -v
```

To skip integration tests when running the full suite:
```
uv run pytest tests/ -v -m "not integration"
```

Eval Tests (LLM Routing)
39 test cases in tests/eval/ that verify the AI agent selects the correct MCP tools for a given prompt. This catches routing regressions when tools or server configurations change.
Each case defines a prompt and the expected tool(s):
```json
{
  "id": 1,
  "prompt": "Ping Wwise to check if it's available",
  "expected_tools": ["ping_wwise"],
  "category": "browse"
}
```

Categories: browse, audition, generic, objects, media-read, cross-server.
Running Evals
Eval runs use Claude Code skills for orchestration:
Setup — create a test project and start the WAAPI server:
```
/eval-setup
```

Run cases — choose batch (5 cases/invocation) or granular (1 case):
```
/loop 60s /eval-batch
/loop 30s /eval-routing
```

Generate report:
```
python tests/eval/report.py
```

Cleanup:
```
/eval-teardown
```
A PostToolUse hook in .claude/settings.json automatically logs every Wwise MCP tool call to tests/eval/tool_log.jsonl during eval runs. The verify.py script compares actual vs. expected tools and writes results to test_results.json.
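Conceptually, the check is a per-case set comparison. A rough sketch of the idea, not the actual verify.py (the cases filename and the log schema are assumptions):

```python
import json
from pathlib import Path

# Assumed filenames/schema: cases.json holds the 39 cases, and each
# tool_log.jsonl entry carries "case_id" and "tool" fields.
cases = json.loads(Path("tests/eval/cases.json").read_text())
logged = [json.loads(line) for line in Path("tests/eval/tool_log.jsonl").read_text().splitlines() if line]

results = []
for case in cases:
    actual = {entry["tool"] for entry in logged if entry.get("case_id") == case["id"]}
    results.append({
        "id": case["id"],
        "passed": set(case["expected_tools"]) <= actual,  # every expected tool was called
        "actual_tools": sorted(actual),
    })

Path("tests/eval/test_results.json").write_text(json.dumps(results, indent=2))
```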
Artifacts
| File | Purpose |
|---|---|
| tests/eval/tool_log.jsonl | Timestamped MCP tool calls |
| tests/eval/test_results.json | Pass/fail results per case |
| tests/eval/setup_manifest.json | Eval project path, WAAPI PID, created objects |
Prerequisites
| Test layer | Requirements |
|---|---|
| Unit | Python 3.12+, pytest>=8.0 |
| Integration | + Wwise installed, WWISEROOT set or discoverable via registry |
| Eval | + Claude Code with SK Wwise MCP servers configured |
Install dev dependencies:
```
uv sync  # installs runtime + dev deps, creates .venv
```