🔄 Common Workflows
Practical patterns for using ProofScan in different scenarios
Development Workflow
Using ProofScan while developing an MCP server:
1. Set Up Development Environment
# Initialize ProofScan
pfscan config init
# Add your development server
pfscan connectors add \
--id dev-server \
--stdio "node dist/index.js"
2. Start Monitoring
# Terminal 1: Start scanning
pfscan scan start --id dev-server
# Terminal 2: Watch events in real-time
pfscan view --follow --fulltime --json
3. Test Your Implementation
Use your MCP client to interact with the server. ProofScan will capture all traffic.
4. Analyze Results
# View hierarchical structure
pfscan tree
# Check for errors
pfscan view --errors
# Generate summary
pfscan summary
5. Iterate and Refine
Based on the captured data, fix bugs and improve your implementation. Repeat the process.
Debugging Workflow
Troubleshooting issues with MCP communications:
Scenario: Connection Not Establishing
# Try to start scanning
pfscan scan start --id problematic-server
# Check connector status
pfscan status --id problematic-server
# View any errors
pfscan view --errors --since 10m
Scenario: Slow Response Times
# Scan the server
pfscan scan start --id slow-server
# View events with latency info
pfscan view --fulltime
# Find slow operations (latency >= 1000ms)
pfscan view | grep -E "lat=[0-9]{4,}ms"
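A fixed four-digit class (`[0-9][0-9][0-9][0-9]`) matches only 1000–9999 ms and silently misses anything of 10 seconds or more; the `{4,}` form catches four or more digits. A quick check against sample lines (the `lat=<n>ms` layout is assumed from the command above):

```shell
# Sample event lines in the assumed lat=<n>ms layout (hypothetical data).
printf 'call tools/list lat=42ms\ncall tools/call lat=1830ms\ncall ping lat=12045ms\n' > events.txt

# {4,} matches four OR MORE digits, so 12045ms is caught as well as 1830ms,
# while 42ms is excluded.
grep -E 'lat=[0-9]{4,}ms' events.txt
```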
Scenario: Unexpected Behavior
# Capture a complete session
pfscan scan start --id mystery-server
# View complete message flow
pfscan tree --rpc-all
# Export for detailed analysis
pfscan view --json > debug.json
Testing Workflow
Using ProofScan for automated testing:
1. Create Test Script
#!/bin/bash
# test-mcp-server.sh
# Start ProofScan
SESSION_ID=$(pfscan scan start --id test-server | grep "Session:" | awk '{print $2}')
# Run your tests
npm test
# Stop scanning
pfscan scan stop
# Generate report
pfscan summary --session $SESSION_ID
# Check for errors
ERROR_COUNT=$(pfscan view --session $SESSION_ID --errors | wc -l)
if [ "$ERROR_COUNT" -gt 0 ]; then
echo "❌ Test failed: $ERROR_COUNT errors detected"
exit 1
else
echo "✅ Test passed: No errors"
exit 0
fi
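The pass/fail gate in the script can be exercised without a live server by substituting a stubbed error listing for the `pfscan view --errors` output (one line per error event is an assumption about that output):

```shell
# Stub standing in for `pfscan view --errors` output (hypothetical format).
printf 'ERR rpc timeout\nERR invalid params\n' > errors.txt

# Same counting and gating logic as the test script above.
ERROR_COUNT=$(wc -l < errors.txt)
if [ "$ERROR_COUNT" -gt 0 ]; then
  echo "FAIL: $ERROR_COUNT errors detected"
else
  echo "PASS: no errors"
fi
```

Note that `wc -l` counts every line, so if the real command prints headers or blank lines the count will be inflated; filtering with `grep -c` on a known error marker is more robust.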
2. Integration with CI/CD
# .github/workflows/test.yml
name: MCP Tests
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install ProofScan
run: npm install -g proofscan
- name: Run tests with ProofScan
run: ./test-mcp-server.sh
- name: Upload scan results
if: failure()
uses: actions/upload-artifact@v4
with:
name: proofscan-logs
path: ~/.config/proofscan/sessions/
Production Monitoring
Continuous monitoring of production MCP servers:
1. Set Up Monitoring
# Add production servers
pfscan connectors add \
--id prod-api \
--sse "https://api.example.com/mcp"
pfscan connectors add \
--id prod-db \
--sse "https://db.example.com/mcp"
2. Start Continuous Scanning
# Start monitoring in background
pfscan scan start --id prod-api &
pfscan scan start --id prod-db &
3. Set Up Alerting
#!/bin/bash
# monitor-errors.sh
while true; do
ERRORS=$(pfscan view --since 5m --errors | wc -l)
if [ "$ERRORS" -gt 10 ]; then
echo "⚠️ High error rate detected: $ERRORS errors in last 5 minutes"
# Send alert (Slack, email, etc.)
fi
sleep 300 # Check every 5 minutes
done
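The thresholding inside the loop is easier to test if it is pulled out into a function; this sketch keeps the 10-error cutoff used above and signals "send alert" via a non-zero return:

```shell
# Returns non-zero (and prints an alert) when the error count exceeds the threshold.
check_error_rate() {
  errors=$1
  threshold=$2
  if [ "$errors" -gt "$threshold" ]; then
    echo "ALERT: $errors errors in window"
    return 1
  fi
  echo "OK: $errors errors in window"
}

check_error_rate 3 10
check_error_rate 25 10 || true   # non-zero return is the hook for sending an alert
```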
4. Generate Daily Reports
#!/bin/bash
# daily-report.sh
# Get today's sessions
TODAY=$(date +%Y-%m-%d)
# Generate reports for each connector
for CONNECTOR in prod-api prod-db; do
echo "=== Report for $CONNECTOR ==="
pfscan summary --connector $CONNECTOR --since $TODAY
echo ""
done
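For cron use it usually pays to write each report to a dated file rather than stdout. A sketch of that variant, with the `pfscan summary` call stubbed out as a comment:

```shell
TODAY=$(date +%Y-%m-%d)

for CONNECTOR in prod-api prod-db; do
  REPORT="report-$CONNECTOR-$TODAY.txt"
  echo "=== Report for $CONNECTOR ===" > "$REPORT"
  # In the real script:
  # pfscan summary --connector "$CONNECTOR" --since "$TODAY" >> "$REPORT"
done

ls report-*-"$TODAY".txt
```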
Audit & Compliance Workflow
Creating verifiable audit trails:
1. Capture Critical Operations
# Start scanning for audit
pfscan scan start --id financial-api
# Perform critical operations
# (All communications are captured automatically)
2. Generate Proof Records
# Generate POPL proof
pfscan popl generate \
--session <session-id> \
--output audit-2024-02-17.json
# Include metadata
pfscan popl generate \
--session <session-id> \
--metadata "audit_type:financial,period:Q1-2024" \
--output Q1-2024-audit.json
3. Verify Proof Integrity
# Verify the proof
pfscan popl verify --file audit-2024-02-17.json
# Should output:
# ✓ Proof signature valid
# ✓ All events verified
# ✓ Timeline consistent
4. Archive and Store
# Archive session data (a tarball alone is not tamper-proof; pair it with a checksum)
tar -czf audits-2024-Q1.tar.gz ~/.config/proofscan/sessions/
# Store in secure location
aws s3 cp audits-2024-Q1.tar.gz s3://audit-bucket/2024/Q1/
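A gzipped tarball by itself is not tamper-evident; recording a checksum next to the archive at least makes later modification detectable. A sketch against a sample directory (substitute the real `~/.config/proofscan/sessions/` path):

```shell
# Sample data standing in for ~/.config/proofscan/sessions/
mkdir -p sessions && echo '{"event":1}' > sessions/session-001.json

tar -czf audits-2024-Q1.tar.gz sessions/
sha256sum audits-2024-Q1.tar.gz > audits-2024-Q1.tar.gz.sha256

# Later, before relying on the archive:
sha256sum -c audits-2024-Q1.tar.gz.sha256
```

Store the `.sha256` file separately from the archive (or sign it) so an attacker cannot rewrite both together.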
Performance Analysis Workflow
Identifying and optimizing performance bottlenecks:
1. Baseline Measurement
# Scan under normal load
pfscan scan start --id target-server
# Run standard workload
./run-benchmark.sh
# Generate performance summary
pfscan summary
2. Identify Slow Operations
# Find operations taking >= 1 second
pfscan view | grep -E "lat=[0-9]{4,}ms"
# Sort by latency
pfscan view --json | jq '[.[] | select(.latency_ms > 1000)] | sort_by(.latency_ms) | reverse'
3. Analyze RPC Patterns
# View RPC call frequency
pfscan tree --rpc-all
# Count calls by method
pfscan view --json | jq -r '.[].message.method' | sort | uniq -c | sort -rn
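When `jq` is not available, the per-method tally can be approximated with `grep`. The `"method"` field name is taken from the jq example above, and the sample events are invented for illustration:

```shell
# Sample JSON events with the assumed message.method field (hypothetical data).
cat > events.json <<'EOF'
{"message":{"method":"tools/call"}}
{"message":{"method":"tools/list"}}
{"message":{"method":"tools/call"}}
EOF

# Extract each "method":"..." occurrence and tally, most frequent first.
grep -o '"method":"[^"]*"' events.json | sort | uniq -c | sort -rn
```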
4. Compare Before/After
# Capture baseline
pfscan scan start --id server
SESSION_BEFORE=$(pfscan status | grep Session | awk '{print $2}')
# Make optimization changes
# Capture after optimization
pfscan scan start --id server
SESSION_AFTER=$(pfscan status | grep Session | awk '{print $2}')
# Compare summaries
echo "=== BEFORE ==="
pfscan summary --session $SESSION_BEFORE
echo "=== AFTER ==="
pfscan summary --session $SESSION_AFTER
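Once both session IDs are captured, saving each summary to a file and diffing the two makes regressions easy to spot. The summary text below is invented for illustration; in practice, redirect the real `pfscan summary` output into these files:

```shell
# Stand-ins for `pfscan summary --session ...` output (format is hypothetical).
cat > before.txt <<'EOF'
calls: 120
avg latency: 1450ms
EOF
cat > after.txt <<'EOF'
calls: 120
avg latency: 310ms
EOF

# diff exits 1 when the files differ; `|| true` keeps a `set -e` script going.
diff before.txt after.txt || true
```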
Multi-Server Workflow
Monitoring multiple MCP servers simultaneously:
1. Register Multiple Connectors
# Add all servers
pfscan connectors add --id api-server --sse "https://api.example.com/mcp"
pfscan connectors add --id db-server --sse "https://db.example.com/mcp"
pfscan connectors add --id cache-server --stdio "redis-mcp-server"
2. Start All Scans
# Start all in background
for connector in api-server db-server cache-server; do
pfscan scan start --id $connector &
done
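Backgrounded scans are easy to lose track of; collecting their PIDs and calling `wait` on each one surfaces their exit statuses. A sketch in which `sleep` stands in for the long-running `pfscan scan start`:

```shell
pids=""
for connector in api-server db-server cache-server; do
  sleep 0.1 &   # placeholder for: pfscan scan start --id "$connector" &
  pids="$pids $!"
done

# Wait for each background job and report how it exited.
for pid in $pids; do
  wait "$pid" && echo "pid $pid exited cleanly"
done
```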
3. Monitor Aggregate View
# Watch all events
pfscan view --follow
# View by connector
pfscan tree
# Check individual status
pfscan status
Workflow Tips
💡 Best Practices
- Session Management: Use descriptive connector IDs to identify sessions later
- Time Filters: Use `--since` and `--until` for focused analysis
- JSON Output: Use `--json` for integration with other tools
- Background Tasks: Run scans in the background (`&`) for continuous monitoring
- Regular Cleanup: Archive old sessions to save disk space
- Automation: Script repetitive workflows for consistency
Next Steps
- → Command Reference - Detailed command documentation
- → Examples - More practical examples
- → Architecture - Understanding internals