Most MSPs treat WiFi auditing as a one-off manual task. It doesn't have to be. With a $120 edge sensor per site, a cron job, and four API calls, you can run fully automated nightly captures, generate compliance-grade PDF reports, file PSA tickets, and ping your Slack channel on criticals — without touching a keyboard.
Architecture Overview: What the Pipeline Looks Like
Before diving into code, here's the full data flow from capture to ticket:
- Edge sensor (Raspberry Pi 4 + Alfa AWUS036AXML) runs a cron-scheduled hcxdumptool capture each night during the client's off-hours window (e.g. 02:00–02:15 local time).
- SCP transfer ships the .pcapng file to your central automation server (or directly to an ephemeral cloud VM).
- Python script uploads the file to the wifiaudit.io API, polls for job completion, and downloads the PDF report.
- PSA integration attaches the PDF to the client's recurring audit ticket in ConnectWise Manage, HaloPSA, or Autotask.
- Alert routing — if any finding has `severity == "critical"`, a Slack or Teams webhook fires immediately. Non-critical results wait for the daily digest.
Total elapsed time from end-of-capture to Slack alert: under 4 minutes. Engineer intervention required: zero, unless a critical fires.
Step 1: Edge Sensor Setup and Scheduled Capture
Hardware
Per-site hardware cost is about $110–$130: Raspberry Pi 4 (2 GB, ~$55), Alfa AWUS036AXML Wi-Fi 6 adapter (~$45), SD card and power supply (~$20). Run Kali Linux or Raspberry Pi OS with hcxtools installed. Register each sensor with a unique hostname (e.g. sensor-acme-hq) so your automation can tag reports to the right client.
Cron-Scheduled Capture
Deploy this crontab on each sensor. It captures for 12 minutes starting at 02:00, then SCP-transfers the result and cleans up locally:
```bash
# /etc/cron.d/wifi-capture — runs as root
# m h dom mon dow command
0 2 * * * root /usr/local/bin/run-capture.sh >> /var/log/wifi-capture.log 2>&1
```

```bash
#!/usr/bin/env bash
# /usr/local/bin/run-capture.sh
set -euo pipefail

CLIENT="acme-hq"
IFACE="wlan0"
OUTDIR="/tmp/captures"
REMOTE="automation@10.0.0.5:/srv/captures/"
SSH_KEY="/root/.ssh/automation_ed25519"
DURATION="720"  # seconds

mkdir -p "$OUTDIR"
OUTFILE="$OUTDIR/${CLIENT}-$(date +%Y%m%d).pcapng"

# Capture — PMKID + handshakes, passive (no deauthentication)
hcxdumptool -i "$IFACE" -o "$OUTFILE" \
  --disable_deauthentication \
  --enable_status=1 &
HPID=$!
sleep "$DURATION"
kill "$HPID"
wait "$HPID" 2>/dev/null || true  # let hcxdumptool finish writing the file

# Ship to the automation server, then clean up locally
scp -i "$SSH_KEY" -q "$OUTFILE" "$REMOTE"
rm -f "$OUTFILE"
echo "[$(date -Is)] Capture shipped: $OUTFILE"
```

**Authorization is non-negotiable.** The --disable_deauthentication flag keeps the capture passive, but you still need a signed wireless audit authorization from each client before deploying a sensor. Include the sensor's MAC address and the authorized SSID list in the authorization letter, and store a copy alongside the engagement record in your PSA.
Step 2: Automated Upload, Analysis, and PDF Retrieval
The automation server watches the incoming directory with a watchdog process (or simply runs from the same cron, 30 minutes after capture start). The Python script below handles the full wifiaudit.io API lifecycle: upload → poll → download → severity check → route alerts.
```python
#!/usr/bin/env python3
# msp_pipeline.py — upload, analyze, alert, deliver
# Requirements: pip install requests
import os, sys, time, json, requests
from pathlib import Path

API_KEY = os.environ["WIFIAUDIT_API_KEY"]
BASE = "https://api.wifiaudit.io/api/v1"
HEADERS = {"X-API-Key": API_KEY}
SLACK_URL = os.environ.get("SLACK_WEBHOOK_URL", "")
CW_BASE = os.environ.get("CW_BASE_URL", "")    # e.g. https://na.myconnectwise.net
CW_AUTH = os.environ.get("CW_AUTH_HEADER", "") # Basic base64(company+pubkey:privkey)

def upload_pcap(pcap_path: Path, client: dict) -> str:
    with pcap_path.open("rb") as f:
        resp = requests.post(
            f"{BASE}/jobs", headers=HEADERS,
            files={"file": (pcap_path.name, f, "application/octet-stream")},
            data={"ssid": client["ssid"], "organization": client["name"]},
            timeout=120
        )
    resp.raise_for_status()
    job_id = resp.json()["job_id"]
    print(f"  → Uploaded. Job ID: {job_id}")
    return job_id

def poll_job(job_id: str, max_wait: int = 300) -> dict:
    deadline = time.time() + max_wait
    while time.time() < deadline:
        time.sleep(12)
        r = requests.get(f"{BASE}/jobs/{job_id}", headers=HEADERS, timeout=30)
        r.raise_for_status()
        data = r.json()
        if data["status"] == "completed":
            return data
        if data["status"] == "failed":
            raise RuntimeError(f"Job {job_id} failed: {data}")
    raise TimeoutError(f"Job {job_id} did not complete in {max_wait}s")

def download_pdf(job_id: str, dest: Path) -> Path:
    r = requests.get(f"{BASE}/jobs/{job_id}/report", headers=HEADERS, timeout=60)
    r.raise_for_status()
    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_bytes(r.content)
    return dest

def alert_slack(client_name: str, findings: list, job_id: str):
    criticals = [f for f in findings if f.get("severity") == "critical"]
    if not criticals or not SLACK_URL:
        return
    lines = "\n".join(f"• {f['title']}" for f in criticals)
    payload = {
        "text": (
            f":rotating_light: *Critical WiFi findings — {client_name}*\n"
            f"{lines}\n"
            f"Job: `{job_id}` | Review report in PSA ticket."
        )
    }
    requests.post(SLACK_URL, json=payload, timeout=10)

def attach_to_connectwise(ticket_id: int, pdf_path: Path, client_name: str):
    if not CW_BASE or not CW_AUTH:
        return
    url = f"{CW_BASE}/v4_6_release/apis/3.0/service/tickets/{ticket_id}/attachments"
    with pdf_path.open("rb") as f:
        requests.post(
            url,
            headers={"Authorization": CW_AUTH, "clientId": os.environ["CW_CLIENT_ID"]},
            files={"file": (pdf_path.name, f, "application/pdf")},
            data={"title": f"WiFi Audit Report — {client_name}", "isPublic": "false"},
            timeout=30
        )
    print(f"  → PDF attached to CW ticket #{ticket_id}")

# --- Main ---
if __name__ == "__main__":
    manifest_path = Path(sys.argv[1])  # JSON: {name, ssid, pcap, cw_ticket_id}
    client = json.loads(manifest_path.read_text())
    pcap = Path(client["pcap"])
    print(f"[{client['name']}] Starting audit pipeline...")
    job_id = upload_pcap(pcap, client)
    result = poll_job(job_id)
    pdf_out = Path(f"/srv/reports/{client['name'].replace(' ', '_')}-{time.strftime('%Y%m%d')}.pdf")
    download_pdf(job_id, pdf_out)
    print(f"  → Report saved: {pdf_out}")
    findings = result.get("findings", [])
    alert_slack(client["name"], findings, job_id)
    attach_to_connectwise(client["cw_ticket_id"], pdf_out, client["name"])
```

Call it with a per-client JSON manifest file. Your automation server's cron can glob /srv/captures/*.pcapng, look up the client manifest by hostname prefix, and invoke the script once per file. The manifest keeps PSA ticket IDs, SSID names, and client metadata decoupled from the script logic.
HaloPSA and Autotask follow the same pattern. HaloPSA: `POST /api/Attachment` with `ticket_id` in the body. Autotask: `POST /atservicesrest/v1.0/ticketAttachments`. Both accept multipart/form-data with the PDF binary. Swap out the `attach_to_connectwise` function body — the calling interface stays identical.
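As a sketch of that swap, the senders below reuse the multipart pattern from the pipeline script. The endpoint paths are the ones quoted above, but the body field names (`ticket_id`, `ticketID`) and the auth scheme are assumptions — verify them against each PSA's API reference before deploying.

```python
# Hypothetical drop-in replacements for attach_to_connectwise().
# Endpoint paths match the text above; the body field names and
# auth header format are assumptions -- check your PSA's API docs.
from pathlib import Path
import requests

PSA_ATTACH_PATHS = {
    "halopsa": "/api/Attachment",
    "autotask": "/atservicesrest/v1.0/ticketAttachments",
}

def build_attachment_request(psa: str, base_url: str, ticket_id: int,
                             client_name: str) -> dict:
    """Build the URL and form fields for a PSA attachment POST."""
    url = base_url.rstrip("/") + PSA_ATTACH_PATHS[psa]
    data = {"title": f"WiFi Audit Report — {client_name}"}
    # The ticket reference goes in the multipart body; the field
    # name differs per PSA (assumed here).
    data["ticket_id" if psa == "halopsa" else "ticketID"] = str(ticket_id)
    return {"url": url, "data": data}

def attach_pdf(psa: str, base_url: str, auth_header: str,
               ticket_id: int, pdf_path: Path, client_name: str):
    """Send the PDF as multipart/form-data, mirroring the CW version."""
    req = build_attachment_request(psa, base_url, ticket_id, client_name)
    with pdf_path.open("rb") as f:
        requests.post(
            req["url"],
            headers={"Authorization": auth_header},
            files={"file": (pdf_path.name, f, "application/pdf")},
            data=req["data"],
            timeout=30,
        )
```

Keeping the request-building separate from the sending makes the PSA-specific details testable without network access.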
Step 3: Low-Code Alternative with n8n
If your team prefers a visual workflow tool, n8n handles this pipeline cleanly. Zapier cannot — it doesn't support binary file uploads in standard triggers. Here's the minimal n8n node sequence:
- Schedule Trigger — fires at 02:30 daily (30 minutes after the capture window ends)
- SSH node — runs `ls /srv/captures/*.pcapng` to enumerate new files
- Read Binary File node — reads each .pcapng into a binary buffer
- HTTP Request node — `POST https://api.wifiaudit.io/api/v1/jobs`, body type Form-Data; attach the binary as the `file` field, add `ssid` and `organization` text fields, and set the header `X-API-Key: {{ $env.WIFIAUDIT_API_KEY }}`
- Wait node — 90 seconds
- HTTP Request node — `GET /api/v1/jobs/{{ $json.job_id }}`, looped with an IF node until `status == "completed"`
- HTTP Request node — `GET /api/v1/jobs/{{ $json.job_id }}/report`, response format File
- IF node — checks `{{ $json.findings.some(f => f.severity === 'critical') }}`
- Slack node (critical branch) — posts to `#security-alerts` with the finding titles
- HTTP Request node — attaches the PDF to the ConnectWise / HaloPSA ticket via REST
The entire workflow fits on one n8n canvas and takes about 45 minutes to configure the first time. Export it as JSON and import it for each new MSP client — just swap the manifest variables (SSID, ticket ID, organization name) in the workflow's environment settings.
PSA Ticket Strategy: Keep It Tidy
Don't create a new ticket per audit. Instead, maintain one recurring monthly security review ticket per client site. Each audit run appends the PDF as an attachment and adds a private note with the finding summary. This gives you:
- A clean audit trail the client (or their compliance auditor) can review in one place
- Time-stamped evidence of continuous monitoring — useful for NIS2 Article 21 and ISO 27001 A.8.20 compliance documentation
- Reduced ticket noise — one ticket per site per month, not one per audit run
In ConnectWise Manage, use the `POST /service/tickets/{id}/notes` endpoint to append the finding count and severity summary as a private internal note alongside the PDF attachment. Set `"detailDescriptionFlag": false` and `"internalAnalysisFlag": true` to keep it off the client portal view until you've reviewed the criticals.
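A sketch of that note call, reusing the `CW_*` environment variables from the pipeline script; the note wording is illustrative, not a ConnectWise requirement.

```python
# Sketch: append a private severity-summary note to the recurring
# CW ticket. Reuses CW_* env vars from msp_pipeline.py; the summary
# text format is illustrative.
import os
import requests

def summarize_findings(findings: list) -> str:
    """Collapse the findings array into a one-line severity count."""
    counts: dict = {}
    for f in findings:
        sev = f.get("severity", "unknown")
        counts[sev] = counts.get(sev, 0) + 1
    parts = ", ".join(f"{n} {sev}" for sev, n in sorted(counts.items()))
    return parts or "no findings"

def add_cw_private_note(ticket_id: int, findings: list):
    url = (f"{os.environ['CW_BASE_URL']}/v4_6_release/apis/3.0"
           f"/service/tickets/{ticket_id}/notes")
    requests.post(
        url,
        headers={"Authorization": os.environ["CW_AUTH_HEADER"],
                 "clientId": os.environ["CW_CLIENT_ID"]},
        json={
            "text": f"Automated WiFi audit: {summarize_findings(findings)}",
            "detailDescriptionFlag": False,  # keep out of the description
            "internalAnalysisFlag": True,    # internal note, off the portal
        },
        timeout=30,
    )
```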
Severity Routing: What Gets an Immediate Alert
| Severity | Example Finding | Action |
|---|---|---|
| Critical | PSK cracked (<30 min), WPA version < WPA2 | Slack/Teams alert within 5 min, ticket flagged urgent |
| High | PMKID exposed, weak key length | Included in next-morning digest email, ticket priority elevated |
| Medium | Management frames unprotected (no 802.11w) | PDF note only, discussed at next QBR |
| Low | SSID broadcasting non-standard band | Report only |
The wifiaudit.io API returns a `findings` array in the completed job response. Each element has `title`, `severity`, `description`, and `remediation` fields. Filter on severity before choosing an alert path — don't send every finding to Slack, or your team will start ignoring the channel within a week.
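One way to encode the routing table is a small dispatch map keyed on severity. The route labels here ("slack_now", etc.) are illustrative names for your own alerting channels, not part of the API.

```python
# Severity-to-channel routing, mirroring the table above. The route
# labels are illustrative names used only in this sketch.
ROUTES = {
    "critical": "slack_now",       # immediate webhook, urgent ticket flag
    "high":     "morning_digest",  # batched email, elevated priority
    "medium":   "pdf_note",        # report note, raise at next QBR
    "low":      "report_only",
}

def route_finding(finding: dict) -> str:
    """Map one finding to its alert path; unknown severities stay quiet."""
    return ROUTES.get(finding.get("severity", ""), "report_only")

def partition_findings(findings: list) -> dict:
    """Group findings by alert path so each channel gets one batch."""
    batches: dict = {}
    for f in findings:
        batches.setdefault(route_finding(f), []).append(f)
    return batches
```

Partitioning once, then sending one message per channel, is what keeps the Slack channel quiet enough that a critical actually gets read.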
Rotate your API key per environment. Use a separate wifiaudit.io API key for each environment: development, staging, and production. Store keys in your secrets manager (HashiCorp Vault, AWS Secrets Manager, or even a local .env not committed to git). A leaked key means someone else can consume your audit quota and read your client reports.
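A minimal sketch of that convention, assuming per-environment variable names like `WIFIAUDIT_API_KEY_PROD` (a naming choice of this sketch, not something the API mandates):

```python
# Per-environment key lookup. The WIFIAUDIT_API_KEY_<ENV> naming is an
# assumption of this sketch; adapt it to your secrets manager's export.
import os

def api_key_for(environment: str) -> str:
    """Pick the wifiaudit.io key for dev/staging/prod from the env."""
    var = f"WIFIAUDIT_API_KEY_{environment.upper()}"
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} not set -- check your secrets export")
    return key
```

Failing loudly on a missing key is deliberate: a pipeline that silently falls back to the production key defeats the point of rotation.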
Scaling Across 20+ Sites
Once the single-site pipeline is solid, scaling is a configuration problem, not a code problem. Maintain a clients.json file on your automation server — an array of client objects, each with name, ssid, sensor_hostname, cw_ticket_id, and audit_schedule (not all clients need nightly captures; quarterly may be sufficient for low-risk sites). Your master cron job reads this file, selects clients scheduled for tonight, and dispatches a subprocess per client.
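A sketch of that master dispatcher, assuming `audit_schedule` takes the values "nightly" or "quarterly" (the manifest fields are the ones named above; the "quarterly fires on the first day of each quarter" rule is an assumption of this sketch):

```python
# Master dispatcher sketch: read clients.json, pick tonight's clients,
# and launch msp_pipeline.py once per capture file.
import json, subprocess, tempfile, time
from datetime import date
from pathlib import Path

def clients_due_tonight(manifest: list, today: date) -> list:
    """Select clients whose audit_schedule matches today's date."""
    due = []
    for c in manifest:
        sched = c.get("audit_schedule", "nightly")
        if sched == "nightly":
            due.append(c)
        elif sched == "quarterly" and today.day == 1 and today.month in (1, 4, 7, 10):
            due.append(c)
    return due

def dispatch(clients_file: Path, capture_dir: Path = Path("/srv/captures")):
    manifest = json.loads(clients_file.read_text())
    stamp = date.today().strftime("%Y%m%d")
    for client in clients_due_tonight(manifest, date.today()):
        # Filename matches run-capture.sh's ${CLIENT}-YYYYMMDD pattern.
        pcap = capture_dir / f"{client['name']}-{stamp}.pcapng"
        if not pcap.exists():
            continue  # sensor missed its window -- surface this in monitoring
        # Write a per-client manifest for msp_pipeline.py, then dispatch.
        job = dict(client, pcap=str(pcap))
        with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tf:
            json.dump(job, tf)
        subprocess.Popen(["python3", "msp_pipeline.py", tf.name])
        time.sleep(60)  # stagger uploads to respect concurrent-job limits
```

The 60-second stagger doubles as the rate-limit spacing discussed below, so one knob covers both concerns.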
API rate limits on the wifiaudit.io platform are generous for MSP volumes — check your plan's concurrent job limit and stagger uploads by 60 seconds per client if you're running 15+ sites simultaneously. The Growth plan ($149/month) covers 100 audits/month with 10 concurrent jobs, which handles most MSPs comfortably.
FAQ
What capture hardware should I deploy at MSP client sites for automated PCAP collection?
A Raspberry Pi 4 running Kali Linux with an Alfa AWUS036AXML (Wi-Fi 6, ~$45) is the most cost-effective edge sensor. Use hcxdumptool for scheduled captures and ship the resulting .pcapng via SCP back to your automation server. Total hardware cost per site is about $120 — recoverable on the first invoice.
How do I attach the audit PDF to a ConnectWise Manage ticket automatically?
Use the ConnectWise Manage REST API: POST /v4_6_release/apis/3.0/service/tickets/{ticketId}/attachments with the PDF binary as multipart/form-data. Set the Authorization header to Basic base64(companyId+publicKey:privateKey) and include your clientId header. Store credentials in environment variables. The same multipart pattern works for HaloPSA (POST /api/Attachment) and Autotask (POST /atservicesrest/v1.0/ticketAttachments).
What counts as a 'critical' finding that should trigger a Slack or Teams alert?
The wifiaudit.io API returns a findings array with severity values of critical, high, medium, and low. Critical findings include: PSK cracked within the wordlist (weak passphrase), WPA version older than WPA2, and PMKID exposure with a recoverable key. Filter on severity == "critical" and route those to your alerting webhook immediately rather than batching them into the daily digest.
Can I run this pipeline without writing Python — for example using n8n or Zapier?
n8n works well because it supports multipart/form-data HTTP requests natively, which you need for the PCAP upload step. Zapier does not support binary file uploads in standard workflow steps. In n8n, use an HTTP Request node with method POST, body type Form-Data, and attach the file from a prior Read Binary File node. Chain a second HTTP Request to poll for job completion, then route the PDF to your PSA and Slack nodes. The full workflow fits on one canvas in about 10 nodes.
Automate Your First WiFi Audit Today
Get an API key, run the pipeline against your own office network, and have a repeatable client service ready to pitch this week.
Get API Key — 3 Audits Free