// CYNDRA_ONBOARDING / 05_SECURITY

Security & data privacy

Cyndra runs on your hardware. Your conversations, files, and agent memory never leave your Mac mini except for the queries you explicitly send to a cloud LLM — and even those can be replaced with a fully local model. No telemetry, no backend, no phone home.

The questions on this page are from real customer security calls. They're not hypothetical.

Network & device hardening

Two areas to harden on the host machine. Both are optional, but worth doing if security posture matters to your business.

Network isolation

  • Put the Mac mini on its own VLAN or network segment to isolate it from your main corporate network.
  • If you don't run VLANs, a dedicated guest network on your router achieves a similar effect.
  • The agent only needs outbound HTTPS access — to the LLM provider, to integrations you wire up, and to update servers. No inbound ports required.
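
To enforce that outbound-only posture at the host level, macOS's built-in PF firewall can express it directly. A minimal sketch, not a drop-in config (the rules are illustrative, and you'd add a `pass in` rule for SSH if you keep Remote Login on):

```
# Illustrative PF rules: deny all unsolicited inbound, allow outbound DNS + HTTPS.
block in all
pass out proto udp to any port 53 keep state    # DNS lookups
pass out proto tcp to any port 443 keep state   # HTTPS to LLM, integration, and update servers
# Apply with: sudo pfctl -f /etc/pf.conf && sudo pfctl -e
```

Your router or VLAN ACLs can enforce the same policy one hop up, which is usually easier to audit.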

Mac mini system configuration

  • Enable Remote Management and Remote Login so we can reach the machine for support if something breaks (you can turn both off after setup if you prefer).
  • Prevent automatic sleep — the agent only works while the Mac is awake.
  • Set the Mac to auto-restart on power failure so it comes back online after a power blip without needing physical access.
  • Enable FileVault disk encryption. Use a strong login password and store it in your password manager.
  • Keep the Mac mini in a physically secure location — server room, locked office, or somewhere only authorized people reach it.
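
Most of that checklist can be applied from Terminal. A sketch using standard macOS admin tools (run as an administrator; everything here can also be set later in System Settings):

```shell
sudo systemsetup -setremotelogin on          # Remote Login (SSH) for support access
sudo pmset -a sleep 0 disksleep 0            # never sleep: the agent only runs while awake
sudo systemsetup -setrestartpowerfailure on  # come back up after a power blip
fdesetup status                              # confirm FileVault ("FileVault is On.")
```

Remote Management (screen sharing) has no simple one-liner; toggle it under System Settings > General > Sharing.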

Data handling

  • Conversations and files stay local on the Mac mini. Nothing is uploaded to Cyndra.
  • Agent memory — skills, preferences, context — is stored as local files on the device.
  • No telemetry or usage data is sent to Cyndra.
  • Logs stay on the device unless you explicitly share them for support.
  • When using a cloud LLM (Anthropic's Claude or OpenAI's models), your queries are sent to the provider's API under its standard data-handling policies. Anthropic does not train on API data, and OpenAI's API has the same default.
Bottom line: the only data that leaves your network is the messages you send to the LLM provider. That's the same exposure as using Claude.ai or ChatGPT directly, no different and no worse.

Local model option (maximum security)

For organizations that cannot send any data to cloud providers, Cyndra supports a fully local-model setup. The architecture accommodates both: you can start with cloud models and migrate to local later, or run local from day one.

  • Run Ollama or a similar local-inference server on a Mac Studio ($8K-$10K hardware budget).
  • Roughly 90% as capable as cloud models for most business tasks.
  • Zero data leaves the building. Not the LLM query, not metadata, not anything.
  • Same agent code, same integrations, same Telegram interface — only the model swaps.
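
In practice the swap can be a one-line endpoint change. A sketch assuming a hypothetical MODEL_BACKEND setting (the variable names are illustrative, not Cyndra's actual config) and Ollama's OpenAI-compatible endpoint on its default port:

```shell
# Same agent, two interchangeable model backends: only the base URL changes.
MODEL_BACKEND="${MODEL_BACKEND:-cloud}"
case "$MODEL_BACKEND" in
  local) BASE_URL="http://localhost:11434/v1" ;;  # Ollama on the Mac Studio: nothing leaves the LAN
  cloud) BASE_URL="https://api.openai.com/v1" ;;  # provider API: queries leave the network
esac
echo "Model endpoint: $BASE_URL"
```

Ollama serves an OpenAI-compatible API under /v1, so clients written against the OpenAI request format usually work unchanged.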

Ideal for

  • Law firms with attorney-client privileged data.
  • Medical practices handling PHI under HIPAA.
  • Financial institutions with regulated client data.
  • Defense contractors with ITAR or CUI constraints.
  • Any business that simply doesn't want to send data to a third party.
Migration path: start on cloud models for the first month while your team gets value fast. Once everything is dialed in and you know what your agent actually does day to day, swap in a local model. The agent doesn't notice.

FAQ — from real client security calls

Every question below came from a real customer security review. If you have one we missed, ask us — we'll add it.

Where does my data live?
100% on your Mac mini. Nothing on Cyndra's servers. Only LLM queries go to Claude or OpenAI, and they don't train on API data.
Can Cyndra see my data?
No. We install via NPM. There's no backend, no database, no phone home. We have zero access to your data after setup.
What if your company gets hacked?
Your data isn't on our servers, so there's nothing to hack. Your agent runs entirely on your hardware.
What about prompt injection attacks?
Sub-agents run in isolated Docker containers with limited credential access. Even if a prompt injection succeeds in one container, it can't reach credentials from other integrations.
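
The credential-scoping idea can be seen with a plain clean-environment launch; `docker run -e` does the same thing per container. The variable names below are made up for illustration, not Cyndra's actual secret names:

```shell
GMAIL_TOKEN="secret-a"    # credential for the email integration
SLACK_TOKEN="secret-b"    # credential for a different integration
# Launch a "sub-agent" with a scrubbed environment carrying only the one secret it needs:
result=$(env -i GMAIL_TOKEN="$GMAIL_TOKEN" sh -c \
  'echo "gmail=${GMAIL_TOKEN:-unset} slack=${SLACK_TOKEN:-unset}"')
echo "$result"    # → gmail=secret-a slack=unset
```

In the real setup the same effect comes from passing each container only its own `-e` flags, so a hijacked sub-agent has nothing else to read.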
Can the agent send emails without my permission?
No. By default it asks first, every time. You can optionally grant blanket approval for specific contacts or threads.
What if someone steals the Mac mini?
Enable FileVault disk encryption. Set a strong login password. The Mac mini should live in a physically secure location — server room, locked office, or somewhere only authorized people can reach it.
Do you have SOC 2 certification?
In progress. We've initiated the process with Sprinto. Type 2 certification requires a 3-month observation period.
Can you sign a BAA for HIPAA?
For HIPAA-regulated data we recommend the local-model option, where zero data leaves your network. We can discuss BAA requirements case by case for hybrid setups.
What happens if we need to remove the agent completely?
Uninstall Docker, remove the NPM packages, delete the agent directory. Everything is local, so removal is complete and immediate.
Our EDR flags unknown software. What do we whitelist?
Full component list: Homebrew, Node.js v20+, Docker Desktop, Claude Code CLI, Cyndra Agent (NPM package), OneCLI, AnyDesk (setup only), Telegram desktop. We provide SHA hashes and installation paths on request.
Who has access to the agent's Telegram or WhatsApp?
Only authorized sender IDs on the allowlist. Unauthorized messages are ignored or stored without triggering the agent.
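
An allowlist check is a few lines of logic. A sketch (the IDs and function name are made up for illustration; Telegram sender IDs are numeric):

```shell
ALLOWLIST="111111 222222"   # authorized sender IDs (illustrative values)

is_authorized() {
  for id in $ALLOWLIST; do
    [ "$id" = "$1" ] && return 0
  done
  return 1                  # unknown sender: message is logged, agent never triggers
}

is_authorized 111111 && echo "111111: handled"
is_authorized 999999 || echo "999999: ignored"
```

Anything not on the list never reaches the model, so a stranger messaging the bot can't spend tokens or invoke tools.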
Can different team members have different access levels?
Not yet natively. Per-user permissions are on the roadmap. Current workaround: a separate agent per person or per department, each with only the tools that role needs.
What if your team all got hit by a bus?
The tech is built on open-source foundations — Claude Code SDK, Docker, NPM. Any competent developer can maintain it. We have a team of 11 engineers and growing.
We use Microsoft only and our IT locks everything down. Is that a problem?
Microsoft 365 integration requires Azure app registration, which may need IT admin consent. We handle this on the setup call. For extremely locked-down environments, the local-model option bypasses every cloud concern.

// SECURITY_REVIEW

Need a deeper security review?

If your IT or compliance team needs answers we haven't covered — vendor questionnaires, BAAs, custom architecture diagrams, SHA hashes for EDR allow-listing — we'll get on a call and walk through it. Most reviews wrap in 30 minutes.