// CYNDRA_ONBOARDING / 05_SECURITY
Security & data privacy
Cyndra runs on your hardware. Your conversations, files, and agent memory never leave your Mac mini except for the queries you explicitly send to a cloud LLM — and even those can be replaced with a fully local model. No telemetry, no backend, no phone home.
The questions on this page are from real customer security calls. They're not hypothetical.
Network & device hardening
Two sets of recommendations for hardening the host machine. Both are optional, but worth doing if security posture matters to your business.
Network isolation
- Put the Mac mini on its own VLAN or network segment for extra isolation from your main corporate network.
- If you don't run VLANs, a dedicated guest network on your router achieves a similar effect.
- The agent only needs outbound HTTPS access — to the LLM provider, to integrations you wire up, and to update servers. No inbound ports required.
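If you want the Mac mini itself to enforce that outbound-only posture, macOS's built-in pf firewall can express it. A minimal sketch, not a Cyndra requirement — the file path is illustrative, and you should adapt the rules to your own network before loading them:

```shell
# /etc/pf.anchors/egress-only.conf (illustrative — adapt before use)
# Drop unsolicited inbound traffic; the agent needs no inbound ports.
block in all
# Allow outbound HTTPS (LLM provider, integrations, updates) and DNS.
pass out proto tcp to any port 443 keep state
pass out proto udp to any port 53 keep state
```

If you run VLANs or a guest network instead, the equivalent rules live on your router and the Mac needs no local firewall changes.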
Mac mini system configuration
- Enable Remote Management and Remote Login so we can reach the machine for support if something breaks (you can switch both off after setup if you prefer).
- Prevent automatic sleep — the agent only works while the Mac is awake.
- Set the Mac to auto-restart on power failure so it comes back online after a power blip without needing physical access.
- Enable FileVault disk encryption. Use a strong login password and store it in your password manager.
- Keep the Mac mini in a physically secure location — server room, locked office, or somewhere only authorized people reach it.
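The sleep, restart, remote-access, and FileVault settings above can be applied or verified from Terminal. A sketch, assuming admin rights on the Mac mini — adjust values to your own policy:

```shell
# Never sleep: the agent only works while the Mac is awake.
sudo pmset -a sleep 0

# Come back online automatically after a power failure.
sudo systemsetup -setrestartpowerfailure on

# Allow SSH for remote support (switch off later if you prefer).
sudo systemsetup -setremotelogin on

# Confirm FileVault disk encryption is enabled.
fdesetup status
```

All four are standard macOS admin commands, so your IT team can audit or reverse each one independently.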
Data handling
- Conversations and files stay local on the Mac mini. Nothing is uploaded to Cyndra.
- Agent memory — skills, preferences, context — is stored as local files on the device.
- No telemetry or usage data is sent to Cyndra.
- Logs stay on the device unless you explicitly share them for support.
- When using a cloud LLM (Claude or OpenAI), your queries are sent to the provider's API under their standard data-handling policies. Neither Anthropic nor OpenAI trains models on API data by default.
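You don't have to take the no-phone-home claim on faith: standard macOS tools show every connection the machine holds open. For example, run on the Mac mini itself (the interface name `en0` may differ on your hardware):

```shell
# List every process with an established network connection,
# so you can see exactly where traffic is going.
sudo lsof -i -P -n | grep ESTABLISHED

# Or watch outbound HTTPS traffic live on the active interface.
sudo tcpdump -i en0 'tcp port 443'
```

With a cloud LLM configured you should see only your provider's endpoints and any integrations you wired up; with a local model, nothing at all.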
Local model option (maximum security)
For organizations that cannot send any data to cloud providers, Cyndra supports a fully local-model setup. You can start with cloud models and migrate to local later, or run local from day one.
- Run Ollama or a similar local-inference server on a Mac Studio ($8K-$10K hardware budget).
- Roughly 90% as capable as cloud models for most business tasks.
- Zero data leaves the building. Not the LLM query, not metadata, not anything.
- Same agent code, same integrations, same Telegram interface — only the model swaps.
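Because Ollama exposes an OpenAI-compatible endpoint on its default port, the swap really is just a different base URL. A sketch — the model name and prompt are placeholders, and it assumes Ollama is already serving on the Mac Studio:

```shell
# The same chat-completions call a cloud setup makes, aimed at localhost.
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3.1",
        "messages": [{"role": "user", "content": "Summarize the intake emails."}]
      }'
```

Nothing in that request leaves the machine, which is the whole point of the local option.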
Ideal for
- Law firms with attorney-client privileged data.
- Medical practices handling PHI under HIPAA.
- Financial institutions with regulated client data.
- Defense contractors with ITAR or CUI constraints.
- Any business that simply doesn't want to send data to a third party.
FAQ — from real client security calls
If yours isn't covered below, ask us and we'll add it.
Where does my data live?
Can Cyndra see my data?
What if your company gets hacked?
What about prompt injection attacks?
Can the agent send emails without my permission?
What if someone steals the Mac mini?
Do you have SOC 2 certification?
Can you sign a BAA for HIPAA?
What happens if we need to remove the agent completely?
Our EDR flags unknown software. What do we whitelist?
Who has access to the agent's Telegram or WhatsApp?
Can different team members have different access levels?
What if your team all got hit by a bus?
We use Microsoft only and our IT locks everything down. Is that a problem?
// SECURITY_REVIEW
Need a deeper security review?
If your IT or compliance team needs answers we haven't covered — vendor questionnaires, BAAs, custom architecture diagrams, SHA hashes for EDR allow-listing — we'll get on a call and walk through it. Most reviews wrap in 30 minutes.